We disassemble words into phonemes and assemble phonemes back into words; in the same way we disassemble and assemble concepts when we read and write.
Picture: How we learn words.
Interpretation: a powerful tool:
1. Disassemble a word stream from medium A: TV
2. Disassemble a word stream from medium B: Twitter
3. Correlate the streams, and
3.1: read the current discussion,
3.2: write the future discussion
Conclusion: Thought/Discussion Influencing Factors
Awareness is advised, since this will be used for both good and evil purposes (a rough sketch of the correlation step follows).
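As a minimal sketch of steps 1-3, assuming a naive letters-only tokenizer and cosine similarity as the correlation measure (both illustrative choices of mine, not prescribed above):

    from collections import Counter
    import math

    def tokens(text):
        # Naive word-stream disassembly: lowercase, keep letters only.
        return ''.join(c if c.isalpha() else ' ' for c in text.lower()).split()

    def correlate(stream_a, stream_b):
        """Cosine similarity between the word-frequency vectors of two streams."""
        a, b = Counter(tokens(stream_a)), Counter(tokens(stream_b))
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return (dot / norm if norm else 0.0), sorted(set(a) & set(b))

    # Two media streams discussing overlapping topics.
    tv = "the election debate tonight covers climate and taxes"
    tweets = "everyone is tweeting about the climate debate election"
    score, shared = correlate(tv, tweets)
    print(f"correlation: {score:.2f}  current discussion: {shared}")

Reading off the shared terms is step 3.1; step 3.2 would mean feeding chosen words back into one of the streams.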
---
In computer science and information science, an ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that exist in a particular domain of discourse. An ontology compartmentalizes the variables needed for some set of computations and establishes the relationships between them. The word element onto- comes from the Greek ὤν, ὄντος ("being", "that which is"). There is also generally an expectation that the features of the model in an ontology should closely resemble the real world (related to the object).[3]
The Zachman Framework is an enterprise ontology and a fundamental structure for Enterprise Architecture, which provides a formal and structured way (a methodology for discerning) of viewing and defining an enterprise.
Picture: 1. Contextual model, 2. Conceptual model, 3. Information model, 4. Data model
The definition of contextual is depending on the context, or surrounding words, phrases, and paragraphs, of the writing. An example of contextual is how the word "read" can have two different meanings depending upon what words are around it.
A pattern is a discernible regularity in the world: it follows certain rules, observable by analysis.
Any of our senses may directly observe patterns, and through them the governing rules.
Systems theory is the interdisciplinary study of systems in general, with the goal of discovering patterns and elucidating principles. A central topic of systems theory is self-regulating (governing) systems, i.e. systems that self-correct through feedback (a tiny example follows).
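A self-correcting feedback loop in a few lines of Python; the thermostat setting, gain, and starting temperature are illustrative assumptions, not taken from the text:

    def thermostat(temp, setpoint=21.0, gain=0.3, steps=10):
        """Negative feedback: each step feeds a fraction of the error back in."""
        for _ in range(steps):
            error = setpoint - temp   # measure the deviation from the target
            temp += gain * error      # self-correct in proportion to the error
        return temp

    print(round(thermostat(15.0), 2))  # 20.83: converging on the 21.0 setpoint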
Taxonomy is the practice and science of classification.
A meronomy or partonomy is a type of hierarchy that deals with part–whole relationships, in contrast to a taxonomy whose categorisation is based on discrete sets. The part–whole relationship is sometimes referred to as HAS-A, and corresponds to object composition in object-oriented programming.[1]
The study of meronomy is known as mereology, and in linguistics a meronym is the name given to a constituent part of, the substance of, or a member of something.
In philosophy and mathematical logic, mereology (from the Greek μέρος, root: μερε(σ)-, "part" and the suffix -logy "study, discussion, science") is the study of parts and the wholes they form. Whereas set theory is founded on the membership relation between a set and its elements, mereology emphasizes the meronomic relation between entities, which—from a set-theoretic perspective—is closer to the concept of inclusion between sets.
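Since the part–whole (HAS-A) relation corresponds to object composition, here is a minimal sketch in Python (the class names are illustrative); note the contrast with the taxonomic IS-A relation:

    class Engine:
        def __init__(self, power_kw):
            self.power_kw = power_kw

    class Wheel:
        pass

    class Vehicle:
        """Taxonomy: categorisation into discrete sets (the IS-A axis)."""

    class Car(Vehicle):                                  # Car IS-A Vehicle (taxonomic relation)
        def __init__(self):
            self.engine = Engine(power_kw=90)            # Car HAS-A Engine (meronomic relation)
            self.wheels = [Wheel() for _ in range(4)]    # parts composing the whole

    car = Car()
    print(isinstance(car, Vehicle))   # True: membership along the taxonomy
    print(car.engine.power_kw)        # 90: access to a constituent part (a meronym)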
25 October 2015
The Math of Matching Humans (OKCupid)
R&D for Organizing HR
Picture: Match = ⁿ√(A × B), i.e. the nth root (here n = 2) of the product of the partners' scores
Interpretation: Match Influencing Factors
1. Define object A - Likes: cooperative consumption: A=B --> Good
2. Define object B - Likes: competing consumption: A=B --> Bad
3. Define object A - Values: ethical rules + aspirations/targets: A=B --> Essential
4. Define object A - Importance: weight of each answer: A=B --> Good
Conclusion:
GroupOptimisation: MatchN (--> most effective) + Leader (--> most efficient)
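A minimal sketch of the pictured formula with the importance weights of factor 4 included (the question scores and weights are illustrative assumptions, and a real system would weight each side separately):

    import math

    def satisfaction(scores, weights):
        """Weighted average of per-question satisfaction scores in [0, 1]."""
        return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

    def match(a_scores, b_scores, weights):
        """Match = nth root of the product of both satisfaction scores (n = 2)."""
        return math.sqrt(satisfaction(a_scores, weights) * satisfaction(b_scores, weights))

    # Three questions; weights encode the 'Importance' of each answer (factor 4).
    print(f"{match([1.0, 0.5, 0.75], [0.9, 0.6, 0.9], weights=[3, 1, 2]):.2f}")  # 0.84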
...interesting
Unpublished:
http://ed.ted.com/lessons/michael-mitchell-a-clever-way-to-estimate-enormous-numbers
http://ed.ted.com/lessons/kevin-slavin-how-algorithms-shape-our-world
20 October 2015
Faster than light speed - zero refraction index
"Although this infinitely high velocity sounds like it breaks the rule of relativity, it doesn't. Nothing in the universe travels faster than light carrying information -- Einstein is still right about that. But light has another speed, measured by how fast the crests of a wavelength move, known as phase velocity. This speed of light increases or decreases depending on the material it's moving through.
When the refraction index is reduced to zero, really weird and interesting things start to happen.
In a zero-index material, there is no phase advance, meaning light no longer behaves as a moving wave, traveling through space in a series of crests and troughs. Instead, the zero-index material creates a constant phase -- all crests or all troughs -- stretching out in infinitely long wavelengths. The crests and troughs oscillate only as a variable of time, not space.
It could also improve entanglement between quantum bits, as incoming waves of light are effectively spread out and infinitely long, enabling even distant particles to be entangled."
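The standard relations behind the quoted claim, in my notation rather than the article's: for refractive index n the phase velocity and in-medium wavelength are

    v_p = \frac{c}{n}, \qquad \lambda = \frac{\lambda_0}{n}

so as n \to 0 both diverge: crests advance arbitrarily fast and the wavelength stretches out, leaving a phase that is constant in space, exactly as described above.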
Unpublished
First life 4.1 billion years ago
04 October 2015
New dimensions measured in vacuum
"So far, physicists have assumed that it is impossible to directly access the characteristics of the ground state of empty space. Now, a team of physicists has succeeded in doing just that. They demonstrated a first direct observation of the so-called vacuum fluctuations by using short light pulses while employing highly precise optical measurement techniques."
"The existence of vacuum fluctuations is already known from theory as it follows from Heisenberg's uncertainty principle, one of the main pillars of quantum physics. This principle dictates that electric and magnetic fields can never vanish simultaneously. As a consequence, even total darkness is filled with finite fluctuations of the electromagnetic field, representing the quantum ground state of light and radio waves."
Leitenstorfer was stunned by the research results himself: "We have had a few years of sometimes sleepless nights -- all possibilities of potentially interfering signals had to be excluded," smiles the physicist. "All in all we found out that our access to elementary time scales, shorter than the oscillation period of the light waves we investigate, is the key to understand the surprising possibilities that our experiment opens up."
Source: ScienceDaily
03 October 2015
Excel Machine Learning Add-in
Machine Learning inside Excel.
Try without installing:
Source: Jen Underwood tweet
How to make an intelligence:
Link: Twitter flow, Machine Learning
Recurrent Neural Networks (RNN) - one of many neural network patterns.
"The idea behind RNNs is to make use of sequential information. In a traditional neural network we assume that all inputs (and outputs) are independent of each other. But for many tasks that’s a very bad idea. If you want to predict the next word in a sentence you better know which words came before it. RNNs are calledrecurrent because they perform the same task for every element of a sequence, with the output being depended on the previous computations. Another way to think about RNNs is that they have a “memory” which captures information about what has been calculated so far. In theory RNNs can make use of information in arbitrarily long sequences, but in practice they are limited to looking back only a few steps (more on this later).
Here is what a typical RNN looks like:
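(The diagram from the original post is not reproduced here.) As a substitute, a minimal forward pass in Python/NumPy with illustrative dimensions; the point is that the same three weight matrices are reused at every time step while the hidden state h carries the "memory":

    import numpy as np

    def rnn_forward(xs, W_xh, W_hh, W_hy):
        """Minimal RNN: one output per sequence element, state h threads through."""
        h = np.zeros(W_hh.shape[0])
        outputs = []
        for x in xs:                          # same task for every element
            h = np.tanh(W_xh @ x + W_hh @ h)  # new state from input and old state
            outputs.append(W_hy @ h)          # output depends on all computation so far
        return outputs, h

    rng = np.random.default_rng(0)
    in_dim, hid, out_dim = 4, 8, 3            # illustrative sizes
    W_xh = rng.normal(size=(hid, in_dim))
    W_hh = rng.normal(size=(hid, hid))
    W_hy = rng.normal(size=(out_dim, hid))
    seq = [rng.normal(size=in_dim) for _ in range(5)]
    ys, h_final = rnn_forward(seq, W_xh, W_hh, W_hy)
    print(len(ys), h_final.shape)             # 5 (8,)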
Semantic analysis: http://news.mit.edu/2015/more-flexible-machine-learning-1001
Cheat sheets: http://designimag.com/2015/06/best-machine-learning-cheat-sheets/
Other
* Power BI Custom Visualisation Competition - Developer tools, Start page
* http://thevisualcommunicationguy.com/wp-content/uploads/2015/06/Infographic_RulesOfPunctuation1.jpg
* The Dark Net (TED talk)
* Human made mini Brain
* Extra dimensions in vacuum, measured