Imagine That You Hear God Speaking To You & Giving You Instructions That Seem To Be Contrary To What You've Always Been Taught About God
Editor's Note: As this author has stated many times in the past, the science fiction movies which Hollywood creates are oftentimes a furtive way for the U.S. military-intelligence complex to gauge public opinion on classified technology and programs which currently exist, without actually admitting their existence to the public.
Many of you will remember the 1983 movie War Games, in which the predecessor to today's artificial intelligence computers is accessed by a high school student seeking to play military strategy games.
The computer then creates a game based on a nuclear weapons conflict between the United States and the Soviet Union. The U.S. Military, however, doesn't realize that it is just a game and nearly starts World War III with the Soviet Union.
In the final moments of War Games, the audience is left on the edge of their seats as the military computer which controls the Department of Defense's war operations learns that some situations are futile, and that the only logical answer to such situations is to give up.
Interestingly enough, the code name for the supercomputer in War Games is Joshua, and you will see the relevance of this name in the following posts.
Many targets of the non-consensual mind-control experimentation being conducted through the global military-intelligence complex are used to hearing voices in their heads and having their thoughts electronically stolen and manipulated via signals intelligence satellites, as well as via brainwave monitors and analyzers which are used to decode the unique set of bioelectric resonance/entrainment frequencies of any American citizen's brain.
This signals intelligence technology has also spawned a secretive brain fingerprinting program implemented by the NSA in 1981 under Executive Order 12333, signed by *President Ronald Reagan, which has been used to destroy the American citizenry's inherent protections under the U.S. Bill of Rights.
* This information would come as a shock to those Americans who have always considered Reagan to be a model U.S. President, given that EO 12333 is easily one of the most treasonous executive orders ever signed in the United States. In fact, many people have stated that Reagan's best acting job was not in Hollywood, but during the eight years that he was in the White House, which was really run by his Vice President, CIA asset and drug smuggler George H.W. Bush.
Moreover, according to NSA whistleblower John St. Clair Akwei, the NSA's Signals Intelligence EMF Scanning Network can be used to remotely access the brain of any American citizen without that citizen's knowledge or consent, thereby completely circumventing that person's inherent rights to privacy under the 4th Amendment and due process of law under the 6th Amendment.
Furthermore, this EMF Scanning Network has made a myriad of American citizens unwitting targets of the NSA's satellite predation.
In fact, men, women and children from virtually every continent on this planet are being subjected to these crimes on a daily basis, while experiencing many different manifestations as targets of non-consensual human experimentation.
These crimes are occurring while these people are being denied their Constitutionally protected rights.
And these citizens are being ignored by the political infrastructure in their respective countries, whose representatives have been told to disregard any attempts to end these Orwellian crimes.
The following article will give the layperson a better understanding of how this technology is being used to remotely enter the human mind, as well as the nefarious agenda that those who deploy it against us are attempting to carry out, in the interest of establishing a global dictatorship.
Legitimate whistleblowers who attempt to promulgate this information are being vilified by the governments that are attempting to conceal these crimes from the global populace, as these whistleblowers attempt to survive the murder attempts being made on their lives long enough to circulate this disturbing information.
Moreover, as an inhabitant of this planet, it doesn't matter which country you live in, since all governments have declared it open season on your mind. And unless you understand what makes you vulnerable to these forms of satellite predation, you have no chance of preventing the enslavement of your own mind.
“The advancement of techniques propel us toward the third step in the Blue Beam Project that goes along with the telepathic and electronically augmented two-way communication where ELF, VLF and LF waves will reach each person from within his or her own mind, convincing each of them that their own god is speaking to them from the very depths of their own soul. Such rays from satellites are fed from the memories of computers that have stored massive data about every human on earth, and their languages. The rays will then interlace with their natural thinking to form what we call diffuse artificial thought."
"So how could that be possible you ask? Easy. Enter Project Joshua Blue, currently under development by our favourite business machines company, IBM. Joshua Blue is a program with the stated goal of “Evolving an Emotional Mind in a Simulated Environment”, 'to enhance artificial intelligence by evolving such capacities as common sense reasoning, natural language understanding, and emotional intelligence, acquired in the s'me manner as humans acquire them, through learning situated in a rich environment.”
This is software that is capable of learning and developing ‘emotions’, according to a related document, “Feeling Fabricated: Artificial Emotion”. Parts of this project involve the development of Joshua so that emotion is part of its reasoning, and these guidelines are being followed: naturalness and believability, social effectiveness, and meaningfulness of displays to human observers. In fact, the main goal of Joshua Blue is to achieve cognitive flexibility that approaches human functioning. In other words, this is artificial intelligence that could be diffused with our own thought because it has been designed to ‘think like a human’.
A computer program which can read silently spoken words by analysing nerve signals in our mouths and throats has been developed by NASA. In my previous post I outlined the technology that exists. In 1994, the brain wave patterns of 40 subjects were officially correlated with both spoken words and silent thought. This was achieved by a neurophysiologist, Dr Donald York, and a speech pathologist, Dr Thomas Jensen, from the University of Missouri. They clearly identified 27 words/syllables in specific brain wave patterns and produced a computer program with a brain wave vocabulary. It does not take much thinking to realise that the US agencies have access to a perfected version of this technology. In fact, the relevant computers have a vocabulary in excess of 60,000 words and cover most languages. The NSA's signals intelligence monitors the brainwaves of their targets by satellite and decodes the evoked potentials (3.50 Hz, 5 milliwatts) that the brain emits. So, using lasers/satellites and high-powered computers, the agencies have now gained the ability to decipher human thoughts, and from a considerable distance (instantaneously).
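To make the notion of a "brain wave vocabulary" concrete, here is a purely illustrative Python sketch of matching an observed feature vector against stored per-word templates by nearest neighbour. The words, feature dimensions and numbers are invented for the example; this is a generic pattern-matching sketch, not a description of any actual agency or laboratory system.

```python
import numpy as np

# Hypothetical "vocabulary": each word is paired with a stored template vector
# of signal features (e.g., power in a few frequency bands). The templates and
# feature dimensions here are invented purely for illustration.
VOCABULARY = {
    "yes":   np.array([0.8, 0.1, 0.3, 0.2]),
    "no":    np.array([0.2, 0.7, 0.4, 0.1]),
    "water": np.array([0.5, 0.5, 0.9, 0.3]),
}

def decode_word(feature_vector: np.ndarray) -> str:
    """Return the vocabulary word whose stored template is closest
    (Euclidean distance) to the observed feature vector."""
    best_word, best_dist = None, float("inf")
    for word, template in VOCABULARY.items():
        dist = np.linalg.norm(feature_vector - template)
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word

# Example: a noisy observation that should match "water".
observation = np.array([0.55, 0.45, 0.85, 0.25])
print(decode_word(observation))  # -> "water"
```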
With these seemingly far-fetched technologies in mind, although evidently not so far-fetched, also consider this final one. Silent subliminal presentation system: a communication system in which non-aural carriers (in the very low or very high audio frequency range, or in the ultrasonic frequency spectrum) are amplitude or frequency modulated with the desired "intelligence" and propagated acoustically or vibrationally for inducement directly into the brain. This can be done 'live' or recorded/stored on magnetic, mechanical or optical media for delayed/repeated transmission to the target. Sound can also be induced by radiating the head with microwaves (in the range 100 to 10,000 MHz) that are modulated with a waveform consisting of frequency-modulated bursts. HAARP, of course, is going to be handling all the microwaves.
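As a general signal-processing illustration of what frequency-modulating a non-aural carrier with an audio waveform means, the Python sketch below frequency-modulates an ultrasonic carrier with an audio-band tone. The sample rate, carrier frequency and deviation are arbitrary choices for the example, not figures taken from any patent or deployed system.

```python
import numpy as np

fs = 192_000                 # sample rate high enough to represent an ultrasonic carrier
duration = 1.0               # seconds
t = np.arange(int(fs * duration)) / fs

# Arbitrary "message": a 300 Hz tone standing in for speech audio.
message = np.sin(2 * np.pi * 300 * t)

carrier_freq = 40_000        # 40 kHz carrier, above the audible range
freq_deviation = 2_000       # how far the message swings the carrier frequency (Hz)

# Classic frequency modulation: integrate the message to obtain the phase offset.
phase = 2 * np.pi * carrier_freq * t \
        + 2 * np.pi * freq_deviation * np.cumsum(message) / fs
fm_signal = np.cos(phase)

print(fm_signal.shape)       # one second of the modulated carrier
```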
Knowing all of this, Joshua Blue could in fact be being fed with your personal data (Facebook, MySpace, medical records, driving records, police records, shopping records, etc.) while simultaneously extracting your actual, real-time thought processes. It could then calculate alternate thought trains and, depending on the 'mission', it could distract you, lead you to the wrong conclusions or drive you insane. It could even manage to make you convince yourself that yes, aliens truly are invading or Jesus really is talking to you from the clouds. With all the data we put out onto the Internet about ourselves through MSN, I'm 100% certain 'they' know how we talk and therefore how we think. This is way too feasible for me to be comfortable. If a thought generated artificially by the computer were that precise, how could you reasonably differentiate between the AI's thought and your own? This scares me.
Joshua Blue is specifically being designed to handle emotions, so thought patterns and logically chosen emotions to complement them are going to be very hard to overcome if this were to go live. I really do hope this is just a figment of my imagination; this technology is right in front of our eyes, however, so I'm going to have an extremely tough time convincing myself otherwise."
Also See:
Project Joshua Blue: Design Considerations for Evolving an Emotional Mind in a Simulated Environment
Nancy Alvarado, Sam S. Adams, Steve Burbeck, Craig Latta
IBM, Thomas J. Watson Research Center
Abstract
This paper contrasts the implementation of motivation and emotion in Project Joshua Blue with current approaches such as Breazeal’s (2001) sociable robots. Differences in our implementation support our different goals for model performance and are made possible by a novel system architecture.
Project Joshua Blue applies ideas from complexity theory and evolutionary computational design to the simulation of mind on a computer. The goal is to enhance artificial intelligence by evolving such capacities as common sense reasoning, natural language understanding, and emotional intelligence, acquired in the same manner as humans acquire them, through learning situated in a rich environment.
This project is in its beginning stages. A simple model of mind has been implemented in a limited virtual environment. Even in this first, simple model, emotion and motivation are not separate programs or subroutines but are integral to the basic functions of mind and have a constant and pervasive influence on all mental activity. We believe that the complex social behaviors observed in humans will emerge as capacities of mind from the exercise of emotion and motivation in social environments.
More importantly, however, we believe that integrating emotion and motivation with cognition is essential to achieving common sense reasoning and natural language understanding, to autonomous learning, and to goal-setting. In short, this integration is essential to endowing a computer with the ability to comprehend “meaning” as humans do.
The main goal of Project Joshua Blue is to achieve cognitive flexibility that approaches human functioning. We believe emotion is a mediating mechanism that permits flexible assignment of meaning and significance in different contexts, coupled with a way of navigating a dynamic environment. To do this, emotion itself must not be fixed in its relationships, but free to associate variably with environmental stimuli and internal mental events.
That emotion guides cognition is contrary to the theory of emergent emotions, where emotion is in the eyes of the observer, attributed to a robot or other entity based on its interaction with the environment (Shibata, 1999). Further, it is contrary to the modularity proposed by Brooks (1986) and others. We believe isolated or limited implementations of emotional capacity must result in limited functionality.
Sociable robots are relevant to our project because we expect Joshua Blue to ultimately learn through embeddedness in a social environment. Breazeal’s (2001) promising approach to implementing emotion in robots appears to directly instantiate emotion using logic. She gives sociable robots emotion by specifying: (1) the conditions under which certain affective states arise, (2) criteria for arbitrating among competing emotions, and (3) the instrumental and expressive behaviors resulting from each affect (Breazeal, 2001).
In her model, the releasers for affect must be specified, which implies that the designer must anticipate the possible drives or goals and define emotion-evoking situations. The response to those situations is fixed once the emotion is identified, and there is no ability for the robot to override it.
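To make the contrast with Joshua Blue concrete, here is a minimal Python sketch of the kind of rule-based releaser scheme described above, in which emotion-evoking conditions and their fixed responses are specified in advance by the designer. The specific conditions, emotions and responses are invented for illustration and are not taken from Breazeal's implementation.

```python
# Each releaser maps a pre-specified condition to a fixed emotion and a fixed
# behavioral response. The designer must anticipate every relevant situation.
RELEASERS = [
    # (condition function, emotion, fixed response)
    (lambda s: s["face_detected"] and s["proximity"] < 0.5, "interest", "orient_toward"),
    (lambda s: s["loud_noise"],                             "fear",     "withdraw"),
    (lambda s: s["stimulus_duration"] > 30.0,               "boredom",  "seek_new_stimulus"),
]

def arbitrate(situation: dict):
    """Return the first matching (emotion, response); the response is fixed
    once the emotion is identified and cannot be overridden."""
    for condition, emotion, response in RELEASERS:
        if condition(situation):
            return emotion, response
    return "neutral", "idle"

print(arbitrate({"face_detected": True, "proximity": 0.3,
                 "loud_noise": False, "stimulus_duration": 5.0}))
# -> ('interest', 'orient_toward')
```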
As yet, there is no reflexivity, self-awareness, consciousness of emotional state, or subjective feeling beyond what is simulated behaviorally, and there is no ability to maintain affective privacy or engage in impression management by dissembling. Without the ability to selectively inhibit behavior, there is no possibility of conforming to social display rules or using affective expression instrumentally through deceit.
This approach, and similar rule-based or logic-based implementations of emotional intelligence, have accomplished an amazing amount of functionality. Their designers clearly intend to expand emotional competence, but in doing so they are likely to encounter the same resource limitations as are faced by those using rule-based approaches to reasoning or knowledge management (Brooks, 1986). Thus, for Joshua Blue we sought a different approach to implementing both emotion and other cognitive abilities, beyond rule-based approaches, neural nets that also have limitations, and statistical approaches to simulating cognition.
Joshua Blue incorporates an emotional model derived from current emotion theory, and is thus superficially similar to models implemented by Breazeal and others. Like such models, our system includes valence and arousal, homeostasis, and drive states, but it also includes proprioception and a pain/pleasure system. The system architecture is based on a semantic network of nodes connected by wires along which activation spreads (Quillian, 1966; Collins & Loftus, 1975).
In traditional spreading activation models, the length of wires captures semantic distance. In our model, the conductance of wires is adjusted dynamically based on the emotional context. This design permits cognitive processes and mental representations to be continuously influenced by affect.
Further, like many current approaches, the system is motivated and guided by affect to navigate its environment and acquire meaning through principles of learning. A key difference between our model and current approaches is that emotion is implemented in both global and specific ways. Like Breazeal’s (2001) model, our system uses tags for valence, but not for arousal or stance. When a node is activated, its valence influences the valence of the entire system, but is also modified by the global affect of the system. This makes possible emotionally driven shifts in cognition.
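A minimal sketch, assuming a toy semantic network, of the two mechanisms just described: wire conductance that is modulated by the current emotional context, and node valence that both shifts and is shifted by a global affect value. All node names, valences and update rules below are invented for illustration; the paper does not publish its actual equations.

```python
import collections

class AffectiveNetwork:
    """Toy spreading-activation network: edge conductance is modulated by the
    current global affect, and node valence and global affect influence each other."""

    def __init__(self):
        self.edges = collections.defaultdict(list)  # node -> [(neighbor, base_conductance)]
        self.valence = {}                           # node -> valence in [-1, 1]
        self.global_affect = 0.0                    # system-wide affective state

    def add_node(self, name, valence=0.0):
        self.valence[name] = valence

    def add_wire(self, a, b, conductance):
        self.edges[a].append((b, conductance))
        self.edges[b].append((a, conductance))

    def activate(self, start, energy=1.0, decay=0.5):
        """Spread activation outward from `start`. Conductance is scaled up when a
        neighbor's valence matches the sign of the global affect (an invented rule)."""
        activation = {start: energy}
        frontier = [(start, energy)]
        while frontier:
            node, e = frontier.pop()
            # Local valence nudges global affect, and global affect nudges local valence.
            self.global_affect = 0.9 * self.global_affect + 0.1 * self.valence[node]
            self.valence[node] = 0.9 * self.valence[node] + 0.1 * self.global_affect
            for neighbor, base in self.edges[node]:
                mood_match = 1.0 + 0.5 * self.global_affect * self.valence[neighbor]
                passed = e * decay * base * mood_match
                if passed > 0.05 and passed > activation.get(neighbor, 0.0):
                    activation[neighbor] = passed
                    frontier.append((neighbor, passed))
        return activation

net = AffectiveNetwork()
for name, valence in [("dog", 0.6), ("bite", -0.7), ("friend", 0.8)]:
    net.add_node(name, valence)
net.add_wire("dog", "bite", 0.8)
net.add_wire("dog", "friend", 0.8)
print(net.activate("dog"))  # affectively congruent nodes receive slightly more activation
```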
Arousal guides attention and determines the strength of associations formed, interacting with valence to tag specific objects with additional significance. Affective weighting is important in determining which associated objects will cross a threshold for consciousness or be retained in memory. Proprioceptors for affect were implemented to permit the system to introspect on its own global affective state, to be aware of the affect associated with a specific set of objects, and to experience pain and pleasure. This latter constitutes the reward and punishment system that guides exploratory behavior, generates expectations and ultimately motivates goal-directed behavior.
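As a rough illustration of affective weighting as a gate on consciousness and memory, the sketch below uses an invented weighting formula and threshold; only the general idea, that arousal and strong valence raise an object's significance, is taken from the text.

```python
def affective_weight(activation: float, valence: float, arousal: float) -> float:
    """Invented weighting rule: arousal amplifies activation, and strongly
    valenced (positive or negative) objects gain additional significance."""
    return activation * (1.0 + arousal) * (1.0 + abs(valence))

CONSCIOUSNESS_THRESHOLD = 0.6

activated_objects = [
    # (name, activation, valence, arousal)
    ("stranger_smiling", 0.4,  0.5, 0.2),
    ("loud_bang",        0.3, -0.8, 0.9),
    ("wall_color",       0.4,  0.0, 0.0),
]

conscious = [name for name, a, v, ar in activated_objects
             if affective_weight(a, v, ar) >= CONSCIOUSNESS_THRESHOLD]
print(conscious)  # only the affectively charged objects cross the threshold
```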
Unlike Breazeal, we have made no attempt to instantiate Ekman’s basic emotions. We believe such states will emerge from learning and social interaction, providing a test of current emotion theories. Aside from the motive to seek pleasure and avoid pain, we are also incorporating a more complex structure of drives. We reserve the term “drive” for innate or hard-wired motives essential to survival, such as hunger or thirst in humans. Beyond that, the coupling of affect and experience should result in the formation of acquired motives and associated goals that have attained emotional significance through social learning (Reeve, 1997).
Breazeal (2001) uses positive emotions to signal that activity toward a goal can terminate and resources can be released. Neuropsychological evidence supports the idea that pleasure indefinitely sustains seeking or approach behavior, while other mechanisms indicate satiety (Panksepp, 1998). In our system, positive affect or pleasure arises not only from consummatory behavior but also from the exercise of certain intrinsic cognitive processes that require no homeostatic regulation (e.g., autonomy and control, familiarity and liking, competence and self-esteem, social attachment).
Pleasure is thus not a signal to terminate a drive state but a motive for approach behaviors. To terminate goals, our model incorporates the notion of quasi-needs, social needs and deficit motivations. These needs are acquired motives that give rise to negative affect when unsatisfied (e.g., the need for power, social status, or achievement). Negative affect is reduced and moves toward a neutral state once such a need is satisfied, terminating the goal. This reduction in negative affect is itself reinforcing, and demonstrates the importance of implementing the capacity for relativistic subjective states.
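A small sketch of that dynamic under assumed numbers: an unsatisfied acquired need produces negative affect, making progress toward the goal relieves that affect, and the goal terminates once the deficit is gone, with the relief itself serving as reinforcement. The class name, urgency values and update rule are hypothetical.

```python
class AcquiredNeed:
    """An acquired motive (e.g., achievement) that generates negative affect
    while unsatisfied; satisfying it relieves the affect and ends the goal."""

    def __init__(self, name, urgency=0.1):
        self.name = name
        self.deficit = 1.0          # 1.0 = fully unsatisfied, 0.0 = satisfied
        self.urgency = urgency

    def negative_affect(self) -> float:
        return -self.deficit * self.urgency * 10.0

    def pursue_goal(self, progress: float) -> float:
        """Make progress toward satisfying the need; return the change in affect,
        which is positive (reinforcing) as negative affect is relieved."""
        before = self.negative_affect()
        self.deficit = max(0.0, self.deficit - progress)
        return self.negative_affect() - before

need = AcquiredNeed("achievement", urgency=0.08)
while need.deficit > 0.0:
    relief = need.pursue_goal(progress=0.25)
    print(f"deficit={need.deficit:.2f}  affect={need.negative_affect():+.2f}  relief={relief:+.2f}")
# The goal terminates once the deficit reaches zero and affect returns to neutral.
```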
Stance is determined by whether pleasure or relief of pain is the guiding motivation. While more complex, this conceptualization permits acquisition of an endless array of motives and goals without the need to hardwire them as drives. It also more closely resembles human functioning.
Our early experience with this model suggests that establishing exact homeostatic set points and bounds is not critical to system functioning. Attaching negative affect or pain to homeostatic imbalances creates temporary drive states that motivate regulatory behavior. We have observed that the same behavior results regardless of the values established. Instead, the temporal cycles for satisfying drives vary with differences in the strength of affect arising from imbalances, resulting in behavioral differences comparable to temperament observed in humans.
Unless the system is placed in an environment where satisfaction of imbalances is impossible, extremes are never reached, obviating the need for boundaries. Our system can function without predetermining “correct” set points or boundaries because the system’s emotional behavior is not defined on the basis of its distinct homeostatic drive states, as it is in Breazeal’s (2001) model.
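The point about set points and temperament can be illustrated with a toy simulation using arbitrary numbers: whichever set point is chosen, the same regulate-when-pained behavior emerges, and only the cycle length changes with how strongly the imbalance is felt.

```python
def simulate(set_point: float, affect_strength: float, steps: int = 50) -> int:
    """Count regulatory actions over `steps` time steps. A drive level drifts
    away from the set point; pain proportional to the imbalance (scaled by
    affect_strength) triggers regulation back toward the set point."""
    level, actions = set_point, 0
    for _ in range(steps):
        level += 0.1                                   # slow drift (e.g., growing hunger)
        pain = affect_strength * abs(level - set_point)
        if pain > 0.5:                                 # pain motivates regulatory behavior
            level = set_point                          # "eat": restore balance
            actions += 1
    return actions

# Different set points, same temperament: the behavior is unchanged.
print(simulate(set_point=0.0, affect_strength=1.0),
      simulate(set_point=5.0, affect_strength=1.0))
# Same set point, stronger affect from imbalance: the regulatory cycle shortens.
print(simulate(set_point=0.0, affect_strength=1.0),
      simulate(set_point=0.0, affect_strength=2.0))
```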
When emotion arises as a fixed consequence of cognition or of some appraised environmental event, affect does not guide or influence cognition but is determined by it. We believe flexible thought can be achieved by linking affect to semantic meaning and using affect as a weighting mechanism, a significance indicator, a tuning mechanism for attention and memory, a choice mechanism, and a motivator of situation-appropriate behavior linked to accomplishing desired goals. This potential for flexibility is diminished when emotionality is specified to a system, not emergent from it.
References
Breazeal, C. 2001. Designing Sociable Machines. The MIT Press. Forthcoming.
Brooks, R. A. 1986. Achieving Artificial Intelligence through Building Robots. MIT AI Lab Memo 899.
Collins, A. & Loftus, E. 1975. A spreading-activation theory of semantic processing. Psychological Review, 82, 407-428.
Panksepp, J. 1998. Affective Neuroscience. Oxford University Press.
Quillian, M. 1966. Semantic Memory. Cambridge, MA: Bolt Beranek and Newman.
Reeve, J. 1997. Understanding Motivation and Emotion, Second Edition. Harcourt Brace College Publishers.
Shibata, T., Tashima, T., & Tanie, K. 1999. Emergence of Emotional Behavior through Physical Interaction between Human and Robot. Proceedings of the 1999 IEEE International Conference on Robotics and Automation (ICRA'99), Vol. 4, pp. 2868-2873.