
PREVIOUS PAGE |_Page index_| Website | Review Page | Journey | Donate | Links | NEXT PAGE

 If you jumped to this page from another page within the eBook, click your browser's back button to return to it, rather than using the previous or next buttons above.

You are reading a page from a free video eBook called Heaven or Hell It's Your Choice; for more information, click on the website button above.

Section 3 / Page 83

The genetic development of the eye is seen, within some quarters of the scientific community, as one of the most significant genetic leaps that can be made by any biological life form. The evolution of the eye allows life to gain a better understanding of its surroundings and of its relationship to those surroundings. It is also suggested that the brain evolved alongside the eye, so as to comprehend that extra sensory input. This is included to show that the brain uses the eye's input to help model its environment; this may seem like an obvious fact, but it needs to be pointed out for the purposes of this book. The ability to see allows many types of interaction and non-verbal communication skills to be expressed and learnt.

The ability to communicate using visual cues is seen as a very important step in the evolution of any species:-

See the video; it is about 14 minutes long, but worth watching.

The ability to see (in computer terms, the processing of visual information) is something that was foreseen as being used by the MNN to help it study the VR environments, along with any of the online users interacting with those environments. The ability to comprehend what it is seeing was to have been an evolutionary or learning process. Machine vision systems are currently limited, largely due to the storage and processing capabilities of the onboard controlling computer. A network connection to the net should allow these types of systems to improve greatly, bandwidth being the only real issue. In other words, robots and machines of the future should be able to connect or wirelessly link directly to the net, giving them both the computing power and the knowledge base to do whichever tasks we may wish them to do. This is how machine vision processing tasks may be carried out, i.e. by linking to both the databases and grid systems of the future. So, in other words, you may have generic robots that then use different online software databases, just like loading a different program: the computer stays the same, and in this case so would the robot; only the application changes.
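The generic-robot idea above, where the hardware stays fixed and only the loaded application changes, can be sketched in code. This is purely illustrative; every name here (GenericRobot, SkillModule and so on) is invented for the sketch, not a real robotics API.

```python
# Hypothetical sketch: a generic robot whose behaviour is defined entirely
# by whichever "application" module is loaded from an online database.
# All class and function names are illustrative assumptions.

class SkillModule:
    """One downloadable application, e.g. a vision-processing task."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # function mapping sensor input -> action

    def run(self, sensor_input):
        return self.handler(sensor_input)

class GenericRobot:
    """The hardware stays the same; only the loaded module changes."""
    def __init__(self):
        self.module = None

    def load(self, module):
        # "Load the new program and there you go."
        self.module = module

    def act(self, sensor_input):
        if self.module is None:
            return "idle"
        return self.module.run(sensor_input)

robot = GenericRobot()
robot.load(SkillModule("greeter", lambda seen: f"wave at {seen}"))
print(robot.act("visitor"))   # wave at visitor

# Same robot, different online application: only the software changed.
robot.load(SkillModule("cleaner", lambda seen: f"sweep around {seen}"))
print(robot.act("visitor"))   # sweep around visitor
```

The point of the sketch is that swapping the module changes what the same physical machine does, which is exactly why the question of who controls the software matters.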

Or maybe from cute and cuddly to machine-gun-toting deadly, all with just a click.

Load the new program and there you go. So who will control the software: will it be us, or the machines?

An exercise in marketing or a glimpse of the future?


The databases were to employ many mechanisms, allowing the A.I. system to build up an increasingly useful database of concepts (in this case, a set of learnt responses to certain inputted stimuli), in much the same way the human brain does. The end user's ability to interact with the databases, utilising increasingly realistic-looking avatars, was to have allowed the inbuilt learning mechanisms to study all of the interactive data constructs available.

E.g. if, every time two avatars met, they said "Hello" and then shook hands, a concept could be applied to that pattern. This is something that could be happening a lot within the databases: the more numerous an occurrence between the users, their VR environments and the objects within those environments, the more likely those interactions or occurrences could be classed as concepts, or applied behavioural patterns.
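The "frequent occurrence becomes a concept" idea can be shown with a few lines of code: count how often each interaction pattern is observed, and promote the ones that recur often enough. The observation data and the threshold are made up for the sketch.

```python
# Illustrative sketch: recurring interaction patterns observed between
# avatars are counted, and frequent ones are promoted to "concepts".
# The data and the threshold value are assumptions for this example.
from collections import Counter

observations = [
    ("meet", "hello", "handshake"),
    ("meet", "hello", "handshake"),
    ("meet", "hello", "wave"),
    ("meet", "hello", "handshake"),
]

counts = Counter(observations)
CONCEPT_THRESHOLD = 3  # arbitrary cut-off: seen this often, call it a concept

concepts = {pattern for pattern, n in counts.items() if n >= CONCEPT_THRESHOLD}
print(concepts)  # {('meet', 'hello', 'handshake')}
```

The more users and environments feed the databases, the more patterns cross the threshold, which is the sense in which sheer volume of interaction turns raw occurrences into applied behavioural patterns.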

The relationship between behavioural patterns, or applied concepts, was also seen as a necessary step in the evolution of the system. I.e. two people (avatars) meet and say "Hello" or "How are you?" or whatever, so two concepts could then be applied and seen as an emerging pattern: the meeting, followed by the greeting, followed by a possible third link in the pattern, the handshake. These interactions could easily be studied, and learnt from, by a computer system within a VR environment. The application of enough concepts in real time was seen as a way of producing an incredibly smart user interface; after all, it will have learnt to pick up those concepts from us. This is the light bulb principle again, but on a much grander scale. On that note, it can also be seen as a grander extension to predicate logic... click if you wish to delve.
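The chaining of concepts described above (meeting, then greeting, then handshake) can be sketched as a simple successor table: from observed sequences, record which concept tends to follow which, then read off the most likely next link. The sequence data is invented for the example.

```python
# Sketch of linking learnt concepts into a chain: from observed
# sequences of interactions, record which concept follows which,
# then predict the likely next link. Example data is made up.
from collections import defaultdict, Counter

sequences = [
    ["meeting", "greeting", "handshake"],
    ["meeting", "greeting", "handshake"],
    ["meeting", "greeting", "wave"],
]

# follows[a][b] = how often concept b was observed directly after a
follows = defaultdict(Counter)
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        follows[a][b] += 1

def next_link(concept):
    """Most frequently observed successor of a concept."""
    return follows[concept].most_common(1)[0][0]

print(next_link("meeting"))   # greeting
print(next_link("greeting"))  # handshake
```

Applied at scale, over every user and every VR environment, a table like this is one very crude way of turning isolated concepts into the coherent patterns the text describes.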

So, within a virtual world, A.I.-controlled characters could utilise the knowledge gained so as to produce increasingly sophisticated A.I.-controlled avatars, capable of applying the behavioural databases, the knowledge base, to their own interactions with the users. The behavioural databases could contain many concepts, and the linking of these concepts into a coherent process was seen by me as a way of getting the software or A.I. system to learn and evolve. Contextual decision making is currently seen in many games and applications; this philosophy was seen as being applied on a massive scale, so as to produce the envisioned behavioural or concept database. The MNN's ability to learn in this way should have allowed it to see and hear, relatively speaking, everything going on, both inside and eventually outside of itself.
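The idea of an A.I.-controlled avatar applying a behavioural database to its own interactions can be sketched as a context lookup: given what the user just did, pick the learnt response, and fall back to watching and learning when no concept matches yet. The database contents and names are assumptions for the sketch.

```python
# Hedged sketch: an A.I.-controlled avatar applies a behavioural
# database (here just a dict of learnt context -> response pairs)
# to choose a context-appropriate action. All names are illustrative.

behaviour_db = {
    "user_waves": "wave_back",
    "user_greets": "return_greeting",
    "user_offers_hand": "shake_hands",
}

class AIAvatar:
    def __init__(self, db):
        self.db = db

    def respond(self, observed_context):
        # Fall back to a neutral action when no learnt concept matches.
        return self.db.get(observed_context, "observe_and_learn")

avatar = AIAvatar(behaviour_db)
print(avatar.respond("user_greets"))     # return_greeting
print(avatar.respond("user_moonwalks"))  # observe_and_learn
```

The fallback branch is the interesting part: unmatched contexts are exactly the raw material from which new concepts would be mined, which is how such a system could keep evolving.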

Eventually I see an HP, or human protocol, being introduced: in other words, a set of standards that allows a computer or advanced software system to interact with humans. The human protocol would allow a system to understand all of the standard communication methods used by humans, including all of the written, visual and verbal standards the human race has developed.
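Since the HP is only envisioned here, any code can only be speculative; still, one way to picture it is as a common interface with one implementation per human communication standard. Every name below is invented for the sketch, and the "interpretation" is a trivial stand-in for real language or speech understanding.

```python
# Purely speculative sketch of the envisioned "human protocol" (HP):
# a common interface through which a software system handles the
# written, visual and verbal standards humans use. Names are invented.
from abc import ABC, abstractmethod

class HumanProtocolChannel(ABC):
    """One standardised human communication channel."""
    @abstractmethod
    def interpret(self, raw_input):
        """Translate human input into an internal representation."""

class WrittenChannel(HumanProtocolChannel):
    def interpret(self, raw_input):
        # Stand-in for real text understanding.
        return {"channel": "written", "meaning": raw_input.strip().lower()}

class VerbalChannel(HumanProtocolChannel):
    def interpret(self, raw_input):
        # Stand-in for real speech recognition.
        return {"channel": "verbal", "meaning": raw_input.lower()}

class HPSystem:
    """Routes each input to the channel that understands that standard."""
    def __init__(self):
        self.channels = {"text": WrittenChannel(), "speech": VerbalChannel()}

    def understand(self, kind, raw_input):
        return self.channels[kind].interpret(raw_input)

hp = HPSystem()
print(hp.understand("text", "  Hello  "))
# {'channel': 'written', 'meaning': 'hello'}
```

The design choice, one shared interface with pluggable channels, mirrors the point of the text: each human standard only has to be implemented once, and every system speaking the protocol gains it.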

As I pointed out earlier, communication of any type can be seen as nothing more than the agreement of standards; once those standards are learnt, communication can take place. Morse code, C++, Visual Basic, HTML, French, English, Spanish etc.: just standards. Keep in mind that it takes a couple of years for most humans to learn any of these standards, but it only has to be realised once within software for proliferation to take place. If you have understood what I have just written, then it should scare the Hell out of you.

One click and a near-infinite number of new A.I. children could be born, with the same or even greater abilities than the average human?

Please report any problems you see on this page, such as broken links, non-playing video clips, spelling or grammatical errors etc., to:-

I don't have the time to individually respond to every email, so I thank you in advance for your help.



Author Alan Keeling ©