You are reading a page from a free video eBook called Heaven or Hell It's Your Choice, for more information click on the website button above.
Whenever we meet anybody new, our brains start to build a model of how that person looks, behaves, talks, and all the other characteristics that go to make up that person. The more exposure one person has to another, the more detailed that mental model becomes. The MNN was likewise seen as being able to build up incredibly detailed models, or profiles, of how all of the online subscribers behaved. This was to have been done by studying all of the interactions the users had within the network, i.e. their shopping, gaming, personal online friendships and so on.
By letting the system observe and interact with millions of online users, it should have been able to build up very detailed models of both individual and group behaviour, on both a small and a large scale. Humans do something similar: words like ‘YUPPIE’ or ‘HOOLIGAN’ call up mental models, and those models represent both individual and group behaviour. Humans and computers can both build models of just about anything, given enough information, but it’s only recently that the software and hardware needed to get all that information into a computer system cost-effectively have become available. Part of the virtual database’s envisioned task was to act as a cost-effective data-gathering system, so as to build up a concepts or behavioural database (explained on page 83).
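The profiling idea described above can be sketched in code. This is only an illustrative toy, not anything from the book's actual proposal: it assumes each subscriber's interactions arrive as simple labelled events, and it aggregates individual profiles into a group profile, mirroring the small-scale/large-scale distinction. All names here are made up for the example.

```python
from collections import Counter, defaultdict

class BehaviourProfile:
    """Toy per-subscriber profile built from observed interaction events."""

    def __init__(self):
        self.activity_counts = Counter()   # how often they shop, game, chat, etc.
        self.contacts = defaultdict(int)   # who they interact with, and how often

    def observe(self, activity, contact=None):
        # each observed interaction makes the model a little more detailed
        self.activity_counts[activity] += 1
        if contact is not None:
            self.contacts[contact] += 1

    def dominant_activity(self):
        # the behaviour this profile has seen most often
        return self.activity_counts.most_common(1)[0][0]

def group_profile(profiles):
    """Group behaviour falls out of aggregating many individual profiles."""
    total = Counter()
    for p in profiles:
        total += p.activity_counts
    return total
```

The same observe-and-count loop serves both scales: one profile models an individual, and summing many profiles models the crowd.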
Imagine you had an apple; in your mind's eye you can picture all of the things you could do with that apple, eat it or whatever. What your brain has actually done is draw a mental image of some of the possibilities. In VR, a computer system can also be given the ability to paint its own equivalent of a mental image. Now if a human-controlled character (avatar) came along in VR, picked up a 3D representation of an apple and then ate it (in relative terms), an artificial intelligence program could see that happening. A computer system could then store that data as a learnt behaviour, just like when you teach kids how to do anything: they watch and learn.
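That watch-and-learn step can be sketched as a few lines of code. This is a minimal illustration, assuming the system can see (actor, object, action) events inside the virtual world; every name in it is invented for the example, not taken from the book.

```python
class ObservationalLearner:
    """Toy learner: records which actions it has seen performed on which objects."""

    def __init__(self):
        self.learnt = {}   # object -> set of actions seen performed on it

    def watch(self, actor, obj, action):
        # e.g. an avatar picks up an apple and eats it; store "eat" as a possibility
        self.learnt.setdefault(obj, set()).add(action)

    def possibilities(self, obj):
        # the system's rough equivalent of a mental image:
        # everything it has ever seen done with this object
        return self.learnt.get(obj, set())
```

After watching enough avatars, asking `possibilities("apple")` returns the set of actions the system has observed, its own crude "mental image" of what an apple affords.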
The problem is that a runaway effect may occur: the proposed A.I. system's ability to learn for itself can be likened to the way children, once taught how to read and write, can then move into self-directed learning. This is the danger facing any uncontrolled growth or proliferation of these types of A.I. system. Kids also learn how to fit into society by observing the people around them, through education, TV and so on. This social conditioning can be seen as a form of pattern recognition, and such patterns are also capable of being seen and learnt by a computer within a virtual-reality-type system. Society generally works because we all know how to behave in certain situations; in other words, most of us react to certain stimuli with a set of pre-conditioned responses. In this way, there is no reason to think that a computer program could not also learn to react in the same pre-conditioned way when faced with the same type of stimuli. This conditioning is covered in more detail further on in this ebook; it was also seen as a very important part of the basic model concept.
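The stimulus-and-response conditioning described above can be sketched as a simple lookup learned from observation. This is a hedged toy, not the book's mechanism: it assumes the program can watch (stimulus, response) pairs among users, and it then reproduces the most commonly observed response to each stimulus, i.e. it reacts in the same pre-conditioned way.

```python
from collections import Counter, defaultdict

class ConditionedAgent:
    """Toy agent: learns the most common response to each stimulus, then repeats it."""

    def __init__(self):
        # stimulus -> tally of responses seen for it
        self.observed = defaultdict(Counter)

    def observe(self, stimulus, response):
        # watching society: note how people react in a given situation
        self.observed[stimulus][response] += 1

    def react(self, stimulus):
        responses = self.observed.get(stimulus)
        if not responses:
            return None   # no conditioning yet for this situation
        # reproduce the pre-conditioned (most frequently observed) response
        return responses.most_common(1)[0][0]
```

The point of the sketch is how little machinery the idea needs: pattern recognition here is just counting which response usually follows which stimulus.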