You are reading a page from a free video eBook called Heaven or Hell It's Your Choice, for more information click on the website button above.
Section 3 / Page 63
If a neural network program was, for example, shown a virtualised image of a bed and told that the image was a bed, and was then shown lots of other virtualised beds from other manufacturers and retailers (which of course it would have access to as the net becomes a truly virtual place), then this type of system was seen as learning what a bed is supposed to look like. These patterns would allow the weights within the neural network to become set, so that the pattern would then become a standardised pattern. In other words, if the A.I. system was shown a bed it had never seen before, it should have been able to compare the new bed to its list of stored objects and then tell you which objects were the most similar.
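The "compare a new bed to a list of stored objects" idea above is essentially nearest-neighbour matching. A minimal sketch follows; the object names, feature vectors (rough width, height and depth in metres) and measurements are all invented for illustration, not taken from any real system described in this book:-

```python
import math

# Hypothetical library of virtualised objects, each reduced to a simple
# feature vector: (length, height, width) in metres. Values are made up.
stored_objects = {
    "double bed": (1.9, 0.5, 1.4),
    "single bed": (1.9, 0.5, 0.9),
    "sofa":       (2.0, 0.9, 1.0),
    "wardrobe":   (0.6, 2.0, 1.2),
}

def most_similar(new_object, library):
    """Return stored object names ranked from most to least similar,
    using plain Euclidean distance between feature vectors."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(library, key=lambda name: distance(library[name], new_object))

# An unseen bed from another manufacturer, close in size to the stored beds.
ranking = most_similar((1.85, 0.55, 1.35), stored_objects)
print(ranking)
```

A real system would of course use far richer features (shape, texture, learned embeddings) than three dimensions, but the principle of ranking stored patterns by similarity is the same.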
If you scale the principle up, then using pattern recognition and lots of virtualised products from many different manufacturers and retailers, an A.I. system given enough real-time processing power and patterns could soon start to learn what everything was called, along with how those objects are supposed to look.
This, in conjunction with some real-world physics, as in how these objects should act in accordance with gravity and such like, would mean that the A.I. system would have been able to build up a comprehensive visual / audio database. If the MNN concept had been fully developed, it could have had access to an incredible amount of virtualised real-world imagery and sound to learn from. Just as humans learn by using sight, sound, touch and the interactions they have every day with people, objects and environments, so can an A.I. based system.
Just as the brain makes internal or mental models of the real world, so, in relative terms, can a computer system. VR systems can be given their own internalised and mathematically accurate view of the real world, along with the ability to see every individual object, as data structures representing real-world objects and environments (see Visual Perception). The MNN could, I believe, eventually have been able to draw its own conclusions about anything new it came across, by doing a comparison check against its own internal image or knowledge base. So just as in the real world, when a human being comes across something new, they have to study it first before they know what it is, and it is only through observation and comparison that we can derive conclusions, the MNN system was also seen as learning how to do this.
The myriad of virtualised environments, sounds and objects available within the MNN should have allowed it to see any emerging patterns. E.g. it should have been able to see that a bedroom is usually upstairs, that a bedroom is normally where the bed is kept, that the TV and video are usually kept in the living room, or that the cooker is usually kept in the kitchen. This was to be used to build up an increasingly smart basic model or Universal view (explained further on).
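This kind of "where does object X usually live" pattern can be sketched as simple co-occurrence counting over many virtual scenes. The scenes below are invented toy data, and a real MNN would be counting over millions of virtualised environments rather than four, but the emerging pattern is the same:-

```python
from collections import Counter

# A handful of made-up virtualised scenes: each records a room type and
# the objects observed in it.
scenes = [
    {"room": "bedroom",     "objects": ["bed", "wardrobe", "lamp"]},
    {"room": "bedroom",     "objects": ["bed", "lamp"]},
    {"room": "living room", "objects": ["tv", "sofa", "lamp"]},
    {"room": "kitchen",     "objects": ["cooker", "sink"]},
]

# Count how often each object is seen in each room type.
cooccurrence = Counter()
for scene in scenes:
    for obj in scene["objects"]:
        cooccurrence[(obj, scene["room"])] += 1

def usual_room(obj):
    """Return the room type in which obj has been observed most often."""
    rooms = {room: n for (o, room), n in cooccurrence.items() if o == obj}
    return max(rooms, key=rooms.get)

print(usual_room("bed"), usual_room("cooker"))
```

Given enough scenes, the counts settle into exactly the sort of rules described above: beds go with bedrooms, cookers with kitchens, and so on, without anyone having to state those rules explicitly.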
As you can see, 3D modelling of real environments is coming along nicely - welcome to 3D CG New York:-