
Section 3 / Page 67

Internally set patterns, or local descriptions (as they are known in the industry), are just self-contained data structures (also see ontology) with descriptors capable of being integrated into a larger framework (also see RMI). If a virtualised image of a spoon were contained within the database, then the data describing that object could be described as an internally set pattern. If you're into biology or chemistry, then this can almost be seen as a lock-and-key mechanism: imagine each piece of data being smart tagged, thus allowing some pieces of data to fit whilst other pieces would be rejected. Eventually a larger picture is developed via the joining of pieces of fragmented data; this is software evolution at work.
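To make that lock-and-key idea a little more concrete, here is a minimal sketch in Python. Every name and tag below is a hypothetical illustration of the smart-tagging scheme, not part of any real MNN software: a piece of data only joins another when its tags satisfy what the other piece will accept.

from dataclasses import dataclass, field

@dataclass
class Fragment:
    name: str
    tags: set = field(default_factory=set)     # what this piece of data offers
    accepts: set = field(default_factory=set)  # what it is prepared to bind to

def fits(lock: Fragment, key: Fragment) -> bool:
    # A key "fits" a lock when it offers every tag the lock demands.
    return lock.accepts <= key.tags

spoon_model = Fragment("spoon_3d_model", tags={"cutlery", "3d", "metal"})
spoon_entry = Fragment("spoon_record", accepts={"cutlery", "3d"})
dog_photo   = Fragment("dog_photo", tags={"animal", "2d"})

print(fits(spoon_entry, spoon_model))  # True  - the pieces join
print(fits(spoon_entry, dog_photo))    # False - the piece is rejected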

The data structures describing any object were to have included as much information as could be used to describe that object, i.e. mass, weight, price (if any), what it looked like and its functionality (if it had any); each object was also to have been 3D scanned, so as to accurately capture all of its dimensions. The databases were to have been set up in such a way as to allow third parties to input their own captured images, products etc. The MNN would then have contained millions of internally set patterns describing millions of individual objects and environments. These data structures were then to be studied by the MNN, looking for similar patterns, allowing it to build up a very smart visual and audio database.

A simple example, showing how the databases were to have been set up (a rough code sketch of one possible record layout follows the list below):-

Object's name, followed by other associated information such as:-

Price, if any: Currency
Type of object or environment: Electrical goods, house, animal, even person, etc.
Standard 2D picture of object or environment: Format, e.g. JPEG
Weight of object if attainable, if not then best estimate: Metric
Mass of object if attainable, if not then best estimate: Metric
Real-world object or something else: Real / imaginary / etc.
Sound used to describe object or environment: Formatted into a standardised sound file
3D model of object or environment: Formatted into a standardised 3D model format
Text description of object, if applicable: In HTML or some other metaformat
Description for an EVA or virtual guide to read out: In HML (human markup language) or other such format
Geographical location, if any: GPS co-ordinates or other such information
Time period the object existed in, if applicable: The present, or a date in history
Etc.: Etc.
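As promised, here is one hedged guess, in Python, at how a single record built from the fields listed above might look in code. Every field name and type here is an assumption made purely for illustration:

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObjectRecord:
    name: str                                 # the object's name
    price: Optional[float] = None             # price, if any
    currency: Optional[str] = None            # currency the price is quoted in
    object_type: str = ""                     # electrical goods, house, animal, person, etc.
    picture_2d: Optional[str] = None          # standard 2D picture, e.g. a JPEG file
    weight: Optional[float] = None            # metric; best estimate if not attainable
    mass: Optional[float] = None              # metric; best estimate if not attainable
    reality: str = "real"                     # real / imaginary / etc.
    sound_file: Optional[str] = None          # standardised sound file
    model_3d: Optional[str] = None            # standardised 3D model file
    text_description: Optional[str] = None    # HTML or some other metaformat
    spoken_description: Optional[str] = None  # for an EVA or virtual guide to read out
    location: Optional[Tuple[float, float]] = None  # GPS co-ordinates, if any
    time_period: Optional[str] = None         # the present, or a date in history

spoon = ObjectRecord(name="spoon", object_type="cutlery", mass=0.05, model_3d="spoon.obj")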

As can be seen, if you pause the video above, this type of idea is catching on; see 3D Turtles. All the necessary data was then to be collected, optimised and encapsulated into a more useable form, once the data had matured and had been processed.


Humans have billions of neural connections, which are used by our brains to store and represent information. The fact that our brains represent that data in a biological / chemical way, rather than in silicon, should not detract from the fact that all that is really going on is the manipulation of data. The data that we store is based upon the data that we expose ourselves to every day; e.g. if you want to become a good football player, then you would normally expose yourself to a lot of football. If you wish to learn something new, then you have to expose yourself to whatever that thing is.

The extremely accurate capturing of virtual data should, for the purposes of this proposal / eBook, be seen as a way of getting any sufficiently advanced A.I. system to intelligently copy and learn from that data. Developments in the fields of motion capture, force feedback and vision processing are showing the way in which computer systems can now acquire this type of data. The next step is to get an A.I. system to intelligently understand this data and apply it to the rest of the concepts it may have learnt. The overlaying of the data to fit in with its basic model (see page 141) was seen as just another learnt concept to be applied, in this case to the MNN's proposed knowledge base. The basic model concept was used to address one of those fundamental A.I. questions, i.e. how do you give a computer system even a basic awareness?

As the MNN developed, it was to have human experts in many fields showing it and teaching it any of the practical skills that we wished it to learn. Through the use of virtual reality, haptic interfaces (force feedback devices), advanced vision processing systems etc, the system should have been able to take that data in and then allow the software to transform it into a format capable of being processed by the MNN. This should eventually have allowed the MNN to intelligently copy or mirror almost any expert in almost any field; after all, in a similar way, this is what we all do, i.e. we see other people doing stuff and then we copy them. This is called learning.
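As a purely illustrative sketch of that transformation step (every device name and field below is an assumption, not a real interface), raw samples from different capture devices might be normalised into one common format before the MNN ever sees them:

def normalise(sample: dict) -> dict:
    # Map a raw capture sample from any supported device onto one common schema.
    device = sample["device"]
    if device == "motion_capture":
        return {"channel": "movement", "frames": sample["joint_angles"]}
    if device == "haptic_glove":
        return {"channel": "force", "frames": sample["pressure_readings"]}
    if device == "camera":
        return {"channel": "vision", "frames": sample["video_frames"]}
    raise ValueError(f"unknown capture device: {device}")

raw_samples = [
    {"device": "motion_capture", "joint_angles": [[0.1, 0.5], [0.2, 0.4]]},
    {"device": "haptic_glove", "pressure_readings": [0.9, 0.7]},
]
print([normalise(s) for s in raw_samples])  # the uniform stream the MNN would be fed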

As the A.I. system evolved, then through the use of intelligent picture interpretation, or advanced vision processing, it was to have been given the ability to carefully study a TV image of an expert at work and then learn those skills (just patterns within patterns). The MNN was seen as having the ability to study any image, 2D or 3D, and then interpret those images into very realistic CGI images, in real time. Computers will become increasingly capable of visually processing information, in a very similar way to humans.

It all comes down to the interpretation of the data gathered and the intelligent copying or mirroring of that data or learnt behaviour. The type of A.I. system being described (the MNN), linked to such a vast sound and image database whilst using the latest NLP software, was seen as a way of getting it to relate what was being said in real time, by any of the online and interactive audience, to any of the objects / environments held within the database network. This should have allowed the MNN to learn in much the same way we do, i.e. by relating sounds to images to actions, and vice versa.
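A toy Python sketch of that sound-to-object linking, with a made-up three-entry database standing in for the real network (real NLP would of course be far more involved than simple word lookup):

database = {"spoon": "record_0001", "house": "record_0002", "dog": "record_0003"}

def link_utterance(utterance: str) -> dict:
    # Return every database record mentioned in a spoken sentence, so that
    # speech, images and actions can all be tied back to the same records.
    words = [w.strip(".,?!") for w in utterance.lower().split()]
    return {w: database[w] for w in words if w in database}

print(link_utterance("Look at the dog next to the house"))
# {'dog': 'record_0003', 'house': 'record_0002'}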

The problem was that the MNN might have started to learn very quickly, given the fact that it was to have been connected to an online community of millions. This would have given it access to a lot of people talking, interacting and whatever else that online community was doing, thus giving the MNN the ability to watch and hear everything that was going on inside itself. This ability was envisioned as being semi-autonomous in nature. The network's software would have been permanently learning and adapting with each newly inputted piece of data. The automatic mechanisms built into the MNN's software should have allowed the system to automatically evolve, which leads to section 4 of this eBook, because in section 4 I will tell you what could be the very probable outcome if this type of software evolution is introduced and goes unchecked.
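To make that unchecked-evolution worry concrete, here is a deliberately tiny, purely illustrative loop: every observed event is folded straight back into the system's knowledge, with no human review step, so the system keeps changing for as long as data keeps arriving.

knowledge = {}

def observe_and_adapt(event: str) -> None:
    # Fold each observed event straight back into the knowledge base -
    # note there is no human review step anywhere in this loop.
    knowledge[event] = knowledge.get(event, 0) + 1

for event in ["hello", "hello", "goodbye"]:  # stand-in for the online community's chatter
    observe_and_adapt(event)

print(knowledge)  # {'hello': 2, 'goodbye': 1}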



Author Alan Keeling