
Section 3 / Page 81

Virtual Concepts: The End Result

The Macroscopic Neural Network (MNN).

The A.I. system was seen as employing the network itself to form a kind of macroscopic neural network; in this way, the network was seen as emulating the brain's own neural network in a macroscopic format. The Internet could also be described in a similar way, but the Internet, by its diverse nature, would not allow a single operator to exploit all of the capabilities that the proposed franchised / licensed database network should have had.

It's in this sense that the franchise operator could have utilised all of the network's power in the grid mode to form the envisioned macroscopic neural network. This should have allowed the MNN to learn on many varied levels: from how the networked systems or databases talked or interacted with one another, to how the individual systems internally handled and arranged all of the information they contained, whilst also learning from all of the interactions taking place between all of the online users.

Utilising the multiple-agent, data-mining, distributed-computing and artificial-intelligence techniques built into the network's software architecture should have allowed the grid concept to be employed in the perceived MNN mode. The franchised database network, in conjunction with the grid concept, could have become an extremely cost-effective way of producing and centrally controlling such a large parallel system. This parallelism was seen as being analogous to some brain functions. The MNN was foreseen as being a hybrid peer-to-peer network, with many dispersed agents and processing elements being utilised collectively by the overall operators, so as to achieve certain goals not possible by other means; the sketch below illustrates the general shape of such an arrangement.
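As a rough illustration only, here is a minimal Python sketch of that hybrid arrangement: autonomous peers hosting their own agents, with a central franchise operator farming one shared goal out across all of them. Every name here (GridNode, Agent, FranchiseOperator) and the round-robin distribution scheme are illustrative assumptions, not details taken from the original design.

```python
import queue

class Agent:
    """Hypothetical mining/processing agent hosted on one grid node."""
    def __init__(self, name, task_fn):
        self.name, self.task_fn = name, task_fn

class GridNode:
    """One peer in the hybrid network: peers hold their own agents and
    data, but a central operator can still assign them shared work."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.agents = []
        self.inbox = queue.Queue()

    def work(self):
        results = []
        while not self.inbox.empty():
            item = self.inbox.get()
            for agent in self.agents:
                results.append((self.node_id, agent.name, agent.task_fn(item)))
        return results

class FranchiseOperator:
    """Central coordinator: farms one goal out across every peer."""
    def __init__(self, nodes):
        self.nodes = nodes

    def pursue_goal(self, items):
        for i, item in enumerate(items):   # round-robin distribution
            self.nodes[i % len(self.nodes)].inbox.put(item)
        return [r for node in self.nodes for r in node.work()]

# Example: three peers collectively counting a keyword in shared records.
nodes = [GridNode(n) for n in range(3)]
for node in nodes:
    node.agents.append(Agent("counter", lambda text: text.count("virtual")))
operator = FranchiseOperator(nodes)
print(operator.pursue_goal(["virtual goods", "virtual virtual worlds", "plain text"]))
```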

An analogy can be drawn between the proposed network and the brain, where the hardware running the network can be viewed as the brain and the software as the mind. It's in this sense that I believe it would only have been a matter of time until the network's mind / software evolved to a point where it would have become useful enough to be employed in all of the ways envisioned in this eBook. This is the premise for describing the network as a macroscopic neural network, i.e. the brain's neuroanatomy cannot yet be matched in its processing ability or complexity by any singular computer system, but on a macroscopic scale, at least certain brain functions should be capable of being emulated to some degree.

The idea behind the MNN was to get all of the processing elements and agents within the grid to be used by the developers, so as to apply all of that processing power, storage, algorithms etc. towards a singular purpose, i.e. to allow the network to process all of the A.I. routines needed in real-time, so allowing the operators to utilise that level of A.I. (or parallelism) for whatever purpose they had for it. Massively parallel neural networks are inherently difficult to coordinate in their activities; the grid was seen as a partway answer towards achieving this goal. The application of current-day programming techniques, such as the ability to execute multiple threads, whilst also allowing the software to evolve and morph itself utilising self-modifying code and employing increasingly autonomous agents etc., should eventually have allowed each connected platform to act as a group of neuron-modelling agents within the MNN, along the lines sketched below.
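The following sketch, again purely illustrative, shows one way a single platform might act as a group of neuron-modelling agents, with each agent evaluated on its own thread; the sigmoid activation and the per-agent threading are assumptions made for the example, not the original design.

```python
import math
import threading

class NeuronAgent:
    """Hypothetical neuron-modelling agent: one weighted unit that a
    platform evaluates in its own thread."""
    def __init__(self, weights, bias=0.0):
        self.weights, self.bias = weights, bias
        self.output = None

    def fire(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        self.output = 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

class Platform:
    """One end-user machine acting as a group of neuron agents,
    evaluated concurrently with one thread per agent."""
    def __init__(self, neurons):
        self.neurons = neurons

    def evaluate(self, inputs):
        threads = [threading.Thread(target=n.fire, args=(inputs,))
                   for n in self.neurons]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return [n.output for n in self.neurons]

# Example: one platform hosting three neuron agents.
platform = Platform([NeuronAgent([0.5, -0.2]), NeuronAgent([0.1, 0.9]),
                     NeuronAgent([-0.7, 0.3], bias=0.5)])
print(platform.evaluate([1.0, 2.0]))
```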

As the connected end-user platforms increased in capability, the MNN was envisioned as increasingly using those capabilities. The automatic mechanisms built into the software architecture were seen as allowing the network to evolve, constantly configuring and then reconfiguring the MNN's structure, thus allowing the system to take advantage of all of the capabilities on and throughout the network, so allowing the MNN to evolve and learn (very brain-like). A sketch of one such reconfiguration step follows.
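This is a minimal sketch of such an automatic reconfiguration, assuming a hypothetical per-node capability report and a simple neurons-per-core allocation rule; neither detail comes from the original text.

```python
class NodeProfile:
    """Hypothetical capability report from one connected platform."""
    def __init__(self, node_id, cores, online=True):
        self.node_id, self.cores, self.online = node_id, cores, online

def reconfigure(profiles, neurons_per_core=10):
    """Rebuild the MNN layout from the latest capability reports:
    more capable nodes host more neuron agents; offline nodes get none."""
    return {p.node_id: p.cores * neurons_per_core
            for p in profiles if p.online}

# Initial configuration, then a reconfiguration after one node upgrades
# and another drops off the network.
profiles = [NodeProfile("a", 2), NodeProfile("b", 4), NodeProfile("c", 1)]
print(reconfigure(profiles))    # {'a': 20, 'b': 40, 'c': 10}
profiles[0].cores = 8           # node "a" upgraded
profiles[2].online = False      # node "c" went offline
print(reconfigure(profiles))    # {'a': 80, 'b': 40}
```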

Most work in cognitive science assumes that the mind has mental representations analogous to computer data structures. This is one of the main reasons behind the idea of selling virtual goods within virtual environments: these data structures were seen as being the relative equivalent of the mental models held within the brain. Cognitive modelling software and A.I. middleware are already available, but the MNN was envisioned as using a more diverse approach, not relying on any single agent or interpreter. All the applications, algorithms, microworlds and local descriptions etc. contained within the network were seen as being used to provide the MNN with a stock of functional building blocks for it to draw upon, thus providing it with an expanding knowledge base, or a more accurate basic model. (We all live within our own information bubbles.)
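To make the data-structure analogy concrete, here is a small illustrative sketch of a virtual object represented as a plain data structure and indexed into a knowledge base by the microworlds it appears in; the field names (shape, uses, found_in) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """A virtual good as a data structure: the claimed rough analogue
    of a mental model held within the brain."""
    name: str
    shape: str
    uses: list = field(default_factory=list)
    found_in: list = field(default_factory=list)   # microworlds / contexts

class KnowledgeBase:
    """Stock of functional building blocks the MNN could draw upon:
    every object indexed by the microworlds it appears in."""
    def __init__(self):
        self.objects = {}
        self.by_context = {}

    def add(self, obj):
        self.objects[obj.name] = obj
        for ctx in obj.found_in:
            self.by_context.setdefault(ctx, []).append(obj.name)

kb = KnowledgeBase()
kb.add(VirtualObject("cup", "cylinder", uses=["drinking"], found_in=["kitchen", "cafe"]))
kb.add(VirtualObject("kettle", "box", uses=["boiling water"], found_in=["kitchen"]))
print(kb.by_context["kitchen"])   # ['cup', 'kettle']
```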

Most A.I. projects to date have not included visual and audio information in their approach to A.I., largely due to the cost of memory and processing power. The rapid advances in computer technology are now making this less of a problem. The problem of disambiguation seen in most A.I. systems was seen as being less of a problem within the MNN, due to the overall completeness of its available information intake, i.e. the VR-style representation of data, along with the time and environments (microworlds) each object may be used and found within etc. This is known as strong A.I., as in the designing of a system capable of displaying real intelligence; the MNN was seen as being able to meet this challenge. The flip side of strong A.I. is, logically, weak A.I.; this approach to A.I. design is currently being used in most games and in most agent design.
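As a toy illustration of why a richer information intake eases disambiguation, the sketch below scores each candidate meaning of an ambiguous word by its overlap with the observed environment and time cues; the word senses and cue sets are fabricated purely for the example.

```python
def disambiguate(word, senses, context):
    """Pick the candidate sense whose cues best overlap the observed
    environment/time context; richer context means fewer ambiguities."""
    def score(sense):
        return len(set(sense["cues"]) & set(context))
    return max(senses[word], key=score)

senses = {
    "bat": [
        {"meaning": "flying mammal",    "cues": {"cave", "night", "wings"}},
        {"meaning": "sports equipment", "cues": {"pitch", "ball", "daytime"}},
    ]
}
# The microworld (a cave, at night) resolves the ambiguity.
print(disambiguate("bat", senses, {"night", "cave"})["meaning"])   # flying mammal
```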

The MNN was to employ brute-force pattern-recognition algorithms if more elegant solutions could not be found to some problems; the grid's processing power was to be used for this task. The ability to correlate the data contained into one massively cross-referenced and eventually understandable database, by way of the interpretation software, was the principle upon which the MNN was to be based. Also see common-sense reasoning and the basic model concept. The Wiki database system can be seen as an interconnected knowledge base, but with, as yet, no understanding of its own content. The MNN, in contrast, was envisioned as being able to increasingly understand the content it found within its own databases / network's reach. A minimal sketch of such a distributed brute-force scan follows.
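This sketch uses a thread-backed worker pool as a stand-in for the grid's nodes; the sharded database and the naive substring match are assumptions made purely for illustration.

```python
from multiprocessing.dummy import Pool  # thread pool standing in for grid nodes

def brute_force_match(args):
    """Slide the pattern across one shard of the database, record hits."""
    shard_id, records, pattern = args
    hits = []
    for i, record in enumerate(records):
        if pattern in record:           # naive substring scan, no indexing
            hits.append((shard_id, i))
    return hits

def grid_search(shards, pattern, workers=4):
    """Fan the same exhaustive scan out over every shard in parallel."""
    with Pool(workers) as pool:
        jobs = [(sid, recs, pattern) for sid, recs in shards.items()]
        return [hit for hits in pool.map(brute_force_match, jobs) for hit in hits]

shards = {
    "node_a": ["red sword", "blue shield"],
    "node_b": ["red potion", "green cloak"],
}
print(grid_search(shards, "red"))   # [('node_a', 0), ('node_b', 0)]
```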




Author Alan Keeling ©