You are reading a page from a free video eBook called Heaven or Hell It's Your Choice.
Section 3 / Page 82
A.I. failures and why an MNN-type system should work:-
There have been plenty of attempts to build A.I. systems and, as far as I know, none has yet achieved any sign of real intelligence. So without trying to list them all, I include references to just two of the biggest and most publicized A.I. failures. A.I. is inherently processor heavy, so it's only recently that the human race could really start to make inroads into practically developing the tools needed to build such systems.
The advent of evolutionary electronics has prompted a lot of researchers to jump onto this bandwagon, some seeing it as the holy grail of artificial intelligence. The building of large neural network arrays using adaptive hardware (see field programmable gate arrays, FPGAs, etc.), such as the attempt made by Hugo de Garis at Starlab's brain building lab, is, at least in my mind, not really the issue. I believe there is a lot of scope for evolutionary hardware, but its current development should be seen as a crawling baby. However, harnessing the computing power within a grid setup should allow A.I. systems to be employed much more cost effectively and, in all probability, it should yield faster results. The million dollar supercomputer Starlab used I class as a rather silly attempt, due to an age old saying: garbage in, garbage out. The system could reconfigure its neural net as many times as it liked, but at the end of the day it was blind and stupid. Although the goals of the project were not as grand as the goals set for the MNN, the limits of the hardware involved seemed a little underwhelming for the tasks set, at least to me.
Evolutionary electronics or adaptive hardware systems will play a big role in A.I., but current manufacturing techniques mean that trying to emulate human intelligence within such limited systems will, I believe, not yield any real results for quite some time to come. Their complexity, in my opinion, is nowhere near that required to do anything really smart. These systems are in their infancy, but the MNN, in my estimation, is a much more practical and cost effective answer to solving the self aware A.I. problem. Software simulations, once run on sufficiently powerful hardware (the grid), should be able to achieve everything adaptive hardware systems can, at least currently, be seen to be doing.
Starlab's effort failed for three main reasons:-
1. Bureaucracy or internal politics.
2. Funding.
3. Generally speaking, the wrong approach (current electronics being what they are; just my opinion).
The beauty of the proposed MNN system was fourfold:-
1. It should have paid for itself, whilst making a huge profit at almost every step of its development.
2. The computing power available should have been enough to process any of the simulations which were to be run.
3. The data required to give the system the real world awareness envisioned was to have come from all of the inputted data from all of the sources specified throughout this ebook, i.e. inputted mostly for free by users and 3rd parties.
4. The training of any neural network so as to get it to understand human beings is one of the major tasks confronting almost every A.I. scientist. The proposed network's ability to learn by intelligently mimicking the interactions it would be able to see and hear going on within itself was seen as just the first step in developing a truly self aware A.I. system.
Starlab's effort and MIT's Cog system (yet another A.I. failure in some respects) I liken to the blind leading the blind. It's like taking the most intelligent man in the world, who has spent his entire life living in a deep dark cave, and asking him what the time is or what sunlight is; he will only know if he has interfaced with what we call reality. This is why the MNN should have worked: it should eventually have developed its own virtual copy of reality and, more importantly, it should also have come to understand that reality in much the same way we do, because humans would be training it. Remember the light bulb analogy: the more the brain sees the linking between sounds and objects or applied patterns, the more it gets to describe those connections as right; this was similar to the idea behind the MNN. A more promising attempt at brain simulation, I believe, is a project being undertaken by IBM and partners called Blue Brain. I am not dismissing Cog's achievements, but its aims are limited in scope.
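For the programmers reading, the light bulb analogy can be sketched as a toy associative learner: a link between two inputs (say a sound and an object) gets stronger every time they are seen together. Everything here is invented for illustration, not taken from any real system.

```python
from collections import defaultdict

class AssociativeMemory:
    """Toy associative learner: the link between two inputs is
    strengthened every time they are observed together."""

    def __init__(self):
        self.strength = defaultdict(int)

    def observe(self, sound, obj):
        # Each co-occurrence reinforces the link, as in the light
        # bulb analogy: more exposure, stronger connection.
        self.strength[(sound, obj)] += 1

    def best_match(self, sound):
        # Return the object most strongly linked to this sound.
        pairs = [(s, o) for (s, o) in self.strength if s == sound]
        if not pairs:
            return None
        return max(pairs, key=lambda p: self.strength[p])[1]

memory = AssociativeMemory()
for _ in range(5):
    memory.observe("woof", "dog")
memory.observe("woof", "cat")  # one noisy observation

print(memory.best_match("woof"))  # dog
```

The point is that the system is not told the rule; the repeated pairing of inputs is the training, which is roughly what the humans interacting with the MNN were meant to provide.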
The macroscopic neural network in English:-
Computer systems can be compared to the brain; they are both just processors of information. Take a typical word processor: here is an application that allows for the transformation of inputted data through the application of a set of rules, and these rules tell the computer how to interpret that information. If a computer system can run enough of the right type of applications in real-time, then it should be able to interpret or transform enough data, in real-time, to get it to at least simulate the act of thinking. An oversimplified example maybe, but it is included for the people who are not I.T. literate or don't understand programming.
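For those who do understand programming, the "application of a set of rules" idea can be shown in a few lines. The rules below are made up for the example; any real word processor would have vastly more of them.

```python
# A "word processor" reduced to its essence: input data transformed
# by a list of rules, each rule telling the computer how to
# interpret the information it is given.

rules = [
    lambda text: text.strip(),            # trim surrounding whitespace
    lambda text: " ".join(text.split()),  # collapse runs of spaces
    lambda text: text.capitalize(),       # sentence case
]

def process(text, rules):
    # Apply each rule in turn; the output of one rule becomes
    # the input of the next.
    for rule in rules:
        text = rule(text)
    return text

print(process("   the  brain is   a processor ", rules))
# prints "The brain is a processor"
```

Run enough rule sets like this over enough data, in real-time, and you have the (oversimplified) picture of processing the text describes.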
If you compare, let's say, Asimo and a human being, then you can see that both can walk; this can be seen as just one application running (the walking algorithm). Just as we can adapt and learn, so can computer based learning systems, as can be seen in the video below:-
The computer program controlling the robot learns by its failure, then the program adapts itself automatically and tries again; each attempt allows new measurements to be taken. This is an adaptive or dynamical system at work.
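That try / fail / adapt / retry loop can be sketched in code. The "robot" here is just a function with an unknown ideal setting; the numbers are arbitrary and the whole thing is only meant to show the shape of an adaptive system, not any particular robot's controller.

```python
# Toy adaptive loop: the controller tries, measures its failure,
# adjusts itself automatically and tries again.

def attempt(setting, target=7.3):
    """Simulated trial: returns how badly this attempt failed
    (distance from the unknown ideal setting)."""
    return abs(setting - target)

setting = 0.0   # initial guess
step = 4.0      # how boldly to adjust
error = attempt(setting)

while error > 0.01:
    # Try a move in each direction and keep whichever fails less.
    candidates = [setting + step, setting - step]
    best = min(candidates, key=attempt)
    if attempt(best) < error:
        setting, error = best, attempt(best)
    else:
        step /= 2  # both moves made things worse: take smaller steps

print(round(setting, 2))  # prints 7.3
```

Each pass through the loop is one "attempt" in the video's sense: a new measurement is taken, and the program's own behaviour changes as a result.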
As was exampled with the remote control planes on page 80, many different programs exist, each capable of doing different tasks. The application of one algorithm, program or concept at a time does not a smart machine make, or a human for that matter, but the ability to process many concepts in real-time should be seen for what it is, i.e. how to produce a very smart machine. Common sense reasoning, i.e. the ability to analyze a situation based on its context using millions of integrated pieces of common knowledge, is a step I believe the proposed MNN could have eventually achieved.
So imagine a hierarchical system that could see all of the data contained within the system, each piece of data encoded in such a way as to allow it to link to every other piece of relevant data. For instance, if a picture of the sky was contained in the system, then the colour blue could be associated with that picture; a picture of an ocean could also contain a blue descriptor in its data, so the two pieces of data could be intelligently linked. Although this smart tagging of each piece of data is difficult for programmers, there are new software standards being introduced to allow this to happen. So the net could eventually contain billions of pieces of information, all linked in a smart way. The MNN was seen as a massive interpreter capable of utilizing all of these pieces of data, allowing it to do what we do, i.e. produce a form of common sense reasoning. Just as we build up a picture of the world or universe through the acquisition of data, in a virtual future machines may do the same type of thing, but they may have access to a lot more data than any human could ever hope to acquire.
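The sky / ocean example can be made concrete. The item names and tags below are invented for illustration; real smart tagging standards are far richer, but the principle of linking data through shared descriptors is the same.

```python
# Sketch of smart tagging: every piece of data carries descriptors,
# and items sharing a descriptor become intelligently linked.

catalogue = {
    "sky_photo":   {"blue", "outdoors", "clouds"},
    "ocean_photo": {"blue", "outdoors", "water"},
    "fire_photo":  {"red", "outdoors", "heat"},
}

def linked_items(item, catalogue):
    """Return the other items sharing at least one descriptor,
    ranked by how many descriptors they share."""
    tags = catalogue[item]
    shared = {
        other: len(tags & other_tags)
        for other, other_tags in catalogue.items()
        if other != item and tags & other_tags
    }
    return sorted(shared, key=shared.get, reverse=True)

print(linked_items("sky_photo", catalogue))
# ocean_photo shares "blue" and "outdoors"; fire_photo only "outdoors"
```

Scale the catalogue up to billions of items with millions of descriptors and you have the raw material the MNN was meant to interpret.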
If software ever does evolve to a point where it can begin to learn totally autonomously, whilst having access to the Net and all of this smart tagged data, then you work it out; I did, and that's why I wrote section 4.
In the old days it was hard enough to get a computer to perform more than one task at a time; now multiple applications run simultaneously, e.g. Windows, Office and, let's say, Solitaire can all be run at the same time. The MNN was to have been a massively parallel system, simultaneously running thousands of applications on thousands of internet connected devices. This is a scaling problem that becomes achievable when you look over the horizon: the bandwidth and processing power, along with the file handling protocols needed, will be in place. So in my mind it is just a question of time until all of the pieces come together, allowing such a software system to be built.
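On a single machine, the "many applications at once" idea looks like the sketch below, using Python's standard thread pool. The task names are made up; spreading the same pattern across thousands of networked devices is exactly the part that needs the grid protocols discussed above.

```python
# Several independent "applications" run simultaneously and a
# coordinator gathers their results - the MNN pattern in miniature,
# on one machine instead of thousands.

from concurrent.futures import ThreadPoolExecutor

def vision_task(data):
    return f"saw {data}"

def language_task(data):
    return f"parsed {data}"

def planning_task(data):
    return f"planned around {data}"

applications = [vision_task, language_task, planning_task]

with ThreadPoolExecutor(max_workers=len(applications)) as pool:
    # Submit every application at once; each runs in parallel.
    futures = [pool.submit(app, "input") for app in applications]
    results = [f.result() for f in futures]

print(results)
# prints ['saw input', 'parsed input', 'planned around input']
```

The hard parts the text points to are not in this loop at all: they are the bandwidth, scheduling and file handling protocols needed once the workers live on other people's devices.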
I can see some programmers already scratching their heads, but keep in mind that it's a question of protocols, which at least you won't have to develop; see the China Grid project.