PREVIOUS PAGE |_Page index_| Website | Review Page | Journey | Donate | Links | NEXT PAGE

If you jumped to this page from another page within the eBook, then click your browser's back button to return to it, rather than the previous or next buttons above.

You are reading a page from a free video eBook called Heaven or Hell It's Your Choice; for more information, click on the website button above.

Section 4 / Pages 144 -145

The Basic Model And The Making Of The Tea Problem.

This is one of those questions I feel like answering. The conversation usually goes something like this: "Yes, I have heard it all before mate, but can it make me a cup of tea and deliver it to me in bed?" This being an age-old problem for gadget designers.

Well, the simple answer is yes; the rather more complex answer goes like this:-

First, here is a look at what is currently possible in the field of lifelike human robotics.

So if we accept that robotic systems will become increasingly up to the job (any job), physically speaking, then that just leaves the mind, and this comes back to the artificial intelligence problem. The solution is basically a question of applying some of the concepts which I have outlined in this book. Anyway, back to the making of the tea problem; the following is how I would do it:-

The basic model concept is one in which an A.I. system, through the use of observation and comparison, could compare what it was seeing (in this case, what a robot's onboard sensors, probably a pair of CCD cameras or, even better, a real-time 3D scanner in conjunction with a camera plus measuring devices etc, would be showing it) with its own onboard virtual model. The basic model in this particular case would not have to be that complex, that's if the robot was being used for simple household applications. The basic model in this case would be all of the relevant virtual models representing the home, along with all of the objects contained within the home. In this way, by only having a limited knowledge base or small basic model, one that is only relevant both to its working environment and to the acting out of certain specific tasks within that environment, it should be able to be trained to navigate itself around and then do set tasks. This can be seen as an extension of the macro function philosophy described earlier, or in other words, the machine or robot could be programmed to copy a set of procedures.
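The recognition half of this idea can be sketched in code. This is purely an illustration, a toy under my own assumptions: the object names and three-number feature vectors are stand-ins for what would really be 3D scan data, and the nearest-model matching rule is the simplest one possible:

```python
# Toy "basic model": each household object is stored as a small
# feature vector, here (height m, width m, mass kg). These numbers
# are invented for illustration only.
VIRTUAL_MODELS = {
    "kettle": (0.25, 0.20, 1.8),
    "teapot": (0.15, 0.22, 0.9),
    "cup":    (0.10, 0.08, 0.3),
}

def recognise(reading):
    """Match a sensor reading to the nearest stored virtual model,
    i.e. compare what the sensors show with the onboard model."""
    def sq_distance(model):
        return sum((a - b) ** 2 for a, b in zip(reading, model))
    return min(VIRTUAL_MODELS, key=lambda name: sq_distance(VIRTUAL_MODELS[name]))
```

A slightly noisy reading of a kettle still matches the kettle model, which is the whole point: recognition by comparison against stored virtual objects, not by equations about kettles.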

So once the A.I. system had been given a good enough set of virtual models, or form of internal map, representing both its environment and the objects within it, it should then be able to visualise its virtual environment and compare it to the real world environment it finds itself in. We already have laser rangefinders to measure and compare distances, and there are plenty of other sensors in development. The ability to do set tasks comes back to the manipulation of digital data; as I said, it's all down to what the data represents. This is about matching input to output and vice versa. In principle this is very similar to the mapping procedure described on page 17, but this time the coordinates would be representing reality, rather than points or pixels on a screen. A bit like when you see the binary code in The Matrix, remember this line: "I don't even see the code anymore, I just see blonde, brunette etc"; as I said, it's all about what the code represents. If you're a programmer, think of it like this: the distance between the robot and the kettle is assigned a value, let's say D; every move is then compared, distance (D) is re-measured and so the value changes. Millions of these types of values and comparisons etc would have to be made every second (see feedback loops).
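For programmers, the D example can be sketched as a toy feedback loop. This is a simulation only: the "measurement" is faked with simple arithmetic, and a real robot would be running millions of such compare-and-correct cycles per second across many values, not one:

```python
def feedback_approach(start_distance, step=0.05, tolerance=0.02):
    """Toy feedback loop: move, re-measure D, compare, repeat.

    The measurement is simulated arithmetic; on a real robot D would
    come from a sensor such as a laser rangefinder.
    """
    d = start_distance            # D: distance from robot to kettle
    moves = 0
    while d > tolerance:          # compare D against the goal...
        d = max(0.0, d - step)    # ...make a small move, re-measure D
        moves += 1
    return d, moves

# Starting 1 metre away, with 5 cm moves, the loop closes the gap
# in 20 compare-and-move cycles.
```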

So for the purposes of this example, if the robot / A.I. system can recognise a teapot because it can compare it with its own internal virtual model of a teapot, along with a kettle and the virtual model representing the home, then it should just be a case of getting the robot to accurately carry out a set of moves which correspond to the already mapped and stored virtual models and macros. So by comparing its real world input (data) with the stored virtual data and tasks, it should be able to learn to match the template / macro to its own real world actions. It could compare reality to its virtual reality, so it would no longer come down to mathematical equations as such, more a case of scene manipulation and comparison. The brain also carries out very similar functions, by producing mental models for doing set or learnt tasks, and by having a stored mental model of its environment. The mind can then replay the actions whilst comparing its stored mental images to its real world input, allowing us all to do the tasks that we may have learnt or seen others doing.
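The template / macro matching could be sketched like this. Again a toy under my own assumptions (the object names and tea-making steps are invented stand-ins); the point is only the shape of the logic: replay a stored procedure once the real world scene matches the stored virtual models:

```python
# The robot's stored virtual models and a stored "make tea" macro.
# Both are hypothetical illustrations, not a real robot's data.
BASIC_MODEL = {"kettle", "teapot", "cup", "stairs"}

MAKE_TEA_MACRO = ["fill kettle", "boil kettle", "warm teapot",
                  "add tea leaves", "pour water", "pour into cup"]

def scene_matches(needed, observed):
    """An object counts as recognised only if the sensor input matches
    a stored virtual model; the macro needs every object recognised."""
    return needed <= (observed & BASIC_MODEL)

def make_tea(observed):
    """Replay the stored macro once reality matches the virtual model."""
    if not scene_matches({"kettle", "teapot", "cup"}, observed):
        return None               # scene doesn't match the model; don't act
    return list(MAKE_TEA_MACRO)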

So anyway, yes, the robot will be able to make the tea and deliver it to you in bed. Also, by adding a T axis, or time awareness of how certain objects move in relation to others, the basic model should be able to compensate in the same way we do, so the robot shouldn't spill a drop whilst on its way up the stairs. Add a bit of voice recognition and the ability to verbally respond to that input, and I hope you get the point. There will be robotic software designers shaking their heads, thinking Christ, does this guy realise just how much work goes into getting these things to work, every hardware change requiring God knows how many software changes. Well, obviously I do, but the future of computing hardware and software, along with large scale commercial interest, will mean that what is now considered very hard will become increasingly easy, especially as robotic software tools become as plentiful as today's software applications. As I said, this is about millions of programmers all over the world working out all those little bugs in the code.
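The T axis idea, tracking how objects move over time so the robot can compensate in advance, can be illustrated with a one-line extrapolation. The constant-velocity assumption is mine, purely to keep the sketch simple:

```python
def predict_position(pos, velocity, dt):
    """Extrapolate where a tracked object will be dt seconds from now,
    assuming constant velocity. With a prediction in hand, the robot
    can adjust before the change happens, e.g. keeping the tea tray
    level on the way up the stairs."""
    return tuple(p + v * dt for p, v in zip(pos, velocity))
```

An object at the origin moving at 1 m/s along one axis and 2 m/s along another is predicted half a second ahead at (0.5, 1.0).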

Current software design is generally a labour intensive effort, but the future of some software will increasingly rely upon embedded net based user feedback loops, and seeing as hardware is increasingly becoming a non-factor within software design, any problems that may arise will increasingly become purely software based problems. Just remember that as companies compete in the global market, the urge to improve their designs as quickly as possible, so as to meet the market's demands, may lead them to diverge from, let's say, a formalised approach to a "let's just get it sorted" strategy. As software inspection techniques become less stringent and as complacency etc sets in over time, flaws will inevitably become a factor. This is a question of software evolution: can we evolve systems on the scale envisioned, without them becoming unsafe and intellectually unmanageable?

The MNN was seen as being able to study all of the models / simulations and microworlds contained within the network. So as the title suggests, the virtualising of all space and time was seen as a way of providing the MNN with the largest possible basic model (or a form of internal representation, as cognitive science likes to describe it). Also, the above example / basic model concept was to be tied into the behavioural database set-up, so the robot or program would increasingly be capable of handling different situations (a bit like humans). Common sense reasoning is also an applicable term that could be used to describe both the MNN and the basic model concepts.

Some people have suggested that the basic model can be described as nothing more than a highly accurate map. This is partly true, but the map in this case was to be used so as to represent reality. So just like us, the ability to map our relative relationship to our environment allows us a much better understanding of, and interaction with, that environment; the same principle, I believe, will eventually allow an artificial intelligence to understand and respond to the real world in much the same way as ourselves (see Google to map US cities in 3D).

VR is the key to A.I., because it is all about the audio, visual and possibly tactile processing of information, which is the same type of information processing that is going on within the human brain. As technology progressed, other sensory data was to have been inputted into the VR models, such as smell and taste; the system's models could even have included data that our human senses might not have been able to register. This should have led to the MNN having a much larger basic model, or should I say a much greater awareness of both itself and its environment, than we ever could have. Yet this is what the I.T. industry wants, in other words an end user interface that is capable of total understanding and interaction with any user, but that level of A.I. paradoxically means that the system would have to be smarter than the user?

So anyway, if you say to the robot "go make the tea" and it gets on with it as a semi autonomous task, then some very hard questions have to be asked about the nature of intelligence. Most academics live in a very different world to the general public. I watch the masses, the people down at my local shops etc; they don't seem to talk about A.I., nanotech etc, they talk about things they can relate to. Usually their basic models do not include information that is not directly relevant to themselves or their current situations, and most of them seem to have only a smattering of knowledge about history and the world they live on. So intelligence is a very subjective issue indeed, but I have observed that they can all make the tea. This seems to be a universal app or macro that most people can run. How about you, and on that note, just how many apps can you run?

I've also noticed that it takes a couple of years for the average human to get the walking and talking application right, followed normally by the reading and writing application. This is how I see it: lots of applications running in perfect unison (well, almost), but all run on a standardised hardware platform, the human brain. The MNN was eventually seen as a type of brain, with each connected net device in effect forming and accessing this brain; this included robots. Oh my God, I, Robot could become a reality; VIKI (Virtual Interactive Kinetic Intelligence) is just a short technological step away?

Please report any problems you see on this page, such as broken links, non-playing video clips, spelling or grammatical errors etc, to:-

I don't have the time to individually respond to every email, so I thank you in advance for your help.



Author Alan Keeling ©