You are reading a page from a free video eBook called Heaven or Hell It's Your Choice, for more information click on the website button above.
Computers can now store very accurate visual information. In simple terms, this can be likened to plotting dots on a grid, just like you did at school on graph paper.
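The graph-paper idea can be sketched in a few lines of code. This is a minimal illustration (the grid size and dot positions are made up for the example): an image is just a grid of values, with each cell either marked or empty.

```python
# An image as a grid of dots, like graph paper.
WIDTH, HEIGHT = 8, 8

# Start with an empty grid: 0 means "no dot", 1 means "dot".
grid = [[0 for _ in range(WIDTH)] for _ in range(HEIGHT)]

# Plot a few dots, exactly like marking squares on graph paper.
for x, y in [(1, 1), (2, 2), (3, 3), (4, 4)]:
    grid[y][x] = 1

# Draw the grid, printing '#' where a dot was plotted.
for row in grid:
    print("".join("#" if cell else "." for cell in row))
```

Running this prints a diagonal line of `#` characters: the same principle, scaled up to millions of cells, is how a screen image is stored.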
This explanation is added for anybody who is still in the dark about how computers work, at least at a fundamental level. Computers contain millions of transistors, built into the silicon chips (ICs) that run and control them. If a transistor is off, then in binary code it can be read as a 0; if it is on, it can represent a 1. It's in this way that computers can be used to represent almost anything, and also to build up complex pictures within a 3D grid.
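The on/off idea can be made concrete with a small sketch. Here a row of eight hypothetical transistor states (the particular pattern is chosen for the example) is read as a binary number, and that number can in turn be interpreted as a character:

```python
# Each transistor is either off (0) or on (1). Eight such switches
# in a row form a byte, which can be read as a number.
switches = [0, 1, 0, 0, 0, 0, 0, 1]  # on/off states of 8 transistors

# Interpret the switch pattern as a binary number.
value = 0
for bit in switches:
    value = value * 2 + bit

print(value)        # the pattern 01000001 is the number 65
print(chr(value))   # ...which the ASCII character code reads as 'A'
```

The same pattern of switches can stand for a number, a letter, or one dot in a picture; what it "means" depends entirely on how the software interprets it.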
The point is that it is only recently that computer memory and processing power have become cost-effective and powerful enough to manipulate the data, or grid, fast enough to display very accurate 3D images in real time. This equates to millions of on and off electrical pulses being switched very quickly, updating the on-screen grid and so giving the impression of on-screen movement. Just like when you were a kid and flipped pieces of paper with similar images on them: when you did it fast enough, you got the impression of movement. This works on the same principle, but with millions of transistors switching parts of the grid on and off every fraction of a second, producing the illusion of a moving image on the surface of the grid, or screen.
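The flip-book comparison can be sketched directly: each "frame" below is a tiny grid, and redrawing the frames in quick succession is what gives the illusion of movement (the frames and the delay here are illustrative; a real display redraws the whole grid many times per second).

```python
# Animation as fast grid-switching: a dot "moves" because successive
# frames place it in different cells.
import time

frames = [
    [".#..", "....", "...."],
    ["....", ".#..", "...."],
    ["....", "....", ".#.."],
]

for frame in frames:
    for row in frame:
        print(row)
    print("---")
    time.sleep(0.1)  # a real screen switches far faster than this
```

Flipped fast enough, the dot appears to slide diagonally down the screen, even though each individual frame is perfectly still.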
Although an object may look real on screen, as far as the computer is concerned it is just a series of on and off pulses switching very fast. The interpretation of the data stored within all these on and off switches is what computer languages and operating systems are all about. Computer programming languages (such as Basic, C++, Java etc.) are just a way of explaining to the transistors what we want them to do, i.e. these languages act as interpreters between us and the transistors, binary (0 & 1 / on and off) being the lowest-level language.
A quick history lesson in CG - Computer Graphics
The proposed A.I. system was seen in the same vein, i.e. it was seen as just another program, capable of manipulating and processing the stored data. Just like our own brains, it all comes down to the type of information stored and how it is manipulated or processed. The programming, or interpretation of the data, is the complex part; the storing and processing is rapidly becoming very cheap and easy to manage within a computer system.
Humans just store and manipulate or process information in a slightly different way to computers, this being the only real difference between computer intelligence and human intelligence. We can all process visual, audio and tactile information in our minds; computers, through the use of virtual reality, are fast catching up with us. It's all about having enough data at hand, whilst also having the real-time processing power to interpret it. The macroscopic neural network concept was seen as a way of getting machines to meet this challenge, i.e. producing a self-aware or thinking machine.
The question is, what is thought? I personally think thought can be described as nothing more than the manipulation of information within a system, any system that is, biological or silicon. Think about that, because by the time you get to section 3, you should be able to rationalise A.I. the same way I can.
Please report any problems you see on this page, such as broken links, non-playing video clips, spelling or grammatical errors etc., to:-
I don't have the time to respond individually to every email, so I thank you in advance for your help.