  • Voice Recognition etc.: -

Voice portals are a new wave of client-based and web-based services now becoming available using the VoiceXML standard. Voice-over-net systems should have allowed the MNN to gather many varied voice patterns to learn from. Microsoft has a flexible codec in the form of its Game Voice system; it is freely available to developers and users, and it is built into the DirectX API. Cracking the NLP (natural language processing) problem is a monumental task, but considering the quantity of data that would have been available within the MNN, it should have been a much easier one.

NLP should be much easier to crack once it is tied into VR-style data; how a user gestures when speaking is in some ways just as significant as what he or she may be saying, and the Open Source Computer Vision Library (OpenCV) is a good resource pertaining to this problem. Systems capable of reading micro-expressions are also now available, so this ability, along with real-time webcam footage from a user (see the EyeToy), should allow NLP to progress to a new level of understanding. Speaker independence should eventually have allowed any user to interact with the system. Also see the Xbox speech recognition software, plus .NET Speech, a Microsoft developer's toolkit designed to add voice to the list of methods for inputting into the WWW (also see SALT). There is so much going on in the speech recognition field that it is hard to keep up with, but in general the web should be able to understand fluent speech input from any connected device within 5 to 10 years. Also check out Interact.
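To make the gesture-plus-speech point concrete, here is a rough Python sketch of the idea: a recogniser's ranked guesses being re-scored by a gesture cue from a webcam pipeline. Everything here (the hypotheses, the scores, the rerank helper) is my own hypothetical illustration, not a real VoiceXML, OpenCV, or DirectX API.

```python
# Sketch only: re-ranking speech hypotheses with a gesture cue.
# All names, scores, and the gesture label are hypothetical.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str          # candidate transcription from the recogniser
    confidence: float  # acoustic/language-model score, 0..1

DEICTIC = {"this", "that", "here", "there"}

def rerank(hypotheses, gesture):
    """Boost readings that agree with what the user's hands are doing.

    A pointing gesture makes deictic phrases ("buy that one") more
    plausible than acoustically similar non-deictic ones.
    """
    def score(h):
        bonus = 0.2 if gesture == "pointing" and DEICTIC & set(h.text.split()) else 0.0
        return h.confidence + bonus
    return sorted(hypotheses, key=score, reverse=True)

candidates = [
    Hypothesis("pie at one", 0.55),   # the recogniser's top acoustic guess
    Hypothesis("buy that one", 0.50),
]

print(rerank(candidates, gesture=None)[0].text)        # -> "pie at one"
print(rerank(candidates, gesture="pointing")[0].text)  # -> "buy that one"
```

The point being that the visual channel does not replace the acoustic one; it just breaks ties the recogniser cannot break on its own.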

  • Processing Power: -

The databases were to use the spare processing power built into the networked servers; during off-peak times these systems have a lot of spare processing power. Intel is currently doing this with its in-house distributed computing system. Intel has its own proprietary tools to do this, which it calls NetBatch; it allows all of Intel's networked systems throughout the world to work on common problems. The MNN was seen as utilising the networked servers in a similar way, with this processing power tied into the grid setup; the potential processing power available should have been incredible. Intel already has a name for this, calling it macroprocessing, and other companies are following suit: check out parasitic computing, along with Hewlett-Packard's planetary computing plans. Also see cluster computing and terascale computing; these are all in principle similar to the grid technologies described throughout this e-book.
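As a rough illustration of this cycle-scavenging idea, here is a minimal Python sketch of a client that only crunches work units while its host machine is off-peak. The idle test and the work itself are my own hypothetical stand-ins; NetBatch and real grid schedulers are of course far more sophisticated.

```python
# Sketch only: off-peak cycle scavenging. The idle test and the
# work-unit maths are hypothetical stand-ins for a real scheduler.
import time
import queue

jobs = queue.Queue()
for n in range(1, 6):
    jobs.put(n)  # pretend these were fetched from a central scheduler

def machine_is_off_peak():
    """Stand-in for a real check (CPU load, keyboard activity, etc.)."""
    hour = time.localtime().tm_hour
    return hour < 9 or hour >= 18  # outside office hours

def crunch(unit):
    """Placeholder for the actual computation on one work unit."""
    return sum(i * i for i in range(unit * 1000))

while not jobs.empty():
    if machine_is_off_peak():
        unit = jobs.get()
        print(f"work unit {unit} -> {crunch(unit)}")  # result goes back upstream
    else:
        time.sleep(60)  # back off while the machine's owner needs it
```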

The inclusion of 3D graphics processors within most next-generation net appliances, including mobile phones, should allow them to handle a lot of the proposed network's facilities. PowerVR has announced its bid for this market, using a single chip capable of producing advanced graphics using low-bandwidth techniques and of decoding MPEG-2; it also has low power consumption, making it ideal for mobile systems. Sega's moves into providing games for the net appliance market show that mobile systems are the new target market for a lot of major players, including ATI.

The introduction of the EPIC scalable architecture in Intel's Itanium 2 range can be seen as the future of processor design, at least for the next couple of years. Other manufacturers have also entered the 64-bit market; check out the AMD Athlon 64 range and IBM's POWER4 range. This is just the tip of the processing iceberg about to be unleashed within the home and server markets. Intel has also incorporated a form of parallel processing into some versions of the Pentium 4 and its replacement, codenamed Cloverton; also see hyper-threading and PAT. The uptake of these chips by the mass market should mean that in the short term (the next 5 years) there should be plenty of spare and easily tappable processing power available for any grid set-up (also see 3D chips). The big question is not whether grid technologies work, because they do; the question is how you tap enough end-user platforms and servers to make it work. At the end of the day, this is what the MNN was designed to do, i.e. supply a neat package so as to allow the MNN's developers to use everybody else's networked hardware. Good eye candy in the form of VR content = interested users = interested ASPs = big grid.

Chip designers such as AMD and Intel are leading the way in processor design, and it looks like it's going to be a multi-processing future (more than one processor on a single chip, that is) for us all. Also check out BBUL and FinFET. The introduction of PCI Express, a whole new I/O architecture, is allowing chip developers to go up to and beyond 10 GHz, and the introduction of Extreme Ultraviolet lithography is showing the way to build such hardware. AMD is also moving into the mobile market with its PIC system, along with its Geode and Alchemy processors, which translates to massive third-party development in the PDA / mobile phone sector, which in effect would have meant more customers for the MNN. It's all heading the same way: the PC that we all recognise today eventually becoming a discreet wearable, wireless, voice-operated device with much more processing power. Or, as Intel puts it, "manage locally, compute globally"; this is the new philosophy in the computing world, with scalable networked hardware supplying whatever processing power is needed, discreetly, to the end user, bandwidth and software robustness being the only real issues. But new processors utilising polymorphous computing principles, found in designs such as the Teraop Reliable Intelligently Adaptive Processing System (TRIPS for short), should take the burden of programming off the programmer and lay it firmly at the feet of the hardware.
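To show what this multi-processing future means in practice for software, here is a small Python sketch that splits one job across however many processors the host machine reports. Only the standard library is used, and the workload itself is an arbitrary stand-in.

```python
# Sketch only: splitting one job across every available processor.
from multiprocessing import Pool, cpu_count

N = 10_000_000

def work(chunk):
    """Arbitrary stand-in computation for one slice of the job."""
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    cores = cpu_count()  # one worker per processor/core on the chip
    slices = [range(start, N, cores) for start in range(cores)]
    with Pool(cores) as pool:
        partials = pool.map(work, slices)  # each core crunches a slice
    print(f"{cores} cores, total = {sum(partials)}")
```

The same pattern scales from two cores on one chip to thousands of machines on a grid; only the transport changes.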

I can't keep up with processor design specs; Intel and AMD are at war, so this is just my best guess at the time of writing.

The step after that in computing is bound to come from optical computing devices, with speeds of up to 40 GHz predicted by 2010; also see quantum optical storage.

To put all this processing power into some sort of perspective, consider this: the human brain is reckoned to be capable of roughly 100 teraflops (don't ask me who worked this out, but I have a feeling it was a bunch of people at MIT). IBM has already built a computer it claims has more processing power than the human brain: called Blue Gene/L, it reportedly runs at over 135 teraflops. It will eventually have access to over 2 petabytes of memory and, once fully built, will be capable of about 360 teraflops. Now, the thing is that this new supercomputer will use some 64,000 processors to accomplish this minor miracle of engineering. And the surprising thing is that these processors are not some fantastic new design; they are in fact from the same PowerPC family that powers both the Nintendo GameCube and most of the new computers produced by Apple. So although it is quite an achievement to get a system containing over 64,000 processors to work together, it is still only a small percentage of the potential processing power available within future grid systems, i.e. a couple of million users with PlayStation 3s, PCs, PDAs, etc. In other words, once the bandwidth is available, even Big Blue's new baby could be put to shame.
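The back-of-the-envelope arithmetic behind that claim runs roughly as follows; the per-device figure for consumer hardware is my own assumed round number, purely for illustration.

```python
# Sketch only: the arithmetic behind "put to shame". The consumer
# per-device figure is an assumed round number, not a measurement.
BLUE_GENE_TFLOPS = 360.0       # full Blue Gene/L, figure from the text
BLUE_GENE_PROCESSORS = 64_000  # figure from the text

per_cpu_gflops = BLUE_GENE_TFLOPS / BLUE_GENE_PROCESSORS * 1000
print(f"per processor: ~{per_cpu_gflops:.1f} Gflops")  # ~5.6 Gflops

GRID_USERS = 2_000_000    # "a couple of million users", from the text
DEVICE_GFLOPS = 5.0       # assumed average PC / console / PDA

grid_tflops = GRID_USERS * DEVICE_GFLOPS / 1000
print(f"grid aggregate: ~{grid_tflops:,.0f} Tflops")  # ~10,000 Tflops

# Even if bandwidth and availability let you harvest only a tenth of
# that, the grid still comfortably outruns a 360-teraflop machine.
```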

If a grid setup can be implemented on the scale proposed, and the latest P2P systems show that it can (KaZaA has over 3 million regular users), then it is only a matter of time until software engineers turn all these new always-on networks and platforms into an MNN-type system. As I said, it is a software design problem.

The protocols for grid computing are already in development; see page 85, Software Management.

Of course, Cray always wants to go one better, so in its desire to create the world's fastest supercomputer it is aiming to build a machine capable of running at over 1,000 trillion calculations per second, i.e. a petaflop. Cray reckons it will be able to build this "Deep Thought" type machine by 2010. The Japanese say they won't be beaten and have decided to build a 10-petaflop machine.

Optical computing is really where it's going to be at, if you ask me; it's hard to beat the speed of light when it comes down to it. It's just a question of building optical switching components and scaling them down to a size that competes with silicon (see nanophotonics and "how to beat the speed of light").
