You are reading a page from a free video eBook called Heaven or Hell It's Your Choice, for more information click on the website button above.
Section 3 / Page 77
Voice Recognition etc
Voice portals are a new wave in client-based and web-based services now becoming available using the VoiceXML standard. Voice-over-net systems should have allowed the MNN to gather many varied voice patterns to learn from. Cracking the NLP (natural language processing) problem is a monumental task, but considering the quantity of data beginning to be input into the net via voice systems, this is an industry that is maturing fast. NLP should be much easier to crack once it is tied into VR-style data, i.e. how a user gestures when speaking is in some ways just as significant as what he or she may be saying; the Open Source Computer Vision Library (OpenCV) is a good resource pertaining to this problem. Systems capable of reading micro-expressions, along with real-time webcam footage from users etc., should also allow NLP to progress to a new level of understanding. Speaker independence should eventually allow any user to interact with almost any system. Also see Net Speech, a Microsoft developer toolkit designed to add voice to the list of methods for inputting into the WWW, and see SALT (Speech Application Language Tags). There is so much going on in the speech recognition field that it is hard to keep up with, but in general, the web should be able to understand fluent speech input from any connected device within 5 years. Also check out Interact and Google's 'Voice interface for a search engine'.
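To give a flavour of the VoiceXML standard mentioned above, here is a minimal dialogue fragment of the kind a voice portal would serve; the form name and the grammar file (cities.grxml) are made up for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- One form = one spoken dialogue; the platform speaks the prompt,
       listens, and matches the caller's speech against the grammar. -->
  <form id="weather">
    <field name="city">
      <prompt>Which city would you like the weather for?</prompt>
      <grammar type="application/srgs+xml" src="cities.grxml"/>
      <filled>
        <prompt>Getting the weather for <value expr="city"/>.</prompt>
      </filled>
    </field>
  </form>
</vxml>
```

The point is that the speech recognition itself lives in the voice platform, not the page; the page only declares prompts and the grammar of expected answers, much as HTML declares layout rather than rendering.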
Standardised grid protocols are now beginning to be used in the server market; during off-peak times these systems have a lot of spare processing power. Intel is currently doing this with its in-house distributed computing system, a set of proprietary tools it calls NetBatch, which allows all of Intel's networked systems throughout the world to work on common problems. The MNN was seen as utilising networked servers in a similar way, with this processing power tied into the grid setup; the potential processing power available should have been incredible. Intel already has a name for this: macroprocessing. Other companies are following suit; check out parasitic computing, along with Hewlett-Packard's planetary computing plans. Also see cluster computing and terascale computing, which are all similar in principle to the grid technologies described throughout this eBook.
The big question is not whether grid technologies work, because they obviously do; the question is how you tap enough end-user platforms and servers to make them work. At the end of the day, this is what the MNN was designed to do, i.e. supply a neat package that allows the MNN's developers to use everybody else's networked hardware. Good eye candy in the form of VR content = interested users = interested ASPs = big grid.
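The grid idea described above boils down to a coordinator splitting one big job into independent work units and farming them out to whatever machines are idle. Here is a toy sketch of that pattern, with local worker processes standing in for networked nodes; the function names are illustrative, not any real grid API:

```python
# Toy grid sketch: split a big job into independent work units and let
# a pool of "nodes" (local processes standing in for remote machines)
# chew through them in parallel, then combine the partial results.
from multiprocessing import Pool

def work_unit(chunk):
    """One independent slice of a larger job -- here, summing squares."""
    return sum(n * n for n in chunk)

def run_on_grid(numbers, nodes=4, chunk_size=1000):
    """Carve the input into chunks, map them across the nodes, reduce."""
    chunks = [numbers[i:i + chunk_size]
              for i in range(0, len(numbers), chunk_size)]
    with Pool(processes=nodes) as pool:
        partials = pool.map(work_unit, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(run_on_grid(list(range(10000))))
```

A real grid adds the hard parts -- discovering idle machines, shipping code and data to them, and tolerating nodes that vanish mid-job -- but the map-then-reduce shape of the work is the same.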
Chip designers such as AMD and Intel are leading the way in processor design, and it is obvious that multi-core chips are leading the pack, at least for now. The future of processor design is beyond the scope of this eBook, but more speed and more processing power is a trend which will not stop any time soon. The new philosophy, as Intel puts it, is to manage locally and compute globally, with scalable networked hardware discreetly supplying whatever processing power the end user needs; bandwidth and software robustness are the only real issues. As new systems utilising polymorphous computing principles find their way into the public domain, we should see hardware and software meld into a flexible platform that is much better than current systems. An example of this is the Tera-op Reliable Intelligently Adaptive Processing System (TRIPS for short); these types of systems should take the burden of programming off the programmer and lay it firmly at the feet of the hardware.
I can't keep up with processor design specs; Intel and AMD are at war, so this is just my best guess at the time of writing.
To put all this processing power into some type of perspective, consider this: the human brain is reckoned to be able to carry out roughly 100 Tflops. Don't ask me who worked this out, but I have a feeling it was a bunch of people at MIT. IBM and others have already built computers that work at much higher speeds than this - see the Top 500 list if you're interested.
IBM has devised a new Blue Gene supercomputer -- the Blue Gene/P -- that will be capable of processing more than 3 quadrillion operations a second, or 3 petaflops, a possible record. Blue Gene/P is designed to operate continuously at more than 1 petaflop in real-world situations. WOW, that's some going, but if a grid setup can be implemented on the scale proposed -- and the latest P2P systems show that it can (BitTorrent has millions of regular users) -- then it is only a matter of time until software engineers turn all these new always-on networks and platforms into an MNN-type system. As I said, it is a software design problem, but of course this field never stays still, so next week this will be seen as old hat in the computing world.
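The figures above are easy to sanity-check with some back-of-envelope arithmetic. Assuming each grid-connected home PC contributes about 10 Gflops (an illustrative figure, not a benchmark), you can work out how many machines a grid would need to match the quoted numbers:

```python
# Back-of-envelope check of the figures quoted above. The per-PC
# contribution is an assumed round number for illustration only.
BRAIN_FLOPS = 100e12        # ~100 Tflops, the brain estimate quoted above
BLUE_GENE_P_PEAK = 3e15     # ~3 petaflops peak for Blue Gene/P
PC_FLOPS = 10e9             # assumed contribution per grid PC: ~10 Gflops

pcs_to_match_brain = BRAIN_FLOPS / PC_FLOPS          # 10,000 machines
pcs_to_match_blue_gene = BLUE_GENE_P_PEAK / PC_FLOPS  # 300,000 machines

print(int(pcs_to_match_brain), int(pcs_to_match_blue_gene))
```

With BitTorrent-style networks counting users in the millions, both totals are small fractions of the platforms already out there, which is the whole point of the grid argument.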
Optical computing is really where it's going to be at, if you ask me; it's hard to beat the speed of light when it comes down to it. It's just a question of building and scaling optical switching components down to a scale that competes with silicon (see nanophotonics and 'how to beat the speed of light').
IF YOU PUT IT ALL TOGETHER THEN YOU SHOULD SEE THAT THERE WILL BE NO SHORTAGE OF AVAILABLE PROCESSING POWER AND STORAGE TO ANYONE CONNECTED TO A GRID SETUP.
Please report any problems you see on this page - such as broken links, non-playing video clips, spelling or grammatical errors etc. - to:-
I don't have the time to individually respond to every email, so I thank you in advance for your help.