So, here it is: Intel has revealed the name of its most hoped-for growth engine: Atom, chips made for:
Pocketable devices.
Executing x86 instructions.
This gives me an occasion to talk about what I've been calling "The Four Fifths", pictured below:

[Image: the_4th_5th.JPG]

Here you can see the four fifth generations that will make the next and ultimate step in the evolution of computers.

1) how to see

Over the last 100 years, screens have become more and more pervasive and more and more personal, following the machines that drive them.
Today, physics is the problem: the best answer to portability is still the pocket, but you cannot fit more than a 5-inch-wide screen into it, which limits the number of usable pixels you can have with you most of the time.
Wireless screens could be a mid-term solution, offering more pixels when one is available, but they cannot be carried all the time.

2) how to carry
Once upon a time, a computer was "carried" by a whole floor of a building.
Then smaller and smaller transistors made it possible to move it to a simple desk, bringing the desk itself straight onto the screen along the way, with files, trashcan, folders...
Then it moved to a bag, creating a "hard transition" between when it is used (on a lap or a desk) and when it is not (in the bag).
Now it is definitely moving to a pocket, and the battle is just starting, from Nokia+ARM to Dell+Intel: there is still a "soft transition" between when it is used (in the palm of a hand) and when it is not (in a pocket).
Where to go next, once in a pocket?
Indeed, computers CAN be made smaller than a pocket, but what would being smaller be for?

3) how to tell
This is the most visible evolution, from punching cards to feed into a computer, to touching a screen.
Touching a screen will be at the top of every pocketable maker's list in 2008, after the first iPhones.
Reading usability forums, you can see a whole new field building up to standardize a vocabulary of gestures.
But what could be better than this?
What if there were no more screens to touch?

4) whom to tell
Yet another tough question.
As you can see in the image, the last 5 years have brought a major shift in the process.
From the beginning, the text a person wanted to tell a computer had something "done" to it: it was assembled, compiled or interpreted.
The emphasis was on "what" to do to the text and "how" to execute it, more than on "whom" to give the text to.
Virtual machines are changing this, and they are becoming more and more pervasive.
Sun is progressively focusing not on Java itself but on the Java Virtual Machine, able to eat any kind of text written in Java, Ruby...
The main breakthrough of Google's Android is Dalvik, a virtual machine that runs programs written in Java.
Mozilla is upgrading Firefox to position it as a super virtual machine, reading HTML, CSS, XUL and more.
Adobe has just released the first AIR, which can read anything from "Ajax"-oriented code to Flex, SQL... Microsoft is now talking about "offlining" Silverlight, making it a repackaged version of the CLR virtual machine, a universal reader of languages.
These virtual machines can read texts BUT also bits: images, videos, sounds.
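To make this shift concrete, here is a minimal sketch of the "one virtual machine, many texts" idea, using the javax.script API that ships with Java 6; the class name and the embedded script are illustrative, not taken from any of the products above:

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    // One JVM, two languages: this Java program asks the same virtual
    // machine to evaluate a piece of JavaScript handed to it as plain text.
    public class OneVmManyTexts {
        public static void main(String[] args) throws ScriptException {
            ScriptEngineManager manager = new ScriptEngineManager();
            ScriptEngine js = manager.getEngineByName("JavaScript");
            Object result = js.eval("var x = 6 * 7; 'the answer is ' + x;");
            System.out.println(result); // prints: the answer is 42
        }
    }

The virtual machine does not care which language the text was written in, as long as it has an engine that can read it: the question is no longer "what" to do to the text but "whom" to give it to.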
What could be better than these?

All these questions will forge the path for computers from 2010 onwards.
The winners of this period will have to master software, circuits, physics and ergonomics, not only to add The Four Fifths but to multiply them, in order to make the next big revolution in computing.