It’s very cool and interesting to see this kind of thing. Much progress has been made with this kind of multitouch interface, both from a technical and a user-interface perspective. Things have become much more responsive and start to make sense with that kind of data (especially data that’s hard to manage with a keyboard). So, in my opinion, it’s a great replacement for most of what we do with the mouse… but hey, in a productive environment we deal a lot with language input, don’t we? And while a virtual keyboard on an iPhone makes sense (switching languages, improving functionality via software updates, not taking up space where there isn’t much to begin with), I’m less than sure this would work on a desktop device. Sure, learning to type on a keyboard isn’t easy, but once you have it down, you can write much faster than by hand, and everybody who has to write a lot knows the importance of good tactile feedback. Anyway, I’m more than interested in where this will lead. I’m pretty sure we’ll first see this kind of thing in (productive) environments that have to deal with (3D) imagery, like in medicine. Hmm hmm.