Wednesday, 14 August 2013
Leap Motion: a flawed and isolated advance
I was one of the people who funded Leap Motion on Kickstarter. I had been wowed by the potential of the product. I put my money down and waited in eager anticipation. I loved the idea of an almost magical way of interacting. I dreamt of the way it might, just might, help my daughter with disabilities interact with computers. She has mastered the iPad - the biggest advance in computing for those with special needs - and I hoped that this might provide a new way for her to use technology.
I have been using Leap Motion for over two weeks now. The two words that best sum up my experience are disappointing and tiring. High expectations are always hard to deliver on, but in the case of Leap Motion its biggest problem is not the technology or the hardware. Granted, there are only a limited number of useful applications in the Airspace store. The ones that work best, to my mind, are the simple games, where the accuracy of the device matters less than the gestural interaction. But when you want to really interact with your computer, when you try to use Leap Motion as a replacement for the mouse, it falls down. The problem is the lack of touch. Using the device to gesture, to move things around or change views, is fine, but if you want to select, or attempt to move items with fine control, the lack of haptic feedback is a real problem.
Selecting requires a steady hand, held in place to indicate that the item you are pointing at is the one you want to select. This demands the unnatural behaviour of holding your hand, finger pointed, very still in three dimensions for a few seconds. Touch-based interfaces facilitate near-instant selection. More importantly, they provide a surface which your finger(s) can push against and rest upon. Keeping your hand or fingers resting in thin air is very unsatisfactory. If you try to do this for anything but a short time, you find that gestures and selections make your arm tired. The old human-computer interaction term, coined in the 1980s, "Gorilla Arm", applies. It was originally used to describe the fatigue induced by extended use of vertical touch-screen interfaces, but it sets in even sooner when there is no surface to resist against or gain some support from.
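To make the selection problem concrete, here is a minimal sketch in Python of the dwell-to-select mechanic described above: a selection only registers if the fingertip stays within a small sphere for a couple of seconds, and any wobble resets the timer. This is entirely my own illustration - the thresholds and the input format are assumptions, not Leap Motion's SDK or its real values.

import math

DWELL_SECONDS = 2.0    # how long the finger must stay still (assumed value)
TOLERANCE_MM = 10.0    # how much drift is allowed before the timer resets (assumed value)

def detect_dwell_selection(samples):
    """samples: list of (timestamp_s, x_mm, y_mm, z_mm) fingertip positions.
    Returns the timestamp at which a dwell selection fires, or None."""
    anchor = None        # position where the current dwell began
    anchor_time = None
    for t, x, y, z in samples:
        if anchor is None or math.dist((x, y, z), anchor) > TOLERANCE_MM:
            # First sample, or the hand wobbled too far: restart the dwell timer.
            anchor, anchor_time = (x, y, z), t
        elif t - anchor_time >= DWELL_SECONDS:
            return t     # held still long enough: treat it as a click
    return None

Holding a real hand inside a 10 mm sphere for two seconds, with no surface to rest on, is exactly the tiring part.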
So, rather than being a device I use habitually, it has already become another novelty I pull out to briefly wow others, and then put away to do real work. Another technology which fails to work for my daughter. A bold and interesting technology, but fundamentally flawed. So what is the real future of interaction? How will we interact with devices in ten or twenty years' time? I suspect there will be two dominant modes. One is already here and well established, the other is in its early stages.
Touch-based devices will become increasingly common. We forget that decent, responsive, natural touch screens have only been in the consumer market for a few years. Prior to the first iPhone, most touch screens used resistive panels, as still found in some cash machines. They required you to actually register a push and only allowed single-finger interaction. The iPhone brought in a high-resolution capacitive screen which supported multi-touch. While the technologies behind these intuitive experiences are several decades old, the commercial versions in smartphones are already so common they seem old hat. Touchscreens in devices and objects will continue their rise against the old guard of the mouse. As tablets and internet-of-things devices proliferate, they will outnumber the devices which still use a mouse or trackpad. However, despite the success of iOS and Android, touch user interfaces are still subordinate to the mouse in highly functional tools such as professional image and video editing. The mouse has been around since the mid-1960s and still has life in it for at least another five or ten years.
Speech interaction will grow in complexity and in its ability to work as an effective natural-language interface. Speech input and feedback is an odd method to use in public spaces and, while it has the potential to carry complex instructions, it is useless for "housekeeping" interaction. Asking a computer to move a window from one position to another, or to swipe a virtual paintbrush across a screen along a given path, is not where speech input works. But instructions which carry meaning - "where is the nearest restaurant which my friends recommend", "find when my doctor can next fit me in", "when is my current account likely to go into overdraft" - are more useful scenarios. But these are not Excel- or Word-type applications. And here is the problem.
The trouble with predicting the future is that we tend to see it in terms of today's capabilities. In these predictions I am grounded in the historic model of the desktop, the newer tablet application model, and the latest Siri, Google Now and Google Glass models. As we have seen in the past, new interaction approaches often arrive as part of new interaction paradigms. The command line gave way to direct manipulation because of higher-resolution displays, graphical interfaces and the computer mouse. Without the combination of these elements we could not have had spreadsheets or WYSIWYG word processing, and later desktop publishing. At the same time it did not make working at the command line impossible, just less usable and effective compared with the new alternatives.
Until recently we interacted with computers at the desktop; they were go-to devices for specific uses. The world has radically changed: we now use smartphones, tablets and connected devices as well as the PC. New interaction models are appearing, most focusing on context. Many contexts centre on directing a computerised device to provide a service in the form of information or rudimentary functionality. Google Now is built on a lack of explicit interaction, on an anticipation model driven by individual and aggregated big data. While this may work in environmental and lifestyle applications, it fails to hit the mark in productive situations where we generate new content, documents or rich creations - the traditional territory of the PC. Tablets try to provide some tools for content production, but they have yet to reach the productivity levels one can achieve with a mouse at a PC. Kinect shows how one may interact with a living-room computing system. Again, applicable in one context but not another.
Leap Motion has it tough. Put aside the flaws I have already identified. Leap Motion is trying to slot into an existing system, rather than taking on the much, much harder task of creating a completely new paradigm. It may be that my disappointment stems from the fact that it does not fit, and that it is unclear what supporting technologies are required to create a new, effective paradigm - one where, like the mouse, Leap Motion is just one element. A new model in which new forms of applications will be discovered, applications which may transform our ways of working as much as spreadsheets, word processing and the like have done in the past.
Labels: Airspace, apple, Google, human behaviour, iPad, iPhone, Kickstarter, Kinect, Leap Motion, review, Smartphone, test drive, Touchscreen, WYSIWYG
Would quantising the gesture to a selected grid (say +/- 20mm), or having gestures to establish radii and centres (again quantised), work? Perhaps a visual reference to stand in for haptic feedback would help, though it would only be 2D unless there was a 3D screen, with all the spectacle horror that implies.
Duncan W
I think they do allow some leeway in the selection of buttons, at least for the games and apps specifically built for the device - basically, big buttons. But when you try to work with the desktop, the problem is that desktop controls are too small. This is why most of the useful interactions on the desktop are swipes and pinch-zoom type gestures.
It still does not remove the problem that you have to hold your arm out and have no real haptic feedback.
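As a rough illustration of the grid-quantisation idea suggested in the first comment, here is a small Python sketch: snap the fingertip position to the nearest point on a 20 mm grid so small tremors stop mattering. This is purely hypothetical - the grid size is the commenter's +/- 20mm figure, and nothing here is part of any real Leap Motion API.

GRID_MM = 20.0   # grid pitch taken from the comment above

def quantise(position):
    """position: (x, y, z) in millimetres -> nearest grid point."""
    return tuple(round(c / GRID_MM) * GRID_MM for c in position)

# e.g. quantise((103.2, 47.9, -6.5)) -> (100.0, 40.0, 0.0)

It trades precision for stability, which might suit coarse layout work but not fine control - and, as the reply notes, it does nothing for the tired arm or the missing haptic feedback.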