Microsoft Research’s “Manual Deskterity” explores the synergy of pen & touch

Pen and touch computing have long been thought of as mutually exclusive methods of human-computer interaction, but as the Microsoft Research project “Manual Deskterity” shows, the two combined intuitively make for a much more powerful input method than either might ever be on its own.

If you’re short on time, the real soul of the demo – a custom application for the Microsoft Surface with a special infrared pen – starts at the 1-minute mark and shows off capabilities that wouldn’t be practical, or possible at all, with either pen or touch alone. Bear in mind, however, that this is a research project, so the application is quite limited in scope.

Having seen Bill Buxton briefly explore the combination of pen and touch only weeks ago in a Microsoft Research video, it came as no surprise to me that he’s very much involved with this project. His observation that “everything is best for something and worst for something else” sums up the pen-and-touch debate perfectly. Combining the best attributes of each just makes sense.

With dual pen-and-touch display solutions and products already on the market today, I don’t think it’ll be very long before this idea matures into the mainstream too.

83 insightful thoughts

  1. This is quite cool. While on the topic, I don’t think this would make a lot of sense on a smaller form factor like the Courier. A book-like foldable device where you have to use two hands to operate certain features wouldn’t make sense, but cutting images using the pen would be pretty neat!

  2. I have been waiting years for a Slate that combines both pen and touch. While multi-touch input is great for getting rid of the mouse, it is seriously lacking in the natural note taking area. On the other hand, while pen input is absolutely fantastic for taking notes and marking up documents, it leaves much to be desired in the quick input area where two hands are faster than one.

    Hopefully someone realizes that the combination is the way to go and releases something in the near future.

  3. This is exciting. I have seen some similar implementations of the touch + pen gestures in the Courier videos, so this makes me excited and a little more confident that MS is indeed working on the Courier tablet 🙂

  4. Love it! But…

    If anything, I would say there are too many options available here, and while some interactions are intuitive, too many rapidly become confusing. If this were a PC program you’d have a tools panel to allow for X-Acto-knife-like functionality; here, however, it seems to be assumed.

    I believe this is why companies like Apple release a very basic product to begin with and add functionality over time. This is GREAT NUI here, but some questions:

    1. What if I wanted to trace a picture instead of chopping it up? A tools toolbar would suit me better for that purpose; at this point the tech assumes I want a knife… give me the option to “pick up” my knife, or my pen.

    2. The Bill Buxton demo with transfer paper had him manipulating the paper with his hand while the pen was used for writing. Here, holding down on the paper with my hand may introduce a “lasso” tool. The UI becomes non-intuitive for the way a person may naturally use it. Sure, with a device the size of the Surface a person may manipulate the page with more fingers or their palm, but on a small device there’s more chance of a person manipulating the page with a single finger… which in this case brings up a lasso.

    Don’t get me wrong, I love where this is going, but I feel it’s too feature-heavy and complex. First they should really nail the experience of pen on paper, and then they can add as much lassoing and knife-cutting as they like. The real question is whether speed or intuition matters more; for instance, when copying a picture, the stamp functionality looks good, but is it intuitive?

    Hopefully we see more of this, as well as more answers of where they plan to go with it.

    1. Just to clarify, after rewatching the video I realise that there are solutions in this demo for tracing (i.e. don’t hold onto the photo at the same time as drawing), but if I were to try to explain all these different functionalities to my dad, would he be able to grasp the subtle differences, or would it baffle him when things didn’t work the way he intended? (He’s a smart guy, but computers confuse the heck out of him.)

    2. These are some basic ideas, it’s not the final product. They are just exploring the possibilities of combining these two input methods. There’s obviously a lot of thought that is going to have to go into making it intuitive to switch between functions. However, I think that how those functions are actually used works very well.

  5. Yes!
    I already use pen + touch when working in Photoshop (think – drawing with stylus and quickly tapping between tools with my finger) but I’d love to see better and more native implementations of it!

  6. Interesting, but I think you’re going down the wrong track here. Pen-based computing is, and always has been, a dead end. My take on some of the reasons:

    First, writing on a screen just doesn’t work as well as writing on paper. Aside from the physical differences (lack of tactile feedback, differences in textures), there is a lack of accuracy because there is a layer of glass between the tip of the pen and the screen itself. Depending on the angle at which the user is viewing the screen, the virtual “ink” strokes may be off by as much as a few millimeters from where he/she intended. This requires the user to write much larger than he/she would with a pen and paper, which feels unnatural. Over time, the user gets fed up and goes back to typing, because in the time it takes them to successfully write something by hand, they could have typed it twice. *And* they would be able to print it out using far fewer pages of paper.

    Second, handwritten notes are inherently less useful than typed notes. If you aren’t using handwriting recognition, then your notes aren’t searchable.

    Third, handwriting recognition will never reach an acceptable level of accuracy, at least not in the next decade or two. Even if it’s 95% accurate, it’s still going to misread several words in a page, and each time the user has to go back, erase the word, and write it over again. The inconvenience of this builds over time. Aside from interpreting words correctly, it is very difficult to translate complex formatting from handwriting to text. Multiple levels of indentation and nested bullet-point or numbered lists? Forget about it.

    Fourth, people are careless, and even the most pedantic of us will eventually lose or break a stylus/pen. Maybe it rolls under a newspaper and I don’t see it while I’m packing up. Maybe it rolls off the table and somebody steps on it, crushing it. External input devices for handheld devices are just a bad idea. For larger devices, i.e. desktop PCs, I don’t see a lot of utility in a pen. Unless you’re drawing, you will probably perceive greater accuracy from a mouse than a pen.

    Lastly, I’m not seeing a whole lot of need for a pen based on the video. It seems to me that a lot of the same operations could be performed just as well using a second finger instead of the pen. Some could be performed by tapping a toolbar button and then using a single finger.

    Again, interesting work from an academic perspective, but in terms of real-world applications, I would suggest looking elsewhere. Just my $0.02.


    1. I’ve never actually touched a Surface unit, so my observations are based solely on materials available online.

      I think that Surface has a pretty good matte finish? I know the Cintiq devices from Wacom do, so I think texture is just a matter of experimenting with screen/stylus materials.

      This is something that only applies to Surface, but since it’s a rear-projection system, the image is projected on the matte finish at the top of the…errr…surface. So there shouldn’t be any disconnect between stylus and image.

      When it comes to the size of the writing, this can be addressed in a few ways, perhaps by zooming in a little. Remember, this is software; we are not limited by “size”.

      Printing notes? Why would we want to do that? This is the 21st century. I go weeks without printing anything, and I generate quite a few documents.

      Handwriting recognition in Windows is quite good. However, I wouldn’t use it for anything other than a few quick notes. I wouldn’t use pen and paper for anything other than quick notes either. It’s always going to be faster to type lots of information. Also, remember that handwriting recognition doesn’t mean that you have to lose your original ink. So, if you disagree with the interpretation, you can always see what you meant to say.

      As far as allowing you to do different kinds of formatting goes: have you ever used OneNote?

      For this kind of input, you have to have a stylus. I don’t care how good the tracking is, a finger is never going to be as accurate. Regardless of what the orchardists in Cupertino would have you believe, sometimes you need a stylus. Doing these kinds of tasks with all fingers is clumsy and unnatural. Part of the reason why we use pens and pencils, apart from not getting ink on our fingers, is to get our big thick hands away from our work, so that we can see it.

      Those are just a few observations on your observations that I’ve picked up from following the various NUI conversations.

  7. Hi Devin,

    I actually own a Cintiq and went back to using an Intuos because I found the level of accuracy unacceptable for exactly the reasons I stated above. Maybe I’m an outlier in this respect.

    When I mentioned having to write larger, I was referring to the length of the strokes from the user’s point of view. While the user could certainly zoom in such that the strokes drawn on the screen are magnified from their natural size (at standard 100% zoom), the user is still having to draw much larger strokes than he/she would with a pen and paper. I personally find this very unnatural.

    As for printing, I agree with your position wholeheartedly, but the fact of the matter is that there are still a lot of people out there who prefer reading on paper. I think it’s silly and wasteful, but there’s a surprising number of people out there who just don’t care about the environment. Well, it’s surprising to me anyway (and depressing).

    As for the accuracy of fingers, you’re quite right that they present a problem. Fingertips are much fatter than the tip of a pen (some more than others), and using them for high-precision input is out of the question. However, it is certainly possible to achieve an elegant balance in which you can present a high-fidelity UX whilst accommodating peoples’ big ol’ fingers :). As someone who has used (and loved) an iPhone for a few years now, I can say I find it quite usable. Naturally, one would not want to be cropping photos or performing other tasks on a handheld device that require high-precision input. Those tasks are far and away better suited to traditional workstations (and mobile workstations) with a mouse or tablet.

    Really, my arguments against pen-based input are more targeted at mobile devices (handheld devices in particular), because that’s where we tend to see them used the most. For a device like the Surface or an artist’s desktop workstation, pen-based input is obviously more appropriate.


    P.S. I actually just started experimenting with OneNote for the first time yesterday, though I’ve only used it with a keyboard to help plan out a new blog post. It’s an interesting program. I’ll be sure to try it out with a pen if I get the chance.

  8. Hey! Maybe you should license or sell this technology to Autodesk!
    I actually work with AutoCAD for technical drawing, and I’m tired of working with my mouse and keyboard for drawing, and all the strain that carries (pain in my hands and my arm over many hours at work). This technology rocks!!!

  9. This is really neat, and people say MS doesn’t innovate. The problem is that a lot of their innovations don’t make it into commercial devices, but hopefully that’ll change. If they can put some of these functions into Courier, or into a Windows Phone 7 tablet version (this is why I hate the name Windows PHONE; I see the OS being used for more than phones)… or even into Windows 7 for touch screens.

    Good job MS, now let’s just see you do more than show it off. Bring this out in tablet and touch-computer products and you’ll blow away Apple.

  10. Thing is, the functions that APPEAR to be pen-dependent can still be done just with your fingers; all you need are some buttons somewhere, that let you switch to different modes. (examples: Cut Mode, Straight-Edge Mode, etc) In the future, I would hate to have to use a pen at every surface where I want to do anything other than simple finger-functions. Just put some mode-changing buttons, like in any normal program, etc.
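    The mode-switching idea above can be sketched in a few lines. This is a minimal, hypothetical dispatcher (all names are mine, not from the demo) in which the active toolbar mode, rather than pen vs. finger, decides what a plain finger stroke means:

    ```python
    # Hypothetical sketch: one finger stroke, interpreted per active mode.
    CUT, STRAIGHT_EDGE, DRAW = "cut", "straight-edge", "draw"

    def handle_stroke(mode: str, stroke: list) -> str:
        """Interpret the same finger stroke differently depending on the mode."""
        if mode == CUT:
            # In cut mode the stroke acts as a knife path.
            return f"cut along {len(stroke)} points"
        if mode == STRAIGHT_EDGE:
            # In straight-edge mode only the endpoints matter.
            first, last = stroke[0], stroke[-1]
            return f"ruler line from {first} to {last}"
        # Default mode: the stroke is plain ink.
        return f"ink stroke with {len(stroke)} point(s)"

    stroke = [(0, 0), (5, 5), (10, 10)]
    print(handle_stroke(CUT, stroke))            # same gesture, knife behaviour
    print(handle_stroke(STRAIGHT_EDGE, stroke))  # same gesture, ruler behaviour
    ```

    The trade-off the commenter raises is visible even here: explicit modes make every function reachable with one finger, at the cost of an extra tap to switch modes, whereas the pen-plus-touch approach in the demo encodes the mode in which implement you’re holding.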

  11. The fact that this exists gives even more reason to believe that the Courier exists.
    MS, please, don’t develop the coolest [censored] in the world and release none of it.

  12. A question for Long Zheng: does the Courier exist? You know a lot about what goes on at Microsoft; do you have any information that this beauty is real?

  13. This would be great for art… imagine, the pen acting as a charcoal or graphite pencil, and your finger able to blend the colors…. wow!!

  14. I have a two-finger/all-points solution similar to MS’s “Manual Deskterity” on a Tablet PC (about 9″).

    It is the real next-generation UI. Please contact me by e-mail

  15. Hi folks,

    Aspects of Manual Deskterity are already implicit in well written Windows 7 pen-aware apps.

    I’m surprised we are still seeing the “pen is useless” argument. I remember when people believed we didn’t need a mouse because the command line was fast and efficient. I remember when people thought we wouldn’t use email to replace the handwritten letter.

    Finger based interfaces are cute, but the stylus has been around almost as long as fingers have.

Comments are closed.