Tuesday, January 26, 2010

TapSongs: Tapping Rhythm-Based Passwords

The TapSongs paper was published by a single researcher, Jacob Wobbrock. In it, he introduces a new authentication method based on tapping a tune on a sensor. He points out that this method is (surprisingly) quite secure for three main reasons:
  • A TapSong may be entered out of view, preventing "shoulder surfing"
  • Even if a TapSong is captured, it may be hard to represent
  • Unlike entering a stolen password, which is trivial, entering a stolen TapSong will often fail, because of individual rhythm differences
The third point is especially interesting, because people are quite successful at entering their own TapSongs after only a few training entries. The system does allow for some deviation due to human timing inaccuracies: it stretches or compresses the entered rhythm to match the length of the stored model before checking it.
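
To make the matching concrete, here is a minimal sketch of how such rhythm checking might work, assuming the stored model keeps a mean and standard deviation for each inter-tap interval (the paper's actual matching rules differ in detail):

    # Minimal sketch of TapSong-style rhythm matching (not Wobbrock's exact rules).
    # The stored model holds, for each inter-tap interval, a mean and a standard
    # deviation learned from the ~12 training entries.
    def matches(model, tap_times, k=3.0):
        """model: list of (mean_ms, stddev_ms), one per inter-tap interval.
        tap_times: timestamps (ms) of the taps in the login attempt."""
        intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
        if len(intervals) != len(model):
            return False  # wrong number of taps: reject immediately

        # Stretch or compress the attempt so its total length matches the
        # model's, forgiving an entry tapped a bit faster or slower overall.
        model_total = sum(mean for mean, _ in model)
        scale = model_total / sum(intervals)
        intervals = [iv * scale for iv in intervals]

        # Accept only if every interval falls within k standard deviations
        # of the corresponding trained interval.
        return all(abs(iv - mean) <= k * sd
                   for iv, (mean, sd) in zip(intervals, model))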

After developing his tool, Wobbrock was able to test it on 10 subjects. Each of them was given a famous tune to tap 12 times to create a TapSong timing model. They were then asked to log in 25 times, and they successfully reproduced their TapSong 83.2% of the time. Subjects then eavesdropped on someone entering each of the famous tunes and were asked to replicate them; they succeeded only 10.7% of the time. Even when they were told what the famous tunes were, they were only able to log in 19.4% of the time.

As far as my thoughts go, I would like to first point out a flaw I have seen in several of these papers: almost all of the user studies are incredibly small. I just don't feel that 10 people can provide enough data to draw statistically significant conclusions. With this paper in particular, I feel that this method is actually far less secure than a regular password. In real life, people would probably pick common tunes and would enter the TapSong in the open, where it could be easily picked up. Trying to memorize keys that someone is pressing is one thing, but songs are easy to remember. I feel that a musically minded person could improve on that 19.4% success rate, if only by trying the song multiple times.

Ripples: Utilizing Per-Contact Visualizations...

In this paper, several Microsoft researchers attempted to improve the usability of multi-touch displays by adding a simple set of visualizations to cover a variety of common tasks and common mistakes. In order to reduce visual clutter, they attempted to find the smallest set of visualizations that would still cover common errors. Some of these errors include (a rough sketch of the feedback idea appears after this list):
  • Accidental activation (resting an elbow on the table)
  • Object scaling constraints (object has reached maximum size)
  • Interaction at a distance (continuing to select a scroll bar while moving away from it)
  • Stolen capture (being unable to press a button that is being held down)
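
To illustrate what per-contact feedback means in practice, here is a hypothetical sketch of an event-to-visual dispatch table; the event names and visuals are my own invention, not Microsoft's actual implementation:

    # Hypothetical sketch of a per-contact feedback layer in the spirit of
    # Ripples; the event names and visuals are my own, not Microsoft's.
    FEEDBACK = {
        "contact_down":        "expanding ring at the contact point",
        "ignored_contact":     "muted ring (e.g. a resting elbow the table rejects)",
        "constraint_hit":      "ring flattens against the object edge (max size reached)",
        "capture_at_distance": "tether from the finger back to the captured scroll bar",
        "stolen_capture":      "grayed ring on the contact that cannot take the button",
    }

    def on_touch_event(event, position):
        # Every contact gets some visual echo, so users can always tell
        # whether the table saw their touch and what it did with it.
        visual = FEEDBACK.get(event, "plain ring")
        print(f"render {visual} at {position}")

    on_touch_event("ignored_contact", (412, 96))
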
After researching various visualizations, the team of researchers conducted a user study testing accuracy and perceived responsiveness. The results from this user study showed that 62% preferred Ripples to be enabled, while 23% preferred it to be disabled (15% had no preference). A second test showed that Ripples consistently improved accuracy for touching small circles around the table.

While I believe that the concept of this paper is sound, there were several points that I felt could have been presented better. For starters, the fact that almost a quarter of the testers preferred Ripples to be disabled is a fairly significant statistic, but the paper mostly breezes right over it. Also, it seems like such a visualization system should be used in a more natural environment (with multiple people using the same touch table), in order to get a better subjective view of the full range of the table's visualizations.

Thursday, January 21, 2010

Disappearing Mobile Devices

In this paper, Ni and Baudisch look ahead to the future of mobile devices. As they point out, the primary limiting factor in the miniaturization of mobile devices is the need for user interaction. They try to extrapolate ways of interacting with a device that is essentially of size zero. At this minuscule size, they are left with three variations of touch input:
  • Touch
  • Pressure
  • Motion
Of these, they eliminate pressure, since it is limited by where the device is located and by how many different inputs can be consistently entered. They conclude the paper with two user studies: one on marking eight directions, and the other on entering letters of the alphabet. The first test had participants holding an optical mouse in one hand while entering directions with the other. Results showed that users entered the wrong direction anywhere from 2.5% to 6.7% of the time, with error rate mostly independent of direction (by one-way repeated-measures ANOVA).
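
For a concrete picture of the direction-marking task, here is a small sketch of how a net motion vector might be snapped to one of eight directions; the binning scheme is my assumption, not the paper's code:

    import math

    # Sketch of snapping a stroke to one of eight directions, the kind of
    # binning the marking study would need; the details are my assumption.
    DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

    def classify(dx, dy):
        """dx, dy: net motion reported by the sensor (y grows upward)."""
        angle = math.degrees(math.atan2(dy, dx)) % 360
        # Each direction owns a 45-degree wedge centered on its axis.
        return DIRECTIONS[int((angle + 22.5) // 45) % 8]

    print(classify(10, 9))   # -> NE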

In the second test, users were asked to enter letters in the form of a unistroke alphabet. In this case, a modified version of Palm's Graffiti and another alphabet known as EdgeWrite were used. Interestingly enough, the Graffiti system resulted in many more errors, while the EdgeWrite system performed at a level comparable to current device applications.
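
As I understand it, EdgeWrite recognizes a letter by the sequence of square corners a stroke visits, which is what makes it robust on a tiny sensor. Here is a minimal sketch of that idea; the one-entry alphabet is purely illustrative, not the published EdgeWrite alphabet:

    # Sketch of EdgeWrite-style recognition: a letter is identified by the
    # sequence of square corners the stroke visits. The one-entry alphabet
    # below is illustrative; the real alphabet covers every character.
    CORNERS = {"TL": (0, 1), "TR": (1, 1), "BL": (0, 0), "BR": (1, 0)}
    ALPHABET = {("TL", "BL", "BR"): "L"}

    def nearest_corner(x, y):
        return min(CORNERS, key=lambda c: (CORNERS[c][0] - x) ** 2 +
                                          (CORNERS[c][1] - y) ** 2)

    def recognize(points):
        seq = []
        for x, y in points:
            corner = nearest_corner(x, y)
            if not seq or seq[-1] != corner:  # collapse repeated corners
                seq.append(corner)
        return ALPHABET.get(tuple(seq), "<unrecognized>")

    # A stroke down the left edge and along the bottom reads as "L".
    print(recognize([(0.1, 0.9), (0.1, 0.1), (0.9, 0.1)]))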

I liked how this paper looked ahead to a problem that will be faced in the future, and formulated a user study that allowed the researchers to generate useful results. Future "disappearing device" designers will be able to use this paper to develop a much more usable interface. I couldn't see any flaws in the paper, unless, of course, these devices are never created. Even so, the paper is an interesting look at the possibility of such devices. Future work would probably be to develop a system for providing output to the user as well as accepting input; I would look at colored LEDs and/or tactile feedback as starting points.

Wednesday, January 20, 2010

Integrated Videos and Maps for Driving Directions

This paper also has a webpage, which contains an overview video, as well as a link to the paper's full text. The demonstration starts around 3 minutes into the video.

In this paper, five researchers propose and implement a more effective system for giving driving directions. They point out that drivers are more comfortable driving a route the second time because of the visual memory recalled from the first trip. Their system, called Videomap, generates a video of a requested route (using previously captured panoramic images) and presents it to the user. Videomap uses several techniques to improve its usefulness (a sketch of the pacing idea appears after this list):
  • Portions of the route between turns are shown quickly, while turns are shown slowly
  • The field of view is expanded near turns to take in (currently hand-selected) landmarks
  • The video smooths out turns by rotating the field of view before the car itself turns
  • Landmarks are freeze-framed alongside the video, while the video continues "driving"
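
To show the pacing idea from the first bullet, here is a sketch of how playback speed might ramp down as a turn approaches; the speeds and ramp distance are my guesses, not values from the paper:

    # Sketch of Videomap-style playback pacing: fast on straightaways, slow
    # near turns. The constants below are my assumptions, not the paper's.
    CRUISE = 8.0    # playback speed on straight segments (x real time)
    TURN   = 1.0    # playback speed while in/near a turn
    RAMP_M = 100.0  # distance over which to ease between the two, in meters

    def playback_speed(dist_to_turn_m):
        if dist_to_turn_m >= RAMP_M:
            return CRUISE
        # Linear ease from CRUISE down to TURN as the turn approaches.
        t = dist_to_turn_m / RAMP_M
        return TURN + t * (CRUISE - TURN)

    for d in (400, 100, 50, 0):
        print(d, "m ->", playback_speed(d), "x")
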
After developing this tool, the researchers performed a user study in which they gave users directions that they then had to follow. One group was given directions using Videomap, while the other used Picturemap, which showed only pictures of the landmarks, not video. After viewing the directions in Videomap or Picturemap, the users were then presented with a real-time simulation of driving the route. Each group was also provided with a printed map of directions that they could use while "driving". At each intersection, the users were required to choose the correct path to take. Videomap proved the better of the two applications in successful turns, in how often users referenced the printed map, and in user opinion.

Now, for some of my own thoughts on the paper. I found this paper to be very interesting, since it takes a novel approach to the common activity of getting directions. I believe that I would actually use such a system if it continued to be refined, and if some of the important caveats were handled well. I didn't really see any faults with this paper, except that the landmarks have to be hand-selected at the moment. I wouldn't be surprised if Google has their eye on this, especially since three of the researchers are with Microsoft! That being said, an obvious area for future work would be to hook up with a corporation like Microsoft or Google, and use their resources to improve and beta test this application.