Saturday, February 6, 2010

The Performance of Touch Screen Soft Buttons

This paper was written by Seungyon Lee and Shumin Zhai.

In this paper, Lee and Zhai investigate how efficient soft buttons are, compared to traditional hard buttons. After an introduction, they discuss four basic properties of touch screen interaction:
  • Operational mode (stylus or finger)
  • Activation mechanism (contact with screen or force applied to screen)
  • Feedback enhancements (audio or vibro-tactile)
  • Button size
The rest of the paper is dedicated to three experiments designed to test the effects of these properties on speed and accuracy. The first experiment tested operational mode and feedback on soft and hard button devices. They found that there was little difference between audio and vibration feedback on a soft button device. Also, hard and soft buttons performed at nearly the same level of accuracy and speed, with finger-operated soft buttons being slightly worse.

The second experiment compared contact-based (capacitive) screens with force-based (resistive) screens. As with the first experiment, accuracy was very high for all types of activation, including hard buttons. While capacitive and resistive screens each have pros and cons, Lee and Zhai point out that the two are essentially equivalent, and that both actually performed slightly better than hard buttons.

The third and final experiment was the only one where variables actually made a significant difference in the results. In it, they tested button size versus activation mode. While speed did not change significantly in small versus large buttons, the number of additional characters entered to correct errors was much higher with small buttons.

I found the results of this paper to be a little surprising, as I assumed that hard buttons would always out-perform soft buttons. That being said, I feel like they could have eliminated some repetition in describing their findings. Reading the description and then the summary of each experiment would have been enough to get all necessary information, without the lengthy middle section. This is especially true on the first two experiments where the results for the different variations were statistically the same. Still, the paper was informative and provides a good basis for future work in the area of soft button design.

The Design of Everyday Things

In his book, Donald Norman gives his thoughts on good and bad design, using common objects as his examples. The first chapter gives some important considerations when designing objects:
  • Affordances - Does the design indicate the proper use?
  • Conceptual models - Does the design make it easy to determine how the device works?
  • Visibility - Does the design make its functions apparent?
  • Mapping - Does the design of the device's controls have a strong correlation with the action they perform?
  • Feedback - Does the device indicate to the user the result of his/her action?
He covers multiple things that hinder and aid good design, as well as how a designer should approach a new (or old) design problem. He applauds doors designed to indicate where to push and derides light switches and stove controls laid out in 1D to control items in 2D. He also points out how people are quick to decide why their computer crashed or why the projector doesn't work, and how this is usually a result of a problem with the above list of design principles.

Towards the end of the book, he begins to extend his points to the design of computers and applications. Many of his earlier points relate as much to the design of doors as they do to the design of computers. He concludes with a message to the designers (and users) of the future, to not ignore design as devices become more powerful and feature-filled.

I felt like Norman's book was an interesting and easy read. (Perhaps it was well designed!) Norman was able to get his point across in a very understandable way by using our own experiences with confusing appliances and easy-to-use (though complex) cars. I learned a lot from his book, or rather, it brought what I already knew into my conscious thought process. I believe that his book will be a good reference for user interface design in the future, as it provides a reminder that design is not just a shiny GUI you slap onto an application; instead, it is a crucial part of the development process that should not be forgotten.

Thursday, February 4, 2010

The Application of Forgiveness in Social System Design

This paper was submitted by three researchers: Vasalou (University of Bath), Riegelsberger (Google UK) and Joinson (University of Bath).

This paper focuses on extending the concept of forgiveness to online communities. It begins with an overview of common online interaction problems, and practical ways that the offenders are punished. Examples include reputation on Slashdot and eBay, as well as moderation and page-locking on Wikipedia. Problems arise when a user inadvertently offends someone or has a momentary lapse of judgment. In these cases, users often desire a reparation system, which could allow their status in the community to be restored. The three researchers listed above believe that this would best be implemented through a system of forgiveness.

In their paper, they borrow the following definition: "Forgiveness is the victim's prosocial change towards the offender as s/he replaces these initial negative motivations with positive motivations." They extend this to state that:
  • Forgiveness cannot be mandatory
  • Forgiveness is not unconditional
  • Forgiveness does not necessarily repair trust or remove accountability
With these principles, they hope to encourage communication between the victim and the offender, allowing misunderstandings to be cleared up and legitimate offenses to be talked through. They point out that merely blocking the offending user (even temporarily) encourages them to leave, since they are alienated from the community. The opportunity for forgiveness, however, helps to build stronger communities, much like it helps build strong friendships in everyday life.

This paper felt a little vague and unscientific, even though it constantly referenced other sources. While I agreed with almost everything the authors said, I found it to be less than enlightening. Rather, it seemed to reiterate what most people could put together off the top of their heads (albeit without sources). It would have been nice for them to conduct a user study of an actual online community, using a real forgiveness system. I believe this is a great idea for many online communities, but it would need to be tested in the real world, not just talked about on paper.

Tuesday, January 26, 2010

TapSongs: Tapping Rhythm-Based Passwords

The TapSongs paper was published by a single researcher, Jacob Wobbrock. In it, he introduces a new authentication method based on tapping a tune on a sensor. He points out that this method is (surprisingly) quite secure for three main reasons:
  • A TapSong may be entered out of view, preventing "shoulder surfing"
  • Even if a TapSong is captured, it may be hard to represent
  • Unlike entering a stolen password, which is trivial, entering a stolen TapSong will often fail, because of individual rhythm differences
The third point is especially interesting, because people are quite successful in entering their own TapSongs after only a few entries for training purposes. The system does allow for some deviation due to human timing inaccuracies, stretching or compressing the entered rhythm to match the length of the stored model before checking it.
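To make the stretching-and-checking step concrete, here is a minimal sketch in Python. The interval-based comparison, the linear time-stretching, and the tolerance value are all my own illustrative assumptions, not Wobbrock's actual algorithm (his timing model is more sophisticated):

```python
# Hypothetical sketch of TapSongs-style rhythm matching (assumptions
# noted above): tap timestamps are reduced to inter-tap intervals, the
# candidate entry is linearly stretched to the stored model's total
# duration, and each interval must fall within a tolerance of the model's.

def intervals(taps):
    """Inter-tap intervals (seconds) from absolute tap timestamps."""
    return [b - a for a, b in zip(taps, taps[1:])]

def matches(model_taps, entry_taps, tolerance=0.25):
    """True if the entered rhythm matches the stored model.

    `tolerance` is the allowed relative deviation per interval;
    its value here is an illustrative assumption."""
    m, e = intervals(model_taps), intervals(entry_taps)
    if len(m) != len(e):           # wrong number of taps -> reject
        return False
    # Stretch/compress the entry to the model's total duration.
    scale = sum(m) / sum(e)
    e = [iv * scale for iv in e]
    return all(abs(a - b) <= tolerance * a for a, b in zip(m, e))
```

A slightly-too-fast but correctly shaped entry such as `matches([0, 0.5, 1.0, 2.0], [0, 0.4, 0.85, 1.7])` would be accepted after rescaling, while a sequence with the wrong rhythm shape would be rejected even if its total length matched.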

After developing his tool, Wobbrock was able to test it on 10 subjects. Each of them was given a famous tune to tap 12 times to create a TapSong timing model. They were then asked to log in 25 times. The subjects were able to successfully reproduce the TapSong 83.2% of the time. Subjects then eavesdropped on someone entering each of the famous tunes and were asked to replicate them. They were only successful 10.7% of the time. Even when they were told what the famous tunes were, they were only able to log in 19.4% of the time.

As far as my thoughts go, I would like to first point out a flaw I have seen in several of these papers. Almost all of the user studies are incredibly small. I just don't feel that 10 people can provide enough data to make statistically significant conclusions. With this paper in particular, I feel that this method is actually far less secure than a regular password. In real life, people would probably pick common tunes and would enter the TapSong in the open where it could be easily picked up. Trying to memorize keys that someone is pressing is one thing, but songs are easy to remember. I feel that a musically-minded person could increase their odds from 19.4%, if only by trying the song multiple times.

Ripples: Utilizing Per-Contact Visualizations...

In this paper, several Microsoft researchers attempted to improve the usability of multi-touch displays by adding a simple set of visualizations to cover a variety of common tasks and common mistakes. In order to reduce visual clutter, they attempted to find the least number of visualizations that would cover common errors. Some of these errors include:
  • Accidental activation (resting an elbow on the table)
  • Object scaling constraints (object has reached maximum size)
  • Interaction at a distance (continuing to select a scroll bar while moving away from it)
  • Stolen capture (being unable to press a button that is being held down)
After researching various visualizations, the team of researchers conducted a user study testing accuracy and perceived responsiveness. The results from this user study showed that 62% preferred Ripples to be enabled, while 23% preferred it to be disabled (15% had no preference). A second test showed that Ripples consistently improved accuracy for touching small circles around the table.

While I believe that the concept of this paper is sound, there were several points that I felt could have been presented better. For starters, the fact that almost a quarter of the testers preferred Ripples to be disabled is a fairly significant statistic, but the paper mostly breezes right over it. Also, it seems like such a visualization system should be used in a more natural environment (with multiple people using the same touch table), in order to get a better subjective view of the full range of the table's visualizations.

Thursday, January 21, 2010

Disappearing Mobile Devices

In this paper, Ni and Baudisch look ahead to the future of mobile devices. As they point out, the primary limiting factor to the miniaturization of mobile devices is the need for user interaction. They try to extrapolate ways of interacting with a device that is essentially of size zero. At this minuscule size, they are left with three variations of touch:
  • Touch
  • Pressure
  • Motion
Of these, they eliminate pressure, since it is limited by where the device is located and how many different inputs can be consistently entered. They conclude the paper with two user studies, one on marking eight directions, and the other on entering letters of the alphabet. The first test had participants using an optical mouse held in one hand, while entering directions with the other hand. Results of this study showed that users entered the wrong direction anywhere from 2.5% to 6.7% of the time, making error rate mostly independent of direction (by one-way repeated-measures ANOVA).

In the second test, users were asked to enter letters in the form of a unistroke alphabet. In this case, a modified version of Palm's Graffiti and another alphabet known as EdgeWrite were used. Interestingly enough, the Graffiti system resulted in many more errors, while the EdgeWrite system performed at a level comparable to current device applications.

I liked how this paper looked ahead to a problem that will be faced in the future, and was able to formulate a user study that allowed the researchers to generate useful results. Future "disappearing device" designers will be able to use this paper to develop a user interface that is much more usable. I couldn't see any flaws in the paper, unless of course these devices are never created. Even then, the paper is an interesting look at the possibility of such devices. Future work would probably be to develop a system of providing output to the user as well as accepting input. I would look at colored LEDs and/or tactile feedback as starting points.

Wednesday, January 20, 2010

Integrated Videos and Maps for Driving Directions

This paper also has a webpage, which contains an overview video, as well as a link to the paper's full text. The demonstration starts around 3 minutes into the video.

In this paper, five researchers propose and implement a more effective system for giving driving directions. They point out that drivers are more comfortable driving a route the second time, because of the visual memory recalled from the first time. Their system, called Videomap, generates a video of a requested route (using previously captured panoramic images) and presents it to the user. Videomap uses several techniques to improve its usefulness:
  • Portions of the route between turns are shown quickly, while turns are shown slowly
  • The field of view is expanded near turns to take in (currently hand-selected) landmarks
  • The video smooths out turns by rotating the field of view before the car itself turns
  • Landmarks are freeze-framed alongside the video, while the video continues "driving"
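The first technique above (play fast between turns, slow down at turns) could be sketched as a playback-speed function over distance to the nearest turn. This is purely my own illustration; the function shape and every parameter value are assumptions, not taken from the paper:

```python
# Hypothetical sketch of Videomap-style variable playback speed (all
# parameter values are illustrative assumptions): frames near a turn
# play slowly, frames on straight segments play fast.

def playback_speed(dist_to_turn, slow=0.5, fast=4.0, ramp=100.0):
    """Speed multiplier for a frame `dist_to_turn` meters from the
    nearest turn: `slow` at the turn itself, `fast` beyond `ramp` m,
    linearly blended in between."""
    if dist_to_turn >= ramp:
        return fast
    return slow + (fast - slow) * (dist_to_turn / ramp)
```

With these example values, a frame at a turn plays at half speed, a frame 100 m or more away plays at four times speed, and frames in between ramp smoothly, which matches the slow-at-turns, fast-between-turns behavior described above.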
After developing this tool, the researchers performed a user study in which they gave users directions that they then had to follow. One group was given directions using Videomap, while the other used Picturemap, which only used pictures of the landmarks, not video. After viewing the directions in Videomap or Picturemap, the users were then presented with a real-time simulation of driving the route. Each group was also provided with a printed map of directions that they could use while "driving". At each intersection, the users were required to choose the correct path to take. Videomap was shown to be the better of the two applications in successful turns, in how often users referenced the printed map, and in user opinion.

Now, for some of my own thoughts on the paper. I found this paper to be very interesting, since it takes a novel approach to the common activity of getting directions. I believe that I would actually use such a system if it continued to be refined, and if some of the important caveats were handled well. I didn't really see any faults with this paper, except that the landmarks have to be hand-generated at the moment. I wouldn't be surprised if Google has their eye on this, especially since three of the researchers are with Microsoft! That being said, an obvious area for future work would be to hook up with a corporation like Microsoft or Google, and use their resources to improve and beta test this application.