Ditching the mouse and keyboard means a whole lot more than just doing without two common peripherals. As those who've worked with Microsoft Surface have found out, you have to jettison decades of GUI baggage and start with a whole new mindset.
For most, the traditional combination of mouse and keyboard is not just a mode of computer input, but a way of life. While it has been decades since both technologies were first welcomed to the mainstream, little about their core functionality has changed; side-by-side, the aesthetics may be different, but a 486 and modern-day Mac effectively use the same basic interface paradigm.
It is perhaps for this reason, then, that in recent years users and developers alike have approached the wide-scale adoption of touchscreen technology with more of a walk than a run. While the technology has seen increased use, it has usually done so as a supplement to the traditional mouse and keyboard, proving that force of habit is hard to break.
Yet, about a year ago, Microsoft promised something new, with the launch of its multi-touch computing table, Surface. No keyboard, no mouse—just a table with a screen. Developers quickly realized that designing for Surface is more than an exercise in coding—serious consideration has to be given to constructing a coherent user interface for a device that completely forgoes the standard mode of input that has been in use for almost half a century.
From an interface perspective, touch- and multi-touch-based implementations like the Microsoft Surface are still very much in their infancy. While there are many things the device does well, there are still a number of curious design quirks that make developing an entirely touch-dependent UI challenging. With an increasing number of developers worldwide gaining access to Surface in recent months, a much clearer picture is forming of what the future may hold for Surface and similar technologies.
When user interaction relies entirely upon your fingers, it's hard not to fall back upon the trappings of traditional interface design. What is important to note is that, while Surface may simply be a glorified PC internally—infrared cameras notwithstanding—the actual mode of interaction is a decidedly different experience.
We recently met with Jeremy Bell and Brendan Lynch of Teehan+Lax, a Toronto-based design firm that received their Microsoft Surface this past December. What's interesting is that the company has approached the device not from a coding background, but instead with their own UI and design experience developing for the Web. But as the pair have quickly learned, design methodologies and practices that may work online don't always adapt well to a technology like Surface.
"From a design philosophy it's completely different," explained Bell. "We're so used to designing interfaces that are for one person—and not just for one person but for a screen and a keyboard. So everything about the approach is quite different."
What Bell is getting at is Microsoft's 360-degree interface, a design methodology that Microsoft says should allow interaction with the screen from any possible angle and, more importantly, with multiple users in mind. But while sound in theory, in practice, things can get a little tricky.
Take Microsoft's Virtual Earth Surface application, for example. The map utilizes the table's entire screen, with additional overlays for items like landmarks, restaurants and other points of interest. While items such as these can be easily rotated for users on opposing sides of the table, the underlying map cannot.
The result is a scenario in which the Virtual Earth application's core functionality is still largely limited to one side of the table, an aspect that seems to be quite the opposite of what Microsoft hopes to achieve with Surface, and one that represents a very real pitfall for potential Surface developers.
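One common approach to the 360-degree orientation problem is to rotate each movable item so that it faces whichever table edge its user is closest to—which works for discrete overlays like landmarks, but not for a continuous surface like the map itself. The sketch below is purely illustrative (the actual Surface SDK is WPF-based and handles item orientation differently); the coordinate system and angle convention are assumptions for this example:

```python
def rotation_toward_nearest_edge(x, y, width, height):
    """Return a rotation (in degrees) that makes an on-screen item
    readable from the table edge closest to the point (x, y).

    Assumed conventions: (0, 0) is the top-left corner, and 0 degrees
    means upright for a viewer standing at the bottom edge.
    """
    # Distance from the item's position to each of the four edges.
    distances = {
        "bottom": height - y,
        "top": y,
        "left": x,
        "right": width - x,
    }
    nearest = min(distances, key=distances.get)
    # Rotation that orients content toward a viewer at that edge.
    return {"bottom": 0, "top": 180, "left": 90, "right": 270}[nearest]
```

A point-of-interest overlay dropped near a table edge could be given this rotation when created; the underlying map, as noted above, has no single "correct" orientation to rotate toward, which is exactly the pitfall the Virtual Earth application runs into.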
To be fair, it's hard to lay the blame on Microsoft, or even the inexperienced third-party developers. Surface, like any other screen, is still bound by the rules of language, making text orientation tricky to deal with from multiple angles. August de los Reyes, Director of User Experience at Surface, told us that while text orientation is an issue, it's not necessarily a huge detriment to the interaction model that the table offers.
Based on both Microsoft’s research and his own observations, de los Reyes says that people tend to gravitate towards others already situated at one of the table’s sides, in some ways mitigating the issue of upside-down or hard-to-read text.
“One of the challenges is text orientation and how people read. But I think for now, we’re trying to be smart about how we solve all the design issues. The way people use Microsoft Surface, they self-organize. They sit next to each other and flip the content around. But short of reinventing text-display, and that’s not to say we won’t,” he said with a laugh, “we’ll solve that problem eventually.”
Regardless, these experiences serve as important reminders to developers that Surface's viewing angle greatly differs from that of a PC screen. This means adjusting design practices accordingly, and making core functionality available from any point of entry, without limiting the number of users. It's for this reason that focus and compartmentalization on the device are key.
In some ways, the table's screen forces developers to both guide and focus their users' interaction even more precisely than within a PC environment. "When you have multiple people interacting with [Surface], the workable area you have is even smaller," explained Bell, "and you have to focus that interaction down to a very precise space."
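Bell's point about focusing interaction into a precise space can be made concrete with a toy example: one naive way to keep several simultaneous users out of each other's way is to carve the screen into dedicated workspaces up front. This sketch is a hypothetical illustration, not anything from the Surface SDK:

```python
def workable_regions(width, height, num_users):
    """Split the table surface into equal vertical strips, one per user
    seated along a long edge of the table.

    Returns a list of (x, y, w, h) rectangles. A crude scheme: real
    multi-user layouts would adapt to where people actually sit.
    """
    strip = width / num_users
    return [(i * strip, 0.0, float(strip), float(height))
            for i in range(num_users)]
```

Even this crude partition shows why the "workable area" shrinks as users are added: with four people at a 1024-pixel-wide screen, each workspace is only 256 pixels across, forcing the interaction design to be correspondingly focused.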
But while such an approach may work in a large number of scenarios, it may not prove successful in all applications, particularly ones that are heavily rooted in a traditional computing experience.
For example, designing a Web browsing experience for Surface may prove to be particularly challenging, says Bell. While most browsers are designed for single-user operation, having a similar online experience in a multi-touch environment can be tricky. It places developers in a tough position, wherein there is no easy way for multiple users to interact with online content simultaneously on one machine.
De los Reyes was mum on how a Microsoft-designed implementation of Web browsing would work, but assured us that there were design principles the software company was working on to make a multi-user experience on the Web a successful one. Whether these design principles will result in an environment reminiscent of the desktop browsing experience, or perhaps something entirely different, is anybody's guess.
Anyone who has tried to interact with a copy of Windows via touchscreen knows it can be a particularly daunting task. Things like tabs, scrollbars and contextual menus, all perfectly acceptable in a modern-day GUI, can hamper a touch-driven experience immensely.
A study conducted by de los Reyes and the Surface team noted that users displayed a great deal of satisfaction when interacting with a touch-driven mapping application developed by the team. But "the moment we introduced a scrollbar, which is a GUI element, the level of satisfaction wavered a bit," he noted.
De los Reyes likened this decrease in satisfaction to the impression users get when forced to interact with a command line in a GUI environment. Not only does it break the flow of the graphical interface, but it can also interrupt the suspension of disbelief encouraged by the GUI.
Source: Ars Technica