20121212

Final Essay


How Cognitive Ergonomics Can Be Used to Inform Interactive User Design within New Media


“Ergonomics is sometimes described as fitting the system to the human” (Ergonomics Today, 2001). It is a process applied throughout design to inform decisions and to direct functionality towards the needs of the individual user. Ergonomics is applied across an assortment of applications within New Media to provide the user with an “optimal experience” (Csikszentmihalyi, 1991).

New Media, in its rawest definition, is simply an instrument used to attain data, much as primitive man used tools to find food: better tools meant more food, or in modern terms, faster and more accurate data. New Media devices are rapidly closing the gap between people and the wealth of information that surrounds us.

New Media is a constantly evolving term that describes the interplay between the internet, technology and its different media. At present it has multiple distinct sub-sections, including on-demand access, web-community formation, creative participation and interactive user feedback. This essay will look primarily at the use of interactive user feedback within previous, current and future New Media systems.

As fast as new technologies emerge and replace the old, they begin to fall short of what is defined as an emergent technology. Mobile phones only a few years old, once considered New Media devices, are now out-dated; within the last nine years mobiles have “got bigger, gone colour and become interactive” (Three, 2012). It is important that we continue to develop new means of communicating information, new methods of interaction within our diverse technologies and the ability to integrate ourselves into the world around us.

As Moore’s law has shown, “the number of transistors on integrated circuits doubles approximately every two years” (Intel, 2005). This clear progression in hardware allows for the development of new software capabilities; there is a definite relationship between the two. Software can take advantage of new hardware (or be restricted by it), and hardware is pushed forward to meet the expectations of future software.
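
As a rough back-of-envelope illustration of what that doubling implies (assuming, purely for the sake of the sketch, the roughly 2,300 transistors of Intel's 1971 4004 as a starting point), the growth can be projected in a few lines of Python:

    # Back-of-envelope projection of Moore's law: transistor count
    # doubling roughly every two years from an assumed 1971 baseline.
    BASELINE_YEAR = 1971
    BASELINE_TRANSISTORS = 2_300   # Intel 4004, used here as an assumed starting point

    def projected_transistors(year, doubling_period=2):
        """Estimate transistor count for a given year under Moore's law."""
        doublings = (year - BASELINE_YEAR) / doubling_period
        return BASELINE_TRANSISTORS * 2 ** doublings

    for year in (1971, 1991, 2011):
        print(year, f"{projected_transistors(year):,.0f}")
    # 2011 comes out at roughly 2.4 billion, the right order of magnitude
    # for high-end processors of that era.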

The first to third generation Apple iPod models had standard monochrome screens, a limitation of the hardware of the time; this meant the software struggled to clearly communicate an array of different information to the user. Advances in technology led to the release of a fourth generation colour-screen model (28th June, 2005). This meant Apple’s UI creatives could take advantage of colour theory to help inform users and let them navigate the system’s interface more easily; it also directly benefitted sales of the device, as for the first time album artwork, photos and videos could be displayed alongside the music control interface.

Apple has refined the iPod’s user-centred design over the past decade to such lengths that its user interaction is almost unrecognisable from the earlier models. The first generation iPod (2001) was designed predominantly by Tony Fadell and featured a circular tactile interface, in which the user could cycle through various menus via a physical rotatable scroll wheel, partnered with five radially oriented buttons for core inputs (menu, previous, next, play/pause and select).

The changing gestalt of the iPod under the direction of Jon Rubinstein revealed how minor adjustments to the organisation and input had a profound impact on the user’s experience and the visual aesthetics of the product. Later models saw the buttons seamlessly combined with a touch-sensitive scroll wheel, which in turn was a turning point for touch-based user interaction in technology as a whole.

Apple again led the industry forward with the release of its first generation iPod Touch (September 2007): a touch-screen device combining beautiful design with an intuitive user interface, which won global recognition amongst the critics. The huge demand for this type of user interaction led to a transformation within the mobile industry.

With this new technology came new cognitive ergonomic challenges; users were faced with a completely alien method of interacting with the information displayed on their mobile screens. Larger screen displays improved the visibility and scale of different media, and users’ cognitive models had to adapt to how they were expected to navigate these “natural user interface” (Ballmer, 2010) systems.

A number of common design theories for touch devices have developed alongside the platform’s growth in popularity. Humans of all ages and sizes have a broad range of finger widths; touch-screen buttons or interactive regions had to accommodate “blunt” presses without triggering neighbouring controls. This problem contrasted with the accuracy users were accustomed to from traditional desktop computers using mouse or trackpad input.
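
To make the “blunt press” constraint concrete, a minimal sketch might convert a physical minimum target size into on-screen pixels for a given display density. The 9 mm figure and the example densities below are illustrative assumptions, not a vendor specification:

    # Sketch: convert an assumed minimum physical touch-target size
    # into on-screen pixels for a given display density (PPI).
    # The 9 mm figure is illustrative, not taken from any one guideline.
    MIN_TARGET_MM = 9.0
    MM_PER_INCH = 25.4

    def min_target_pixels(pixels_per_inch):
        """Minimum touch-target edge length in pixels for this display."""
        return round(MIN_TARGET_MM / MM_PER_INCH * pixels_per_inch)

    # A denser screen needs more pixels per target, but the physical
    # size a fingertip requires stays the same.
    for ppi in (163, 326):          # hypothetical example densities
        print(ppi, "ppi ->", min_target_pixels(ppi), "px")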

Depending on the nature of the application being run on the device, consideration needs to be given to whether a user will have both hands free to interact with the device, or whether one hand must hold it while standing or moving.
 

When a user touches or holds something on-screen, their finger obscures the very area they are selecting. Careful consideration is needed of where information is displayed and how users are expected to interact with particular objects.

Perhaps the most significant adaptation to the new systems was the removal of the physical keypad and the implementation of the digital keyboard. “The QWERTY layout has persisted for almost one and a half centuries” (Xiaojun, 2010), and it continues onto the digital screen, even to the extent of a ‘press’ sound and haptic feedback imitating a physical button press. Keeping the familiar keyboard layout helped many users make the transition to touch-screen devices, but it presented an almost impossible task for UI designers.

The new keyboards had to “allow reasonably close input speeds with which a user can type on a physical keyboard, avoid errors and easily correct mistakes and finally enter text comfortably in terms of posture and interaction with the device” (Edney, 2012). Across the vast array of new devices that included a digital keyboard, keyboards had to scale to different screen ratios and resolutions whilst maintaining enough space between keys to allow for ‘blunt presses’. Blunt presses were also combated by error-correction algorithms within the device’s software, which learn from the patterns and mistakes of their individual users.
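
One way such correction can work, sketched below purely as an illustration (the algorithms in shipping keyboards are proprietary and considerably more sophisticated), is to treat each blunt press as a set of candidate keys near the touch point and prefer a candidate that produces a dictionary word:

    # Illustrative sketch of blunt-press correction: each touch is mapped
    # to the pressed key plus its physical neighbours, and candidate words
    # are checked against a dictionary. Real keyboards also weight candidates
    # by touch coordinates and language models; this shows only the idea.
    from itertools import product

    # Tiny excerpt of QWERTY adjacency; a full map would cover every key.
    NEIGHBOURS = {"t": "ryfg", "h": "gjyubn", "e": "wrsd", "r": "etdf"}
    DICTIONARY = {"the", "then", "there"}   # stand-in for a real word list

    def candidates(typed):
        """All strings reachable by swapping each letter for itself or a neighbour."""
        options = [letter + NEIGHBOURS.get(letter, "") for letter in typed]
        return ("".join(combo) for combo in product(*options))

    def correct(typed):
        """Return a dictionary word explaining the touches, else the raw input."""
        return next((word for word in candidates(typed) if word in DICTIONARY), typed)

    print(correct("rhe"))   # a blunt press of 't' landing on 'r' -> "the"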

Within the UK and US at least, we still conform to this long-standing method of text input on our digital devices: “It is well established that optimized touchscreen keyboards have a significant time and motion advantage over QWERTY” (Xiaojun, 2010). Drawing on a series of user trials (MacKenzie & Zhang, 1999), Xiaojun explains that the main obstacle to adopting a more optimised keyboard layout is the learning requirement, which is shown to take only a few hours on average. Perhaps, given the alien nature of the recent touch devices, it would have been overwhelming to overhaul the design of the QWERTY keyboard at the same time.
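
The “time and motion” comparison behind such claims typically rests on a movement-time model; MacKenzie and Zhang’s soft-keyboard work, for instance, builds on Fitts’ law, which predicts that reaching a key takes longer the further away and the smaller it is. A rough sketch, with made-up coefficients purely for illustration:

    import math

    # Fitts' law: movement time grows with log2(distance/width + 1).
    # The coefficients a and b below are placeholders, not measured values.
    def fitts_time(distance_mm, key_width_mm, a=0.08, b=0.12):
        """Predicted seconds to move a thumb or stylus onto a key."""
        return a + b * math.log2(distance_mm / key_width_mm + 1)

    # On QWERTY, common letter pairs can sit far apart; an optimised layout
    # clusters frequent pairs, shortening the average travel distance.
    print(round(fitts_time(distance_mm=60, key_width_mm=6), 3))  # distant keys
    print(round(fitts_time(distance_mm=12, key_width_mm=6), 3))  # adjacent keys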

One of the most important new developments brought about by the technology was the range of ways in which users could converse with their devices, including multi-touch (which later grew into a standardised set of functions or ‘gestures’ across mobile operating systems, including forward, back, zoom, drag and hold down). More ways for a user to communicate with a system led to greater, more informed feedback; users no longer had to navigate a whole array of menus to perform an action such as zooming in a mobile web browser, as simply pinching the screen inwards or outwards would perform the same function.
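
The pinch gesture itself reduces to very simple arithmetic: the zoom factor is the ratio of the current distance between the two fingers to their distance when the gesture began. A minimal sketch, not tied to any particular platform’s touch API:

    import math

    # Sketch of the arithmetic behind pinch-to-zoom: the scale factor is the
    # ratio of the current finger separation to the separation at gesture start.
    # Touch points are plain (x, y) tuples here, not a real platform touch API.
    def distance(p1, p2):
        return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

    def pinch_scale(start_a, start_b, now_a, now_b):
        """> 1.0 when the fingers spread apart (zoom in), < 1.0 when they close."""
        return distance(now_a, now_b) / distance(start_a, start_b)

    # Fingers start 100 px apart and spread to 250 px: content scales by 2.5x.
    print(pinch_scale((100, 300), (200, 300), (50, 300), (300, 300)))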

With this rapid progression in user interaction, it is imperative that the technology does not alienate new users; with the loss of tactile feedback and an entirely different approach to interacting with information, it is easy to see why so many people still assume they are unable to navigate such a device. With the capabilities of ever more powerful devices, it is technically becoming easier for developers to replicate ‘natural’ interaction with touch-screen technologies.

It appears the next significant jump in New Media, in particular in how consumers will interact with a wealth of new information, will come with the introduction of Google’s Project Glass. A TED talk by Pattie Maes and Pranav Mistry in March 2009, SixthSense, paved the way for ‘profound interaction with our environment’. The talk showed a person using wearable projectors to cast information onto the real world, seamlessly bridging the divide between screen and world.

Google has suggested users will be able to interact with the device through audio commands, visualisation and location triggers. It would appear to be the first real attempt to integrate augmented reality into our daily lives. Removing the need to input text through any form of keyboard, and having the interface seamlessly overlaid on the user’s vision, would be a revolutionary change within the industry.

"A group of us... started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment," (Google, 2012). Google’s main intention of the device is to blend the technology with the world, instead of the great divide we have always had between digital and real (Through the window of the screen). Directly overlaying information on to our own vision would bring a complete fresh approach to the interface design theory used for the software, having the freedom of a user’s entire vision to position information.

How will the glasses know where to place information, or how to distinguish a command from general conversation? If there is a busy scene in view, what should the glasses focus on?

As with the touch devices, there will be an almost insurmountable series of problems that designers and software engineers will need to overcome to make this a viable (and ultimately successful) mass-market device.

Google released a video that suggested some of the features it would provide; many of these are already in place to an extent on current mobile devices, so the user should not feel unfamiliar with the range of information it displays. The real innovation will come in how users navigate the interface: with no mouse, no trackpad and no physical interaction, it will be a completely new method of input compared with current systems.

A key example of a current-generation system that takes advantage of previous New Media devices and combines their best elements is the Windows 8 OS. The same operating system works across multiple platforms (desktop, mobile, tablet) with the same functionality and methods of user interaction one would expect. An action a user triggers through a desktop PC’s mouse can be mimicked in the same way on a tablet device. This is the first stage of a pioneering new approach within the industry to make people feel more at home with a broad range of different input devices; it does have its issues, though.

Windows 8 lacks some of the basic functionality users have grown accustomed to. The simple process everyone learns on their first experience of a PC, dragging and dropping a file from one folder to the next, has been removed through a series of aesthetic design choices in favour of right clicking and an array of menus. This seems to be a step backwards: “Could we ever switch to this or any of the many other rational systems? Unlikely: tradition is difficult to overcome.” (Norman, 1988)

There are functions that touch-screen technology took from older technology and improved upon (word correction on the digital keyboard, to counter the differing finger sizes of users), and others that had to be re-engineered due to the input constraints (highlighting text). For this reason this new approach is still quite raw, and hopefully, with the data collected from the millions of users interacting with its interface, Microsoft will learn and refine the design through the gestalt process.

With the rapid expansion of touch-screen technology and its profound impact on the world over the past four years, it is paramount that emerging technologies learn from both the successes and failures of their predecessors. The gap between digital and real is closing, and for emergent technologies to successfully integrate into our daily lives they will need users to feel comfortable with their completely new methods of input.

What could be more natural than speaking? Is that not the most instinctive form of transmitting information, and could this be the future of New Media technologies with the likes of Apple’s Siri and Google’s Project Glass?

Bibliography


Ballmer, S. (2010). Quoted in Norman, D. A., Natural User Interfaces Are Not Natural. Retrieved from jnd.org: http://www.jnd.org/dn.mss/natural_user_interfa.html

Csikszentmihalyi, M. (1991). Flow: The Psychology of Optimal Experience. Harper Perennial.

Edney, A. (2012, July). Windows 8: Designing the Touch Keyboard. Retrieved from Connected Digital World: http://connecteddigitalworld.com/2012/07/17/microsoft-talks-about-designing-the-windows-8-touch-keyboard/

Ergonomics Today. (2001). What is Cognitive Ergonomics? Retrieved 2012, from Ergoweb: http://www.ergoweb.com/news/detail.cfm?id=352

Google. (2012). Project Glass. Retrieved from Project Glass: https://plus.google.com/+projectglass

Intel. (2005). Moore's Law in Perspective. Information Sheet, Intel.

MacKenzie, I. S., & Zhang, S. X. (1999). The design and evaluation of a high-performance soft keyboard. ACM Conference on Human Factors in Computing Systems (CHI '99).

Norman, D. A. (1988). The Design Of Everyday Things. BasicBooks.

Three. (2012, March). The Evolution Of The Touchscreen. Retrieved from Three: http://blog.three.co.uk/2012/03/30/the-evolution-of-the-touchscreen/

Xiaojun, B., & Zhai, S. (2010). Multilingual Touchscreen Keyboard Design and Optimization.