How cognitive ergonomics can be used to inform interactive user design within New Media
New Media is a constantly evolving term describing the interplay between the Internet, technology and its various media. The definition has many sub-sections, such as on-demand access, creative participation, interactive user feedback and web-community formation. In this essay I will primarily be looking into the interactive user feedback implemented within current and future technologies, and examining how it strives to give users the “optimal experience” (Flow, 1993) within the limitations of current technology.
An example of New Media used world-wide every day by over 800 million users is Facebook, which allows users to customise their profile, network with friends, upload and interact with multimedia, and access the service on a vast range of different technologies.
New Media is a continually progressing definition: as new technologies emerge and replace the old, the old begin to fall short of what people class as an emergent technology. Mobile phones that only a few years ago were considered New Media are now out-dated; within the last nine years mobiles have “got bigger, gone colour and become interactive” (REFERENCE TO THREE). It is important that we continue to develop new means of communicating information, new ways of interacting with our diverse technologies, and the ability to truly integrate ourselves with the world around us.
Apple has refined the iPod’s user-centred interface over successive generations to such an extent that its user interaction is almost unrecognisable from the earliest models. The initial iPod featured a circular physical interface, where the user could cycle through menus using a scroll wheel, alongside five buttons for core functions (play, pause, forward, back, select). The changing gestalt of the iPod showed how minor adjustments to this organisation had a profound impact on both user input and the visual aesthetics of the product. Later models saw the buttons seamlessly combined with the scroll wheel, which in turn proved a turning point for touch-based user interaction in technology as a whole.
Apple again led the industry forward with touch-screen technology, pairing beautiful design with an intuitive user interface that won recognition amongst critics. The enormous demand for this type of user interaction led to a transformation within the mobile industry.
Users suddenly had double the screen space: the physical keypad was removed from the phone and replaced with a digital keyboard that appeared only when required. This paved the way for significant increases in screen size, while interaction through haptic feedback and movement made the technology easier, more accessible and more intuitive to use.
One of the most important developments the technology brought was in the ways a user could converse with their device, including multi-touch (which later grew into a standardised set of functions, or ‘gestures’, including forward, back, zoom, drag and press-and-hold). More methods for a user to communicate with a system lead to greater, more informed feedback: users no longer have to navigate an array of menus to perform an action such as zooming in a mobile web browser; they simply pinch the screen inwards or outwards.
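The pinch gesture itself reduces to simple geometry: the zoom factor is the ratio of the current distance between the two touch points to their distance when the gesture began. A minimal sketch in Python (the function name and touch-point tuples are illustrative, not any platform’s actual API):

```python
import math

def pinch_scale(start_touches, current_touches):
    """Return the zoom factor implied by a two-finger pinch.

    Each argument is a pair of (x, y) touch points; the scale is the
    ratio of the current finger separation to the starting separation.
    """
    start_dist = math.dist(*start_touches)
    current_dist = math.dist(*current_touches)
    return current_dist / start_dist

# Fingers start 100 px apart and spread to 200 px apart: 2x zoom in.
print(pinch_scale([(0, 0), (100, 0)], [(0, 0), (200, 0)]))  # → 2.0
```

A scale above 1 zooms in, below 1 zooms out; the interface simply multiplies the content’s magnification by this factor as the fingers move.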
With this rapid progression in user interaction, it is imperative that the technology does not alienate new users. Given the loss of tactile feedback and the entirely different approach to interacting with information, it is easy to see why so many people still assume they are unable to navigate such a device. Yet with the capabilities of ever more powerful devices, it is actually becoming easier for developers to replicate ‘natural’ interaction with touch-screen technologies.
With new technologies come new design challenges. Some of the major issues currently facing GUI designers of touch devices are:
Children generally have small, narrow fingers, whereas many older men have thick fingers; touch-screen buttons and interactive regions therefore need to accommodate “blunt” presses without triggering neighbouring controls. This problem contrasts with the accuracy users are accustomed to from a desktop computer’s mouse or trackpad input.
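Platform guidelines address this by sizing targets for the physical fingertip rather than the pixel (Apple’s guidelines suggest roughly 44 points, Android’s roughly 48 dp, both in the region of 7–9 mm). Converting a physical size to pixels for a given display density can be sketched as follows; the helper name is my own, while 326 ppi is the iPhone 4S display density:

```python
def min_target_px(size_mm, pixels_per_inch):
    """Convert a physical touch-target size in millimetres to pixels
    for a display of the given density (25.4 mm per inch)."""
    return size_mm / 25.4 * pixels_per_inch

# A 9 mm target on a 326 ppi display (the iPhone 4S screen density)
# needs roughly 116 pixels per side.
print(round(min_target_px(9, 326)))  # → 116
```

The same 9 mm target needs far fewer pixels on a low-density screen, which is why designers specify targets in physical or density-independent units rather than raw pixels.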
On-screen keyboards raise further questions of their own: a digital key on a phone such as the iPhone 4S is far smaller than a key on a conventional physical keyboard, audible key-click effects substitute for tactile response, layouts change between portrait and landscape orientation, and symbols and numbers sit on separate panes. There is no standardised system for keyboard input built into every OS, and key sizes vary with screen size from device to device.
When a user is touching or holding something on-screen, their finger obscures their view of the very area they are selecting. Careful consideration is needed of where information is displayed, and of how users are expected to interact with particular objects.
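One common mitigation, seen for instance in iOS’s text-selection loupe, is to draw feedback offset above the finger rather than underneath it, clamped so it stays on-screen. A minimal sketch, where the function name, coordinate convention and offset value are all illustrative assumptions:

```python
def callout_position(touch_x, touch_y, screen_w, screen_h, offset=60):
    """Place a callout above the touch point so the finger does not
    obscure it, clamped to the screen bounds (origin at top-left)."""
    x = min(max(touch_x, 0), screen_w)
    y = touch_y - offset
    if y < 0:            # too close to the top edge: fall back to below
        y = touch_y + offset
    return x, min(y, screen_h)

# A touch near the top of a 320x480 screen pushes the callout below it.
print(callout_position(160, 30, 320, 480))  # → (160, 90)
```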
Depending on the nature of the application being run, consideration also needs to be given to whether the user will have both hands free to interact with the device, or whether one hand must hold it while standing or moving.
It appears the next significant jump in New Media, in particular in how consumers will interact with a wealth of new information, will come with the introduction of Google’s Project Glass. A TED talk by Pattie Maes and Pranav Mistry in March 2009 on ‘SixthSense’ paved the way for “profound interaction with our environment”: the talk showed a person using a wearable projector to cast information onto the real world, seamlessly bridging the divide between screen and world.
Google has suggested users will be able to interact with the device through audio commands, visualisation and location triggers. It would appear to be the first real attempt to integrate augmented reality into our daily lives: removing the need to input text through any form of keyboard, and having the interface seamlessly overlaid on the wearer’s vision, would appear to be the future.
"A group of us... started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment." – Google X (the firm’s experimental lab)
This statement shows that Google’s main intention for the device is to blend the technology with the world, instead of the great divide we have always had between digital and real. Directly overlaying information onto our own vision would demand a complete overhaul of the interface design theory used for the software.
How will the glasses know where to place information? How will they distinguish a command from general conversation? And if there is a busy scene in view, what should the glasses focus on?
As with touch devices, there is an almost insurmountable series of problems that designers and software engineers will need to overcome to make this a viable mass-market device.
Google released a video suggesting some of the features the device would provide. Many of these already exist to an extent on mobile devices, so the user should not feel uncomfortable with the range of information on offer. The real innovation will come in how users navigate the interface: with no mouse, no trackpad and no physical interaction, it will be a complete overhaul of the current system.
Much of the cognitive ergonomics of existing mobile technology can be analysed and adapted to work in the same way, by tweaking the current input system and the display of interactive elements to better suit the new ‘backdrop’.
A key example of a current-generation system taking advantage of this is Windows 8. The same operating system works across multiple platforms (desktop, mobile, touch) with the same functions and the same methods of user interaction: any action a user performs with a desktop PC’s mouse can be mimicked on a tablet. This is the first stage of an innovative new approach within the industry to make people feel familiar with a system across a range of different input devices, though it does have its issues.
There are functions that touch-screen technology took from older technology and improved upon (word correction on the digital keyboard, to counter users’ differing finger sizes), and others that had to be re-engineered due to the input constraints (such as highlighting text). For this reason the approach is still quite raw; hopefully, with the data collected from the millions of users who interact with the interface, the design will be learned from and refined.
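The word-correction idea mentioned above can be illustrated with a proximity model: because a “blunt” press may land on a neighbouring key, candidate words are scored by how far each of their keys lies from where the finger actually struck. The key layout, word list and scoring below are a simplified assumption for illustration, not any vendor’s actual algorithm:

```python
import math

# Simplified key-centre coordinates for part of a QWERTY layout (assumed).
KEY_POS = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "r": (3, 0), "t": (4, 0),
           "a": (0, 1), "s": (1, 1), "d": (2, 1), "f": (3, 1), "g": (4, 1)}

def word_cost(word, taps):
    """Sum of distances between each tap and the key the word implies."""
    return sum(math.dist(KEY_POS[ch], tap) for ch, tap in zip(word, taps))

def correct(taps, dictionary):
    """Pick the dictionary word whose keys lie closest to the tap points."""
    candidates = [w for w in dictionary if len(w) == len(taps)]
    return min(candidates, key=lambda w: word_cost(w, taps))

# Three slightly off-centre taps land nearest 'w', 'a', 's', so the
# intended word "was" beats the other candidates.
print(correct([(1.2, 0.1), (0.3, 0.9), (1.1, 1.2)], ["was", "far", "sad"]))
```

Real keyboards combine this spatial scoring with language models of word frequency and context, but the core idea of forgiving imprecise presses is the same.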
With the rapid expansion of touch-screen technology and its profound impact on the world over the past four years, it is paramount that new technologies learn from both the successes and the failures of their predecessors. The gap between digital and real is closing, and for the technology to really take off, users will need to feel familiar with its completely new method of input.