People often leverage artifacts and characteristics of their environment to reduce cognitive load and enhance their cognitive capabilities. External cognition refers to the activities that people use to support their cognitive efforts. These activities rely on a wide range of artifacts, such as computers, watches, and pen and paper; on characteristics of the environment, such as visible landmarks and signs; and on other people. There are three main types of external cognition activities.
These three types of activities are heavily interdependent. In the diagram above they are listed from broadest to most specific. The externalization of memory load is the most basic external cognitive activity; it is involved in all types of external cognitive activities.
Computational offloading leverages memory externalization for the specific purpose of performing computational tasks. It is the next most basic external cognitive activity.
Annotation and cognitive tracing can be used to support both types of external cognitive activities mentioned above. This type of activity involves manipulating or modifying memory and computational externalizations in ways that change the meaning of the externalizations themselves.
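As a loose programming analogy (mine, not the book's), the three activities map onto familiar habits: a scratch data structure externalizes memory, letting the artifact hold an intermediate result offloads computation, and annotating that result changes how it will be read later.

```python
# A loose analogy, not from the book: externalizing memory, offloading
# computation, and annotating the externalization.
scratchpad = {}                       # externalized memory: notes "on paper"
scratchpad["partial"] = 1234 * 56     # computational offloading: the artifact
                                      # holds the intermediate result for us
scratchpad["note"] = "verify later"   # annotation: marking the externalization
                                      # changes how it will be read later
total = scratchpad["partial"] + 78    # we never held 69,104 in our head
print(total)                          # 69182
```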
External cognitive activities are used to support experiential and reflective modes of cognition [more info on cognitive modes]. These activities rely on and support all types of cognitive processes defined in my earlier post: attention, perception, memory, language, learning, and higher reasoning [more info on cognitive process types].
This framework of external cognitive activities complements the Information Processing model by identifying how people leverage their external environment to enhance and support their cognitive capabilities [more info on information processing model].
It also complements the model of interaction by providing additional insights into how people interact with the world (or system images) to support and enhance their cognitive capabilities. However, it does not provide insight into how people interact with systems for non-cognitive pursuits, such as physical or communicative ones [more info on model of interaction].
[source: Interaction Design: Beyond Human-Computer Interaction]
** What the hell is ID FMP? **
Friday, March 27, 2009
ID FMP: Information Processing Cognitive Model
One of the most prevalent metaphors used in cognitive psychology compares the mind to an information processor. According to this perspective, information enters the mind and is processed through four linear stages that enable users to choose an appropriate response.
Though this model offers insights into how people process information, it is limited by its exclusive focus on activities that happen in the mind. Most of our cognitive activities involve interactions with people, objects, and other aspects of the environment around us. In other words, cognition does not take place only in the mind.
In my next ID FMP post I will cover the external cognition framework, which describes external cognitive activities, and distributed cognition models, which attempt to map all internal and external activities. Here's how this model aligns with the frameworks, models, and principles that I have explored over the past several weeks.
The cognitive activities modeled by the Information Processing framework above can be mapped to the mental activities outlined in Norman's Model of Interaction. At a high level, Norman's model provides additional insights into the mental activities that take place, and it features the external environment as an important, though unexplored, element. Here is a brief overview of how the phases from this model relate to those of the interaction model:
- "input encoding" maps to "perception"
- "comparison" encompasses "interpretation" and "evaluation"
- "response selection" corresponds to "intention" and "action specification"
- "response execution" maps to "execution"
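For readers who prefer code, this correspondence can be written down as a simple lookup table. A minimal sketch; the phase names come from the two models, while the dictionary itself is just my notation:

```python
# Information Processing phases mapped to Norman's model of interaction,
# as described in the list above.
IP_TO_NORMAN = {
    "input encoding": ["perception"],
    "comparison": ["interpretation", "evaluation"],
    "response selection": ["intention", "action specification"],
    "response execution": ["execution"],
}
```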
Here is how this model aligns with the framework regarding the relationship between a designer’s conceptual model and a user’s mental model. The focus of the Information Processing model is on the cognitive processes that occur in the user’s mind when they are interacting with the world. These processes are closely related to mental models in two ways:
- First, mental models provide the foundation for people to understand their interactions with the world and select appropriate responses.
- Second, mental models evolve as people evaluate the impact of their own actions and other events on the world.
The Conversation Turn-Taking Model is related to the Information Processing model in a broad sense only. The turn-taking framework focuses on explaining an external phenomenon related to language and communication that is driven by the cognitive functions described in the Information Processing model. The two models neither contradict one another nor directly support each other.
The Information Processing model can be applied to both reflective and experiential modes of cognition, though the phases involved in each mode differ. Reflective cognition tends to be active during the "comparison" and "response selection" phases. Experiential cognition, on the other hand, can be active across all phases depending on the type of interaction.
The chart below provides an overview of which cognitive process types are involved in each phase of the Information Processing model.
[source: Interaction Design: Beyond Human-Computer Interaction]
** What the hell is ID FMP? **
Labels:
cognition,
communication,
don norman,
experiential,
ID FMP,
interaction,
interaction design,
models,
reflective
Sunday, March 22, 2009
ID FMP: Model of Interaction
There are many theories that attempt to describe the cognitive processes that govern users' interactions with product systems. Here I will focus on a model developed by Don Norman, outlined in his book The Design of Everyday Things. This framework breaks down the process of interaction between a human and a product into seven distinct phases.
Seven Phases of Interaction with a Product System
- Forming the goal
- Forming the intention
- Specifying an action
- Executing the action
- Perceiving the state of the world
- Interpreting the state of the world
- Evaluating the outcome
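For those who think in code, here is a minimal sketch of the seven phases as an ordered cycle. The enum is my own representation (Norman presents the phases as a diagram), with evaluation feeding back into new goals:

```python
from enum import Enum

# Norman's seven phases of interaction, in order.
class Phase(Enum):
    FORM_GOAL = 1
    FORM_INTENTION = 2
    SPECIFY_ACTION = 3
    EXECUTE_ACTION = 4
    PERCEIVE_STATE = 5
    INTERPRET_STATE = 6
    EVALUATE_OUTCOME = 7

def next_phase(phase):
    # Wrap around: evaluating the outcome leads to forming a new goal.
    return Phase(phase.value % 7 + 1)

assert next_phase(Phase.EXECUTE_ACTION) is Phase.PERCEIVE_STATE
assert next_phase(Phase.EVALUATE_OUTCOME) is Phase.FORM_GOAL
```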
Now let's put this theory into context with some of the concepts and models that we've encountered thus far. First, I want to point out that this model aligns with Don Norman's model of the relationship between a designer's conceptual model and a user's mental model [read more here]. The focus of this framework is the interaction between the system image (the product's interface, where user interaction happens) and the user's mental model (the user's understanding of how the product works, which governs the user's interpretations, evaluations, goals, intentions, and action specifications).
I've extended Norman's original model to account for the reflective cognition that is also involved in people's interactions with products. Reflective cognition governs people's higher-level evaluations, goals, and intentions, which ultimately drive people's experiential cognition activities. Experiential cognition governs the second-by-second evaluations, goals, and intentions involved in people's interactions with products. These two different modes of cognition are explored in greater detail here.
Here is an example to distinguish, and highlight the interdependencies between, these two types of cognition and interaction. Let's consider a person's interaction with a car. In this scenario, a person's reflective cognition would include setting a goal, such as choosing a destination and desired time of arrival, and evaluating what route to take based on an understanding of the current location and traffic patterns. These activities would govern the person's experiential interactions with the car and drive their moment-by-moment evaluations and creation of goals and intentions. Experiential interactions would include using the steering wheel to turn a corner or switch lanes, pressing the accelerator to speed up, and stepping on the brakes to stop the car.
How does the concept of mental models relate to this framework? The mental model itself is not represented by a single phase or grouping of phases. It refers to the understanding that a user has of how a system works. Norman's model was developed to describe how users interact with product systems on an experiential, minute-by-minute basis. At this level of interaction, a user's mental model drives their interpretations, evaluations, setting of goals and intentions, and specification of actions.
Now let’s explore how the different cognitive types come into play during the various phases of interaction. These cognitive types have been outlined in greater detail here.
- Attention supports all phases of interaction from perception through to action execution. This cognitive process refers to a user’s ability to focus on both external phenomena and internal thoughts.
- Perception is clearly called out as its own phase in Don Norman’s model.
- Memory plays an important role during all phases from interpretation through to action specification.
- Language supports communication throughout all phases of a person’s interaction with a product. Here I refer to both verbal and visual languages.
- Learning enables people to use new products and to increase effectiveness and efficiency in their interactions with existing products. This cognitive process supports all phases between interpretation and action specification.
- Higher reasoning governs all activities related to the setting of high-level goals and intent, and to driving evaluations.
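The same coverage can be sketched as a simple lookup table; the wording is my paraphrase of the list above:

```python
# Which phases of Norman's model each cognitive process supports,
# paraphrasing the list above.
PROCESS_COVERAGE = {
    "attention":        "all phases, perception through action execution",
    "perception":       "its own phase: perceiving the state of the world",
    "memory":           "interpretation through action specification",
    "language":         "all phases (verbal and visual language)",
    "learning":         "interpretation through action specification",
    "higher reasoning": "goal setting, intention, and evaluation",
}
```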
** What the hell is ID FMP? **
Labels:
cognition,
communication,
don norman,
experiential,
ID FMP,
interaction,
interaction design,
models,
reflective
Saturday, March 21, 2009
ID FMP: Conversation Turn-Taking Model
Holding a conversation is a basic human activity. It requires a large amount of coordination between participants, a fact that often goes unnoticed. People need to know when to listen, when they can start talking, and when to cede the floor. Conversation mechanisms facilitate the coordination of conversations by helping people know how and when to start and stop speaking. These mechanisms enable people to effectively negotiate the turn-taking required to carry out a conversation.
Harvey Sacks, Emanuel Schegloff, and Gail Jefferson developed a model that aims to explain how people manage turn-taking during conversations. The focus of their research was to create a framework that can be applied across cultures and contexts, and that can accommodate several key observations about the structure and dynamics of conversations. Here is an excerpt from the abstract of their paper A Simplest Systematics for the Organization of Turn-Taking for Conversation.
"The organization of taking turns to talk is fundamental to conversation, as well as to other speech-exchange systems. A model for the turn-taking organization for conversation is proposed, and is examined for its compatibility with a list of grossly apparent facts about conversation [outlined below]."
The Foundations
Before we explore the model itself, let's take a look at its foundation. Here is the list of "grossly apparent facts" about conversation referred to in the quote above:
- Speaker changes will always occur and often recur.
- For most of the time only one party talks at a time.
- More than one person will often talk at a time, but these occurrences are brief.
- Most transitions occur with no gap or overlap, or with slight gap or overlap.
- Turn order varies throughout conversation.
- Turn size or length usually varies.
- Length of conversation is not specified.
- What parties say is not specified.
- Relative distribution of turns is not specified.
- Number of parties varies considerably.
- Talk can be continuous or not.
- Turn-allocation techniques are used to facilitate the conversation.
- Sometimes turn-constructional units are used to facilitate conversation.
- Repair mechanisms exist for correcting turn-taking errors.
The general model that they developed, pictured above, is composed of three basic rules that govern the transition of turns in a conversation. These rules are:
- The current speaker chooses the next speaker by asking a question or making a request.
- If the speaker does not choose the next speaker, then another person can self-select to start speaking.
- The speaker can decide to continue speaking if no other person self-selects to start speaking.
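The three rules form an ordered decision procedure applied at each point where a turn could change hands. Here is a minimal sketch in code; the function and argument names are mine, not the paper's:

```python
def next_speaker(current, selected_next=None, self_selector=None):
    """Apply the three turn-allocation rules at a possible turn transition."""
    if selected_next is not None:   # Rule 1: the current speaker selects the
        return selected_next        # next speaker (e.g. by asking a question)
    if self_selector is not None:   # Rule 2: another party self-selects
        return self_selector
    return current                  # Rule 3: the current speaker continues

# Alice asks Bob a question, so Bob takes the turn.
assert next_speaker("alice", selected_next="bob") == "bob"
# Nobody is selected and nobody self-selects, so Alice keeps talking.
assert next_speaker("alice") == "alice"
```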
** What the hell is ID FMP? **
Labels:
cognition,
collaboration,
communication,
conversation,
culture,
ID FMP,
interaction design,
models,
turn taking
Thursday, March 19, 2009
Chapter 4 Homework: What is interaction design?
This assignment was taken from the fourth chapter of the book Interaction Design: Beyond Human-Computer Interaction, written by Helen Sharp, Jenny Preece, and Yvonne Rogers.
Overview
The aim of this activity is for you to analyze the design of a virtual world with respect to how it is designed to support collaboration and communication.
Visit an existing 3D virtual world, such as The Palace, Habbo Hotel, or one hosted by Worlds. Try to work out how it has been designed to take account of the following:
Assignment Questions
Question A: General social issues
- What is the purpose of the virtual world?
- What kinds of conversation mechanisms are supported?
- What kinds of coordination mechanisms are provided?
- What kinds of social protocols and conventions are used?
- What kinds of awareness information are provided?
- Does the mode of communication and interaction seem natural or awkward?
- What form of interaction and communication is supported, e.g. text/audio/video?
- What other visualizations are included? What information do they convey?
- How do users switch between different modes of interaction, e.g. exploring and chatting? Is the switch seamless?
- Are there any social phenomena that occur in the context of the virtual world that wouldn't occur in a face-to-face setting, e.g. flaming?
- What other features might you include in the virtual world to improve communication and collaboration?
Virtual world selected: Second Life.
Question A
What is the purpose of the virtual world?
According to Linden Lab, Second Life does not have a specific purpose. They describe Second Life as "a free online virtual world imagined and created by its Residents." Most people use Second Life for entertainment. It enables them to escape to a virtual world where they can interact with other real people. It offers an experience that can be likened to a cross between The Sims and a social network. A small segment of Second Life users actually make a living from creating virtual artifacts and owning virtual land.
What kinds of conversation mechanisms are supported?
Second Life supports many of the same conversation mechanisms that people are accustomed to using in real life to govern turn-taking. In my personal experience, I continued to follow the conversation practices that I am accustomed to using when speaking to someone in person, even though the conversation was taking place in a text-based medium.
The conversation turn-taking model developed by Sacks et al. [link] seems to be applicable to this environment (at least according to my very unscientific research). I assume that conversations using voice, which is available in Second Life, support standard conversation mechanisms even more effectively.
Another conversation mechanism supported by Second Life is body language. Let me clarify what I mean: citizens are able to select from a large pre-defined list of gestures that enable them to communicate attention, emotion, mood, and more. This is a pretty cool feature that can be likened to emoticons in an instant messaging application or social network.
What kinds of coordination mechanisms are provided?
Second Life does a pretty good job here again. It offers robust support for both verbal and non-verbal types of communication. As stated above, users can communicate using a text or voice/audio interface. Avatars are also capable of using a variety of different gestures to communicate. These include nodding, shrugging, clapping, blowing a kiss, and more.
Rules are the foundation of this virtual world at its most basic level. The software code provides a set of rules upon which the entire virtual world is built; these basic rules are documented in the online user guide and help tools. They define the "virtual-physical" world of Second Life, which is the platform upon which user coordination can take place.
One also encounters many rules while exploring the world itself. These external representations, created by users and Linden Lab, inform other users and help coordinate personal and shared activities. Maps are another key mechanism that supports coordination; they help users easily locate and transport themselves between islands.
What kinds of social protocols and conventions are used?
Most people seem to mimic real-world conventions in Second Life. Conversations are initiated in a manner more akin to real-world conversations than to other types of text-based conversations. Users are conscious of the organization and appearance of the physical artifacts in this virtual world. This is reflected by conventions such as users facing one another when speaking, and by the fact that many users are extremely conscious of their avatar's clothing and style.
What kinds of awareness information are provided?
At the most basic level of awareness, Second Life users are able to know who is around them via the visual representation of the virtual world. For the most part, users are able to understand what is happening, though this varies considerably based on expertise level. It is possible to overhear others' conversations as long as they are not having a private chat. Most of the groups of people I encountered whose physical proximity suggested that they were having a conversation must have been holding private chats. An interesting design element is that avatars make a typing movement in the air when they are writing a reply in a conversation.
Does the mode of communication and interaction seem natural or awkward?
The mode of communication and interaction offered in Second Life is natural on most accounts. The natural feel of the text-based conversations is in large part due to our modern-day familiarity with holding conversations via messaging applications such as IM and SMS. The overall look and feel of the virtual world is natural, and the communicative gestures of the characters are fluid and clear in their meaning.
Question B
What form of interaction and communication is supported, e.g. text/audio/video?
Second Life supports all main forms of interaction: text, audio, video, and computational.
What other visualizations are included? What information do they convey?
Second Life is well crafted from a visual perspective. The visual flair is actually provided mostly by the creativity of the members of the community, who develop most of the experiences and structures that exist in this world. Visualizations built into the interface include different modes for displaying chats, maps that provide location information, and the main interface of the virtual world environment.
How do users switch between different modes of interaction, e.g. exploring and chatting? Is the switch seamless?
The switch between different modes of interaction is seamless. If a user is exploring, they can easily start chatting with someone nearby simply by typing; if they have a voice-enabled system, they just have to talk. Gestures are not integrated as seamlessly; these have to be selected from a drop-down menu.
Are there any social phenomena that occur in the context of the virtual world that wouldn't occur in a face-to-face setting, e.g. flaming?
As with any medium that allows people to communicate from a distance, people are definitely less concerned with politeness and manners. One social phenomenon that I witnessed was a user who kept repeating everything that was said in a conversation between me and a third user.
Question C
Overall, I think that Second Life does a thorough job of providing users with effective communication and collaboration tools. So much so that technology companies such as IBM have built virtual campuses where they hold meetings with employees from around the world. Here are a few ideas that could be explored:
- Allowing users to select moods and emotions (sketched in code after this list). These features would work in a similar way to gestures; the main difference is the duration of a mood or emotion in comparison to a gesture. Moods and emotions last longer and would be controlled using on/off switches.
- Making it easy for users to create and share documents on the fly, with the ability to work on documents simultaneously and to switch focus seamlessly between the document and the virtual world.
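Here is a minimal sketch of the first idea, contrasting a persistent mood toggle with a momentary gesture. All names are hypothetical; this is not Second Life's actual scripting interface:

```python
class Avatar:
    """Hypothetical sketch: gestures fire once, moods persist until toggled."""

    def __init__(self):
        self.mood = None  # persists across interactions until switched off

    def gesture(self, name):
        return f"avatar plays '{name}' once, then stops"  # momentary

    def set_mood(self, name):
        self.mood = name  # the on/off switch: stays active until changed

avatar = Avatar()
avatar.gesture("wave")       # ends on its own
avatar.set_mood("cheerful")  # still "cheerful" minutes later
avatar.set_mood(None)        # explicitly switched off
```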
Labels:
collaboration,
coordination,
course,
education,
id-book,
interaction design
Sunday, March 15, 2009
ID FMP: Types of Cognitive Processes
In my last post I identified two different modes of cognition. Here I will continue my investigation into the scope of cognition by identifying six different types of cognitive processes, taken from the book Interaction Design: Beyond Human-Computer Interaction. My focus will remain on the questions: "What is cognition? And what are the main types of cognitive activities?"
The six types of cognitive processes that I will describe are attention, perception, memory, language, learning, and higher reasoning. The processes are interdependent and occur simultaneously. They play a role in experiential and reflective modes of cognition. Here is a description of each process along with a few related implications.
Attention: process for selecting an object on which to concentrate. Object can be a physical or abstract one (such as an idea) that resides out in the world or in the mind.
Design implications: make information visible when it needs attending to; avoid cluttering the interface with too much information.
Perception: process for capturing information from the environment and processing it. Enables people to perceive entities and objects in the world. Involves input from sense organs (such as eyes, ears, nose, mouth, and fingers) and the transformation of this information into perception of entities (such as objects, words, tastes, and ideas).
Design implications: all representations of actions, events and data (whether visual, graphical, audio, physical, or a combination thereof) should be easily distinguishable by users.
Memory: process for storing, finding, and accessing knowledge. Enables people to recall and recognize entities, and to determine appropriate actions. Involves filtering new information to identify what knowledge should be stored. Context and duration of interaction are two important criteria that function as filters.
Design implications: do not overload the user's memory; leverage recognition as opposed to recall when possible; provide a variety of different ways for users to encode information digitally.
Language: processes for understanding and communicating through language via reading, writing, speaking, and listening. Though these language media have much in common, they differ on numerous dimensions, including permanence, scannability, cultural roles, use in practice, and cognitive effort required.
Design implications: minimize length of speech-based menus; accentuate intonation used in speech-based systems; ensure that font size and type allow for easy reading.
Learning: process for synthesizing new knowledge and know-how. Involves connecting new information and experiences with existing knowledge. Interactivity is an important element in the learning process.
Design implications: leverage constraints to guide new users; encourage exploration by new users; link abstract concepts to concrete representations to facilitate understanding.
Higher reasoning: processes that involve reflective cognition, such as problem-solving, planning, reasoning, and decision-making. Most are conscious processes that require discussion, with oneself or others, and the use of artifacts such as books and maps. The extent to which people can engage in higher reasoning is usually correlated with their level of expertise in a specific domain.
Design implications: make it easy for users with higher levels of expertise to access additional information and functionality to carry out tasks more efficiently and effectively.
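Restated as a lookup table a designer could keep at hand; the structure is mine, while the content paraphrases the implications above:

```python
# Design implications per cognitive process, paraphrased from the
# descriptions above.
DESIGN_IMPLICATIONS = {
    "attention": ["make information visible when it needs attending to",
                  "avoid cluttering the interface"],
    "perception": ["make representations of actions, events, and data "
                   "easily distinguishable"],
    "memory": ["do not overload the user's memory",
               "favor recognition over recall"],
    "language": ["minimize the length of speech-based menus",
                 "ensure font size and type allow easy reading"],
    "learning": ["use constraints to guide new users",
                 "encourage exploration"],
    "higher reasoning": ["let expert users reach additional information "
                         "and functionality efficiently"],
}
```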
[source: Interaction Design: Beyond Human-Computer Interaction]
** What the hell is ID FMP? **
Labels:
cognition,
cognitive models,
ID FMP,
interaction design,
process
ID FMP: Modes of Cognition
Cognition [define] encompasses a wide range of processes related to thinking, sensing, interpreting, evaluating, decision-making, remembering, and communicating. It is important for designers to understand human cognitive processes in order to design systems that are easy to learn, effective, efficient, pleasurable, and meaningful.
Here, I will first distinguish between two main modes of cognition. In my next post I will identify different categories of cognitive processes. The value of these distinctions is that different modes and types of cognition call for different technology and interaction solutions. It is important to note that both cognitive modes and multiple processes are always active simultaneously.
The focus of this, and my next, post is to explore the scope of cognition. In other words, the question being answered here is “what is cognition? And what are the main cognitive activities?” I will cover models that attempt to illustrate how cognition works at a later time; at which point the question I will address is “how does cognition function?”
The two modes of cognition identified by Don Norman are the experiential and reflective modes. Both are essential to human beings, and both are continuously used in everyday life, often in an overlapping manner. The description below and the attached diagram aim to illustrate the main characteristics of each mode.
- Experiential: the state of mind associated with perceiving the environment around us, and with engaging with that environment through our actions and reactions. Contexts where an experiential mode of cognition is used include having a conversation, driving a car, or reading a book.
- Reflective: the state of mind associated with higher-level processing of knowledge, memory, and external information (or stimuli) through thinking, comparing, and judging. This type of cognition is needed for people to learn, create ideas, design products, and write books.
** What the hell is ID FMP? **
Labels:
cognition,
cognitive models,
ID FMP,
interaction design
Saturday, March 14, 2009
ID FMP: Framework for Developing Conceptual Models
We keep coming back to conceptual models [define]. The reason: a well-designed conceptual model is a fundamental element of successful product and service systems. To develop a well-articulated conceptual model, designers need to think through the main metaphors, concepts, actions, and relationships of the systems they are designing before developing prototypes of any sort (including wireframes, drawings, renderings, etc.).
Don Norman's and Bjoern Hartmann's model provides insight into how designers' conceptual models interact and relate to users' mental models. It does not, however, provide any guidance to help designers synthesize conceptual models.
Johnson and Henderson's framework, published in 2002, was developed with this purpose in mind. The framework identifies the standard components of a conceptual model, providing a blueprint that designers can use to develop conceptual models. Here is an overview of the four components of a conceptual model as defined by Johnson and Henderson:
- Major Metaphors and Analogies: Identify important metaphors and analogies used to enable the user to understand what a product does and how to use it.
- Concepts: Define the concepts that users are exposed to and that they need to understand, including the objects the concepts create and manipulate, any relevant attributes, and the operations that can be performed on the concept.
- Relationships and Actions: Identify the relationships between concepts, including whether an object contains another, or is part of it, and the relative importance of objects and actions.
- Mappings: Define the mappings between the metaphors and concepts and the user experience the product is designed to invoke.
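As a worked illustration, here is a minimal sketch of the four components as a template a designer might fill in. The class layout and the email-client example are mine, not Johnson and Henderson's:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptualModel:
    """Template for Johnson and Henderson's four components (my layout)."""
    metaphors_and_analogies: list = field(default_factory=list)
    concepts: list = field(default_factory=list)
    relationships_and_actions: list = field(default_factory=list)
    mappings: list = field(default_factory=list)

# Hypothetical example: sketching an email client.
email_client = ConceptualModel(
    metaphors_and_analogies=["postal mail", "inbox as a tray on a desk"],
    concepts=["message (attributes: sender, subject; operations: send, file)"],
    relationships_and_actions=["a folder contains messages"],
    mappings=["'sending' draws on the postal metaphor users already know"],
)
```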
I have applied this framework in two posts:
- The first was developed in response to an exercise from the book Interaction Design: Beyond Human-Computer Interaction.
- The second was written as a personal exercise in applying this conceptual framework to a silly pet product idea that I had been toying around with for a while.
** What the hell is ID FMP? **
Thursday, March 12, 2009
ID FMP: Map of Relationship Between Conceptual and Mental Models
Developed by Don Norman, the model illustrated below demonstrates how the relationship between a designer's conceptual model and a user's mental model is mediated by the system image of products or services.
So here is my explanation of what this model means: designers develop product and service systems based on conceptual models [define] that they create or borrow. I use the term product and service systems [define] to refer to the ecosystem that encompasses products, services, and their related artifacts and resources; these can include assets such as manuals and knowledge bases, and resources such as user groups and communities.
Users do not have access to the conceptual models of designers. Their understanding of how a product works is developed based on their interactions with the product itself, their previous experiences with the world, and their existing knowledge and expertise. All of these considerations affect how people interpret their experiences with a product, and the mental model [define] they create to explain how products work.
The term system image [define] refers to the way a product or service system actually appears to a user. System images are always imperfect representations of the conceptual models upon which they were built. For a product to be usable, the system image needs to enable users to develop an accurate mental model of how the relevant aspects of a product or service work.
An interesting feature of Norman's 1988 model is that designers' relationship with system images is represented as a one-way phenomenon. This implies that once a product has been designed, there is little opportunity for ongoing improvement. During the last 20 years, advances in technology and design methodology have made it possible for designers to continuously fine-tune product and service systems. This is especially true in the increasingly service-based world of software.
Bjoern Hartmann has revised Norman's model to reflect the opportunity for designers to play an ongoing role in improving the system image of the products they've created. The model he proposes includes a feedback loop that enables the user to communicate with the designer via the system image.
Hartmann posits that user-initiated feedback via the system will help identify mismatches between the designer's conceptual model and the user's model of how the system functions. Another important consideration is that offering an instantaneous feedback option in the same medium in which the interaction is taking place will generate more reliable and richer data than feedback elicited later, or via a different channel.
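Here is a minimal sketch of what such a feedback loop might capture, assuming a simple event record; the field names and function are illustrative, not Hartmann's actual design:

```python
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    location: str  # where in the system image the user was
    expected: str  # what the user's mental model predicted
    observed: str  # what the system actually did

def report_mismatch(event, designer_queue):
    # Captured in the same medium as the interaction, so the designer
    # sees the mismatch in context rather than via a separate channel.
    designer_queue.append(event)

queue = []
report_mismatch(FeedbackEvent("settings page",
                              "closing saves my changes",
                              "changes were discarded"), queue)
```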
[Sourced from Don Norman's website, though Norman's framework was also featured in his book The Design of Everyday Things; paper by Bjoern Hartmann written during graduate studies at Stanford]
** What the hell is ID FMP? **
ID FMP: Useful Definitions (a living post)
As I embark on my exploration of frameworks, models, and principles related to interaction and experience design I will maintain this working list of definitions regarding key terms and concepts.
This is by no means a comprehensive list. This is actually an utterly selfish endeavor as these definitions are solely intended to help me in my own study of this field. They have been cherry-picked, mostly from online sources and the book Universal Principles of Design.
Augmented Reality: "Augmented reality means that I have some mediating artifact that provides me with a visual overlay on the world. This could be a phone, it could be a windshield, it could be a pair of glasses or contact lenses, doesn’t matter. And you’re going to use that overlay to superimpose some order of information about the world and the objects in it onto the things that enter my field of vision – onto what I see." [source: Adam Greenfield, from interview by Tish Shute]
- Marker-Based: "Marker-based AR implies that there’s some reasonably strong relationship between the information superimposed over a given object, and the object itself. That object is an onto, a spime, it’s been provided with a passive RFID tag or an active transmitter. And it’s radiating information about itself that I’m grabbing, perhaps cross-referencing against other sources of information, and superimposing over the field of vision. Fine and dandy."
- Markerless: "But there's another way of achieving the same end, right? Instead of looking at a suit jacket on a rack and having its onboard tag tell you directly that it's a Helmut Lang, style number such-and-such from men's Spring/Summer collection 2011, Size 42 Regular in Color Gunmetal, produced at Joint Venture Factory #4 in Cholon City, Vietnam, and packed for shipment on September 3, 2010, you're going to run some kind of pattern-matching query on it. And without the necessity of that object being tagged physically in any way, you're going to have access to information about it."
Framework: "A basic conceptual structure used to solve or address complex issues. This very broad definition has allowed the term to be used as a buzzword, especially in a software context." [source: Wikipedia]
Interference Effects: "A phenomenon in which mental processing is made slower and less accurate by competing mental processes." [source: Universal Principles of Design]; an example of an artifact that would generate this effect is a green stop sign.
Mapping: "A relationship between controls and their movements or effects. Good mapping between controls and their effects results in greater ease of use." [source: Universal Principles of Design]
Model: "A hypothetical description of a complex entity or process; representation of something, sometimes on a smaller scale" [source: Princeton WordNet]
- Mental model: "Representations of systems and environments derived from experience." [source: Universal Principles of Design]; "An explanation of someone's thought process for how something works in the real world." [source: Wikipedia]; "a mental representation that people use to organize their experience about themselves, others, the environment, and the things with which they interact; its functional role is to provide predictive and explanatory power for understanding these phenomena" [source: Virginia Tech]
- Conceptual model: “an abstraction, representation and ordering of phenomena using the mind.” [source: Charles Darwin University]; “conceptual model represents 'concepts' (entities) and relationships between them.” [source: Wikipedia]
Principle: "A basic generalization that is accepted as true and that can be used as a basis for reasoning or conduct; a rule or law concerning a natural phenomenon or the function of a complex system." [source: Princenton Wordnet]
Scaling Fallacy: "A tendency to assume that a system that works at one scale will also work at a smaller or larger scale." [source: Universal Design Principles]
Serial Position Effects: "A phenomenon of memory in which items presented at the beginning and end of a list are more likely to be recalled than items in the middle of a list." [source: Universal Design Principles]
System: "a group of independent but interrelated elements comprising a unified whole; 'a vast system of production and distribution and consumption keep the country going.'" [source: Princenton Wordnet]; "System (from Latin systēma, in turn from Greek systēma) is a set of interacting or interdependent entities, real or abstract, forming an integrated whole." [source: Wikipedia]
- Product and Service Systems: the independent but interrelated elements through which a user experiences a product or service. These systems encompass the product or service itself as well as indirect elements such as manuals, user groups, and communities. To that extent, a product or service system can vary significantly depending on the context of use, since these external elements often play an important role in the user experience of a product or service.
- System Image: As used by Don Norman, refers to the overall interface available for a user to interact with a product or service system, including both direct and indirect elements. Direct elements of the interface include the product or service itself; indirect elements encompass things such as instruction manuals, user groups, and communities. [source: me]
Labels:
definitions,
design,
experience design,
ID FMP,
interaction design
Sunday, March 8, 2009
Interaction Design Frameworks, Models and Principles (ID FMP)
Since I began my interaction and experience design curriculum six months ago I've come across a large number of frameworks, models and principles that provide guidance and insights to designers. These tools were developed by designers, psychologists, sociologists, and anthropologists who have long been exploring the ways in which people interact with products, with each other, with organizations, and with the world at large.
To help me keep track of all these useful tools I will start writing posts that provide a description of these individual frameworks, models or principles. I will also include source information and, when possible, list additional information sources. All of my posts related to this series will be tagged with ID FMP.
The frameworks, models and principles that I will cover span many different perspectives and domains. Some are user-focused while others center on design-related concerns; several provide general guidance for designers while others focus on considerations that are relevant to specific niches only. The common thread that holds these tools together is their applicability to the design of interactions and experiences.
Chapter 3 Homework: What is interaction design?
This assignment was taken from the third chapter of the book Interaction Design: Beyond Human-Computer Interaction, written by Helen Sharp, Jenny Preece, and Yvonne Rogers.
Assignment Questions
Question A: First elicit your own mental model. Write down how you think a cash machine (ATM) works. Then answer the questions below. Next, ask two people the same questions.
- How much money are you allowed to take out?
- If you took this out and then went to another machine and tried to withdraw the same amount, what would happen?
- What is on your card?
- How is the information used?
- What happens if you enter the wrong number?
- Why are there pauses between the steps of a transaction?
- How long are they?
- What happens if you type ahead during the pauses?
- What happens to the card in the machine?
- Why does it stay inside the machine?
- Do you count the money? Why?
Question C: Next, try to interpret your findings with respect to the design of the system. Are any interface features revealed as being particularly problematic? What design recommendations do these suggest?
Question D: Finally, how might you design a better conceptual model that would allow users to develop a better mental model of ATMs (assuming this is a desirable goal)?
Assignment Answers
Question A
Here’s My Take
Here is my understanding of how an ATM functions. The user owns a card with a magnetic stripe that holds his/her account number. To execute a transaction, the user first inserts the card into the appropriate slot so the machine can read the card number. Next, the user is prompted to input a four-digit PIN to access the account.
Once the PIN is entered, the ATM connects to a central server via the internet and authenticates the user. If authentication succeeds, the ATM remains connected to the server so the user can access various services, such as viewing the account balance, withdrawing or depositing funds, and potentially transferring between accounts. When the user performs an action on the account, the ATM communicates with the server to execute the command.
For security purposes, the ATM requests that the user re-enter the PIN every time s/he executes a new action, e.g. withdrawing money. Other security features include asking the user whether s/he is ready to quit after every transaction, and automatically logging off a user after a short period of inactivity.
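As a thought experiment, here is a minimal sketch in Python of the transaction loop in the mental model I just described. Everything here (the BankServer class, its methods, the toy account data) is invented for illustration; real ATMs run on dedicated secure networks and hardware, not logic like this.

```python
# A toy sketch of the mental model described above; purely illustrative.

class BankServer:
    """Stand-in for the hypothetical central server holding account records."""
    def __init__(self):
        self.accounts = {"1234567890": {"pin": "4321", "balance": 500}}

    def authenticate(self, card_number, pin):
        acct = self.accounts.get(card_number)
        return acct is not None and acct["pin"] == pin

    def withdraw(self, card_number, amount):
        acct = self.accounts[card_number]
        if amount > acct["balance"]:
            return "insufficient funds"
        acct["balance"] -= amount
        return f"dispensed ${amount}; new balance ${acct['balance']}"

def atm_session(server, card_number, pin, requests):
    # The ATM connects to the server and authenticates the user once.
    if not server.authenticate(card_number, pin):
        print("authentication failed")
        return
    for amount in requests:
        # Per the model above, the PIN is re-confirmed before each new action.
        if not server.authenticate(card_number, pin):
            break
        print(server.withdraw(card_number, amount))

atm_session(BankServer(), "1234567890", "4321", [60, 1000])
```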
Take from Subject One
Card is entered and account is confirmed after pin number entry. The amount entered is calculated in terms of number of bills usually of $20 denomination and spit out at you, and appropriate debits are made on the account. You are then told to have a nice day. Meanwhile hardly noticed by you is that your bank, the bank that owns the atm, and perhaps the operator of the atm has embezzled “so-called” fees from your account.
Take from Subject Two
ATM works like a computer. Your ATM card is like an activation key only usable with the right password. If you don't provide the right password, the machine will eat it. The ATM uses software programmed by the bank (so I guess every bank's ATM is slightly different for that reason) and depending on which button you choose for what to do next, it does various things. So I guess you can think of the ATM like a road to search for treasure... Your cash is the ultimate treasure and what you do from the moment you stand in front of the ATM until you get the actual cash is like your path in search for the treasure. The ATM is also networked, so someone is always watching your every move.
Question B
For the most part, everyone has a fairly accurate mental model of how an ATM works. All of us understand that the services provided by ATMs are accessed using a card with a corresponding PIN. Another shared understanding is that ATM services are enabled by connections to bank databases, where transactions are authorized and captured.
The biggest difference between the explanations was each author's focus. I provided a technical/systems description of how an ATM works; subject one's description covered user experience elements, such as frustrations with excessive bank fees; subject two offered an overview from a much looser, metaphorical perspective. Otherwise, there were small differences related to each person's understanding of specific elements of the user experience (e.g. the amount of money that can be taken out, reasons for delays, responses to wrongful input).
These findings indicate that most people in my social circle have accurate mental models of how ATMs work, which suggests that the way ATM systems work is, for the most part, transparent. However, there are certain elements of the interaction that users still lack clarity about or dislike: the amount of money that can be taken out; the total value of the fees being applied to the account; and the inability to count the money when the ATM is in a public place.
Question C
For the most part, users have a good understanding of how ATM systems work. The opportunities to address user issues are therefore mostly small and incremental in nature (e.g. closing the small information gaps noted above). This is not to say that new technologies, concepts, and approaches could not improve the experience of using an ATM in ways that current users cannot envision.
Here are a few design recommendations to address the three gaps identified between the system image and the users' mental models:
- Lack of clarity regarding the amount of money that can be taken out. A possible solution: provide users with information about their daily withdrawal limit (as well as any ATM-specific limits). This issue is only present when using ATMs that are not from the issuing bank.
- Lack of clarity regarding the total value of the fees being applied to the account. A possible solution: provide users with information about the ATM and bank fees applied to each transaction. This issue, too, is only present when using ATMs that are not from the issuing bank.
- The inability to count the money when the ATM is in a public place. A possible solution: create cash dispensers that leverage the arrangement of bills and a time delay to let users count the money in the tray while it is being dispensed.
Many advances have taken place in the design of ATM systems over the past several years. The new ATM from Chase Bank in New York is a great example of a well-designed ATM system, with several notable improvements over older systems, including easy, envelope-free deposits and an improved touch-screen interface.
Here are a few areas related to the conceptual model of ATM systems that offer opportunities for improvements:
Access to services provided by ATM
Using presence-awareness technology, similar to the keyless entry available on luxury car models, banks could design ATMs that identify the user without the need for a card. Users would carry a key (rather than a card) that contains an RFID chip or similar technology. When a user approaches a machine, s/he would be prompted to enter a PIN without needing to insert a card.
Rather than focusing only on improving the experience of using ATMs, it is also valuable to explore how to provide the same services through different channels. Cell phones offer a lot of promise in this area: many people already prefer to use their cell phones to check their account balance on the go, and money transfers and payments by cell phone are becoming more widely available across the world.
Despite the increased use of electronic forms of payment, there are still many types of transactions for which people need cold, hard cash. From a cash withdrawal and deposit standpoint, there is no alternative to a physical device such as an ATM (other than the cash-back services available at select stores that accept debit cards). For these transactions the cell phone could enhance the existing experience: using Bluetooth, it could serve as the key for the presence awareness described above, and it could provide the user with a confirmation or electronic receipt of the transaction, including all relevant fees.
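To make the keyless, presence-aware flow concrete, here is a minimal sketch under the assumptions above; the token reader callback, the token-to-account mapping, and the delivery channel are all hypothetical.

```python
# Hypothetical presence-aware flow: a key fob or Bluetooth phone announces an
# account token, so no card needs to be inserted. Every name here is invented.

TOKENS = {"fob-7f3a": "checking-001"}   # token -> account (toy data)
PINS = {"checking-001": "4321"}

def on_token_detected(token, entered_pin):
    """Would be called by an RFID/Bluetooth reader when a key is in range."""
    account = TOKENS.get(token)
    if account is None:
        return "unknown token; ignore"
    # The PIN challenge remains: presence replaces the card, not the secret.
    if PINS[account] != entered_pin:
        return "PIN rejected"
    # After the session, the phone could receive an electronic receipt
    # listing the transaction and all fees applied.
    return f"session opened for {account}; e-receipt queued for delivery"

print(on_token_detected("fob-7f3a", "4321"))
```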
Security of services provided by the ATM
New technologies can also be leveraged to improve the security of ATM systems. Fingerprint or other biometric authentication methods could replace the PIN, which would not only increase security but also reduce the cognitive load of memorizing the PIN (or rather, all of your PINs and passwords). Of course, this would mean that you could no longer take out money using your significant other's ATM card.
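And here is a sketch of how a biometric check might stand in for the PIN; the threshold and scoring are invented, and real systems perform the match inside tamper-resistant hardware rather than application code.

```python
# Invented illustration: a biometric match replaces the memorized PIN.

BIOMETRIC_THRESHOLD = 0.95   # assumed minimum similarity score

def authenticate(account, match_score):
    # match_score would come from a fingerprint-matching module; we fake it here.
    if match_score >= BIOMETRIC_THRESHOLD:
        return f"{account}: authenticated by fingerprint; no PIN to remember"
    # The trade-off noted above: no borrowing a partner's card, either.
    return f"{account}: biometric mismatch; access denied"

print(authenticate("checking-001", 0.97))
```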
Labels:
conceptual models,
course,
education,
id-book,
interaction design,
mental models
Tuesday, March 3, 2009
Product Concept: Cyber Cat Heating Pad
Having read the first two chapters of my interaction design textbook, I decided to begin applying the frameworks that I've learned to the numerous product ideas that float around in my head all the time. These ideas will vary in subject, scope, category, and seriousness (or rather silliness). My purpose here is not to develop commercially viable products but rather to explore and experiment with the stuff that I am learning. This is just the first of several of these explorations.
Product Description:
A cat bed with a heating pad that is internet-enabled and able to sense and share information about the cat's presence and activities on the cat bed.
Goals & Requirements
Usability Goals
- The heating pad needs to be easy to set up, and its standard functionality (e.g. turning on the heat and setting heat levels) should be available without setting up a connection to the internet application.
- The online application should be simple enough to set up without a user manual. A set-up assistant (a.k.a. wizard) or limited initial functionality may be required to help new users achieve this objective.
- The cat bed should fit a window sill (9" wide) while still accommodating an average-sized cat comfortably (20" long).
- The cat bed should provide heat and a comfortable environment for the cat. To support this objective, the bed should use a comfortable fabric, and the heat should be easily adjustable.
- The online application should give advanced users the ability to customize their virtual connection to the cat bed via visual, textual, and sound displays. The display should be customizable on multiple levels: via templates for novices, via advanced preferences for intermediate users, and via an API for advanced users (with programming/development expertise).
- The system should leverage an interface that is memorable, so that users can easily recognize how to carry out activities even after prolonged periods of inactivity.
- The system should protect users from common errors, such as accidental temperature changes (e.g. mistakenly turning off the heating pad), and limit actions for novices to simple activities related to data visualization.
User Experience Goals
- Aesthetically pleasing: What is the user's initial response to the cat bed's appearance? Is it one of enjoyment, pleasure, or dislike?
- Delightful: What is the user's response to the presence-awareness visualizations and alerts? Is it one of delight, surprise, enjoyment, or annoyance?
- Rewarding: Does the user suspend disbelief to enable a remote connection to the cat on the bed? Is the user's connection experienced as rewarding, boring, cutesy, or frustrating?
- Entertaining: In what ways does the user engage with the cat bed on an ongoing basis? Does s/he feel entertained, satisfied, safe, frustrated, or surprised?
The metaphors and analogies: the most important metaphors and analogies that can help users understand what the product can be used for and how to use it.
The metaphors:
- The main metaphor is closeness: the idea of having your pet close to you wherever you go by receiving updates about their presence and activities on the cat bed.
- Another metaphor is connection: the idea of being connected to your pet wherever you go by being able to respond to the cat's presence in a way that affects an aspect of the cat bed (e.g. temperature, light, sound).
- Customization is also an important metaphor: the idea of being able to customize how the pet's presence is communicated and how your responses are made manifest by the cat bed.
The analogies:
- The most obvious analogy is using a standard heating pad and cat bed.
- Monitoring a home webcam is a useful analogy for describing the presence-monitoring feature of the product.
- Receiving email and text messages is a useful analogy for the presence alerts.
- Setting preferences on web applications such as email clients, social networks, or feed readers is a useful analogy for the simple customization features of the system.
- Selecting templates is an analogy for using pre-created preference sets developed around a wide range of themes.
Physical Cat Bed and Related Concepts:
- The comfort layer is made up of the outer fabric and soft stuffing, which provide comfort and durability.
- The heating layer includes a safe heating solution featuring manual temperature controls that override the virtual controls.
- The electronic layer includes the sensors and processors that enable presence awareness by supporting data collection and distribution via a wireless connection.
- The protection layer ensures that all elements are kept protected from the environment (including the cat).
Online Application and Related Concepts:
- The communication module is the main user interface. It displays data visualizations, alerts, and controls. Data visualizations include real-time presence monitoring and over-time infographics. Alerts include cat zen haikus and standard updates via the website (these can also be delivered via a widget on social networks, desktops, and mobiles, or via email and SMS). The controls enable users to set the temperature of the cat bed and activate a light/laser or sound. Users can create their own cat zen haikus or use existing ones from other users and the product developer.
- The preference module allows users to input information about the pet and provides customization options for the communication module, widget, and alerts. Pet information includes name, birthday, attitude, likes, dislikes, and a picture. Communication options include visualization and channel settings for real-time and over-time event updates related to the pet's presence, position, movement, and duration. Multiple preference settings can be saved as templates and shared with others.
- The third module is the database. It receives the data captured by the physical cat bed, stores it, makes it available through an API, and transmits it via built-in channels.
- The last module is the API, which enables advanced users to develop extensions to the functionality. All cat bed sensors would be accessible via the API, and applications could be shared online.
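Since the concept calls for an open API, here is a minimal sketch of what a client might look like; the endpoint, fields, and authentication scheme are entirely hypothetical.

```python
# Hypothetical client for the cat bed's imagined REST API.
# The endpoint, fields, and auth scheme are invented for illustration.
import json
import urllib.request

API_BASE = "https://api.example-catbed.com/v1"   # placeholder URL

def get_presence_events(api_token):
    """Fetch recent presence/activity events captured by the bed's sensors."""
    req = urllib.request.Request(
        f"{API_BASE}/events?type=presence",
        headers={"Authorization": f"Bearer {api_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)   # e.g. [{"event": "cat_arrived", "t": "..."}]

def set_temperature(api_token, level):
    """Request a heat level; per the design above, the manual controls on
    the bed itself always override anything set through the application."""
    body = json.dumps({"heat_level": level}).encode()
    req = urllib.request.Request(
        f"{API_BASE}/controls/temperature",
        data=body,
        headers={"Authorization": f"Bearer {api_token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```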
Relationships between concepts:
- The cat bed comprises the four layers described above. Here is an overview of how they relate to one another: the comfort layer is connected to the electronic layer through the sensors that enable presence sensing. It is also adjacent to the heating layer, which is responsible for warming the comfort layer to provide a cozy environment for the cat. The heating layer is connected to the electronic layer so that the electronic layer can control the heat settings on the cat bed; a manual override on the bed itself ensures that heat settings can also be changed directly. The protective layer encases the main components of the electronic layer and the bottom of the cat bed.
- The application contains the four modules described above: the communication module, preferences, database, and API. Here is how they relate to one another: the preferences module is part of the communication module; it enables users to customize how the data captured from the cat bed is displayed via the application interface, widget, or alerts. The database holds the data and makes it available to the existing application and to new ones developed via the API.
- Priority of actions for the cat bed: the bed should be usable as a standard pet bed first and foremost; the next most important user action is setting and controlling the temperature of the bed (manual temperature settings override application-based ones); this is followed by the ability to activate the electronic layer and link the cat bed to a computer via a wireless connection.
- From a general use perspective, the most important actions afforded to users via the communication module are: (a) first, the ability to easily log in to his/her pet information; (b) then the ability to toggle between real-time and over-time visualizations; (c) next, the ability to change the visualization style for either type of visualization; (d) then the ability to communicate with the cat bed via sound, light, or temperature; (e) followed by the ability to access the application preferences; (f) then the ability to give a thumbs up or down to cat zen haikus; (g) followed by the ability to submit your own haikus; (h) and, last on the action priority list, sharing haikus via email, social networks, or SMS.
- From an initial set-up perspective, here are the priorities: (a) allow users to easily connect to their cat bed; (b) provide a wizard that lets users get started quickly using predefined templates, without needing to access detailed preferences.
- The most important action in the preferences module is setting up the pet information, including name, birthday, attitude, likes, dislikes, and picture.
- Next come the preferences related to visualization, which include, in priority order: (a) setting awareness preferences (what events related to the cat's presence should act as triggers for visualization purposes?); (b) setting and saving channel preferences (what channels should be used for each communication triggered by an event?); (c) setting visualization preferences (how will the communication be presented in each channel?); (d) saving awareness and channel settings into templates that can be easily swapped from the communication module; (e) setting cat wisdom preferences (activate or deactivate? select the haiku library, standard or user-generated); (f) saving cat wisdom preferences into templates; (g) sharing awareness, preference, and cat haiku templates with other users. (One possible template structure is sketched after this list.)
- Last are the preferences related to data visualization beyond the application, including: (a) settings for widget-specific preferences; (b) settings for API preferences and authentication.
- The design and construction of the bed will be guided by aesthetic, sustainability, and comfort-related considerations. The look and feel of the artifact will be designed to be aesthetically pleasing; the materials and methods used to manufacture each layer of the bed will be sustainable; and the bed will be designed to be comfortable for the cat. These principles aim to deliver a product that is pleasing on visual, ethical, and emotional levels.
- The visualizations provided by the application correspond to the different events associated with the cat's presence and activity on the cat bed. Using interesting visualization options to bring the pet's presence to life across physical distances, through both reminders and contextual awareness, will serve to delight users throughout their day.
- The controls provided by the application correspond to actions the user can take to affect the environment of the cat bed. Users will be able to choose between multiple possible actions, including but not limited to: generating a sound, activating a light, changing the temperature, and activating a webcam. These features aim to provide the user with an interactive experience that is rewarding.
- Caring details, such as the cat haikus and the open API, enable the user to interact with the data generated by the cat bed in innovative ways. These features are provided to enable the user to connect with their pet in an entertaining way.
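As promised above, here is one possible shape for a saved preferences template; every field name is invented for illustration.

```python
# One possible shape for a saved preferences template (all fields invented).
# Awareness settings pick the triggering events, channel settings pick where
# each notification goes, and visualization settings pick how it is rendered.
template = {
    "name": "workday",
    "awareness": {                      # which presence events trigger updates
        "cat_arrived": True,
        "cat_left": True,
        "long_nap": {"min_minutes": 90},
    },
    "channels": {                       # where each triggered update is sent
        "cat_arrived": ["widget", "sms"],
        "cat_left": ["widget"],
        "long_nap": ["email"],
    },
    "visualization": {"style": "infographic", "timescale": "real-time"},
    "cat_wisdom": {"enabled": True, "library": "user-generated"},
    "sharing": {"public": False},       # templates can be shared with others
}
```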
Here are some other fun, and utterly silly, ideas for cyber cat products: the cyber scratching post and the fuzz ball playing field. More to come on these (maybe).