Memory Inc. reads like a tragedy, but the results of Scoville's experiments are remarkable, as is the current work being done by Memory Pharmaceuticals. Scoville discovered by chance that the hippocampus is vital to the acquisition of new memories when he psychologically maimed a patient while trying to cure his epilepsy. Since then, a great deal of research on the biological basis of memory has been conducted, and much has been learned. Currently, a company called Memory Pharmaceuticals is working on a drug that would enhance synapses in the brain to fortify and restore memories. The ethical, evolutionary, and societal implications of such a drug are mind-boggling, but the possibility of it even existing is amazing.
Chipped is the tale of Moniz, a horrible, ambitious man who ran around poking holes in people's brains to see what would happen. He is the father of psychosurgery, a practice now outlawed in several countries and a few states. The present-day incarnation of psychosurgery is psychopharmacology, whereby psychological disorders are treated with chemicals meant to alleviate the patient's suffering. Both of these practices, despite their successes, are crude and lacking in specificity, and therefore lacking in scientific grounding. The drugs we take and give to our children, such as Zoloft, Prozac, Adderall, and Ritalin, affect the brain in ways that are largely unknown. Little to no research has been done on the long-term effects of these medications, either physiological or psychological.
Tuesday, November 13, 2012
Reading: Skinner 7 & 8
Rat Park was an experiment conducted by one of Harlow's students, Alexander. These experiments studied the nature of addiction and challenged the long-held belief that substances such as morphine and heroin are innately addictive and irresistible. Alexander questioned this based on his own observations and conducted Rat Park to find the truth. Amazingly, rats not only avoid mind-altering drugs when in social situations but will even avoid these drugs after being forcibly "addicted". This led to the rise of the view of addiction in which drugs are used as a coping mechanism, a view that I personally agree with based on my observations.
Lost in the Mall echoed many things I've learned in previous classes, but was no less disturbing to read than it always is. Personally, I can't help but wonder who I'd be without my memories and experiences, and to think that the building-blocks of my personality could be fabricated so easily is troubling. I imagine that the constant bleeding of memory and imagination that occurs within our minds is a huge contributing factor to the amazing ideas and inventions that humans are capable of, but at what cost? I think that the average person is better off the way they are, faulty memory and all. The real problems occur when unethical people leverage this weakness in cognition to brainwash people and convince them of lies.
Thursday, November 8, 2012
Reading: Skinner 5 & 6
Chapter 5 covered Festinger's theory of cognitive dissonance, a theory still held by many cognitive psychologists today. Cognitive dissonance occurs when a person's internal beliefs and predispositions are placed in opposition to their external observations and influences. An example of this would be a person, Bob, witnessing another man walking on water. If Bob is Christian or otherwise believes in miracles, this is a completely "normal" observation. However, if Bob is a man of science, seeing such an event causes a state of tension (dissonance) which drives him to formulate some kind of explanation for this event that fits his way of thinking. Perhaps the water is shallow, or maybe the man is using some new kind of technology. Cognitive dissonance explains why people NEED to rationalize their actions. The most interesting thing that I take away from this chapter is the Insufficient Rewards Paradigm, which reveals that changing a person's thinking is as easy as giving them a dollar to do something they don't really believe in, and then questioning their reasoning.
Chapter 6 was brutal. It discussed Harlow's experiments with monkeys, which led to some interesting discoveries at an extreme cost. Harlow used monkeys to study the nature of nurture (lol) and did so by placing infant monkeys in traumatic conditions, subjecting them to torture, and constructing devices like the "Rape Rack" and the "Well of Despair". The results made significant contributions to child psychology and got Harlow elected president of the American Psychological Association, but at what cost? Harlow's actions spurred the animal rights movement into questioning the ethics of using animals to further science. I personally find Harlow's treatment of his monkeys appalling and am convinced that he must have been disturbed in some way; however, I am grateful for the advances in medicine that animal trials have made possible. Although I could never conduct animal research myself, I do think it is necessary, within strict bounds and under ethical scrutiny. What Harlow did was far outside those bounds.
Reading: Skinner 3 & 4
Chapter 3 focused on Rosenhan's experiment involving psychiatric diagnoses. Rosenhan felt that people were often institutionalized for much longer than appropriate, based on his theory that once you attach a diagnosis to a patient, all of that patient's actions are interpreted with a bias. He conducted an experiment in which he and 8 confederates presented themselves at separate psychiatric institutions claiming to hear a "thud", to see how the institutions responded. They were instructed to behave normally otherwise and to claim to be cured once institutionalized. The results were disappointing and revealed several flaws in the methods of psychiatric care. Even today, psychiatry is considered a "soft science" and is riddled with pop psychiatrists.
Chapter 4's topic was the infamous Genovese case, discussed in many books due to the uncomfortable reality it sheds light on. Genovese was murdered slowly, over the course of 35 minutes, screaming for help and drawing the attention of 38 witnesses. Despite all of this, nobody called the police until long after Genovese was dead. The case drew a torrent of media attention and prompted Darley and Latane to conduct a series of experiments that revealed several interesting facts. Most notable was the phenomenon of diffusion of responsibility, whereby increasing the number of people witnessing an event drastically decreases the likelihood that any one of them will intervene, because nobody feels directly responsible enough to act.
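Diffusion of responsibility even has a simple quantitative flavor. As a toy illustration (my own sketch, not a model from the book, and the probabilities are made up), suppose a lone witness would act with some probability, but felt responsibility gets split evenly across the whole group:

```python
# Toy model of diffusion of responsibility. The numbers are
# hypothetical; this is not Darley and Latane's actual data.
# Assume a lone witness acts with probability p_alone, and that
# with n witnesses each one's felt responsibility (and thus their
# chance of acting) is diluted to p_alone / n.

def chance_anyone_acts(p_alone: float, n_witnesses: int) -> float:
    """Probability that at least one of n witnesses intervenes."""
    p_each = p_alone / n_witnesses  # responsibility diffused evenly
    # Complement of "every single witness stays passive".
    return 1 - (1 - p_each) ** n_witnesses

# The chance that *anyone* acts falls as the crowd grows.
for n in (1, 2, 10, 38):
    print(n, round(chance_anyone_acts(0.9, n), 3))
```

Under this toy assumption the group never fully compensates for the dilution: 38 witnesses are collectively less likely to produce a single intervener than one witness alone, which matches the flavor of the experimental findings.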
Tuesday, November 6, 2012
Reading: Skinner 1 & 2
I'm not a fan of Slater's writing. The topic of the book is Skinner, which is a great subject, but her introduction seems wildly overdramatized. Skinner was a behavioral psychologist who expanded upon Pavlov's theories of classical conditioning with a new theory: operant conditioning. Operant conditioning changed the way psychologists thought and led psychology toward becoming a more respectable field, one of science instead of philosophy. Slater's portrayal of the facts is accurate(-ish), but her anecdotal additions damage her credibility and the readability of the book.
Wednesday, October 31, 2012
Obedience to Authority
Chapter Remarks
- The book begins by discussing the nature of obedience. This chapter serves as a good preface for the topics that follow.
- This chapter outlines the initial experiment in detail. Pictures are provided to illustrate the various apparatus and to give the reader a glimpse at the learner, which really aids in visualizing the experiments.
- This is a short chapter that looks at different populations' predictions of the experiment's outcome. All predicted that the experiment would end around 150V and estimated that less than 1% of people (sociopaths) would continue to 450V.
- This chapter discusses experiments exploring the effect that the closeness of the victim has on disobedience. Unsurprisingly, the closer the teacher is to the learner, the more likely they are to disobey. When the teacher has to physically force the learner to be shocked, obedience drops to 30%.
- This chapter discusses the results and experiences of a selection of participants from the first 3 experiments (focusing on proximity). The tension experienced by some of the participants was extremely evident. I'm glad that Milgram addressed the specific experiences of a few dissenters to round out the discussion.
- After experimenting with proximity, another series of experiments was conducted focusing on the learner's reaction to the shocks, the teacher's relationship to the authority, and the location of the experiment. The results were interesting; most notable was that teachers, when given the opportunity to choose what level of shock to administer, consistently chose very low levels.
- This chapter discusses individual experiences in experiments 5-11. Much like in chapter 5, this chapter serves to help the reader visualize the experiments. The anecdotal evidence also makes the book more interesting and sets it apart from a research paper.
- This chapter discusses experiments that vary the position of the authority in relation to the teacher and other authorities. This chapter provided a lot of the insight Milgram uses in his later discussions.
- The final experiments place the participant in a group of teachers. These experiments revealed that disobedience is easier when the teacher can follow another's lead, and that participants have no problem being silently complicit in another teacher's abuse of the learner.
- This chapter begins Milgram's academic and theoretic discussions on obedience. This chapter addresses the nature of hierarchy and the agentic shift in thinking that the teachers experienced during the experiments.
- This chapter discusses how people learn obedience early on and how society reinforces it by integrating people into hierarchies via promotions, rewards, and punishment. He also further elaborates on the agentic state and its properties.
- This chapter, titled "Strain and Disobedience", discusses how some participants were able to overcome their agentic state and assert their individuality through great psychic struggle. It also discusses coping mechanisms such as avoidance, denial, and subterfuge.
- In this chapter Milgram introduces and argues against an alternative theory built on aggression. This theory stands on Freudian principles and is refuted by several experiments.
- This chapter addresses criticisms of Milgram's experiments and theories. His defense is clearly composed and built on evidence from his experiments and others. Personally, I agree with Milgram's analysis and think that his reasoning is sound.
- This chapter draws the book to a close by refocusing on Nazi Germany and how people behave in the real world. The interview with the veteran that participated in the execution of innocent people during Vietnam was troubling and ended the book on a strong note.
Book Reflection
I enjoyed reading this book quite a bit. I was pleased to find that it was much more than a recap of the infamous Milgram experiments, which I've been familiar with for some time.
The first two thirds of the book discuss in great detail the initial experiment and the follow-up experiments it inspired. The initial experiment revealed that people are willing to obey an authority to an alarming extent, even past the point at which they would be comfortable acting of their own free will. Predictions made by professors, students, and psychologists all suggested that the average person would cooperate only to a small extent, and that only a sociopath would be willing to subject another human to 450V for the purpose of scientific experimentation.
Obviously, results this unexpected, not to mention concerning, warranted further investigation. Milgram's first round of follow-up experiments focused on the relationship between the teacher's proximity to the "victim" and the teacher's obedience to the authority. These experiments revealed that moving the victim closer to the teacher increases disobedience by making the teacher's actions more explicit and more subject to their personal scrutiny. The experiments that followed revealed that women are as obedient as men, that the reputability of the locale contributes to the power of the authority, that conflicting authorities undermine obedience, that participants are more obedient when they don't feel responsible for their actions, and various other findings.
The real contribution Milgram made by writing this book is his discussion, in the last third, of the theories he formed from these results. Milgram's theories center on his idea of the agentic state. Milgram proposes that people can function independently or as part of a hierarchy, the latter referred to as the agentic state. He asserts that this is an entirely separate state of mind, based on the way participants behaved in contrast to their individual ideals and preferences.
Overall, Milgram's experiments were well conducted and his theories well formed. I agree with his analysis and enjoyed reading his book.
Thursday, October 18, 2012
Gang Leader for a Day
I enjoyed this book quite a lot. The first chapter provides a good introduction and captures the reader's attention very well. The author's portrayal of the characters makes it easy to keep in mind that they represent real people. Chapter 2 chronicles the early days of his study; the author's accounts of the projects contradicted a lot of what I previously assumed was true and supplied one revelation after another. The incident with C-Note was a sobering moment and a good reminder of the reality and consequences of the study.

Chapter 3 provided insights into the roles of other community members and marked a shift in the focus of the study, which had previously centered almost exclusively on the Black Kings and their leader. It also introduced the idea that there might be something more to gang life than simple drug dealing by focusing on the Black Kings' and Lenny's voting movement. Chapter 4 was very enlightening, as it should be given that it became the book's namesake; a day in the life of a gang leader is a fascinating topic in its own right. Chapter 5's shift to Ms. Bailey was fascinating as well, and the constant struggle for power between Bailey and JT made for an entertaining, thought-provoking read.

Chapter 6 really drove home the fact that Sudhir's involvement had consequences not only for himself but for many of the people in Robert Taylor. This chapter was another sobering moment and one of the defining features of the book. I found chapter 7 distressing; like Sudhir, growing up the way I did gave me a romanticized notion of police, and this chapter was a very disenchanting experience. The last chapter was pretty dull. It read like an epilogue and was anticlimactic, but I suppose that is the nature of research.
Tuesday, October 16, 2012
Ethnography Idea
My idea is to study Pentecostal churches. I got the idea from a coworker, whose grandparents are Pentecostal. Since I've known him I've heard several anecdotal tales of his grandparents' "crazy, extremist ideas". While Pentecostals are Christian, and I am familiar with Christianity, Pentecostals have a wide variety of views and practices that I am unfamiliar with that warrant study. Furthermore, my previous exposure to Christianity will help to provide a context with which to compare Pentecostal views.
Something that another group might do is study smokers. I have a lot of friends that smoke, and they are part of a very robust subculture that crosses student, faculty, and staff boundaries. Smokers are very social, and with the decrease in the popularity of smoking, most regular smokers become acquainted with each other quickly.
Tuesday, October 9, 2012
Assignment 7
Secret Life of Pronouns + Article
The idea that pronouns reflect who we are and how we think is a very interesting proposition. Given that pronouns position you with respect to the world around you, it's definitely plausible, but I never considered that sort of analysis especially accurate. Minoring in psychology, I was familiar with the fact that writing and communicating about traumatic experiences has a wide range of health benefits; I didn't, however, know that this line of study originated with James Pennebaker. In Pennebaker's analysis of writing, three important findings emerged:
- the importance of positive emotion: using positive language increases improvement in physical and emotional health
- the importance of constructing a story: an increase in cognitive language over time demonstrates growth and understanding, which contributes to improved health
- the importance of changing perspectives
I find this book extremely interesting, both as a computer scientist and a psychologist. There is a lot of potential for the research discussed in the book and I feel that there is definitely something to take away from it.
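As a computer-science aside, the core mechanic behind this kind of analysis is just counting words by category and tracking the rates over time. Here is a minimal sketch of my own; the tiny word lists are hypothetical stand-ins for the real LIWC dictionaries Pennebaker's group uses:

```python
# LIWC-style category counting: the fraction of a text's words that
# fall into each category of interest. Word lists are toy examples.
import re

POSITIVE = {"happy", "glad", "hope", "love", "better"}
COGNITIVE = {"because", "realize", "understand", "think", "therefore"}

def category_rates(text: str) -> dict:
    """Return the fraction of words falling in each category."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty text
    return {
        "positive": sum(w in POSITIVE for w in words) / total,
        "cognitive": sum(w in COGNITIVE for w in words) / total,
    }

# An early journal entry vs. a later one: the later entry should
# show a higher rate of cognitive ("because", "realize") language.
early = "i was hurt and confused and angry"
late = "i realize i feel better because i understand what happened"
print(category_rates(early), category_rates(late))
```

Rising cognitive-word rates across successive writing sessions are the kind of signal the book associates with growth and improved health.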
Video + Observations
Creating our video was interesting for one reason in particular: nobody seemed to notice our camera. We used the lid cam, so it wasn't particularly obvious, but even the cashier didn't seem to notice its presence. This, to me, indicates the massive amount of information hiding in the real world that could be revealed with even a little bit of observational awareness. In creating my ethnography, I will focus on the use of language, how people orient themselves with respect to other people and their surroundings, the use of formalities, and any unusual protocols accepted by the group as a whole. Also interesting would be the level of acceptance that the group in question has with regard to other groups and individuals.
Tuesday, October 2, 2012
Homework 4: Ethnography Articles
What is ethnography?
An ethnography is a systematic study of a human culture that aims to capture the perspective of the typical member of that culture. Culture is a vast and complex subject of research - it permeates every aspect of social interaction. An ethnographer should study the institutions, customs, codes, and behaviors without becoming subject to their influence - a difficult task.
Ethnography
An ethnography is a means of representing, graphically and in writing, the nature of a people. Typically, ethnographies are conducted by anthropologists and sociologists who work as participant observers, bound to strict ethical codes.
Data collection aims to capture the social meanings and ordinary activities of people in naturally occurring settings in a manner that minimizes the amount of bias imposed on the data. Reflexive researchers aim to explore their influence on their research as a means of maintaining transparency. Participation is key to data collection.
The ethics of ethnographic study are of great concern. The classic virtues of the ethnographer are kindness, friendliness, and honesty. Additionally, ethnographers should aim to be candid, chaste, fair, literary, precise, observant, and unobtrusive.
Coming of Age in Samoa
Book by Margaret Mead (American anthropologist) based on ethnographic research of youth on the island of Ta'u in Samoa. A widely popular text in the field of anthropology and a key text in the nature vs. nurture argument. Mead sought to answer the question: "Are the disturbances which vex our adolescents due to the nature of adolescence itself or to the civilization?" She concluded that the transition from childhood to adulthood in Samoa was smooth and stress-free, and hypothesized that this was a result of the stable, open, monocultural society Samoan youth grew up in.
This book upset many Westerners when released in 1928 - Samoan ideas of proper female behavior were very unlike the ideals of the typical Westerner. Additional controversy arose when Derek Freeman challenged Mead's observations and data. He was eventually regarded as misguided, and it is speculated that he waited to publish his book until after Mead's death so as to deny her the opportunity to respond.
Emotional Design vs Design of Everyday Things
I found Emotional Design much more interesting than Design of Everyday Things. Emotional Design's focus on aesthetics was a much more novel study. I think Design of Everyday Things focused too much on models to explain why designs were good or bad and how people think. Emotional Design opens with the results of scientific studies and continues to rely on research; models are nothing more than conjecture without evidence and research. As a psychologist, I enjoyed Norman's analysis of cognitive processing, and breaking it into visceral, reflective, and behavioral components was a novel approach. The application of his processing model to creativity, affect, and the effects of aesthetics was more valuable to me than the generalities proposed in Design of Everyday Things. I think that in many cases, although perhaps not for corporate-aimed products, the perception of ease and efficiency is more important to users than hard facts. A user's gut feeling about their experience with a device does more to keep them coming back than efficiency measures could. I think this is evident in Apple's popularity and design principles. Apple designs products to be aesthetically pleasing and as simple as possible, at the cost of hampering power users. Despite this, Apple's products are wildly popular, because most people aren't power users.
Thursday, September 20, 2012
Good Design Examples
Firstly, I would like to begin with the spoon. It is a device that affords scooping and stirring, and it never has to be explained or troubleshot. It is simple, cheap, and can be made from a wide variety of materials. It hasn't been changed in years because there is nothing to change. The spoon is perfectly designed.
Next, consider the microwave. It provides audible feedback when entering input and once food is ready. It uses an interlock forcing function to ensure the user's safety by not operating the microwave emitter when the door is ajar. Buttons are clearly labeled and have one-to-one control mapping. The microwave's only flaw is feature creep, which can be avoided by shopping around and has yet to interfere with good mappings.
The consumer fan also features good design elements. Physical constraints ensure the safety of the user and indicate that the fan's operation doesn't involve interaction with the blades. There is a single knob to control fan speed, which operates as standardized with the off position located next to the highest fan speed position. The fan does nothing more than what is expected, and nothing less. Another humble, well designed device.
Something I was immediately grateful for when I ordered my Kindle was its packaging. It was functional - the Kindle arrived in mint condition, but was also both aesthetically pleasing and easy to open. It is obvious to the user how to open the package due to the tab placed next to the dotted-line, which everyone knows symbolizes perforation. Once the tab is pulled the lid easily opens to reveal a Kindle, nested in an aesthetically pleasing box with the charging cord and the instructions, which consist of 3 simple steps, placed on the Kindle's screen - impossible to miss.
Lastly, I would like to present a packaging concept for Coke that I stumbled onto a few years ago. The bottle affords portability, drinking, stacking, and advertising. The most notable of these features is stacking, which most bottles don't afford. Stacking coke bottles would allow for easier shipping and handling, and the new shape makes the bottles, and their logos, stand out. By offsetting the lid the new shape still affords the user the ability to drink easily.
Bad Design Examples
I have a pair of Tenqa Remxd bluetooth headphones that I really enjoy, especially considering the $40 price tag; however, the initial process of pairing them with my iPhone took about 45 minutes. These headphones are poorly designed primarily due to poor mapping and a lack of feedback. The headphones are paired by holding down the play button and waiting for the LED to flash. This is impossible to determine without an instruction manual and, furthermore, doesn't always work. As it turns out, the play button has to be held down prior to turning the headphones on and kept held down until the device flashes. Additionally, the flashing that indicates the headphones are discoverable is an alternating red and blue flash, which is similar to a blue-only flash that occasionally occurs for reasons I cannot determine.
The other day I discovered a new Monster energy drink called Ubermonster. The bottle for this drink is extremely poorly designed - it looks like a twist-off cap, but it isn't. Also, most bottle openers are too small to accommodate this lid. I opened mine using a pair of pliers. It took about 5 minutes to open, and I sliced my finger open in the process. As a side note, this drink also appears to be alcoholic, based on the design of the bottle and the advertising featuring "advanced brewing technology". Googling Ubermonster indicates that many, many people are confused by this bottle's appearance and that my frustrations with the lid are not isolated.
These are the exit doors in the basement of the Psychology Building. I can never open them correctly, which I now know has to do with a lack of visibility. This example isn't exactly unique, but I found it relevant due to my personal experiences with these doors. Shortly after I took this picture a girl confidently pressed on the door, smacked her head, and dropped her laptop down the stairs. She was fine, but her laptop didn't survive the fall. Another casualty of poor design.
This is a coworker's mouse that I was attempting to fix at work today. Other than being generally uncomfortable, my primary criticism of its design is that it doesn't afford the user the ability to turn it off without pulling the battery. I found this extremely unusual, and so did my coworker, whose computer did a variety of inconvenient things due to involuntary mouse clicks that occurred while I was examining it. This is a significant design flaw, as wireless mice are dependent on their battery to function, and the best way to conserve battery life is to turn the device off when it isn't in use.
Lastly, I address the issues with the iPhone's design. Yes, despite being one of millions of users who depend on the iPhone to function, its design is not perfect. The particular problem I have with the iPhone concerns mapping and affordances. Firstly, Apple's insistence on minimizing the number of hardware buttons means that the home button is responsible for accessing the home screen, activating Siri, and, little do most users know, performing a hard reset. The issue of the hard reset is my primary concern. When a phone freezes and no longer responds to a soft reset, most users respond by pulling the battery, something that the iPhone doesn't afford users the ability to do. Instead, iPhone users must perform a hard reset, a task that most users aren't aware exists and that is mapped to controls that nobody would think to try. A hard reset is performed by holding the sleep button and the home button simultaneously for roughly 10 seconds.
Design of Everyday Things: Overall Summary
The Design of Everyday Things can be summarized trivially by concatenating my previous blog posts, so I am choosing to use this entry to discuss my thoughts on the book and what I’ve taken away from it.
This book made me think about design in a completely new way. I’ve always known that a product’s success or failure hinged heavily on design, but I’d never before considered designing a product or system analytically. Design makes an impact not in what you notice, but in what you don’t notice. It is a very subtle art that evidently eluded me prior to reading this book. I’ve learned that a product’s design should aim to be as intuitive as possible by utilizing constraints and natural mappings. Both of these forms of communicating function to the user operate on a subconscious level, guiding the user without them becoming aware. Nobody explicitly thinks about the driver’s seat being located on the left or a floppy disk not fitting in an optical drive; these things are obvious to us because humans rely on previous experience and physical constraints to reason about the world. We are naturally hardwired to take these types of things into account, and therefore, they don’t impede our thought processes or interrupt our flow.
This book also opened my eyes to the processes involved in learning a new system and the various types of pitfalls that we encounter frequently across a wide variety of devices but attribute to human error. People depend on their perceptions to interpret events, which often leads to the misattribution of causality - people blame themselves, or the software, when the real issue is a hardware problem, for instance. The book analyzes decision-making using the Action Cycle, Stages of Evaluation, and Stages of Execution, all of which are combined into the Seven Stages of Action. These models help to identify two primary sources of error that result from poor design: the Gulfs of Execution and Evaluation. The Gulf of Execution is the gap between user intentions and actions allowable by the system, and the Gulf of Evaluation is the gap between a user’s interpretation of the system and how well the intentions have been met. Errors occur more frequently when these gulfs are large, primarily due to a lack of visibility and feedback. Visibility is a design principle aimed at improving a user’s ability to identify available actions, and feedback is important because it informs the user of the state of the system and the effects of their actions.
I definitely feel that reading this book has made me a better programmer by informing me of the aspects of design outside the scope of software. Knowing how a user thinks about systems and formulates decisions is a valuable asset when designing for usability. Additional topics that were discussed at varying length that I found particularly interesting were forcing functions, the reversal of design principles for increasing task difficulty, and the use of information in the world to remind and cue user behavior.
Design of Everyday Things: User Centered Design
Design should enable the user to figure out what to do and to figure out what is happening by:
- Making it easy to determine what actions are possible by using constraints
- Making the conceptual model, alternative actions, and results visible
- Making it easy to evaluate the state of the system
- Following natural mappings
Additionally, the principles of design are:
- Use knowledge in the world and in the head
- Knowledge in the world is useful if it is natural and easily available
- Knowledge in the head is more efficient and therefore, design should not impede experienced users
- Simplify
- Minimize the amount of planning needed to complete a task
- Understand the limits of human memory
- Use technology to enhance visibility
- Provide mental aids
- Make things visible
- The user should know what is possible and how to carry an action out
- Actions should match intentions
- System states should be readily apparent
- Use good mappings
- Relationships between intentions, actions, effects, and states should be intuitive and natural
- Take human factors into account
- Exploit constraints
- Use real and imagined limitations to guide users' actions
- Design for error
- Understand that error is inevitable
- Make actions reversible and irreversible actions difficult to initiate
- When all else fails, standardize
- When good mappings aren’t possible, use standardized mappings
- Users are trained to recognize standards
- Standardizations are an extension of cultural constraints
Principles and practices of good design can be manipulated to make tasks that should be difficult, difficult. Guns should not be readily accessible, some doors shouldn’t be opened, etc.
Design of Everyday Things: The Design Challenge
The design process involves testing, modification, and retesting, which necessitates that the item in question be relatively simple and the craftsmen flexible. This process is known as hill-climbing. This process is hampered by time constraints and the desire for individuality - everyone wants their product now and they want it to be unique. These goals prevent designs from benefitting from their previous iterations.
There are also other considerations that affect design. Designing for usability affords comfort at the expense of aesthetics, and designing for aesthetics negatively impacts comfort and efficiency. When cost considerations dominate, comfort, aesthetics, and durability all suffer. All three considerations must be carefully balanced to create a good design. This is a difficult prospect when you consider that designers are “professionals” who are often judged by their colleagues based on aesthetics, clients are typically focused on cost, and the users’ desire for usability is often not met.
Wednesday, September 19, 2012
Design of Everyday Things: To Err is Human
Humans are prone to error, which can be broken into two categories. Slips are errors resulting from automatic behavior and mistakes result from conscious decisions. Slips are typically minor, easily identified errors, whereas mistakes are much more difficult to detect and can be major events.
Slips can be further broken down into capture errors, description errors, data-driven errors, associative activation errors, loss-of-activation errors, and mode errors. Capture errors occur when habitual actions override intended actions, such as finding yourself driving to work on Sunday instead of to church. Description errors occur when an intended action is similar to other available actions and a lack of specification results in a slip. This type of error is most frequent when right and wrong choices are in close proximity. Data-driven errors occur when extraneous data influences our actions, for example, typing a phone number instead of a credit card number shortly after calling someone. Associative activation errors occur when external data triggers an incorrect action - saying "come in" instead of "hello" when answering a phone. Loss-of-activation errors occur when we forget what we are doing. Lastly, mode errors occur when devices have contextual operations and we operate in one mode thinking we are in another. Design can be improved by taking slips into account - minimizing them and providing adequate feedback and correction when they inevitably occur. Minimizing slips can be done by differentiating choices and requiring confirmation. Another good design consideration would be the elimination of irreversible actions.
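The confirmation idea above is easy to make concrete. Here is a tiny sketch (my own example, not from the book) of guarding an irreversible action behind an explicit confirmation phrase, so that a slip alone can't trigger it:

```python
# Hypothetical example: an irreversible action guarded by a confirmation
# phrase. A mere slip (calling the function absentmindedly) cannot
# trigger it; the user must deliberately supply the phrase.

def empty_trash(confirm: str = "") -> str:
    """Permanently delete files only when explicitly confirmed."""
    if confirm != "EMPTY":
        return "aborted: pass confirm='EMPTY' to proceed"
    return "trash emptied"

print(empty_trash())          # the default call safely aborts
print(empty_trash("EMPTY"))   # the deliberate call proceeds
```

Requiring a typed phrase rather than a yes/no click also differentiates the choices, addressing description errors at the same time.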
Mistakes result from choosing inappropriate goals - poor decision-making, misclassification, or lack of information. Humans make decisions based on expectations and prior experience rather than logical deduction. Previous chapters have already explored the problems associated with memory, so it is no big surprise that people make so many mistakes. Another theory of cognition is the neural net approach. This theory uses the structure of the brain to conclude that human cognition is based on activation and inhibitory signals that travel along neurons through the brain. Thoughts are represented by stable patterns of signal activity.
Tasks can be structured into models for analysis (data structures). Turn-based decision-making games such as tic-tac-toe can be modeled using a decision tree - the author refers to this type of structure as wide and deep. Simpler sets of data, such as a menu, can be represented using a list - this is a shallow structure. An example of a narrow structure is a cookbook recipe, where there are few alternatives, resulting in a decision tree that is narrow and deep.
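The wide/deep versus narrow/shallow distinction can be sketched in a few lines of Python (the class and the example trees are my own, not from the book): width is the branching factor of the tree, depth is the length of the longest decision sequence.

```python
# A minimal sketch of the structures Norman describes: each node in the
# tree is one decision point, its children are the alternatives.

class DecisionNode:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []   # alternatives at this step

    def width(self):
        """Maximum branching factor anywhere in the tree."""
        if not self.children:
            return 0
        return max(len(self.children),
                   max(c.width() for c in self.children))

    def depth(self):
        """Longest sequence of decisions from root to leaf."""
        if not self.children:
            return 1
        return 1 + max(c.depth() for c in self.children)

# A menu: one decision among many alternatives -> wide and shallow.
menu = DecisionNode("entree", [DecisionNode(d) for d in
                               ["soup", "salad", "pasta", "steak"]])

# A recipe: many steps, few alternatives -> narrow and deep.
recipe = DecisionNode("mix", [DecisionNode("bake", [DecisionNode("cool")])])

print(menu.width(), menu.depth())      # 4 2
print(recipe.width(), recipe.depth())  # 1 3
```

A full tic-tac-toe tree would be both wide (up to 9 alternatives per turn) and deep (up to 9 turns), which is what makes it harder to reason about than a menu or a recipe.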
Behavior is often thought of as conscious, but much of it is subconscious. Subconscious thought is based on pattern recognition and generalization. By analyzing trends the subconscious is able to guide behavior quickly and efficiently, but perhaps not as accurately as we might like. Conscious thought is slow, laborious, and relies on STM, which we know is very limited and subject to flaws.
Mistakes are very hard to identify, especially when they are a result of misinterpretation. Furthermore, understanding of an event before and after it occurs can be drastically different. Another factor to consider is the role of social pressure. The perception of pressure leads to misunderstanding, mistakes, and accidents.
The author discusses 4 things that designers can do to design for errors:
- Understand the causes of error and design to minimize those causes
- Make actions reversible whenever possible and make irreversible actions difficult to carry out
- Make errors easily discoverable and correctable
- Change the attitude toward errors. Think of actions as approximations of what is desired.
Forcing functions address errors by constraining users' actions. An interlock forces operations to take place in the proper sequence (a microwave cannot function with the door open). A lock-in keeps an operation active when appropriate (soft power functions as opposed to hard switches). Lockout devices prevent actions, such as safety rails that prevent accidental death. Forcing functions are very effective but almost universally hated by users. People don't like constraints, even if the constraints are in their best interests.
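The microwave interlock can be sketched directly in code (the class and method names here are my own illustration): the dangerous operation simply cannot be invoked while the door is open, and opening the door halts it.

```python
# A sketch of an interlock as a forcing function: starting the oven is
# physically impossible in the wrong state, rather than merely warned
# against.

class Microwave:
    def __init__(self):
        self.door_open = False
        self.running = False

    def open_door(self):
        self.door_open = True
        self.running = False   # interlock: opening the door halts heating

    def close_door(self):
        self.door_open = False

    def start(self):
        if self.door_open:
            raise RuntimeError("interlock: close the door first")
        self.running = True

oven = Microwave()
oven.close_door()
oven.start()       # allowed: door is closed
oven.open_door()   # heating stops the moment the door opens
```

The key design point is that the constraint lives in the mechanism itself; the user doesn't need to remember the rule for it to hold.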
Monday, September 17, 2012
Design of Everyday Things: Knowing What to Do
This chapter begins with an experiment involving Legos, in which a 13-piece motorcycle is constructed with no prior information or guidance, based purely on the physical, semantic, and cultural constraints of the set. Physical constraints limit possibilities - a square peg cannot be placed in a round hole. No special training is required to understand physical constraints because they are governed by the world; however, their effectiveness is determined by the ease with which they can be determined and interpreted. Semantic constraints rely on the meaning of the situation - in this case, there is only one sensible position for the driver. Cultural constraints rely on accepted conventions - signs should be visible, screws are tightened clockwise. In this case, the "Police" sign on the motorcycle should be placed right-side-up and in a visible location, and the clear yellow brick is obviously a headlight, as is the convention. Logical constraints applied to the construction of the motorcycle include the imperative that all blocks be used with no gaps in the final product. Natural mappings work by providing logical constraints.
Next the author discusses several examples of poor design. He begins by re-addressing the door situation presented in chapter 1. Then, he moves on to discussing switches, which frequently lack a logical mapping and grouping. These discussions are conducted in the context of mapping and constraint principles discussed in previous chapters.
The principle of visibility states that relevant parts should be made visible, and the feedback principle states that actions should have an immediate and obvious effect. Visibility allows users to infer how an object is manipulated and what these manipulations can be expected to produce. Feedback allows users to learn through trial-and-error and reduces misattributions of causality and misconceptions. Feedback can be given visually or audibly. In fact, most sounds made by devices aren't made out of necessity; they are made to inform the user that an event has occurred (such as the shutter sound made by a digital camera).
The Design of Everyday Things: Knowledge in the Head and in the World
This chapter begins with the shocking realization that people's knowledge and behavior are not always equivalent - for example, a typist can type with speed and accuracy without being able to arrange the keys on a keyboard. The author attributes this phenomenon to the following: information in the world is combined with knowledge in the head to produce behavior; precision is not required - the correct choice need only be differentiated from the others; natural constraints provide limits; and cultural constraints provide guidance. Furthermore, people possess two kinds of knowledge: knowledge of (explicit memory, declarative knowledge) and knowledge how (implicit memory, procedural knowledge).
Memory is divided into short-term memory (STM) and long-term memory (LTM). In computing terms, STM is analogous to RAM and LTM to a hard disk. STM is limited to 5-9 segments - these segments need not be individual characters; a technique called chunking can be used (ex. a phone number is typically 7 digits in 3 chunks) - and STM is very volatile. LTM contains information that takes longer to retrieve but isn't as easily forgotten. The author categorizes memory: memory for arbitrary things (rote learning), memory for meaningful relationships, and memory through explanation. Memorizing arbitrary things is difficult because of a lack of clues - no context. Memorization based on relationships is significantly easier, giving constraints and structure to limitless possibilities. The best form of memorization involves understanding. This allows a person to reconstruct the knowledge to be remembered using procedural memory. This is why mental models are so valuable.
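Chunking is easy to demonstrate in a few lines of Python (my own example): the same ten digits are far easier to hold in STM as three chunks than as ten independent items.

```python
# Illustration of chunking: a US phone number is recalled as three
# chunks (area code, exchange, line number) rather than ten digits.

raw = "9795551234"
chunks = [raw[:3], raw[3:6], raw[6:]]

print(chunks)       # ['979', '555', '1234']
print(len(raw))     # 10 items to hold unchunked
print(len(chunks))  # 3 chunks -- comfortably within the 5-9 limit
```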
In addition to memory, knowledge also exists in the world. One such form of knowledge is reminders, which consist of a signal and a message. Another way the world can communicate is with natural mappings, such as with burners on a range. There are tradeoffs associated with how knowledge is stored. Memory requires learning and is not readily retrievable, but it is very efficient and doesn't rely on clues. Knowledge in the world doesn't require as much overhead, but it is dependent on the environment.
Design of Everyday Things: The Psychology of Everyday Actions
This chapter addresses the psychological considerations behind good design. People instinctively explain their surroundings, with or without adequate knowledge to do so. This results in misconceptions and misattributions of causality. The author illustrates these concepts using examples such as an A/C thermostat and a colleague's computer problems. Many people think of a thermostat as a valve or a timer, and that setting the temperature higher or lower than intended can speed up the heating/cooling process. This is a misconception - a false mental model. The colleague's computer troubles resulted from a misattribution of causality. He thought that a program was causing his terminal to fail, when the real culprit was a hardware problem. When problems such as these occur, people are apt to blame themselves and become frustrated.
The chapter goes on to discuss the stages of action: perception, interpretation, evaluation, goals, intention to act, sequence of actions, execution of sequence. These stages form an approximate model and a continual feedback loop into the world. The loop can be started at any point, and people don't always behave logically with well-formed goals. These stages serve to aid design by re-emphasizing the principles of good design: visibility, good conceptual model, good mappings, and feedback.
Thursday, September 13, 2012
The Chinese Room
John Searle’s experiment addresses the question of whether a machine can be programmed to literally “understand” a concept (he called this type of programming strong AI) or if the best a machine can do is only simulate “understanding” (weak AI). The experiment involves a closed room in which Chinese characters are entered through a slot and a man uses a procedure (program) to generate a response to the input without knowing Chinese (the procedure is written in English, or memorized). Since the man doesn’t actually understand Chinese, even though to an outside observer he appears to, Searle asserts that this scenario translates to computers as well and that the apparent “understanding” demonstrated by an AI is only a simulation at best.
Searle responds to several criticisms in his paper. The reply I found most compelling (probably due to my psychology minor and its emphasis on cognitive processes) was the brain simulator reply. This reply argues that Searle’s experiment should have been redesigned by having the man manipulate valves that are mapped to synapses in a Chinese person's brain. This would result in the Chinese person receiving a reply in Chinese without the man (or the valves) understanding Chinese. I liked this reply because, upon initial examination, I agreed with it. However, upon reading Searle’s response, I understand its flaw. Attempting AI is pointless if you concede that understanding the brain is necessary to understand the mind, because AI is essentially aiming to translate the “software” of the mind to mechanical hardware, rather than the brain. As a psychologist I would argue that an understanding of the brain is in fact necessary to understand the mind, as evidenced by the many advances made in psychology by examining the structure of the brain. As a computer scientist, however, I understand that conceding (read: admitting) this would nullify the concept of strong AI, as defined by Searle. The brain simulator reply depends on having an understanding of the brain and is therefore an odd and counterproductive argument to make in the context of AI.
I really enjoyed these readings. I typically find thought experiments irrelevant, and perhaps this one is as well, but I can appreciate the question Searle was trying to address, and it's an important question. As for my opinion, I side with Searle: we have yet to understand the physical basis of the “mind” and until we do all we can hope for is a poor facsimile, a simulation, of its functions. At least for now, “understanding” is reserved for the realm of the living.
Tuesday, September 11, 2012
Paper Reading #6: ClayVision: The (Elastic) Image of the City
ClayVision is a paper by Yuichiro Takeuchi and Ken Perlin. Takeuchi is an Associate Researcher at Sony Computer Science Laboratories Inc. and earned his PhD from The University of Tokyo. In March he obtained a Masters in Design Studies from Harvard. Perlin is a Professor of Computer Science at the NYU Media Research Lab and Director of the Games for Learning Institute.
Summary
ClayVision takes a new approach to augmented reality assisted urban navigation by utilizing knowledge from non-computer science fields to break the current paradigm, which informs the user by pasting potentially irrelevant and frequently unwanted information on top of reality. ClayVision uses computer vision and image processing to create a dynamic real-time replica of the user’s perspective that can then be morphed and adjusted to direct the user and convey information.
ClayVision seeks to take AR from being gimmicky to being a “calm” technology. Current navigation applications involve information bubbles and overlays, not augmenting reality, but distorting reality. A user’s attention is very limited and navigating an urban environment is potentially dangerous. ClayVision addresses the issue of user safety and attention using Edward Tufte’s Data-Ink Ratio, which states that the effectiveness of visual communications can be analyzed using a ratio of ink used to convey information to total ink used in the graphic.
Central to ClayVision’s function is computer-vision based localization, which the authors recognize as an emerging field and an open problem. To address this, the authors created a database of pictures using the iPad’s camera (the tablet used to prototype ClayVision) for a set of predetermined locations and calculate the device’s pose. The authors rationalize this sidestep by asserting that even if ClayVision only works in limited locations, it can provide insights into future applications and the design of the system.
Image processing of the video feed is done using a simplified procedure based on SIFT, which outputs a set of feature points and other data used to determine the relative position of the entire frame. This processing is done in real-time on an iPad 2. Output is used to compare the video feed to the database of pictures and the template pictures are transformed based on the iPad’s camera specifications to produce the correct pose. After localization, projection and modelview matrices are calculated to map 3D building models onto the feed. These models are then textured using information from the feed and transformed to communicate information to the user. Texturing is done correctly by altering the image background with template picture information in a way that doesn’t disrupt the video and allows for transformations that don’t cause excessive errors.
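The template-matching step at the heart of the localization can be sketched in plain Python. This is a deliberately simplified illustration of the idea, not the paper's actual SIFT-based procedure: feature descriptors from the current frame are compared against each template picture in the database, and the template with the smallest total descriptor distance wins.

```python
# Toy sketch of descriptor-based localization: descriptors are short
# vectors; a frame matches the template whose features it lies closest to.
# All names and data here are illustrative, not from the paper.

import math

def distance(d1, d2):
    """Euclidean distance between two feature descriptors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def best_template(frame_descriptors, template_db):
    """Return the name of the template whose descriptors best match the frame."""
    best_name, best_score = None, float("inf")
    for name, descriptors in template_db.items():
        # score: for each frame feature, distance to its nearest template feature
        score = sum(min(distance(f, t) for t in descriptors)
                    for f in frame_descriptors)
        if score < best_score:
            best_name, best_score = name, score
    return best_name

db = {"times_square": [(0.1, 0.9), (0.8, 0.2)],
      "union_square": [(0.5, 0.5), (0.4, 0.6)]}
frame = [(0.12, 0.88), (0.79, 0.21)]
print(best_template(frame, db))   # times_square
```

Real SIFT descriptors are 128-dimensional and the full system additionally recovers the camera pose from the matched features, but the nearest-descriptor scoring above is the essence of picking which stored location the user is looking at.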

The authors’ approach to this paper is based on discussion around their prototype and possibilities for extending ClayVision in the future, from a software and hardware standpoint.
Related Works
- Augmented Reality Navigation by Uchechukwuka Monu & Matt Yu
- An Image-Based System for Urban Navigation by Duncan Robertson & Roberto Cipolla
- A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment by S. Feiner, B. MacIntyre, T. Höllerer & A. Webster
- A Wearable Computer System with Augmented Reality to Support Terrestrial Navigation by B. Thomas, V. Demczuk, W. Piekarski, D. Hepworth & B. Gunther
- Pervasive Information Acquisition for Mobile AR-Navigation Systems by Wolfgang Narzt et al.
- AR Navigation System for Neurosurgery by Yuichiro Akatsuka et al.
- Visually Augmented Navigation in an Unstructured Environment Using a Delayed State History by Ryan Eustice, Oscar Pizarro & Hanumant Singh
- A Vision Augmented Navigation System by Michael Bosse et al.
- A Vision Augmented Navigation System for an Autonomous Helicopter by Michael Bosse
- A Survey of Augmented Reality by Ronald T. Azuma
Augmented reality as a means of navigation is not a new idea. In 1997, when the field of AR was relatively young, Azuma discussed the future of AR in his paper A Survey of Augmented Reality. In this paper he mentions the many potential applications for AR, including navigation.
In addition to being an established idea, AR navigation has also been implemented in a variety of different ways. Augmented Reality Navigation and An Image-Based System for Urban Navigation discuss AR navigation implemented on a mobile phone. A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment and A Wearable Computer System with Augmented Reality to Support Terrestrial Navigation explore AR navigation on custom wearable hardware. Pervasive Information Acquisition for Mobile AR-Navigation Systems discusses an AR navigation system for cars in great detail. AR Navigation System for Neurosurgery takes AR navigation into the operating room by focusing on microscopic navigation, rather than macroscopic. A Vision Augmented Navigation System goes into detail about an AR navigation system and follows up with an application of this system in A Vision Augmented Navigation System for an Autonomous Helicopter.
All of these papers take on the task of using computer-enhanced reality to guide users, but each of these applications is very similar to the others or addresses a niche problem (surgery). ClayVision doesn’t claim to be a unique application; it claims to take a unique approach. The only paper I could find that attempts AR navigation in a novel way was Visually Augmented Navigation in an Unstructured Environment Using a Delayed State History, but even this paper fails to address the design and human factors concerns discussed in ClayVision.
Evaluation
Evaluation in this paper is non-existent. The only users mentioned in the paper are the authors themselves. The prototype was not held to any standards or measured in any formal way. There are basic comparisons made between ClayVision and similar research, as in any paper, but that is the closest the authors come to evaluating ClayVision.
Discussion
I think the premise behind ClayVision is really interesting and a valid topic for research, but I was disappointed that the authors neglected to get user feedback or test their prototype against existing AR software. I’d be really interested to see them follow up with a more complete analysis in the near future.
Monday, September 10, 2012
Design of Everyday Things: Chapter 1
The first chapter of Design of Everyday Things was very thought provoking. I never realized how non-functional visual cues could have such an impact on a product’s usability. I’ve never read a book that analyzes the design process and considerations so systematically. I’ve always thought of design as a vague field without much structure - and therefore not something subject to formal analysis. I’m glad that I was wrong; it looks as though this book has a lot to offer and I look forward to the coming chapters.
Thursday, September 6, 2012
Paper Reading #5: Playable Character: Extending Digital Games into the Real World
Introduction
Playable Character: Extending Digital Games into the Real World was written by Jason Linder & Wendy Ju. Linder and Ju conducted their research at the California College of the Arts. Linder currently works for Adobe’s Creative Technologies Lab and Ju is a researcher at the Stanford HCI Group.
Summary
Playable Character discusses a series of prototype games developed to explore how real-world activity could be incorporated into digital game systems. These games led to the design of Forest, a game developed for the Friends of the Urban Forest (FUF). The prototype games were developed as probes using paper or simple Flash and Processing programs. Informal testing of these prototypes was conducted using friends and colleagues as players. Data collection was integrated into the games whenever possible and supplemented with player interviews.
In Simulation City, players were asked to imagine playing SimCity with the added constraint that any buildings added to their city had to be photographed in the real-world. The authors found that selections were heavily skewed toward interesting buildings and art installations and responses indicated that player “...interest can be maintained simply by providing a facility for personalized collections of real-world items.”
SphereQuest explored the connections that could be developed between a player and his avatar by asking players to perform real-world tasks to enhance their avatars in-game. Players were required to document these activities and complete a survey for the purpose of data collection. The authors found that players who chose activities they could imagine their character performing (being stealthy as opposed to reading a book) connected with the game and enjoyed themselves.
The Other End was designed specifically for a known social setting to illustrate the importance of context in a game created to overlap with an existing social structure. The game consisted of checking in at a camera station, walking to the other end of a hall, and checking in at the other station. Scoring was based on frequency of participation (with punishment for lack of participation) and improvement, and the players with the most points, most trips, and best time were displayed on a leader-board. The game quickly created a competitive environment centered around achieving the fastest time, to the exclusion of the other leader-board types. This competition served to advertise the game and facilitate social interaction.
Cubelord, the second social engagement game, involved accumulating territory (cubicles) via the game’s virtual currency. The player with the most cubes at any moment was crowned “Cubelord” and given a cape and scepter. Each cube had a price, price increase rate, and return price to encourage players to formulate strategies. Currency was earned by performing tasks involving the disclosure of personal information, singing, cleaning, providing homework assistance, etc. Game runners collected this information and credited players with game funds. The current state of the game was available online and a terminal was used for purchasing cubes. The authors observed that players prioritized tasks by convenience and players cleverly attempted to get credit for less than accurate responses, but no actual cheating took place.
The design probes explored “...how the physical world could be mapped into the game world (Simulation City), how the virtual-world could prompt real-world actions (SphereQuest), how people's social and physical motivations could be organized, (The Other End), and how virtual motivations could motivate social disclosure (CubeLord).” These observations resulted in the formulation of five design patterns: personalized collection (Simulation City), narrative alignment (SphereQuest), gaming the game (The Other End), progressive disclosure (The Other End), and persistent convenience (CubeLord).

Related Works
- Towards Massively Multi-user Augmented Reality on Handheld Devices by D. Wagner, T. Pintaric, F. Ledermann & D. Schmalstieg
- ARQuake: The Outdoor Augmented Reality Gaming System by W. Piekarski & B. Thomas
- Touch-Space: Mixed Reality Game Space Based on Ubiquitous, Tangible, and Social Computing by A. D. Cheok, X. Yang, Z. Z. Ying, M. Billinghurst & H. Kato
- From Game Design Elements to Gamefulness: Defining “Gamification” by S. Deterding, D. Dixon, R. Khaled & L. Nacke
- Rethinking agency and immersion: video games as a means of consciousness-raising by Gonzalo Frasca
- Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces by Carolyn Snyder
- Player-Centered Game Design: Experiences in Using Scenario Study to Inform Mobile Game Design by Laura Ermi & Frans Mäyrä
- The PowerHouse: A Persuasive Computer Game Designed to Raise Awareness of Domestic Energy Consumption by M. Bang, C. Torstensson & C. Katzeff
- A Video Game for Cyber Security Training and Awareness by B. D. Cone, C. E. Irvine, M. F. Thompson & T. D. Nguyen
- The Digital Game-Based Learning Revolution by Marc Prensky
This research touched on many different fields and ideas, none of which were particularly novel on their own. Gamification is the central idea behind Playable Character and has become a very popular topic of research in the last decade. From Game Design Elements to Gamefulness: Defining “Gamification” is a paper that analyzes the idea of using non-game contexts to motivate user activity and retention. Playable Character also incorporates augmented reality gaming into its prototypes and into the Forest mobile application.
Towards Massively Multi-user Augmented Reality on Handheld Devices and ARQuake: The Outdoor Augmented Reality Gaming System explore applications of AR gaming very similar to Forest. Towards Massively Multi-user Augmented Reality on Handheld Devices is similar to Forest in its mobility and its emphasis on multi-user experience. ARQuake’s emphasis on outdoor AR gaming is very similar to Forest’s focus on trees. Touch-Space: Mixed Reality Game Space Based on Ubiquitous, Tangible, and Social Computing leverages AR and social networking to create an application aimed at being convenient to the user, much like Forest’s social networking and emphasis on convenience as a means of keeping users interested.
Rethinking agency and immersion: video games as a means of consciousness-raising and The PowerHouse: A Persuasive Computer Game Designed to Raise Awareness of Domestic Energy Consumption discuss the use of gamification to increase player awareness of important topics. A Video Game for Cyber Security Training and Awareness and The Digital Game-Based Learning Revolution discuss gaming as a teaching tool. These are concepts very central to the purpose of Forest, which is to increase membership, participation, and fund-raising for FUF.
Playable Character’s contributions include the methodology used in the study as well as the applications. Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces is a book that explores paper prototyping as a method to conduct informal studies. Much of the experimentation in Playable Character was conducted using paper prototyping and similar informal methods. Player-Centered Game Design: Experiences in Using Scenario Study to Inform Mobile Game Design is a study conducted in the same style as Playable Character. This study conducts small experiments in an informal way and applies the knowledge gained from this experimentation to the design of mobile games.
Evaluation
The only evaluation came in the form of informal feedback from test users and FUF members. Feedback during the experimental stages served to guide the development of Forest. Forest feedback was discussed in the paper but the application of this feedback was outside the scope of the research. Evaluation was purely qualitative and informal.
Discussion
This is by far the most unusual CHI paper I have read. While the contributions made by Playable Character aren’t readily apparent, I do feel that there is something to take away from this study. The authors explored a variety of novel game scenarios through quick prototyping and applied their findings to a real-world application. This topic, while unusual, offers a perspective that I found genuinely unique. Evaluation of the prototypes developed in Playable Character was entirely informal, qualitative, and unstructured (no mathematical or statistical measures were applied to the analysis of their qualitative data). Normally this would pose a problem, but due to the nature of this research, I feel that their evaluation was adequate.
Wednesday, September 5, 2012
Paper Reading #4: Not Doing But Thinking: The Role of Challenge in the Gaming Experience
Introduction
Not Doing But Thinking: The Role of Challenge in the Gaming Experience was written by Anna L. Cox, Paul Cairns, Pari Shah & Michael Carroll. Dr. Cox is a senior lecturer in Human-Computer Interaction at University College London and an Associate Chair for CHI 2013, CHI 2012, and CogSci 2012. Dr. Cairns is a senior lecturer in Human-Computer Interaction at The University of York interested in video games and modeling user interactions. Cox and Cairns coauthored Research Methods for Human-Computer Interaction, published by Cambridge University Press in 2008. Pari Shah graduated from University College London in 2008 with a degree in Psychology and Michael Carroll studied Computer Science at The University of York.
Summary
This paper investigates the role of challenge in a user’s experience of immersion through three studies. The concept of challenge is distilled into two modalities: pushing a gamer’s physical limits (twitch mechanics) and pushing a gamer’s cognitive limits (time constraints). The first experiment “manipulate[s] the number of interactions required to make progress in the game and thus the speed with which the gamer must interact with the game.” Their second and third experiments focus on making the gamer think faster by manipulating the level of time pressure under which the gamer must perform. They hypothesize that cognitive challenges have a greater effect on immersion and therefore expect higher levels of immersion to be reported in experiments two and three, in comparison to the first experiment.
The authors identify immersion as “a graded experience ranging from engagement, through engrossment to total immersion.” Total immersion is synonymous with being in a state of flow, during which all of a gamer’s mental faculties are focused on the task at hand (the game). Flow is achieved “as a result of an appropriate balance between the perceived level of challenge and the person’s skills.” This idea led the authors to consider the role of expertise, hypothesizing that immersion will decrease if the game is too challenging (resulting in a state of anxiety) or if the game isn’t challenging enough (resulting in a state of boredom).


Experiment two tested the hypothesis that participants playing under time pressure will experience significantly higher immersion and challenge than those playing without time pressure. Testing was conducted using 22 players playing Bejeweled in a timed or un-timed mode for 15 minutes before completing the IEQ. The authors found that players playing the timed mode experienced a higher level of challenge as well as significantly more immersion. Effects of expertise were not measured in this experiment.

Experiment three addressed the hypothesis that expertise affects the level of cognitive challenge associated with a game, thereby affecting the level of immersion. The authors tested their hypothesis with 20 players, divided into expert and novice groups, playing Tetris at low difficulty (level 1) or high difficulty (level 6). All players played for a total of 15 minutes before completing their IEQ, with the low-difficulty players not allowed to progress past level 2 and the high-difficulty players allowed to continue until the game ended before resetting. Expert and novice players were equally immersed at high difficulty. At low difficulty, novice players experienced a slight increase in immersion and expert players experienced a significant drop. These results confirm that immersion is dependent on a balance between skill and challenge, but reveal that challenge has no effect on immersion when expertise isn’t taken into account.
Related Works
- Immersion, Engagement, and Presence: A Method for Analyzing 3-D Video Games by Alison McMahan
- Flow and Immersion in First-Person Shooters: Measuring the player’s gameplay experience by L. Nacke & C. A. Lindley
- Ludic Engagement and Immersion as a Generic Paradigm for Human-Computer Interaction Design by C. A. Lindley
- Revising Immersion: A Conceptual Model for the Analysis of Digital Game Involvement by Gordon Calleja
- Patterns in Game Design by S. Björk & J. Holopainen
- Video Games: Perspective, Point-of-View, and Immersion by L. N. Taylor
- Sex Differences in Video Game Play: A Communication-Based Explanation by K. Lucas & J. L. Sherry
- Video Game Designs by Girls and Boys: Variability and Consistency of Gender Differences by Y. B. Kafai
- Heuristic Evaluation for Games: Usability Principles for Video Game Design by D. Pinelle, N. Wong & T. Stach
- Explaining the Enjoyment of Playing Video Games: The Role of Competition by P. Vorderer, T. Hartmann & C. Klimmt
To begin, I will establish that the study of game experience is nothing new. Patterns in Game Design is a book on the paradigms of game design and Heuristic Evaluation for Games: Usability Principles for Video Game Design is a paper exploring rule-of-thumb evaluations aimed at improving game design. Both of these works focus on gaming experience from a designer’s point of view, explicitly emphasizing the role of challenge and the ultimate goal of creating an immersive experience.
Additionally, the role of challenge is well established and studied within the context of gaming. Explaining the Enjoyment of Playing Video Games: The Role of Competition takes a psychological look at what makes games fun on a universal level. The authors argue, successfully, that the unifying feature of fun games is competition, namely, the desire to win. This competition comes either in the form of opposing players, or in the form of challenges imposed by the game itself. Further research into the effects of challenge in video games revealed important data that the authors of Not Doing But Thinking failed to take into account. Sex Differences in Video Game Play: A Communication-Based Explanation and Video Game Designs by Girls and Boys: Variability and Consistency of Gender Differences are two independent studies that discuss the variation between men and women in their responses to challenges within video games. Not Doing But Thinking makes claims about the effects of challenge on game immersion and experience without accounting for the well-documented differences in how men and women respond cognitively to challenge, differences that are evident in the way they play games.
Next, I assert that the role of immersion in gaming is well studied. Immersion, Engagement, and Presence: A Method for Analyzing 3-D Video Games explores the effects of 3-D design on immersion, analyzing the relationship between immersion and artwork. Video Games: Perspective, Point-of-View, and Immersion is a similar study that focuses on the player’s perspective on a game world and their point-of-view within said world to discuss immersion. Revising Immersion: A Conceptual Model for the Analysis of Digital Game Involvement takes a systemic approach to exploring immersion, focusing on the various forms and levels of involvement that contribute to an immersive experience. Ludic Engagement and Immersion as a Generic Paradigm for Human-Computer Interaction Design identifies immersion as a critical goal for all human-computer interaction applications and explores its potential use in ludic systems. These studies all take a different approach to analyzing the effects and conditions of immersion and agree that immersion is critical to the gaming experience.
Having established that the study of game experience is nothing new and that the role of challenge and the importance of immersion are well studied topics, Not Doing But Thinking remains novel only in that it addresses immersion from a cognitive standpoint and uses quantitative measures in its analysis. This sets Not Doing But Thinking apart from the aforementioned studies, but not from Flow and Immersion in First-Person Shooters: Measuring the player’s gameplay experience. This paper explores immersion from sensory, imaginative, and challenge-based perspectives. Nacke & Lindley use a host of measurements ranging from psychophysiological indications of arousal to qualitative flow measurements to effectively analyze what factors contribute to an immersive experience. I found their study to be much more robust and conclusive, effectively rendering Not Doing But Thinking insignificant.
Evaluation
The authors use a combination of quantitative and qualitative measures to evaluate their experiments on both a systemic level and a component level. In experiment one, evaluation was done based on players’ scores in the tower defense game, the number of actions they performed while playing the game, a quantitative measure of the players’ expertise, and an Immersion Experience Questionnaire. These measures were effectively used to validate their experimental design (component-based evaluation) and conclude that physical effort had no significant effect on player immersion and there was no interaction between expertise and level of challenge. Experiment two used players’ Bejeweled scores and IEQ data to conclude that time constraints increase the level of immersion experienced. Experiment three used a qualitative assessment of players’ skill levels, their Tetris scores, and IEQs to conclude that the level of challenge only affects immersion when skill is taken into account.
Discussion
While the authors’ evaluation methods were excellent, their attempt at novelty failed. There is at least one other study that measures immersion using quantitative data, and that study does so much more successfully. This study had a good premise but ultimately concluded very little. I enjoyed the paper until I realized that their experiments yielded little data. What little enthusiasm I clung to after reading this paper was quickly dashed once I discovered Flow and Immersion in First-Person Shooters.
Monday, September 3, 2012
Paper Reading #3: LightGuide: Projected Visualizations for Hand Movement Guidance
Introduction
LightGuide: Projected Visualizations for Hand Movement Guidance is a paper by Rajinder Sodhi, Hrvoje Benko, and Andrew D. Wilson. Sodhi is a grad student at the University of Illinois working on his PhD in computer science under the advisement of David Forsyth and Brian Bailey. Benko received his PhD from Columbia University and now works in the Natural Interaction Research group at Microsoft Research focusing on human-computer interaction. Wilson also works for Microsoft Research as a principal researcher, prior to which he obtained his PhD from MIT.
Summary
LightGuide is a proof-of-concept implementation for a system that takes a novel approach to gesture guidance using a projector and a depth camera (Kinect). The authors were motivated by the current how-to paradigm’s lack of valuable feedback in the form of physical interaction. Typically, when learning a gesture such as a yoga pose, an instructor provides feedback by correcting errors through physical touch. When an instructor isn’t present, users rely on videos, diagrams, and textual descriptions. With the rising dependence on do-it-yourself materials found online, this poses a challenge - one that the authors take on with LightGuide.
LightGuide uses a depth camera and a projector mounted to a fixed position on the ceiling to project gesture cues onto the hand of a user. The depth camera and projector are calibrated precisely to allow the user’s hand to be mapped to 3D world coordinates and accurately projected upon. The authors devised three types of cues: follow spot, 3D arrow, and 3D pathlet. The follow spot consists of a white circle and a black arrow centered on the user’s hand indicating z-axis movement as well as positive (blue) and negative (red) coloring indicating xy-axis movement. The 3D arrow is self-explanatory and the 3D pathlet consists of a small path segment with a red dot indicating the user’s current position along the path.
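The calibration described above boils down to a standard pinhole mapping: a hand position measured in depth-camera coordinates is transformed into projector coordinates and projected to the pixel that will land on it. A minimal sketch of that mapping follows; the intrinsics, baseline, and image size are invented for illustration, since the paper does not publish its calibration values.

```python
import numpy as np

# Hypothetical calibration data (NOT from the paper): K_proj is the
# projector's intrinsic matrix; R, t form the rigid transform taking
# depth-camera coordinates into projector coordinates.
K_proj = np.array([[1400.0,    0.0, 640.0],
                   [   0.0, 1400.0, 360.0],
                   [   0.0,    0.0,   1.0]])
R = np.eye(3)                       # assume roughly aligned optical axes
t = np.array([0.15, 0.0, 0.0])      # assumed 15 cm camera-projector baseline

def hand_to_projector_pixel(p_cam):
    """Map a 3D hand position in depth-camera coordinates (meters)
    to the projector pixel that will illuminate it."""
    p_proj = R @ p_cam + t          # move into projector coordinates
    u, v, w = K_proj @ p_proj       # perspective projection
    return np.array([u / w, v / w])

# Usage: a hand tracked 1.2 m below the ceiling rig, slightly off-center.
pixel = hand_to_projector_pixel(np.array([0.05, -0.02, 1.2]))
```

In the real system this mapping would be re-evaluated every frame as the depth camera tracks the hand, so the cue appears glued to the moving hand.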
The authors divided the cues into two categories based on whether the cue moves at a steady rate that the user follows or whether the cue advances based on the user’s speed. This categorization, along with the control cases, resulted in the following six testing scenarios: follow spot, 3D follow arrow, 3D self-guided arrow, 3D pathlet (self-guided), video projected onto the user’s hand, and video played on a screen.
The results of the experimentation revealed that, while video cues result in much faster movement on the part of the user, the cues devised by the authors resulted in 85% more accurate performance. The most accurate cue was the follow spot, followed by the arrows, the pathlet, the projected video, and lastly, the video screen.
Related Works
- CounterIntelligence: Augmented Reality Kitchen by Leonardo Bonanni, Chia-Hsun Lee & Ted Selker
- Development of Head-Mounted Projection Displays for Distributed, Collaborative, Augmented Reality Applications by Jannick P. Rolland, Frank Biocca, Felix Hamza-Lup, Yanggang Ha & Ricardo Martins
- The Studierstube Augmented Reality Project by Dieter Schmalstieg, Anton Fuhrmann, Gerd Hesina, Zsolt Szalavári, L. Miguel Encarnação, Michael Gervautz & Werner Purgathofer
- MirageTable: Freehand Interaction on a Projected Augmented Reality Tabletop by Hrvoje Benko, Ricardo Jota & Andrew D. Wilson
- Efficient Model-based 3D Tracking of Hand Articulations using Kinect by Iason Oikonomidis, Nikolaos Kyriazis & Antonis A. Argyros
- Human Detection Using Depth Information by Kinect by Lu Xia, Chia-Chih Chen & J. K. Aggarwal
- Image Guidance of Breast Cancer Surgery Using 3-D Ultrasound Images and Augmented Reality Visualization by Yoshinobu Sato et al.
- A Head-Mounted Display System for Augmented Reality Image Guidance: Towards Clinical Evaluation for iMRI-guided Neurosurgery by F. Sauer et al.
- Exploring MARS: developing indoor and outdoor user interfaces to a mobile augmented reality system by Tobias Höllerer et al.
- A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment by Steven Feiner et al.
While the system as a whole is somewhat novel, the underlying ideas and individual components have been demonstrated in various other studies. Firstly, the idea of using projectors to augment reality has been in circulation for some time. CounterIntelligence: Augmented Reality Kitchen, written in 2005, explores the idea of using projectors and various sensors to augment a kitchen environment with the goal of improving user speed and safety. Development of Head-Mounted Projection Displays for Distributed, Collaborative, Augmented Reality Applications, a paper written in 2006, and The Studierstube Augmented Reality Project, a paper written in 2002, both discuss the idea of using head-mounted projectors to facilitate an AR experience. MirageTable: Freehand Interaction on a Projected Augmented Reality Tabletop (2012) is another study that chose to capitalize on the proven success of projectors in AR systems, but did so much more effectively than LightGuide and with a much more novel approach. LightGuide built on the success of CounterIntelligence by incorporating user-tracking and projecting onto non-static surfaces, and improved upon the ideas set forth in Development of Head-Mounted Projection Displays for Distributed, Collaborative, Augmented Reality Applications by enabling the LightGuide system to work without forcing the user to wear any special equipment.
Using Microsoft’s Kinect as a depth camera to track users is another recycled idea. MirageTable, Efficient Model-based 3D Tracking of Hand Articulations using Kinect, and Human Detection Using Depth Information by Kinect are all studies that have used Kinect to track user movement within a system. While MirageTable’s use of Kinect was somewhat basic in comparison to LightGuide (MirageTable only tracks shutter glasses about a very limited space), Efficient Model-based 3D Tracking of Hand Articulations using Kinect and Human Detection Using Depth Information by Kinect both utilize Kinect in much more novel ways than LightGuide. Hand articulation tracking and human detection are significantly more advanced applications of Kinect’s capabilities, and LightGuide’s implementation would have greatly benefited from exploring these capabilities and incorporating them into the system to create a more robust application.
The application of AR to guidance systems has been explored to a great extent within the fields of CHI and medicine. Exploring MARS: developing indoor and outdoor user interfaces to a mobile augmented reality system is a paper written in 1999 that discusses the application of AR to allow users to guide each other through environments. A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment, a paper written even earlier (1997), takes this idea even further by imagining a 3D AR system capable of guiding a user through a complex urban environment such as a university campus. Image Guidance of Breast Cancer Surgery Using 3-D Ultrasound Images and Augmented Reality Visualization and A Head-Mounted Display System for Augmented Reality Image Guidance: Towards Clinical Evaluation for iMRI-guided Neurosurgery are studies that use head-mounted video displays to allow doctors to visualize tissue within a patient and provide guidance to assist in surgery. These works demonstrate not only that AR guidance is anything but novel, but also that a guidance system aimed at coordinating a user’s movements with an exact pre-programmed path is trivial compared to other applications explored in previous studies.
Evaluation
The authors evaluated the success of LightGuide using a quantitative comparative evaluation and a qualitative discussion of user feedback on the pros and cons of their approaches. The depth camera tracked the movement of the user’s hand through world coordinates, and this movement path was compared with the intended path used to formulate the cues. In the case of the video-based cues, scaling wasn’t easily interpreted and the results of the analysis were skewed. The authors corrected for this using an iterative closest point algorithm to compare the shape of a user’s movement without taking scale into account. Even after correction, the follow spot still resulted in 85% more accurate movements than the video screen.
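The iterative closest point correction is only named in the paper, not specified. A minimal sketch of a scale-invariant path comparison in that spirit follows; the nearest-neighbor correspondence step and the scaled Procrustes update are my own assumptions, not the authors' implementation.

```python
import numpy as np

def align_path(user, target, iters=20):
    """Align a recorded hand path to the intended path via a simple ICP
    loop that also solves for scale, then report the mean residual error.
    user: (N, 3) recorded samples; target: (M, 3) intended path samples."""
    src = user.astype(float).copy()
    for _ in range(iters):
        # nearest-neighbor correspondences between current guess and target
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # scaled Procrustes update: best-fit rotation, translation, scale
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        A, B = src - mu_s, matched - mu_t
        U, S, Vt = np.linalg.svd(A.T @ B)
        Rm = (U @ Vt).T
        if np.linalg.det(Rm) < 0:       # guard against reflections
            U[:, -1] *= -1
            Rm = (U @ Vt).T
        s = S.sum() / (A ** 2).sum()    # least-squares scale factor
        src = s * (src - mu_s) @ Rm.T + mu_t
    d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
    return d.min(axis=1).mean()         # mean point-to-path error
```

The returned number plays the role of the paper's corrected accuracy measure: how far the user's movement deviates from the intended shape once position, orientation, and overall scale are factored out.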
The qualitative study was based on user interviews following the completion of their trials. Users stated that they found the follow spot cue easily understandable and close to second nature, but felt that it didn’t provide enough information. The 3D pathlet was found favorable because of its feedforward - users liked knowing what was ahead. The majority of users stated that they preferred the 3-D self-guided arrow over all other visualization cues. Users liked setting their own pace and arrows serve as a familiar directional cue. Note that the 3D arrow cue was the only self-explanatory approach devised by the authors.
I found the evaluation performed in this study excellent. The authors addressed not only the factual success and accuracy of their system, but also took user experience into account - which is ultimately more important in a system aimed at guiding, and therefore teaching, a user.
Discussion
I enjoyed this article but found that it only scratched the surface of what’s possible with this type of technology. The authors limited their study to hand translation, ignoring other movements such as rotation, as well as other body parts. While the authors succeeded in demonstrating the potential power of projector-based gesture cues, LightGuide ultimately raised more questions than it answered.