Tuesday, May 11, 2010
Goodbye CHI!
Brett
Tuesday, March 9, 2010
OMGLOL!!!
- CHI 2009 - choice of any 4
- IUI 2009 - choice of any 3 + An interactive, smart notepad for context-sensitive information seeking
- UIST 2008 - choice of any 3 + Iterative design and evaluation of an event architecture for pen-and-paper interfaces
Tuesday, March 2, 2010
Emotional Design
- Visceral - The way that something appears speaks to us.
- Behavioral - We want to feel in control.
- Reflective - The voice inside our head that separates us from others.
Aside from that, the only new ground Norman covers that interested me was the last chapter (which seemed a bit out of place). Here he talks about robots in the future and how they should understand our emotions, which could help them interact with humans more efficiently. I personally think that getting robots to that level of understanding is scary. If we can learn how to make robots function exactly like us, who's to say that we were not made by some other creatures... Anyways, that is a different topic that I could talk about for a really long time.
Back to Norman: if he had just called me in 2004, I could have told him what to do. I would have told him not to write another book. Everyone makes mistakes... We understand the need for him to have written TDOET, but everyone saw its flaw. That includes me; check my blog post on the book:
There is one part where I stray from Norman's thoughts. I think that visibility is important in design. However, I do not think that it should trump elegance. If something is aesthetically pleasing to look at, consumers are naturally inclined to buy it. Especially compared to something that looks like... Ugh! Hopefully designers can find a way to integrate elegance and functionality, but elegance should never be completely disregarded.

Come on Norman... I knew this was coming and so did everyone else. We all know that emotions speak to us louder than functionality. Why did you feel the need to write a book about it?
Monday, February 22, 2010
How Well do Visual Verbs Work in Daily Communication for Young and Old Adults
- Single static image
- Panel of four static images
- Animation
- Video clip
Pictured above is the example they give of the four visual models for "work". The researchers point out that verbs are more difficult to visualize than nouns because nouns typically represent a tangible thing. To collect a sample of verbs, they took 48 frequently used verbs from the British National Corpus. They got all of their images (for the single image and the panel of images) from web pages which had been tagged. To select the best images, the researchers had people rate a sample of images and then selected the four top-rated images for the panel. They got their animations from a visualization website, and they made the video clips themselves.
The study showed that there was a significant aging effect on interpreting visualizations: the young adults scored higher on average in all four methods of visualization. The score was on a 6-point scale (an exact match was worth 6, a synonym was worth 5, and so on; see the sketch after the list below). They came up with a collection of recommendations from the experiment:
- Multiple pictures are better for conveying verbs
- Utilize common gestures, but be wary of cultural differences
- Use symbols carefully, especially when ambiguous
- Simplify backgrounds and use common scenes
- Use special effects carefully because the elderly might not understand them
- Consider age-related effects
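To make the scoring concrete, here is a minimal sketch of how a mean interpretation score could be computed on the 6-point scale. Only the top two values (exact match = 6, synonym = 5) come from the description above; the remaining categories and their point values are hypothetical fill-ins, not taken from the paper.

```python
# Sketch of the 6-point interpretation scoring. The values for "exact_match"
# and "synonym" are from the paper; everything below them is hypothetical.
SCORE_SCALE = {
    "exact_match": 6,  # from the paper
    "synonym": 5,      # from the paper
    "related": 4,      # hypothetical
    "vague": 3,        # hypothetical
    "wrong": 2,        # hypothetical
    "no_answer": 1,    # hypothetical
}

def mean_interpretation_score(responses):
    """Average score for one participant under one visualization method."""
    return sum(SCORE_SCALE[r] for r in responses) / len(responses)

# Example: one participant's responses to four verb visualizations.
print(mean_interpretation_score(["exact_match", "synonym", "related", "wrong"]))  # 4.25
```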
Comments:
I think that, like most research in the CHI papers, the experiment was pretty interesting. I did not really catch on to the application of the research, though. In their conclusion, they mention that visual communication is helpful in multilingual settings - I would agree with this. They assert that "verbs must be well illustrated in visual languages...as an essential part to most languages". This part I find hard to agree with - where is the application?
Tuesday, February 16, 2010
Fast Gaze Typing with an Adjustable Dwell Time
Gaze typing, also known as eye typing, is using gaze as input in place of normal keyboard use. It is primarily used by people who have severe disabilities and motor skill deficiencies. This is how it typically works (a small code sketch follows the experiment details below):
- The user's gaze is tracked.
- The user keeps focus on a certain point for a certain amount of time (dwelling).
- After the allotted time has elapsed, the gaze is registered as input.
Experiment Specifics:
- They studied 11 college students who had normal vision.
- They used a QWERTY keyboard layout.
- Users could vary dwell time from 2000 ms to 150 ms.
- An animation was used to show dwell time elapsing (circle around the key).
- The activation area of the key was bigger than the actual key visualization - this was done to boost accuracy.
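Putting the mechanics above together, here is a minimal sketch of dwell-based key activation with an adjustable dwell time. The `get_gazed_key` callback stands in for a real eye-tracker API; all names are hypothetical, and this is my own illustration rather than the authors' code.

```python
import time

def gaze_type(get_gazed_key, dwell_time_ms=600):
    """Commit the key under the user's gaze once it has been fixated
    for dwell_time_ms (user-adjustable, e.g. 150-2000 ms)."""
    typed = []
    current_key, dwell_start = None, None
    while True:
        key = get_gazed_key()           # key under the gaze, or None
        now = time.monotonic() * 1000   # milliseconds
        if key != current_key:
            current_key, dwell_start = key, now  # gaze moved: restart dwell
        elif key is not None and now - dwell_start >= dwell_time_ms:
            typed.append(key)           # dwell elapsed: the gaze becomes input
            dwell_start = now           # a repeat press needs a fresh dwell
            if key == "ENTER":
                return "".join(typed[:-1])
        # The circle animation around the key would be driven by the
        # fraction (now - dwell_start) / dwell_time_ms.
```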
Comments:
I think that the research presented took a simple idea and applied it to an interesting, novel topic. After all, don't we all know that practice makes perfect? That is basically what the paper concluded: let users adjust the dwell time and learn at their own rate, and the results are pretty good.
The Inmates are Running the Asylum
When I started to skim through this book, the first thing I noticed is how much the computer industry has changed. It seems like Cooper is really mad at every person he used to work with... Back in the day, programmers were in charge of everything, even if that meant designing a program for computer noobs. We have all been there before - sometimes it requires too much effort and it is easier to say never mind. It is easy to take that road, but usually it is not constructive.
I do have some experience in industry, and I have to say that it is completely different from what Cooper describes. I really believe that poorly designed systems are a thing of the past... Let's face it - if you don't subscribe to interaction design, your product is going to fail. The methods used by the programmers Cooper talks about remind me of my early programming years: writing code that even the writer cannot understand a few months later. Ah, those were the days.
But enough reminiscing. There are some pretty good points that Cooper makes in between his angry rants. I think breaking users into apologists and survivors is a really neat idea. However, I do not think that just two categories can fully explain all users. I think they are more of a stereotype to get a point across. Programmers used to be apologists - we would defend all systems because we knew that there was some merit behind them, regardless of how difficult they were to understand. After all, programmers were the authors of some of that nonsense. When I think of an apologist, I think of my parents...
Now, I think programmers are starting to realize that we cannot be apologists anymore. There was a renaissance (sort of) in the computer world not too long ago - products should be easy to use. Wow, what a novel concept! I think the main reason this came to be is that people generally got tired of crappy software. And all it took was a few good companies to notice that and start developing for the user. Look at where we are now...
Sunday, February 14, 2010
PrintMarmoset: Redesigning the Print Button for Sustainability
This paper begins by introducing sustainable interaction design (SID). SID deals with conventions of learned perceptions and behaviors. That means that SID motivates users to pay attention to sustainability, while still concentrating on usability issues. The research mentions that SID demands a deep understanding of the social and evolving aspects of design.
When evaluating SID, task-centric techniques are usually used. In this paper, the authors apply SID to printing to support their hypothesis that behavior change is a more convincing metric than attitude change. The centerpiece of the study is a new design of the print button intended to reduce the amount of paper we waste.
The researchers conducted a study among several different people and concluded that printing is here to stay. Most subjects agreed that when you print something it holds more importance. Some said that printing directions is easier than using a GPS. When asked about printing a large number of pages, most subjects said they would think twice, and they generally agreed that wasting paper was bad. They also said that, given the overhead of sorting through the needed and unneeded content on a web page, they would overwhelmingly choose to print the entire page.
After doing some prototyping, they found that a solution required the following:
- require neither developers to modify existing web sites nor users to change existing print flow
- require the least amount of user input effort, if not zero
- offer flexibility that allows users to choose what to print in addition to pre-defined filter or template
- maintain a history of print activities for collecting and organizing web clips
- raise awareness of the tool among people and build a “green” community around it
The resulting PrintMarmoset workflow looks like this (a small sketch follows the list):
- Go to a web page - use a news article, for example.
- Press print, PrintMarmoset automatically selects content.
- The user can then 'stroke' over content to remove it.
- Print out the remaining content.
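Since the paper does not go into implementation detail, here is a purely hypothetical sketch of that flow, modeling the page as a list of labeled content blocks; the heuristics and names are mine, not PrintMarmoset's.

```python
def auto_select(blocks):
    """Step 2: keep blocks that look like article content (a crude
    stand-in for whatever heuristics PrintMarmoset actually uses)."""
    return [b for b in blocks if b["kind"] == "article"]

def stroke_out(blocks, removed_ids):
    """Step 3: drop the blocks the user stroked over."""
    return [b for b in blocks if b["id"] not in removed_ids]

page = [
    {"id": 1, "kind": "article", "text": "Headline"},
    {"id": 2, "kind": "ad",      "text": "Buy now!"},
    {"id": 3, "kind": "article", "text": "Story body"},
    {"id": 4, "kind": "article", "text": "Related links"},
]

selected = auto_select(page)         # automatic selection on print
final = stroke_out(selected, {4})    # user strokes off block 4
print([b["text"] for b in final])    # step 4: print what remains
```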
The paper did not get into great detail about methods used for implementation. Instead, they discussed their goals in the research. Their first was to bring SID to light. Their second was to use an easy exercise (printing) to show the potential of SID.
Comments:
I think that the paper brought a very interesting idea forward. Printing usually is a hassle, especially from a web news article. The sustainable design let users achieve their goals with minimal interaction with the program. SID is a cool area that I think can help a lot in CHI studies.
Thursday, February 4, 2010
Using Improvisation to Enhance the Effectiveness of Brainstorming
Summary:
This paper starts by giving a general idea of what most of us consider 'brainstorming' - a popular method used by design teams to generate new ideas. Some keys to good brainstorming she presents come from a book called Applied Imagination by Osborn: withhold judgment, build on the ideas of others, generate a large quantity of ideas, free-wheel, and identify a leader. She argues that these concepts have helped cross-discipline teams tackle complex technological problems. She also mentions that when team members are able to break loose from cognitive and emotional bounds, they are more likely to produce novel and valuable solutions.
The paper then moves on to discuss using technology to enhance brainstorming. She notes that technological support can help with some brainstorming imperatives - "fluid idea expression and the generation of a large quantity of ideas". However, technology does not help with all of them - "building on each other ideas and taking turns speaking". Her desire is to come up with some form of technology that can assist teams with the keys mentioned earlier from Applied Imagination. To do this, she proposes using theatrical improvisation - improv.
Her research continues on to explore, in depth, how improv can be integrated with brainstorming to support each of Osborn's keys listed above. She concludes her paper by mentioning that improv fosters a healthy environment for brainstorming. She also notes that brainstorming is a great method by which leading companies can discover innovative ideas for the future - a valuable asset.
Comments:
I thought that the paper had some very interesting ideas about using improv in brainstorming activities. For example, the improv method used for 'free-wheeling' was to start with a familiar object. The group would then pass the object around and come up with alternate uses for it. I think that brainstorming is sort of an improvisational activity to begin with, and that is where I think the research is lacking. Teams naturally use improv to help them brainstorm. The research also seemed to lack a computer-human component. It briefly mentioned how computers can assist and constrain brainstorming, but the research on improv did not really get into CHI.
Tuesday, February 2, 2010
The Design of Everyday Things
This is a blog post for the summary and discussion of our first reading assignment: The Design of Everyday Things by Donald A. Norman.
This book is all about analyzing 'everyday things' - figuring out why they are successes or failures. As we can see by the cover, not everything is a success. Norman covers all types of things - phones, doors, and watches seem to reappear quite often in his analysis. The main objective of this analysis is to find principles of good design. In fact, he encourages the reader not to feel discouraged when they cannot understand simple things. Instead, he says the designer is to blame... Interesting.
When reading the book, I noticed that Norman would come up with an idea then share a story about how the idea occurred. I liked that structure. Norman would then go into some detail about the idea and why it was important to good design. While reading, I tried to take note of some of these features:
- Conceptual models allow users to understand how to use something just by looking at it. An example he gives is scissors - it is obvious that your fingers go into the holes, and there is then only one logical way to move them. This is also an example of physical constraints, which limit the actions one can perform with the object.
- Feedback is essential in design. When a user performs operations in a word processor (an example Norman uses), they need some response to the actions they have taken. If there is no feedback, the user can become confused and perform the same action multiple times without realizing that it has already been registered by the processor.
- Mappings are important to design, and natural mappings are something that all designers should strive for. A natural mapping that Norman mentions is a steering wheel in a car. Turn it left and the car turns left... Natural, what a great idea! Seems simple enough, but some designs really mess up here. If there are 10 buttons on a machine, the designer should strive to have around 10 functions, not 30.
- Visibility is an interesting area as well. It can take away from the elegance of something; however, it is essential to good design. Of course we all like large, glass doors that are aesthetically pleasing. But is it worth the cost of being confused when you arrive at the door - push or pull? Norman argues that good design comes before beauty.
There is one part where I stray from Norman's thoughts. I think that visibility is important in design. However, I do not think that it should trump elegance. If something is aesthetically pleasing to look at, consumers are naturally inclined to buy it. Especially compared to something that looks like... Ugh! Hopefully designers can find a way to integrate elegance and functionality, but elegance should never be completely disregarded.
Sunday, January 31, 2010
Ethnography Idea
http://christalks436.blogspot.com/2010/01/ethnography-humans-and-doors.html
Tuesday, January 26, 2010
Collabio: A Game for Annotating People within Social Networks
Collabio is a game that this group developed for Facebook - a social network. The point of the game is to have the highest score, so it is basically a fight for the leaderboard. To get points, a user must enter tags for their friends and try to guess the tags that have been applied to them. There is also a 'friends who know me best' section, which can be seen in the screenshot below.
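As a rough illustration of the guessing side of the game, here is a sketch of one plausible scoring rule: a guessed tag earns a point for each friend who also entered it. The exact rule is my assumption, not something this summary takes from the paper.

```python
from collections import Counter

# Hypothetical Collabio-style scoring: the points-per-guess rule below
# is my assumption for illustration, not the paper's actual mechanic.
def guess_score(my_guesses, tags_on_me):
    """tags_on_me holds every tag entered for this user, one per tagger."""
    counts = Counter(tags_on_me)
    return sum(counts[g] for g in my_guesses if g in counts)

print(guess_score(["runner", "texan"], ["runner", "runner", "gamer", "texan"]))  # 3
```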
The group conducted a user survey to test tag accuracy. They found that the majority of tags were affiliations, while fewer tags conveyed interests, expertise, and hobbies. They concluded that most of the tags were accurate descriptors of the people in the social network.
Comments:
I find it difficult to find the merit in this research. I know that tags contain useful, easy-to-use information about (in this case) people, but why aren't the existing games good enough? I am not sure how much incentive the user has to spend time tagging and guessing tags, either. The main question I find myself asking after this paper - why?
Augmenting Interactive Tables with Mice & Keyboards
The research presented in this paper is very interesting. It brings forth an idea that I have not seen associated with touchscreen technology: combining physical input devices with an interactive tabletop. They hypothesize that this will "provide spatial sensing, augment devices with co-located visual content, and support connections among a plurality of devices."
The group implemented several different ways to interact with the tabletop via mice and keyboards (one is sketched after the list):
- Drag a document to your keyboard to dock
- Place your keyboard on a document to dock
- Type commands through your keyboard
- Place devices close to each other to link them
- Link mouse and keyboard by clicking on the keyboard
- Remote 'touch' via mouse
- "Leader Line" which locates the mouse pointer
Comments:
I thought that the idea presented was extremely innovative. They mentioned that some previous work had been done in the area, but that their research would look further into the topic. Since touchscreen technology has become a front runner as of late, this research could definitely come in handy. I think it is great that they are adding accuracy to the input methods of tabletops. Because tabletops are starting to mature application-wise, this research is very relevant.
Wednesday, January 20, 2010
A Practical Pressure Sensitive Computer Keyboard
This paper starts by mentioning that it is "sobering" that the keyboard has changed little with the advances in computer-human interaction. The main idea here is to make a change to the keyboard for the better. They realize that the only successful altered keyboards are ones that make very little change to the preexisting model. Something that the pressure sensitive keyboard has going for it is that it looks and feels exactly like a standard keyboard - the only difference being that it can report the pressure at which the keys are pressed.
Most modern keyboards use a flexible membrane to register when a key is pressed. This model uses a pressure sensitive membrane instead: they have created a membrane where "contact ... decreases in resistance as force is increased," which is the opposite of normal behavior.
The paper goes on to discuss "practical" applications of a pressure sensitive keyboard. One example given is in gaming - if you want to move faster, simply press the key harder. Another example is conveying emotion during instant messaging: press keys harder to convey more emotion, which saves time because the user does not have to scale the font to their liking.
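Here is a minimal sketch of those two uses, assuming the keyboard reports a normalized pressure value in [0, 1] per keystroke; the API and the exact mappings are hypothetical, not from the paper.

```python
# Hypothetical mappings from per-keystroke pressure (0.0 - 1.0).
def run_speed(pressure, base=1.0, max_boost=3.0):
    """Gaming: press harder to move faster."""
    return base + pressure * (max_boost - base)

def font_size(pressure, min_pt=10, max_pt=24):
    """Messaging: press harder to convey more emotion via larger text."""
    return round(min_pt + pressure * (max_pt - min_pt))

print(run_speed(0.9))   # 2.8 -> near top speed
print(font_size(0.2))   # 13  -> a calm keystroke
```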
Comments:
I think that the subject matter is somewhat interesting and relevant. However, I think that there is a problem with the fact that they are trying to change the keyboard. It is very difficult for me to see any practical applications in everyday desktop use with this pressure sensitive keyboard. After all, would you press keys harder to convey emotion or would you use a smiley?
Mouse 2.0: Multi-touch Meets the Mouse
Touchscreen technology has been advancing rapidly as of late. However, it seems as though this technological advancement is only being taken advantage of in mobile and tabletop devices. Because a desktop environment is still preferred for most computing tasks, there needs to be some way to bring multi-touch to the desktop. Naturally, the mouse was chosen - enter the multi-touch (MT) mouse. This research covers five different prototypes of MT mice, each of which "presents a different implementation and sensing strategy that leads to varying device affordances and form-factors, and hence very unique interaction experiences".
Each mouse has a very intricate design. Some of them use a system of cameras and infrared illumination, while others use optical sensing grids. Though many attempts have been made to add to the standard mouse, the only successful addition thus far has been the scroll wheel. Since the use of a standard mouse is embedded in most users, the idea for MT mice is to "support multi-touch gestures alongside regular mousing operations." To do this, they conjured the multi-touch cloud. In the picture below, the cloud is shown on the computer screen and the user can click on any portion of the cloud. This is an example of seamlessly integrating multi-touch into standard mouse use.
The group conducting this research did a user study in which six different users were asked to perform Rotate-Scale-Translate (RST) actions with all five MT mice. Of the five, users preferred the Arty mouse the most (pictured third above). The only problem with the Arty mouse is that it has two points of touch - compared to the Orb mouse (pictured fourth above), which has five potential points of touch. The Orb mouse was also a hit amongst the users, but the feedback suggested that Arty was more natural.
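For reference, here is the standard two-finger formulation of an RST gesture, the kind of action the users performed: translation from the centroid delta, scale from the distance ratio, and rotation from the angle delta between the two touch points. This is textbook geometry, not code from the paper.

```python
import math

def rst(p1, p2, q1, q2):
    """p1, p2: initial touch positions; q1, q2: current positions."""
    def centroid(a, b): return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    def dist(a, b):     return math.hypot(b[0] - a[0], b[1] - a[1])
    def angle(a, b):    return math.atan2(b[1] - a[1], b[0] - a[0])
    c0, c1 = centroid(p1, p2), centroid(q1, q2)
    translate = (c1[0] - c0[0], c1[1] - c0[1])   # centroid delta
    scale = dist(q1, q2) / dist(p1, p2)          # distance ratio
    rotate = angle(q1, q2) - angle(p1, p2)       # angle delta, radians
    return translate, scale, rotate

print(rst((0, 0), (1, 0), (1, 1), (1, 2)))  # moved, same size, rotated 90 degrees
```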
The research concluded that their contribution was a technical one, opening the door to integrating multi-touch into the desktop. They plan to continue work in the area by making better prototypes of their models and testing them again. They will also study the interaction techniques of these new devices.
Comments:
I think that it goes without saying that this is very significant work. It is interesting too! I believe that the technology for multi-touch environments is going to make huge strides in this decade, and this research is pointing us in the right direction. If we can get users more accustomed to integrating multi-touch into everyday computing, then we are essentially making the UI layer transparent. After all, we use our fingers and thumbs in intricate ways in everyday activities - why not take advantage of our hand mastery?
I thought that the design of each MT mouse was interesting, but I was more interested in the observations of the user study - I wanted to see how the users interacted with the mice. The paper seemed to be lacking here because it focused more on the concepts and design of each mouse than it did on the observations and results.
It is obvious that there is plenty of room for improvement in desktop MT mice. However, I think an even more interesting area for continuing work is trying to make multi-touch the norm. How long will it take until users are generally able to perform MT tasks with ease? How long will it take for widely used applications to incorporate MT interfaces?
Tuesday, January 19, 2010
Introduction
Hi! My name is Brett Hlavinka, and my email is brhlavinka@yahoo.com. I am a 3rd year senior CPSC major at Texas A&M. I enjoy working on projects that are human-centered, so computer-human interaction was a natural choice for me. In ten years, I expect to have a job that I enjoy and a family to share it with. I hope that the user interface layer will become completely transparent soon, and I believe that touchscreen technology will be a large part of that advancement. Outside of computer science, I enjoy hanging out with friends and playing sports and video games.