20 October 2009

Agile & UX

The presentation below started out as a short talk at the UX Cocktail Hour. However, the presentation has been gaining in popularity, so I decided to post it here. Because it does not really stand alone for people who were not at the presentation, some misunderstandings have resulted. I will record the presentation with audio, but in the meantime, let a few words here suffice:
It is a design-centric view of Agile, or rather a war-weary, design-centric view of Agile.
The main point is that Agile is not a development method so much as a way of setting aggressive deadlines. What happens in one Agile project does not predict success for another. Instead, the designer needs to be agile in figuring out how they can best fit in. However, that agility should not extend to your design process. Designs still need to be well-thought-out concepts, not something grown together in piecemeal increments.
The bottom-line message (found on the last slide) is that to be truly successful in Agile, you need to follow your own design process but be intimately involved in the Scrum process, preferably as Scrum Master. This is essential for maintaining an overview of what is going on with your design. At the very least, own the user-facing stories/requirements in the product backlog.
And Sherlock Holmes and Dr. Watson were meant to personify regression testing.

01 October 2009

Halcyon days at the EuroIA Conference

Last week I attended the EuroIA conference. I was there primarily to give a talk with my former Google colleague, Greg Hochmuth, on a project we did on on-line privacy. To be honest, I had low expectations for the conference, thinking it was not going to be very professional. That was my estimation of the IA movement in general; I favored the more rigorous CHI model. This reliance on, and faith in, CHI is why I have been working so hard to bring practitioners into CHI with the design track work and of course the DUX conference series, etc. I assumed that CHI was where the interesting professional UX work would be done. I did not expect any such thing at an IA conference, which I thought was too narrow and too niche to be interesting.
I was wrong and closed minded, both of which I find annoying.
I was quite surprised to attend a very fine conference with a strong practitioner focus, with competent representatives from industry giving case studies and thought-provoking discussions. There were, of course, more than a few misses. However, when you hit a miss at a CHI conference, you have really wasted your time on some inapplicable, pedantic presentation. These were all interesting, even if not earth-shattering.
I was also pleased to see that the attendees had a kind of willful confusion of IA with UX. Eric Reiss, one of the leaders of the conference series, said early on that he was proud they would have no debates on terminology or definitions.
What is IA
It seems to me that IA (Information Architecture) and HCI (Human-Computer Interaction) are two ways to achieve the same effect. One is information driven, the other is interaction driven. Both strive for but don’t quite achieve UCD. To borrow a Mahler analogy, these two movements seem to dig from opposite sides of the mountain to reach the center.
Setting the stage for the conference was an interesting case-study keynote given by Scott Thomas on his work for the Obama presidential campaign web site. A refreshing talk, one you would probably never hear at CHI, charting the work he did as designer, web developer, and IA for one of the most successful and high-profile web presences.
It was clear at the conference that there are those who do specialize in IA and don't touch interaction design with a ten-foot pole; however, the majority seem to blissfully switch between the IA, ID, and UX designer labels based on what will get them the job or the most influence. The resulting conference content is interesting and competent, usually not pedantic (there were a couple of regrettable forays into pedantry--oh, I am being pedantic, aren't I?). I will hasten to add that probably 10% of these presentations would have been accepted at CHI.
CHI Bashing
Not that I am in any way bashing CHI (well, I guess I am, sort of). CHI continues to be dominated by academia; that is its reason to exist. So it makes sense that more practitioner-oriented organizations thrive and offer better conference experiences, like EuroIA; SXSW is another such conference. However, there are some design heavyweights very active and present at CHI. People like Bill Gaver, Bill Verplank, Bill Buxton--hey, are all of them named Bill? So I guess we should also include Bill Card and Bill Dray...
Still, going to a CHI conference is daunting, and if you do not stick to the design or practitioner-focused papers it is really hit and miss. Then there is also the unfortunate academic who strays into a design paper and lambastes a practitioner for not holding double-blind studies on a project with a limited client budget. Ah, it is always embarrassing when people can't check their egos at the door.
So, it is good there are several credible alternatives to CHI. I guess this means I need to attend the next IA Summit and see what that’s all about. I don’t think I can take any more good stuff...
This profession
In the end, I had a friendly, familiar feeling at EuroIA, a feeling like I had met these people before. It seems that regardless of whether you are at CHI or EuroIA or UPA or wherever, people of our profession(s) share this common empathic passion for our stakeholders. This makes us a particularly caring and sympathetic tribe.

04 September 2009

Measuring the User Experience

This week's post is a review of the book Measuring the User Experience by Tom Tullis and Bill Albert. From time to time other book reviews will follow.

Why a book review

The current state of books on UX is deplorable. Many UX books can’t make up their minds whether they are about a given subject or about the UX world according to Garp. Just looking at my UX bookshelves, I notice there are, for example, many books by authors with a narrow or focused expertise. These authors write books supposedly about a narrow subject, which they sustain for about a chapter or two before deteriorating into their own homemade version of the user-centered design process that has little if anything to do with the subject of the book they intended to write. The result is a book with grains of truth in a stew of platitudes. A review of just three books, one claiming to be on prototyping, one on designing, and another on UX communications, reveals that all of them cover more or less the same material, such as user research, task analysis, personas, and prototyping, but they do so with both conflicting terminology and conflicting methods.

My ideal UX books are those that pick a subject and stick to it. They explain their topic in a way that is process independent, so that it can plug into whatever processes companies or organizations use. The fact of the matter is that no two organizations adopt the same software development process. What they all have in common, whether they are called agile or waterfall, iterative or serial, is that they are all Machiavellian. Therefore, if a book's material cannot fit into the current Machiavellian software development processes, then the book is largely worthless, even if entertaining (though probably not as entertaining as E.M. Forster).

I think one of the best services I can do, then, is to help people navigate around these literary cataracts, and so I am starting a series of book reviews. These reviews will try to highlight the best of the UX literary corpus.

Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics by Tom Tullis and Bill Albert

I want to start with one of the brighter lights in our industry, Tom Tullis. I have often wondered why he had not written a book earlier, given the high quality of contributions he has made to our profession. Well, the wait is over.

It's true, it is a book on usability metrics. Now, I realize there are some people who hate metrics. These people particularly hate any accountability for their design work. I can’t tell you the hate mail I received, even from large design firms, when, as Interactions editor, we did a special issue on measuring usability that was guest edited by Jeff Sauro. Well, I purchased Measuring the User Experience (MUX, if you will) expecting a more thorough version of that special issue, one that went into the statistical significance of usability testing. I was in for a very welcome surprise: this book does not just cover summative usability statistics but many different ways to collect user experience metrics, and the authors also discuss proper analysis techniques.

The book empowers the reader to make the right decisions about which methods to use and what the metrics can and cannot tell you. As the book states, metrics can help you answer questions such as:

  • Will the users like the product?
  • Is this new product more efficient to use than the current product?
  • How does the usability of this product compare to the competition?
  • What are the most significant usability problems with this product?
  • Are improvements being made from one design iteration to the next?

This is a refreshing change from just looking at time on task, error rates, and task success rates. Though of course these play a role, they are but means to the end of answering these larger questions. Furthermore, the book also points out that there is an analysis step that can greatly alter the seemingly obvious findings.
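To make that analysis step concrete, here is a minimal sketch, in Python with invented numbers (not an example taken from the book), of how a raw task success rate changes character once you put a confidence interval around it:

```python
# Hypothetical sketch: a raw task-success rate can look very different once you
# add a confidence interval -- the analysis step that alters "obvious" findings.
# The numbers below are invented for illustration.
import math

def adjusted_wald_interval(successes, trials, z=1.96):
    """Agresti-Coull (adjusted Wald) 95% interval for a success proportion."""
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Say 7 of 10 participants completed the checkout task (invented data).
low, high = adjusted_wald_interval(successes=7, trials=10)
print(f"Observed success rate: 70%, 95% CI roughly {low:.0%} to {high:.0%}")
# With only 10 participants the interval runs from about 39% to 90%, which is
# exactly why the analysis step matters before drawing conclusions.
```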

I cannot tell you the amount of time and money I have seen wasted as perfectly reasonable and wonderful user research was conducted, only to have its results obfuscated and mutilated beyond use. This book will not just enable the usability tester or researcher to avoid such mistakes; it also empowers a project manager to see to it that a development project designs a solid usability study that fits the goals and needs of the development team.

In their discussion of designing the right usability study, the authors guide you in choosing the right metrics.

First you need to establish what the goal of your study is and what the goals of the users are. On that basis you can then look at which metrics apply; the authors identify ten common types of usability studies:

  1. Completing a transaction
  2. Comparing products
  3. Evaluating frequent use of the same product
  4. Evaluating navigation and/or information architecture
  5. Increasing awareness
  6. Problem discovery
  7. Maximizing usability for a critical product
  8. Creating an overall positive user experience
  9. Evaluating the impact of subtle changes
  10. Comparing alternative designs

A key issue they then discuss is looking at budgets and timelines, a.k.a. the Machiavellian business case for the study. From there you can tailor the type of study: how many participants, and whether it will be tests, reviews, focus groups, or a combination thereof.

In the conduct of these studies it is also important to track the right metrics. Tullis and Albert identify the following types of metrics:

  • Performance metrics -- time on task, error rates, etc.
  • Issue-based metrics -- particular problems or successes in the interface, along with severity and frequency
  • Self-reported metrics -- how a user reports their experience, via questionnaires or interviews
  • Behavioral or physical metrics -- facial expressions, eye-tracking, etc.

The book handles these metrics as they should be handled: as part of an overall strategy, not favoring one over another as innately superior. All too often usability testing consultants are one-trick ponies, prisoners of whatever limited toolset they happen to have learned.

This book allows the reader to assemble all the needed metrics across types to achieve a more holistic view of the user experience, or at least sensitizes them to the fact that they are not looking at the whole picture.
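As a purely illustrative sketch of what assembling metrics across types might look like in practice -- the field names and values below are my own invention, not the book's:

```python
# Hypothetical sketch of a per-participant record that combines the four metric
# types, so that no single metric dominates the picture. All field names and
# values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ParticipantMetrics:
    participant_id: str
    # Performance metrics
    task_time_seconds: float
    errors: int
    task_completed: bool
    # Issue-based metrics: (issue description, severity 1-4)
    issues: list = field(default_factory=list)
    # Self-reported metric, e.g. a post-task rating on a 1-5 scale
    satisfaction_rating: int = 0
    # Behavioral/physical metric, e.g. share of time spent looking at the task area
    gaze_on_target_ratio: float = 0.0

p1 = ParticipantMetrics(
    participant_id="P01",
    task_time_seconds=212.0,
    errors=2,
    task_completed=True,
    issues=[("missed the save button", 3)],
    satisfaction_rating=4,
    gaze_on_target_ratio=0.61,
)
print(p1)
```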

What is also amazing is the focus and discipline in the book. I think many other authors would not be able to fight the temptation to then expand the book to include how to perform the different types of evaluations, usability tests, etc. These authors acknowledge there are already books that cover these other related aspects and keep their emphasis purely on the subject matter of their book: measuring the user experience.

Yes, the book also gets into statistics and even shows you how to do simple, straightforward statistical analysis using that panacea for all the world’s known problems, Excel (but that is next week’s topic).
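For those who would rather not fire up Excel, the same kind of simple summary takes only a few lines of Python. This is a hypothetical sketch with invented timings, not an example from the book:

```python
# Hypothetical sketch: a simple time-on-task summary with a confidence interval,
# the sort of thing the book does in Excel. Timings below are invented.
import math
import statistics

def summarize_times(label, times, t_critical=2.262):  # t for df=9, 95% two-sided
    m = statistics.mean(times)
    sd = statistics.stdev(times)
    margin = t_critical * sd / math.sqrt(len(times))
    print(f"{label}: mean {m:.1f}s, 95% CI {m - margin:.1f}s to {m + margin:.1f}s")

design_a = [34, 41, 29, 52, 38, 45, 31, 40, 36, 48]  # seconds, 10 participants
design_b = [27, 30, 25, 33, 29, 35, 26, 31, 28, 32]

summarize_times("Design A", design_a)
summarize_times("Design B", design_b)
# If the intervals barely overlap, a formal test (or more participants) is the
# next step before claiming one design is faster.
```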

And just in case you're wondering, the usability score for Amazon is 3.25, while Google’s is 4.13 and the Apple iPhone is a mere 2.97. While the web application suite I just finished designing got a perfect 4.627333.

29 July 2009

Confusing a Heuristic with a Moral Imperative

Heuristics are an excellent aid in identifying potential problems with a given user interface design. The trouble arises when people come to rely on them as the sole input, as if they could somehow overtake the more rigorous and far more accurate methods of evaluation. So please don't read what follows as anti-heuristic but rather as anti-misuse of heuristics.
I have been working more and more with consultants and pseudo-designers who evaluate web applications with a ton of heuristics in their hands. I can hear them clear across cubeville, clipboards in hand:
"This is terrible, you are inconsistent between these pages, those pages ignore web standards, these other pages behave differently than the others, and oh my gosh look at all these unnecessary graphics, rip these all out. Get rid of the background coors, and ugh those button colors!"
Concept and user groups can trump heuristics
The fact is there could be a valid reason for violating every single one of these heuristics. Worse yet, there are evaluators of this type who, without so much as learning the context, go in and tear apart a site for violating standards, UI conventions, and other heuristics of all sorts.
A well-defined and innovative concept will often require breaking a few rules. Moreover, if a concept is tailored to a specific user group, and the evaluator is not part of that group, then the heuristics are almost all invalid.
Heuristics are defined as follows (according to my Mac dictionary, and why should we doubt Apple?):
Heuristic (/hjʊˈrɪs.tɪk/) is an adjective for experience-based techniques that help in problem solving, learning and discovery. A heuristic method is particularly used to rapidly come to a solution that is hoped to be close to the best possible answer, or 'optimal solution'. Heuristics are "rules of thumb", educated guesses, intuitive judgments or simply common sense.


Well, here are some of these so-called common-sense rules of thumb, with some food for thought to place alongside them. I am using the list from Jakob Nielsen's site, just to pick 10 basic ones (http://www.useit.com/papers/heuristic/heuristic_list.html). This is not to pick on Jakob; the point here is to discuss the pitfalls when heuristics are used as the sole means of evaluation. As such, every heuristic can be picked apart and discredited; these are just 10 examples:

Below, each heuristic is followed by Nielsen's justification and then my own "Yes, but..."
Visibility of system status
Nielsen: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Yes, but: Maybe the user doesn't and shouldn't care. This heuristic assumes a user population that actually cares about what is going on. Many users couldn't care less unless it's going to cause them a problem. You should have some basic trust built with your users, and that trust may mean only informing them in the case of a problem, or handling the back-end status problems yourself.
Match between system and the real world
Nielsen: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
Yes, but: Unless the purpose of the site is teaching the user a domain or a new task. An example would be Google AdWords, where a novice user does need to learn some basic advertising terminology or the advanced features will be lost on them.
User control and freedom
Nielsen: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Yes, but: This heuristic seems to justify poor design. User control and freedom are about more than just undo and redo; they come from the safety to explore and play around with the system. This is achieved through facile interaction design, a heuristic I have never seen listed.
Consistency and standards
Nielsen: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Yes, but: This assumes 1. the user has no other reference point than platform standards, and 2. the platform has standards, or usable ones. Again, this justifies lazy design. Standards are a fallback (I say this as someone who has written UX standards for three major software companies); the conceptual design should be leading.
Error prevention
Nielsen: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Yes, but: Here is a useless heuristic. What is an error? One man's error is another man's exploration. Maybe you should enable errors?
Recognition rather than recall
Nielsen: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Yes, but: Indeed, the memory load should be lightened for the user. However, the better way to do this is to employ well-established visual and interaction patterns. Worse, this explanation can be very misleading for the naive reader. I have seen many a designer and developer use this heuristic's explanation to 1. attack progressive disclosure and 2. create a ridiculously busy screen, throwing all functionality with equal visibility into a "one-stop shopping" kind of screen, or worse, a screen with a huge amount of text explaining how to use it. All of these are, from a cognitive-ergonomic perspective, completely unusable.
Flexibility and efficiency of use
Nielsen: Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Yes, but: Building in redundancy to support multiple styles of interaction would be a better way of putting this. However, this needs to be seen in the context of a broader design concept. For example, there is often a designer fetish for drag and drop, when it is often only the designer who wants to perform this action. Also, implementing drag and drop in one place invites the user to try it everywhere, and it is very annoying when it does not work as they expect. So pick these accelerators well, and not just for their own sake.
Aesthetic and minimalist design
Nielsen: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Yes, but: The explanation here is at odds with the heuristic. The heuristic seems to cry for everything to be a Scandinavian-style minimalist design, whereas the explanation goes on about text. The visual design should leverage the brand and the ability to communicate. Gratuitous graphics are supposedly bad, unless they delight the target users (think of Google's doodles on their home page). As far as minimalism goes, I recall Tufte, who said anyone can pull information out; how you pack information into something and keep it intelligible and usable is the real challenge.
Help users recognize, diagnose, and recover from errors
Nielsen: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Yes, but: My only problem here is the "precisely indicate the problem." I am sure Jakob did not mean go into the gory technical details of the problem, but rather concisely describe the issue. E.g. "Your data was not saved." not "Your data was sent to the application layer and experienced a time out longer than 3 ms and the system sent back the data in an unusable format."
My formula for error message writing (a small sketch follows at the end of this list): a short sentence on what happened (forget the why), then a short sentence on how to fix it. A link can be added -- "Learn more" or "Why did this happen to such an undeserving user as me?" -- for the morbidly curious.
Help and documentation
Nielsen: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Yes, but: Far from apologizing for help, we should revel in it. Help and documentation should be electronic and in context. For example, micro help (a question-mark icon or a "What's this?" link that works on mouseover or as a small popup) often assists the user without interruption.
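As promised above, here is a minimal sketch of that error-message formula in code. This is purely illustrative: the function name, wording, and learn-more link are my own inventions, not anything prescribed by the heuristics discussed here.

```python
# Minimal sketch of the error-message formula: one short sentence on what
# happened, one short sentence on how to fix it, and an optional link for
# the morbidly curious. All names and wording here are invented.

def format_error(what_happened, how_to_fix, learn_more_url=None):
    message = f"{what_happened} {how_to_fix}"
    if learn_more_url:
        message += f' <a href="{learn_more_url}">Learn more</a>'
    return message

print(format_error(
    "Your changes were not saved.",
    "Check your connection and try saving again.",
    learn_more_url="/help/saving-issues",
))
```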

07 July 2009

The mythical 80/20 rule

In a sad day for most digital products and services, an Italian economist, Vilfredo Pareto, observed that 80% of Italy's wealth was owned by 20% of the population. From that economic observation has come a torrent of the most far-flung interpretations of a non-existent 80-20 rule. There is no 80-20 rule. There never was and never will be. Yet so many developers, designers, and product managers invoke this mythical rule to justify the most outlandish pipe dreams, shoddy work, or just plain laziness. Which is a pity, because it ruins the credibility of a principle that in 20% of circumstances can be 80% helpful.

The crux of when the 80/20 principle is helpful is when you need to fend off perfectionists. The 80/20 principle helps you illustrate that a minority of factors can produce a majority of effects, a.k.a. the 'biggest bang for your buck.' But how can you tell when the principle is being used wisely or not?

80/20 Pipe dreams
Some G-d forsaken GUI guru once said that 80% of screens could be driven by templates, while only 20% of the screens needed to be designed. This completely unsubstantiated drivel has led to many efforts to "automatically generate a UI". It has led to millions of wasted dollars and development effort in worthless tools and idiotic processes, all aimed at designing without designers. If the guru was right, reasoned the technocrati, then about 80% of the screens could be generated; you hit the 80/20 rule and your applications will be fine except for 20% of the time. Some more thoughtful product managers would then hire in an army of designers to cover the 20% they thought 'really needed design.' But even then, as one product manager on just such a project told me, in his own pipe dream: "I want you to design templates with such a narrow path of movement that a designer can only make the right choice."
The reality of the matter is more along the lines that 80% of a given screen could be generated while 20% needs to be designed, but oh, the devil is in the details, and often that 20% is where the most difficult design challenges lie. Therefore, the 20% should end up driving the other 80%, not the other way around. [Never mind the fact that this pipe dream totally negates the necessity and power of the conceptual design (see It's all about the Concept).]

80/20 Shoddy work
Often someone will deliver (or even ask for) 80% of the work that really needs to get done. This is usually done to purposefully keep the quality low. Example: a software engineer asks the designer for rough documentation that is quick and easy to read, giving just the 20% of key interactions and leaving the 'no-brainers' to the engineer himself. This assures the design bar remains low. With the bar this low, shoddy work can triumph over the design goals. The design fails to deliver, but it was set up to fail and no one even expected it to succeed in the first place. This way the technology can triumph, reasons the developer, while the design systematically takes a back seat.

80/20 Laziness
All too often designers themselves will end with a rough sketch and miss some of the finer details of the design they need to deliver, again claiming to deliver according to the 80/20 rule. Often the excuse is: "No need to over-deliver, those developers won't build it to spec anyway." Or: "Stuff always comes up during development; it will change anyway. I will just give them 80% and leave them 20% margin to play with." This is pure laziness. Any good designer will tell you that the devil is in the details. Or, as some of the better designers have pointed out, G-d is in the details, because those small details are often what separates an ordinary design from a truly excellent, well-thought-out design. That last 20% is, again, the last thing you want to leave to a developer or other non-designer. Furthermore, those gnarly details you have solved will go a lot further in helping developers improvise when they have to than if you just leave them a blank space to fill in all on their own.

80/20 rawks
The 80/20 principle works excellently when you need to stop someone going off into the weeds of perfectionism. The software should be bug free. The software should please every user and let them do everything. The software should have perfect tests. Anything that reeks of perfectionism is liable for the 80/20 rule. For, as we all know, at some point the waters get murky: one user's bug is another user's feature. Just make sure whenever someone pulls an 80/20 on you, or you pull it on someone, that you have an objective measure to back up your 80/20. Yes, 20 percent of the people own 80% of the wealth. Yes, if we provide 20% of the functionality we will make 80% of the users happy, as we can see in these usability tests, etc.
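One way to back an 80/20 claim with an objective measure is a simple cumulative tally. The sketch below uses invented usage counts purely for illustration; the point is that the claim rests on numbers you can show, not on folklore:

```python
# Hypothetical sketch: given how often each feature was used in a log or a
# usability test (invented counts below), find the smallest set of features
# that covers 80% of observed use.

def features_covering(usage_counts, target=0.80):
    total = sum(usage_counts.values())
    covered, chosen = 0, []
    for feature, count in sorted(usage_counts.items(), key=lambda kv: -kv[1]):
        chosen.append(feature)
        covered += count
        if covered / total >= target:
            break
    return chosen, covered / total

usage = {"search": 420, "open": 260, "save": 180, "share": 60,
         "export": 40, "print": 25, "settings": 10, "macros": 5}
chosen, share = features_covering(usage)
print(f"{len(chosen)} of {len(usage)} features cover {share:.0%} of observed use: {chosen}")
```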

30 June 2009

Usability Engineers vs. Designers: the process problem

Another week has come (mine starts on Tuesday; you can do those things when you live in Europe). More and more, we see problems surfacing not from having the wrong people in place but rather the wrong process. My previous post discussed the rampant wrong process around prototyping. Here I want to touch on the process issues with usability testing and design. Consider this familiar and completely unnecessary scenario:

A designer works on a conceptual design with the customer. Then he works out a detailed design into a prototype that can be tested. So far so good. But what goes wrong is that the usability engineer is often disconnected from both the design concept and the detailed design. The usability engineer ends up suggesting new designs that totally contradict the conceptual design. The designer is gone. The engineering team implements the changes, and the result is a Frankenstein's monster that, despite the best UX resources, fails in the marketplace.

The obvious problem is the process disconnect between designer and usability engineer. And the problems are serious. I want to discuss two aspects of these problems and how to resolve them: false negatives in a usability test, and how to deal with a usability engineer's design advice.

False negatives
When usability testing is conducted without input from the designer, this can lead to many false negative issues in the usability test. Examples of the errors that can result include:
  • Early tests will report usability issues with conventions that a user is expected to learn over time, or with a different task flow than the one being tested.
  • Early tests, especially lower-fidelity ones, may not catch learnability or system-feedback issues due to the lack of visual fidelity needed to communicate with the user.
  • Test moderators, not knowing the underlying concept, may inadvertently introduce the topic or task in a way that is at odds with the design, thereby confusing the test subject.
This list is just the tip of the iceberg. These negative side effects are completely avoidable by making sure the designer and usability engineer work together on the usability test script, identifying tasks and their importance, and the task order when that is appropriate for a test (for example, when one step is a required gateway, e.g. sign-up). Also, let the usability engineer attend some of the conceptual design sessions and, OMG, let them participate in the conceptual design, so they gain a thorough understanding of it. Conversely, designers should observe the usability tests whenever possible. The tests themselves can be so much more inspiring and vivid than even the best-written report.

Usability Engineer's design advice
It is an expected part of the usability engineer's work to include not just data and analysis but also design advice or alternative designs. This does not need to be a problem. But without setting expectations, the innocent product manager or software engineer confronted with new, contradictory designs can quickly conclude that the UX profession is a screwed-up group who cannot make up their minds.
Among the possible problems with blindly taking a usability engineer's design advice are:
  • Designs may not be an ideal solution for the problems they have discovered
  • Designs often recommend things that will cause usability problems elsewhere by introducing conceptually non-standard interactions
  • Designs ignore larger issues, since the advice focuses on the testing and not the larger picture (e.g. business and technical requirements, which may lead to a different solution than suggested).
    • A common example of this is when the Usability Engineer suggests something that is technically impossible for the requirements or constraints.
These and other issues with usability engineers' design advice harm everyone's credibility, both the designer's and the usability engineer's. This is not to say that usability engineers should not give their advice. But it is absolutely important to set the right expectations. Usability engineer design advice should be viewed as input to the problem, not the solution. If the usability engineer includes the design rationale, this will often give the vital information for coming up with a more ideal solution.
The design rationale should enumerate the objective reasons for the alternative design. This allows the designer to bridge the problem design with a solution based on objective criteria instead of personal taste.
[Objective information either refers to the usability data itself (e.g. only 2 out of 12 users understood this command) or to conceptual data based on requirements (this design does not appeal to our target users, or is not consistent with the image/branding of the company). Both types of information can lead to a solution. Comments like "I don't like that color" or "It doesn't look right to me" do not lead to workable solutions.]

Usability data misinforming design
Usability data is rarely communicated with the limitations or shortcomings in the data, and this is a real pity. All too often a usability engineering report reads like a set of demands and commandments without stipulating under what conditions the advice or analysis holds. Things like significance, persistence, sampling issues, etc. are often underplayed. Again, a faulty process is the problem. Many usability engineers are under pressure to work quickly and also to find dramatic and significant results. This can put a usability engineer between a rock and a hard place: asked to review a product with three of the janitor's friends and then come up with a list of "just the most important recommendations." Ah, if life were only so easy. Yet we are constantly being put in this position. The client may always be king, but findings that include a little context setting would help the end-users of the usability reports.

Lessons learned
  • Designers and usability engineers should insist on working together on projects. That means the usability engineer is available during the conceptual design phase and the designer is available during the test design phase. (With iterative testing, the designer must be available for each design cycle.)
  • Customers should require designers and usability engineers to work together. This will often require the usability engineer to come in early, for 1-2 days in the conceptual design phase, before their main work begins week(s) later. (Yes, that also means that if the engineer is an external contractor, the customer must pay the usability engineer for this work.)
  • Customers should also realize that usability engineers do not provide solutions; they propose challenges and problems that need to be solved.
  • Usability engineers may be great designers or crap designers, but as long as they include objective design rationale for their proposed solutions they will always be helpful.

26 June 2009

Prototyping software tools that work

The blogosphere is full of posts about which prototyping tool is best. This fight over Axure, Flash, Dreamweaver, PowerPoint, etc., is misplaced; it treats prototyping as a single, concrete, one-size-fits-all activity, or at least assumes one tool fits all. This is misplaced for two reasons. First, the discussion disempowers the designer, product manager, or developer who wants to use software tools already at their disposal. Second, it ignores the fact that prototypes are sometimes created extremely rapidly (like in an hour) and sometimes more thoroughly (over a month). Prototypes at these two extremes have far different tool requirements. Therefore, it is far better to think of prototyping toolsets.
A prototyper should have many tools, just like a mechanic has a toolbox. Otherwise the prototyper who knows just one tool is not very effective. Just as to the hammer every problem looks like a nail, a designer who knows just one prototyping tool offers just one type of solution when many, many more are possible. If you look from a different perspective, namely at the specific prototyping needs, some surprising prototyping tools emerge.

Specialities
Prototyping tools can usually do some kinds of prototyping better than other kinds. Furthermore, you may also prefer to use a certain tool simply because you know its special features better than another tool's.

General Availability
A prototyping tool that is not generally available to the design team makes the team reliant on a chosen few with access. If the resulting file format is also unreadable outside the prototyping tool, then access to the prototype is further limited. In some organizations this is a good thing; but when you want to maximize the knowledge of your design team, having a prototyping toolset that empowers people is important.

Interaction styles & Functionality
Tools support certain platforms, interaction styles, and functionalities. These actually limit what types of prototypes you can create. For example, if the widget set in your prototyping tool does not support spinners, then spinners will not appear so easily in your prototype. Moreover, the more sophisticated the UX widget set and interaction capabilities, the more conservative and less innovative your prototype will be: conservative because you are cornered into the widget set and interactive capabilities of the tool.

Design Team Talents
If a design team loves one product, making the switch to another product robs them of the talent they already have. Of course, that can be counterbalanced if there are long-term advantages to adding the new tool to the prototyping toolset.

Deliverable formats
How a prototype will be used is also essential. If the resulting prototype will be a paper prototype, for example, choosing a tool which can easily print out designs and portions of designs would be a factor.

Tools review
There are many different dimensions to look at, since prototyping involves multiple dimensions (fidelity, audience, style, etc.). However, as one handy aid, here is a table grouping prototyping tools by what they are most appropriate for. (I can make more such tables if they are helpful, and I will update this one based on user comments.) The table is more or less an overview of one possible take on similar tools. For your personal organization, it might be helpful to list all the tools you know and note each one's advantages, disadvantages, etc.

Methods and the tools suited to them:
  • Static wireframe: PowerPoint, Excel, Photoshop, Illustrator, Fireworks, Visio, OmniGraffle, HTML editors, Word
  • Storyboard: PowerPoint or other presentation software, Acrobat, Excel
  • Paper: Word, PowerPoint, Excel, Photoshop, Illustrator, Fireworks, Visio, OmniGraffle, HTML editors
  • Wizard of Oz: WebEx or other video conferencing software, or synchronous sharing tools
  • Digital, partially interactive: Excel, PowerPoint, HTML editors, Acrobat, Visio, OmniGraffle, Axure
  • Digital, fully interactive: Flash, Dreamweaver and other HTML editors, Visual Studio, Director, Axure

24 June 2009

It's all about the Concept

This editorial is one very dear to my heart, as it touches on the cornerstone of good design and something I miss in all too many HCI designers' work: the concept. In order to keep to the same points as in the original, I have done little editing from the original.

Good design is harder and harder to find these days. It is disheartening when people present a single window or Web page and ask for an evaluation, especially when the question is: "Is this good design?" How can a design review be conducted on static interfaces? What is possible to evaluate? What constitutes good design? Is it possible to judge a design from static screens?
As we said earlier—in fact way earlier in an article co-written with Wendy Mackay in March 2001—when discussing the importance of contextualizing design, the issue is not whether a design is good, but is it a good design of...what?
When starting to review a single screen, your heuristic-laden ego itches to pour out criticisms. A copy command on the file menu! Are you nuts?! You don't put buttons on the bottom left! Serif fonts! Are you mad? Where did you get that typeface? Spinners are so 1990s! Script typefaces on mobile devices? Split buttons! Are you sure about that? But instead of continuing that tirade, let's pause for a moment and ask: design of what?
First off, let's suggest some good questions. Do you really know the context to start judging a design correctly? What aspect of design are you reviewing? The visual design? The information design? The layout design? The interaction design? How do you "see" an interaction design in a static page?
There are of course many ways to represent all of the above. We're interested in how you do it. Do you divorce these aspects of design, or do you combine them in certain ways? Which combinations have been most successful for you?
It's difficult to divorce one from the others: All aspects of design must work together in a unified concept. That concept involves rich knowledge preceding the design activity: of the end users, their background, their tasks, their mental models. It doesn't stop there: You then need to understand your engineering constraints: what your developers' toolkit can and can't do, what sort of custom code your design will require, and whether your design needs to absolutely follow standards for future evolution and code maintenance, or if you're able to leap into new territory and design a new widget or two. Further rich knowledge that can and should influence the conceptual design: understanding the business model of the company producing the software. Is this a demo? Can it cut corners, or is it production quality? Is it a step in a long line of dot releases? Is it the first version? Can you take risks? Do you need to reach feature and usability parity with other competitors, or do you need to excel and claim best-in-class?
Before you can understand how to design, you need to understand design. The conceptual design is more than any one facet of design; it's a gestalt that is more than the sum of the parts. Taking this perspective, how do you evaluate that single screen?—Jonathan Arnowitz and Elizabeth Dykstra-Erickson

This is a draft version (preprint) of material which was later published in substantially the same form in my Rant column in ACM’s magazine. The published version is a copyrighted feature of Communications of the ACM. All requests for reproduction and/or distribution of the published version should be directed to the ACM.

29 May 2009

The Designer’s Hippocratic Oath—A Reformulation

The practitioner is constantly barraged with new guidelines: platform guidelines, guru guidelines, papers and articles on heuristics, case studies and anecdotes that promote practice directives. And then there’s the occasional Web forum, workshop, or hallway conversation that suggests there is an overarching method to our madness. We enter the fray with this, our May-June Rave, honoring a long tradition and making it our own: The Designer’s Hippocratic Oath. Say it out loud: I Believe!



I swear by Don Norman, Douglas Engelbart, the Eames, ergonomics and all the powers of cognitive psychology and good design, that, according to my ability and judgment, I will keep this Oath and this stipulation—to reckon the teachings of this Art equally meaningful as my own experiences, to share my experiences with my teachers and colleagues and free their perspectives from bias; to look upon all design in the same footing, and to share my thinking, if they shall wish to learn it, without condescension or confusion; and that by precept, lecture, and every other mode of instruction, I will impart a knowledge of the Art to my peers, my colleagues, and to designers similarly bound to the Oath, but to none others who are not willing to endure it.

I will follow that system of regimen which, according to my ability and judgment, I consider for the benefit of my users, and abstain from whatever is deleterious and mischievous.

I will give no crash or unrecoverable error to any one if asked, nor suggest any such counsel; and in like manner I will not give to a single person the means to procure confidential company information or illegal copies of my product.
With openness to learning, to constructive criticism, and to negotiation I will pass my life and practice my Art.

I will not write code or falsely pretend to be a visual designer, but will leave this to be done by those who are practitioners of this work. Into whatever projects I enter, I will go into them with balance for the benefit of the unwitting user and the bottom line, and will abstain from every voluntary act of mischief and corruption; and, further, from the temptation of awkward and incomprehensible user experiences, even as practical jokes or “experiments.” Whatever, in connection with my professional practice or not in connection with it, I see or hear in the life of usability participants, which ought not to be spoken of publicly, I will not divulge, as reckoning that all such should be kept secret, sometimes even when they have signed an NDA and release form. While I continue to keep this Oath unviolated, may it be granted to me to enjoy life and the practice of the art, respected by all men, in all times! But should I trespass and violate this Oath, may the reverse be my lot!

— Hippocrates with some updates by Jonathan Arnowitz & Elizabeth Dykstra-Erickson



This is a draft version (preprint) of material which was later published in substantially the same form in my Rant column in ACM’s magazine. The published version is a copyrighted feature of Communications of the ACM. All requests for reproduction and/or distribution of the published version should be directed to the ACM.

15 March 2009

Still not ready for prime time

It's been almost 2 years since Elizabeth and I first wrote this article, and it is sadly more topical than ever. Until e-voting has a standard of usability and open-source accountability, it will remain an anti-democratic development. E-voting has the power to increase democratic participation; unfortunately, as it is currently being abused, it is still not ready for prime time. --Jonathan Arnowitz

Not to mince words: November 2, 2004 was a low point in American technology. Let’s get this straight: There is a current system that is the lifeblood of democracy, voting. There is a new technology, e-voting, that is being deployed, which has not been thoroughly system tested, not checked for security bugs, not even tested with the wide variety of users who must use the system (many of whom are computer illiterate, more of whom are computer skeptical). If this technology fails there is no way to determine the scope of its failure: There is no paper trail to trace down the existence of a problem. And to add insult to injury, one leading system vendor’s machines can be cracked with a secret two-digit code. (Though we don’t think you can call a two-digit code very secret or difficult to crack.)
To make matters worse, the leading vendor of e-voting machines, Diebold, Inc., has been unashamedly activist-conservative. Their methods have been shoddy at best, and at worst, not in good faith. The Cleveland Plain Dealer reported on August 28th that Walden O’Dell, Chief Executive of Diebold, Inc., who became active in U.S. President George W. Bush’s re-election campaign, was quoted saying he was committed to helping Ohio deliver its electoral votes to the sitting president. Several other publications note similar qualms about the Diebold system, including Federal Computer Week, ComputerWorld, Investor’s Business Daily, and others, although an inspection of partisan publishing will, of course, surface opposing views.
Complicate system problems with human decisions and you have, shall we say, an interesting twist. In Santa Clara, California, where voters were given a choice (“paper” or “digital?”), users suspicious of the touch-screen systems opted to cast their ballots in the more traditional physical way. No problem ... until it became clear that these paper ballots were frequently placed in pink “provisional ballot” sleeves, introducing another layer of fragility on the business of counting votes.
How do smart election officials make bad decisions? Simple, we say: Rush into a technological push before basic security standards, let alone ethical standards, can be developed. If a software company attempted this, the entire development team would rightfully be out of a job.
Yet, this scenario has just played itself out in the U.S. presidential elections. The voices of HCI professionals in this e-voting debacle have been loud, strong, vocal and completely ignored. On reflection: At least we know where we stand.

—Jonathan Arnowitz and Elizabeth Dykstra-Erickson

This is a draft version (preprint) of material which was later published in substantially the same form in my Rant column in ACM’s magazine. The published version is a copyrighted feature of Communications of the ACM. All requests for reproduction and/or distribution of the published version should be directed to the ACM.