22 January 2014

Taking the Fast RIDE: Designing While Being Agile

A new year and a new post on my less-than-active blog: here is an article that originally appeared in interactions magazine.

While many design methods are practiced “in the wild,” the most prevalent one appears to be “Design first and ask questions later”—also known as “Throw it over the wall and see if anybody salutes,” “Launch first, fix later,” and so on. Whatever you call them, these approaches are all responses to the pressure for rapid turnaround mandated by Agile and other high-speed development environments. These design approaches are all proven methods—that is, proven to create Frankenstein UIs within a mere two to three iterations: That’s speed.
A single-minded focus on speed guarantees that these methods produce poor user experiences, because they do not allow for the reflection and deliberation necessary to achieve high-quality, coherent design. Instead, speed breeds pragmatic short-term solutions. An interaction style gets locked down in the early sprints. Then other ad hoc interaction styles emerge for parts added later. Too much emphasis on reactive speed reduces design to puzzle fitting: How can I put this new square-peg function into my round-hole application?
I am concerned that in an attempt to adapt to the pressure for speed, designers are failing to uphold what we know is good design practice. We compromise too much of the interaction strategy in order to “just get down to it.” The end result is competing styles, competing imagery, and unintelligible system/conceptual models.

Agile and the Emperor’s New Clothes

How do you get out of this cycle and still survive in the context of fast Agile or pseudo-Agile (aka reckless) release cycles?
This question may sound odd, coming after years of our learning to live in Agile environments. Agile’s true believers may argue that the whole point of Agile is not about producing speed, but instead about creating a structured process to produce high-quality results in the face of the pressure toward speed (which Agile itself did not create).
But there are obvious incompatibilities between Agile and the needs of good design practice. Unfortunately, an emperor’s new clothes mentality has developed in which it is not acceptable for user experience people to point these out. Instead, we become mockup monkeys scurrying to meet the next sprint deadline, throwing in features and widgets at whim.
Part of the problem is that Agile’s own best practices are often not followed—for example, and most important, regression testing, which introduces some flexibility into Agile software development. People tend to claim that Agile is iterative, but without regression testing it is too often practiced in a piecemeal manner, wherein once something is developed it cannot easily be changed. This makes the emerging Agile practices look a lot like an incremental Waterfall development process. Once sprints begin, the train has left the station. You are then stuck with design decisions made at Sprint 0 or 1, with little ability to iterate on basic concepts.
Perhaps you have heard statements like this or experienced things like this in Agile environments:
  • Sprint 1: “Designer, just do something, anything, right now so we can get some feedback—we can always change it later.”
  • Sprint 2: “Designer, sorry, we can’t make changes anymore—that would affect the back end.”
  • Sprint 3: “Oh, we can’t do that because of lack of resources. You have to stick with your current design. Of course, you can always tack on a new visual design.”


As common and frustrating as this sequence of events sounds, the main point is not just that things keep changing; it is that the serial structure of Agile sprints, which may make sense for engineering, does not fit design. Often with the best of intentions, designers are either too lazy to push back or intimidated into thinking serially instead of conceptually. This fundamental mistake causes poor design.

Design does not proceed by dividing a complex problem into parts and then working on them sequentially. Design is more of a layered process, moving from broad concepts to devilishly (or heavenly) detailed design. Broad design concepts, overall IA, and key interaction models need to be established first. This conceptual design then guides the detailed designs. Then, as this detailed design progresses, some detailed decisions may fracture the existing overall concepts, showing their limitations. This speaks to the need to supply feedback from the detailed design work back to the underlying conceptual layer.

Establishing a successful conceptual design calls for intensive collaboration between designers and researchers. I would advocate (hence my article’s inclusion in this section of interactions) that in Agile environments, designers and researchers need to be joined at the hip more than ever. They need to work in close concert in order to be as strategically and tactically “agile” as possible.

What About RITE?

Rapid Iterative Testing and Evaluation (RITE) is one of the main ways of trying to incorporate an iterative user-centered design mind-set into an Agile development process. It also joins research and design by coupling testing with quick design revisions. (See Medlock, M.C., Wixon, D., McGee, M., and Welsh, D. The Rapid Iterative Test and Evaluation Method: Better products in less time. In Cost-Justifying Usability. R.G. Bias and D.J. Mayhew, eds. Morgan Kaufmann, San Francisco, 2005, 489-517, for more information on RITE.) RITE does ensure some feedback from testing into design. However, in my observation, it does not address the mismatch between Agile and good design practice.
RITE is too reactive and tactical; it also does not address the need to support early, deep design work. It actually separates the testing activity from the interaction design activity by fragmenting the team, and puts both design and research in a reactive, entirely tactical mode. I have seen teams where the usability researchers are consumed with scrambling to plan and set up a test of features on parts of a couple of pages.
Meanwhile the design team is moving on to design some other part. Soon the usability researchers are scrambling equally reactively to test features from the next pages under development. The result is that no one ever evaluates the overall concept or architecture, or even the contextual fit of the application as a whole.
RITE’s testing-led approach to design improves design incrementally. There is nothing inherently wrong with this, as long as one is testing the right things. Unfortunately, this practice will never ensure that the right things are being tested, especially as testing will tend to focus on what is being worked on in a given sprint. RITE also can lead to a kind of tyranny of testing.
Testing is an important evaluation technique. But a good researcher knows to mix a cocktail of different evaluative techniques to come up with a far richer view of the system and its user. RITE’s pragmatism never gets to the level of sophistication needed for a holistic design evaluation.

The RIDE Alternative

There are better ways. In this design-hostile environment, I advocate a design method I call Rapid Iterative Design and Evaluation (RIDE). In addition to rapidness, it emphasizes interplay between design and research, beginning with conceptual design, where every evaluation is not necessarily a test. The method also allows for alternative evaluation methods for specific iterations. Moreover, it includes partnering with engineering and product management to rapidly work through multiple concepts. This allows the team to identify the backbone concepts. These backbones (as opposed to key user stories) get developed/evaluated first. This places UX strategic design decisions up front, where they belong. While these strategic decisions are still made in the context of a sprint, RIDE respects the layered design thinking needed to design a system holistically.
RIDE strives to do this by:
  • encouraging the design team (by which I mean to include everybody with design input—interaction designers, visual designers, user experience researchers, and, yes, even engineers) to work faster through collaborating rather than working in isolation;
  • respecting the best practices of UCD/HCI/design; and
  • encouraging the development and evaluation of multiple design concepts at each stage.

The main ingredients of RIDE are:
  • Understanding the product context
  • UX Planning—establishing cross-disciplinary collaboration
  • Defining UX goals
  • Rapidly generating and evaluating multiple design concepts
  • Holistic iteration

The first two steps here belong in the product-definition phase before the sprint cycles begin. Since Agile promotes fragmentation, a holistic view of the product should first be developed. Unlike a traditional UCD project, the concept is developed in broad strokes, leaving the further definition to the sprints.

Understanding product context 

Understanding product context—taking time to understand the users and usage context.
The product context is an essential element in the definition of the product. This effort, led by product management, includes development, design, and research. It helps clarify the product landscape, the users, and their environment. Design helps to visualize this definition through the rapid sketching of multiple concepts. These concepts are done quickly and iteratively, as more and more information is gained about the product. The end result is a basis for a product requirements document (PRD) and two to three credible alternative UX directions.

UX planning

UX planning—establishing cross-disciplinary collaboration at each phase.
Design and research create a UX plan, which is meant to span the design and sprint iterations. The plan anticipates the possibility that some research and design activities stretch over sprints. This is the strategic plan to achieve the design goal of the product and includes identifying the types of evaluation, research, and design activities that will take place and when. Again, this plan does not follow sprint planning but will take it into account. After every sprint, the plan can be reiterated based on the usually unexpected outcomes of the sprint.

The RIDE UX plan also creates the possibility for each stakeholder to influence, guide, and inspire the design at all levels. RIDE acknowledges that design is not done in a vacuum. The more isolated it is from other disciplines, the more discontinuity and signal interference will arise in the UX. Rather than separating UX design into individual sub-disciplines (visual, information, interaction, engineering, product management, and other stakeholders), these all need to work together, because each part of UX design informs, feeds, and inspires the others.
For example, there is nothing to prevent a researcher or engineer from coming up with a great interaction design solution, or a designer from coming up with a better analysis of the data. Quite the contrary—combining their differing perspectives almost guarantees new, innovative ideas. We need not be afraid of a plethora of ideas. Some designers in fact fear engineers making design decisions. Yet if the developer is in tune with the larger design concepts, the chances that they have a good point increase significantly. And further, good designers should be able to defend and persuade stakeholders of the value of their designs; otherwise they might have to face the possibility that they may be wrong.
Outside of these core stakeholders, there is another, equally important outer circle. These people include anyone who takes a vital interest in the user experience and has the power to influence it for good or evil—anyone from developers to marketers to CEOs. Without cultivating their support from the beginning and working to keep them on board, you risk having progress derailed later by random changes of direction. In this planning phase, one should engage them early and on the most abstract level they can stomach: product definition. Get them to wrestle with the big conceptual design choices before development starts. Support from them also strengthens your mandate to execute on the concept.
Of course, nothing guarantees that the CEO won’t insist on purple buttons late in the game; however, in an Agile environment, this is less likely to happen if the stakeholders are on board when the conceptual train leaves the station.
Without this plan, you have to find some way of dealing with these outer-circle UX inputs. Research may give you a chance to resolve design debates objectively, but the Agile timeline typically does not allow for this. The best defense is to make them partners proactively in the design process: include as many stakeholders as possible early on in brainstorming sessions where the conceptual design is worked out.

Defining UX goals

Before the beginning of a sprint, it is important to establish the UX goals. These goals require four types of iteration: product definition, conceptual design, detailed design, and evaluation activities. I don’t mean to suggest that these are serial types of goals; rather, they are all interconnected. In the early sprints, the accent may be on product definition, moving to conceptual design. But even these should be done in service of the detailed design. This way, activities do double duty: iterating the practical short-term goals and informing/evaluating the strategic longer-term goals.
These goals are also planned along three time scales:

• The project end—progress to product release
• The current UX plan timeline—the current planned and ad hoc UX activities
• The current sprint—a time snapshot of the current state of the UX goals.

Rapidly generating and evaluating multiple design concepts

Designer and researcher work in a coordinated effort to design, evaluate, and then iterate during the sprint. Many parallel activities are driven by the design strategy, not by a reactive “test and see what happens” approach. This differs from RITE in that the UX goals determine the strategy for the sprint. The evaluations may or may not trigger reiteration; they may just inform. They can also split off another design variation for exploration. This depends on the agreed-upon evaluation activities: Are they formative or evaluative? Are they abstract or concrete? Will they help find a synthesis among competing ideas? Researcher and designer are partners in this effort. In parallel, longer and different activities (focus groups, interviews, cognitive walk-throughs, etc.) are being done to triangulate with data from other evaluations.

Holistic iteration

This involves evaluation at the strategic level in parallel with detailed design. Detailed designs sometimes will trigger refinement of the conceptual design, and vice versa. Moreover, evaluations can touch visual, interaction, information, and system design issues—one cannot predict which. Because many design disciplines are involved, the same finding can lead to vastly different responses. For example, when a particular design tests poorly, an interaction designer might change the interaction. A visual designer might change typography, colors, and so on. This leads again to incremental and non-holistic revisions. Real iteration would involve these different design disciplines and find ways to distribute the answer among all the design elements, thereby coming up with a far more robust iteration. Holistic iteration also means assuring the delivery of what engineering needs to meet their goals and their management objectives. Meeting this need will, of course, mean trade-offs on design. This is why having engineering on the core team is essential. Any development method that ignores or frustrates the engineering team’s management goals will fail.

Conclusion

RIDE is a richer design methodology because it leverages a collaboration of all stakeholders. It encourages exploration through a reliance on multiple design options that are synthesized, as opposed to a single option that is puzzle-fitted with additional features. RIDE seeks to find a holistic solution for Agile design and development, not just a tool for rapid changes to a single concept. But most important, it tries to find the right way for design (interaction, visual, and information) to work with researchers: joined at the hip. It also requires strength and energy, because—as with any quality UX—there is no free RIDE.