A UX Design Process with ADDIE
First published by Pat Godfrey: July 2017
ADDIE (Analysis, Design, Development, Implementation, Evaluation) is a dynamic, non-linear, and evaluative instructional design cycle closely related to how we learn and how we think. ADDIE considers our end-users' performance and not only their tasks, which fits the needs of user experience (UX) design well. Our digital marketing will benefit too!
I love to experience learning too. Your feedback is welcome.
What is ADDIE?
ADDIE is just one design process, perhaps based on work similar to Herbert Simon's 1969 decision-making model and on other works traceable back to John Dewey. You may be familiar with the "UX Design Process" of Research, Insights, Design Concepts, Test Prototypes, and Develop; with "Design Thinking" and its Empathise, Define, Ideate, Prototype, and Test; or with Eric Ries's Build-Measure-Learn and the Think-Make-Check UX cycle. Each is based on similar, if not the same, reflective cognitive models as ADDIE.
I find ADDIE the most comfortable to apply across a range of situations and the easiest to share with our team. It is notable for its emphasis on cyclic, formative evaluation through each stage of the process and for its flexibility in application. As a process of iterative and continual improvement rather than a linear model of progression, it fits Agile Scrum and the UK Government Digital Service (GDS) standards well.
The process label aside, designers facilitate the engineering and development a product needs to be accessible, usable, learnable, and useful. We are our users' advocate and their champion. The process must begin with our users...although our enterprise should learn what its goals are too!
ADDIE (Analysis, Design, Development, Implementation, Evaluation) follows a familiar cognitive and cyclic process model.
Research is essential to the ADDIE process and is far more efficient than you might believe. Research is reusable across related design cycles. Perhaps we should refer to (R)ADDIE?
Each stage of the process is a cycle, or eddy, that interrelates with or disrupts the other stages in the process.
Analysis should be one of our greatest efforts. Scrimp here, and quality will suffer.
The more research completed around your enterprise, customers, and end users, the more insights you gather into their wants and needs. The research and its analysis should be planned against aims and objectives, in turn based on an intelligent (researched) hypothesis, or on those made by Marketing teams 😜.
We look to conduct quantitative and qualitative research. That often wrongly suggests clipboards, white coats, and laboratories, whereas the most useful (big) data is often discovered by talking to our consumers and working toward a more complete understanding of their motivational and environmental opportunities and constraints.
Too many job specifications list 'A-B'¹ and 'Multivariate' user testing without understanding the techniques or their application. What are we testing, for a start? We need a plan. 'A-B' and 'Multivariate' techniques may not fit our plan.
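To illustrate what even the simplest 'A-B' plan must pin down first - a metric, a hypothesis, and a sample - here is a minimal sketch of the statistics behind a two-variant comparison. The figures and names are hypothetical, not drawn from any real test.

```typescript
// Minimal two-proportion z-test for an A-B comparison.
// All names and numbers are hypothetical; a real plan fixes the metric,
// the hypothesis, and the sample size *before* the test runs.

interface VariantResult {
  visitors: number;    // users exposed to the variant
  conversions: number; // users who completed the task we chose to measure
}

function twoProportionZ(a: VariantResult, b: VariantResult): number {
  const pA = a.conversions / a.visitors;
  const pB = b.conversions / b.visitors;
  // Pooled proportion under the null hypothesis "A and B convert equally".
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
  return (pB - pA) / se;
}

const z = twoProportionZ(
  { visitors: 1200, conversions: 96 },  // variant A (hypothetical)
  { visitors: 1180, conversions: 130 }, // variant B (hypothetical)
);
// |z| > 1.96 suggests a difference at roughly the 95% confidence level.
console.log(z.toFixed(2)); // ≈ 2.51 with the figures above
```

Note what the sketch cannot tell us: a significant z only ranks B against A. If both designs are poor, the test happily reports the less poor one (see the footnote below).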
Analysis of your research finds common avenues on which to found levels of communication, cooperation, collaboration, and corroboration. You may prove or dispel preconceptions, prejudices, and 'borrowed beliefs' moving forward toward a user-centric design.
Note: user research can appear to be a drain on time and resources, but there are different levels of research to suit every pocket. For example, you can 'user test' team members role-playing personas. Just avoid being too frugal - you want your enterprise to work, yes?
Recall that Usability Labs are often a compromise solution. We cannot observe our users in their natural environment unless they are in their natural environment.
Footnote 1: A-B tests: bad vs. worse? "Our respondents preferred B..." That really is bad.
Analysis is the most important aspect of any enterprise project, and yet it will be the least well resourced.
UX designers can make do and improvise intelligently, but projects benefit out of all proportion to the resources given to analysis.
What is our enterprise's Mission; its goals; its constraints, strengths, and features? How do these map to the product's and its users' environment, tasks, and needs? These are the first questions at the heart of our analysis. They are the background on which all design may flourish, or fail.
We need scope. What is the scope and how will we resource it? Is the scope finite, or does it take a more agile approach: evolution and not revolution?
What resources are available, and are there contingencies if more are needed?
And so on...
Not everyone in our audience is destined to become our customer or our user... unless we court them by demonstrating that there is something in it for them, by making connections (emotional, practical, and social), and by working to generate interest and convert it into business.
Who is in our audience, and what's in it for them?
We need to get off our backsides, find our users, and talk to our customers and end users; not with check-boxes on a clipboard or eye-tracking lasers, but by talking with them. They are not Pavlovian machines. We must understand what drives them, what will block their progress, and what worries them. What encourages and engages them? What is it that our enterprise is asking our users to do, and what do our users expect of our enterprise?
It is important to "come off the screen". When our users access our digital products on a screen, there's a whole universe traversing their lives and their experience at the same time. We must know how that will affect the Human Computer Interaction and design for it where we can.
Even the advantages and costs of quality qualitative and quantitative research can be negated by scalping statistical bell curves into flat generalisations. How will we deal with the data?
I am not a fan of the "Three Amigos" approach to Personas: a young user, a less young user, and an older user invariably useless with a browser; or whatever imaginary demographic meets our assumed hypothesis.
Where are our users accessing our digital content differently: across devices, in improbable environments, or by overcoming differences in their cognitive or motor skills, browsing technologies, or other difficulties (not just visual impairments)? All of them. Or does your enterprise's success really rest on the Three Amigos?
Personas need to be real. To build a persona we must talk to an intelligently selected sample of our end-users.
In practice, meeting our users may not be possible. We can assume details about our users, but the result is assumed Personas. They will always miss vital aspects of who, how, what, when, and where, as well as why our users approach our platforms as they do.
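To make the who, how, what, when, where, and why concrete, here is a minimal sketch of what a researched persona record might capture. The shape and field names are my own illustrative assumptions, not a standard.

```typescript
// A researched persona records who, how, what, when, where, and why,
// not just an age bracket. The field names are illustrative assumptions.

interface Persona {
  name: string;                 // a label, not a demographic stereotype
  goals: string[];              // why they come to us at all
  blockers: string[];           // what stops, slows, or worries them
  environment: string;          // where and when they use the product
  devicesAndBrowsers: string[]; // how they actually reach us
  accessNeeds: string[];        // cognitive, motor, and sensory differences
  evidence: string[];           // interviews and observations behind each claim
}

// An assumed persona is one with an empty `evidence` array, which is
// exactly what makes it an assumption rather than research.
const assumedPersona: Persona = {
  name: "Checkout-in-a-hurry",
  goals: ["Reorder a known product in under a minute"],
  blockers: ["Forced account creation", "Unreadable error messages"],
  environment: "Commuting; intermittent connectivity; one-handed use",
  devicesAndBrowsers: ["Mid-range Android, Chrome"],
  accessNeeds: ["Large touch targets"],
  evidence: [], // empty until we have talked to real users
};
```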
Inclusivity Built In
Statistics suggest that as many as 80% of Internet users have a difference in their perception or use of our digital products. It is more important than ever to design inclusively for access and usability. This does not mean designing only for 'alternative browser technologies', but for alternative browsing strategies and methods. There's a wealth of difference.
W3C's Web Accessibility Perspectives is an excellent reference from which to challenge our perceptions of what designing for accessibility and usability is. Through videos, W3C illustrates why the underlying content and HTML architecture must be semantic, well written, and visually easy to use. Once we have the basics right, then we can pretty it up.
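As a rough sketch of 'getting the basics right', the following browser-console check looks for a few of the semantic landmarks assistive technologies rely on. The selectors are standard DOM APIs; the particular rules are my own shorthand, not a W3C checklist.

```typescript
// A quick semantic-landmark spot check, runnable in a browser (as compiled JS).
// The rules are an illustrative subset, not a full accessibility audit.

function auditLandmarks(doc: Document): string[] {
  const findings: string[] = [];

  if (!doc.querySelector("main")) findings.push("No <main> landmark.");
  if (!doc.querySelector("nav")) findings.push("No <nav> landmark.");
  if (doc.querySelectorAll("h1").length !== 1) {
    findings.push("Expected exactly one <h1> to anchor the heading structure.");
  }

  // Images without alternative text are silent to screen-reader users.
  doc.querySelectorAll("img:not([alt])").forEach((img) => {
    findings.push(`Image "${(img as HTMLImageElement).src}" has no alt text.`);
  });

  return findings;
}

auditLandmarks(document).forEach((finding) => console.warn(finding));
```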
Perhaps too many visual artists get to be UX Designers, and not enough UX Designers get to design? Perhaps because managers are visual users and judge UX on the visual experience alone?
Mobile First "mythology" is an engineering methodology, not necessarily a design one. We can design for larger viewports first and scale the UI down for mobile presentation, and still be "Mobile First" as long as the implementation is "Mobile First".
Caution: mobile device presentations may no longer be predicted by the screen size. The viewport is often under the user's or software's control. For example, two apps may run side by side on the same mobile device screen. "Mobile First" must now, more than ever, include Fluid Responsive presentation strategies. Read more about the Fluid Responsive Design philosophy and Mobile First methodology.
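A minimal sketch of the caution above, using standard browser APIs (the breakpoint value is an arbitrary assumption): respond to the viewport, never the physical screen, because split-screen modes and in-app browsers break the old equation of device = viewport.

```typescript
// The screen can lie; the viewport tells the truth. In split-screen mode,
// screen.width stays fixed while the viewport our page actually gets shrinks.

const narrowViewport = window.matchMedia("(max-width: 480px)"); // arbitrary breakpoint

function applyLayout(isNarrow: boolean): void {
  // Toggle a layout class rather than branching on a device or UA sniff.
  document.body.classList.toggle("stacked-layout", isNarrow);
}

applyLayout(narrowViewport.matches);

// Re-evaluate whenever the *viewport* changes: resize, rotation,
// or the user dragging a split-screen divider.
narrowViewport.addEventListener("change", (event) => applyLayout(event.matches));
```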
Leaving "user testing" until after the design is completed, or only A-B testing trite solutions, is not merely less than ideal; it is next to useless without a clear understanding of what we are testing against. Designs should be tested throughout the process, from the first workshops to the last prototype. (Evolution: not revolution.)
Cheapness will, if allowed, disable innovation and rule out anything but run-of-the-mill experiences. If we want our products to do what we want, and to profit from them, then we really must face up to and properly resource what User Analysis is: research!
There is no one-size-fits-all process to build Personas or digital products and platforms. As for all aspects of design, it depends. But there are frameworks against which we can underpin our research and design processes within budget and time constraints.
There are safe assumptions we can make based on our own and other enterprises' experience. Enterprises may already have sufficient knowledge of their users to make very informed assumptions.
At the least, we can prod and poke a Subject Matter Expert (SME) for information and insight. There are great resources available via Internet Searches too, which can yield background information on common user behaviours.
The ideal situation is to conduct primary user research, draw from it our users' expectations, and learn how to meet or, better, exceed them. Any research is better than only making assumptions: even assumptions need research.
Our assumptions grow more intelligent. All is not lost.
Design relies on good intelligence about our enterprise and our user. It may be that further analysis and evaluation of our users' and enterprise's needs is required to answer questions raised by the team and our stakeholders.
The design process will generally include initial sketches of the solution used to garner buy-in from stakeholders, and which are then developed into wireframes and prototypes that can be tested against the Analysis.
In an agile scrum environment the question is, "When do we design? In Sprint, or pre-Grooming?" There are advantages to both and both can work equally well when adding new features to an existing product. However, with new products and designs, entering a Sprint with an untested solution may cause additional team effort through iterations. Contingencies of time may compromise the end solution.
The artefacts that a design team must deliver are the "deliverables". These can include sketches, wireframes, prototypes, style guides, and all manner of supporting materials that will aid the development team - and perhaps the writers - in their tasks.
The skill is to know what needs to be delivered and not to deliver anything unnecessary to the task. Opinion may divide a team on what these should be. It is the designer's role to satisfy the team's needs, and to understand where and why these may at times be excessive. It's why designers need to work well with teams, and also to have the strength to help lead on, and communicate, what their workflow should be.
It is an arrogant team indeed that does not offer user support. User support commences with the on-screen feedback we give our users, from their first interaction with our materials to feedback on form completion.
Errors should be designed out where possible; where that is not possible, errors should be managed for our user. If the on-screen environment is restricted, then this may need an external help system. This in turn may be required to open contextually, offering the support our user needs depending on where in the product they find difficulty.
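As a hedged sketch of managing an error for the user (the element IDs, the date format, and the copy are all hypothetical): validate inline, explain the fix in plain words, and point to contextual help where on-screen space runs out.

```typescript
// Inline validation that manages the error for the user: say what is wrong,
// say how to fix it, and offer contextual help. IDs and copy are hypothetical.

function validateDateField(input: HTMLInputElement, message: HTMLElement): boolean {
  const ok = /^\d{4}-\d{2}-\d{2}$/.test(input.value);

  if (ok) {
    message.textContent = "";
    input.removeAttribute("aria-invalid");
  } else {
    // A plain-language fix with an example, not just "invalid input".
    message.textContent =
      "Please enter the date as YYYY-MM-DD, for example 2017-07-14.";
    input.setAttribute("aria-invalid", "true"); // flagged for assistive tech
  }
  return ok;
}

// Hypothetical form field and its adjacent message region.
const dateInput = document.querySelector<HTMLInputElement>("#delivery-date");
const dateMessage = document.querySelector<HTMLElement>("#delivery-date-message");

if (dateInput && dateMessage) {
  dateInput.addEventListener("blur", () => validateDateField(dateInput, dateMessage));
}
```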
UX Designers must have a part in this process before conflicting brands and values are implemented by other specialities who may be less well rehearsed in the digital paradigm.
Visual and Code Design
While the cognitive and interactive strategies and designs are being finalised, it is a good time to tie in with the visual designers and the UI developers. It is always best practice to ensure that the overall User Experience design fits within the enterprise's vision and brand, and within its technological capacities. We want to improve the Universal Experience, after all?
Design for Smaller Devices
You cannot 'prove' your design for smaller viewports and devices unless you test it on a selection of devices, operating systems, and browsers. Similarly, when sharing your design concepts as wireframes and prototypes, enable your colleagues to play with them at relevant scales. If devices are in short supply, consider paper prints at scale.
Remember that your design team are not your users. Test designs with real users in their real environs with their real devices. You'll be glad you did.
Designers cannot sit back during the development of the solution. It is their role to assure the quality of the implementation: that the design is realised correctly, and that contingencies of resources and effort are minimised by solid preparation.
A designer must understand the opportunities of a technology to fully exploit it. Designers must be sensitive to the engineers' skill set and abilities, or otherwise be prepared to mentor them.
Designers need to be nimble: to react to changing scope and needs rationally. They must return the best compromise as necessary - or call a halt on a Story that needs more attention than its Scrum schedule can afford.
Designers need to be able to say, "No". Equally, designers need to say, "Can do!"
Designers may take a back seat during a product's implementation ("Release") phase, but they are far from redundant. It is during this short respite that design thinking should be applied to the coming Stories and tasks.
It is also key to our engineers that, if there is an unforeseen problem, the designer can quickly confirm or adapt designs and evaluate any changes needed now or in the future.
Our designer should also take part in User Acceptance Testing (UAT): monitoring the engineering output for quality and accuracy and also picking up any nuances that may indicate a need to update a design or flow.
The ultimate or summative test is when our users use the released features. It is important to gather feedback as quickly as possible. Your enterprise's support department is an ideal resource. And we don't only evaluate the process at its close. As a team we should formatively evaluate our design throughout each stage of the design to release process.
Back to Personas: you cannot capture every user in a persona. Once a release gives us access to our whole user population, there will be surprises. Designers must deal with them.
How ADDIE fits UX
The (R)ADDIE design cycle's emphasis and priority on Research and, importantly, on Analysis supports our ambition to deliver delightful experiences.
The ADDIE process's cyclic nature steers a design beyond what our users expect. It doesn't only fix a "flag in the sand" to aim for but constantly re-targets exactly where that flag should be. Our design can flex and evolve intelligently toward offering our users:
- Ease of effort.
- Context for their tasks.
- Results and feedback.
Research alone cannot improve a design. The greater our analysis, the better our design for Ease, Context, and Results.
Note: the Ease, Context, and Results mantra cross-maps reasonably well to Dana Chisnell's, "The Three Levels of Happy Design" (Pleasure, Flow, Meaning), which Jared Spool terms, "3 Approaches to Delight".
Considering the user journey
At each stage of the ADDIE process, I ground the work in the basic user journey and the enterprise's values, aims, and objectives. As its name suggests, this is a basic and easy-to-apply tool: not a deliverable such as a Journey Map or Customer Experience Map.
| Stage | Description |
| --- | --- |
| Wants | Wants give context to the user journey and enterprise aims. What our user and our enterprise want may not match what is needed. |
| Needs | Needs may indicate requirements, or preconditions that must be met to complete the journey. |
| Tasks | Tasks are discrete objectives formed from needs, against which success may be measured on completion of the journey. |
| Input | What our user and enterprise must actively do or provide to achieve the tasks set during the journey. |
| Output | The result of performing tasks, such as knowledge acquisition, product orders being processed, or feedback on journey progress. |
| Review | An overview of the status of a task or journey and of any further activity required or set in motion; perhaps an event history or timeline with which to track progress. |
| Recycle | There may be two or more phases: first, our user and enterprise may access and repeat the journey or tasks as necessary; second, we review the success of each task or journey with a view to identifying improvements. |
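One way to keep this tool honest across a team is to write each journey down in a fixed shape. Here is a minimal sketch mirroring the table above; the structure itself is my illustrative assumption, not a formal deliverable format.

```typescript
// The basic user journey from the table above, as a checklist-style record.
// The shape is an illustrative assumption, not a formal deliverable format.

interface BasicUserJourney {
  wants: string[];   // context: what the user and the enterprise want
  needs: string[];   // requirements and preconditions for the journey
  tasks: string[];   // discrete objectives against which success is measured
  input: string[];   // what user and enterprise must actively do or provide
  output: string[];  // results: knowledge gained, orders processed, feedback
  review: string[];  // status overview, event history, progress tracking
  recycle: string[]; // repeat access, plus lessons for the next iteration
}

// A hypothetical journey record, sketched for a sign-in flow.
const signIn: BasicUserJourney = {
  wants: ["User: quick access", "Enterprise: verified identity"],
  needs: ["An existing account", "Credentials to hand"],
  tasks: ["Enter credentials", "Recover a forgotten password"],
  input: ["Email and password", "Password-reset request"],
  output: ["An authenticated session", "Reset email sent"],
  review: ["Last sign-in shown", "Failed attempts logged"],
  recycle: ["Repeatable at any visit", "Reset-rate reviewed for improvements"],
};
```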
Note: see an application of the Basic User Journey that enabled an 'eleventh hour' rapid understanding of, and design update to, a failing UI.
There's no one-size-fits-all design process. ADDIE is simply one of my tools of choice. Each designer and enterprise will follow what works for them. It is only essential that whatever the analysis is, it is thorough and not overly compromised by low resources, poor appetite, or an inflated id.