Our Multi-tiered Design Methodology
This blog post summarises an internal project undertaken throughout 2015. We reviewed the patterns of four years of projects that used discovery and design thinking activities, in order to better understand, and therefore communicate, the unique value of the in-house User Experience team, and to improve our capability for the new business, Data61, moving forward.
This project was led by me (Hilary Cinis), with contributions from Meena Tharmarajah, Georgina Ibarra, Cameron Grant and Phil Grimmett. Many thanks to the team for their help and insights, and for suffering my eternal reviews of the outcomes.
For years I’ve attended conferences, read articles, seen presentations and worked in teams evangelising the unquestionable importance of research-led user experience design. Assumptions, imagination and stakeholder needs are downgraded as lesser practices. We hear how research saves time in the long run, provides focus and yields hidden gems; it helps gain buy-in from teams and creates strong foundations for the decisions made, so development can go ahead with fewer challenges. And yes, it does.
But as a sole approach, user centred design is flawed and unrealistic.
In my fairly long career, hypothesis- and subject-matter-driven design has been as much a contributor to success as user-centred design. The trick is not to rely on just one mode of operating. There is skill in knowing how much of each to do, and in managing the cadences of these streams.
So, I’m here to say that it’s OK to work with assumptions at the outset and see if they fly. I suspect everyone secretly does it anyway. So just own it and accommodate it, but don’t let it direct things completely.
I’ve worked quite successfully using this methodology, and I’ve been studying and experimenting with it specifically during my time at NICTA, now Data61. I think I have a nice model now – in brief, stakeholders set the broad strokes and the big vision, and UCD fills in the gaps.
Domain/hypothesis-driven design and insight-led user-centred design are approached as related frameworks, each with its own specific cadence. The model accommodates vision, research experiments, business requirements, customer inclusion (participatory design), agile solution design, and testing and validation.
The current explanation of the methodology is itself the result of a long-term design project: reviewing and analysing the traits of 50+ projects over four years that have had UX and design contributions at NICTA/Data61.
Initially I attempted to categorise projects into what looked like a taxonomy, to assist with setting design requirements and team expectations, but found too many differences. There were too many crossovers of activities, and this weird human need for a neat progression of design work and deliverable stages as a project matures into a product simply didn’t match reality.
NICTA project taxonomy and relationship sketch – 2015
There were simultaneous activities happening, in both direction-setting and user insights. It was messy, but there was a pattern. Our time to market is quite long, so my impatience was also getting in the way. It all just needed reframing to make it measurable, understandable and then communicable. Instead of aligning specific activities to a project ‘type’, I found there were rhythms of these activities through all projects. The only difference was each project’s particular user groups – some are highly specific, some are broad.
Early sketch of cadence relationships, 2015
The way forward was not “science fiction” design or UCD alone, nor a middle ground, nor a progression from one to the other, but both simultaneously – and, most importantly, understanding their individual cadences and benefits.
We advocate a multi-tiered pathway that helps us map and explain our activities towards designing for future audiences, and reduces the friction typically associated with projects. After all, everyone wants the same thing – success – so make that a strength, not a competition.
I looked at a highly successful, large-scale example to validate the theory: National Map (and the Terria platform) has become the poster child of this multi-tiered approach. Fast to market, multiple instances, ongoing income generation, iterative development and future-state design proposals.
Also important is adopting an exploratory mindset. Our customers and users are with us on this journey, and we deliver value to both as we go.
Early sketch of methodology relationships using National Map/Terria as the base case, 2015
Completed methodology relationship infographic using National Map/Terria as the base case, 2015
The “cadences” mentioned earlier refer to the time frames in which hypothesis-driven and insight-driven design operate.
They are in parallel but not entirely in step.
They cross-inform as time goes on, typical of iterative and divergent/convergent thinking. New opportunities or pivots create a divergence; results of interviews and testing initiate a convergence.
- Hypothetical cadences are short and may or may not map to development sprints.
- Insight-driven cadences are longer, and may span several hypothetical cadences.
- Ethnographic/behavioural science research projects have a longer cadence still, and would ideally feed insights into, and take insights from, both of the previous two. I’ve not covered this here as it’s not my area of expertise.
Domain Driven / Hypothesis Led Design
This is about exploring a future-state technology using hypothetical design and the team’s subject-matter (domain) expertise.
The project benefits are:
- tangible testing and validation artefacts early
- maintenance of project momentum – teams, sponsors and customers get bored (or scared) when they aren’t seeing ideas manifest via design; i.e. there is evidence of delivery to customer stakeholders and/or the project sponsor
- capturing executive stakeholder ideas early, because they will inject them at some point anyway – this is important
- technical innovation opportunities unencumbered by current problems or conventional technical limitations. If you are designing for a five-year future state, you can assume technology will have improved or become more affordable.
This work also often includes sales-pitch-style concepts used to engage customers, so there is a clear business engagement benefit.
Typical design activities include:
- Solution exploration and sketching
- Coded prototypes
- Interaction design
- Co-design with customer and team
- Deep thinking (quiet time), making assumptions about the needs of proposed audiences (yes, pretending to be that person)
- Sales pitch concept designs
- Infographic or other conceptual communication artefacts
What we’ve learned…
- Make no apologies for solutions being half-baked or incomplete; own it instead.
- Continually communicate context and the design decisions, because everyone imagines the wrong things if you leave gaps.
- Shut down any design-by-committee activities, relying instead on small bursts of subject-matter expertise and vision leadership from the project sponsor. Two chiefs are as much a nightmare as unstructured group brainstorms.
- Stay vigilant about eager business development folks selling early delivery of work that is not technically feasible, has no real data (i.e. only sample data) or rests on unfinished academic research (immature algorithms). This is especially problematic when dealing with government clients, who expect exactly what they were pitched, to the pixel, and – because it looks real – think it’s not far from being built.
Insight Driven Design
This is about solution design informed by user research.
The project benefits are:
- insights inform solution design and assist with maintaining project focus (reducing biases and subject-matter noise)
- short-term solutions for a customer while journeying to a blue-sky future state
- validation of customer interest to stakeholders and/or project sponsor
- change management (less resistance to changes within current workflows).
Typical design activities include:
- Discovery and exploration work
- Qualitative user interviews
- Establishing user beta panels for ongoing reference and testing
- Concept testing
- Usability testing
- Metrics analysis
What we’ve learned…
- Analysis paralysis is a killer. Keep it light and regular. (User research can be called into question once customers/teams twig to the fact that they don’t know anything about users; they will then expect large amounts of research to back decisions, inflaming the issue with more information but no more useful insights.)
- Unclear objectives for user research produce unclear outcomes.
- Poor recruitment leads to poor outcomes (e.g. the expectation that the designer(s) can just “call a few friends” for interviewing and/or testing).