Central Government Alpha

Situation

The client was in the midst of a complex digital transformation programme. The ambition was to digitise and modernise by automating manual processes, creating robust data processing capabilities, and redesigning the end-to-end service.

My responsibilities included supporting the client’s head of research, leading a team of 18 client and Deloitte researchers across 4 work streams, supporting the wider product leadership, and ensuring the services were built to all relevant standards (GOV.UK Service Standard, WCAG).

The situation within the team was challenging. Specifically:

  • Immense complexity, with 50+ user types identified in Discovery, making it difficult to prioritise effort
  • A large number of user needs with significant overlap between work streams, resulting in duplicated work
  • Demotivated teams with low morale; they had been at it for two years and the complexity was only rising
  • Unsuccessful digital transformation efforts in the past had left stakeholders and users weary and distrustful
  • The participant pool was drying up: internal stakeholders were showing signs of fatigue, and external participants were returning for repeat visits, which affected the validity of our findings

Task

I supported the client’s head of research to create a unified roadmap of user research, ensure the new service was accessible, ensure research was of high enough quality and reliability to withstand ministerial scrutiny and pass the GOV.UK Service Assessment, and augment and upskill the existing teams through knowledge transfers, training, and mentorship.

Actions

Introducing OKRs

From the initial 1:1s with the team and the subsequent full-team retro, we identified 8 areas for improvement, which were then synthesised into actionable goals and measurable results.

Standardising research

I worked with the Product and Delivery leadership to align research activities to the release roadmap across all work streams. I proposed working in dual-track Agile to ensure both generative and evaluative research was considered.

I also introduced a standard research checklist, report and kick-off templates, and other collateral to ensure uniformity and high quality of outputs.

Standardising personas, introducing archetypes

The client’s 50+ user personas were synthesised into 6 high-level archetypes to simplify and standardise the programme’s approach. We worked with the brand team to create engaging visuals for each archetype and socialised them widely within the department, even printing out posters and hanging them around the office.

Addressing participant fatigue

By introducing a research prioritisation matrix, we identified that not every feature needed to be researched in depth. Features with low ambiguity could be evaluated in a leaner way, reducing the overall strain on the limited participant pool. To support this, we began onboarding UserZoom to empower the design team to run their own unmoderated usability tests from templates we set up for them.

The client already had an internal participant panel, and we spent some time reviewing and improving it, as well as running several email interviews to understand how participants felt about being on the panel. We wrote a communications guide for the research ops colleagues who send out invites, reflecting some of this feedback, e.g., “we know it’s been tough but we need you!”

We made a significant decision not to exclude repeat participants, but instead to put a timer on how often they could participate. The timer was initially 6 months but was later changed to 2.

Centralising insight

I introduced a central research repository and championed the use of Dovetail. While I supported the head of research with the internal business case and onboarding efforts, I built an interim research repository in Excel.

Each insight was linked to its parent work stream, supported by specific evidence, and had a column for any actions taken as a result. I also introduced monthly grooming meetings where each work stream lead updated their actions for the previous month. This approach significantly increased the transparency of research, our ability to cross-reference similar data from different work streams, and our ability to track actions, thereby evidencing impact.

Introducing new methods

I introduced new methods like:

  • The Kano model to improve prioritisation
  • UX-Lite to measure the impact of design
  • Rainbow analysis to make analysis fast, fun, and visual

Improving Culture

I ran an off-site for the team where we did team-building exercises and got to know each other. This was, according to my end-of-year written feedback, one of the most fun events to happen on the programme.

I also introduced standard rituals like frequent retros, manuals of me, regular 1:1s, and ‘fun Fridays’ where we would play skribbl.io and other online games together.

Result

It’s hard to measure, but I think the team was much happier when I left, as evidenced by written feedback I received:

A massive thanks to you Arty for being a great UR but an even better human being. I’d like to think I shared with you quite often the positive impact and useful things you taught me in our time working together. You have been, hands down, the most positively impactful Deloitte UR that joined the team and working with you was great fun. — Senior User Researcher

Or this one:

Oh my gosh, what value didn’t you add?? You came to us when we desperately needed an injection of new life, new ideas and refreshed approaches, and you certainly brought that. I know the whole team benefited from your expertise and vigour. — Head of Research

More objectively, however:

  • The teams passed 3 Alpha service assessments, receiving “Green” across the board for research-related standards
  • We achieved an 80% insight-to-action rate, as evidenced in the central repository
  • We raised UX-Lite scores by 16 points in 12 months (aggregated across all work streams)