
During film school in the early 2000s, I fell in love with editing—arranging the components of images, words, sound, and music to convey a particular message and experience to the viewer. Fast-forward ten years, when I transitioned into the world of User Experience (UX) and discovered the practice of Information Architecture (IA)—and fell in love all over again, diving into the elements and structural patterns of arranging words and images for users.
Throughout my UX career, I’ve applied this architectural lens to UX projects and IA research efforts. I made the switch to ResearchOps in late 2021, and over the past few years have come to realize how much research operations is, among other things, an IA job…and by extension, a content strategy job. As ResearchOps professionals, we architect systems, workflows, procedures, and change—as well as content and communications about these things, so that they’re used in the real world.
Central to our role is creating content that people use to learn how to conduct research, including documentation, playbooks, and procedures. We also commonly manage content about what was learned from doing research, i.e., “insights” or “findings.” These types of research operations and research outputs map directly to what many people envision when they talk about information architecture: content in a single website or system, organized in some kind of hierarchy, that people need to easily navigate and read. In the context of this article, I’ll call this “on-the-page” content. Within the ResearchOps domain, this could be things like instructions for how to plan and conduct a study, details about proper handling of participant data, or a repository of research studies.
Making Sense of IA
Practitioners define IA in different ways, but the way Abby Covert describes it in her book, How to Make Sense of Any Mess (CreateSpace Independent Publishing Platform, 2014), particularly resonates:
“It’s the way we arrange the pieces of a whole in order to communicate an intended meaning to users.”1
Spoiler alert: this article is a bit of a fan letter to Abby! But that’s because her work has so meaningfully influenced my approach to IA, and therefore ResearchOps. I believe that bringing IA principles into our practice and seeing “information as a workable material”2 is essential to doing our work effectively. Because ultimately, ResearchOps performs IA in the context of knowledge management.
Knowledge management in ResearchOps is usually discussed only in relation to knowledge that’s acquired from research itself and stored in research libraries or repositories. But whether you’re managing content about how to do research (handbooks, playbooks, policies, etc.), or content about what was learned from research (findings or insights), you’re managing content from which people interpret information—which creates the knowledge they have both about how to do research and about the research itself.
Content Versus Information
Recently, a new designer joined our team and was onboarded via my team’s research training. A few weeks later, the designer messaged me asking how to access a tool that wasn’t part of our research toolkit. From their message, I realized that instead of searching on our team’s research landing page, they had searched our company-wide intranet and found research tooling content owned by a different part of the organization. This may seem like a small thing, but it shows how important it is to think about people’s behaviors and the broader context your content sits within. Though I steered the designer toward what they were searching for, it’s very possible they wasted their own and other people’s time, and might have caused more confusion trying to conduct a study with the wrong tool. It made me think about how I might further improve my research documentation, and how to train people who juggle research with design duties. You’re always going to be competing for people’s time and attention, and there may be other research content in your organization that clashes with your own team’s content, especially if you don’t have centralized ResearchOps.
A common misconception is that “information” is the actual content or thing(s) you’re organizing. In her book, Abby explains that information isn’t an objective thing; it’s a subjective non-thing. The things are content: words, numbers, images, videos, or even physical objects. She goes on to define information as:
“...whatever a user interprets from the arrangement or sequence of things they encounter. [...] While we can arrange things with the intent to communicate certain information, we can’t actually make information. Our users do that for us.”3
The new designer I mentioned earlier searched for content in a way that felt natural to them, but was influenced by the “information scent”4 of what they encountered (cues they picked up as they “hunted” for information) and guesses about how our company’s intranet worked.
Within the world of ResearchOps—where we have the duty to ensure that research is performed ethically, and with compliance, consistency, and quality—it’s critical that our users (People Who Do Research, or PWDRs)5 interpret information the way we intend.
The other key point Abby makes is that “the absence of content or data can be just as informing as the presence [of it].”6 For example, if you see an empty pizza box (the content) before you get to grab a slice, the information you might interpret is that the pizza was too tasty to last, or maybe that your friends weren’t being thoughtful. Similarly, if a PWDR reads a research playbook and there’s no content about how to run an unmoderated study, the information they might interpret is that unmoderated studies aren’t performed or even allowed (which might be true, or not).
The difference between content and information, the way information can be created by both the presence and absence of content, and information scent are key concepts to keep in mind when you’re trying to make research spaces that are easy to navigate and understand. This is especially important in large, enterprise organizations, where your research operations content lives within a larger ecosystem with multiple spaces (like the one our new designer encountered) and “information overload” is prevalent.
Research for Content
It’s important not to make assumptions about how people find, interpret, and use content. Time and time again, I’ve seen mature UX professionals throw UX approaches out the window when it comes to internal work, including research operations. The same UX and IA principles apply internally as they do externally to anything customer-facing: if it’s not useful, usable, and easily understood by people, it won’t succeed.
To ensure your on-the-page IA supports smooth research operations, it’s important to research and understand your PWDRs’ contexts and information needs. The best approach will depend on your context and what you want to learn. Entire articles and books have been written on doing research for content and IA,7 but here are some useful methods to consider:
With your PWDRs, create a service or journey map of what it looks like to do research in your organization—to reveal the difference between what should happen and what does happen.
Ask your users to show you how they find and use content (research playbooks, insights, etc.) to help you understand what’s missing.
Look at available data to see which pieces of research content are accessed the most, then talk to your PWDRs to find out why. This will help you to more effectively surface the most important content (or deprioritize unimportant content).
Do a card sorting exercise to discover how your users naturally organize various types of content and use the results to create one or two hierarchical outlines (“trees”) of how the content might be organized. Then, evaluate those outlines with a tree test.
Perform a cognitive interview or a simple content markup exercise to understand how people read through and interpret the content. This will help you edit the content itself.
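To make the card sorting step more concrete, here’s a minimal sketch of how open card sort results might be aggregated into pairwise agreement scores—the raw material for drafting a candidate hierarchy to evaluate with a tree test. The data format and card labels below are hypothetical, not taken from any particular card sorting tool:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical open card sort results: each participant's groupings,
# expressed as lists of card labels per group they created.
sorts = [
    [["Consent form", "Data handling"], ["Recruiting", "Screeners"]],
    [["Consent form", "Data handling", "Screeners"], ["Recruiting"]],
    [["Consent form", "Data handling"], ["Recruiting", "Screeners"]],
]

# Count how often each pair of cards was placed in the same group.
pair_counts = defaultdict(int)
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Express agreement as a share of participants: pairs with high
# agreement are candidates to sit under the same branch of the tree.
n = len(sorts)
for (a, b), count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: grouped together by {count}/{n} participants")
```

In this toy data, “Consent form” and “Data handling” were grouped together by all three participants, suggesting they belong under one branch; dedicated tools compute the same kind of similarity matrix at scale.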
You can combine the outputs of one or several of these research methods with the tacit knowledge you have about your PWDRs and the organization to create research content that’s well articulated and well organized.
Writing and structuring this on-the-page content with care and consideration for the larger context, i.e., having an intentional approach to both your content strategy (what is written, how it’s written) and IA (how it’s organized), will give you a solid foundation upon which to make changes over time.
As in life, change is constant in ResearchOps—large and small in scope—to systems, policies, processes, and more. Managing this change has its own complementary information architecture, and it’s one of the trickiest and most important aspects of IA for research operations content. Uncommunicated (or poorly communicated) changes in a ResearchOps ecosystem are like a tree falling in a forest with no one around to hear it: if you change something but no one knows about it, it’s as if the change didn’t happen. It was just something you wrote down.
Change and Temporal IA
Three years ago, when I migrated our team from a duct-taped system of spreadsheets and separate tools for scheduling, email, and incentives to an all-in-one research logistics platform, I managed content about the current system, tested content about the new system with pilot users, and created informational and “teaser” communications about the coming changes over the course of a year. I did this to prepare people—and get them excited.
When you’re thinking about how to architect the information people will ultimately interpret about changes, an important new dimension comes into play: time. What I’ve come to think of as temporal IA is all about the information needs of people over time (usually in the context of change management). This boils down to three Ws: when do they need to know what, because of who they are.
Temporal IA also applies to the more traditional on-the-page content previously mentioned; people are always interacting with content at certain times for certain reasons. And it applies to insights content: different people need different types of insights at different times during the lifecycle of a product or service, and research insights morph over time. But temporal IA is most significant when it comes to change management and communications.
Content strategy and IA for your research operations documentation should consider that you’re almost always architecting two types of content as change happens:
the changing central documentation content, and
the more ephemeral communications content about the change.
The ephemeral communications should always link to the things they’re talking about in your central documentation, so it’s clear what is being referenced and where it lives.
Considerations for Temporal IA
Harnessing temporal IA to manage change effectively requires you to think about the flow and timing of changes to central content, your communications content, and how these different pieces of information relate over time to different groups of people. You want to answer questions like:
When should the central documentation update be published?
Is the change big enough that people need advance notice, or is it something smaller that can be changed and announced simultaneously?
Will the change affect all PWDRs, or a subset (e.g., only dedicated researchers versus all PWDRs)?
Which groups should be informed (e.g., leadership versus researchers versus other PWDRs), what needs to be conveyed to each, and how should the order of communications be prioritized?
Though you’ll usually want to stick with a consistent place for communicating changes, like email or your organization’s messaging platform (i.e., where people already are), think about what medium or channel is best to communicate the change. Big changes might warrant a team meeting presentation or a training session in addition to a broadcast post or email. And sometimes, you may want to announce certain kinds of changes first to a specific group in a focused channel, such as when you’re piloting a new system or workflow.
Additionally, consider how people interact with a change, and how time impacts the interaction. Here, you want to answer questions like:
What questions will people have that you can answer preemptively in your central or communications content?
How does informal, social knowledge sharing happen in your organization? What might different groups of people need to know to avoid swirl and confusion from whisper-down-the-lane chatter? For example, if a change doesn’t apply to all PWDRs equally, when they collaborate, there could be confusion if it’s not clear upfront.
How will you manage the change, and the content about it, over time when the enacted change becomes the status quo? (To avoid maintaining an endless cycle of outdated “updates.”)
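The questions above can be captured in a lightweight planning structure that makes the three Ws—when, what, and who—explicit for each communication. Here’s a sketch (all field names, dates, and channels are illustrative assumptions, not a standard), using this article’s later Research Study Brief change as the example:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeComm:
    """One planned communication: who hears what, where, and when."""
    audience: str   # who: e.g., "all PWDRs", "dedicated researchers"
    message: str    # what this group needs to know
    channel: str    # where: e.g., "Slack broadcast", "team meeting"
    send_on: date   # when, relative to the documentation update

@dataclass
class ChangePlan:
    """A change to central documentation plus its ephemeral communications."""
    change: str
    doc_update_on: date
    comms: list[ChangeComm] = field(default_factory=list)

    def needs_advance_notice(self) -> bool:
        # Bigger changes warrant at least one communication
        # before the central documentation is updated.
        return any(c.send_on < self.doc_update_on for c in self.comms)

plan = ChangePlan(
    change="Add Participant Profile section to Research Study Brief",
    doc_update_on=date(2024, 6, 3),
    comms=[
        ChangeComm("pilot group", "Preview of the new section",
                   "focused channel", date(2024, 5, 20)),
        ChangeComm("all PWDRs", "New section is live; use it going forward",
                   "Slack broadcast", date(2024, 6, 3)),
    ],
)
print(plan.needs_advance_notice())  # → True: the pilot comm precedes the update
```

Even if you never write code for this, sketching a plan in this shape—one row per audience, with its message, channel, and timing—is a quick way to check that no group learns about a change at the wrong time, or not at all.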
New Versus Current Users
When you’re working with content that’s instructional or procedural in nature, you need to architect on-the-page changes in your central documentation (and sometimes also in the change communications) with two main user groups in mind. First, those who are familiar with the existing steps or policies (and likely don’t reread or review things often), and second, new team members to whom the content is brand new.
As an example, last year I updated our UX measurement playbook to use UX-Lite instead of SUS, which required extensive content changes and clarification of preexisting confusion. I designed the updated playbook content to balance its new state with how it was changing: I added an overview of the current standard versus the new one, why it was changing, and why the new standard was better. The rest of the playbook was written for first-time users. This helped current team members understand and follow the change. And new hires using the playbook for the first time could review the background context without it muddying the core content.
Putting Change IA into Practice
Let’s say you’re updating the template for documenting a research study with participant recruitment criteria to encourage your team to really think through who they need to recruit with their project team, ultimately improving recruitment and screening quality. Your team refers to this template as a “Research Study Brief.” Let’s take a look at how the change (adding recruitment criteria), and communicating about it, might be done using the concepts we’ve discussed, from worst to best case scenario.
Worst
Update the template, but don’t tell anyone, because you assume people will always create a new brief from that template and use it without any questions. Three months later, you notice no one’s using the new template and wonder why—and then you receive repeated questions about it.
Middling
Update the template and post a message to Slack that reads:
“Hi everyone! Heads up that I changed the research plan template, now it has a section to include more recruitment details.”
In this message, the actual change gets lost. Different and imprecise words are used to describe the thing (is “research plan template” the same as the template for the “Research Study Brief”?), and the thing you changed isn’t linked to in the message, creating a breakdown in the IA. It also doesn’t mention how to use this new part of the template and why it matters. And, while I definitely like to use a friendly, human tone in my communications, this message uses an overly casual tone that undermines the importance of the message. A change communication’s substance as well as the style in which it’s written are both important considerations in your content strategy for managing change.
Best
Update the template and post a message to Slack and/or email that reads:
🔔 Research Study Brief Template – Participant Profile Added
What has changed?
Our Research Study Brief (link) now includes a section called “Participant Profile” (link), where you should include details about the recruitment criteria for your study.
This new section will help you think through key criteria for a successful study, and document this info for stakeholders and future reference.
What do I need to do?
Fill out the “Participant Profile” for every research study going forward, making sure to create Research Study Briefs from the template.
There’s no need to go back to old studies to add this section.
🔗 The “What’s New” section (link) of our Research homepage has also been updated to reflect this change.
Let us know if you have any questions or feedback in a thread here. Thanks!
This brief message displays an intentional content and IA approach, and packs a punch. It uses:
An emoji to visually signal “update,” with a short headline to summarize the change at a glance and get people’s attention.
Bold headings, bulleted lists, and separate sections to help people read and digest the words quickly.
Spelled-out details in two sections, aligned to common questions in people’s minds (What’s changed? What do I need to do because of the change?).
Names and terms that are consistent with the central documentation (Research Study Brief), and links to the brief’s template directly.
Another link to the specific section that’s been changed (Participant Profile), as well as a link to a central change log.
An explanation of why this change is important and helpful.
A friendly tone that solicits feedback and questions, and instructs people how to respond.
You can use the best version of this example as an architectural template to reliably roll out changes, whether small or large—but I encourage you to experiment with what works for your organization. Sometimes a change can be small in scope but important, like in the Research Study Brief example. Even so, every change benefits from intentionally architected content and communications over time.
Architect Well
People are often navigating an excess of (new and updated) information that continuously swirls around teams and organizations. Within the realm of research operations, we have the power to bring order and clarity to the chaos. We can help people make sense of research and how to perform it, even as we vie for people’s attention within the organization’s larger information ecosystem. As ResearchOps professionals, we can weave content strategy practices and information architecture principles into our roles, inspiring our users and collaborators to see us as trusted partners and experts.
Sponsor and Credits
The ResearchOps Review is made possible thanks to Rally UXR—scale research operations with Rally's robust user research CRM, automated recruitment, and deep integrations into your existing research tech stack. Join the future of Research Operations. Your peers are already there.
Edited by
1. Covert, Abby. 2014. How to Make Sense of Any Mess. CreateSpace Independent Publishing Platform. https://www.howtomakesenseofanymess.com/lexicon/information-architecture.
2. Covert, Abby. 2014. How to Make Sense of Any Mess. CreateSpace Independent Publishing Platform. https://www.howtomakesenseofanymess.com/chapter1/16/people-architect-information/.
3. Covert, Abby. 2014. How to Make Sense of Any Mess. CreateSpace Independent Publishing Platform. https://www.howtomakesenseofanymess.com/chapter1/20/whats-information.
4. “Information scent” is a concept within information foraging theory that describes people’s behavior when navigating or searching for information. It says that humans use cues in information environments (such as web content) to determine the likelihood that they’ll find the answer to their information query—much like how animals or our ancestors hunted or foraged for food!
5. A term first coined by Kate Towsey in 2019, PWDR stands for “People Who Do Research.” It describes anyone in an organization who performs research, regardless of their job title.
6. Covert, Abby. 2014. How to Make Sense of Any Mess. CreateSpace Independent Publishing Platform. https://www.howtomakesenseofanymess.com/chapter1/21/information-is-not-data-or-content.
7. To geek out over card sorting, I recommend Card Sorting: Designing Usable Categories by Donna Spencer. If you want to learn more about tree testing, the Interaction Design Foundation’s article Tree Testing: A Complete Guide should stand you in good stead. If you don’t have a UX research background, make friends with your team’s UXRs by asking what resources they recommend for understanding “what research should I do when?” Or they can help you design your study!