CONTEXT

Content is easy to do poorly and difficult to do well. This statement rang true for a product I worked on that grew from a form builder into a no-code application builder. We struggled to make the content patterns that made sense for a form builder work for a technical powerhouse, but we pushed forward. Three years after I had begun working on the product, we finally had the resources to assess what was in our live system. Our first assessment was the product's content, and our guiding question was: what exists now?

DISCOVERY

It's All Greek to Me: A user-language audit of an application builder

After a few years of lightning-fast development, my employer had the resources to devote to evaluating the current state of the product's usability. The Product Strategy Team (PST) decided we would evaluate language first, focusing on labels and defaults in the product's builders.

The UX team was asked to conduct this effort, with me as the lead. After discussing with my team and PST, we decided we couldn't just look at labels; we needed to look at everything. We then embarked on a two-month journey to collect, categorize, and analyze all of the product's language.

Our goals were to understand the patterns of language currently in use, learn how users understand our current content, and gather industry standards for content so we could better align our product with them.

DATE:
JANUARY 2021
TYPE:
AUDIT
ROLE:
RESEARCH LEAD
RESEARCH

Language Audit

We knew the biggest part of this effort would be a language audit. At first we thought the fastest and easiest way to get content would be web scraping. After connecting with our Solution Architect, we confirmed that scraping could collect all the content, but it wouldn't capture the behavior or nuances the content explained.

A decision was then reached: we would go through the entire system and collect not only the language, but also its behavior and consistency. The data gathering took us a month, and we ended up collecting 3,400 unique content points. For each content point, we recorded its type, purpose, and behavioral outcome, and evaluated its consistency with the rest of the product's language in both behavior and UI.
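For illustration, here is a minimal sketch of how a single audit entry could be structured as a Python record; the field names and example values are hypothetical stand-ins, not our actual spreadsheet columns.

```python
from dataclasses import dataclass

# A minimal sketch of one audit entry; field names are hypothetical,
# not the actual columns from the audit spreadsheet.
@dataclass
class ContentPoint:
    text: str                  # the literal string shown in the UI
    location: str              # builder/screen where it appears
    content_type: str          # e.g. "button label", "placeholder", "modal title"
    purpose: str               # what the content is meant to communicate
    behavior_outcome: str      # what actually happens when the user acts on it
    behavior_consistent: bool  # matches similar content elsewhere in the product?
    ui_consistent: bool        # styled like similar content elsewhere?
    notes: str = ""

# Example: a dropdown placeholder whose label does not match its behavior.
example = ContentPoint(
    text="Select one",
    location="Form builder > field settings",
    content_type="placeholder",
    purpose="Prompt the user to pick a value",
    behavior_outcome="Multiple values can be selected",
    behavior_consistent=False,
    ui_consistent=True,
    notes="Label implies a single choice; the control allows many.",
)
```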

Industry Standards

Language standards and best practices were collected from UX industry experts (NNg, Rosenfeld Media, UX Collective, UX Planet, Adobe, UX Writing Hub). A document was created to highlight industry standards and common terminology and to provide insight into writing user-centric language.

Defaults Survey

To discover how our core users understand language for pre-set values (defaults), we had our internal configurers participate in a survey that asked them to rank their use of default values and provide feedback on their responses.

Survey excerpt.
ANALYSIS

Language Audit

Pie chart: action button labels on modals, consistency for all builders.

Most of our analysis focused on the builders because they had the most content, but we compared all areas. Three major trends emerged:

  • Language across modals that had the same behavior was fairly consistent, but the UI was not. The most common instance was notification modals in different builders sharing the same content while some used black text and others red.
  • Modals did not use descriptive language. The pie chart represents the action button labels for all modals titled Are you sure? The primary action was usually described only by a button label or, in some instances, reaffirmed in a question. While not inherently bad, the action being confirmed was not front and center.
  • Labels for some buttons and placeholders did not match their behavior. 35% of all placeholders for dropdowns were labeled Select one when you could select multiple (see the sketch after this list for how a figure like that can be pulled from the audit data). We saw this as a recurring pattern: content was reused for something similar, but not exactly the same. The juxtaposition between what a label tells you to do and what you actually can do may cause users to doubt themselves or the product they're using.
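As a rough sketch of how a figure like the 35% above could be tallied from the audit data (the sample rows and field names are illustrative, not our real entries):

```python
# Tally dropdown placeholders whose label promises a single choice but whose
# behavior allows multiple selections; 'audit' stands in for the 3,400
# collected entries.
audit = [
    {"type": "placeholder", "text": "Select one", "allows_multiple": True},
    {"type": "placeholder", "text": "Select one", "allows_multiple": False},
    {"type": "placeholder", "text": "Choose a value", "allows_multiple": True},
]

dropdown_placeholders = [p for p in audit if p["type"] == "placeholder"]
mislabeled = [
    p for p in dropdown_placeholders
    if p["text"] == "Select one" and p["allows_multiple"]
]

share = len(mislabeled) / len(dropdown_placeholders)
print(f"{share:.0%} of dropdown placeholders say 'Select one' but allow multiple")
```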

Defaults Survey

Input field values ranking

During analysis we saw that many users stuck to using one or two defaults for each field. This was for a variety of reasons; the two most prominent were "I only know what a couple of them do" and "It was built this way since I started working, I never changed it." The results also varied by project: those who worked on more technical projects and used object models were more concerned about data corruption than users who worked with forms. Based on our results, we recommended a few changes to sort order and language.

The results also showed a trend of users not knowing which fields held default values. This highlighted a UI issue we had seen during our audit: placeholder and value text were the same color. We suspected that the lack of distinction between defaults and placeholders may have affected use, so we updated the product to give their text colors contrast.
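For context, here is a minimal sketch of the kind of contrast check behind that update, using the standard WCAG relative-luminance formula; the hex colors are stand-ins, not the product's actual palette.

```python
# A minimal sketch of a WCAG contrast check; the hex values below are
# hypothetical, not the product's actual palette.

def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex color like '#767676'."""
    channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a: str, color_b: str) -> float:
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

placeholder_text = "#767676"  # lighter grey for placeholder text (hypothetical)
value_text = "#1a1a1a"        # near-black for entered/default values (hypothetical)
background = "#ffffff"

# Each text color should still read against the background (WCAG AA asks for
# at least 4.5:1 for body text), and the two text colors should be visibly
# distinct from each other.
print(f"placeholder vs background: {contrast_ratio(placeholder_text, background):.2f}:1")
print(f"value vs background:       {contrast_ratio(value_text, background):.2f}:1")
print(f"placeholder vs value:      {contrast_ratio(placeholder_text, value_text):.2f}:1")
```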

IMPLEMENTATION

Content Outcomes

This effort produced outcomes that were artifact-, process-, and UI-based.

  • Content standards were created to serve as a guide and provide examples of correct content. This is a living document in Confluence used by the Product, Dev, and QA teams to create content, ensure prototype and user-story content matches the standards, and ensure the live environment matches them as well.
  • Content creation and review were built into the solution cycle. Teams now had a mandatory check-in to discuss content needs with the content team (myself and a content specialist). The content team would then create needed language for the teams and ensure standards were met.
  • The design system was updated to meet the new standards. This included modal updates, placeholder and default text updates, action button updates, and sentence casing. The changes were added as user stories and implemented by the dev team in the live product.
Content standards excerpt.

Strategic Outcomes

We conducted this effort during a pivotal time for the product: an official product team would soon be formed, a process flow builder was in its early stages, and cross-departmental collaboration was beginning to flourish. This work highlighted key missing information that was vital going forward: who is our user base, and what are we building? We proposed ideas that would help partially answer these questions.

  • User segmentation - we recommended that PST create clearer, more specific user segments using our previously created jobs-to-be-done as the base. We had an idea of what users needed to do, but needed more data on the who. This would benefit content creation by giving us a focused audience and allowing us to test content with identified segments.
  • Competitive alignment - a precursor to a competitive analysis, this proposed effort focused on discovering who PST thought our competitors were and who they actually were. We collected a list of 27 identified competitors and assessed the problems they solve to compare them with the problems we solve. This helped align the product as a no-code application builder and shape the product vision.