After a few years of lightning-fast development, my employer had the resources to evaluate the current state of the product's usability. The Product Strategy Team (PST) decided we would evaluate language first, focusing on the labels and defaults of the product's builders.
The UX team was asked to conduct this effort, with me as the lead. After discussing it with my team and the PST, we decided we couldn't just look at labels; we needed to look at everything. We then embarked on a two-month journey to collect, categorize, and analyze all of the product's language.
Our goals were to understand the patterns of language currently in use, learn how users understand our current content, and gather industry content standards so we could better align our product with them.
We knew the biggest part of this effort would be a language audit. At first we thought the fastest and easiest way to gather the content would be web scraping. After connecting with our Solution Architect, we learned that scraping could collect all of the content, but it wouldn't capture the behavior or nuances behind it.
We then decided to go through the entire system ourselves and collect not only the language, but also its behavior and consistency. The data gathering took a month, and we ended up collecting 3,400 unique content points. For each point we recorded its type, purpose, and behavioral outcome, and evaluated its consistency with the rest of the product's language, behavior, and UI.
Language standards and best practices were collected from UX industry experts (NNg, Rosenfeld Media, UX Collective, UX Planet, Adobe, UX Writing Hub). We created a document highlighting industry standards and common terminology and offering insight into writing user-centric language.
To discover how our core users understand the language around pre-set values (defaults), we had our internal configurers take a survey that asked them to rank their use of default values and provide feedback on their responses.
Most of our analysis focused on the builders, because they had the most content, but we compared all areas. Several major trends emerged during our analysis:
Many users stuck to using only one or two defaults for each field, for a variety of reasons; the two most prominent were "I only know what a couple of them do" and "It was built this way since I started working; I never changed it." The results also varied by project: those who worked on more technical projects and used object models were more concerned about data corruption than users who worked with forms. Based on these results, we recommended a few changes to sort order and language.
The results also showed that users often did not know which fields held default values. This highlighted a UI issue we had seen during our audit: placeholder text and value text were the same color. We assumed the lack of distinction between defaults and placeholders may have affected their use, so we updated the product to give the two text colors clear contrast.
This effort led directly to artifact-, process-, and UI-based outcomes.
We conducted this effort at a pivotal time for the product: an official product team would soon be formed, a process flow builder was in its early stages, and cross-departmental collaboration was beginning to flourish. This work highlighted key information that was missing but vital going forward: who is our user base, and what are we building? We proposed ideas that would help partially answer these questions.