To get insight into page placement and visual preferences, we showed 10 students a static concept and asked, "How do you prefer the text to be highlighted as it's read?"
The results were inconclusive because the question was too abstract. We knew a typical Figma prototype couldn't convey the different speeds and cadences needed for more detailed testing, but being left with no signal at all was unexpected, and we needed to move quickly on a solution.
Spotlighted text is inherently similar to highlighted text and they need to exist on the same content.
To avoid disrupting learned behavior, highlighting will remain as is.
Luckily, the initiative was a high priority and Pearson is well staffed but specialists are in high demand. In order not to monopolize their time, I needed to move quickly on the design.
I investigated no-code alternatives but determined we should use in-house technical talent and build our own prototype conveying audio and visual tone, speed, and pacing. To reach the degree of confidence we needed, students had to actually interact with the interface, and the no-code route was riskier.
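To illustrate why a coded prototype can convey what a static mockup cannot: in a browser, the Web Speech API fires word-boundary events during speech, which can drive a live "spotlight" on the word being read. The sketch below is a minimal, hypothetical version of that idea (the function names are mine, not Pearson's implementation); the `wordAt` helper is pure so it can be tested outside a browser.

```javascript
// Return the [start, end) span of the word containing charIndex,
// plus the word itself. Assumes words are separated by single spaces.
function wordAt(text, charIndex) {
  const start = text.lastIndexOf(" ", charIndex) + 1;
  let end = text.indexOf(" ", charIndex);
  if (end === -1) end = text.length;
  return { start, end, word: text.slice(start, end) };
}

// Browser-only wiring (guarded so the helper still runs in Node).
// `rate` is one of the pacing variables a coded prototype can vary
// per test condition, which a static Figma mock cannot.
function spotlight(text, onWord) {
  if (typeof speechSynthesis === "undefined") return; // no TTS here
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0;
  utterance.onboundary = (e) => {
    if (e.name === "word") onWord(wordAt(text, e.charIndex));
  };
  speechSynthesis.speak(utterance);
}
```

In a prototype, `onWord` would apply the spotlight style to the span returned by `wordAt`, so participants experience the actual cadence rather than imagining it.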
Designing for text-to-speech was new to me, so in addition to researching best-in-class examples, I studied established best practices for read-aloud interfaces.
I worked with existing components as much as possible. Luckily, we were actively defining the system so it was a good time to create new styles to differentiate spotlighting text treatments.
I narrowed the options down to a grey and a color spotlighting treatment for testing, then set up an unmoderated test of the custom-coded prototype with 20 participants (English-speaking higher-education students in the US).