How hi-fi prototyping for TV became a breeze with a ProtoPie component library

As a UX agency, we often help our clients get UX research off the ground, striking the right balance of tools and governance so that prototyping, user feedback, and data-driven design become an integral part of the design process. Testing prototypes with real users allows our clients to quickly validate ideas before going into production.

Currently we’re doing this for our client Liberty Global, one of the world’s largest international TV and broadband companies. Compared to our usual projects, one major difference here is that we’re doing user research for a TV interface. “Same thing, just another platform,” you might say. But the difference is huge, and it mainly lies in the prototyping process.

Prototyping for TV, a whole different ballgame

The majority of our prototyping happens right inside design tools like Figma, which allow any designer to create somewhat realistic prototypes. You can create happy flows while utilising some of their minimal animation features. For TV, that’s a whole different story. Controlling a TV interface relies on key-press events and conditional logic that cannot easily be simulated by day-to-day prototyping tools, which mainly focus on touch- and mouse-based input. There are thousands of paths users can follow on a single TV UI page. Quite the challenge if you had to stitch those screen states together.

"High fidelity prototypes will get you high fidelity feedback."Andrei Herasimchuk, Principal Designer at Booking.com

Showing users a mouse-controlled prototype wasn’t an option: asking too much of their imagination only distracts participants and muddles the data gathered. Generally speaking, the more the prototype resembles the real thing, the better the feedback you’ll get from your test participants. As Andrei Herasimchuk puts it: “High fidelity prototypes will get you high fidelity feedback.” Experience has taught me that some of the best feedback arises when users are freely clicking around in a prototype. When participants get lost.

Happy flow vs Free flow. Interesting findings can arise outside the happy flow.

The right tool for the job

We set three main requirements when searching for a tool that would allow us to make TV prototypes:

  • We need to be able to simulate a realistic TV experience, meaning a user should be able to click around freely and not be forced to follow the ‘happy’ path.
  • Detailed UI animations and the ability to play back and control rich media.
  • Anyone in the team should be able to create a prototype, so we don’t need to rely on that one techie who knows all the ins and outs.

With this in mind, we did some digging around and ended up with a shortlist consisting of Principle, Marvel, ProtoPie, Axure, Framer, and HTML/CSS/JS. Each tool or approach has its own advantages. However, Principle and Marvel fell short and wouldn’t give us the level of feedback we needed from users. Axure, Framer, and code could definitely do the job, but their learning curves are very steep and wouldn’t meet the third requirement.

So that leaves us with…

ProtoPie

ProtoPie met all of the above requirements. With the use of key-press events, variables, and conditional logic we would be able to create a UI in which the user can navigate around freely. The great support for animation and rich media would provide the TV experience users are used to. Last but not least: it’s extremely easy to wrap your head around the way ProtoPie works, making it a logic-based prototyping tool that works not only for techies but for any designer.

However, that didn’t mean we could just pick up the tool and start creating a scalable TV interface from scratch. ProtoPie isn’t an out-of-the-box TV prototyping solution; it still required us to understand the paradigms of remote-controlled UIs and translate those into a system compatible with ProtoPie’s interaction model.

Defining the system

This wasn’t a one-off assignment: we needed to deliver at least one prototype every month for an undetermined amount of time. Taking the idea of a design system, the most logical thing was to create a reusable component library so we wouldn’t have to reinvent the wheel and start from scratch every time.

The challenge here is to continuously track which component is selected, so that the prototype knows which component to select next when one of the arrow keys is pressed. To achieve this we defined three levels in the component hierarchy, each of which can be switched either on or off. Which one is on or off is determined by the parent level.

Roughly speaking, the scene tells which component should be active based on certain conditions. The component then tells which entity needs to be focussed. On focus, the entity sends its name to the scene, so we know at all times what is being selected.

1. Entity

Entities are the smallest and simplest bits in our library. They generally contain one variable called focus. When focus is true the focussed state is shown; when focus is false the unfocussed state is shown. It’s as simple as that. What the states look like is determined inside the entity. Which entity is focussed is determined by the parent level, which is always a component.

Entity behavior as shown below:
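ProtoPie itself is configured visually, so there’s no code to show, but the logic translates cleanly. Here’s a minimal TypeScript sketch of an entity as described above; the class and the reportToScene callback are our own hypothetical illustration, with focus mirroring the entity’s variable:

```typescript
// Hypothetical sketch of an entity; ProtoPie is configured visually,
// so this class is an illustration, not real ProtoPie output.
class Entity {
  // Mirrors the entity's `focus` variable from the article.
  focus = false;

  constructor(
    public name: string,
    // Hypothetical callback standing in for "the entity sends its
    // name to the scene" on focus.
    private reportToScene: (name: string) => void
  ) {}

  setFocus(focus: boolean): void {
    this.focus = focus;
    // In ProtoPie, flipping `focus` switches between the focussed
    // and unfocussed states defined inside the entity.
    if (focus) this.reportToScene(this.name);
  }
}
```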


2. Component

Components contain one or more entities. Just like entities, components have an on and off state, but instead of focus the variable is called component_active. When component_active is true, the component tells which entity inside it needs to be focussed. Components are generally full width and stacked on top of one another inside a scene.

Component behavior as shown below:
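Sketched in the same hypothetical TypeScript, reusing the Entity class from the previous snippet; component_active and selection_number mirror the variables described here, while the rest of the wiring is our illustration:

```typescript
// Hypothetical sketch of a component, reusing the Entity class above.
class Component {
  // Mirrors the `component_active` variable from the article.
  component_active = false;
  // Index of the focussed entity, driven by LEFT/RIGHT (see below).
  selection_number = 0;

  constructor(public name: string, private entities: Entity[]) {}

  setActive(active: boolean): void {
    this.component_active = active;
    this.updateFocus();
  }

  moveSelection(delta: -1 | 1): void {
    // Clamp so the selection stays within this component's entities.
    const max = this.entities.length - 1;
    this.selection_number = Math.max(0, Math.min(max, this.selection_number + delta));
    this.updateFocus();
  }

  private updateFocus(): void {
    // Only the active component focusses an entity; all others unfocus.
    this.entities.forEach((entity, i) =>
      entity.setFocus(this.component_active && i === this.selection_number)
    );
  }
}
```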

3. Scene

A scene is basically a page with a bunch of components. All that a scene does is tell which component should be active based on a variable called row_number.

Scene behavior as shown below:
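And the scene, completing the hypothetical sketch: row_number mirrors the scene’s variable, and onEntityFocussed is the receiving end of the entity’s reportToScene callback (each entity would be constructed with scene.onEntityFocussed passed in):

```typescript
// Hypothetical sketch of a scene, reusing the Component class above.
class Scene {
  // Mirrors the scene's `row_number` variable, driven by UP/DOWN.
  row_number = 0;
  // Name last reported by a focussed entity, so the scene always
  // knows what is being selected.
  selectedName = "";

  constructor(private components: Component[]) {
    this.updateActive();
  }

  activeComponent(): Component {
    return this.components[this.row_number];
  }

  moveRow(delta: -1 | 1): void {
    // Clamp so the active row stays within the scene's components.
    const max = this.components.length - 1;
    this.row_number = Math.max(0, Math.min(max, this.row_number + delta));
    this.updateActive();
  }

  // Receiving end of the entity's reportToScene callback.
  onEntityFocussed = (name: string): void => {
    this.selectedName = name;
  };

  private updateActive(): void {
    this.components.forEach((c, i) => c.setActive(i === this.row_number));
  }
}
```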

Using numbers to control UI and track position

The arrow keys are set to control these number variables, where one press adds or subtracts 1 from the variable. row_number is controlled with the UP and DOWN keys in the scene, while selection_number is controlled with the LEFT and RIGHT keys within the component.
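Tied to the sketches above, that key wiring could look like this (browser-style key names; everything here remains our hypothetical illustration of the ProtoPie logic):

```typescript
// Hypothetical key wiring for the sketches above: UP/DOWN drive row_number
// on the scene, LEFT/RIGHT drive selection_number in the active component.
function handleKey(scene: Scene, key: string): void {
  switch (key) {
    case "ArrowUp":    scene.moveRow(-1); break;
    case "ArrowDown":  scene.moveRow(1); break;
    case "ArrowLeft":  scene.activeComponent().moveSelection(-1); break;
    case "ArrowRight": scene.activeComponent().moveSelection(1); break;
  }
}

// For example, in a browser you could forward keydown events:
// document.addEventListener("keydown", (e) => handleKey(scene, e.key));
```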

Reusable component library

As mentioned before, one requirement was that anyone on the team should be able to create a TV prototype. We partly achieved this by creating a component library in which each component is built with the same logic described above. This way anyone (after a brief introduction) can drag and drop existing components onto their scene, link the triggers to the applicable components, and have a working TV UI within minutes.

That process looks something like this:

For those who need to create new components, a bit more knowledge of the system is required, but it still isn’t rocket science. A short hands-on workshop gave designers the knowledge they needed to create new components and add them to the system.

A realistic prototype requires a realistic setup

People generally don’t watch TV on their laptops. And when they do, it’s not through a UI that is controlled with arrow keys. For that reason we created a setup where users can comfortably sit as far away from the big screen as they normally would, and use the actual remote control that customers use at home.

The test setup at the office

ProtoPie can listen for a lot of keyboard input, but not for specific remote-control input. So at first we used a generic Bluetooth remote and translated its input to normal keyboard keys (like up, down, escape, and enter).

Later on, we were able to get a Bluetooth version of the actual remote users get at home. With some code from a developer, the input from the remote was mapped to specific keys: for example, the Guide button was mapped to the G key, the back button to ESC, and the Record button to the R key.
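That mapping boils down to a small lookup table. A hypothetical sketch: only the Guide, back, and Record assignments come from our setup; the button identifiers and bridge function are illustrative, not an actual remote API:

```typescript
// Hypothetical remote-button identifiers mapped to the keyboard keys
// mentioned above; only Guide → G, back → ESC, and Record → R are the
// actual assignments, the rest is illustrative structure.
const remoteToKey: Record<string, string> = {
  GUIDE: "g",
  BACK: "Escape",
  RECORD: "r",
};

// The developer-written bridge translates a remote press into the mapped
// keyboard key for ProtoPie to pick up.
function translate(button: string): string | undefined {
  return remoteToKey[button];
}
```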

The UX research / prototype tandem

We now have a solid UX research platform on which we conduct a user test every month. Alongside it, we use and maintain the ProtoPie component library, which allows us to deliver very realistic TV experiences to our test users in timeframes that were unimaginable before.

Recurring user tests give us an incentive to create new components and feed the library as we proceed

The team can use the library to easily drag and drop mockups together. When we need a new component for a test, it’s built according to the system and added to the library once approved. Meanwhile we’re improving and expanding the library with advanced features like voice recognition, UI audio, and text-to-speech functionality. Next to that, we’re creating a setup for unmoderated TV interface tests that will allow us to collect quantitative data.

Creating the library has been somewhat of an investment, but that’s nothing compared to the amount of time it’s saving us in the end. Would you like a more detailed look at how we set things up? Feel free to reach out and we’ll be happy to show you.

Credits: The hand illustrations were provided by Handy 3D Hands - Icons8.
