John Brieger, very generously, gave Ludogogy permission to republish this article about playtesting boardgames. It was first published by John, in his blog at http://johnbrieger.com/blog/?p=285
In boardgames, there isn’t a formal term for the set of playtesting techniques based on observations of play rather than post-game feedback or questionnaires.
I’d like to propose “Observational Playtesting”. For me, these techniques have strong parallels with observational research in a number of other disciplines, such as anthropology, behavioral economics, and cognitive psychology. I work as a designer and user researcher for a large retailer, so my playtesting techniques are very informed by a User Experience background. The video games user research community is much more developed than the boardgames one, and many of the top labs there already use these types of research practices to conduct playtests.
In Observational Playtesting, you are trying to understand the player experience of a game, paying close attention to how players feel and react to moments during play. The best way to do this is to be wholly focused on watching and taking careful notes as testers play (video / audio recording can be helpful too).
If you’ve only taken notes or collected feedback forms at the end of a session, you miss most of what you could potentially capture. The experience of your game happens during the game, so it’s silly to only measure and record data afterwards. The limits of human memory, and a number of powerful cognitive and psychological biases, make observational playtesting the best way to capture playtest data that would otherwise be difficult to collect, or skewed, in post-game feedback. In part, this is why many top designers have started asking remote blind testers to video-record their game sessions.
Obviously, post-game feedback from the players is still very important and still leads to lots of design improvements. I’m not saying stop having those discussions, but rather that your tests will be more productive if you also use observational techniques. I’ve talked a little bit about this topic before in “Playtest Like a Researcher: Stop Playing in Your Own Tests.” In this post I’m going to dive into the types of data I like to capture while I’m observing a game.
So – what am I looking for?
At the core, you want to be watching for key moments of engagement from the players – the times when the players are most or least engaged with the game, its systems, and their interactions with the other players. I’m not suggesting attaching galvanic skin response sensors or anything, but broadly, if the graph of player engagement looks something like this:
You want to be tracking the circled moments that lead to those upturns and downturns (the local minima and local maxima).
Keep track of which moments felt good or exciting, and which felt boring or confusing. At the same time as you’re streamlining your game to clear up weird rules edge cases and bad interactions, you also want to be streamlining it to deliver maximum fun!
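If you like structured notes, one lightweight way to keep this kind of engagement log is a timestamped list of observed moments, each tagged with a rough engagement score, so the highs and lows are easy to pull out afterwards. This is just an illustrative sketch, not a standard tool – the class names and the -2..+2 scale are my own invention:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Moment:
    timestamp: float   # seconds since the session started
    note: str          # what happened, e.g. "Alice lost the action space she wanted"
    engagement: int    # rough observer rating: -2 (bored/frustrated) .. +2 (delighted)

@dataclass
class SessionLog:
    start: float = field(default_factory=time.monotonic)
    moments: list = field(default_factory=list)

    def log(self, note: str, engagement: int) -> None:
        """Record an observed moment with the time elapsed since session start."""
        self.moments.append(Moment(time.monotonic() - self.start, note, engagement))

    def extremes(self):
        """Return the logged highs and lows -- the 'circled' maxima and minima."""
        highs = [m for m in self.moments if m.engagement >= 2]
        lows = [m for m in self.moments if m.engagement <= -2]
        return highs, lows
```

After a few sessions, comparing the timestamps of the lows across tests is a quick way to see whether the same stretch of the game keeps producing dips.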
Writing down quotes:
Part of understanding player experience is watching what players say to each other and in reaction to key moments during play. Bring those quotes up during feedback, e.g. “You said that you ‘wasted a turn’ when you took that action – how did that feel?” This helps players contextualize feedback, and can prompt them to reflect on experiences they might not otherwise have remembered. It puts players back in the moment of their experience, and helps compensate for some of the cognitive biases that affect which parts of the game players give feedback about.
Bringing quotes up at the end helps you mirror understanding of that quote back to the player: confirming that you understand what they meant.
Player confusion and questions:
Understanding the learnability of your game’s rules and systems is significantly easier using observational techniques than by gathering endgame feedback. I like to note every question players ask during the game (even when they are just wondering aloud and not looking for an answer). These questions indicate points of potential confusion, or areas of the game players are particularly engaged with (sometimes both). You’d be surprised how much you can get out of just writing down each question players ask: you can then iterate your components and rules to answer those questions without you there!
I’m also watching for hesitation when making decisions, and when players check printed reference material such as player aids.
Boredom and dips in engagement:
Over multiple tests, you can look for particular times during play when boredom tends to cluster. Good indications of boredom are: spending time on their phone when it is not their turn, asking “whose turn is it?”, looking away from both the other players and the game components, and leaning back away from the table and the game.
Time sub-elements of the game:
Observe how long a round of turns around the table takes, and how long each player’s individual turn takes. See if rounds tend to drag on as the game gets toward its conclusion. Timing these sub-elements also helps you figure out how game length and pacing might change if you added a step or shortened the game timer.
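If you want more than gut feel, a tiny timer can record per-turn durations grouped by round, so that rising round totals make late-game drag visible. Again, this is my own sketch of the idea, not a standard playtesting tool – all names here are illustrative:

```python
import time
from collections import defaultdict

class TurnTimer:
    """Records how long each player's turn takes, grouped by round number."""

    def __init__(self):
        self.durations = defaultdict(list)  # round number -> list of turn lengths (s)
        self._start = None

    def start_turn(self) -> None:
        self._start = time.monotonic()

    def end_turn(self, round_number: int) -> None:
        """Stop timing the current turn and file its duration under the round."""
        self.durations[round_number].append(time.monotonic() - self._start)
        self._start = None

    def round_totals(self) -> dict:
        """Total table time per round -- steadily rising totals suggest drag."""
        return {r: sum(ts) for r, ts in sorted(self.durations.items())}
```

In practice you would tap start/end as each player begins and finishes a turn; comparing `round_totals()` across a session shows whether rounds lengthen as the game approaches its conclusion.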
Watch player dynamics:
If your game features player interaction, watch players’ emotional and strategic responses to those moments. How do people feel when the action space they wanted is taken just before their turn? Do players use more aggressive tactics after being attacked for the first time?
Paying attention to inter-player dynamics shows you how players respond to the design choices you’ve made, and suggests how they might react to changes you could introduce.
Observational playtesting is a powerful way to capture playtest data. This is a surface-level look at some of the things I watch for, but a lot varies from test to test, and with where a game is in its design and development cycle.
I encourage you to be taking notes continuously during play – you’ll be able to iterate quicker and gain valuable insights from fewer tests. When you move to remote blind testing, try getting testers to video-record their sessions so you can capture similar data.
If you are hungry for more formal research-focused resources, I highly recommend checking out:
- Contextual Design by Karen Holtzblatt and Hugh Beyer
- Games User Research by Drachen, A., Mirza-Babaei, P., & Nacke, L. E.
John Brieger is a boardgame developer based in Sunnyvale, CA with a background in qualitative research. After leaving Apple, he founded the boardgame development studio Brieger Creative, an independent studio of boardgame developers specializing in helping publishers and licenseholders turn prototypes into marketable products. Find more at http://briegercreative.com