The Good and Bad of Simulation-based Training
Decades ago, when I was getting my MBA at Harvard Business School, we had a computer-based learning exercise, where we competed as teams to see who could make the most money. I hated it. I thought it was worthless. Now, having decades of experience in teaching and leadership development, I still hate that type of simulation.
Some simulations are OK.
Simulation-based training is great for teaching people how to use the software that they’ll be using in a new job. It’s great for testing whether people have learned rules and regulations, like you’d encounter on a driver’s test. But most simulations fail the test when it comes to teaching business acumen, strategic thinking, and leadership.
The underlying reason: There’s a big difference between training and education. Training is what you do when you practice a set of skills so you can execute them without having to think about it. Education is almost the exact opposite. Education is what helps you approach new situations thoughtfully, when new answers are needed for new problems. As Crystal Schaffer of Nu U Consulting has said, “Training is for dogs. Learning is for Life.”
Immersive simulations are great when the material you’re covering is easily testable. Example: You’re driving (in a simulated vehicle) and you see a flashing red traffic signal. If you know to put on the (simulated) brakes, you got that one right.
But way too often, vendors are selling expensive training programs with more ambitious, ambiguous goals. These programs aren’t always worthless, but pretty often they are.
The good programs help people learn about themselves and their colleagues. The bad ones just teach you about what assumptions are built into the simulation. What makes the bad ones even worse is when they leave people with the impression that they have learned something, when in fact they just made some lucky guesses about what a computer program was going to give points for.
Let’s look at the positive side first.
One way that simulations are valid is when they teach you something about yourself. Learning that you have a certain decision-making style can help you avoid the pattern blindness that affects most leaders. For example, placing more weight on interpersonal inputs than on quantitative forecasting isn’t good or bad in itself. But knowing that it’s your default behavior might lead you to run the numbers more frequently than your first instincts would have you do.
Simulations are useful when they teach you something about math. Making investment decisions when you don’t know how to do NPV analysis is probably going to lead to sub-optimal performance. But a simulation that sharpens your abilities to run the numbers is teaching you something true and useful.
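The arithmetic behind NPV is simple enough to sketch in a few lines. The figures below are purely hypothetical, but they illustrate the point the analysis teaches: a project can return more cash than it costs in nominal terms and still destroy value once you discount for time.

```python
def npv(rate, cash_flows):
    """Net present value of a series of cash flows.

    cash_flows[0] occurs now; cash_flows[t] occurs t periods from now.
    Each flow is discounted back to the present at the given rate.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A project costing 1,000 today that returns 400 a year for three years
# looks profitable in nominal terms (1,200 > 1,000)...
project = [-1000, 400, 400, 400]
print(round(npv(0.10, project), 2))  # ...but is slightly negative at a 10% discount rate
```

The lesson a good simulation can drive home is exactly this gap between nominal and discounted results.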
On a related note, simulations are useful when they help you to recognize what economists call dominated strategies. A course of action might have positive results, but if there is another option that produces all those results and then some, then clearly that second approach is better.
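The idea of dominance has a crisp definition that a short sketch can make concrete. Assuming we represent each strategy as its payoffs across the same set of scenarios (the strategy names and numbers below are invented for illustration), one strategy dominates another when it does at least as well everywhere and strictly better somewhere:

```python
def dominates(a, b):
    """True if payoff vector `a` weakly dominates `b`: at least as good
    in every scenario, strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical payoffs across three market scenarios:
expand_only    = (5, 2, 1)
expand_plus_rd = (5, 3, 2)  # everything expand_only delivers, and then some

print(dominates(expand_plus_rd, expand_only))  # True  -> expand_only is dominated
print(dominates(expand_only, expand_plus_rd))  # False
```

Once a strategy is dominated, no forecast of which scenario will occur can rescue it; that is the transferable insight, independent of any simulation’s internal weights.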
Simulations can be especially useful when they show you how your colleagues react to what you are saying. Here the anonymity and the artificiality of a simulation can make it possible to receive objective feedback that you’d otherwise never hear. Too many executives believe that their communications are clear and compelling, but they don’t often have the evidence to back that up. When this is part of a simulation, it is actually helping people learn about their actual world, and not just the artificial one of a business game.
Before we talk about the ways simulations do damage, let’s look at a few gray areas.
Sometimes a simulation can drive home a valid truth, making you vividly aware of something that is certainly true, but not really counterintuitive. For example, a business game can reinforce the point that if you run out of inventory, you will have lost out on sales you could have made. The idea is unobjectionable. And in the competitive excitement of a team-based simulation, those losses might really hurt. But here’s the hitch. Rewards and penalties like this are quantitative, and somebody had to build those factors into the simulation. There’s objective value in seeing the directional effect of a bad decision, but not in quibbling about the actual amount of the penalty.
Two other valid behaviors can be reinforced by a simulation. One is where there are dependencies that need to be planned for. You can’t have (virtual) sales until you’ve hired your (virtual) salesforce. On a similar note, it’s usually important to spend some money on the present business while you’re also investing some money in the future of the business. If you do only one and not the other, you deserve to fail.
But the problem lies in how the simulation gives you feedback. If the simulation is basically a black box, taking in the inputs of your decisions and spitting out the quantitative results that it alone has calculated, then much of its value is dubious.
You shouldn’t be trying to outguess a black box.
In a competitive, team-based simulation – like the one I had at Harvard Business School – one team is ultimately declared the winner. They came out of the game with the most profit, or the highest market share, or the best stock price. But were their decisions really any better than the ones made by the second-place team? Was their performance in this running of the simulation a reliable predictor of future success? Did they learn any more than any of the other teams?
Usually not.
In games where teams compete directly against each other – where my market share gain comes at your expense – success comes from guessing what actions others are going to take. On a positive note, this really does reflect how the real world works. As John Maynard Keynes noted, you don’t buy a stock based on what the company will be worth, you buy it based on what other investors will think the company will be worth. On a less positive note, this doesn’t amount to actionable learning. It’s more like playing scissors-paper-rock.
Alternatively, where each team is competing independently, they are all in fact competing against the black box. Some teams will do better than others, but that might just be luck. If the simulation has a great number of decision-making rounds, and if the parameters in the simulation don’t change from round to round, then a team could do a genuinely better job than its opponents at making “profitable” decisions. But this raises two problems.
The obvious problem is that learning what the underlying computer model values, and to what degree, tells you about the model and its parameters, and not about the real world. Is investing in people a good thing to do? Yes! How about investing in R&D? Also yes! Investing in marketing? Still yes! Cutting price to gain market share? You know that the answer is yes!
But as the simulation gives feedback to different teams, it has to put weights on these different input/output combinations. As far as the users are concerned, it’s a crapshoot about which investment will yield the greatest result. And even if they guessed exactly right, they didn’t learn anything about how the real world works. In fact, the worst outcome is for them to believe that they did.
A second problem with black box models shows up when a single input variable affects multiple output variables, or even multiple intermediary variables. Even if this is how the real world works, it makes it nearly impossible for the players in a simulation to learn anything. The relative impacts across the variables will seem arbitrary, and there won’t be enough decision-making rounds even to tease out what those impacts were programmed to be.
This is the paradox of most business simulations: The richer the design, and the more it tries to model the way the real world works, the more arbitrary and artificial it is, and the less there is to learn from it.
Not all simulations are useless.
A good simulation can teach the users about themselves. It can make them aware of biases and default behaviors that they bring with them. It can show them that not everybody around them thinks exactly the same way they do.
One very well designed simulation that I have encountered drives home a lesson that is always worth remembering: Not everybody at the table has identical information. Sometimes there is critical information known only to the most junior person on the team, the person on the front line and not in the command center.
I’ve seen another simulation that drives home the point that different people on the team have different objectives. It’s a nice fantasy that many leaders have when they think that everybody in their organization has the same needs and goals, that they live in a state of alignment. Too bad that’s not generally true. What this simulation made very clear is that focusing on the single objective of corporate profitability fell short. (This was not a black box outcome, since it relied on each player making decisions individually, rather than as a team.) When your success depends on others’ actions, their needs and objectives matter too.
The lesson is clear.
Simulations can be great tools for training, for exposure, for team-building, and for education. But like any tool, when they are used improperly, they can do more harm than good.
When a simulation is teaching us how to do something better, faster, and more predictably, what it is engaged in is training. That’s useful, but very limited in its impact.
When a simulation is implicitly judging us on how well we can read its mind and outguess its black box, it’s probably teaching us the wrong things.
When a simulation helps us to learn about ourselves and about other people, it is genuinely providing an educational experience.
Caveat emptor.
January 14, 2018