One of the first years we were lucky enough to go to GDC (a yearly game developer conference), we had breakfast with a guy who worked at Bullfrog during their heyday. He told us a story about one of their games that has been echoing in my head ever since.
During development of the title (which we will call “Cheese Wrangler” to protect the innocent), they created a sophisticated simulation of the environment to determine which challenges to present to the player. Months of painstaking work went into the cheese-wrangling-challenge* simulation, trying to build a model that would feel intuitive to players, but the system was taking far too long to finish. Eventually they decided to test players against a purely random event driver, and they found that, after all that work, players couldn’t tell the difference between simulation and chance. So they threw out the simulation.
(*Not actually cheese wrangling challenges, but we’re sticking with that theme.)
Simulation just doesn’t matter if the player doesn’t care about it.
This is one of the biggest reasons games often don’t have sophisticated AI. Once you’ve cleared the hurdles of development time for the systems and computational cost within the game, you still have to figure out how to make the player care, or you might as well have thrown an RNG at the problem and moved on a week later.
We have part of this problem. We have a lovely, deep, complex AI, and if you know what’s going on under the hood it seriously outperforms random behavior, but we haven’t been doing a good enough job of making players care, because we aren’t giving them the tools they need to care.
The window into this problem was intended to be the character info panel, but in its initial version you had to read repetitive descriptive paragraphs for every character to figure out why they were upset, and often the information wasn’t valuable in the first place. No one used it, because of course they didn’t. It sucked.
In the next version we improved the visibility of the information by making the memories most important to the character visually distinct as icons, which was great, but the icons weren’t all unique. The unique data lived in the tooltips, so players had to mouse over each icon in turn to figure out why the character had a particular mood. So tedious. Nope, that sucks too.
Starting with this experimental, I decided that hiding the information was the problem, so I’ve been tearing through our emotion simulation code, exposing everything I possibly can to the player. The character doesn’t have one mood that governs everything, so we should show them the entire state. Done. The character was deriving their emotional state from dozens of memories, and we were only showing the most important ones. Why I ever thought this was a good idea I will never know. [It was the laudanum -ed.] Enough with that; let’s just base the state on the ones we were showing in the first place. Does it change how characters act? Not really. Does it improve the player experience? YES! A LOT. Giving players access to the entire character state, one way or another, is the only way they start to trust the UI. The harder it is for them to access the entire state, the less likely they are to trust it.
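To make the change concrete, here’s a minimal sketch of the before and after. All names and numbers are hypothetical illustrations of the idea, not the game’s actual code: the old path summed every memory while the panel showed only the most salient few, and the new path derives mood from exactly the memories the panel displays.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    label: str      # what happened
    impact: int     # signed contribution to mood
    salience: int   # how prominently the UI surfaces it (higher = shown first)

def mood_from(memories):
    """A character's mood is just the sum of whichever memories we hand in."""
    return sum(m.impact for m in memories)

def displayed(memories, n=2):
    """The n most salient memories -- the ones the info panel shows as icons."""
    return sorted(memories, key=lambda m: m.salience, reverse=True)[:n]

memories = [
    Memory("won cheese-wrangling contest", +8, 9),
    Memory("stepped in curd",              -2, 3),
    Memory("rat stole lunch",              -5, 7),
    Memory("sunny day",                    +1, 1),
]

# Old approach: mood derived from every memory, but the panel shows only a few,
# so the icons can never fully explain the number the player sees.
old_mood = mood_from(memories)             # 8 - 2 - 5 + 1 = 2

# New approach: mood derived from exactly the memories the panel displays,
# so what the player sees accounts for 100% of the state.
new_mood = mood_from(displayed(memories))  # 8 - 5 = 3
```

The design trade is the same one described above: basing the state only on the displayed memories sacrifices a little simulation depth, but the panel now always adds up to the number the player sees, which is what earns their trust.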
Cool, so all of the data in one place! Not cool: there’s so much data. This system was never intended to be visualized. And so begins the refinement of the system: cropping off features that weren’t adding enough value to bother trying to explain, sanding down the ones that were overly indulgent, and creating a presentation of what’s left that is intuitive enough for players to understand and still interesting enough that they’ll use it a lot.
I’m not going to spell out how this works, or why, because I can’t: it would render too much of the feedback useless. But I can say that I’m really excited to see what people think.
(You can try all of this in Alpha 43B, which is in the Experimental Branch!)