The other day a friend of mine shared that, as someone who’s worked in games since the PS2 days, he missed when teams decided up front what game to make and then made it, rather than relying on prolonged prototyping.
Hearing this made me smile as I’ve felt the same for a while. It was also surprising, as my friend appeared to be a proponent of prototyping. It made me wonder how many developers, including those seemingly fully on board with the “you never know until you try” mindset, have unvoiced reservations.
In this post I’m going to voice those reservations.
The short version is that, in practice, prototyping often doesn’t produce the results it’s supposed to, and presents rarely-acknowledged but common problems.
Prototyping Defined
Prototyping, for purposes of this post, is a long-term development methodology. A cultural commitment to trying ideas out before running with them, throughout much of a project’s life.
Quickly building a prototype, at a game jam for example, isn’t the “prototyping” I’m discussing today. That’s building one thing, on a short schedule, with a clear deliverable and end goal in mind.
The prototyping I’m discussing today is, in practice, deferred decision-making. It’s a pseudo-scientific transposition of A/B testing onto game design: instead of making decisions up front, based on experience and judgement, try out the possibilities and choose what works best. (This sounds great on paper!) It’s “you never know till you try (so try everything).” It’s meandering, often without concrete goals or deadlines, through a nearly infinite possibility space.
It’s how you end up with games that are little more than a basic concept after years of development.
The Theory of Prototyping
The Platonic Ideal of prototyping looks something like this:
It’s done with a small team, so while it takes time it’s not expensive.
In fact, it saves lots of money by validating ideas and avoiding costly failed production paths.
It’s a scientific process. You try out rival ideas, the team picks the objectively best one, then you repeat to build out a game composed of best-possible decisions. It’s game design via natural selection.
Unfortunately Platonic Ideals exist only in the aether, not in real life. In real life you do the above and still release a Concord.
Part 1: Prototyping Pitfalls in Practice
Lack of Overall Direction
One of the most common themes of failed games is a lack of shared vision among team members. This is a common complaint even on games that turn out fine, because wrangling even medium-sized teams is hard, but “we didn’t know what game we were making” is an especially acute and common failure of disastrous developments.
It’s hard to maintain a shared vision, period, as each team member imagines their own version of the game. But it’s especially hard to maintain a shared vision with a “you never know till you try” mindset and a refusal to commit. You can’t share a vision when no vision exists and you’re “finding the fun” along the way.
Don’t Rely on “Plan B”
When writing proposals or design docs, you might have some doubts about which spec is the right one to proceed with. …
It’s tempting to approve of both options and leave the actual decision to someone else. That’s why so many proposals and design docs say “we could do this”, and then leave both doors open. But with proposals in particular, you’re better off ditching Plan B entirely. …
In the proposal phase, it’s important to stand by your idea and show you’ve really thought it through.
According to Sakurai we shouldn’t propose a Plan B. But prolonged prototyping consists of regularly proposing a Plan B, and often a Plan C and D - and then spending time and money implementing each of them.
You Can’t Evaluate Ideas, Only Implementations
Let’s imagine you’re faced with a question like “is our combat system better with or without stamina?”. Your approach is to try out both options and pick whichever is more fun - let’s say you pick the version without stamina.
Was that a better idea or was it just made better?
Perhaps all the animations you used (surely you’re not authoring a whole new set of animations?) looked nimble, like they shouldn’t require stamina. Maybe the stamina values were tweaked poorly - stamina use was too onerous, or so generous it became irrelevant. Maybe your enemy AI, at that point in your project, allowed enemies to attack all at once without turn-taking, and the addition of stamina made combat unfairly difficult.
To truly compare these systems you need new animations, new enemy AI and balance tweaks. Which is a lot of work! But without that work you’ve invested time in engineering a half-assed comparison that reaches a foregone conclusion.
Now let’s tweak this scenario slightly. Some team members are jazzed about the stamina-free combat system so they work on that, somewhat separately from the team doing the regular combat system.
Now when you compare these two systems you’re comparing not just two different ideas, and not just two different implementations, but two different implementations from two different groups.
Maybe one group is more detail-oriented than the other, or puts more effort into game feel, sound FX and VFX. Maybe one has an animation background and the other doesn’t, or is more familiar with 3D action games.
It’s difficult, even for experienced professionals, to try out two combat systems and say “this one has better sound FX and better VFX and better overall game feel and more satisfying hit reactions but all of those are due to the specific execution - the one that feels worse in every way is fundamentally more sound.”
If they could say that - if they could tell that the less-fun implementation has more long-term potential - what did building and comparing the two different versions accomplish?
Choosing Better Ideas - The Meaning of “Better”
When I wrote about Silent Hill: Homecoming I made the point that it had the “best” combat in the series, but that made for one of the worst games, because the game was too combat-centric.
Dark Souls has basic combat. I played the entire game using a one-handed axe. I never used the heavy attack, and the light attack string uses basically the same animation for each swing. Effectively I used one attack the entire game! In a combat-centric game, like a fighting game or a Devil May Cry, that would be the sign of a terrible combat system. But while Dark Souls has a lot of combat it’s not a combat game in the same way Devil May Cry is. It’s not a “spectacle fighter” focused on the player-character’s capabilities.
If you were to compare the combat of Dark Souls and Devil May Cry in a vacuum you would probably decide that Devil May Cry has the “better combat.” But that conclusion is meaningless without context. In an actual A/B test you test something with clear metrics - do users click on the “download now” button more often if the button is red or blue? “Which combat system is more fun?” doesn’t have any clear evaluation metrics. It’s entirely subjective and it depends on the rest of the game.
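To make the contrast concrete: the button test has a countable metric, so you can even check statistical significance. Here’s a minimal sketch of a two-proportion z-test - the click and view counts are invented for illustration, and the function name is mine, not from any particular analytics library:

```python
from math import sqrt

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis (no real difference)
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical numbers: red button vs blue button
z = two_proportion_z(clicks_a=120, views_a=1000, clicks_b=90, views_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 → significant at the 5% level
```

There is no equivalent computation for “which combat system is more fun?” - there’s nothing to count, which is the point.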
Games aren’t collections of individually great mechanics. Many great games have mechanics that would be lousy in other games. A Mario game using Uncharted-style traversal would be awful, and vice-versa. Red Dead Redemption 2 is about the overall fidelity of the experience - not control fidelity so much as authenticity. So opening a drawer or skinning a creature uses a realistic animation rather than a “gamey” one. If Mario played a slow animation every time he grabbed a coin it would be unbearable. Many great games have mechanics that are arguably lousy in their own game. Mashing to run in GTA. Driving a car in GTA. Shooting in GTA.
If you’re making a small game centered on a single mechanic, asking “is this fun in isolation?” is useful. But in larger games answering “is this fun?” can be pointless or misleading.
The Importance of Content
A last point I’ll make here is with an excerpt from this piece from Daniel Cook of Spry Fox:
Content heavy games like RPG’s, Adventure games, some MMOGs and heavily scripted FPSs are difficult to prototype. The problem here is that these titles are actually highly evolved versions of a core game mechanic.
A RPG or MMOG is generally a single or multiplayer turn-based combat system with a whole bunch of content and meta-game systems layered on top. You can prototype the combat system, but the game’s competitive advantage is often “everything else.”
This seems far more true today than it was in 2005, even for genres that are traditionally mechanics-centered. Street Fighter 6 shipped with an entire open-world mode as a way to add more content to the game. Many AAA games rest on three pillars: combat, traversal, and exploration / puzzle solving. In many of those games the latter two are simplified or trivialized to better usher the player through content.
“Is this idea more fun than this other one?” often isn’t a good question to pose, it’s not easy to answer, and for many games, even if you can find an answer that answer isn’t too relevant.
The Production Realities of Prototyping
In theory prototyping might take time, but the team is small and the prototype can be unpolished, so it’s cheap.
In reality, for prototyping to be effective, senior personnel have to be involved. So while the team doing the prototyping is small it’s not all that cheap. And eating up the time of senior personnel has a large opportunity cost.
In practice you often have a boss evaluating your prototyping efforts. Your boss may lack the ability to imagine untextured cubes as God of War: Ragnarok, so they expect a certain level of production polish. Which again isn’t cheap.
Sometimes instead of a boss you have a publisher, or a potential publisher or investor. I try to keep tabs on what publishers look for in pitches, and increasingly what they want are games that are far along and have great production value.
This is a video of a Bayonetta prototype. If you brought this to a publisher today they’d probably tell you to get lost. Or at least, ask you to make the game look like a final product on your own dime, then talk to them again.
Now let’s talk timetables.
We often discuss prototyping as if “prototype until you figure it out” is a short production phase that sits on the far left of the production timeline. But at many places prototyping is a production ethos, not a pre-production ethos. At those places it’s a constant productivity tax over the life of a project.
Another common way to think about prototyping is that you can prototype for as long as it takes before switching into costly full production mode. In effect it doesn’t count against the real schedule and budget.
But even Blizzard, which popularized the “when it’s done” mindset, regularly kicks underbaked projects out the door now. Overwatch 2 released missing its main promised feature.
If you take too long the genre you’re targeting may become saturated. A fad or ethos you’re capitalizing on could be played out. You may hit the Duke Nukem Forever problem where games as a whole outpace your production, forcing you to constantly rework; you start off making a battle royale, then switch to a hero shooter, then switch to an extraction shooter. If you’re working on a sequel, delaying it for too long risks the franchise losing momentum.
There are costs to a long production beyond the monetary. And at some point someone can (and probably will) demand recognizable revenue. That you can prototype virtually forever, with little downside, is more pipe dream than reality.
Many modern games take a long time to come out but still seem rushed. That goes far beyond prototyping - sometimes executives decide to turn the single-player RPG into a multi-player shooter, then reverse course a year later. But a common issue with these games is that the developers - not the executives, the developers - are indecisive about what they want to ship, sometimes delaying bedrock decisions for years under the guise of exploration.
From Jason Schreier’s report, “How BioWare’s Anthem Went Wrong”:
It’s a story of a video game that was in development for nearly seven years but didn’t enter production until the final 18 months, thanks to big narrative reboots, major design overhauls, and a leadership team said to be unable to provide a consistent vision and unwilling to listen to feedback.
…
Early iterations of flying—which, developers say, was removed from and re-added to Anthem several times—were more like gliding, and members of the Anthem team say it was tough to get the system feeling all that fun. Every time they changed the traversal, it meant changing the world design accordingly, flattening and stretching terrain to accommodate the latest movement style.
…
The most common anecdote relayed to me by current and former BioWare employees was this: A group of developers are in a meeting. They’re debating some creative decision, like the mechanics of flying or the lore behind the Scar alien race. Some people disagree on the fundamentals. And then, rather than someone stepping up and making a decision about how to proceed, the meeting would end with no real verdict, leaving everything in flux. “That would just happen over and over,” said one Anthem developer. “Stuff would take a year or two to figure out because no one really wanted to make a call on it.”
Anthem was a game about dressing up like Iron Man to fight baddies, but it took them years to decide if the Iron Man armor should fly! You can spin that as exploring whether flying was fun but at some point that’s just punting on answering “what game are we making?”
The Myth of Potential Energy
I suspect we all know people (including sometimes ourselves) who fall into the trap of planning without doing. The person who plans to get in shape, asks for advice on which exercises to do and which protein shakes to drink, but never does consistent exercise. I’m not talking about procrastinators who do no work at all, but people who substitute preparation for progress.
Something I see a lot in people who claim they want to make games — but who aren’t actually making them — is the idea that they can build up potential energy that will eventually transform into kinetic energy. Instead of working on their game they read game design books, which gives them greater game design insights, so when they do work on the game it will be that much better! Instead of messing around in Blender they watch YouTube videos on how to use Blender.
Consuming these things is fine, if you’re watching on your phone on the bus or during your lunch break. But for many people consuming this content is worse than doing nothing, since it tricks their brain into thinking they’re making progress. Food that fills them up but has no nutritional value.
Prolonged prototyping often rests on the idea that you’re building up potential energy. That you’re “laying the groundwork” and “answering outstanding questions”, so that when you begin work “for real” you’ll hit the ground running.
In reality you might stumble out of the blocks when you realize the separate features you prototyped don’t play together nicely. That the game had “30 seconds of fun” in prototype form, but each 30 seconds is the same as every other, so while it’s fun for 30 seconds it’s not fun for 30 hours. There are plenty of issues that only rear their head during production, even when a vertical slice is supposed to guard against that.
Laying the groundwork is important, but sometimes “we’re laying the groundwork” is a white lie we tell ourselves.
The Personal Realities of Prototyping
It’s hard to stay excited about a project that remains exploratory for too long, or when it’s safe to assume that your work will be largely discarded. That creates turnover, it’s bad for projects, and I suspect it’s bad for the industry as a whole.
The Case of the Missing Cost-Benefit Analysis
“You never know till you try” is seductive as it has some element of truth: sometimes you try out a great-sounding idea and it turns out lousy, and sometimes (though, in my experience, not very often) you try out a lousy-sounding idea and it turns out great.
Making decisions up front has an obvious downside: you might make the wrong call. But a “try it out and see” approach has major downsides as well. Often missing is any discussion of those downsides, and a cost-benefit analysis.
I’d propose the following questions:
How often does trying out multiple ideas and choosing the best one produce a better result than choosing without trying?
In other words, how reliable is your initial intuition?
Or, how quickly do you recognize and pivot away from bad choices?
How much time and money does the process of trying out ideas take?
Would that time and money be better spent, on average, on execution rather than exploration?
In short, which has higher return: exploring multiple ideas, or executing on one?
One of the most common pieces of game development wisdom is “ideas are nothing, execution is everything.” In that case the answer to the above question is obvious. If we believe that ideas are nothing - are worthless - why would we advocate for a process that spends a great deal of time and budget choosing between them? (This is not a trick question, feel free to try to answer it)
I don’t entirely agree with “ideas are nothing” but execution is important. Missing out on the best ideas is a real danger, but so is short-changing those ideas (good or otherwise) with substandard execution.
I suspect Anthem would have been a better game had they committed to flying early. And would also have been a better game had they axed flying early.
Here’s another Sakurai video, “Don’t Put Decisions Off”, expressing similar sentiments as the previous one.
Each decision represents a fork in the road that will send your team down a long production path, so one wrong choice could lead to many hours of wasted work.
That’s truly a massive responsibility.
Thus, I can understand the desire to be careful, get second opinions, and figure things out as you go, but there often isn’t time for that kind of hesitation.
A quick, firm decision can end up helping more of your team than you might imagine. Every minute they have to wait is a minute the issue remains unresolved.
Intermission
I suspect many will get to this point and think “he keeps saying that prototyping is bad, but what he means is that prototyping done poorly is bad.”
Most developers reading this have probably taken part in stand-up meetings that devolve into two people arguing or getting lost in a tangent while everyone else loses interest. That’s not supposed to happen, but it happens.
The question is how often does it happen? If it happens 5% of the time you take the bad with the good. If those kinds of unproductive meetings happen 50% of the time you have a problem. If those kinds of unproductive meetings are the norm across the industry then we should stop using Scrum, no matter how great it sounds on paper.
What motivated me to write this piece is that it’s so rare for people to acknowledge when prototyping goes wrong, or that it can go wrong at all.
When in practice it goes wrong pretty often.
Prototyping: Origins
My friend who told me he’d grown weary of prototyping mentioned that he remembered it as popularized by GDC. I’ve done a little digging into that time period, and I’d like to highlight one representative presentation.
Game prototyping is all the rage these days. Chaim Gingold and I gave a fun lecture at the 2006 Game Developers Conference titled, Advanced Prototyping. The material is based on our experience prototyping games, technology, and user interfaces for Spore and at the Indie Game Jam over the past 4 years. Here's the abstract:
There are two points I’d make about this presentation and similar presentations from around that time period.
First, much of it goes against this hard-to-summarize thread by Daniel Cook of Spry Fox:
Why does 'finding the fun' fail so often as a design strategy?
#gamedesign
New designers are told to prototype and 'find the fun'. But the naive version of this is a garbage tactic that mostly results in poorly thought out prototypes that are never going to converge on gameplay.
The prototyping talks from around that time period lean heavily on “finding the fun” of individual features. They advance, in essence, the “bad version” from Cook’s thread: make the system, play the system, see if you’ve “found the fun” and if your co-workers (who are in this context little more than surrogate players) are jazzed about it.
More generally, these presentations fall into many of the pitfalls I covered in the first section. The idea that games can be successfully composed of individually-selected features, chosen based on which are most fun in the moment. The conflation of trying out ideas with trying out executions. A commitment to a constant productivity tax, where Plans A, B, and C are regularly implemented, to aid in making even minor decisions. A lack of appreciation for the time cost of prototyping, and how that time could be spent elsewhere.
This particular presentation was well-received at the time because Hecker was working on Spore. Here’s a particularly amusing writeup of the prototyping talk, from GameSpy:
Great games like Spore don't spontaneously appear from a design document. Here's how it happens...
Chris Hecker and Chaim Gingold are outspoken vanguards of wild, experimental approaches to game design and development. Is it any coincidence that these two designer-programmers are both working on Spore, Will Wright's experimental new Maxis title? This morning when the pair spoke about the processes they use to build up a breakthrough title like Spore, people were mobbed outside the door waiting to get in.
This writeup about how great and groundbreaking Spore is, thanks in part to heavy reliance on prototyping, was written in 2006. Two years before Spore released.
Spore still regularly shows up on “most disappointing games of all time” lists.
Spore relied heavily on prototyping individual features and minigames and that’s what it plays like - as per Cook it never converged on gameplay. It’s hard to even call Spore a collection of individually-fun elements that don’t converge, because most individual elements aren’t particularly fun on their own. The strengths of the game are the high-level concept (going from single-celled organism to a race of space explorers) and the creature creator. But adjusting the morphology of creatures - the most fun part of the creator - is only loosely connected to the gameplay. If you want to make a fast creature you don’t give it a long flexible spine and a dozen millipede-style legs, you give it a foot with a high “speed” stat.
So the second point here is an oldie but goodie: the proof of the pudding is in the tasting. Spore’s production methodology didn’t produce great results.
Part 2: The Prototyping Alternative
So if prototyping is bad (at least, more often than we’d like to admit) what’s the alternative?
I propose something like this:
Decide what game to make
Make the game
Step 1 might involve building prototypes, but in weeks or months, not years. It’s not the A/B/C/D testing of competing features or meandering through an infinite possibility space. And it’s not - this may be controversial - “finding the fun.” (Or hoping to stumble over it) You can hone the fun, or “follow the fun” - double down on the parts that are working best. But you should know what the fun of your game is supposed to be from the start. Otherwise what are you doing?
Step 2: make the game. This is not “doggedly move forward and stick to your original plan no matter how poorly it’s going.” Move forward in a sane manner. Still start with rough versions and blockouts. Adjust to taste - if something is going poorly ditch or rework it.
What I’m proposing, more than anything, is an attitude shift from “this is exploratory work that’s probably throwaway” to “this will probably end up in the final game in some form.”
In practice these are less different than they sound. If you want to answer a question like “should this function be on left or right click?” you could try out both and gather team feedback. But you could also just choose one and swap down the road if needed. Instead of doing both up front and committing to a productivity tax, that tax is only paid when things go awry. (Though that tax may be higher, I suppose)
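That “choose one and swap later” tax stays low when the decision lives in one place. A minimal sketch, with hypothetical names, of routing inputs through a binding table so the swap is a one-line data change rather than a refactor:

```python
# Hypothetical binding table: the left-vs-right-click decision lives in
# data, not scattered across the codebase.
BINDINGS = {
    "left_click": "attack",
    "right_click": "parry",
}

def handle_input(button: str) -> str:
    """Look up which game action a physical input maps to."""
    return BINDINGS.get(button, "none")

assert handle_input("left_click") == "attack"

# Swapping the decision down the road is trivial:
BINDINGS["left_click"], BINDINGS["right_click"] = (
    BINDINGS["right_click"], BINDINGS["left_click"])

assert handle_input("left_click") == "parry"
```

The point isn’t this particular structure - it’s that if a decision is cheap to reverse, you don’t need to A/B it up front.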
Anthem Flying and When #@&! Gets Real
The Anthem team explored different flying variations and eventually cut it entirely because it didn’t feel fun and complicated the production. That may sound like an example of prototyping done right: the team avoided a costly dead-end path.
But of course the game shipped with flying.
The leadership team’s most recent decision had been to remove flying entirely, but they needed to impress Söderlund, and flying was the only mechanic they’d built that made Anthem stand out from other games, so they eventually decided to put it back. This re-implementation of flying took place over a weekend, according to two people who worked on the game, and it wasn’t quite clear whether they were doing it permanently or just as a show for Söderlund. “We were like, ‘Well that’s not in the game, are we adding it for real?’” said one developer. “They were like, ‘We’ll see.’”
…
Then, according to two people who were in the room, Patrick Söderlund was stunned.
“He turns around and goes, ‘That was fucking awesome, show it to me again,’” said one person who was there. “He was like, ‘That was amazing. It’s exactly what I wanted.’”
Here’s a quote about flying from the Ars Technica Anthem review:
Almost everything about Anthem's flight system is awesome. The sheer act of lifting off looks, sounds, and feels great, no matter how many times you do it. There's a timing chain for the required jump-then-boost combo, coupled with a light-and-sound reaction of blasting off. The combined effect implies enough torque to make your real-life head rock back instinctually. Once you're airborne, the default speed is slow-and-maneuverable for newbies (aided all the more by a useful "hover in place" button), while pressing "forward" delivers a juicy amount of controllable velocity.
Flying wasn’t fun so they cut it, added it back in to impress their boss, then under the pressure of “make it work or we’re toast”, made it fun. That should scare prototyping enthusiasts.
The process that was supposed to help choose the correct feature set and avoid bad decisions - a process that took a lot of time and money - produced worse results than “it’s an Iron Man game so of course it should have flying - why are we even having this conversation?”
So what changed that made flying fun?
Sound, look and feel aren’t system features that show well in a rough prototype - they’re production polish. A hover button is something you might introduce to fill a need as the game comes together, when you notice that it’s hard to land or shoot precisely while flying. So one answer is the team wasn’t able to imagine cubes as God of War: Ragnarok - they weren’t able to project their prototype version of flying forward into a more polished, more satisfying version.
Another answer is that there’s just a fundamental difference between “exploring” and working for real.
When Mark Darrah joined the project in the fall of 2017, he began pushing the Anthem team toward one goal: Ship the game.
“The good thing about Mark is that he would just wrangle everybody and make decisions,” said one former BioWare developer. “That was the thing that the team lacked—nobody was making decisions. It was deciding by panel. They’d almost get to a decision and then somebody would go ‘But what about this?’ We were stagnant, not moving anywhere.”
“He started saying basically, ‘Just try to finish what you’ve started,’” said a second developer. “The hard part about that was that there were still a lot of things to figure out. There were still a lot of tools to build to be able to ship the game we were making. It was very, very scary because of how little time there was left.”
“Finish what you’ve started” highlights the mindset difference between “this is exploratory work” and “this will end up in the shipped game”, as well as the importance of execution.
Earlier I posed the question: is it better to spend effort executing on the (possibly) wrong idea or to spend that effort choosing the right idea? But in this case execution revealed that the “wrong” idea was the right one.
Knowing Without Trying
Most games consist of familiar parts arranged in familiar fashion.
Many games are sequels. IP-based games typically don’t let the demands of IP define a radical new genre; instead the IP is married to an existing genre. Star Wars: Outlaws and Avatar: Frontiers of Pandora both take the form of familiar Ubisoft games.
New games are often “spiritual successors” or “inspired by” or “x meets y” - “cozy farming sim meets dungeon crawler.” Most games aren’t interested in big innovations, and it was a lot easier to be innovative in 1979 when there were only a few dozen games in existence.
If you’re trying to decide on an Estus-Flask style health system vs health packs vs regenerating health there are plenty of examples of games that use each of those, to varying degrees of success. The same is true for tactics games that use action points vs a fixed number of actions, hex vs square grids, various cover and line of sight rules, etc.
I recently saw a video about Vlambeer prototyping Luftrausers, as an example of how prototyping can be useful. Quickly building a prototype is the good sort of prototyping so I have no objections, but I could tell you without prototyping that Luftrausers would work because I’ve played a game called Asteroids. Not to mention Two Tigers, which is a very similar game. Blasting enemies while controlling a ship via Newtonian thrust has worked since Spacewar!
Something that’s always irked me about the game industry is how disinterested we are in our own prior art. Maybe it’s because the industry, being technology-based, prizes newness. Maybe it’s that high turnover leads to loss of institutional knowledge and historical memory. But for whatever reason our collective memory of video games often seems limited to hits from the past 5 or 10 years.
An example I’ve written about before is the difference between loot games that drop items and loot games that drop currencies. (Or that make it easy to convert items to currency and back) The former is almost universally more fun than the latter, so whenever I see a game doing the latter I want to ask: why?
Watching Sakurai videos it’s hard not to notice that he has a broad and deep knowledge of games. I suspect that’s why he has the confidence to be decisive and why it works out - I’m sure he has good instincts and intuition, but he also has a wealth of knowledge of prior art.
When I consider game ideas I always ask myself: did other games do this, did it work, and why or why not? Particularly useful is identifying that you’re repeating bad decisions from past games.
In the first Project X-Zone you have one primary resource that’s used for super moves, defensive moves, and minor buffs. Since these all draw from the same pool and super moves are way better than the minor buffs, using those buffs is nearly always a mistake; playing optimally means ignoring the buffs system entirely.
In Project X-Zone 2 super moves and buffs use two different resources and don’t compete. Lesson learned.
In the PS2 God of War magic uses mana and a separate rage meter governs how often you can engage Extra Angry Mode, for presumably similar reasons. If magic and Rage Mode shared a meter it would be hard to pace the Rage Mode activations, hard to balance spell effectiveness and cost vs Rage, and players might feel bad for using magic since it would delay Rage Mode. (Perhaps indefinitely) You can see a similar dynamic in Street Fighter 6, which has a super meter and a separate Drive Gauge rather than one universal resource bar.
If you’re working on a game that involves special moves backed by a resource and pondering what would work best, there are hundreds of previous examples to call upon. You don’t have to puzzle it out on your own. There are also dozens of examples of games that use cooldowns instead of resource pools. Just yesterday I played demos for Dynasty Warriors Origins and SWORD ART ONLINE Fractured Daydream. Both map special abilities to right bumper + face button. In Dynasty Warriors those abilities use resource points and in SAO they have independent cooldowns. I think those each make sense for their respective games, based on what those games are trying to accomplish.
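The structural difference between the two gating schemes is easy to see in code. Here’s a minimal sketch (the class names, costs, and timings are my own invention, not anything from the games mentioned): abilities drawing on a shared pool inherently compete with each other, which is exactly the Project X-Zone problem, while abilities with independent cooldowns never do.

```python
import time


class ResourcePoolAbility:
    """Resource-point gating: abilities spend from a shared pool,
    so using one ability can lock out another."""

    def __init__(self, cost, pool):
        self.cost = cost
        self.pool = pool  # shared mutable dict, e.g. {"points": 3}

    def try_use(self):
        if self.pool["points"] >= self.cost:
            self.pool["points"] -= self.cost
            return True
        return False


class CooldownAbility:
    """Cooldown gating: each ability has its own timer and
    never competes with any other ability."""

    def __init__(self, cooldown_s, now=time.monotonic):
        self.cooldown_s = cooldown_s
        self.now = now  # injectable clock, handy for testing
        self.ready_at = 0.0  # monotonic timestamp when usable again

    def try_use(self):
        t = self.now()
        if t >= self.ready_at:
            self.ready_at = t + self.cooldown_s
            return True
        return False
```

With a shared pool, the designer has to balance every ability against every other ability drawing on it (the God of War and Street Fighter 6 examples above split resources precisely to avoid that coupling); with cooldowns, each ability only has to be balanced against its own timer.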
I’m not trying to imply that most developers ignorantly stumble through this sort of analysis. But I think we collectively undervalue bringing prior art to bear.
I’ll end this section with a quote from Xalavier Nelson of Strange Scaffold:
I do think the overall principles of having, if you’ve got a time and a budget and a thing that you’re bringing to life, especially if you know what you’re building, and it’s based on any prior precedent, you can ship that thing - it’s ok - that’s possible. I was told a lot, early in my career, that that’s not possible, I was told you can’t know what game you’re making until you make it. Which is fine as a philosophical concept - it doesn’t keep you in business for very long, though.
I’m not going to claim that Nelson has read this blog and agrees with my points. But I agree with his. You can know what you’re making up front rather than finding out along the way, and to some degree you can know if the game will work, because it’s likely composed of familiar pieces. Which, I would note, is less true of Strange Scaffold’s games than of the vast majority.
The Conclusion
This final quote comes from Ken Levine by way of Jason Schreier. It took me quite some time to find it but I’m glad I did, as it provides a nice summary:
During a panel discussion a few years ago, Levine explained the final act of his process. “In almost every game I’ve ever worked on, you realize you’re running out of time, and then you make the game,” he said. “You sort of dick around for years, and then you’re like, ‘Oh my god, we’re almost out of time,’ and it forces you to make these decisions.”
This is rarely admitted to but I don’t think it’s particularly uncommon.
Levine’s Judas looks an awful lot like Bioshock, by the way, which is common in these sorts of cases. After a prolonged prototyping phase Mass Effect: Andromeda was essentially a lesser version of previous Mass Effect games³. Not much in Anthem stands out as incredibly ambitious or experimental. When games run out of time and are forced to make decisions, that often involves regressing to the mean.
I’ve done a fair amount of Anthem-bashing, but a point I’d make again is that had the Anthem team spent all that time actually making Anthem it would probably have been pretty great!
That’s the tradeoff we don’t often talk about. If they had made decisions quickly, some might have been wrong. But they would have had more time to work on the best possible execution of those ideas, rather than 18 months to produce a flawed version of the theoretically best ideas. (And “theoretically” is doing a lot of work here) And boy, the lack of execution in the final game is very apparent, with horrible loading times, a mathematically flawed loot-scaling system, etc. The first-person home base (the rest of the game is third person) is bizarre in concept and just doesn’t work in practice. With more time, at least someone could have cranked up the walking speed to make it bearable.
Studios like Capcom and Ryu Ga Gotoku put out quality games on a regular schedule, while western-made games increasingly struggle with bloated schedules and budgets. That goes way beyond a commitment to prototyping, but I think it’s fair to say that those more productive studios indulge way less in dicking around. Ryu Ga Gotoku released Like a Dragon, a turn-based Yakuza game, 2 years after the brawler Judgment - there simply was no time to dick around.
This post is not advice. I could never tell anyone, sight unseen, that they’re prototyping too much or in the wrong way. But what I’m fairly certain of is that we, as an industry, aren’t entirely honest with ourselves about how often the “we dicked around and now we’re out of time” scenario plays out. About how often prototyping is a cover for indecision and a way of giving ourselves permission to dick around. And about how often prototyping provides worse results than advertised while being a constant drag on productivity.
1. Prototyping might not be explicit A/B testing where features are pitted directly against each other, but that’s often what it amounts to. And pitting the existence of a feature against non-existence is also an A/B test of sorts.
2. This is too much to get into here, but I think there’s an important distinction between prototyping technical features like a procedural animation system vs prototyping gameplay features. Which might explain why the creature creator, which is more a technical feature than a gameplay one (and a novel one at that), fares better than the rest of the game.
3. Andromeda prototyped ambitious procedurally generated planets, so there’s a discussion to be had here around playing it safe with conservative iteration vs trying bolder swings, but it’s way outside the scope of this piece.