Priorities and tech considerations in pre-launch game development

Interview with Fred Gill

Few possess the breadth of experience and insight that Fred Gill brings to the table when it comes to game development. With a storied career that includes pivotal roles at Respawn Entertainment, where he contributed to blockbuster titles like Apex Legends and Titanfall, Fred has successfully navigated the complexities and challenges of pre-launch development many times over. 

He recently sat down with us to delve into the nuanced stages of game development, shedding light on the critical priorities, technological choices, and strategic considerations that shape a game’s journey from concept to launch. 

GGWP: What are the top priorities for your team in the pre-launch phase of game development?

Fred Gill: There are different phases to game development, and with each phase, the priorities change:

  • Pre-production. The goal is to “find the fun.” How? Make lots of prototypes to home in on what is fun for the player. Most of the prototypes won’t be good enough to ship, and that’s fine – it’s about learning as quickly and inexpensively as possible what works and what doesn’t. Most prototypes will have placeholder assets (whiteboxing / greyboxing) as they may be thrown away.

At this phase, the engineering team and technical directors are supporting the rapid-prototyping efforts of the team – fast iteration (short feedback loops) leads to higher quality – and as the fun is found (the feature set), they start looking at what technology will be needed to support the game, both in the client (PC or console) and in server-side technology / APIs.

  • Production. Production may be split into two phases. The first phase is typically about generating a Vertical Slice and/or a Horizontal Slice (if both, it may get split into two phases / milestones). A Horizontal Slice is the whole game world blocked out so the game can be played from start to finish. A Vertical Slice is a small section of the game taken to shippable quality (art assets, audio, target performance, etc.). Both the VS and HS can be used to more accurately estimate the work required to get the whole game scope to shippable quality. The second phase is about making the game to that quality.

“Finding the fun” doesn’t stop during production – at this stage it tends to be evolutionary improvements and changes rather than revolutionary; you do get the occasional radical change that dramatically improves a game, and you just have to deal with it. I’ve known some games undertake radical changes after Alpha that proved genius, but it’s incredibly risky, and not where you want to be making changes like that.

This is also the phase when the team starts finalising decisions on the technology that will be used to ship the game; the tech team will have evaluated 3rd-party technologies against their understanding of the game.

  • Alpha. The goal here is “feature complete” – all game features implemented, with some content still being added, e.g. final models and textures to replace blocked-out areas, sound effects, music, in-game cutscenes, etc. The gameplay features may be buggy, but should be complete.

At this stage, if the game is online and there is significant new technology supporting the release of the game, the team may elect to do a “Technical Test.” These used to be called “Closed Alpha” or “Open Beta,” but the terminology was confusing to the public. Respawn coined the term “Technical Test” for Titanfall 2, which had completely different dedicated-server scaling technology to the original Titanfall. The Technical Test was to see whether the scaling worked well enough to ship the game. It didn’t scale well, but we learned where all the problems were, which allowed us to ensure a smooth launch when we did ship!

  • Beta. All features and content complete; the team (usually a core of the team) is now working on fixing bugs, ready to ship.
  • Ship. Ship the game. If there is no live service planned, there may still be several patches, as players inevitably find bugs once the game ships; if there is a live service, the team will be working on content for that.

GGWP: How do you balance between game design, technical development, and marketing during this phase?

FG: For me, game design trumps everything – if the design team needs to make a change to make the game better (especially if they can demonstrate that need), it has to take priority, and so other stuff has to be deprioritised, or the ship date has to move. We had something like that on Apex Legends – our actual launch date (2019-02-04) was not our original desired date.

Getting awareness of your game is critical, which means collaborating with the marketing team on assets to show the game in the best light possible – that may be screenshots, in-game footage, a pre-release build, etc. That can be tricky when a team is still polishing gameplay and content (and bug fixing). The key is getting alignment on the goals and needs between the development team and marketing – easy to say, and it can be quite challenging to achieve. FOCUS is one of the most important elements of making a game – do a small number of things extremely well, rather than a large number of mediocre things. That focus is key when finalising a game – if the key ingredients are right, it’s amazing how much the quality can be improved in the final few months of Production, as long as you don’t lose that focus.

GGWP: How do you determine the best game engine for your project? What are the key factors in this decision?

FG: There are two key questions for any tech choice (“build vs. buy”), namely: 1) “Does it allow you to ship earlier, with lower risk, and to at least the same quality?”, and 2) “Is it worth the cost?” These two questions are effectively the return-on-investment (ROI).

This leads to a looooong list of more-detailed questions, some of which are:

  • How does adoption change your biggest risks?
  • Is this a competitive advantage or commodity (or commodity soon)?
    • Are you OK with losing control of this piece of tech?
  • Is the gap this technology fills well understood / stable?
    • “Finding the fun” may change the tech requirements (the gap in technology)
  • Is the tech mature / stable?
    • Does it support the platforms you need it to?
    • How many games have shipped with the tech?
    • Is the tech a perfect fit for needs, or a subset, or superset?
      • What if your needs change?
    • Updates & Support – can you influence the direction / features / bug-fixes?
      • If not, can you add the features you need (easily), or take ownership?
  • If the team wants to “write it themselves”, why do they think they can do it better?
    • Narrower scope
    • Future flexibility
    • etc.
  • etc.

There is a lot of nuance around the questions and the answers. Ideally, the answers are objective, not subjective.
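
One way to push the answers towards objective rather than subjective is to turn the question list into a simple weighted scorecard and score each option against it. Below is a minimal sketch of that idea; the criteria, weights and scores are hypothetical placeholders, not values from any real evaluation.

```python
# Hypothetical weighted scorecard for a build-vs-buy evaluation.
# Criteria, weights and scores are illustrative placeholders only.

CRITERIA_WEIGHTS = {
    "ships_earlier": 5,           # does it let us ship sooner?
    "lowers_risk": 5,             # does it reduce our biggest risks?
    "quality_at_least_same": 4,   # can we hit at least the same quality bar?
    "platform_coverage": 3,       # does it support the platforms we need?
    "fit_to_needs": 3,            # perfect fit, subset, or superset of needs?
    "influence_over_roadmap": 2,  # can we drive features / bug fixes?
}

def weighted_score(scores: dict) -> float:
    """Combine 0-10 criterion scores into a single weighted number out of 10."""
    total_weight = sum(CRITERIA_WEIGHTS.values())
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0) for c in CRITERIA_WEIGHTS) / total_weight

if __name__ == "__main__":
    build_in_house = weighted_score({
        "ships_earlier": 3, "lowers_risk": 4, "quality_at_least_same": 8,
        "platform_coverage": 9, "fit_to_needs": 9, "influence_over_roadmap": 10,
    })
    license_third_party = weighted_score({
        "ships_earlier": 9, "lowers_risk": 7, "quality_at_least_same": 7,
        "platform_coverage": 7, "fit_to_needs": 6, "influence_over_roadmap": 3,
    })
    print(f"build in-house:    {build_in_house:.1f} / 10")
    print(f"license 3rd-party: {license_third_party:.1f} / 10")
```

Even a crude scorecard like this forces the team to write down what each criterion is worth before anyone has a favourite answer.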

The larger the tech component, the more difficult it is to answer the questions! Why?

  • Smaller tech usually has the advantage of doing one thing well / really well. This means an evaluation can be focused, and usually data-driven, so objective. It’s also potentially easier to change if the evaluation is wrong. Key, though, is that usually fewer members of the team are affected – potentially only an individual, or the tech team, or the tech team and one other group in the team. Examples include RAD Game Tools’ Oodle and Bink.
  • Larger tech (an engine being the largest) does multiple things to varying degrees of quality. In every engine, decisions / choices have been made. Ultimately these mean compromises, i.e. no engine can handle every game genre flawlessly. When you are developing a prototype these compromises may not be obvious. It’s only when you try to push a game to the limits of the engine (CPU, GPU, memory, streaming, etc.), which is usually when you try to ship, that you feel the pain and truly understand the decisions / choices / compromises that the engine team made.

Note: Some game designs never push a game engine to its limits. If that’s the case, great.

Engine choice affects the workflow of the whole team, and so the decision becomes more subjective. I’ve seen many teams try to mould an engine to work “the way the team is used to working.” That is a world of pain, and I’ve not seen it done successfully (other than by a hard fork of the engine). How is each domain impacted by engine choice now, and by the time you ship? How does it affect the iteration speed (directly linked to quality) of the content creators and engineering teams – where is it faster, where is it slower, and does it support proving out your game now and shipping it later? An example here might be: “Is your design team OK working with visual scripting to iterate on gameplay? Can you ship with visual scripting (is it performant enough)? If you need text-based scripting, what’s the integration cost of doing that every time you upgrade the engine?”

I believe that “if a game team has shipped a game on one engine, it should be really difficult (not impossible) for that team to switch to another engine” – once the team has lived and breathed shipping a game on an engine (felt the pain), the team understands the choices / compromises the engine team made, and so shipping future games on the same engine becomes easier.

GGWP: Can you describe the role of third-party services in your game’s development? How do you choose which ones to integrate?

FG: In addition to the answer above, for 3rd-party services there are also questions such as:

  • How does it support multiple customers?
    • Single-tenant or multi-tenant (affects questions & answers below)?
    • Can I go from multi-tenant to single-tenant smoothly (if needed)?

Note: multi-tenant can be cheaper, but potentially has scaling, security and noisy-neighbour issues (not only can other applications be noisy neighbours to you, but your game can be a noisy neighbour to them).

  • Can it scale to my expected player base?
    • Total lifetime audience
    • Peak simultaneous
    • What are the scaling characteristics?
      • How quickly can it scale up?
        • Critical after an outage (outage could be somewhere else)
      • How does the service degrade with more players (latency)?
      • Dynamic auto-scaling up (and down), or manual intervention?
    • etc.
  • What’s the patch/update model?
    • Does it support zero-down-time?
    • Does it support deployment to a subset of players?
    • How quickly can it be rolled-back if there is an issue?
    • etc.
  • Data & security
    • GDPR
      • What data does the 3rd-party need / retain, and for how long?
      • What support for Subject Access Requests (SAR)?
      • What support for Right To Be Forgotten (RTBF)?
    • What security do they have in place (development, and deployment)?
    • etc.
  • Has the scaling been tested / demonstrated?
    • What are the performance characteristics?
    • What if my game is a runaway success (5x, 10x)?
  • Support model?
    • What is self-serve vs. white-glove?
    • What is the SLA, and what are the escalation paths?
  • Cost modelling (see the sketch after this list):
    • If my game is moderately successful, how does it affect profitability?
    • If my game is a runaway success, how does it affect profitability?

Respawn makes games, not technology. Technology is a side-effect of making games. Respawn developed many backend services for Titanfall themselves, as the technologies they wanted / needed didn’t exist in the market or were not mature. These technologies were later adapted for Titanfall 2 and Apex Legends, as the team had intimate knowledge of their strengths and weaknesses, and switching technology would have meant cutting game features. EA, Respawn’s publisher at the time, had backend services that Respawn could have used, but as an independent developer, Respawn wanted to minimise ties to any one publisher. That all changed when Respawn was purchased by EA in 2017.

GGWP: In your experience, which services are more commonly developed in-house, and which are typically outsourced? What drives these decisions?

FG: There are a lot more backend services that can be licensed than there were in 2013/2014, and which can operate at the scale required for very successful games. The selection criteria are very similar to my answer regarding choosing a game engine above, with emphasis on the following:

  • Are any of the services key differentiators that the team should own?
    • Expected area for rapid iteration / change post-ship?
  • Does the team need operational insights into the service the partner cannot / will not support?

At EA, first-party services are already approved from a security perspective and for legal compliance on data handling, tested at scale, and, critically, interoperable with minimal game glue code. On the other hand, 3rd-party services tend to be slicker, i.e. easier to integrate and configure (their company can thrive or die on this).

GGWP: For multiplayer games, what live operations services need to be in place before launch?

FG: It’s a long list. We didn’t do all of these for the launch of Apex – some weren’t possible at the time, e.g. cross-play and cross-progression were not possible in Feb 2019 when Apex shipped. The key learning is that it’s significantly easier to do the work before you first ship the game to the players.

  • Identity – 1st-party / federated publisher identity
  • Crash reporting
    • Client and any server / services – MTTF and MTBF both important
  • Matchmaking
  • Persistence
  • Anti-cheat (a whole topic in itself)
  • Social/community systems
  • UGC – moderation, reporting (a whole topic in itself)
    • Text chat
    • VoIP
    • etc.
  • Compliance with worldwide regulation (evolving)
    • CVAA for text & voice chat
    • COPPA
    • etc.
  • In-game store (depending on publishing model)
    • Ensure you are compliant with evolving worldwide legislation, e.g. loot boxes are banned in Belgium
  • Server scaling
    • If dedicated servers, auto-scaling (up and down) to match demand
      • For cost reasons, Apex Legends used a hybrid system of bare-metal servers in datacentres that scaled into cloud if demand outstripped bare-metal capacity
    • If listen-servers or peer-to-peer – NAT, server migration, etc.
  • Server configuration / feature kill switches (sketched after this list)
  • Server Update / patching
  • A/B testing capability
  • Client update and patching (inc. stripping future season content / data)
  • Cross-play and cross-progression (persistence)
  • Telemetry – collection and visualisation
    • GDPR support for SAR and RTBF
    • Insights into infrastructure behaviour
      • API calls, latency, DB use, etc.
    • Insights into “quality” of matches
    • Insights into gameplay
      • General – character use, weapon use, health pick-up use, etc.
      • Detecting collusion, smurfing, boosting, etc.
    • Insights into store
    • Insights into engagement, retention

GGWP: How do you plan and prepare for the scalability of multiplayer services?

FG: Test, test and test again at a scale larger than you expect the player counts (concurrent and lifetime) to be! There are two areas you need to consider:

  1. Back-end services – identity, persistence, social, etc.
  2. Gameplay servers – servers on which the gameplay simulation takes place

Background: there are essentially three common architectures for multiplayer games:

  • Listen server – one of the PC / console clients is also acting as the authoritative server. It scales “infinitely” as the player base grows, as the players are providing the servers. There are issues around bandwidth and connection quality (as usually these are running in a player’s home), the number of players that can be supported, NAT, latency, the performance impact on the machine that is acting as both client and server, and migrating the game / server state if the server crashes, the connection is lost, or the server player “pulls the plug.” It’s an anti-cheat nightmare too – the machine that is acting as the authoritative server can potentially have a latency advantage (depending on architecture), can be attacked with a DoS (denial of service) or DDoS (distributed denial of service), and, if a PC, can be hacked by the player.
  • Peer to peer – has many similarities to a listen server. With peer to peer, every client is also a “mini server,” having its own view of what is going on in the world, and every player has a connection to every other player. Peer to peer doesn’t scale to large numbers of players, i.e. more than 8; NAT is worse (due to all players connecting to all other players), and the potential for cheating is much worse too (no one is authoritative).
  • Dedicated server – a server built to run the game simulation in a datacentre or in the cloud, and trusted to be authoritative. All the other listen-server / peer-to-peer problems are minimised. The big issue with dedicated servers is capacity – ensuring you have enough servers running around the world to meet demand. Note: even with this architecture, if you trust the client too much (AT ALL), it will make your anti-cheat, anti-fraud and anti-toxicity work even harder to execute.

Until Titanfall, dedicated servers were hosted in datacentres on bare-metal servers, which had to be ordered many months in advance and had a minimum commitment of 30 days. If you ordered too few for a launch, players would struggle to get into matches, and correcting it could take several weeks (due to the lead time on orders); by the time you corrected it, you’d have frustrated most of the community and they’d have moved on to other games. Order too many servers for launch, and you’re potentially wasting millions of dollars on idle servers. Rinse and repeat on a regular (monthly) basis as you try to scale the capacity to match the active player base.

Titanfall was the first game to launch with auto-scaling for dedicated servers. It used Microsoft’s Thunderhead system. Titanfall spun up dedicated servers “just in time” and shut them down when no longer needed; there was a lag, so it tried to keep a little more capacity than needed, but sometimes players might have to wait a minute or two.

Titanfall 2 moved to a hybrid model, where there were bare-metal servers in datacentres around the world, with cloud capacity added if/when datacentre capacity was exhausted in a given region. Apex Legends launched with this too – a good balance of cost vs. capacity scaling.
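
The hybrid model can be thought of as a simple allocation policy – the sketch below illustrates the general idea, not Respawn’s actual implementation: fill the already-paid-for bare-metal capacity in a region first, spill over into pay-as-you-go cloud servers when it is exhausted, and release the cloud overflow first when scaling back down.

```python
# Illustrative sketch of hybrid bare-metal + cloud allocation for dedicated
# servers (not Respawn's actual system).

from dataclasses import dataclass

@dataclass
class RegionCapacity:
    bare_metal_total: int    # game-server slots on pre-provisioned hardware
    bare_metal_used: int = 0
    cloud_servers: int = 0   # pay-as-you-go overflow servers currently running

def allocate_server(region: RegionCapacity) -> str:
    """Pick where the next match's dedicated server should run."""
    if region.bare_metal_used < region.bare_metal_total:
        region.bare_metal_used += 1
        return "bare-metal"        # already paid for, so use it first
    region.cloud_servers += 1      # overflow: costs money only while running
    return "cloud"

def release_server(region: RegionCapacity, kind: str) -> None:
    """Scale back down, releasing cloud overflow before idling bare metal."""
    if kind == "cloud" and region.cloud_servers > 0:
        region.cloud_servers -= 1
    elif kind == "bare-metal" and region.bare_metal_used > 0:
        region.bare_metal_used -= 1

# Example: a region with 1,000 bare-metal slots absorbs demand up to that point
# at a fixed cost; only demand beyond it incurs (and later releases) cloud cost.
region = RegionCapacity(bare_metal_total=1_000)
placements = [allocate_server(region) for _ in range(1_250)]
print(placements.count("bare-metal"), "bare-metal,", placements.count("cloud"), "cloud")
```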

For testing infrastructure, “thin clients” (clients that emulate player interactions to get into / out of game) are a great approach, but they have significant maintenance overheads. Alternatively, there are several API testing frameworks that can achieve similar results. Take all test results with “a pinch of salt” – they are only ever a best guess of what the player will actually do in the game, so they can be wildly out – and player interactions can change dramatically with a single new game feature!
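
In practice a thin client is just code that walks the backend APIs the way a real player would – log in, matchmake, “play,” leave – many thousands of times in parallel. A minimal sketch, assuming hypothetical endpoints and the aiohttp library (the endpoints, flow and timings are not from the interview):

```python
# Minimal "thin client" load-test sketch. Endpoints, payloads and timings are
# hypothetical; a real thin client mirrors the game's actual API surface,
# which is where the maintenance overhead comes from.

import asyncio
import random

import aiohttp  # assumption: aiohttp is available for async HTTP

BASE_URL = "https://backend.example-game.com"   # hypothetical backend

async def simulated_player(session: aiohttp.ClientSession, player_id: int) -> None:
    """Walk one emulated player through login -> matchmake -> 'play' -> leave."""
    for endpoint in ("login", "matchmake"):
        async with session.post(f"{BASE_URL}/{endpoint}", json={"player": player_id}) as resp:
            resp.raise_for_status()
    await asyncio.sleep(random.uniform(60, 300))  # pretend to play a match
    async with session.post(f"{BASE_URL}/leave", json={"player": player_id}) as resp:
        resp.raise_for_status()

async def run_load_test(concurrent_players: int) -> None:
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(simulated_player(session, i) for i in range(concurrent_players)))

if __name__ == "__main__":
    asyncio.run(run_load_test(concurrent_players=10_000))
```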

For testing dedicated servers, the ultimate is clients running bots through the normal player flows to connect and play on the dedicated servers (server code remains pure) – that’s hard to create and maintain. The dedicated server running bots in place of players gets you close (server code is contaminated), and is significantly easier to do; otherwise, it is humans playing on dedicated servers whenever possible to stress-test them – the more bots / humans you can throw at this, the better.

I remember that on Apex Legends, it was incredibly difficult for anyone to forecast the peak simultaneous users (PSU, also known as PCU or peak concurrent users) that the infrastructure needed to support – EA didn’t have a free-to-play PC/console title, and our marketing strategy was “to ensure players had Apex in their hands when we started talking about it.” We were working to ensure the infrastructure could handle a certain number. In October 2018 I had a meeting with Ken Moss, the CTO of EA at the time. Ken, Laura Miele and Andrew Wilson had all played Apex recently and had an amazing time. Ken thought the PSU forecast was far too low. I asked Ken what number he thought we should work to. Ken said 6 million, based on Fortnite regularly hitting 8M+ including mobile – better to overspend than to fail at a low number. A large part of our focus after that was trying to ensure we could scale to 6M (we did have to make changes) – and it’s a good thing we did, as we were comfortably over 2M on the first launch weekend, and our original architecture would not have supported that.

GGWP: What strategies do you employ for content moderation in the pre-launch phase?

FG: On Apex our focus was on ensuring anti-cheat was robust. We did nothing on text chat and voice chat (other than supporting CVAA). That was a mix of having no metrics to know whether text and voice chat were going to be toxic, and the fact that we had the Ping system, which was popular in internal playtests and meant players didn’t need to use text chat or voice chat.

I would certainly ensure that UGC (text chat, voice chat, etc.) moderation is built in ahead of launch for future games.

GGWP: How do you ensure the stability and security of the game infrastructure during pre-launch testing phases, especially with large-scale multiplayer games?

FG: On the security side, EA has an amazing security team, a healthy security community, and an internal security conference each year. Before any title ships at EA, the security team will review infrastructure and code, and can provide penetration testing, regular code scans, secure storage for credentials, and code reviews of critical systems; it also has a response team in case of incidents.

Security awareness is something that has to be reinforced with the team continually (“never trust the client”), and features and code / systems need to be reviewed regularly. Once your game ships, it will be under constant attack / probing for weaknesses by a very small number of your players (every game is) – people looking to cheat, take advantage of bugs / exploits, data-mine, or commit fraud.
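
As a deliberately simplified illustration of “never trust the client”: an authoritative dedicated server should not accept a client-reported position at face value, but re-check it against what its own simulation says is possible. The movement cap and names below are hypothetical.

```python
# Sketch of server-side movement validation ("never trust the client").
# The speed cap and function names are illustrative.

import math

MAX_SPEED_UNITS_PER_SEC = 7.5   # hypothetical server-side movement cap

def validate_move(prev_pos: tuple, claimed_pos: tuple, dt_seconds: float) -> tuple:
    """Clamp a client-claimed position to what the server believes is possible."""
    dx, dy, dz = (c - p for c, p in zip(claimed_pos, prev_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    max_dist = MAX_SPEED_UNITS_PER_SEC * dt_seconds
    if dist <= max_dist:
        return claimed_pos                 # plausible: accept as-is
    scale = max_dist / dist                # implausible: clamp the move, and
    return (prev_pos[0] + dx * scale,      # flag the player for anti-cheat review
            prev_pos[1] + dy * scale,
            prev_pos[2] + dz * scale)
```

The same mindset applies to every client-supplied value – damage dealt, currency earned, items owned – the server recomputes or validates it rather than believing the client.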

On the stability front: continuous integration, test infrastructure that supports automation with “thin clients,” bots on clients, bots on servers, a large QA team, regular internal playtests, great crash reporting, etc.

GGWP: Fred, thank you for your insights and guidance for developers aiming to create memorable and successful games.

FG: No problem!


About Fred Gill

Fred Gill – LinkedIn

Fred has been a professional in the video games industry for over 40 years. His journey began at the age of 17 when he sold his first game. After graduating in 1988, Fred co-founded a games company with four friends, leading it to grow to a team of 100 people over the next 15 years. The company was eventually acquired in 1997. When the parent company collapsed in 2002, Fred founded another small development studio, this time as CEO. However, he found that being a CEO wasn’t enjoyable, so he transitioned back to technical direction within an EA studio. The team was responsible for creating the single-player campaign (complementing DICE’s multiplayer) in Battlefield: Modern Combat in 11 months.

Fred later moved to Swordfish Studios in 2006, returning to EA to join a small division called EA Partners. In this role, he collaborated with independent studios to bring their products to market with EA as the publisher. Some of the titles he shipped include Crysis 2, Syndicate, Crysis 3, A Way Out, Unravel, Titanfall, and Titanfall 2 (among others). After EA acquired Respawn in 2017, Fred joined Respawn in 2018 as Studio Technical Director, eventually becoming VP and Head of Technology. His contributions included helping ship Apex Legends and being the Franchise Technical Director until Season #18. Fred retired in August 2023, returning to his roots by creating games for himself, with no expectation they will be commercially successful. Fred is also helping two charities, one of which he founded (Antidote Gamers) to help combat toxicity in video gaming.