by | Jul 16, 2018 | Dev Blog

The backbone of every multiplayer game is making sure the state is the same on every client. So network communication between the server and the connected clients is pretty key. If you mess that up, the whole experience falls short.

Getting the data out and making sure all players see the same thing was the first thing I built when I started working on Deepfield. A few months down the road, the project has grown, and even though that backbone is doing a great job and holding its own, it’s starting to show its age.

The networking system in an RTS needs the following three features to do its job:

  1. Transparently transfer information between multiple simulations
  2. Keep track of and correctly associate client objects with their connections
  3. Get the data to the clients as fast as possible so the game feels responsive

UDP or “User Datagram Protocol” is one of the Internet protocols that allows computers to send messages to each other. It’s a bare-bones protocol and doesn’t guarantee that the data is received in the right order, on time or at all. You send the data out and hope for the best. If you need to know for sure that the data was delivered, you have to build that reliability into your engine yourself.
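To make the “fire and forget” nature concrete, here’s a minimal Python sketch of a UDP exchange (the message and port handling are my own illustration, not Deepfield’s code). Note there’s no handshake: the client just throws the datagram at the address.

```python
import socket

def udp_demo():
    # Server side: a datagram socket bound to loopback, OS picks a free port.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    addr = server.getsockname()

    # Client side: no connect, no handshake, no delivery guarantee.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"move unit 7", addr)

    # On loopback this arrives; on a real network it could be lost,
    # duplicated, or reordered and nobody would tell you.
    data, _ = server.recvfrom(1024)
    server.close()
    client.close()
    return data
```

On the loopback interface the datagram is effectively always delivered, which is exactly why UDP bugs tend to only show up once real networks get involved.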

TCP or “Transmission Control Protocol” is another protocol, however it differs because it concerns itself with the reliability of the communication. If you use TCP you know the data will arrive at its destination, in the order you sent it and completely intact.
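The same exchange over TCP looks like this (again a minimal sketch, not Deepfield’s code). The difference is the explicit connection: the OS handles ordering, retransmission and integrity for you.

```python
import socket

def tcp_demo():
    # Server side: a stream socket that must listen for connections.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    addr = server.getsockname()

    # Client side: an explicit handshake happens before any data moves.
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(addr)
    conn, _ = server.accept()

    # sendall keeps writing until every byte is queued; the stack
    # retransmits lost segments and delivers bytes in order.
    client.sendall(b"balance update")
    data = conn.recv(1024)

    for s in (client, conn, server):
        s.close()
    return data
```

All of that bookkeeping is the overhead UDP skips, and it’s where TCP’s latency cost comes from.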

So, if UDP and TCP both get the data there and TCP is more reliable, why do you need UDP at all? It seems silly to take the risk and potentially lose data.

UDP is fast. Very fast. There’s no overhead for managing the connection or making sure the data has been received. UDP just throws the packets out there and moves onto the next packet.

The very first implementation of Deepfield was a browser game and was forced to use TCP for network communication. As of this writing, browsers don’t do UDP. As the game grew, the delays in transmission were causing enough trouble that I needed to abandon the browser version. The current version of Deepfield, now using native code, has access to everything the computer can offer and as a result uses UDP to get the data out there. That approach worked in the early stages, but I started having problems with out-of-order packets causing jitter in the unit movement. I wrote some sanity checks into the network system, which smoothed that out, but that wasn’t the only problem.
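The sanity check I mean amounts to stamping each state update with a sequence number and dropping anything stale instead of applying it. A minimal sketch of the idea in Python (the class and field names are my own illustration, not Deepfield’s actual code):

```python
class UnitStateChannel:
    """Drops out-of-order state updates instead of applying them."""

    def __init__(self):
        self.latest_seq = -1   # highest sequence number applied so far
        self.position = None   # last accepted unit position

    def receive(self, seq, position):
        # A stale or duplicate packet carries a sequence number we've
        # already passed; applying it would make the unit jump backwards.
        if seq <= self.latest_seq:
            return False
        self.latest_seq = seq
        self.position = position
        return True
```

So if updates arrive as 1, 3, 2, the late packet 2 is simply discarded and the unit stays at the position from update 3, which is what removes the jitter.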

All run-time data is provided by the server, so unit specifications and balance settings can be tweaked quickly without the need to update the clients. These payloads can be pretty big and I was having trouble fitting the updates into the small datagram size UDP gives you (in practice you’re limited to roughly the network MTU before fragmentation kicks in). Packets were getting lost, so when the server told the client to change the viewport or the user gave an order to their units, sometimes it just wouldn’t happen.

More and more issues started creeping in. I had planned to build the required checks to mitigate these problems, but mid-way through a build there are always things that are more important.

The thing is, TCP solves all of these issues without me needing to re-invent the wheel. So why not use it instead? Because when I’ve done that in the past, the transmission delays ruined the responsiveness of the game to the point of it being unplayable.

Last week I was working on a feature that really made the shortcomings of UDP shine. It was breaking the game, and I needed to solve these network problems before I could continue. Adding reliability checks and better connection management would take a week to build and another week to work out the bugs and polish, before I could get back to the feature.

What if I used BOTH protocols, leaning on each where its strengths applied? TCP already handles data reliability and large data sizes! Why re-invent the wheel? It’s conventional wisdom that you can’t use TCP and UDP at the same time because the network is geared to prioritise TCP packets over UDP. As I looked into it, there wasn’t much substantial evidence to support this idea, and it was really only an issue when the network was saturated. The way I figure it, if the network is saturated enough to drop a meaningful number of UDP packets, the users would be having bigger problems.

I decided to save some time and reworked the network system to utilize both protocols. Considering that TCP would only be used occasionally to send critical data to and from the server, there was no way there would be enough TCP packets from the application to drown out the UDP packets. All sync data and incidental messages would still use UDP. The engine could specify whether it wanted a particular message sent via UDP or TCP, as required.
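The routing decision itself is tiny. A hypothetical Python sketch of how an engine might pick a transport per message type (the message names are illustrative, not Deepfield’s actual protocol):

```python
# Messages that must arrive, in full and in order, ride TCP.
# These names are made up for the example.
CRITICAL = {"unit_specs", "balance_settings", "viewport_change", "unit_order"}

def pick_transport(message_type: str) -> str:
    """Route critical messages over TCP; keep high-frequency sync on UDP."""
    return "tcp" if message_type in CRITICAL else "udp"
```

Because the critical messages are rare (a balance payload at match start, the occasional order), the TCP traffic stays a trickle next to the constant UDP sync stream, which is why the prioritisation worry never materialises.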

An added upside of this change is that connection management is no longer handled by the engine; the OS’s TCP stack takes care of it. That simplified the user objects in the engine and solved some bugs I was having with them.

It took me a little over two days to add TCP support to the engine and another two to smooth out the bugs introduced by the change, leaving me an extra day to get the blocked feature over the line and into a state that could be released for initial testing.

I’ll keep working on that feature this week and I’ll update you guys on what that’s all about next time.