Goldsrc - Low Level Networking Details
Description of the low-level innerworkings of GoldSrc networking.
The GoldSrc engine has a comprehensive networking system that allows the game to host multiplayer servers and to communicate with many connected clients at once. The low-level networking code that handles all of this is written in pure C—just like the majority of the engine—and it uses the POSIX socket API for communication: recvfrom, sendto, bind, etc.
This article tries to describe in detail the inner workings of the low-level implementation of GoldSrc networking, but it does not go into higher-level topics such as how the data in packets is further processed.
Socket Creation and Initialization¶
During the initialization phase, two important data structures are created:
- sizebuf_t net_message: This is the network message that we receive/send and that we work with. This data structure is used all across the engine, together with functions like MSG_WriteLong that fill this buffer with data. It is filled in the function NET_GetPacket and later processed (on the client side), and it is sent by NET_SendPacket (on the server side).
- sizebuf_t in_message: The data received from the recvfrom function is copied over to this data structure. It is, however, used only internally inside net_ws.c. From this data structure, net_message is then filled with fresh data.
These two data structures provide the fundamentals of the networking, since they hold the raw data transmitted over the wire.
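For reference, the layout of sizebuf_t and a minimal write helper can be sketched as follows. The field names follow the public Quake source that GoldSrc descends from; treat any difference from the actual GoldSrc headers as an assumption on my part:

```c
#include <string.h>

typedef int qboolean;
typedef unsigned char byte;

// Cursor-style buffer used for both reading and writing network data
// (layout as in the Quake lineage; GoldSrc may differ slightly).
typedef struct sizebuf_s
{
    qboolean allowoverflow;  // if false, overflowing is a fatal error
    qboolean overflowed;     // set when cursize would exceed maxsize
    byte     *data;          // backing storage
    int      maxsize;        // capacity of data in bytes
    int      cursize;        // bytes currently used
} sizebuf_t;

// Minimal sketch of MSG_WriteLong: append a 32-bit little-endian integer.
void MSG_WriteLong(sizebuf_t *sb, int value)
{
    if (sb->cursize + 4 > sb->maxsize)
    {
        sb->overflowed = 1;  // the real engine errors out unless allowoverflow is set
        return;
    }
    byte *out = sb->data + sb->cursize;
    out[0] = (byte)(value & 0xff);
    out[1] = (byte)((value >> 8) & 0xff);
    out[2] = (byte)((value >> 16) & 0xff);
    out[3] = (byte)((value >> 24) & 0xff);
    sb->cursize += 4;
}
```

Writing the -1 connectionless marker, for instance, would append four 0xff bytes and advance cursize by 4.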
Later in initialization, the sockets are created by another function from net_ws.c, NET_Config. This function is called early in the connection process. Internally it creates and sets up the sockets on either the default or user-specified ports. The default ports that Valve uses for the client and the server are 27005 and 27015 respectively.1
If you connect to a server, the client port is the port on your machine that the server sends data to. The server port, on the other hand, is the port on the server machine to which you and the other players send data.
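The socket setup itself boils down to creating a UDP socket and binding it to one of those ports. This is a hedged sketch of what the engine's NET_IPSocket does; the helper name here is illustrative, and the real function additionally makes the socket non-blocking and enables broadcast:

```c
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Illustrative sketch of NET_IPSocket-style setup: create a UDP socket
// and bind it to the given port (0 lets the OS pick an ephemeral one).
int NET_OpenUDPSocket(int port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;            // listen on all local interfaces
    addr.sin_port = htons((unsigned short)port);  // e.g. 27015 for a server

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
    {
        close(fd);
        return -1;
    }
    return fd;
}
```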
Data Transmission Flow¶
The data flow is straightforward. Data is received from and transmitted to the socket inside net_ws.c via NET_GetLong and NET_SendLong, which get/send a "long" chunk of data (a packet). These functions are the fundamental pieces of the network communication, and the code is shared between client and server, meaning it's used by both parties.
Higher up in the call chain, the callers of these functions (still shared between the client and the server) are functions from net_chan.c. This file sits higher in the application layer: it structures the data, adding sequence numbers, handling file fragments, and so on. The most important functions here are Netchan_Transmit (for sending) and Netchan_Process (for receiving). The netchan code is used for regular packets; these carry, for example, server messages (data sent from the server to the client), which are processed on the client. On the other hand, there are OOB packets, which are used e.g. during connection setup.
The netchan functions are then called internally by the individual parties in client/server code, e.g. in sv_main.c for the server and cl_main.c for the client.
Packet Types¶
There are two main types of packets used in GoldSrc:
- Out-of-band (OOB) packets, also called connectionless packets. These are identified by a 0xffffffff marker as the first u32 integer in the packet data.
- Netchan packets. These packets are sent/received using the functions Netchan_Transmit and Netchan_Process, and they are used to carry structured data in the form of files or messages (client and server ones - clc_* and svc_*).
So for instance, consider this piece of code on the client side:
// cl_main.c
while (CL_GetMessage())
{
if (*(int*)net_message.data == -1)
{
// OOB "connectionless" packet
continue;
}
// else the packet must be a "netchan packet"
if (Netchan_Process())
{
CL_ParseServerMessage();
continue;
}
}
As you can see, the data that we work with here are the aforementioned OOB and netchan packets. The function CL_GetMessage essentially just calls NET_GetPacket, which in turn calls NET_GetLong and so on. This is the "message pump" type of code, where all of the incoming data from the server is processed.
OOB Packets¶
OOB packets are connectionless packets, through which messages such as S2C_CONNECTION or S2C_CHALLENGE sent by the server are processed. These two in particular are used in the connection process.
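An OOB packet is nothing more than the 0xffffffff marker followed by a plain-text command (the engine builds these in Netchan_OutOfBand-style helpers). A hedged sketch of assembling one; the helper name is mine, not the engine's:

```c
#include <string.h>

// Build an out-of-band packet: a 4-byte -1 marker, then the command text.
// Returns the total packet length written into `out`, or -1 on overflow.
int BuildOOBPacket(unsigned char *out, int outsize, const char *cmd)
{
    int len = (int)strlen(cmd);
    if (4 + len > outsize)
        return -1;

    out[0] = out[1] = out[2] = out[3] = 0xff;  // the -1 "connectionless" mark
    memcpy(out + 4, cmd, (size_t)len);
    return 4 + len;
}
```

A client's challenge request during connection setup would be built this way, with the command string as the payload.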
Netchan Packets¶
Netchan packets are designed to be reliable. The netchan code implements a sequence-acknowledgment scheme that provides a reliable connection. This type of connection is used e.g. for player movement (the clc_move message).
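Each netchan packet starts with a pair of 32-bit sequence words. The exact bit layout below is an assumption based on the Quake lineage (the top bit of each word flags reliable data, and GoldSrc uses a second bit of the outgoing sequence to flag fragment data); treat it as a sketch, not the engine's definitive format:

```c
#include <stdint.h>

// Assumed flag bits in the netchan sequence words (Quake-lineage layout).
#define SEQ_RELIABLE_BIT (1u << 31)
#define SEQ_FRAGMENT_BIT (1u << 30)

// Two-word header preceding every netchan payload.
typedef struct
{
    uint32_t sequence;      // outgoing sequence number + flags
    uint32_t sequence_ack;  // last sequence received from the remote side
} netchan_header_t;

// Pack a raw sequence number together with its flags.
static uint32_t PackSequence(uint32_t seq, int reliable, int fragmented)
{
    uint32_t w = seq & ~(SEQ_RELIABLE_BIT | SEQ_FRAGMENT_BIT);
    if (reliable)
        w |= SEQ_RELIABLE_BIT;
    if (fragmented)
        w |= SEQ_FRAGMENT_BIT;
    return w;
}
```

The receiver strips the flag bits to recover the sequence number, and echoes it back in sequence_ack so the sender knows which reliable data arrived.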
The types of messages here are (on the client side) all the svc_* messages and all user messages. Server messages are sent from the server's engine module, while user messages are sent from the game server DLL. These may include messages like svc_time, which tells the client what the server time is so that it can sync with the server. Each of these svc_ messages has a parser function, CL_Parse_Time for example in this case.
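The mapping from message id to parser function can be pictured as a dispatch table. The opcodes and handler bodies below are illustrative placeholders (svc_time being opcode 7 is an assumption carried over from the Quake protocol), not the engine's exact table:

```c
#include <stddef.h>

// Hypothetical dispatch table mapping svc_* ids to parser functions.
typedef void (*pfnParse_t)(void);

typedef struct
{
    int        opcode;    // svc_* message id read from the stream
    pfnParse_t pfnParse;  // handler that consumes the message body
} svc_func_t;

static float g_servertime;

static void CL_Parse_Nop(void)  { /* nothing to do */ }
static void CL_Parse_Time(void) { g_servertime = 42.0f; /* would read a float from net_message */ }

static svc_func_t cl_parsefuncs[] =
{
    { 0, CL_Parse_Nop },
    { 7, CL_Parse_Time },  // svc_time opcode value assumed from Quake
};

// Dispatch one opcode to its handler; returns 1 if a handler was found.
int CL_DispatchMessage(int opcode)
{
    for (size_t i = 0; i < sizeof(cl_parsefuncs) / sizeof(cl_parsefuncs[0]); i++)
    {
        if (cl_parsefuncs[i].opcode == opcode)
        {
            cl_parsefuncs[i].pfnParse();
            return 1;
        }
    }
    return 0;  // unknown message: the real engine treats this as an error
}
```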
The Raw Network Payload¶
Every piece of data sent through the socket eventually goes through the NET_SendPacket function, which, depending on the type, either sends the packet to the receiver or loops it back. Loopback occurs in singleplayer, where the server and the client run on the same machine, so the data never leaves the local machine.
Packet Splitting¶
GoldSrc has a built-in mechanism for splitting packets that are too large to be transmitted in one piece (e.g. files or simply large packets). Such a packet is referred to as a split packet in GoldSrc terminology.
In loopback mode, the data is just sent back to us without any latency, since it all happens on the same machine. There is also no fragmentation, so there is no need for splitting. However, as soon as we send the data over the wire (e.g. to the internet) through our router, it would be fragmented and reassembled at the receiving end, which increases latency and reduces overall performance. This means that such packets need to be split up into more manageable pieces, the aforementioned split packets.
Splitting is usually initiated only by the server, because it sends data to many clients at once. From my measurements, the client hardly ever reaches a hundred bytes of data sent to the server per single transmission. On the other hand, when the server sends files to us, it needs to split up the packets, which are then reassembled on the client.
Whether a packet is a split packet or a regular one is checked in NET_QueuePacket, where recvfrom is called. That function essentially just fills the in_message buffer, which in turn fills the net_message buffer. The first uint32 of the buffer received from recvfrom is checked: if it is -2, it's a split packet, otherwise a regular one. Split packets need separate handling logic because they are, well, split up.
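The -2 check and the fragment header can be sketched as follows. The SPLITPACKET layout (a shared sequence number plus one byte packing the fragment index and total count into nibbles) matches what community reimplementations describe, but take it as an assumption rather than the canonical definition:

```c
#include <stdint.h>
#include <string.h>

#define NET_HEADER_FLAG_SPLITPACKET (-2)

// Assumed split-packet header prepended to each fragment.
typedef struct
{
    int32_t netID;           // -2 marks a split packet
    int32_t sequenceNumber;  // identical for every fragment of one payload
    uint8_t packetID;        // (fragment index << 4) | total fragment count
} SPLITPACKET;

// Returns 1 when the received buffer starts with the split-packet marker.
int NET_IsSplitPacket(const unsigned char *buf, int len)
{
    int32_t netID;
    if (len < (int)sizeof(int32_t))
        return 0;
    memcpy(&netID, buf, sizeof(netID));  // memcpy avoids unaligned reads
    return netID == NET_HEADER_FLAG_SPLITPACKET;
}
```

A regular packet (or the -1 connectionless marker) fails this check and is handed straight to the normal receive path.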
I mention the packet size in GoldSrc in the next section.
Packet Size in GoldSrc¶
As previously stated, in loopback the data is sent in its raw form. However, as soon as we are talking to someone over the internet (say we are the server), we need to worry about the packet size.
The Maximum Transmission Unit (MTU), or the Maximal Routeable Packet (MRP) as it is referred to in GoldSrc, over UDP without the need for fragmentation is typically around 1500 bytes2. In GoldSrc, the MRP size is hardcoded to 1400 bytes. The minimal routeable packet, on the other hand, is 16 bytes:
// net.h
#define MIN_ROUTEABLE_PACKET 16
#define MAX_ROUTEABLE_PACKET 1400
If you are familiar with the UDP protocol, you may object that the theoretical maximum size of a UDP packet is 65,535 bytes (2^16), not 1400 bytes. While this is true, in practice a packet of that size cannot be sent without fragmentation3. The largest packet that can be sent over UDP without fragmentation is 1472 bytes. That is because:
$$ 1500 \text{ bytes} - 20 \text{ bytes (IP header)} - 8 \text{ bytes (UDP header)} = 1472 \text{ bytes} $$
GoldSrc therefore defaults to 1400 bytes for performance reasons: sending a full 65 KB UDP packet would be highly inefficient and prone to fragmentation issues. Using a smaller packet size improves delivery reliability and minimizes packet loss.
More information can be found here:
- For the structure of the IPv4 header, see RFC791 section 3.1.
- For UDP, see RFC768.
Simulated Packet Lagging¶
In order to test the prediction code and other simulation code on the client side, Valve developed a "fake lag" system in their networking stack. It is, of course, implemented in the net_ws.c file together with the other core functionality.
Before we talk about the implementation details, let's first discuss some common indicators of a poor connection.
Packet Loss¶
Packet loss occurs when one or more data packets fail to reach their destination across a network, i.e. when we lose incoming data from a server. This can happen for several reasons, and since packet loss can cause noticeable issues, the engine needs to adapt to such situations by using prediction and other mechanisms. This goes along with how the whole environment is kept "smooth" by using interpolation, even when the client suffers high packet loss or just a higher ping. I will not go into detail on how the engine predicts ahead to make the gameplay smoother, or how it interpolates some of the data.
Packet loss can introduce rubberbanding, forgotten inputs from clients, and overall desynchronization between server and client state. GoldSrc handles packet loss primarily through netchan packets (net_chan.c), which include sequence numbers and acknowledgement (ACK) mechanisms.
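The sequence numbers make loss directly observable: if the incoming sequence jumps by more than one past the last one we saw, the packets in between were dropped (or reordered). A minimal sketch of this bookkeeping; the function name is mine:

```c
// Count packets lost between the last received sequence number and a
// newly arrived one. Duplicates and out-of-order arrivals count as zero
// here; the real netchan code simply discards those.
int NET_CountDropped(int last_received_seq, int incoming_seq)
{
    if (incoming_seq <= last_received_seq)
        return 0;
    return incoming_seq - last_received_seq - 1;
}
```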
Packet Choke¶
Packet choke occurs when the game deliberately holds back packets from being sent, usually because of bandwidth limits or rate settings. It differs from packet loss: with packet loss, the game sends a packet but it never reaches the destination; with choke, the packet never even leaves the client, because the game engine decides it would exceed the allowed data rate.
In GoldSrc, this is typically controlled by cvars such as rate, cl_cmdrate, and cl_updaterate. If the rate is too low, the client won't send packets to the server as frequently, even if the player is making a lot of inputs or there's a lot of data to send. This is the engine's way of throttling the network to avoid flooding the connection.
Usually, lowering the rate on the client does not make much sense. When the rate is set too low, the client isn't able to receive all the data the server wants to send, causing packet choke on the client side. This means the client misses out on important updates, which degrades the gameplay experience. The only situation in which a client would want to lower their rate is when their packet loss is high, meaning their internet connection can no longer keep up and packets are being lost.
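Rate throttling of this kind is commonly implemented with a "clear time" scheme, which is how the netchan's Netchan_CanPacket-style check works; the sketch below is my reconstruction of that idea, not verbatim engine code:

```c
// Clear-time rate throttling sketch: after sending `size` bytes at
// `rate` bytes/sec, the channel counts as busy (choked) until enough
// wall-clock time has passed to "drain" those bytes.
typedef struct
{
    double cleartime;  // absolute time at which the channel is free again
} chan_rate_t;

// Can we send right now, or would we exceed the configured rate?
int Chan_CanSend(const chan_rate_t *ch, double realtime)
{
    return ch->cleartime < realtime;
}

// Account for a packet that was just sent.
void Chan_NotePacketSent(chan_rate_t *ch, double realtime, int size, double rate)
{
    if (ch->cleartime < realtime)
        ch->cleartime = realtime;          // channel was idle: restart from now
    ch->cleartime += (double)size / rate;  // seconds needed to drain `size` bytes
}
```

With rate set low, cleartime advances far into the future after each packet, so subsequent sends get choked until it catches up.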
Fakelag¶
The game has fakelag and fakeloss cvars that simulate these scenarios. Fake lag is created by delivering packets N ms late, basically acting as if we had a high ping. Fakeloss is the percentage of packets to drop upon receive.
The lagging is done in NET_QueuePacket, which in turn calls NET_LagPacket, which keeps track of the lagged packets, and so on. It also randomly drops packets if fakeloss is set.
Note, however, that all of this applies only to packets being received, not to packets being sent.
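The mechanism can be sketched as a small timestamped FIFO: incoming packets are either randomly dropped (fakeloss) or queued, and only released once they have aged by the fakelag delay. This is my reconstruction of the idea, with illustrative names and a fixed-size queue:

```c
#include <stdlib.h>

#define MAX_LAGGED 64

// One stashed packet; the payload itself is omitted in this sketch.
typedef struct
{
    double receive_time;  // when the packet actually arrived
    int    size;          // payload size in bytes
} lagged_packet_t;

typedef struct
{
    lagged_packet_t queue[MAX_LAGGED];
    int count;
} lag_queue_t;

// Simulated receive: returns 0 if the packet was "lost", 1 if queued.
int Lag_Enqueue(lag_queue_t *q, double now, int size, double fakeloss_pct)
{
    if (fakeloss_pct > 0.0 && (rand() % 100) < (int)fakeloss_pct)
        return 0;  // simulated loss: drop the packet on receive
    if (q->count >= MAX_LAGGED)
        return 0;
    q->queue[q->count].receive_time = now;
    q->queue[q->count].size = size;
    q->count++;
    return 1;
}

// Release the oldest packet once it has aged `fakelag_ms`; returns its
// size, or -1 if nothing is old enough to hand to the engine yet.
int Lag_Dequeue(lag_queue_t *q, double now, double fakelag_ms)
{
    if (q->count == 0 || now - q->queue[0].receive_time < fakelag_ms / 1000.0)
        return -1;
    int size = q->queue[0].size;
    q->count--;
    for (int i = 0; i < q->count; i++)  // shift the FIFO down
        q->queue[i] = q->queue[i + 1];
    return size;
}
```

With fakelag set to 100, every received packet sits in the queue for at least 100 ms before the rest of the engine sees it, mimicking a 100 ms-worse connection.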
Threaded vs Synchronous Networking¶
The engine uses two modes of networking, in terms of synchronization and how it receives data via the socket:
- Through a separate thread, therefore asynchronously.
- Through the main thread, therefore synchronously.
The default in the current Steam version of GoldSrc is the synchronous mode; however, it can be switched to the asynchronous mode using the launch option -netthread.
When working with sockets in C, they are blocking by default. This means that when you call recv or recvfrom, the call blocks the calling thread until some data arrives. In GoldSrc, this is obviously not the case, since the frame would otherwise halt until data was received. That is because when the socket is created inside NET_IPSocket, it is set to be non-blocking:
qboolean _true = TRUE;
// FIONBIO toggles non-blocking mode on/off depending on the last parameter (_true).
ioctlsocket(newsocket, FIONBIO, (unsigned long*)&_true);
Why Valve decided to stick with the main-thread version is unknown. The threaded option is also only available on Linux, and in the 2007 version of the engine it was dropped completely.
Also worth mentioning is that the threaded code brings a lot of additional machinery with it, such as message queues, which I don't fully understand.
Notes
- Valve has officially registered ports from 27000 through 27100 for their games for UDP communication. See https://support.steampowered.com/kb_article.php?ref=8571-GLVN-8711.
- See this stackoverflow question. This is also related to packet fragmentation: How is the MTU is 65535 in UDP but ethernet does not allow frame size more than 1500 bytes.
- "Any IP datagram can be fragmented if it is larger than the MTU."—Can UDP packet be fragmented to several smaller ones