Demo Delay

Demo delay is a phenomenon where, if you autorecord demos through loads (whether via the base game's autorecording system or sar_autorecord in SAR), you automatically lose between 0.1 and 0.4 seconds of time at the start of the level. This applies both to the CM timer and to the SAR speedrun timer.

Demo delay only affects singleplayer.

Removing Demo Delay

Currently, demos for fullgame runs (or any other category on speedrun.com) are required to use autorecording, and therefore include demo delay. However, in challenge mode (CM), you are allowed to record your demos in such a way that demo delay is eliminated. Doing so is strongly encouraged, especially when playing at a high level - many, if not all, records will be impossible to beat with demo delay!

To do this, you should not use autorecording in CM. Instead, you should start recording after the map load. This can be done manually; for instance, some runners bind their W key to something like +forward; record demoname to make sure a demo starts recording as soon as they start moving. However, if you have SAR installed, a much easier solution is the sar_record_at cvar. If sar_record_at 0 is set, SAR will automatically start recording a demo on the first tick you're loaded into a map; the name of this demo can be controlled through the sar_record_at_demo_name cvar.

Note that when using one of these solutions in CM, it's important to prevent autorecording by putting the stop command in your reset bind.

SLA

Demo delay can have a few useful effects in SLA. The main one is that it seems to make save-loads clip you much further; this applies both to voidclipping and to clipping through objects you're stuck in. If for any reason you don't wish to record demos, the effect of demo delay can be simulated by running cl_localnetworkbackdoor 0; however, this may not be legal in runs - check with a moderator first.

Technical Explanation

When you reach signon state 5 (SIGNONSTATE_SPAWN), the server sends the client a full update, as well as a NET_Tick message to sync the tick, ready for the full connection. The full update is composed of quite a few messages, but the two we care about are SVC_ClassInfo and SVC_PacketEntities. It's important to note that the NET_Tick message is sent after the full entity update, and that these packets are all sent over a reliable channel, so their order is preserved.
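
To make the ordering concrete, here is a minimal sketch of the handoff. The message names are the engine's; the queue standing in for the reliable channel, and everything else, is illustrative:

    #include <cstdio>
    #include <queue>
    #include <string>

    int main() {
        std::queue<std::string> reliableChannel; // reliable = order preserved

        // The full update contains many messages; the two we care about:
        reliableChannel.push("SVC_ClassInfo");
        reliableChannel.push("SVC_PacketEntities"); // full entity snapshot

        // Sent after the entity data, so necessarily processed after it:
        reliableChannel.push("NET_Tick");

        while (!reliableChannel.empty()) {
            std::printf("client processes: %s\n", reliableChannel.front().c_str());
            reliableChannel.pop();
        }
    }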

After the client receives the full update and the NET_Tick message, it will switch to signon state 6 (SIGNONSTATE_FULL). A singleplayer server will only simulate while a client is fully connected, so this is the point where simulation begins.

Now, remember how both packets are sent on a reliable channel? This channel is either a TCP socket or a reliable transport built on top of UDP - generally the latter, though the same logic applies either way. This makes perfect sense for multiplayer servers. However, while it works in singleplayer, it's not a very efficient solution; we don't even need to do IPC, we just need to hand the data to a different part of the same program! Sending it out to the network layer only to receive it again is very wasteful. This is amplified by the fact that, due to limitations of the protocols, large packets - such as the full entity update - have to be split up. It looks like the Source engine splits packets into 256-byte "fragments"; this seems a bit small to be honest, but sure, okay. A full entity update is going to be several kilobytes; allocating buffers to split this up and then sending each fragment only to immediately re-combine them is, well, stupid. So, how do we deal with this inefficiency?
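
For a rough feel of the numbers, here is the fragment count for a hypothetical full update (the 256-byte fragment size is taken from above; the 8 KB update size is an assumption purely for illustration):

    #include <cstdio>

    int main() {
        const int kFragmentSize = 256;        // bytes per fragment (from above)
        const int kFullUpdateSize = 8 * 1024; // hypothetical 8 KB full update

        // Round up: a final partial fragment still needs a whole buffer.
        int fragments = (kFullUpdateSize + kFragmentSize - 1) / kFragmentSize;

        std::printf("%d bytes -> %d fragments\n", kFullUpdateSize, fragments);
        // Prints: 8192 bytes -> 32 fragments
    }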

The answer is local transfers. The engine has a system called the "local network backdoor" which effectively special-cases full entity updates when we're using the local server. When it's active, full updates aren't sent over the usual network channels; instead, once the SVC_ClassInfo message is received (over the normal reliable UDP/TCP transfer), the entity data which would normally be sent in SVC_PacketEntities is immediately transferred directly from the server. This saves us the overhead of serialising and splitting everything up - great!
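
In sketch form, the special case looks something like this (illustrative code - the struct and function names are made up; only the message names and the overall idea come from the engine):

    #include <cstdio>

    struct Server { int entityState = 42; };
    struct Client { int entityState = 0; };

    // The slow path: serialise, fragment, send, receive, re-combine.
    static void SendOverChannel(const char *msg) {
        std::printf("over reliable channel: %s\n", msg);
    }

    static void SendFullUpdate(Server &sv, Client &cl, bool backdoorActive) {
        SendOverChannel("SVC_ClassInfo"); // always goes over the channel
        if (backdoorActive) {
            // Backdoor: hand the entity data straight to the client,
            // in-process; nothing is serialised or fragmented.
            cl.entityState = sv.entityState;
        } else {
            SendOverChannel("SVC_PacketEntities"); // the fragmented path
        }
    }

    int main() {
        Server sv;
        Client cl;
        SendFullUpdate(sv, cl, /*backdoorActive=*/true);
        std::printf("client entity state: %d\n", cl.entityState);
    }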

However, the local network backdoor is only active in certain cases. Every frame, the engine checks whether it should be enabled or not, and changes its state if necessary. Its conditions are as follows (a code sketch of the check follows the list):

  • The cl_localnetworkbackdoor cvar must be nonzero (it's 1 by default)
  • The network channel we'd be replacing must be a loopback channel (i.e. it must be a local server)
  • The server must be active
  • We must be in a singleplayer game
  • The demo player must not be active
  • The demo recorder must not be active
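
Restated as code, the per-frame decision is roughly the following (an illustrative sketch - the cvar name and the conditions come from the list above, but the struct and function are not real engine code):

    struct EngineState {
        float cl_localnetworkbackdoor; // the cvar; 1 by default
        bool  channelIsLoopback;       // i.e. a local server
        bool  serverActive;
        bool  singleplayer;
        bool  demoPlayerActive;
        bool  demoRecorderActive;
    };

    bool ShouldUseLocalBackdoor(const EngineState &s) {
        return s.cl_localnetworkbackdoor != 0.0f
            && s.channelIsLoopback
            && s.serverActive
            && s.singleplayer
            && !s.demoPlayerActive
            && !s.demoRecorderActive; // recording forces the slow path
    }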

Notice the last two of these conditions. The demo player one makes sense given how demo playback works: playback functions as a kind of fake transfer, since the demo contains all the network packets, so we shouldn't try to transfer from the local server - we're not using it! But why can't we use the local backdoor while a demo is recording? This is because the demo needs to store the full update packet at the start. That means we need to actually receive that packet in order to write it to the demo file; if we were using the local backdoor, the packet would never even be created! Therefore, the local backdoor has to be disabled to make sure the recorded demo is valid (or rather, contains all the information necessary to play it back correctly).

However, this means that the full update is sent over the slow, socket-based transfer. Once this data is sent, it seems that it is not all received immediately; instead, it is buffered across several engine frames, the number being proportional to the number of fragments received. Importantly, during this time, the server is still allowed to run its tick method (GameFrame). It does not simulate the world in this time - as we know, the server only simulates while there is a player fully connected (i.e. in SIGNONSTATE_FULL) - but the tick count still increases for these non-simulated ticks.
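
The key property, in sketch form (illustrative code, not the engine's - GameFrame is the real method name, the rest is made up):

    // The server's tick count advances even on frames where the world
    // isn't simulated: these are the "non-simulated ticks".
    struct Server {
        int  tickCount = 0;
        bool clientFullyConnected = false; // i.e. client in SIGNONSTATE_FULL

        void GameFrame() {
            if (clientFullyConnected) {
                // SimulateWorld(); // physics, entities, etc.
            }
            ++tickCount; // increments either way
        }
    };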

In normal circumstances, this wouldn't really matter: the client's tick count is routinely synchronised with the server's, and the client accurately predicts tick timing, so it would remain in sync. However, because all the packets are sent over a reliable transport - and therefore come through in order - all of this happens before the NET_Tick synchronisation message has been processed by the client. After all the entity data has been processed - and, say, 12 non-simulated ticks have elapsed - the NET_Tick message is finally processed. This synchronises the client's tick count... to the server's old tick count, from 12 ticks ago. For the sake of example, let's say the server's tick count is now 32, so the client's is set to 20.

Now that the entity data has all been received and the tick synchronised, the client happily notifies the server of its advance to SIGNONSTATE_FULL, beginning the session and world simulation. Timers like the CM timer and SAR's speedrun timer, both of which are based on the client tick, duly note the tick the session started on. At first, the client remains desynchronised, at tick 20. However, after simulating one or two ticks (depending on alternateticks), the server will resync its tick count (now 34, assuming alternateticks) to the client again - it does this after every simulation. This time, there's no network delay from a full transfer; the client's tick is updated immediately, jumping forward by 14 rather than the 2 you'd expect. The timers therefore see 12 ticks (0.2s) more added than they should: demo delay.
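
The arithmetic, worked through with the example numbers above (the 12-tick delay is illustrative, and Portal 2's 60 ticks per second is assumed):

    #include <cstdio>

    int main() {
        const double kTickRate = 60.0; // ticks per second
        int serverTick = 32;           // after the 12 non-simulated ticks
        int clientTick = 20;           // synced to the server's *old* tick

        serverTick += 2;                          // one alternateticks frame
        int jump = serverTick - clientTick;       // resync: 34 - 20 = 14
        double spurious = (jump - 2) / kTickRate; // 12 extra ticks

        std::printf("tick jump: %d; spurious time: %.1fs\n", jump, spurious);
        // Prints: tick jump: 14; spurious time: 0.2s
    }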