catid Posted February 16, 2004

G'day,

I've been thinking more and more about synchronizing clocks. I am actively experimenting with various techniques, trying to eliminate some of the random and systematic error inherent in common sync protocols like NTP (Network Time Protocol) and SubSpace.

The meat of this message is a spreadsheet of experimental data from a protocol designed to be like SubSpace. It implements just the "ping pong" part of the protocol: the client sends its local time, in hundredths of a second since the computer started, at the moment it sends the "ping", and the server replies with that same timestamp plus the time the server had when it received the ping. (Since these timestamps depend on when each computer started, how fast its clock ticks, and error, they are probably very, very different.)

Given the client send time (CST - not central standard), server receive time (SRT), and client receive time (CRT - not Chinese remainder), you can determine the difference between the two clocks (DC - not D/C), plus or minus the ping time, like so:

NOTE: "Determined" variables are marked with <>. These are assumed to have perfect accuracy.
NOTE: "Approximated" variables are marked with [].
NOTE: Yet-unknown variables are not marked either way.

TT := Time to transfer the UDP PING packet to the server
TTp := Time to transfer the UDP PONG packet to the client

<CST> + DC + TT = <SRT>. --> <SRT> - <CST> = DC + TT.

PING_TIME := Time between sending ping and receiving pong, on client-side.
K := Percentage of ping time spent going from client to server. SubSpace protocol says this is 60%.

[TT] ~= <PING_TIME> * [K],
[TT] ~= (<CRT> - <CST>) * [K].

--> [DC] ~= <SRT> - <CST> - (<CRT> - <CST>) * [K].

If [K] is approximated as 50%, then it can be simplified:

[DC] ~= <SRT> - <CST> / 2 - <CRT> / 2,
[DC] ~= <SRT> - (<CST> + <CRT>) / 2.

This is essentially what NTP (Network Time Protocol) uses, except that NTP also factors in the microseconds it takes the server to receive a packet and send another one. Since SubSpace operates at the level of hundredths of a second, this will not affect anything. I did some tests with that too: it takes about ~40 microseconds to ping myself on loopback, so the difference from NTP will be even smaller than 40 usec.

SubSpace, however, uses 60% for [K], so:

[DC] ~= <SRT> - <CST> - (<CRT> - <CST>) * 0.6.

How much does this affect the accuracy of [DC]? Well, instead of blabbering on any more, here are the graphs.

test4.xls --> [K] = 0.5
test7.xls --> [K] = 0.4

Here's why I thought someone might be interested: using 0.4 makes the average [DC] lower, so when you do <CST> + [DC], the sum is smaller, which means the timestamps are less likely to be greater than the timestamps generated on any other clients.

test4.xls (K = 1/2)
test7.xls (K = 2/5)

Edited: Test7 is NOT how SubSpace does it. It IS representative of a 10% divergence from 50%, so flip the graphs upside down if you want to see it that way. If you play with the equations, you'll see that 60% of the ping from the remote client and 40% of the ping from the local client are used to approximate the transfer time of a position packet from remote to local. This is beyond the scope of my research, which involves a single client and a single server for the purposes of clock synchronization.
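To make the arithmetic concrete, here is a minimal sketch of the offset calculation in C. The function name, the tick type, and the sample numbers are my own inventions for illustration, not part of the SubSpace or NTP protocols; only the formula itself comes from the derivation above.

```c
#include <stdio.h>
#include <stdint.h>

/* Timestamps in hundredths of a second since each machine started. */
typedef int64_t ticks_t;

/*
 * Estimate the clock difference [DC] from one ping/pong exchange, given
 * the fraction K of the round trip assumed to be the client->server leg:
 *
 *   [DC] ~= <SRT> - <CST> - (<CRT> - <CST>) * [K]
 */
static double estimate_dc(ticks_t cst, ticks_t srt, ticks_t crt, double k)
{
    return (double)(srt - cst) - (double)(crt - cst) * k;
}

int main(void)
{
    /* Hypothetical sample: client sent at 1000, server stamped 61003,
     * client got the pong back at 1010 (a 10-centisecond round trip). */
    ticks_t cst = 1000, srt = 61003, crt = 1010;

    printf("K = 0.5 (NTP-style): DC ~= %.1f\n", estimate_dc(cst, srt, crt, 0.5));
    printf("K = 0.6 (SubSpace):  DC ~= %.1f\n", estimate_dc(cst, srt, crt, 0.6));
    return 0;
}
```

With these numbers the two choices of [K] disagree by one centisecond (59998.0 vs. 59997.0); the disagreement grows in proportion to the ping.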
catid Posted February 16, 2004 (Author)

Oops, I forgot some important info about the SubSpace ping. It follows this algorithm to selectively drop pings (see the sketch below):

ping := round trip time

if time since the last ping was accepted > 2 minutes, then accept this ping.
else, if ping + 1 < last accepted ping, then accept this ping.
else, if ping > last accepted ping times two, then accept this ping only if time since last security checksum > 1 minute.

I do not believe NTP drops pings in this way, though I may be mistaken. In any case, it's a good idea, based on my graphs. See "adjusted diff. vs. ping": as ping increases, random error increases, so it's a good idea to keep the lowest pings for best results.
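Here is how that drop filter might look in code. This is a minimal sketch of the rules as stated above, assuming times are tracked in seconds; the struct and function names are mine, not taken from the actual SubSpace source.

```c
#include <stdbool.h>
#include <stdint.h>

/* State carried between pings; all times are in seconds. */
struct ping_filter {
    uint32_t last_accepted_ping;  /* round trip time of last accepted ping */
    uint32_t last_accept_time;    /* when we last accepted a ping */
    uint32_t last_checksum_time;  /* when the last security checksum ran */
};

/* Returns true if this round-trip measurement should be accepted. */
bool accept_ping(struct ping_filter *f, uint32_t ping, uint32_t now)
{
    bool accept;

    if (now - f->last_accept_time > 120) {
        /* Over 2 minutes since the last accepted ping: take it. */
        accept = true;
    } else if (ping + 1 < f->last_accepted_ping) {
        /* Noticeably lower ping: always keep the better measurement. */
        accept = true;
    } else if (ping > f->last_accepted_ping * 2) {
        /* Ping more than doubled: only accept if the last security
         * checksum was more than a minute ago. */
        accept = (now - f->last_checksum_time > 60);
    } else {
        accept = false;
    }

    if (accept) {
        f->last_accepted_ping = ping;
        f->last_accept_time = now;
    }
    return accept;
}
```

Note that measurements falling between the two thresholds (neither clearly better nor suspiciously worse) are simply dropped, which is what biases the filter toward the lowest pings.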
Mr Ekted Posted February 17, 2004

If the two computers' clocks run at exactly the same speed, and there's no change in sync packet latencies, then the two ends should stay synced always. If the client is slightly faster or slower, it should smoothly adapt over time (although I think if the client notices that it's adjusting too much, it should increase the frequency of sync). If the network has a small spike during the sync packets, there could be a temporary incorrect adjustment. When the adjustment is too large, the client should smooth it out, applying maybe only 10-20% of the adjustment, and again increase the frequency of sync until it's stable for several iterations.
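A rough sketch of that smoothing idea in C. The 15% gain, the stability threshold, the interval bounds, and all names are illustrative numbers of my own choosing; only the gain range (10-20%) and the speed-up-while-unstable behavior come from the post above.

```c
/* Smoothed clock-offset adjustment: apply only a fraction of each new
 * correction, and sync more often while the estimate is unstable. */
struct sync_state {
    double offset;         /* current smoothed clock offset, centiseconds */
    double sync_interval;  /* seconds until the next sync exchange */
    int    stable_count;   /* consecutive small corrections seen */
};

void apply_measurement(struct sync_state *s, double measured_offset)
{
    const double gain = 0.15;            /* take ~15% of each correction */
    const double stable_threshold = 2.0; /* "small" correction, centiseconds */

    double correction = measured_offset - s->offset;
    s->offset += gain * correction;

    if (correction < stable_threshold && correction > -stable_threshold) {
        /* Stable: after several quiet iterations, back off the sync rate. */
        if (++s->stable_count >= 4 && s->sync_interval < 60.0)
            s->sync_interval *= 2.0;
    } else {
        /* Large adjustment: sync more frequently until it settles. */
        s->stable_count = 0;
        if (s->sync_interval > 1.0)
            s->sync_interval /= 2.0;
    }
}
```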
pyxlz Posted February 17, 2004

You might want to take a look at one-dimensional Kalman filtering and Markov estimation/localization for error estimation models.
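For reference, a one-dimensional Kalman filter tracking the clock offset could look something like this. This is the standard textbook scalar filter, not anything tuned for SubSpace; the noise variances q and r are placeholders you would have to fit to real ping data.

```c
/* Scalar (one-dimensional) Kalman filter tracking a clock offset that
 * random-walks slowly between measurements. */
struct kalman1d {
    double x;  /* estimated offset */
    double p;  /* estimate variance */
    double q;  /* process noise variance (clock drift between pings) */
    double r;  /* measurement noise variance (ping jitter) */
};

double kalman1d_update(struct kalman1d *k, double measurement)
{
    /* Predict: the offset may have drifted since the last measurement. */
    k->p += k->q;

    /* Update: blend the prediction with the new measurement,
     * weighting by their relative uncertainties. */
    double gain = k->p / (k->p + k->r);
    k->x += gain * (measurement - k->x);
    k->p *= (1.0 - gain);

    return k->x;
}

/* Example initialization: struct kalman1d k = { 0.0, 100.0, 0.01, 4.0 }; */
```

One natural extension for this thread: since catid's graphs show random error growing with ping, r could be scaled with each sample's round-trip time instead of being a constant, which would down-weight high-ping measurements much like the drop filter does.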