Frames per second - different when using dt versus system clock

Hello!

I was having some genuine performance issues with my game (which have since gone away after some refactoring, and after realising that streaming in OBS using window capture with Windows Aero enabled is a baaaaaad idea), which highlighted something odd with how I am tracking frames per second.

I have written two functions for tracking fps - one using the dt reported by the engine, and one using differences in os.clock(). The dt function reports 53 fps as the maximum framerate. The os.clock() function reports 60 fps as the maximum. The difference of ~7 fps seems consistent, as they both drop together if I do something intense. I have variable_dt enabled.

Any ideas on why this might be? It could very well be an issue with my functions, so I will attach them below just in case.

dt function:

local dt_table = {}
local dt_sum = 0
local frames_per_second = 0
local seconds_to_track = 2
local expected_fps = 60

local function show_fps(dt)

    -- accumulate the engine-reported dt into a sliding window
    dt_sum = dt_sum + dt
    table.insert(dt_table, dt)
    local len = #dt_table

    -- average frame rate over the window
    frames_per_second = len / dt_sum

    -- cap the window at roughly seconds_to_track seconds worth of frames
    if len >= seconds_to_track * expected_fps then
        dt_sum = dt_sum - dt_table[1]
        table.remove(dt_table, 1)
    end

    local n = gui.get_node("fps")
    gui.set_text(n, "FPS (dt): " .. math.floor(frames_per_second + 0.5))
end

OS clock function:

local last_time = os.clock()
local os_dt_table = {}
local os_dt_sum = 0
local os_frames_per_second = 0
local os_seconds_to_track = 2
local os_expected_fps = 60

local function show_os_fps()

    -- measure our own frame delta from os.clock() instead of the engine dt
    local dt = os.clock() - last_time

    os_dt_sum = os_dt_sum + dt
    table.insert(os_dt_table, dt)
    local len = #os_dt_table

    os_frames_per_second = len / os_dt_sum

    if len >= os_seconds_to_track * os_expected_fps then
        os_dt_sum = os_dt_sum - os_dt_table[1]
        table.remove(os_dt_table, 1)
    end

    last_time = os.clock()

    local n = gui.get_node("fps_os")
    gui.set_text(n, "FPS (OS): " .. math.floor(os_frames_per_second + 0.5))
end
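
For context, both counters are called once per frame from the gui_script's update callback, roughly like so (a simplified sketch rather than my exact script):

function update(self, dt)
    show_fps(dt)     -- uses the engine-reported dt
    show_os_fps()    -- uses the os.clock() delta
end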

Any insight would be greatly appreciated!

The precision of os.clock() and os.time() is probably too low. What if you use socket.gettime() instead?
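
Something like this as a rough sketch, just swapping the time source in your second counter (Defold bundles LuaSocket, so socket.gettime() should be usable directly; wall_clock_dt is just an illustrative name):

local last_time = socket.gettime()

-- Sketch: frame delta measured with socket.gettime(), which returns
-- wall-clock time in seconds with sub-millisecond precision.
local function wall_clock_dt()
    local now = socket.gettime()
    local dt = now - last_time
    last_time = now
    return dt
end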

That seems a plausible point!

I get exactly the same results using socket.gettime(). Here are a handful of prints of the two values:

DEBUG:SCRIPT: dt: 0.018954999744892
DEBUG:SCRIPT: os: 0.016000747680664

DEBUG:SCRIPT: dt: 0.018915999680758
DEBUG:SCRIPT: os: 0.017000198364258

DEBUG:SCRIPT: dt: 0.018921000882983
DEBUG:SCRIPT: os: 0.017002105712891

DEBUG:SCRIPT: dt: 0.01893899962306
DEBUG:SCRIPT: os: 0.016000747680664

DEBUG:SCRIPT: dt: 0.018955999985337
DEBUG:SCRIPT: os: 0.088005065917969