Turning off draw calls


#1

Hi everyone,

Just checking: is it possible to turn off draw calls entirely?

I created a game that can be launched in either server or client mode. In server mode the display is redundant, since the network model is server-client, so I wanted to reduce the CPU consumption if possible.

Not sure why, but the program actually crashes after prolonged running (roughly a day's worth); my only assumption is that it's due to the limitations of the physical server I am using right now. When I run it on my PC there is no issue.

Kindly enlighten me if there is a better way. I started off wrong in the network design, and I don't really have the time to change the structure again, so I am kind of stuck with this problem.

Thanks in advance!


#2

Make the build headless in the app manifest: https://britzl.github.io/manifestation/ You can exclude other options for servers too.
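For a rough idea of what that tool generates, here is an illustrative app.manifest fragment. The keys and library names below are assumptions from memory, not a verified file; use the Manifestation tool above to generate the real one for your platform:

```yaml
# Illustrative only -- generate the real file with the Manifestation tool.
platforms:
    x86_64-linux:
        context:
            # Swap real subsystems for their "null" stand-ins so the engine
            # can run without a window, GPU or audio device.
            excludeLibs: ["graphics", "sound", "record", "hid"]
            libs: ["graphics_null", "sound_null", "record_null", "hid_null"]
```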


#3

@andreas.strangequest should be able to provide more hands-on tips, since he's running Defold on AWS at scale. Using dmengine_headless, or stripping the engine down even further, is the recommended way to run headless on a server.


#4

Thanks, I will take a look! This seems like what I need; I'll have to try it out first.


#5

Thanks for providing the solution. After following the instructions and making the app manifest, I did manage to create a headless application, but it doesn't seem to work as expected.

I made a state machine in it, so it should run with some logic and delays, but somehow it's speed-running through all my states. Is there a reason behind this? The timer I am using simply adds dt each frame until it reaches some value.

I will test more with the configurations and see if I made a mistake somewhere.
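For reference, the accumulate-dt timer described above can be sketched like this (the names and durations are made up for illustration, not from the actual project):

```lua
-- Minimal sketch of an accumulate-dt delay timer.
local function make_timer(duration)
    local t = { elapsed = 0, duration = duration }
    function t.update(dt)
        t.elapsed = t.elapsed + dt
        return t.elapsed >= t.duration   -- true once the delay has passed
    end
    return t
end

-- In a script's update(self, dt) you would drive a state with it, e.g.
-- if self.timer.update(dt) then advance_state(self) end
-- If the headless build runs unthrottled, dt no longer reflects real time,
-- so the timer fires almost immediately: the "speed running" symptom.
```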


#6

You can try tracking time offsets yourself by looking at OS time instead of relying on dt. I have not done many experiments with headless setups, so I can't say what making the build headless may impact.
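A minimal sketch of that idea, assuming Defold's bundled LuaSocket is available as a wall-clock source (os.clock() is only a rough fallback here, since it measures CPU time on many platforms):

```lua
-- Compute dt from a wall clock instead of trusting the engine's dt.
-- socket.gettime() is LuaSocket's sub-millisecond wall clock, which
-- Defold bundles; os.clock() is just a stand-in when it's missing.
local gettime = (socket and socket.gettime) or os.clock

local last = gettime()

local function real_dt()
    local now = gettime()
    local dt = now - last
    last = now
    return dt
end

-- In update(self, dt) you would ignore the engine's dt and use
-- real_dt() to drive the state machine timers instead.
```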


#7

Headless builds will usually run at full speed. Try unchecking variable dt.


#8

No wonder it sped through the states instantaneously. Thanks for pointing that out; I will use an os.time offset.

But my concern with this approach is: won't the user be able to change their OS time and somehow mess up the offset value?

Correct me if I'm wrong. Thanks!


#9

You can use https://www.defold.com/community/projects/99117/ to use server time instead of local time, although you probably don't want to be constantly checking it.

You probably don't need to worry about the user messing with local time. These days it's almost always managed automatically by the OS, and users often don't even know they can change it manually.


#10

But you should be able to get a headless build to run at the correct frame rate. Right, @andreas.strangequest?


#11

Hi all, and sorry for the late reply. First vacation in over two years, so I'm trying to avoid work :smiley: And yes, we have managed to get the headless server to respect the framerate by unchecking variable_dt, but you will also need to set the vsync swap interval to 0 using this:
msg.post("@system:", "set_vsync", { swap_interval = 0 })
It feels a little hacky, but it is explained here: https://www.defold.com/ref/sys/#set_vsync:swap_interval

I hope that helps you further. Writing both the server and the client in Defold is really nice and has worked perfectly fine for our game so far.


#12

It's interesting that you wrote the server in Defold… but it seems like it will scale less well and ultimately be more costly to host in a setup like this. Is every game instance its own copy of the engine, or are you handling many games from a single engine instance? If every game is its own copy of the engine, you pay the extra OS cost of each copy. If multiple games run in a single copy of the engine, then one crash ends multiple games at once, and Defold is not really built for this use case. But if it works, it works! I'd like to hear more about your setup!


#13

We do it as well.
I've found it very convenient to write both the server and the client in Defold.
First of all, our game is strictly server-authoritative, which means all level loading, physics collision detection, game logic etc. are done on the server. Having that run on two different systems/languages would be a lot more hassle. Now I make one Defold build, run server and client locally, and can check everything immediately. I also have a visual representation of the game when needed, even on the server, so I can see what's going on: who is logged in, where they are, where the scanlines are shooting, how the level collision shapes look, and so on.
Talking to our hosting services and comparing stats, we are really on par with other similar titles in how many game instances we cram into one server. Also, all our network code and many other things are shared between server and client, so it becomes very easy to implement new features and debug everything.
If the server side were more of a "validate whether this move is possible, then forward the incoming data to all the other clients", I agree that a separate solution could be a better option. As it stands, the pros clearly outweigh the cons. I recommend you try it!


#14

Sounds like a direction I would like to try.
So to clarify: you have a build/project for the server, which is the full game instance running with all the players.
You then also have a project/build for the client, which is basically a thin client that forwards inputs to the server, receives updated state back, and draws locally.
If this performs well on the server, that's super.
Thanks!


#15

Hi! Thanks for the solution. Do you mind elaborating on what you mean by unchecking variable_dt? If I uncheck that, dt is essentially a fixed 1/60 value, isn't it? Or does setting the vsync swap interval to 0 at the same time help get the real dt back?

Thanks for clarifying.


#16

Hi everyone, an update on what I have done. I followed the instructions and turned variable_dt off and set the vsync swap interval to 0, but the framerate is still not what it ought to be, so I am using an os.clock() offset to get dt for the time being. But apparently my server still crashes after a short period of running; in fact, faster than when it was not a headless build. I could probably blame the server's specs for the crash, but it can't be that bad, can it?

This is the VM I am using, and nothing else is running on it except the application. Any advice? Thanks for the help.


#17

Hmm, why does it crash? What kind of error are you getting?


#18

I have no idea; a crash.dmp was created, but I have no clue how to read it. Is there any way to read it?


#19

Yes, you're supposed to read it on the next successful engine start, using crash.load_previous().
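A sketch of how that can look, using Defold's built-in crash module. The wrapper function and print format are illustrative; crash.load_previous, crash.get_extra_data, crash.get_backtrace and crash.release are the engine API:

```lua
-- Run this early on the next start, e.g. in init() of a bootstrap script.
-- Returns nil when there is no dump (or outside the engine), true otherwise.
local function read_previous_crash()
    if not crash then return nil end        -- crash module only exists in Defold
    local handle = crash.load_previous()
    if handle == nil then return nil end    -- no crash recorded last run
    print(crash.get_extra_data(handle))     -- user data / engine info, if any
    for i, frame in ipairs(crash.get_backtrace(handle)) do
        print(i, frame)                     -- one entry per stack frame
    end
    crash.release(handle)                   -- remove the dump once handled
    return true
end
```

Outside the engine the crash global does not exist, so the function simply returns nil.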


#20

Check out https://github.com/subsoap/err