Hey, when I transfer my game to my mobile (Android), the levels scale to fit the screen instead of keeping their original ratio. I can’t find out how to turn this behavior off.
//Rasmus
You can control the rendering of in-game content with a custom render script. The default one (in “builtins/render/default.render_script”) renders sprites, particles and tiles with the following:
render.set_viewport(0, 0, render.get_window_width(), render.get_window_height())
render.set_view(self.view)
[...]
render.set_projection(vmath.matrix4_orthographic(0, render.get_width(), 0, render.get_height(), -1, 1))
This means that the viewport (the part of the screen that is drawn to) spans the full width and height of the target device. You can change this to a behavior that fits your game by taking the aspect ratio defined by the “width” and “height” settings in game.project (accessible through render.get_width() and render.get_height()). You can then decide how to deal with the aspect ratio: crop the game on screen (change the viewport) or zoom in/out to fit (change the projection).
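For example, here is a minimal letterboxing sketch of an update() that keeps the game.project aspect ratio by shrinking the viewport instead of stretching the content (draw calls elided):
[code]function update(self)
    -- Design resolution, i.e. "width" and "height" from game.project
    local design_w = render.get_width()
    local design_h = render.get_height()
    local window_w = render.get_window_width()
    local window_h = render.get_window_height()

    -- Uniform scale so the design resolution fits inside the window
    local scale = math.min(window_w / design_w, window_h / design_h)
    local vp_w = design_w * scale
    local vp_h = design_h * scale

    -- Center the viewport; the remaining border is simply left undrawn
    render.set_viewport((window_w - vp_w) / 2, (window_h - vp_h) / 2, vp_w, vp_h)
    render.set_view(self.view)
    render.set_projection(vmath.matrix4_orthographic(0, design_w, 0, design_h, -1, 1))
    [...]
end[/code]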
How come this isn’t represented when you run from the Defold engine directly?
The levels didn’t stretch while running in the engine, but they did on mobile…
This is something that I wish Defold would handle automatically with the default behavior of the camera. The “in-game” camera should behave the same way as the GUI camera: update as you resize/rescale the window you run the game in, properly updating the matrix.
Yes it should. I’ve wasted all afternoon trying to make this work and still can’t figure it out.
Not sure if this helps, but I did this in order to make the camera properly view a specified rectangular area in the game, correcting for aspect ratio.
I guess you just need to run it every frame, or detect when a resize occurs. In my case, I just run it once on startup.
All of this code is in my custom render_script:
function init(self)
    self.x = 0
    self.y = 0
    self.w = 1
    self.h = 1
    [...]

[...]
render.set_projection(vmath.matrix4_orthographic(self.x, self.x + self.w, self.y, self.y + self.h, -10, 10))
[...]
[code]function adjust_camera_after_tiles(self, rect)
    -- Add some padding
    local pad = 64
    rect.x = rect.x - pad
    rect.y = rect.y - pad
    rect.w = rect.w + pad * 2
    rect.h = rect.h + pad * 2
    -- Get ratios
    local ratio_screen = render.get_window_width() / render.get_window_height()
    local ratio_tiles = rect.w / rect.h
    -- Aspect ratio correction
    if ratio_screen > ratio_tiles then
        local mod = rect.h * ratio_screen
        rect.x = rect.x - (mod - rect.w) * 0.5
        rect.w = mod
    else
        local mod = rect.w / ratio_screen
        rect.y = rect.y - (mod - rect.h) * 0.5
        rect.h = mod
    end
    -- Send to level data
    level_data.camera_rect = { x = rect.x, y = rect.y, w = rect.w, h = rect.h }
    -- Save in this object
    self.x = rect.x
    self.y = rect.y
    self.w = rect.w
    self.h = rect.h
    -- Save camera res
    level_data.screen_res = { x = render.get_window_width(), y = render.get_window_height() }
end[/code]
But, again, this should really be automatic
If you resize the window of the running engine on desktop you will see how it is set up to scale.
What would a good default behavior look like, in your view? (there are a few options)
I’d say keep one axis locked, expand/contract the other, based on whether the aspect ratio is wide or tall, and based on a specified size/apothem of the camera.
Unity does this partially. It can only lock the Y axis. (Although you can alter it with code to lock on X if you like)
If you set it to orthographic, you can specify the size/apothem of the camera on the Y axis. The X axis size then auto-adjusts based on aspect ratio, so the size/resolution of the screen doesn’t matter: the camera will still show the same content (1920x1080 shows exactly the same amount of content as 1280x720, since they’re both 16:9).
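In Defold render script terms, that Y-locked behavior could look roughly like this (just a sketch; self.ortho_height is an assumed parameter for how many world units the camera shows vertically, not a builtin):
[code]-- Lock the vertical size, derive the horizontal size from the window aspect
local aspect = render.get_window_width() / render.get_window_height()
local h = self.ortho_height -- assumed design parameter
local w = h * aspect
render.set_projection(vmath.matrix4_orthographic(-w / 2, w / 2, -h / 2, h / 2, -1, 1))[/code]
Any two windows with the same aspect ratio then show exactly the same world content, regardless of resolution.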
Ok, we’ll have a look at updating the default render script with something like that.
I would like to have the option to choose.
With GUI you can choose to either stretch or fit, which seems to work fine with all kinds of ratios. Something similar to that, but set at the game object/camera level, would be lovely.
The GUI rendering model would not work for game objects, since GUI nodes are placed in screen space and laid out automatically. For game objects you almost always want exact placement relative to other elements, and automatic rescaling and layout at the GO level would alter the relations between objects.
The idea with the render script is that you can indeed choose. You can render your game in any way you want. It is possible to alter the scaling of draw passes individually, have multiple views into the game (split screen), post effects, advanced lighting and pretty much anything you want. However, accessing all this power requires that you do some work, so if you want something simple it might feel daunting that you have to set up and modify the render script.
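To illustrate the flexibility, a split screen is essentially just two viewport/view pairs in update(). A sketch, assuming self.view_p1 and self.view_p2 hold view matrices sent from two camera scripts, and self.projection is whatever projection you use:
[code]local w = render.get_window_width()
local h = render.get_window_height()

-- Left half, player 1
render.set_viewport(0, 0, w / 2, h)
render.set_view(self.view_p1)
render.set_projection(self.projection)
render.draw(self.tile_pred)

-- Right half, player 2
render.set_viewport(w / 2, 0, w / 2, h)
render.set_view(self.view_p2)
render.set_projection(self.projection)
render.draw(self.tile_pred)[/code]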
There are at least two things I think we at Defold can do here:
- Look at the camera component. We are aware that it is not adequate, it needs to be properly redesigned and implemented. Doing that would probably make it much easier to solve a couple of common use-cases. This is in our backlog and will happen some time in the future.
- Improve on documentation and examples. We are continuously working on this. Perhaps it would be a good idea to give a few examples with common setups that you could just cut and paste into a custom script?
If you have other ideas, please let us know.
Not to be the guy who keeps mentioning Unity, but, Unity’s solution is pretty clean. If you want split screen, you create two cameras, and change their individual normalized screen-space rectangle. If you want post processing, you attach it to the camera you want it on. All without coding - it’s in the camera’s exposed properties.
Yes, Unity provides a lot of functionality. They have a very different approach to how the engine is designed.
Out of curiosity, can you do one pass full screen post processing in Unity with two camera views?
Indirectly, yes, though it requires some fiddling!
It would look something like this:
- Have two cameras render to a render texture, instead of rendering to the screen
- Set up a script to use that RT as post process target
- Blit that render texture onto the screen
I figured it out after a lot of trial and error.
I use a background that is 1140x720 so that all ratios will work. I only use 960x640 of this background when the game is in 3:2 ratio.
I modified the render script so that it fills the screen while still keeping the ratio.
If the screen is in a different format than 3:2, it renders a bigger part of the background, so that 1140x640 (16:9) or 960x720 (4:3) will be drawn.
To make sure the background and all objects are aligned to the center, I give them an offset so that they can adjust to the new format.
But since the viewport is still 960x640, I need a formula to convert the action.x/y coordinates to the new system as well.
After all that, my game will now (hopefully) run at all resolutions without stretching and/or leaving black unused areas on the screen =)
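The projection math ends up roughly like this (a simplified sketch of the idea, not my exact script; 960x640 is the 3:2 design area and the extra background art covers the expansion):
[code]local design_w, design_h = 960, 640
local aspect = render.get_window_width() / render.get_window_height()
local w, h = design_w, design_h
if aspect > design_w / design_h then
    w = design_h * aspect -- wider than 3:2: reveal more background on X (up to 1140)
else
    h = design_w / aspect -- taller than 3:2: reveal more background on Y (up to 720)
end
-- Center the expanded area around the 3:2 design rectangle
local x0 = (design_w - w) / 2
local y0 = (design_h - h) / 2
render.set_projection(vmath.matrix4_orthographic(x0, x0 + w, y0, y0 + h, -1, 1))[/code]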
Which brings us to the coordinate conversion.
I used the following lines to get the new mouse position:
-- Normalize against the 960x640 design size, then scale to the
-- total visible width/height computed for the current aspect ratio
local mouse_x = action.x / 960 * self.m_TotalWidth
local mouse_y = action.y / 640 * self.m_TotalHeight
local mousepos = vmath.vector3(mouse_x, mouse_y, 1)
Although I think it will only work if the viewport starts at 0,0.
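If the viewport doesn’t start at 0,0, one way to generalize the mapping is to go through window coordinates (a sketch; vp_x/vp_y/vp_w/vp_h are assumed to be the viewport values saved from the render script, not builtins):
[code]-- action.screen_x/screen_y are window pixel coordinates
local nx = (action.screen_x - vp_x) / vp_w -- normalized 0..1 inside the viewport
local ny = (action.screen_y - vp_y) / vp_h
local mousepos = vmath.vector3(nx * self.m_TotalWidth, ny * self.m_TotalHeight, 1)[/code]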
Ok, thanks. Interesting.
There is a design idea floating around here for how to make something like that possible, in conjunction with a new camera component. Not sure how far into the future that would be, though.