fixed_fit_projection - wrong click coordinates, how to fix? (SOLVED)


watch the video:

When I switch to fixed_fit_projection mode, the clicks do not match the buttons.

The example is taken from the lessons.



Changing the render script from stretch to fixed fit changes how the game is rendered, but the input system does not know about this. The input coordinates are still in screen space. You need to manually scale the input values to handle the change in projection.

Another option is to use the defold-orthographic or RenderCam extensions from the Asset Portal. These extensions take care of both scaling and input (plus much more).
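To illustrate the manual approach, here is a rough sketch of converting screen pixels to world coordinates under a fixed fit projection. This assumes the camera is centered and is not from either extension; the helper name and the 960x640 resolution are made up, so adjust them to your game.project:

```lua
-- Hypothetical helper: convert real screen pixels to world coordinates
-- under a centered fixed fit projection. PROJECT_W/H must match the
-- display width/height in your game.project (960x640 is illustrative).
local PROJECT_W, PROJECT_H = 960, 640

local function screen_to_world(screen_x, screen_y)
	local win_w, win_h = window.get_size()  -- actual window size in pixels
	-- fixed fit keeps the whole projection visible, so the zoom is
	-- the smaller of the two scale factors
	local zoom = math.min(win_w / PROJECT_W, win_h / PROJECT_H)
	-- size of the visible world area, centered on the projection
	local world_w = win_w / zoom
	local world_h = win_h / zoom
	local offset_x = (world_w - PROJECT_W) / 2
	local offset_y = (world_h - PROJECT_H) / 2
	return screen_x / zoom - offset_x, screen_y / zoom - offset_y
end
```

When the window matches the game.project resolution, zoom is 1 and the offsets are 0, so the function is an identity mapping, which is a quick way to sanity-check it.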



tried the RenderCam extension:
rendercam.screen_to_viewport(x, y)

The y coordinates work correctly, but the x coordinates do not match (if the window is stretched wide).



Ping @ross.grams



Sorry, my internet is being too slow right now for me to see your video, but I am 99% sure ‘screen_to_viewport’ is not the function you want; try ‘rendercam.screen_to_gui_pick’.



does not work ;(

function on_input(self, action_id, action)
	if action_id == hash("click") and action.pressed then
		local x, y = rendercam.screen_to_gui_pick(action.x, action.y)
		print(action.x, action.y, x, y)
	end
end

DEBUG:SCRIPT: 13.5 559.5 13.5 559.5
DEBUG:SCRIPT: 14.4 437.5 59.9 437.5

I click on a point on the tile at x, y = 13.5, 559.5.
Then I change the width of the window and click on the same point on the tile.
The function returns incorrect coordinates, 59.9, 437.5, instead of 13.5, 559.5.



Have you done the proper setup of RenderCam?



I set everything up according to the instructions.



Oh, sorry, if you’re just using mouse clicks/touch you don’t need it, just use action.x and action.y directly with gui.pick_node. (This is in the Rendercam documentation, btw.)

Action.x and action.y are not screen coordinates if the window size is changed from what you set in your game.project file. They get stretched to fit the new window instead of expanding.

Action.screen_x and action.screen_y will give you real screen coordinates. Most of the Rendercam functions use these, not action.x and action.y.



Where can I read about action.screen_x/screen_y? I can’t find it in the documentation.
And how do I use gui.pick_node with tiles rather than with GUI nodes?


  • The on_input doc lists the available fields but doesn’t go into a lot of detail.

  • The input manual talks about the difference between x/y and screen_x/screen_y, but isn’t terribly clear about when you need one vs. the other.

  • The Rendercam documentation for rendercam.screen_to_gui_pick describes when you need that function and when you don’t.

This all may seem weird and pointlessly annoying (having separate screen and GUI coordinate systems), but it means GUI nodes keep the same reported position no matter how you change the window, which solves a lot more problems than it creates.

There is no ‘pick_node’ function for tiles (or sprites), that is for gui only. There are multiple different ways to do this. You can find one method in the Colorslide tutorial (in level.script), though that won’t quite work with the fixed_fit_projection. Since the tiles are rendered in ‘world’ space, you need to convert your mouse coordinates to ‘world’ space for it to work correctly. With Rendercam you can do this with ‘rendercam.screen_to_world_2d’. Then use those coordinates to check your tiles for collision (divide by the tile size, ceiling, etc, - like in the Colorslide level.script).
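A sketch of that approach, assuming RenderCam is set up; the TILE_SIZE constant is illustrative, so use your own tilemap's tile size:

```lua
local rendercam = require "rendercam.rendercam"

local TILE_SIZE = 64  -- illustrative; use your tilemap's tile size

function on_input(self, action_id, action)
	if action_id == hash("click") and action.pressed then
		-- convert real screen pixels to world coordinates
		local wpos = rendercam.screen_to_world_2d(action.screen_x, action.screen_y)
		-- world position -> 1-based tile index, as in the Colorslide level.script
		local tile_x = math.ceil(wpos.x / TILE_SIZE)
		local tile_y = math.ceil(wpos.y / TILE_SIZE)
		print("clicked tile", tile_x, tile_y)
	end
end
```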


rendercam.screen_to_world_2d(action.screen_x, action.screen_y)

that’s what i need
thank you



hi! I am getting “attempt to index global ‘rendercam’ (a nil value)” error.

I have fetched the library and done everything i need to do (created a camera, enabled shared state, added the rendercam to the bootstrap). what step might i be missing?

edit: okay, I was missing local rendercam = require "rendercam.rendercam"

My next question… I’m trying to create a tool that detects whether the user is swiping up, down, left, or right, using a GUI script. I’ve been testing how this works by creating a GUI node at the location returned by rendercam.world_to_screen(vmath.vector3(action.x, action.y, 0)) when clicking. But it seems to produce strange results, and results that vary depending on how I resize the screen. I’d really appreciate someone telling me what I need to do (I’d like to use fixed fit projection), because I’ve been fighting this for a long time and I thought it would be something fairly simple!



…Okay, here’s my solution, which doesn’t use rendercam but action.screen_dx and action.screen_dy.

Swipes must be:

  • in one of four directions (up, down, left, right)

  • quick (less than 30 frames between action.pressed and action.released)

  • …but not too quick (more than 5 frames between action.pressed and action.released)

  • unidirectional (if you swipe diagonally, the swipe isn’t counted)

  • at least 30 pixels of movement (no tiny accidental swipes!)


if self.swiping then
	if action_id == hash("click") then
		if action.pressed then
			self.swx = 0
			self.swy = 0
			self.time = 0
		elseif not action.released then
			self.swx = self.swx + action.screen_dx
			self.swy = self.swy + action.screen_dy
			self.time = self.time + 1
			if self.time > 5 and self.time < 30 then
				if math.abs(self.swx) > math.abs(self.swy) * 2 and math.abs(self.swx) > 30 then
					if self.swx > 0 then
						print("right swipe")
					else
						print("left swipe")
					end
				elseif math.abs(self.swy) > math.abs(self.swx) * 2 and math.abs(self.swy) > 30 then
					if self.swy > 0 then
						print("up swipe")
					else
						print("down swipe")
					end
				end
			end
		end
	end
end

So here are my questions:

  • Do you agree with my principles of swiping? Is there anything I’ve missed?

  • Have I safeguarded enough against accidental interpretation of touches as swipes?

  • Am I going to have any problems with different screen ratios, or is action.dx independent of changes in window size? What rendercam function would I need to mitigate those problems?

  • Apart from different screen ratios, will this work just as well on a 12" tablet as on a 5" phone?




The code looks ok. I think I would calculate the delta on release instead of accumulating it over every frame.
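That suggestion could look roughly like this. This is a sketch, not a definitive implementation; it keeps the same self.swiping state and thresholds as the code above, and the self.start_x/start_y/frames field names are made up:

```lua
if self.swiping and action_id == hash("click") then
	if action.pressed then
		-- remember where the touch started
		self.start_x = action.screen_x
		self.start_y = action.screen_y
		self.frames = 0
	elseif action.released then
		-- compute the full delta once, on release
		local dx = action.screen_x - self.start_x
		local dy = action.screen_y - self.start_y
		if self.frames > 5 and self.frames < 30 then
			if math.abs(dx) > math.abs(dy) * 2 and math.abs(dx) > 30 then
				print(dx > 0 and "right swipe" or "left swipe")
			elseif math.abs(dy) > math.abs(dx) * 2 and math.abs(dy) > 30 then
				print(dy > 0 and "up swipe" or "down swipe")
			end
		end
	else
		self.frames = self.frames + 1
	end
end
```

Computing the delta once on release also avoids reporting the same swipe on several consecutive frames, which the accumulating version can do.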

I have a gesture detection module here if you want to check another implementation:



thanks britzl!

I do still have one question which boils down to:

how do I get the real-life location of a touch within a GUI script so that I can create a GUI object that appears under the player’s finger? I am not sure what function of rendercam I need.



Can’t you use the normal on_input() callback to achieve this?



Please can you elaborate?



You’ve just got it backwards. action.x and action.y are already in GUI coordinates. You don’t need rendercam for that, just use them directly from on_input.

“World Space” means your game objects, sprites, etc. You see those through the camera. GUI is not drawn with a camera, it’s just fit to the window.

The difference between action.x/y and action.screen_x/screen_y is subtle, they are very similar. In fact, they will be exactly the same if your window size matches the resolution you set in your game.project. But if the window size changes, action.x/y will stretch to always give you the same range (the top right corner will always have the same coordinates), whereas action.screen_x/y will be the actual pixel coordinates.
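That relationship can be sketched as follows; this is a fragment meant for an on_input handler, and the 960x640 resolution is an illustrative stand-in for your game.project settings:

```lua
-- game.project resolution and current window size (960x640 is illustrative)
local PROJECT_W, PROJECT_H = 960, 640
local win_w, win_h = window.get_size()

-- action.x/y are the window pixel coordinates rescaled so that the
-- top-right corner is always (PROJECT_W, PROJECT_H):
local x = action.screen_x * PROJECT_W / win_w
local y = action.screen_y * PROJECT_H / win_h
-- x, y now approximately equal action.x, action.y
```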




I am just about ready to punch myself in the nutsack but the problem was the adjust mode of the node i was trying to track. I’ve set it to stretch and everything is now fine. UGH

edit: in case anyone runs into this problem again (creating a GUI node which follows exactly the location of the mouse on all screen sizes and ratios), you need to create a GUI node with another GUI node as a child. Set the parent’s adjust mode to stretch and the alpha to 0. Then, you can use the child GUI node as the one with the image (with the adjust mode as fit, maintaining its ratio) and the parent node will follow the mouse position accurately, just stretching itself if you change the screen size.