Understanding Render Targets

Hello there,

I’ve been trying to wrap my head around OpenGL for the past few days and have stumbled into some questions regarding render scripting and render targets.

I have created my render target:

self.render_target_shadowmap = render.render_target("shadowmap", parameters)
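
where parameters is just the usual color buffer setup, something along these lines (only a sketch; the exact format, size and filter values are illustrative):

local color_params = {
    format = render.FORMAT_RGBA,
    width = render.get_window_width(),
    height = render.get_window_height(),
    min_filter = render.FILTER_LINEAR,
    mag_filter = render.FILTER_LINEAR,
    u_wrap = render.WRAP_CLAMP_TO_EDGE,
    v_wrap = render.WRAP_CLAMP_TO_EDGE
}
local parameters = { [render.BUFFER_COLOR_BIT] = color_params }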

and this seems to be how you draw onto that render target:

render.enable_render_target(self.render_target_shadowmap) -- redirect drawing to the render target
render.clear({[render.BUFFER_COLOR_BIT] = self.clear_color}) -- clear its color buffer
render.draw(self.light_pred) -- draw everything matching the light predicate
render.disable_render_target(self.render_target_shadowmap) -- back to the default frame buffer

I then want to draw this shadowmap, blending it on top of my scene. I’ve managed to do this by rendering onto a GUI box node that covers the screen (and is rotated 180 degrees around Y, because the output image would otherwise be mirrored) and then setting the box node’s blend mode to multiply.

render.enable_texture(0, self.render_target_shadowmap, render.BUFFER_COLOR_BIT)
render.draw(self.screenquad_pred)
render.disable_texture(0, self.render_target_shadowmap, render.BUFFER_COLOR_BIT)

Would you say that this is the intended way to do fullscreen draw operations or are there other ways?

What I really would like to do is draw both my scene and the shadowmap to separate render targets and then pass them as textures to the shader where I can do custom blend operations. Is there currently any way of doing that in Defold?

Render scripting seems really nice to use. I’m just hoping for as much flexibility as possible.

Yes, or alternatively you can use a model consisting of a quad and have that cover the screen. I haven’t tried using GUI nodes myself, but I think a model quad will take care of the 180 degree flip problem for you. (Most post effects done here are constructed with model quads, so there might be other reasons why that’s better than a GUI node.)
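
In the render script, the pass for such a quad could look roughly like this (only a sketch; the predicate name and the -1..1 range are assumptions that depend on how the quad model and its material are set up):

-- assumes the quad's material is matched by a "screenquad" predicate
-- and that the quad's vertices span -1..1 in x and y
render.set_view(vmath.matrix4())
render.set_projection(vmath.matrix4_orthographic(-1, 1, -1, 1, -1, 1))
render.enable_texture(0, self.render_target_shadowmap, render.BUFFER_COLOR_BIT)
render.draw(self.screenquad_pred)
render.disable_texture(0, self.render_target_shadowmap, render.BUFFER_COLOR_BIT)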

Sure, it’s certainly possible to do what you ask. Currently, when you do:

render.enable_texture(0, self.render_target_shadowmap, render.BUFFER_COLOR_BIT)

the 0 refers to an entry in the list of textures that you can define in your material. If none are listed, the first texture is bound to 0. You can add more textures to the material, name them and use samplers for them in your shader.
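
For example (just a sketch with a made-up render target name), if your material lists a second texture you could feed another render target into that slot with:

render.enable_texture(1, self.render_target_scene, render.BUFFER_COLOR_BIT)

and then sample it through the corresponding sampler in your fragment shader.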

But that would require an extra pass in your case, so it’s probably better to keep doing what you already do and assign a custom shader to the material used for rendering the quad. You would then get the scene pixel data as input and can sample the shadow map from texture 0 and blend them as you see fit.

Cool, I’ve gotten it working. I had misunderstood how render targets work and thought that the index in the “enable_texture”-call referred to one out of many textures in the render target, and was thoroughly confused. I see now that one render target == one texture and that the index in the call lets you choose which texture slot in the material to feed the render target into.

This sounds like the smoothest solution, but (and pardon my ignorance here) how would you go about getting the scene pixel data as input in the shader without sampling a texture? From what I’ve learned, fetching a pixel value from the back buffer is not possible, or at least not recommended, so is it something that is fed into the shader via the render script?

Oh, sorry. I wasn’t thinking right. You’re correct, you need two samplers.
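
On the render script side it would look something like this (only a sketch; self.render_target_scene, self.tile_pred and a quad material that declares two samplers are assumptions about your setup):

-- draw the scene into its own render target
render.enable_render_target(self.render_target_scene)
render.clear({[render.BUFFER_COLOR_BIT] = self.clear_color})
render.draw(self.tile_pred)
render.disable_render_target(self.render_target_scene)

-- draw the lights into the shadowmap render target
render.enable_render_target(self.render_target_shadowmap)
render.clear({[render.BUFFER_COLOR_BIT] = self.clear_color})
render.draw(self.light_pred)
render.disable_render_target(self.render_target_shadowmap)

-- bind both as textures 0 and 1 and blend them on the full screen quad
render.enable_texture(0, self.render_target_scene, render.BUFFER_COLOR_BIT)
render.enable_texture(1, self.render_target_shadowmap, render.BUFFER_COLOR_BIT)
render.draw(self.screenquad_pred)
render.disable_texture(0, self.render_target_scene, render.BUFFER_COLOR_BIT)
render.disable_texture(1, self.render_target_shadowmap, render.BUFFER_COLOR_BIT)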

I think I know now why you use quads instead of GUI nodes. GUI nodes are automatically Fit, Zoomed or Stretched according to the aspect ratio, which kind of screws with rendering if you render to a GUI node.