you bring up a lot of great points, thanks for sharing.
looks like it’s on its way.
Yep, the implementation is done and it will be released in the next version. Frustum culling for meshes is also on the near horizon.
The review is done! How will MRT work in Defold?
I’ve added documentation to the PR:
Great, thank you!
I’m looking into the changes now. So, the current buffer constant “render.BUFFER_COLOR_BIT” will be deprecated, and every render script should be updated to use “render.BUFFER_COLOR0_BIT” instead?
Ah, no. BUFFER_COLOR_BIT will still exist and be equal to BUFFER_COLOR0_BIT to maintain backwards compatibility:
…but yes, it is deprecated as in “we’d like to remove it one day”
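For illustration, creating a render target with two color attachments might then look something like this (just a sketch: the BUFFER_COLOR0_BIT/BUFFER_COLOR1_BIT keys follow the PR, the rest is the existing render.render_target API):

-- in init(): declare a render target with two color attachments
local color_params = { format = render.FORMAT_RGBA,
                       width = render.get_window_width(),
                       height = render.get_window_height() }
self.my_render_target = render.render_target("mrt", {
    [render.BUFFER_COLOR0_BIT] = color_params, -- e.g. albedo
    [render.BUFFER_COLOR1_BIT] = color_params, -- e.g. normals
})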
Reasonable, thank you! And is the number of buffers limited now (and if so, why)?
(static const uint8_t MAX_BUFFER_COLOR_ATTACHMENTS = 4;)
And how will it be possible to draw different predicates to different render targets?
I believe the spec defines a minimum of 4. We can perhaps later make this dynamic if there is a need for it.
I don’t think that is possible right now. For OpenGL it is glDrawBuffers that is used.
Maybe I’m not explaining my thoughts clearly.
From what I understand, MRT allows us to draw images (textures) to multiple render targets at once (in one draw call). But the example in the documentation shows only one predicate and one RT.
So how will we draw, for example, diffuse and specular together to multiple targets? Shouldn’t we use predicates to specify e.g. which texture is a diffuse map and which is specular? Or am I misunderstanding something?
Everything is done through a shader.
You know that your render-target-1 is diffuse and your render-target-2 is specular.
In your shader, you write diffuse to render-target-1 and specular to render-target-2.
In this way, you decide for yourself which is which.
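Something like this in the fragment shader (a minimal sketch using the classic gl_FragData outputs; the sampler names are just for illustration):

// one draw call fills both attachments of the render target
varying mediump vec2 var_texcoord0;
uniform lowp sampler2D diffuse_tex;
uniform lowp sampler2D specular_tex;

void main()
{
    gl_FragData[0] = texture2D(diffuse_tex, var_texcoord0);  // render-target-1: diffuse
    gl_FragData[1] = texture2D(specular_tex, var_texcoord0); // render-target-2: specular
}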
Good info for understanding this in detail:
https://learnopengl.com/Advanced-Lighting/Deferred-Shading
I’ve been studying those documents for a while, but I might still be misunderstanding some points, hence the questions.
Let’s assume a very simplified 2D ball example: I plan to create my own maps and apply the simplest Phong lighting to a 2D circle that should be lit as if it were a ball. The circle is in one constant position, and so are the camera and the light source (e.g. between the circle and the camera).
I was thinking that for such a simple case (one object with sprites and one light source) I could make a game object with sprites, e.g. one with the albedo texture, one with normals and one with specular (+ its constant (flat) position could be in a uniform? Or should I also have some kind of height map?)
Then I could pass the light and camera positions as uniforms to the fragment program of the quad’s material, on which I will draw the render target.
So I thought I would need to draw them to a proper render target, using proper predicates (e.g. tile_albedo, tile_normal, etc.).
I would need to draw all the textures anyway.
So in:
function update(self, dt)
    -- enable target so all drawing is done to it
    render.enable_render_target(self.my_render_target)
    -- draw a predicate to the render target
    render.draw("tile_albedo")
    render.draw("tile_normal")
end
Where self.my_render_target is an RT with multiple color buffers - how do I actually specify that one buffer represents albedo and another normals, so that each gets the proper values in the fragment program?
Is it that with the first call of render.draw("tile_albedo") I am assigning this texture (the albedo data) to the first texture unit? Will it then be available in the fragment program as e.g. 'tex0', the first sampler?
“Shader” alone is maybe not enough for me to understand this - do you mean the fragment program of a quad on which I draw a texture from the render target? (I imagine it like this)
If there is something wrong in the above understanding, please point it out too
Yes, you got that right about the fragment shader.
Your code variant does not work - because it renders both predicates (“tile_albedo” and “tile_normal”) to the same render target.
Otherwise, this is the way to go about it - use three different textures (albedo, normal, specular) in one rendering pass:
render.enable_render_target(self.my_render_target) -- all drawing now goes to the render target
-- bind the three source textures to texture units 0, 1 and 2
render.enable_texture(0, my_albedo_texture)
render.enable_texture(1, my_normal_texture)
render.enable_texture(2, my_specular_texture)
render.draw("tile") -- you only need ONE call to render.draw in this case
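And the second pass could then look something like this (a sketch; the "combine" predicate is illustrative, and the BUFFER_COLOR0_BIT-style constants for picking an attachment are assumed from the PR):

-- second pass: back to the default framebuffer
render.disable_render_target(self.my_render_target)
-- bind the MRT attachments as input textures for the screen quad's material
render.enable_texture(0, self.my_render_target, render.BUFFER_COLOR0_BIT)
render.enable_texture(1, self.my_render_target, render.BUFFER_COLOR1_BIT)
render.enable_texture(2, self.my_render_target, render.BUFFER_COLOR2_BIT)
render.draw("combine") -- a predicate matching the screen quad's material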
Generally speaking, understanding the rendering pipeline is not the easiest thing to do. And in the case of Defold, there are also a number of complexities associated with the infrastructure around the .render_script module.
Just tell me what you need. Describe your task in detail and I will try to tell you what steps you need to take to implement it in Defold. Or maybe someone can answer faster than me.
Thank you so much @morgerion!
I forgot about enabling textures and was again thinking in a forward-lighting-like manner. I understand drawing to 2 RTs (normal tiles and light) and then combining them on a quad (simple 2D lighting). But with MRTs I imagine I could make more advanced lighting, even for 2D (with normal maps, specular, etc.).
So, to actually have those different textures, will we be setting them with e.g. resource properties for each sprite, like here: API reference (sprite)?
What I was trying to describe above is, as far as I’ve learned, a simple example of deferred lighting. There could be any sprites on the screen, but let’s stick to that single circle in the middle of the screen. I want to fake lighting over it, as if it were a 3D ball. I can treat a 2D sprite like a single frame from a 3D scene (or you can imagine a very static 3D scene / one frame from a 3D scene; it’s easier for me to learn on a flat image in an orthographic view). I can prepare 4 textures for one sprite, like in Learn OpenGL (but with more sophisticated shapes):
If I want to assign 4 textures to one sprite, should I create separate atlases for each version? And then how can I pass those textures to the render script’s render.enable_texture? (These are more Defold-specific questions, and of course we are talking about a version of Defold with MRT support.)
There’s a big problem here - in Defold, you can’t assign multiple textures to a sprite (but meshes and models can).
But there is a way out of this - you can combine all the textures you want horizontally, for example. That is, you can put three textures into ONE texture at once. Combine them beforehand in Photoshop, for example. And in the sprite shader, you would shift the UV coordinates and read the values from the desired sub-texture.
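For example, a sketch of that UV shift for three equal-width textures packed side by side (the layout and names are made up), which also shows how it combines with MRT outputs:

// three equal-width sub-textures packed side by side in one atlas
varying mediump vec2 var_texcoord0;
uniform lowp sampler2D combined_tex;

void main()
{
    // each sub-texture occupies a third of the atlas width
    mediump float u = var_texcoord0.x / 3.0;
    gl_FragData[0] = texture2D(combined_tex, vec2(u, var_texcoord0.y));             // albedo
    gl_FragData[1] = texture2D(combined_tex, vec2(u + 1.0 / 3.0, var_texcoord0.y)); // normal
    gl_FragData[2] = texture2D(combined_tex, vec2(u + 2.0 / 3.0, var_texcoord0.y)); // specular
}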
This is an example of how many textures can be assigned to a mesh. Naturally, half of these textures are 4K*4K multi-texture atlases.
So for this I don’t even need MRT, just additional work in graphics software. Without multiple textures for sprites, MRT is only useful for models and meshes in that case, is it? So maybe when I’m merging multiple render targets that I draw to in the render script (using different predicates, for example), could I perhaps utilize MRT when drawing all the render targets to one quad (which fills my screen)?
Yes, you can.
But I still don’t see what that will do.
MRT makes sense to save on repeated render passes - in cases where you have very large amounts of geometry and textures, and fillrate is an issue for you.
I will tinker with it as soon as it is released. Meanwhile, @JCash added an update to Multiple Textures, so things are on a great track!