Shadow map - how to properly create a render target? (SOLVED)

In the simple case, step by step (a rough render script sketch of the two passes follows this list):

  1. Place the camera at the point where the ‘shadow source’ is and aim it in the desired direction. A camera is an abstract concept; in 3D graphics it is just matrices.
  2. Render the view from this camera for all objects that cast a shadow, but write into the texture not a picture, but the distance from the shadow source to the point on the object.
    https://github.com/Dragosha/defold-light-and-shadows/blob/main/light_and_shadows/materials/shadow/shadow.fp#L16
  3. The result is a texture with a ‘depth map’ in the ‘shadow source’ coordinate system, i.e. the shadow map.
    https://github.com/Dragosha/defold-light-and-shadows/blob/main/light_and_shadows/light_and_shadows.lua#L142
  4. Calculate a special matrix for our ‘shadow source’ camera; multiplying by it translates any world coordinate into the coordinate system of the ‘shadow source’ camera.
    https://github.com/Dragosha/defold-light-and-shadows/blob/main/light_and_shadows/light_and_shadows.lua#L107
  5. Switch to the regular camera and render the world again, this time in full. Into the shader we pass the shadow map texture and the special matrix from the previous steps.
  6. In the vertex shader we translate the 3D coordinates of the vertex into 2D coordinates on the screen. Additionally we translate them into 2D coordinates in the shadow map (by multiplying with the special matrix).
    https://github.com/Dragosha/defold-light-and-shadows/blob/main/light_and_shadows/materials/model/model.vp#L33
  7. In the fragment shader we compare the depth of our fragment with the value from the shadow map texture. And this is where the ‘magic’ happens. If our value is greater, then our fragment is farther from the ‘shadow source’ than some other fragment, i.e. it is in shadow: something was nearer to the light bulb, and we colour this pixel with the shadow in mind. That nearer value is what we stored in step 2. If our value is smaller, it means that our pixel is not overshadowed by any object and it is in the lit zone.
    https://github.com/Dragosha/defold-light-and-shadows/blob/main/light_and_shadows/materials/model/model.fp#L73
  8. Going for a beer to rethink the above.
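
To tie the steps together, here is a very rough render script sketch of the two passes. It is only an illustration, not the actual code from the repo above: the predicate names, the 2048×2048 RT, the matrix fields in self, and the "tex_shadow" sampler / mtx_light constant are all assumptions you would adapt to your own material setup.

-- Pass 1 renders shadow casters from the light's point of view into the shadow map RT,
-- pass 2 renders the world normally, sampling that RT. All self.* fields are assumed
-- to be prepared in init() (predicates, light/camera matrices, the render target).
function update(self)
    render.set_render_target(self.shadowmap)
    render.set_viewport(0, 0, 2048, 2048)                      -- RT size, not the window size
    render.clear({ [render.BUFFER_COLOR_BIT] = vmath.vector4(1, 1, 1, 1),  -- white = "far away"
                   [render.BUFFER_DEPTH_BIT] = 1 })
    render.set_view(self.light_view)                           -- step 1: the "camera" sits at the light
    render.set_projection(self.light_projection)
    render.enable_state(render.STATE_DEPTH_TEST)
    render.set_depth_mask(true)
    render.enable_material("shadow")                           -- material that writes distance/depth (step 2)
    render.draw(self.shadow_pred)
    render.disable_material()
    render.set_render_target(render.RENDER_TARGET_DEFAULT)     -- step 3: self.shadowmap now holds the depth map

    -- Pass 2: regular camera, full scene, shadow map bound as an extra texture (steps 5-7).
    render.set_viewport(0, 0, render.get_window_width(), render.get_window_height())
    render.clear({ [render.BUFFER_COLOR_BIT] = vmath.vector4(0, 0, 0, 1),
                   [render.BUFFER_DEPTH_BIT] = 1 })
    render.set_view(self.camera_view)
    render.set_projection(self.camera_projection)
    self.constants = self.constants or render.constant_buffer()
    -- the "special matrix" from step 4 (plus any 0..1 remap your shader expects)
    self.constants.mtx_light = self.light_projection * self.light_view
    render.enable_texture("tex_shadow", self.shadowmap, render.BUFFER_COLOR_BIT)
    render.draw(self.model_pred, { constants = self.constants })
    render.disable_texture("tex_shadow")
end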

Thank you very much @Dragosha ! :heart:

Regarding 1. :
Do you have some tips on how to define the view and projection matrices for a given light source? The camera component gives me matrices, but am I thinking correctly if I consider utilizing cameras placed in point_light.go just for the purpose of giving me those matrices? Or is there a better approach?

EDIT: Ok, both Lights and Shadows and Jhonny’s Shadow mapping example create those matrices in code. I wonder if my idea of utilising a camera component for this is impossible or non-optimal? Isn’t the camera component mainly for this - doing the hard work of giving you the correct matrices from what you set up in the Editor?

And regarding this:

@jhonny.goransson is there any advantage of this over creating it in init()? I see that in @Dragosha’s Lights and Shadows the render target for the shadow map is created this way - there is a “shadowmap” render target resource and it’s linked to the render (and then used in the render script):

I can see that width and height are defined here, so it can’t be initialized with the initial window size. It either has to be at least as large as the window size from game.project, or it has to be recreated in the render script when the window size changes.

Besides that, one can add as many color attachments as one wants in the Render Target Editor, but from what I know we can only create up to 4 color attachments using the API - is that a bug?



It doesn’t seem to matter where exactly you create the RT; I just wanted to try how RT creation works in the editor after this feature was added in some version of Defold - before that the RT was created in code.
The thing is that the size of the shadow map RT does not depend directly on the size of the window; it doesn’t even have the window’s proportions. It is its own view from the light source, which is projected onto the view of the regular camera. But this resolution affects the accuracy with which shadows are drawn: the higher the RT resolution, the less ‘laddering’, especially if the light source is at a high angle to the surface in the regular camera view. There is such a thing as bias in the code to eliminate the moire effect (acne), and several samples over the shadow map averaged with noise - all this is done just to improve the final look of the shadow, to make it less ugly.
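
For reference, creating such a fixed-size RT in code (the pre-resource way mentioned above) might look roughly like this; the 2048×2048 size, the RGBA colour buffer for the packed distance, and the filter/wrap settings are just illustrative:

-- a sketch: a fixed-size shadow map RT, independent of the window size
local size = 2048
local color_params = {
    format     = render.FORMAT_RGBA,
    width      = size,
    height     = size,
    min_filter = render.FILTER_NEAREST,
    mag_filter = render.FILTER_NEAREST,
    u_wrap     = render.WRAP_CLAMP_TO_EDGE,
    v_wrap     = render.WRAP_CLAMP_TO_EDGE,
}
local depth_params = { format = render.FORMAT_DEPTH, width = size, height = size }
self.shadowmap = render.render_target("shadowmap", {
    [render.BUFFER_COLOR_BIT] = color_params,
    [render.BUFFER_DEPTH_BIT] = depth_params,
})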

As for using the camera component to get matrices - I guess you can, why not; I haven’t tried its new version with the extended API in my work yet.

It is also worth looking at other materials on this topic; besides explaining the principles, they have the main thing - illustrations.
https://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/


I’m using a slightly easier and lazier (because I am) way to calculate the light’s point of view.

local light_position = vmath.vector3(37.0, 50.0, 12.0) -- light current position -> go.get_position('/light')
local light_look_at  = vmath.vector3(0, 0, 0)          -- light target pos -> go.get_position('/light_target')
local up             = vmath.vector3(0, 1, 0)
self.light_transform = vmath.matrix4_look_at(light_position, light_look_at, up)
render.set_view(self.light_transform)
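
For a directional light that view is usually paired with an orthographic projection sized to cover the scene; a small sketch with made-up extents:

-- orthographic box around the shadowed area: left, right, bottom, top, near, far
local proj = vmath.matrix4_orthographic(-50, 50, -50, 50, 0.1, 200)
render.set_projection(proj)
-- the "special matrix" handed to the model shader is then proj * self.light_transform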


And I got the shadows :smiley:


Yes, the advantage is that since it is a resource it can act as a texture elsewhere in the engine. Meaning you can attach it in the editor to a model for example, so you can create debug panels or simplify writing effects. Or wrap it into an atlas that can be used for particles or sprites.

An RT created in a render script needs to either be used only in the render script, or passed into a script to be wrapped into a texture resource, which is just more tedious.

In a shadow mapping example perhaps it doesn’t make things that much easier, except for the creation, but RTs can be used for so much more than just shadow mapping.

In my example we didn’t have this functionality or the camera API; if I did it today I’d use both. But you don’t have to :slight_smile:


Any ‘known’ performance differences between using render resources for an RT (using it elsewhere) and creating it in a render script?

Nice indeed! One can then display anything on a sprite or model, e.g. make a virtual banner/TV screen or anything like this with a dynamic texture, like a view from a surveillance camera :star_struck: or mirrors! :sweat_smile:


Or mini map :slight_smile: (just posting again for future searches)


Not really, no; it’s one more resource to load, but using it in the engine is no different from creating it from a render script.

Yes exactly. And I think we want to add a “target” property to cameras that can take RTs, so you can hook most of these things up in the editor without using render scripts.


hmm… the new graphics module doesn’t have .TEXTURE_BIT, or I can’t find the equivalent…


No, because it’s not part of the graphics API - it’s still a render enum.

Ah! ok, sorry

Yeah, no problem. I was debating whether or not to add it to the graphics API but decided not to, at least not yet.


Besides LearnOpenGL, there is also a great video (and a whole tutorial series):

Shadow-mapping:


@jhonny.goransson The camera component looks great for spotlights and directional lights (orthographic projection), but afaik we don’t yet have a way to get a frustum from the camera API, right? Is it planned?

Another crucial thing for fully using camera components: when there is more than one camera sending "set_view_projection" messages to the render script, we only get its id, not the whole URL. I noticed it because I usually don’t name components uniquely, so I had 2 game objects, each with a component named "camera", and both received messages had `id = "camera"`. In such a case there is currently no way to tell which camera the view and projection matrices came from.

I don’t use that message personally; you already have all the cameras, so I don’t know why you would want to respond to that message?


So it’s not only me that sees a spotlight as a “camera” :sweat_smile:

It’s a lengthy, but very good explanation:


:sweat_smile: Yeah, you can get everything from the API or use .set_camera()

And when we render.draw() we should pass a frustum for culling, right? Uhm, but we can calculate it from proj*view from that camera, right? So yeah, no question :smiley:
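
For completeness, a tiny sketch of both routes mentioned above (the camera URL and the predicate name are made up):

-- Option A: build the culling frustum yourself from the same matrices you render with.
local frustum = proj * view
render.draw(self.model_pred, { frustum = frustum })

-- Option B: bind a camera component; with use_frustum its frustum is used for culling
-- in the subsequent draw calls.
render.set_camera("main:/camera_go#camera", { use_frustum = true })
render.draw(self.model_pred)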
