Place the camera at the point where the ‘shadow source’ (the light) is and aim it in the desired direction, then render the scene into the shadow map render target, storing depth. A camera is an abstract concept — in 3D graphics it is just a pair of matrices (view and projection).
Switch to the regular camera and render the world again, this time in full. Into the shader we pass the shadow map texture and the light’s view-projection matrix from the previous step.
In the fragment shader we transform the fragment with the matrix passed in step 2 to find where to sample the shadow map, then compare the depth of our fragment with the value stored there. This is where the ‘magic’ happens. If our fragment’s depth is greater, it is farther from the ‘shadow source’ than some other fragment — something was nearer to the light bulb — so it is in shadow, and we colour the pixel with the shadow in mind. If the value is smaller, our pixel is not overshadowed by any object and it is in the lit zone.
(See: defold-light-and-shadows/light_and_shadows/materials/model/model.fp at main · Dragosha/defold-light-and-shadows · GitHub)
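The comparison described in step 3 can be sketched in a Defold fragment shader roughly like this (a minimal sketch — names such as `tex_shadow` and `var_light_space_pos` are illustrative, not taken from the linked model.fp):

```glsl
// Fragment position transformed by the light's view-projection matrix,
// passed down from the vertex shader.
varying highp vec4 var_light_space_pos;
uniform lowp sampler2D tex_shadow; // the shadow map render target

float shadow_factor()
{
    // Perspective divide and remap from [-1, 1] clip space to [0, 1] texture space.
    vec3 proj = var_light_space_pos.xyz / var_light_space_pos.w;
    proj = proj * 0.5 + 0.5;

    float closest_depth = texture2D(tex_shadow, proj.xy).r; // nearest surface seen by the light
    float current_depth = proj.z;                           // this fragment's depth in light space

    // A greater depth means something else was nearer to the light: we are in shadow.
    return current_depth > closest_depth ? 0.5 : 1.0;
}
```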
Regarding 1:
Do you have any tips on how to define the view and projection matrices for a given light source? The camera component gives me matrices — am I thinking correctly if I consider placing a camera in point_light.go just for the purpose of getting those matrices? Or is there a better approach?
EDIT: OK, both Lights and Shadows and Jhonny’s shadow mapping example create those matrices in code. I wonder if my idea of utilising a camera component for this is impossible or non-optimal? Isn’t that the camera component’s main job — doing the hard work of producing the correct matrices from what you set up in the Editor?
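For reference, building those matrices in code for a directional light is only a few lines of render-script Lua (a sketch — the light position, target and orthographic extents below are made-up values, not from either example):

```lua
-- Build a light 'camera' purely from matrices, without a camera component.
local light_pos = vmath.vector3(0, 10, 10)   -- assumed light position
local look_at   = vmath.vector3(0, 0, 0)     -- point the light looks at
local up        = vmath.vector3(0, 1, 0)

local light_view = vmath.matrix4_look_at(light_pos, look_at, up)

-- A directional light uses an orthographic projection; the extents control
-- how much of the world the shadow map covers.
local light_projection = vmath.matrix4_orthographic(-20, 20, -20, 20, 0.1, 100)

-- light_projection * light_view is the matrix passed on to the shadow material.
```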
@jhonny.goransson is there any advantage of this over creating it in init()? I see in @Dragosha Lights and Shadows the render target for shadowmap is created in this way - there is a “shadowmap” render target resource and it’s linked to the render (and then used in render script):
I can see that width and height are defined here, so it can’t be initialised with the initial window size. It either has to match the window size from game.project, or it should be recreated in the render script when the window size changes.
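For a window-sized RT created in code, the render script can react to resizes something like this (a sketch; `self.my_rt` is an assumed RT handle created earlier in `init()`):

```lua
-- Render script sketch: resize a code-created RT when the window changes.
function on_message(self, message_id, message)
    if message_id == hash("window_resized") then
        render.set_render_target_size(self.my_rt,
            render.get_window_width(),
            render.get_window_height())
    end
end
```

Note that this only applies to RTs that should track the window — a shadow map’s resolution is independent of the window size.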
Besides that, one can add as many color attachments as one wants in the Render Target editor, but from what I know we can only create up to four color attachments using the API — is this a bug?
It doesn’t seem to matter where exactly the RT is created; I just wanted to see how RT creation works in the editor after this feature was added in some version of Defold — before that, the RT was created in code.
The thing is that the size of the shadow map RT does not depend directly on the size of the window — they don’t even share the window’s proportions. It is its own view from the light source, projected onto the regular camera’s view. But this resolution does affect the accuracy with which shadows are drawn: the higher the RT resolution, the less ‘laddering’ (aliasing), especially when the light source is at a steep angle to the surface in the regular camera’s view. There is a bias term in the code to eliminate the moiré effect (shadow ‘acne’), plus several samples taken over the shadow map and averaged with noise — all of this is done just to improve the final look of the shadow, to make it less ugly.
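The bias and averaging mentioned above look roughly like this in a fragment shader (a generic sketch of slope-scaled bias plus 3×3 percentage-closer filtering, not the exact code from the repo; the constants and the 2048 resolution are assumptions):

```glsl
uniform lowp sampler2D tex_shadow;

float shadow_pcf(vec3 proj, float n_dot_l)
{
    // Slope-scaled bias: steeper light angles need a larger offset to avoid acne.
    float bias = max(0.005 * (1.0 - n_dot_l), 0.0005);

    // 3x3 percentage-closer filtering: average several shadow map samples
    // to soften the 'ladder' edges.
    float shadow = 0.0;
    vec2 texel = vec2(1.0 / 2048.0); // assumed shadow map resolution
    for (int x = -1; x <= 1; ++x)
    {
        for (int y = -1; y <= 1; ++y)
        {
            float depth = texture2D(tex_shadow, proj.xy + vec2(float(x), float(y)) * texel).r;
            shadow += proj.z - bias > depth ? 1.0 : 0.0;
        }
    }
    return shadow / 9.0; // 0.0 = fully lit, 1.0 = fully shadowed
}
```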
As for using the camera component to get the matrices — I guess you can, why not. I haven’t tried its new version with the extended API in my own work yet.
It is also worth looking at other material on this topic; besides explaining the principles, it has the main thing — illustrations.
Yes, the advantage is that since it is a resource, it can act as a texture elsewhere in the engine. Meaning you can attach it in the editor to a model, for example, so you can create debug panels or simplify writing effects. Or wrap it into an atlas so it can be used for particles or sprites.
An RT created in a render script either needs to be used only in the render script, or passed into a script to be wrapped into a texture resource, which is just more tedious.
In a shadow mapping example it perhaps doesn’t make things that much easier, except for the creation, but RTs can be used for so much more than just shadow mapping.
In my example we had neither this functionality nor the camera API; if I did it today I’d use both. But you don’t have to.
Nice indeed! One can then display anything on a sprite or model, e.g. make a virtual banner/TV screen or anything like that with a dynamic texture — such as a view from a surveillance camera, or mirrors!
Yes, exactly. And I think we want to add a “target” property to cameras that can take RTs, so you can hook most of these things up in the editor without using render scripts.
@jhonny.goransson The camera component looks great for spotlights and directional lights (orthographic projection), but AFAIK we don’t yet have a way to get a frustum from the camera API, right? Is it planned?
Another crucial thing for fully using camera components: when there is more than one camera sending `set_view_projection` messages to the render script, we get only its id, not the whole URL. I noticed this because I usually don’t name components uniquely, so I had two game objects, each with a component named “camera”, and both received messages had `id = "camera"`. In such a case there is currently no way to differentiate which camera the view and projection matrices came from.
Yeah, you can get everything from the API, or use render.set_camera().
And when we render.draw() we should pass a frustum for culling, right? Hmm, but we can calculate it as proj * view from that camera, right? So yeah, no question.
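Both options side by side in a render script (a sketch; `self.model_pred` is an assumed predicate, and the `render.set_camera` options table with `use_frustum` is only available in newer Defold versions):

```lua
-- Option A: compute the frustum yourself and pass it to render.draw().
local frustum = proj * view
render.set_view(view)
render.set_projection(proj)
render.draw(self.model_pred, { frustum = frustum })

-- Option B: let a camera component drive view, projection and culling.
-- render.set_camera("main:/camera#camera", { use_frustum = true })
-- render.draw(self.model_pred)
```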