How to deal with "coords-related" shaders?

Hello!

There is a general question at the end, but I need this for another attempt at a waving-flora shader. I have a fragment program for a quad that currently has 3 textures multiplied or added together; one of them contains the green-ish flora pixels, and the shader focuses on that part:

(...)
vec4 color_normal = texture2D(tex0, uv.xy);
vec2 uv = var_texcoord0.xy;

// if color is green-like modify coords
if (color_normal.g > 0.4 && color_normal.r < 0.4 && color_normal.b < 0.4)
{
    float X = uv.x*25.+time.x;  //time.x is currently constant, so no waving effect
    float Y = uv.y*25.+time.x;
    uv.y += cos(X+Y)*0.01*cos(Y);
    uv.x += sin(X-Y)*0.01*sin(Y);
}

color_normal = texture2D(tex0, uv.xy);

(...)

And after this it is all drawn blended on screen with lights and other effects.
But what is annoying me is that it is dependent on the position of camera that is following the character - aka position of each pixel on the screen - when it is changing, the whole calculation is obviously giving different result. This is just an example, but my problem applies to every simple shader that is having coords of texel in the calculations.
How should I approach such coords-related shaders to get rid of this effect :point_down: ?

For what it’s worth, it’s a neat effect so far.

Are those quads facing the camera exactly, or angled a bit? Might give clues for the engine folks.

I was also expecting UV space would be relative to the quad’s face, independent of the camera. Sadly I’m not familiar with how it’s calculated and passed to the shader.

Looking at your code, it should only be dependent on the time.x variable. What is this value set to?

time.x is constant (1.0), so in this case I expected the texture not to wave, and indeed it only stays still while the camera is not moving. When I pass a time value updated every frame it waves, but the dependence on the camera position is still there:

When time.x is constant it is very noticeable, as below. uv is simply the texel coordinate that is modified here, and that’s why the effect depends on the movement of the camera, which changes what is displayed on the quad (the above code is part of my quad fp). Moving in the y direction also alters the picture:

I do see why the effect is dependent on the camera, but I am looking for a way to make it independent. I believe I need to pass some offset to the fp every frame, but I don’t know how to compute such an offset.

It is of course related to my previous attempt, but that one was based on a vertex program, so I would need a quad for each bush or grass sprite. In this one I wanted to do the calculations in the fp when drawing those green colors on screen - modify their coords (then pixelate it, but that’s not important now).

None of the other calculations should change with camera or time.
Does it behave the same (that it moves) if you remove the time.x from the calculation?

It feels like we’re missing some info about the shader.
E.g. does this even compile? (sampling using “uv” before uv is declared)

vec4 color_normal = texture2D(tex0, uv.xy);
vec2 uv = var_texcoord0.xy;

It waves when time.x is removed. And yes, there was some bad editing when I posted the first fragment - I just wanted to focus on the question, so I stripped the code. Here’s the whole fp, but note that the //Pixelated and //Lights parts are entirely unrelated:

varying mediump vec2 var_texcoord0;

uniform lowp sampler2D tex0; // sprites
uniform lowp sampler2D tex1; // pixelated particles
uniform lowp sampler2D tex2; // lights
uniform lowp vec4 tint0;
uniform lowp vec4 tint1;
uniform lowp vec4 tint2;

uniform lowp vec4 time;

#define def_sharpness 100.0
#define def_pixel_size 10.0
#define PixelArt_res_x 480.0
#define PixelArt_res_y 320.0

float sharpen(float pix_coord) {
    float norm = (fract(pix_coord) - 0.5) * 2.0;
    float norm2 = norm * norm;
    return floor(pix_coord) + norm * pow(norm2, def_sharpness) / 2.0 + 0.5;
}

void main()
{
    // Pre-multiply alpha since all runtime textures already are
    vec4 tint0_pm = vec4(tint0.xyz * tint0.w, tint0.w);
    vec4 tint1_pm = vec4(tint1.xyz * tint1.w, tint1.w);
    vec4 tint2_pm = vec4(tint2.xyz * tint2.w, tint2.w);

    // Pixelated particles
    vec4 color_pixelated = texture2D(tex1, vec2(
        sharpen(var_texcoord0.x * PixelArt_res_x) / PixelArt_res_x,
        sharpen(var_texcoord0.y * PixelArt_res_y) / PixelArt_res_y
    )) * tint1_pm;

    // Lighting
    vec4 color_lights = texture2D(tex2, var_texcoord0.xy) * tint2_pm;

    // Normal sprites, tiles, background:
    vec4 color_normal = texture2D(tex0, var_texcoord0.xy) * tint0_pm;
    vec2 uv = var_texcoord0;

    if (color_normal.g > 0.4 && color_normal.r < 0.4 && color_normal.b < 0.4)
    {
        float X = uv.x*25.+time.x;
        float Y = uv.y*25.+time.x;
        uv.y += cos(X+Y)*0.01*cos(Y);   // waving flora
        uv.x += sin(X-Y)*0.01*sin(Y);
    }

    color_normal = texture2D(tex0, uv.xy);

    gl_FragColor = (color_normal + color_pixelated*color_pixelated.a) * color_lights;
}

A note about the render script: I draw all the sprites and tiles (tag tile) to one render target, then some particles (tag pixelated) to a second render target, and lighting (tag lights) to a third rt. Then all of them are put on the quad model, and the above is the fp of its material.

I suggest debugging one thing at a time.
E.g. remove the “time.x” and focus on color_normal:

    vec4 color_normal = texture2D(tex0, var_texcoord0.xy) * tint0_pm;
    vec2 uv = var_texcoord0;

    if (color_normal.g > 0.4 && color_normal.r < 0.4 && color_normal.b < 0.4)
    {
        float X = uv.x*25.;
        float Y = uv.y*25.;
        uv.y += cos(X+Y)*0.01*cos(Y);   // waving flora
        uv.x += sin(X-Y)*0.01*sin(Y);
    }

    color_normal = texture2D(tex0, uv.xy);

    gl_FragColor = color_normal;

Does this still produce the wobbliness?

Yes, as I wrote above, it still wobbles when moving the camera.

I see it like this: when the fp runs it calculates new coordinates for each pixel, so the image looks like a Van Gogh painting, but static (without time.x). But when I move the camera I simply change the image that is on the quad, so a pixel that was analysed in the previous frame is now in a different position and will be displaced accordingly. Looking at a line, for example, its y coordinate would be different in the next frame (I am displacing in both directions).

It would be perfect for me if I could have a static camera view, like in some Celeste levels or in ScourgeBringer. The wobbling is not a bug here, but a feature that follows directly from what is passed into the texture. But I want the camera to follow the character :frowning: I want to make the waving independent of the camera movement, but I also don’t want to draw another quad, so that’s why I was thinking about passing some offset to the fp.

Are you sampling from the rendertarget? That would explain it.

Compensating for the camera movement seems a bit difficult.
I guess you could upload the texture size and how many texels the camera has moved, and use that.

Can you not apply the wobble directly when rendering the sprites in the first pass?


Yes, so this could be the issue here, right?

I will try it! :wink:

Yes, since the entire texture is updating (moving), the var_texcoord0.xy that referenced a certain pixel the last frame, might not reference the same pixel this frame.
You’d need to modify the var_texcoord0.xy to compensate for the camera movement.
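A minimal sketch of what that compensation could look like, assuming a hypothetical `camera_offset` uniform (not a Defold built-in) that a script sets every frame to the camera position divided by the render-target size, so it is expressed in UV units:

```glsl
varying mediump vec2 var_texcoord0;

uniform lowp sampler2D tex0;
uniform lowp vec4 time;
// Hypothetical uniform: camera position in UV space,
// i.e. camera world position / render-target size, updated each frame.
uniform lowp vec4 camera_offset;

void main()
{
    vec2 uv = var_texcoord0.xy;
    // Compute the wave phase in "world UV" space: when the camera moves,
    // the phase shifts together with the content, so the wave pattern
    // stays attached to the same texels instead of to the screen.
    vec2 world_uv = uv + camera_offset.xy;

    vec4 color_normal = texture2D(tex0, uv);
    if (color_normal.g > 0.4 && color_normal.r < 0.4 && color_normal.b < 0.4)
    {
        float X = world_uv.x * 25.0 + time.x;
        float Y = world_uv.y * 25.0 + time.x;
        uv.y += cos(X + Y) * 0.01 * cos(Y);   // waving flora
        uv.x += sin(X - Y) * 0.01 * sin(Y);
    }
    gl_FragColor = texture2D(tex0, uv);
}
```

Only the phase inputs use the offset; the actual sampling still happens at the (displaced) screen-space UV, so the displacement itself stays in sync with the render target.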


So the approach with the sprite’s fp works and is indeed not affected by camera movement, but it has a disadvantage: I need to send the time constant to each sprite’s fp every frame instead of making one pass on the quad, and I think that’s too much :confused: Additionally I wanted to apply the same effect to flora in tilemaps, and that is another component that would need the time constant.
I will try to find a way to easily compensate camera movement for the above approach.

Approach with sprites fp waves:

  1. Draw a sprite with a fp that applies sin to texel coordinates to a render target
  2. Draw that render target on a quad to combine with other rts

[+ no camera compensation]
[- sending time constant to all sprites, tiles(?), a lot of fps]

or approach with quad fp waves:

  1. Draw all sprites normally to rt (default sprite.fp)
  2. Draw that render target on a quad and try to modify the texcoord in quad.fp (like I did)
  3. Compensate texcoord with some offset of camera movement

[+ one fp that changes the picture based on the color of the pixel (green-likes)]
[- wobbling or camera compensation calculations every frame]

Or perhaps you have some other ideas on how to set up a pipeline?
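For reference, a rough sketch of what the fp in option 1 could look like, reusing the wave math from the quad shader above. `texture_sampler` is the sampler name from Defold’s built-in sprite material; since the sprite’s UVs are atlas-relative, they don’t change with the camera, and the green-color check is no longer needed if the material is assigned only to flora sprites. The `time` constant would still have to be updated every frame per component, as noted:

```glsl
varying mediump vec2 var_texcoord0;

uniform lowp sampler2D texture_sampler;
uniform lowp vec4 time; // still needs per-component updates every frame

void main()
{
    vec2 uv = var_texcoord0.xy;
    // Same wave as the quad version, but in atlas UV space,
    // so the result is independent of the camera.
    float X = uv.x * 25.0 + time.x;
    float Y = uv.y * 25.0 + time.x;
    uv.y += cos(X + Y) * 0.01 * cos(Y);   // waving flora
    uv.x += sin(X - Y) * 0.01 * sin(Y);
    gl_FragColor = texture2D(texture_sampler, uv);
}
```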