Will Sampler3D be supported?

Hi. I noticed that Defold shaders only expose sampler2D and sampler2DArray, not sampler3D. Since Defold supports OpenGL 3.3, I was wondering why sampler3D / 3D textures are not available in materials.

Is this due to cross-platform limitations (for example WebGL or OpenGL ES compatibility), or is it simply not implemented in the engine yet?

Are there any plans to support sampler3D / 3D textures in the future?

I'm asking because I wanted to experiment with things like voxel lighting stored in a 3D texture.

You can create 3D textures in the engine. What functionality are you currently lacking? You can always file a feature request on GitHub and we'll discuss it!

I can create a 3D texture from a C++ extension using dmGraphics::NewTexture with TEXTURE_TYPE_3D, and that part works. The missing piece is on the shader/material side: there's no way to bind that texture as a sampler3D uniform through the .material file system, and the GLSL preprocessor doesn't seem to accept sampler3D declarations.

My use case is voxel lighting: I generate a per-chunk light volume (16×16×16) on the CPU and want to upload it as a true 3D texture, so the fragment shader can do native texture(sampler3D, uvw) lookups with hardware trilinear filtering, instead of the packed 2D workaround I'm currently using (flattening XZ into the U axis and Y into the V axis of a 256×16 texture).
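To make the packed layout concrete, here is a small Python sketch of the index math (u = x + z·16, v = y). The function names are purely illustrative, not part of any engine API:

```python
def pack_uv(x, y, z, size=16):
    """Map integer voxel coords (x, y, z) of a size^3 volume
    to texel coords (u, v) in a (size*size) x size 2D texture."""
    return x + z * size, y

def unpack_xyz(u, v, size=16):
    """Inverse mapping: recover (x, y, z) from packed texel coords."""
    return u % size, v, u // size
```

The mapping is a bijection over the 16×16×16 grid, so every voxel has a unique texel and no data is lost in the flattening.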

The packed 2D approach works, but it requires manual trilinear math in GLSL (8 texture fetches), whereas a real sampler3D would do the same thing in a single hardware-interpolated fetch.

Here's what I'm trying to achieve:

Current workaround: packed 2D texture, manual trilinear interpolation, 8 fetches:

// Light volume packed as 256x16 texture: u = x + z*16, v = y
vec4 sample_light(vec3 pc) {
    float u = (floor(pc.x) + 0.5 + floor(pc.z) * 16.0) / 256.0;
    float v = (floor(pc.y) + 0.5) / 16.0;
    return texture(texture2, vec2(u, v));
}

vec4 trilinear_light(vec3 pos) {
    vec3 i = floor(pos);
    vec3 f = fract(pos);
    // 8 manual texture fetches to reconstruct trilinear interpolation
    vec4 c000 = sample_light(i + vec3(0,0,0));
    vec4 c100 = sample_light(i + vec3(1,0,0));
    vec4 c010 = sample_light(i + vec3(0,1,0));
    vec4 c110 = sample_light(i + vec3(1,1,0));
    vec4 c001 = sample_light(i + vec3(0,0,1));
    vec4 c101 = sample_light(i + vec3(1,0,1));
    vec4 c011 = sample_light(i + vec3(0,1,1));
    vec4 c111 = sample_light(i + vec3(1,1,1));
    vec4 c0 = mix(mix(c000,c100,f.x), mix(c010,c110,f.x), f.y);
    vec4 c1 = mix(mix(c001,c101,f.x), mix(c011,c111,f.x), f.y);
    return mix(c0, c1, f.z);
}
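For reference, the same 8-corner reconstruction can be mirrored on the CPU. This is a sketch in Python, assuming clamp-to-edge behaviour at the volume borders (the shader above doesn't specify what happens there):

```python
import math

def trilinear(volume, pos):
    """CPU reference of the 8-fetch trilinear reconstruction.
    `volume` is indexed volume[x][y][z]; `pos` is a float (x, y, z)."""
    n = len(volume)
    ix, iy, iz = (int(math.floor(c)) for c in pos)
    fx, fy, fz = (c - math.floor(c) for c in pos)

    def fetch(x, y, z):
        # clamp-to-edge is an assumption, not stated in the original shader
        cl = lambda v: max(0, min(n - 1, v))
        return volume[cl(x)][cl(y)][cl(z)]

    # mix(a, b, t) = a*(1-t) + b*t, as in GLSL
    mix = lambda a, b, t: a * (1.0 - t) + b * t
    c00 = mix(fetch(ix, iy,     iz),     fetch(ix + 1, iy,     iz),     fx)
    c10 = mix(fetch(ix, iy + 1, iz),     fetch(ix + 1, iy + 1, iz),     fx)
    c01 = mix(fetch(ix, iy,     iz + 1), fetch(ix + 1, iy,     iz + 1), fx)
    c11 = mix(fetch(ix, iy + 1, iz + 1), fetch(ix + 1, iy + 1, iz + 1), fx)
    return mix(mix(c00, c10, fy), mix(c01, c11, fy), fz)
```

A quick sanity check on the math: trilinear interpolation reproduces any linear field exactly, so sampling a volume filled with f(x, y, z) = x + 2y + 3z at fractional positions returns f directly.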

What I would like to write instead: sampler3D, 1 fetch, hardware interpolation:

uniform sampler3D light_volume; // ← this is what's missing
uniform vec3 chunk_origin;      // world-space origin of the chunk's light volume

vec4 sample_light_3d(vec3 world_pos) {
    // Normalize position into 0..1 range for the 16x16x16 chunk volume
    vec3 uvw = (world_pos - chunk_origin) / 16.0;
    return texture(light_volume, uvw); // hardware trilinear single fetch
}

The second version offloads trilinear interpolation entirely to the GPU's texture unit, reducing fragment shader complexity from 8 fetches + 7 mix() calls down to a single instruction.

I will also file a feature request on GitHub.

I have an example project here that uses 3D textures:

This creates a 3D texture from a script component and then binds it in a render script. Would that be enough, or do you need something else/more?
