I am proud of uncovering this, but maybe only shader wizards will understand.
Some of you may know that modern examples of “Mode 7” are frequently just a 3D plane, rendered either as a mesh or in a fragment shader. You might know it’s not the same thing as what Super NES/Famicom games did…and maybe that’s about it?
A faithful recreation has been on my bucket list since I began learning about 3D projection myself five years ago. Lately, there have been breakthroughs in deconstructing & communicating exactly how various original Mode 7 effects work. Like other shaders that bend the rules to rewind time (e.g. PS1 style), what I’ve done is adapt the original parameters to a GLSL-based solution. Why did I dig through the internet to copy a 35-year-old kludge? Eliminating motion sickness!
Using 2D over 3D to imitate early-1990s games causes the foreground & background to diverge. Most of those games back then lacked synchrony on this anyway, but modern 3D distorts objects & motion at the edges of the screen, especially when a vertical FOV is stretched across a wide aspect ratio. [warning: eventual nausea]
True 1990s-style “weak perspective”, emblematized by original Mode 7, geometrically excludes that nonlinear distortion. Consumer hardware like the SNES could not afford the calculations responsible for it. The upshot is no atypical queasiness as you rotate your heading. Like magic!
An outcome like this is vital if you want to include a pixel art horizon in forward perspective. I’m stoked that I figured it out, because I love those old pixel-perfect panoramas. It seems like knowledge of that particular detail is fading? (@dlannan) I hope I can adapt it to 3D, assuming early polygonal games relied on this optimization too.
I have much more to do before I am finished with this, but I intend to share a project on GitHub, since the technique is an iconic style rather than a personal one.
Today, I remembered that I only stepped back from Defold for want of more 3D features, but Defold is perfect for a project like this. So far, I only have Godot shader code. I’m drafting a new Defold render script; it’s a relief to claim manual control over frustum culling again, because when you project like this, vertices & AABBs do not end up in the same place…
Old Lua I wrote for Defold is so much nicer & easier to come back to than old GDScript I wrote.
Ahh, good times. Looking really nice. A lot of early machines did this sort of thing (the Commodore series: Amiga, C64, etc.) and even early PCs as well. It’s a nice low-cost way to make a pretty cool-looking scene.
A lot of the early machines didn’t even have floating-point units… so running “minimal math” solutions was always the go. With the minor exception of the Amiga, because it did have what we would class as a GPU (the Copper) back in 1985.
It was a fascinating little runtime asm you could run while the CPU was running. Again, no floats… but it meant examples like this were common in demos and games. Well done. Brings back some great memories.
That’s what I was hoping to hear. It would be terrific to add this perspective puzzle piece to that dithered 3D art style I was developing that resembles F18 Interceptor. Otherwise, I have a compelling opportunity to embrace more 2D-oriented design in the meantime.
I will lean harder into trying to solve this for 3D before I give up, but I might ask for help later. Trying to deliberately project 3D wrong is a labyrinth of accidentally-correct geometries.
The frustrating thing with current-day 3D systems is the lack of access to really important things like the framebuffers (on the GPUs/video cards). While I get the need to “IP control” their internals, it means that fascinating old-skool techniques are nearly impossible to do - apart from doing it all on the CPU in an “emulated” framebuffer.
One of my favourites was compiled sprites - where you just executed the sprite at the framebuffer memory position. The sprite was essentially a small piece of asm that set the pixels from that position. It meant things like collision, masking, animation and even layer blending were built into the sprite itself. Ahh… miss that stuff.
Thanks again… it warms my heart that this stuff isn’t dead yet.
Yesterday was spent getting back up to speed in Defold. Unless there’s a long distraction, today I’ll port the shaders. Then we can talk about the nitty-gritty in a Defold context.
More background to this, spreading out the info dump: The insight that finally made this concept ‘click’ for me is thanks to Retro Game Mechanics Explained. This revelatory visualization of a frame in F-ZERO became my mental model. (timestamp)
With that image in my head, I thought about what it would mean for a quad of UV coordinates. It is the inverse of transforming a texture itself; for example, scaling the texture bigger means scaling UV coordinates smaller. I was able to exploit hardware varying interpolation to reproduce per-scanline operations iterating over V, interpolated across U.
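To make that concrete, here is a stripped-down vertex shader sketch of the idea. It is not the project’s actual code, and every uniform name is a placeholder made up for the example: the quad’s corner UVs get the inverse transform once, and the hardware’s varying interpolation fills in every pixel in between.

// Sketch: zoom & scroll a texture by inverse-transforming the quad's
// corner UVs in the vertex shader; varying interpolation does the rest.
attribute vec3 position;
attribute vec2 texcoord0;
uniform mat4 view_proj;  // standard clip-space transform
uniform float zoom;      // 2.0 = texture appears twice as big on screen
uniform vec2 scroll;     // camera offset over the texture, in UV units
varying vec2 var_texcoord0;

void main()
{
    // the inverse relationship: to show the texture 2x bigger,
    // hand the quad a UV span that is 2x smaller
    var_texcoord0 = (texcoord0 - vec2(0.5)) / zoom + vec2(0.5) + scroll;
    gl_Position = view_proj * vec4(position, 1.0);
}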
The fact that this zoom-based projection stays square (the same scale on both axes), despite being non-linear row by row, has got to be mathematically related to the results.
Is that particular behind-the-scenes trick one that you knew, @dlannan?
I’m most of the way to Gen X at 38, so I’m not calling you old, but I didn’t get far in programming until now in life.
Edit: I guess you did describe the same thing. I feel the same as you, which is why I want to share this – I would prefer to see knowledge of this survive.
Soz. I should have explained some of the maths. There are a lot of examples out there.
Many of these techniques (although the Nintendo and PSX both had specific hardware for it) were based around matrix transforms like the rotozoomers:
The code is really quite simple, but it allows for a basic type of 2D transform that is fast on integer-based platforms specifically.
(there’s tons of examples of this)
I really like Matt Parker’s brilliant vid on this:
You can see why the method was so powerful back in the day, since you are using a bunch of simple adds and mults, and more importantly, it is pixel-consistent.
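For reference, the classic rotozoomer boils down to one affine transform of the texture coordinates. On the old machines it was stepped across each scanline with fixed-point adds of (du, dv); a per-fragment GLSL version of the same transform could look roughly like this (a sketch with made-up uniform names, not code from this thread):

// Rotozoomer sketch: rotate + zoom + scroll as a single UV transform.
// Old CPU versions walked each scanline adding fixed-point (du, dv);
// here the GPU just evaluates the same affine map per pixel.
varying vec2 var_texcoord0;
uniform sampler2D tex;
uniform float angle;   // rotation of the texture, radians
uniform float zoom;    // > 1.0 makes the texture appear larger
uniform vec2 scroll;   // translation in UV space

void main()
{
    // inverse rotation & inverse zoom, because we move the UVs, not the texture
    float c = cos(-angle);
    float s = sin(-angle);
    mat2 inv_rot = mat2(c, s, -s, c);
    vec2 uv = inv_rot * (var_texcoord0 - vec2(0.5)) / zoom + vec2(0.5) + scroll;
    gl_FragColor = texture2D(tex, fract(uv));
}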
Compiled sprites (side discussion) are even more cool. You literally compiled a sprite together with its pixel data, and when you set its memory position (addr) while the code was running, it just worked… so you would make this huge array of compiled sprites that would bump, layer and animate… and it was all code (technically it had art, but the pixels were just a move instruction). Back then it was important to minimize your pixel write count, i.e. how much of the screen buffer you touched at any one time.
Also, the Amiga had a different framebuffer layout. It was built from bitplanes, not a pixel array (some early machines did this). The added benefit is that parallax scrolling was kinda free… you moved one plane’s offset… done.
On machines like the C64 there were some truly nutty people doing cycle-accurate “chasing the beam” tricks. Because the hardware was fixed, you could calculate the actual clock cycles of when to update a pixel behind the beam (the scanline raster position) and produce quite amazing results for a machine that couldn’t really do much at 1 MHz (or so it seemed).
It’s kinda cool there’s a big retro C64 demo movement. Some of the best demos have come out recently on it… and I think it’s because it’s more accessible; you can tweak every nook and cranny of the machine. These days, the limitations on hardware access… imho… have really dented our ability to investigate and tinker with the hardware we buy and own.
This. It wasn’t strictly necessary, but I eliminated a dimension from the view matrix, so a regular 3D camera drives the 2D transforms:
const vec2 FLIP = vec2(-1.0, 1.0);
const vec3 UP = vec3(0.0, 1.0, 0.0);
//...
//// TODO: rewrite some of this in fixed-point math?
// construct a matrix to eliminate Z roll, so _rotate & pitch give valid results
vec3 forward = mtx_view[2].xyz;
// clamp pitch to less than 90 degrees
forward.y = clamp(forward.y, -0.99999, 0.99999);
forward = normalize(forward);
vec3 right_3d = normalize(cross(UP, forward));
mat3 camera_mtx = mat3(right_3d, cross(forward, right_3d), forward);
// construct affine matrices for rotation & pitch
vec2 right = camera_mtx[0].xz;
vec2 up = vec2(camera_mtx[1].y, camera_mtx[2].y);
_rotate = mat2(right.xy, FLIP * right.yx); // inverse is -FLIP times the first vector
mat2 pitch = mat2(-FLIP * up.xy, up.yx); // inverse is +FLIP times the second vector
The Super NES/Famicom offered horizontal-blank DMA, letting developers rewrite the Mode 7 matrix & scroll registers at the end of each scanline. That was the only way the Mode 7 plane effect was possible; because the parameters could only change once per scanline, games never rolled the camera or had (convincing) changes in terrain elevation.
In the shader, I only have var_texcoord0.y, or (2.0 * var_texcoord0 - vec2(1.0)).y, to stand in for the scanline position.
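To spell out what replaces that HDMA table on the GPU, here is a minimal fragment-program sketch. The uniform names are placeholders, pitch is folded into a fixed horizon, and aspect correction is omitted; the actual project presumably uses the _rotate & pitch matrices from the earlier snippet rather than a bare yaw angle.

// Sketch of the per-scanline "Mode 7" ground plane, driven by V instead of HDMA.
varying vec2 var_texcoord0;
uniform sampler2D ground_tex;  // the map texture
uniform float yaw;             // camera heading; stands in for _rotate above
uniform vec2 cam_pos;          // camera position on the plane, in UV units
uniform float cam_height;      // eye height above the plane
uniform float horizon;         // screen-space Y of the horizon, -1..1
uniform float focal;           // projection distance

void main()
{
    // centered screen coordinates, -1..1, i.e. (2.0 * var_texcoord0 - vec2(1.0))
    vec2 screen = 2.0 * var_texcoord0 - vec2(1.0);

    // rows at or above the horizon are sky; a backdrop would be drawn there instead
    float row = horizon - screen.y;
    if (row <= 0.0) discard;

    // one divide per row: the zoom factor the HDMA table used to supply per scanline
    float scale = cam_height / row;

    // every pixel in the row shares that scale; U simply interpolates across it
    vec2 plane = vec2(screen.x, focal) * scale;

    float c = cos(yaw);
    float s = sin(yaw);
    mat2 rotate = mat2(c, s, -s, c);

    gl_FragColor = texture2D(ground_tex, fract(rotate * plane + cam_pos));
}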
Hehe. The horizontal and vertical blanks were the coders’ playgrounds back then…
The C64 still freaks me out though… some of the ‘recent’ tricks discovered (remember, this is a machine from 1982… 8-bit… many years before 16-bit machines and hardware gfx chips) are properly nuts.
I remember when we got ours in 1983… it was insane… the manual it came with had the hardware schematics for the whole machine!
I can’t say I’ve replicated much in 3D… it’s hard to do some of the things that were available. As I messaged you, just drawing lines (which was an art form back then) is still a bit messy. But I think it’s great what you are doing here; I love seeing the replication of these old techniques and it really fills me with wonderful nostalgia… so thanks.
3D has been the ultimate goal, even though it would be wiser to settle for Mode-7-style and practice making a game. The next objective is to ratchet up from “Mode 7” to “Super FX chip”, under the assumption that the Super FX chip, again, did not calculate 3D like modern engines do. If I am wrong, I want to confirm that. Eliminating the distortion exhibited in modern engines is the objective, and search results (or AI interrogations) are not super helpful.
If I must, I could fall back to discrete horizon graphics that scale & scroll independently, kept in synchrony with the modern 3D camera.
In the OP, Final Fantasy VI’s Blackjack is positioned with a crude approximation for world Y that makes perfect sense for Mode 7, but is not helpful for figuring out the rest of vertex positioning in a Super-FX-style context.
Last year I learned I have ADHD. Optimistically, I guess I make games like a 3D printer. When I’m done, I want an art asset style/spec set in stone so I don’t make doomed artwork again.