Now that Defold is slowly being nudged toward being a viable 3D engine for larger game projects, a necessity would seem to be the ability to render a GUI screen (either a fully decorated GUI screen or a simple pie/9-slice node) onto a GO, so that an object in 3D space can be decorated with a flat, dynamic visual (example usages: an ingame poster, flag, computer screen, TV, painting, or house window). Such visuals would be unnecessarily difficult to animate using traditional animations, and especially difficult to retexture a portion of repeatedly.
Is this somehow possible in the current version of Defold, is there a way to modify the render script to make it work, or should I submit a feature request?
You can already render GUI elements to render targets (using unique render predicates), which are then used as textures on meshes that follow game objects. You would need one render predicate per object you want to decorate this way.
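As a rough illustration of that approach, a render script might look something like the sketch below. This is a hedged outline, not a drop-in script: the predicate tags (`screen_gui`, `screen_model`), the 512x512 target size, and the material setup are all assumptions for illustration.

```lua
-- render script excerpt (sketch): draw the gui into an offscreen render
-- target, then draw 3D models whose material samples that target.
function init(self)
    self.gui_pred = render.predicate({"screen_gui"})     -- assumed tag on the gui material
    self.model_pred = render.predicate({"screen_model"}) -- assumed tag on the mesh material

    local params = {
        [render.BUFFER_COLOR_BIT] = {
            format = render.FORMAT_RGBA,
            width = 512,
            height = 512, -- gui "screen" resolution
        },
    }
    self.gui_rt = render.render_target("gui_rt", params)
end

function update(self)
    -- 1. render the gui into the offscreen target
    render.set_render_target(self.gui_rt)
    render.clear({[render.BUFFER_COLOR_BIT] = vmath.vector4(0, 0, 0, 1)})
    render.set_viewport(0, 0, 512, 512)
    render.set_view(vmath.matrix4())
    render.set_projection(vmath.matrix4_orthographic(0, 512, 0, 512, -1, 1))
    render.draw(self.gui_pred)
    render.set_render_target(render.RENDER_TARGET_DEFAULT)

    -- 2. draw the 3D scene; the mesh material samples the target on unit 0
    render.enable_texture(0, self.gui_rt, render.BUFFER_COLOR_BIT)
    render.draw(self.model_pred)
    render.disable_texture(0)
end
```

The mesh's material would declare a sampler bound to texture unit 0; the rest of the usual 3D view/projection setup is omitted here for brevity.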
Making this simple and easy would be nice.
Currently the GUI system is not meant for 3D at all. I’ve done experiments forcing perspective cameras, and things break. Maybe others have had better experiences.
Ok. I think what you are asking is possible, but probably not in the way you want/expect.
If you use the imgui extension: https://github.com/britzl/extension-imgui
This actually renders to an OpenGL texture. That texture can be used in a material and thus rendered in 3D. I’ve honestly not bothered testing this, but it should be a fairly simple thing to do.
If you want the Defold gui rendered into 3D, then @Pkeod’s suggested method would work: the GUI is rendered separately in the render script, and that render target can be assigned to a material and then used on a 3D object.
To do this, you would also need to manage the projection of inputs into the correct gui space. This is a more complicated problem, and as @Pkeod said, it would be nice if there were a management layer to do this.
Maybe some of the camera managers around might help with this. It’s possible it can’t be done with the current Defold gui, but I’m fairly certain you should be able to use imgui.
If I have some spare time later tonight, I’ll do a couple of quick tests.
For me, the advantage of the GUI is that it automatically adjusts size/position depending on the screen size. The usual GO sprite/label/Spine components, plus a 3D collision shape and a ray from the camera to check clicks on objects in the 3D world, are enough for user-interface elements in the 3D world, where the size of UI elements is related to the size of objects in the world. Tip: you can combine meshes/models with sprites/labels.
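A minimal sketch of that camera-ray approach: turn the click position into a world-space ray by unprojecting through the camera's matrices, then raycast against collision objects standing in for UI elements. The `screen_to_world_ray` helper, the `self.view`/`self.proj` fields, and the `ui3d` collision group are assumptions for illustration, not part of any standard API.

```lua
-- script excerpt (sketch): turn a screen click into a world-space ray and
-- test it against 3D collision objects that act as clickable UI elements.
local function screen_to_world_ray(view, proj, sx, sy, w, h)
    -- normalized device coordinates of the click
    local nx = (sx / w) * 2 - 1
    local ny = (sy / h) * 2 - 1
    local inv = vmath.inv(proj * view)
    local near = inv * vmath.vector4(nx, ny, -1, 1)
    local far  = inv * vmath.vector4(nx, ny,  1, 1)
    near = near * (1 / near.w)
    far  = far  * (1 / far.w)
    return vmath.vector3(near.x, near.y, near.z),
           vmath.vector3(far.x, far.y, far.z)
end

function on_input(self, action_id, action)
    if action_id == hash("touch") and action.pressed then
        -- self.view / self.proj are assumed to be kept in sync with your camera
        local w, h = window.get_size()
        local from, to = screen_to_world_ray(self.view, self.proj,
            action.screen_x, action.screen_y, w, h)
        local result = physics.raycast(from, to, {hash("ui3d")}) -- assumed group
        if result then
            msg.post(result.id, "clicked") -- treat the hit like a gui click
        end
    end
end
```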
Hi all, attached is a basic little 3D example with the GUI being rendered to a plane.
Things to note:
It renders the gui with scaling into a 512x512 render target. This should be adjusted if you need higher gui definition.
There is no input handling. To make the gui useful, you will need to pass in all the input information and handle the gui input messages with the appropriate projection.
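One hedged way to do that projection, assuming the plane's local X/Y axes span the 512x512 target: transform the ray/plane hit point into the plane's local space, normalize it to [0, 1], and scale into gui pixels before forwarding it as a synthetic touch position. The half-extent parameters below are assumptions about the plane mesh, not something the sample defines.

```lua
-- sketch: map a world-space hit point on the gui plane to 512x512 gui coords.
-- 'hit' is where your ray intersected the plane; plane_half_w/plane_half_h
-- are the plane's local half-extents (assumed known from the mesh).
local function world_hit_to_gui(plane_id, hit, plane_half_w, plane_half_h)
    local world = go.get_world_transform(plane_id)
    local local_pos = vmath.inv(world) * vmath.vector4(hit.x, hit.y, hit.z, 1)
    -- normalize from [-half, +half] to [0, 1], then scale to target pixels
    local u = (local_pos.x / plane_half_w + 1) * 0.5
    local v = (local_pos.y / plane_half_h + 1) * 0.5
    return u * 512, v * 512
end
```

The resulting coordinates could then be posted to the gui script and used in place of `action.x`/`action.y` in its input handling.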
The plane is set at an angle and also receives lighting (its material is a copy of the model material). The plane can be dynamic, etc. You could also render to any other mesh.
It’s a sample, so don’t expect too much, but it appears that Defold gui rendered onto a mesh works quite well.
I’ve done a little bit of 3D gui testing before and need to get back to it for further testing, but I can give some feedback at the moment.
Can you render a gui to a 3D render target?
Yes, you can render a gui onto 3D objects just fine, as long as you don’t need perspective-correct input.
Issues I noticed are:
The gui size is automatically set to the project size in the editor. With a 3D gui, optional sizing may be ideal.
Moving the gui to a 3D position and creating perspective views and projections does not seem to affect the input elements. Input seems to be mapped to the frame buffer with no adjustment for perspective. I initially thought render.STATE_STENCIL_TEST was being used to mask out gui elements and that it doesn’t work in perspective, but I think further testing is needed to nail it down.
Instead of rendering the gui to a 3D target, a good start would be to move a GO with a gui component around in 3D, with its gui nodes stenciled, and with optional perspective.
Here is a quick test I did.
Notes:
For some cases you can use simple tricks to get very nice 3D gui menus: use the gui purely as a non-rendered mask for input, and use 3D models and animations as the visual elements, matching the gui nodes to the on-screen 3D elements wherever input is needed. For example, animate a 3D menu on screen, then load a non-rendered gui via a proxy that acts as the input manager for the menu; animations can be triggered on the models via messages from the gui script, and the gui proxy can be unloaded when no longer needed. If gui text is needed, rendering it to a render target and using it with the 3D models should work as well.
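That input-mask trick can be sketched in a gui_script: keep the nodes invisible (or on a material whose predicate is never drawn), use `gui.pick_node` for hit testing, and message the 3D side. The node id, script address, and message name below are made up for illustration.

```lua
-- gui_script sketch: an invisible gui node acting as a click area for a 3D menu.
function init(self)
    msg.post(".", "acquire_input_focus")
    self.play_btn = gui.get_node("play_area") -- hypothetical node over the 3D "Play" model
    gui.set_color(self.play_btn, vmath.vector4(0, 0, 0, 0)) -- fully transparent
end

function on_input(self, action_id, action)
    if action_id == hash("touch") and action.pressed then
        if gui.pick_node(self.play_btn, action.x, action.y) then
            -- tell the 3D side to run its "pressed" model animation
            msg.post("/menu3d#script", "play_pressed") -- hypothetical receiver
        end
    end
end
```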
Completely agreed. If there is no solution simple enough to be packaged as a native extension, a feature that more easily allows GUIs to be used in this context would be useful, provided there is general enough support for it and no simpler alternative achieves the same result.
This is an interesting method, and not one I had considered. The fundamental drawback, from a usability standpoint, is that it isn’t especially user-friendly or simple to implement: it requires either maintaining two parallel GUI systems (one in Defold’s GUI and one in Dear ImGui) or sacrificing the many benefits and wide-reaching support of Defold’s GUI. The preferred outcome of GUIs as screens shouldn’t need to be complex for the user.
Most camera managers for Defold give 3D the lazy eye, except for Scene3D, but I’m not sure it would be realistic to expect a library, even one maintained by the legendary aglitchman, to implement Defold GUI elements in this fashion when they are structurally not designed for it. A simpler and less bloated fix would probably be engine-side.
This seems like a minor hassle which could be solved with some maths after the main implementation is working and stable.
“It’s enough” isn’t especially useful, but the specific reason I feel my proposed solution makes sense is that dynamically rendering any sort of flat projection within 3D, something very generally applicable, would be better served by Defold’s brilliant GUI system than by asking devs to code their own GUI implementation with tiny z-offsets to combat z-fighting, etc.
This is absolutely awesome. The resolution seems like a simple fix: offer the option to decouple screen resolution from GUI resolution, or alternatively simply scale the output. Input handling, once rendering is working, would probably be simpler, requiring only a bit of geometry and trigonometry to offset click positions and pass key events.
This looks incredible and directly addresses the type of projected dynamic screen I had in mind.
I think most of these things, with the exception of the flag, can be done in Defold. It’s not as difficult as it seems with some rendering-pipeline knowledge, and for the most part you really wouldn’t need the gui. Specifically, learn about predicates, render targets, and views and projections, as Pkeod mentioned. The flag would be much more difficult because it is more of a cloth-physics simulation, and trying to animate it with bones would be very generic. I think the way it could be done in Defold is through vertex animation; I’m not sure anyone has tried that yet. It would be really awesome, though, and probably very difficult.
I’m not sure if it’s possible or a good idea. What I meant is that an actual 3D gui would let users move the gui component around in 3D space relative to its parent GO and render it in a perspective view if they wanted that behavior. I think there are a lot of issues with that idea. For mouse or touch input, the bounds of the gui nodes would need to be converted to world space and then back to screen space with perspective, or something similar, as dlannan noted. Other input, like keyboard and controllers, would work perfectly fine with rendering the gui to render targets, as in the short video above.
So the big issue with a 3D gui idea is mouse or touch input from perspective views. There are workarounds: as Dragosha mentioned, you can instead set up 3D collision objects, raycast from the camera to 3D objects, and treat hits like gui mouse/touch input, and that may be good enough for most cases. For ingame screens, TVs, paintings, windows, etc., you really don’t need a 3D gui; instead you would use predicates, render targets, views and projections, etc.
The general input problem is non-trivial, as mentioned by @MasterMind, primarily on touch and mouse platforms, because the current method uses model projections and, as previously mentioned, you need something to manage that (hence why I mentioned camera managers, since most of them do this to some extent).
Additionally, there is the problem of perspective selection and depth testing. It’s a common problem in VR/AR systems for this sort of gui management.
I personally don’t agree that this sort of thing should be in the engine (yet). It is quite a specific use case, and with the current framework it is achievable without too much struggle. The implications of adding it to the system could be reasonably dramatic, considering how much editor capability exists for the gui, and changes to the system APIs might have a wide impact.
I think it should be quite achievable as a native extension, and likely with almost no/low impact on the system itself; imho, this is where Defold shines. You won’t gain 3D editing of such a gui, but the editor isn’t 3D anyway (yet), and the main benefit is being able to edit in 2D.