I created this GPT for Lua and Defold and I’ve had amazing success with it. Give it a try and see if you like it too. When the scripts get a bit long, though, I find I need to track down where the issue is and tell it what to fix, but it’s still amazing and has saved me a ton of time writing code. Note: you may want to tell it to only give you the functions that changed if the scripts are very long, as it takes a while for ChatGPT to output an entire script when it’s too long.
Let me know if you find this useful. Oh, and you probably need ChatGPT Pro to use this.
I still don’t think ChatGPT is recommended for Defold outside of very basic and introductory tasks, if even that. I asked it to do three separate tasks:
Add and use the Rendy camera extension.
Generate a 3D cube at runtime.
Change factory component’s dynamic prototype at runtime.
For all three, it made up nonsensical steps, invented fake API calls, and got author information and links wrong.
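For reference, the third task takes only a couple of real Defold API calls: factory.unload() and factory.set_prototype(). The factory component must have “Dynamic Prototype” enabled in the editor, and the component id and resource path below are made-up examples:

```lua
-- Hypothetical script: swap what "#enemy_factory" spawns at runtime.
function on_message(self, message_id, message, sender)
	if message_id == hash("use_boss_prototype") then
		-- Release resources held by the current prototype first.
		factory.unload("#enemy_factory")
		-- Point the factory at another game object file (.goc at runtime).
		factory.set_prototype("#enemy_factory", "/enemies/boss.goc")
		-- New spawns now use the boss prototype.
		factory.create("#enemy_factory")
	end
end
```

This only runs inside the Defold engine, so treat it as a sketch rather than a drop-in script.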
I recently got back into Defold (after almost 2 years away), and this custom GPT has been a huge help in getting back on track and saving me a lot of time with my new project.
It doesn’t always give me the exact answer right away, but it often points me in the right direction. By digging a little deeper, I always end up finding a satisfactory solution… and often save a lot of time in the process.
At the time it was great, as I was able to point ChatGPT to all the APIs for Defold and Lua 5.1. That kept it from hallucinating. And I can tell it how I like my answers (like full functions, or just the changes).
But now with ChatGPT o1… that model is super amazing. And they have a new thing called Projects. You can do all of this in a project dedicated to what you’re working on, and it allows the o1 model. Not sure if they updated custom GPTs to allow the o1 model yet.
But o1 is so amazing. However, every so often it gets a bit stupid after days of being great, so I have to stop for a day or two, and then it gets cleaned up and is great again.
You can create a .txt file with all the “knowledge” you want such a GPT to look up before answering (so copy in the documentation, examples, the Defold API, maybe some source code of your projects). It looks like you have to ask for this explicitly every time you want it to look something up. It then becomes a better search engine: ask it something and it should give you an answer based on that knowledge, so you avoid situations where it hallucinates and invents functions that don’t exist (this still happens sometimes, so double check).
You can also “tell it” in what way it should answer you, e.g. “assume I know Lua, always answer concisely, and explain/elaborate only when I explicitly ask” or something like that.
Btw. I’m also testing out Cursor and extensions for VSCode like Codeium or twinny (which uses locally hosted LLMs, so worse than the best LLMs accessed through an API, but it’s the one I managed to run on a laptop with 32 GB RAM). Cursor and Codeium look like the direction programming is going: they read your code, analyse it, can sometimes help you find a bug or a problem, and are really useful for documentation (e.g. I tested this while writing documentation for Squid, and only a few times did I have to fix a few things manually later on). I still don’t think they are useful for coding if you are not aware of what you are writing, because they might inject nasty bugs you will have trouble finding, or give you a solution not suited to your needs, but in general I do admit they keep getting better. I can’t use them for my job, but maybe locally run, isolated and private solutions will be widespread and useful in the future.
I haven’t tested Copilot, because it asks for access to all my repositories, even private ones, and I don’t know if there is a way to avoid that.
Copilot is really just GPT-4, so I wouldn’t bother with it. It’s great for current web-based info though, like what NVIDIA announced at CES 2025 and stuff like that.
Also check out the o1 model in ChatGPT. Hands down the best I’ve tried. It does more reasoning loops to help “get it right”.
I’m so stupid sometimes. I already solved this with a different shader:
-- Push time, resolution and palette colors to the node's material constants
-- (the curly quotes in the original paste would not parse as Lua strings).
function M.update_background_shader(node, time, color1, color2, color3)
	local width = gui.get_width()
	local height = gui.get_height()
	gui.set(node, "utime", vmath.vector4(time, 0, 0, 0))
	gui.set(node, "uresolution", vmath.vector4(width, height, 0, 0))
	gui.set(node, "color1", color1)
	gui.set(node, "color2", color2)
	gui.set(node, "color3", color3)
end
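For anyone reading along, here is a sketch of how a function like that might be driven from a gui_script’s update. The module path, node id and colors are assumptions, not part of the original post:

```lua
local background = require "main.background"  -- the module M above

function init(self)
	self.node = gui.get_node("background")  -- hypothetical node id
	self.time = 0
end

function update(self, dt)
	-- Accumulate elapsed time and feed it to the shader every frame.
	self.time = self.time + dt
	background.update_background_shader(self.node, self.time,
		vmath.vector4(0.1, 0.2, 0.5, 1.0),
		vmath.vector4(0.9, 0.4, 0.2, 1.0),
		vmath.vector4(1.0, 1.0, 0.8, 1.0))
end
```

Like all Defold gui code, this only runs inside the engine.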
I want to point out that LLMs are a moving target. Gemini 2.5 Pro is now at the top of the heap for coding. It’s free in Bard and Google’s AI Studio. I don’t know how well it’s trained on Defold’s API, but I understand that Lua is not a major language for LLMs the way Python is.
Anything the Defold team can do to improve LLMs’ knowledge of their engine would be a blessing.
The Defold team cannot physically fulfil every wish. But the community can help the team! For example, I added an llms-full.txt based on the documentation, but it can still be improved by adding the API reference and keeping only the necessary parts.
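As a sketch, one way to assemble such a knowledge file from a local checkout of the documentation. The defold-doc path is an assumption; adjust it to wherever your docs actually live:

```shell
#!/bin/sh
# Concatenate the English manuals into one knowledge file for an LLM.
# "defold-doc/docs/en/manuals" is a hypothetical checkout path.
OUT=llms-full.txt
: > "$OUT"
for f in defold-doc/docs/en/manuals/*.md; do
    [ -e "$f" ] || continue                  # skip if the glob matched nothing
    printf '\n===== %s =====\n\n' "$f" >> "$OUT"
    cat "$f" >> "$OUT"
done
wc -c "$OUT"                                 # rough size check for upload limits
```

The section markers make it easier for the model (and you) to tell which manual an answer came from.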