Will Defold support an official alternative language like Godot does?

No, I don't know where you gathered this, but the Truffle implementations of Ruby, Python, JavaScript, and so on are either undisputedly the fastest, or among the fastest.

It is also not running these languages on the JVM, as Truffle is its own runtime.

GraalVM simply has both: a traditional JVM runtime and, additionally, Truffle.

You can also run JVM languages via Espresso on Truffle, to benefit from its Polyglot API access, the unified tooling, and the native image technique.

LLVM languages can run on Sulong.

A native extension that loads other extensions compiled to WASM could be a good way to add quick unofficial support for other languages without having to deal with the headache of all the platform-specific SDKs. File size would still be a big issue though, plus players being able to potentially load their own extensions.

Your graph shows that Truffle is fast compared to how it used to be. And given what it is doing (compiling source code into bytecode to be run in a VM), it is likely fast; that kind of generic coding capability is impressive. It will not be as fast at building an application as Lua currently is, though; I would guess it is on a par with the speed of compiling C++. Hot reloading gives instant changes with Lua, and a compile step would prevent that. Truffle is a really interesting concept; it is new to me, so I think, but am not certain, that what I have just said is correct.

I made this summarization using ChatGPT o3 Pro:

Summary

Below is a build‑oriented cheat‑sheet that game‑engine authors typically keep on hand when they evaluate an embeddable VM. It focuses on the three things you asked for:

| VM / Runtime | Execution performance (recent real-world benchmark) | Smallest practical native binary you can ship | Host platforms that overlap with official Defold targets |
| --- | --- | --- | --- |
| Wasmtime 4.0 (Cranelift JIT) | In the 2023 libsodium crypto benchmark, Wasmtime's Cranelift backend is within ≈15% of the LLVM-based leaders across 60+ kernels, and matches Wasmer-Cranelift almost point-for-point | 0.7–1.2 MB for `libwasmtime.so` after stripping, `--no-default-features`, LTO, etc. | Windows x64, macOS x64/ARM64, Linux x64/ARM64 tier 1; builds for Android & iOS by cross-compiling (Wasmtime provides a min-platform example) |
| Wasmer 3.2 (LLVM & Singlepass) | LLVM backend is statistically tied with iWasm-AOT for the top spot in Frank Denis' 2023 cumulative score; Singlepass trades ≈2–3× throughput for instant startup | 2.5–3 MB (`wasmer_c_api` with LLVM, stripped) when built with `--no-default-features --features llvm`; 1–1.5 MB with the baseline Winch compiler | Same as Wasmtime (Windows, macOS, Linux), plus official prebuilts for Android & WASI in the browser |
| WAMR (iwasm) 1.4.5 | Fastest overall in the same libsodium benchmark when compiled AOT; interpreter is ≈8–20× slower but still beats Lua/Python | AOT VM ≈85 KB code + 20 KB read-only data; classic interpreter is 55 KB; peak RAM on x86-64 < 0.5 MB running CoreMark | C99 source builds on every Defold target: Win/macOS/Linux, iOS, Android, HTML5 (Wasm), Nintendo Switch (verified by community) |
| Wasm3 0.5 | Interpreter is ~4× slower than Wasmtime on CoreMark but 3× faster than CPython for the same fib(40) micro-test | 64 KB of code (!) when built with `-Os -flto`; needs ~10 KB RAM for useful scripts | Builds out of the box for all desktop OSes, iOS, Android, Emscripten/HTML5 |
| GraalVM 23 Native Image (polyglot) | Truffle languages run 1.3–2× slower than optimized JVM bytecode on the same CPU (2024 Renaissance suite), but startup latency drops to < 10 ms | "Hello world" native image ≈9.8 MB with `-Os`; default no-tuning build is ~13 MB | Windows x64, macOS x64/ARM64, Linux x64/ARM64 only (no iOS/Android) |
| .NET 8 NativeAOT | MicroBenchmarks show ≈0–10% overhead vs the CoreCLR JIT once warm; startup in single-digit ms | Currently ~2.7 MB for a trimmed console app; the roadmap issue shows a 1.5 MB target | Windows, macOS, Linux; mobile support planned (today you'd embed Mono/Xamarin for iOS/Android) |
| V8 (embedded) | Still the fastest JS/TS engine, but its Wasm throughput is on par with Wasmtime/Wasmer (see the cumulative libsodium chart) | A monolithic `libv8_monolith.a` without ICU is ≈28 MB on Linux x64 | Windows, macOS, Linux tier 1; iOS/Android need cross-compiling; no Web build, because V8 is the Web runtime |

How to read the table

  • Execution performance is taken from independent, multi‑kernel tests that reflect real workloads (libsodium 2023) rather than micro‑benchmarks. Numbers are relative: being “tied with the fastest” means you won’t see game‑visible differences.
  • Binary size assumes you link the VM as a static or dynamic library into your engine—not the CLI—with all non‑critical features disabled, stripped symbols, LTO enabled.
  • Platforms list only the host-side runtimes you'd ship with Defold. (Guest Wasm runs anywhere.)

What these numbers mean for a Defold‑based engine

1. If the goal is "any language" plus a tiny footprint → **WebAssembly + Wasmtime or WAMR**

  • Modders compile once to wasm32-wasi and you never re‑tool the engine API for Lua vs Rust vs Zig.
  • Wasmtime gives you JIT‑class speed while staying < 2 MB; WAMR lets you fall back to a 100 KB interpreter on memory‑starved mobiles.
  • Both runtimes deliver fully deterministic execution if you meter fuel—handy for lock‑step multiplayer.
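The fuel idea is easy to picture with a toy host-side sketch (Python, stdlib only; this illustrates the metering concept, not Wasmtime's actual API, and every name here is made up): each executed instruction burns one unit of fuel, and the VM traps deterministically when the budget runs out.

```python
class OutOfFuel(Exception):
    """Raised when a script exceeds its instruction budget."""

def run_with_fuel(program, fuel):
    """Execute a toy stack-machine program, burning one unit of fuel
    per instruction. Deterministic: the same program with the same
    fuel budget always gives the same result, or traps at the same
    instruction."""
    stack = []
    for op, *args in program:
        if fuel <= 0:
            raise OutOfFuel("instruction budget exhausted")
        fuel -= 1
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError(f"unknown op {op!r}")
    return stack[-1] if stack else None

prog = [("push", 2), ("push", 3), ("add",)]
print(run_with_fuel(prog, fuel=10))  # enough fuel: prints 5
try:
    run_with_fuel(prog, fuel=2)      # too little: traps on the 3rd op
except OutOfFuel as exc:
    print("trapped:", exc)
```

Because the trap point depends only on the program and the budget, never on wall-clock time, every peer in a lock-step session observes the same behavior.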

2. If you mainly want C#/F# scripting → **.NET 8 NativeAOT**

  • Still heavier than Wasm but < 3 MB is bearable on desktop.
  • You get the Visual Studio & Rider debugger for free.
  • For mobile you’d bundle the existing Mono AOT (≈ 6 MB), then switch to NativeAOT‑mobile when it lands.

3. If you need Java/Kotlin, JS/TS, and Python in one process → **GraalVM Native Image**

  • Single GC, excellent inter-language calls. The native image is ≈9–10 MB even with -Os, so target desktop builds only.

4. If you only care about JavaScript → **V8**

  • The 25 – 35 MB static lib is large, but you inherit the npm ecosystem and world‑class JIT.

Integration tips for Defold

| Task | Recommended approach |
| --- | --- |
| Hot-reload in the editor | Keep each script as its own Wasm module; on file change, precompile it on a worker thread (Wasmtime's `Engine::precompile_module`), then re-instantiate it. |
| Mobile memory limits | Use the WAMR classic interpreter on Android/iOS; cap each script at 32 MB of linear memory. |
| Debugging | For Wasm VMs, surface stack traces by mapping frame IPs back to DWARF using `wasm-objdump -d`. Attach Cranelift's perf maps to profile hotspots. |
| Deterministic physics | Tick scripts with a fixed-step `update(dt)` and enable Wasmtime's fuel metering so no script can exceed, e.g., 10,000 instructions per frame. |
| Packaging | Strip symbols (`strip -S`) and, on macOS, run `codesign --remove-signature` + `strip` before notarizing, to avoid having to re-sign. |
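The "Deterministic physics" row boils down to the classic fixed-timestep accumulator. A minimal sketch (Python, stdlib only; `advance` and `FIXED_DT` are illustrative names, not engine API):

```python
FIXED_DT = 1.0 / 60.0  # fixed simulation step, in seconds

def advance(accumulator, frame_time, state, update):
    """Fold a variable render-frame time into zero or more fixed-size
    simulation ticks, so script logic always sees the same dt
    regardless of frame rate -- a prerequisite for lock-step play."""
    accumulator += frame_time
    while accumulator >= FIXED_DT:
        state = update(state, FIXED_DT)
        accumulator -= FIXED_DT
    return accumulator, state

# A frame lasting 3.5 fixed steps yields exactly 3 ticks; the
# leftover half-step stays in the accumulator for the next frame.
acc, ticks = advance(0.0, FIXED_DT * 3.5, 0, lambda s, dt: s + 1)
print(ticks)  # 3
```

Pairing this loop with a per-tick fuel budget is what makes script execution fully reproducible from frame to frame.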

Bottom line

  • For broad language choice and the smallest shipping binary, pair Wasmtime (~1 MB) with a WAMR fallback (~100 KB).
  • If you’re committed to managed languages, NativeAOT (.NET) wins on size over GraalVM, but GraalVM wins on polyglot richness.
  • V8 is justified only when first‑class JavaScript or npm is a must‑have.

Those trade‑offs should give you a clear path to prototype an embedded scripting layer that fits Defold’s tight cross‑platform footprint while letting creators script in their favorite language.

It searches for answers on the internet, analyzing many links.
A few interesting links I read afterward:

Reading all of that, it's not like there are that many options.
I like iwasm or wasm2c the most.

1 Like

The problem with runtimes such as WAMR - or really anything besides Wasmtime, at the moment - is that you don't get Component Model compatibility.

If you don't want to write the bindings yourself - which is a huge burden, given the amount of code to write and the high maintenance cost - you want them generated.

And auto-generated bindings are rare; there are not a lot of systems for that.

And if you do that, you only have access to that one language you just implemented.

That is one of the big reasons why Lua is so popular among C/C++ applications that want to provide a scripting interface.

It is uniquely designed in such a way that exposing C++ functions to Lua requires very little work on your part.

I feel it's a very sane choice for such a use case, and for a long time there was no strong alternative to it.
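The Lua C API itself isn't shown here, but the shape of the pattern (the host registers a handful of callbacks, and the script calls them by name) can be sketched with Python's `exec` standing in for the embedded interpreter. Everything below is illustrative, and note that `exec` is not a real sandbox the way a fresh `lua_State` is:

```python
def run_script(source, host_api):
    """Run script text with only the host functions we explicitly
    expose -- the same 'one table of callbacks' shape that makes
    embedding Lua from C so cheap."""
    env = {"__builtins__": {}}  # start from an empty environment
    env.update(host_api)        # expose exactly the chosen functions
    exec(source, env)

results = []
api = {"spawn": lambda name: results.append(name)}
run_script('spawn("enemy")\nspawn("coin")', api)
print(results)  # ['enemy', 'coin']
```

The point is how little glue sits between the host function and the script call; that is the property the Component Model tries to generalize across many guest languages.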

Thanks to the component model of WASI, that changes now.

You can create bindings between the supported host, and the supported guest languages.

The guest languages have to be able to compile to WASM, and to fulfill the conditions the component model asks for.

This way, we can add a ton of languages without much maintenance cost, and with less work to implement them in the first place.
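As a toy illustration of what "generated bindings" means (this is not real WIT syntax or wit-bindgen output, just the concept: wrappers are emitted mechanically from one shared interface description instead of being hand-written per language):

```python
# Hypothetical interface description, loosely inspired by WIT.
INTERFACE = {
    "add":    {"params": ["a", "b"], "impl": lambda a, b: a + b},
    "concat": {"params": ["x", "y"], "impl": lambda x, y: x + y},
}

def generate_bindings(interface):
    """Emit one checked wrapper per declared function, the way a
    binding generator would, instead of hand-writing each binding."""
    bindings = {}
    for name, decl in interface.items():
        def wrapper(*args, _name=name, _decl=decl):
            if len(args) != len(_decl["params"]):
                raise TypeError(f"{_name} expects {len(_decl['params'])} args")
            return _decl["impl"](*args)
        bindings[name] = wrapper
    return bindings

api = generate_bindings(INTERFACE)
print(api["add"](2, 3))           # 5
print(api["concat"]("wa", "sm"))  # wasm
```

Adding a new host function means editing only the interface description; no per-language glue has to be maintained by hand.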

Plus, the runtime performance and the size of the shipped binary are close to native.

I think it's a good extension method, and it can also be used to extend the engine in other ways.

Like as a plugin format, possibly replacing the native extensions, and unifying both systems.

Another bonus is that this approach sandboxes the code.

2 Likes

Sorry for the long post - but this is the first time I've come out and openly asked this. Given the post about a top-tier dev digging in his heels and saying he won't use Defold just because of Lua, and given how much I like Defold and the opportunity it has given me to learn Lua and put games together in a different way, I feel the need to finally ask: why do some people care so much about what language they use?

For the longest time, I’ve gone through my career thinking it’s normal that if some engine or framework works best with a particular language, then you should just learn it. Like going to a different country - if you plan to stay there and be productive, learning the local language is just something you have to do. With game engines and other software frameworks, my assumption is they had good reasons for choosing the language they did for the system they’ve created.

But over the years I’ve seen what looks like people refusing, arms folded, to learn a specific language or to use anything BUT a specific language. And it baffles me, and makes me feel like I have to “humble brag” to ask why this is the case when I’ve always just learned what I needed to get the job done.

If the argument is:

  • Performance: I get that different languages perform differently, but it's not the only factor at play, and in the context of game engines it won't matter 99% of the time, because most of the performance comes from using the features of the engine correctly. And if you really do want performance for your custom code (which is rare but possible), there's usually the option of writing a C/C++ native extension and compiling it for the engine you're using - I know you can do this for Unity, Unreal (which uses C++ anyway), Godot, and Defold.
  • Ecosystem: I still haven’t seen an engine & language combo worth using that was SO deficient in a particular area that it wasn’t worth building a bit of stuff here and there to fill in the gaps to reap the other major benefits the engine provides.
  • Time cost of learning: you don't have to become an expert to be productive. To continue the country analogy: if you only spent a limited time in a country, it would make sense to learn just enough of the language to get by before moving on to another. Spoken languages are harder than programming languages, and I still put time into them whenever I travelled abroad, as I know others do - so what's the big deal with programming languages, especially since many of them are far more similar to each other than spoken languages are?

And now there’s really no excuse with AI, you can paste in code in one language and ask it to convert, or ask a question about how to do a task in a specific language, and you can really speed along your learning that way.

So coming back to this case - why would such a top tier developer state so firmly that he would never use a whole engine - purely because of the language it uses? Surely such a competent dev is not making assumptions about performance, or unwilling to simply type functions, conditional statements and loops in a slightly different way?

4 Likes

Hello,

I’m unable to ask him but from my understanding it’s more about what developers have done successful work with previously or the hiring constraints they’ve had in the past. I don’t think in this case it’s totally to do with personal preference, rather they’re used to what works for them.

Personally, I can't ship software of the complexity I need with Defold without a number of utility addons to verify the code on the IDE side, and so on. Lua was made by amateur developers to begin with, and while it has come a long way from those humble beginnings, the design decisions made back then resonate all the way to today, and there's nothing that can be done about that on the language side. Thankfully, we can now all validate that code with tooling and hopefully prevent a lot of bad software engineering.

That's why there are so many validation and error-checking addons for IDEs (and forks): there isn't any of this built into Lua itself.

No one will be able to complete a project of the size of Hades 1 or 2 (also made with Lua) without significant addons in their dev pipeline. An IDE with addons, strong type enforcement, and so on all need to be taken into account; otherwise, the absence of the things C++ devs are used to, like compile-time checks and strong typing, will lead to bugs. This should become more obvious as the project size grows.

For game-jam scale games it’s perhaps possible to get by without additional tooling. Not so much if there’s a large budget riding on it.

2 Likes

This is fiction. Lua is very well designed. In fact, it is possibly one of the best-designed languages for its use case. Lua is specifically designed as a "batteries not included" language. It is the best language for embedding and extending. It is not a competitor to other, much larger languages such as Python.

What makes Defold and Lua what they are is the design decision to keep the core very small.
Higher-level features are addons, not core bloat.

Yes significant addons are required and need to be taken into account. This is clearly (and specifically stated as being) outside of the remit of the Defold and Lua core. Expectation management is required here.

The design approach of Godot is to include everything in the engine, this is a different decision.

Pros and cons of both approaches are easily imagined.

10 Likes

I have learned a lot of languages since the '80s (even assembler and machine language, Java, the good old Locomotive BASIC, VBScript, JS, that language I used for 15 years for server-side websites and can't remember the name of, etc.), and I love Lua because it's simple and straightforward.

Fun fact: I had no idea what it really was before I found Defold in mid-2023; for some reason, I thought it was just some third-party coding thing for making game mods. Honestly, I don't know why I thought that. I just remember that I'm starting to feel my years (my health is not good sometimes), and I was a little tired of learning YAL (Yet Another Language) after my attempt at Godot turned into an enormous fumble.

Language is a tool, and we don't like to change tools that sometimes took years to learn. Ask graphic designers (that's THE reason Apple survived before the smartphone era).