Will Defold support an official alternative language like Godot does?

No. I don't know where you gathered this, but the Truffle implementations of Ruby, Python, JavaScript, and so on are either undisputedly the fastest, or among the fastest.

It is also not running these languages on the JVM, as Truffle is its own runtime.

GraalVM just has both: a traditional JVM runtime and, additionally, Truffle.

You can also run JVM languages via Espresso on Truffle, to benefit from its Polyglot API access, the unified tooling, and the native image technique.

Languages that compile to LLVM bitcode can run on Sulong.

A native extension that loads other extensions compiled to WASM could be a good way to add quick unofficial support for other languages without having to deal with the headache of all the platform-specific SDKs (a sketch of the idea follows below). File size would still be a big issue, though, and players could potentially load their own extensions.
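
As a rough illustration of that idea, here is what the hosting side could look like with WAMR's embedding API. The `update` export and its argument are hypothetical placeholders, and error handling is trimmed:

```c
/* Sketch: a native extension hosting a Wasm "extension" via WAMR (iwasm).
   Assumes the module bytes were already read from disk elsewhere. */
#include <stdint.h>
#include <stdio.h>
#include "wasm_export.h"  /* WAMR embedding header */

static int run_wasm_extension(uint8_t *bytes, uint32_t size) {
    char error_buf[128];

    if (!wasm_runtime_init())
        return -1;

    wasm_module_t module =
        wasm_runtime_load(bytes, size, error_buf, sizeof(error_buf));
    if (!module) { printf("load: %s\n", error_buf); return -1; }

    /* 8 KB stack, 16 KB heap keeps the footprint small on mobile */
    wasm_module_inst_t inst =
        wasm_runtime_instantiate(module, 8 * 1024, 16 * 1024,
                                 error_buf, sizeof(error_buf));
    if (!inst) { printf("instantiate: %s\n", error_buf); return -1; }

    /* "update" is a hypothetical export the extension would provide.
       NULL signature; WAMR 1.x API (newer versions drop this argument). */
    wasm_function_inst_t fn =
        wasm_runtime_lookup_function(inst, "update", NULL);
    wasm_exec_env_t env = wasm_exec_env_create(inst, 8 * 1024);

    uint32_t argv[1] = { 16 };  /* e.g. dt in milliseconds, as an i32 */
    if (fn && env)
        wasm_runtime_call_wasm(env, fn, 1, argv);

    wasm_exec_env_destroy(env);
    wasm_runtime_deinstantiate(inst);
    wasm_runtime_unload(module);
    wasm_runtime_destroy();
    return 0;
}
```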

Your graph shows that Truffle is fast compared to how it used to be. And given what it is doing (compiling source code into bytecode to be run in a VM), it is likely fast; a generic coding capability like that is impressive. It will not be as fast to build an application as it currently is with Lua, though; I'd guess it is on a par with the speed of compiling C++. Hot reloading gives instant changes with Lua, and a compile step would prevent that. Truffle is a really interesting concept; it is new to me, but I think what I have just said is correct.

I made this summary using ChatGPT o3 Pro:

Summary

Below is a build‑oriented cheat‑sheet that game‑engine authors typically keep on hand when they evaluate an embeddable VM. It focuses on the three things you asked for:

| VM / Runtime | Execution performance (recent real-world benchmark) | Smallest practical native binary you can ship | Host platforms that overlap with official Defold targets |
| --- | --- | --- | --- |
| Wasmtime 4.0 (Cranelift JIT) | In the 2023 libsodium crypto benchmark, Wasmtime's Cranelift backend is within ≈15 % of the LLVM-based leaders across 60+ kernels, and matches Wasmer-Cranelift almost point-for-point | 0.7 – 1.2 MB for libwasmtime.so after stripping, `--no-default-features`, LTO, etc. | Windows x64, macOS x64/ARM64, Linux x64/ARM64 tier 1; builds for Android & iOS by cross-compiling (Wasmtime provides a min-platform example) |
| Wasmer 3.2 (LLVM & Singlepass) | LLVM backend is statistically tied with iwasm-AOT for the top spot in Frank Denis' 2023 cumulative score; Singlepass trades ≈2–3× throughput for instant startup | 2.5 – 3 MB (wasmer_c_api with LLVM, stripped) when built `--no-default-features --features llvm`; 1 – 1.5 MB with the baseline Singlepass compiler | Same as Wasmtime (Windows, macOS, Linux) plus official prebuilts for Android & WASI-in-the-browser |
| WAMR (iwasm) 1.4.5 | Fastest overall in the same libsodium benchmark when compiled AOT; interpreter is ≈8–20× slower but still beats Lua/Python | AOT VM ≈ 85 KB code + 20 KB read-only data; classic interpreter is 55 KB. Peak RAM on x86-64 < 0.5 MB running CoreMark | C99 source builds on every Defold target: Win/macOS/Linux, iOS, Android, HTML5 (Wasm), Nintendo Switch (verified by community) |
| Wasm3 0.5 | Interpreter is ~4× slower than Wasmtime on CoreMark but 3× faster than CPython for the same fib-40 micro-test | 64 KB of code (!) when built `-Os -flto`; needs ~10 KB RAM for useful scripts | Builds out of the box for all desktop OSes, iOS, Android, Emscripten/HTML5 |
| GraalVM 23 Native Image (polyglot) | Truffle languages run 1.3–2× slower than optimized JVM bytecode on the same CPU (2024 Renaissance suite), but startup latency drops to < 10 ms | "Hello world" native image ≈ 9.8 MB with `-Os`; default no-tuning build is ~13 MB | Windows x64, macOS x64/ARM64, Linux x64/ARM64 only (no iOS/Android) |
| .NET 8 NativeAOT | MicroBenchmarks show ≈ 0–10 % overhead vs the CoreCLR JIT once warm; startup in single-digit ms | Currently ~2.7 MB for a trimmed console app; the roadmap issue shows a 1.5 MB target | Windows, macOS, Linux. Mobile support is planned; today you'd embed Mono/Xamarin for iOS/Android |
| V8 (embedded) | Still the fastest JS/TS engine, but its Wasm throughput is on par with Wasmtime/Wasmer (see the cumulative libsodium chart) | A monolithic libv8_monolith.a without ICU is ≈ 28 MB on Linux x64 | Windows, macOS, Linux tier 1; iOS/Android need cross-compiling; no Web build because V8 is the Web runtime |

How to read the table

  • Execution performance is taken from independent, multi‑kernel tests that reflect real workloads (libsodium 2023) rather than micro‑benchmarks. Numbers are relative: being “tied with the fastest” means you won’t see game‑visible differences.
  • Binary size assumes you link the VM as a static or dynamic library into your engine—not the CLI—with all non‑critical features disabled, stripped symbols, LTO enabled.
  • Platforms list only the host side runtimes you’d ship with Defold. (Guest Wasm runs anywhere.)

What these numbers mean for a Defold‑based engine

1. If the goal is "any language" plus tiny footprint → WebAssembly + Wasmtime or WAMR

  • Modders compile once to wasm32-wasi and you never re‑tool the engine API for Lua vs Rust vs Zig.
  • Wasmtime gives you JIT‑class speed while staying < 2 MB; WAMR lets you fall back to a 100 KB interpreter on memory‑starved mobiles.
  • Both runtimes deliver fully deterministic execution if you meter fuel (handy for lock-step multiplayer); see the sketch below.
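
A minimal sketch of that fuel metering from the C embedding API. It assumes the Wasmtime 4.x headers; newer releases renamed `wasmtime_context_add_fuel` to `wasmtime_context_set_fuel`:

```c
/* Sketch: capping per-frame script work with Wasmtime fuel (C API). */
#include <stdbool.h>
#include <wasm.h>
#include <wasmtime.h>

wasm_engine_t *make_metered_engine(void) {
    wasm_config_t *config = wasm_config_new();
    wasmtime_config_consume_fuel_set(config, true);  /* enable metering */
    return wasm_engine_new_with_config(config);      /* takes ownership */
}

void begin_frame(wasmtime_context_t *ctx) {
    /* Give each frame a fixed budget; a script that runs out traps
       instead of stalling the engine. 10 000 units matches the example
       in the table below. */
    wasmtime_error_t *err = wasmtime_context_add_fuel(ctx, 10000);
    if (err) wasmtime_error_delete(err);
}
```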

2. If you mainly want C#/F# scripting → .NET 8 NativeAOT

  • Still heavier than Wasm but < 3 MB is bearable on desktop.
  • You get the Visual Studio & Rider debugger for free.
  • For mobile you'd bundle the existing Mono AOT (≈ 6 MB), then switch to NativeAOT-mobile when it lands; a C-side loading sketch follows below.
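
For the embedding itself, the C side is plain dynamic loading. This sketch assumes the managed code was published as a NativeAOT shared library exporting a hypothetical `script_update` via `[UnmanagedCallersOnly]`; the library name is also made up:

```c
/* Sketch: the C host side of embedding a .NET NativeAOT library.
   POSIX shown; use LoadLibrary/GetProcAddress on Windows. */
#include <dlfcn.h>
#include <stdio.h>

typedef void (*script_update_fn)(float dt);

int call_managed_update(float dt) {
    void *lib = dlopen("./libGameScripts.so", RTLD_NOW);
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return -1; }

    script_update_fn update =
        (script_update_fn)dlsym(lib, "script_update");
    if (update)
        update(dt);  /* calls straight into AOT-compiled managed code */

    dlclose(lib);
    return 0;
}
```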

3. If you need Java/Kotlin, JS/TS and Python in one process → GraalVM Native Image

  • Single GC, excellent inter-language calls. A hello-world image is ≈ 9–10 MB with -Os, so target desktop builds only.

4. If you only care about JavaScript → V8

  • The 25 – 35 MB static lib is large, but you inherit the npm ecosystem and world‑class JIT.

Integration tips for Defold

| Task | Recommended approach |
| --- | --- |
| Hot-reload in the editor | Keep each script as its own Wasm module; on file change, precompile the new module on a worker thread (Wasmtime's Engine::precompile_module), then re-instantiate it (see the sketch after this table). |
| Mobile memory limits | Use the WAMR classic interpreter for Android/iOS; limit each script to 32 MB max linear memory. |
| Debugging | For Wasm VMs, surface stack traces by mapping frame IPs back to DWARF using wasm-objdump -d. Attach to Cranelift's perf maps to profile hotspots. |
| Deterministic physics | Tick scripts with a fixed-step update(dt) and enable Wasmtime's "fuel" metering so no script can exceed, e.g., 10 000 instructions per frame. |
| Packaging | Strip symbols (strip -S) and, on macOS, run codesign --remove-signature + strip before notarizing, to avoid having to re-codesign. |
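
A sketch of the hot-reload row above, using the Wasmtime C API; in a real editor the compile would run on a worker thread and the instance swap would happen at a frame boundary:

```c
/* Sketch: hot-reload by recompiling and re-instantiating a module. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <wasm.h>
#include <wasmtime.h>

/* Called when the editor detects a changed script file. */
bool reload_script(wasm_engine_t *engine, wasmtime_context_t *ctx,
                   const uint8_t *wasm_bytes, size_t wasm_len,
                   wasmtime_instance_t *out_instance) {
    wasmtime_module_t *module = NULL;
    wasmtime_error_t *err =
        wasmtime_module_new(engine, wasm_bytes, wasm_len, &module);
    if (err) { wasmtime_error_delete(err); return false; }

    wasm_trap_t *trap = NULL;
    /* No imports in this sketch; a real host would pass its engine API here. */
    err = wasmtime_instance_new(ctx, module, NULL, 0, out_instance, &trap);
    wasmtime_module_delete(module);
    if (err)  { wasmtime_error_delete(err); return false; }
    if (trap) { wasm_trap_delete(trap); return false; }
    return true;
}
```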

Bottom line

  • For broad language choice and the smallest shipping binary, pair Wasmtime (~1 MB) with a WAMR fallback (~100 KB).
  • If you’re committed to managed languages, NativeAOT (.NET) wins on size over GraalVM, but GraalVM wins on polyglot richness.
  • V8 is justified only when first‑class JavaScript or npm is a must‑have.

Those trade‑offs should give you a clear path to prototype an embedded scripting layer that fits Defold’s tight cross‑platform footprint while letting creators script in their favorite language.

It searches for answers on the internet, analyzing many links.
A few interesting links I read afterward:

Reading all of that, it's not like there are that many options.
I like iwasm or wasm2c the most.


The problem with runtimes such as WAMR, or really anything besides Wasmtime at the moment, is that you don't get component model compatibility.

If you don't want to write the bindings yourself, which is a huge issue due to the amount of code to write and the high maintenance cost, you want them generated.

And auto-generated bindings are rare; there are not a lot of systems for that.

And if you do write them by hand, you only have access to the one language you just implemented.

That is one of the big reasons why Lua is so popular among C/C++ applications that want to provide a scripting interface.

It is uniquely designed in such a way that exposing your C/C++ functions to Lua requires very little effort on your part.
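
As a minimal illustration (the `set_position` name and the engine call inside it are placeholders), the whole binding is one wrapper function plus one registration call:

```c
/* Sketch: exposing an engine function to Lua via the Lua C API. */
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

static int l_set_position(lua_State *L) {
    double x = luaL_checknumber(L, 1);  /* validates and converts arg 1 */
    double y = luaL_checknumber(L, 2);
    /* engine_set_position(x, y);  -- the actual C/C++ engine call */
    (void)x; (void)y;
    return 0;  /* number of values returned to Lua */
}

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "set_position", l_set_position);
    luaL_dostring(L, "set_position(10, 20)");  /* scripts can call it now */
    lua_close(L);
    return 0;
}
```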

I feel like it's a very sane choice for such a use case, and for a long time there has been no strong alternative to it.

Thanks to the component model of WASI, that changes now.

You can create bindings between the supported host and the supported guest languages.

The guest languages have to be able to compile to WASM and fulfill the conditions the component model asks for.
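
For illustration, a minimal C guest compiled with wasi-sdk looks like this. Note that it shows a plain core module; the component model layers a canonical ABI (usually generated by wit-bindgen) on top of imports and exports like these, and all names here are hypothetical:

```c
/* Sketch: a minimal C guest, compiled to a core Wasm module with wasi-sdk:
     clang --target=wasm32-wasi -O2 -mexec-model=reactor -o guest.wasm guest.c */

/* A host function the engine provides to the guest. */
__attribute__((import_module("engine"), import_name("log")))
void engine_log(const char *msg, int len);

/* A guest function the engine calls every frame. */
__attribute__((export_name("update")))
void update(float dt) {
    (void)dt;
    engine_log("tick", 4);
}
```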

This way, we can add a ton of languages with little maintenance cost and with less work to implement them in the first place.

Plus, the runtime performance and the size of the shipped binary are close to native.

I think it's a good extension method, and it can also be used to extend the engine in other ways.

For example, as a plugin format, possibly replacing the native extensions and unifying both systems.

Another bonus is that this approach sandboxes the code.
