Flix - Procedural training and Movie making

I don’t quite understand what this is useful for, although I appreciate the progress updates. Think you can explain it in terms of a user story?

It's based around a tool for generating a procedural trainer (as mentioned at the top). These are simulated training tools that help trainers develop training material for procedures. Some examples:

  • White card certification training (a safety, hazard, and tool-use certification in Australia)
  • Driver training: developing procedures for trainers to train drivers (trains, cars, etc.)
  • Site awareness training, where for example a mining site needs to induct people in site procedures.

And much more. The aim is to orient this toward people wanting to make interactive videos/runtimes, with the side benefit of also being able to build procedural training content.
It is effectively an editor of sorts, but much simpler, so that trainers (and users in this case) can quickly create content without needing to learn a full 3D editor like Blender.
When the Director (the main interface) is up and running it will make a lot more sense. :wink:


Some additional info: a user story isn't really applicable to this sort of development, since the user is a trainer, maker, developer, or designer. As a tool, it facilitates whatever they need to do (much like an editor facilitates developers building what they want in an engine).

For general users this product targets a specific niche. It is not really for game developers (although they could use it to make cut-scenes and interactive movies); it's more suited to content creators like indie film makers who want to try out film camera movements, interactions and so on. It is very movie-script oriented. There are ways to do this in many other tools, but this is about bringing the ability to make simple CG-based action/interaction sequences to everyone. It will be free to use for general users. I don't expect many people to be overly interested, but the commercial side of the same product (with a bunch of other features) will form the basis for procedural trainers in a few different industries.

Hope that helps out a bit. I have a bunch of Google Docs that go into far more detail, but this isn't really something that will be revolutionary; it's more about a market segment I'm well versed in, with people I know, and the free version is about being able to give customers something to examine before purchasing.

Sorry to hassle, but should a factory instance with a go with a temp mesh on it be able to change its material textures for each instance? Just wanted to check, as I'm unsure.

I'm going through my own texture/image handling code, and I'm wondering if I might have accidentally shared handles somewhere, which could cause this.

Ok, I think I've answered my own question. If image buffers are all created the same way with resource.create_texture, then this should work. In my old geom extension I was using buffer.create, but not resource.create_texture.

Replacing this, like the mesh buffer, has yielded some improvements (I think :wink: ). Most of the models are now pulling in their own textures (not colors yet, I'll address that next). It kind of looks ok. Here's a number of the meshes loaded, not interfering with each other. The characters shouldn't be white, but I think that's my character image mapping. All the cars should be white or grey (because the loader will colorize them on input if needed).

If anyone is interested in the messy code (and for future me to look back on in disgust):

-- Load an image from disk and apply it as a runtime texture on a game object.
-- goname        - url/id of the game object that owns the mesh component
-- imagefilepath - path to the image file on disk
-- tid           - texture slot index, appended to the "texture" property name
function loadimage(goname, imagefilepath, tid)

	local res, err = image.load(utils.loaddata(imagefilepath))
	if err then
		print("[Image Load Error]: "..imagefilepath.." #:"..err)
		return nil
	end

	if res.buffer ~= "" then
		-- Work out the texture format and bytes per pixel from the image type
		local rgbcount = 3
		if res.type == "rgba" then res.format = resource.TEXTURE_FORMAT_RGBA; rgbcount = 4 end
		if res.type == "rgb" then res.format = resource.TEXTURE_FORMAT_RGB; rgbcount = 3 end

		-- One uint8 stream holding the raw pixel bytes, one element per pixel
		local buff = buffer.create(res.width * res.height, {
			{ name = hash(res.type), type = buffer.VALUE_TYPE_UINT8, count = rgbcount }
		})

		-- Copy the decoded image bytes into the buffer stream (geom extension helper)
		geomextension.setbufferbytes(buff, res.type, res.buffer)

		res.type = resource.TEXTURE_TYPE_2D
		res.num_mip_maps = 1

		-- Create a new texture resource with a unique path for this image
		local new_path = "/imgbuffer_"..string.format("%d", imageutils.ctr)..".texturec"
		resource.create_texture(new_path, res)
		imageutils.ctr = imageutils.ctr + 1

		-- Store the resource path and buffer so they can be used later
		res.resource_path = hash(new_path)
		res.image_buffer = buff

		-- Upload the pixels and point the go's texture slot at the new resource
		resource.set_texture(new_path, res, buff)
		go.set(goname, "texture"..tid, hash(new_path))
		msg.post(goname, hash("mesh_texture"))
	end

	return res
end

Another quick update. Characters are now working better. The glb files are doing something odd with the internal png conversion though. But it's close now, real close :slight_smile:


The Director is well underway too, and I hope to have Ozz animation in over the weekend so these guys and gals can look a little less alphabet-like :slight_smile:

< update > The png loader is converting the images for me, but I think I have the stride incorrect, so the buffer is warped. Very happy with all this; it should be working properly tomorrow.
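
In case it helps anyone hitting the same thing, here is a minimal sketch of the row copy I suspect is going wrong (my own guess at the cause, not the png loader's actual code). If the decoded rows are padded, i.e. the source stride is larger than width * channels, copying the bytes as one tightly packed block shifts every row and the image shears, so the copy has to respect both strides. dst_stream would come from buffer.get_stream on the image buffer.

-- Copy possibly-padded source rows into a tightly packed destination stream.
local function copy_rows(dst_stream, src_bytes, width, height, channels, src_stride)
	local dst_stride = width * channels              -- tightly packed destination rows
	for y = 0, height - 1 do
		local src_row = y * src_stride               -- byte offset of this row in the source
		local dst_row = y * dst_stride               -- and where it belongs in the buffer
		for x = 1, dst_stride do
			-- buffer streams are 1-indexed; string.byte reads a single source byte
			dst_stream[dst_row + x] = string.byte(src_bytes, src_row + x)
		end
	end
end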


Ok. Everything material and mesh wise is now spot on. Big tip: if you are loading at runtime and are using the png-loader extension, it internally flips the image vertically. The glb loader I have works directly with embedded image data that's already in the correct orientation.
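
If you hit this, one way to compensate is simply to flip the decoded bytes vertically before uploading so they match the orientation of the embedded glb images. A minimal sketch, assuming tightly packed rows and 3 or 4 channels:

-- Flip a decoded image vertically by reassembling its rows bottom-up.
local function flip_vertical(bytes, width, height, channels)
	local stride = width * channels                  -- bytes per row, tightly packed
	local rows = {}
	for y = height, 1, -1 do
		rows[#rows + 1] = string.sub(bytes, (y - 1) * stride + 1, y * stride)
	end
	return table.concat(rows)
end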


Glad it was an easy fix. On to animation and camera controls for the Director.


Interesting find in recent gltf investigations.
There are many gltf files that do not have indices set on their primitives (they are just pure mesh buffers). Because of this, if your loader expects indices, it won't load a mesh for those primitives.
As mentioned here: https://github.com/assimp/assimp/issues/2046 the Assimp library has added a patch for this so there is a 'fall through'.
It had me puzzled with a couple of large scenes I had - missing geometry etc.
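
The fall-through itself is simple. A sketch in Lua, assuming a decoded primitive table where 'indices' may be nil and the vertex count is known: when there is no index accessor, the primitive is a plain (non-indexed) triangle list, so sequential indices do the job.

-- Return the primitive's indices, synthesising 0..n-1 when none are present.
local function get_indices(primitive, vertex_count)
	if primitive.indices ~= nil then
		return primitive.indices                     -- use the accessor data as-is
	end
	local indices = {}
	for i = 0, vertex_count - 1 do
		indices[i + 1] = i                           -- one index per vertex, in order
	end
	return indices
end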


Environments now sorted as well. Here's what some 12,000 meshes in Defold look like (with a silly single point light :slight_smile: ).


Frame rate is a bit average (the scene is not really optimized yet), around 20fps. It's still quite good since this is quite a complicated gltf (it has all manner of index types and data types in it).
The big benefit of keeping these large scenes as separate meshes is when objects need to be interactive with each other, e.g. a car running into a tree, or a light pole, etc.
Might add some illumination with the anim this afternoon.

Note to self - need to get transparent materials working (update the render_script to support them).
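
Roughly what I have in mind for the render_script (a sketch only, assuming blended meshes use a material tagged "transparent"): draw the opaque geometry as usual, then draw the transparent predicate with blending enabled and depth writes off.

function init(self)
	self.model_pred = render.predicate({ "model" })
	self.transparent_pred = render.predicate({ "transparent" })
end

function update(self)
	-- ... clear, set view/projection and draw the opaque self.model_pred as usual ...

	-- Transparent pass: blend over what is already drawn, test depth but do not write it
	render.enable_state(render.STATE_BLEND)
	render.set_blend_func(render.BLEND_SRC_ALPHA, render.BLEND_ONE_MINUS_SRC_ALPHA)
	render.set_depth_mask(false)
	render.draw(self.transparent_pred)
	render.set_depth_mask(true)
	render.disable_state(render.STATE_BLEND)
end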


Another nice test environment working well, with no modifications - this is important, since a large amount of my gltf library will hopefully work as is. As will other people's. Nice.


Some more oddities. Something I never saw before (maybe because I don't usually use this method) is that calling set_scale() on a go with a negative value fails. This is a fairly common thing to do in 3D: flipping an axis on instanced objects in the scene (with scale) to make them look a little different. Some of my test scenes do this (they have small and negative axis values in their scale).

I can possibly get around it by setting the vertices and normals manually, but then I lose the sharing of the mesh. Hrm. Something to think about for a later fix.
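
For reference, the manual version would look something like this (a sketch, assuming float32 "position" and "normal" streams with three components per vertex): bake the mirror straight into the vertex data instead of using a negative go scale, at the cost of the mesh no longer being shared.

-- Mirror a mesh buffer on the x axis by negating positions and normals.
local function mirror_x(mesh_buffer, vertex_count)
	local positions = buffer.get_stream(mesh_buffer, hash("position"))
	local normals = buffer.get_stream(mesh_buffer, hash("normal"))
	for i = 0, vertex_count - 1 do
		positions[i * 3 + 1] = -positions[i * 3 + 1] -- negate x of each position
		normals[i * 3 + 1] = -normals[i * 3 + 1]     -- and of each normal
	end
	-- note: mirroring flips the triangle winding, so faces may need reversing too
end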


Ok. This is impressive. I didn't expect Defold to load this environment in. It's a 2x2 suburb with over 2M polys and a ton of meshes. And wow. I really don't intend to use Flix like this, but it's good to know it doesn't blow up when loading such huge scenes.


Thanks for sharing your dev diary! I found it informative and weirdly exciting for some reason. But then, I am a developer.

One standout I noticed was 20fps for 12K meshes that are probably static. That seems frankly quite slow; could you share the hardware you used for it, and was the bottleneck on the CPU (draw calls) or the GPU? I suspect it's a CPU performance hit at the moment.


Note that we haven’t implemented instancing yet. It should dramatically help with such scenes.
It’s scheduled to be released very soon.


Although I wonder if @dlannan is using meshes and not models? :thinking: If it's meshes, instancing support will take a little more time, but if it's models I hope to get it in for the next release.


Right.
Since our runtime doesn’t support creating/updating our model data, I think he’s using the Mesh component.

Yup, but we’ll fix that after models in any case :slight_smile:


Ahh. So to clarify, this is a go per mesh in a rather convoluted hierarchy (as you could imagine). This can definitely be optimized - at the moment there is one child go created for each mesh (unnecessary of course, but easy to optimize later).

I can implement instancing using this method too :slight_smile: ... i.e. apply the transform to the go, and if a mesh shares the same gltf view buffer then simply map it to the same mesh (keep a cache of it). I have some of this framework already done, so I'll do it later. It's a fair way down my list - functionality first :slight_smile:
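
Roughly the shape of that cache (a sketch under the assumption that each decoded primitive carries some buffer-view identifier from the gltf; make_buffer_fn is a hypothetical helper that builds the buffer resource and returns its path): build the vertex buffer once per view and let every go that shares it point at the same path.

-- One shared buffer resource per gltf buffer view.
local mesh_cache = {}

local function get_or_create_meshbuffer(view_id, make_buffer_fn)
	local entry = mesh_cache[view_id]
	if entry == nil then
		entry = make_buffer_fn(view_id)              -- created once, reused for every sharer
		mesh_cache[view_id] = entry
	end
	return entry                                     -- e.g. go.set(mesh_url, "vertices", entry)
end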

Generally in game and asset dev, these would be collapsed to mostly single static meshes. In this use case there are benefits to keeping some of the scene hierarchically structured. Hope that makes some sense :slight_smile:

That’s not really instancing, but a good improvement. I’m talking about draw call instancing, which should give you a good performance bump with shared meshes! :slight_smile:
