Thanks again!
I really appreciate your help.
As I said, I know how to use render targets and materials in a render_script; I have already implemented a distortion post-process. But that's exactly the point: I don't want code in the render_script for one very specific moment in the game. I would prefer to keep the render_script as agnostic as possible with respect to the game.
Let me give an example of one of my possible use cases. When the player beats a boss, I want to show a waving flag with some data written on it (the time taken to beat the boss, some localized text). So, as you can see, the texture for this flag cannot reasonably be prepared at design time; it must be composed at runtime. For the waving effect, on the other hand, I can either use a shader or animate the vertices of a mesh.
Of course I can write code in the render_script that draws certain predicates to an RT, binds the RT to a texture unit, binds the mesh's material and draws the mesh with its own predicate. All of this is clear to me, no problem. BUT I don't like this approach, since I would be writing code in the render_script that is too specific to that particular result screen after beating a boss. And then I will probably want something similar, but maybe not identical, at another specific point in the game, so I would need even more code in the render_script. I hope I have conveyed the idea…
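For reference, this is roughly the in-render_script approach I mean (a minimal sketch only: the RT size, the predicate tags `flag_source`/`flag_mesh`, and the texture unit are placeholders of mine, and projection/view setup is omitted):

```lua
-- render_script sketch: compose the flag into a render target,
-- then draw the mesh with the RT bound as its texture -- every frame.
function init(self)
    self.flag_src_pred = render.predicate({"flag_source"}) -- text/sprites composing the flag
    self.flag_mesh_pred = render.predicate({"flag_mesh"})  -- the waving mesh
    self.flag_rt = render.render_target("flag_rt", {
        [render.BUFFER_COLOR_BIT] = {
            format = render.FORMAT_RGBA,
            width = 512, height = 256,
        },
    })
end

function update(self)
    -- (1) compose the flag into the RT (redrawn each frame, which is the problem)
    render.set_render_target(self.flag_rt)
    render.set_viewport(0, 0, 512, 256)
    render.clear({[render.BUFFER_COLOR_BIT] = vmath.vector4(0)})
    render.draw(self.flag_src_pred)
    render.set_render_target(render.RENDER_TARGET_DEFAULT)

    -- (2) draw the mesh with the RT bound to texture unit 0
    render.enable_texture(0, self.flag_rt, render.BUFFER_COLOR_BIT)
    render.draw(self.flag_mesh_pred)
    render.disable_texture(0)
end
```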
Note also that with this approach the texture for the flag is redrawn (identical to itself) on every frame the flag is on screen, which is not optimal.
What I hoped to do instead is: (1) draw the flag into an RT in the render_script, (2) generate a texture resource from this RT, (3) use this texture resource in a mesh. This way the render_script is a bit more decoupled from the game code, and the RT is not redrawn every frame.
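From the game-object side, what I'd like is something along these lines. This is only a sketch under assumptions: `resource.create_texture` exists in recent Defold versions, but step (2) above, actually copying the RT's pixels into that texture resource, is the part I don't know how to do; the path, sizes, and the custom `draw_flag_to_texture` message are all placeholders of mine:

```lua
-- game script sketch: create a texture resource once, assign it to
-- the mesh, and ask the render_script to fill it from the RT once.
function init(self)
    -- hypothetical resource path; format must match the RT's color buffer
    self.flag_tex = resource.create_texture("/flag_dynamic.texturec", {
        type = resource.TEXTURE_TYPE_2D,
        width = 512, height = 256,
        format = resource.TEXTURE_FORMAT_RGBA,
    })
    -- assign the texture resource to the mesh's first sampler
    go.set("#flag_mesh", "texture0", self.flag_tex)
    -- custom message (not a built-in): tell the render_script to draw
    -- the flag predicate into the RT / texture a single time
    msg.post("@render:", "draw_flag_to_texture")
end
```

With something like this, the render_script only needs one generic "draw these predicates into a target once" handler, instead of per-screen logic.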
Apologies for the long post. I hope it makes some sense…