When you give a GUI node a parent, gui.get_position reports that node's position as a vector RELATIVE to the parent's origin point, not to the screen.
As an example, say:
gui.set_position(parent, vmath.vector3(100, 0, 0)) -- set parent to (100, 0, 0)
gui.set_parent(funky, parent) -- set funky's parent to parent
gui.set_position(funky, vmath.vector3(0, 0, 0)) -- set funky's position RELATIVE to the parent to (0, 0, 0); visually nothing changes
My logic goes, “OK, so funky will be found at (100, 0, 0) on the screen.” Funky does render there, but that’s not what gets reported: gui.get_position now returns the position relative to the parent node, no longer the screen. I don’t see much advantage to this. For world objects I absolutely understand the power of it, but I think of GUI objects as always being anchored to the screen, so this confuses me and makes on-the-fly menu building much more difficult.
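To make the behavior concrete, here is a minimal sketch in plain Python (not the Defold API) of how a screen position can be recovered from local positions by walking up the parent chain. The `Node` class and its fields are hypothetical, and this ignores rotation, scale, and adjust modes that real GUI nodes also apply. (If I remember right, Defold also offers gui.get_screen_position for exactly this, which may be the intended escape hatch.)

```python
class Node:
    """Hypothetical stand-in for a GUI node: a local position plus an optional parent."""
    def __init__(self, local_pos=(0, 0, 0), parent=None):
        self.local_pos = local_pos  # relative to parent (what gui.get_position reports)
        self.parent = parent

def screen_position(node):
    """Accumulate local offsets up the parent chain to get a screen-space position."""
    x, y, z = node.local_pos
    p = node.parent
    while p is not None:
        x, y, z = x + p.local_pos[0], y + p.local_pos[1], z + p.local_pos[2]
        p = p.parent
    return (x, y, z)

parent = Node(local_pos=(100, 0, 0))
funky = Node(local_pos=(0, 0, 0), parent=parent)

# Local position is (0, 0, 0), yet the node sits at (100, 0, 0) on screen.
print(screen_position(funky))  # (100, 0, 0)
```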
At the moment, I’m parenting GUI nodes to one “menu” node that forms a tooltip: a background, a variable number of description text nodes, and a variable number of buttons. I’m having trouble because I detect button presses by collision: I compute each button’s area from four corner points and test whether the click falls inside. I use gui.get_position to find those corner points, and now I have to work around the parenting by also adding the parent’s position to each one.
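The workaround described above can be sketched like this, again in plain Python rather than the Defold API, with hypothetical names throughout. The point is to convert the node’s local position into screen space by adding the parent’s position BEFORE building the four corner points, so the hit test and the click point live in the same coordinate space. This assumes an axis-aligned rectangle with its pivot at the node’s center.

```python
def rect_corners(local_pos, size, parent_pos):
    """Button rectangle in screen space: add the parent offset, then expand by half the size."""
    sx = local_pos[0] + parent_pos[0]
    sy = local_pos[1] + parent_pos[1]
    hw, hh = size[0] / 2, size[1] / 2
    return (sx - hw, sy - hh, sx + hw, sy + hh)  # (left, bottom, right, top)

def hit(click, corners):
    """True if the click point lies within the rectangle."""
    left, bottom, right, top = corners
    return left <= click[0] <= right and bottom <= click[1] <= top

# A button at local (0, 0) under a parent at (100, 0): its screen rect is (60, -15)..(140, 15).
button = rect_corners(local_pos=(0, 0), size=(80, 30), parent_pos=(100, 0))
print(hit((110, 5), button))  # True: the click lands inside the button on screen
print(hit((10, 5), button))   # False: a purely local-space test would wrongly accept this point
```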
If there is a powerful reason behind this that makes my case insignificant, would you mind letting me know? I’m a bit confused why it would work this way, because it makes more sense to me to keep positions consistent and relative to the screen’s origin point.