From my point of view, the sound manager is to audio what the render_script is to visuals.
You have sprites, tiles, shaders and so on to display, and the role of the render_script is to orchestrate all of it, layer by layer.
The sound manager does this for audio. You have to layer many sounds, maybe you need to gate stuff, maybe you want only one voice effect at a time but have many game objects that want to talk (think "NPCs walking around talking"), you have music, ambiences, player effects, etc.
So you need a global system that handles this. Game objects then just have to trigger events (I use the defold-events lib for this kind of global system). An object just asks to play a particular sound, but doesn't need to know where the sound component is, nor how to play it. So it is really not a direct dependency; the inverse, actually.
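A minimal sketch of this decoupling, using Defold's built-in message passing instead of the defold-events lib (the URLs, message names, and the `self.sounds` / `message.group` fields here are hypothetical, just to show the shape):

```lua
-- some_npc.script: just fires an event, knows nothing about sound components
local function on_talk(self)
    msg.post("main:/sound_manager#script", "play_sound", { name = "npc_talk", group = "voice" })
end

-- sound_manager.script: the one place that decides how (and whether) to play it
function on_message(self, message_id, message, sender)
    if message_id == hash("play_sound") then
        -- gate example: allow only one voice effect at a time
        if message.group == "voice" and self.voice_playing then
            return
        end
        sound.play(self.sounds[message.name])
    end
end
```

With defold-events the wiring is the same idea, just triggered through the lib's global event channel instead of a hardcoded URL.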
And you can code the events before even wiring real sounds into the game.
Then you can also organise your sounds into different collections/sound banks and load them when necessary (loaded by the sound manager). Keep in mind that music and sound files are heavy; you probably don't want to load every sound into memory at the start of the game.
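In Defold that on-demand loading is typically done with collection proxies. A sketch, assuming the sound manager has a proxy component (here named `#forest_bank_proxy`, a hypothetical name) pointing at a collection full of sound components:

```lua
-- sound_manager.script: load a sound bank only when it's needed
local function load_forest_bank(self)
    msg.post("#forest_bank_proxy", "async_load")
end

function on_message(self, message_id, message, sender)
    if message_id == hash("proxy_loaded") then
        -- the bank is now in memory; enable it so its sounds can be played
        msg.post(sender, "enable")
    end
end
```

Unloading (`msg.post("#forest_bank_proxy", "unload")`) frees the memory again when you leave that part of the game.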
Finally, having dedicated collections also makes it easier to import many sounds. I use Reaper for audio and export many files at once. Then I copy them into my project, use an editor script to create all the sound component files, then another editor script to insert them directly into a collection. I don't need to figure out which game object will use which sound.
And I define a kind of "naming convention" to identify sounds ("{soundbank}{soundname}{soundsequence}", where the sequence is for when I have several variations of the same effect, so I can randomize). Then I build a Lua file with a table enumerating all the sounds and linking each name to the actual URL of its sound component.
So in code you can require this module, reference the sound name with autocompletion, and send it with events to be played.
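The generated module could look something like this (the bank names, sound names, and paths are made up for the example; only the "{soundbank}{soundname}{soundsequence}" pattern is the real convention):

```lua
-- sounds.lua: generated by the editor script
-- key = "{soundbank}_{soundname}_{sequence}", value = URL of the sound component
return {
    fx_footstep_01 = "/sound_banks/fx#fx_footstep_01",
    fx_footstep_02 = "/sound_banks/fx#fx_footstep_02",
    ui_click_01    = "/sound_banks/ui#ui_click_01",
}
```

Elsewhere you `require` it and pass e.g. `sounds.fx_footstep_01` in the event payload. Since all variations of an effect share a prefix and only the sequence number differs, the sound manager can pick one of them at random when it receives the event.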
I don't know if this is best practice, but it sounds good enough for me.