> 1. Numbers are not assigned sequentially, creating non-equal numbers of participants in different versions.
Every time someone visits the launch page (or refreshes it while it's already open), a new number is returned. This happens regardless of whether that person ultimately starts the experiment or not. That's the likely reason you're seeing gaps, and at present there isn't anything that can be done to prevent it.
> 2. One number has been used for 9 different participants. How is this possible? They participated at different times and even on different days.
It shouldn't be possible, provided everything has been set up correctly. Whether that's the case unfortunately can't be answered in general terms; you would at least have to provide a link to the respective launch page.[1]
As a general matter, if you want full control over how many people are assigned to each of your conditions, the way to go is to do the condition assignment beforehand and then pass the respective ID in via URL query parameters (see the sketch below).
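In case it helps, here's a minimal sketch of what reading such a parameter might look like in the browser. The example URL and the parameter names (`participant`, `condition`) are just placeholders I made up for illustration; you'd use whatever names you encode when generating your launch links.

```ts
// Hypothetical launch link: https://example.org/my-study/?participant=17&condition=B
// (URL and parameter names are placeholders, not fixed by any framework)
const params = new URLSearchParams(window.location.search);

// Fall back to sentinel values so a missing parameter is easy to spot in the data.
const participantId = params.get('participant') ?? 'unknown';
const condition = params.get('condition') ?? 'unassigned';

// Record both values alongside the collected data, so every session can be
// traced back to the link (and therefore the condition) it was launched from.
console.log(`Running participant ${participantId} in condition ${condition}`);
```

Because you generate and hand out the links yourself, you can distribute exactly as many per condition as you need, and no IDs are used up by people who open the page but never start.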
[1] The only scenario I can think of that would lead to duplicate IDs is if you -- at some point after collecting some data -- took the experiment down and later re-uploaded it. ID generation would then restart from scratch, and duplicates would be likely to occur, particularly with sequential IDs.