Hello all,

My name is Flavio Figueiredo and I am using VirtualBox to develop a secure sandbox for a grid environment. The system has a Worker service that starts a new VM whenever an application needs to be executed. Only one execution at a time is allowed at a given Worker.
In order to control the VM that executes the application, I am using the `VBoxManage` command. The life cycle of an application in my system is detailed below:

1) A new application is submitted to a Worker.
2) The Worker copies the application to a shared folder.
3) The Worker starts the VM that will execute the application, using `VBoxManage startvm name`.
4) The VM executes the application in the shared folder and writes the results back to the same folder.
5) The VM is powered off with `VBoxManage controlvm name poweroff` (the disk being used is Immutable, so I believe there is no problem in halting the machine this way, and it is a necessity).
6) The Worker returns the results to the user and is ready for a new application.

In summary, that is what the system does. The problem I'm having is that after a `VBoxManage controlvm name poweroff` a new execution is queued at the Worker, and when `VBoxManage startvm name` is executed it sometimes fails, telling me that the machine is still powered on. To work around this I am using a busy-wait approach, only starting a new execution once `VBoxManage showvminfo name` reports that the machine is powered off. The problem is that even then the startvm command sometimes still tells me the machine is powered on. Apparently there is some race condition with the machine state, and to get around it I have to execute `VBoxManage startvm name` more than once (using a retry loop). Though I was able to work around the problem this way (a sketch is in the P.S.), I believe the described behaviour is not normal and is a bug.

--
[]s Flavio
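P.S. In case it helps anyone reproduce the issue, below is a minimal sketch (plain sh) of the busy-wait and retry workaround I described above. The VM name, polling limit and retry count are just placeholders, and the exact "powered off" wording printed by `VBoxManage showvminfo` may differ between VirtualBox versions, so the grep may need adjusting.

```sh
#!/bin/sh
# Sketch of the end of one execution and the start of the next, with the
# two workarounds described above. VM name, polling limit and retry count
# are placeholders.

VM_NAME="sandbox-vm"      # placeholder VM name
POLL_LIMIT=30             # max seconds to wait for "powered off"
START_RETRIES=5           # max attempts for startvm

# Busy-wait until showvminfo reports the machine as powered off.
# (The exact wording of the State line may vary between versions.)
wait_for_poweroff() {
    i=0
    while [ "$i" -lt "$POLL_LIMIT" ]; do
        if VBoxManage showvminfo "$VM_NAME" | grep -q "powered off"; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Retry loop around startvm, because the machine can still claim to be
# powered on for a short while even after the busy-wait above succeeds.
start_vm() {
    i=0
    while [ "$i" -lt "$START_RETRIES" ]; do
        if VBoxManage startvm "$VM_NAME"; then
            return 0
        fi
        sleep 2
        i=$((i + 1))
    done
    return 1
}

# End of one execution / start of the next:
VBoxManage controlvm "$VM_NAME" poweroff
wait_for_poweroff || echo "VM never reached 'powered off'" >&2
start_vm || echo "startvm still failing after retries" >&2
```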
_______________________________________________
vbox-users mailing list
[email protected]
http://vbox.innotek.de/mailman/listinfo/vbox-users
