> On 30 May 2017, at 08:31, Yaniv Kaul <yk...@redhat.com> wrote:
> 
> 
> 
> On Mon, May 29, 2017 at 2:25 PM, Andy Gibbs <andyg1...@hotmail.co.uk> wrote:
> On 29 May 2017 08:22, Sandro Bonazzola wrote:
> > Hi, so if I understood correctly, you're trying to work on a single host
> > deployment right?
> > Or are you just trying to replace the bare metal all-in-one 3.6 in a context
> > with more hosts?
> > If this is the case, can you share your use case? I'm asking because for
> > single host installations there are other solutions that may fit better than
> > oVirt, like virt-manager or kimchi (https://github.com/kimchi-project/kimchi).
> 
> Sandro, thank you for your reply.
> 
> I hadn't heard about kimchi before.  Virt-manager was discounted because its 
> user interface is not really friendly enough for non-technical people, which 
> is important for us.  oVirt's simple web interface, however, is excellent in 
> this regard.
> 
> I would say that the primary use-case is this: we want a server which 
> individual employees can log into (using their Active Directory logins) to 
> access company-wide "public" VMs, or to create their own private VMs for 
> their own use (if permitted).  Users should be able to start and stop the 
> "public" VMs but not be able to edit or delete them.  They should only have 
> full control over the VMs that they create for themselves.  And, very 
> importantly, it should be possible to say which users have the ability to 
> create their own VMs.  Nice to have would be the ability for users to share 
> their VMs with other users.  Really nice to have would be a way of detecting 
> whether a VM is in use by someone else before opening a console and stealing 
> it away from the current user!  (Actually, case in point: the user web 
> interface for oVirt 3.6 always opens a console to a VM when a user logs in, 
> if it is the only running VM that the user has access to.  I don't know if 
> this is fixed in 4.1, but our work-around is to have a dummy VM that always 
> runs and displays a graphic with helpful text for anyone who sees it!  A bit 
> of a nuisance, but not too bad.  We never found a way to disable this 
> behaviour.)
> 
> This sounds like a bug to me, if the guest agent is installed and running on 
> the guest.
> I'd appreciate it if you could open a bug with all the relevant details.

Nothing to do with the agent; it's rather the “connect automatically” checkbox 
per user.  Just uncheck it for that user.
You may also check out https://github.com/oVirt/ovirt-web-ui for a modern, 
simplified user portal.  It's not fully complete yet, and it's still missing 
this “connect automatically” functionality, so it's perfect for you :)

Thanks,
michal

> 
> 
> We started off some years ago with a server running oVirt 3.4 (now running 
> 3.6) with the all-in-one plugin, and have had good success with it.  The 
> hosted engine for oVirt 4.1 seemed to be the recommended "upgrade path" -- 
> although we did also start with entirely new server hardware.
> 
> Ultimately, once this first server is set up, we will want to convert the old 
> server hardware into a second node so that we can balance the load (we have a 
> number of very resource-hungry VMs).  This would be our secondary use-case.  
> More nodes may follow in future.  However, we don't see a particular need for 
> VMs that migrate from node to node, and each node will most likely have its 
> own storage domains for the VMs that run on it.  But having one central web 
> interface for managing the whole lot is a huge advantage.
> 
> Coming then to the storage issue raised in my original post: we are trying to 
> install this first server platform keeping the node, the hosted engine and 
> the storage all on one physical machine.  We don't (currently) want to set up 
> a separate storage server, and don't really see the benefit of doing so.  
> Since my first email, I have actually succeeded in getting the engine to 
> recognise the node's storage paths.  However, I'm not sure it is really the 
> right way.  The solution I found was to create a third path, 
> /srv/ovirt/engine, in addition to the data and iso paths.  The engine gets 
> installed to /srv/ovirt/engine, and once the engine is started up, I create a 
> new data domain at node:/srv/ovirt/data.  This adds the new path as the 
> master data domain; then, after thinking a bit to itself, the hosted_storage 
> data domain suddenly appears, and after a bit more thinking, everything seems 
> to get properly registered and works.  I can then also create the ISO storage 
> domain.
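> 
> For completeness, the NFS side of this is just three exports on the node 
> itself.  A minimal sketch of what I have (the export options and the 36:36 
> ownership are the ones usually suggested for oVirt NFS storage, so treat 
> them as assumptions rather than gospel):
> 
>     # /etc/exports -- all three storage domains live on the node itself
>     /srv/ovirt/engine  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>     /srv/ovirt/data    *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>     /srv/ovirt/iso     *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
> 
>     # vdsm runs as uid/gid 36 (vdsm:kvm); the engine rejects domains it
>     # cannot write to, so fix ownership and re-export
>     chown -R 36:36 /srv/ovirt
>     exportfs -ra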
> 
> Does this seem like a viable solution, or have I achieved something "illegal"?
> 
> Sounds a bit of a hack, but I don't see a good reason why it wouldn't work - 
> except perhaps firewalling issues.  It's certainly not a common or tested 
> scenario.
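> If it is firewalling, something along these lines on the node should rule it 
> out quickly (a sketch, assuming firewalld and NFS served from the node 
> itself):
> 
>     firewall-cmd --permanent --add-service=nfs
>     firewall-cmd --permanent --add-service=rpc-bind
>     firewall-cmd --permanent --add-service=mountd
>     firewall-cmd --reload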
>  
> 
> I am still not having much luck with my other problem(s) to do with 
> restarting the server: it still hangs on shutdown, and it still takes a very 
> long time (about ten minutes) after the node boots for the engine to come up.  
> Any help on this would be much appreciated.
> 
> Logs would be appreciated - engine.log, server.log, perhaps journal entries.  
> Perhaps there's a race between the NFS and Engine services?
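> If the node exports the storage to itself, one cheap way to test the race 
> theory is to order the hosted-engine HA agent after the local NFS server 
> (a sketch - the unit names are the usual ones on EL7, please verify them on 
> your host):
> 
>     # /etc/systemd/system/ovirt-ha-agent.service.d/10-wait-for-nfs.conf
>     [Unit]
>     After=nfs-server.service
>     Requires=nfs-server.service
> 
>     # then: systemctl daemon-reload, and reboot to test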
> Y.
>  
> 
> Thanks
> Andy

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
