Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Nir Soffer
- Original Message -
 From: Sven Kieske s.kie...@mittwald.de
 To: users@ovirt.org
 Sent: Friday, September 12, 2014 4:53:46 PM
 Subject: Re: [ovirt-users] [RFI] oVirt 3.6 Planning
 
 +1!
 
 I really would like to see a fast, working way to create, manage
 and use Ceph in oVirt. I don't know if implementation through an
 OpenStack component is enough, because you will need more components
 then (e.g. Keystone for auth).

Sure, you will need more components, but we don't want to duplicate the
effort by implementing Ceph support directly in oVirt. Using Cinder we can
reuse the existing Ceph support, and also gain support for other Cinder
drivers for free.

On the host side we will of course have Ceph-specific support, so you will
be able to create Ceph volumes directly and use the VDSM API to start VMs
using these volumes, if you want more direct support.

I think this is the general long-term approach: integrate OpenStack
components instead of reinventing the wheel.
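
For reference, a minimal sketch of the Cinder side of such a setup, assuming
a working Ceph cluster and the stock Cinder RBD driver (the backend name,
pool and user below are illustrative assumptions, not settings taken from
this thread):

  # /etc/cinder/cinder.conf (excerpt) -- example values
  [DEFAULT]
  enabled_backends = ceph

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder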

 
 On 12/09/14 15:34, Cédric Buot de l'Epine wrote:
  Dunno if CephFS efforts will be sustainable against GlusterFS...
  I'm not sure a POSIX fs is the best provider for blocks.
  I would be pleased to have a direct way to create a Ceph pool for a
  datacenter, then provide the librbd path for the guests (or only their size).
 
 --
 Mit freundlichen Grüßen / Regards
 
 Sven Kieske
 
 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Mohyedeen Nazzal
When a VM goes to unknown status, all control options are grayed out, and the
only way to fix this is to update the database manually. It would be nice to
be able to force the engine to power off the VM from the admin portal.
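
For context, the manual database update mentioned here typically means
resetting the VM's status row in the engine database; a hedged sketch,
assuming the 3.x engine schema (vm_dynamic.status, with 0 meaning Down) and
a VM named 'myvm', both of which should be verified first:

  # stop the engine before touching its database
  service ovirt-engine stop
  su - postgres -c "psql engine -c \"UPDATE vm_dynamic SET status = 0 \
    WHERE vm_guid = (SELECT vm_guid FROM vm_static WHERE vm_name = 'myvm');\""
  service ovirt-engine start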

Thanks,
Mohyedeen.

On Sat, Sep 13, 2014 at 12:27 PM, Nir Soffer nsof...@redhat.com wrote:

 - Original Message -
  From: Sven Kieske s.kie...@mittwald.de
  To: users@ovirt.org
  Sent: Friday, September 12, 2014 4:53:46 PM
  Subject: Re: [ovirt-users] [RFI] oVirt 3.6 Planning
 
  +1!
 
  I really would like to see a fast, working way to create, manage
  and use Ceph in oVirt. I don't know if implementation through an
  OpenStack component is enough, because you will need more components
  then (e.g. Keystone for auth).

 Sure, you will need more components, but we don't want to duplicate the
 effort by implementing Ceph support directly in oVirt. Using Cinder we can
 reuse the existing Ceph support, and also gain support for other Cinder
 drivers for free.

 On the host side we will of course have Ceph-specific support, so you will
 be able to create Ceph volumes directly and use the VDSM API to start VMs
 using these volumes, if you want more direct support.

 I think this is the general long-term approach: integrate OpenStack
 components instead of reinventing the wheel.

 
  On 12/09/14 15:34, Cédric Buot de l'Epine wrote:
   Dunno if CephFS efforts will be sustainable against GlusterFS...
   I'm not sure a POSIX fs is the best provider for blocks.
   I would be pleased to have a direct way to create a Ceph pool for a
   datacenter, then provide the librbd path for the guests (or only their
   size).
 
  --
  Mit freundlichen Grüßen / Regards
 
  Sven Kieske
 
  Systemadministrator
  Mittwald CM Service GmbH & Co. KG
  Königsberger Straße 6
  32339 Espelkamp
  T: +49-5772-293-100
  F: +49-5772-293-333
  https://www.mittwald.de
  Geschäftsführer: Robert Meyer
  St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
 Oeynhausen
  Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
 Oeynhausen
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Mohyedeen Nazzal
One more thing:

Being able to attach USB dongles from the admin portal...

Thanks,
Mohyedeen.

On Sat, Sep 13, 2014 at 3:32 PM, Mohyedeen Nazzal 
mohyedeen.naz...@gmail.com wrote:

 When a VM goes to unknown status, all control options are grayed out, and
 the only way to fix this is to update the database manually. It would be
 nice to be able to force the engine to power off the VM from the admin portal.

 Thanks,
 Mohyedeen.

 On Sat, Sep 13, 2014 at 12:27 PM, Nir Soffer nsof...@redhat.com wrote:

 - Original Message -
  From: Sven Kieske s.kie...@mittwald.de
  To: users@ovirt.org
  Sent: Friday, September 12, 2014 4:53:46 PM
  Subject: Re: [ovirt-users] [RFI] oVirt 3.6 Planning
 
  +1!
 
  I really would like to see a fast, working way to create, manage
  and use Ceph in oVirt. I don't know if implementation through an
  OpenStack component is enough, because you will need more components
  then (e.g. Keystone for auth).

 Sure, you will need more components, but we don't want to duplicate the
 effort by implementing Ceph support directly in oVirt. Using Cinder we can
 reuse the existing Ceph support, and also gain support for other Cinder
 drivers for free.

 On the host side we will of course have Ceph-specific support, so you will
 be able to create Ceph volumes directly and use the VDSM API to start VMs
 using these volumes, if you want more direct support.

 I think this is the general long-term approach: integrate OpenStack
 components instead of reinventing the wheel.

 
  On 12/09/14 15:34, Cédric Buot de l'Epine wrote:
   Dunno if CephFS efforts will be sustainable against GlusterFS...
   I'm not sure a POSIX fs is the best provider for blocks.
   I would be pleased to have a direct way to create a Ceph pool for a
   datacenter, then provide the librbd path for the guests (or only their
   size).
 
  --
  Mit freundlichen Grüßen / Regards
 
  Sven Kieske
 
  Systemadministrator
  Mittwald CM Service GmbH & Co. KG
  Königsberger Straße 6
  32339 Espelkamp
  T: +49-5772-293-100
  F: +49-5772-293-333
  https://www.mittwald.de
  Geschäftsführer: Robert Meyer
  St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
 Oeynhausen
  Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
 Oeynhausen
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Nicolas Ecarnot

On 12/09/2014 14:22, Itamar Heim wrote:

With oVirt 3.5 nearing GA, time to ask for what do you want to see in
oVirt 3.6?

Thanks,
Itamar
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


For God's sake, please, prevent secondary storage domains (anything
other than the master, especially ISO and export) from completely blocking
the whole thing when they are unavailable.
Too many users have suffered from this in the two years I have participated
in this mailing list (including me).


--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] : password authentication failed for user engine

2014-09-13 Thread Ramon Ramirez-Linan
Navteca
Ramon Ramirez-Linan
C: 301.789.9139
O: 202.505.1553
www.navteca.com

6301 Ivy Lane, Greenbelt, MD 20770
Suite 700

On Sat, Sep 13, 2014 at 10:32 AM, Ramon Ramirez-Linan rli...@navteca.com
wrote:

 Hello,

 New to the list and relatively new to oVirt

 I am trying to deploy the all-in-one setup for oVirt 3.4, but the
 engine-setup script gets stuck in the process of creating the DB schema:

 Creating PostgreSQL 'engine' database
 [ INFO  ] Configuring PostgreSQL
 [ INFO  ] Creating Engine database schema
 [ ERROR ] Failed to execute stage 'Misc configuration': Command
 '/usr/share/ovirt-engine/dbscripts/create_schema.sh' failed to execute
 [ INFO  ] Yum Performing yum transaction rollback
 [ INFO  ] Rolling back database schema



-

OperationalError: FATAL:  password authentication failed for user "engine"
FATAL:  password authentication failed for user "engine"



 Any suggestions would be greatly appreciated.

 Thanks

 Rezuma

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Nir Soffer
- Original Message -
 From: Nicolas Ecarnot nico...@ecarnot.net
 To: users@ovirt.org
 Sent: Saturday, September 13, 2014 4:53:03 PM
 Subject: Re: [ovirt-users] [RFI] oVirt 3.6 Planning
 
 On 12/09/2014 14:22, Itamar Heim wrote:
  With oVirt 3.5 nearing GA, time to ask for what do you want to see in
  oVirt 3.6?
 
  Thanks,
  Itamar
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 For God's sake, please, prevent secondary storage domains (anything
 other than the master, especially ISO and export) from completely blocking
 the whole thing when they are unavailable.
 Too many users have suffered from this in the two years I have participated
 in this mailing list (including me).

I think this has been true for ISO and export domains for some time.

It will not work for data domains: if you have VMs with disks on what you
call a secondary data domain, how would you migrate these VMs to a host that
cannot see the secondary domain?

In 3.6, there will be no master domain, so any data domain will be as
important as any other data domain.

Maybe what you would like is to have more control over which domains are
critical and which are not. A domain which you mark as non-critical, or
secondary, will not cause the host to become non-operational when the host
cannot see this domain.

So you would not be able to migrate some VMs to a host that cannot see the
secondary domain, but since *you* marked it as secondary, it is not a problem
for you.

What do you think?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] feature review - ReportGuestDisksLogicalDeviceName

2014-09-13 Thread Dan Kenigsberg
On Wed, Sep 03, 2014 at 08:50:12AM +0200, Michal Skrivanek wrote:
 
 On Sep 2, 2014, at 21:03 , Nir Soffer nsof...@redhat.com wrote:
 
  - Original Message -
  From: Michal Skrivanek mskri...@redhat.com
  To: Liron Aravot lara...@redhat.com, Dan Kenigsberg 
  dan...@redhat.com, Federico Simoncelli
  fsimo...@redhat.com, Vinzenz Feenstra vfeen...@redhat.com
  Cc: users@ovirt.org, de...@ovirt.org
  Sent: Tuesday, September 2, 2014 3:29:57 PM
  Subject: Re: [ovirt-devel] feature review - 
  ReportGuestDisksLogicalDeviceName
  
  
  On Sep 2, 2014, at 13:11 , Liron Aravot lara...@redhat.com wrote:
  
  
  
  - Original Message -
  From: Federico Simoncelli fsimo...@redhat.com
  To: de...@ovirt.org
  Cc: Liron Aravot lara...@redhat.com, users@ovirt.org,
  smizr...@redhat.com, Michal Skrivanek
  mskri...@redhat.com, Vinzenz Feenstra vfeen...@redhat.com, Allon
  Mureinik amure...@redhat.com, Dan
  Kenigsberg dan...@redhat.com
  Sent: Tuesday, September 2, 2014 12:50:28 PM
  Subject: Re: feature review - ReportGuestDisksLogicalDeviceName
  
  - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: Liron Aravot lara...@redhat.com
  Cc: users@ovirt.org, de...@ovirt.org, smizr...@redhat.com,
  fsimo...@redhat.com, Michal Skrivanek
  mskri...@redhat.com, Vinzenz Feenstra vfeen...@redhat.com, Allon
  Mureinik amure...@redhat.com
  Sent: Monday, September 1, 2014 11:23:45 PM
  Subject: Re: feature review - ReportGuestDisksLogicalDeviceName
  
  On Sun, Aug 31, 2014 at 07:20:04AM -0400, Liron Aravot wrote:
   Feel free to review the following feature.
  
  http://www.ovirt.org/Features/ReportGuestDisksLogicalDeviceName
  
   Thanks for posting this feature page. Two things worry me about this
   feature. The first is timing: it is not reasonable to suggest an API
   change and expect it to get into ovirt-3.5.0. We are too late anyway.
  
   The other one is the suggested API. You suggest placing volatile and
   optional information in getVMList. It won't be the first time that we
   have it (guestIPs, guestFQDN, clientIP, and displayIP are there) but
   it's foreign to the notion of conf reported by getVMList() - the set
   of parameters needed to recreate the VM.
  
   The fact is that today we return guest information in list(Full=true); we
   decided on its notion, and it seems like we already made up our minds when
   guest info was added there :) . I don't see any harm in returning the disk
   mapping there, and if we want to extract the guest info out, we can
   extract all of it in a later version (4?) without the need for BC. Having
   the information spread between different verbs is no better IMO.
  
   At first sight this seems like something belonging to getVmStats (which
   already reports other guest agent information).
  
  
   Fede, I've mentioned in the wiki that getVmStats is called by the engine
   every few seconds, and therefore that info wasn't added there but to
   list(), which is called only when the hash changes. If everyone is in for
   that simple solution I'm fine with it, but Michal/Vinzenz preferred it
   this way.
  
   yes, that was the main reason Vinzenz and I suggested to use list(). 15s
   is a reasonable compromise, IMHO.
   And since it's also reported by the guest agent in a similar manner (and
   actually via the same vdsm-ga API call) as other guest information, I
   think it should sit alongside guestIPs, FQDN, etc…
  
  Maybe not the best place, but I would leave that for a bigger discussion
  if/when we want to refactor reporting of the guest agent information
  
  Given that we are late, adding disk mapping where other guest info is
  in a backward compatible way looks reasonable.
  
   Did you consider adding a new verb for getting guest information? This
   verb can be called once after starting/recovering a VM, and then only when
   the guest info hash changes (like the XML hash).
 
 yes, it was considered but turned down.
 The main reason is the additional overhead and pollution of the API for a
 tiny little item which is in the same group of guest-agent-reported items.
 It doesn't make sense to introduce an inconsistency for just this one item.
 Refactoring the frequency of what we report is indeed long overdue, but we
 shouldn't start here… (the first and most offending item is still the
 application list)

Ok, lacking another alternative, let's dump the maps to list().
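
Once merged, the mapping should then surface through the same call path as
the other guest-agent fields, so it can be inspected on a host with the
usual verb (a sketch; the exact fields in the output are version-dependent):

  # list the running VMs with full details, including guest-agent data
  vdsClient -s 0 list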
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] : password authentication failed for user engine

2014-09-13 Thread Alon Bar-Lev

If this is an attempt to install over a previous installation, please try to
remove /etc/ovirt-engine/engine.conf.d/*.conf, and run setup again.
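
A minimal sketch of that recovery sequence (the backup step is an addition
here for safety, not part of Alon's instruction):

  # keep a copy of the generated config before removing it
  cp -a /etc/ovirt-engine/engine.conf.d /root/engine.conf.d.bak
  rm -f /etc/ovirt-engine/engine.conf.d/*.conf
  engine-setup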

- Original Message -
 From: Ramon Ramirez-Linan rli...@navteca.com
 To: users@ovirt.org
 Sent: Saturday, September 13, 2014 5:51:23 PM
 Subject: Re: [ovirt-users] : password authentication failed for user engine
 
 
 
 
 Navteca
 Ramon Ramirez-Linan
 C: 301.789.9139
 O: 202.505.1553
 www.navteca.com
 
 6301 Ivy Lane, Greenbelt, MD 20770
 Suite 700
 
 
 
 On Sat, Sep 13, 2014 at 10:32 AM, Ramon Ramirez-Linan  rli...@navteca.com 
 wrote:
 
 
 
 Hello,
 
 New to the list and relatively new to oVirt
 
 I am trying to deploy the all-in-one setup for oVirt 3.4, but the
 engine-setup script gets stuck in the process of creating the DB schema:
 
 Creating PostgreSQL 'engine' database
 [ INFO ] Configuring PostgreSQL
 [ INFO ] Creating Engine database schema
 [ ERROR ] Failed to execute stage 'Misc configuration': Command
 '/usr/share/ovirt-engine/dbscripts/create_schema.sh' failed to execute
 [ INFO ] Yum Performing yum transaction rollback
 [ INFO ] Rolling back database schema
 
 
 
 
 -
 
 OperationalError: FATAL: password authentication failed for user "engine"
 FATAL: password authentication failed for user "engine"
 
 
 Any suggestions would be greatly appreciated.
 
 Thanks
 
 Rezuma
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Nicolas Ecarnot

On 13/09/2014 17:34, Nir Soffer wrote:

For God's sake, please, prevent secondary storage domains (anything
other than the master, especially ISO and export) from completely blocking
the whole thing when they are unavailable.
Too many users have suffered from this in the two years I have participated
in this mailing list (including me).


I think this has been true for ISO and export domains for some time.


In 3.4 at least, this is still true and is a problem.
If we could at least get rid of this and just mark these ISO and export
domains as unusable, and still do every other unrelated operation,
that would be very useful.



It will not work for data domains: if you have VMs with disks on what you
call a secondary data domain, how would you migrate these VMs to a host that
cannot see the secondary domain?

In 3.6, there will be no master domain, so any data domain will be as
important as any other data domain.

Maybe what you would like is to have more control over which domains are
critical and which are not. A domain which you mark as non-critical, or
secondary, will not cause the host to become non-operational when the host
cannot see this domain.

So you would not be able to migrate some VMs to a host that cannot see the
secondary domain, but since *you* marked it as secondary, it is not a problem
for you.

What do you think?


This proposition is perfect.
I don't know whether being able to mark some storage domains as more
precious than others will still be useful once their absence is no longer
a blocker?




Nir




--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Nir Soffer
- Original Message -
 From: Nicolas Ecarnot nico...@ecarnot.net
 To: Nir Soffer nsof...@redhat.com
 Cc: users@ovirt.org, Federico Simoncelli fsimo...@redhat.com
 Sent: Saturday, September 13, 2014 9:24:55 PM
 Subject: Re: [ovirt-users] [RFI] oVirt 3.6 Planning
 
 On 13/09/2014 17:34, Nir Soffer wrote:
  For God's sake, please, prevent secondary storage domains (anything
  other than the master, especially ISO and export) from completely blocking
  the whole thing when they are unavailable.
  Too many users have suffered from this in the two years I have participated
  in this mailing list (including me).
 
  I think this has been true for ISO and export domains for some time.
 
 In 3.4 at least, this is still true and is a problem.
 If we could at least get rid of this and just mark these ISO and export
 domains as unusable, and still do every other unrelated operation,
 that would be very useful.
 
  It will not work for data domains: if you have VMs with disks on what you
  call a secondary data domain, how would you migrate these VMs to a host
  that cannot see the secondary domain?
 
  In 3.6, there will be no master domain, so any data domain will be as
  important as any other data domain.
 
  Maybe what you would like is to have more control over which domains are
  critical and which are not. A domain which you mark as non-critical, or
  secondary, will not cause the host to become non-operational when the host
  cannot see this domain.
 
  So you would not be able to migrate some VMs to a host that cannot see the
  secondary domain, but since *you* marked it as secondary, it is not a
  problem for you.
 
  What do you think?
 
 This proposition is perfect.
 I don't know whether being able to mark some storage domains as more
 precious than others will still be useful once their absence is no longer
 a blocker?

The idea is to let you define which storage domains are required and
which are not; let's call the latter optional.

If a required storage domain cannot be seen, the host will become
non-operational.

If an optional storage domain cannot be seen, you will get a warning
but the host will function normally.

If you try to migrate a VM to a host which cannot see the storage
domain used by that VM, the operation will fail.

When a required storage domain is down, you would be able to
change it to optional, and continue to work with the other
domains in degraded mode. Some VMs will not be able to run, but
other VMs that do not depend on the problem domain will not be
affected.

I hope that I understood your question correctly.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Latest 3.5 - No Enable Gluster Service check box.

2014-09-13 Thread Nathan Stratton
Latest 3.5 install as of today. I already have Gluster up and running on
all hosts and I am trying to import my Gluster volume into oVirt. When I go
to edit the cluster I see Enable Gluster Service, but there is no check
box next to it to check or uncheck.

If I look at the host under Hosts, I do see:

GlusterFS Version: glusterfs-3.5.2-1.el6
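
One cause worth ruling out for a missing or disabled Gluster checkbox is an
engine installed in virt-only application mode; a hedged sketch of the usual
check and fix (the ApplicationMode option and the 255 all-modes value are
recalled from the 3.x documentation and should be verified for your build):

  # show the current mode, then enable both virt and gluster modes
  engine-config -g ApplicationMode
  engine-config -s ApplicationMode=255
  service ovirt-engine restart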


nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Itamar Heim

On 09/12/2014 04:45 PM, Federico Alberto Sayd wrote:

On 12/09/14 09:55, Jakub Bittner wrote:

ISO upload over web UI.

+1. Is it so hard to implement such a feature?


well, the tricky part is that the web UI talks to the engine, which doesn't
access the storage (the hosts do), so you need to stream the ISO via the
engine. The good news is that VDSM now has upload/download APIs, which should
hopefully pave the way for this to materialize.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Itamar Heim

On 09/12/2014 04:53 PM, Sven Kieske wrote:

+1!

I really would like to see a fast, working way to create, manage
and use Ceph in oVirt. I don't know if implementation through an
OpenStack component is enough, because you will need more components
then (e.g. Keystone for auth).


you would need to set up Cinder with noauth or Keystone, yes; similar to
the current Glance and Neutron support.
The Neutron virtual appliance in 3.5 should have Neutron with a local
Keystone already configured out of the box, IIRC.
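
The noauth variant amounts to a one-line Cinder setting; a sketch (the
option name is as recalled from 2014-era Cinder and is an assumption to
verify):

  # /etc/cinder/cinder.conf (excerpt)
  [DEFAULT]
  auth_strategy = noauth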




On 12/09/14 15:34, Cédric Buot de l'Epine wrote:

Dunno if CephFS efforts will be sustainable against GlusterFS...
I'm not sure a POSIX fs is the best provider for blocks.
I would be pleased to have a direct way to create a Ceph pool for a
datacenter, then provide the librbd path for the guests (or only their size).




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Itamar Heim

On 09/13/2014 03:32 PM, Mohyedeen Nazzal wrote:

When a VM goes to unknown status, all control options are grayed out, and
the only way to fix this is to update the database manually. It would be nice
to be able to force the engine to power off the VM from the admin portal.


'unknown' means the host is in a non-responsive state.
the correct approach is to correct the status of the host: stop it via
the power management options ('fence'), or 'confirm manual shutdown' of
the host.

either should release the 'unknown' status of the VM.

for extreme cases (aka 'bugs'), there is a command-line unlock utility,
to be used with caution.
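
The utility in question is presumably the unlock script shipped with the
engine's dbutils; a sketch from memory (the path and flags are assumptions,
so check its --help and back up the database before running it):

  # release a stuck lock on a single VM (run on the engine host)
  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm -u engine myvm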




Thanks,
Mohyedeen.

On Sat, Sep 13, 2014 at 12:27 PM, Nir Soffer nsof...@redhat.com wrote:

- Original Message -
 From: Sven Kieske s.kie...@mittwald.de
 To: users@ovirt.org
 Sent: Friday, September 12, 2014 4:53:46 PM
 Subject: Re: [ovirt-users] [RFI] oVirt 3.6 Planning

 +1!

 I really would like to see a fast, working way to create, manage
 and use Ceph in oVirt. I don't know if implementation through an
 OpenStack component is enough, because you will need more components
 then (e.g. Keystone for auth).

Sure, you will need more components, but we don't want to duplicate the
effort by implementing Ceph support directly in oVirt. Using Cinder we can
reuse the existing Ceph support, and also gain support for other Cinder
drivers for free.

On the host side we will of course have Ceph-specific support, so you will
be able to create Ceph volumes directly and use the VDSM API to start VMs
using these volumes, if you want more direct support.

I think this is the general long-term approach: integrate OpenStack
components instead of reinventing the wheel.

 
  On 12/09/14 15:34, Cédric Buot de l'Epine wrote:
   Dunno if CephFS efforts will be sustainable against GlusterFS...
   I'm not sure a POSIX fs is the best provider for blocks.
   I would be pleased to have a direct way to create a Ceph pool for a
   datacenter, then provide the librbd path for the guests (or only
   their size).
 
  --
  Mit freundlichen Grüßen / Regards
 
  Sven Kieske
 
  Systemadministrator
  Mittwald CM Service GmbH & Co. KG
  Königsberger Straße 6
  32339 Espelkamp
  T: +49-5772-293-100
  F: +49-5772-293-333
  https://www.mittwald.de
  Geschäftsführer: Robert Meyer
  St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
Oeynhausen
  Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
Oeynhausen
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
 Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Itamar Heim

On 09/13/2014 03:38 PM, Mohyedeen Nazzal wrote:

One more thing:

Being able to attach USB dongles from the admin portal...


like the hostusb vdsm hook allows?



Thanks,
Mohyedeen.

On Sat, Sep 13, 2014 at 3:32 PM, Mohyedeen Nazzal
mohyedeen.naz...@gmail.com wrote:

When a VM goes to unknown status, all control options are grayed out, and
the only way to fix this is to update the database manually. It would be
nice to be able to force the engine to power off the VM from the admin
portal.

Thanks,
Mohyedeen.

On Sat, Sep 13, 2014 at 12:27 PM, Nir Soffer nsof...@redhat.com wrote:

- Original Message -
 From: Sven Kieske s.kie...@mittwald.de
 To: users@ovirt.org
 Sent: Friday, September 12, 2014 4:53:46 PM
 Subject: Re: [ovirt-users] [RFI] oVirt 3.6 Planning

 +1!

 I really would like to see a fast, working way to create, manage
 and use Ceph in oVirt. I don't know if implementation through an
 OpenStack component is enough, because you will need more components
 then (e.g. Keystone for auth).

Sure, you will need more components, but we don't want to duplicate the
effort by implementing Ceph support directly in oVirt. Using Cinder we can
reuse the existing Ceph support, and also gain support for other Cinder
drivers for free.

On the host side we will of course have Ceph-specific support, so you will
be able to create Ceph volumes directly and use the VDSM API to start VMs
using these volumes, if you want more direct support.

I think this is the general long-term approach: integrate OpenStack
components instead of reinventing the wheel.

 
  On 12/09/14 15:34, Cédric Buot de l'Epine wrote:
   Dunno if CephFS efforts will be sustainable against GlusterFS...
   I'm not sure a POSIX fs is the best provider for blocks.
   I would be pleased to have a direct way to create a Ceph pool for a
   datacenter, then provide the librbd path for the guests (or only
   their size).
 
  --
  Mit freundlichen Grüßen / Regards
 
  Sven Kieske
 
  Systemadministrator
  Mittwald CM Service GmbH & Co. KG
  Königsberger Straße 6
  32339 Espelkamp
  T: +49-5772-293-100
  F: +49-5772-293-333
  https://www.mittwald.de
  Geschäftsführer: Robert Meyer
  St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG
Bad Oeynhausen
  Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG
Bad Oeynhausen
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
 Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Nicolas Ecarnot

On 13/09/2014 20:38, Nir Soffer wrote:

- Original Message -

From: Nicolas Ecarnot nico...@ecarnot.net
To: Nir Soffer nsof...@redhat.com
Cc: users@ovirt.org, Federico Simoncelli fsimo...@redhat.com
Sent: Saturday, September 13, 2014 9:24:55 PM
Subject: Re: [ovirt-users] [RFI] oVirt 3.6 Planning

On 13/09/2014 17:34, Nir Soffer wrote:

For God's sake, please, prevent secondary storage domains (anything
other than the master, especially ISO and export) from completely blocking
the whole thing when they are unavailable.
Too many users have suffered from this in the two years I have participated
in this mailing list (including me).


I think this has been true for ISO and export domains for some time.


In 3.4 at least, this is still true and is a problem.
If we could at least get rid of this and just mark these ISO and export
domains as unusable, and still do every other unrelated operation,
that would be very useful.


It will not work for data domains: if you have VMs with disks on what you
call a secondary data domain, how would you migrate these VMs to a host
that cannot see the secondary domain?

In 3.6, there will be no master domain, so any data domain will be as
important as any other data domain.

Maybe what you would like is to have more control over which domains are
critical and which are not. A domain which you mark as non-critical, or
secondary, will not cause the host to become non-operational when the host
cannot see this domain.

So you would not be able to migrate some VMs to a host that cannot see the
secondary domain, but since *you* marked it as secondary, it is not a
problem for you.

What do you think?


This proposition is perfect.
I don't know whether being able to mark some storage domains as more
precious than others will still be useful once their absence is no longer
a blocker?


The idea is to let you define which storage domains are required and
which are not; let's call the latter optional.

If a required storage domain cannot be seen, the host will become
non-operational.

If an optional storage domain cannot be seen, you will get a warning
but the host will function normally.

If you try to migrate a VM to a host which cannot see the storage
domain used by that VM, the operation will fail.

When a required storage domain is down, you would be able to
change it to optional, and continue to work with the other
domains in degraded mode. Some VMs will not be able to run, but
other VMs that do not depend on the problem domain will not be
affected.

I hope that I understood your question correctly.

Nir



You did, and the features you described seem great.
Can't wait to see them released!

--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Sven Kieske

On 13.09.2014 22:25, Itamar Heim wrote:
 like the hostusb vdsm hook allows?
is this installed by default yet?
Furthermore, I think you need to enable it via engine-config (custom
properties), don't you?

I guess what is asked for is something working out of the box.
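
For the record, the 3.x-era way to expose the hook's custom property; a
sketch (the property name and regex follow the hostusb hook's README as
recalled here, and note that -s overwrites any existing
UserDefinedVMProperties value, so merge with care):

  # register the hostusb custom property, then restart the engine
  engine-config -s 'UserDefinedVMProperties=hostusb=^0x[0-9a-fA-F]{4}:0x[0-9a-fA-F]{4}$'
  service ovirt-engine restart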

kind regards

Sven
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-13 Thread Paul Jansen
My vote is for storage load balancing/scheduling.
VMware's vCenter has the concept of a 'storage cluster': essentially a
logical storage device.
When you configure hosts/VMs to use this device, vCenter works out which of
the actual storage devices underneath this logical device it will send the
storage requests to.
This works out as a basic form of load balancing by alternating where the
storage for new VMs is created.
This isn't particularly amazing, but what it does allow (with the highest-end
vCenter licensing, anyway) is what VMware calls 'storage distributed resource
scheduling'.
Much like we already have a scheduler that moves the execution of VMs around
on hosts based on load, this does the same thing for the storage component of
a VM.
Imagine having two configured storage locations under a 'storage cluster' and
then having the ability to put one of the storage locations into 'maintenance
mode'. The storage scheduler would then live-storage-migrate all the storage
for VMs over to the other storage location, allowing the first storage
location to be taken down for maintenance.
This approach also allows storage to scale over time as more is added. The
'storage scheduler' can take inputs such as latency into account and manage
the load across the 'storage cluster' to balance things out and make smart
decisions so that the available storage is utilized as well as it can be
(i.e. not overloading one storage location while the other is mostly idle).


I've done a bit of searching to see where oVirt might be up to in this
regard, and what I've found seems to indicate that we are not anywhere near
this capability just yet.
An important prerequisite is having the hosts able to actually do a live
storage migration. EL7-based hosts under oVirt 3.5 have this, as have Fedora
19 and 20 hosts.
If the decision is made to use qemu-kvm-rhev on EL6 hosts, as has been talked
about recently, then the host requirement for supporting live storage
migration will be met. This then allows the idea of a storage scheduler to be
further considered.

I think this is an important step in reaching feature parity with VMware's
vCenter product, and it removes a key reason oVirt/RHEV can't be considered
in some instances.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] lost connectivty to the ovirt-engine after adding host

2014-09-13 Thread Ramon Ramirez-Linan
Hi,

I am working on a proof of concept using oVirt and ManageIQ.

I have a CentOS 6.5 host where I deployed libvirt, and I created a VM where
I deployed oVirt 3.4. I was trying to add the host where the VM with oVirt
resides. After much troubleshooting I was able to add the host, but now the
oVirt VM is owned by a user vdsm whose password I don't know, so I can't
start the VM with oVirt.

This is the result of running virsh list

$ virsh list
Please enter your authentication name: vdsm@ovirt
Please enter your password:
error: Failed to reconnect to the hypervisor
error: no valid connection
error: authentication failed: authentication failed
 

I added another user with the command

saslpasswd2 -a libvirt [username]


I used that username and password to connect and to do

virsh list --all


That shows the VM, but if I try to start it, it gives me an error like the following:

error: Failed to start domain Cent6.5
error: internal error Process exited while reading console log output: char
device redirected to /dev/pts/1
qemu-kvm: -drive
file=/ovirt/test/virtlive01/miq_demo/cent01.img,if=none,id=drive-virtio-disk0,format=raw,cache=none:
could not open disk image /ovirt/test/virtlive01/miq_demo/cent01.img:
Permission denied


Any ideas how I can start that VM? Is there a default password for the user
vdsm?
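
For what it's worth, the 'Permission denied' part is usually plain
ownership: once VDSM manages libvirt, qemu runs as vdsm:kvm (uid/gid 36:36
on oVirt hosts; verify with 'id vdsm', as that mapping is an assumption
here). A sketch using the paths from the error above:

  # make the disk image readable by the qemu process vdsm starts
  chown 36:36 /ovirt/test/virtlive01/miq_demo/cent01.img
  chmod 0660 /ovirt/test/virtlive01/miq_demo/cent01.img
  virsh -c qemu:///system start Cent6.5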
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users