Re: [vdsm] question about faqemu hook

2014-02-25 Thread Andrew Cathrow


- Original Message -
 Hey,
 
 does someone know why we set the memory size value to 20480 (20MB) in
 the before_vm_start hook when using fake qemu?
 

enough so it starts, but not enough to consume too many resources
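
[For context, a before_vm_start hook is just an executable dropped under
/usr/libexec/vdsm/hooks/before_vm_start/ that edits the domain XML vdsm
hands it. Below is a minimal sketch in the spirit of the faqemu hook,
using vdsm's hooking module -- not the actual hook source, just an
illustration of how the 20480 KiB cap would be applied:

    #!/usr/bin/python
    # Sketch of a before_vm_start hook that caps guest memory so many
    # fake VMs can run on one host without exhausting it.
    import hooking

    FAKE_MEM_KIB = '20480'  # 20 MiB, in KiB as libvirt domain XML expects

    domxml = hooking.read_domxml()  # minidom document of the domain XML
    for tag in ('memory', 'currentMemory'):
        for node in domxml.getElementsByTagName(tag):
            node.firstChild.nodeValue = FAKE_MEM_KIB
    hooking.write_domxml(domxml)    # hand the edited XML back to vdsm
]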


 thanks.
 
 --
 Yaniv Bronhaim.
 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Andrew Cathrow
 - Forwarded Message -
  From: Itamar Heim ih...@redhat.com
  To: Sahina Bose sab...@redhat.com
  Cc: engine-devel engine-de...@ovirt.org, VDSM Project
  Development vdsm-devel@lists.fedorahosted.org
  Sent: Wednesday, August 7, 2013 1:30:54 PM
  Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage
  Domain
  
  On 08/07/2013 08:21 AM, Sahina Bose wrote:
   [Adding engine-devel]
  
   On 08/06/2013 10:48 AM, Deepak C Shetty wrote:
   Hi All,
   There were 2 learnings from BZ
   https://bugzilla.redhat.com/show_bug.cgi?id=988299
  
   1) Gluster RPM deps were not proper in VDSM when using Gluster
   Storage
   Domain. This has been partly addressed
   by the gluster-devel thread @
   http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg8.html
   and will be fully addressed once Gluster folks ensure their packaging
   is friendly enough for VDSM to consume just the needed bits. Once
   that happens, I will be sending a patch to vdsm.spec.in to update the
   gluster deps correctly. So this issue gets addressed in the near term.
  
   2) Gluster storage domain needs a minimum of libvirt 1.0.1 and qemu 1.3.
  
   libvirt 1.0.1 has support for representing gluster as a network
   block device, and qemu 1.3 has native support for the gluster block
   backend, which supports the gluster://... URI way of representing a
   gluster-based file (aka volume/vmdisk in the VDSM case). Many distros
   (incl. CentOS 6.4 in the BZ) won't have qemu 1.3 in their distro
   repos! How do we handle this dep in VDSM ?
  
   Do we disable the gluster storage domain in oVirt engine if VDSM
   reports qemu < 1.3 as part of getCapabilities ?
   or
   Do we ensure qemu 1.3 is present in ovirt.repo, assuming ovirt.repo
   is always present on VDSM hosts, in which case when VDSM gets
   installed, the qemu 1.3 dep in vdsm.spec.in will install qemu 1.3
   from the ovirt.repo instead of the distro repo. This means
   vdsm.spec.in will have qemu >= 1.3 under Requires.
  
   Is it possible to make this a conditional install? That is, only if
   Storage Domain = GlusterFS in the data center, the bootstrapping of
   the host will install qemu 1.3 and dependencies.
  
   (The question still remains as to where the qemu 1.3 rpms will be
   available)

RHEL 6.5 (and so CentOS 6.5) will get backported libgfapi support, so we 
shouldn't need to require qemu 1.3, just the appropriate qemu-kvm version 
from 6.5

https://bugzilla.redhat.com/show_bug.cgi?id=848070
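
[For reference, one way the distro-conditional dependency described here
could be expressed in vdsm.spec.in -- a sketch only; the version strings
are placeholders, and the EL6 one stands in for whatever 6.5 actually
ships:

    %if 0%{?rhel}
    # EL6: the 6.5 qemu-kvm carries the libgfapi backport at 0.12.x
    Requires: qemu-kvm >= 2:0.12.1.2-2.355
    %else
    # Fedora: native gluster support arrived upstream in qemu 1.3
    Requires: qemu-kvm >= 2:1.3
    %endif
]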

  
  
  hosts are usually installed prior to storage domain definition.
  we need to find a solution to having a qemu >= 1.3 for .el6 (or
  another version of qemu with this feature set).
  

 
   What will be a good way to handle this ?
   Appreciate your response
  
   thanx,
   deepak
  
   ___
   vdsm-devel mailing list
   vdsm-devel@lists.fedorahosted.org
   https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
  
   ___
   vdsm-devel mailing list
   vdsm-devel@lists.fedorahosted.org
   https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
  
  ___
  vdsm-devel mailing list
  vdsm-devel@lists.fedorahosted.org
  https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
  
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Andrew Cathrow


- Original Message -
 From: Deepak C Shetty deepa...@linux.vnet.ibm.com
 To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
 Sent: Monday, August 12, 2013 9:55:45 AM
 Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain
 
 On 08/12/2013 07:22 PM, Andrew Cathrow wrote:
 
  - Original Message -
  From: Deepak C Shetty deepa...@linux.vnet.ibm.com
  To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
  Sent: Monday, August 12, 2013 9:39:21 AM
  Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage
  Domain
 
  On 08/12/2013 06:32 PM, Andrew Cathrow wrote:
  - Original Message -
  From: Deepak C Shetty deepa...@linux.vnet.ibm.com
  To: vdsm-devel@lists.fedorahosted.org
  Sent: Monday, August 12, 2013 8:59:37 AM
  Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster
  Storage
  Domain
 
  On 08/12/2013 04:51 PM, Andrew Cathrow wrote:
  - Forwarded Message -
  From: Itamar Heim ih...@redhat.com
  To: Sahina Bose sab...@redhat.com
  Cc: engine-devel engine-de...@ovirt.org, VDSM Project
  Development vdsm-devel@lists.fedorahosted.org
  Sent: Wednesday, August 7, 2013 1:30:54 PM
  Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster
  Storage
  Domain
 
  On 08/07/2013 08:21 AM, Sahina Bose wrote:
  [Adding engine-devel]
 
  On 08/06/2013 10:48 AM, Deepak C Shetty wrote:
  Hi All,
 There were 2 learnings from BZ
  https://bugzilla.redhat.com/show_bug.cgi?id=988299
 
  1) Gluster RPM deps were not proper in VDSM when using
  Gluster
  Storage
  Domain. This has been partly addressed
  by the gluster-devel thread @
  http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg8.html
  and will be fully addressed once Gluster folks ensure their
  packaging
  is friendly enough for VDSM to consume just the needed bits. Once
  that happens, I will be sending a patch to vdsm.spec.in to update
  the gluster deps correctly. So this issue gets addressed in the
  near term.
 
  2) Gluster storage domain needs a minimum of libvirt 1.0.1 and qemu 1.3.
 
  libvirt 1.0.1 has support for representing gluster as a network
  block device, and qemu 1.3 has native support for the gluster block
  backend, which supports the gluster://... URI way of representing a
  gluster-based file (aka volume/vmdisk in the VDSM case). Many
  distros (incl. CentOS 6.4 in the BZ) won't have qemu 1.3 in their
  distro repos! How do we handle this dep in VDSM ?
 
  Do we disable the gluster storage domain in oVirt engine if VDSM
  reports qemu < 1.3 as part of getCapabilities ?
  or
  Do we ensure qemu 1.3 is present in ovirt.repo, assuming ovirt.repo
  is always present on VDSM hosts, in which case when VDSM gets
  installed, the qemu 1.3 dep in vdsm.spec.in will install qemu 1.3
  from the ovirt.repo instead of the distro repo. This means
  vdsm.spec.in will have qemu >= 1.3 under Requires.
 
  Is it possible to make this a conditional install? That is, only if
  Storage Domain = GlusterFS in the data center, the bootstrapping of
  the host will install qemu 1.3 and dependencies.
 
  (The question still remains as to where the qemu 1.3 rpms
  will
  be
  available)
  RHEL 6.5 (and so CentOS 6.5) will get backported libgfapi support,
  so we shouldn't need to require qemu 1.3, just the appropriate
  qemu-kvm version from 6.5
 
  https://bugzilla.redhat.com/show_bug.cgi?id=848070
  So IIUC this means we don't do anything special in vdsm.spec.in to
  handle the qemu 1.3 dep ?
  If so... what happens when a user uses F17/F18 (as an example) on
  the VDSM host.. their repos probably won't have a qemu-kvm that has
  libgfapi support... how do we handle it?
  Do we just release-note it ?
 
  For the Fedora SPEC we'd need to use a >= 1.3 dependency, but for
  *EL6 it'd need to be 0.12-whatever-6.5-has
  I would love to hear how. I am waiting on some resolution for
  this,
  so
  that I can close the 3.3 blocker BZ
 
  For Fedora if I put qemu-kvm >= 1.3 in vdsm.spec.in, then F17/F18
  can't be used as a VDSM host; that may not be acceptable.
 
  What options do we have for Fedora F19?
  virt-preview may be an option for F18 but F17 is out of luck ..
 
 what do you mean by 'out of luck'.. I thought virt-preview had F17/F18
 repos, no ?

As I said, virt-preview may be an option for F18 as it has a newer version of 
qemu in there, but F17's virt-preview doesn't have a new enough version of qemu
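
[As a concrete illustration of the capability-based gating floated
earlier in the thread, here is a rough Python sketch -- hypothetical
engine-side logic, not actual ovirt-engine code -- that keys off the
package versions a host reports in getVdsCapabilities (assuming the
packages2 section vdsm exposes there):

    from distutils.version import LooseVersion

    def host_supports_gluster_sd(caps):
        # caps: the dict a host returns from getVdsCapabilities
        qemu = caps.get('packages2', {}).get('qemu-kvm', {})
        version = qemu.get('version', '0')
        # Fedora needs qemu >= 1.3; an EL6 build can carry the libgfapi
        # backport at 0.12.x, so a real check would also consult the
        # release field rather than the version alone.
        return LooseVersion(version) >= LooseVersion('1.3')
]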

 Another Q to answer would be.. Do we support F17 as a valid vdsm host
 for 3.3 ?
 
 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
 
 
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Host bios information

2012-12-11 Thread Andrew Cathrow


- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Andrew Cathrow acath...@redhat.com, Yaniv Bronheim 
 ybron...@redhat.com
 Cc: VDSM Project Development vdsm-devel@lists.fedorahosted.org, Adam 
 Litke a...@us.ibm.com
 Sent: Tuesday, December 11, 2012 1:54:28 PM
 Subject: Re: [vdsm] Host bios information
 
 On Tue, Dec 11, 2012 at 05:44:50AM -0500, Andrew Cathrow wrote:
  
  
  - Original Message -
   From: Dan Kenigsberg dan...@redhat.com
   To: Adam Litke a...@us.ibm.com
   Cc: VDSM Project Development
   vdsm-devel@lists.fedorahosted.org
   Sent: Tuesday, December 11, 2012 4:35:31 AM
   Subject: Re: [vdsm] Host bios information
   
   On Wed, Dec 05, 2012 at 09:44:21AM -0600, Adam Litke wrote:
On Wed, Dec 05, 2012 at 05:25:10PM +0200, ybronhei wrote:
 On 12/05/2012 04:32 PM, Adam Litke wrote:
 On Wed, Dec 05, 2012 at 11:05:21AM +0200, ybronhei wrote:
  Today in the API we display general information about the host
  that vdsm exports via the getCapabilities API.
 
  We decided to add BIOS information as part of the information
  that is displayed in the UI under the host's General sub-tab.
 
  To summarize the feature - we'll rename the General tab to
  Software Information and add another tab for Hardware
  Information, which will include all the BIOS data that we decide
  to gather from the host and display.
 
  See the feature page
  http://www.ovirt.org/Features/Design/HostBiosInfo for more
  details. All the parameters that can be displayed are mentioned
  in the wiki.
 
 I would greatly appreciate your comments and questions.
 
  Seems good to me, but I would like to throw out one suggestion.
  getVdsCapabilities is already a huge command that does a lot of
  time-consuming things.  As part of the vdsm API refactoring, we
  are going to start favoring small and concise APIs over bag APIs.
  Perhaps we should just add a new verb:
  Host.getVdsBiosInfo() that returns only this information.
 
  It also requires modifying how the engine collects the parameters
  with the new API request, and I'm not sure if we should get into
  this.. Now we have a specific, known way in which the engine
  requests capabilities, and of when and how it affects the status
  of the host that is shown via the UI.
  To simplify this feature I prefer to use the current way of
  gathering and providing the host's information. If we decide to
  split the host's capabilities API, it needs an RFC mail of its
  own, because it changes the engine's internal flows and makes
  this feature into something much more influential.

I don't understand.  Why can't you just call both APIs, one
after
the other?
   
    I understand it as: this adds more work on the Engine side,
    which is not very convincing.
   
    Adam, I agree that getVdsCaps is bloated as it is. But on the
    other hand, host BIOS info fits into it quite well: it is
    host-related information that is not expected to change
    post-boot. (Too bad that network configuration is there, for
    sure.)
   
   So I'm a bit reluctant about adding a new verb for
   getVdsBiosInfo,
   and
   would not mind the suggested API change.
   
  
  Can we do both?
  Add a new GetBiosInfo call (/me hates having Vds in there) that can
  be called independently but also extend the getVdsCaps call.
  That way we can keep the existing flows in place but start building
  a foundation for if/when we refactor?
 
  I am not strictly rejecting the idea of a new verb for BIOS
  information.
  I just wondered if the 6 values suggested in
  http://gerrit.ovirt.org/#/c/9258/5/vdsm/caps.py
  merit their own distinct API call.
 
  However, if there are any plans to ever expose more of the dmidecode
  output that is dumped on
  http://www.ovirt.org/Features/Design/HostBiosInfo - we may well need
  to reconsider aggregating these properties in a more structured
  manner.

There's more in the current feature page that I believe was called out in the 
original patch.

 
 In any case, Yaniv, I'd be happy if you elaborate on why calling
 another
 verb just after calling getVdsCaps is complex in Engine (I really did
 not understand).
 
 Dan.
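
[For illustration, the handful of DMI fields under discussion can be
gathered with dmidecode's -s keywords. A rough sketch of what a
standalone verb could return -- getVdsBiosInfo is only a proposed name
in this thread, and this is not the caps.py implementation:

    import subprocess

    _DMI_KEYS = ('system-manufacturer', 'system-product-name',
                 'system-version', 'system-serial-number',
                 'system-uuid', 'bios-version')

    def getBiosInfo():
        info = {}
        for key in _DMI_KEYS:
            # each dmidecode -s keyword prints one line of output
            p = subprocess.Popen(['dmidecode', '-s', key],
                                 stdout=subprocess.PIPE)
            out, _ = p.communicate()
            info[key] = out.strip()
        return info
]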
 
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Engine-devel] Network related hooks in vdsm

2012-10-10 Thread Andrew Cathrow


- Original Message -
 From: Yaniv Kaul yk...@redhat.com
 To: Igor Lvovsky ilvov...@redhat.com
 Cc: Dan Yasny dya...@redhat.com, engine-devel engine-de...@ovirt.org, 
 vdsm-devel
 vdsm-devel@lists.fedorahosted.org
 Sent: Wednesday, October 10, 2012 10:53:19 AM
 Subject: Re: [Engine-devel] Network related hooks in vdsm
 
 On 10/10/2012 04:47 PM, Igor Lvovsky wrote:
 Hi everyone,
  As you know vdsm has a hooks mechanism and we already support dozens
  of hooks for different needs.
  Now it's the network's turn.
  We would like to get your comments regarding our proposal for
  network-related hooks.
 
  In general we are planning to prepare a framework for future support
  of a bunch of network-related hooks.
  Some of them were already proposed by Itzik Brown [1] and Dan Yasny [2].
 
  Below you can find the additional hooks list that we propose:
 
  Note: In the first stage we can implement these hooks without any
  parameters, just to provide an entry point
for simple hooks.
 
  Networks manipulation:
  - before_add_network(conf={}, customProperty={})
  - after_add_network(conf={}, customProperty={})
  - before_del_network(conf={}, customProperty={})
  - after_del_network(conf={}, customProperty={})
  - before_edit_network(conf={}, customProperty={})
  - after_edit_network(conf={}, customProperty={})
  - TBD
 
  Bondings manipulations:
 
 Bonding might be interesting because it may require switch
 configuration - but so will VLAN changes, so perhaps triggers on VLAN
 changes are worthwhile as well.
 Y.
 
  - before_add_bond(conf={}, customProperty={})
  - after_add_bond(conf={}, customProperty={})
  - before_del_bond(conf={}, customProperty={})
  - after_del_bond(conf={}, customProperty={})
  - before_edit_bond(conf={}, customProperty={})
  - after_edit_bond(conf={}, customProperty={})
  - TBD
 
  General purpose:
  - before_persist_network
  - after_persist_network

What about some way of doing a push that's not tied to an event - if we want to 
push something?
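
[Since the thread only proposes the entry points, here is a purely
hypothetical sketch of what a before_add_network hook script might look
like. The parameter passing -- JSON blobs in environment variables --
is an assumption made for illustration, not vdsm's actual contract:

    import json
    import os
    import sys

    # Hypothetical: conf and custom properties arrive as JSON blobs.
    conf = json.loads(os.environ.get('_hook_conf', '{}'))
    custom = json.loads(os.environ.get('_hook_custom_properties', '{}'))

    # Example policy: veto bridge names carrying a reserved prefix.
    if conf.get('bridge', '').startswith('reserved-'):
        sys.stderr.write('bridge name is reserved\n')
        sys.exit(1)  # a nonzero exit is how a hook reports failure
]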



 
 
  Now we just need to figure out the use cases.
 
  Your input more than welcome...
 
  [1] http://gerrit.ovirt.org/#/c/7224/   - Adding hooks support for
  NIC hotplug
  [2] http://gerrit.ovirt.org/#/c/7547/   - Hook: Cisco VM-FEX
  support
 
 
  Regards,
   Igor Lvovsky
  ___
  Engine-devel mailing list
  engine-de...@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/engine-devel
 
 ___
 Engine-devel mailing list
 engine-de...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-24 Thread Andrew Cathrow


- Original Message -
 From: Andy Grover agro...@redhat.com
 To: Shu Ming shum...@linux.vnet.ibm.com
 Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org, VDSM 
 Project Development
 vdsm-devel@lists.fedorahosted.org
 Sent: Sunday, June 24, 2012 10:05:45 PM
 Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt
 integration
 
 On 06/24/2012 07:28 AM, Shu Ming wrote:
  On 2012-6-23 20:40, Itamar Heim wrote:
  On 06/23/2012 03:09 AM, Andy Grover wrote:
  On 06/22/2012 04:46 PM, Itamar Heim wrote:
  On 06/23/2012 02:31 AM, Andy Grover wrote:
  On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
   Also, there is no mention of credentials in any part of the
  process.
  How does VDSM or the host get access to actually modify the
  storage
  array? Who holds the creds for that and how? How does the user
  set
  this up?
 
  It seems to me more natural to have the oVirt-engine use
  libstoragemgmt
  directly to allocate and export a volume on the storage array,
  and
  then
  pass this info to the vdsm on the node creating the vm. This
  answers
  Saggi's question about creds -- vdsm never needs array
  modification
  creds, it only gets handed the params needed to connect and use
  the
  new
  block device (ip, iqn, chap, lun).
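
  [A purely illustrative sketch of the division of labor Andy describes
  here, with hypothetical helper names -- not the real libstoragemgmt
  API: the engine keeps the array credentials, and vdsm only ever sees
  connection parameters.

      def engine_create_vm_disk(array, pool, name, size_gib, host_iqn):
          # Engine-side: array modification, done with the engine's creds.
          vol = array.create_volume(pool, name, size_gib)
          array.export_volume(vol, to_initiator=host_iqn)
          # Host-side handoff: vdsm receives only what it needs to connect.
          return {'ip': array.portal_ip, 'iqn': vol.target_iqn,
                  'chap': array.chap_secret, 'lun': vol.lun_number}
  ]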
 
  Is this usage model made difficult or impossible by the current
  software
  architecture?
 
  what about live snapshots?
 
  I'm not a virt guy, so extreme handwaving:
 
   vm X uses luns 1 & 2
 
   engine -> vdsm pause vm X
 
   that's pausing the VM; live snapshot isn't supposed to do so.
  
   Though we don't expect to do a pausing operation on the VM when a
   live snapshot is under way, the VM should be blocked on access to
   specific luns for a while.  The blocking time should be very short
   to avoid storage IO timeouts in the VM.
 
 OK my mistake, we don't pause the VM during live snapshot, we block
 on
 access to the luns while snapshotting. Does this keep live snapshots
 working and mean ovirt-engine can use libsm to config the storage
 array
 instead of vdsm?
 
  Because that was really my main question: should we be talking about
  engine-libstoragemgmt integration rather than vdsm-libstoragemgmt
  integration?

For snapshotting, wouldn't we want VDSM to handle the coordination of the 
various atomic functions?
 
 Thanks -- Regards -- Andy
 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://fedorahosted.org/mailman/listinfo/vdsm-devel
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [virt-node] VDSM as a general purpose virt host manager

2012-06-22 Thread Andrew Cathrow


- Original Message -
 From: Ryan Harper ry...@us.ibm.com
 To: Adam Litke a...@us.ibm.com
 Cc: Anthony Liguori aligu...@redhat.com, VDSM Project Development 
 vdsm-devel@lists.fedorahosted.org
 Sent: Friday, June 22, 2012 12:45:42 PM
 Subject: Re: [vdsm] [virt-node] VDSM as a general purpose virt host manager
 
 * Adam Litke a...@us.ibm.com [2012-06-22 11:35]:
  On Thu, Jun 21, 2012 at 12:17:19PM +0300, Dor Laor wrote:
   On 06/19/2012 08:12 PM, Saggi Mizrahi wrote:
   
   
   - Original Message -
   From: Deepak C Shetty deepa...@linux.vnet.ibm.com
   To: Ryan Harper ry...@us.ibm.com
   Cc: Saggi Mizrahi smizr...@redhat.com, Anthony Liguori
   aligu...@redhat.com, VDSM Project Development
   vdsm-devel@lists.fedorahosted.org
   Sent: Tuesday, June 19, 2012 10:58:47 AM
   Subject: Re: [vdsm] [virt-node] VDSM as a general purpose virt
   host manager
   
   On 06/19/2012 01:13 AM, Ryan Harper wrote:
   * Saggi Mizrahismizr...@redhat.com  [2012-06-18 10:05]:
    I would like to put on the table for discussion the growing
    need for a way to more easily reuse the functionality of VDSM in
    order to service projects other than Ovirt-Engine.
   
    Originally VDSM was created as a proprietary agent for the sole
    purpose of serving the then-proprietary version of what is known as
    ovirt-engine. Red Hat, after acquiring the technology, pressed on
    with its commitment to open source ideals and released the code.
    But just releasing code into the wild doesn't build a community or
    make a project successful. Furthermore, when building open source
    software you should aspire to build reusable components instead of
    monolithic stacks.
   
   Saggi,
   
   Thanks for sending this out.  I've been trying to pull
   together
   some
   thoughts on what else is needed for vdsm as a community.  I
   know
   that
   for some time downstream has been the driving force for all of
   the
   work
   and now with a community there are challenges in finding our
   own
   way.
   
   While we certainly don't want to make downstream efforts
   harder, I
   think
    we need to develop and support our own vision for what vdsm can
    become, somewhat independent of downstream and other exploiters.
   
   Revisiting the API is definitely a much needed endeavor and I
   think
   adding some use-cases or sample applications would be useful
   in
   demonstrating whether or not we're evolving the API into
   something
   easier to use for applications beyond engine.
   
   We would like to expose a stable, documented, well supported
   API.
   This gives
   us a chance to rethink the VDSM API from the ground up. There
   is
   already work

   in progress of making the internal logic of VDSM separate
   enough
   from the API
   layer so we could continue feature development and bug fixing
   while designing
   the API of the future.
   
   In order to achieve this though we need to do several things:
1. Declare API supportability guidelines
2. Decide on an API transport (e.g. REST, ZMQ, AMQP)
3. Make the API easily consumable (e.g. proper docs,
example
code, extending
   the API, etc)
4. Implement the API itself

Earlier we'd discussed working to have similarities in the modeling 
between the oVirt API and VDSM, but that seems to have dropped off the radar.




   I agree with the list, but I'd like to work on the redesign
   discussion so
   that we're not doing all of 1-4 around the existing API that's
   engine-focused.
   
    I'm overdue for posting a feature page on vdsm standalone
   mode,
   and I
   have some other thoughts on various uses.
   
   Some other paths of thought for use-cases I've been mulling
   over:
   
 - Simplifying using QEMU/KVM
 - consuming qemu via command line
 - can we manage/support developers launching
 qemu
 directly
 - consuming qemu via libvirt
 - can we integrate with systems that are already
 using
 libvirt
   
 - Addressing issues with libvirt
 - are there kvm specific features we can exploit
 that
 libvirt
 doesn't?
   
 - Scale-up/fail-over
 - can we support a single vdsm node, but allow for
 building up
 clusters/groups without bringing in something like
 ovirt-engine
 - can we look at decentralized fail-over for
 reliability
 without
 a central mgmt server?
   
 - pluggability
 - can we support an API that allows for third-party
 plugins to
 support new features or changes in implementation?
   
    A pluggability feature would be nice. Even nicer would be the
    ability to introspect and figure out what's supported by VDSM. 

Re: [vdsm] reserve virtio-balloon device created by libvirt

2012-04-29 Thread Andrew Cathrow


- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Gal Hammer gham...@redhat.com
 Cc: vdsm-devel@lists.fedorahosted.org
 Sent: Sunday, April 29, 2012 7:19:10 AM
 Subject: Re: [vdsm] reserve virtio-balloon device created by libvirt
 
 On Mon, Apr 23, 2012 at 04:00:55PM +0300, Gal Hammer wrote:
  On 23/04/2012 12:26, Mark Wu wrote:
  Hi guys,
  
  I saw that an option to create balloon device was added by Gal in
  http://gerrit.ovirt.org/1573
   I have a question about it. Why don't we preserve the old default
  behaviour? I know it's not supported by ovirt-engine now, but I
  can't
  figure out what will break if it's not disabled explicitly. So do
  you
  think we can just make use of the balloon device added by libvirt?
  
  We didn't change the old behavior.
  
   Libvirt creates a memory-balloon device by default, so vdsm's
   default was to disable it by adding a none-type device. This was
   done because vdsm didn't include an option to add such a device.
  
  My patch added an option to create a memory-balloon through vdsm.
  If
   the user didn't request to add the device, the behavior is the same as
  before, disabling the memory-balloon.
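
  [In libvirt domain XML terms, the two behaviors being contrasted look
  like this -- standard libvirt device XML, shown here for reference:

      <!-- vdsm's historical default: explicitly disable the balloon -->
      <memballoon model='none'/>

      <!-- opting in via the new vdsm option yields the usual device -->
      <memballoon model='virtio'/>
  ]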
 
 I feel that it would be best not to flip Vdsm's default at the
 moment,
 even though it is the opposite of libvirt's. I would consider flipping
 them only after your (Mark's) patches are in, tested, and proven
 worthwhile for the common case.
 
 Currently, without any management for the balloon, reserving a guest
 PCI
 device was deemed wasteful.

On the other side of the fence 
- We know that we do need to do ballooning
- In the (next?) release we'll end up adding this support
- There's no harm (see next point) in adding the device now; in fact it saves a 
config change on upgrade.
- While it takes up a PCI slot, it's going to be very, very rare for deployments 
to ever hit the limit; libvirt/virt-manager/virt-install have done this forever 
without seeing pushback.
 
 Regards,
 
 Dan.
 
 
 
 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://fedorahosted.org/mailman/listinfo/vdsm-devel
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] /etc/rc.c/init.d/vdsmd set filters for libvirt at DEBUG level

2012-04-06 Thread Andrew Cathrow
Are we sure that performance concerns aren't manifested in 'real' environments? 
I think we should make some more effort to validate that assumption.

I agree with all the points below, but the worst part of the logging is the 
readability. Right now the log is fine for developers and support, but from a 
user's point of view we may as well write it out in hex - it's very hard to 
decipher and makes self-support for users very difficult.

We should default to INFO mode which should be understandable, and switching to 
debug should be simple and ideally possible from ovirt-engine.
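
[Assuming vdsm's stock /etc/vdsm/logger.conf (Python fileConfig format),
the change being argued for is roughly a one-line edit; the handler
names below follow the shipped file and may differ by version:

    [logger_root]
    level=INFO
    handlers=syslog,logfile
    propagate=0
]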


Saggi Mizrahi smizr...@redhat.com wrote:

Of course you are right, and just having everything log in debug is less than 
optimal.
We only save 100 logs back, so this will not fill up a decently sized hard 
drive.

The performance implications, though they exist, don't really pop up in normal use.

It's true we usually don't need logs that far back, but as I said it's really 
not an issue.
Even when you try to run 1000 VMs you will not be blocked on logging waiting 
to get written to disk, I can assure you of that.

There are a lot of things we can do to improve the size footprint and 
readability of our logs.
The move to XZ instead of GZ was a simple way to save on space and there are 
plans on using an even more compressed representation for old logs.
We are also working on tools to help manage the log files.

You are correct, though, that setting the logging level back or changing the 
amount of logs kept shouldn't be such a bothersome process; you should open 
a bug on that for VDSM and it will be fixed.

- Original Message -
 From: Peter Portante pport...@redhat.com
 To: Saggi Mizrahi smizr...@redhat.com
 Cc: vdsm-devel@lists.fedorahosted.org
 Sent: Thursday, April 5, 2012 3:57:08 PM
 Subject: Re: [vdsm] /etc/rc.c/init.d/vdsmd set filters for libvirt at DEBUG 
 level
 
 Hi Saggi,
 
 - Original Message -
  From: Saggi Mizrahi smizr...@redhat.com
  To: Peter Portante pport...@redhat.com
  Cc: vdsm-devel@lists.fedorahosted.org
  Sent: Thursday, April 5, 2012 3:02:23 PM
  Subject: Re: [vdsm] /etc/rc.c/init.d/vdsmd set filters for libvirt
  at DEBUG level
  
  It's required for support purposes. When something goes wrong we
  collect all the logs from the hosts; this way we can figure out
  what's wrong without requiring someone to reproduce the problem with
  the logging turned on.
  
  We are working on making the logs easier to filter when inspecting
  them.
  But the general idea is: if the information is in, we can easily get
  it out; doing it the other way around is impossible.
 
 While one can understand the pain of debugging complex systems,
 respectfully, this approach seems more problematic than helpful.
 
 First, it was filling up the system disk. In 10 minutes there were
 four compressed log files for libvirt alone, not to mention the
 vdsmd.log files. Talking with the rest of the performance team, we
 always turn all this off so that we don't lose our systems while
 testing.
 
 Additionally to get the logging to stop, one has to modify the vdsmd
 start up script to get libvirt to stop logging so much. Each time a
 modification was made to libvirt's configuration file to make it
 stop, libvirt kept up all its debug logging. All the documentation
 on the libvirt web page tells one what to do to affect its behavior,
 but in the presence of vdsm that is not the case. Somehow, it
 seems like a problem to have one subsystem completely override
 another without leaving any indication that it is doing so.
 
 Second, it is expensive to have such overhead. Compressing and
 maintaining arbitrary sized log files of text takes processing time
 away from the VMs. It was amazing to see how often xz would run on a
 box, not knowing it was related to maintaining these log files.
 
 There must be a better way than collecting all the data we could
 possibly need ahead of time just in case a problem comes up.
 
 Have you considered asserting the expected state of the system before
 embarking on changing that state?
 
  In a nutshell, to filter out debug messages grep -v is your
  friend.
 
 grep -v is not useful when your system disk fills up. :)
 
 And why wouldn't an attacker use that fact in some sort of denial of
 service?  And if the counter is to configure the log files so that
 they are processed more often and kept to a small number, then as
 the amount of data grows (like when multiple VMs are created) the
 original problem will get lost as it is truncated.
 
 So if we already have a finite data set of sorts, why not drop using
 log files in favor of a dedicated ring buffer that stores
 ultra-compressed binary data (enough to track the problem), with a
 tool that can format that ring buffer into usable output?
 
 When we were writing the thread library for Tru64 Unix and OpenVMS,
 such ring buffers were invaluable to help find complicated timing
 problems across multiple processors.
 
 Respectfully,
 
 

Re: about shared disk file system

2011-12-29 Thread Andrew Cathrow


- Original Message -
 From: Ayal Baron aba...@redhat.com
 To: Andrew Cathrow acath...@redhat.com
 Cc: vdsm-devel@lists.fedorahosted.org, wangxiaofan wangxiao...@opzoon.com
 Sent: Thursday, December 29, 2011 8:40:49 AM
 Subject: Re: about shared disk file system
 
 
 
 - Original Message -
  
  
  - Original Message -
   From: wangxiaofan wangxiao...@opzoon.com
   To: vdsm-devel@lists.fedorahosted.org
   Sent: Thursday, December 29, 2011 8:21:43 AM
   Subject: about shared disk file system
   
   Hi there,
    To support SAN storage in vdsm, is there any plan to use a
    shared disk file system, such as Red Hat GFS or OCFS2,
    instead of LVM?
  
   One of the features we're adding to oVirt is the ability to have
   pluggable file domains. Today we support block-based (iSCSI/fibre)
   and NFS. We'll add support for generic filesystems [1].
   
   For some filesystems this will be pretty straightforward, e.g.
   Gluster and GPFS, but GFS adds some extra complications - it brings
   along its own cluster stack that provides membership management,
   fencing etc.
 
  So do GPFS and Gluster.
  As long as you don't mix LVM-based domains with GFS you should be
  fine.

GFS/Cluster Suite will power off nodes; GPFS and Gluster won't.


 
  
  
  [1]
  https://fedorahosted.org/pipermail/vdsm-devel/2011-December/000408.html
  
   ___
   vdsm-devel mailing list
   vdsm-devel@lists.fedorahosted.org
   https://fedorahosted.org/mailman/listinfo/vdsm-devel
   
  ___
  vdsm-devel mailing list
  vdsm-devel@lists.fedorahosted.org
  https://fedorahosted.org/mailman/listinfo/vdsm-devel
  
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: about shared disk file system

2011-12-29 Thread Andrew Cathrow


- Original Message -
 From: wangxiaofan wangxiao...@opzoon.com
 To: vdsm-devel@lists.fedorahosted.org
 Sent: Thursday, December 29, 2011 8:21:43 AM
 Subject: about shared disk file system
 
 Hi there,
  To support SAN storage in vdsm, is there any plan to use a shared
  disk file system, such as Red Hat GFS or OCFS2,
  instead of LVM?

One of the features we're adding to oVirt is the ability to have pluggable file 
domains. Today we support block-based (iSCSI/fibre) and NFS. We'll add support 
for generic filesystems [1]. 

For some filesystems this will be pretty straightforward, e.g. Gluster and GPFS, 
but GFS adds some extra complications - it brings along its own cluster stack 
that provides membership management, fencing etc. 


[1] https://fedorahosted.org/pipermail/vdsm-devel/2011-December/000408.html

 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://fedorahosted.org/mailman/listinfo/vdsm-devel
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: Installing vdsm standalone on RHEL6.2

2011-10-19 Thread Andrew Cathrow


- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
 Sent: Wednesday, October 19, 2011 5:47:19 AM
 Subject: Re: Installing vdsm standalone on RHEL6.2
 
 On Wed, Oct 19, 2011 at 01:12:31AM +, Itzik Brown wrote:
  Hi,
  
  I'm trying to install vdsm on RHEL6.2(Beta).
  
  Steps I have done:
  
  git clone git://git.fedorahosted.org/vdsm.git
  cd vdsm
  ./autobuild.sh
  Then going to my rpmbuild directory and running rpm -ivh
  vdsm-4.9.0-0.189.gb60414c.el6.itzikb1318984224.x86_64.rpm I get
  the following:
  
  error: Failed dependencies:
  fence-agents is needed by
  vdsm-4.9.0-0.189.gb60414c.el6.itzikb1318984224.x86_64
  libvirt = 0.9.4-13 is needed by
  vdsm-4.9.0-0.189.gb60414c.el6.itzikb1318984224.x86_64
  libvirt-python = 0.9.4-13 is needed by
  vdsm-4.9.0-0.189.gb60414c.el6.itzikb1318984224.x86_64
  

If you have access to RHN then the binary is here 
https://rhn.redhat.com/rhn/software/packages/details/Overview.do?pid=627013

Or source is here 
http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/fence-agents-3.0.12-23.el6.src.rpm


 
 These packages should be available as part of the RHEV-3.0-beta-3
 channel.
 Are you defined as a RHEV-3.0 beta customer? If you have a problem
 finding these
 packages there, I can send them to you off-list.
 
 Dan.
 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://fedorahosted.org/mailman/listinfo/vdsm-devel
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel