CC'ing the devel list, maybe some VDSM and storage people can
explain this?
On 10.06.2014 12:24, combuster wrote:
/etc/libvirt/libvirtd.conf and /etc/vdsm/logger.conf, but unfortunately maybe I've jumped to conclusions: last weekend, that
very same thin-provisioned VM was running a simple
Thanks, I really hope someone can help, because right now I'm afraid to
thin provision any large volume due to this. I forgot to mention that,
when I do the export on the NAS, the node that is SPM at that moment is
running the qemu-img convert process, and the VDSM process on that node is
running wild (400%
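For context, the export is essentially an image copy performed by the SPM host, and the command VDSM spawns looks roughly like the sketch below. The paths and UUID placeholders are invented for illustration, not taken from an actual VDSM log:

```shell
# Hypothetical sketch of the copy the SPM runs during an export.
# A thin-provisioned (sparse) source is still read end to end,
# which is what loads the SPM node so heavily.
qemu-img convert -O raw \
    /rhev/data-center/<sp-uuid>/<sd-uuid>/images/<img-uuid>/<vol-uuid> \
    /rhev/data-center/mnt/<nas-export-mount>/<export-sd>/images/<img-uuid>/<vol-uuid>
```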
Interesting, which files did you modify to lower the log levels?
On Tue, Jun 3, 2014 at 12:38 AM, combus...@archlinux.us wrote:
One word of caution so far: when exporting any VM, the node that acts as SPM
is stressed out to the max. I relieved the stress by a certain margin by
lowering
/etc/libvirt/libvirtd.conf and /etc/vdsm/logger.conf, but unfortunately maybe I've jumped to conclusions: last weekend, that
very same thin-provisioned VM was running a simple export for 3 hrs
before I killed the process. But I wondered:
1. The process that runs behind the export is
OK, I have good news and bad news :)
Good news is that I can run different VMs on different nodes when all
of their drives are on the FC storage domain. I don't think that all of the I/O
is running through the SPM, but I need to test that. Simply put, for every
virtual disk that you create on the shared
Bad news happens only when running a VM for the first time, if it helps...
On 06/09/2014 01:30 PM, combuster wrote:
OK, I have good news and bad news :)
Good news is that I can run different VMs on different nodes when all
of their drives are on the FC storage domain. I don't think that all of
Hm, another update on this one. If I create another VM with another
virtual disk on the node that already has a VM running from the FC
storage, then libvirt doesn't break. I guess it just happens the
first time on any of the nodes. If this is the case, I would have to
bring all of the
I'm curious to hear what other comments arise, as we're analyzing a
production setup shortly.
On Sun, Jun 1, 2014 at 10:11 PM, combus...@archlinux.us wrote:
I need to scratch gluster off because the setup is based on CentOS 6.5, so
essential prerequisites like qemu 1.3 and libvirt 1.0.1 are not met.
One word of caution so far: when exporting any VM, the node that acts as SPM
is stressed out to the max. I relieved the stress by a certain margin by
lowering libvirtd and vdsm log levels to WARNING. That shortened the
export procedure by at least a factor of five. But the vdsm process on the SPM
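For reference, the log-level changes amount to something like the following. The stanza and handler names are taken from the stock files shipped with libvirt and VDSM and may differ between versions, so treat this as a sketch rather than a drop-in config:

```ini
# /etc/libvirt/libvirtd.conf -- libvirt uses numeric levels:
# 1 = DEBUG, 2 = INFO, 3 = WARNING, 4 = ERROR
log_level = 3

# /etc/vdsm/logger.conf -- standard Python logging fileConfig format.
# Raise the root logger from DEBUG to WARNING; the handler names below
# are illustrative, keep whatever handlers your stock file already lists.
[logger_root]
level = WARNING
handlers = syslog,logfile
```

After editing, both daemons (libvirtd and vdsmd) typically need a restart to pick up the new levels.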
Hi,
I have a 4-node cluster setup and my storage options right now are an FC-based
storage, one partition per node on a local drive (~200 GB each), and an NFS-based
NAS device. I want to set up export and ISO domains on the NAS and there are no
issues or questions regarding those two. I wasn't aware
I need to scratch gluster off because the setup is based on CentOS 6.5, so
essential prerequisites like qemu 1.3 and libvirt 1.0.1 are not met.
Any info regarding FC storage domain would be appreciated though.
Thanks
Ivan
On Sunday, 1. June 2014. 11.44.33 combus...@archlinux.us wrote: