It took me some time to answer because of other things, but now I have had
the time to look into it.
On 21.08.2018 at 17:02, Michal Skrivanek wrote:
With the latest versions of ovirt-imageio and v2v we are
performing quite nicely, and without specifying
Moving a disk from one Gluster domain to another fails, whether the VM is
running or down.
It strikes me that it says:
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 718, in blockCopy
    if ret == -1: raise libvirtError('virDomainBlockCopy() failed', dom=self)
I'm sending the
Hi, it should be possible, as oVirt supports NFS 4.1. I have a
Synology NAS which also supports this version of the protocol, but I
never found the time to set this up and test it until now. Regards
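For a quick manual check that the NAS really serves NFS 4.1 before pointing oVirt at it, an fstab-style test entry could look like the fragment below; the hostname, export path, and mount point are hypothetical placeholders, not values from this thread:

```
# fstab-style test entry - hostname, export and mount point are hypothetical
nas01.example.lan:/volume1/ovirt  /mnt/nfstest  nfs  vers=4.1,soft,timeo=600  0  0
```

If the mount succeeds, the negotiated version can be confirmed with nfsstat -m on the client.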
On 30-Aug-2018 12:16:32 +0200, xrs...@xrs444.net wrote:
I would like to understand how to find the point of failure when starting from
the event shown in the GUI. With that event I get a correlation ID; how
would I trace all the subsequent tasks, actions, or events that are connected
to that correlation ID?
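A minimal sketch of that kind of trace, assuming the usual engine log location (/var/log/ovirt-engine/engine.log on the engine host). The log lines and the correlation ID 4f2b9c1a below are fabricated samples so that the commands are self-contained, not output from this thread:

```shell
# Fabricated sample standing in for /var/log/ovirt-engine/engine.log
cat > engine.log.sample <<'EOF'
2018-09-13 11:00:01 INFO  [MoveDisk] (task-1) [4f2b9c1a] Lock Acquired
2018-09-13 11:00:05 ERROR [MoveDisk] (task-1) [4f2b9c1a] Command failed
2018-09-13 11:00:06 INFO  [Other]    (task-2) [deadbeef] Unrelated event
EOF
# Every task/action/event sharing the correlation ID shown in the GUI event:
grep '4f2b9c1a' engine.log.sample
rm -f engine.log.sample
```

The same grep can be repeated against vdsm.log on the hosts, since VDSM logs the correlation ID it receives from the engine.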
Is it something like
Thanks for your answer. I tried letting it restart automatically, but it makes
- vdsm-client Volume getInfo shows the correct values
- manually looking in the metadata file on the storage domain shows the correct
  values
- ssh into the machine and running lsblk shows the correct value
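The layered size check can be sketched with a stand-in file; on a real setup the commands above (vdsm-client Volume getInfo, the storage-domain metadata file, lsblk in the guest) are not runnable outside a host, so a sparse file plays the role of the volume here:

```shell
# Hypothetical sketch: confirm a resize is visible at the file layer.
truncate -s 10G demo-disk.img     # original virtual size
truncate -s 20G demo-disk.img     # simulate the extend
stat -c %s demo-disk.img          # prints 21474836480 (20 GiB in bytes)
rm -f demo-disk.img
```

The point is that every layer (VDSM, storage-domain metadata, guest block device) must report the same post-extend size; a mismatch localizes where the resize was lost.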
On Thu, 13 Sep 2018 11:08:28 +0200
Robert O'Kane wrote:
> I have a similar issue with ovirt-provider-ovn.
> But in my config I see:
> Where do I find / how do I generate this token?
Usually engine-setup will generate an appropriate
but performance.strict-o-direct is not one of the options enabled by
gdeploy during installation, because it's supposed to give some sort of
On 14/09/2018 11:34, Leo David wrote:
> performance.strict-o-direct: on
> This was the bloody option that created the bottleneck!
This was the bloody option that created the bottleneck! It was ON.
So now I get an average of 17k random writes, which is not bad at all.
Below, the volume options that worked for me:
So I have decided to take out all of the custom Gluster volume options,
and add them back one by one, activating/deactivating the storage domain and
rebooting one VM after each added option :(
The default options that give bad IOPS performance (~1-2k) are:
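The one-by-one approach can be sketched as a dry run. The volume name `data` and the two options below are hypothetical stand-ins for whatever list applies to a given setup, and the loop only prints the `gluster volume set` commands rather than executing them, so it is self-contained:

```shell
# Dry-run sketch of toggling suspect volume options one at a time.
# Volume name and option list are hypothetical examples.
VOLUME=data
for opt in performance.strict-o-direct=off network.remote-dio=enable; do
    echo "gluster volume set $VOLUME ${opt%=*} ${opt#*=}"
    # ...after each real set: re-activate the storage domain,
    # reboot one test VM, and re-run the IOPS benchmark...
done
```

Removing the `echo` turns the sketch into the real sequence, run from any node of the trusted storage pool.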
I've managed to upgrade them now by removing logical volumes. Usually it's just
/dev/onn/home, but on one I had to keep reinstalling to see where it failed, so
I had to
Hi, there was a memory leak in the gluster client that is fixed in
What version of gluster are you using?
On 11/09/2018 16:51, Endre Karlson wrote:
> Hi, we are seeing some issues
On Thu, Sep 13, 2018 at 5:19 PM Pötter, Ulrich <
> This worked. The VM now has a larger disk and the metadata on the storage
> domain shows the new value (vdsm-client Volume getInfo ... too).
> Unfortunately, the virtual size of the disk shown in the
On Thu, Sep 13, 2018 at 7:53 AM Edward Haas wrote:
> On Mon, Sep 10, 2018 at 5:53 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>> supposing we have an oVirt Node 4.2.6 host, and that the ovirtmgmt config
>> regarding DNS servers has to be updated, what is the correct way to
Coming back to this: I experienced the issue again, and the referred link
was useful to restore the VMs.
I followed the steps to delete the illegal snapshot while the VM was down.
I now have the issue again with three other VMs.