On Fri, Sep 3, 2021 at 9:35 PM Nir Soffer wrote:
> On Fri, Sep 3, 2021 at 4:45 PM Gianluca Cecchi
> wrote:
>
>> Hello,
>> I was trying incremental backup with the provided
>> /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py and began
>> using the "full" option.
>> But I specified a
On Fri, Sep 3, 2021 at 4:45 PM Gianluca Cecchi
wrote:
> Hello,
> I was trying incremental backup with the provided
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py and began
> using the "full" option.
> But I specified an incorrect dir and during backup I got an error due to
> filesystem full
On 9/2/21 9:16 AM, Lucia Jelinkova wrote:
Could you please share more details about the CPU problem you're facing?
There shouldn't be any breaking change in that CPU definition in 4.4+
compatibility version.
Unfortunately not, I've already made irreversible changes to the cluster
so that I ca
This looks like a bug. It should have 'recovered' from the failure.
I'm not sure which logs would help identify the root cause.
Best Regards, Strahil Nikolov
On Fri, Sep 3, 2021 at 16:45, Gianluca Cecchi
wrote: Hello, I was trying incremental backup with the provided
/usr/share/doc/python3
That's really odd. Maybe you can try to clone it and then experiment on the
clone itself. Once the reason is found out, you can try with the original.
My first look is to check all logs on the engine and the SPM for clues.
Best Regards, Strahil Nikolov
On Fri, Sep 3, 2021 at 11:42, David White
Hello,
I was trying incremental backup with the provided
/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py and began
using the "full" option.
But I specified an incorrect dir and during backup I got an error due to the
filesystem full
[ 156.7 ] Creating image transfer for disk
'33b0f6fb-a855
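The failure above happened because the target directory's filesystem filled up mid-transfer. A minimal sketch (not part of backup_vm.py; the helper name and the 10% margin are my own assumptions) of checking free space before starting a full backup:

```python
import shutil

def has_free_space(backup_dir, required_bytes, margin=1.10):
    """Return True if backup_dir's filesystem can hold required_bytes
    plus a 10% safety margin for qcow2 metadata and overhead."""
    free = shutil.disk_usage(backup_dir).free
    return free >= required_bytes * margin

# Example: refuse to start a 250 GB full backup if it would not fit.
# has_free_space("/backups", 250 * 1024**3)
```

Running a check like this before creating the image transfer would turn a mid-backup ENOSPC failure into an early, clean error.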
oVirt Node 4.4.8.3 Async update
On September 3rd 2021 the oVirt project released an async update for oVirt
Node consuming the following packages:
-
ovirt-release44 4.4.8.3
-
ovirt-node-ng-image-update 4.4.8.3
oVirt Node respin also consumed the most recent CentOS Stream and Advanced
Virtualization packages.
We have been using CephFS domains without issues since 4.3; currently on 4.4.
[root@ovirt-host1 /]# ls -la /rhev/data-center/mnt/*ovirt*
'/rhev/data-center/mnt/172.16.16.2:3300,172.16.16.3:3300,172.16.16.4:3300:_ovirt__iso':
total 0
drwxrwxrwx. 3 root root 1 Sep 3 15:43 .
drwxr-xr-x. 5 vdsm kvm 182 Aug
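For reference, a CephFS storage domain like the one listed above is an ordinary kernel CephFS mount. A hedged sketch of the equivalent manual mount, with the monitor addresses copied from the listing; the client name and the ms_mode option are assumptions (port 3300 is the msgr2 protocol port, which newer kernels address via ms_mode):

```shell
# Illustrative only: mount CephFS from three monitors on the msgr2 port (3300).
# 'name=ovirt' and 'ms_mode=crc' are assumed values, not taken from this thread.
mkdir -p /mnt/ovirt_iso
mount -t ceph 172.16.16.2:3300,172.16.16.3:3300,172.16.16.4:3300:/ /mnt/ovirt_iso \
    -o name=ovirt,ms_mode=crc
```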
The save operation is going at a snail's pace, though.
Using "watch du -skh", I counted about 5-7 seconds per .1 GB (1/10 of 1GB).
It's a virtual disk, but I'm using over 200GB... so at this rate, it'll take a
very long time.
I wonder if Pascal is on to something, and the export is happening ove
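The observed rate makes the total time easy to estimate. A small worked calculation (the 6-second figure is a midpoint of the 5-7 seconds reported above; the function name is my own):

```python
def export_eta_hours(disk_gb, gb_per_tick=0.1, seconds_per_tick=6):
    """Estimate export time from the observed 'watch du -skh' rate:
    roughly 0.1 GB written every ~6 seconds."""
    return disk_gb / gb_per_tick * seconds_per_tick / 3600

# At that rate a 250 GB disk takes 250 / 0.1 * 6 = 15000 s, about 4.2 hours.
```

That is roughly 17 MB/s, which is slow enough to suggest the transfer is not going over the storage network, consistent with Pascal's suspicion.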
Update: perhaps I have discovered a bug somewhere?
I started another export after hours (it's very early morning hours right now,
and I can tolerate a little downtime on this VM). I had the same symptoms, but
this time, I just left it alone. I waited about 45 minutes with no progress.
I then
In this particular case, I have 1 (one) 250GB virtual disk.
‐‐‐ Original Message ‐‐‐
On Tuesday, August 31st, 2021 at 11:21 PM, Strahil Nikolov
wrote:
> Hi David,
>
> how big are your VM disks ?
>
> I suppose you have several very large ones.
>