[ovirt-users] Re: [ANN] oVirt 4.3.6 is now generally available

2019-09-28 Thread Nir Soffer
On Sat, Sep 28, 2019 at 11:04 PM Rik Theys 
wrote:

> Hi Nir,
>
> Thank you for your time.
> On 9/27/19 4:27 PM, Nir Soffer wrote:
>
>
>
> On Fri, Sep 27, 2019, 12:37 Rik Theys  wrote:
>
>> Hi,
>>
>> After upgrading to 4.3.6, my storage domain can no longer be activated,
>> rendering my data center useless.
>>
>> My storage domain is local storage on a filesystem backed by VDO/LVM. It
>> seems 4.3.6 has added support for 4k storage.
>> My VDO does not have the 'emulate512' flag set.
>>
>
> This configuration is not supported before 4.3.6. Various operations may
> fail when
> reading or writing to storage.
>
> I was not aware of this when I set it up as I did not expect this to
> influence a setup where oVirt uses local storage (a file system location).
>
>
> 4.3.6 detects the storage block size, creates compatible storage domain
> metadata, and considers the block size when accessing storage.
>
>
>> I've tried downgrading all packages on the host to the previous versions
>> (with ioprocess 1.2), but this does not seem to make any difference.
>>
>
> Downgrading should solve your issue, but without any logs we can only guess.
>
> I was able to work around my issue by downgrading to ioprocess 1.1 (and
> vdsm-4.30.24). Downgrading to only 1.2 did not solve my issue. With
> ioprocess downgraded to 1.1, I did not have to downgrade the engine (still
> on 4.3.6).
>
ioprocess 1.1 is not recommended; you really want to use 1.3.0.

> I think I now have a better understanding of what happened that triggered
> this.
>
> During a nightly yum-cron, the ioprocess and vdsm packages on the host
> were upgraded to 1.3 and vdsm 4.30.33. At this point, the engine log
> started to log:
>
> 2019-09-27 03:40:27,472+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Executing with
> domain map: {6bdf1a0d-274b-4195-8f
> f5-a5c002ea1a77=active}
> 2019-09-27 03:40:27,646+02 WARN
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Unexpected return
> value: Status [code=348, message=Block size does not match storage block
> size: 'block_size=512, storage_block_size=4096']
>
This means that when activating the storage domain, vdsm detected that the
storage block size is 4k, but the domain metadata reports a block size of 512.

This combination may partly work for a localfs domain since we don't use
sanlock with local storage, vdsm does not use direct I/O when writing to
storage, and it always uses a 4k block size when reading metadata from storage.

Note that with older ovirt-imageio (< 1.5.2), image uploads and downloads may
fail when using 4k storage. In recent ovirt-imageio we detect and use the
correct block size.
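To check which imageio bits a host and the engine are running, something like
this should work (typical 4.3 package names, shown only as a sketch):

    # on the host
    rpm -q ovirt-imageio-common ovirt-imageio-daemon
    # on the engine machine
    rpm -q ovirt-imageio-proxy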

> 2019-09-27 03:40:27,646+02 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] FINISH,
> ConnectStoragePoolVDSCommand, return: , log id: 483c7a17
>
> I did not notice at first that this was a storage related issue and
> assumed it may get resolved by also upgrading the engine. So in the morning
> I upgraded the engine to 4.3.6 but this did not resolve my issue.
>
> I then found the above error in the engine log. In the release notes of
> 4.3.6 I read about the 4k support.
>
> I then downgraded ioprocess (and vdsm) to ioprocess 1.2 but that did also
> not solve my issue. This is when I contacted the list with my question.
>
> Afterwards I found in the ioprocess rpm changelog that (partial?) 4k
> support was also in 1.2. I kept on downgrading until I got ioprocess 1.1
> (without 4k support) and at this point I could re-attach my storage domain.
>
> You mention above that 4.3.6 will detect the block size and configure the
> metadata on the storage domain? I've checked the dom_md/metadata file and
> it shows:
>
> ALIGNMENT=1048576
> *BLOCK_SIZE=512*
> CLASS=Data
> DESCRIPTION=studvirt1-Local
> IOOPTIMEOUTSEC=10
> LEASERETRIES=3
> LEASETIMESEC=60
> LOCKPOLICY=
> LOCKRENEWALINTERVALSEC=5
> MASTER_VERSION=1
> POOL_DESCRIPTION=studvirt1-Local
> POOL_DOMAINS=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77:Active
> POOL_SPM_ID=-1
> POOL_SPM_LVER=-1
> POOL_UUID=085f02e8-c3b4-4cef-a35c-e357a86eec0c
> REMOTE_PATH=/data/images
> ROLE=Master
> SDUUID=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77
> TYPE=LOCALFS
> VERSION=5
> _SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66
>
So you have a v5 localfs storage domain - because we don't use leases, this
domain should work with 4.3.6 if you change this line in the domain metadata to:

BLOCK_SIZE=4096

To modify the line, you also have to delete the checksum line:

_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66
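For example, a minimal sketch of that edit (the metadata path below is assumed
from REMOTE_PATH and SDUUID above; back up the file first and make sure the
domain is not active while you edit it):

    # path assumed from REMOTE_PATH=/data/images and the SDUUID above
    MD=/data/images/6bdf1a0d-274b-4195-8ff5-a5c002ea1a77/dom_md/metadata
    cp "$MD" "$MD.bak"
    # change the block size and drop the checksum line
    sed -i -e 's/^BLOCK_SIZE=512$/BLOCK_SIZE=4096/' -e '/^_SHA_CKSUM=/d' "$MD"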

> I assume that at this point it works because ioprocess 1.1 does not report
> the block size to the engine (as it doesn't support this option?)?
>
I think it works because ioprocess 1.1 has a bug where it does not use direct
I/O when writing files. This 

[ovirt-users] Ovirt 4.2.7 won't start and drops to emergency console

2019-09-28 Thread jeremy_tourville
I see evidence that appears to be a problem with gluster.  /var/log/messages 
has multiple occurrences of: WARNING: /dev/gluster_vg1/lv_:  Thin's 
thin-pool needs inspection.  Also, if I run vgs I am returned info for my 
other volume groups but not gluster_vg1.  Lastly, when reviewing journalctl -xb 
| grep -i timed  I see messages from systemd:
Job dev-gluster_vg1-lv_vmdisks.device/start timed out.
Timed out waiting for device dev-gluster_vg1-lv_vmdisks.device

These messages are happening for both
/dev/gluster_vg1/lv_datadisks
/dev/gluster_vg1/lv_vmdisks

I did review the article here but I am unable to change to VG1.
https://mellowhost.com/billing/index.php?rp=/knowledgebase/65/How-to-Repair-a-lvm-thin-pool.html
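For reference, the usual LVM thin-pool repair sequence is roughly the following
(only a sketch; <thin_pool_lv> is a placeholder for the actual thin-pool LV name
in gluster_vg1, and the pool and its thin volumes must be inactive first):

    # deactivate, repair, reactivate the thin pool (LV name is a placeholder)
    lvchange -an gluster_vg1/<thin_pool_lv>
    lvconvert --repair gluster_vg1/<thin_pool_lv>
    lvchange -ay gluster_vg1/<thin_pool_lv>
    lvs -a gluster_vg1    # check that the pool and its metadata volumes look sane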

Can anyone assist with a procedure on how to repair? 


[ovirt-users] Re: [ANN] oVirt 4.3.6 is now generally available

2019-09-28 Thread Rik Theys

Hi Nir,

Thank you for your time.

On 9/27/19 4:27 PM, Nir Soffer wrote:



On Fri, Sep 27, 2019, 12:37 Rik Theys wrote:


Hi,

After upgrading to 4.3.6, my storage domain can no longer be
activated, rendering my data center useless.

My storage domain is local storage on a filesystem backed by
VDO/LVM. It seems 4.3.6 has added support for 4k storage.
My VDO does not have the 'emulate512' flag set.


This configuration is not supported before 4.3.6. Various operations
may fail when reading or writing to storage.

I was not aware of this when I set it up as I did not expect this to
influence a setup where oVirt uses local storage (a file system location).


4.3.6 detects the storage block size, creates compatible storage domain
metadata, and considers the block size when accessing storage.

I've tried downgrading all packages on the host to the previous
versions (with ioprocess 1.2), but this does not seem to make any
difference.


Downgrading should solve your issue, but without any logs we can only guess.


I was able to work around my issue by downgrading to ioprocess 1.1 (and 
vdsm-4.30.24). Downgrading to only 1.2 did not solve my issue. With 
ioprocess downgraded to 1.1, I did not have to downgrade the engine 
(still on 4.3.6).


I think I now have a better understanding of what happened that triggered this.

During a nightly yum-cron, the ioprocess and vdsm packages on the host 
were upgraded to 1.3 and vdsm 4.30.33. At this point, the engine log 
started to log:


2019-09-27 03:40:27,472+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Executing with 
domain map: {6bdf1a0d-274b-4195-8f

f5-a5c002ea1a77=active}
2019-09-27 03:40:27,646+02 WARN 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Unexpected 
return value: Status [code=348, message=Block size does not match 
storage block size: 'block_size=512, storage_block_size=4096']
2019-09-27 03:40:27,646+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] FINISH, 
ConnectStoragePoolVDSCommand, return: , log id: 483c7a17


I did not notice at first that this was a storage related issue and 
assumed it may get resolved by also upgrading the engine. So in the 
morning I upgraded the engine to 4.3.6 but this did not resolve my issue.


I then found the above error in the engine log. In the release notes of 
4.3.6 I read about the 4k support.


I then downgraded ioprocess (and vdsm) to ioprocess 1.2 but that did 
also not solve my issue. This is when I contacted the list with my question.


Afterwards I found in the ioprocess rpm changelog that (partial?) 4k 
support was also in 1.2. I kept on downgrading until I got ioprocess 1.1 
(without 4k support) and at this point I could re-attach my storage domain.


You mention above that 4.3.6 will detect the block size and configure 
the metadata on the storage domain? I've checked the dom_md/metadata 
file and it shows:


ALIGNMENT=1048576
*BLOCK_SIZE=512*
CLASS=Data
DESCRIPTION=studvirt1-Local
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=1
POOL_DESCRIPTION=studvirt1-Local
POOL_DOMAINS=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77:Active
POOL_SPM_ID=-1
POOL_SPM_LVER=-1
POOL_UUID=085f02e8-c3b4-4cef-a35c-e357a86eec0c
REMOTE_PATH=/data/images
ROLE=Master
SDUUID=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77
TYPE=LOCALFS
VERSION=5
_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66

I assume that at this point it works because ioprocess 1.1 does not 
report the block size to the engine (as it doesn't support this option?)?


Can I update the storage domain metadata manually to report 4096 instead?

I also noticed that the storage_domain_static table has the block_size 
stored. Should I update this field at the same time as I update the 
metadata file?
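If that field does need to match as well, presumably something along these lines
would update it - only a sketch: it assumes local psql access to the engine
database as the postgres user and that storage_domain_static.id is the SDUUID
shown above, and the engine database should be backed up first:

    # assumption: storage_domain_static.id equals the SDUUID above
    sudo -u postgres psql engine -c \
      "UPDATE storage_domain_static SET block_size = 4096 WHERE id = '6bdf1a0d-274b-4195-8ff5-a5c002ea1a77';"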


If the engine log and database dump is still needed to better understand 
the issue, I will send it on Monday.


Regards,

Rik



Should I also downgrade the engine to 4.3.5 to get this to work
again? I expected the downgrade of the host to be sufficient.

As an alternative I guess I could enable the emulate512 flag on
VDO but I can not find how to do this on an existing VDO volume.
Is this possible?


Please share more data so we can understand the failure:

- complete vdsm log showing the failure to activate the domain
- with 4.3.6
- with 4.3.5 (after you downgraded)
- contents of 
/rhev/data-center/mnt/_/domain-uuid/dom_md/metadata

(assuming your local domain mount is /domaindir)
- engine db dump

Nir


Regards,
Rik


On 9/26/19 4:58 PM, Sandro Bonazzola wrote:


The oVirt Project is pleased to announce the general availability
of oVirt 4.3.6 as of September 

[ovirt-users] change connection string in db

2019-09-28 Thread olaf . buitelaar
Dear oVirt users,

I'm currently migrating our gluster setup, so I've done a gluster replace-brick
to the new machines.
Now I'm trying to update the connection strings of the related storage domains,
including the one hosting the ovirt-engine (which I believe cannot be brought
down for maintenance). At the same time I'm trying to disable the "Use managed
gluster volume" feature.
I had tested this in a lab setup, but somehow I'm running into issues on the
actual setup.

On the lab setup it was enough to run a query like this:
UPDATE public.storage_server_connections
SET 
"connection"='10.201.0.6:/ovirt-kube',gluster_volume_id=NULL,mount_options='backup-volfile-servers=10.201.0.1:10.201.0.2:10.201.0.3:10.201.0.5:10.201.0.4:10.201.0.7:10.201.0.8:10.201.0.9'
WHERE id='29aae3ce-61e4-4fcd-a8f2-ab0a0c07fa48';
On the live setup it seems I also have to run a query like this:
UPDATE public.gluster_volumes
SET task_id=NULL
WHERE id='9a552d7a-8a0d-4bae-b5a2-1cb8a7edf5c9';
I couldn't really find what this task_id relates to, but it does make the
checkbox for "Use managed gluster volume" become unchecked in the web interface.
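To see what is currently stored before and after such an update, a query like
this should work (a sketch; it assumes the engine database is named engine, the
default, and that local psql access as the postgres user is possible):

    sudo -u postgres psql engine -c \
      "SELECT id, connection, mount_options FROM storage_server_connections;"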

In the lab setup it was enough to run, within the hosted engine:
- service ovirt-engine restart
and then bring an oVirt host machine into maintenance and activate it again; the
changed connection string was then mounted in the
/rhev/data-center/mnt/glusterSD/ directory.
Also the VMs, after being shut down and brought up again, started using the new
connection string.

But now on the production instance, when I restart the engine the connection
string is restored to the original values in the storage_server_connections
table. I don't really understand where the engine gathers this information from.
Any advice on how to actually change the connection strings would be highly
appreciated.

Thanks Olaf


[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-28 Thread Amit Bawer
On Fri, Sep 27, 2019 at 4:02 PM Vrgotic, Marko 
wrote:

> Hi oVirt gurus,
>
>
>
> Thanks to Tony, who pointed me toward the discovery process: the performance
> of the IO seems greatly dependent on the flags.
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 0.108962 s, *470 MB/s*
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=100000
> *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 322.314 s, *159 kB/s*
>
>
>
> The dsync flag tells dd to bypass buffers and caches (except certain kernel
> buffers) and to physically write the data to the disk before writing further.
> According to a number of sites I looked at, this is the way to test server
> latency with regard to IO operations. The difference in performance is huge,
> as you can see (below I have added results from tests with 4k and 8k blocks).
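For comparison, a sketch with other standard GNU dd flags: conv=fdatasync
flushes once at the end of the run (so it measures sustained throughput
including a final sync), while oflag=direct bypasses the page cache without
forcing a sync after every block:

    dd if=/dev/zero of=/tmp/test1.img bs=4096 count=100000 conv=fdatasync
    dd if=/dev/zero of=/tmp/test1.img bs=4096 count=100000 oflag=direct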
>
>
>
> Still, a certain software component we run tests with writes data in this
> (or a similar) way, which is why I got this complaint in the first place.
>
>
>
> Here is my current NFS mount settings:
>
>
> rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5
>
>
>
> *If you have any suggestions on possible NFS tuning options, to try to
> increase performance, I would highly appreciate it.*
>
*Can someone tell me how to change NFS mount options in oVirt for already
> existing/used storage?*
>

Taking into account your network's configured MTU [1] and Linux version [2],
you can tune the wsize and rsize mount options.
Editing mount options can be done from the Storage->Domains->Manage Domain menu.

[1]  https://access.redhat.com/solutions/2440411
[2] https://access.redhat.com/solutions/753853
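For example, matching the window sizes from Tony's working NFSv3 mount quoted
further down in this thread, the custom mount options field could be set to
something like (a sketch; adjust to your network and storage vendor guidance):

    rsize=1048576,wsize=1048576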

>
>
>
>
> Test results with 4096 and 8192 byte size.
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 409600000 bytes (410 MB) copied, 1.49831 s, *273 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096
> count=100000 *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 409600000 bytes (410 MB) copied, 349.041 s, *1.2 MB/s*
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 819200000 bytes (819 MB) copied, 11.6553 s, *70.3 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192
> count=100000 *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 819200000 bytes (819 MB) copied, 393.035 s, *2.1 MB/s*
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Thursday, 26 September 2019 at 09:51
> *To: *Amit Bawer 
> *Cc: *Tony Brian Albers , "hunter86...@yahoo.com" <
> hunter86...@yahoo.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
> Dear all,
>
>
>
> I very much appreciate all help and suggestions so far.
>
>
>
> Today I will send the test results and current mount settings for NFS4.
> Our production setup is using Netapp based NFS server.
>
>
>
> I am surprised by the results from Tony’s test.
>
> We also have one setup with Gluster based NFS, and I will run tests on
> those as well.
>
> Sent from my iPhone
>
>
>
> On 25 Sep 2019, at 14:18, Amit Bawer  wrote:
>
>
>
>
>
> On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers  wrote:
>
> Guys,
>
> Just for info, this is what I'm getting on a VM that is on shared
> storage via NFSv3:
>
> --snip--
> [root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
> count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 4096000000 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s
>
> real0m18.171s
> user0m1.077s
> sys 0m4.303s
> [root@proj-000 ~]#
> --snip--
>
> my /etc/exports:
> /data/ovirt
> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> and output from 'mount' on one of the hosts:
>
> sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
> 001.kac.lokalnet:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
> .41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.
> 16.216.41)
>
>
>
> It is worth comparing these mount options with the slow shared NFSv4 mount.
>
>
>
> Window size tuning can be found at the bottom of [1]; although it relates to
> NFSv3, it could be relevant to v4 as well.
>
> [1] https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
>
>
>
>
> connected via single 10gbit ethernet. Storage on NFS server is 8 x 4TB
> SATA disks in RAID10. NFS server is running CentOS 7.6.
>
> Maybe you can get some inspiration from this.
>
> /tony
>
>
>
> On Wed, 2019-09-25 at 09:59 +, Vrgotic, Marko 

[ovirt-users] Re: Changing certificates for oVirt 4.3.5

2019-09-28 Thread TomK

On 9/26/2019 6:44 AM, TomK wrote:

On 9/26/2019 3:58 AM, Yedidyah Bar David wrote:

On Thu, Sep 26, 2019 at 3:19 AM TomK  wrote:


Hey All,

Would anyone have a more recent wiki on changing all certificates,
including VDSM ones?

Have this page but it's for version 3.

https://access.redhat.com/solutions/2409751


I wasn't aware of this page. It's quite old, but mostly correct.
However, if you do not mind host downtime, it's much easier to re-enroll
certificates for all hosts, instead of the manual steps mentioned there
(that are quite old, perhaps not up-to-date).



Thinking the process didn't change much but wanted to ask if there's
anything more recent floating around.


I am not aware of anything specifically doing what you want.

Related pages you might want to check:

1. Section "Replacing SHA-1 Certificates with SHA-256 Certificates" of:

https://www.ovirt.org/documentation/upgrade-guide/chap-Post-Upgrade_Tasks.html 



2. Only now I noticed that it does not mention the option --san for
setting SubjectAltName. It does appear here:

https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html

See also:

https://www.ovirt.org/develop/release-management/features/infra/pki-renew.html 



So I guess (didn't try recently) that if you follow the existing procedures
and generate pki without --san, a later engine-setup will prompt you to renew.


Best regards,



I thought I ran that, though I probably didn't select the renew-all option.
However, it did not renew the VDSM one:


[root@ovirt01 ovirt-engine]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
   Configuration files: 
['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', 
'/etc/ovirt-engine-setup.conf.d/10-packaging.conf', 
'/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
   Log file: 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20190926062007-ysyb9p.log

   Version: otopi-1.8.3 (otopi-1.8.3-1.el7)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup (late)
[ INFO  ] Stage: Environment customization

   --== PRODUCT OPTIONS ==--

[ INFO  ] ovirt-provider-ovn already installed, skipping.

   --== PACKAGES ==--

[ INFO  ] Checking for product updates...
[ INFO  ] No product updates found

   --== NETWORK CONFIGURATION ==--

   Setup can automatically configure the firewall on this system.
   Note: automatic configuration of the firewall may overwrite 
current settings.
   NOTICE: iptables is deprecated and will be removed in future 
releases
   Do you want Setup to configure the firewall? (Yes, No) [Yes]: 
[ ERROR ] Invalid value
   Do you want Setup to configure the firewall? (Yes, No) [Yes]: 
[ ERROR ] Invalid value
   Do you want Setup to configure the firewall? (Yes, No) [Yes]: 
[ ERROR ] Invalid value
   Do you want Setup to configure the firewall? (Yes, No) [Yes]: 
[ ERROR ] Invalid value
   Do you want Setup to configure the firewall? (Yes, No) [Yes]: 
[ ERROR ] Invalid value
   Do you want Setup to configure the firewall? (Yes, No) [Yes]: 
[ ERROR ] Invalid value
   Do you want Setup to configure the firewall? (Yes, No) [Yes]: 
[ ERROR ] Invalid value

   Do you want Setup to configure the firewall? (Yes, No) [Yes]:
[ INFO  ] firewalld will be configured as firewall manager.

   --== DATABASE CONFIGURATION ==--

   The detected DWH database size is 48 MB.
   Setup can backup the existing database. The time and space 
required for the database backup depend on its size. This process takes 
time, and in some cases (for instance, when the size is few GBs) may 
take several hours to complete.
   If you choose to not back up the database, and Setup later 
fails for some reason, it will not be able to restore the database and 
all DWH data will be lost.
   Would you like to backup the existing database before 
upgrading it? (Yes, No) [Yes]:

   Perform full vacuum on the oVirt engine history
   database ovirt_engine_history@localhost?
   This operation may take a while depending on this setup 
health and the

   configuration of the db vacuum process.
   See https://www.postgresql.org/docs/10/sql-vacuum.html
   (Yes, No) [No]:

   --== OVIRT ENGINE CONFIGURATION ==--

   Perform full vacuum on the engine database engine@localhost?
   This operation may take a while depending on this setup 
health and the

   configuration of the db vacuum process.
   See https://www.postgresql.org/docs/10/sql-vacuum.html
   (Yes, No) [No]:

   --== STORAGE CONFIGURATION ==--


   --== PKI CONFIGURATION ==--

   One or 

[ovirt-users] Incremental backup using ovirt api

2019-09-28 Thread smidhunraj
Hi,
I tried to take incremental backup of a vm using this script.

 public function downloadDiskIncremental(){

  $data=array();



   $xmlStr = "

  incremental
 
";

  $curlParam=array(
  "url" => 
"vms/4044e014-7e20-4dbc-abe5-64690ec45f63/diskattachments",
  "method" => "POST",
  "data" =>$xmlStr,


  );
}

But it is throwing me error as
Array ( [status] => error [message] => For correct usage, see: 
https://ovirt.bobcares.com/ovirt-engine/api/v4/model#services/disk-attachments/methods/add
 
Please help me with this issue
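For what it's worth, the add method on .../diskattachments generally expects a
disk_attachment element wrapping the disk. A rough sketch of such a request with
curl is below (the password, hostname and disk UUID are placeholders, and whether
a backup=incremental disk property is supported at all on this oVirt version is a
separate question):

    # placeholders: admin password, engine hostname, disk UUID
    curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
      -d '<disk_attachment><interface>virtio</interface><bootable>false</bootable><active>true</active><disk id="DISK-UUID"/></disk_attachment>' \
      'https://ovirt.example.com/ovirt-engine/api/vms/4044e014-7e20-4dbc-abe5-64690ec45f63/diskattachments'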
   

