[ovirt-users] Local Storage domain to Shared

2019-02-19 Thread Matt Simonsen

Hello all,

I have a few nodes with local storage, and more than once I've considered 
exporting that storage via NFS so that I could migrate it into a shared 
storage domain.


I have thought of this post on the ovirt-users list many times: 
https://lists.ovirt.org/pipermail/users/2017-December/085521.html


Is this procedure documented & fully supported? Or is it something that 
just happens to work?


The instructions provided by Gianluca seem very clear. If this isn't 
documented anywhere better, e.g. as a blog post for the site, what should I 
include to make a write-up valuable enough to go on the site?
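
For reference, the core of Gianluca's procedure (as I understand it from that
thread) is simply exposing the local storage domain's directory over NFS so it
can be imported as a shared domain. A rough, hedged sketch -- the path, network
and export options below are placeholders, not the documented procedure:

# chown -R 36:36 /data                  (vdsm:kvm ownership, required by oVirt)
# echo "/data 10.0.0.0/24(rw,anonuid=36,anongid=36,all_squash)" >> /etc/exports
# systemctl enable --now nfs-server
# exportfs -ra
# showmount -e localhost                (confirm the export is visible)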


Thanks,

Matt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEPO3YLV6TBUSCNGUESMYQDJHG43445V/


[ovirt-users] Re: Local storage formatting

2018-09-04 Thread Matt Simonsen

On 09/04/2018 02:22 PM, Nir Soffer wrote:
Maybe you have an LVM filter set, which is highly recommended for an oVirt 
hypervisor.



Indeed, I do. I am not sure I have the right filter, however, so I 
appreciate the help.


This is the filter setup initially:

  filter = [ "a|^/dev/mapper/3600508b1001c7e172160824d7b204c3b2$|", "r|.*|" ]


Just to be clear, my intent isn't to add /dev/sdb to the main volume 
group, but to make a new volume group to set up a local ext4 mount point.



I changed it to:

  filter = [ "a|^/dev/sdb|", "a|^/dev/mapper/3600508b1001c7e172160824d7b204c3b2$|", "r|.*|" ]


Following this and a reboot, I was able to create a PV, VG, and LV.

# pvcreate /dev/sdb
# vgcreate data /dev/sdb
# lvcreate -L800g /dev/data --name local_images
# mkfs.ext4 /dev/data/local_images
-- adjust fstab
# mount -a
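
For completeness, the "adjust fstab" step is a single line along these lines
(the mount point here is just an example):

/dev/data/local_images  /var/local/images  ext4  defaults  0 0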

It seems to function as expected now that the filter has been adjusted. 
But is the filter doing what it is "supposed" to?


When I run the command "vdsm-tool config-lvm-filter" what I see is:

[root@node4-g8-h4 ~]# vdsm-tool config-lvm-filter
Analyzing host...
LVM filter is already configured for Vdsm

Thanks for the help and confirming how this should work.

Matt




To add /dev/sdb, you need to add it to the lvm filter in 
/etc/lvm/lvm.conf.


After you configure the device properly, you can generate an lvm filter
for the current setup using:

    vdsm-tool config-lvm-filter

Here is an example run on an unconfigured oVirt host:

#  vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/fedora_voodoo1-root
  mountpoint:      /
  devices:         /dev/vda2

  logical volume:  /dev/mapper/fedora_voodoo1-swap
  mountpoint:      [SWAP]
  devices:         /dev/vda2

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/vda2$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.
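
As a sketch of that manual edit: on a host like the example above, accepting an
extra local disk such as /dev/sdb would mean a filter along the lines of

  filter = [ "a|^/dev/vda2$|", "a|^/dev/sdb$|", "r|.*|" ]

(if a partition such as /dev/sdb1 is used as the PV instead of the whole disk,
it needs its own pattern, e.g. "a|^/dev/sdb1$|").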


Nir
On 09/04/2018 01:23 PM, Matt Simonsen wrote:

> Hello,
>
> I'm running oVirt with several data centers, some with NFS storage and
> some with local storage.
>
> I had problems in the past with a large pool and local storage. The
> problem was nodectl showed the pool being too full (I think >80%), but
> it was only the images that made the pool "full" -- and this storage
> was carefully setup such that there was no chance it would actually
> fill.  The LVs for oVirt itself were all under 20%, yet nodectl still
> reported the pool was too full.
>
> My solution so far has been to use our RAID card tools, so that sda is
> the oVirt node install, and sdb is for images.  There are probably
> other good reasons for me to handle it this way, for example being
> able to use different RAID levels, but I'm hoping someone can confirm
> my partitioning below doesn't have some risk I'm not yet aware of.
>
> I setup a new volume group for images, as below:
>
> [root@node4-g8-h4 multipath]# pvs
>   PV                                             VG              Fmt  Attr PSize    PFree
>   /dev/mapper/3600508b1001c7e172160824d7b204c3b2 onn_node4-g8-h4 lvm2 a--  <119.00g  <22.85g
>   /dev/sdb1                                      data            lvm2 a--     1.13t <361.30g
>
> [root@node4-g8-h4 multipath]# vgs
>   VG              #PV #LV #SN Attr   VSize    VFree
>   data              1   1   0 wz--n-    1.13t <361.30g
>   onn_node4-g8-h4   1  13   0 wz--n- <119.00g  <22.85g
>
> [root@node4-g8-h4 multipath]# lvs
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta% Move Log Cpy%Sync Convert
>   images_main                          data            -wi-ao     800.00g
>   home                                 onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>   ovirt-node-ng-4.2.5.1-0.20180816.0   onn_node4-g8-h4 Vwi---tz-k  64.10g pool00 root
>   ovirt-node-ng-4.2.5.1-0.20180816.0+1 onn_node4-g8-h4 Vwi---tz--  64.10g pool00 ovirt-node-ng-4.2.5.1-0.20180816.0
>   ovirt-node-ng-4.2.6-0.20180903.0     onn_node4-g8-h4 Vri---tz-k  64.10g pool00
>   ovirt-node-ng-4.2.6-0.20180903.0+1   onn_node4-g8-h4 Vwi-aotz--  64.10g pool00 ovirt-node-ng-4.2.6-0.20180903.0     4.83
>   pool00                               onn_node4-g8-h4 twi-aotz--  91.10g                                             8.94  0.49
>   root                                 onn_node4-g8-h

[ovirt-users] Re: Local storage formatting

2018-09-04 Thread Matt Simonsen

Hello all,

Following this report below, I did a reboot. Now I have a real question.

I added the VG, LV and mount point to this node using the port 9090 web 
interface.


Now the volume group isn't active and will not mount, causing the boot 
to hang.


I am able to do "vgchange -ay data" and then a manual mount in rescue mode.

Any feedback on the best way to add a new volume group on an empty 
disk (sdb) would be appreciated. Prior to using the web interface, 
I was having failures using the manual tools on /dev/sdb with the error 
"device /dev/sdb excluded by filter", which I suspect is related.


Thanks

Matt
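
PS: In case it helps anyone searching later, a hedged sketch of the two things
I would check first for the "excluded by filter" / not-activating symptoms
(names and paths are from my setup and will differ elsewhere):

# grep filter /etc/lvm/lvm.conf      (make sure /dev/sdb is accepted, e.g.
                                      filter = [ "a|^/dev/sdb|", ..., "r|.*|" ])
# vgchange -ay data
# mount -a

and, until that is sorted out, marking the entry in /etc/fstab with "nofail" so
a volume group that fails to activate cannot hang the boot:

/dev/data/<lv_name>  /path/to/mountpoint  ext4  defaults,nofail  0 0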





On 09/04/2018 01:23 PM, Matt Simonsen wrote:

Hello,

I'm running oVirt with several data centers, some with NFS storage and 
some with local storage.


I had problems in the past with a large pool and local storage. The 
problem was nodectl showed the pool being too full (I think >80%), but 
it was only the images that made the pool "full" -- and this storage 
was carefully setup such that there was no chance it would actually 
fill.  The LVs for oVirt itself were all under 20%, yet nodectl still 
reported the pool was too full.


My solution so far has been to use our RAID card tools, so that sda is 
the oVirt node install, and sdb is for images.  There are probably 
other good reasons for me to handle it this way, for example being 
able to use different RAID levels, but I'm hoping someone can confirm 
my partitioning below doesn't have some risk I'm not yet aware of.


I setup a new volume group for images, as below:


[root@node4-g8-h4 multipath]# pvs
  PV                                             VG              Fmt  Attr PSize    PFree
  /dev/mapper/3600508b1001c7e172160824d7b204c3b2 onn_node4-g8-h4 lvm2 a--  <119.00g  <22.85g
  /dev/sdb1                                      data            lvm2 a--     1.13t <361.30g

[root@node4-g8-h4 multipath]# vgs
  VG              #PV #LV #SN Attr   VSize    VFree
  data              1   1   0 wz--n-    1.13t <361.30g
  onn_node4-g8-h4   1  13   0 wz--n- <119.00g  <22.85g

[root@node4-g8-h4 multipath]# lvs
  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta% Move Log Cpy%Sync Convert
  images_main                          data            -wi-ao     800.00g
  home                                 onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.5.1-0.20180816.0   onn_node4-g8-h4 Vwi---tz-k  64.10g pool00 root
  ovirt-node-ng-4.2.5.1-0.20180816.0+1 onn_node4-g8-h4 Vwi---tz--  64.10g pool00 ovirt-node-ng-4.2.5.1-0.20180816.0
  ovirt-node-ng-4.2.6-0.20180903.0     onn_node4-g8-h4 Vri---tz-k  64.10g pool00
  ovirt-node-ng-4.2.6-0.20180903.0+1   onn_node4-g8-h4 Vwi-aotz--  64.10g pool00 ovirt-node-ng-4.2.6-0.20180903.0     4.83
  pool00                               onn_node4-g8-h4 twi-aotz--  91.10g                                             8.94  0.49
  root                                 onn_node4-g8-h4 Vwi---tz--  64.10g pool00
  swap                                 onn_node4-g8-h4 -wi-ao       4.00g
  tmp                                  onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.87
  var                                  onn_node4-g8-h4 Vwi-aotz--  15.00g pool00                                     3.31
  var_crash                            onn_node4-g8-h4 Vwi-aotz--  10.00g pool00                                     2.86
  var_log                              onn_node4-g8-h4 Vwi-aotz--   8.00g pool00                                     3.57
  var_log_audit                        onn_node4-g8-h4 Vwi-aotz--   2.00g pool00                                     4.89




The images_main LV is set up as a "Block device for filesystems" with ext4. 
Is there any reason I should consider a pool of thinly provisioned 
volumes instead?  I don't need to over-allocate storage and it seems to me 
like a fixed partition is ideal. Please confirm or let me know if 
there's anything else I should consider.



Thanks

Matt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7N547X6DC7KHHVCDGKXQGNJV6TG7E3U/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LJINANK6PAGVV22H5OTYTJ3M4WIWPTMV/


[ovirt-users] Local storage formatting

2018-09-04 Thread Matt Simonsen

Hello,

I'm running oVirt with several data centers, some with NFS storage and 
some with local storage.


I had problems in the past with a large pool and local storage. The 
problem was nodectl showed the pool being too full (I think >80%), but 
it was only the images that made the pool "full" -- and this storage was 
carefully setup such that there was no chance it would actually fill.  
The LVs for oVirt itself were all under 20%, yet nodectl still reported 
the pool was too full.


My solution so far has been to use our RAID card tools, so that sda is 
the oVirt node install, and sdb is for images.  There are probably other 
good reasons for me to handle it this way, for example being able to use 
different RAID levels, but I'm hoping someone can confirm my 
partitioning below doesn't have some risk I'm not yet aware of.


I setup a new volume group for images, as below:


[root@node4-g8-h4 multipath]# pvs
  PV                                             VG              Fmt  Attr PSize    PFree
  /dev/mapper/3600508b1001c7e172160824d7b204c3b2 onn_node4-g8-h4 lvm2 a--  <119.00g  <22.85g
  /dev/sdb1                                      data            lvm2 a--     1.13t <361.30g

[root@node4-g8-h4 multipath]# vgs
  VG              #PV #LV #SN Attr   VSize    VFree
  data              1   1   0 wz--n-    1.13t <361.30g
  onn_node4-g8-h4   1  13   0 wz--n- <119.00g  <22.85g

[root@node4-g8-h4 multipath]# lvs
  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta% Move Log Cpy%Sync Convert
  images_main                          data            -wi-ao     800.00g
  home                                 onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.5.1-0.20180816.0   onn_node4-g8-h4 Vwi---tz-k  64.10g pool00 root
  ovirt-node-ng-4.2.5.1-0.20180816.0+1 onn_node4-g8-h4 Vwi---tz--  64.10g pool00 ovirt-node-ng-4.2.5.1-0.20180816.0
  ovirt-node-ng-4.2.6-0.20180903.0     onn_node4-g8-h4 Vri---tz-k  64.10g pool00
  ovirt-node-ng-4.2.6-0.20180903.0+1   onn_node4-g8-h4 Vwi-aotz--  64.10g pool00 ovirt-node-ng-4.2.6-0.20180903.0     4.83
  pool00                               onn_node4-g8-h4 twi-aotz--  91.10g                                             8.94  0.49
  root                                 onn_node4-g8-h4 Vwi---tz--  64.10g pool00
  swap                                 onn_node4-g8-h4 -wi-ao       4.00g
  tmp                                  onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.87
  var                                  onn_node4-g8-h4 Vwi-aotz--  15.00g pool00                                     3.31
  var_crash                            onn_node4-g8-h4 Vwi-aotz--  10.00g pool00                                     2.86
  var_log                              onn_node4-g8-h4 Vwi-aotz--   8.00g pool00                                     3.57
  var_log_audit                        onn_node4-g8-h4 Vwi-aotz--   2.00g pool00                                     4.89




The images_main LV is set up as a "Block device for filesystems" with ext4. Is 
there any reason I should consider a pool of thinly provisioned volumes instead?  
I don't need to over-allocate storage and it seems to me like a fixed 
partition is ideal. Please confirm or let me know if there's anything 
else I should consider.
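
For context, the two options being weighed look roughly like this (sizes are
examples and the thin pool name is hypothetical):

# lvcreate -L 800G -n images_main data
      (fixed, fully allocated LV -- what I did above)

# lvcreate -L 1T --thinpool images_pool data
# lvcreate -V 800G --thin -n images_main data/images_pool
      (a thin pool plus a thin volume, which would allow over-allocation)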



Thanks

Matt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7N547X6DC7KHHVCDGKXQGNJV6TG7E3U/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-30 Thread Matt Simonsen

I'm not sure I have logs from any instances that failed.

However, having upgraded about 10 nodes, the trick for success seems to be 
(a rough sketch in shell follows the list):

- Manually cleaning grub.cfg of any past node kernel entries (i.e. when on 
4.2.3, I remove the 4.2.2 entries)


- Manually removing any past kernel directories from /boot

- Removing any old LVs (the .0 and .0+1)

- yum update & reboot
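
Roughly, as shell commands (the 4.2.2 version strings are illustrative --
substitute whatever old layers "lvs" and "ls /boot" actually show, and note
the VG may be named onn or onn_<hostname> depending on the host):

# rm -rf /boot/ovirt-node-ng-4.2.2*
# vi /boot/grub2/grub.cfg      (delete the menuentry blocks for the removed layers)
# lvremove /dev/onn/ovirt-node-ng-4.2.2-0.20180423.0
# lvremove /dev/onn/ovirt-node-ng-4.2.2-0.20180423.0+1
# yum update && reboot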

I'm not sure how our systems got into a state that requires this; we've done 
5-6 upgrades starting with 4.1 and never had to do this before.


If I continue to have problems going from 4.2.5 to 4.2.6, I will send as clear 
a bug report, with logs, as possible.


Thank you for your help,

Matt




On 08/27/2018 12:37 AM, Yuval Turgeman wrote:

Hi Matt,

I just went over the log you sent and couldn't find anything other 
than the semanage failure, which you say seems to be ok.  Do you have 
any other logs (perhaps from other machines) that we can look at?


Thanks,
Yuval.

On Tue, Aug 21, 2018 at 10:11 PM, Matt Simonsen <m...@khoza.com> wrote:


I ran this on a host that has the same exact failing upgrade. It
returned with no output.

I'm expecting if I manually remove the /boot kernel, the grub
lines from any other installs, and the other LV layers that the
upgrade will work but with myself and others experiencing this I'm
happy to assist in finding the cause.

Is there anything else I can do to assist?

Thanks,

Matt





On 08/21/2018 12:38 AM, Yuval Turgeman wrote:

Hi again Matt,

I was wondering what `semanage permissive -a setfiles_t` looks
like on the host that failed to upgrade because I don't see the
exact error in the log.

Thanks,
Yuval.



    On Tue, Aug 21, 2018 at 12:04 AM, Matt Simonsen <m...@khoza.com> wrote:

Hello,

I replied to a different email in this thread, noting I
believe I may have a workaround to this issue.

I did run this on a server that has not yet been upgraded,
which previously has failed at being updated, and the command
returned "0" with no output.

[ ~]# semanage permissive -a setfiles_t
[ ~]# echo $?
0

Please let me know if there is anything else I can do to assist,

Matt





On 08/20/2018 08:19 AM, Yuval Turgeman wrote:

Hi Matt,

Can you attach the output from the following line

# semanage permissive -a setfiles_t

Thanks,
    Yuval.


On Fri, Aug 17, 2018 at 2:26 AM, Matt Simonsen <m...@khoza.com> wrote:

Hello all,

I've emailed about similar trouble with an oVirt Node
upgrade using the ISO install. I've attached the
/tmp/imgbased.log file in hopes it will help give a clue
as to why the trouble.

Since these use NFS storage I can rebuild, but would
like to know, ideally, what caused the upgrade to break.

Truthfully following the install, I don't think I have
done *that* much to these systems, so I'm not sure what
would have caused the problem.

I have done several successful upgrades in the past and
most of my standalone systems have been working great.

I've been really happy with oVirt, so kudos to the team.

Thanks for any help,

Matt



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/










___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7WNTD6EAJ6MA5Q5AMY527YCTRFZJHFDQ/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-21 Thread Matt Simonsen
I ran this on a host that has the same exact failing upgrade. It 
returned with no output.


I expect that if I manually remove the /boot kernel, the grub lines from 
any other installs, and the other LV layers, the upgrade will work; but 
since others besides me are experiencing this, I'm happy to assist in 
finding the cause.


Is there anything else I can do to assist?

Thanks,

Matt





On 08/21/2018 12:38 AM, Yuval Turgeman wrote:

Hi again Matt,

I was wondering what `semanage permissive -a setfiles_t` looks like on 
the host that failed to upgrade because I don't see the exact error in 
the log.


Thanks,
Yuval.



On Tue, Aug 21, 2018 at 12:04 AM, Matt Simonsen <m...@khoza.com> wrote:


Hello,

I replied to a different email in this thread, noting I believe I
may have a workaround to this issue.

I did run this on a server that has not yet been upgraded, which
previously has failed at being updated, and the command returned
"0" with no output.

[ ~]# semanage permissive -a setfiles_t
[ ~]# echo $?
0

Please let me know if there is anything else I can do to assist,

Matt





On 08/20/2018 08:19 AM, Yuval Turgeman wrote:

Hi Matt,

Can you attach the output from the following line

# semanage permissive -a setfiles_t

Thanks,
Yuval.


    On Fri, Aug 17, 2018 at 2:26 AM, Matt Simonsen <m...@khoza.com> wrote:

Hello all,

I've emailed about similar trouble with an oVirt Node upgrade
using the ISO install. I've attached the /tmp/imgbased.log
file in hopes it will help give a clue as to why the trouble.

Since these use NFS storage I can rebuild, but would like to
know, ideally, what caused the upgrade to break.

Truthfully following the install, I don't think I have done
*that* much to these systems, so I'm not sure what would have
caused the problem.

I have done several successful upgrades in the past and most
of my standalone systems have been working great.

I've been really happy with oVirt, so kudos to the team.

Thanks for any help,

Matt



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/







___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBASXDKHVVJIOSO5CAMI2VCG4B525UB3/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-20 Thread Matt Simonsen

Hello,

I replied to a different email in this thread, noting I believe I may 
have a workaround to this issue.


I did run this on a server that has not yet been upgraded (one that 
previously failed to update), and the command returned "0" 
with no output.


[ ~]# semanage permissive -a setfiles_t
[ ~]# echo $?
0

Please let me know if there is anything else I can do to assist,

Matt





On 08/20/2018 08:19 AM, Yuval Turgeman wrote:

Hi Matt,

Can you attach the output from the following line

# semanage permissive -a setfiles_t

Thanks,
Yuval.


On Fri, Aug 17, 2018 at 2:26 AM, Matt Simonsen <m...@khoza.com> wrote:


Hello all,

I've emailed about similar trouble with an oVirt Node upgrade
using the ISO install. I've attached the /tmp/imgbased.log file in
hopes it will help give a clue as to why the trouble.

Since these use NFS storage I can rebuild, but would like to know,
ideally, what caused the upgrade to break.

Truthfully following the install, I don't think I have done *that*
much to these systems, so I'm not sure what would have caused the
problem.

I have done several successful upgrades in the past and most of my
standalone systems have been working great.

I've been really happy with oVirt, so kudos to the team.

Thanks for any help,

Matt



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XGFXAN3SLQTBLA2OCM2BZUGYKQ2PTCN2/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-20 Thread Matt Simonsen

Hello all,

I believe I have fixed this for several hosts - or at the very least I 
have successfully upgraded from 4.2.3 to 4.2.5 on servers 
that previously failed.


This is documented from memory, but I believe I didn't do anything else.

What I did first was remove the old LVs for the new install and/or any 
old installs (we had one from 4.2.2), like this:


lvremove /dev/onn/ovirt-node-ng-4.2.x
lvremove /dev/onn/ovirt-node-ng-4.2.x+1

lvremove /dev/onn/var_crash

The nodes were running 4.2.3 and were previously on 4.2.2; the old LVs 
remained. I had earlier removed only the 4.2.2 LVs and run yum update, 
and that alone did not allow the upgrade to complete 
properly (at the very least, grub wasn't updated).


In my searching, I noticed a suggestion somewhere about removing the grub 
entries and the old data under /boot/ovirt-node-ng-4.2.x*


Removing the previous install's directory from /boot and manually 
removing the old boot-loader entries from /boot/grub2/grub.cfg, along with 
any other LVs that remained from previous installs, seems to 
"work" -- the yum update that follows succeeds & grub is updated.


Following this I upgraded from 4.2.3 to 4.2.5 on several hosts, and the 
process went perfectly on 3 in a row that previously failed.
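
For what it's worth, a quick way to sanity-check that an update actually took
(hedged -- just how I'd verify it) is:

# nodectl info      (the new 4.2.5 layer should appear under both "layers"
                     and "bootloader: entries")
# nodectl check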


I believe I can now successfully upgrade our remaining hosts from 4.2.3 
to 4.2.5. I'm happy to provide more info if it will help identify 
exactly what caused this.


Thanks,

Matt


PS- I will also be in #ovirt for about 3 hours if anyone would like to 
work there with me




On 08/20/2018 08:19 AM, Yuval Turgeman wrote:

Hi Matt,

Can you attach the output from the following line

# semanage permissive -a setfiles_t

Thanks,
Yuval.


On Fri, Aug 17, 2018 at 2:26 AM, Matt Simonsen <m...@khoza.com> wrote:


Hello all,

I've emailed about similar trouble with an oVirt Node upgrade
using the ISO install. I've attached the /tmp/imgbased.log file in
hopes it will help give a clue as to why the trouble.

Since these use NFS storage I can rebuild, but would like to know,
ideally, what caused the upgrade to break.

Truthfully following the install, I don't think I have done *that*
much to these systems, so I'm not sure what would have caused the
problem.

I have done several successful upgrades in the past and most of my
standalone systems have been working great.

I've been really happy with oVirt, so kudos to the team.

Thanks for any help,

Matt



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C2S4LHNR2CSWBD64PLLCR6CD4XQ42WVQ/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-17 Thread Matt Simonsen

On 08/17/2018 09:00 AM, Vincent Royer wrote:
4.2.5 seems to have NFS issues for some users.  What is your storage 
server?




We have shared storage via NFS, served from a dedicated CentOS 7 server.

I'm not so much having issues using 4.2.5; the issue is that 4.2.3 
isn't upgrading to 4.2.4 or 4.2.5, due to errors that I am 
hoping the attached log file will clarify for somebody familiar with 
the upgrade process.


Thanks,

Matt





On Thu, Aug 16, 2018, 4:49 PM Matt Simonsen <m...@khoza.com> wrote:


Hello all,

I've emailed about similar trouble with an oVirt Node upgrade
using the
ISO install. I've attached the /tmp/imgbased.log file in hopes it
will
help give a clue as to why the trouble.

Since these use NFS storage I can rebuild, but would like to know,
ideally, what caused the upgrade to break.

Truthfully following the install, I don't think I have done *that*
much
to these systems, so I'm not sure what would have caused the problem.

I have done several successful upgrades in the past and most of my
standalone systems have been working great.

I've been really happy with oVirt, so kudos to the team.

Thanks for any help,

Matt


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HBP3GMDLCG2QZSDALTOQOZDAUGL7WLTJ/


[ovirt-users] oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-16 Thread Matt Simonsen

Hello all,

I've emailed before about similar trouble with an oVirt Node upgrade using 
the ISO install. I've attached the /tmp/imgbased.log file in hopes it will 
help give a clue as to the cause of the trouble.


Since these use NFS storage I can rebuild, but would like to know, 
ideally, what caused the upgrade to break.


Truthfully following the install, I don't think I have done *that* much 
to these systems, so I'm not sure what would have caused the problem.


I have done several successful upgrades in the past and most of my 
standalone systems have been working great.


I've been really happy with oVirt, so kudos to the team.

Thanks for any help,

Matt




imgbased.log.gz
Description: application/gzip
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed (with solution)

2018-07-03 Thread Matt Simonsen

Many thanks to Yuval.

After moving the discussion to #ovirt, I tried "fstrim -a" and this 
allowed the upgrade to complete successfully.
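
For anyone hitting the same thing, the sequence was roughly (a sketch):

# fstrim -a
# lvs -a onn_node1-g8-h4      (Data% on pool00 should drop noticeably)

and then re-run the update.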


Matt







On 07/03/2018 12:19 PM, Yuval Turgeman wrote:

Hi Matt,

I would try to run `fstrim -a` (man fstrim) and see if it frees 
anything from the thinpool.  If you do decide to run this, please send 
the output for lvs again.


Also, are you on #ovirt ?

Thanks,
Yuval.


On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen <m...@khoza.com> wrote:


Thank you again for the assistance with this issue.

Below is the result of the command below.

In the future I am considering using different Logical RAID
Volumes to get different devices (sda, sdb, etc) for the oVirt
Node image & storage filesystem to simplify.  However I'd like to
understand why this upgrade failed and also how to correct it if
at all possible.

I believe I need to recreate the /var/crash partition? I
incorrectly removed it, is it simply a matter of using LVM to add
a new partition and format it?

Secondly, do you have any suggestions on how to move forward with
the error regarding the pool capacity? I'm not sure if this is a
legitimate error or problem in the upgrade process.

Thanks,

Matt




On 07/03/2018 03:58 AM, Yuval Turgeman wrote:

Not sure this is the problem, autoextend should be enabled for
the thinpool, `lvs -o +profile` should show imgbased-pool
(defined at /etc/lvm/profile/imgbased-pool.profile)

On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David <d...@redhat.com> wrote:

    On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen mailto:m...@khoza.com>> wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue
given I have several hundred GB of storage in the thin pool
that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV  VG              Attr       LSize   Pool  Origin     
                       Data% Meta%  Move Log Cpy%Sync Convert
>   home  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00          
                        4.79
>   ovirt-node-ng-4.2.2-0.20180423.0    onn_node1-g8-h4
Vwi---tz-k <50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1  onn_node1-g8-h4
Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0  onn_node1-g8-h4
Vri---tz-k <50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4
Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 6.95
>   pool00  onn_node1-g8-h4 twi-aotz--  <1.30t              
                       76.63 50.34

I think your thinpool meta volume is close to full and needs
to be enlarged.
This quite likely happened because you extended the thinpool
without
extending the meta vol.

Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta

Best regards,

>   root  onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>   tmp   onn_node1-g8-h4 Vwi-aotz--   1.00g pool00 5.04
>   var   onn_node1-g8-h4 Vwi-aotz--  15.00g pool00 5.86
>   var_crash   onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>   var_local_images  onn_node1-g8-h4 Vwi-aotz--   1.10t
pool00 89.72
>   var_log   onn_node1-g8-h4 Vwi-aotz--   8.00g pool00 6.84
>   var_log_audit   onn_node1-g8-h4 Vwi-aotz--   2.00g pool00
6.16
> [root@node6-g8-h4 ~]# vgs
>   VG              #PV #LV #SN Attr  VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version:
imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting
image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling
binary: (['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG]

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Matt Simonsen

Thank you again for the assistance with this issue.

Below is the result of the command below.

In the future I am considering using different logical RAID volumes to 
get separate devices (sda, sdb, etc.) for the oVirt Node image and the 
storage filesystem, to simplify things.  However, I'd like to understand why 
this upgrade failed and also how to correct it, if at all possible.


I believe I need to recreate the /var/crash partition? I incorrectly 
removed it; is it simply a matter of using LVM to add a new logical 
volume and format it?
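
If so, my hedged guess at the steps (names taken from the existing layout shown
in the lvs output further down) would be:

# lvcreate --thin --virtualsize 10G --name var_crash onn_node1-g8-h4/pool00
# mkfs.ext4 /dev/onn_node1-g8-h4/var_crash
# echo "/dev/mapper/onn_node1--g8--h4-var_crash /var/crash ext4 defaults,discard 1 2" >> /etc/fstab
# mount /var/crash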


Secondly, do you have any suggestions on how to move forward with the 
error regarding the pool capacity? I'm not sure if this is a legitimate 
error or a problem in the upgrade process.
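
On the pool capacity question, a hedged way to see whether it is the data or
the metadata side of the thin pool that is hitting the threshold:

# lvs -a -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4
# lvs -o +profile onn_node1-g8-h4/pool00      (should show imgbased-pool, per the note below)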


Thanks,

Matt




On 07/03/2018 03:58 AM, Yuval Turgeman wrote:
Not sure this is the problem, autoextend should be enabled for the 
thinpool, `lvs -o +profile` should show imgbased-pool (defined at 
/etc/lvm/profile/imgbased-pool.profile)


On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David <d...@redhat.com> wrote:


On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <m...@khoza.com> wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given
I have several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV                                   VG   Attr       LSize 
 Pool   Origin      Data%  Meta%  Move Log Cpy%Sync Convert
>   home  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00              
      4.79
>   ovirt-node-ng-4.2.2-0.20180423.0  onn_node1-g8-h4 Vwi---tz-k
<50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1  onn_node1-g8-h4 Vwi---tz--
<50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0  onn_node1-g8-h4 Vri---tz-k
<50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4
Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 6.95
>   pool00  onn_node1-g8-h4 twi-aotz--  <1.30t                    
   76.63  50.34

I think your thinpool meta volume is close to full and needs to be
enlarged.
This quite likely happened because you extended the thinpool without
extending the meta vol.

Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta

Best regards,

>   root  onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>   tmp onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                
    5.04
>   var onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                
    5.86
>   var_crash onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>   var_local_images  onn_node1-g8-h4 Vwi-aotz--   1.10t pool00  
                    89.72
>   var_log onn_node1-g8-h4 Vwi-aotz--   8.00g pool00            
        6.84
>   var_log_audit onn_node1-g8-h4 Vwi-aotz--   2.00g pool00      
              6.16
> [root@node6-g8-h4 ~]# vgs
>   VG              #PV #LV #SN Attr   VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version:
imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {'close_fds': True,
'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned:
/tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary:
(['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,157 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen

On 07/02/2018 12:55 PM, Yuval Turgeman wrote:

Are you mounted with discard ? perhaps fstrim ?





I believe that I have all the default options, and I have one extra 
partition for images.



#
# /etc/fstab
# Created by anaconda on Sat Oct 31 18:04:29 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1 / ext4 
defaults,discard 1 1

UUID=84ca8776-61d6-4b19-9104-99730932b45a /boot ext4    defaults    1 2
/dev/mapper/onn_node1--g8--h4-home /home ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var /var ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var_local_images /var/local/images   
ext4    defaults    1 2

/dev/mapper/onn_node1--g8--h4-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var_log_audit /var/log/audit ext4 
defaults,discard 1 2



At this point I don't have /var/crash mounted (or even an LV for it).  I 
assume I should re-create it.



I noticed on another server with the same problem, the var_crash LV 
isn't available.  Could this be part of the problem?


  --- Logical volume ---
  LV Path    /dev/onn/var_crash
  LV Name    var_crash
  VG Name    onn
  LV UUID    X1TPMZ-XeZP-DGYv-woZW-3kvk-vWZu-XQcFhL
  LV Write Access    read/write
  LV Creation host, time node1-g7-h1.srihosting.com, 2018-04-05 
07:03:35 -0700

  LV Pool name   pool00
  LV Status  NOT available
  LV Size    10.00 GiB
  Current LE 2560
  Segments   1
  Allocation inherit
  Read ahead sectors auto



Thanks
Matt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WAV667HP5HU6IXGJTLZQ6YSMHSHTHF6M/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen

Yes, it shows 8g free on the VG.

I removed the LV for /var/crash, then ran the install again, and it is still 
failing at this step:



2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate', 
'--thin', '--virtualsize', u'53750005760B', '--name', 
'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) 
{'close_fds': True, 'stderr': -2}
2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create 
new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached 
threshold.


2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount', 
'-l', u'/tmp/mnt.ZYOjC'],) {}



Thanks

Matt





On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
Not in front of my laptop so it's a little hard to read but does it 
say 8g free on the vg ?


On Mon, Jul 2, 2018, 20:00 Matt Simonsen <m...@khoza.com> wrote:


This error adds some clarity.

That said, I'm a bit unsure how the space can be the issue given I
have several hundred GB of storage in the thin pool that's unused...

How do you suggest I proceed?

Thank you for your help,

Matt



[root@node6-g8-h4 ~]# lvs

  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta% Move Log Cpy%Sync Convert
  home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
  ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
  ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0   6.95
  pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34
  root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
  tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                      5.04
  var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                      5.86
  var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
  var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                     89.72
  var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                      6.84
  var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                      6.16

[root@node6-g8-h4 ~]# vgs
  VG              #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g


2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp',
'-d', '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary:
(['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
'/tmp/mnt.1OhaU/LiveOS/rootfs.img'
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp',
'-d', '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,1

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount', 
'-l', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l', 
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}

2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir', 
u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir', 
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}

2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py", 
line 53, in 

    CliApplication()
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py", 
line 82, in CliApplication

    app.hooks.emit("post-arg-parse", args)
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py", 
line 120, in emit

    cb(self.context, *args)
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", 
line 56, in post_argparse

    base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", 
line 118, in extract

    "%s" % size, nvr)
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", 
line 84, in add_base_with_tree

    lvs)
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py", 
line 310, in add_base

    new_base_lv = pool.create_thinvol(new_base.lv_name, size)
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py", 
line 324, in create_thinvol

    self.lvm_name])
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", 
line 390, in lvcreate

    return self.call(["lvcreate"] + args, **kwargs)
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", 
line 378, in call

    stdout = call(*args, **kwargs)
  File 
"/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", 
line 153, in call

    return subprocess.check_output(*args, **kwargs).strip()
  File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['lvcreate', '--thin', 
'--virtualsize', u'53750005760B', '--name', 
'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00']' returned 
non-zero exit status 5






On 07/02/2018 04:58 AM, Yuval Turgeman wrote:
Looks like the upgrade script failed - can you please attach 
/var/log/imgbased.log or /tmp/imgbased.log ?


Thanks,
Yuval.

On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbona...@redhat.com> wrote:


Yuval, can you please have a look?

2018-06-30 7:48 GMT+02:00 Oliver Riesener <oliver.riese...@hs-bremen.de>:

Yes, here is the same.

It seems the bootloader isn't configured right?

I did the Upgrade and reboot to 4.2.4 from UI and got:

[root@ovn-monster ~]# nodectl info
layers:
ovirt-node-ng-4.2.4-0.20180626.0:
ovirt-node-ng-4.2.4-0.20180626.0+1
ovirt-node-ng-4.2.3.1-0.20180530.0:
ovirt-node-ng-4.2.3.1-0.20180530.0+1
ovirt-node-ng-4.2.3-0.20180524.0:
ovirt-node-ng-4.2.3-0.20180524.0+1
ovirt-node-ng-4.2.1.1-0.20180223.0:
ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel:

/boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
      rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
      LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd:

/boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root:
/dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
ovirt-no

[ovirt-users] oVirt 4.2.3 to 4.2.4 failed

2018-06-29 Thread Matt Simonsen

Hello,

I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node 
platform and it doesn't appear the updates worked.



[root@node6-g8-h4 ~]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
  : package_upload, product-id, search-disabled-repos, 
subscription-

  : manager
This system is not registered with an entitlement server. You can use 
subscription-manager to register.

Loading mirror speeds from cached hostfile
 * ovirt-4.2-epel: linux.mirrors.es.net
Resolving Dependencies
--> Running transaction check
---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be 
updated
---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be 
obsoleting
---> Package ovirt-node-ng-image-update-placeholder.noarch 
0:4.2.3.1-1.el7 will be obsoleted

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                      Arch    Version      Repository   Size
================================================================================
Installing:
 ovirt-node-ng-image-update   noarch  4.2.4-1.el7  ovirt-4.2   647 M
     replacing  ovirt-node-ng-image-update-placeholder.noarch 4.2.3.1-1.el7

Transaction Summary
================================================================================
Install  1 Package

Total download size: 647 M
Is this ok [y/d/N]: y
Downloading packages:
warning: 
/var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm: 
Header V4 RSA/SHA1 Signature, key ID fe590cb7: NOKEY
Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not 
installed

ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
Importing GPG key 0xFE590CB7:
 Userid : "oVirt "
 Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
 Package    : ovirt-release42-4.2.3.1-1.el7.noarch (installed)
 From   : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ovirt-node-ng-image-update-4.2.4-1.el7.noarch 1/3
warning: %post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet 
failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package 
ovirt-node-ng-image-update-4.2.4-1.el7.noarch
  Erasing    : 
ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch 2/3

  Cleanup    : ovirt-node-ng-image-update-4.2.3.1-1.el7.noarch 3/3
warning: file 
/usr/share/ovirt-node-ng/image/ovirt-node-ng-4.2.0-0.20180530.0.el7.squashfs.img: 
remove failed: No such file or directory

Uploading Package Profile
Unable to upload Package Profile
  Verifying  : ovirt-node-ng-image-update-4.2.4-1.el7.noarch 1/3
  Verifying  : ovirt-node-ng-image-update-4.2.3.1-1.el7.noarch 2/3
  Verifying  : 
ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch 3/3


Installed:
  ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7

Replaced:
  ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7

Complete!
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use 
subscription-manager to register.

Cannot upload enabled repos report, is this client registered?


My engine shows the nodes as having no updates available; however, the major 
components, including the kernel version and the port 9090 admin GUI, still show 4.2.3.


Is there anything I can provide to help diagnose the issue?


[root@node6-g8-h4 ~]# rpm -qa | grep ovirt

ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
ovirt-host-deploy-1.7.3-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
ovirt-provider-ovn-driver-1.2.10-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-release42-4.2.3.1-1.el7.noarch
ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
ovirt-hosted-engine-setup-2.2.20-1.el7.centos.noarch
ovirt-host-dependencies-4.2.2-2.el7.centos.x86_64
ovirt-hosted-engine-ha-2.2.11-1.el7.centos.noarch
ovirt-host-4.2.2-2.el7.centos.x86_64
ovirt-node-ng-image-update-4.2.4-1.el7.noarch
ovirt-vmconsole-1.0.5-4.el7.centos.noarch
ovirt-release-host-node-4.2.3.1-1.el7.noarch
cockpit-ovirt-dashboard-0.11.24-1.el7.centos.noarch
ovirt-node-ng-nodectl-4.2.0-0.20180524.0.el7.noarch
python-ovirt-engine-sdk4-4.2.6-2.el7.centos.x86_64

[root@node6-g8-h4 ~]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, 
package_upload, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use 
subscriptio

[ovirt-users] oVirt Node Resize tool for local storage

2018-03-27 Thread Matt Simonsen

Hello,

We have a development box with local storage, running oVirt Node 4.1.

It appears that using the admin interface on port 9090 I can resize a 
live partition to a smaller size.


Our storage is a separate LVM partition, formatted ext4.

My question, both theoretically and practically, is whether anyone has 
feedback on the following (a rough sketch of the offline procedure follows 
the questions):



#1: Does this work (i.e., will it shrink the filesystem and then shrink the LV)?

#2: May we do this with VMs running?
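
For reference, my understanding is that ext4 cannot be shrunk while mounted, so
any shrink would have to be done offline and would look roughly like this
(names and sizes are placeholders; take a backup first):

# umount /var/local/images
# e2fsck -f /dev/VG/images_lv
# lvresize --resizefs -L 500G /dev/VG/images_lv      (shrinks the fs, then the LV)
# mount /var/local/images

That implies any VMs whose disks live on that storage would need to be shut
down for the duration.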


Thanks

Matt

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Node Next Install Problem

2018-03-08 Thread Matt Simonsen

Doh! Problem solved. Well at least I found it on my own...

The date on the server was wrong, so certificate checks were silently failing.
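
For the archives, a hedged sketch of the fix (assuming chrony, which oVirt Node
ships with):

# timedatectl                        (shows the bad clock)
# systemctl enable --now chronyd
# chronyc makestep                   (force an immediate correction)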

Matt




On 03/08/2018 04:16 PM, Matt Simonsen wrote:


I installed based on an older Node Next DVD (4.1.7) that has worked in 
the past and it doesn't appear to be working when I add it to a cluster.


The installer says it cannot queue package iproute.

Is there a repo down or that has changed? Thanks for any suggestions.

It appears yum is also broken:

yum update
Loaded plugins: fastestmirror, imgbased-persist, package_upload, 
product-id,

  : search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use 
subscription-manager to register.
centos-opstools-release | 2.9 kB 
00:00
ovirt-4.1   | 3.0 kB 
00:00
ovirt-4.1-centos-gluster38  | 2.9 kB 
00:00



 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point 
the only

 safe thing yum can do is fail. There are a few ways to work "fix" this:

 1. Contact the upstream for the repository and get them to fix 
the problem.


 2. Reconfigure the baseurl/etc. for the repository, to point to a 
working

    upstream. This is most often useful if you are using a newer
    distribution release than is supported by the repository (and the
    packages for the previous distribution release still work).

 3. Run the command with the repository temporarily disabled
    yum --disablerepo= ...

 4. Disable the repository permanently, so yum won't use it by 
default. Yum
    will then just ignore the repository until you permanently 
enable it

    again or use --enablerepo for temporary usage:

    yum-config-manager --disable 
    or
    subscription-manager repos --disable=

 5. Configure the failing repository to be skipped, if it is 
unavailable.
    Note that yum will try to contact the repo. when it runs most 
commands,
    so will have to try and fail each time (and thus. yum will be 
be much
    slower). If it is a very temporary problem though, this is 
often a nice

    compromise:

    yum-config-manager --save 
--setopt=.skip_if_unavailable=true


Cannot retrieve metalink for repository: ovirt-4.1-epel/x86_64. Please 
verify its path and try again




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Node Next Install Problem

2018-03-08 Thread Matt Simonsen
I installed based on an older Node Next DVD (4.1.7) that has worked in 
the past and it doesn't appear to be working when I add it to a cluster.


The installer says it cannot queue package iproute.

Is there a repo down or that has changed? Thanks for any suggestions.

It appears yum is also broken:

yum update
Loaded plugins: fastestmirror, imgbased-persist, package_upload, product-id,
  : search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use 
subscription-manager to register.

centos-opstools-release | 2.9 kB 00:00
ovirt-4.1   | 3.0 kB 00:00
ovirt-4.1-centos-gluster38  | 2.9 kB 00:00


 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the 
only

 safe thing yum can do is fail. There are a few ways to work "fix" this:

 1. Contact the upstream for the repository and get them to fix the 
problem.


 2. Reconfigure the baseurl/etc. for the repository, to point to a 
working

    upstream. This is most often useful if you are using a newer
    distribution release than is supported by the repository (and the
    packages for the previous distribution release still work).

 3. Run the command with the repository temporarily disabled
    yum --disablerepo= ...

 4. Disable the repository permanently, so yum won't use it by 
default. Yum
    will then just ignore the repository until you permanently 
enable it

    again or use --enablerepo for temporary usage:

    yum-config-manager --disable 
    or
    subscription-manager repos --disable=

 5. Configure the failing repository to be skipped, if it is 
unavailable.
    Note that yum will try to contact the repo. when it runs most 
commands,
    so will have to try and fail each time (and thus. yum will be 
be much
    slower). If it is a very temporary problem though, this is 
often a nice

    compromise:

    yum-config-manager --save 
--setopt=.skip_if_unavailable=true


Cannot retrieve metalink for repository: ovirt-4.1-epel/x86_64. Please 
verify its path and try again

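For reference, the failing repo in the output above is ovirt-4.1-epel, so the 
workarounds yum suggests translate to something like the following (only a 
sketch, assuming the EPEL metalink outage is temporary and the other repos 
are healthy):

    # run the update once with only the failing repo disabled
    yum --disablerepo=ovirt-4.1-epel update

    # or mark the repo skippable so an unreachable mirror no longer aborts yum
    yum-config-manager --save --setopt=ovirt-4.1-epel.skip_if_unavailable=true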

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Partition Trouble on oVirt Node

2018-02-15 Thread Matt Simonsen

Hello all,

This may not be oVirt-specific (but it may be), so thank you in advance 
for any assistance.


I have a system installed with oVirt Node Next 4.1.9 that was installed 
to /dev/sda


I had a separate RAID volume, /dev/sdb, that should not have been used, 
but now that the operating system is loaded I'm struggling to get the 
device partitioned.


I've tried mkfs.ext4 on the device and also pvcreate, with the errors 
below. I've also rebooted a couple of times and tried to disable 
multipathd. Is multipathd even safe to disable on Node Next?


Below are the errors I've received, and thank you again for any tips.


[root@node1-g6-h3 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
/dev/sdb is apparently in use by the system; will not make a filesystem 
here!

[root@node1-g6-h3 ~]# gdisk
GPT fdisk (gdisk) version 0.8.6

Type device filename, or press <Enter> to exit: /dev/sdb
Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!

Caution! After loading partitions, the CRC doesn't check out!
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: damaged

Found invalid MBR and corrupt GPT. What do you want to do? (Using the
GPT MAY permit recovery of GPT data.)
 1 - Use current GPT
 2 - Create blank GPT

Your answer: 2

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-16952264590, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-16952264590, default = 16952264590) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.
[root@node1-g6-h3 ~]# pvcreate /dev/sdb1
  Device /dev/sdb1 not found (or ignored by filtering).
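
For anyone hitting the same pair of errors: the "apparently in use by the 
system" refusal and the "ignored by filtering" message usually come down to 
multipath having claimed the raw disk, plus the node's restrictive LVM 
filter. A rough checking sketch follows; the drop-in file name and the wwid 
placeholder are illustrative, not taken from this host:

    # see whether a multipath map is sitting on top of /dev/sdb
    multipath -ll
    lsblk /dev/sdb

    # if it is claimed, either work through /dev/mapper/<wwid> instead of
    # /dev/sdb, or blacklist the disk locally, e.g. in
    # /etc/multipath/conf.d/local.conf:
    #     blacklist {
    #         wwid "<wwid-of-sdb>"
    #     }
    # then reload multipathd
    systemctl reload multipathd

    # and confirm the filter in /etc/lvm/lvm.conf actually allows the new
    # partition, otherwise pvcreate keeps reporting "ignored by filtering"
    grep -n 'filter' /etc/lvm/lvm.conf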


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move Export Domain across web via NFS verses Rsync Image

2018-01-30 Thread Matt Simonsen

On 01/30/2018 03:43 PM, Christopher Cox wrote:
So, you're saying you export to an Export Domain (NFS), detach, and 
then rsync that somewhere else (a different NFS system) and try to 
attach that as an Export (import) Domain to a different datacenter and 
import? Sounds like it should work to me.





Yea. Exactly as you described below.

If there's any reason this would be a problem, I'd love to hear others 
chime in.


Thanks

Matt

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Move Export Domain across web via NFS verses Rsync Image

2018-01-30 Thread Matt Simonsen

Hello all,

We have several oVirt data centers, mostly using oVirt 4.1.9 and 
NFS-backed storage.


I'm planning to move what will eventually be an exported VM from one 
physical location to another.


Is there any reason it would be problematic to export the image and then 
use rsync to migrate the image directory to a different export domain?
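
For what it's worth, the copy being described would look roughly like the 
sketch below; the paths are placeholders, -S keeps sparse disk images 
sparse, and --numeric-ids preserves the 36:36 (vdsm:kvm) ownership the 
export domain expects:

    # with the export domain detached, copy its whole directory tree
    rsync -aHSv --numeric-ids /exports/export_domain/ \
        remote-host:/exports/export_domain/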


Thanks,

Matt

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1 question, writing files to /root and RPMs

2017-12-18 Thread Matt Simonsen

On 12/15/2017 03:06 AM, Simone Tiraboschi wrote:
On Fri, Dec 15, 2017 at 4:45 AM, Donny Davis <do...@fortnebula.com> wrote:


have you gotten an image update yet?

On Thu, Dec 14, 2017 at 8:08 PM, Matt Simonsen <m...@khoza.com> wrote:

Hello all,

I read at
https://www.ovirt.org/develop/projects/node/troubleshooting/
that "Changes made from the command line are done at your own
risk. Making changes has the potential to leave your system in
an unusable state." It seems clear that RPMs should not be
installed.


That document mainly refers to vintage node.
In Next Generation Node now we have rpm persistence; please check
https://www.ovirt.org/develop/release-management/features/node/node-next-persistence/





I'm sure glad we tested!

On one Node image we had disk images stored locally in /exports and shared 
out via NFS. After an upgrade and reboot, the images are gone.


If we "Convert to local storage" will the data persist?  I am planning 
to test, but want to be sure how this is designed.
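
A rough way to reason about it, as I understand the imgbased layout: data on 
its own logical volume or mount point survives an image upgrade, while data 
written onto the Node image root filesystem may not. A quick check (sketch 
only; /exports is just the example path from above):

    # does /exports have its own mount, or does it fall through to / ?
    findmnt --target /exports

    # which device or LV actually backs each mount point
    lsblk -o NAME,TYPE,SIZE,MOUNTPOINT

    # if the imgbase CLI is available, show the layered images and current layer
    imgbase layout
    imgbase w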


I assume during a Gluster installation something is also updated in 
oVirt Node to allow for the Gluster partition to persist?


At this point I'm thinking I should install manually on CentOS 7 to 
ensure folders and partitions are persistent. Is there any downside to 
installing on top of CentOS 7 instead of using Node?


Thanks
Matt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Node 4.1 question, writing files to /root and RPMs

2017-12-14 Thread Matt Simonsen

Hello all,

I read at https://www.ovirt.org/develop/projects/node/troubleshooting/ 
that "Changes made from the command line are done at your own risk. 
Making changes has the potential to leave your system in an unusable 
state." It seems clear that RPMs should not be installed.


Is this accurate for https://www.ovirt.org/node/ ?

We have added smartctl and hpacucli in order to do disk and RAID 
monitoring. So far our node servers have retained changes across 
reboots, which is the primary reason I'm wondering if perhaps this 
applies to an older version of oVirt Node.


If what we have been doing is not supported, what is the suggested 
method to do hardware monitoring (in particular disks)?
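
For context, the checks in question are nothing exotic; with the packages 
mentioned above installed they boil down to commands like these (device and 
controller selections are illustrative):

    # SMART health verdict and full attribute dump for one disk
    smartctl -H /dev/sda
    smartctl -a /dev/sda

    # HP Smart Array controller and logical drive status via hpacucli
    hpacucli ctrl all show config
    hpacucli ctrl all show status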


Thanks
Matt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Best Practice Question: How many engines, one or more than one, for multiple physical locations

2017-12-08 Thread Matt Simonsen

Hello all,

I read that with Gluster hyper-convergence the engine must reside on the 
same LAN as the nodes. I guess this makes sense by definition, i.e. using 
Gluster storage and replicating Gluster bricks across the web sounds awful.


This got me wondering about best practices for the engine setup. We have 
multiple physical locations (co-location data centers).


In my initial plan I had expected to have my oVirt engine hosted 
separately from each physical location so that in the event of trouble 
at a remote facility the engine would still be usable.


In this case, our prod sites would not have a "hyper-converged" setup if 
we decide to run GlusterFS for storage at any particular physical site, 
but I believe it would still be possible to use Gluster. Each such site 
would run a 3-node cluster using GlusterFS storage, just not 
hyper-converged, since the engine would be in a separate facility.


Is there any downside in this setup to having the engine off-site?

Rather than having an off-site engine, should I consider one engine per 
physical co-location space?


Thank you all for any feedback,

Matt

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users