[ovirt-users] production DC non responsive

2017-08-07 Thread Juan Pablo
Hi guys, good morning/night.
First of all, thanks to all the community and the whole team; you are making
a great effort.

Today I'm having an issue with oVirt 4.1.2 running as hosted engine over
NFS, and data storage over iSCSI.
I found out about the problem when I tried to migrate one VM to another host
and got an error, so I powered off the VMs on that host and started them
on the backup host with no problem. Then I set the first host into
maintenance mode and after some minutes restarted it (as usual). When it
came back online, the whole DC turned RED as unavailable.

can anyone please help me out?

thanks, JP
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi Devin,


Please consider that for the OS I have a RAID 1. Now, let's say I use RAID 5 to 
assemble a single disk on each server. In this case the SSD will not make any 
difference, right? I guess that for it to be usable, the SSD should not be 
part of the RAID 5. In this case I could create a logical volume made of the 
RAIDed brick and then extend it using the SSD, i.e. using gdeploy:


[disktype]

jbod



[pv1]

action=create

devices=sdb, sdc

wipefs=yes

ignore_vg_errors=no


[vg1]

action=create

vgname=gluster_vg_jbod

pvname=sdb

ignore_vg_errors=no


[vg2]

action=extend

vgname=gluster_vg_jbod

pvname=sdc

ignore_vg_errors=no


But will Gluster be able to auto-detect and use this SSD brick for tiering? Do 
I have to do some other configuration? Also, as the VM files (.qcow2) are 
quite big, will I benefit from tiering? Or is this wrong and should my approach 
be different?
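
For reference, if auto-detection is not possible, the manual way to attach a hot 
tier that I found in the Gluster docs looks roughly like this (untested on my side; 
the volume name and brick paths are just placeholders):

gluster volume tier gluster_vol attach replica 3 \
  srv1:/gluster/ssd/brick srv2:/gluster/ssd/brick srv3:/gluster/ssd/brick
gluster volume tier gluster_vol status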


Thanks,

Moacir



From: Devin Acosta 
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for several 
different companies, and have dealt with the Red Hat Support Team in depth 
about optimal configuration in regards to setting up GlusterFS most efficiently 
and I wanted to share with you what I learned.

In general the Red Hat Virtualization team frowns upon using each DISK of the 
system as just a JBOD. Sure, there is some protection by having the data 
replicated; however, the recommendation is to use RAID 6 (preferred) or RAID 5, 
or RAID 1 at the very least.

Here is the direct quote from Red Hat when I asked about RAID and Bricks:

"A typical Gluster configuration would use RAID underneath the bricks. RAID 6 
is most typical as it gives you 2 disk failure protection, but RAID 5 could be 
used too. Once you have the RAIDed bricks, you'd then apply the desired 
replication on top of that. The most popular way of doing this would be 
distributed replicated with 2x replication. In general you'll get better 
performance with larger bricks. 12 drives is often a sweet spot. Another option 
would be to create a separate tier using all SSD’s.”
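
As an illustration of that "distributed replicated with 2x replication" layout, a 
volume built from two RAIDed bricks on each of two hosts could be created roughly 
like this (host names, paths and the volume name are examples of mine, not from 
Red Hat's document):

gluster volume create vmstore replica 2 \
  host1:/gluster/brick1/vmstore host2:/gluster/brick1/vmstore \
  host1:/gluster/brick2/vmstore host2:/gluster/brick2/vmstore
gluster volume info vmstore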

In order to do SSD tiering, from my understanding you would need 1 x NVMe drive in 
each server, or 4 x SSDs for the hot tier (it needs to be distributed, replicated 
for the hot tier if not using NVMe). So with you only having 1 SSD drive in each 
server, I'd suggest maybe looking into the NVMe option.

Since you're using only 3 servers, what I'd probably suggest is to do (2 replicas 
+ arbiter node). This setup actually doesn't require the 3rd server to have big 
drives at all, as it only stores metadata about the files and not a 
full copy.
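
A minimal sketch of that 2 replicas + arbiter layout (volume name and brick paths 
are placeholders; the third brick only ever holds file names and metadata, so a 
small disk is enough there):

gluster volume create vmstore replica 3 arbiter 1 \
  host1:/gluster/brick/vmstore host2:/gluster/brick/vmstore host3:/gluster/arbiter/vmstore
gluster volume start vmstore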

Please see the attached document that was given to me by Red Hat to get more 
information on this. Hope this information helps you.


--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect


On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com) wrote:

I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).


This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD disk? And 
if yes, will Gluster do it by default or do I have to configure it to do so?


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?


Your opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Domain name in use? After failed domain setup?

2017-08-07 Thread Schorschi
Well, not sure this is legit to try, but I did try it... and it seemed to 
work; I was able to recreate the storage domain as needed. Access the 
database, and find the bogus storage domain. I figured out the table 
names from other discussions about accessing the database directly to 
clean up storage domains that are broken or bogus.


# su - postgres
$ psql engine

engine=# select id, storage_name from storage_domain_static;
  id  |  storage_name
--+
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ovirt-image-repository
 3e81b68f-5ddd-49a3-84e5-7209493b490a | Datastore_01
 2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34 | Datastore_02
 66f9539a-d43a-4b0b-a823-c8dafa829804 | Datastore_03
 1878bd36-7bca-4d6b-9d39-fe99ba347115 | Datastore_04
 aa5542bd-b43a-4ae6-a996-208c10842878 | Datastore_05
(6 rows)

Delete the 'Datastore_02' storage domain...  Maybe someone can explain 
why the same id is in two different tables?  That seems odd?
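
For anyone trying the same, it is probably worth confirming that the id really is 
present in both tables before deleting anything, e.g. (run as the postgres user, 
as above):

su - postgres -c "psql engine -c \"select id from storage_domain_dynamic where id='2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34';\""
su - postgres -c "psql engine -c \"select id, storage_name from storage_domain_static where id='2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34';\""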


engine=# delete from storage_domain_dynamic where 
id='2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34';

DELETE 1
engine=# delete from storage_domain_static where 
id='2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34';

DELETE 1

Then I was able to remove the connection...  Still have no idea why I 
could not remove the domain and connection via ovirt shell, as noted 
before in previous mail discussion...


[oVirt shell (connected)]# show storageconnection 
21fbad73-f855-48fc-8949-e5d6b077eb83


id : 21fbad73-f855-48fc-8949-e5d6b077eb83
address: crazy
nfs_version: auto
path   : /storage/nfs/datastore_02
type   : nfs

[oVirt shell (connected)]# remove storageconnection 
21fbad73-f855-48fc-8949-e5d6b077eb83


job-id  : ba9094ee-8d6e-438b-b774-601540320768
status-state: complete

I also had to clean up the file system under the connection... since it 
appears the file system structure was created BEFORE the error that 
kicked this off, that is also a bug, IMHO, since it should have 
cleaned up the files created, but the storage domain was never FULLY 
established.  This is something oVirt has never done... cleaned up well 
after errors... I remember all the way back in version 3.0, if not 
before, how cleanup was never done well.  This is an example, apparently.
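
The file system cleanup itself was nothing fancy, roughly along these lines (the 
temporary mount point is arbitrary, and the directory name should match the UUID 
of the bogus domain, so double-check before removing anything):

mkdir -p /mnt/tmp
mount -t nfs crazy:/storage/nfs/datastore_02 /mnt/tmp
ls -l /mnt/tmp
rm -rf /mnt/tmp/2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34
umount /mnt/tmp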


A suggestion... if the master is not 100% active, and a request to 
create a new domain is queued, does it not make sense that the failure 
should be graceful, cleaning up the database and the file system below the 
storage domain?  It would seem to be a very 'user' friendly thing to do.  I 
mean, this is a way to protect both the environment and the user from a 
typical issue.  The same should be done for attaching a domain, i.e. the 
storage domain and storage connection should be cleaned up if an error results.


If there is still some junk in the database I need to clean up, please let 
me know.


On 08/07/2017 20:01, Schorschi . wrote:

Bit more information...

[oVirt shell (connected)]# list storagedomains

id : 2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34
name   : Datastore_02

This is the new domain that failed, since the master was not 100% 
up/initialized, but does NOT appear in the UI of course.  When I try 
to remove it..


[oVirt shell (connected)]# remove storagedomain Datastore_02
=== 
ERROR 


  status: 400
  reason: Bad Request
  detail:
== 



Not sure why this error results.  So looked for a connection?  Was 
surprised to find one, given nothing in the UI...


]# show storageconnection 21fbad73-f855-48fc-8949-e5d6b077eb83

id : 21fbad73-f855-48fc-8949-e5d6b077eb83
address: crazy
nfs_version: auto
path   : /storage/nfs/datastore_02
type   : nfs

[oVirt shell (connected)]# remove storageconnection 
21fbad73-f855-48fc-8949-e5d6b077eb83
=== 
ERROR 


  status: 409
  reason: Conflict
  detail: Cannot remove Storage Connection. Storage connection 
parameters are used by the following storage domains : Datastore_02.
== 



This is really a nasty catch-22. I can't delete the storage domain 
because there is a storage connection active, but I can't delete the 
connection because the storage domain exists. Any suggestions on how to 
resolve this?


Thanks


On 08/07/2017 19:48, Schorschi . wrote:

Domain name in use?  After failed domain setup?

I attempted to create a new domain, but I did not realize the master 
domain was not 100% initialized.  The new 

Re: [ovirt-users] Domain name in use? After failed domain setup?

2017-08-07 Thread Schorschi .

Bit more information...

[oVirt shell (connected)]# list storagedomains

id : 2b6d2e60-6cfb-45cd-a6e1-8cf38ad2bf34
name   : Datastore_02

This is the new domain that failed, since the master was not 100% 
up/initialized, but does NOT appear in the UI of course.  When I try to 
remove it..


[oVirt shell (connected)]# remove storagedomain Datastore_02
=== 
ERROR 


  status: 400
  reason: Bad Request
  detail:
==

Not sure why this error results.  So looked for a connection?  Was 
surprised to find one, given nothing in the UI...


]# show storageconnection 21fbad73-f855-48fc-8949-e5d6b077eb83

id : 21fbad73-f855-48fc-8949-e5d6b077eb83
address: crazy
nfs_version: auto
path   : /storage/nfs/datastore_02
type   : nfs

[oVirt shell (connected)]# remove storageconnection 
21fbad73-f855-48fc-8949-e5d6b077eb83
=== 
ERROR 


  status: 409
  reason: Conflict
  detail: Cannot remove Storage Connection. Storage connection 
parameters are used by the following storage domains : Datastore_02.

==

This is really a nasty catch-22. I can't delete the storage domain 
because there is a storage connection active, but I can't delete the 
connection because the storage domain exists. Any suggestions on how to 
resolve this?


Thanks


On 08/07/2017 19:48, Schorschi . wrote:

Domain name in use?  After failed domain setup?

I attempted to create a new domain, but I did not realize the master 
domain was not 100% initialized.  The new domain creation failed, but it 
appears the new domain 'name' was used.  Now I cannot create the new 
domain as expected.  I get a UI error that states, "" which can only be 
true if the domain name is in the database, because it is definitely 
not visible in the UI.  This is quite frustrating, because it appears 
the new domain 'creation' logic is broken; if the new domain fails to 
be created, the database should not have a junk domain name, right?  I 
call this an ugly bug.  That said, I really need to remove this junk 
domain name so I can use the correct name as expected.


Thanks.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Domain name in use? After failed domain setup?

2017-08-07 Thread Schorschi .

Domain name in use?  After failed domain setup?

I attempted to create a new domain, but I did not realize the master 
domain was not 100% initialized.  The new domain creation failed, but it 
appears the new domain 'name' was used.  Now I cannot create the new 
domain as expected.  I get a UI error that states, "" which can only be 
true if the domain name is in the database, because it is definitely not 
visible in the UI.  This is quite frustrating, because it appears the new 
domain 'creation' logic is broken; if the new domain fails to be 
created, the database should not have a junk domain name, right?  I call 
this an ugly bug.  That said, I really need to remove this junk domain 
name so I can use the correct name as expected.


Thanks.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
Moacir, I understand that if you do this type of configuration you will be
severely impacted on storage performance, especially for writes. Even if you
have a hardware RAID controller with writeback cache you will have a
significant performance penalty and may not fully use all the resources you
mentioned you have.

Fernando

2017-08-07 10:03 GMT-03:00 Moacir Ferreira :

> Hi Colin,
>
>
> Take a look on Devin's response. Also, read the doc he shared that gives
> some hints on how to deploy Gluster.
>
>
> It is more like that if you want high-performance you should have the
> bricks created as RAID (5 or 6) by the server's disk controller and them
> assemble a JBOD GlusterFS. The attached document is Gluster specific and
> not for oVirt. But at this point I think that having SSD will not be a plus
> as using the RAID controller Gluster will not be aware of the SSD.
> Regarding the OS, my idea is to have a RAID 1, made of 2 low cost HDDs, to
> install it.
>
>
> So far, based on the information received I should create a single RAID 5
> or 6 on each server and then use this disk as a brick to create my Gluster
> cluster, made of 2 replicas + 1 arbiter. What is new for me is the detail
> that the arbiter does not need a lot of space as it only keeps meta data.
>
>
> Thanks for your response!
> Moacir
>
> --
> *From:* Colin Coe 
> *Sent:* Monday, August 7, 2017 12:41 PM
>
> *To:* Moacir Ferreira
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Good practices
>
> Hi
>
> I just thought that you'd do hardware RAID if you had the controller or
> JBOD if you didn't.  In hindsight, a server with 40Gbps NICs is pretty
> likely to have a hardware RAID controller.  I've never done JBOD with
> hardware RAID.  I think having a single gluster brick on hardware JBOD
> would be riskier than multiple bricks, each on a single disk, but thats not
> based on anything other than my prejudices.
>
> I thought gluster tiering was for the most frequently accessed files, in
> which case all the VMs disks would end up in the hot tier.  However, I have
> been wrong before...
>
> I just wanted to know where the OS was going as I didn't see it mentioned
> in the OP.  Normally, I'd have the OS on a RAID1 but in your case thats a
> lot of wasted disk.
>
> Honestly, I think Yaniv's answer was far better than my own and made the
> important point about having an arbiter.
>
> Thanks
>
> On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira <
> moacirferre...@hotmail.com> wrote:
>
>> Hi Colin,
>>
>>
>> I am in Portugal, so sorry for this late response. It is quite confusing
>> for me, please consider:
>>
>>
>> 1 - What if the RAID is done by the server's disk controller, not by
>> software?
>>
>> 2 - For JBOD I am just using gdeploy to deploy it. However, I am not
>> using the oVirt node GUI to do this.
>>
>>
>> 3 - As the VM .qcow2 files are quite big, tiering would only help if
>> made by an intelligent system that uses SSD for chunks of data not for the
>> entire .qcow2 file. But I guess this is a problem everybody else has. So,
>> Do you know how tiering works in Gluster?
>>
>>
>> 4 - I am putting the OS on the first disk. However, would you do
>> differently?
>>
>>
>> Moacir
>>
>> --
>> *From:* Colin Coe 
>> *Sent:* Monday, August 7, 2017 4:48 AM
>> *To:* Moacir Ferreira
>> *Cc:* users@ovirt.org
>> *Subject:* Re: [ovirt-users] Good practices
>>
>> 1) RAID5 may be a performance hit-
>>
>> 2) I'd be inclined to do this as JBOD by creating a distributed disperse
>> volume on each server.  Something like
>>
>> echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
>> $(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e
>> "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)
>>
>> 3) I think the above.
>>
>> 4) Gluster does support tiering, but IIRC you'd need the same number of
>> SSD as spindle drives.  There may be another way to use the SSD as a fast
>> cache.
>>
>> Where are you putting the OS?
>>
>> Hope I understood the question...
>>
>> Thanks
>>
>> On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <
>> moacirferre...@hotmail.com> wrote:
>>
>>> I am willing to assemble a oVirt "pod", made of 3 servers, each with 2
>>> CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use
>>> GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and
>>> a dual 10Gb NIC. So my intention is to create a loop like a server triangle
>>> using the 40Gb NICs for virtualization files (VMs .qcow2) access and to
>>> move VMs around the pod (east /west traffic) while using the 10Gb
>>> interfaces for giving services to the outside world (north/south traffic).
>>>
>>>
>>> This said, my first question is: How should I deploy GlusterFS in such
>>> oVirt scenario? My questions are:
>>>
>>>
>>> 1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and
>>> 

Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi Colin,


Take a look at Devin's response. Also, read the doc he shared, which gives some 
hints on how to deploy Gluster.


It seems that if you want high performance you should have the bricks 
created as RAID (5 or 6) by the server's disk controller and then assemble a 
JBOD GlusterFS. The attached document is Gluster specific and not for oVirt. 
But at this point I think that having the SSD will not be a plus, as when using the 
RAID controller Gluster will not be aware of the SSD. Regarding the OS, my idea is 
to have a RAID 1, made of 2 low cost HDDs, to install it.


So far, based on the information received I should create a single RAID 5 or 6 
on each server and then use this disk as a brick to create my Gluster cluster, 
made of 2 replicas + 1 arbiter. What is new for me is the detail that the 
arbiter does not need a lot of space as it only keeps meta data.


Thanks for your response!

Moacir


From: Colin Coe 
Sent: Monday, August 7, 2017 12:41 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices

Hi

I just thought that you'd do hardware RAID if you had the controller or JBOD if 
you didn't.  In hindsight, a server with 40Gbps NICs is pretty likely to have a 
hardware RAID controller.  I've never done JBOD with hardware RAID.  I think 
having a single gluster brick on hardware JBOD would be riskier than multiple 
bricks, each on a single disk, but that's not based on anything other than my 
prejudices.

I thought gluster tiering was for the most frequently accessed files, in which 
case all the VMs disks would end up in the hot tier.  However, I have been 
wrong before...

I just wanted to know where the OS was going as I didn't see it mentioned in 
the OP.  Normally, I'd have the OS on a RAID1 but in your case that's a lot of 
wasted disk.

Honestly, I think Yaniv's answer was far better than my own and made the 
important point about having an arbiter.

Thanks

On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira 
> wrote:

Hi Colin,


I am in Portugal, so sorry for this late response. It is quite confusing for 
me, please consider:

1 - What if the RAID is done by the server's disk controller, not by software?


2 - For JBOD I am just using gdeploy to deploy it. However, I am not using the 
oVirt node GUI to do this.


3 - As the VM .qcow2 files are quite big, tiering would only help if made by an 
intelligent system that uses the SSD for chunks of data, not for the entire .qcow2 
file. But I guess this is a problem everybody else has. So, do you know how 
tiering works in Gluster?


4 - I am putting the OS on the first disk. However, would you do differently?


Moacir


From: Colin Coe >
Sent: Monday, August 7, 2017 4:48 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices

1) RAID5 may be a performance hit-

2) I'd be inclined to do this as JBOD by creating a distributed disperse volume 
on each server.  Something like

echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
$(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e 
"server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)

3) I think the above.

4) Gluster does support tiering, but IIRC you'd need the same number of SSD as 
spindle drives.  There may be another way to use the SSD as a fast cache.
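
One such option, purely as a sketch and outside of Gluster itself, would be 
lvmcache (dm-cache) on the brick LV; the volume group, LV and device names below 
are made up and I haven't benchmarked this with Gluster bricks:

pvcreate /dev/sdX
vgextend gluster_vg /dev/sdX
lvcreate -L 400G -n brick_cache gluster_vg /dev/sdX
lvcreate -L 4G -n brick_cache_meta gluster_vg /dev/sdX
lvconvert --type cache-pool --poolmetadata gluster_vg/brick_cache_meta gluster_vg/brick_cache
lvconvert --type cache --cachepool gluster_vg/brick_cache gluster_vg/brick_lv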

Where are you putting the OS?

Hope I understood the question...

Thanks

On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira 
> wrote:

I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).


This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD disk? And 
if yes, will Gluster do it by default or do I have to configure it to do so?


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?


Your opinion/feedback will be 

Re: [ovirt-users] NTP

2017-08-07 Thread Scott Worthington
chronyd is the new ntpd.
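
A quick way to check it is present and syncing on the node (assuming an EL7-based 
image):

systemctl status chronyd
chronyc sources -v
chronyc tracking

Extra servers go in /etc/chrony.conf, followed by a systemctl restart chronyd.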

On Aug 7, 2017 7:23 PM, "Moacir Ferreira" 
wrote:

> I found that NTP does not get installed on oVirt node on the latest
> version ovirt-node-ng-installer-ovirt-4.1-2017052309.iso.
>
>
> Also, the installed repositories do not have it. So is this a bug, or is NTP
> not considered appropriate anymore?
>
>
> Thanks.
>
> Moacir
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NTP

2017-08-07 Thread Moacir Ferreira
I found that NTP does not get installed on oVirt node on the latest version 
ovirt-node-ng-installer-ovirt-4.1-2017052309.iso.


Also, the installed repositories do not have it. So is this a bug, or is NTP 
not considered appropriate anymore?


Thanks.

Moacir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Users Digest, Vol 71, Issue 32

2017-08-07 Thread Moacir Ferreira
abled out of the box if you use a hyper-converged setup
> via gdeploy).
> Moacir: Yes! This is another reason to have separate networks for
> north/south and east/west. In that way I can use the standard MTU on
> the 10Gb NICs and jumbo frames on the file/move 40Gb NICs.
>
> Y.
>
>
> You opinion/feedback will be really appreciated!
>
> Moacir
>
>
> ___
> Users mailing list
> Users@ovirt.org <mailto:Users@ovirt.org>
> http://lists.ovirt.org/mailman/listinfo/users
> <http://lists.ovirt.org/mailman/listinfo/users>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


--

Message: 2
Date: Mon, 7 Aug 2017 15:26:03 +0200
From: Erekle Magradze <erekle.magra...@recogizer.de>
To: FERNANDO FREDIANI <fernando.fredi...@upx.com>, users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Message-ID: <aa829d07-fa77-3ed9-2500-e33cc0141...@recogizer.de>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hi Frenando,

Here is my experience: if you consider a particular hard drive as a
brick for a gluster volume and it dies, i.e. it becomes not accessible,
it's a huge hassle to discard that brick and exchange it with another one,
since gluster sometimes tries to access that broken brick and it causes
(at least it caused for me) a big pain. Therefore it's better to have a
RAID as the brick, i.e. have RAID 1 (mirroring) for each brick; in this case
if the disk is down you can easily exchange it and rebuild the RAID
without going offline, i.e. switching off the volume, doing brick
manipulations and switching it back on.
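
For comparison, replacing a dead JBOD brick without RAID underneath goes roughly
like this (volume name and brick paths are placeholders; a new empty brick
directory has to be prepared first), and it is more hassle than swapping a disk
behind a RAID controller:

gluster volume replace-brick gluster_vol \
  server1:/bricks/dead/brick server1:/bricks/new/brick commit force
gluster volume heal gluster_vol full
gluster volume heal gluster_vol info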

Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:
>
> For any RAID 5 or 6 configuration I normally follow a simple gold rule
> which gave good results so far:
> - up to 4 disks RAID 5
> - 5 or more disks RAID 6
>
> However I didn't really understand well the recommendation to use any
> RAID with GlusterFS. I always thought that GlusteFS likes to work in
> JBOD mode and control the disks (bricks) directlly so you can create
> whatever distribution rule you wish, and if a single disk fails you
> just replace it and which obviously have the data replicated from
> another. The only downside of using in this way is that the
> replication data will be flow accross all servers but that is not much
> a big issue.
>
> Anyone can elaborate about Using RAID + GlusterFS and JBOD + GlusterFS.
>
> Thanks
> Regards
> Fernando
>
>
> On 07/08/2017 03:46, Devin Acosta wrote:
>>
>> Moacir,
>>
>> I have recently installed multiple Red Hat Virtualization hosts for
>> several different companies, and have dealt with the Red Hat Support
>> Team in depth about optimal configuration in regards to setting up
>> GlusterFS most efficiently and I wanted to share with you what I learned.
>>
>> In general Red Hat Virtualization team frowns upon using each DISK of
>> the system as just a JBOD, sure there is some protection by having
>> the data replicated, however, the recommendation is to use RAID 6
>> (preferred) or RAID-5, or at least RAID-1 at the very least.
>>
>> Here is the direct quote from Red Hat when I asked about RAID and Bricks:
>>
>> "A typical Gluster configuration would use RAID underneath the
>> bricks. RAID 6 is most typical as it gives you 2 disk failure
>> protection, but RAID 5 could be used too. Once you have the RAIDed
>> bricks, you'd then apply the desired replication on top of that. The
>> most popular way of doing this would be distributed replicated with
>> 2x replication. In general you'll get better performance with larger
>> bricks. 12 drives is often a sweet spot. Another option would be to
>> create a separate tier using all SSD's."
>>
>> In order to do SSD tiering, from my understanding you would need 1 x
>> NVMe drive in each server, or 4 x SSDs for the hot tier (it needs to be
>> distributed, replicated for the hot tier if not using NVMe). So with
>> you only having 1 SSD drive in each server, I'd suggest maybe looking
>> into the NVMe option.
>>
>> Since you're using only 3-servers, what I'd probably suggest is to do
>> (2 Replicas + Arbiter Node), this setup actually doesn't require the
>> 3rd server to have big drives at all as it only stores meta-data
>> about the files and not actually a full copy.
>>
>> Please see the 

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-07 Thread Moacir Ferreira
Fabrice,


If you choose to have jumbo frames all over, then when the traffic goes outside 
of your "jumbo frames" enabled network it will need to be fragmented back down 
to the destination MTU. Most datacenters will provide services to the outside 
world where the MTU is 1500 bytes. In this case you will slow down your 
performance because your router will be doing the fragmentation. So I would 
always use jumbo frames in the datacenter for east/west traffic and standard 
(1500 bytes) for north/south traffic.
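
As far as I know, in oVirt this just means setting the MTU per logical network in
the engine (VDSM then applies it to the bridge and the underlying NICs); at the
host level it boils down to something like this (interface names are examples):

ip link set dev ens2f0 mtu 9000   # 40Gb east/west interface
ip link set dev ens1f0 mtu 1500   # 10Gb north/south interface
ip link show dev ens2f0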


Moacir

--

Message: 1
Date: Mon, 7 Aug 2017 21:50:36 +0200
From: Fabrice Bacchella <fabrice.bacche...@orange.fr>
To: FERNANDO FREDIANI <fernando.fredi...@upx.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Message-ID: <4365e3f7-4c77-4ff5-8401-1cda2f002...@orange.fr>
Content-Type: text/plain; charset="windows-1252"

>> Moacir: Yes! This is another reason to have separate networks for 
>> north/south and east/west. In that way I can use the standard MTU on the 
>> 10Gb NICs and jumbo frames on the file/move 40Gb NICs.

Why not jumbo frames everywhere?

--

Message: 2
Date: Mon, 7 Aug 2017 16:52:40 -0300
From: FERNANDO FREDIANI <fernando.fredi...@upx.com>
To: Fabrice Bacchella <fabrice.bacche...@orange.fr>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Message-ID: <40d044ae-a41d-082e-131a-bf5fb5503...@upx.com>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

What you mentioned is a specific case and not a generic situation. The
main point there is that RAID 5 or 6 impacts write performance compared
to when you write to only 2 given disks at a time. That was the comparison
made.

Fernando


On 07/08/2017 16:49, Fabrice Bacchella wrote:
>
>> Le 7 août 2017 à 17:41, FERNANDO FREDIANI <fernando.fredi...@upx.com
>> <mailto:fernando.fredi...@upx.com>> a écrit :
>>
>
>> Yet another downside of having a RAID (specially RAID 5 or 6) is that
>> it reduces considerably the write speeds as each group of disks will
>> end up having the write speed of a single disk as all other disks of
>> that group have to wait for each other to write as well.
>>
>
> That's not true if you have medium to high range hardware raid. For
> example, HP Smart Array come with a flash cache of about 1 or 2 Gb
> that hides that from the OS.


--

Message: 3
Date: Mon, 7 Aug 2017 22:05:19 +0200
From: Erekle Magradze <erekle.magra...@recogizer.de>
To: FERNANDO FREDIANI <fernando.fredi...@upx.com>, users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Message-ID: <bac362c7-daba-918c-f728-13e1a74d6...@recogizer.de>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hi Franando,

So let's go with the following scenarios:

1. Let's say you have two servers (replication factor is 2), i.e. two
bricks per volume, in this case it is strongly recommended to have the
arbiter node, the metadata storage that will guarantee avoiding the
split brain situation, in this case for arbiter you don't even need a
disk with lots of space, it's enough to have a tiny ssd but hosted on a
separate server. Advantage of such setup is that you don't need the RAID
1 for each brick, you have the metadata information stored in arbiter
node and brick replacement is easy.

2. If you have odd number of bricks (let's say 3, i.e. replication
factor is 3) in your volume and you didn't create the arbiter node as
well as you didn't configure the quorum, in this case the entire load
for keeping the consistency of the volume resides on all 3 servers, each
of them is important and each brick contains key information, they need
to cross-check each other (that's what people usually do with the first
try of gluster :) ); in this case replacing a brick is a big pain and in
this case RAID 1 is a good option to have (that's the disadvantage, i.e.
losing the space and not having the JBOD option); the advantage is that you
don't have to have an additional arbiter node.

3. You have odd number of bricks and configured arbiter node, in this
case you can easily go with JBOD, however a good practice would be to
have a RAID 1 for the arbiter disks (tiny 128GB SSDs are perfectly
sufficient for volumes with 10s of TBs in size).

That's basically it
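
For completeness, the quorum mentioned in scenario 2 is just a pair of volume
options, e.g. (volume name is a placeholder):

gluster volume set gluster_vol cluster.quorum-type auto
gluster volume set gluster_vol cluster.server-quorum-type server
gluster volume get gluster_vol all | grep quorum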

The rest about the reliability and setup scenarios you can find in
gluster documentation, especially l

Re: [ovirt-users] Good practices

2017-08-07 Thread Erekle Magradze

Hi Fernando,

Indeed, having an arbiter node is always a good idea, and it saves 
a lot of costs.


Good luck with your setup.

Cheers

Erekle


On 07.08.2017 23:03, FERNANDO FREDIANI wrote:


Thanks for the detailed answer Erekle.

I conclude that it is worth it in any scenario to have an arbiter node in 
order to avoid wasting more disk space on RAID X + Gluster replication 
on top of it. The cost seems much lower if you consider the running 
costs of the whole storage and compare it with the cost to build the 
arbiter node. Even having a fully redundant arbiter service with 2 
nodes would make it worth it on a larger deployment.


Regards
Fernando

On 07/08/2017 17:07, Erekle Magradze wrote:


Hi Fernando (sorry for misspelling your name, I used a different 
keyboard),


So let's go with the following scenarios:

1. Let's say you have two servers (replication factor is 2), i.e. two 
bricks per volume, in this case it is strongly recommended to have 
the arbiter node, the metadata storage that will guarantee avoiding 
the split brain situation, in this case for arbiter you don't even 
need a disk with lots of space, it's enough to have a tiny ssd but 
hosted on a separate server. Advantage of such setup is that you 
don't need the RAID 1 for each brick, you have the metadata 
information stored in arbiter node and brick replacement is easy.


2. If you have odd number of bricks (let's say 3, i.e. replication 
factor is 3) in your volume and you didn't create the arbiter node as 
well as you didn't configure the quorum, in this case the entire load 
for keeping the consistency of the volume resides on all 3 servers, 
each of them is important and each brick contains key information, 
they need to cross-check each other (that's what people usually do 
with the first try of gluster :) ), in this case replacing a brick is 
a big pain and in this case RAID 1 is a good option to have (that's 
the disadvantage, i.e. loosing the space and not having the JBOD 
option) advantage is that you don't have the to have additional 
arbiter node.


3. You have odd number of bricks and configured arbiter node, in this 
case you can easily go with JBOD, however a good practice would be to 
have a RAID 1 for arbiter disks (tiny 128GB SSD-s ar perfectly 
sufficient for volumes with 10s of TB-s in size.)


That's basically it

The rest about the reliability and setup scenarios you can find in 
gluster documentation, especially look for quorum and arbiter node 
configs+options.


Cheers

Erekle

P.S. What I was mentioning, regarding a good practice is mostly 
related to the operations of gluster not installation or deployment, 
i.e. not the conceptual understanding of gluster (conceptually it's a 
JBOD system).


On 08/07/2017 05:41 PM, FERNANDO FREDIANI wrote:


Thanks for the clarification Erekle.

However I get surprised with this way of operating from GlusterFS as 
it adds another layer of complexity to the system (either a hardware 
or software RAID) before the gluster config and increase the 
system's overall costs.


An important point to consider is: In RAID configuration you already 
have space 'wasted' in order to build redundancy (either RAID 1, 5, 
or 6). Then when you have GlusterFS on the top of several RAIDs you 
have again more data replicated so you end up with the same data 
consuming more space in a group of disks and again on the top of 
several RAIDs depending on the Gluster configuration you have (in a 
RAID 1 config the same data is replicated 4 times).


Yet another downside of having a RAID (specially RAID 5 or 6) is 
that it reduces considerably the write speeds as each group of disks 
will end up having the write speed of a single disk as all other 
disks of that group have to wait for each other to write as well.


Therefore if Gluster already replicates data why does it create this 
big pain you mentioned if the data is replicated somewhere else, can 
still be retrieved to both serve clients and reconstruct the 
equivalent disk when it is replaced ?


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Frenando,

Here is my experience, if you consider a particular hard drive as a 
brick for gluster volume and it dies, i.e. it becomes not 
accessible it's a huge hassle to discard that brick and exchange 
with another one, since gluster some tries to access that broken 
brick and it's causing (at least it cause for me) a big pain, 
therefore it's better to have a RAID as brick, i.e. have RAID 1 
(mirroring) for each brick, in this case if the disk is down you 
can easily exchange it and rebuild the RAID without going offline, 
i.e switching off the volume doing brick manipulations and 
switching it back on.


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple gold 
rule which gave good results so far:

- up to 4 disks RAID 5
- 5 or more disks RAID 6

However I didn't really understand well the recommendation to use 
any RAID 

Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-08-07 Thread FERNANDO FREDIANI

Hello.

Although I didn't get any feedback on this topic anymore, I just wanted to 
let people know that since I moved the VM to another oVirt cluster 
running oVirt-Node-NG and kernel 3.10 the problem stopped happening. 
Although I still don't know the cause of it, I suspect it may have to do 
with the kernel the other host (hypervisor) is running (4.12), as that 
is the only one running this kernel, for a specific reason.
To support this suspicion, in the past I had another hypervisor also 
running kernel 4.12 and a VM that does the same job had the same issue. 
After I rebooted the hypervisor back to the default kernel (3.10) the 
problem didn't happen anymore.


If anyone ever faces this or anything similar please let me know as I am 
always interested to find out the root of this issue.


Regards
Fernando


On 28/07/2017 15:01, FERNANDO FREDIANI wrote:


Hello Edwardh and all.

I keep getting these disconnects; were you able to find anything 
to suggest changing?


As I mentioned, this machine, unlike the others where it never 
happened, uses the ovirtmgmt network as a VM network and has kernel 4.12 
instead of the default 3.10 from CentOS 7.3. It seems to be a particular 
situation that is triggering this behavior, but I could not gather any 
hints yet.


I have tried to run a regular arping to force the bridge to always learn 
the VM's MAC address, but it doesn't seem to work and every once in a while 
the bridge 'forgets' that particular VM MAC address.
I have also even rebuilt the VM completely, changing its operating 
system from Ubuntu 16.04 to CentOS 7.3, and the same problem happened.
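
For reference, what I was running regularly from inside the guest was more or 
less this (address and interface are examples):

arping -U -I eth0 -c 3 192.168.0.10

i.e. gratuitous ARPs announcing the VM's own address, yet the bridge still drops 
the entry after a while.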


Fernando


On 24/07/2017 18:20, FERNANDO FREDIANI wrote:


Hello Edward, this happened again today and I was able to check more 
details.


So:

- The VM stopped passing any network traffic.
- Checking 'brctl showmacs ovirtmgmt' it showed the VM's mac address 
missing.
- I then went to oVirt Engine, under VM's 'Network Interfaces' tab, 
clicked Edit and changed the Link State to Down then to Up and it 
recovered its connectivity.
- Another 'brctl showmacs ovirtmgmt' showed the VM's mac address 
learned again by the bridge.


This Node server has the particularity of sharing the ovirtmgmt with 
VMs. Could it possibly be the cause of the issue in any way ?


Thanks
Fernando


On 24/07/2017 09:47, FERNANDO FREDIANI wrote:


Not tried this yet Edwardh, but will do next time it happens. The 
source MAC address should be the same MAC as the VM. I don't see any 
reason for it to change from within the VM or outside.


What type of things would make the bridge stop learning a given VM 
mac address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
> wrote:


Has anyone had problem when using the ovirtmgmt bridge to
connect VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further
I see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I ran brctl showmacs ovirtmgmt and
it showed me the VM's MAC address with ageing timer 200.19.
After the VM reboot I see the same MAC with ageing timer 0.00.
I don't see it in another environment where the ovirtmgmt is
not used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users











___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI

Thanks for the detailed answer Erekle.

I conclude that it is worth it in any scenario to have an arbiter node in 
order to avoid wasting more disk space on RAID X + Gluster replication 
on top of it. The cost seems much lower if you consider the running 
costs of the whole storage and compare it with the cost to build the 
arbiter node. Even having a fully redundant arbiter service with 2 nodes 
would make it worth it on a larger deployment.


Regards
Fernando

On 07/08/2017 17:07, Erekle Magradze wrote:


Hi Fernando (sorry for misspelling your name, I used a different 
keyboard),


So let's go with the following scenarios:

1. Let's say you have two servers (replication factor is 2), i.e. two 
bricks per volume, in this case it is strongly recommended to have the 
arbiter node, the metadata storage that will guarantee avoiding the 
split brain situation, in this case for arbiter you don't even need a 
disk with lots of space, it's enough to have a tiny ssd but hosted on 
a separate server. Advantage of such setup is that you don't need the 
RAID 1 for each brick, you have the metadata information stored in 
arbiter node and brick replacement is easy.


2. If you have odd number of bricks (let's say 3, i.e. replication 
factor is 3) in your volume and you didn't create the arbiter node as 
well as you didn't configure the quorum, in this case the entire load 
for keeping the consistency of the volume resides on all 3 servers, 
each of them is important and each brick contains key information, 
they need to cross-check each other (that's what people usually do 
with the first try of gluster :) ), in this case replacing a brick is 
a big pain and in this case RAID 1 is a good option to have (that's 
the disadvantage, i.e. loosing the space and not having the JBOD 
option) advantage is that you don't have the to have additional 
arbiter node.


3. You have odd number of bricks and configured arbiter node, in this 
case you can easily go with JBOD, however a good practice would be to 
have a RAID 1 for arbiter disks (tiny 128GB SSD-s ar perfectly 
sufficient for volumes with 10s of TB-s in size.)


That's basically it

The rest about the reliability and setup scenarios you can find in 
gluster documentation, especially look for quorum and arbiter node 
configs+options.


Cheers

Erekle

P.S. What I was mentioning, regarding a good practice is mostly 
related to the operations of gluster not installation or deployment, 
i.e. not the conceptual understanding of gluster (conceptually it's a 
JBOD system).


On 08/07/2017 05:41 PM, FERNANDO FREDIANI wrote:


Thanks for the clarification Erekle.

However I get surprised with this way of operating from GlusterFS as 
it adds another layer of complexity to the system (either a hardware 
or software RAID) before the gluster config and increase the system's 
overall costs.


An important point to consider is: In RAID configuration you already 
have space 'wasted' in order to build redundancy (either RAID 1, 5, 
or 6). Then when you have GlusterFS on the top of several RAIDs you 
have again more data replicated so you end up with the same data 
consuming more space in a group of disks and again on the top of 
several RAIDs depending on the Gluster configuration you have (in a 
RAID 1 config the same data is replicated 4 times).


Yet another downside of having a RAID (specially RAID 5 or 6) is that 
it reduces considerably the write speeds as each group of disks will 
end up having the write speed of a single disk as all other disks of 
that group have to wait for each other to write as well.


Therefore if Gluster already replicates data why does it create this 
big pain you mentioned if the data is replicated somewhere else, can 
still be retrieved to both serve clients and reconstruct the 
equivalent disk when it is replaced ?


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Frenando,

Here is my experience, if you consider a particular hard drive as a 
brick for gluster volume and it dies, i.e. it becomes not accessible 
it's a huge hassle to discard that brick and exchange with another 
one, since gluster some tries to access that broken brick and it's 
causing (at least it cause for me) a big pain, therefore it's better 
to have a RAID as brick, i.e. have RAID 1 (mirroring) for each 
brick, in this case if the disk is down you can easily exchange it 
and rebuild the RAID without going offline, i.e switching off the 
volume doing brick manipulations and switching it back on.


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple gold 
rule which gave good results so far:

- up to 4 disks RAID 5
- 5 or more disks RAID 6

However I didn't really understand well the recommendation to use 
any RAID with GlusterFS. I always thought that GlusteFS likes to 
work in JBOD mode and control the disks (bricks) directlly so you 
can create whatever distribution rule you wish, and if a single 

Re: [ovirt-users] How to shutdown an oVirt cluster with Gluster and hosted engine

2017-08-07 Thread Erekle Magradze

Hi Moacir,

First switch off all VMs.

Second, you need to put the hosts into maintenance mode, don't start with SRM 
(of course, if you are able, use the ovirt-engine); it will ask you to 
shut down glusterfs on the machine.


Third, if all machines are in maintenance mode, you can start shutting 
them down.



If you have hosted engine setup follow this [1]
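
Condensed, for a hosted-engine setup the order in [1] boils down to roughly this 
(a sketch only, check the link for the full procedure):

# after all regular VMs are down, on one of the hosts:
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
# then put each host into maintenance, stop gluster, and power the hosts off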


Cheers

Erekle


[1] 
https://github.com/rharmonson/richtech/wiki/OSVDC-Series:-oVirt-3.6-Cluster-Shutdown-and-Startup



On 08/07/2017 08:58 PM, Moacir Ferreira wrote:


I have installed an oVirt cluster in a KVM virtualized test 
environment. Now, how do I properly shut down the oVirt cluster, with 
Gluster and the hosted engine?


I.e.: I want to install a cluster of 3 servers and then send it to a 
remote office. How do I do it properly? I noticed that glusterd is not 
enabled to start automatically. And how do I deal with the hosted engine?



Thanks,

Moacir



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Recogizer Group GmbH

Dr.rer.nat. Erekle Magradze
Lead Big Data Engineering & DevOps
Rheinwerkallee 2, 53227 Bonn
Tel: +49 228 29974555

E-Mail erekle.magra...@recogizer.de
Web: www.recogizer.com
 
Recogizer on LinkedIn https://www.linkedin.com/company-beta/10039182/

Follow us on Twitter https://twitter.com/recogizer
 
-

Recogizer Group GmbH
Managing Directors: Oliver Habisch, Carsten Kreutze
Commercial Register: Amtsgericht Bonn HRB 20724
Registered office: Bonn; VAT ID no.: DE294195993
 
This e-mail contains confidential and/or legally protected information.

If you are not the intended recipient or have received this e-mail in error,
please inform the sender immediately and delete this e-mail.
Unauthorized copying as well as unauthorized disclosure of this e-mail and the
information contained therein is not permitted.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Erekle Magradze

Hi Fernando (sorry for misspelling your name, I used a different keyboard),

So let's go with the following scenarios:

1. Let's say you have two servers (replication factor is 2), i.e. two 
bricks per volume, in this case it is strongly recommended to have the 
arbiter node, the metadata storage that will guarantee avoiding the 
split brain situation, in this case for arbiter you don't even need a 
disk with lots of space, it's enough to have a tiny ssd but hosted on a 
separate server. Advantage of such setup is that you don't need the RAID 
1 for each brick, you have the metadata information stored in arbiter 
node and brick replacement is easy.


2. If you have odd number of bricks (let's say 3, i.e. replication 
factor is 3) in your volume and you didn't create the arbiter node as 
well as you didn't configure the quorum, in this case the entire load 
for keeping the consistency of the volume resides on all 3 servers, each 
of them is important and each brick contains key information, they need 
to cross-check each other (that's what people usually do with the first 
try of gluster :) ); in this case replacing a brick is a big pain and in 
this case RAID 1 is a good option to have (that's the disadvantage, i.e. 
losing the space and not having the JBOD option); the advantage is that you 
don't have to have an additional arbiter node.


3. You have odd number of bricks and configured arbiter node, in this 
case you can easily go with JBOD, however a good practice would be to 
have a RAID 1 for the arbiter disks (tiny 128GB SSDs are perfectly 
sufficient for volumes with 10s of TBs in size).


That's basically it

The rest about the reliability and setup scenarios you can find in 
gluster documentation, especially look for quorum and arbiter node 
configs+options.


Cheers

Erekle

P.S. What I was mentioning, regarding a good practice is mostly related 
to the operations of gluster not installation or deployment, i.e. not 
the conceptual understanding of gluster (conceptually it's a JBOD system).


On 08/07/2017 05:41 PM, FERNANDO FREDIANI wrote:


Thanks for the clarification Erekle.

However I get surprised with this way of operating from GlusterFS as 
it adds another layer of complexity to the system (either a hardware 
or software RAID) before the gluster config and increase the system's 
overall costs.


An important point to consider is: In RAID configuration you already 
have space 'wasted' in order to build redundancy (either RAID 1, 5, or 
6). Then when you have GlusterFS on the top of several RAIDs you have 
again more data replicated so you end up with the same data consuming 
more space in a group of disks and again on the top of several RAIDs 
depending on the Gluster configuration you have (in a RAID 1 config 
the same data is replicated 4 times).


Yet another downside of having a RAID (specially RAID 5 or 6) is that 
it reduces considerably the write speeds as each group of disks will 
end up having the write speed of a single disk as all other disks of 
that group have to wait for each other to write as well.


Therefore if Gluster already replicates data why does it create this 
big pain you mentioned if the data is replicated somewhere else, can 
still be retrieved to both serve clients and reconstruct the 
equivalent disk when it is replaced ?


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Frenando,

Here is my experience, if you consider a particular hard drive as a 
brick for gluster volume and it dies, i.e. it becomes not accessible 
it's a huge hassle to discard that brick and exchange with another 
one, since gluster some tries to access that broken brick and it's 
causing (at least it cause for me) a big pain, therefore it's better 
to have a RAID as brick, i.e. have RAID 1 (mirroring) for each brick, 
in this case if the disk is down you can easily exchange it and 
rebuild the RAID without going offline, i.e switching off the volume 
doing brick manipulations and switching it back on.


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple gold 
rule which gave good results so far:

- up to 4 disks RAID 5
- 5 or more disks RAID 6

However I didn't really understand well the recommendation to use 
any RAID with GlusterFS. I always thought that GlusteFS likes to 
work in JBOD mode and control the disks (bricks) directlly so you 
can create whatever distribution rule you wish, and if a single disk 
fails you just replace it and which obviously have the data 
replicated from another. The only downside of using in this way is 
that the replication data will be flow accross all servers but that 
is not much a big issue.


Anyone can elaborate about Using RAID + GlusterFS and JBOD + GlusterFS.

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat 

Re: [ovirt-users] Good practices

2017-08-07 Thread Erekle Magradze

Hi Franando,

So let's go with the following scenarios:

1. Let's say you have two servers (replication factor is 2), i.e. two 
bricks per volume, in this case it is strongly recommended to have the 
arbiter node, the metadata storage that will guarantee avoiding the 
split brain situation, in this case for arbiter you don't even need a 
disk with lots of space, it's enough to have a tiny ssd but hosted on a 
separate server. Advantage of such setup is that you don't need the RAID 
1 for each brick, you have the metadata information stored in arbiter 
node and brick replacement is easy.


2. If you have odd number of bricks (let's say 3, i.e. replication 
factor is 3) in your volume and you didn't create the arbiter node as 
well as you didn't configure the quorum, in this case the entire load 
for keeping the consistency of the volume resides on all 3 servers, each 
of them is important and each brick contains key information, they need 
to cross-check each other (that's what people usually do with the first 
try of gluster :) ); in this case replacing a brick is a big pain and in 
this case RAID 1 is a good option to have (that's the disadvantage, i.e. 
losing the space and not having the JBOD option); the advantage is that you 
don't have to have an additional arbiter node.


3. You have odd number of bricks and configured arbiter node, in this 
case you can easily go with JBOD, however a good practice would be to 
have a RAID 1 for the arbiter disks (tiny 128GB SSDs are perfectly 
sufficient for volumes with 10s of TBs in size).


That's basically it

The rest about the reliability and setup scenarios you can find in 
gluster documentation, especially look for quorum and arbiter node 
configs+options.


Cheers

Erekle

P.S. What I was mentioning, regarding a good practice is mostly related 
to the operations of gluster not installation or deployment, i.e. not 
the conceptual understanding of gluster (conceptually it's a JBOD system).



On 08/07/2017 05:41 PM, FERNANDO FREDIANI wrote:


Thanks for the clarification Erekle.

However I get surprised with this way of operating from GlusterFS as 
it adds another layer of complexity to the system (either a hardware 
or software RAID) before the gluster config and increase the system's 
overall costs.


An important point to consider is: In RAID configuration you already 
have space 'wasted' in order to build redundancy (either RAID 1, 5, or 
6). Then when you have GlusterFS on the top of several RAIDs you have 
again more data replicated so you end up with the same data consuming 
more space in a group of disks and again on the top of several RAIDs 
depending on the Gluster configuration you have (in a RAID 1 config 
the same data is replicated 4 times).


Yet another downside of having RAID (especially RAID 5 or 6) is that 
it considerably reduces write speed: each group of disks ends up with 
roughly the write speed of a single disk, since all the other disks in 
the group have to wait for each other to write as well.
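
As a rough back-of-the-envelope illustration for small random writes 
(standard write-penalty figures, ignoring any controller write-back 
cache; the per-disk IOPS number is only an example):

  RAID 1 : 2 disk I/Os per small random write
  RAID 5 : 4 disk I/Os (read data, read parity, write data, write parity)
  RAID 6 : 6 disk I/Os
  e.g. 12 x 10K HDDs at ~150 IOPS each:
  RAID 6 ceiling ~ 12*150/6 = 300 random-write IOPS, vs ~ 12*150/2 = 900 for RAID 10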


Therefore, if Gluster already replicates data, why does it create the 
big pain you mentioned? If the data is replicated somewhere else it 
can still be retrieved, both to serve clients and to reconstruct the 
equivalent of the failed disk when it is replaced.
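
For what it's worth, the Gluster-level operation looks roughly like 
this (volume name and brick paths are illustrative):

# point the volume at a fresh brick in place of the dead one
gluster volume replace-brick vmstore \
  srv2:/bricks/failed/brick srv2:/bricks/new/brick commit force

# then let self-heal copy the data back from the surviving replicas
gluster volume heal vmstore full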


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Fernando,

Here is my experience: if you use a particular hard drive as a brick 
for a gluster volume and it dies, i.e. it becomes inaccessible, it is 
a huge hassle to discard that brick and exchange it with another one, 
since gluster sometimes keeps trying to access the broken brick and 
that causes (at least it caused for me) a big pain. Therefore it is 
better to have a RAID as the brick, i.e. RAID 1 (mirroring) for each 
brick; in that case, if a disk dies you can easily exchange it and 
rebuild the RAID without going offline, i.e. without switching off the 
volume, doing brick manipulations, and switching it back on.
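
If the per-brick RAID 1 is Linux software RAID, the swap is along 
these lines (device names are examples; hardware controllers have 
their own tools):

mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# physically swap the disk, partition it the same way, then:
mdadm --manage /dev/md0 --add /dev/sdc1
cat /proc/mdstat   # watch the rebuild while the brick stays online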


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple golden 
rule which has given good results so far:

- up to 4 disks: RAID 5
- 5 or more disks: RAID 6

However, I didn't really understand the recommendation to use any RAID 
with GlusterFS. I always thought that GlusterFS likes to work in JBOD 
mode and control the disks (bricks) directly, so you can create 
whatever distribution rule you wish, and if a single disk fails you 
just replace it, since the data is replicated from another server 
anyway. The only downside of this approach is that replication traffic 
flows across all servers, but that is not much of an issue.


Can anyone elaborate on using RAID + GlusterFS versus JBOD + GlusterFS?

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat 
Support Team in depth about optimal configuration in regards 

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
What you mentioned is a specific case, not the general situation. The 
main point there is that RAID 5 or 6 impacts write performance compared 
to writing to only 2 given disks at a time. That was the comparison 
being made.


Fernando


On 07/08/2017 16:49, Fabrice Bacchella wrote:


On 7 Aug 2017 at 17:41, FERNANDO FREDIANI wrote:




Yet another downside of having a RAID (specially RAID 5 or 6) is that 
it reduces considerably the write speeds as each group of disks will 
end up having the write speed of a single disk as all other disks of 
that group have to wait for each other to write as well.




That's not true if you have a medium- to high-end hardware RAID 
controller. For example, HP Smart Array controllers come with a flash 
cache of about 1 or 2 GB that hides that from the OS. 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Fabrice Bacchella
>> Moacir: Yes! This is another reason to have separate networks for 
>> north/south and east/west. In that way I can use the standard MTU on the 
>> 10Gb NICs and jumbo frames on the file/move 40Gb NICs.

Why not jumbo frames everywhere?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Fabrice Bacchella

> Le 7 août 2017 à 17:41, FERNANDO FREDIANI  a écrit 
> :
> 

> Yet another downside of having a RAID (specially RAID 5 or 6) is that it 
> reduces considerably the write speeds as each group of disks will end up 
> having the write speed of a single disk as all other disks of that group have 
> to wait for each other to write as well.
> 

That's not true if you have a medium- to high-end hardware RAID controller. For 
example, HP Smart Array controllers come with a flash cache of about 1 or 2 GB 
that hides that from the OS.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi Colin,


I am in Portugal, so sorry for this late response. It is quite confusing for 
me, please consider:

1 - What if the RAID is done by the server's disk controller, not by software?


2 - For JBOD I am just using gdeploy to deploy it. However, I am not using the 
oVirt node GUI to do this.


3 - As the VM .qcow2 files are quite big, tiering would only help if done by an 
intelligent system that uses the SSD for chunks of data, not for the entire .qcow2 
file. But I guess this is a problem everybody else has. So, do you know how 
tiering works in Gluster?


4 - I am putting the OS on the first disk. However, would you do it differently?


Moacir


From: Colin Coe 
Sent: Monday, August 7, 2017 4:48 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices

1) RAID5 may be a performance hit-

2) I'd be inclined to do this as JBOD by creating a distributed disperse volume 
on each server.  Something like

echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
$(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e 
"server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)

3) I think the above.

4) Gluster does support tiering, but IIRC you'd need the same number of SSD as 
spindle drives.  There may be another way to use the SSD as a fast cache.

Where are you putting the OS?

Hope I understood the question...

Thanks

On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira 
> wrote:

I am willing to assemble a oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).


This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like the one I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD disk? And if 
yes, will Gluster do it by default or do I have to configure it to do so?


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?


You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to shutdown an oVirt cluster with Gluster and hosted engine

2017-08-07 Thread Moacir Ferreira
I have installed an oVirt cluster in a KVM virtualized test environment. Now, 
how do I properly shut down the oVirt cluster, with Gluster and the hosted 
engine?

I.e.: I want to install a cluster of 3 servers and then send it to a remote 
office. How do I do it properly? I noticed that glusterd is not enabled to 
start automatically. And how do I deal with the hosted engine?


Thanks,

Moacir
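
A rough sequence that should work (standard hosted-engine and systemd 
commands; the order is not taken from any official procedure, so 
double-check it against the docs):

# shut down or migrate all regular VMs first, then on one host:
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown                # stops the engine VM

# on each host, once nothing is running on it:
systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd
systemctl stop glusterd                    # gluster management daemon
# power off; on the way back up, start glusterd first, then
# hosted-engine --set-maintenance --mode=none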
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Install ovirt on Azure

2017-08-07 Thread Yaniv Kaul
On Mon, Aug 7, 2017 at 1:52 PM, Karli Sjöberg  wrote:

> On mån, 2017-08-07 at 12:46 +0200, Johan Bernhardsson wrote:
> > There is no point on doing that as azure is a cloud in itself and
> > ovirt
> > is to build your own virtual environment to deploy on local hardware.
>
> Yeah, of course and I think Grzegorz knows that. But for people in the
> testing, evaluating stage, making it a virtualized environment gives a
> greater flexibility. Easier to test without having to buy any metal.
>

The Engine can be installed anywhere. The hosts - a bit more tricky. Does
Azure expose virtualization capable CPU?

Note you can use Lago[1], which we use as our CI tool (with
ovirt-system-tests[2]) - which uses nested virtualization on top of a
single host (my laptop with 8GB runs it).

There's a hyper-converged suite and a regular suite there. They support
Gluster, NFS, iSCSI and many many features can be evaluated on it.
Y.

[1] http://lago.readthedocs.io/en/latest/README.html
[2] http://ovirt-system-tests.readthedocs.io/en/latest/


> >
> > /Johan
> >
> > On Mon, 2017-08-07 at 12:32 +0200, Grzegorz Szypa wrote:
> > >
> > > Hi.
> > >
> > > Did anyone try to install ovirt on Azure Environment?
>
> No idea if Azure VM's support nested virtualization, sorry.
>
> /K
>
> > >
> > > --
> > > G.Sz.
> > > ___
> > >
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Yaniv Kaul
On Mon, Aug 7, 2017 at 2:41 PM, Colin Coe  wrote:

> Hi
>
> I just thought that you'd do hardware RAID if you had the controller or
> JBOD if you didn't.  In hindsight, a server with 40Gbps NICs is pretty
> likely to have a hardware RAID controller.  I've never done JBOD with
> hardware RAID.  I think having a single gluster brick on hardware JBOD
> would be riskier than multiple bricks, each on a single disk, but thats not
> based on anything other than my prejudices.
>
> I thought gluster tiering was for the most frequently accessed files, in
> which case all the VMs disks would end up in the hot tier.  However, I have
> been wrong before...
>

The most frequently accessed units are shards, which may not be complete files.
Y.
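
For context, sharding is what makes this work: each image is split into 
fixed-size pieces, so only the hot pieces need to land on fast storage. 
A minimal sketch of the relevant volume options (the volume name is 
illustrative; 512MB is the shard size commonly recommended for VM images):

gluster volume set vmstore features.shard on
gluster volume set vmstore features.shard-block-size 512MB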


> I just wanted to know where the OS was going as I didn't see it mentioned
> in the OP.  Normally, I'd have the OS on a RAID1 but in your case thats a
> lot of wasted disk.
>
> Honestly, I think Yaniv's answer was far better than my own and made the
> important point about having an arbiter.
>
> Thanks
>
> On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira <
> moacirferre...@hotmail.com> wrote:
>
>> Hi Colin,
>>
>>
>> I am in Portugal, so sorry for this late response. It is quite confusing
>> for me, please consider:
>>
>>
>> 1* - *What if the RAID is done by the server's disk controller, not by
>> software?
>>
>> 2 - For JBOD I am just using gdeploy to deploy it. However, I am not
>> using the oVirt node GUI to do this.
>>
>>
>> 3 - As the VM .qcow2 files are quite big, tiering would only help if
>> made by an intelligent system that uses SSD for chunks of data not for the
>> entire .qcow2 file. But I guess this is a problem everybody else has. So,
>> Do you know how tiering works in Gluster?
>>
>>
>> 4 - I am putting the OS on the first disk. However, would you do
>> differently?
>>
>>
>> Moacir
>>
>> --
>> *From:* Colin Coe 
>> *Sent:* Monday, August 7, 2017 4:48 AM
>> *To:* Moacir Ferreira
>> *Cc:* users@ovirt.org
>> *Subject:* Re: [ovirt-users] Good practices
>>
>> 1) RAID5 may be a performance hit-
>>
>> 2) I'd be inclined to do this as JBOD by creating a distributed disperse
>> volume on each server.  Something like
>>
>> echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
>> $(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e
>> "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)
>>
>> 3) I think the above.
>>
>> 4) Gluster does support tiering, but IIRC you'd need the same number of
>> SSD as spindle drives.  There may be another way to use the SSD as a fast
>> cache.
>>
>> Where are you putting the OS?
>>
>> Hope I understood the question...
>>
>> Thanks
>>
>> On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <
>> moacirferre...@hotmail.com> wrote:
>>
>>> I am willing to assemble a oVirt "pod", made of 3 servers, each with 2
>>> CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use
>>> GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and
>>> a dual 10Gb NIC. So my intention is to create a loop like a server triangle
>>> using the 40Gb NICs for virtualization files (VMs .qcow2) access and to
>>> move VMs around the pod (east /west traffic) while using the 10Gb
>>> interfaces for giving services to the outside world (north/south traffic).
>>>
>>>
>>> This said, my first question is: How should I deploy GlusterFS in such
>>> oVirt scenario? My questions are:
>>>
>>>
>>> 1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and
>>> then create a GlusterFS using them?
>>>
>>> 2 - Instead, should I create a JBOD array made of all server's disks?
>>>
>>> 3 - What is the best Gluster configuration to provide for HA while not
>>> consuming too much disk space?
>>>
>>> 4 - Does a oVirt hypervisor pod like I am planning to build, and the
>>> virtualization environment, benefits from tiering when using a SSD disk?
>>> And yes, will Gluster do it by default or I have to configure it to do so?
>>>
>>>
>>> At the bottom line, what is the good practice for using GlusterFS in
>>> small pods for enterprises?
>>>
>>>
>>> You opinion/feedback will be really appreciated!
>>>
>>> Moacir
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Yaniv Kaul
On Mon, Aug 7, 2017 at 6:41 PM, FERNANDO FREDIANI  wrote:

> Thanks for the clarification Erekle.
>
> However I get surprised with this way of operating from GlusterFS as it
> adds another layer of complexity to the system (either a hardware or
> software RAID) before the gluster config and increase the system's overall
> costs.
>

It does, but with HW based RAID it's not a big deal. The complexity is all
the stripe size math... which I personally don't like to calculate.
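
A small sketch of that math, in case it helps (numbers are illustrative): 
with RAID 6 on 12 disks you have 10 data disks, so with a 256 KiB stripe 
unit the full stripe is 2560 KiB, and the LVM layer on top is usually 
aligned to it:

pvcreate --dataalignment 2560k /dev/sdb
vgcreate gluster_vg /dev/sdb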


> An important point to consider is: In RAID configuration you already have
> space 'wasted' in order to build redundancy (either RAID 1, 5, or 6). Then
> when you have GlusterFS on the top of several RAIDs you have again more
> data replicated so you end up with the same data consuming more space in a
> group of disks and again on the top of several RAIDs depending on the
> Gluster configuration you have (in a RAID 1 config the same data is
> replicated 4 times).
>
> Yet another downside of having a RAID (specially RAID 5 or 6) is that it
> reduces considerably the write speeds as each group of disks will end up
> having the write speed of a single disk as all other disks of that group
> have to wait for each other to write as well.
>
> Therefore if Gluster already replicates data why does it create this big
> pain you mentioned if the data is replicated somewhere else, can still be
> retrieved to both serve clients and reconstruct the equivalent disk when it
> is replaced ?
>

I think it's a matter of how fast you can replace a disk (over a long
weekend?), how reliably you can do it (please, don't pull the wrong disk!
I've seen it happening too many times!) and how much of a performance hit
are you willing to accept while in degraded mode (and how long it took to
detect it. HDDs, unlike SSDs, die slowly. At least when SSD dies, it dies a
quick and determined death. HDDs may accumulate errors and errors and still
function).
Y.



Fernando
>
> On 07/08/2017 10:26, Erekle Magradze wrote:
>
> Hi Frenando,
>
> Here is my experience, if you consider a particular hard drive as a brick
> for gluster volume and it dies, i.e. it becomes not accessible it's a huge
> hassle to discard that brick and exchange with another one, since gluster
> some tries to access that broken brick and it's causing (at least it cause
> for me) a big pain, therefore it's better to have a RAID as brick, i.e.
> have RAID 1 (mirroring) for each brick, in this case if the disk is down
> you can easily exchange it and rebuild the RAID without going offline, i.e
> switching off the volume doing brick manipulations and switching it back on.
>
> Cheers
>
> Erekle
>
> On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:
>
> For any RAID 5 or 6 configuration I normally follow a simple gold rule
> which gave good results so far:
> - up to 4 disks RAID 5
> - 5 or more disks RAID 6
>
> However I didn't really understand well the recommendation to use any RAID
> with GlusterFS. I always thought that GlusteFS likes to work in JBOD mode
> and control the disks (bricks) directlly so you can create whatever
> distribution rule you wish, and if a single disk fails you just replace it
> and which obviously have the data replicated from another. The only
> downside of using in this way is that the replication data will be flow
> accross all servers but that is not much a big issue.
>
> Anyone can elaborate about Using RAID + GlusterFS and JBOD + GlusterFS.
>
> Thanks
> Regards
> Fernando
>
> On 07/08/2017 03:46, Devin Acosta wrote:
>
>
> Moacir,
>
> I have recently installed multiple Red Hat Virtualization hosts for
> several different companies, and have dealt with the Red Hat Support Team
> in depth about optimal configuration in regards to setting up GlusterFS
> most efficiently and I wanted to share with you what I learned.
>
> In general Red Hat Virtualization team frowns upon using each DISK of the
> system as just a JBOD, sure there is some protection by having the data
> replicated, however, the recommendation is to use RAID 6 (preferred) or
> RAID-5, or at least RAID-1 at the very least.
>
> Here is the direct quote from Red Hat when I asked about RAID and Bricks:
>
> *"A typical Gluster configuration would use RAID underneath the bricks.
> RAID 6 is most typical as it gives you 2 disk failure protection, but RAID
> 5 could be used too. Once you have the RAIDed bricks, you'd then apply the
> desired replication on top of that. The most popular way of doing this
> would be distributed replicated with 2x replication. In general you'll get
> better performance with larger bricks. 12 drives is often a sweet spot.
> Another option would be to create a separate tier using all SSD’s.” *
>
> *In order to SSD tiering from my understanding you would need 1 x NVMe
> drive in each server, or 4 x SSD hot tier (it needs to be distributed,
> replicated for the hot tier if not using NVME). So with you only having 1
> SSD drive in each server, I’d suggest maybe looking 

Re: [ovirt-users] Python errors with ovirt 4.1.4

2017-08-07 Thread Staniforth, Paul
Thanks,

  That works but I still have the engine reporting 1 node has an 
update available, strange.


"Check for available updates on host x.xxx.xxx was completed successfully 
with message 'found updates for packages 
ovirt-node-ng-image-update-4.1.4-1.el7.centos'."


Regards,

  Paul S.


From: Yuval Turgeman 
Sent: 07 August 2017 13:37
To: Staniforth, Paul
Cc: david caughey; Users@ovirt.org
Subject: Re: [ovirt-users] Python errors with ovirt 4.1.4

Hi,

The problem should be solved here:

http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/

Thanks,
Yuval.


On Fri, Aug 4, 2017 at 12:37 PM, Staniforth, Paul 
> wrote:

Hello,

   I have 3 nodes and used the engine to update them to


ovirt-node-ng-4.1.4-0.20170728.0


but the engine still reported a new update which I tried but it failed.


On the nodes yum check-update showed an update for


ovirt-node-ng-nodectl.noarch4.1.4-0.20170728.0.el7


installing this produces the same errors when logging into the node or running 
nodectl motd.

nodectl check and info were fine, but the engine produced errors when checking 
for updates.


I used yum history to rollback the ovirt-node-ng-nodectl.noarch.


I now have no errors but strangely the engine reports 2 nodes have updates 
available but not the 3rd which wasn't the one I did a nodectl update on.


Regards,

   Paul S.



From: users-boun...@ovirt.org 
> on behalf of david 
caughey >
Sent: 02 August 2017 10:48
To: Users@ovirt.org
Subject: [ovirt-users] Python errors with ovirt 4.1.4

Hi Folks,

I'm testing out the new version with the 4.1.4 ovirt iso and am getting errors 
directly after install:

Last login: Wed Aug  2 10:17:56 2017
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in 

CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in 
CliApplication
return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in 
command
return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 102, in motd
machine_readable=True).output, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 51, in 
__init__
self._update_info(status)
  File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 78, in 
_update_info
if "ok" not in status.lower():
AttributeError: Status instance has no attribute 'lower'
Admin Console: https://192.168.122.61:9090/

The admin console seems to work fine.

Are these issues serious or can they be ignored.

BR/David
To view the terms under which this email is distributed, please go to:-
http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


To view the terms under which this email is distributed, please go to:-
http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread yayo (j)
>
>
>
> Better workaround is to download newer python-apt from here [1] and
> install it manually with dpkg. The guest agent seems to work OK with it.
> There's much less chance of breaking something else, also newer
> python-apt will be picked-up automatically on upgrades.
>
> Tomas
>
>
> [1] https://packages.ubuntu.com/yakkety/python-apt
>
>

Thank you, this is fix the problem!


root@vmdczen01:~# dpkg -i python-apt_1.1.0~beta5_amd64.deb
Selecting previously unselected package python-apt.
(Reading database ... 155994 files and directories currently installed.)
Preparing to unpack python-apt_1.1.0~beta5_amd64.deb ...
Unpacking python-apt (1.1.0~beta5) ...
dpkg: dependency problems prevent configuration of python-apt:
 python-apt depends on dirmngr | gnupg (<< 2); however:
  Package dirmngr is not installed.
  Version of gnupg on system is 2.1.15-1ubuntu6.

dpkg: error processing package python-apt (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 python-apt

root@vmdczen01:~# apt-get -f install
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer
required:
  linux-headers-4.4.0-83 linux-headers-4.4.0-83-generic
linux-image-4.4.0-83-generic linux-image-extra-4.4.0-83-generic
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  dirmngr
The following NEW packages will be installed:
  dirmngr
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
1 not fully installed or removed.
Need to get 235 kB of archives.
After this operation, 644 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://it.archive.ubuntu.com/ubuntu xenial/main amd64 dirmngr amd64
2.1.11-6ubuntu2 [235 kB]
Fetched 235 kB in 0s (775 kB/s)
Selecting previously unselected package dirmngr.
(Reading database ... 156021 files and directories currently installed.)
Preparing to unpack .../dirmngr_2.1.11-6ubuntu2_amd64.deb ...
Unpacking dirmngr (2.1.11-6ubuntu2) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up dirmngr (2.1.11-6ubuntu2) ...
Setting up python-apt (1.1.0~beta5) ...

root@vmdczen01:~# apt-get install ovirt-guest-agent
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer
required:
  linux-headers-4.4.0-83 linux-headers-4.4.0-83-generic
linux-image-4.4.0-83-generic linux-image-extra-4.4.0-83-generic
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libnl-route-3-200 python-dbus python-ethtool qemu-guest-agent
Suggested packages:
  python-dbus-doc python-dbus-dbg
Recommended packages:
  python-gi | python-qt4-dbus
The following NEW packages will be installed:
  libnl-route-3-200 ovirt-guest-agent python-dbus python-ethtool
qemu-guest-agent
0 upgraded, 5 newly installed, 0 to remove and 2 not upgraded.
Need to get 383 kB of archives.
After this operation, 1,574 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://it.archive.ubuntu.com/ubuntu xenial-updates/main amd64
libnl-route-3-200 amd64 3.2.27-1ubuntu0.16.04.1 [124 kB]
Get:2 http://it.archive.ubuntu.com/ubuntu xenial/main amd64 python-dbus
amd64 1.2.0-3 [83.5 kB]
Get:3 http://it.archive.ubuntu.com/ubuntu xenial/universe amd64
python-ethtool amd64 0.11-3 [18.0 kB]
Get:4 http://it.archive.ubuntu.com/ubuntu xenial-updates/universe amd64
qemu-guest-agent amd64 1:2.5+dfsg-5ubuntu10.14 [135 kB]
Get:5 http://it.archive.ubuntu.com/ubuntu xenial/universe amd64
ovirt-guest-agent all 1.0.11.2.dfsg-1 [23.4 kB]
Fetched 383 kB in 0s (993 kB/s)
Selecting previously unselected package libnl-route-3-200:amd64.
(Reading database ... 156038 files and directories currently installed.)
Preparing to unpack .../libnl-route-3-200_3.2.27-1ubuntu0.16.04.1_amd64.deb
...
Unpacking libnl-route-3-200:amd64 (3.2.27-1ubuntu0.16.04.1) ...
Selecting previously unselected package python-dbus.
Preparing to unpack .../python-dbus_1.2.0-3_amd64.deb ...
Unpacking python-dbus (1.2.0-3) ...
Selecting previously unselected package python-ethtool.
Preparing to unpack .../python-ethtool_0.11-3_amd64.deb ...
Unpacking python-ethtool (0.11-3) ...
Selecting previously unselected package qemu-guest-agent.
Preparing to unpack
.../qemu-guest-agent_1%3a2.5+dfsg-5ubuntu10.14_amd64.deb ...
Unpacking qemu-guest-agent (1:2.5+dfsg-5ubuntu10.14) ...
Selecting previously unselected package ovirt-guest-agent.
Preparing to unpack .../ovirt-guest-agent_1.0.11.2.dfsg-1_all.deb ...
Unpacking ovirt-guest-agent (1.0.11.2.dfsg-1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu19) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for dbus (1.10.6-1ubuntu3.3) ...

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI

Thanks for the clarification Erekle.

However I am surprised by this way of operating GlusterFS, as it adds 
another layer of complexity to the system (either a hardware or 
software RAID) underneath the gluster config and increases the 
system's overall cost.


An important point to consider is: in a RAID configuration you already 
have space 'wasted' in order to build redundancy (RAID 1, 5, or 6). 
When you then put GlusterFS on top of several RAIDs, the data is 
replicated again, so the same data ends up consuming more space, once 
inside each RAID group and once more across the RAIDs, depending on 
the Gluster configuration you have (in a RAID 1 config the same data 
is stored 4 times).


Yet another downside of having RAID (especially RAID 5 or 6) is that 
it considerably reduces write speed: each group of disks ends up with 
roughly the write speed of a single disk, since all the other disks in 
the group have to wait for each other to write as well.


Therefore, if Gluster already replicates data, why does it create the 
big pain you mentioned? If the data is replicated somewhere else it 
can still be retrieved, both to serve clients and to reconstruct the 
equivalent of the failed disk when it is replaced.


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Fernando,

Here is my experience: if you use a particular hard drive as a brick 
for a gluster volume and it dies, i.e. it becomes inaccessible, it is 
a huge hassle to discard that brick and exchange it with another one, 
since gluster sometimes keeps trying to access the broken brick and 
that causes (at least it caused for me) a big pain. Therefore it is 
better to have a RAID as the brick, i.e. RAID 1 (mirroring) for each 
brick; in that case, if a disk dies you can easily exchange it and 
rebuild the RAID without going offline, i.e. without switching off the 
volume, doing brick manipulations, and switching it back on.


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple golden 
rule which has given good results so far:

- up to 4 disks: RAID 5
- 5 or more disks: RAID 6

However, I didn't really understand the recommendation to use any RAID 
with GlusterFS. I always thought that GlusterFS likes to work in JBOD 
mode and control the disks (bricks) directly, so you can create 
whatever distribution rule you wish, and if a single disk fails you 
just replace it, since the data is replicated from another server 
anyway. The only downside of this approach is that replication traffic 
flows across all servers, but that is not much of an issue.


Can anyone elaborate on using RAID + GlusterFS and JBOD + GlusterFS?

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat Support 
Team in depth about optimal configuration in regards to setting up 
GlusterFS most efficiently and I wanted to share with you what I 
learned.


In general Red Hat Virtualization team frowns upon using each DISK 
of the system as just a JBOD, sure there is some protection by 
having the data replicated, however, the recommendation is to use 
RAID 6 (preferred) or RAID-5, or at least RAID-1 at the very least.


Here is the direct quote from Red Hat when I asked about RAID and 
Bricks:

/
/
/"A typical Gluster configuration would use RAID underneath the 
bricks. RAID 6 is most typical as it gives you 2 disk failure 
protection, but RAID 5 could be used too. Once you have the RAIDed 
bricks, you'd then apply the desired replication on top of that. The 
most popular way of doing this would be distributed replicated with 
2x replication. In general you'll get better performance with larger 
bricks. 12 drives is often a sweet spot. Another option would be to 
create a separate tier using all SSD’s.” /


/In order to SSD tiering from my understanding you would need 1 x 
NVMe drive in each server, or 4 x SSD hot tier (it needs to be 
distributed, replicated for the hot tier if not using NVME). So with 
you only having 1 SSD drive in each server, I’d suggest maybe 
looking into the NVME option. /

/
/
/Since your using only 3-servers, what I’d probably suggest is to do 
(2 Replicas + Arbiter Node), this setup actually doesn’t require the 
3rd server to have big drives at all as it only stores meta-data 
about the files and not actually a full copy. /

/
/
/Please see the attached document that was given to me by Red Hat to 
get more information on this. Hope this information helps you./

/
/

--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect

On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com ) wrote:


I am willing to assemble a oVirt "pod", made of 3 servers, each 
with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The 
idea is to use GlusterFS to provide HA for the 

Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread Tomáš Golembiovský
On Mon, 7 Aug 2017 16:08:00 +0200
"yayo (j)"  wrote:

> >
> > This is the problem!
> >
> > I looked at the packages for conflict and figured the issue is in gnpug.
> > Zentyal repository contains gnupg version 2.1.15-1ubuntu6 which breaks
> > python-apt <= 1.1.0~beta4.
> >
> >  
> Ok, thank you! Any workaround (something like packege pinning?) to fix this
> problem?

Package pinning is probably not a good idea. I've noticed that there are
some libraries that require gnupg >= 2.x and you would not be able to
install those. They came from Zentyal repo so I assume they might be
essential for some features.

Better workaround is to download newer python-apt from here [1] and
install it manually with dpkg. The guest agent seems to work OK with it.
There's much less chance of breaking something else, also newer
python-apt will be picked-up automatically on upgrades.

Tomas


[1] https://packages.ubuntu.com/yakkety/python-apt

-- 
Tomáš Golembiovský 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread yayo (j)
>
> Agreed.
>>
>> Open a bug with Zentyal. They broke the packages from Ubuntu and should
>> fix it themselves. They have to backport newer version of python-apt.
>> The one from yakkety (1.1.0~beta5) should be good enough to fix the
>> problem.
>>
>> In the bug report note that the ovirt-guest-agent from Ubuntu repository
>> cannot be installed. It is not only related to the package from the
>> private repo.
>
>
>
> Ok, Thank you!
>


Done: https://tracker.zentyal.org/issues/5279

Thank you Again
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Software RAID on oVirt Node

2017-08-07 Thread Derek Atkins
Vinícius Ferrão  writes:

> Hello Chris,
>
> On non-node installation I can’t see any problems as you said, but due
> to the appliance nature of oVirt Node I don’t know if this would be a
> supported scenario. Anyway you raised a good point: local storage. I’m
> not needing this, perhaps someone on the list will be using this
> feature.

It depends on your definition of "supported."

However this is the mode I'm using, too.  ovirt on top of CentOS with
sw-raid.  Haven't had any problems.
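
For anyone wanting to reproduce that layout, a minimal sw-raid sketch 
on plain CentOS (partition names are placeholders; oVirt Node's own 
installer is a different story):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.xfs /dev/md0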

> V.

-derek

-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread yayo (j)
>
> This is the problem!
>
> I looked at the packages for conflict and figured the issue is in gnpug.
> Zentyal repository contains gnupg version 2.1.15-1ubuntu6 which breaks
> python-apt <= 1.1.0~beta4.
>
>
Ok, thank you! Any workaround (something like package pinning?) to fix this
problem?


>
> > And this is a BIG problem ... Can I open a bug ? Where?
>
> Agreed.
>
> Open a bug with Zentyal. They broke the packages from Ubuntu and should
> fix it themselves. They have to backport newer version of python-apt.
> The one from yakkety (1.1.0~beta5) should be good enough to fix the
> problem.
>
> In the bug report note that the ovirt-guest-agent from Ubuntu repository
> cannot be installed. It is not only related to the package from the
> private repo.



Ok, Thank you!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Erekle Magradze

Hi Fernando,

Here is my experience: if you use a particular hard drive as a brick 
for a gluster volume and it dies, i.e. it becomes inaccessible, it is 
a huge hassle to discard that brick and exchange it with another one, 
since gluster sometimes keeps trying to access the broken brick and 
that causes (at least it caused for me) a big pain. Therefore it is 
better to have a RAID as the brick, i.e. RAID 1 (mirroring) for each 
brick; in that case, if a disk dies you can easily exchange it and 
rebuild the RAID without going offline, i.e. without switching off the 
volume, doing brick manipulations, and switching it back on.


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple golden 
rule which has given good results so far:

- up to 4 disks: RAID 5
- 5 or more disks: RAID 6

However, I didn't really understand the recommendation to use any RAID 
with GlusterFS. I always thought that GlusterFS likes to work in JBOD 
mode and control the disks (bricks) directly, so you can create 
whatever distribution rule you wish, and if a single disk fails you 
just replace it, since the data is replicated from another server 
anyway. The only downside of this approach is that replication traffic 
flows across all servers, but that is not much of an issue.


Can anyone elaborate on using RAID + GlusterFS and JBOD + GlusterFS?

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat Support 
Team in depth about optimal configuration in regards to setting up 
GlusterFS most efficiently and I wanted to share with you what I learned.


In general Red Hat Virtualization team frowns upon using each DISK of 
the system as just a JBOD, sure there is some protection by having 
the data replicated, however, the recommendation is to use RAID 6 
(preferred) or RAID-5, or at least RAID-1 at the very least.


Here is the direct quote from Red Hat when I asked about RAID and Bricks:
/
/
/"A typical Gluster configuration would use RAID underneath the 
bricks. RAID 6 is most typical as it gives you 2 disk failure 
protection, but RAID 5 could be used too. Once you have the RAIDed 
bricks, you'd then apply the desired replication on top of that. The 
most popular way of doing this would be distributed replicated with 
2x replication. In general you'll get better performance with larger 
bricks. 12 drives is often a sweet spot. Another option would be to 
create a separate tier using all SSD’s.” /


/In order to SSD tiering from my understanding you would need 1 x 
NVMe drive in each server, or 4 x SSD hot tier (it needs to be 
distributed, replicated for the hot tier if not using NVME). So with 
you only having 1 SSD drive in each server, I’d suggest maybe looking 
into the NVME option. /

/
/
/Since your using only 3-servers, what I’d probably suggest is to do 
(2 Replicas + Arbiter Node), this setup actually doesn’t require the 
3rd server to have big drives at all as it only stores meta-data 
about the files and not actually a full copy. /

/
/
/Please see the attached document that was given to me by Red Hat to 
get more information on this. Hope this information helps you./

/
/

--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect

On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com ) wrote:


I am willing to assemble a oVirt "pod", made of 3 servers, each with 
2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is 
to use GlusterFS to provide HA for the VMs. The 3 servers have a 
dual 40Gb NIC and a dual 10Gb NIC. So my intention is to create a 
loop like a server triangle using the 40Gb NICs for virtualization 
files (VMs .qcow2) access and to move VMs around the pod (east /west 
traffic) while using the 10Gb interfaces for giving services to the 
outside world (north/south traffic).



This said, my first question is: How should I deploy GlusterFS in 
such oVirt scenario? My questions are:



1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, 
and then create a GlusterFS using them?


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while 
not consuming too much disk space?


4 - Does a oVirt hypervisor pod like I am planning to build, and the 
virtualization environment, benefits from tiering when using a SSD 
disk? And yes, will Gluster do it by default or I have to configure 
it to do so?



At the bottom line, what is the good practice for using GlusterFS in 
small pods for enterprises?



You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
Moacir, I believe that to use the 3 servers directly connected to each 
other without a switch you have to have a bridge on each server joining 
the two physical interfaces, to allow the traffic to pass through at 
layer 2 (is it possible to create this from the oVirt Engine web 
interface?). If your ovirtmgmt network is separate from the others 
(it really should be) that should be fine to do.



Fernando


On 07/08/2017 07:13, Moacir Ferreira wrote:


Hi, in-line responses.


Thanks,

Moacir



*From:* Yaniv Kaul 
*Sent:* Monday, August 7, 2017 7:42 AM
*To:* Moacir Ferreira
*Cc:* users@ovirt.org
*Subject:* Re: [ovirt-users] Good practices


On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira 
> wrote:


I am willing to assemble a oVirt "pod", made of 3 servers, each
with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The
idea is to use GlusterFS to provide HA for the VMs. The 3 servers
have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to
create a loop like a server triangle using the 40Gb NICs for
virtualization files (VMs .qcow2) access and to move VMs around
the pod (east /west traffic) while using the 10Gb interfaces for
giving services to the outside world (north/south traffic).


Very nice gear. How are you planning the network exactly? Without a 
switch, back-to-back? (sounds OK to me, just wanted to ensure this is 
what the 'dual' is used for). However, I'm unsure if you have the 
correct balance between the interface speeds (40g) and the disks (too 
many HDDs?).


Moacir: The idea is to have a very high performance network for the 
distributed file system and to prevent bottlenecks when we move one VM 
from a node to another. Using 40Gb NICs I can just connect the servers 
back-to-back. In this case I don't need the expensive 40Gb switch, I 
get very high speed and no contention between north/south traffic with 
east/west.



This said, my first question is: How should I deploy GlusterFS in
such oVirt scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node,
and then create a GlusterFS using them?

I would assume RAID 1 for the operating system (you don't want a 
single point of failure there?) and the rest JBODs. The SSD will be 
used for caching, I reckon? (I personally would add more SSDs instead 
of HDDs, but it does depend on the disk sizes and your space requirements.


Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic 
JBOD or a JBOD assembled using RAID-5 "disks" created by the server's 
disk controller?


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while
not consuming too much disk space?


Replica 2 + Arbiter sounds good to me.
Moacir: I agree, and that is what I am using.

4 - Does a oVirt hypervisor pod like I am planning to build, and
the virtualization environment, benefits from tiering when using a
SSD disk? And yes, will Gluster do it by default or I have to
configure it to do so?


Yes, I believe using lvmcache is the best way to go.

Moacir: Are you sure? I say that because the qcow2 files will be
quite big. So if tiering is "file based" the SSD would have to be
very, very big, unless Gluster tiering does it by "chunks of data".


At the bottom line, what is the good practice for using GlusterFS
in small pods for enterprises?


Don't forget jumbo frames. libgfapi (coming hopefully in 4.1.5). 
Sharding (enabled out of the box if you use a hyper-converged setup 
via gdeploy).
*Moacir:* Yes! This is another reason to have separate networks for 
north/south and east/west. In that way I can use the standard MTU on 
the 10Gb NICs and jumbo frames on the file/move 40Gb NICs.


Y.


You opinion/feedback will be really appreciated!

Moacir


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
For any RAID 5 or 6 configuration I normally follow a simple golden 
rule which has given good results so far:

- up to 4 disks: RAID 5
- 5 or more disks: RAID 6

However, I didn't really understand the recommendation to use any RAID 
with GlusterFS. I always thought that GlusterFS likes to work in JBOD 
mode and control the disks (bricks) directly, so you can create 
whatever distribution rule you wish, and if a single disk fails you 
just replace it, since the data is replicated from another server 
anyway. The only downside of this approach is that replication traffic 
flows across all servers, but that is not much of an issue.


Can anyone elaborate on using RAID + GlusterFS and JBOD + GlusterFS?

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat Support 
Team in depth about optimal configuration in regards to setting up 
GlusterFS most efficiently and I wanted to share with you what I learned.


In general Red Hat Virtualization team frowns upon using each DISK of 
the system as just a JBOD, sure there is some protection by having the 
data replicated, however, the recommendation is to use RAID 6 
(preferred) or RAID-5, or at least RAID-1 at the very least.


Here is the direct quote from Red Hat when I asked about RAID and Bricks:
/
/
/"A typical Gluster configuration would use RAID underneath the 
bricks. RAID 6 is most typical as it gives you 2 disk failure 
protection, but RAID 5 could be used too. Once you have the RAIDed 
bricks, you'd then apply the desired replication on top of that. The 
most popular way of doing this would be distributed replicated with 2x 
replication. In general you'll get better performance with larger 
bricks. 12 drives is often a sweet spot. Another option would be to 
create a separate tier using all SSD’s.” /


/In order to SSD tiering from my understanding you would need 1 x NVMe 
drive in each server, or 4 x SSD hot tier (it needs to be distributed, 
replicated for the hot tier if not using NVME). So with you only 
having 1 SSD drive in each server, I’d suggest maybe looking into the 
NVME option. /

/
/
/Since your using only 3-servers, what I’d probably suggest is to do 
(2 Replicas + Arbiter Node), this setup actually doesn’t require the 
3rd server to have big drives at all as it only stores meta-data about 
the files and not actually a full copy. /

/
/
/Please see the attached document that was given to me by Red Hat to 
get more information on this. Hope this information helps you./

/
/

--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect

On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com ) wrote:


I am willing to assemble a oVirt "pod", made of 3 servers, each with 
2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is 
to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 
40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop 
like a server triangle using the 40Gb NICs for virtualization files 
(VMs .qcow2) access and to move VMs around the pod (east /west 
traffic) while using the 10Gb interfaces for giving services to the 
outside world (north/south traffic).



This said, my first question is: How should I deploy GlusterFS in 
such oVirt scenario? My questions are:



1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, 
and then create a GlusterFS using them?


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while 
not consuming too much disk space?


4 - Does a oVirt hypervisor pod like I am planning to build, and the 
virtualization environment, benefits from tiering when using a SSD 
disk? And yes, will Gluster do it by default or I have to configure 
it to do so?



At the bottom line, what is the good practice for using GlusterFS in 
small pods for enterprises?



You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi, in-line responses.


Thanks,

Moacir


From: Yaniv Kaul 
Sent: Monday, August 7, 2017 7:42 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices



On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira 
> wrote:

I am willing to assemble a oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).

Very nice gear. How are you planning the network exactly? Without a switch, 
back-to-back? (sounds OK to me, just wanted to ensure this is what the 'dual' 
is used for). However, I'm unsure if you have the correct balance between the 
interface speeds (40g) and the disks (too many HDDs?).

Moacir: The idea is to have a very high performance network for the distributed 
file system and to prevent bottlenecks when we move one VM from a node to 
another. Using 40Gb NICs I can just connect the servers back-to-back. In this 
case I don't need the expensive 40Gb switch, I get very high speed and no 
contention between north/south traffic with east/west.



This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

I would assume RAID 1 for the operating system (you don't want a single point 
of failure there?) and the rest JBODs. The SSD will be used for caching, I 
reckon? (I personally would add more SSDs instead of HDDs, but it does depend 
on the disk sizes and your space requirements.

Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic JBOD or a 
JBOD assembled using RAID-5 "disks" created by the server's disk controller?


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

Replica 2 + Arbiter sounds good to me.
Moacir: I agree, and that is what I am using.


4 - Does a oVirt hypervisor pod like I am planning to build, and the 
virtualization environment, benefits from tiering when using a SSD disk? And 
yes, will Gluster do it by default or I have to configure it to do so?

Yes, I believe using lvmcache is the best way to go.

Moacir: Are you sure? I say that because the qcow2 files will be quite big. So 
if tiering is "file based" the SSD would have to be very, very big, unless 
Gluster tiering does it by "chunks of data".
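
For reference, a minimal lvmcache sketch, assuming the bricks live in a VG 
called gluster_vg, the brick LV is brick_lv and the SSD is /dev/sdh (names and 
sizes are illustrative); lvmcache caches blocks, so the size of the .qcow2 
files does not matter:

pvcreate /dev/sdh
vgextend gluster_vg /dev/sdh
lvcreate -L 200G -n brick_cache gluster_vg /dev/sdh
lvcreate -L 2G -n brick_cache_meta gluster_vg /dev/sdh
lvconvert --type cache-pool --poolmetadata gluster_vg/brick_cache_meta gluster_vg/brick_cache
lvconvert --type cache --cachepool gluster_vg/brick_cache gluster_vg/brick_lv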


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?

Don't forget jumbo frames. libgfapi (coming hopefully in 4.1.5). Sharding 
(enabled out of the box if you use a hyper-converged setup via gdeploy).
Moacir: Yes! This is another reason to have separate networks for north/south 
and east/west. In that way I can use the standard MTU on the 10Gb NICs and 
jumbo frames on the file/move 40Gb NICs.
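
For completeness, a tiny sketch of that split (interface names are examples; 
the MTU can also be set on the storage logical network in the engine so it is 
applied consistently to the attached NICs and bridges):

ip link set dev ens2f0 mtu 9000   # 40Gb storage/migration NIC
ip link set dev ens2f1 mtu 9000
ip link set dev ens1f0 mtu 1500   # 10Gb north/south NIC stays at standard MTU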

Y.



You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Python errors with ovirt 4.1.4

2017-08-07 Thread Yuval Turgeman
Hi,

The problem should be solved here:

http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/

Thanks,
Yuval.


On Fri, Aug 4, 2017 at 12:37 PM, Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Hello,
>
>I have 3 nodes and used the engine to update them to
>
>
> ovirt-node-ng-4.1.4-0.20170728.0
>
>
> but the engine still reported a new update which I tried but it failed.
>
>
> On the nodes yum check-update showed an update for
>
>
> ovirt-node-ng-nodectl.noarch4.1.4-0.20170728.0.el7
>
>
> installing this produces the same errors when logging into the node or
> running nodectl motd.
>
> nodectl check and info where fine but the engine produced errors when
> checking for updates.
>
>
> I used yum history to rollback the ovirt-node-ng-nodectl.noarch.
>
>
> I now have no errors but strangely the engine reports 2 nodes have
> updates available but not the 3rd which wasn't the one I did a nodectl
> update on.
>
>
> Regards,
>
>Paul S.
>
>
> --
> *From:* users-boun...@ovirt.org  on behalf of
> david caughey 
> *Sent:* 02 August 2017 10:48
> *To:* Users@ovirt.org
> *Subject:* [ovirt-users] Python errors with ovirt 4.1.4
>
> Hi Folks,
>
> I'm testing out the new version with the 4.1.4 ovirt iso and am getting
> errors directly after install:
>
> Last login: Wed Aug  2 10:17:56 2017
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
> in 
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 102,
> in motd
> machine_readable=True).output, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 51, in
> __init__
> self._update_info(status)
>   File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 78, in
> _update_info
> if "ok" not in status.lower():
> AttributeError: Status instance has no attribute 'lower'
> Admin Console: https://192.168.122.61:9090/
>
> The admin console seems to work fine.
>
> Are these issues serious or can they be ignored.
>
> BR/David
> To view the terms under which this email is distributed, please go to:-
> http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread Tomáš Golembiovský
On Mon, 7 Aug 2017 10:18:35 +0200
"yayo (j)"  wrote:

> Hi,
> 
> 
> I just tried that with development version of Zentyal and it works for
> > me. Well, there are some caveats, see below.
> >
> >  
> Please provide steps not just "works for me" ... Thank you
> 
> 
> 
> > > Just wanted to add my input.  I just recently noticed the same thing.
> > > Luckily i was just testing Zentyal, but when I installed python-apt after
> > > reading the error message, apt seemed to completely break.  I would be
> > > curious on a workaround/fix for this as well.  
> >
> > Could you be more specific? What was the problem? Was it problem with
> > python-apt per-se or with ovirt-guest-agent using python-apt?
> >
> >  
> In the past with Zentyal 5 Dev Edition I had the same error: I added the
> suggested repository, which wants to install "python-apt" and remove "apt-get"
> (because of conflicts)
> 
> 
> 
> >  
> > >
> > >
> > > On Fri, Aug 4, 2017 at 9:28 AM, yayo (j)  wrote:
> > >  
> > > > Hi all,
> > > >
> > > > I have this problem: I'm trying to install the guest tools following  
> > this  
> > > > guide: https://www.ovirt.org/documentation/how-to/guest-
> > > > agent/install-the-guest-agent-in-ubuntu/#for-ubuntu-1604  
> >
> > I've noticed that the ovirt-guest-agent package available in the
> > repository mentioned on the oVirt site is missing dependency on
> > qemu-guest-agent. You have to install it additionally for oVirt to work
> > properly.
> >  
> 
> 
> *Steps with repository:*
> http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/
> 
> *Update:*
> 
> 
> root@vmdczen01:~# apt-get update
> Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
> Hit:2 http://it.archive.ubuntu.com/ubuntu xenial InRelease
> Get:3 http://it.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
> Get:4 http://it.archive.ubuntu.com/ubuntu xenial-backports InRelease [102
> kB]
> Hit:5 http://archive.zentyal.org/zentyal 5.0 InRelease
> Ign:6
> http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04
> InRelease
> Hit:7
> http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04
> Release
> Fetched 306 kB in 0s (325 kB/s)
> Reading package lists... Done
> 
> 
> 
> *Trying to install:*
> 
> 
> root@vmdczen01:~# apt-get install ovirt-guest-agent
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> 
> The following packages have unmet dependencies:
>  ovirt-guest-agent : Depends: python-apt but it is not going to be installed
> E: Unable to correct problems, you have held broken packages.
> 
> 
> *Zentyal version is: *5.0.8
> 
> So, this repository is *totally broken*
> 
> 
> 
> >
> > If you, however, instal the ovirt-guest-agent from official Ubuntu
> > repository there is different issue. There is this unresolved bug:
> >
> > https://bugs.launchpad.net/ubuntu/+source/ovirt-guest-agent/+bug/1609130
> >
> > You have to fix permissions on /var/log/ovirt-guest-agent as mentioned
> > in the bug report.
> >
> >  
> 
> The problem is the same: if you remove the extra repository and then try to
> use the "main" repository, you have the problem with python-apt
> 
> 
> Extra tests:
> 
> I have tried to install "python-apt" directly and I can reproduce the
> problem mentioned by Stewart:
> 
> 
> Update (Check that extra repository is commented out):
> 
> 
> root@vmdczen01:~# apt-get update
> Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
> Get:2 http://archive.zentyal.org/zentyal 5.0 InRelease [4,887 B]
> Get:3 http://it.archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
> Get:4 http://archive.zentyal.org/zentyal 5.0/main amd64 Packages [28.1 kB]
> Get:5 http://archive.zentyal.org/zentyal 5.0/main i386 Packages [6,218 B]
> Get:6 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
> [325 kB]
> Get:7 http://it.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
> Get:8 http://it.archive.ubuntu.com/ubuntu xenial-backports InRelease [102
> kB]
> Get:9 http://it.archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1,201
> kB]
> Get:10 http://it.archive.ubuntu.com/ubuntu xenial/main i386 Packages [1,196
> kB]
> Get:11 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages
> [306 kB]
> Get:12 http://it.archive.ubuntu.com/ubuntu xenial/main Translation-en [568
> kB]
> Get:13 http://it.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
> [7,532 kB]
> Get:14 http://it.archive.ubuntu.com/ubuntu xenial/universe i386 Packages
> [7,512 kB]
> Get:15 http://it.archive.ubuntu.com/ubuntu xenial/universe 

Re: [ovirt-users] Good practices

2017-08-07 Thread Colin Coe
Hi

I just thought that you'd do hardware RAID if you had the controller or
JBOD if you didn't.  In hindsight, a server with 40Gbps NICs is pretty
likely to have a hardware RAID controller.  I've never done JBOD with
hardware RAID.  I think having a single gluster brick on hardware JBOD
would be riskier than multiple bricks, each on a single disk, but that's not
based on anything other than my prejudices.

I thought gluster tiering was for the most frequently accessed files, in
which case all the VMs disks would end up in the hot tier.  However, I have
been wrong before...
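
For what it's worth, Gluster does not attach a hot tier by itself; it has to be
done explicitly, roughly along these lines (a sketch only, with made-up volume and
brick names, keeping the tier replicated like the volume):

gluster volume tier vmstore attach replica 3 \
    serverA:/gluster/ssd/brick serverB:/gluster/ssd/brick serverC:/gluster/ssd/brick
# and to undo it later:
# gluster volume tier vmstore detach start   (then "detach commit" once data has moved)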

I just wanted to know where the OS was going as I didn't see it mentioned
in the OP.  Normally, I'd have the OS on a RAID1 but in your case that's a
lot of wasted disk.

Honestly, I think Yaniv's answer was far better than my own and made the
important point about having an arbiter.

Thanks

On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira 
wrote:

> Hi Colin,
>
>
> I am in Portugal, so sorry for this late response. It is quite confusing
> for me, please consider:
>
>
> 1 - What if the RAID is done by the server's disk controller, not by
> software?
>
> 2 - For JBOD I am just using gdeploy to deploy it. However, I am not
> using the oVirt node GUI to do this.
>
>
> 3 - As the VM .qcow2 files are quite big, tiering would only help if made
> by an intelligent system that uses SSD for chunks of data, not for the
> entire .qcow2 file. But I guess this is a problem everybody else has. So,
> do you know how tiering works in Gluster?
>
>
> 4 - I am putting the OS on the first disk. However, would you do
> differently?
>
>
> Moacir
>
> --
> *From:* Colin Coe 
> *Sent:* Monday, August 7, 2017 4:48 AM
> *To:* Moacir Ferreira
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Good practices
>
> 1) RAID5 may be a performance hit-
>
> 2) I'd be inclined to do this as JBOD by creating a distributed disperse
> volume on each server.  Something like
>
> echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
> $(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e
> "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)
>
> 3) I think the above.
>
> 4) Gluster does support tiering, but IIRC you'd need the same number of
> SSD as spindle drives.  There may be another way to use the SSD as a fast
> cache.
>
> Where are you putting the OS?
>
> Hope I understood the question...
>
> Thanks
>
> On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <
> moacirferre...@hotmail.com> wrote:
>
>> I am willing to assemble an oVirt "pod", made of 3 servers, each with 2
>> CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use
>> GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and
>> a dual 10Gb NIC. So my intention is to create a loop like a server triangle
>> using the 40Gb NICs for virtualization files (VMs .qcow2) access and to
>> move VMs around the pod (east /west traffic) while using the 10Gb
>> interfaces for giving services to the outside world (north/south traffic).
>>
>>
>> This said, my first question is: How should I deploy GlusterFS in such
>> oVirt scenario? My questions are:
>>
>>
>> 1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and
>> then create a GlusterFS using them?
>>
>> 2 - Instead, should I create a JBOD array made of all server's disks?
>>
>> 3 - What is the best Gluster configuration to provide for HA while not
>> consuming too much disk space?
>>
>> 4 - Does an oVirt hypervisor pod like I am planning to build, and the
>> virtualization environment, benefit from tiering when using an SSD disk?
>> And yes, will Gluster do it by default or do I have to configure it to do so?
>>
>>
>> At the bottom line, what is the good practice for using GlusterFS in
>> small pods for enterprises?
>>
>>
>> You opinion/feedback will be really appreciated!
>>
>> Moacir
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Install ovirt on Azure

2017-08-07 Thread Karli Sjöberg
On mån, 2017-08-07 at 12:46 +0200, Johan Bernhardsson wrote:
> There is no point in doing that, as Azure is a cloud in itself and oVirt
> is for building your own virtual environment to deploy on local hardware.

Yeah, of course, and I think Grzegorz knows that. But for people in the
testing or evaluating stage, making it a virtualized environment gives
greater flexibility. It is easier to test without having to buy any metal.

> 
> /Johan
> 
> On Mon, 2017-08-07 at 12:32 +0200, Grzegorz Szypa wrote:
> > 
> > Hi.
> > 
> > Did anyone try to install ovirt on Azure Environment?

No idea if Azure VMs support nested virtualization, sorry.

/K
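
Whether nested virtualization is exposed can at least be checked from inside the
VM itself; a quick sketch (plain Linux, nothing Azure-specific):

egrep -c 'vmx|svm' /proc/cpuinfo   # >0 means VT-x/AMD-V is visible to the guest
ls -l /dev/kvm                     # appears once the kvm module loads successfully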

> > 
> > -- 
> > G.Sz.
> > ___
> > 
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Install ovirt on Azure

2017-08-07 Thread Johan Bernhardsson
There is no point in doing that, as Azure is a cloud in itself and oVirt
is for building your own virtual environment to deploy on local hardware.

/Johan

On Mon, 2017-08-07 at 12:32 +0200, Grzegorz Szypa wrote:
> Hi.
> 
> Did anyone try to install ovirt on Azure Environment?
> 
> -- 
> G.Sz.
> ___

> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Install ovirt on Azure

2017-08-07 Thread Grzegorz Szypa
Hi.

Did anyone try to install ovirt on Azure Environment?

-- 
G.Sz.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt LDAP user authentication troubleshooting

2017-08-07 Thread Ondra Machacek
The best is to use this tool:

$ ovirt-engine-extensions-tool --log-level=FINEST aaa search
--extension-name=your-openldap-authz-name --entity-name=myuser

It prints pretty verbose output, which you can analyze.
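
Since one engine works and the other does not, another quick check is to diff the
aaa configuration that the setup tool wrote on each engine (paths as created by
ovirt-engine-extension-aaa-ldap-setup; replace "myprofile" with the profile name
chosen during setup, and engine1/engine2 with your engine VMs):

ls /etc/ovirt-engine/extensions.d/myprofile-auth*.properties
cat /etc/ovirt-engine/aaa/myprofile.properties
# compare the working engine against the non-working one, e.g.:
diff <(ssh engine1 cat /etc/ovirt-engine/aaa/myprofile.properties) \
     <(ssh engine2 cat /etc/ovirt-engine/aaa/myprofile.properties)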

On Mon, Aug 7, 2017 at 9:01 AM, NUNIN Roberto  wrote:
> I’ve two oVirt 4.1.4.2-1 pods used for labs.
>
>
>
> These two pods are configured in the same way (three nodes with gluster)
>
>
>
> Trying to set up LDAP auth towards the same OpenLDAP server, the setup ends
> correctly in both engine VMs.
>
> When I try to modify system permissions, only one of them recognizes the
> LDAP groups, allows the setup, and then lets users belonging to the defined
> groups log in and perform tasks at their assigned level.
>
>
>
> On the second engine, the system permissions search, even though it
> recognizes the LDAP domain (it appears in the selection box for the search
> base), finds nothing, neither groups nor individuals.
>
> How can I analyze this? I wasn't able to find logs useful for troubleshooting.
>
>
>
> Setup ended correctly, with both the Login and Search tasks completing successfully.
>
> Thanks
>
>
>
> Roberto
>
>
>
>
>
>
>
>
>
>
> 
>
> This message is for the designated recipient only and may contain
> privileged, proprietary, or otherwise private information. If you have
> received it in error, please notify the sender immediately, deleting the
> original and all copies and destroying any hard copies. Any other use is
> strictly prohibited and may be unlawful.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Devin,


Many, many thanks for your response. I will read the doc you sent, and if I still 
have questions I will post them here.


But why would I use a RAIDed brick if Gluster, by itself, already "protects" 
the data by making replicas? You see, that is what is confusing to me...


Thanks,

Moacir



From: Devin Acosta 
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for several 
different companies, and have dealt with the Red Hat Support Team in depth 
about optimal configuration in regards to setting up GlusterFS most efficiently 
and I wanted to share with you what I learned.

In general Red Hat Virtualization team frowns upon using each DISK of the 
system as just a JBOD, sure there is some protection by having the data 
replicated, however, the recommendation is to use RAID 6 (preferred) or RAID-5, 
or at least RAID-1 at the very least.

Here is the direct quote from Red Hat when I asked about RAID and Bricks:

"A typical Gluster configuration would use RAID underneath the bricks. RAID 6 
is most typical as it gives you 2 disk failure protection, but RAID 5 could be 
used too. Once you have the RAIDed bricks, you'd then apply the desired 
replication on top of that. The most popular way of doing this would be 
distributed replicated with 2x replication. In general you'll get better 
performance with larger bricks. 12 drives is often a sweet spot. Another option 
would be to create a separate tier using all SSD’s.”

In order to SSD tiering from my understanding you would need 1 x NVMe drive in 
each server, or 4 x SSD hot tier (it needs to be distributed, replicated for 
the hot tier if not using NVME). So with you only having 1 SSD drive in each 
server, I’d suggest maybe looking into the NVME option.

Since your using only 3-servers, what I’d probably suggest is to do (2 Replicas 
+ Arbiter Node), this setup actually doesn’t require the 3rd server to have big 
drives at all as it only stores meta-data about the files and not actually a 
full copy.

Please see the attached document that was given to me by Red Hat to get more 
information on this. Hope this information helps you.


--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect


On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com) wrote:

I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).


This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD disk? And 
yes, will Gluster do it by default or do I have to configure it to do so?


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?


You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Install external certificate

2017-08-07 Thread Yedidyah Bar David
On Tue, Aug 1, 2017 at 3:39 PM, Marcelo Leandro  wrote:
> Good morning
>
> I bought an external certificate from GoDaddy, and they sent me only
> one .crt file. I saw this:
> https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/
>
> I don't know how I can generate a p12 certificate.
> Can someone help me?

You don't need the p12. You can simply skip it.
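
(If some other tool ever does insist on a PKCS#12 bundle, it can be built from the
GoDaddy .crt plus the private key that was used for the CSR; a sketch with
placeholder file names:)

# apache.key = private key used for the CSR, apache.crt = certificate from GoDaddy,
# gd_bundle.crt = the CA chain shipped with it
openssl pkcs12 -export -in apache.crt -inkey apache.key \
    -certfile gd_bundle.crt -out keystore.p12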

You can use the RHV docs:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/appe-red_hat_enterprise_virtualization_and_ssl

We recently updated this to clarify your (and similar) questions.

Sorry for not updating the oVirt site yet. Patches are welcome :-)

Best,

>
> Thanks.
>
> Marcelo Leandro
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread yayo (j)
Hi,


I just tried that with development version of Zentyal and it works for
> me. Well, there are some caveats, see below.
>
>
Please provide steps not just "works for me" ... Thank you



> > Just wanted to add my input.  I just recently noticed the same thing.
> > Luckily i was just testing Zentyal, but when I installed python-apt after
> > reading the error message, apt seemed to completely break.  I would be
> > curious on a workaround/fix for this as well.
>
> Could you be more specific? What was the problem? Was it problem with
> python-apt per-se or with ovirt-guest-agent using python-apt?
>
>
In the past with Zentyal 5 Dev Edition I had the same error: I added the
suggested repository, which wants to install "python-apt" and remove "apt-get"
(because of conflicts)



>
> >
> >
> > On Fri, Aug 4, 2017 at 9:28 AM, yayo (j)  wrote:
> >
> > > Hi all,
> > >
> > > I have this problem: I'm trying to install the guest tools following
> this
> > > guide: https://www.ovirt.org/documentation/how-to/guest-
> > > agent/install-the-guest-agent-in-ubuntu/#for-ubuntu-1604
>
> I've noticed that the ovirt-guest-agent package available in the
> repository mentioned on the oVirt site is missing dependency on
> qemu-guest-agent. You have to install it additionally for oVirt to work
> properly.
>


*Steps with repository:*
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/

*Update:*


root@vmdczen01:~# apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://it.archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://it.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:4 http://it.archive.ubuntu.com/ubuntu xenial-backports InRelease [102
kB]
Hit:5 http://archive.zentyal.org/zentyal 5.0 InRelease
Ign:6
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04
InRelease
Hit:7
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04
Release
Fetched 306 kB in 0s (325 kB/s)
Reading package lists... Done



*Trying to install:*


root@vmdczen01:~# apt-get install ovirt-guest-agent
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ovirt-guest-agent : Depends: python-apt but it is not going to be installed
E: Unable to correct problems, you have held broken packages.


*Zentyal version is: *5.0.8

So, this repository is *totally broken*
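
A quick way to see exactly what apt objects to, without changing any installed
packages, is a simulated install plus a look at the candidates (plain apt
commands, nothing oVirt-specific):

apt-get install -s ovirt-guest-agent   # -s only simulates the transaction
apt-cache policy python-apt            # which candidate/repository apt would pick
apt-cache depends ovirt-guest-agent    # the full dependency list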



>
> If you, however, instal the ovirt-guest-agent from official Ubuntu
> repository there is different issue. There is this unresolved bug:
>
> https://bugs.launchpad.net/ubuntu/+source/ovirt-guest-agent/+bug/1609130
>
> You have to fix permissions on /var/log/ovirt-guest-agent as mentioned
> in the bug report.
>
>

The problem is the same: if you remove the extra repository and then try to
use the "main" repository, you have the problem with python-apt


Extra tests:

I have tried to install "python-apt" directly and I can reproduce the
problem mentioned by Stewart:


Update (Check that extra repository is commented out):


root@vmdczen01:~# apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:2 http://archive.zentyal.org/zentyal 5.0 InRelease [4,887 B]
Get:3 http://it.archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:4 http://archive.zentyal.org/zentyal 5.0/main amd64 Packages [28.1 kB]
Get:5 http://archive.zentyal.org/zentyal 5.0/main i386 Packages [6,218 B]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
[325 kB]
Get:7 http://it.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:8 http://it.archive.ubuntu.com/ubuntu xenial-backports InRelease [102
kB]
Get:9 http://it.archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1,201
kB]
Get:10 http://it.archive.ubuntu.com/ubuntu xenial/main i386 Packages [1,196
kB]
Get:11 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages
[306 kB]
Get:12 http://it.archive.ubuntu.com/ubuntu xenial/main Translation-en [568
kB]
Get:13 http://it.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
[7,532 kB]
Get:14 http://it.archive.ubuntu.com/ubuntu xenial/universe i386 Packages
[7,512 kB]
Get:15 http://it.archive.ubuntu.com/ubuntu xenial/universe Translation-en
[4,354 kB]
Get:16 http://it.archive.ubuntu.com/ubuntu xenial/multiverse amd64 Packages
[144 kB]
Get:17 http://it.archive.ubuntu.com/ubuntu xenial/multiverse i386 Packages
[140 kB]
Get:18 http://it.archive.ubuntu.com/ubuntu xenial/multiverse Translation-en
[106 kB]
Get:19 http://it.archive.ubuntu.com/ubuntu xenial-updates/main amd64
Packages [599 kB]
Get:20 

[ovirt-users] oVirt LDAP user authentication troubleshooting

2017-08-07 Thread NUNIN Roberto
I've two oVirt 4.1.4.2-1 pods used for labs.

These two pods are configured in the same way (three nodes with gluster)

Trying to set up LDAP auth towards the same OpenLDAP server, the setup ends 
correctly in both engine VMs.
When I try to modify system permissions, only one of them recognizes the LDAP 
groups, allows the setup, and then lets users belonging to the defined groups 
log in and perform tasks at their assigned level.

On the second engine, the system permissions search, even though it recognizes 
the LDAP domain (it appears in the selection box for the search base), finds 
nothing, neither groups nor individuals.
How can I analyze this? I wasn't able to find logs useful for troubleshooting.

Setup ended correctly, with both the Login and Search tasks completing successfully.
Thanks

Roberto







This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private information. If you have received it in 
error, please notify the sender immediately, deleting the original and all 
copies and destroying any hard copies. Any other use is strictly prohibited and 
may be unlawful.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Yaniv Kaul
On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira 
wrote:

> I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU
> sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use
> GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and
> a dual 10Gb NIC. So my intention is to create a loop like a server triangle
> using the 40Gb NICs for virtualization files (VMs .qcow2) access and to
> move VMs around the pod (east /west traffic) while using the 10Gb
> interfaces for giving services to the outside world (north/south traffic).
>

Very nice gear. How are you planning the network exactly? Without a switch,
back-to-back? (sounds OK to me, just wanted to ensure this is what the
'dual' is used for). However, I'm unsure if you have the correct balance
between the interface speeds (40g) and the disks (too many HDDs?).


>
> This said, my first question is: How should I deploy GlusterFS in such
> oVirt scenario? My questions are:
>
>
> 1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and
> then create a GlusterFS using them?
>
I would assume RAID 1 for the operating system (you don't want a single
point of failure there?) and the rest JBODs. The SSD will be used for
caching, I reckon? (I personally would add more SSDs instead of HDDs, but
it does depend on the disk sizes and your space requirements.)


> 2 - Instead, should I create a JBOD array made of all server's disks?
>
> 3 - What is the best Gluster configuration to provide for HA while not
> consuming too much disk space?
>

Replica 2 + Arbiter sounds good to me.
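
(Roughly, with placeholder brick paths; the third, arbiter brick only stores
metadata, so it can sit on a small disk:)

gluster volume create vmstore replica 3 arbiter 1 \
    server1:/gluster/brick/vmstore server2:/gluster/brick/vmstore \
    server3:/gluster/arbiter/vmstore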


> 4 - Does an oVirt hypervisor pod like I am planning to build, and the
> virtualization environment, benefit from tiering when using an SSD disk?
> And yes, will Gluster do it by default or do I have to configure it to do so?
>

Yes, I believe using lvmcache is the best way to go.
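
(A minimal lvmcache sketch for a single brick, assuming gluster_vg already contains
both the HDD physical volume /dev/sdb and the SSD /dev/sdf; names and sizes are
placeholders:)

lvcreate -L 2T   -n brick1           gluster_vg /dev/sdb   # data LV on the HDD side
lvcreate -L 400G -n brick1_cache     gluster_vg /dev/sdf   # cache data on the SSD
lvcreate -L 1G   -n brick1_cachemeta gluster_vg /dev/sdf   # cache metadata on the SSD
lvconvert --type cache-pool --poolmetadata gluster_vg/brick1_cachemeta gluster_vg/brick1_cache
lvconvert --type cache --cachepool gluster_vg/brick1_cache gluster_vg/brick1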

>
> At the bottom line, what is the good practice for using GlusterFS in small
> pods for enterprises?
>

Don't forget jumbo frames. libgfapi (coming hopefully in 4.1.5). Sharding
(enabled out of the box if you use a hyper-converged setup via gdeploy).
Y.
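
(If those end up being tuned by hand rather than through gdeploy, the knobs are
roughly these; interface and volume names are placeholders:)

ip link set dev ens1f0 mtu 9000                        # and match it on the switch ports
gluster volume set vmstore features.shard on           # shard the VM images
gluster volume set vmstore features.shard-block-size 512MB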


>
> You opinion/feedback will be really appreciated!
>
> Moacir
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users