[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Krutika Dhananjay
So in our internal tests (with NVMe SSD drives, 10G network), we found read
performance to be better with choose-local
disabled in a hyperconverged setup. See
https://bugzilla.redhat.com/show_bug.cgi?id=1566386 for more information.

With choose-local off, the read replica is chosen randomly (based on hash
value of the gfid of that shard).
And when it is enabled, the reads always go to the local replica.
We attributed better performance with the option disabled to bottlenecks in
gluster's rpc/socket layer. Imagine all read
requests lined up to be sent over the same mount-to-brick connection as
opposed to (nearly) randomly getting distributed
over three (because replica count = 3) such connections.
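For reference, the option can be checked and flipped per volume from the
gluster CLI - a minimal sketch, where 'data_fast' stands in for your own
volume name (it is the volume that appears later in this thread):

# show the current value
gluster volume get data_fast cluster.choose-local
# send reads to the local replica only
gluster volume set data_fast cluster.choose-local on
# spread reads over all replicas (the hash of the shard gfid picks the brick)
gluster volume set data_fast cluster.choose-local off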

Did you run any tests that indicate "choose-local=on" is giving better read
perf as opposed to when it's disabled?

-Krutika

On Sun, May 19, 2019 at 5:11 PM Strahil Nikolov 
wrote:

> Ok,
>
> so it seems that Darrell's case and mine are different, as I use VDO.
>
> Now I have destroyed the storage domains, gluster volumes and VDO and
> recreated them (4 gluster volumes on a single VDO).
> This time VDO has '--emulate512=true' and no issues have been observed.
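> A minimal sketch of that VDO creation, assuming /dev/sdb as the backing
> device and 'vdo_sdb' as the volume name (some vdo versions/tools accept
> 'true' instead of 'enabled' for the flag):
>
> vdo create --name=vdo_sdb --device=/dev/sdb --emulate512=enabled
> vdo status --name=vdo_sdb | grep -i emulat
>
> The gluster bricks then go on top of /dev/mapper/vdo_sdb rather than the
> raw disk.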
>
> Gluster volume options before 'Optimize for virt':
>
> Volume Name: data_fast
> Type: Replicate
> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.enable-shared-storage: enable
>
> Gluster volume after 'Optimize for virt':
>
> Volume Name: data_fast
> Type: Replicate
> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
> Status: Stopped
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
> Options Reconfigured:
> network.ping-timeout: 30
> performance.strict-o-direct: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> server.event-threads: 4
> client.event-threads: 4
> cluster.choose-local: off
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> cluster.enable-shared-storage: enable
>
> After that adding the volumes as storage domains (via UI) worked without
> any issues.
>
> Can someone clarify why we now have 'cluster.choose-local: off' when in
> oVirt 4.2.7 (gluster v3.12.15) we didn't have that?
> I'm using storage that is faster than the network, and reading from the local
> brick gives very high read speed.
>
> Best Regards,
> Strahil Nikolov
>
>
>
> On Sunday, 19 May 2019 at 9:47:27 GMT+3, Strahil <
> hunter86...@yahoo.com> wrote:
>
>
> On this one
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/configuring_red_hat_virtualization_with_red_hat_gluster_storage/index#proc-To_Configure_Volumes_Using_the_Command_Line_Interface
> We should have the following options:
>
> performance.quick-read=off
> performance.read-ahead=off
> performance.io-cache=off
> performance.stat-prefetch=off
> performance.low-prio-threads=32
> network.remote-dio=enable
> cluster.eager-lock=enable
> cluster.quorum-type=auto
> cluster.server-quorum-type=server
> cluster.data-self-heal-algorithm=full
> cluster.locking-scheme=granular
> cluster.shd-max-threads=8
> cluster.shd-wait-qlength=1
> features.shard=on
> user.cifs=off
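> On a reasonably recent gluster these can be applied in one shot via the
> 'virt' option group - a sketch, reusing the data_fast volume from earlier in
> this thread; the second command then shows what the group did to
> cluster.choose-local:
>
> gluster volume set data_fast group virt
> gluster volume get data_fast cluster.choose-local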
>
> By the way, the 'virt' gluster group disables 'cluster.choose-local', and I
> think it wasn't like that before.
> Any reason behind that? I use it to speed up my reads, as local
> storage is faster than the network.
>
> Best Regards,
> Strahil Nikolov
> On May 19, 2019 09:36, Strahil  wrote:
>
> OK,
>
> Can we summarize it:
> 1. VDO must have 'emulate512=true'
> 2. 'network.remote-dio' should be off?
>
> As per this:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/configuring_red_hat_openstack_with_red_hat_storage/sect-setting_up_red_hat_storage_trusted_storage_pool
>
> We should have these:
>
> quick-read=off
> read-ahead=off
> io-cache=off
> stat-prefetch=off
> eager-lock=enable
> remote-dio=on
> quorum-type=auto
> server-quorum-type=server
>
> I'm a little bit confused here.
>
> Best Regards,
> Strahil Nikolov
> On May 19, 2019 07:44, Sahina Bose  wrote:
>
>
>
> On 

[ovirt-users] Re: [ovirt-announce] Re: Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Strahil

On May 21, 2019 06:00, Satheesaran Sundaramoorthi  wrote:
>
>
> On Fri, May 17, 2019 at 1:12 AM Nir Soffer  wrote:
>>
>> On Thu, May 16, 2019 at 10:12 PM Darrell Budic  
>> wrote:
>>>
>>> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


 On Thu, May 16, 2019 at 8:38 PM Darrell Budic  
 wrote:
>
> I tried adding a new storage domain on my hyper converged test cluster 
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new 
> gluster volume fine, but it’s not able to add the gluster storage domain 
> (as either a managed gluster volume or directly entering values). The 
> created gluster volume mounts and looks fine from the CLI. Errors in VDSM 
> log:
>
 ... 
>
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] 
> Underlying file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
> createStorageDomain error=Storage Domain target is unsupported: () 
> from=:::10.100.90.5,44732, flow_id=31d993dd, 
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)


 The direct I/O check has failed.


 So something is wrong in the files system.

 To confirm, you can try to do:

 dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct

 This will probably fail with:
 dd: failed to open '/path/to/mountoint/test': Invalid argument

 If it succeeds, but oVirt fail to connect to this domain, file a bug and 
 we will investigate.

 Nir
>>>
>>>
>>> Yep, it fails as expected. Just to check, it is working on pre-existing 
>>> volumes, so I poked around at gluster settings for the new volume. It has 
>>> network.remote-dio=off set on the new volume, but enabled on old volumes. 
>>> After enabling it, I’m able to run the dd test:
>>>
>>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>>> volume set: success
>>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 
>>> oflag=direct
>>> 1+0 records in
>>> 1+0 records out
>>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>>
>>> I’m also able to add the storage domain in ovirt now.
>>>
>>> I see network.remote-dio=enable is part of the gluster virt group, so 
>>> apparently it’s not getting set by ovirt duding the volume creation/optimze 
>>> for storage?
>>
>>
>> I'm not sure who is responsible for changing these settings. oVirt always 
>> required directio, and we
>> never had to change anything in gluster.
>>
>> Sahina, maybe gluster changed the defaults?
>>
>> Darrell, please file a bug, probably for RHHI.
>
>
> Hello Darrell & Nir,
>
> Do we have a bug available now for this issue ?
> I just need to make sure performance.strict-o-direct=on is enabled on that 
> volume.
>
>
> Satheesaran Sundaramoorthi
>
> Senior Quality Engineer, RHHI-V QE
>
> Red Hat APAC
>

Please check https://bugzilla.redhat.com/show_bug.cgi?id=1711054

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TLVUAMMW5QIQHJDA55NWC6GKB3KL5K3/


[ovirt-users] Re: [ovirt-announce] Re: Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Satheesaran Sundaramoorthi
On Fri, May 17, 2019 at 1:12 AM Nir Soffer  wrote:

> On Thu, May 16, 2019 at 10:12 PM Darrell Budic 
> wrote:
>
>> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>>
>>
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
>> wrote:
>>
>>> I tried adding a new storage domain on my hyper converged test cluster
>>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
>>> volume fine, but it’s not able to add the gluster storage domain (as either
>>> a managed gluster volume or directly entering values). The created gluster
>>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>>
>>> ...
>>
>>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>>> file system doesn't supportdirect IO (fileSD:110)
>>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
>>> createStorageDomain error=Storage Domain target is unsupported: ()
>>> from=:::10.100.90.5,44732, flow_id=31d993dd,
>>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>>
>>
>> The direct I/O check has failed.
>>
>>
>> So something is wrong in the files system.
>>
>> To confirm, you can try to do:
>>
>> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>>
>> This will probably fail with:
>> dd: failed to open '/path/to/mountoint/test': Invalid argument
>>
>> If it succeeds, but oVirt fail to connect to this domain, file a bug and
>> we will investigate.
>>
>> Nir
>>
>>
>> Yep, it fails as expected. Just to check, it is working on pre-existing
>> volumes, so I poked around at gluster settings for the new volume. It has
>> network.remote-dio=off set on the new volume, but enabled on old volumes.
>> After enabling it, I’m able to run the dd test:
>>
>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>> volume set: success
>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
>> oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>
>> I’m also able to add the storage domain in ovirt now.
>>
>> I see network.remote-dio=enable is part of the gluster virt group, so
>> apparently it’s not getting set by ovirt duding the volume creation/optimze
>> for storage?
>>
>
> I'm not sure who is responsible for changing these settings. oVirt always
> required directio, and we
> never had to change anything in gluster.
>
> Sahina, maybe gluster changed the defaults?
>
> Darrell, please file a bug, probably for RHHI.
>

Hello Darrell & Nir,

Do we have a bug available now for this issue ?
I just need to make sure performance.strict-o-direct=on is enabled on that
volume.


Satheesaran Sundaramoorthi

Senior Quality Engineer, RHHI-V QE

Red Hat APAC 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B64SWJKMFWMHZUOFRHIRSKI7JSQ2XWON/


[ovirt-users] Re: Virtual office, how?

2019-05-20 Thread andres . b . dev
Hi Derek, thanks for the answer.

> First, if you have two networks, N1, and N2, you probably DO NOT want the 
> same IP Network (192.168.122) on both N1 and N2. So for your sanity, if A and 
> B are on N1 and C and D are on N2, you might want to use

Sorry, my bad. Yup, masking...

> I'm confused by this. What do you mean "has the same public ip"? None of the 
> IPs here are public, they are all RFC1918 (private network) IPs. Do you mean 
> that you've got a router, somewhere, that have a reverse NAT that will 
> translate externally from some public addresses to these private addresses?

Sorry about that. Answering your question, I think yes. I know that 192.X.Y.Z 
is not public. I mean, I want that as local ip and, for example, 172.X.Y.Z as 
public ip for the network. Basically, I want to simulate an office, where the 
owner of the company pays 1 ISP. That isp gives you a router with 1 public ip, 
and all connected PC (VMs here) has his own local ip. It could be 192.X.Y.Z or 
10.X.Y.Z (right? not a network expert as you may notice :P). 

> I'm not sure I understand what this means. What do you mean by "A can ssh on 
> B"? This is probably a language issue. I think you mean that A and B can ssh 
> to each other but can't reach C or D, and C and D can ssh to each other but 
> can't reach A or B.

Yes, I want that. A <-> B, C <-> D

> If you renumber as above then you can do that by not routing between 
> 192.168.10.0/24 and 192.168.20.0/24. However in your original configuration 
> where all four hosts are on the same 192.168.122.0/24 network, there is no 
> way (at the network level) to prevent A and B from talking with C and D.

Yes, I forgot about the network mask.

> You can do this with OVS, or even with basic networking, but you will need to 
> create actual separate networks.

What do you mean with creating actual separate networks? Having 1 NIC per 
public IP? Because I have only 4 NICs and I want to have as many public IPs as 
possible because I want to have as many virtual offices as possible.

Regards.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7WGLLUJ5JKP7TQLJS3DVIBDEWMTJSEI7/


[ovirt-users] Re: Every now and then Backup question

2019-05-20 Thread Andreas Elvers
Bareos (a fork of Bacula) is nice. And they promote 
http://relax-and-recover.org/ a desaster recovery strategy.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ODNXJDWIZBNCL445SUTMKENDQWUZCCX3/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Strahil Nikolov
I got confused so far. What is best for oVirt? remote-dio off or on? My latest
gluster volumes were set to 'off' while the older ones are 'on'.
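A quick way to see where each volume currently stands (a sketch; it just loops
over whatever volumes exist):

for v in $(gluster volume list); do
    echo "== $v"
    gluster volume get $v network.remote-dio
    gluster volume get $v performance.strict-o-direct
done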
Best Regards,
Strahil Nikolov

On Monday, 20 May 2019 at 23:42:09 GMT+3, Darrell Budic
wrote:
 
Wow, I think Strahil and I both hit different edge cases on this one. I was
running that on my test cluster with a ZFS backed brick, which does not support
O_DIRECT (in the current version; 0.8 will, when it’s released). I tested on an
XFS backed brick with the gluster virt group applied and network.remote-dio
disabled, and oVirt was able to create the storage volume correctly. So not a
huge problem for most people, I imagine.

Now I’m curious about the apparent disconnect between gluster and oVirt though.
Since the gluster virt group sets network.remote-dio on, what’s the reasoning
behind disabling it for these tests?


On May 18, 2019, at 11:44 PM, Sahina Bose  wrote:


On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:

On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:

From the RHHI side, by default we are setting the volume options below:

{ group: 'virt',
  storage.owner-uid: '36',
  storage.owner-gid: '36',
  network.ping-timeout: '30',
  performance.strict-o-direct: 'on',
  network.remote-dio: 'off'
}

According to the user reports, this configuration is not compatible with oVirt.
Was this tested?

Yes, this is set by default in all test configurations. We’re checking on the
bug, but the error is likely when the underlying device does not support 512b
writes. With network.remote-dio off, gluster will ensure o-direct writes.
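(For anyone checking whether their backing device is affected: the logical
sector size can be queried directly - a small sketch, where /dev/sdb and the
VDO device name are assumptions. A VDO volume created without 512-byte
emulation will typically report 4096 here.)

blockdev --getss /dev/sdb              # logical sector size of the raw disk
blockdev --getss /dev/mapper/vdo_sdb   # and of the VDO volume on top of it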

On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov  wrote:

Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me
to create the storage domain without any issues. I set it on all 4 new gluster
volumes and the storage domains were successfully created.

I have created a bug for that:
https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else already opened one - please ping me to mark this one as a duplicate.

Best Regards,
Strahil Nikolov

On Thursday, 16 May 2019 at 22:27:01 GMT+3, Darrell Budic
wrote:
 
 On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.

To confirm, you can try to do:

dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct

This will probably fail with:
dd: failed to open '/path/to/mountoint/test': Invalid argument

If it succeeds, but oVirt fails to connect to this domain, file a bug and we
will investigate.

Nir

Yep, it fails as expected. Just to check, it is working on pre-existing 
volumes, so I poked around at gluster settings for the new volume. It has 
network.remote-dio=off set on the new volume, but enabled on old volumes. After 
enabling it, I’m able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s

I’m also able to add the storage domain in oVirt now.

I see network.remote-dio=enable is part of the gluster virt group, so
apparently it’s not getting set by oVirt during the volume creation/optimize
for storage?


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPBXHYOHZA4XR5CHU7KMD2ISQWLFRG5N/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7K24XYG3M43CMMM7MMFARH52QEBXIU5/



-- 


Thanks,
Gobinda



___
Users 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Strahil Nikolov
Hi Adrian,
are you using local storage?
If yes, set a blacklist in multipath.conf (don't forget the "#VDSM PRIVATE"
flag) and rebuild the initramfs and reboot. When multipath locks a path, no
direct access is possible - which is why your pvcreate fails. Also,
multipath is not needed for local storage ;)
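A minimal sketch of that blacklist, mirroring the gluster_hci snippet quoted
further down this thread (adjust it if you also have real SAN/multipath
devices):

# VDSM PRIVATE
blacklist {
    devnode "*"
}

and then rebuild the initramfs and reload multipath so it survives a reboot:

dracut -f
systemctl restart multipathd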

Best Regards,
Strahil Nikolov

On Monday, 20 May 2019 at 19:31:04 GMT+3, Adrian Quintero
wrote:
 
Sahina,
Yesterday I started with a fresh install, I completely wiped clean all
the disks, recreated the arrays from within my controller of our DL380 Gen 9's.

OS: RAID 1 (2x600GB HDDs): /dev/sda    // Using ovirt node 4.3.3.1 iso.
engine and VMSTORE1: JBOD (1x3TB HDD): /dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde

After the OS install on the first 3 servers and setting up ssh keys, I started
the Hyperconverged deploy process:
1.-Logged in to the first server http://host1.example.com:9090
2.-Selected Hyperconverged, clicked on "Run Gluster Wizard"
3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks, Review)
Hosts/FQDNs:
host1.example.com
host2.example.com
host3.example.com
Packages:
Volumes:
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2
Bricks:
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache:
/dev/sde:400GB:writethrough
4.-After I hit deploy on the last step of the "Wizard" that is when I get
the disk filter error.
TASK [gluster.infra/roles/backend_setup : Create volume groups] 
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': 
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a 
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": 
"Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': 
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a 
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": 
"Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': 
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a 
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": 
"Creating physical volume '/dev/sdd' failed", "rc": 5}
Attached is the generated yml file ( /etc/ansible/hc_wizard_inventory.yml) and 
the "Deployment Failed" file


 
Also wondering if I hit this bug? 
https://bugzilla.redhat.com/show_bug.cgi?id=1635614


Thanks for looking into this.
Adrian Quintero
adrianquint...@gmail.com | adrian.quint...@rackspace.com


On Mon, May 20, 2019 at 7:56 AM Sahina Bose  wrote:

To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned. We do
support expanding existing volumes as the bug

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Strahil Nikolov
 Hey Sahina,
it seems that almost all of my devices are locked - just like Fred's. What
exactly does it mean? I don't have any issues with my bricks/storage domains.

Best Regards,
Strahil Nikolov

On Monday, 20 May 2019 at 14:56:11 GMT+3, Sahina Bose
wrote:
 
To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned.
We do support expanding existing volumes as the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed

As to procedure to expand volumes:
1. Create bricks from UI - select Host -> Storage Device -> Storage device.
Click on "Create Brick"
If the device is shown as locked, make sure there's no signature on device.
If multipath entries have been created for local devices, you can blacklist
those devices in multipath.conf and restart multipath.
(If you see device as locked even after you do this - please report back).
2. Expand volume using Volume -> Bricks -> Add Bricks, and select the 3 bricks
created in previous step
3. Run Rebalance on the volume. Volume -> Rebalance.

On Thu, May 16, 2019 at 2:48 PM Fred Rolland  wrote:

Sahina,
Can someone from your team review the steps done by Adrian?
Thanks,
Freddy

On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero  
wrote:

Ok, I will remove the extra 3 hosts, rebuild them from scratch and re-attach 
them to clear any possible issues and try out the suggestions provided.
thank you!

On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov  wrote:

 I have the same locks , despite I have blacklisted all local disks:
# VDSM PRIVATE
blacklist {
    devnode "*"
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid 
nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
}

If you have multipath reconfigured, do not forget to rebuild the initramfs
(dracut -f). It's a Linux issue, not an oVirt one.
In your case you had something like this:
   /dev/VG/LV
  /dev/disk/by-id/pvuuid
 /dev/mapper/multipath-uuid
/dev/sdb

Linux will not allow you to work with /dev/sdb when multipath is locking the
block device.

Best Regards,
Strahil Nikolov

On Thursday, 25 April 2019 at 8:30:16 GMT-4, Adrian Quintero
wrote:
 
Under Compute -> Hosts, select the host that has the locks on /dev/sdb,
/dev/sdc, etc., select storage devices, and in here is where you see a small
column with a bunch of lock images showing for each row.

However, as a workaround, on the newly added hosts (3 total), I had to manually
modify /etc/multipath.conf and add the following at the end, as this is what I
noticed from the original 3 node setup.

-
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role

blacklist {
    devnode "*"
}
# END Added by gluster_hci role
--
After this I restarted multipath and the lock went away and was able to
configure the new bricks thru the UI. However my concern is: what will happen
if I reboot the server - will the disks be read the same way by the OS?
Also now able to expand the gluster with a new replicate 3 volume if needed 
using http://host4.mydomain.com:9090.

thanks again

On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov  wrote:

In which menu do you see it this way?

Best Regards,
Strahil Nikolov

On Wednesday, 24 April 2019 at 8:55:22 GMT-4, Adrian Quintero
wrote:
 
Strahil,
this is the issue I am seeing now:

[image: image.png]

This is thru the UI when I try to create a new brick.
So my concern is: if I modify the filters on the OS, what impact will that have
after server reboots?
thanks,


On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:

I have edited my multipath.conf to exclude local disks , but you need to set 
'#VDSM private' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation - as you would do with 
any linux.

Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>
> Thanks Alex, that makes more sense now  while trying to follow the 
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
> are locked and inidicating " multpath_member" hence not letting me create new 
> bricks. And on the logs I see 
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
> failed", "rc": 5} 
> Same thing for sdc, sdd 
>
> Should I manually edit the filters inside the OS, what will be the impact? 
>
> thanks again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> 

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Darrell Budic
Wow, I think Strahil and I both hit different edge cases on this one. I was 
running that on my test cluster with a ZFS backed brick, which does not support 
O_DIRECT (in the current version; 0.8 will, when it’s released). I tested on an 
XFS backed brick with the gluster virt group applied and network.remote-dio 
disabled, and oVirt was able to create the storage volume correctly. So not a 
huge problem for most people, I imagine.

Now I’m curious about the apparent disconnect between gluster and oVirt though. 
Since the gluster virt group sets network.remote-dio on, what’s the reasoning 
behind disabling it for these tests?

> On May 18, 2019, at 11:44 PM, Sahina Bose  wrote:
> 
> 
> 
> On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  > wrote:
> On Fri, May 17, 2019 at 7:54 AM Gobinda Das  > wrote:
> From RHHI side default we are setting below volume options:
> 
> { group: 'virt',
>  storage.owner-uid: '36',
>  storage.owner-gid: '36',
>  network.ping-timeout: '30',
>  performance.strict-o-direct: 'on',
>  network.remote-dio: 'off'
> 
> According to the user reports, this configuration is not compatible with 
> oVirt.
> 
> Was this tested?
> 
> Yes, this is set by default in all test configuration. We’re checking on the 
> bug, but the error is likely when the underlying device does not support 512b 
> writes. 
> With network.remote-dio off gluster will ensure o-direct writes
> 
>}
> 
> 
> On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov  > wrote:
> Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me 
> to create the storage domain without any issues.
> I set it on all 4 new gluster volumes and the storage domains were 
> successfully created.
> 
> I have created bug for that:
> https://bugzilla.redhat.com/show_bug.cgi?id=1711060 
> 
> 
> If someone else already opened - please ping me to mark this one as duplicate.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> On Thursday, 16 May 2019 at 22:27:01 GMT+3, Darrell Budic 
> <bu...@onholyground.com> wrote:
> 
> 
> On May 16, 2019, at 1:41 PM, Nir Soffer  > wrote:
> 
>> 
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic > > wrote:
>> I tried adding a new storage domain on my hyper converged test cluster 
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster 
>> volume fine, but it’s not able to add the gluster storage domain (as either 
>> a managed gluster volume or directly entering values). The created gluster 
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>> 
>> ... 
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>> createStorageDomain error=Storage Domain target is unsupported: () 
>> from=:::10.100.90.5,44732, flow_id=31d993dd, 
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>> 
>> The direct I/O check has failed.
>> 
>> 
>> So something is wrong in the files system.
>> 
>> To confirm, you can try to do:
>> 
>> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>> 
>> This will probably fail with:
>> dd: failed to open '/path/to/mountoint/test': Invalid argument
>> 
>> If it succeeds, but oVirt fail to connect to this domain, file a bug and we 
>> will investigate.
>> 
>> Nir
> 
> Yep, it fails as expected. Just to check, it is working on pre-existing 
> volumes, so I poked around at gluster settings for the new volume. It has 
> network.remote-dio=off set on the new volume, but enabled on old volumes. 
> After enabling it, I’m able to run the dd test:
> 
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
> 
> I’m also able to add the storage domain in ovirt now.
> 
> I see network.remote-dio=enable is part of the gluster virt group, so 
> apparently it’s not getting set by ovirt duding the volume creation/optimze 
> for storage?
> 
> 
> 
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPBXHYOHZA4XR5CHU7KMD2ISQWLFRG5N/
>  
> 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Adrian Quintero
Sahina,
Yesterday I started with a fresh install, I completely wiped clean all the
disks, recreated the arrays from within my controller of our DL380 Gen 9's.

OS: RAID 1 (2x600GB HDDs): /dev/sda// Using ovirt node 4.3.3.1 iso.
engine and VMSTORE1: JBOD (1x3TB HDD):/dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde

*After the OS install on the first 3 servers and setting up ssh keys,  I
started the Hyperconverged deploy process:*
1.-Logged in to the first server http://host1.example.com:9090
2.-Selected Hyperconverged, clicked on "Run Gluster Wizard"
3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks,
Review)
*Hosts/FQDNs:*
host1.example.com
host2.example.com
host3.example.com
*Packages:*
*Volumes:*
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2
*Bricks:*
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache:
/dev/sde:400GB:writethrough
4.-After I hit deploy on the last step of the "Wizard" that is when I get
the disk filter error.
TASK [gluster.infra/roles/backend_setup : Create volume groups]

failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}

Attached is the generated yml file ( /etc/ansible/hc_wizard_inventory.yml)
and the "Deployment Failed" file




Also wondering if I hit this bug?
https://bugzilla.redhat.com/show_bug.cgi?id=1635614
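Based on the multipath/signature discussion elsewhere in this thread, a quick
way to see why LVM refuses the disks (a sketch; /dev/sdb stands for any of the
excluded devices):

lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sdb   # is a multipath map stacked on top?
multipath -ll                                   # which maps claim sdb/sdc/sdd?
wipefs /dev/sdb                                 # list (without erasing) leftover signatures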



Thanks for looking into this.

*Adrian Quintero*
*adrianquint...@gmail.com  |
adrian.quint...@rackspace.com *


On Mon, May 20, 2019 at 7:56 AM Sahina Bose  wrote:

> To scale existing volumes - you need to add bricks and run rebalance on
> the gluster volume so that data is correctly redistributed as Alex
> mentioned.
> We do support expanding existing volumes as the bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
>
> As to procedure to expand volumes:
> 1. Create bricks from UI - select Host -> Storage Device -> Storage
> device. Click on "Create Brick"
> If the device is shown as locked, make sure there's no signature on
> device.  If multipath entries have been created for local devices, you can
> blacklist those devices in multipath.conf and restart multipath.
> (If you see device as 

[ovirt-users] Every now and then Backup question

2019-05-20 Thread Markus Schaufler
Hi,

looking for a backup solution comparable to VMware/Veeam - i.e. features like 
agent-less, incremental, LAN-free backups with dedup and single item restore. 
As I understand it, the main construction site for all commercial backup solutions 
available for KVM/RHEV was the underlying qemu with its CBT implementation. The 
main functions should now be implemented, as there are solutions like vProtect 
which state that they support incremental backups for RHEV. But as I read at 
https://www.openvirtualization.pro/agent-less-backup-strategies-for-ovirt-rhv-environments/
there might still be some drawbacks. 
Are there resources or a roadmap covering the state of the backup/recovery 
process, i.e. what is possible now and what's in progress? I'm sure a big 
showstopper for open source virtualization projects is the uncertainty about 
backup/recovery and especially disaster recovery processes. 

Follow-up questions to the users with bigger environments:
What's your current backup and DR strategy? 
Does anybody have experience with commercially available backup solutions like 
Bacula, SEP, Commvault, etc.?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YWVMKRRRE5UELZMO27IEZOSVQUVBRNUO/


[ovirt-users] Re: Virtual office, how?

2019-05-20 Thread Derek Atkins
Hi,

andres.b@gmail.com writes:

> I'm trying to be able to create different virtual LANs, where, for
> example, I have 2 groups of pcs
>
> A and B belongs to network N1
> C and D belongs to network N2
>
> N1 and N2 with his own public IP. For example
> A: Local ip: 192.168.122.100
>
> B: Local ip: 192.168.122.101
>
> C: Local ip: 192.168.122.102
>
> D: Local ip: 192.168.122.103

You've got a few problems here.

First, if you have two networks, N1, and N2, you probably DO NOT want
the same IP Network (192.168.122) on both N1 and N2.  So for your
sanity, if A and B are on N1 and C and D are on N2, you might want to
use:

A: 192.168.10.100
B: 192.168.10.101

C: 192.168.20.100
D: 192.168.20.101

> Where A and B has the same public ip, and C and D has the same public ip.

I'm confused by this.  What do you mean "has the same public ip"?  None
of the IPs here are public, they are all RFC1918 (private network) IPs.
Do you mean that you've got a router, somewhere, that have a reverse NAT
that will translate externally from some public addresses to these
private addresses?

Also, you will need that reverse NAT to be smart about how it routes.
Specifically, once you have an active connection to A or B, it will need
to ensure that the connection continues to the same (A or B) target.

> Now, I want that A can ssh on B, but not on C or D. The same goes for
> C, where C can access to D via ssh but not to A or B

I'm not sure I understand what this means.  What do you mean by "A can
ssh on B"?  This is probably a language issue.  I think you mean that A
and B can ssh to each other but can't reach C or D, and C and D can ssh
to each other but can't reach A or B.

If you renumber as above then you can do that by not routing between
192.168.10.0/24 and 192.168.20.0/24.   However in your original
configuration where all four hosts are on the same 192.168.122.0/24
network, there is no way (at the network level) to prevent A and B from
talking with C and D.

> I'm not sure if OVS solve this problem or not, or if this is not possible.
>
> Is this possible? How?

You can do this with OVS, or even with basic networking, but you will
need to create actual separate networks.

Good Luck,

-derek
-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LQVJAXSTSU3UXJHAAUXSN2FY2FBHNG47/


[ovirt-users] Re: newbie questions about installation using synology NAS

2019-05-20 Thread Brian Millett
On Mon, 2019-05-20 at 06:26 +0800, Colin Coe wrote:
> Hi
> Add your nodes to oVirt then add your storage.  Don't try to manually
> control the storage, let oVirt do that.
> 
> 

Thanks.
-- 
Brian Millett
"I am more 'one of us' at this moment, than I have ever been. More than you
 will ever know."
   -- [ Delenn (to Teronn), "A Distant Star"]
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4XDQEYDUZO4VJ2DKWTUPTQRIKH4T4PNE/


[ovirt-users] Re: newbie questions about installation using synology NAS

2019-05-20 Thread Brian Millett
On Mon, 2019-05-20 at 06:49 +0300, Strahil wrote:
> Define the nodes and when you create your NFS storage domain - it
> will be automatically mounted on all hosts in the DC.
> Best Regards,
> Strahil Nikolov
>
> On May 20, 2019 00:03, bmill...@gmail.com wrote:
> > I'm starting fairly fresh: oVirt engine 4.3.3.7 on a host, 2 nodes
> > installed with ovirt-release-host-node-4.3.3.1, and a Synology DS418 with
> > 5 TB disks.
> > I've configured the master engine and am ready to add the nodes. My
> > questions have to do with the order of steps. Do I need to mount
> > /volume1/data/images/rhev from the DS418 on each of the nodes
> > before I add them, or do I just add the nodes, then define a domain
> > using the NFS mounts from the DS418? 
> > Thanks.

Awesome, thanks.
-- 
Brian Millett
"You feel like you're being symbolically cas...t in a bad light."
   -- [ Ivanova (to Londo re: Londo doll), "There All The Honor Lies"]
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TWYPTPQJHVZUASYXL2N2HOWHX4HTSFBK/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Sahina Bose
To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned.
We do support expanding existing volumes as the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed

As to procedure to expand volumes:
1. Create bricks from UI - select Host -> Storage Device -> Storage device.
Click on "Create Brick"
If the device is shown as locked, make sure there's no signature on
device.  If multipath entries have been created for local devices, you can
blacklist those devices in multipath.conf and restart multipath.
(If you see device as locked even after you do this -please report back).
2. Expand volume using Volume -> Bricks -> Add Bricks, and select the 3
bricks created in previous step
3. Run Rebalance on the volume. Volume -> Rebalance.
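For reference, the CLI equivalent of steps 2 and 3 looks roughly like this - a
sketch, where the volume name and the three new brick paths are assumptions:

gluster volume add-brick vmstore1 replica 3 \
    host4:/gluster_bricks/vmstore1/vmstore1 \
    host5:/gluster_bricks/vmstore1/vmstore1 \
    host6:/gluster_bricks/vmstore1/vmstore1
gluster volume rebalance vmstore1 start
gluster volume rebalance vmstore1 status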


On Thu, May 16, 2019 at 2:48 PM Fred Rolland  wrote:

> Sahina,
> Can someone from your team review the steps done by Adrian?
> Thanks,
> Freddy
>
> On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero 
> wrote:
>
>> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
>> re-attach them to clear any possible issues and try out the suggestions
>> provided.
>>
>> thank you!
>>
>> On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov 
>> wrote:
>>
>>> I have the same locks , despite I have blacklisted all local disks:
>>>
>>> # VDSM PRIVATE
>>> blacklist {
>>> devnode "*"
>>> wwid Crucial_CT256MX100SSD1_14390D52DCF5
>>> wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
>>> wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
>>> wwid
>>> nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
>>> }
>>>
>>> If you have multipath reconfigured, do not forget to rebuild the
>>> initramfs (dracut -f). It's a linux issue , and not oVirt one.
>>>
>>> In your case you had something like this:
>>>/dev/VG/LV
>>>   /dev/disk/by-id/pvuuid
>>>  /dev/mapper/multipath-uuid
>>> /dev/sdb
>>>
>>> Linux will not allow you to work with /dev/sdb , when multipath is
>>> locking the block device.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Thursday, 25 April 2019 at 8:30:16 GMT-4, Adrian Quintero <
>>> adrianquint...@gmail.com> wrote:
>>>
>>>
>>> under Compute, hosts, select the host that has the locks on /dev/sdb,
>>> /dev/sdc, etc.., select storage devices and in here is where you see a
>>> small column with a bunch of lock images showing for each row.
>>>
>>>
>>> However as a work around, on the newly added hosts (3 total), I had to
>>> manually modify /etc/multipath.conf and add the following at the end as
>>> this is what I noticed from the original 3 node setup.
>>>
>>> -
>>> # VDSM REVISION 1.3
>>> # VDSM PRIVATE
>>> # BEGIN Added by gluster_hci role
>>>
>>> blacklist {
>>> devnode "*"
>>> }
>>> # END Added by gluster_hci role
>>> --
>>> After this I restarted multipath and the lock went away and was able to
>>> configure the new bricks thru the UI, however my concern is what will
>>> happen if I reboot the server will the disks be read the same way by the OS?
>>>
>>> Also now able to expand the gluster with a new replicate 3 volume if
>>> needed using http://host4.mydomain.com:9090.
>>>
>>>
>>> thanks again
>>>
>>> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov 
>>> wrote:
>>>
>>> In which menu do you see it this way ?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Wednesday, 24 April 2019 at 8:55:22 GMT-4, Adrian Quintero <
>>> adrianquint...@gmail.com> wrote:
>>>
>>>
>>> Strahil,
>>> this is the issue I am seeing now
>>>
>>> [image: image.png]
>>>
>>> This is thru the UI when I try to create a new brick.
>>>
>>> So my concern is if I modify the filters on the OS what impact will that
>>> have after server reboots?
>>>
>>> thanks,
>>>
>>>
>>>
>>> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
>>>
>>> I have edited my multipath.conf to exclude local disks , but you need to
>>> set '#VDSM private' as per the comments in the header of the file.
>>> Otherwise, use the /dev/mapper/multipath-device notation - as you would
>>> do with any linux.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>>> >
>>> > Thanks Alex, that makes more sense now  while trying to follow the
>>> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
>>> are locked and inidicating " multpath_member" hence not letting me create
>>> new bricks. And on the logs I see
>>> >
>>> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname":
>>> "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume
>>> '/dev/sdb' failed", "rc": 5}
>>> > Same thing for sdc, sdd
>>> >
>>> > Should I manually edit the filters inside the OS, what will be the
>>> impact?
>>> >
>>> > thanks again.
>>> > 

[ovirt-users] oVirt survey - May 2019

2019-05-20 Thread Sandro Bonazzola
As we continue to develop oVirt 4.3 and future releases, the Development
and Integration teams at Red Hat would value insights on how you are
deploying the oVirt environment.
Please help us to hit the mark by completing this short survey. Survey will
close on June 7th.
If you're managing multiple oVirt deployments with very different use cases
or very different deployments you can consider answering this survey
multiple times.
Please note the answers to this survey will be publicly accessible. This
survey is under oVirt Privacy Policy available at
https://www.ovirt.org/site/privacy-policy.html

The survey is available here: https://forms.gle/8uzuVNmDWtoKruhm8

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X3ZZQTRPMDQJNDTN6CQC5U5IGCHVVPMB/


[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-20 Thread Strahil Nikolov
No need,
I already have the number -> https://bugzilla.redhat.com/show_bug.cgi?id=1704782

I have just mentioned it, as the RC1 for 4.3.4 still doesn't have the fix.

Best Regards,
Strahil Nikolov

On Monday, 20 May 2019 at 3:00:12 GMT-4, Sahina Bose
wrote:
 
 

On Sun, May 19, 2019 at 4:11 PM Strahil  wrote:

I would recommend you to postpone your upgrade if you use gluster (without the
API), as creation of virtual disks via the UI on gluster is having issues - only
preallocated disks can be created.


+Gobinda Das +Satheesaran Sundaramoorthi 
Sas, can you log a bug on this?


Best Regards,
Strahil Nikolov

On May 19, 2019 09:53, Yedidyah Bar David wrote:
>
> On Thu, May 16, 2019 at 3:40 PM  wrote: 
> > 
> > I cannot find an official upgrade procedure from 4.2 to 4.3 oVirt version 
> > on this page: 
> > https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide.html 
> > 
> > Can you help me? 
>
> As others noted, the above should be sufficient, for general upgrade 
> instructions, even though it does require some updates. 
>
> You probably want to read also: 
>
> https://ovirt.org/release/4.3.0/ 
>
> as well as all the other relevant pages in: 
>
> https://ovirt.org/release/ 
>
> Best regards, 
>
> > 
> > Thanks 
> > ___ 
> > Users mailing list -- users@ovirt.org 
> > To unsubscribe send an email to users-le...@ovirt.org 
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/ 
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/WG2EI6HL3S2AT6PITGEAJQFGKC6XMYRD/
> >  
>
>
>
> -- 
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KAJGM3URCFSNN6S6X3VZFFOSJF52A4RS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6T7MO4AA7QHKGTD2E7OUNMSFLM4TXRPA/

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SJ6FKBGSRZR3YVSZLCUX2ZVFJUDA2WKU/


[ovirt-users] Re: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-20 Thread Andreas Elvers

> Without this file [dom_md/ids], you will not have any
> kind of storage.

Ok. Sounds like I'm kind of in trouble with that file being un-healable by gluster?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FIR74ALG7WJFIPNIAHZH4PRBY7UI2QRO/


[ovirt-users] Re: deprecating export domain?

2019-05-20 Thread Andreas Elvers
Thanks for the list of options. Will try it.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KFHXK5QYWZRN3G2MNLTSKWN4EK42LVQH/


[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-20 Thread Sachidananda URS
On Mon, May 20, 2019 at 11:58 AM Sahina Bose  wrote:

> Adding Sachi
>
> On Thu, May 9, 2019 at 2:01 AM  wrote:
>
>> This only started to happen with oVirt node 4.3, 4.2 didn't have issue.
>> Since I updated to 4.3, every reboot the host goes into emergency mode.
>> First few times this happened I re-installed O/S from scratch, but after
>> some digging I found out that the drives it mounts in /etc/fstab cause the
>> problem, specifically these mounts.  All three are single drives, one is an
>> SSD and the other 2 are individual NVME drives.
>>
>> UUID=732f939c-f133-4e48-8dc8-c9d21dbc0853 /gluster_bricks/storage_nvme1
>> auto defaults 0 0
>> UUID=5bb67f61-9d14-4d0b-8aa4-ae3905276797 /gluster_bricks/storage_ssd
>> auto defaults 0 0
>> UUID=f55082ca-1269-4477-9bf8-7190f1add9ef /gluster_bricks/storage_nvme2
>> auto defaults 0 0
>>
>> In order to get the host to actually boot, I have to go to console,
>> delete those mounts, reboot, and then re-add them, and they end up with new
>> UUIDs.  all of these hosts reliably rebooted in 4.2 and earlier, but all
>> the versions of 4.3 have this same problem (I keep updating to hope issue
>> is fixed).
>>
>

Hello Michael,

I need your help in resolving this. I would like to understand if the
environment is
affecting something.

What is the output of:
# blkid /dev/vgname/lvname
for the three bricks you have?

And also what is the error you see when you run the command
# mount /gluster_bricks/storage_nvme1
# mount /gluster_bricks/storage_ssd
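To compare against what /etc/fstab expects, something like the following helps
(a sketch; the LV path is a placeholder for your actual brick LVs):

blkid -s UUID -o value /dev/gluster_vg_nvme1/gluster_lv_nvme1
grep gluster_bricks /etc/fstab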

Also can you please attach your variable file and playbook?
In my setup things work fine, which is making it difficult for me to fix.

-sac
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YDJCWINE3UP4CKGFD3QARN67TBQOHRTQ/


[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-20 Thread Sahina Bose
On Sun, May 19, 2019 at 4:11 PM Strahil  wrote:

> I would recommend you to postpone  your upgrade if you use gluster
> (without the API)  , as  creation of virtual disks via UI on gluster is
> having issues - only preallocated can be created.
>

+Gobinda Das  +Satheesaran Sundaramoorthi

Sas, can you log a bug on this?


> Best Regards,
> Strahil Nikolov
>
> On May 19, 2019 09:53, Yedidyah Bar David wrote:
> >
> > On Thu, May 16, 2019 at 3:40 PM  wrote:
> > >
> > > I cannot find an official upgrade procedure from 4.2 to 4.3 oVirt
> version on this page:
> https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide.html
> > >
> > > Can you help me?
> >
> > As others noted, the above should be sufficient, for general upgrade
> > instructions, even though it does require some updates.
> >
> > You probably want to read also:
> >
> > https://ovirt.org/release/4.3.0/
> >
> > as well as all the other relevant pages in:
> >
> > https://ovirt.org/release/
> >
> > Best regards,
> >
> > >
> > > Thanks
> > > ___
> > > Users mailing list -- users@ovirt.org
> > > To unsubscribe send an email to users-le...@ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WG2EI6HL3S2AT6PITGEAJQFGKC6XMYRD/
> >
> >
> >
> > --
> > Didi
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KAJGM3URCFSNN6S6X3VZFFOSJF52A4RS/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6T7MO4AA7QHKGTD2E7OUNMSFLM4TXRPA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MWAETC4FN3OSMRGGGX6FIOMQUCZM7P42/


[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-20 Thread Sahina Bose
Adding Sachi

On Thu, May 9, 2019 at 2:01 AM  wrote:

> This only started to happen with oVirt node 4.3, 4.2 didn't have issue.
> Since I updated to 4.3, every reboot the host goes into emergency mode.
> First few times this happened I re-installed O/S from scratch, but after
> some digging I found out that the drives it mounts in /etc/fstab cause the
> problem, specifically these mounts.  All three are single drives, one is an
> SSD and the other 2 are individual NVME drives.
>
> UUID=732f939c-f133-4e48-8dc8-c9d21dbc0853 /gluster_bricks/storage_nvme1
> auto defaults 0 0
> UUID=5bb67f61-9d14-4d0b-8aa4-ae3905276797 /gluster_bricks/storage_ssd auto
> defaults 0 0
> UUID=f55082ca-1269-4477-9bf8-7190f1add9ef /gluster_bricks/storage_nvme2
> auto defaults 0 0
>
> In order to get the host to actually boot, I have to go to console, delete
> those mounts, reboot, and then re-add them, and they end up with new
> UUIDs.  all of these hosts reliably rebooted in 4.2 and earlier, but all
> the versions of 4.3 have this same problem (I keep updating to hope issue
> is fixed).
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I4UKZAWPQDXWA47AKTQD43PAUCK2JBJN/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U6Y36PGDKND7XYYKZI7UII64T4AMBOIL/


[ovirt-users] How to Activate McAfee

2019-05-20 Thread McAfee Activate
If you have a McAfee account and you are trying to activate it, then you need to 
first get the key to activate it. You can get it from McAfee.com. Now you 
need to follow the steps to activate your account:
Step 1: You need to create a McAfee account and enter all the details.
Step 2: Now you need to enter the information.
Step 3: Now you have to enter the key you have received with the product.
In this way, you can activate your McAfee account.
if you want further details you can contact at ;
https://mcafeeactivates.com/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DQVAZ5DJ3L3MIGYEQUKEHF6BIYERTLLX/