[ovirt-users] hosted-engine volume removed 3 bricks (replica 3) out of 12 bricks, now I cant start hosted-engine vm

2021-04-14 Thread adrianquintero
Hi,
I tried removing a replica 3 brick set from a distributed-replicated volume which 
holds the oVirt hosted-engine VM.
As soon as I hit commit, the VM went into pause. I tried to recover the volume 
ID "daa292aa-be5c-426e-b124-64263bf8a3ee" from the removed bricks, and now I am 
able to do a "hosted-engine --vm-status".

Error I see in the logs:
-
MainThread::WARNING::2021-04-14 
17:26:12,348::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
 Can't connect vdsm storage: Command Image.prepare with args {'imageID': 
'9feffcfa-6af2-4de3-b7d8-e57b84d56003', 'storagepoolID': 
'----', 'volumeID': 
'daa292aa-be5c-426e-b124-64263bf8a3ee', 'storagedomainID': 
'8db17b28-ecbb-4853-8a90-6ed2f69301eb'} failed:
(code=201, message=Volume does not exist: 
(u'daa292aa-be5c-426e-b124-64263bf8a3ee',)) 
-


On the following mount I see the volume ID twice:

[root@vmm10 images]# find /rhev/data-center/mnt/glusterSD/192.168.0.4\:_engine/ 
-name "daa292aa-be5c-426e-b124-64263bf8a3ee"

/rhev/data-center/mnt/glusterSD/192.168.0.4:_engine/8db17b28-ecbb-4853-8a90-6ed2f69301eb/images/9feffcfa-6af2-4de3-b7d8-e57b84d56003/daa292aa-be5c-426e-b124-64263bf8a3ee

/rhev/data-center/mnt/glusterSD/192.168.0.4:_engine/8db17b28-ecbb-4853-8a90-6ed2f69301eb/images/9feffcfa-6af2-4de3-b7d8-e57b84d56003/daa292aa-be5c-426e-b124-64263bf8a3ee


[root@vmm10 9feffcfa-6af2-4de3-b7d8-e57b84d56003]# ls -lh
total 131M
-rw-rw. 1 vdsm kvm  64M Apr 14 19:40 daa292aa-be5c-426e-b124-64263bf8a3ee
-rw-rw. 1 vdsm kvm  64M Apr 14 19:40 daa292aa-be5c-426e-b124-64263bf8a3ee
-rw-rw. 1 vdsm kvm 1.0M Jul  1  2020 
daa292aa-be5c-426e-b124-64263bf8a3ee.lease
-rw-rw. 1 vdsm kvm 1.0M Jul  1  2020 
daa292aa-be5c-426e-b124-64263bf8a3ee.lease
-rw-r--r--. 1 vdsm kvm  329 Jul  1  2020 
daa292aa-be5c-426e-b124-64263bf8a3ee.meta
-rw-r--r--. 1 vdsm kvm  329 Jul  1  2020 
daa292aa-be5c-426e-b124-64263bf8a3ee.meta
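The same file showing up twice on a single FUSE mount usually means more than one 
DHT subvolume now holds a copy, one of them stale (for example, restored from the 
removed bricks). A hedged way to find the stale copy is to compare the file on 
each brick directly; the hostnames and brick paths below are illustrative, not 
taken from this cluster:

```shell
# Sketch: compare the volume file's gfid and size on every brick that holds it.
# Hostnames and brick paths are illustrative.
f=8db17b28-ecbb-4853-8a90-6ed2f69301eb/images/9feffcfa-6af2-4de3-b7d8-e57b84d56003/daa292aa-be5c-426e-b124-64263bf8a3ee
for h in vmm10 vmm11 vmm12; do
  echo "== $h"
  ssh "$h" "getfattr -n trusted.gfid -e hex /gluster_bricks/engine/engine/$f; \
            ls -l /gluster_bricks/engine/engine/$f"
done
# Copies with mode ---------T and size 0 are DHT linkto files; two full copies
# with different trusted.gfid values would explain both the duplicate listing
# and vdsm's "Volume does not exist" error.
```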

Any ideas on how to recover?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4DA4Z6I4OPDPK5FGUCVGER5EEW7URIUI/


[ovirt-users] Error while updating volume meta data: [Errno 17] File exists", ), code = 208

2021-04-05 Thread adrianquintero
Hi,
I am encountering the following error every hour in the ovirt-engine.log file, 
and I can also see the same in the oVirt engine UI:
---
2021-04-05 01:49:20,072-04 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-76) [7dd3eb3d] Command 
'SetVolumeDescriptionVDSCommand( 
SetVolumeDescriptionVDSCommandParameters:{storagePoolId='dfe84316-bbdb-11ea-beb6-00163e1ab088',
 ignoreFailoverLimit='false', 
storageDomainId='9ed31575-9cc4-4b05-a5d9-23d21d6e915e', 
imageGroupId='2af73aad-5079-4583-824c-a93b20b93835', 
imageId='ad5a523c-fdc1-49dd-900b-73f801564f1d'})' execution failed: 
IRSGenericException: IRSErrorException: Failed to SetVolumeDescriptionVDS, 
error = Error while updating volume meta data: 
("(u'/rhev/data-center/mnt/glusterSD192.168.0.4:_data1/9ed31575-9cc4-4b05-a5d9-23d21d6e915e/images/2af73aad-5079-4583-824c-a93b20b93835/ad5a523c-fdc1-49dd-900b-73f801564f1d',)[Errno
 17] File exists",), code = 208


2021-04-05 02:49:14,333-04 ERROR 
[org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] 
(DefaultQuartzScheduler3) [2bf694b9] VDS error Command execution failed: rc=2 
out='geo-replication command failed\n' err=''


2021-04-05 02:49:20,332-04 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-5) [6b6dd1af] Command 
'SetVolumeDescriptionVDSCommand( 
SetVolumeDescriptionVDSCommandParameters:{storagePoolId='dfe84316-bbdb-11ea-beb6-00163e1ab088',
 ignoreFailoverLimit='false', 
storageDomainId='9ed31575-9cc4-4b05-a5d9-23d21d6e915e', 
imageGroupId='2af73aad-5079-4583-824c-a93b20b93835', 
imageId='ad5a523c-fdc1-49dd-900b-73f801564f1d'})' execution failed: 
IRSGenericException: IRSErrorException: Failed to SetVolumeDescriptionVDS, 
error = Error while updating volume meta data: 
("(u'/rhev/data-center/mnt/glusterSD/192.168.0.4:_data1/9ed31575-9cc4-4b05-a5d9-23d21d6e915e/images/2af73aad-5079-4583-824c-a93b20b93835/ad5a523c-fdc1-49dd-900b-73f801564f1d',)[Errno
 17] File exists",), code = 208

---

I looked up the mentioned error codes, but I am not sure if these are the 
correct ones:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/technical_reference/appe-event_codes
---
 17   > VDS_MAINTENANCE_FAILED  > Error  > Failed to switch Host ${VdsName} to 
Maintenance mode. 
 208 > PROVIDER_UPDATE_FAILED  > Error  > Failed to update provider 
${ProviderName}. (User: ${UserName}) 
---

From the Engine UI's Events tab I can see:
---
Failed to update VMs/Templates OVF data for Storage Domain data1 in Data Center 
Default.
Failed to update OVF disks 2af73aad-5079-4583-824c-a93b20b93835, OVF data isn't 
updated on those OVF stores (Data Center Default, Storage Domain data1).
VDSM command SetVolumeDescriptionVDS failed: Error while updating volume meta 
data: 
("(u'/rhev/data-center/mnt/glusterSD/192.168.0.4:_data1/9ed31575-9cc4-4b05-a5d9-23d21d6e915e/images/2af73aad-5079-4583-824c-a93b20b93835/ad5a523c-fdc1-49dd-900b-73f801564f1d',)[Errno
 17] File exists",)
---
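Since this is the same Gluster-backed storage as in the hosted-engine thread, a 
first check (a sketch, not a confirmed fix) is whether the OVF disk's image 
directory shows duplicate or leftover files on the FUSE mount that could make the 
metadata rewrite hit EEXIST; the path is taken from the log lines above:

```shell
# Sketch: inspect the image directory named in the error for duplicate entries
# or stale temporary files.
img=/rhev/data-center/mnt/glusterSD/192.168.0.4:_data1/9ed31575-9cc4-4b05-a5d9-23d21d6e915e/images/2af73aad-5079-4583-824c-a93b20b93835
ls -li "$img"                 # the same name listed twice points at stale DHT copies
find "$img" -name '*.meta*'   # leftover temporary .meta files are also suspects
```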

I know the error belongs to the "data1" storage domain, which is a 12-node 
distributed-replicate (replica 3) HCI volume.
If I try to update the OVF using the UI, I get the following log entries in the 
engine:
---
2021-04-05 03:20:03,101-04 WARN  
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-1183) 
[6761980f-dcf6-400e-bfd8-fd2c70e7569a] The message key 
'UpdateOvfStoreForStorageDomain' is missing from 'bundles/ExecutionMessages'

2021-04-05 03:20:03,178-04 INFO  
[org.ovirt.engine.core.bll.storage.domain.UpdateOvfStoreForStorageDomainCommand]
 (default task-1183) [6761980f-dcf6-400e-bfd8-fd2c70e7569a] Lock Acquired to 
object 
'EngineLock:{exclusiveLocks='[9ed31575-9cc4-4b05-a5d9-23d21d6e915e=STORAGE]', 
sharedLocks=''}'

2021-04-05 03:20:03,204-04 INFO  
[org.ovirt.engine.core.bll.storage.domain.UpdateOvfStoreForStorageDomainCommand]
 (default task-1183) [6761980f-dcf6-400e-bfd8-fd2c70e7569a] Running command: 
UpdateOvfStoreForStorageDomainCommand internal: false. Entities affected :  ID: 
9ed31575-9cc4-4b05-a5d9-23d21d6e915e Type: StorageAction group 
MANIPULATE_STORAGE_DOMAIN with role type ADMIN

2021-04-05 03:20:03,505-04 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-62) [13f4ac4f] Failed in 
'SetVolumeDescriptionVDS' method

2021-04-05 03:20:03,512-04 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-62) [13f4ac4f] EVENT_ID: 
IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command SetVolumeDescriptionVDS 
failed: Error while updating volume meta 

[ovirt-users] Re: Libgfapi considerations

2021-03-29 Thread adrianquintero
Hi all,

I can confirm that when using libgfapi with oVirt + Gluster replica 3 
(hyperconverged), read and write performance inside a VM was 4 to 5 times better 
than when using FUSE.

--
Tested with CentOS 6 and 7 VMs on the following hyperconverged cluster hardware:
--
oVirt 4.3.10 hypervisors with replica 3:
- 256GB RAM
- 32 total cores with hyperthreading
- RAID 1 (2 HDDs) for OS
- RAID 6 (9 SSDs) for Gluster; also tested with RAID 10 and JBOD, all provided 
similar improvements with libgfapi (4 to 5 times better) on replica 3 volumes
- 10GbE NICs, 1 for ovirtmgmt and 1 for Gluster
- Ran tests using fio

---
Test results using fuse (1500 MTU) (Took about 4~5 min):
---

[root@test3 mail]# fio --randrepeat=1 --ioengine=libaio --direct=1 
--gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G 
--readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.13
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [11984K/4079K/0K /s] [2996 /1019 /0  iops] 
[eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=8894: Mon Mar 29 10:05:35 2021
  read : io=3070.5MB, bw=12286KB/s, iops=3071 , runt=255918msec  <--
  write: io=1025.6MB, bw=4103.5KB/s, iops=1025 , runt=255918msec  <--
  cpu  : usr=1.84%, sys=10.50%, ctx=859129, majf=0, minf=19
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
 issued: total=r=786043/w=262533/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=3070.5MB, aggrb=12285KB/s, minb=12285KB/s, maxb=12285KB/s, 
mint=255918msec, maxt=255918msec
  WRITE: io=1025.6MB, aggrb=4103KB/s, minb=4103KB/s, maxb=4103KB/s, 
mint=255918msec, maxt=255918msec

Disk stats (read/write):
dm-3: ios=785305/262494, merge=0/0, ticks=492833/15794537, 
in_queue=16289356, util=100.00%, aggrios=786024/262789, aggrmerge=19/45, 
aggrticks=492419/15811831, aggrin_queue=16303803, aggrutil=100.00%
  sda: ios=786024/262789, merge=19/45, ticks=492419/15811831, 
in_queue=16303803, util=100.00%


--
Test results using fuse (9000 MTU); did not see much of a difference (took 
about 4~5 min):
--
[root@test3 mail]# fio --randrepeat=1 --ioengine=libaio --direct=1 
--gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G 
--readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [14956K/4596K/0K /s] [3739 /1149 /0  iops] 
[eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2193: Mon Mar 29 10:22:44 2021
  read : io=3070.8MB, bw=12882KB/s, iops=3220 , runt=244095msec   <--
  write: io=1025.3MB, bw=4300.1KB/s, iops=1075 , runt=244095msec   <--
  cpu  : usr=1.85%, sys=10.43%, ctx=849742, majf=0, minf=21
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
 issued: total=r=786117/w=262459/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=3070.8MB, aggrb=12882KB/s, minb=12882KB/s, maxb=12882KB/s, 
mint=244095msec, maxt=244095msec
  WRITE: io=1025.3MB, aggrb=4300KB/s, minb=4300KB/s, maxb=4300KB/s, 
mint=244095msec, maxt=244095msec

Disk stats (read/write):
dm-3: ios=785805/262493, merge=0/0, ticks=511951/15009580, 
in_queue=15523355, util=100.00%, aggrios=786105/262713, aggrmerge=18/19, 
aggrticks=511235/15026104, aggrin_queue=15536995, aggrutil=100.00%
  sda: ios=786105/262713, merge=18/19, ticks=511235/15026104, 
in_queue=15536995, util=100.00%

--
Test results using LIBGFAPI (9000 MTU), took about 38 seconds
--
[root@vmm04 ~]# ping -I glusternet -M do -s 8972 192.168.1.6
PING 192.168.1.6 (192.168.1.6) from 192.168.1.4 glusternet: 8972(9000) bytes of 
data.
8980 bytes from 192.168.1.6: icmp_seq=1 ttl=64 time=0.300 ms

[root@vmm04 ~]# ping -I ovirtmgmt -M do -s 

[ovirt-users] Re: Can't import some VMs after storage domain detach and reattach to new datacenter.

2020-12-09 Thread adrianquintero
Hi Nir,
Trying to revive this old thread: I ran into a similar issue. I imported a few 
VMs from an old VMware version into oVirt 4.3.3 using OVAs generated on the 
VMware side. After adjusting the OVAs a bit, I was able to do the import.

A year later I tried to export those VMs from 4.3.3 to a new hyperconverged 
cluster that runs oVirt 4.3.10.
If I try to import a VM with only 1 disk, I need to modify the DISKTYPE from 1 
to DATA, and need to add the alias in the DESCRIPTION field, as it does not 
appear in the v5 format (oVirt 4.3.10).

Original settings:

Disk1.-
[root@ovirt1 images]# strings  
a88461c4-61cb-4dbc-806e-484ce6fd5b4d/247d61c6-46a1-4c48-bb2d-55f6b8743692.meta 
DOMAIN=523debef-f166-407d-a8d8-9cfd6d20ebb7
VOLTYPE=LEAF
CTIME=1574205481
MTIME=1574205481
IMAGE=a88461c4-61cb-4dbc-806e-484ce6fd5b4d
DISKTYPE=1
PUUID=----
LEGALITY=LEGAL
POOL_UUID=
SIZE=209715200
FORMAT=RAW
TYPE=SPARSE
DESCRIPTION=generated by virt-v2v 1.38.2rhel_7,release_12.el7_6.2,libvirt

Disk2.-
[root@ovirt1 images]# strings 
75044386-90db-4c16-94ad-0f376fa7ceea/d17a4bf4-e633-4e08-aa7c-42b22a2c8769.meta 
CAP=268435456000
CTIME=1575689283
DESCRIPTION=generated by virt-v2v 1.38.2rhel_7,release_12.el7_6.2,libvirt
DISKTYPE=1
DOMAIN=523debef-f166-407d-a8d8-9cfd6d20ebb7
FORMAT=RAW
GEN=0
IMAGE=75044386-90db-4c16-94ad-0f376fa7ceea
LEGALITY=LEGAL
PUUID=----
TYPE=PREALLOCATED
VOLTYPE=LEAF


I changed DISKTYPE=1 to DISKTYPE=DATA, and changed DESCRIPTION=generated by 
virt-v2v 1.38.2rhel_7,release_12.el7_6.2,libvirt to 
DESCRIPTION={"DiskAlias":"MYVM-DISK1","DiskDescription":""} for Disk1 and to 
DESCRIPTION={"DiskAlias":"MYVM-DISK2","DiskDescription":""} for Disk2.

For VMs with only 1 disk in their configuration, only the disk migrates: I was 
only able to import the disk, not the VM, from an NFS shared data domain. So I 
had to create the VM, attach the imported disk, and it worked fine. Then I 
transferred the disk from the NFS domain to the VMSTORE Gluster domain... up to 
here all good.

The problem came when I tried to do the same with a VM with 2 disks. Up to 
starting the VM all was good; however, when migrating the storage to the VMSTORE 
Gluster domain, the second disk stays in migration forever, an issue related to 
deleting the auto-generated snapshot of the second disk.

In order to half-way solve the issue I had to follow up on:
https://access.redhat.com/solutions/396753
https://access.redhat.com/solutions/1347513
https://access.redhat.com/solutions/3991591
/usr/share/ovirt-engine/setup/dbutils/
https://lists.ovirt.org/pipermail/users/2016-May/039898.html
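For the disk stuck in migration, the dbutils path above is typically used along 
these lines (a sketch; the UUID is a placeholder, and this should only be run 
against a backed-up engine database after verifying the entity type):

```shell
# Sketch: query and clear locked entities with the engine's dbutils.
cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -t all -q             # list locked entities first
./unlock_entity.sh -t disk <disk-uuid>   # then unlock the affected disk
```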


but this caused a lot of downtime, so right now 1 disk of the VM is on the 
VMSTORE domain (Gluster) and one disk is on the migration data domain (NFS). It 
seems I can't transfer the second disk to the VMSTORE domain.

Any ideas?

Thanks,

AQ
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5YYFPIYHS3I5XO7DJHO66FTRLICI2VKY/


[ovirt-users] oVirt 4.3.9.4-1:--> VM Has been paused due to storage I/O error

2020-05-27 Thread adrianquintero
Team,
I've been having issues: none of my VMs can be started; the error is "VM Has 
been paused due to storage I/O error".
Any ideas are welcome, as all my VMs are down.

Gluster log (gluster version 6.8):
[2020-05-27 09:10:28.132619] E [MSGID: 113040] 
[posix-inode-fd-ops.c:1572:posix_readv] 0-vmstore-posix: read failed on 
gfid=8674ab5f-56b9-4136-9b30-a65ca86be204, fd=0x7f12b807e6d8, offset=0 size=1, 
buf=0x7f134fbd9000 [Invalid argument]
[2020-05-27 09:10:28.132694] E [MSGID: 115068] 
[server-rpc-fops_v2.c:1425:server4_readv_cbk] 0-vmstore-server: 3286: READV 3 
(8674ab5f-56b9-4136-9b30-a65ca86be204), client: 
CTX_ID:03354525-5de3-4390-b775-3db7a85c0022-GRAPH_ID:0-PID:29372-HOST:jrz-061-ovirt3.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0,
 error-xlator: vmstore-posix [Invalid argument]
[2020-05-27 09:10:28.211930] E [MSGID: 113040] 
[posix-inode-fd-ops.c:1572:posix_readv] 0-vmstore-posix: read failed on 
gfid=8674ab5f-56b9-4136-9b30-a65ca86be204, fd=0x7f12b825ed18, offset=0 size=1, 
buf=0x7f134fbd9000 [Invalid argument]
[2020-05-27 09:10:28.211995] E [MSGID: 115068] 
[server-rpc-fops_v2.c:1425:server4_readv_cbk] 0-vmstore-server: 3306: READV 2 
(8674ab5f-56b9-4136-9b30-a65ca86be204), client: 
CTX_ID:03354525-5de3-4390-b775-3db7a85c0022-GRAPH_ID:0-PID:29372-HOST:jrz-061-ovirt3.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0,
 error-xlator: vmstore-posix [Invalid argument]
[2020-05-27 09:10:28.226451] E [MSGID: 113040] 
[posix-inode-fd-ops.c:1572:posix_readv] 0-vmstore-posix: read failed on 
gfid=755664db-2d04-4ed0-9333-251c6cc3dcb1, fd=0x7f12b807e6d8, offset=0 size=1, 
buf=0x7f134fbd9000 [Invalid argument]
[2020-05-27 09:10:28.226511] E [MSGID: 115068] 
[server-rpc-fops_v2.c:1425:server4_readv_cbk] 0-vmstore-server: 3317: READV 2 
(755664db-2d04-4ed0-9333-251c6cc3dcb1), client: 
CTX_ID:03354525-5de3-4390-b775-3db7a85c0022-GRAPH_ID:0-PID:29372-HOST:jrz-061-ovirt3.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0,
 error-xlator: vmstore-posix [Invalid argument]
[2020-05-27 09:10:28.232122] E [MSGID: 113040] 
[posix-inode-fd-ops.c:1572:posix_readv] 0-vmstore-posix: read failed on 
gfid=755664db-2d04-4ed0-9333-251c6cc3dcb1, fd=0x7f12b807e6d8, offset=0 size=1, 
buf=0x7f134fbd9000 [Invalid argument]
[2020-05-27 09:10:28.232181] E [MSGID: 115068] 
[server-rpc-fops_v2.c:1425:server4_readv_cbk] 0-vmstore-server: 3319: READV 2 
(755664db-2d04-4ed0-9333-251c6cc3dcb1), client: 
CTX_ID:03354525-5de3-4390-b775-3db7a85c0022-GRAPH_ID:0-PID:29372-HOST:jrz-061-ovirt3.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0,
 error-xlator: vmstore-posix [Invalid argument]
[2020-05-27 09:10:28.237043] E [MSGID: 113040] 
[posix-inode-fd-ops.c:1572:posix_readv] 0-vmstore-posix: read failed on 
gfid=8674ab5f-56b9-4136-9b30-a65ca86be204, fd=0x7f12b82038b8, offset=0 size=1, 
buf=0x7f134fbd9000 [Invalid argument]
[2020-05-27 09:10:28.237100] E [MSGID: 115068] 
[server-rpc-fops_v2.c:1425:server4_readv_cbk] 0-vmstore-server: 3321: READV 3 
(8674ab5f-56b9-4136-9b30-a65ca86be204), client: 
CTX_ID:03354525-5de3-4390-b775-3db7a85c0022-GRAPH_ID:0-PID:29372-HOST:jrz-061-ovirt3.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0,
 error-xlator: vmstore-posix [Invalid argument]
[2020-05-27 09:10:28.242176] E [MSGID: 113040] 
[posix-inode-fd-ops.c:1572:posix_readv] 0-vmstore-posix: read failed on 
gfid=755664db-2d04-4ed0-9333-251c6cc3dcb1, fd=0x7f12b807e6d8, offset=0 size=1, 
buf=0x7f134fbd9000 [Invalid argument]
[2020-05-27 09:10:28.242235] E [MSGID: 115068] 
[server-rpc-fops_v2.c:1425:server4_readv_cbk] 0-vmstore-server: 3323: READV 2 
(755664db-2d04-4ed0-9333-251c6cc3dcb1), client: 
CTX_ID:03354525-5de3-4390-b775-3db7a85c0022-GRAPH_ID:0-PID:29372-HOST:jrz-061-ovirt3.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0,
 error-xlator: vmstore-posix [Invalid argument]




[2020-05-27 09:11:18.990877] I [MSGID: 115036] [server.c:499:server_rpc_notify] 
0-vmstore-server: disconnecting connection from 
CTX_ID:87270d80-4310-4795-9f4d-2c1a61d16cee-GRAPH_ID:0-PID:28815-HOST:jrz-059-ovirt1.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0
[2020-05-27 09:11:18.991896] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 
0-vmstore-server: Shutting down connection 
CTX_ID:87270d80-4310-4795-9f4d-2c1a61d16cee-GRAPH_ID:0-PID:28815-HOST:jrz-059-ovirt1.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0
[2020-05-27 09:11:19.329120] I [MSGID: 115036] [server.c:499:server_rpc_notify] 
0-vmstore-server: disconnecting connection from 
CTX_ID:5858e269-923e-4a38-9c6b-62f337a6abac-GRAPH_ID:0-PID:20791-HOST:jrz-060-ovirt2.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0
[2020-05-27 09:11:19.329419] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 
0-vmstore-server: Shutting down connection 
CTX_ID:5858e269-923e-4a38-9c6b-62f337a6abac-GRAPH_ID:0-PID:20791-HOST:jrz-060-ovirt2.example.com-PC_NAME:vmstore-client-2-RECON_NO:-0
[2020-05-27 09:11:21.044186] I [addr.c:54:compare_addr_and_update] 
0-/gluster_bricks/vmstore/vmstore: allowed = "*", received addr = "192.168.0.59"
[2020-05-27 

[ovirt-users] Re: he_gluster_vars.json for HCI deployment ovirt 4.3.x

2020-05-12 Thread adrianquintero
Found the following:
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/README.md
Any others?
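For reference, a minimal he_gluster_vars.json of the kind that README describes 
might look like this; the key names and every value below are illustrative and 
should be checked against the README for your exact version:

```json
{
  "he_appliance_password": "secret",
  "he_admin_password": "secret",
  "he_domain_type": "glusterfs",
  "he_fqdn": "engine.example.com",
  "he_default_gateway": "192.168.0.1",
  "he_mgmt_network": "ovirtmgmt",
  "he_storage_domain_name": "HostedEngine",
  "he_storage_domain_path": "/engine",
  "he_storage_domain_addr": "192.168.0.4",
  "he_mount_options": "backup-volfile-servers=192.168.0.5:192.168.0.6",
  "he_mem_size_MB": "16384",
  "he_vcpus": "4"
}
```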

thanks,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IY6GOQ5ROJGPA44WKNCQUA36DJCNDGHL/


[ovirt-users] he_gluster_vars.json for HCI deployment ovirt 4.3.x

2020-05-12 Thread adrianquintero
Hello,
I am trying to find all the possible variables that can be passed inside 
/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json
 for a hyperconverged deployment.

RHHI 1.6/1.7 only offers a limited list.

thank you,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VRWV6PARSV2IXKI4ODIPPCTARWIVBJSW/


[ovirt-users] Re: Mirror oVirt content

2020-04-21 Thread adrianquintero
Hi Barak,
Thanks for the info. We opened a request/email via that link a few months ago; 
however, no one has reached back out to us.

Anything else we can do from our end?

thanks,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AMBMC2U745ZOIVMQ6SAOU5PVODCVHZPE/


[ovirt-users] Re: oVirt 4.3.7 + Gluster 6.6 unsynced entries

2020-04-16 Thread adrianquintero
Hi Strahil,
This is what method 2 came up with:

[root@host1 ~]# getfattr -n trusted.glusterfs.pathinfo -e text 
/rhev/data-center/mnt/glusterSD/192.168.0.59\:_vmstore/
getfattr: Removing leading '/' from absolute path names
# file: rhev/data-center/mnt/glusterSD/192.168.0.59:_vmstore/
trusted.glusterfs.pathinfo="( 
 
 
)"


I will try method 1, but I just want to make sure I am running it against the 
correct file/directory.
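Method 1 (per-brick xattr inspection) would look something like this for one of 
the pending shards; the hostnames and brick paths are illustrative, and the shard 
name is taken from the heal-info output in the original post:

```shell
# Sketch: dump the AFR pending markers for one unsynced shard on every replica
# brick. Hostnames and brick paths are illustrative.
shard=/gluster_bricks/vmstore/vmstore/.shard/9dbd7ab9-3d1c-4eed-908e-852eec1ce3b1.1240
for h in host1 host2 host3; do
  echo "== $h"
  ssh "$h" "getfattr -d -m trusted.afr -e hex $shard"
done
# A copy carrying non-zero trusted.afr.vmstore-client-* values is blaming
# another brick; all-zero values everywhere usually mean stale heal indices.
```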
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7377A4JQ54AYWBIVGJ6XJLG5THCGETH3/


[ovirt-users] oVirt 4.3.7 + Gluster 6.6 unsynced entries

2020-04-16 Thread adrianquintero
Hello,
I am having the following issue: it has been a few days and healing never 
finishes. Any ideas on how to fix the unsynced entries?

[root@host1 vmstore]# gluster vol heal vmstore info
---
Brick host1:/gluster_bricks/vmstore/vmstore
 
 
 
Status: Connected
Number of entries: 3

Brick host2:/gluster_bricks/vmstore/vmstore
/.shard/9dbd7ab9-3d1c-4eed-908e-852eec1ce3b1.1240 
/.shard/9dbd7ab9-3d1c-4eed-908e-852eec1ce3b1.1241 
/.shard/9dbd7ab9-3d1c-4eed-908e-852eec1ce3b1.1242 
Status: Connected
Number of entries: 3

Brick host3:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
---
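When heal info stays at the same entries for days, a few common follow-up checks 
(a sketch; the volume name is taken from the output above):

```shell
# Sketch: common checks when self-heal never finishes.
gluster volume heal vmstore info split-brain       # rule out split-brain first
gluster volume heal vmstore statistics heal-count  # confirm the count is static
gluster volume heal vmstore full                   # then force a full sweep
```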

Thanks,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CIR4DW6LBW6GUNYBVKJ6KTNMATUD55F6/


[ovirt-users] Mirror oVirt content

2020-04-01 Thread adrianquintero
Hello oVirt Community / infrastructure team,
we would like to get guidance on how to mirror the oVirt content publicly. We 
replicate content using our own networks, so we'd only be pulling the content 
from the oVirt content server from one location in Chicago.

Please advise.

Thank you,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MJGILYQBKGMKOG3V47DETHJE26FSU5QA/


[ovirt-users] HCI with Ansible deployment steps: oVirt 4.3.7 + Gluster

2020-03-21 Thread adrianquintero
Hello Community,
Wanted to contribute the little I know after working a few times with oVirt in 
an HCI environment.
One of the questions for which I was not able to find a straightforward answer 
is how to deploy an HCI oVirt environment, so after a few struggles I was able 
to achieve that goal.

Below is an example of what I used to fully deploy a 3-node cluster with oVirt 
4.3.7.
Please note I am no expert and am just sharing what I have accomplished in my 
lab, so use the below at your own risk.


1.- Hardware used in my lab:
     3 x Dell R610s, 48GB RAM per server, 2 x Intel Xeon L5640 @ 2.27GHz (hex 
core)
     2 x 146GB HDDs for OS
     1 x 600GB HDD for Gluster bricks
     Dual 10Gb NIC

2.- Deployed the servers using the oVirt 4.3.7 image.
3.- Added ssh keys to be able to ssh from host 1 over to host 2 and host 3 
without a password.
4.- Modified the following files for the Ansible deployment on host 1:
     4.1.- Under 
/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment:
          -- Modify file he_gluster_vars.json with the Hosted Engine VM 
required information
          -- Modify file gluster_inventory.yml with hosts and gluster 
information
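With both files edited, the deployment itself is kicked off from host 1 roughly 
like this (the playbook file name is the one shipped in the hc-ansible-deployment 
directory; verify it on your installed version):

```shell
# Sketch: run the HCI deployment with the two customized files.
cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
ansible-playbook -i gluster_inventory.yml hc_deployment.yml \
  --extra-vars='@he_gluster_vars.json'
```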
  
You can find the modified files that I used here: 
https://github.com/viejillo/ovirt-hci-ansible

I hope this helps; if you have any questions, feel free to reach out.


Regards,

AQ
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SDKVYWJ3VVOSNILFZOYPQ6DZKC6NSGEA/


[ovirt-users] Re: Damaged hard disk in volume replica gluster

2020-02-25 Thread adrianquintero
We are not planning on using SAN/iSCSI at all.
I will try out your suggested steps for VDO and see if I can create a procedure 
for our current scenario.

Thanks, I will let you know the outcome.

regards,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2NDFM573ZBECBNREG6ZTKXIDRHLY7LBK/


[ovirt-users] Re: Damaged hard disk in volume replica gluster

2020-02-25 Thread adrianquintero
Thanks Strahil, 
I made a mistake in this 3-node cluster; I use this cluster for testing. In our 
prod environment we do have the blacklist, but it is as follows:

# VDSM REVISION 1.8
# VDSM PRIVATE
blacklist {
devnode "*"
}

However, we did not add each individual local disk to the blacklist entries. 
Would this still be the case where I would have to add the individual blacklist 
entries as you suggested? I thought 'devnode "*"' achieved this...

From another post you mentioned something similar to:
# VDSM PRIVATE
blacklist {
devnode "*"
wwid Crucial_CT256MX100SSD1_14390D52DCF5
wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
wwid 
nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
}
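If you do move to explicit entries, the wwids can be collected per device along 
these lines (device names are illustrative; note that vdsm manages 
/etc/multipath.conf, which is why local additions go under the # VDSM PRIVATE 
marker as above):

```shell
# Sketch: print the wwid of each local disk so it can be blacklisted explicitly.
for d in /dev/sda /dev/sdb /dev/sdc; do
  printf '%s: ' "$d"
  /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device="$d"
done
# After editing the blacklist: multipath -F && systemctl reload multipathd
```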



My production host:
[root@host18 ~]# multipath -v2 -d
[root@host18 ~]# 


Thoughts?

Thanks,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UJPRAUG5VBZ4CONTSZ45EOIHOXJAEKQA/


[ovirt-users] Re: Damaged hard disk in volume replica gluster

2020-02-25 Thread adrianquintero
Strahil,
In our case the VDO was created during the oVirt HCI setup, so I am trying to 
determine how it gets mounted.

As far as I can tell, the config is as follows:


[root@host1 ~]# vdo printConfigFile
config: !Configuration
  vdos:
vdo_sdc: !VDOService
  _operationState: finished
  ackThreads: 1
  activated: enabled
  bioRotationInterval: 64
  bioThreads: 4
  blockMapCacheSize: 128M
  blockMapPeriod: 16380
  compression: enabled
  cpuThreads: 2
  deduplication: enabled
  device: /dev/disk/by-id/scsi-3600508b1001cd70935270813aca97c44
  hashZoneThreads: 1
  indexCfreq: 0
  indexMemory: 0.25
  indexSparse: disabled
  indexThreads: 0
  logicalBlockSize: 512
  logicalSize: 7200G
  logicalThreads: 1
  name: vdo_sdc
  physicalSize: 781379416K
  physicalThreads: 1
  readCache: enabled
  readCacheSize: 20M
  slabSize: 32G
  writePolicy: auto
  version: 538380551
filename: /etc/vdoconf.yml



[root@host1 ~]# lsblk /dev/sdc
NAMEMAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdc   8:32   0 745.2G  0 disk  
├─3600508b1001cd70935270813aca97c44 253:60 745.2G  0 mpath 
└─vdo_sdc   253:21   0 7T  0 vdo   
  └─gluster_vg_sdc-gluster_lv_data  253:22   0 7T  0 lvm   
/gluster_bricks/data

I know that /dev/mapper/vdo_sdc is TYPE="LVM2_member", as these blkid-style 
entries show:
/dev/mapper/vdo_sdc: UUID="gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR" 
TYPE="LVM2_member"
/dev/mapper/gluster_vg_sdc-gluster_lv_data: 
UUID="8a94b876-baf2-442c-9e7f-6573308c8ef3" TYPE="xfs"

 --- Physical volumes ---
  PV Name               /dev/mapper/vdo_sdc
  PV UUID               gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR
  PV Status             allocatable
  Total PE / Free PE    1843199 / 0
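Putting the pieces above together, the brick stack can be walked bottom-up with 
something like this (device, VG, and mpath names are taken from the listings 
above; a sketch, not a definitive procedure):

```shell
# Sketch: confirm each layer of the brick's storage stack.
multipath -ll 3600508b1001cd70935270813aca97c44  # multipath device on /dev/sdc
vdo status --name vdo_sdc                        # VDO sits on the mpath device
pvs /dev/mapper/vdo_sdc                          # the VDO device is the LVM PV
lvs gluster_vg_sdc                               # the LV carries the XFS brick
grep gluster_bricks /etc/fstab                   # mounted at /gluster_bricks/data
```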




I am trying to piece things together and am doing more research on VDO in an 
HCI oVirt setup.

In the meantime any help is welcome.

Thanks again,

Adrian

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HVCYYZSUJ7RCRMKZLMUCTU537VFYCD76/


[ovirt-users] Re: Damaged hard disk in volume replica gluster

2020-02-25 Thread adrianquintero
Strahil,
Something that just came to my attention is that we have a second cluster with 
VDO enabled. Is there something out there that explains the process to 
accomplish a disk replacement with VDO?

Thanks,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D4VLQ673HYLTEF2SRAZLHJVLKP27NESV/


[ovirt-users] Re: Damaged hard disk in volume replica gluster

2020-02-19 Thread adrianquintero
Strahil,
Let me take a step back and provide a bit more context on what we need to 
achieve.

We have a 3-node HCI setup, and host1 has a failed drive (/dev/sdd) that is 
used entirely for 1 brick (/gluster_bricks/data2/data2); this is a replica 3 
brick setup.

The issue we have is that we don't have any more drive bays in our enclosure 
cages to add an extra disk and use it to replace the bad drive/brick (/dev/sdd).

What would be the best way to replace the drive/brick in this situation, and in 
what order do the steps need to be completed?

I think I know how to replace a bad brick with a different brick and get things 
up and running, but in this case, as mentioned above, I have no more drive bays 
to allocate a new drive.
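One hedged approach when the replacement drive must reuse the same bay and brick 
path is Gluster's reset-brick flow; the volume name data2 is assumed from the 
brick path, so verify it with `gluster volume info` and test on a lab volume 
first:

```shell
# Sketch: swap the physical drive while keeping the same brick path.
gluster volume reset-brick data2 host1:/gluster_bricks/data2/data2 start
# ...physically replace /dev/sdd, recreate the VG/LV/XFS and remount it
#    at /gluster_bricks/data2/data2...
gluster volume reset-brick data2 host1:/gluster_bricks/data2/data2 \
                                 host1:/gluster_bricks/data2/data2 commit force
gluster volume heal data2 full   # then let self-heal repopulate the brick
```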


[ovirt-users] Re: Damaged hard disk in volume replica gluster

2020-02-17 Thread adrianquintero
Hi Strahil,
I am also running into an issue trying to replace a brick.

1.-/dev/sdd, which holds our /gluster_bricks/data2/data2 brick, failed; that 
disk was taken out of the array controller and a new one was added.

2.-there are quite a few device-mapper entries related to /dev/sdd, i.e.:

[root@host1]# dmsetup info  |grep sdd
Name:  gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd-tpool
Name:  gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd_tdata
Name:  gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd_tmeta
Name:  gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd
Name:  gluster_vg_sdd-gluster_lv_data2

These leftover mappings cause the OS to see the new disk as /dev/sde (the 
controller presents it as /dev/sde):

array C

  Logical Drive: 3
 Size: 2.7 TB
  ...
 Disk Name: /dev/sde 
 Mount Points: None
 .
 Drive Type: Data
 LD Acceleration Method: Controller Cache

If I just remove all the thin-pool entries (once I find them), do you think that 
might do the trick?
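One way to enumerate the stale mappings before touching anything is to generate the `dmsetup remove` commands rather than run them. A hedged sketch, using the entry names from the `dmsetup info` output above (the `dmsetup_ls` function is a stand-in so the sketch runs anywhere; on the host you would pipe real `dmsetup ls` output instead):

```shell
# Dry-run sketch: print (do not execute) a dmsetup remove command for every
# leftover device-mapper entry that still references the old sdd VG.
# Only pipe the result to sh after verifying nothing listed is still in use.
dmsetup_ls() {          # stand-in for `dmsetup ls`, with names from the thread
    cat <<'EOF'
gluster_vg_sdd-gluster_lv_data2 (253:7)
gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd-tpool (253:6)
gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd (253:5)
gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd_tdata (253:4)
gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd_tmeta (253:3)
EOF
}
dmsetup_ls | awk '/sdd/ {print "dmsetup remove " $1}'
```

Removal order matters on a real host (LV first, then the tpool, then tmeta/tdata), and `vgremove`/`pvremove` would still be needed afterwards to clear the LVM metadata.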

thanks,

Adrian


[ovirt-users] Re: where can I find the repositories specific to 4.3.7

2020-02-17 Thread adrianquintero
Thank you Yedidyah, I am working on setting something up and will post here if 
I succeed.
It would be good if the repos were kept separate for ease of use.

regards,

Adrian


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread adrianquintero
Thanks Strahil,
I did the rsync only for the .meta files and that seems to have done the 
trick. I just waited a couple of hours and the OVF error resolved itself, and 
since it was the engine OVF I think Edward was right and the rest of the issues 
got resolved.

regards,

Adrian


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread adrianquintero
After a couple of hours everything is looking good; it seems the timestamps 
corrected themselves and the OVF errors are gone.

Thank you all for the help.

Regards,

Adrian 


[ovirt-users] Re: oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-16 Thread adrianquintero
Ok, so I ran an rsync from host2 over to host3 for the .meta files only, and 
that seems to have worked:

98d64fb4-df01-4981-9e5e-62be6ca7e07c.meta
ed569aed-005e-40fd-9297-dd54a1e4946c.meta 
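
The .meta-only copy can be sketched with rsync include/exclude filters. The host name and brick paths below are the ones from this thread and may need adjusting; the command is built with `-n` (dry run) and only printed, so nothing is copied until you drop the flag and run it yourself:

```shell
# Sketch: sync only *.meta files from the healthy replica to the stale one,
# preserving ownership and timestamps. Trailing slashes matter to rsync.
SRC="root@host2:/gluster_bricks/engine/engine/"
DST="/gluster_bricks/engine/engine/"
RSYNC_CMD="rsync -avn --include=*/ --include=*.meta --exclude=* $SRC $DST"
echo "$RSYNC_CMD"
```

The `--include=*/` is what lets rsync descend into subdirectories while `--exclude=*` drops every non-.meta file.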


[root@host1 ~]# gluster vol heal engine info
Brick host1.grupolucerna.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick host2:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick host3:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

In this case I did not have to stop/start ovirt-ha-broker and 
ovirt-ha-agent.

I still see the OVF issue, and I am wondering if I should just rsync the whole 
/gluster_bricks/engine/engine directory from host3 over to host1 and host2, 
because of the following 1969 timestamps.

I see 1969 as the timestamp on some directories on host1:
/gluster_bricks/engine/engine/7a68956e-3736-46d1-8932-8576f8ee8882/images:
drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969 b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd
drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969 86196e10-8103-4b00-bd3e-0f577a8bb5b2

on host2 I see the same:
drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969 b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd
drwxr-xr-x. 2 vdsm kvm 8.0K Dec 31  1969 86196e10-8103-4b00-bd3e-0f577a8bb5b2

but on host3 I see valid timestamps:
drwxr-xr-x. 2 vdsm kvm 149 Feb 16 09:43 86196e10-8103-4b00-bd3e-0f577a8bb5b2
drwxr-xr-x. 2 vdsm kvm 149 Feb 16 09:45 b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd

Looking closely, host3 has valid timestamps while host1 and host2 show a 1969 
(epoch) date.
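
A "Dec 31 1969" date usually means the directory mtime is the Unix epoch (timestamp 0 in a timezone west of UTC). A small, runnable sketch for spotting such directories under a brick; the brick path in the usage comment is the one from this thread, but the function works on any directory:

```shell
# List directories whose mtime is older than 1971-01-01, i.e. effectively
# the epoch. Usage: epoch_dirs /gluster_bricks/engine/engine
epoch_dirs() {
    find "$1" -type d ! -newermt '1971-01-01' -print
}
```

Running it on each host would confirm whether only host1 and host2 carry the bogus timestamps before deciding what to rsync.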

Thoughts?


thanks,

Adrian


[ovirt-users] oVirt 4.3.7 and Gluster 6.6 multiple issues

2020-02-11 Thread adrianquintero
Hi, 
I am having a couple of issues with a fresh oVirt 4.3.7 HCI setup with 3 nodes.


1.-vdsm is showing the following errors for HOST1 and HOST2 (HOST3 seems to be 
ok):

 service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Tue 2020-02-11 18:50:28 PST; 28min ago
  Process: 25457 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start (code=exited, status=0/SUCCESS)
 Main PID: 25549 (vdsmd)
Tasks: 76
   CGroup: /system.slice/vdsmd.service
   ├─25549 /usr/bin/python2 /usr/share/vdsm/vdsmd
   ├─25707 /usr/libexec/ioprocess --read-pipe-fd 52 --write-pipe-fd 51 
--max-threads 10 --max-queued-requests 10
   ├─26314 /usr/libexec/ioprocess --read-pipe-fd 92 --write-pipe-fd 86 
--max-threads 10 --max-queued-requests 10
   ├─26325 /usr/libexec/ioprocess --read-pipe-fd 96 --write-pipe-fd 93 
--max-threads 10 --max-queued-requests 10
   └─26333 /usr/libexec/ioprocess --read-pipe-fd 102 --write-pipe-fd 
101 --max-threads 10 --max-queued-requests 10

Feb 11 18:50:28 tij-059-ovirt1.grupolucerna.local vdsmd_init_common.sh[25457]: 
vdsm: Running test_space
Feb 11 18:50:28 tij-059-ovirt1.grupolucerna.local vdsmd_init_common.sh[25457]: 
vdsm: Running test_lo
Feb 11 18:50:28 tij-059-ovirt1.grupolucerna.local systemd[1]: Started Virtual 
Desktop Server Manager.
Feb 11 18:50:29 tij-059-ovirt1.grupolucerna.local vdsm[25549]: WARN MOM not 
available.
Feb 11 18:50:29 tij-059-ovirt1.grupolucerna.local vdsm[25549]: WARN MOM not 
available, KSM stats will be missing.
Feb 11 18:51:25 tij-059-ovirt1.grupolucerna.local vdsm[25549]: ERROR failed to 
retrieve Hosted Engine HA score
   Traceback (most 
recent call last):
 File 
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo...
Feb 11 18:51:34 tij-059-ovirt1.grupolucerna.local vdsm[25549]: ERROR failed to 
retrieve Hosted Engine HA score
   Traceback (most 
recent call last):
 File 
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo...
Feb 11 18:51:35 tij-059-ovirt1.grupolucerna.local vdsm[25549]: ERROR failed to 
retrieve Hosted Engine HA score
   Traceback (most 
recent call last):
 File 
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo...
Feb 11 18:51:43 tij-059-ovirt1.grupolucerna.local vdsm[25549]: ERROR failed to 
retrieve Hosted Engine HA score
   Traceback (most 
recent call last):
 File 
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo...
Feb 11 18:56:32 tij-059-ovirt1.grupolucerna.local vdsm[25549]: WARN ping was 
deprecated in favor of ping2 and confirmConnectivity


2.-"gluster vol heal engine info" is showing the following and it never 
finishes healing:

[root@host2~]# gluster vol heal engine info
Brick host1:/gluster_bricks/engine/engine
/7a68956e-3736-46d1-8932-8576f8ee8882/images/86196e10-8103-4b00-bd3e-0f577a8bb5b2/98d64fb4-df01-4981-9e5e-62be6ca7e07c.meta
 
/7a68956e-3736-46d1-8932-8576f8ee8882/images/b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd/ed569aed-005e-40fd-9297-dd54a1e4946c.meta
 
Status: Connected
Number of entries: 2

Brick host2:/gluster_bricks/engine/engine
/7a68956e-3736-46d1-8932-8576f8ee8882/images/86196e10-8103-4b00-bd3e-0f577a8bb5b2/98d64fb4-df01-4981-9e5e-62be6ca7e07c.meta
 
/7a68956e-3736-46d1-8932-8576f8ee8882/images/b8ce22c5-8cbd-4d7f-b544-9ce930e04dcd/ed569aed-005e-40fd-9297-dd54a1e4946c.meta
 
Status: Connected
Number of entries: 2

Brick host3:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0


3.-Every hour I see the following 

[ovirt-users] where can I find the repositories specific to 4.3.7

2020-02-06 Thread adrianquintero
Hi, I am trying to upgrade our oVirt hosts and engine in our hyperconverged 
4.3.6 environment to 4.3.7; we do not want to go to 4.3.8.
Where can I find the correct 4.3.7-specific repositories for the engine and 
for the hosts?


Thank you

Adrian


[ovirt-users] Re: Looking for Ovirt Consultants

2020-01-05 Thread adrianquintero
Bob,
Is your environment hyperconverged?
if yes, what is the output of 
#hosted-engine --vm-status


Are your hypervisors up and running?
i.e. 
https://hypervisor1.mydomain.com
# ssh r...@hypervisor1.mydomain.com 



Regards,

Adrian


[ovirt-users] ovirt 4.3.7 + Gluster in hyperconverged (production design)

2019-12-23 Thread adrianquintero
Hi,
After playing a bit with oVirt and Gluster in our pre-production environment 
for the last year, we have decided to move forward with our production design 
using oVirt 4.3.7 + Gluster in a hyperconverged setup.

For this we are looking to get answers to a few questions that will help with 
our design and eventually lead to our production deployment phase:

Current HW specs (total servers = 18):
1.- Server type: DL380 GEN 9
2.- Memory: 256GB
3.-Disk QTY per hypervisor:
- 2x600GB SAS (RAID 0) for the OS
- 9x1.2TB SSD (RAID 0, RAID 6, or ...?) for Gluster.
4.-Network:
- Bond1: 2x10Gbps 
- Bond2: 2x10Gbps (for migration and gluster)

Our plan is to build two 9-node clusters; however, the following questions come up:

1.-Should we build 2 separate environments, each with its own engine, or 
1 engine that manages both clusters?
2.-What would be the best gluster volume layout for #1 above with regards to 
RAID configuration:
- JBOD or RAID6 or…?.
- what is the benefit or downside of using JBOD vs RAID 6 for this particular 
scenario?
3.-Would you recommend an Ansible-based deployment (if supported)? If yes, where 
would I find the documentation for it? Or should we just deploy using the UI?
- I have reviewed the following and in Chapter 5 it only mentions Web UI 
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html-single/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/index#deployment_workflow
- Also looked at 
https://github.com/gluster/gluster-ansible/tree/master/playbooks/hc-ansible-deployment
 but could not get it to work properly.

4.-what is the recommended max server qty in a hyperconverged setup with 
gluster, 12, 15, 18...?

Thanks,

Adrian


[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-12 Thread adrianquintero
Forgot to add the log entry that led us to the solution for our particular 
case:

Log = 
/var/log/glusterfs/geo-replication/geo-master_slave1.mydomain2.com_geo-slave/gsyncd.log
-
[2019-12-11 20:37:27.831976] E [syncdutils(worker 
/gluster_bricks/geo-master/geo-master):339:log_raise_exception] : FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 330, in main
func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 82, in 
subcmd_worker
local.service_loop(remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1267, in 
service_loop
changelog_agent.init()
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 233, in 
__call__
return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 215, in 
__call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such file or 
directory
-


[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-12 Thread adrianquintero
Hi Sahina/Strahil,
We followed the recommended setup from the Gluster documentation; however, one 
of my colleagues noticed a Python traceback in the logs, and it turns out it 
was a missing symlink to a library.

We created the following symlink on all the master servers (cluster 1, oVirt 1) 
and slave servers (cluster 2, oVirt 2), and geo-sync started working:
/lib64/libgfchangelog.so -> /lib64/libgfchangelog.so.0
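
The symlink fix can be sketched as a small script. To keep it safe to run as-is, the sketch defaults to a scratch directory; on the real nodes you would set PREFIX=/lib64 (and the `touch` line goes away, since the real library already exists there):

```shell
# Create the libgfchangelog.so -> libgfchangelog.so.0 symlink if missing.
# PREFIX defaults to a temp dir for a harmless demo; use PREFIX=/lib64 on
# actual master/slave nodes.
PREFIX="${PREFIX:-$(mktemp -d)}"
touch "$PREFIX/libgfchangelog.so.0"      # stand-in for the real library
if [ ! -e "$PREFIX/libgfchangelog.so" ]; then
    ln -s "$PREFIX/libgfchangelog.so.0" "$PREFIX/libgfchangelog.so"
fi
```

Running it on every master and slave node, then restarting the geo-replication session, matches what resolved the `libgfchangelog.so: cannot open shared object file` error in the earlier message.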

MASTER NODE MASTER VOLMASTER BRICK 
SLAVE USERSLAVE  SLAVE NODE 
STATUSCRAWL STATUSLAST_SYNCED

host1.mydomain1.comgeo-master/gluster_bricks/geo-master/geo-master
root   slave1.mydomain2.com::geo-slave slave1.mydomain2.com
ActiveChangelog Crawl2019-12-12 05:22:56
host2.mydomain1.comgeo-master/gluster_bricks/geo-master/geo-master
root   slave1.mydomain2.com::geo-slave slave2.mydomain2.com
PassiveN/A N/A
host3.mydomain1.comgeo-master/gluster_bricks/geo-master/geo-master
root   slave1.mydomain2.com::geo-slave slave3.mydomain2.com
PassiveN/A N/A

We still need a bit more testing, but at least it is syncing now.

I am trying to find good documentation on how to achieve geo-replication for 
oVirt; is that something you can point me to? I am basically looking for a way 
to do geo-replication from site A to site B, but the Geo-Replication pop-up 
window in oVirt does not seem to have the functionality to connect to a slave 
server from another oVirt setup.
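
Outside the UI, site-to-site geo-replication is normally created with the Gluster CLI directly. A hedged sketch using the volume and host names from this thread; the commands are only printed (the real session-creation steps need passwordless SSH from the master nodes to the slave site first):

```shell
# Dry-run sketch of the standard gluster geo-replication setup sequence.
MASTER_VOL="geo-master"
SLAVE="slave1.mydomain2.com::geo-slave"
echo "gluster system:: execute gsec_create"                         # pem keys
echo "gluster volume geo-replication $MASTER_VOL $SLAVE create push-pem"
echo "gluster volume geo-replication $MASTER_VOL $SLAVE start"
echo "gluster volume geo-replication $MASTER_VOL $SLAVE status"
```

Once the session exists at the Gluster level, the oVirt UI picks it up for monitoring even though the slave cluster is managed by a different engine.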



As a side note, in the oVirt web UI the "Cancel" button of the "New 
Geo-Replication" dialog does not seem to work: Storage > Volumes > select your 
volume > click "Geo-Replication".

Any good documentation you can point me to is welcome.

Thank you for the swift assistance.

Regards,

Adrian


[ovirt-users] ovirt 4.3.7 geo-replication not working

2019-12-11 Thread adrianquintero
Hi,
I am trying to set up geo-replication between 2 sites, but I keep getting:
[root@host1 ~]#  gluster vol geo-rep geo-master slave1.mydomain2.com::geo-slave 
status
 
MASTER NODE MASTER VOLMASTER BRICK 
SLAVE USERSLAVE  SLAVE NODESTATUSCRAWL 
STATUSLAST_SYNCED  
--
host1.mydomain1.comgeo-master/gluster_bricks/geo-master/geo-master
root  slave1.mydomain2.com::geo-slaveN/A   FaultyN/A
 N/A  
host2.mydomain2.comgeo-master/gluster_bricks/geo-master/geo-master
root  slave1.mydomain2.com::geo-slaveN/A   FaultyN/A
 N/A  
vmm11.virt.iad3pgeo-master/gluster_bricks/geo-master/geo-masterroot 
 slave1.mydomain2.com::geo-slaveN/A   FaultyN/A 
N/A


The oVirt GUI has an icon on the volume that says "volume data is being 
geo-replicated", but we know that is not the case.
From the logs I can see:
[2019-12-11 19:57:48.441557] I [fuse-bridge.c:6810:fini] 0-fuse: Unmounting 
'/tmp/gsyncd-aux-mount-5WaCmt'.
[2019-12-11 19:57:48.441578] I [fuse-bridge.c:6815:fini] 0-fuse: Closing fuse 
connection to '/tmp/gsyncd-aux-mount-5WaCmt'

and
[2019-12-11 19:45:14.785758] I [monitor(monitor):278:monitor] Monitor: worker 
died in startup phase brick=/gluster_bricks/geo-master/geo-master

thoughts?

thanks,

Adrian


[ovirt-users] Re: Migrate VM from oVirt to oVirt

2019-11-28 Thread adrianquintero
Thank you Luca, this process worked for me; just wondering why I could not 
achieve this by generating an OVA.

Thanks again.

Adrian.


[ovirt-users] Re: Migrate VM from oVirt to oVirt

2019-11-14 Thread adrianquintero
Thanks, I will also set up ManageIQ and test.


[ovirt-users] Re: Migrate VM from oVirt to oVirt

2019-11-13 Thread adrianquintero
Thanks Jayme,
It worked; wondering if there is a more straightforward way.

regards,


[ovirt-users] Migrate VM from oVirt to oVirt

2019-11-11 Thread adrianquintero
Hello,
What would be the procedure to migrate a VM from oVirt to oVirt?

Migrate from  oVirt 4.2 running on Site A to oVirt 4.3 Site B.

thanks!


[ovirt-users] oVirt 4.3.6: volume start: engine: failed: Failed to find brick directory /gluster_bricks/engine/engine for volume engine. Reason : No such file or directory

2019-11-03 Thread adrianquintero
While trying to deploy a single-node hyperconverged setup using the web UI, I 
ran into this issue (oVirt 4.3.5 and 4.3.6):
the mount point "/gluster_bricks/engine" exists, however 
"/gluster_bricks/engine/engine" is not created.
I read a post about Gluster 6.1 and having to add 
/gluster_bricks/engine/.glusterfs/indices manually; I tried it, but it did not 
work and I could not deploy the hosted-engine VM.


[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-11-01 Thread adrianquintero
Parth,
I am able to install successfully using the UI; however, I am trying to 
automate the installation process for a hyperconverged setup using Ansible. 
The install works, but I have the issue that the last 2 of the 3 servers are 
not joined by the engine, and this is what I see in the Ansible output:
https://ovirt-engine2.example.com/ovirt-engine/services/pki-resource?resource=engine-certificate=OPENSSH-PUBKEY;

I then tried to add them manually after the engine was online, using host1, 
but they are not capable of hosting the engine (the crown icon is not shown as 
it is on host1).
I logged in to each of the hosts and see the following:

[root@host1 ~]# vdsm-client Host getCapabilities | grep hostedEngineDeployed
"hostedEngineDeployed": true, 

[root@host2 ~]# vdsm-client Host getCapabilities | grep hostedEngineDeployed
"hostedEngineDeployed": true, 
[root@host3 ~]# vdsm-client Host getCapabilities | grep hostedEngineDeployed
"hostedEngineDeployed": true, 

But if I right-click the Hosted Engine VM in the UI and click Migrate, the 
"Destination Host" entry indicates "No available host to migrate VMs to".



The Gluster setup worked OK using the Ansible playbooks, following:
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html/automating_rhhi_for_virtualization_deployment/index

Thoughts?


[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-22 Thread adrianquintero
I'm still not able to incorporate host2 and host3 through the Ansible 
hyperconverged deployment. After the deploy the engine is up, but I can't add 
the servers manually as it complains about the host keys...

TASK [Set Engine public key as authorized key without validating the TLS/SSL 
certificates] 
**
task path: 
/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/add_hosts_storage_domains.yml:4


Failed: [host1.example.com -> host2.example.com] (item=host2.example.com) => {
"ansible_loop_var": "host", 
"changed": false, 
"host": "host2.example.com", 
"invocation": {
"module_args": {
"comment": null, 
"exclusive": false, 
"follow": false, 
"key": 
"https://ovirt-engine2.example.com/ovirt-engine/services/pki-resource?resource=engine-certificate=OPENSSH-PUBKEY;,
 
"key_options": null, 
"manage_dir": true, 
"path": null, 
"state": "present", 
"user": "root", 
"validate_certs": false
}
}, 
"msg": "Error getting key from: 
https://ovirt-engine2.example.com/ovirt-engine/services/pki-resource?resource=engine-certificate=OPENSSH-PUBKEY;
}

thoughts?

Thank you,

Adrian


[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-21 Thread adrianquintero
On the other hosts I see entries like the following:

2019-10-21 15:41:32,767-0400 ERROR (periodic/0) [root] failed to retrieve 
Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted 
Engine setup finished? (api:191)

2019-10-21 15:41:47,799-0400 ERROR (periodic/0) [root] failed to retrieve 
Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted 
Engine setup finished? (api:191)

2019-10-21 15:41:49,916-0400 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) (dispatcher:83)


[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-21 Thread adrianquintero
Hi Sahina,
I have checked the vdsm.log of the host and I can see the following:
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'62bfe528-cd5c-4b6c-808a-b097fef76629',)
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'8c2df9c6-b505-4499-abb9-0d15db80f33e',)
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'5d9f7d05-1fcc-4f99-9470-4e57cd15f128',)
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'fe24d88e-6acf-42d7-a857-eaf1f8deb24a',)

StorageDomainDoesNotExist: Storage domain does not exist: 
(u'5d9f7d05-1fcc-4f99-9470-4e57cd15f128',)
2019-10-21 15:03:45,742-0400 ERROR (monitor/fe24d88) [storage.Monitor] Error 
checking domain fe24d88e-6acf-42d7-a857-eaf1f8deb24a (monitor:425)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 406, in 
_checkDomainStatus
self.domain.selftest()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 48, in 
__getattr__
return getattr(self.getRealDomain(), attrName)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in 
getRealDomain
return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in 
_realProduce
domain = self._findDomain(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in 
_findDomain
return findMethod(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterSD.py", line 62, 
in findDomain
return GlusterStorageDomain(GlusterStorageDomain.findDomainPath(sdUUID))
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterSD.py", line 58, 
in findDomainPath
raise se.StorageDomainDoesNotExist(sdUUID)


Is there a way to fix this?

Also, when I try to reinstall one of the hosts (6-node cluster), it reinstalls 
just fine (except for host1) but the engine capabilities are not deployed to it.

thoughts?

thanks,

Adrian


[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-20 Thread adrianquintero
Can anyone point out whether 
"https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html-single/automating_rhhi_for_virtualization_deployment/index;
 works for a 3-node hyperconverged setup in oVirt?

I have tried it, and it seems that every time I get different errors; it does 
not matter whether I use Dell HW or HPE HW.

Maybe I am not following it properly, or not all of the documentation for an 
automated oVirt HCI deployment with Ansible on a 3-node setup is out there. If 
you can point me to the right documentation I would appreciate it; for now I am 
only doing web UI deployments.

thanks,

Adrian


[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-19 Thread adrianquintero
I checked the vdsm.log of the host and see, among other warnings, the following:

2019-10-19 14:21:19,427-0400 ERROR (jsonrpc/6) [root] failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished? (api:191)
2019-10-19 14:21:19,887-0400 ERROR (jsonrpc/7) [storage.TaskManager.Task] 
(Task='6507ff29-62c9-4056-b7a2-e4fe9395ec44') Unexpected error (task:875)
2019-10-19 14:21:19,887-0400 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) (dispatcher:83)
2019-10-19 14:21:21,894-0400 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='dc4820ae-c6e9-46cf-868f-c30c584c3604') Unexpected error (task:875)
2019-10-19 14:21:21,894-0400 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) (dispatcher:83)
2019-10-19 14:21:23,892-0400 ERROR (jsonrpc/2) [storage.TaskManager.Task] 
(Task='c64aa233-18b3-4a13-8791-5fd5ba03774a') Unexpected error (task:875)
2019-10-19 14:21:23,892-0400 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) (dispatcher:83)
2019-10-19 14:21:25,925-0400 ERROR (jsonrpc/0) [storage.TaskManager.Task] 
(Task='3f8e7814-c72f-41af-9969-971d25e42d63') Unexpected error (task:875)
2019-10-19 14:21:25,925-0400 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) (dispatcher:83)
2019-10-19 14:21:27,888-0400 ERROR (jsonrpc/3) [storage.TaskManager.Task] 
(Task='62b5c85e-cf79-4167-bb4e-65620b4cc16a') Unexpected error (task:875)
2019-10-19 14:21:27,888-0400 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) (dispatcher:83)
2019-10-19 14:21:29,882-0400 ERROR (jsonrpc/7) [storage.TaskManager.Task] 
(Task='fd19fa4a-72e0-4a69-bd57-28532af092a4') Unexpected error (task:875)
2019-10-19 14:21:29,882-0400 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) (dispatcher:83)
2019-10-19 14:21:30,454-0400 ERROR (periodic/3) [root] failed to retrieve 
Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted 
Engine setup finished? (api:191)
2019-10-19 14:21:31,899-0400 ERROR (jsonrpc/5) [storage.TaskManager.Task] 
(Task='e370869a-07de-4d75-9e1c-410202e48a60') Unexpected error (task:875)
2019-10-19 14:21:31,899-0400 ERROR (jsonrpc/5) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) (dispatcher:83)
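When prepareImage repeatedly reports "Volume does not exist", one quick check is whether the volume file is still present under the hosted-engine storage mount (the same check used earlier in this thread). A minimal sketch; the glusterSD mount path is an assumption taken from this environment and should be adjusted:

```shell
# Look for the missing leaf volume on the hosted-engine gluster mount.
# The mount point is an assumption for illustration; "|| true" keeps the
# check from aborting if the path does not exist on this host.
MNT=/rhev/data-center/mnt/glusterSD/
find "$MNT" -type f -name '1ba19dea-32e9-465a-a43c-eec81a88d2e0' 2>/dev/null || true
```

If the file shows up on some bricks but not on the mount, that points at the removed-brick/heal problem rather than at vdsm itself.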



2019-10-19 14:21:21,868-0400 INFO  (jsonrpc/5) [vdsm.api] FINISH 
getStorageDomainInfo return={'info': {'uuid': 
u'4b87a5de-c976-4982-8b62-7cffef4a22d8', 'version': '5', 'role': 'Master', 
'alignment': 1048576, 'remotePath': 'vmm13.virt.iad3p.rsapps.net:/engine', 
'block_size': 512, 'type': 'GLUSTERFS', 'class': 'Data', 'pool': 
['7d3fb14c-ebf0-11e9-9ee5-00163e05e135'], 'name': 'HostedEngine'}} 
from=::1,33908, task_id=55a03cc9-1f9f-44b6-b87f-bbb73cefee53 (api:54)
2019-10-19 14:21:21,868-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call 
StorageDomain.getInfo succeeded in 0.01 seconds (__init__:312)
2019-10-19 14:21:21,878-0400 INFO  (jsonrpc/4) [vdsm.api] START 
prepareImage(sdUUID=u'4b87a5de-c976-4982-8b62-7cffef4a22d8', 
spUUID=u'----', 
imgUUID=u'7f969d21-1445-4993-a7d8-3af8fb83cbd4', 
leafUUID=u'1ba19dea-32e9-465a-a43c-eec81a88d2e0', allowIllegal=False) 
from=::1,33908, task_id=dc4820ae-c6e9-46cf-868f-c30c584c3604 (api:48)
2019-10-19 14:21:21,894-0400 INFO  (jsonrpc/4) [vdsm.api] FINISH prepareImage 
error=Volume does not exist: (u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',) 
from=::1,33908, task_id=dc4820ae-c6e9-46cf-868f-c30c584c3604 (api:52)
2019-10-19 14:21:21,894-0400 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='dc4820ae-c6e9-46cf-868f-c30c584c3604') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3203, in 
prepareImage
raise se.VolumeDoesNotExist(leafUUID)
VolumeDoesNotExist: Volume does not exist: 
(u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',)
2019-10-19 14:21:21,894-0400 INFO  (jsonrpc/4) [storage.TaskManager.Task] 
(Task='dc4820ae-c6e9-46cf-868f-c30c584c3604') aborting: Task is aborted: 
"Volume does not exist: (u'1ba19dea-32e9-465a-a43c-eec81a88d2e0',)" - code 201 
(task:1181)
2019-10-19 14:21:21,894-0400 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 

[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-16 Thread adrianquintero
Strahil,
this is what I see for each service:
all services are active and running except for ovirt-ha-agent, which says 
"activating"; even though the rest of the services are active/running, they 
still show a few errors/warnings.

---
● sanlock.service - Shared Storage Lease Manager
   Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor 
preset: disabled)
   Active: active (running) since Thu 2019-10-17 00:47:20 EDT; 2min 1s ago
  Process: 16495 ExecStart=/usr/sbin/sanlock daemon (code=exited, 
status=0/SUCCESS)
 Main PID: 2023
Tasks: 10
   CGroup: /system.slice/sanlock.service
   └─2023 /usr/sbin/sanlock daemon

Oct 17 00:47:20 host1.example.com systemd[1]: Starting Shared Storage Lease 
Manager...
Oct 17 00:47:20 host1.example.com systemd[1]: Started Shared Storage Lease 
Manager.
Oct 17 00:47:20 host1.example.com sanlock[16496]: 2019-10-17 00:47:20 33920 
[16496]: lockfile setlk error /var/run/sanlock/sanlock.pid: Resource 
temporarily unavailable
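The sanlock "lockfile setlk error ... Resource temporarily unavailable" line is the usual symptom of a second daemon instance failing to lock the pidfile that the already-running daemon (main PID 2023 above) holds, rather than a storage fault. A small illustration of the mechanism; it uses flock instead of fcntl F_SETLK (fcntl locks do not conflict within a single process, so flock is used here to show the effect), and the path is hypothetical:

```python
import fcntl

# First "daemon instance" takes an exclusive non-blocking lock on the
# pidfile; a second open of the same file then fails to lock it, which
# is what sanlock reports as "Resource temporarily unavailable".
path = "/tmp/sanlock_demo.pid"   # hypothetical pidfile for illustration
f1 = open(path, "w")
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)   # first instance: succeeds

f2 = open(path, "w")                              # "second instance"
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    already_locked = False
except OSError:                                   # EAGAIN/EWOULDBLOCK
    already_locked = True

print(already_locked)  # True: the pidfile lock is already held
```

In other words, the message by itself just means a duplicate start attempt; the service shown as active (running) is the one that won the lock.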
● supervdsmd.service - Auxiliary vdsm service for running helper functions as 
root
   Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static; vendor 
preset: enabled)
   Active: active (running) since Thu 2019-10-17 00:43:06 EDT; 6min ago
 Main PID: 15277 (supervdsmd)
Tasks: 5
   CGroup: /system.slice/supervdsmd.service
   └─15277 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile 
/var/run/vdsm/svdsm.sock

Oct 17 00:43:06 host1.example.com systemd[1]: Started Auxiliary vdsm service 
for running helper functions as root.
Oct 17 00:43:07 host1.example.com supervdsmd[15277]: failed to load module 
nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or 
directory
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Thu 2019-10-17 00:47:27 EDT; 1min 54s ago
  Process: 16402 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh 
--post-stop (code=exited, status=0/SUCCESS)
  Process: 16499 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start (code=exited, status=0/SUCCESS)
 Main PID: 16572 (vdsmd)
Tasks: 38
   CGroup: /system.slice/vdsmd.service
   └─16572 /usr/bin/python2 /usr/share/vdsm/vdsmd

Oct 17 00:47:28 host1.example.com vdsm[16572]: WARN MOM not available.
Oct 17 00:47:28 host1.example.com vdsm[16572]: WARN MOM not available, KSM 
stats will be missing.
Oct 17 00:47:28 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:47:43 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:47:58 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:48:13 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:48:28 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:48:43 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:48:58 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:49:13 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability 
Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; 
vendor preset: disabled)
   Active: active (running) since Thu 2019-10-17 00:44:11 EDT; 5min ago
 Main PID: 16379 (ovirt-ha-broker)
Tasks: 2
   CGroup: /system.slice/ovirt-ha-broker.service
   └─16379 /usr/bin/python 
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

Oct 17 00:44:11 host1.example.com systemd[1]: Started oVirt Hosted Engine High 
Availability Communications Broker.
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; 
vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Thu 2019-10-17 
00:49:13 EDT; 8s ago
  Process: 16925 ExecStart=/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent 
(code=exited, status=157)
 Main PID: 16925 (code=exited, status=157)

Oct 17 00:49:13 host1.example.com systemd[1]: Unit 

[ovirt-users] oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-16 Thread adrianquintero
Hi, 
I am trying to re-install a host from the web UI in oVirt 4.3.5, but it always 
fails and goes to "Setting Host state to Non-Operational"

From the engine.log I see the following WARN/ERROR:
2019-10-16 16:32:57,263-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: 
VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the 
Storage Domain(s)  attached to the Data Center Default-DC1. Setting 
Host state to Non-Operational.
2019-10-16 16:32:57,271-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: 
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
host1.example.com.There is no other host in the data center that can be used to 
test the power management settings. 
2019-10-16 16:32:57,276-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: 
CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to 
Storage Pool Default-DC1
2019-10-16 16:35:06,151-04 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand] 
(EE-ManagedThreadFactory-engine-Thread-137245) [] Could not connect host 
'host1.example.com' to pool 'Default-DC1': Error storage pool connection: 
(u"spUUID=7d3fb14c-ebf0-11e9-9ee5-00163e05e135, 
msdUUID=4b87a5de-c976-4982-8b62-7cffef4a22d8, masterVersion=1, hostID=1, 
domainsMap={u'8c2df9c6-b505-4499-abb9-0d15db80f33e': u'active', 
u'4b87a5de-c976-4982-8b62-7cffef4a22d8': u'active', 
u'5d9f7d05-1fcc-4f99-9470-4e57cd15f128': u'active', 
u'fe24d88e-6acf-42d7-a857-eaf1f8deb24a': u'active'}",)
2019-10-16 16:35:06,248-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: 
VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the 
Storage Domain(s)  attached to the Data Center Default-DC1. Setting 
Host state to Non-Operational.
2019-10-16 16:35:06,256-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: 
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
host1.example.com.There is no other host in the data center that can be used to 
test the power management settings. 
2019-10-16 16:35:06,261-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: 
CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to 
Storage Pool Default-DC1
2019-10-16 16:37:46,011-04 ERROR 
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
Connection timeout for host 'host1.example.com', last response arrived 1501 ms 
ago.
2019-10-16 16:41:57,095-04 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand] 
(EE-ManagedThreadFactory-engine-Thread-137527) [17f3aadd] Could not connect 
host 'host1.example.com' to pool 'Default-DC1': Error storage pool connection: 
(u"spUUID=7d3fb14c-ebf0-11e9-9ee5-00163e05e135, 
msdUUID=4b87a5de-c976-4982-8b62-7cffef4a22d8, masterVersion=1, hostID=1, 
domainsMap={u'8c2df9c6-b505-4499-abb9-0d15db80f33e': u'active', 
u'4b87a5de-c976-4982-8b62-7cffef4a22d8': u'active', 
u'5d9f7d05-1fcc-4f99-9470-4e57cd15f128': u'active', 
u'fe24d88e-6acf-42d7-a857-eaf1f8deb24a': u'active'}",)
2019-10-16 16:41:57,199-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: 
VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the 
Storage Domain(s)  attached to the Data Center Default-DC1. Setting 
Host state to Non-Operational.
2019-10-16 16:41:57,211-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: 
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
host1.example.com.There is no other host in the data center that can be used to 
test the power management settings. 
2019-10-16 16:41:57,216-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: 
CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to 
Storage Pool Default-DC1

Any ideas why this might be happening?
I have researched this, but have not been able to find a solution.

thanks,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 

[ovirt-users] Re: Ovirt 4.3.5/6 automated install fails

2019-10-11 Thread adrianquintero
I think I finally found the issue: it was related to 
tasks/add_hosts_storage_domains.yml, where hosts: localhost should be set to 
hosts: host1.example.other.com

As mentioned, all worked except for one warning:

TASK [ovirt.hosted_engine_setup : Always revoke the SSO token] 
***
fatal: [host1.example.other.com]: FAILED! => {"changed": false, "msg": "You 
must specify either 'url' or 'hostname'."}
...ignoring

host1.example.other.com : ok=418  changed=150  unreachable=0failed=0
skipped=220  rescued=0ignored=1   

I researched a bit about this error and found 
https://github.com/ansible/ansible/issues/53379 but not sure if this is still 
the case.

any feedback is welcome.

thank you,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN5OWABCO4FCYTGBUXJI3HAMNSOWPPSW/


[ovirt-users] Re: Ovirt 4.3.5/6 automated install fails

2019-10-10 Thread adrianquintero
Simone,
I was able to deploy the environment with your suggestion above; the issue was 
my FQDN (host1.example.com vs host1.example.other.com). However, right at the 
end the playbooks raised a couple of errors/warnings that prevented the other 
2 hosts from being joined into the cluster. Note that the hosted engine is up, 
running, and reachable through the web UI.

TASK [Add additional gluster hosts to engine] 
**
task path: 
/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/add_hosts_storage_domains.yml:17
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error 
was: 'ovirt_auth' is undefined\n\nThe error appears to be in 
'/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/add_hosts_storage_domains.yml':
 line 17, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n\n  - name: Add 
additional gluster hosts to engine\n^ here\n"
}
...ignoring

TASK [Add additional glusterfs storage domains] ***
task path: 
/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/add_hosts_storage_domains.yml:34
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error 
was: 'he_host_name' is undefined\n\nThe error appears to be in 
'/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/add_hosts_storage_domains.yml':
 line 34, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n\n  - name: \"Add 
additional glusterfs storage domains\"\n^ here\n"
}
...ignoring
META: ran handlers
META: ran handlers

PLAY RECAP *
localhost: ok=4changed=1unreachable=0
failed=0skipped=0rescued=0ignored=2   
host1.example.other.com : ok=408  changed=144  unreachable=0failed=0
skipped=210  rescued=0ignored=1   
host2.example.other.com : ok=34   changed=17   unreachable=0failed=0
skipped=70   rescued=0ignored=0   
host3.example.other.com : ok=34   changed=17   unreachable=0failed=0
skipped=70   rescued=0ignored=0


Thanks,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6RBUDCIHBTFMXVILJNOLMB3O46KNRTUZ/


[ovirt-users] Re: Ovirt 4.3.5/6 automated install fails

2019-10-09 Thread adrianquintero
Hi Simone,

I do not want to configure the VM to get an IP from DHCP, and I am not 100% 
sure whether I need to define the information in 
/usr/share/ansible/roles/ovirt.hosted_engine_setup/defaults/main.yml, because I 
see the following:
--
# Define if using STATIC ip configuration
he_vm_ip_addr: null
he_vm_ip_prefix: null
he_dns_addr: null  # up to 3 DNS servers IPs can be added
he_vm_etc_hosts: false  # user can add lines to /etc/hosts on the engine VM
he_default_gateway: null
he_network_test: 'dns'  # can be: 'dns', 'ping', 'tcp' or 'none'
he_tcp_t_address: null
he_tcp_t_port: null
--
But I thought that playbooks/he_gluster_vars.json was used for that?


I tried to follow: 
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html/automating_rhhi_for_virtualization_deployment/setting-deployment-variables


From those instructions I modified:
1.-playbooks/gluster_inventory.yml
   # Here I don't see the he_ansible_host_name variable
   # I do have:
   hc_nodes:
 hosts:
host1.example.com
host2.example.com
host3.example.com

Should I add "he_ansible_host_name": "host1.example.com" to this file also?

2.-playbooks/he_gluster_vars.json
 "he_ansible_host_name": "host1.example.com"

I have 3 servers freshly rekicked and ready to redeploy the hosted engine, but 
will wait for your thoughts as I might be missing something from my process.


thank you,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PYX6L24VV26RSXZHAA7UN3CC3DFFHF4M/


[ovirt-users] Ovirt 4.3.5/6 automated install fails

2019-10-08 Thread adrianquintero
I am having issues while trying to deploy an automated HC install

ansible 2.8.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', 
u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 
(Red Hat 4.8.5-36)]


oVirt Node = 4.3.5 or 4.3.6


1.- Followed the procedure listed in the following link:
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html/automating_rhhi_for_virtualization_deployment/setting-deployment-variables

2.-Ran the playbook within Path: 
/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment:
ansible-playbook -i gluster_inventory.yml hc_deployment.yml 
--extra-vars='@he_gluster_vars.json'

3.-Json file
[root@host1 hc-ansible-deployment]# cat he_gluster_vars.json
{
  "he_appliance_password": "changeme",
  "he_admin_password": "changeme",
  "he_domain_type": "glusterfs",
  "he_fqdn": "ovirt-engine.example.com",
  "he_vm_mac_addr": "00:16:fe:05:e1:ee",
  "he_default_gateway": "10.10.10.1",
  "he_mgmt_network": "ovirtmgmt",
  "he_ansible_host_name": "host1.example.com",
  "he_storage_domain_name": "HostedEngine",
  "he_storage_domain_path": "/engine",
  "he_storage_domain_addr": "vmm10.virt.aid3p",
  "he_mount_options": 
"backup-volfile-servers=host2.example.com:host3.example.com",
  "he_bridge_if": "eno49",
  "he_enable_hc_gluster_service": true,
  "he_mem_size_MB": "32768",
  "he_cluster": "Default",
}
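Independent of the variable values, it can help to confirm the vars file is strict JSON before invoking ansible-playbook; note that strict JSON does not allow a trailing comma after the last key (as after "he_cluster" above). A hedged sanity check (python3's json.tool is used here; python2's works the same way):

```shell
# Validate the extra-vars file: strict JSON parsers reject trailing
# commas, a common cause of --extra-vars load problems. The branch keeps
# the check from aborting when the file is absent.
if python3 -m json.tool he_gluster_vars.json > /dev/null 2>&1; then
    echo "he_gluster_vars.json is valid JSON"
else
    echo "he_gluster_vars.json is missing or not strict JSON"
fi
```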

Error:
task path: 
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml:8
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error 
was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 
'he_host_ip'\n\nThe error appears to be in 
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml':
 line 8, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n  timeout: 180\n  - 
name: Add an entry for this host on /etc/hosts on the local VM\n^ here\n"
}


task path: 
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml:8
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error 
was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 
'he_host_ip'\n\nThe error appears to be in 
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml':
 line 8, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n  timeout: 180\n  - 
name: Add an entry for this host on /etc/hosts on the local VM\n^ here\n"
}


task path: 
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml:57
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error 
was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 
'he_local_vm_dir'\n\nThe error appears to be in 
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml':
 line 57, column 7, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n  delegate_to: \"{{ 
he_ansible_host_name }}\"\n- name: Get local VM dir path\n  ^ here\n"
}
PLAY RECAP 
**
localhost  : ok=184  changed=56   unreachable=0failed=2
skipped=65   rescued=0ignored=0   
host1.example.com   : ok=34   changed=17   unreachable=0failed=0
skipped=70   rescued=0ignored=0   
host2.example.com   : ok=37   changed=20   unreachable=0failed=0
skipped=100  rescued=0ignored=0   
host3.example.com   : ok=34   changed=17   unreachable=0failed=0
skipped=70   rescued=0ignored=0   

So it seems that, for some strange reason, it is not getting the correct 
he_host_ip and he_local_vm_dir values.

has anybody else encountered this error?

-Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/737OW75GZGPLXNSOKGUVQB7ZXRCG4FJP/


[ovirt-users] Re: oVirt 4.3.5 glusterfs 6.3 performance tuning

2019-10-04 Thread adrianquintero
Hello Sahina,
Apologies for the tardiness on this, I will try to send you the collected data 
by next week.

thanks

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XR3L33HVMVGSOJGY32PNSCLJDRIW6TSF/


[ovirt-users] Re: ovirt 4.3.6 kickstart install fails when

2019-10-03 Thread adrianquintero
I tried the suggestions from here, but I hit the same issue:
https://rhv.bradmin.org/ovirt-engine/docs/Installing_Red_Hat_Virtualization_as_a_standalone_Manager_with_local_databases/Installing_Hosts_for_RHV_SM_localDB_deploy.html
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKKGBJENU4YHBZUNURAL4WGTM62FX5PZ/


[ovirt-users] ovirt 4.3.6 kickstart install fails when

2019-10-03 Thread adrianquintero
Kikckstart entries:

-
liveimg --url=http://192.168.1.10/ovirt-iso-436/ovirt-node-ng-image.squashfs.img

clearpart --drives=sda --initlabel --all
autopart --type=thinp
rootpw --iscrypted $1$xxxbSLxxgwc0
lang en_US
keyboard --vckeymap=us --xlayouts='us'
timezone --utc America/New_York 
--ntpservers=0.centos.pool.ntp.org,1.centos.pool.ntp.org,2.centos.pool.ntp.org,3.centos.pool.ntp
#network  --hostname=host21.example.com
network --onboot yes --device eno49
zerombr
text

reboot
-

The error on screen appears right after "Creating swap on /dev/mapper/onn_host1-swap":
DeviceCreatorError: ('lvcreate failed for onn_host1/pool00: running /sbin/lvm 
lvcreate --thinpool onn_host1/pool00 --size 464304m --poolmetadatasize 232 
--chunksize 64 --config devices { preffered_names=["^/dev/mapper/", "^/dev/md", 
"^/dev/sd"] } failed', ' onn_host1-pool00')


No issue with oVirt 4.3.5 and the same kickstart file.
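Since the failure happens inside autopart's thin-pool creation, one hedged workaround is to spell out the thin pool in the kickstart instead of relying on autopart --type=thinp. This is a sketch only; the volume group name and sizes below are placeholders, not values from this host:

```
clearpart --drives=sda --initlabel --all
part /boot --fstype=ext4 --size=1024
part pv.01 --grow
volgroup onn pv.01
# Explicit thin pool instead of letting autopart size pool00 itself
logvol none --vgname=onn --name=pool00 --thinpool --size=200000 --grow
logvol / --vgname=onn --name=root --thin --poolname=pool00 --fstype=ext4 --size=100000
logvol swap --vgname=onn --name=swap --fstype=swap --recommended
```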


Any suggestions?

Thanks,

AQ
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YVHULSDL3WMSR52IYLP7VKLGSVPDQZGY/


[ovirt-users] Re: oVirt 4.3.5 WARN no gluster network found in cluster

2019-10-03 Thread adrianquintero
It seems to be working:
a ping to host1.example.com returns the main management IP.

From host1.example.com I can ping the 2 gluster IPs configured on the other 2 
hosts in the cluster, i.e. ping -I ens4f1 192.168.0.68. However, I can't ping 
the host's own gluster IP, i.e. ping -I ens4f1 192.168.0.69.

and in the engine logs I still see:
 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler7) [780d3310] Could not associate brick 
host1.example.com:/gluster_bricks/vmstore/vmstore' of volume 
'x9e0f-649055e0e07b' with correct network as no gluster network found 
in cluster 'xx-11e9-b8d3-00163e5d860d'
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/22NZCQGUETHGNIUNLPIH6H2ZYGCBKJTK/


[ovirt-users] Re: oVirt 4.3.5 WARN no gluster network found in cluster

2019-08-26 Thread adrianquintero
Hi,
Yes I have glusternet shows the role  of "migration" and "gluster".
Hosts show 1 network connected to management and the other to logical network 
"glusternet"

Just not sure if I am interpreting right?
thanks,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MFQE6XRVHA76PI47O253VHV4EK3M5QDQ/


[ovirt-users] oVirt 4.3.5 glusterfs 6.3 performance tuning

2019-08-21 Thread adrianquintero
Hello,
I have a hyperconverged setup using oVirt 4.3.5, and the "optimize for ovirt 
store" option seems to fail on gluster volumes.
I am seeing poor performance and am trying to work out how to tune gluster for 
better performance.
Can you provide any suggestions on the following volume settings(parameters)?

option Value
-- -
cluster.lookup-unhashedon
cluster.lookup-optimizeon
cluster.min-free-disk  10%
cluster.min-free-inodes5%
cluster.rebalance-statsoff
cluster.subvols-per-directory  (null)
cluster.readdir-optimize   off
cluster.rsync-hash-regex   (null)
cluster.extra-hash-regex   (null)
cluster.dht-xattr-name trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid   off
cluster.rebal-throttle normal
cluster.lock-migration off
cluster.force-migrationoff
cluster.local-volume-name  (null)
cluster.weighted-rebalance on
cluster.switch-pattern (null)
cluster.entry-change-log   on
cluster.read-subvolume (null)
cluster.read-subvolume-index   -1
cluster.read-hash-mode  1
cluster.background-self-heal-count  8
cluster.metadata-self-heal off
cluster.data-self-heal off
cluster.entry-self-healoff
cluster.self-heal-daemon   on
cluster.heal-timeout600
cluster.self-heal-window-size   1
cluster.data-change-logon
cluster.metadata-change-logon
cluster.data-self-heal-algorithm   full
cluster.eager-lock enable
disperse.eager-lockon
disperse.other-eager-lock  on
disperse.eager-lock-timeout 1
disperse.other-eager-lock-timeout   1
cluster.quorum-typeauto
cluster.quorum-count   (null)
cluster.choose-local   off
cluster.self-heal-readdir-size 1KB
cluster.post-op-delay-secs  1
cluster.ensure-durability  on
cluster.consistent-metadatano
cluster.heal-wait-queue-length  128
cluster.favorite-child-policy  none
cluster.full-lock  yes
diagnostics.latency-measurementoff
diagnostics.dump-fd-stats  off
diagnostics.count-fop-hits off
diagnostics.brick-log-levelINFO
diagnostics.client-log-level   INFO
diagnostics.brick-sys-log-levelCRITICAL
diagnostics.client-sys-log-level   CRITICAL
diagnostics.brick-logger   (null)
diagnostics.client-logger  (null)
diagnostics.brick-log-format   (null)
diagnostics.client-log-format  (null)
diagnostics.brick-log-buf-size  5
diagnostics.client-log-buf-size 5
diagnostics.brick-log-flush-timeout 120
diagnostics.client-log-flush-timeout120
diagnostics.stats-dump-interval 0
diagnostics.fop-sample-interval 0
diagnostics.stats-dump-format  json
diagnostics.fop-sample-buf-size 65535
diagnostics.stats-dnscache-ttl-sec  86400
performance.cache-max-file-size 0
performance.cache-min-file-size 0
performance.cache-refresh-timeout   1
performance.cache-priority  
performance.cache-size 32MB
performance.io-thread-count 16
performance.high-prio-threads   16
performance.normal-prio-threads 16
performance.low-prio-threads32
performance.least-prio-threads  1
performance.enable-least-priority  on
performance.iot-watchdog-secs  (null)
performance.iot-cleanup-disconnected-reqs   off
performance.iot-pass-through   false
performance.io-cache-pass-through  false
performance.cache-size 128MB
performance.qr-cache-timeout1
performance.cache-invalidation false
performance.ctime-invalidation false
performance.flush-behind   on
performance.nfs.flush-behind   on
performance.write-behind-window-size   1MB
performance.resync-failed-syncs-after-fsync   off
performance.nfs.write-behind-window-size   1MB
performance.strict-o-directon
performance.nfs.strict-o-directoff
performance.strict-write-ordering  off
performance.nfs.strict-write-ordering  off
performance.write-behind-trickling-writes   on
performance.aggregate-size 128KB
performance.nfs.write-behind-trickling-writes   on
performance.lazy-open  yes
performance.read-after-openyes
performance.open-behind-pass-through   false
performance.read-ahead-page-count   4
performance.read-ahead-pass-throughfalse
performance.readdir-ahead-pass-through   false
performance.md-cache-pass-through  false
performance.md-cache-timeout1
performance.cache-swift-metadata   true
performance.cache-samba-metadata   false
performance.cache-capability-xattrstrue
performance.cache-ima-xattrs   true
performance.md-cache-statfsoff
performance.xattr-cache-list
performance.nl-cache-pass-through  false
features.encryptionoff
network.frame-timeout   1800
network.ping-timeout30
network.tcp-window-size(null)
client.ssl off
network.remote-dio off
client.event-threads4
client.tcp-user-timeout 0
client.keepalive-time   20
client.keepalive-interval   2

[ovirt-users] oVirt 4.3.5 WARN no gluster network found in cluster

2019-08-21 Thread adrianquintero
Hi,
I have a 4.3.5 hyperconverged setup with 3 hosts, each host has 2x10G NIC ports

Host1:
NIC1: 192.168.1.11
NIC2: 192.168.0.67 (Gluster)

Host2:
NIC1: 10.10.1.12
NIC2: 192.168.0.68 (Gluster)

Host3:
NIC1: 10.10.1.13
NIC2: 192.168.0.69 (Gluster)

I am able to ping all the gluster IPs from within the hosts, i.e. from host1 I 
can ping 192.168.0.68 and 192.168.0.69.
However, from the HostedEngine VM I can't ping any of those IPs:
[root@ovirt-engine ~]# ping 192.168.0.9
PING 192.168.0.60 (192.168.0.60) 56(84) bytes of data.
From 10.10.255.5 icmp_seq=1 Time to live exceeded


and on the HostedEngine I see the following WARNINGs (only for host1), which
make me think that I am not using a separate network exclusively for gluster.

2019-08-21 21:04:34,215-04 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler10) [397b6038] Could not associate brick 
'host1.example.com:/gluster_bricks/engine/engine' of volume 
'ac1f73ce-cdf0-4bb9-817d-xxcxxx' with correct network as no gluster network 
found in cluster '11e9-b8d3-00163e5d860d'


Any ideas?

thank you!!
2019-08-21 21:04:34,220-04 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler10) [397b6038] Could not associate brick 
'vmm01.virt.ord1d:/gluster_bricks/data/data' of volume 
'bc26633a-9a0b-49de-b714-97e76f222a02' with correct network as no gluster 
network found in cluster 'e98e2c16-c31e-11e9-b8d3-00163e5d860d'

2019-08-21 21:04:34,224-04 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler10) [397b6038] Could not associate brick 
'host1:/gluster_bricks/vmstore/vmstore' of volume 
'x-ca96-45cc-9e0f-649055e0e07b' with correct network as no gluster 
network found in cluster 'e98e2c16-c31e-11e9-b8d3xxx'
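When auditing engine.log for these, the brick/volume pairs can be pulled out mechanically. A small sketch, with the regex shaped after the messages quoted above (adjust it if your engine version formats them differently):

```python
import re

# Matches the engine.log warning quoted above; \s+ tolerates the
# line wrapping introduced by the mail archive.
BRICK_WARN = re.compile(
    r"Could not associate brick\s+'(?P<brick>[^']+)'\s+of volume\s+'(?P<volume>[^']+)'"
)

def unassociated_bricks(log_text):
    """Return (brick, volume) pairs flagged as lacking a gluster-role network."""
    return [(m.group("brick"), m.group("volume"))
            for m in BRICK_WARN.finditer(log_text)]

log = ("2019-08-21 21:04:34,220-04 WARN Could not associate brick "
       "'vmm01.virt.ord1d:/gluster_bricks/data/data' of volume "
       "'bc26633a-9a0b-49de-b714-97e76f222a02' with correct network as no "
       "gluster network found in cluster 'e98e2c16-...'")
print(unassociated_bricks(log))
```

Any brick that shows up here points at a cluster without a logical network assigned the gluster role.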
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABYMLUNKBVCPDBGHSDFNCKMH7LOLVA7O/


[ovirt-users] Re: Hyperconverged setup ovirt 4.3.x using ansible?

2019-08-20 Thread adrianquintero
Thanks, I am researching and looks promising...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YIKKPLNZNBORMIRXUOMFTUW62INFJBQI/


[ovirt-users] Hyperconverged setup ovirt 4.3.x using ansible?

2019-08-16 Thread adrianquintero
Hi,
I am trying to do a Hyperconverged setup using ansible.
So far I have been able to run a playbook to setup gluster but have not been 
able to identify how to install the Hosted-Engine VM  using ansible and tie all 
the pieces together.
Can someone point me in the right direction to deploy a hyperconverged  
environment using ansible?

I have successfully deployed oVirt 4.3.5 (hyperconverged) using the web UI.

thanks.
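For later readers: the hosted-engine half can also be driven by the ovirt-ansible-hosted-engine-setup role. The sketch below is illustrative only; the role name, collection layout, and variable names (he_fqdn, he_admin_password, he_appliance_password) have changed across releases, so verify them against the version you install:

```yaml
# Hypothetical minimal playbook; verify role and variable names
# against your installed ovirt-ansible-hosted-engine-setup release.
- hosts: host1.mydomain.com
  become: true
  roles:
    - ovirt.hosted_engine_setup
  vars:
    he_fqdn: ovirt-engine.mydomain.com          # engine VM FQDN (example)
    he_admin_password: "{{ vault_engine_admin_password }}"
    he_appliance_password: "{{ vault_appliance_password }}"
```

Running the gluster deployment role first and this afterwards mirrors what the cockpit wizard does in two stages.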
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5MHEOITDNKEYPUPN2AI2J4GRAK3LZSKI/


[ovirt-users] 4.3.4 caching disk error during hyperconverged deployment

2019-06-12 Thread adrianquintero
While trying to do a hyperconverged setup and use "Configure LV Cache" with
/dev/sdf, the deployment fails. If I don't use the LV cache SSD disk the setup
succeeds; thought you might want to know. For now I retested with 4.3.3 and all
worked fine, so I am reverting to 4.3.3 unless you know of a workaround?

Error:
TASK [gluster.infra/roles/backend_setup : Extend volume group] *
failed: [vmm11.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', 
u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': 
u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', 
u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': 
u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => 
{"ansible_loop_var": "item", "changed": false, "err": "  Physical volume 
\"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": 
"cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", 
"cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": 
"0.1G", "cachemode": "writethrough", "cachethinpoolname": 
"gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable 
to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}

failed: [vmm12.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', 
u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': 
u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', 
u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': 
u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => 
{"ansible_loop_var": "item", "changed": false, "err": "  Physical volume 
\"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": 
"cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", 
"cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": 
"0.1G", "cachemode": "writethrough", "cachethinpoolname": 
"gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable 
to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}

failed: [vmm10.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', 
u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': 
u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', 
u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': 
u'writethrough', u'cachemetalvsize': u'30G', u'cachelvsize': u'270G'}) => 
{"ansible_loop_var": "item", "changed": false, "err": "  Physical volume 
\"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": 
"cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "270G", 
"cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": 
"30G", "cachemode": "writethrough", "cachethinpoolname": 
"gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable 
to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}

PLAY RECAP *
vmm10.mydomain.com   : ok=13   changed=4unreachable=0failed=1   
 skipped=10   rescued=0ignored=0 
vmm11.mydomain.com   : ok=13   changed=4unreachable=0failed=1   
 skipped=10   rescued=0ignored=0 
vmm12.mydomain.com   : ok=13   changed=4unreachable=0failed=1   
 skipped=10   rescued=0ignored=0 
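As an aside, the cachelvsize/cachemetalvsize pairs in the failing items (0.9G/0.1G and 270G/30G) look like a fixed 90/10 split of the cache device. A hypothetical helper reproducing that split; the ratio is inferred from the output above, not taken from the gluster.infra source:

```python
def split_cache_sizes(total_gib):
    """Split a cache device into (data LV, metadata LV) sizes in GiB,
    using the 90/10 ratio the wizard appears to apply (inferred)."""
    data = round(total_gib * 0.9, 3)
    meta = round(total_gib * 0.1, 3)
    return data, meta

print(split_cache_sizes(300))  # -> (270.0, 30.0), matching the vmm10 item
```

The 1 GiB case yields the 0.9G/0.1G values seen on vmm11 and vmm12.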



-
#cat /etc/ansible/hc_wizard_inventory.yml
-
hc_nodes:
  hosts:
vmm10.mydomain.com:
  gluster_infra_volume_groups:
- vgname: gluster_vg_sdb
  pvname: /dev/sdb
- vgname: gluster_vg_sdc
  pvname: /dev/sdc
- vgname: gluster_vg_sdd
  pvname: /dev/sdd
- vgname: gluster_vg_sde
  pvname: /dev/sde
  gluster_infra_mount_devices:
- path: /gluster_bricks/engine
  lvname: gluster_lv_engine
  vgname: gluster_vg_sdb
- path: /gluster_bricks/vmstore1
  lvname: gluster_lv_vmstore1
  vgname: gluster_vg_sdc
- path: /gluster_bricks/data1
  lvname: gluster_lv_data1
  vgname: gluster_vg_sdd
- path: /gluster_bricks/data2
  lvname: gluster_lv_data2
  vgname: gluster_vg_sde
  gluster_infra_cache_vars:
- vgname: gluster_vg_sdb
  cachedisk: /dev/sdf
  cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
  cachethinpoolname: gluster_thinpool_gluster_vg_sdb
  cachelvsize: 270G
  cachemetalvsize: 30G
  cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
  cachemode: writethrough
  gluster_infra_thick_lvs:
- vgname: gluster_vg_sdb
  lvname: gluster_lv_engine
  size: 100G
  gluster_infra_thinpools:

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread adrianquintero
It is definitely a challenge trying to replace a bad host.

So let me tell you what I see and have done so far:

1.-I have a host that went bad due to HW issues.
2.-This bad host is still showing in the compute --> hosts section.
3.-This host was part of a hyperconverged setup with Gluster.
4.-The gluster bricks for this server show up with a "?" mark inside the 
volumes under Storage ---> Volumes ---> Myvolume ---> bricks
5.-Under Compute ---> Hosts --> mybadhost.mydomain.com the host  is in 
maintenance mode.
6.-When I try to remove that host (with "Force Remove" ticked) I keep getting:
Operation Canceled
 Error while executing action: 
mybadhost.mydomain.com
- Cannot remove Host. Server having Gluster volume.
Note: I have also confirmed "host has been rebooted"

Since the bad host was not recoverable (it was fried), I took a brand new
server with the same specs, installed oVirt 4.3.3 on it, and have it ready to
add to the cluster with the same hostname and IP, but I can't do this until I
remove the old entries in the web UI of the Hosted Engine VM.

If this is not possible, would I really need to add this new host with a
different name and IP?
What would be the correct and best procedure to fix this?

Note that my setup is a 9-node hyperconverged setup with replica 3 bricks in a
distributed replicated volume scenario.

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N4HFTCWNFTOJJ34VSBHY5NKK5ZQAEDB7/


[ovirt-users] Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-05 Thread adrianquintero
Has anybody had to replace a failed host in a 3-, 6-, or 9-node hyperconverged
setup with gluster storage?

One of my hosts is completely dead; I need to do a fresh install using the
oVirt Node ISO. Can anybody point me to the proper steps?

thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/


[ovirt-users] Error importing kvm vm to oVirt 4.3.3: https://bugzilla.redhat.com/show_bug.cgi?id=1667488

2019-06-04 Thread adrianquintero
Has a resolution for https://bugzilla.redhat.com/show_bug.cgi?id=1667488 been
provided somewhere? I can't seem to find a workaround except to create an empty
ISO domain.

Engine logs:
2019-06-04 14:17:45,364-04 INFO  
[org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] 
(default task-223) [f65d3feb-e76b-4e76-98f2-852152032a48] Lock Acquired to 
object 'EngineLock:{exclusiveLocks='[11553d4c-e084-4dfa-8ceb-6b00d62da19f=VM, 
testvm.mydomain.com=VM_NAME]', sharedLocks=''}'
2019-06-04 14:17:45,421-04 WARN  
[org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] 
(default task-223) [] Validation of action 'ImportVmFromExternalProvider' 
failed for user admin@internal-authz. Reasons: 
VAR__ACTION__IMPORT,VAR__TYPE__VM,ERROR_CANNOT_FIND_ISO_IMAGE_PATH
2019-06-04 14:17:45,422-04 INFO  
[org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] 
(default task-223) [] Lock freed to object 
'EngineLock:{exclusiveLocks='[11553d4c-e084-4dfa-8ceb-6b00d62da19f=VM, 
testvm.mydomain.com=VM_NAME]', sharedLocks=''}'


Thanks,

AQ
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EULDPNO4MGMGC4HQUSCV4DZFTM4ETXW2/


[ovirt-users] Re: Uploading disk images (oVirt 4.3.3.1)

2019-06-03 Thread adrianquintero

I get the following from the console in developer tools in chrome/firefox:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the 
remote resource at https://ovirt-engine.mydomain.com:54323/info/. (Reason: CORS 
request did not succeed).

Note that I am behind an F5, so one domain is used from outside the F5 and
another from within the F5.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C573LDTJUYEY5LKEN56443E65VZLAAWJ/


[ovirt-users] Re: Uploading disk images (oVirt 4.3.3.1)

2019-06-03 Thread adrianquintero
Please disregard the part that says it "hangs in loading"; that part works, it
was an issue with my browser.
However, as mentioned, changing the ImageProxyAddress still did not work.

thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFQVRXZAYVWCDHGPPSTYZXJY3SVGWAJ3/


[ovirt-users] Re: Uploading disk images (oVirt 4.3.3.1)

2019-06-03 Thread adrianquintero
I tried it; now my engine won't work using https://ovirt-engine.mydomain.com,
the browser just hangs on "loading".
It does load from https://ovirt-engine.mydomain.com, but I still get
"Connection to ovirt-imageio-proxy service has failed. Make sure the service is
installed, configured, and ovirt-engine certificate is registered as a valid CA
in the browser." The cert is valid but points to ovirt-engine.mydomain.com.

I did the following:
engine-config --get ImageProxyAddress
ImageProxyAddress: ovirt-engine.mydomain.com:54323 version: general
engine-config -s ImageProxyAddress=ovirt-engine.mydomain.otherstuff.com:54323
systemctl restart ovirt-engine
engine-config --get ImageProxyAddress
ImageProxyAddress: ovirt-engine.mydomain.otherstuff.com:54323 version: general



that did not seem to help so I put it back as it was.
engine-config -s  ImageProxyAddress=ovirt-engine.mydomain.com:54323
systemctl restart ovirt-engine
engine-config --get ImageProxyAddress
ImageProxyAddress: ovirt-engine.mydomain.com:54323 version: general

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QIA7KT6GHGEZCL56MXFJUMFQFRH6HWEW/


[ovirt-users] Re: Uploading disk images (oVirt 4.3.3.1)

2019-06-03 Thread adrianquintero
Thanks Arik, however I would like to understand how it really works.
As mentioned the only thing I did was to add a vps in order to hit the server
Real fqdn: ovirt-engine.mydomain.com(192.168.0.45)
Using fqdn: ovirt-engine.mydomain.otherstuff.com (192.168.10.109)

Mapping: 192.168.10.109:80, 192.168.10.109:443, 192.168.0.45:80

The cert is already in my browser: This certificate is already installed as a 
certificate authority.

engine.log:
2019-06-03 09:08:21,240-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-37) 
[6586fac6-acfe-4f72-872d-9cc2b7b9f2d6] Renewing transfer ticket for Upload disk 
'ovirt-node-ng-installer-4.3.0-2019020409.el7.iso' (disk id: 
'48312455-9003-44ee-a4bc-1e6533cc5408', image id: 
'ab86967a-1516-45e2-bffd-fb7973a7efd5')
2019-06-03 09:08:21,241-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ExtendImageTicketVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-37) 
[6586fac6-acfe-4f72-872d-9cc2b7b9f2d6] START, 
ExtendImageTicketVDSCommand(HostName = host1.mydomain.com, 
ExtendImageTicketVDSCommandParameters:{hostId='20ac91be-dec8-4083-b09a-381e0185ddbe',
 ticketId='b6232936-cd9d-42dd-b794-a16ef6eaed44', timeout='300'}), log id: 
77da6b6e
2019-06-03 09:08:21,247-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ExtendImageTicketVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-37) 
[6586fac6-acfe-4f72-872d-9cc2b7b9f2d6] FINISH, ExtendImageTicketVDSCommand, 
return: StatusOnlyReturn [status=Status [code=0, message=Done]], log id: 
77da6b6e
2019-06-03 09:08:21,247-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-37) 
[6586fac6-acfe-4f72-872d-9cc2b7b9f2d6] Transfer session with ticket id 
b6232936-cd9d-42dd-b794-a16ef6eaed44 extended, timeout 300 seconds
2019-06-03 09:08:21,253-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] 
(EE-ManagedThreadFactory-engineScheduled-Thread-37) 
[6586fac6-acfe-4f72-872d-9cc2b7b9f2d6] Updating image transfer 
9cbe652c-1249-4eb1-8fe2-3a8387bd324a (image 
48312455-9003-44ee-a4bc-1e6533cc5408) phase to Transferring (message: 'Pausing 
due to client error')
2019-06-03 09:08:21,729-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-1357) [c46ac95d-a684-4860-b49b-6ed720a2df6e] Running command: 
TransferImageStatusCommand internal: false. Entities affected :  ID: 
aaa0----123456789aaa Type: SystemAction group CREATE_DISK with 
role type USER
2019-06-03 09:08:25,730-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-1357) [2660ddba-f9b7-4843-b356-fb6e10485dfb] Running command: 
TransferImageStatusCommand internal: false. Entities affected :  ID: 
aaa0----123456789aaa Type: SystemAction group CREATE_DISK with 
role type USER
2019-06-03 09:08:29,735-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-1357) [a644727a-0b56-4164-8ff2-2697e5de3132] Running command: 
TransferImageStatusCommand internal: false. Entities affected :  ID: 
aaa0----123456789aaa Type: SystemAction group CREATE_DISK with 
role type USER
2019-06-03 09:08:30,889-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-1357) [5bb6e263-c6e8-4e2e-9923-420c20117c83] Running command: 
TransferImageStatusCommand internal: false. Entities affected :  ID: 
aaa0----123456789aaa Type: SystemAction group CREATE_DISK with 
role type USER
2019-06-03 09:08:30,890-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default 
task-1357) [5bb6e263-c6e8-4e2e-9923-420c20117c83] Updating image transfer 
9cbe652c-1249-4eb1-8fe2-3a8387bd324a (image 
48312455-9003-44ee-a4bc-1e6533cc5408) phase to Paused by System (message: 'Sent 
0MB')
2019-06-03 09:08:30,904-04 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-1357) [5bb6e263-c6e8-4e2e-9923-420c20117c83] EVENT_ID: 
UPLOAD_IMAGE_NETWORK_ERROR(1,062), Unable to upload image to disk 
48312455-9003-44ee-a4bc-1e6533cc5408 due to a network error. Ensure that 
ovirt-imageio-proxy service is installed and configured and that ovirt-engine's 
CA certificate is registered as a trusted CA in the browser. The certificate 
can be fetched from 
https://ovirt-engine.mydomain.otherstuff.com/ovirt-engine/services/pki-resource?resource=ca-certificate=X509-PEM-CA
2019-06-03 09:08:31,290-04 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-47) 
[6586fac6-acfe-4f72-872d-9cc2b7b9f2d6] Transfer was paused by system. Upload 
disk 'ovirt-node-ng-installer-4.3.0-2019020409.el7.iso' (disk id: 
'48312455-9003-44ee-a4bc-1e6533cc5408', image id: 
'ab86967a-1516-45e2-bffd-fb7973a7efd5')

[ovirt-users] Uploading disk images (oVirt 4.3.3.1)

2019-05-31 Thread adrianquintero
Hi,
I have an issue while trying to upload disk images thru the Web UI.
"Connection to ovirt-imageio-proxy service has failed. Make sure the service is 
installed, configured, and ovirt-engine certificate is registered as a valid CA 
in the browser."

My ovirt engine's fqdn is ovirt-engine.mydomain.com however due to network 
restrictions I had to set rules in order to reach our ovirt-engine
ovirt-engine.mydomain.com = 192.168.0.45

For example ovirt-engine.mydomain.otherstuff.com - 192.168.10.109:80, 
192.168.10.109:443, 192.168.0.45:80
So as you can see I need to hit the ovirt-engine using
ovirt-engine.mydomain.otherstuff.com, which I am able to do by modifying the
11-setup-sso.conf file and adding
SSO_ENGINE_URL="https://ovirt-engine.mydomain.otherstuff.com:443/ovirt-engine/"

I am able to upload disk images when using http://ovirt-engine.mydomain.com  
but not able to http://ovirt-engine.mydomain.otherstuff.com
I know it might be related to the certificates but I need to be able to upload 
disk images using both URLs.

any ideas?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QSQKTZ6YCD7XBEUI6FZGVEQY2XKGZ6Q5/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread adrianquintero
Thanks Alex, that makes more sense now. While trying to follow the
instructions provided, I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are
locked and indicating "multipath_member", hence not letting me create new
bricks. In the logs I see:

Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
"vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
failed", "rc": 5}
The same happens for sdc and sdd.

Should I manually edit the filters inside the OS? What would be the impact?

thanks again.
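For anyone hitting the same "excluded by a filter" message: it usually means multipath (or an LVM filter) has claimed the disk. A common approach is to blacklist the local disks in /etc/multipath.conf so LVM can create PVs on them. The fragment below is a sketch with placeholder wwids; use the real values reported for sdb/sdc/sdd by `multipath -ll` instead:

```
# /etc/multipath.conf (fragment) -- placeholder wwids, replace with
# the values shown by `multipath -ll` for the local disks.
# The "VDSM PRIVATE" marker keeps vdsm from overwriting this file.
# VDSM PRIVATE
blacklist {
    wwid "WWID_OF_SDB"
    wwid "WWID_OF_SDC"
    wwid "WWID_OF_SDD"
}
```

Reload multipath (e.g. `multipath -F` plus a service restart) before retrying the brick creation.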
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread adrianquintero
Found the following and answered part of my own questions; however, I think
this sets up a new set of replica 3 bricks, so if 2 hosts fail from the first 3
hosts then I lose my hyperconverged setup?

https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/scaling#task-cockpit-gluster_mgmt-expand_cluster

thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YVTQHOOLPM3Z73CJYCPRY6ACZ72KAUW/


[ovirt-users] Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread adrianquintero
Hello,
I have a 3 node Hyperconverged setup with gluster and added 3 new nodes to the 
cluster for a total of 6 servers.

I am now taking advantage of more compute power but can't scale out my storage
volumes.

Current Hyperconverged setup:
- host1.mydomain.com ---> Bricks: engine data1 vmstore1 
- host2.mydomain.com ---> Bricks: engine data1 vmstore1
- host3.mydomain.com ---> Bricks: engine data1 vmstore1 

From these 3 servers we get the following Volumes:
- engine(host1:engine, host2:engine, host3:engine)
- data1  (host1:data1, host2:data1, host3:data1)
- vmstore1  (host1:vmstore1, host2:vmstore1, host3:vmstore11)

The following are the newly added servers to the cluster, however as you can 
see there are no gluster bricks
- host4.mydomain.com
- host5.mydomain.com
- host6.mydomain.com

I know that the bricks must be added in sets of 3 and per the first 3 hosts 
that is how it was deployed thru the web UI.

Questions:
- How can I extend the gluster volumes engine, data1 and vmstore1 using host4,
host5 and host6?
- Do I need to configure gluster volumes manually through the OS CLI in order
for them to span all 6 servers?
- If I configure the storage failure scenario manually, will oVirt know about
it? Will it still be hyperconverged?


I have only seen 3-host hyperconverged setup examples with gluster, but have
not found examples for 6-, 9- or 12-host clusters with gluster.
I know it might be a lack of understanding on my end of how oVirt and gluster
integrate with one another.

If you can point me in the right direction would be great.

thanks,
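To answer the first question at the gluster level: a distributed-replicated volume grows by whole replica sets, via `gluster volume add-brick <vol> replica 3 <three new bricks>` followed by a rebalance. A small sketch that only builds the CLI call (the brick path is an example; this does not run anything):

```python
def add_brick_command(volume, hosts, brick_path, replica=3):
    """Build the `gluster volume add-brick` call that extends a
    distributed-replicated volume by one or more replica-N subvolumes."""
    if len(hosts) % replica != 0:
        raise ValueError("new bricks must come in multiples of the replica count")
    bricks = " ".join(f"{host}:{brick_path}" for host in hosts)
    return f"gluster volume add-brick {volume} replica {replica} {bricks}"

print(add_brick_command(
    "vmstore1",
    ["host4.mydomain.com", "host5.mydomain.com", "host6.mydomain.com"],
    "/gluster_bricks/vmstore1/vmstore1"))
```

After the add-brick, `gluster volume rebalance <vol> start` spreads existing data; oVirt should then pick up the new bricks, since it periodically syncs volume topology from gluster in gluster-enabled clusters.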
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UJJ2JVIXTGLCHUSGEUMHSIWPKVREPTEJ/


[ovirt-users] Re: Expand existing gluster storage in ovirt 4.2/4.3

2019-04-17 Thread adrianquintero
Hi Strahil,
I had a 3-node hyperconverged setup and added 3 new nodes to the cluster for a
total of 6 servers. I am now taking advantage of more compute power; however,
the gluster storage part is what gets me.

Current Hyperconverged setup:
- host1.mydomain.com
  Bricks:
engine
data1
vmstore1
- host2.mydomain.com
  Bricks:
engine
data1
vmstore1
- host3.mydomain.com
  Bricks:
engine
data1
vmstore1

- host4.mydomain.com
  Bricks:

- host5.mydomain.com
  Bricks:

- host6.mydomain.com
  Bricks:


As you can see from the above, the original first 3 servers are the only ones
that contain the gluster storage bricks, so storage redundancy is not set
across all 6 nodes. I think it is a lack of understanding from my end on how
oVirt and gluster integrate with one another, so I have a few questions:

How would I go about achieving storage redundancy across all nodes?
Do I need to configure gluster volumes manually through the OS CLI?
If I configure the fail storage scenario manually will oVirt know about it?

Again I know that the bricks must be added in sets of 3 and per the first 3 
nodes my gluster setup looks like this (all done by hyperconverged seup in 
ovirt):
engine volume:host1:brick1, host2:brick1, host3:brick1
data1 volume:  host1:brick2, host2:brick2, host3:brick2
vmstore1 volume:host1:brick3, host2:brick3, host3:brick3

So after adding the 3 new servers I don't know if I need to do something
similar to the example in
https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-62dd6614194e.
If I make a similar change, will oVirt know about it? Will it be able to
handle it as hyperconverged?

As I mentioned before I normally see 3 node hyperconverged setup examples with 
gluster but have not found one for 6, 9 or 12 node cluster.

Thanks again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5T7TCSP4HFB25ZUKYLZVSNKST2NIIJB/


[ovirt-users] Re: Expand existing gluster storage in ovirt 4.2/4.3

2019-04-14 Thread adrianquintero
OK, I can try it out in the next couple of weeks; however, I still need to
understand the gluster volume/brick layout and whether this scale-out falls
outside the hyperconverged requirements.



thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HQIAUKXDRFDDXQVMIUVUJ2UYIQEL2L56/


[ovirt-users] Re: Expand existing gluster storage in ovirt 4.2/4.3

2019-04-11 Thread adrianquintero
Would you know of any documentation that I could follow for that type of setup?

I have read quite a bit about oVirt and hyperconverged setups, but I have only
found 3-node setup examples; nobody seems to go past that, or at least I have
yet to find one. I already have a 3-node hyperconverged setup working as it
should, with no issues and with tested fail scenarios, but I need an
environment with at least 12 servers and to be able to maintain the correct
fail scenarios with storage (gluster). However, I can't seem to figure out the
proper steps to achieve this.

So if anyone can point me in the right direction it will help a lot, and once I 
test I should be able to provide back any knowledge that I gain with such type 
of setup.


thanks again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJIRZGP6NQ4RK4F552KC5CUU2UJKXIIT/


[ovirt-users] Re: Expand existing gluster storage in ovirt 4.2/4.3

2019-04-11 Thread adrianquintero
Thank you for the explanation. So from my understanding, we can't have a
hyperconverged setup if more than 3 nodes are required? Or is it possible if
gluster is set up separately from the oVirt node installs?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/77URQGHY7LBF4HO2ICNJMQGRVBJFT7GK/


[ovirt-users] gdeployConfig.conf errors (Hyperconverged setup using GUI)

2019-03-10 Thread adrianquintero
Hello I am trying to run a Hyperconverged setup "COnfigure gluster storage and 
ovirt hosted engine", however  I get the following error

__
 PLAY [gluster_servers] 
*

TASK [Create LVs with specified size for the VGs] **
failed: [ovirt01.grupokino.com] (item={u'lv': u'gluster_thinpool_sdb', u'size': 
u'45GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'}) => {"changed": 
false, "item": {"extent": "100%FREE", "lv": "gluster_thinpool_sdb", "size": 
"45GB", "vg": "gluster_vg_sdb"}, "msg": "lvcreate: metadata/pv_map.c:198: 
consume_pv_area: Assertion `to_go <= pva->count' failed.\n", "rc": -6}
to retry, use: --limit @/tmp/tmpwo4SNB/lvcreate.retry

PLAY RECAP *
ovirt01.grupokino.com  : ok=0changed=0unreachable=0failed=1   
__

I know that the oVirt Hosted Engine Setup GUI for the gluster wizard (gluster
deployment) does not populate the gdeployConfig.conf file properly (generated
gdeploy configuration:
/var/lib/ovirt-hosted-engine-setup/gdeploy/gdeployConfig.conf), so I have
tried to modify it to fit our needs but keep getting the above error every
time.

Any ideas or comments are welcome...Thanks!




My servers are setup with 4x50GB disks, 1 for the OS and the rest for Gluster 
Hyperconverged setup.
__
my gdeployConfig.conf file:
__
#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ovirt01.mydomain.com
ovirt02.mydomain.com
ovirt03.mydomain.com

[script1:ovirt01.mydomain.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb,sdc,sdd -h 
ovirt01.mydomain.com, ovirt02.mydomain.com, ovirt03.mydomain.com

[script1:ovirt02.mydomain.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb,sdc,sdd -h 
ovirt01.mydomain.com, ovirt02.mydomain.com, ovirt03.mydomain.com

[script1:ovirt03.mydomain.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb,sdc,sdd -h 
ovirt01.mydomain.com, ovirt02.mydomain.com, ovirt03.mydomain.com

[disktype]
jbod

[diskcount]
3

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no

[pv1:ovirt01.mydomain.com]
action=create
devices=sdb
ignore_pv_errors=no

[pv1:ovirt02.mydomain.com]
action=create
devices=sdb
ignore_pv_errors=no

[pv1:ovirt03.mydomain.com]
action=create
devices=sdb
ignore_pv_errors=no

[pv2:ovirt01.mydomain.com]
action=create
devices=sdc
ignore_pv_errors=no

[pv2:ovirt02.mydomain.com]
action=create
devices=sdc
ignore_pv_errors=no

[pv2:ovirt03.mydomain.com]
action=create
devices=sdc
ignore_pv_errors=no

[pv3:ovirt01.mydomain.com]
action=create
devices=sdd
ignore_pv_errors=no

[pv3:ovirt02.mydomain.com]
action=create
devices=sdd
ignore_pv_errors=no

[pv3:ovirt03.mydomain.com]
action=create
devices=sdd
ignore_pv_errors=no

[vg1:ovirt01.mydomain.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[vg1:ovirt02.mydomain.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[vg1:ovirt03.mydomain.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[vg2:ovirt01.mydomain.com]
action=create
vgname=gluster_vg_sdc
pvname=sdc
ignore_vg_errors=no

[vg2:ovirt02.mydomain.com]
action=create
vgname=gluster_vg_sdc
pvname=sdc
ignore_vg_errors=no

[vg2:ovirt03.mydomain.com]
action=create
vgname=gluster_vg_sdc
pvname=sdc
ignore_vg_errors=no

[vg3:ovirt01.mydomain.com]
action=create
vgname=gluster_vg_sdd
pvname=sdd
ignore_vg_errors=no

[vg3:ovirt02.mydomain.com]
action=create
vgname=gluster_vg_sdd
pvname=sdd
ignore_vg_errors=no

[vg3:ovirt03.mydomain.com]
action=create
vgname=gluster_vg_sdd
pvname=sdd
ignore_vg_errors=no

[lv1:ovirt01.mydomain.com]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=45GB
poolmetadatasize=3GB

[lv2:ovirt02.mydomain.com]
action=create
poolname=gluster_thinpool_sdc
ignore_lv_errors=no
vgname=gluster_vg_sdc
lvtype=thinpool
size=45GB
poolmetadatasize=3GB

[lv3:ovirt03.mydomain.com]
action=create
poolname=gluster_thinpool_sdd
ignore_lv_errors=no
vgname=gluster_vg_sdd
lvtype=thinpool
size=45GB
poolmetadatasize=3GB

[lv4:ovirt01.mydomain.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb

[ovirt-users] Expand existing gluster storage in ovirt 4.2/4.3

2019-02-21 Thread adrianquintero
Hello,
I have a 3-node ovirt 4.3 cluster that I've set up with gluster 
(Hyperconverged setup).
I need to increase the amount of storage and compute, so I added a 4th host 
(server4.example.com). Is it possible to expand the number of bricks 
(storage) in the "data" volume?

I did some research, and the following old post caught my eye: 
"https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-62dd6614194e"
So the question is: is taking that approach feasible, and is it even possible 
from an oVirt point of view?



---
My gluster volume:
---
Volume Name: data
Type: Replicate
Volume ID: 003ffea0-b441-43cb-a38f-ccdf6ffb77f8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: server1.example1.com:/gluster_bricks/data/data
Brick2: server2.example.com:/gluster_bricks/data/data
Brick3: server3.example.com:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off


Thanks.
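For anyone searching later: expanding a 1 x (2 + 1) volume means adding a whole 
replica set at a time (two data bricks plus one arbiter), so the brick count 
stays a multiple of 3. A hedged sketch of the commands, printed rather than 
executed here, with hostnames and brick paths as assumptions:

```shell
# Plan only: build and print the commands so they can be reviewed before running.
# In an arbiter volume, every third brick listed becomes the arbiter.
VOL=data
ADD="gluster volume add-brick $VOL \
server4.example.com:/gluster_bricks/data2/data \
server1.example1.com:/gluster_bricks/data2/data \
server2.example.com:/gluster_bricks/data2/data"
echo "gluster peer probe server4.example.com"
echo "$ADD"
echo "gluster volume rebalance $VOL start"
```

In an oVirt hyperconverged cluster the supported path is usually to add bricks 
through the engine UI (under Storage / Volumes) rather than the raw CLI, so 
treat the above as illustration only.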
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HKPAM65CSDJ7LQTZTAUQSBDOFDZM7RQS/


[ovirt-users] Re: ovirt 4.2.7.1 fails to deploy hosted engine on GlusterFS

2019-02-19 Thread adrianquintero
Apologies, just saw the answer on a previous post in this same thread
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TEXXRYTIHT3HPZH6VOFU5YJ5MM32ZR5/


[ovirt-users] Re: ovirt 4.2.7.1 fails to deploy hosted engine on GlusterFS

2019-02-19 Thread adrianquintero
Apologies, did not see a previous post
This works for me:
---
I had this issue this week as well.

When asked about the glusterfs that you self-provisioned, you stated 
"ovirt1.localdomain:/gluster_bricks/engine".

I am new to gluster, but as a client of gluster you can only refer to it via 
the volume name:
host:/<volume_name>

Hence maybe try ovirt1.localdomain:/engine.
-
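The rule above can be sketched as a tiny check (a hypothetical helper, not part 
of oVirt): a client-side gluster mount spec is host:/volname, so anything with 
a second path component is a brick path, not a volume name.

```shell
# Hypothetical validator: accept "host:/volname", reject brick paths like
# "host:/gluster_bricks/engine" that have extra path components.
is_volume_spec() {
  case "$1" in
    *:/*/*) return 1 ;;   # host:/dir/subdir -> brick path, reject
    *:/?*)  return 0 ;;   # host:/volname -> looks like a volume spec
    *)      return 1 ;;   # anything else -> not a gluster mount spec
  esac
}
is_volume_spec "ovirt1.localdomain:/gluster_bricks/engine" || echo "brick path, use host:/volname"
is_volume_spec "ovirt1.localdomain:/engine" && echo "ok"
```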

thank you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IC4V24SWBWYYYNJ5B3TTTD2AZSBXPINN/


[ovirt-users] Re: ovirt 4.2.7.1 fails to deploy hosted engine on GlusterFS

2019-02-19 Thread adrianquintero
Apologies, did not see a previous post from Julie

This works for me:
---
I had this issue this week as well.

When asked about the glusterfs that you self-provisioned, you stated 
"ovirt1.localdomain:/gluster_bricks/engine".

I am new to gluster, but as a client of gluster you can only refer to it via 
the volume name:
host:/<volume_name>

Hence maybe try ovirt1.localdomain:/engine.
-

thank you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7CFXGJSZ55DZ2JCIX7GBMTTBSOJX4UVX/


[ovirt-users] Re: ovirt 4.2.7.1 fails to deploy hosted engine on GlusterFS

2019-02-19 Thread adrianquintero
I am having the same issue from the CLI when trying to use existing gluster 
storage (server1:/gluster_bricks/engine).
Error:
[ INFO  ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed 
to fetch Gluster Volume List]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations": 
[{"msg": "The 'ovirt_storage_domains' module is being renamed 
'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is \"Operation 
Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP 
response code is 400."}
  Please specify the storage you would like to use (glusterfs, iscsi, 
fc, nfs)[nfs]: 

If you found a solution, can you please share?

thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L66MTCNHBWBOV7OVGC6U3LEBQ3EDDCPO/