[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
Most probably that VM has a very old snapshot. Sadly, 'deletion' (merging the 
snapshot into the base disk) takes extra space.
You have to find temporary storage that can be added to the volume in order 
to delete that snapshot and free enough space.
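As an illustration only (the host path and brick directory below are hypothetical placeholders, not taken from this thread; on a distribute volume the temporary brick must also be drained with remove-brick before it is dropped), attaching and later removing a temporary brick could look roughly like this:

# gluster volume add-brick data1 gs.domain.pt:/mnt/tempdisk/brick3
# gluster volume rebalance data1 start
(delete/merge the snapshot in oVirt once the volume has headroom, then drain the temporary brick)
# gluster volume remove-brick data1 gs.domain.pt:/mnt/tempdisk/brick3 start
# gluster volume remove-brick data1 gs.domain.pt:/mnt/tempdisk/brick3 status
# gluster volume remove-brick data1 gs.domain.pt:/mnt/tempdisk/brick3 commit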
Best Regards,
Strahil Nikolov
 
 
  On Mon, Jun 21, 2021 at 20:42, José Ferradeira via Users 
wrote:   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GDF3V6AAMAU43OFAGDFENVXVFTUI7ZLJ/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PMYEJYZJ4SUBK2QMOLCQYYVHAPWL7WQC/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
If you enabled sharding, don't disable it any more. My previous comment was just 
a statement, not a recommendation.
Based on the output, /home2/brick2 is part of the root filesystem.
Obviously, gluster thinks that the brick is full, and you have very few options:
- extend any brick or add more bricks to the volume
- shut down all VMs on that volume, reduce the minimum reserved space in gluster 
(gluster volume set data1 cluster.min-free-disk 0%) and then storage-migrate the 
disk to a bigger datastore
- shut down all VMs on that volume (datastore), reduce the minimum reserved 
space in gluster (gluster volume set data1 cluster.min-free-disk 0%) and then 
delete unnecessary data (like snapshots you don't need any more).
The first option is the most reliable. Once gluster has some free space to 
operate, you will be able to delete (consolidate) VM snapshots or to delete VMs 
that are less important (for example sandboxes, test VMs, etc.). Yet, paused VMs 
will resume automatically and will continue to fill Gluster, which could lead 
to the same situation - so just power them off (ugly, but most probably 
necessary).
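For the first option, the existing bricks sit on LVM (e.g. /dev/mapper/cl-home), so a minimal sketch of growing a brick in place - assuming the volume group still has free extents, which may well not be the case here, and assuming the brick filesystem is XFS - would be:

# vgs cl                                 (check free extents; VG name 'cl' is inferred from the device name)
# lvextend -L +200G /dev/mapper/cl-home  (size is an example)
# xfs_growfs /home                       (grow the filesystem backing /home/brick1)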

Best Regards,
Strahil Nikolov
 
  On Mon, Jun 14, 2021 at 20:21, supo...@logicworks.pt 
wrote:   # df -h /home/brick1 
Sist.fichs   Tama  Ocup Livre Uso% Montado em 
/dev/mapper/cl-home  1,8T  1,8T   18G 100% /home
 
# df -h /home2/brick2  
Sist.fichs   Tama  Ocup Livre Uso% Montado em 
/dev/mapper/cl-root   50G   28G   23G  56% /
 

If sharding is enabled, the restore pauses the VM with an unknown storage error.

Thanks
José

De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Segunda-feira, 14 De Junho de 2021 17:14:15
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

And what is the status of the bricks:
df -h /home/brick1 /home2/brick2
When sharding is not enabled, the qcow2 disks cannot be spread between the 
bricks.
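For reference, sharding is a per-volume option; a sketch of how it is usually set for VM stores (the 64MB shard size is the value commonly used for virt workloads - an assumption here, not something from this thread; only files written after the change are sharded, and sharding must not be switched off again once VM images exist on the volume):

# gluster volume set data1 features.shard on
# gluster volume set data1 features.shard-block-size 64MB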

Best Regards,
Strahil Nikolov
 
# gluster volume info data1   
  
Volume Name: data1 
Type: Distribute 
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gs.domain.pt:/home/brick1 
Brick2: gs.domain.pt:/home2/brick2 
Options Reconfigured: 
nfs.disable: on 
transport.address-family: inet 
storage.fips-mode-rchecksum: on 
storage.owner-uid: 36 
storage.owner-gid: 36 
cluster.min-free-disk: 10% 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-wait-qlength: 1 
features.shard: off 
user.cifs: off 
cluster.choose-local: off 
client.event-threads: 4 
server.event-threads: 4 
performance.client-io-threads: on
 

# gluster volume status data1 
Status of volume: data1 
Gluster process TCP Port  RDMA Port  Online  Pid 
-- 
Brick gs.domain.pt:/home/brick1    49153 0  Y   1824862 
Brick gs.domain.pt:/home2/brick2   49154 0  Y   1824880 
  
Task Status of Volume data1 
-- 
There are no active volume tasks
 
# gluster volume heal data1 info summary 
This command is supported for only volumes of replicate/disperse type. Volume 
data1 is not of type replicate/disperse 
Volume heal failed.
 

# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/ 
Sist.fichs   Tama  Ocup Livre Uso% Montado em 
gs.domain.pt:/data1  1,9T  1,8T   22G  99% 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1
 
thanks
José

De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Segunda-feira, 14 De Junho de 2021 14:54:41
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

Can you provide the output of:
gluster volume info VOLUME
gluster volume status VOLUME
gluster volume heal VOLUME info summary
df -h /rhev/data-center/mnt/glusterSD/:_

In pure replica volumes, the bricks should be of the same size. If not, the 
smallest one defines the size of the volume. If the VM has thin qcow2 disks, it 
will grow slowly until it reaches its maximum size or until the volume space is 
exhausted.
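A quick, hedged way to compare a thin qcow2 disk's virtual size with what it actually occupies on the gluster mount (the UUID path components are placeholders, not values from this thread):

# cd /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/<storage-domain-uuid>/images/<disk-uuid>
# ls -lsh
# qemu-img info --backing-chain <top-volume-uuid>    (shows virtual size vs. disk size for each volume in the chain)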
Best Regards,
Strahil Nikolov


Well, I have one brick without space, 1.8TB. In fact I don't know why, because I 
only have one VM on that storage domain with less than 1TB.
When I try to start the VM I get this error:

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b

[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
# gluster volume info data1 

Volume Name: data1 
Type: Distribute 
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gs.domain.pt:/home/brick1 
Brick2: gs.domain.pt:/home2/brick2 
Options Reconfigured: 
nfs.disable: on 
transport.address-family: inet 
storage.fips-mode-rchecksum: on 
storage.owner-uid: 36 
storage.owner-gid: 36 
cluster.min-free-disk: 10% 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-wait-qlength: 1 
features.shard: off 
user.cifs: off 
cluster.choose-local: off 
client.event-threads: 4 
server.event-threads: 4 
performance.client-io-threads: on 


# gluster volume status data1 
Status of volume: data1 
Gluster process TCP Port RDMA Port Online Pid 
-- 
Brick gs.domain.pt:/home/brick1 49153 0 Y 1824862 
Brick gs.domain.pt:/home2/brick2 49154 0 Y 1824880 

Task Status of Volume data1 
-- 
There are no active volume tasks 

# gluster volume heal data1 info summary 
This command is supported for only volumes of replicate/disperse type. Volume 
data1 is not of type replicate/disperse 
Volume heal failed. 


# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/ 
Sist.fichs Tama Ocup Livre Uso% Montado em 
gs.domain.pt:/data1 1,9T 1,8T 22G 99% 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1 

thanks 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 14:54:41 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Can you provide the output of: 
gluster volume info VOLUME 
gluster volume status VOLUME 
gluster volume heal VOLUME info summary 
df -h /rhev/data-center/mnt/glusterSD/:_ 


In pure replica volumes, the bricks should be of the same size. If not, the 
smallest one defines the size of the volume. 
If the VM has thin qcow2 disks, it will grow slowly until it reaches its maximum 
size or until the volume space is exhausted. 

Best Regards, 
Strahil Nikolov 




Well, I have one brick without space, 1.8TB. 
In fact I don't know why, because I only have one VM on that storage domain 
with less than 1TB. 
When I try to start the VM I get this error: 

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device. 

I'm stuck in here 

Thanks 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, "José Ferradeira via Users"  
Cc: "Alex McWhirter" , "José Ferradeira via Users" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 7:21:09 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

So, 

how is it going ? 
Do you have space ? 

Best Regards, 
Strahil Nikolov 



On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov 
 wrote: 
You need to use thick VM disks on Gluster, which is the default behavior for a 
long time. 
Also, check all bricks' free space. Most probably you are out of space on one 
of the bricks (term for server + mountpoint combination). 

Best Regards, 
Strahil Nikolov 



On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
 wrote: 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/
 









___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C3BQ36NDBBT3LDEG4MKCCKP5SEVYGFGH/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
# df -h /home/brick1 
Sist.fichs Tama Ocup Livre Uso% Montado em 
/dev/mapper/cl-home 1,8T 1,8T 18G 100% /home 

# df -h /home2/brick2 
Sist.fichs Tama Ocup Livre Uso% Montado em 
/dev/mapper/cl-root 50G 28G 23G 56% / 


If sharding is enabled, the restore pauses the VM with an unknown storage error. 

Thanks 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 17:14:15 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

And what is the status of the bricks: 

df -h /home/brick1 /home2/brick2 

When sharding is not enabled, the qcow2 disks cannot be spread between the 
bricks. 

Best Regards, 
Strahil Nikolov 





# gluster volume info data1 

Volume Name: data1 
Type: Distribute 
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gs.domain.pt:/home/brick1 
Brick2: gs.domain.pt:/home2/brick2 
Options Reconfigured: 
nfs.disable: on 
transport.address-family: inet 
storage.fips-mode-rchecksum: on 
storage.owner-uid: 36 
storage.owner-gid: 36 
cluster.min-free-disk: 10% 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-wait-qlength: 1 
features.shard: off 
user.cifs: off 
cluster.choose-local: off 
client.event-threads: 4 
server.event-threads: 4 
performance.client-io-threads: on 


# gluster volume status data1 
Status of volume: data1 
Gluster process TCP Port RDMA Port Online Pid 
-- 
Brick gs.domain.pt:/home/brick1 49153 0 Y 1824862 
Brick gs.domain.pt:/home2/brick2 49154 0 Y 1824880 

Task Status of Volume data1 
-- 
There are no active volume tasks 

# gluster volume heal data1 info summary 
This command is supported for only volumes of replicate/disperse type. Volume 
data1 is not of type replicate/disperse 
Volume heal failed. 


# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/ 
Sist.fichs Tama Ocup Livre Uso% Montado em 
gs.domain.pt:/data1 1,9T 1,8T 22G 99% 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1 

thanks 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 14:54:41 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Can you provide the output of: 
gluster volume info VOLUME 
gluster volume status VOLUME 
gluster volume heal VOLUME info summary 
df -h /rhev/data-center/mnt/glusterSD/:_ 


In pure replica volumes, the bricks should be of the same size. If not, the 
smallest one defines the size of the volume. 
If the VM has thin qcow2 disks, it will grow slowly until it reaches its maximum 
size or until the volume space is exhausted. 

Best Regards, 
Strahil Nikolov 



Well, I have one brick without space, 1.8TB. 
In fact I don't know why, because I only have one VM on that storage domain 
with less than 1TB. 
When I try to start the VM I get this error: 

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device. 

I'm stuck in here 

Thanks 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, "José Ferradeira via Users"  
Cc: "Alex McWhirter" , "José Ferradeira via Users" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 7:21:09 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

So, 

how is it going ? 
Do you have space ? 

Best Regards, 
Strahil Nikolov 



On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov 
 wrote: 
You need to use thick VM disks on Gluster, which is the default behavior for a 
long time. 
Also, check all bricks' free space. Most probably you are out of space on one 
of the bricks (term for server + mountpoint combination). 

Best Regards, 
Strahil Nikolov 



On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
 wrote: 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/
 








[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
Can you provide the output of:
gluster volume info VOLUME
gluster volume status VOLUME
gluster volume heal VOLUME info summary
df -h /rhev/data-center/mnt/glusterSD/:_

In pure replica volumes, the bricks should be of the same size. If not, the 
smallest one defines the size of the volume. If the VM has thin qcow2 disks, it 
will grow slowly until it reaches its maximum size or until the volume space is 
exhausted.
Best Regards,
Strahil Nikolov
 
Well, I have one brick without space, 1.8TB. In fact I don't know why, 
because I only have one VM on that storage domain with less than 1TB.
When I try to start the VM I get this error:

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device.
I'm stuck in here
Thanks

José
De: "Strahil Nikolov" 
Para: supo...@logicworks.pt, "José Ferradeira via Users" 
Cc: "Alex McWhirter" , "José Ferradeira via Users" 

Enviadas: Segunda-feira, 14 De Junho de 2021 7:21:09
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

So,
how is it going? Do you have space?
Best Regards,
Strahil Nikolov

On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov wrote:
You need to use thick VM disks on Gluster, which has been the default behavior for a long 
time. Also, check all bricks' free space. Most probably you are out of space on 
one of the bricks (term for the server + mountpoint combination).
Best Regards,
Strahil Nikolov

On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
wrote:___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/



  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FXONFEZUGEYRMBPUTJ6GNTTGA4XTXLZL/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
So,
how is it going? Do you have space?
Best Regards,
Strahil Nikolov
 
  On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov wrote:
You need to use thick VM disks on Gluster, which has been the default behavior for a 
long time. Also, check all bricks' free space. Most probably you are out of 
space on one of the bricks (term for the server + mountpoint combination).
Best Regards,
Strahil Nikolov
 
  On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
wrote:   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/
  
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NITZLI7T4P7UFJCYEKOBXDQ5JELSFBU6/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
Do you mean, in Manage Domain - Critical Space Action Blocker (GB) - change 5 
to 1? 
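(These are two different thresholds: Critical Space Action Blocker (GB) is the oVirt storage-domain setting that blocks actions such as snapshot removal, while the option referred to above is the Gluster volume setting. Assuming the volume name data1 from earlier in the thread, the Gluster side can be checked and lowered like this:)

# gluster volume get data1 cluster.min-free-disk
# gluster volume set data1 cluster.min-free-disk 1%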


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Quarta-feira, 16 De Junho de 2021 4:50:42 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Did you reduce the minimum free space option in gluster prior to removing the 
snapshot (and its failure)? 

Best Regards, 
Strahil Nikolov 




On Wed, Jun 16, 2021 at 0:35, supo...@logicworks.pt 
 wrote: 
Yes, there is one snapshot but I cannot remove it: 
Error while executing action: Cannot remove Disk Snapshot. Low disk space on 
Storage Domain DATA1. 

Regards 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Terça-feira, 15 De Junho de 2021 18:45:49 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Did you check for snapshots ? 

You can check the contents of the /rhev... mount point. 

Best Regards, 
Strahil Nikolov 






В вторник, 15 юни 2021 г., 18:49:41 ч. Гринуич+3,  
написа: 





I just freed all the space I could. That's the only VM in that storage domain. 



 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Terça-feira, 15 De Junho de 2021 16:04:56 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

You will need to free some space. 
Check my previous e-mail. 

Best Regards, 
Strahil Nikolov 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BH3AH3HFPZLN2GUXULBS3B4VCSPJUW6I/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
I just changed cluster.min-free-disk to 5%, 
but I still get the message: Error while executing action: Cannot remove Disk 
Snapshot. Low disk space on Storage Domain DATA1. 

# gluster volume get data1 all|grep cluster.min-free-disk 
cluster.min-free-disk 5% 
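The "Low disk space on Storage Domain" check is enforced by the oVirt engine, not by Gluster, so lowering cluster.min-free-disk alone does not clear it. A hedged sketch of checking the engine-wide default behind the per-domain Critical Space Action Blocker - assuming the engine-config key FreeSpaceCriticalLowInGB; a value set per domain in Manage Domain takes precedence:

# engine-config -g FreeSpaceCriticalLowInGB
# engine-config -s FreeSpaceCriticalLowInGB=1    (assumed key name; restart the engine afterwards)
# systemctl restart ovirt-engine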


De: supo...@logicworks.pt 
Para: "Strahil Nikolov"  
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Quarta-feira, 16 De Junho de 2021 16:31:46 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Do you mean, in Manage Domain - Critical Space Action Blocker (GB) - change 5 
to 1? 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Quarta-feira, 16 De Junho de 2021 4:50:42 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Did you reduce the minimum free space option in gluster prior to removing the 
snapshot (and its failure)? 

Best Regards, 
Strahil Nikolov 




On Wed, Jun 16, 2021 at 0:35, supo...@logicworks.pt 
 wrote: 
Yes, there is one snapshot but I cannot remove it: 
Error while executing action: Cannot remove Disk Snapshot. Low disk space on 
Storage Domain DATA1. 

Regards 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Terça-feira, 15 De Junho de 2021 18:45:49 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Did you check for snapshots ? 

You can check the contents of the /rhev... mount point. 

Best Regards, 
Strahil Nikolov 






В вторник, 15 юни 2021 г., 18:49:41 ч. Гринуич+3,  
написа: 





I just freed all the space I could. That's the only VM in that storage domain. 



 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Terça-feira, 15 De Junho de 2021 16:04:56 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

You will need to free some space. 
Check my previous e-mail. 

Best Regards, 
Strahil Nikolov 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/65PDKRG7QZ7YBZGD3JBDTMKCGOESZOV3/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
You need to use thick VM disks on Gluster, which has been the default behavior for a 
long time. Also, check all bricks' free space. Most probably you are out of 
space on one of the bricks (term for the server + mountpoint combination).
Best Regards,
Strahil Nikolov
 
  On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
wrote:   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KJE33UNL5DTLZ6KLRXDDKSWM63M67ONL/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
# df -h /home/brick1 
Sist.fichs Tama Ocup Livre Uso% Montado em 
/dev/mapper/cl-home 1,8T 1,8T 18G 100% /home 

# df -h /home2/brick2 
Sist.fichs Tama Ocup Livre Uso% Montado em 
/dev/mapper/cl-root 50G 28G 23G 56% / 


If sharding is enabled, the restore pauses the VM with an unknown storage error. 

Thanks 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 17:14:15 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

And what is the status of the bricks: 

df -h /home/brick1 /home2/brick2 

When sharding is not enabled, the qcow2 disks cannot be spread between the 
bricks. 

Best Regards, 
Strahil Nikolov 





# gluster volume info data1 

Volume Name: data1 
Type: Distribute 
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gs.domain.pt:/home/brick1 
Brick2: gs.domain.pt:/home2/brick2 
Options Reconfigured: 
nfs.disable: on 
transport.address-family: inet 
storage.fips-mode-rchecksum: on 
storage.owner-uid: 36 
storage.owner-gid: 36 
cluster.min-free-disk: 10% 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-wait-qlength: 1 
features.shard: off 
user.cifs: off 
cluster.choose-local: off 
client.event-threads: 4 
server.event-threads: 4 
performance.client-io-threads: on 


# gluster volume status data1 
Status of volume: data1 
Gluster process TCP Port RDMA Port Online Pid 
-- 
Brick gs.domain.pt:/home/brick1 49153 0 Y 1824862 
Brick gs.domain.pt:/home2/brick2 49154 0 Y 1824880 

Task Status of Volume data1 
-- 
There are no active volume tasks 

# gluster volume heal data1 info summary 
This command is supported for only volumes of replicate/disperse type. Volume 
data1 is not of type replicate/disperse 
Volume heal failed. 


# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/ 
Sist.fichs Tama Ocup Livre Uso% Montado em 
gs.domain.pt:/data1 1,9T 1,8T 22G 99% 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1 

thanks 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 14:54:41 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Can you provide the output of: 
gluster volume info VOLUME 
gluster volume status VOLUME 
gluster volume heal VOLUME info summary 
df -h /rhev/data-center/mnt/glusterSD/:_ 


In pure replica volumes, the bricks should be of the same size. If not, the 
smallest one defines the size of the volume. 
If the VM has thin qcow2 disks, it will grow slowly until it reaches its maximum 
size or until the volume space is exhausted. 

Best Regards, 
Strahil Nikolov 



Well, I have one brick without space, 1.8TB. 
In fact I don't know why, because I only have one VM on that storage domain 
with less than 1TB. 
When I try to start the VM I get this error: 

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device. 

I'm stuck in here 

Thanks 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, "José Ferradeira via Users"  
Cc: "Alex McWhirter" , "José Ferradeira via Users" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 7:21:09 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

So, 

how is it going ? 
Do you have space ? 

Best Regards, 
Strahil Nikolov 



On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov 
 wrote: 
You need to use thick VM disks on Gluster, which is the default behavior for a 
long time. 
Also, check all bricks' free space. Most probably you are out of space on one 
of the bricks (term for server + mountpoint combination). 

Best Regards, 
Strahil Nikolov 



On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
 wrote: 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/
 








[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
Your last e-mail was empty.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YPHAFLN4B6QQFNKQKXQCEG2HJPI7N6RA/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
Yes, there is one snapshot but I cannot remove it: 
Error while executing action: Cannot remove Disk Snapshot. Low disk space on 
Storage Domain DATA1. 

Regards 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Terça-feira, 15 De Junho de 2021 18:45:49 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Did you check for snapshots ? 

You can check the contents of the /rhev... mount point. 

Best Regards, 
Strahil Nikolov 






В вторник, 15 юни 2021 г., 18:49:41 ч. Гринуич+3,  
написа: 





I just freed all the space I could. That's the only VM in that storage domain. 



 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Terça-feira, 15 De Junho de 2021 16:04:56 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

You will need to free some space. 
Check my previous e-mail. 

Best Regards, 
Strahil Nikolov 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TISW3B7SHQXOJDJN7S5RIT4SC27KDHFX/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
Did you reduce the minimum free space option in gluster prior to removing the 
snapshot (and its failure)?
Best Regards,
Strahil Nikolov
 
 
  On Wed, Jun 16, 2021 at 0:35, supo...@logicworks.pt 
wrote:   Yes, there is one snapshot but I cannot remove it:
Error while executing action: Cannot remove Disk Snapshot. Low disk space on 
Storage Domain DATA1.
Regards
José

De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Terça-feira, 15 De Junho de 2021 18:45:49
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

Did you check for snapshots ?

You can check the contents of the /rhev... mount point.

Best Regards,
Strahil Nikolov






В вторник, 15 юни 2021 г., 18:49:41 ч. Гринуич+3,  
написа: 





I just freed all the space I could. That's the only VM in that storage domain.




De: "Strahil Nikolov" 
Para: supo...@logicworks.pt, supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Terça-feira, 15 De Junho de 2021 16:04:56
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

You will need to free some space.
Check my previous e-mail.

Best Regards,
Strahil Nikolov
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G5BAVGPI3W562S6SIKDJAE2HRKCW3L7A/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
And what is the status of the bricks:
df -h /home/brick1 /home2/brick2
When sharding is not enabled, the qcow2 disks cannot be spread between the 
bricks.

Best Regards,
Strahil Nikolov
 
# gluster volume info data1   
  
Volume Name: data1 
Type: Distribute 
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gs.domain.pt:/home/brick1 
Brick2: gs.domain.pt:/home2/brick2 
Options Reconfigured: 
nfs.disable: on 
transport.address-family: inet 
storage.fips-mode-rchecksum: on 
storage.owner-uid: 36 
storage.owner-gid: 36 
cluster.min-free-disk: 10% 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-wait-qlength: 1 
features.shard: off 
user.cifs: off 
cluster.choose-local: off 
client.event-threads: 4 
server.event-threads: 4 
performance.client-io-threads: on
 

# gluster volume status data1 
Status of volume: data1 
Gluster process TCP Port  RDMA Port  Online  Pid 
-- 
Brick gs.domain.pt:/home/brick1    49153 0  Y   1824862 
Brick gs.domain.pt:/home2/brick2   49154 0  Y   1824880 
  
Task Status of Volume data1 
-- 
There are no active volume tasks
 
# gluster volume heal data1 info summary 
This command is supported for only volumes of replicate/disperse type. Volume 
data1 is not of type replicate/disperse 
Volume heal failed.
 

# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/ 
Sist.fichs   Tama  Ocup Livre Uso% Montado em 
gs.domain.pt:/data1  1,9T  1,8T   22G  99% 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1
 
thanks
José

De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Segunda-feira, 14 De Junho de 2021 14:54:41
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

Can you provide the output of:
gluster volume info VOLUME
gluster volume status VOLUME
gluster volume heal VOLUME info summary
df -h /rhev/data-center/mnt/glusterSD/:_

In pure replica volumes, the bricks should be of the same size. If not, the 
smallest one defines the size of the volume. If the VM has thin qcow2 disks, it 
will grow slowly until it reaches its maximum size or until the volume space is 
exhausted.
Best Regards,
Strahil Nikolov


Well, I have one brick without space, 1.8TB. In fact I don't know why, because I 
only have one VM on that storage domain with less than 1TB.
When I try to start the VM I get this error:

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device.
I'm stuck in here
Thanks

José
De: "Strahil Nikolov" 
Para: supo...@logicworks.pt, "José Ferradeira via Users" 
Cc: "Alex McWhirter" , "José Ferradeira via Users" 

Enviadas: Segunda-feira, 14 De Junho de 2021 7:21:09
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

So,
how is it going? Do you have space?
Best Regards,
Strahil Nikolov

On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov wrote:
You need to use thick VM disks on Gluster, which has been the default behavior for a long 
time. Also, check all bricks' free space. Most probably you are out of space on 
one of the bricks (term for the server + mountpoint combination).
Best Regards,
Strahil Nikolov

On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
wrote:___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/





  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FN7IYHQBYPK3GKB3PCHT6HUSY5RH7AHC/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
I just freed all the space I could. That's the only VM in that storage domain. 



De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Terça-feira, 15 De Junho de 2021 16:04:56 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

You will need to free some space. 
Check my previous e-mail. 

Best Regards, 
Strahil Nikolov 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GDF3V6AAMAU43OFAGDFENVXVFTUI7ZLJ/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
Well, I have one brick without space, 1.8TB. 
In fact I don't know why, because I only have one VM on that storage domain 
with less than 1TB. 
When I try to start the VM I get this error: 

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device. 

I'm stuck in here 

Thanks 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, "José Ferradeira via Users"  
Cc: "Alex McWhirter" , "José Ferradeira via Users" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 7:21:09 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

So, 

how is it going ? 
Do you have space ? 

Best Regards, 
Strahil Nikolov 




On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov 
 wrote: 
You need to use thick VM disks on Gluster, which is the default behavior for a 
long time. 
Also, check all bricks' free space. Most probably you are out of space on one 
of the bricks (term for server + mountpoint combination). 

Best Regards, 
Strahil Nikolov 



On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
 wrote: 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/
 






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4SXTSMNTD7DF5MAIFSEZP5LOLTV3C24B/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread José Ferradeira via Users
# df -h /home/brick1 
Sist.fichs Tama Ocup Livre Uso% Montado em 
/dev/mapper/cl-home 1,8T 1,8T 18G 100% /home 

# df -h /home2/brick2 
Sist.fichs Tama Ocup Livre Uso% Montado em 
/dev/mapper/cl-root 50G 28G 23G 56% / 


If sharding is enabled, the restore pauses the VM with an unknown storage error. 

Thanks 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 17:14:15 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

And what is the status of the bricks: 

df -h /home/brick1 /home2/brick2 

When sharding is not enabled, the qcow2 disks cannot be spread between the 
bricks. 

Best Regards, 
Strahil Nikolov 




# gluster volume info data1 

Volume Name: data1 
Type: Distribute 
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gs.domain.pt:/home/brick1 
Brick2: gs.domain.pt:/home2/brick2 
Options Reconfigured: 
nfs.disable: on 
transport.address-family: inet 
storage.fips-mode-rchecksum: on 
storage.owner-uid: 36 
storage.owner-gid: 36 
cluster.min-free-disk: 10% 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-wait-qlength: 1 
features.shard: off 
user.cifs: off 
cluster.choose-local: off 
client.event-threads: 4 
server.event-threads: 4 
performance.client-io-threads: on 


# gluster volume status data1 
Status of volume: data1 
Gluster process TCP Port RDMA Port Online Pid 
-- 
Brick gs.domain.pt:/home/brick1 49153 0 Y 1824862 
Brick gs.domain.pt:/home2/brick2 49154 0 Y 1824880 

Task Status of Volume data1 
-- 
There are no active volume tasks 

# gluster volume heal data1 info summary 
This command is supported for only volumes of replicate/disperse type. Volume 
data1 is not of type replicate/disperse 
Volume heal failed. 


# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/ 
Sist.fichs Tama Ocup Livre Uso% Montado em 
gs.domain.pt:/data1 1,9T 1,8T 22G 99% 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1 

thanks 
José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 14:54:41 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

Can you provide the output of: 
gluster volume info VOLUME 
gluster volume status VOLUME 
gluster volume heal VOLUME info summary 
df -h /rhev/data-center/mnt/glusterSD/:_ 


In pure replica volumes, the bricks should be of the same size. If not, the 
smallest one defines the size of the volume. 
If the VM has thin qcow2 disks, it will grow slowly until it reaches its maximum 
size or until the volume space is exhausted. 

Best Regards, 
Strahil Nikolov 



Well, I have one brick without space, 1.8TB. 
In fact I don't know why, because I only have one VM on that storage domain 
with less than 1TB. 
When I try to start the VM I get this error: 

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device. 

I'm stuck in here 

Thanks 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, "José Ferradeira via Users"  
Cc: "Alex McWhirter" , "José Ferradeira via Users" 
 
Enviadas: Segunda-feira, 14 De Junho de 2021 7:21:09 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 

So, 

how is it going ? 
Do you have space ? 

Best Regards, 
Strahil Nikolov 



On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov 
 wrote: 
You need to use thick VM disks on Gluster, which is the default behavior for a 
long time. 
Also, check all bricks' free space. Most probably you are out of space on one 
of the bricks (term for server + mountpoint combination). 

Best Regards, 
Strahil Nikolov 



On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users 
 wrote: 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/
 








[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
Exactly that one. Sadly, I have no clue how big your snapshot(s) are. Maybe even 
with those extra gigs, it's still too much to merge.
If you have any other VMs there, try to move their disks away in order to 
release more space.
Otherwise, you will have to find another brick to extend the volume.
How many snapshots do you see in oVirt? Maybe you have more than one snapshot.
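A rough way to see which images (and their snapshot chains) are consuming the space on the gluster mount (the UUID path components are placeholders, not values from this thread):

# du -sh /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/<storage-domain-uuid>/images/*
# ls -lsh /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/<storage-domain-uuid>/images/<disk-uuid>/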
Best Regards,
Strahil Nikolov
 
 
  On Wed, Jun 16, 2021 at 21:22, supo...@logicworks.pt 
wrote:   I just changed cluster.min-free-disk to 5%,
but I still get the message: Error while executing action: Cannot remove Disk 
Snapshot. Low disk space on Storage Domain DATA1.

# gluster volume get data1 all|grep cluster.min-free-disk
cluster.min-free-disk 5% 
De: supo...@logicworks.pt
Para: "Strahil Nikolov" 
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Quarta-feira, 16 De Junho de 2021 16:31:46
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

Do you mean, in Manage Domain - Critical Space Action Blocker (GB) - change 5 
to 1?

De: "Strahil Nikolov" 
Para: supo...@logicworks.pt, supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Quarta-feira, 16 De Junho de 2021 4:50:42
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

Did you reduce the minimum free space option in gluster prior to removing the 
snapshot (and its failure)?
Best Regards,
Strahil Nikolov
 

On Wed, Jun 16, 2021 at 0:35, supo...@logicworks.pt 
wrote:Yes, there is one snapshot but I cannot remove it:
Error while executing action: Cannot remove Disk Snapshot. Low disk space on 
Storage Domain DATA1.
Regards
José

De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Terça-feira, 15 De Junho de 2021 18:45:49
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

Did you check for snapshots ?

You can check the contents of the /rhev... mount point.

Best Regards,
Strahil Nikolov






В вторник, 15 юни 2021 г., 18:49:41 ч. Гринуич+3,  
написа: 





I just freed all the space I could. That's the only VM in that storage domain.




De: "Strahil Nikolov" 
Para: supo...@logicworks.pt, supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Terça-feira, 15 De Junho de 2021 16:04:56
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

You will need to free some space.
Check my previous e-mail.

Best Regards,
Strahil Nikolov


  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BULE25KWY7GHT4X3TD4M34HRTN7KJNTG/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
Did you check for snapshots ?

You can check the contents of the /rhev... mount point.

Best Regards,
Strahil Nikolov






В вторник, 15 юни 2021 г., 18:49:41 ч. Гринуич+3,  
написа: 





I just freed all the space I could. That's the only VM in that storage domain.




De: "Strahil Nikolov" 
Para: supo...@logicworks.pt, supo...@logicworks.pt
Cc: "José Ferradeira via Users" , "Alex McWhirter" 

Enviadas: Terça-feira, 15 De Junho de 2021 16:04:56
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

You will need to free some space.
Check my previous e-mail.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/77DSAYPBTW2ASMDZL7SR475NV3MLDQA3/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-21 Thread Strahil Nikolov via Users
You will need to free some space. Check my previous e-mail.
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZV5N23FVRXG6IWV4XCAVR7MDJEU46LV/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-09 Thread José Ferradeira via Users
The gluster storage domain is now a mess; I cannot run a VM, I always get this 
error: 
VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs1.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device. 

I had a second brick and have 1.5TB free; I don't know why it says "No space 
left on device". 

Also, I cannot use the iso files I have in this storage domain. 

Regards 
José 



De: "José Ferradeira via Users"  
Para: "Alex McWhirter"  
Cc: "Strahil Nikolov" , "José Ferradeira via Users" 
 
Enviadas: Terça-feira, 8 De Junho de 2021 22:47:25 
Assunto: [ovirt-users] Re: oVirt + Gluster issues 

This is a glusterfs on top of CentOS 8.3, using LVM 


De: "Alex McWhirter"  
Para: "Strahil Nikolov"  
Cc: supo...@logicworks.pt, "José Ferradeira via Users"  
Enviadas: Terça-feira, 8 De Junho de 2021 17:48:48 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 



I've run into a similar problem when using VDO + LVM + XFS stacks, also with 
ZFS. 

If you're trying to use ZFS on 4.4, my recommendation is: don't. You have to run 
the testing branch at minimum, and quite a few things just don't work. 




As for VDO, I ran into this issue when using VDO and an NVMe for LVM caching of 
the thin pool: VDO would throw a fit, and under high-load scenarios VMs would 
regularly pause. 

VDO with no cache was fine, however; it seems to be related to mixing device types 
/ block sizes (even if you override block sizes). 




Not sure if that helps. 

On 2021-06-08 12:26, Strahil Nikolov via Users wrote: 


Maybe the shard xlator cannot cope with the shard creation rate. 
Are you using preallocated disks on the Zimbra VM? 
Best Regards, 
Strahil Nikolov 



On Tue, Jun 8, 2021 at 17:57, José Ferradeira via Users 
 wrote: 
Hello, 
running ovirt 4.4.4.7-1.el8 and gluster 8.3. 
When I perform a restore of Zimbra Collaboration Email with features.shard on, 
the VM pauses with an unknown storage error. 
When I perform a restore of Zimbra Collaboration Email with features.shard 
off, it fills all the gluster storage domain disks. 
With older versions of gluster and ovirt the same happens. If I use an NFS 
storage domain it runs OK. 
-- 

Jose Ferradeira 
http://www.logicworks.pt 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGBFSIHHTDOTTOWFWQKZFQMD56YWHTPZ/
 




___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MOJJZJG7LCGHDIYULF5572L52JE53T6D/
 






___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QDEHBKD4OVRK3MC6ZFGFBNPVPYMK2VXI/
 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SC6RR2YC35OOOUEDEVSVCQ7RMW56DCSJ/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-08 Thread José Ferradeira via Users
This is a glusterfs on top of CentOS 8.3, using LVM 


De: "Alex McWhirter"  
Para: "Strahil Nikolov"  
Cc: supo...@logicworks.pt, "José Ferradeira via Users"  
Enviadas: Terça-feira, 8 De Junho de 2021 17:48:48 
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues 



I've run into a similar problem when using VDO + LVM + XFS stacks, also with 
ZFS. 

If you're trying to use ZFS on 4.4, my recommendation is: don't. You have to run 
the testing branch at minimum, and quite a few things just don't work. 




As for VDO, I ran into this issue when using VDO and an NVMe for LVM caching of 
the thin pool: VDO would throw a fit, and under high-load scenarios VMs would 
regularly pause. 

VDO with no cache was fine, however; it seems to be related to mixing device types 
/ block sizes (even if you override block sizes). 




Not sure if that helps. 

On 2021-06-08 12:26, Strahil Nikolov via Users wrote: 


Maybe the shard xlator cannot cope with the shard creation rate. 
Are you using preallocated disks on the Zimbra VM? 
Best Regards, 
Strahil Nikolov 



On Tue, Jun 8, 2021 at 17:57, José Ferradeira via Users 
 wrote: 
Hello, 
running ovirt 4.4.4.7-1.el8 and gluster 8.3. 
When I perform a restore of Zimbra Collaboration Email with features.shard on, 
the VM pauses with an unknown storage error. 
When I perform a restore of Zimbra Collaboration Email with features.shard 
off, it fills all the gluster storage domain disks. 
With older versions of gluster and ovirt the same happens. If I use an NFS 
storage domain it runs OK. 
-- 

Jose Ferradeira 
http://www.logicworks.pt 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGBFSIHHTDOTTOWFWQKZFQMD56YWHTPZ/
 




___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MOJJZJG7LCGHDIYULF5572L52JE53T6D/
 





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QDEHBKD4OVRK3MC6ZFGFBNPVPYMK2VXI/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-08 Thread José Ferradeira via Users
The disks on the VM are Thin Provision 

Regards 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt, "José Ferradeira via Users" , 
"oVirt Users"  
Enviadas: Terça-feira, 8 De Junho de 2021 17:26:42 
Assunto: Re: [ovirt-users] oVirt + Gluster issues 

Maybe the shard xlator cannot cope with the shard creation rate. 

Are you using preallocated disks on the Zimbra VM? 

Best Regards, 
Strahil Nikolov 




On Tue, Jun 8, 2021 at 17:57, José Ferradeira via Users 
 wrote: 
Hello, 

running ovirt 4.4.4.7-1.el8 and gluster 8.3. 
When I perform a restore of Zimbra Collaboration Email with features.shard on, 
the VM pauses with an unknown storage error. 
When I perform a restore of Zimbra Collaboration Email with features.shard 
off, it fills all the gluster storage domain disks. 

With older versions of gluster and ovirt the same happens. If I use an NFS 
storage domain it runs OK. 



-- 

Jose Ferradeira 
http://www.logicworks.pt 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGBFSIHHTDOTTOWFWQKZFQMD56YWHTPZ/
 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJVVQZIRPVQOQQXY5T7BFZX4NYUE5N5S/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-08 Thread Alex McWhirter

I've run into a similar problem when using VDO + LVM + XFS stacks, also
with ZFS. 


If you're trying to use ZFS on 4.4, my recommendation is don't. You have
to run the testing branch at minimum, and quite a few things just don't
work. 


As for VDO, I ran into this issue when using VDO and an NVMe drive for LVM
caching of the thin pool; VDO would throw a fit and, under high load
scenarios, VMs would regularly pause.  


VDO with no cache was fine, however; it seems to be related to mixing device
types / block sizes (even if you override the block sizes). 
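
If anyone wants to verify that on their own stack, comparing the block sizes of the 
devices involved only takes a minute (a sketch; the device names are examples):

# Logical and physical sector size of the NVMe cache device vs. a data disk
blockdev --getss --getpbsz /dev/nvme0n1
blockdev --getss --getpbsz /dev/sda
# One view of the whole stack (VDO / LVM / bricks) with sector sizes
lsblk -o NAME,TYPE,SIZE,PHY-SEC,LOG-SEC,MOUNTPOINT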

Not sure if that helps. 


On 2021-06-08 12:26, Strahil Nikolov via Users wrote:

Maybe the shard xlator cannot cope with the speed of shard creation. 

Are you using preallocated disks on the Zimbra VM ? 

Best Regards, 
Strahil Nikolov


On Tue, Jun 8, 2021 at 17:57, José Ferradeira via Users 
 wrote: 

Hello, 

running ovirt 4.4.4.7-1.el8 and gluster 8.3. 
When I perform a restore of Zimbra Collaboration Email with features.shard on, the VM pauses with an unknown storage error. 
When I perform a restore of Zimbra Collaboration Email with features.shard off, it fills all the gluster storage domain disks. 

With older versions of gluster and ovirt the same happens. If I use an NFS storage domain it runs OK. 


--

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGBFSIHHTDOTTOWFWQKZFQMD56YWHTPZ/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MOJJZJG7LCGHDIYULF5572L52JE53T6D/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JWUSPYSFOMFOHPDOPZBLYGRHYYFVCZ5H/


[ovirt-users] Re: oVirt + Gluster issues

2021-06-08 Thread Strahil Nikolov via Users
Maybe the shard xlator cannot cope with the speed of shard creation.
Are you using preallocated disks on the Zimbra VM ?
Best Regards,Strahil Nikolov
 
 
  On Tue, Jun 8, 2021 at 17:57, José Ferradeira via Users 
wrote:   Hello,

running ovirt 4.4.4.7-1.el8 and gluster 8.3.
When I perform a restore of Zimbra Collaboration Email with features.shard on, 
the VM pauses with an unknown storage error.
When I perform a restore of Zimbra Collaboration Email with features.shard 
off, it fills all the gluster storage domain disks. 

With older versions of gluster and ovirt the same happens. If I use an NFS 
storage domain it runs OK.



-- 
Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGBFSIHHTDOTTOWFWQKZFQMD56YWHTPZ/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MOJJZJG7LCGHDIYULF5572L52JE53T6D/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-24 Thread Ritesh Chikatwar
On Wed, Sep 23, 2020 at 10:16 PM Strahil Nikolov via Users 
wrote:

>
> >1) ..."I would give the engine a 'Windows'-style fix (a.k.a.
> reboot)"  >how does one restart just the oVirt-engine?
> ssh to HostedEngine VM and run one of the following:
> - reboot
> - systemctl restart ovirt-engine.service
>
> >2) I now show in shell  3 nodes, each with the one brick for data,
> vmstore, >engine (and an ISO one I am trying to make).. with one brick each
> and all >online and replicating.  But the GUI shows thor (first server
> running >engine) offline needing to be reloaded.  Now volumes show two
> bricks.. one >online one offline.  And no option to start / force restart.
> If it shows one offline brick -> you can try the "force start". You can go
> to UI -> Storage -> Volume -> select Volume -> Start and then mark "Force"
> and "OK"
>
>
> >4) To the question of "did I add third node later."  I would attach
> >deployment guide I am building ... but can't do that in this forum.  but
> >this is as simple as I can make it.  3 intel generic servers,  1 x boot
> >drive , 1 x 512GB SSD,  2 x 1TB SSD in each.  wipe all data all
> >configuration fresh Centos8 minimal install.. setup SSH setup basic
> >networking... install cockpit.. run HCI wizard for all three nodes. That
> is >all.
>
> >How many hosts do you see in oVirt ?
> >Help is appreciated.  The main concern I have is gap in what engine sees
> >and what CLI shows.  Can someone show me where to get logs?  the GUI log
> >when I try to "activate" thor server "Status of host thor was set to
> >NonOperational."  "Gluster command [] failed on server
> >."  is very unhelpful.
> Check the following services on the node:
> - glusterd.service
>
If the glusterd service is running on the host, check whether vdsm-client is
returning the gluster host uuid. Just run this on the host: "vdsm-client
--gluster-enable GlusterHost uuid".
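
In other words, something like this on the affected host (a sketch; the vdsm-client 
call is the one quoted above, and the last command is only a cross-check against the 
UUID gluster itself reports for the node):

systemctl is-active glusterd
vdsm-client --gluster-enable GlusterHost uuid
gluster system:: uuid get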

> - sanlock.service
> - supervdsmd.service
> - vdsmd.service
> - ovirt-ha-broker.service
> - ovirt-ha-agent.service
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YJ7L5G7NU4PQAPQDCDIMC37JCEEGAILF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7OKJXLUVA6G7B4NR6YVEVN3MNPMIJUXT/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-23 Thread Gobinda Das
We do have a gluster volume UI sync issue and this is fixed in ovirt-4.4.2.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1860775
Could be the same issue.
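
To check whether that fix is already installed, the engine version can be read on the 
HostedEngine VM (a sketch):

# 4.4.2 or later carries the fix referenced in the BZ above
rpm -q ovirt-engine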

On Wed, Sep 23, 2020 at 10:16 PM Strahil Nikolov via Users 
wrote:

>
> >1) ..."I would give the engine a 'Windows'-style fix (a.k.a.
> reboot)"  >how does one restart just the oVirt-engine?
> ssh to HostedEngine VM and run one of the following:
> - reboot
> - systemctl restart ovirt-engine.service
>
> >2) I now show in shell  3 nodes, each with the one brick for data,
> vmstore, >engine (and an ISO one I am trying to make).. with one brick each
> and all >online and replicating.  But the GUI shows thor (first server
> running >engine) offline needing to be reloaded.  Now volumes show two
> bricks.. one >online one offline.  And no option to start / force restart.
> If it shows one offline brick -> you can try the "force start". You can go
> to UI -> Storage -> Volume -> select Volume -> Start and then mark "Force"
> and "OK"
>
>
> >4) To the question of "did I add third node later."  I would attach
> >deployment guide I am building ... but can't do that in this forum.  but
> >this is as simple as I can make it.  3 intel generic servers,  1 x boot
> >drive , 1 x 512GB SSD,  2 x 1TB SSD in each.  wipe all data all
> >configuration fresh Centos8 minimal install.. setup SSH setup basic
> >networking... install cockpit.. run HCI wizard for all three nodes. That
> is >all.
>
> >How many hosts do you see in oVirt ?
> >Help is appreciated.  The main concern I have is gap in what engine sees
> >and what CLI shows.  Can someone show me where to get logs?  the GUI log
> >when I try to "activate" thor server "Status of host thor was set to
> >NonOperational."  "Gluster command [] failed on server
> >."  is very unhelpful.
> Check the following services on the node:
> - glusterd.service
> - sanlock.service
> - supervdsmd.service
> - vdsmd.service
> - ovirt-ha-broker.service
> - ovirt-ha-agent.service
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YJ7L5G7NU4PQAPQDCDIMC37JCEEGAILF/
>


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BC3JYXVDAKANFLZQGMT6NUM6XBZNK254/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-23 Thread Strahil Nikolov via Users

>1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)"  
>>how does one restart just the oVirt-engine?
ssh to HostedEngine VM and run one of the following:
- reboot
- systemctl restart ovirt-engine.service

>2) I now show in shell  3 nodes, each with the one brick for data, vmstore, 
>>engine (and an ISO one I am trying to make).. with one brick each and all 
>>online and replicating.  But the GUI shows thor (first server running 
>>engine) offline needing to be reloaded.  Now volumes show two bricks.. one 
>>online one offline.  And no option to start / force restart.
If it shows one offline brick -> you can try the "force start". You can go to 
UI -> Storage -> Volume -> select Volume -> Start and then mark "Force" and "OK"


>4) To the question of "did I add third node later."  I would attach 
>>deployment guide I am building ... but can't do that in this forum.  but 
>>this is as simple as I can make it.  3 intel generic servers,  1 x boot 
>>drive , 1 x 512GB SSD,  2 x 1TB SSD in each.  wipe all data all 
>>configuration fresh Centos8 minimal install.. setup SSH setup basic 
>>networking... install cockpit.. run HCI wizard for all three nodes. That is 
>>all.

>How many hosts do you see in oVirt ?
>Help is appreciated.  The main concern I have is gap in what engine sees >and 
>what CLI shows.  Can someone show me where to get logs?  the GUI log  >when I 
>try to "activate" thor server "Status of host thor was set to 
>>NonOperational."  "Gluster command [] failed on server >."  
>is very unhelpful.
Check the following services on the node:
- glusterd.service
- sanlock.service
- supervdsmd.service
- vdsmd.service
- ovirt-ha-broker.service
- ovirt-ha-agent.service
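
A quick way to go through all of them in one pass (a minimal sketch, run on the 
affected node):

for s in glusterd sanlock supervdsmd vdsmd ovirt-ha-broker ovirt-ha-agent; do
    printf '%-22s %s\n' "$s" "$(systemctl is-active "$s")"
done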

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YJ7L5G7NU4PQAPQDCDIMC37JCEEGAILF/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-23 Thread Jeremey Wise
In oVirt Engine I think I see some of the issue.

When you go under volumes -> Data ->

[image: image.png]

It notes two servers..  when you choose "add brick" it says the volume has 3
bricks but only two servers.

So I went back to my deployment notes and walked through setup

yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm -y

yum install -y cockpit-ovirt-dashboard vdsm-gluster ovirt-host

Last metadata expiration check: 1:59:46 ago on Wed 23 Sep 2020 06:10:46 AM
EDT.
Package cockpit-ovirt-dashboard-0.14.11-1.el8.noarch is already installed.
Package ovirt-host-4.4.1-4.el8.x86_64 is already installed.
Dependencies resolved.
=
 Package                     Architecture   Version            Repository                  Size
=
Installing:
 vdsm-gluster                x86_64         4.40.26.3-1.el8    ovirt-4.4                    67 k
Installing dependencies:
 blivet-data                 noarch         1:3.1.0-21.el8_2   AppStream                   238 k
 glusterfs-events            x86_64         7.7-1.el8          ovirt-4.4-centos-gluster7    65 k
 glusterfs-geo-replication   x86_64         7.7-1.el8          ovirt-4.4-centos-gluster7   212 k
 libblockdev-plugins-all     x86_64         2.19-12.el8        AppStream                    62 k
 libblockdev-vdo             x86_64         2.19-12.el8        AppStream                    74 k
 python3-blivet              noarch         1:3.1.0-21.el8_2   AppStream                   995 k
 python3-blockdev            x86_64         2.19-12.el8        AppStream                    79 k
 python3-bytesize            x86_64         1.4-3.el8          AppStream                    28 k
 python3-magic               noarch         5.33-13.el8        BaseOS                       45 k
 python3-pyparted            x86_64         1:3.11.0-13.el8    AppStream                   123 k


Dependencies resolved.
Nothing to do.
Complete!
[root@thor media]#



AKA.. something got removed from the node..


Rebooted.. as I am not sure which dependencies and services would need to
be restarted to get oVirt-engine to pick things up.


Host is now "green" .. now only errors are about gluster bricks..



On Tue, Sep 22, 2020 at 9:30 PM penguin pages 
wrote:

>
>
> eMail client with this forum is a bit .. I was told this web
> interface I could post images... as embedded ones in email get scraped
> out...  but not seeing how that is done. Seems to be txt only.
>
>
>
> 1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)"
> how does one restart just the oVirt-engine?
>
> 2) I now show in shell  3 nodes, each with the one brick for data,
> vmstore, engine (and an ISO one I am trying to make).. with one brick each
> and all online and replicating.   But the GUI shows thor (first server
> running engine) offline needing to be reloaded.  Now volumes show two
> bricks.. one online one offline.  And no option to start / force restart.
>
> 3) I have tried several times to try a graceful reboot to see if startup
> sequence was issue.   I tore down VLANs and bridges to make it flat 1 x 1Gb
> mgmt, 1 x 10Gb storage.   SSH between nodes is fine... copy test was
> great.   I don't think it is nodes.
>
> 4) To the question of "did I add third node later."  I would attach
> deployment guide I am building ... but can't do that in this forum.  but
> this is as simple as I can make it.  3 intel generic servers,  1 x boot
> drive , 1 x 512GB SSD,  2 x 1TB SSD in each.   wipe all data all
> configuration fresh Centos8 minimal install.. setup SSH setup basic
> networking... install cockpit.. run HCI wizard for all three nodes. That is
> all.
>
> Trying to learn and support concept of oVirt as a viable platform but
> still trying to work through learning how to root cause, kick tires, and
> debug / recover when things go down .. as they will.
>
> Help is appreciated.  The main concern I have is gap in what engine sees
> 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread penguin pages


The email client with this forum is a bit .. I was told that through this web interface 
I could post images... as embedded ones in email get scraped out...  but I am not 
seeing how that is done. Seems to be text only.



1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)"  how 
does one restart just the oVirt-engine?

2) In the shell I now show 3 nodes, each with one brick for data, vmstore, and 
engine (and an ISO one I am trying to make).. with one brick each and all 
online and replicating.   But the GUI shows thor (the first server, running the engine) 
offline and needing to be reloaded.  Now volumes show two bricks.. one online, one 
offline.  And no option to start / force restart.

3) I have tried several times to try a graceful reboot to see if startup 
sequence was issue.   I tore down VLANs and bridges to make it flat 1 x 1Gb 
mgmt, 1 x 10Gb storage.   SSH between nodes is fine... copy test was great.   I 
don't think it is nodes.

4) To the question of "did I add the third node later."  I would attach the deployment 
guide I am building ... but can't do that in this forum.  But this is as simple 
as I can make it.  3 intel generic servers,  1 x boot drive , 1 x 512GB SSD,  2 
x 1TB SSD in each.   wipe all data all configuration fresh Centos8 minimal 
install.. setup SSH setup basic networking... install cockpit.. run HCI wizard 
for all three nodes. That is all.

Trying to learn and support the concept of oVirt as a viable platform, but still 
trying to work through learning how to root-cause, kick tires, and debug / 
recover when things go down .. as they will.

Help is appreciated.  The main concern I have is the gap between what the engine sees and 
what the CLI shows.  Can someone show me where to get logs?  The GUI log when I 
try to "activate" the thor server ("Status of host thor was set to NonOperational." 
"Gluster command [] failed on server .") is very unhelpful.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LKD7LJMC4X3LG5SEZ2M64YN5UKX36RAS/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
Ovirt uses the "/rhev/mnt... mountpoints.

Do you have those (for each storage domain ) ?

Here is an example from one of my nodes:
[root@ovirt1 ~]# df -hT | grep rhev
gluster1:/engine                              fuse.glusterfs  100G   19G   82G  
19% /rhev/data-center/mnt/glusterSD/gluster1:_engine
gluster1:/fast4                               fuse.glusterfs  100G   53G   48G  
53% /rhev/data-center/mnt/glusterSD/gluster1:_fast4
gluster1:/fast1                               fuse.glusterfs  100G   56G   45G  
56% /rhev/data-center/mnt/glusterSD/gluster1:_fast1
gluster1:/fast2                               fuse.glusterfs  100G   56G   45G  
56% /rhev/data-center/mnt/glusterSD/gluster1:_fast2
gluster1:/fast3                               fuse.glusterfs  100G   55G   46G  
55% /rhev/data-center/mnt/glusterSD/gluster1:_fast3
gluster1:/data                                fuse.glusterfs  2.4T  535G  1.9T  
23% /rhev/data-center/mnt/glusterSD/gluster1:_data



Best Regards,
Strahil Nikolov


On Tuesday, 22 September 2020 at 19:44:54 GMT+3, Jeremey Wise wrote: 






Yes.

And at one time it was fine.   I did a graceful shutdown.. and after booting it 
always seems to now have an issue with the one server... of course the one hosting 
the ovirt-engine :P

# Three nodes in cluster

# Error when you hover over node


# when i select node and choose "activate"



#Gluster is working fine... this is oVirt who is confused.
[root@medusa vmstore]# mount |grep media/vmstore
medusast.penguinpages.local:/vmstore on /media/vmstore type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev)
[root@medusa vmstore]# echo > /media/vmstore/test.out
[root@medusa vmstore]# ssh -f thor 'echo $HOSTNAME >> /media/vmstore/test.out'
[root@medusa vmstore]# ssh -f odin 'echo $HOSTNAME >> /media/vmstore/test.out'
[root@medusa vmstore]# ssh -f medusa 'echo $HOSTNAME >> /media/vmstore/test.out'
[root@medusa vmstore]# cat /media/vmstore/test.out

thor.penguinpages.local
odin.penguinpages.local
medusa.penguinpages.local


Ideas to fix oVirt?



On Tue, Sep 22, 2020 at 10:42 AM Strahil Nikolov  wrote:
> By the way, did you add the third host in the oVirt ?
> 
> If not , maybe that is the real problem :)
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 17:23:28 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> Its like oVirt thinks there are only two nodes in gluster replication
> 
> 
> 
> 
> 
> # Yet it is clear the CLI shows three bricks.
> [root@medusa vms]# gluster volume status vmstore
> Status of volume: vmstore
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore                        49154     0          Y       9444
> Brick odinst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore                        49154     0          Y       3269
> Brick medusast.penguinpages.local:/gluster_
> bricks/vmstore/vmstore                      49154     0          Y       7841
> Self-heal Daemon on localhost               N/A       N/A        Y       80152
> Self-heal Daemon on odinst.penguinpages.loc
> al                                          N/A       N/A        Y       
> 141750
> Self-heal Daemon on thorst.penguinpages.loc
> al                                          N/A       N/A        Y       
> 245870
> 
> Task Status of Volume vmstore
> --
> There are no active volume tasks
> 
> 
> 
> How do I get oVirt to re-establish reality to what Gluster sees?
> 
> 
> 
> On Tue, Sep 22, 2020 at 8:59 AM Strahil Nikolov  wrote:
>> Also in some rare cases, I have seen oVirt showing gluster as 2 out of 3 
>> bricks up , but usually it was an UI issue and you go to UI and mark a 
>> "force start" which will try to start any bricks that were down (won't 
>> affect gluster) and will wake up the UI task to verify again brick status.
>> 
>> 
>> https://github.com/gluster/gstatus is a good one to verify your cluster 
>> health , yet human's touch is priceless in any kind of technology.
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> В вторник, 22 септември 2020 г., 15:50:35 Гринуич+3, Jeremey Wise 
>>  написа: 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> when I posted last..  in the tread I paste a roling restart.    And...  now 
>> it is replicating.
>> 
>> oVirt still showing wrong.  BUT..   I did my normal test from each of the 
>> three nodes.
>> 
>> 1) Mount Gluster file system with localhost as primary and other two as 
>> tertiary to local mount (like a client would do)
>> 2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
>> 3) repeat from each node then read back that all are in sync.
>> 
>> I REALLY hate reboot (restart) as a fix.  I need to get better with root 
>> 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
By the way, did you add the third host in oVirt?

If not, maybe that is the real problem :)


Best Regards,
Strahil Nikolov






On Tuesday, 22 September 2020 at 17:23:28 GMT+3, Jeremey Wise wrote: 





It's like oVirt thinks there are only two nodes in gluster replication





# Yet it is clear the CLI shows three bricks.
[root@medusa vms]# gluster volume status vmstore
Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       9444
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       3269
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore                      49154     0          Y       7841
Self-heal Daemon on localhost               N/A       N/A        Y       80152
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       141750
Self-heal Daemon on thorst.penguinpages.loc
al                                          N/A       N/A        Y       245870

Task Status of Volume vmstore
--
There are no active volume tasks



How do I get oVirt to re-establish reality to what Gluster sees?



On Tue, Sep 22, 2020 at 8:59 AM Strahil Nikolov  wrote:
> Also in some rare cases, I have seen oVirt showing gluster as 2 out of 3 
> bricks up , but usually it was an UI issue and you go to UI and mark a "force 
> start" which will try to start any bricks that were down (won't affect 
> gluster) and will wake up the UI task to verify again brick status.
> 
> 
> https://github.com/gluster/gstatus is a good one to verify your cluster 
> health , yet human's touch is priceless in any kind of technology.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 15:50:35 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> 
> when I posted last..  in the tread I paste a roling restart.    And...  now 
> it is replicating.
> 
> oVirt still showing wrong.  BUT..   I did my normal test from each of the 
> three nodes.
> 
> 1) Mount Gluster file system with localhost as primary and other two as 
> tertiary to local mount (like a client would do)
> 2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
> 3) repeat from each node then read back that all are in sync.
> 
> I REALLY hate reboot (restart) as a fix.  I need to get better with root 
> cause of gluster issues if I am going to trust it.  Before when I manually 
> made the volumes and it was simply (vdo + gluster) then worst case was that 
> gluster would break... but I could always go into "brick" path and copy data 
> out.
> 
> Now with oVirt.. .and LVM and thin provisioning etc..   I am abstracted from 
> simple file recovery..  Without GLUSTER AND oVirt Engine up... all my 
> environment  and data is lost.  This means nodes moved more to "pets" then 
> cattle.
> 
> And with three nodes.. I can't afford to loose any pets. 
> 
> I will post more when I get cluster settled and work on those wierd notes 
> about quorum volumes noted on two nodes when glusterd is restarted.
> 
> Thanks,
> 
> On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov  wrote:
>> Replication issue could mean that one of the client (FUSE mounts) is not 
>> attached to all bricks.
>> 
>> You can check the amount of clients via:
>> gluster volume status all client-list
>> 
>> 
>> As a prevention , just do a rolling restart:
>> - set a host in maintenance and mark it to stop glusterd service (I'm 
>> reffering to the UI)
>> - Activate the host , once it was moved to maintenance
>> 
>> Wait for the host's HE score to recover (silver/gold crown in UI) and then 
>> proceed with the next one.
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
>>  написа: 
>> 
>> 
>> 
>> 
>> 
>> 
>> I did.
>> 
>> Here are all three nodes with restart. I find it odd ... their has been a 
>> set of messages at end (see below) which I don't know enough about what 
>> oVirt laid out to know if it is bad.
>> 
>> ###
>> [root@thor vmstore]# systemctl status glusterd
>> ● glusterd.service - GlusterFS, a clustered file-system server
>>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
>> preset: disabled)
>>   Drop-In: /etc/systemd/system/glusterd.service.d
>>            └─99-cpu.conf
>>    Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>>      Docs: man:glusterd(8)
>>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
>> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
>>  Main PID: 2113 (glusterd)
>>     Tasks: 151 (limit: 1235410)
>>    Memory: 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
That's really weird.
I would give the engine a 'Windows'-style fix (a.k.a. reboot).

I guess some of the engine's internal processes crashed/looped and it doesn't 
see the reality.

Best Regards,
Strahil Nikolov






On Tuesday, 22 September 2020 at 16:27:25 GMT+3, Jeremey Wise wrote: 





It's like oVirt thinks there are only two nodes in gluster replication





# Yet it is clear the CLI shows three bricks.
[root@medusa vms]# gluster volume status vmstore
Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       9444
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       3269
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore                      49154     0          Y       7841
Self-heal Daemon on localhost               N/A       N/A        Y       80152
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       141750
Self-heal Daemon on thorst.penguinpages.loc
al                                          N/A       N/A        Y       245870

Task Status of Volume vmstore
--
There are no active volume tasks



How do I get oVirt to re-establish reality to what Gluster sees?



On Tue, Sep 22, 2020 at 8:59 AM Strahil Nikolov  wrote:
> Also in some rare cases, I have seen oVirt showing gluster as 2 out of 3 
> bricks up , but usually it was an UI issue and you go to UI and mark a "force 
> start" which will try to start any bricks that were down (won't affect 
> gluster) and will wake up the UI task to verify again brick status.
> 
> 
> https://github.com/gluster/gstatus is a good one to verify your cluster 
> health , yet human's touch is priceless in any kind of technology.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 15:50:35 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> 
> when I posted last..  in the tread I paste a roling restart.    And...  now 
> it is replicating.
> 
> oVirt still showing wrong.  BUT..   I did my normal test from each of the 
> three nodes.
> 
> 1) Mount Gluster file system with localhost as primary and other two as 
> tertiary to local mount (like a client would do)
> 2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
> 3) repeat from each node then read back that all are in sync.
> 
> I REALLY hate reboot (restart) as a fix.  I need to get better with root 
> cause of gluster issues if I am going to trust it.  Before when I manually 
> made the volumes and it was simply (vdo + gluster) then worst case was that 
> gluster would break... but I could always go into "brick" path and copy data 
> out.
> 
> Now with oVirt.. .and LVM and thin provisioning etc..   I am abstracted from 
> simple file recovery..  Without GLUSTER AND oVirt Engine up... all my 
> environment  and data is lost.  This means nodes moved more to "pets" then 
> cattle.
> 
> And with three nodes.. I can't afford to loose any pets. 
> 
> I will post more when I get cluster settled and work on those wierd notes 
> about quorum volumes noted on two nodes when glusterd is restarted.
> 
> Thanks,
> 
> On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov  wrote:
>> Replication issue could mean that one of the client (FUSE mounts) is not 
>> attached to all bricks.
>> 
>> You can check the amount of clients via:
>> gluster volume status all client-list
>> 
>> 
>> As a prevention , just do a rolling restart:
>> - set a host in maintenance and mark it to stop glusterd service (I'm 
>> reffering to the UI)
>> - Activate the host , once it was moved to maintenance
>> 
>> Wait for the host's HE score to recover (silver/gold crown in UI) and then 
>> proceed with the next one.
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
>>  написа: 
>> 
>> 
>> 
>> 
>> 
>> 
>> I did.
>> 
>> Here are all three nodes with restart. I find it odd ... their has been a 
>> set of messages at end (see below) which I don't know enough about what 
>> oVirt laid out to know if it is bad.
>> 
>> ###
>> [root@thor vmstore]# systemctl status glusterd
>> ● glusterd.service - GlusterFS, a clustered file-system server
>>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
>> preset: disabled)
>>   Drop-In: /etc/systemd/system/glusterd.service.d
>>            └─99-cpu.conf
>>    Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>>      Docs: man:glusterd(8)
>>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
>> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
Also, in some rare cases I have seen oVirt showing gluster as 2 out of 3 bricks 
up, but usually it was a UI issue: you go to the UI and mark a "force start", 
which will try to start any bricks that were down (it won't affect gluster) and 
will wake up the UI task to verify the brick status again.


https://github.com/gluster/gstatus is a good one to verify your cluster health, 
yet a human's touch is priceless in any kind of technology.

Best Regards,
Strahil Nikolov






On Tuesday, 22 September 2020 at 15:50:35 GMT+3, Jeremey Wise wrote: 







When I posted last.. in the thread I pasted a rolling restart.    And...  now it 
is replicating.

oVirt is still showing it wrong.  BUT..   I did my normal test from each of the three 
nodes.

1) Mount Gluster file system with localhost as primary and other two as 
tertiary to local mount (like a client would do)
2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
3) repeat from each node then read back that all are in sync.

I REALLY hate reboot (restart) as a fix.  I need to get better at root-causing 
gluster issues if I am going to trust it.  Before, when I manually made the 
volumes and it was simply (vdo + gluster), the worst case was that gluster 
would break... but I could always go into the "brick" path and copy data out.

Now with oVirt.. .and LVM and thin provisioning etc..   I am abstracted from 
simple file recovery..  Without GLUSTER AND the oVirt Engine up... all my 
environment and data is lost.  This means nodes moved more to "pets" than 
cattle.

And with three nodes.. I can't afford to lose any pets. 

I will post more when I get the cluster settled and work on those weird notes about 
quorum volumes noted on two nodes when glusterd is restarted.

Thanks,

On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov  wrote:
> Replication issue could mean that one of the client (FUSE mounts) is not 
> attached to all bricks.
> 
> You can check the amount of clients via:
> gluster volume status all client-list
> 
> 
> As a prevention , just do a rolling restart:
> - set a host in maintenance and mark it to stop glusterd service (I'm 
> reffering to the UI)
> - Activate the host , once it was moved to maintenance
> 
> Wait for the host's HE score to recover (silver/gold crown in UI) and then 
> proceed with the next one.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> I did.
> 
> Here are all three nodes with restart. I find it odd ... their has been a set 
> of messages at end (see below) which I don't know enough about what oVirt 
> laid out to know if it is bad.
> 
> ###
> [root@thor vmstore]# systemctl status glusterd
> ● glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
> preset: disabled)
>   Drop-In: /etc/systemd/system/glusterd.service.d
>            └─99-cpu.conf
>    Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>      Docs: man:glusterd(8)
>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
>  Main PID: 2113 (glusterd)
>     Tasks: 151 (limit: 1235410)
>    Memory: 3.8G
>       CPU: 6min 46.050s
>    CGroup: /glusterfs.slice/glusterd.service
>            ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level 
> INFO
>            ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data -p 
> /var/run/gluster/shd/data/data-shd.pid -l /var/log/glusterfs/glustershd.log 
> -S /var/run/gluster/2f41374c2e36bf4d.socket --xlator-option 
> *replicate*.node-uu>
>            ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p 
> /var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
>  -S /var/r>
>            ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine -p 
> /var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
>            ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore 
> -p 
> /var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
>            └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p 
> /var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid
>  -S /var/run/glu>
> 
> Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a 
> clustered file-system server...
> Sep 21 20:32:26 thor.penguinpages.local systemd[1]: Started GlusterFS, a 
> clustered file-system server.
> Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
> 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
Usually I first start with:
'gluster volume heal  info summary'

Anything that is not 'Connected' is bad.
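
To run that check across every volume at once, a small loop does it (a minimal sketch):

for v in $(gluster volume list); do
    echo "== $v =="
    gluster volume heal "$v" info summary
done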

Yeah, the abstraction is not so nice, but the good thing is that you can always 
extract the data from the single node that is left (it will require playing a little bit 
with the quorum of the volume).

Usually I have seen that the FUSE client fails to reconnect to a "gone bad and 
recovered" brick and then you get that endless healing (as FUSE will write the 
data to only 2 out of 3 bricks and then a heal is pending :D ).

I would go with the gluster logs and the brick logs, and then you can dig deeper 
if you suspect a network issue.


Best Regards,
Strahil Nikolov






On Tuesday, 22 September 2020 at 15:50:35 GMT+3, Jeremey Wise wrote: 







When I posted last.. in the thread I pasted a rolling restart.    And...  now it 
is replicating.

oVirt is still showing it wrong.  BUT..   I did my normal test from each of the three 
nodes.

1) Mount Gluster file system with localhost as primary and other two as 
tertiary to local mount (like a client would do)
2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
3) repeat from each node then read back that all are in sync.

I REALLY hate reboot (restart) as a fix.  I need to get better at root-causing 
gluster issues if I am going to trust it.  Before, when I manually made the 
volumes and it was simply (vdo + gluster), the worst case was that gluster 
would break... but I could always go into the "brick" path and copy data out.

Now with oVirt.. .and LVM and thin provisioning etc..   I am abstracted from 
simple file recovery..  Without GLUSTER AND the oVirt Engine up... all my 
environment and data is lost.  This means nodes moved more to "pets" than 
cattle.

And with three nodes.. I can't afford to lose any pets. 

I will post more when I get the cluster settled and work on those weird notes about 
quorum volumes noted on two nodes when glusterd is restarted.

Thanks,

On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov  wrote:
> Replication issue could mean that one of the client (FUSE mounts) is not 
> attached to all bricks.
> 
> You can check the amount of clients via:
> gluster volume status all client-list
> 
> 
> As a prevention , just do a rolling restart:
> - set a host in maintenance and mark it to stop glusterd service (I'm 
> reffering to the UI)
> - Activate the host , once it was moved to maintenance
> 
> Wait for the host's HE score to recover (silver/gold crown in UI) and then 
> proceed with the next one.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> I did.
> 
> Here are all three nodes with restart. I find it odd ... their has been a set 
> of messages at end (see below) which I don't know enough about what oVirt 
> laid out to know if it is bad.
> 
> ###
> [root@thor vmstore]# systemctl status glusterd
> ● glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
> preset: disabled)
>   Drop-In: /etc/systemd/system/glusterd.service.d
>            └─99-cpu.conf
>    Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>      Docs: man:glusterd(8)
>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
>  Main PID: 2113 (glusterd)
>     Tasks: 151 (limit: 1235410)
>    Memory: 3.8G
>       CPU: 6min 46.050s
>    CGroup: /glusterfs.slice/glusterd.service
>            ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level 
> INFO
>            ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data -p 
> /var/run/gluster/shd/data/data-shd.pid -l /var/log/glusterfs/glustershd.log 
> -S /var/run/gluster/2f41374c2e36bf4d.socket --xlator-option 
> *replicate*.node-uu>
>            ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p 
> /var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
>  -S /var/r>
>            ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine -p 
> /var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
>            ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore 
> -p 
> /var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
>            └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p 
> /var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid
>  -S /var/run/glu>
> 
> Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a 
> clustered file-system server...

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Jeremey Wise
When I posted last.. in the thread I pasted a rolling restart. And... now
it is replicating.

oVirt is still showing it wrong.  BUT..   I did my normal test from each of the
three nodes.

1) Mount Gluster file system with localhost as primary and other two as
tertiary to local mount (like a client would do)
2) run test file create Ex:   echo $HOSTNAME >>
/media/glustervolume/test.out
3) repeat from each node then read back that all are in sync.

I REALLY hate reboot (restart) as a fix.  I need to get better at root-causing
gluster issues if I am going to trust it.  Before, when I manually made the
volumes and it was simply (vdo + gluster), the worst case was that gluster
would break... but I could always go into the "brick" path and copy data out.

Now with oVirt.. .and LVM and thin provisioning etc..   I am abstracted
from simple file recovery..  Without GLUSTER AND the oVirt Engine up... all my
environment and data is lost.  This means nodes moved more to "pets" than
cattle.

And with three nodes.. I can't afford to lose any pets.

I will post more when I get the cluster settled and work on those weird notes
about quorum volumes noted on two nodes when glusterd is restarted.

Thanks,

On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov 
wrote:

> Replication issue could mean that one of the client (FUSE mounts) is not
> attached to all bricks.
>
> You can check the amount of clients via:
> gluster volume status all client-list
>
>
> As a prevention , just do a rolling restart:
> - set a host in maintenance and mark it to stop glusterd service (I'm
> reffering to the UI)
> - Activate the host , once it was moved to maintenance
>
> Wait for the host's HE score to recover (silver/gold crown in UI) and then
> proceed with the next one.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise <
> jeremey.w...@gmail.com> написа:
>
>
>
>
>
>
> I did.
>
> Here are all three nodes with restart. I find it odd ... their has been a
> set of messages at end (see below) which I don't know enough about what
> oVirt laid out to know if it is bad.
>
> ###
> [root@thor vmstore]# systemctl status glusterd
> ● glusterd.service - GlusterFS, a clustered file-system server
>Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled;
> vendor preset: disabled)
>   Drop-In: /etc/systemd/system/glusterd.service.d
>└─99-cpu.conf
>Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>  Docs: man:glusterd(8)
>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
>  Main PID: 2113 (glusterd)
> Tasks: 151 (limit: 1235410)
>Memory: 3.8G
>   CPU: 6min 46.050s
>CGroup: /glusterfs.slice/glusterd.service
>├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
> INFO
>├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data
> -p /var/run/gluster/shd/data/data-shd.pid -l
> /var/log/glusterfs/glustershd.log -S
> /var/run/gluster/2f41374c2e36bf4d.socket --xlator-option
> *replicate*.node-uu>
>├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local
> --volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p
> /var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
> -S /var/r>
>├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local
> --volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine
> -p
> /var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
>├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local
> --volfile-id
> vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore -p
> /var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
>└─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local
> --volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p
> /var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid
> -S /var/run/glu>
>
> Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a
> clustered file-system server...
> Sep 21 20:32:26 thor.penguinpages.local systemd[1]: Started GlusterFS, a
> clustered file-system server.
> Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22
> 00:32:28.605674] C [MSGID: 106003]
> [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action]
> 0-management: Server quorum regained for volume data. Starting lo>
> Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22
> 00:32:28.639490] C [MSGID: 106003]
> [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action]
> 0-management: Server quorum regained for volume engine. Starting >
> Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22
> 00:32:28.680665] C [MSGID: 106003]
> [glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action]
> 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
At around Sep 21 20:33 local time, you got a loss of quorum - that's not good.

Could it be a network 'hiccup'?
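
A quick way to rule that out from the node that lost contact (a sketch; the peer names 
are the ones used elsewhere in this thread):

# Peer membership as gluster sees it
gluster pool list
# Reachability of glusterd (TCP 24007) on each peer, with a 2 second timeout
for h in thorst.penguinpages.local odinst.penguinpages.local medusast.penguinpages.local; do
    timeout 2 bash -c ">/dev/tcp/$h/24007" 2>/dev/null \
        && echo "$h: 24007 reachable" || echo "$h: 24007 NOT reachable"
done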

Best Regards,
Strahil Nikolov






On Tuesday, 22 September 2020 at 15:05:16 GMT+3, Jeremey Wise wrote: 






I did.

Here are all three nodes with a restart. I find it odd ... there has been a set 
of messages at the end (see below) which I don't know enough about what oVirt laid 
out to know if it is bad.

###
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
     Docs: man:glusterd(8)
  Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2113 (glusterd)
    Tasks: 151 (limit: 1235410)
   Memory: 3.8G
      CPU: 6min 46.050s
   CGroup: /glusterfs.slice/glusterd.service
           ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data -p 
/var/run/gluster/shd/data/data-shd.pid -l /var/log/glusterfs/glustershd.log -S 
/var/run/gluster/2f41374c2e36bf4d.socket --xlator-option *replicate*.node-uu>
           ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p 
/var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
 -S /var/r>
           ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine -p 
/var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
           ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore 
-p 
/var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
           └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p 
/var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid 
-S /var/run/glu>

Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a 
clustered file-system server...
Sep 21 20:32:26 thor.penguinpages.local systemd[1]: Started GlusterFS, a 
clustered file-system server.
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.605674] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume data. Starting lo>
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.639490] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume engine. Starting >
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.680665] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume vmstore. Starting>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.813409] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
0-data-client-0: server 172.16.101.101:24007 has not responded in the last 30 
seconds, discon>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.815147] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
2-engine-client-0: server 172.16.101.101:24007 has not responded in the last 30 
seconds, disc>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.818735] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
4-vmstore-client-0: server 172.16.101.101:24007 has not responded in the last 
30 seconds, dis>
Sep 21 20:33:36 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:36.816978] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
3-iso-client-0: server 172.16.101.101:24007 has not responded in the last 42 
seconds, disconn>
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]# systemctl restart glusterd
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Tue 2020-09-22 07:24:34 EDT; 2s ago
     Docs: man:glusterd(8)
  Process: 245831 ExecStart=/usr/sbin/glusterd -p 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
A replication issue could mean that one of the clients (FUSE mounts) is not 
attached to all bricks.

You can check the number of clients via:
gluster volume status all client-list


As a prevention, just do a rolling restart:
- set a host in maintenance and mark it to stop the glusterd service (I'm referring 
to the UI)
- Activate the host, once it has been moved to maintenance

Wait for the host's HE score to recover (silver/gold crown in UI) and then 
proceed with the next one.
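
To watch the score recover without waiting on the UI, the same information is available 
from any host (a sketch):

# Shows the engine status and the ha-agent score for every host
hosted-engine --vm-status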

Best Regards,
Strahil Nikolov




On Tuesday, 22 September 2020 at 14:55:35 GMT+3, Jeremey Wise wrote: 






I did.

Here are all three nodes with a restart. I find it odd ... there has been a set 
of messages at the end (see below) which I don't know enough about what oVirt laid 
out to know if it is bad.

###
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
     Docs: man:glusterd(8)
  Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2113 (glusterd)
    Tasks: 151 (limit: 1235410)
   Memory: 3.8G
      CPU: 6min 46.050s
   CGroup: /glusterfs.slice/glusterd.service
           ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data -p 
/var/run/gluster/shd/data/data-shd.pid -l /var/log/glusterfs/glustershd.log -S 
/var/run/gluster/2f41374c2e36bf4d.socket --xlator-option *replicate*.node-uu>
           ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p 
/var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
 -S /var/r>
           ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine -p 
/var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
           ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore 
-p 
/var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
           └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p 
/var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid 
-S /var/run/glu>

Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a 
clustered file-system server...
Sep 21 20:32:26 thor.penguinpages.local systemd[1]: Started GlusterFS, a 
clustered file-system server.
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.605674] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume data. Starting lo>
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.639490] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume engine. Starting >
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.680665] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume vmstore. Starting>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.813409] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
0-data-client-0: server 172.16.101.101:24007 has not responded in the last 30 
seconds, discon>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.815147] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
2-engine-client-0: server 172.16.101.101:24007 has not responded in the last 30 
seconds, disc>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.818735] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
4-vmstore-client-0: server 172.16.101.101:24007 has not responded in the last 
30 seconds, dis>
Sep 21 20:33:36 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:36.816978] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
3-iso-client-0: server 172.16.101.101:24007 has not responded in the last 42 
seconds, disconn>
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]# systemctl restart glusterd
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Jeremey Wise
I did.

Here are all three nodes after a restart. I find it odd ... there has been a
set of messages at the end (see below), and I don't know enough about what
oVirt laid out to tell whether it is bad.

###
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled;
vendor preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
   └─99-cpu.conf
   Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
 Docs: man:glusterd(8)
  Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2113 (glusterd)
Tasks: 151 (limit: 1235410)
   Memory: 3.8G
  CPU: 6min 46.050s
   CGroup: /glusterfs.slice/glusterd.service
   ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
INFO
   ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data
-p /var/run/gluster/shd/data/data-shd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/2f41374c2e36bf4d.socket --xlator-option
*replicate*.node-uu>
   ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local
--volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p
/var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
-S /var/r>
   ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local
--volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine
-p
/var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
   ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local
--volfile-id
vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore -p
/var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
   └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local
--volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p
/var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid
-S /var/run/glu>

Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a
clustered file-system server...
Sep 21 20:32:26 thor.penguinpages.local systemd[1]: Started GlusterFS, a
clustered file-system server.
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22
00:32:28.605674] C [MSGID: 106003]
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume data. Starting lo>
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22
00:32:28.639490] C [MSGID: 106003]
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume engine. Starting >
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22
00:32:28.680665] C [MSGID: 106003]
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume vmstore. Starting>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22
00:33:24.813409] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired]
0-data-client-0: server 172.16.101.101:24007 has not responded in the last
30 seconds, discon>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22
00:33:24.815147] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired]
2-engine-client-0: server 172.16.101.101:24007 has not responded in the
last 30 seconds, disc>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22
00:33:24.818735] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired]
4-vmstore-client-0: server 172.16.101.101:24007 has not responded in the
last 30 seconds, dis>
Sep 21 20:33:36 thor.penguinpages.local glustershd[2914]: [2020-09-22
00:33:36.816978] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired]
3-iso-client-0: server 172.16.101.101:24007 has not responded in the last
42 seconds, disconn>
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]# systemctl restart glusterd
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled;
vendor preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
   └─99-cpu.conf
   Active: active (running) since Tue 2020-09-22 07:24:34 EDT; 2s ago
 Docs: man:glusterd(8)
  Process: 245831 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 245832 (glusterd)
Tasks: 151 (limit: 1235410)
   Memory: 3.8G
  CPU: 132ms
   CGroup: /glusterfs.slice/glusterd.service
   ├─  2914 /usr/sbin/glusterfs -s localhost 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
Have you restarted glusterd.service on the affected node?
glusterd is just the management layer, and restarting it won't affect the brick processes.

Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 01:43:36 Гринуич+3, Jeremey Wise 
 написа: 






Start is not an option.

It notes two bricks, but the command line shows three bricks, all present.

[root@odin thorst.penguinpages.local:_vmstore]# gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       33123
Brick odinst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data                            49152     0          Y       2646
Self-heal Daemon on localhost               N/A       N/A        Y       3004
Self-heal Daemon on thorst.penguinpages.loc
al                                          N/A       N/A        Y       33230
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume data
--
There are no active volume tasks

[root@odin thorst.penguinpages.local:_vmstore]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@odin thorst.penguinpages.local:_vmstore]#




On Mon, Sep 21, 2020 at 4:32 PM Strahil Nikolov  wrote:
> Just select the volume and press "start" . It will automatically mark "force 
> start" and will fix itself.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В понеделник, 21 септември 2020 г., 20:53:15 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> oVirt engine shows  one of the gluster servers having an issue.  I did a 
> graceful shutdown of all three nodes over weekend as I have to move around 
> some power connections in prep for UPS.
> 
> Came back up.. but
> 
> 
> 
> And this is reflected in 2 bricks online (should be three for each volume)
> 
> 
> Command line shows gluster should be happy.
> 
> [root@thor engine]# gluster peer status
> Number of Peers: 2
> 
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
> 
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
> 
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/data/data                              49152     0          Y       11001
> Brick odinst.penguinpages.local:/gluster_br
> icks/data/data                              49152     0          Y       2970
> Brick medusast.penguinpages.local:/gluster_
> bricks/data/data                            49152     0          Y       2646
> Self-heal Daemon on localhost               N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.loc
> al                                          N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.l
> ocal                                        N/A       N/A        Y       2475
> 
> Task Status of Volume data
> --
> There are no active volume tasks
> 
> Status of volume: engine
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/engine/engine                          49153     0          Y       11012
> Brick odinst.penguinpages.local:/gluster_br
> icks/engine/engine                          49153     0          Y       2982
> Brick medusast.penguinpages.local:/gluster_
> bricks/engine/engine                        49153     0          Y       2657
> Self-heal Daemon on localhost               N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.loc
> al                                          N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.l
> ocal                                        N/A       N/A        Y       2475
> 
> Task Status of Volume engine
> --
> There are no active 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Jeremey Wise
Start is not an option.

It notes two bricks, but the command line shows three bricks, all present.

[root@odin thorst.penguinpages.local:_vmstore]# gluster volume status data
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data  49152 0  Y
33123
Brick odinst.penguinpages.local:/gluster_br
icks/data/data  49152 0  Y
2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data49152 0  Y
2646
Self-heal Daemon on localhost   N/A   N/AY
3004
Self-heal Daemon on thorst.penguinpages.loc
al  N/A   N/AY
33230
Self-heal Daemon on medusast.penguinpages.l
ocalN/A   N/AY
2475

Task Status of Volume data
--
There are no active volume tasks

[root@odin thorst.penguinpages.local:_vmstore]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@odin thorst.penguinpages.local:_vmstore]#




On Mon, Sep 21, 2020 at 4:32 PM Strahil Nikolov 
wrote:

> Just select the volume and press "start" . It will automatically mark
> "force start" and will fix itself.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> В понеделник, 21 септември 2020 г., 20:53:15 Гринуич+3, Jeremey Wise <
> jeremey.w...@gmail.com> написа:
>
>
>
>
>
>
> oVirt engine shows  one of the gluster servers having an issue.  I did a
> graceful shutdown of all three nodes over weekend as I have to move around
> some power connections in prep for UPS.
>
> Came back up.. but
>
>
>
> And this is reflected in 2 bricks online (should be three for each volume)
>
>
> Command line shows gluster should be happy.
>
> [root@thor engine]# gluster peer status
> Number of Peers: 2
>
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
>
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
>
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 11001
> Brick odinst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 2970
> Brick medusast.penguinpages.local:/gluster_
> bricks/data/data49152 0  Y
> 2646
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume data
>
> --
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 11012
> Brick odinst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 2982
> Brick medusast.penguinpages.local:/gluster_
> bricks/engine/engine49153 0  Y
> 2657
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume engine
>
> --
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/iso/iso49156 49157  Y
> 151426
> Brick 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Strahil Nikolov via Users
Just select the volume and press "start". It will automatically mark "force 
start" and will fix itself.

Best Regards,
Strahil Nikolov






В понеделник, 21 септември 2020 г., 20:53:15 Гринуич+3, Jeremey Wise 
 написа: 






The oVirt engine shows one of the gluster servers having an issue. I did a 
graceful shutdown of all three nodes over the weekend, as I had to move around some 
power connections in prep for a UPS.

Came back up.. but



And this is reflected in 2 bricks online (should be three for each volume)


Command line shows gluster should be happy.

[root@thor engine]# gluster peer status
Number of Peers: 2

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@thor engine]#

# All bricks showing online
[root@thor engine]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       11001
Brick odinst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data                            49152     0          Y       2646
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume data
--
There are no active volume tasks

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/engine/engine                          49153     0          Y       11012
Brick odinst.penguinpages.local:/gluster_br
icks/engine/engine                          49153     0          Y       2982
Brick medusast.penguinpages.local:/gluster_
bricks/engine/engine                        49153     0          Y       2657
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume engine
--
There are no active volume tasks

Status of volume: iso
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/iso/iso                                49156     49157      Y       151426
Brick odinst.penguinpages.local:/gluster_br
icks/iso/iso                                49156     49157      Y       69225
Brick medusast.penguinpages.local:/gluster_
bricks/iso/iso                              49156     49157      Y       45018
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume iso
--
There are no active volume tasks

Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       11023
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       2993
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore                      49154     0          Y       2668
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004

Task Status of Volume vmstore
--
There are no active volume 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Jayme
You could try setting the host to maintenance with the "stop gluster" option checked,
then re-activating the host, or try restarting the glusterd service on the host.

On Mon, Sep 21, 2020 at 2:52 PM Jeremey Wise  wrote:

>
> oVirt engine shows  one of the gluster servers having an issue.  I did a
> graceful shutdown of all three nodes over weekend as I have to move around
> some power connections in prep for UPS.
>
> Came back up.. but
>
> [image: image.png]
>
> And this is reflected in 2 bricks online (should be three for each volume)
> [image: image.png]
>
> Command line shows gluster should be happy.
>
> [root@thor engine]# gluster peer status
> Number of Peers: 2
>
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
>
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
>
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 11001
> Brick odinst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 2970
> Brick medusast.penguinpages.local:/gluster_
> bricks/data/data49152 0  Y
> 2646
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume data
>
> --
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 11012
> Brick odinst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 2982
> Brick medusast.penguinpages.local:/gluster_
> bricks/engine/engine49153 0  Y
> 2657
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume engine
>
> --
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/iso/iso49156 49157  Y
> 151426
> Brick odinst.penguinpages.local:/gluster_br
> icks/iso/iso49156 49157  Y
> 69225
> Brick medusast.penguinpages.local:/gluster_
> bricks/iso/iso  49156 49157  Y
> 45018
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume iso
>
> --
> There are no active volume tasks
>
> Status of volume: vmstore
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore49154 0  Y
> 11023
> Brick odinst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore49154 0  Y
> 2993
> Brick medusast.penguinpages.local:/gluster_
> bricks/vmstore/vmstore  49154 0  Y
> 2668
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
>
> Task Status of Volume 

[ovirt-users] Re: oVirt Gluster Node Completely Failed Replacement

2019-10-24 Thread Gobinda Das
Hi Robert,
 If it's only a gluster host and not a hypervisor, then this document is the correct
one.
Removing a host from a hyperconverged setup:

   - Hosts that have a brick cannot be removed - they can only be replaced. You
   will first need to replace that host's bricks with bricks on another host.
   - Add a new host to the cluster.
   - Create bricks on the newly added host - the Create Brick UI option
   from the Storage Devices sub-tab can be used to prepare and mount bricks.
   - Replace the bricks from the host to be removed using the Replace Brick
   option (a CLI sketch follows below).
   - Once all bricks are replaced, the host can be moved to maintenance
   and removed.
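
If you prefer the CLI, the brick replacement itself takes roughly this form (the 
volume name, hostnames and brick paths below are placeholders):

gluster volume replace-brick data \
    failed-host:/gluster_bricks/data/data \
    new-host:/gluster_bricks/data/data \
    commit force

Run it once for every volume that had a brick on the failed host, and let the 
self-heal finish before removing the old host.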


On Thu, Oct 24, 2019 at 12:42 PM Sandro Bonazzola 
wrote:

>
>
> Il giorno gio 24 ott 2019 alle ore 06:57 Robert Crawford <
> robert.crawford4.14...@gmail.com> ha scritto:
>
>> Hey Guys,
>>
>> I had a Gluster node fail and I need to replace it; is there a
>> replacement guide?
>>
>
> I think the only guide we have on this topic is
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Maintenance_and_Upgrading_Resources.html
> +Sahina Bose  ? +Gobinda Das  ?
>
>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QVYO55KPAG5TGJLQ5OTZIRM4EDNZARWV/
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
> *
>


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EGU6U6WIGUHJ7JWZLDE54JFANFG7GE7B/


[ovirt-users] Re: oVirt Gluster Node Completely Failed Replacement

2019-10-24 Thread Sandro Bonazzola
Il giorno gio 24 ott 2019 alle ore 06:57 Robert Crawford <
robert.crawford4.14...@gmail.com> ha scritto:

> Hey Guys,
>
> I had a Gluster node fail and I need to replace it; is there a replacement
> guide?
>

I think the only guide we have on this topic is
https://ovirt.org/documentation/gluster-hyperconverged/chap-Maintenance_and_Upgrading_Resources.html
+Sahina Bose  ? +Gobinda Das  ?


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QVYO55KPAG5TGJLQ5OTZIRM4EDNZARWV/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JG7MAR5Z3IHQYCQPZDMXG6345AR7V4O2/


[ovirt-users] Re: oVirt + Gluster

2019-09-23 Thread Strahil
Recently both oVirt Node and the Hypervisor have moved to Gluster v6 (although I 
moved a little bit earlier).
For me, v6 of Gluster is quite stable, and if you dig around a bit you can even 
set up 'gtop', which gives great info.

oVirt is quite well integrated with Gluster - it does health checks, provides 
monitoring, allows creating additional gluster volumes, and more.
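
If you cannot find gtop, Gluster's built-in profile/top commands give similar 
information; a minimal sketch (the volume name is a placeholder):

gluster volume profile data1 start
gluster volume profile data1 info
gluster volume top data1 read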

Still, if you have FC available, it should be much easier to set up and tune.

Docs are one of the weak points, but you can always check the RHV (4.0+) and Oracle 
docs, as oVirt is the upstream of Red Hat's and Oracle's virtualization products.

Best Regards,
Strahil Nikolov

On Sep 23, 2019 17:03, Edward Berger wrote:
>
> Current oVirt node-ng uses 6
>
> ]# yum list installed |grep gluster
> gluster-ansible-cluster.noarch                1.0.0-1.el7              
> installed
> gluster-ansible-features.noarch               1.0.5-3.el7              
> installed
> gluster-ansible-infra.noarch                  1.0.4-3.el7              
> installed
> gluster-ansible-maintenance.noarch            1.0.1-1.el7              
> installed
> gluster-ansible-repositories.noarch           1.0.1-1.el7              
> installed
> gluster-ansible-roles.noarch                  1.0.5-4.el7              
> installed
> glusterfs.x86_64                              6.4-1.el7                
> installed
> glusterfs-api.x86_64                          6.4-1.el7                
> installed
> glusterfs-cli.x86_64                          6.4-1.el7                
> installed
> glusterfs-client-xlators.x86_64               6.4-1.el7                
> installed
> glusterfs-events.x86_64                       6.4-1.el7                
> installed
> glusterfs-fuse.x86_64                         6.4-1.el7                
> installed
> glusterfs-geo-replication.x86_64              6.4-1.el7                
> installed
> glusterfs-libs.x86_64                         6.4-1.el7                
> installed
> glusterfs-rdma.x86_64                         6.4-1.el7                
> installed
> glusterfs-server.x86_64                       6.4-1.el7                
> installed
> libvirt-daemon-driver-storage-gluster.x86_64  4.5.0-10.el7_6.12        
> installed
> python2-gluster.x86_64                        6.4-1.el7                
> installed
> vdsm-gluster.x86_64                           4.30.24-1.el7            
> installed
>
> On Mon, Sep 23, 2019 at 12:22 AM TomK  wrote:
>>
>> Hey All,
>>
>> Seeing GlusterFS up to version 7, do any of the oVirt versions support 
>> anything higher then 3.X?
>>
>> Or is Gluster not the preferred distributed file system choice for oVirt?
>>
>> -- 
>> Thx,
>> TK.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NNKXLMIGLV3GJJVW3PIF2OXV6V3PS3BV/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7656Z7PSAM5ZBJIYWWEGOHLU3OS6NOB/


[ovirt-users] Re: oVirt + Gluster

2019-09-23 Thread Edward Berger
Current oVirt node-ng uses 6

]# yum list installed |grep gluster
gluster-ansible-cluster.noarch1.0.0-1.el7
 installed
gluster-ansible-features.noarch   1.0.5-3.el7
 installed
gluster-ansible-infra.noarch  1.0.4-3.el7
 installed
gluster-ansible-maintenance.noarch1.0.1-1.el7
 installed
gluster-ansible-repositories.noarch   1.0.1-1.el7
 installed
gluster-ansible-roles.noarch  1.0.5-4.el7
 installed
glusterfs.x86_64  6.4-1.el7
 installed
glusterfs-api.x86_64  6.4-1.el7
 installed
glusterfs-cli.x86_64  6.4-1.el7
 installed
glusterfs-client-xlators.x86_64   6.4-1.el7
 installed
glusterfs-events.x86_64   6.4-1.el7
 installed
glusterfs-fuse.x86_64 6.4-1.el7
 installed
glusterfs-geo-replication.x86_64  6.4-1.el7
 installed
glusterfs-libs.x86_64 6.4-1.el7
 installed
glusterfs-rdma.x86_64 6.4-1.el7
 installed
glusterfs-server.x86_64   6.4-1.el7
 installed
libvirt-daemon-driver-storage-gluster.x86_64  4.5.0-10.el7_6.12
 installed
python2-gluster.x86_646.4-1.el7
 installed
vdsm-gluster.x86_64   4.30.24-1.el7
 installed

On Mon, Sep 23, 2019 at 12:22 AM TomK  wrote:

> Hey All,
>
> Seeing GlusterFS up to version 7, do any of the oVirt versions support
> anything higher then 3.X?
>
> Or is Gluster not the preferred distributed file system choice for oVirt?
>
> --
> Thx,
> TK.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NNKXLMIGLV3GJJVW3PIF2OXV6V3PS3BV/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILSB2DR343R42J2KSYNOZW23ZR5ZNGXF/


[ovirt-users] Re: oVirt + Gluster

2019-09-23 Thread TomK

Thanks Strahil!

Search results return info saying that only Gluster 3.X is supported.

Some time back I asked if a specific version of Gluster was available, as I was 
having issues with GFS 4.1. I was told to wait a bit, even though the 
wiki pages indicated GFS 3.3+ is supported.


https://lists.ovirt.org/archives/list/users@ovirt.org/thread/TLKLRUZXVMTRKLS2I26AEGKETT2BVXRD/

Are there more recent wiki pages keeping a list of what's supported and 
what's not?


Google returns a lot of dated pages for me.

Cheers,
TK

On 9/23/2019 1:29 AM, Strahil wrote:

On the opposite - ovirt supports up to v6 of gluster and in cockpit, you can 
setup your hyperconverged setup (on gluster) abd your hosted engine ontop.

Best Regards,
Strahil Nikolov

On Sep 23, 2019 07:20, TomK wrote:


Hey All,

Seeing GlusterFS up to version 7, do any of the oVirt versions support
anything higher then 3.X?

Or is Gluster not the preferred distributed file system choice for oVirt?

--
Thx,
TK.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NNKXLMIGLV3GJJVW3PIF2OXV6V3PS3BV/



--
Thx,
TK.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DQ2RAC4L5B2ZIKV5IHWCUPARY2Y3QYQ4/


[ovirt-users] Re: oVirt + Gluster

2019-09-22 Thread Strahil
On the contrary - oVirt supports up to v6 of Gluster, and in Cockpit you can 
set up your hyperconverged deployment (on Gluster) and your hosted engine on top.

Best Regards,
Strahil Nikolov

On Sep 23, 2019 07:20, TomK wrote:
>
> Hey All, 
>
> Seeing GlusterFS up to version 7, do any of the oVirt versions support 
> anything higher then 3.X? 
>
> Or is Gluster not the preferred distributed file system choice for oVirt? 
>
> -- 
> Thx, 
> TK.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NNKXLMIGLV3GJJVW3PIF2OXV6V3PS3BV/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XAOPAL66Y6DEP3DKBQAMA5QYI4HX5HH5/


[ovirt-users] Re: Ovirt gluster arbiter within hosted VM

2019-04-20 Thread Alex K
On Sat, Apr 20, 2019, 09:15 Strahil  wrote:

> With GlusterD2  , you can use thin arbiter - which can be deployed in the
> cloud.
> Do not try to setup regular arbiter far away from the data bicks or yoir
> performance will be awful.
>
Thanx Strahil.

> Best Regards,
> Strahil Nikolov
> On Apr 19, 2019 13:28, Alex K  wrote:
>
>
>
> On Fri, Apr 19, 2019 at 1:14 PM Scott Worthington <
> scott.c.worthing...@gmail.com> wrote:
>
> And where, magically, is that node's storage going to live?
>
> In the disk of the VM. Let me know if I need to clarify further.
>
>
> You can't fake a proper setup of gluster.
>
> Not trying to fake it. Trying to find a solution with the available
> options.
>
>
> On Fri, Apr 19, 2019, 5:00 AM Alex K  wrote:
>
> Hi all,
>
> I have a two node hyper-converged setup which are causing me split-brains
> when network issues are encountered. Since I cannot add a third hardware
> node, I was thinking to add a dedicated guest VM hosted in same
> hyper-converged cluster which would do the arbiter for the volumes.
>
> What do you think about this setup in regards to stability and performance?
> I am running ovirt 4.2.
>
> Thanx,
> Alex
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTDOAMBKVE5BP4V245MPYSCM6DK7S52X/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4QPANCRIHGGCONGINYJHB4EOWYTQHNUT/


[ovirt-users] Re: Ovirt gluster arbiter within hosted VM

2019-04-20 Thread Strahil
With GlusterD2, you can use a thin arbiter - which can be deployed in the cloud.
Do not try to set up a regular arbiter far away from the data bricks or your 
performance will be awful.

Best Regards,
Strahil Nikolov

On Apr 19, 2019 13:28, Alex K wrote:
>
>
>
> On Fri, Apr 19, 2019 at 1:14 PM Scott Worthington 
>  wrote:
>>
>> And where, magically, is that node's storage going to live?
>
> In the disk of the VM. Let me know if I need to clarify further. 
>>
>>
>> You can't fake a proper setup of gluster.
>
> Not trying to fake it. Trying to find a solution with the available options. 
>>
>>
>> On Fri, Apr 19, 2019, 5:00 AM Alex K  wrote:
>>>
>>> Hi all,
>>>
>>> I have a two node hyper-converged setup which are causing me split-brains 
>>> when network issues are encountered. Since I cannot add a third hardware 
>>> node, I was thinking to add a dedicated guest VM hosted in same 
>>> hyper-converged cluster which would do the arbiter for the volumes. 
>>>
>>> What do you think about this setup in regards to stability and performance?
>>> I am running ovirt 4.2. 
>>>
>>> Thanx, 
>>> Alex
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTDOAMBKVE5BP4V245MPYSCM6DK7S52X/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D2PELCLDZJBU5MLHOMRT4TDMADWSZO22/


[ovirt-users] Re: Ovirt gluster arbiter within hosted VM

2019-04-20 Thread Strahil
As long as the VM is not hosted by those 2 hyperconverged nodes - there should 
be no problem.

Another option is a small machine with a single SSD.
I'm using Lenovo Tiny M-series as an arbiter.
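
For a replica 2 volume, turning such a box into an arbiter is roughly a one-liner 
(the volume name, hostname and brick path here are placeholders):

gluster volume add-brick myvol replica 3 arbiter 1 arbiter-host:/gluster_bricks/myvol/arbiter

The arbiter brick stores only metadata, so a small SSD is enough.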

Best Regards,
Strahil Nikolov

On Apr 19, 2019 12:59, Alex K wrote:
>
> Hi all,
>
> I have a two node hyper-converged setup which are causing me split-brains 
> when network issues are encountered. Since I cannot add a third hardware 
> node, I was thinking to add a dedicated guest VM hosted in same 
> hyper-converged cluster which would do the arbiter for the volumes. 
>
> What do you think about this setup in regards to stability and performance?
> I am running ovirt 4.2. 
>
> Thanx, 
> Alex
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SWRCLPQDQ6HYRDJGUH55WE5ACPV2JYJL/


[ovirt-users] Re: Ovirt gluster arbiter within hosted VM

2019-04-19 Thread Alex K
On Fri, Apr 19, 2019 at 1:14 PM Scott Worthington <
scott.c.worthing...@gmail.com> wrote:

> And where, magically, is that node's storage going to live?
>
In the disk of the VM. Let me know if I need to clarify further.

>
> You can't fake a proper setup of gluster.
>
Not trying to fake it. Trying to find a solution with the available
options.

>
> On Fri, Apr 19, 2019, 5:00 AM Alex K  wrote:
>
>> Hi all,
>>
>> I have a two node hyper-converged setup which are causing me split-brains
>> when network issues are encountered. Since I cannot add a third hardware
>> node, I was thinking to add a dedicated guest VM hosted in same
>> hyper-converged cluster which would do the arbiter for the volumes.
>>
>> What do you think about this setup in regards to stability and
>> performance?
>> I am running ovirt 4.2.
>>
>> Thanx,
>> Alex
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTDOAMBKVE5BP4V245MPYSCM6DK7S52X/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/64IP2BXAESUFY2G4PWSQ2WMIHHPXJA22/


[ovirt-users] Re: OVirt Gluster Fail

2019-03-26 Thread Andrea Milan
  

Hi Sahina, Strahil,

thank you for the information, I managed to start the heal and restore both the 
hosted engine and the VMs.

These are the logs on all nodes:

[2019-03-26 08:30:58.462329] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 676c6e6f-6465-3032-2e61-736370642e6c (0) coming up
[2019-03-26 08:30:58.462364] I [MSGID: 114020] [client.c:2356:notify] 0-asc-client-0: parent translators are ready, attempting connect on transport
[2019-03-26 08:30:58.464374] I [MSGID: 114020] [client.c:2356:notify] 0-asc-client-1: parent translators are ready, attempting connect on transport
[2019-03-26 08:30:58.464898] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-asc-client-0: changing port to 49438 (from 0)
[2019-03-26 08:30:58.466148] I [MSGID: 114020] [client.c:2356:notify] 0-asc-client-3: parent translators are ready, attempting connect on transport
[2019-03-26 08:30:58.468028] E [socket.c:2309:socket_connect_finish] 0-asc-client-0: connection to 192.170.254.3:49438 failed (Nessun instradamento per l'host)
[2019-03-26 08:30:58.468054] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-asc-client-1: changing port to 49441 (from 0)
[2019-03-26 08:30:58.470040] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-asc-client-3: changing port to 49421 (from 0)
[2019-03-26 08:30:58.471345] I [MSGID: 114057] [client-handshake.c:1440:select_server_supported_programs] 0-asc-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2019-03-26 08:30:58.472642] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-asc-client-1: Connected to asc-client-1, attached to remote volume '/bricks/asc/brick'.
[2019-03-26 08:30:58.472659] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-asc-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2019-03-26 08:30:58.472714] I [MSGID: 108005] [afr-common.c:4387:afr_notify] 0-asc-replicate-0: Subvolume 'asc-client-1' came back up; going online.
[2019-03-26 08:30:58.472731] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-asc-client-1: Server lk version = 1
[2019-03-26 08:30:58.473112] E [socket.c:2309:socket_connect_finish] 0-asc-client-3: connection to 192.170.254.6:49421 failed (Nessun instradamento per l'host)
[2019-03-26 08:30:58.473152] W [MSGID: 108001] [afr-common.c:4467:afr_notify] 0-asc-replicate-0: Client-quorum is not met
[2019-03-26 08:30:58.477699] I [MSGID: 108031] [afr-common.c:2157:afr_local_discovery_cbk] 0-asc-replicate-0: selecting local read_child asc-client-1
[2019-03-26 08:30:58.477804] I [MSGID: 104041] [glfs-resolve.c:885:__glfs_active_subvol] 0-asc: switched to graph 676c6e6f-6465-3032-2e61-736370642e6c (0)

I analyzed the individual nodes and realized that the firewalld service had been 
stopped on all nodes.

Once firewalld was re-enabled, the heal started automatically, and "gluster volume 
heal VOLNAME info" immediately showed correct connections.

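A minimal sketch of those checks on each node (VOLNAME is a placeholder for the 
actual volume name):

systemctl status firewalld
systemctl enable --now firewalld      # re-enable it if it was stopped
gluster volume heal VOLNAME info summary
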

the recovery of the single bricks started immediatly. 

When finished
that I have correctly detected and start the host-engine. 

I wanted to
tell you about the sequence that led me to the block: 

1) Node03 in
maintenance by hosted-engine. 

2) Maintenance performed and restarted.


3) Repositioned active node03 

4) Heal automatic controlled with
Ovirt Manager. 

5) Heal completed correctly. 

6) Node02 put into
maintenance. 

7) During the shutdown of the Node02 some VMs have gone
to Pause, Ovirt Manager has signaled the block of the Node01 and
immediately the host-engine has stopped. 

8) Restarted the Node02, I
saw that the gluster had the peer but there was no healing between the
nodes. 

I had to close everything, and the situation that presented
itself was that of previous emails. 

Questions: 

- Why did the Node02
in maintenance block Node01? 

- Why was restarting the system not
restarting the firewalld service? Is it also managed by vdsm? 

- What
is the correct way to backup virtual machines on an external machine? We
use Ovirt4.1 

- can backup be used outside of Ovirt? Es qemu-kvm
standard ... 
Thanks for all. 
Best regards 
Andrea Milan 

On 25.03.2019 11:53, Sahina Bose wrote:

> You will first need to restore connectivity between the gluster peers
> for heal to work. So restart glusterd on all hosts as Strahil
> mentioned, and check if "gluster peer status" returns the other nodes
> as connected. If not, please check the glusterd log to see what's
> causing the issue. Share the logs if we need to look at it, along with
> the version info
>
> On Sun, Mar 24, 2019 at 1:08 AM Strahil wrote:
>
>> Hi Andrea, The cluster volumes might have sharding enabled and thus files 
>> larger than shard size can be recovered only via cluster. You can try to 
>> restart gluster on all nodes and force heal: 1. Kill gluster processes: 
>> systemctl stop glusterd /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh 
>> 2. Start gluster: systemctl start glusterd 3. Force heal: for i in 
>> $(gluster volume list); do gluster 

[ovirt-users] Re: OVirt Gluster Fail

2019-03-26 Thread Strahil
Hi Andrea,

My guess is that while node2 was in maintenance, node3's brick(s) died, or there 
were some pending heals.

For backup, you can use anything that works for KVM, but the hard part is to 
get the configuration of each VM. If the VM is running, you can use 'virsh 
dumpxml <domain>' to get the configuration of the running VM, but this won't work 
for VMs that are off.
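
For example, for a running VM (the VM name is a placeholder; on oVirt hosts virsh 
usually has to be run read-only or with SASL credentials):

virsh -r dumpxml MyVM > MyVM.xml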

Why firewalld was not stopped  - my guess is a rare bug that is hard to 
reproduce.

Best Regards,
Strahil Nikolov

On Mar 26, 2019 17:10, Andrea Milan  wrote:
>
> Hi Sahina, Strahil
>
>  
>
> thank you for the information, I managed to start the heal and restore both 
> the hosted engine and the Vms.
>
>  
>
> This is the log’s on all nodes
>
>  
>
> [2019-03-26 08:30:58.462329] I [MSGID: 104045] [glfs-master.c:91:notify] 
> 0-gfapi: New graph 676c6e6f-6465-3032-2e61-736370642e6c (0) coming up
>
> [2019-03-26 08:30:58.462364] I [MSGID: 114020] [client.c:2356:notify] 
> 0-asc-client-0: parent translators are ready, attempting connect on transport
>
> [2019-03-26 08:30:58.464374] I [MSGID: 114020] [client.c:2356:notify] 
> 0-asc-client-1: parent translators are ready, attempting connect on transport
>
> [2019-03-26 08:30:58.464898] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 
> 0-asc-client-0: changing port to 49438 (from 0)
>
> [2019-03-26 08:30:58.466148] I [MSGID: 114020] [client.c:2356:notify] 
> 0-asc-client-3: parent translators are ready, attempting connect on transport
>
> [2019-03-26 08:30:58.468028] E [socket.c:2309:socket_connect_finish] 
> 0-asc-client-0: connection to 192.170.254.3:49438 failed (Nessun 
> instradamento per l'host)
>
> [2019-03-26 08:30:58.468054] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 
> 0-asc-client-1: changing port to 49441 (from 0)
>
> [2019-03-26 08:30:58.470040] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 
> 0-asc-client-3: changing port to 49421 (from 0)
>
> [2019-03-26 08:30:58.471345] I [MSGID: 114057] 
> [client-handshake.c:1440:select_server_supported_programs] 0-asc-client-1: 
> Using Program GlusterFS 3.3, Num (1298437), Version (330)
>
> [2019-03-26 08:30:58.472642] I [MSGID: 114046] 
> [client-handshake.c:1216:client_setvolume_cbk] 0-asc-client-1: Connected to 
> asc-client-1, attached to remote volume '/bricks/asc/brick'.
>
> [2019-03-26 08:30:58.472659] I [MSGID: 114047] 
> [client-handshake.c:1227:client_setvolume_cbk] 0-asc-client-1: Server and 
> Client lk-version numbers are not same, reopening the fds
>
> [2019-03-26 08:30:58.472714] I [MSGID: 108005] [afr-common.c:4387:afr_notify] 
> 0-asc-replicate-0: Subvolume 'asc-client-1' came back up; going online.
>
> [2019-03-26 08:30:58.472731] I [MSGID: 114035] 
> [client-handshake.c:202:client_set_lk_version_cbk] 0-asc-client-1: Server lk 
> version = 1
>
> [2019-03-26 08:30:58.473112] E [socket.c:2309:socket_connect_finish] 
> 0-asc-client-3: connection to 192.170.254.6:49421 failed (Nessun 
> instradamento per l'host)
>
> [2019-03-26 08:30:58.473152] W [MSGID: 108001] [afr-common.c:4467:afr_notify] 
> 0-asc-replicate-0: Client-quorum is not met
>
> [2019-03-26 08:30:58.477699] I [MSGID: 108031] 
> [afr-common.c:2157:afr_local_discovery_cbk] 0-asc-replicate-0: selecting 
> local read_child asc-client-1
>
> [2019-03-26 08:30:58.477804] I [MSGID: 104041] 
> [glfs-resolve.c:885:__glfs_active_subvol] 0-asc: switched to graph 
> 676c6e6f-6465-3032-2e61-736370642e6c (0)
>
>  
>
>  
>
> I analyzed the single nodes and I realized that the firewalld service has 
> been stopped on all nodes.
>
> Firewalld re-enabled the heal started automatically, and the “gluster heal 
> volume VOLNAME info” immediately gave correct connections.
>
>  
>
> the recovery of the single bricks started immediatly.
>
> When finished that I have correctly detected and start the host-engine.
>
>  
>
> I wanted to tell you about the sequence that led me to the block:
>
>  
>
> 1) Node03 in maintenance by hosted-engine.
>
> 2) Maintenance performed and restarted.
>
> 3) Repositioned active node03
>
> 4) Heal automatic controlled with Ovirt Manager.
>
> 5) Heal completed correctly.
>
> 6) Node02 put into maintenance.
>
> 7) During the shutdown of the Node02 some VMs have gone to Pause, Ovirt 
> Manager has signaled the block of the Node01 and immediately the host-engine 
> has stopped.
>
> 8) Restarted the Node02, I saw that the gluster had the peer but there was no 
> healing between the nodes.
>
> I had to close everything, and the situation that presented itself was that 
> of previous emails.
>
>  
>
> Questions:
>
> - Why did the Node02 in maintenance block Node01?
>
> - Why was restarting the system not restarting the firewalld service? Is it 
> also managed by vdsm?
>
> - What is the correct way to backup virtual machines on an external machine? 
> We use Ovirt4.1
>
> - can backup be used outside of Ovirt? Es qemu-kvm standard ...
>
> Thanks for all.
> Best regards
> Andrea Milan
>
> Il 25.03.2019 11:53 Sahina Bose ha scritto:
>>
>> You will first need to restore connectivity 

[ovirt-users] Re: OVirt Gluster Fail

2019-03-25 Thread Sahina Bose
You will first need to restore connectivity between the gluster peers
for heal to work. So restart glusterd on all hosts as Strahil
mentioned, and check if "gluster peer status" returns the other nodes
as connected. If not, please check the glusterd log to see what's
causing the issue. Share the logs if we need to look at it, along with
the version info
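
A minimal sketch of those steps on each host (the log path is the default glusterd 
log location):

systemctl restart glusterd
gluster peer status
gluster --version
less /var/log/glusterfs/glusterd.log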


On Sun, Mar 24, 2019 at 1:08 AM Strahil  wrote:
>
> Hi Andrea,
>
> The cluster volumes might have sharding enabled and thus files larger than 
> shard size can be recovered only  via cluster.
>
> You  can try to restart gluster on all nodes and force heal:
> 1. Kill gluster processes:
> systemctl stop glusterd
> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
>
> 2. Start gluster:
> systemctl start glusterd
>
> 3. Force heal:
> for i in $(gluster volume list);  do gluster volume heal $i full  ; done
> sleep 300
> for i in $(gluster volume list);  do gluster volume heal $i info summary ; 
> done
>
> Best Regards,
> Strahil Nikolov
>
> On Mar 23, 2019 13:51, commram...@tiscali.it wrote: > > During maintenance of 
> a machine the hosted engine crashed. > At that point there was no more chance 
> of managing anything. > > The VMs have paused, and were no longer manageable. 
> > I restarted the machine, but one point all the bricks were no longer 
> reachable. > > Now I am in a situation where the engine support is no longer 
> loaded. > > The gluster sees the peers connected and the services turned on 
> for the various bricks, but fails to heal the messages that I find for each 
> machine are the following > > # gluster volume heal engine info > Brick 
> 192.170.254.3:/bricks/engine/brick > > . > . > . > > Status: Connected Number 
> of entries: 190 > > Brick 192.170.254.4:/bricks/engine/brick > Status: Il 
> socket di destinazione non è connesso > Number of entries: - > > Brick 
> 192.170.254.6:/bricks/engine/brick > Status: Il socket di destinazione non è 
> connesso > Number of entries: - > > this for all the bricks (some have no 
> heal to do because the machines inside were turned off). > > In practice all 
> the bricks see only localhost as connected. > > How can I restore the 
> machines? > Is there a way to read data from the physical machine and export 
> it so that it can be reused? > Unfortunately we need to access that data. > > 
> Someone can help me. > > Thanks Andrea > 
> ___ > Users mailing list -- 
> users@ovirt.org > To unsubscribe send an email to users-le...@ovirt.org > 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ > oVirt Code of 
> Conduct: https://www.ovirt.org/community/about/community-guidelines/ > List 
> Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EOIY7ZU4GOEMRUNY3CWF6R3JIQNPHLVA/
>  ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NMBDYBOY4TZB37I6O6VYBCVVGM5H3Y3F/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IUVVKMLEN5NLBCANUHA6QU5FVNXTHUJ6/


[ovirt-users] Re: OVirt Gluster Fail

2019-03-23 Thread Strahil
Hi Andrea,

The cluster volumes might have sharding enabled and thus files larger than 
shard size can be recovered only  via cluster.

You  can try to restart gluster on all nodes and force heal:
1. Kill gluster processes:
systemctl stop glusterd
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

2. Start gluster:
systemctl start glusterd


3. Force heal:
for i in $(gluster volume list);  do gluster volume heal $i full  ; done
sleep 300
for i in $(gluster volume list);  do gluster volume heal $i info summary ; done

Best Regards,
Strahil Nikolov

On Mar 23, 2019 13:51, commram...@tiscali.it wrote: > > During 
maintenance of a machine the hosted engine crashed. > At that point there was 
no more chance of managing anything. > > The VMs have paused, and were no 
longer manageable. > I restarted the machine, but one point all the bricks were 
no longer reachable. > > Now I am in a situation where the engine support is no 
longer loaded. > > The gluster sees the peers connected and the services turned 
on for the various bricks, but fails to heal the messages that I find for each 
machine are the following > > # gluster volume heal engine info > Brick 
192.170.254.3:/bricks/engine/brick > > . > . > . > > Status: Connected Number 
of entries: 190 > > Brick 192.170.254.4:/bricks/engine/brick > Status: Il 
socket di destinazione non è connesso > Number of entries: - > > Brick 
192.170.254.6:/bricks/engine/brick > Status: Il socket di destinazione non è 
connesso > Number of entries: - > > this for all the bricks (some have no heal 
to do because the machines inside were turned off). > > In practice all the 
bricks see only localhost as connected. > > How can I restore the machines? > 
Is there a way to read data from the physical machine and export it so that it 
can be reused? > Unfortunately we need to access that data. > > Someone can 
help me. > > Thanks Andrea > ___ > 
Users mailing list -- users@ovirt.org > To unsubscribe send an email to 
users-le...@ovirt.org > Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/ > oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ > List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EOIY7ZU4GOEMRUNY3CWF6R3JIQNPHLVA/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NMBDYBOY4TZB37I6O6VYBCVVGM5H3Y3F/


[ovirt-users] Re: OVirt Gluster Fail

2019-03-23 Thread commramius
During maintenance of a machine, the hosted engine crashed. 
At that point there was no longer any way to manage anything.

The VMs were paused and no longer manageable.
I restarted the machine, but at one point all the bricks were no longer reachable.

Now I am in a situation where the hosted engine no longer loads.

Gluster sees the peers as connected and the services running for the various 
bricks, but it fails to heal. The messages that I find for each machine are the 
following:

# gluster volume heal engine info 
Brick 192.170.254.3:/bricks/engine/brick 
 
.
. 
. 
 
Status: Connected 
Number of entries: 190 

Brick 192.170.254.4:/bricks/engine/brick
Status: Il socket di destinazione non è connesso (the destination socket is not connected) 
Number of entries: - 

Brick 192.170.254.6:/bricks/engine/brick 
Status: Il socket di destinazione non è connesso (the destination socket is not connected) 
Number of entries: -

This is the case for all the bricks (some have nothing to heal because the VMs 
inside were turned off). 

In practice all the bricks see only localhost as connected. 

How can I restore the machines? 
Is there a way to read data from the physical machine and export it so that it 
can be reused? 
Unfortunately we need to access that data. 

Can someone help me? 

Thanks Andrea
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EOIY7ZU4GOEMRUNY3CWF6R3JIQNPHLVA/


[ovirt-users] Re: oVirt + gluster with JBOD

2018-09-13 Thread Eduardo Mayoral
Good point, I will certainly take that into account!

Thanks,

--

Eduardo Mayoral.



On 13/09/18 19:15, Donny Davis wrote:
> You could always go expand the volume after deployment by adding more
> disks to the logical volume it creates.
>
> On Wed, Sep 12, 2018, 11:38 PM Jayme  > wrote:
>
> Eduardo, there is one thing you should be aware of.  The jbod
> disks aren't all pooled together to create one large storage
> domain. I had to spread my data domains over the devices such as a
> data domain on /dev/SDA and data2 on sdb etc not a big problem
> with only two disks per server but with ten it could get ugly 
>
> On Thu, Sep 13, 2018, 4:52 AM Eduardo Mayoral,  > wrote:
>
> Thanks for your answers!
>
> Yes, my plan is a replica-3 too. The cluster will be bigger,
> though, 4 or 5 nodes with 10 disks each -> 40 to 50 bricks.
>
> Again, thanks for your help.
>
> --
> Eduardo Mayoral.
>
> On 12/09/18 20:54, Jayme wrote:
>> Should also note that I am using replica 3 configuration with
>> no arbiter for extra redundancy.
>>
>> On Wed, Sep 12, 2018 at 3:53 PM Jayme > > wrote:
>>
>> I am running a three server oVirt hyperconverged setup
>> with JBOD disks.  Two 2TB SSDs in JBOD per server.  The
>> configuration is working very well for me so far.
>>
>>
>>
>> On Wed, Sep 12, 2018 at 2:36 PM Donny Davis
>> mailto:do...@fortnebula.com>> wrote:
>>
>> JBOD is on the drop down when you do the setup for
>> the volumes
>>
>> On Wed, Sep 12, 2018, 1:50 AM Eduardo Mayoral
>> mailto:emayo...@arsys.es>> wrote:
>>
>> Hi!
>>
>>   I am thinking about using some spare servers I
>> have for a
>> hyperconverged oVirt + gluster. Only problem with
>> these servers is that
>> the RAID card is not specially good, so I am
>> considering using gluster
>> in a JBOD configuration. JBOD is supported for
>> recent versions of
>> gluster, but I am not sure if this configuration
>> is supported by oVirt,
>> or if there are any special considerations I
>> should take into account.
>>
>>     Anyone running oVirt + gluster with JBOD who
>> can comment on this?
>>
>>     Thanks!
>>
>> --
>>
>> Eduardo Mayoral.
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> 
>> To unsubscribe send an email to
>> users-le...@ovirt.org 
>> Privacy Statement:
>> https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DTIGZ5VOKG52ZHCZTTCHGPT5NFKYSTP/
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> 
>> To unsubscribe send an email to users-le...@ovirt.org
>> 
>> Privacy Statement:
>> https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YGQUVKGP3S34JRB4CPWIRXZ2YDR27NFQ/
>>
>

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EEJQ35KS4D4LYWRQNOSXD3EQRHSPD3KV/


[ovirt-users] Re: oVirt + gluster with JBOD

2018-09-13 Thread Donny Davis
You could always go expand the volume after deployment by adding more disks
to the logical volume it creates.
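
Roughly, it looks like this (a sketch, assuming the brick sits on a plain LVM
logical volume with XFS on top; the device, VG and LV names are placeholders):
# pvcreate /dev/sdX
# vgextend gluster_vg /dev/sdX
# lvextend -l +100%FREE /dev/gluster_vg/gluster_lv_data
# xfs_growfs /gluster_bricks/data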

On Wed, Sep 12, 2018, 11:38 PM Jayme  wrote:

> Eduardo, there is one thing you should be aware of.  The jbod disks aren't
> all pooled together to create one large storage domain. I had to spread my
> data domains over the devices such as a data domain on /dev/SDA and data2
> on sdb etc not a big problem with only two disks per server but with ten it
> could get ugly
>
> On Thu, Sep 13, 2018, 4:52 AM Eduardo Mayoral,  wrote:
>
>> Thanks for your answers!
>>
>> Yes, my plan is a replica-3 too. The cluster will be bigger, though, 4 or
>> 5 nodes with 10 disks each -> 40 to 50 bricks.
>> Again, thanks for your help.
>>
>> --
>> Eduardo Mayoral.
>>
>> On 12/09/18 20:54, Jayme wrote:
>>
>> Should also note that I am using replica 3 configuration with no arbiter
>> for extra redundancy.
>>
>> On Wed, Sep 12, 2018 at 3:53 PM Jayme  wrote:
>>
>>> I am running a three server oVirt hyperconverged setup with JBOD disks.
>>> Two 2TB SSDs in JBOD per server.  The configuration is working very well
>>> for me so far.
>>>
>>>
>>>
>>> On Wed, Sep 12, 2018 at 2:36 PM Donny Davis 
>>> wrote:
>>>
 JBOD is on the drop down when you do the setup for the volumes

 On Wed, Sep 12, 2018, 1:50 AM Eduardo Mayoral 
 wrote:

> Hi!
>
>   I am thinking about using some spare servers I have for a
> hyperconverged oVirt + gluster. Only problem with these servers is that
> the RAID card is not specially good, so I am considering using gluster
> in a JBOD configuration. JBOD is supported for recent versions of
> gluster, but I am not sure if this configuration is supported by oVirt,
> or if there are any special considerations I should take into account.
>
> Anyone running oVirt + gluster with JBOD who can comment on this?
>
> Thanks!
>
> --
>
> Eduardo Mayoral.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DTIGZ5VOKG52ZHCZTTCHGPT5NFKYSTP/
>
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/YGQUVKGP3S34JRB4CPWIRXZ2YDR27NFQ/

>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WAHY6ICZZXRQMFRZ72SMFXAPEQZ3CE3N/


[ovirt-users] Re: oVirt + gluster with JBOD

2018-09-13 Thread Jayme
Eduardo, there is one thing you should be aware of. The JBOD disks aren't
all pooled together to create one large storage domain. I had to spread my
data domains over the devices, such as one data domain on /dev/sda and data2
on /dev/sdb, etc. Not a big problem with only two disks per server, but with
ten it could get ugly.

On Thu, Sep 13, 2018, 4:52 AM Eduardo Mayoral,  wrote:

> Thanks for your answers!
>
> Yes, my plan is a replica-3 too. The cluster will be bigger, though, 4 or
> 5 nodes with 10 disks each -> 40 to 50 bricks.
> Again, thanks for your help.
>
> --
> Eduardo Mayoral.
>
> On 12/09/18 20:54, Jayme wrote:
>
> Should also note that I am using replica 3 configuration with no arbiter
> for extra redundancy.
>
> On Wed, Sep 12, 2018 at 3:53 PM Jayme  wrote:
>
>> I am running a three server oVirt hyperconverged setup with JBOD disks.
>> Two 2TB SSDs in JBOD per server.  The configuration is working very well
>> for me so far.
>>
>>
>>
>> On Wed, Sep 12, 2018 at 2:36 PM Donny Davis  wrote:
>>
>>> JBOD is on the drop down when you do the setup for the volumes
>>>
>>> On Wed, Sep 12, 2018, 1:50 AM Eduardo Mayoral  wrote:
>>>
 Hi!

   I am thinking about using some spare servers I have for a
 hyperconverged oVirt + gluster. Only problem with these servers is that
 the RAID card is not specially good, so I am considering using gluster
 in a JBOD configuration. JBOD is supported for recent versions of
 gluster, but I am not sure if this configuration is supported by oVirt,
 or if there are any special considerations I should take into account.

 Anyone running oVirt + gluster with JBOD who can comment on this?

 Thanks!

 --

 Eduardo Mayoral.

 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DTIGZ5VOKG52ZHCZTTCHGPT5NFKYSTP/

>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YGQUVKGP3S34JRB4CPWIRXZ2YDR27NFQ/
>>>
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HATZBNQTEZUZLNRLA264SWN4X6J2HU5F/


[ovirt-users] Re: oVirt + gluster with JBOD

2018-09-13 Thread Eduardo Mayoral
Thanks for your answers!

Yes, my plan is a replica-3 too. The cluster will be bigger, though, 4
or 5 nodes with 10 disks each -> 40 to 50 bricks.

Again, thanks for your help.

--
Eduardo Mayoral.

On 12/09/18 20:54, Jayme wrote:
> Should also note that I am using replica 3 configuration with no
> arbiter for extra redundancy.
>
> On Wed, Sep 12, 2018 at 3:53 PM Jayme  > wrote:
>
> I am running a three server oVirt hyperconverged setup with JBOD
> disks.  Two 2TB SSDs in JBOD per server.  The configuration is
> working very well for me so far.
>
>
>
> On Wed, Sep 12, 2018 at 2:36 PM Donny Davis  > wrote:
>
> JBOD is on the drop down when you do the setup for the volumes
>
> On Wed, Sep 12, 2018, 1:50 AM Eduardo Mayoral
> mailto:emayo...@arsys.es>> wrote:
>
> Hi!
>
>   I am thinking about using some spare servers I have for a
> hyperconverged oVirt + gluster. Only problem with these
> servers is that
> the RAID card is not specially good, so I am considering
> using gluster
> in a JBOD configuration. JBOD is supported for recent
> versions of
> gluster, but I am not sure if this configuration is
> supported by oVirt,
> or if there are any special considerations I should take
> into account.
>
>     Anyone running oVirt + gluster with JBOD who can
> comment on this?
>
>     Thanks!
>
> --
>
> Eduardo Mayoral.
>
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DTIGZ5VOKG52ZHCZTTCHGPT5NFKYSTP/
>
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YGQUVKGP3S34JRB4CPWIRXZ2YDR27NFQ/
>

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BXYQ7LCDWOUUU2WKNHXWXXN2JS7ZYOM2/


[ovirt-users] Re: oVirt + gluster with JBOD

2018-09-12 Thread Jayme
Should also note that I am using replica 3 configuration with no arbiter
for extra redundancy.

On Wed, Sep 12, 2018 at 3:53 PM Jayme  wrote:

> I am running a three server oVirt hyperconverged setup with JBOD disks.
> Two 2TB SSDs in JBOD per server.  The configuration is working very well
> for me so far.
>
>
>
> On Wed, Sep 12, 2018 at 2:36 PM Donny Davis  wrote:
>
>> JBOD is on the drop down when you do the setup for the volumes
>>
>> On Wed, Sep 12, 2018, 1:50 AM Eduardo Mayoral  wrote:
>>
>>> Hi!
>>>
>>>   I am thinking about using some spare servers I have for a
>>> hyperconverged oVirt + gluster. Only problem with these servers is that
>>> the RAID card is not specially good, so I am considering using gluster
>>> in a JBOD configuration. JBOD is supported for recent versions of
>>> gluster, but I am not sure if this configuration is supported by oVirt,
>>> or if there are any special considerations I should take into account.
>>>
>>> Anyone running oVirt + gluster with JBOD who can comment on this?
>>>
>>> Thanks!
>>>
>>> --
>>>
>>> Eduardo Mayoral.
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DTIGZ5VOKG52ZHCZTTCHGPT5NFKYSTP/
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YGQUVKGP3S34JRB4CPWIRXZ2YDR27NFQ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5LAVPJGOFIITR67XAGVGCOPCVJWDCXVY/


[ovirt-users] Re: oVirt + gluster with JBOD

2018-09-12 Thread Jayme
I am running a three server oVirt hyperconverged setup with JBOD disks.
Two 2TB SSDs in JBOD per server.  The configuration is working very well
for me so far.



On Wed, Sep 12, 2018 at 2:36 PM Donny Davis  wrote:

> JBOD is on the drop down when you do the setup for the volumes
>
> On Wed, Sep 12, 2018, 1:50 AM Eduardo Mayoral  wrote:
>
>> Hi!
>>
>>   I am thinking about using some spare servers I have for a
>> hyperconverged oVirt + gluster. Only problem with these servers is that
>> the RAID card is not specially good, so I am considering using gluster
>> in a JBOD configuration. JBOD is supported for recent versions of
>> gluster, but I am not sure if this configuration is supported by oVirt,
>> or if there are any special considerations I should take into account.
>>
>> Anyone running oVirt + gluster with JBOD who can comment on this?
>>
>> Thanks!
>>
>> --
>>
>> Eduardo Mayoral.
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DTIGZ5VOKG52ZHCZTTCHGPT5NFKYSTP/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YGQUVKGP3S34JRB4CPWIRXZ2YDR27NFQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KXKP5VHSJUR4ZOBIAPX6ICMP36BPT27E/


[ovirt-users] Re: oVirt + gluster with JBOD

2018-09-12 Thread Donny Davis
JBOD is on the drop down when you do the setup for the volumes

On Wed, Sep 12, 2018, 1:50 AM Eduardo Mayoral  wrote:

> Hi!
>
>   I am thinking about using some spare servers I have for a
> hyperconverged oVirt + gluster. The only problem with these servers is that
> the RAID card is not especially good, so I am considering using gluster
> in a JBOD configuration. JBOD is supported for recent versions of
> gluster, but I am not sure if this configuration is supported by oVirt,
> or if there are any special considerations I should take into account.
>
> Anyone running oVirt + gluster with JBOD who can comment on this?
>
> Thanks!
>
> --
>
> Eduardo Mayoral.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DTIGZ5VOKG52ZHCZTTCHGPT5NFKYSTP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YGQUVKGP3S34JRB4CPWIRXZ2YDR27NFQ/


[ovirt-users] Re: oVirt + Gluster geo-replication

2018-09-07 Thread femi adegoke
Thanks Sahina for the quick reply & links.

Time to go do some reading!!
On Sep 7 2018, at 3:21 am, Sahina Bose  wrote:
>
> Did you see the 
> https://ovirt.org/develop/release-management/features/gluster/gluster-geo-replication/#create-a-new-geo-replication-session
>  ?
>
> You can set up a new session from oVirt only if you are managing the remote 
> (slave) gluster cluster from oVirt as well. Otherwise you can either use 
> gluster-ansible role (https://github.com/gluster/gluster-ansible-features) or 
> the cli 
> (https://docs.gluster.org/en/v3/Administrator%20Guide/Geo%20Replication/) to 
> do so.
>
> You can have one volume replicated to multiple sites.
>
> On Fri, Sep 7, 2018 at 3:34 PM, femi adegoke  (mailto:ov...@fateknollogee.com)> wrote:
> > Hi Sahina,
> > https://ovirt.org/develop/release-management/features/gluster/gluster-geo-replication/
> >
> > Unfortunately, there isn't info on how to set this up.
> > Can you share with us any tips or info needed to do geo-rep between 2 sites?
> >
> > I also read an article that says geo-rep between 3 sites is possible?
> >
> > Thanks,
> > Femi
>
>
>
>
>
>
>
>
>

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HMZ2GKURRZGI6FJNUAEN6TRUV5TE2VOD/


[ovirt-users] Re: oVirt + Gluster geo-replication

2018-09-07 Thread Sahina Bose
Did you see the
https://ovirt.org/develop/release-management/features/gluster/gluster-geo-replication/#create-a-new-geo-replication-session
?

You can set up a new session from oVirt only if you are managing the remote
(slave) gluster cluster from oVirt as well. Otherwise you can either use
gluster-ansible role (https://github.com/gluster/gluster-ansible-features)
or the cli (
https://docs.gluster.org/en/v3/Administrator%20Guide/Geo%20Replication/) to
do so.

You can have one volume replicated to multiple sites.
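
With the cli, the basic flow is roughly the following (a sketch; mastervol,
slavehost and slavevol are placeholders, and the slave volume must already be
created on the remote cluster):
# gluster system:: execute gsec_create
# gluster volume geo-replication mastervol slavehost::slavevol create push-pem
# gluster volume geo-replication mastervol slavehost::slavevol start
# gluster volume geo-replication mastervol slavehost::slavevol status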

On Fri, Sep 7, 2018 at 3:34 PM, femi adegoke 
wrote:

> Hi Sahina,
> https://ovirt.org/develop/release-management/features/gluster/gluster-geo-
> replication/
>
> Unfortunately, there isn't info on how to set this up.
> Can you share with us any tips or info needed to do geo-rep between 2
> sites?
> I also read an article that says geo-rep between 3 sites is possible?
>
> Thanks,
> Femi
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GY5JX6RV5BCHXB5Y6LYBMJYQYXF3XI7J/


[ovirt-users] Re: Ovirt + Gluster : How do I gain access to the file systems of the VMs

2018-06-19 Thread Roy Golan
On Tue, 19 Jun 2018 at 18:51 Hanson Turner 
wrote:

> Hi Guys,
>
> I've an answer... Here's how I did it...
>
> First, I needed kpartx ... so
>
> #apt-get install kpartx
>
> Then setup a loopback device for the raw hdd image
>
> #losetup /dev/loop4 [IMAGE FILE]
>
> #kpartx -a /dev/loop4
>
> This allowed me to mount the various partitions included in the VM. There
> you can modify the configs, make backups etc.
>
> Thanks,
>
> Hanson
>
>
Hi Hanson, also look at http://libguestfs.org/
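
For example, the libguestfs tools can inspect a disk image and copy files out
read-only (a sketch; disk.img stands for the image copied off the data domain):
# virt-filesystems -a disk.img --long
# virt-ls -a disk.img /etc
# virt-copy-out -a disk.img /etc/fstab /tmp
# guestfish --ro -a disk.img -i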


On 06/19/2018 09:31 AM, Hanson Turner wrote:
>
> Hi Sahina,
>
> Thanks for your reply, I can copy the files off without issue. Using
> either a remote mount gluster, or just use the node and scp the files to
> where I want them.
>
> I was asking how to/do I mount the VM's disk in a way to be able to
> pull/modify files that are on the HDD of the VM.
>
> Thanks,
>
> Hanson
>
>
> On 06/19/2018 05:02 AM, Sahina Bose wrote:
>
>
>
> On Mon, Jun 18, 2018 at 5:12 PM, Hanson Turner  > wrote:
>
>> Hi Guys,
>>
>> My engine has corrupted, and while waiting for help, I'd like to see if I
>> can pull some data off the VM's to re purpose back onto dedicated hardware.
>>
>> Our setup is/was a gluster based storage system for VM's. The gluster
>> data storage I'm assuming is okay, I think the hosted engine is hosed, and
>> needs restored, but that's another thread.
>>
>> I can copy the raw disk file off of the gluster data domain. What's the
>> best way to mount it short of importing it into another gluster domain?
>>
>> With vmware, we can grab the disk file and move it from server to server
>> without issue. You can mount and explore contents with workstation.
>>
>
> If you want to copy the image file, you can mount the gluster volume and
> copy it,
> using mount -t glusterfs <server>:/<volname> <mountpoint>
>
>
>> What do we have available to us for ovirt?
>>
>> Thanks,
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AF5K2JERYH63K25XKA4FFP4QQDZSVWM/
>>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZBEM7WJ5SGRIQLC53GKZSFXYMEOXLRW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UCZO7QL3ZG27DSCYKGMYI7BMB2HNRDJ7/


[ovirt-users] Re: Ovirt + Gluster : How do I gain access to the file systems of the VMs

2018-06-19 Thread Hanson Turner

Hi Guys,

I've an answer... Here's how I did it...

First, I needed kpartx ... so

#apt-get install kpartx

Then setup a loopback device for the raw hdd image

#losetup /dev/loop4 [IMAGE FILE]

#kpartx -a /dev/loop4

This allowed me to mount the various partitions included in the VM. 
There you can modify the configs, make backups etc.
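
The mount and cleanup steps, roughly (the partition numbers will differ per
image; /mnt/vm is just an example mount point):
# kpartx -l /dev/loop4
# mount /dev/mapper/loop4p1 /mnt/vm
(edit or copy files under /mnt/vm)
# umount /mnt/vm
# kpartx -d /dev/loop4
# losetup -d /dev/loop4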


Thanks,

Hanson


On 06/19/2018 09:31 AM, Hanson Turner wrote:


Hi Sahina,

Thanks for your reply, I can copy the files off without issue. Using 
either a remote mount gluster, or just use the node and scp the files 
to where I want them.


I was asking how to/do I mount the VM's disk in a way to be able to 
pull/modify files that are on the HDD of the VM.


Thanks,

Hanson


On 06/19/2018 05:02 AM, Sahina Bose wrote:



On Mon, Jun 18, 2018 at 5:12 PM, Hanson Turner 
mailto:han...@andrewswireless.net>> wrote:


Hi Guys,

My engine has corrupted, and while waiting for help, I'd like to
see if I can pull some data off the VM's to re purpose back onto
dedicated hardware.

Our setup is/was a gluster based storage system for VM's. The
gluster data storage I'm assuming is okay, I think the hosted
engine is hosed, and needs restored, but that's another thread.

I can copy the raw disk file off of the gluster data domain.
What's the best way to mount it short of importing it into
another gluster domain?

With vmware, we can grab the disk file and move it from server to
server without issue. You can mount and explore contents with
workstation.


If you want to copy the image file, you can mount the gluster volume 
and copy it,

using mount -t glusterfs <server>:/<volname> <mountpoint>


What do we have available to us for ovirt?

Thanks,

___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/site/privacy-policy/

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AF5K2JERYH63K25XKA4FFP4QQDZSVWM/








___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZBEM7WJ5SGRIQLC53GKZSFXYMEOXLRW/


[ovirt-users] Re: Ovirt + Gluster : How do I gain access to the file systems of the VMs

2018-06-19 Thread Sahina Bose
On Mon, Jun 18, 2018 at 5:12 PM, Hanson Turner 
wrote:

> Hi Guys,
>
> My engine has become corrupted, and while waiting for help, I'd like to see if I
> can pull some data off the VMs to repurpose back onto dedicated hardware.
>
> Our setup is/was a gluster based storage system for VM's. The gluster data
> storage I'm assuming is okay, I think the hosted engine is hosed, and needs
> restored, but that's another thread.
>
> I can copy the raw disk file off of the gluster data domain. What's the
> best way to mount it short of importing it into another gluster domain?
>
> With vmware, we can grab the disk file and move it from server to server
> without issue. You can mount and explore contents with workstation.
>

If you want to copy the image file, you can mount the gluster volume and
copy it,
using mount -t glusterfs <server>:/<volname> <mountpoint>
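
For example (a sketch; the host, volume name and the storage domain/image UUIDs
under the mount point are placeholders):
# mount -t glusterfs gs-host:/data1 /mnt/gluster
# cp -a /mnt/gluster/<sd_uuid>/images/<image_uuid> /backup/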


> What do we have available to us for ovirt?
>
> Thanks,
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/communit
> y/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archiv
> es/list/users@ovirt.org/message/2AF5K2JERYH63K25XKA4FFP4QQDZSVWM/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JOV4LWTXZD636WD3JHKFUIU76GZBEQLZ/