[ovirt-users] Cockpit and Gluster

2021-02-13 Thread C Williams
Hello,

I have a hardware RAID 10 array and want to use Cockpit to deploy a hosted
engine with replica 3 Gluster.

Should I pick the JBOD option for this ?

Will my disk alignment be correct for my PVs, VGs, LVs, and XFS formatting?

I don't see a RAID 10 option. I believe it was removed in 4.2.8.
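
For context, when I lay storage out by hand my understanding is that the
alignment comes from passing the RAID geometry down through LVM and XFS,
roughly as sketched below. The device name, stripe unit (256K) and data-disk
count (6) are placeholders, not my controller's real values:

 pvcreate --dataalignment 1536K /dev/sdb        # stripe unit * number of data disks
 vgcreate gluster_vg /dev/sdb
 lvcreate -l 100%FREE -n gluster_lv gluster_vg
 mkfs.xfs -f -i size=512 -d su=256k,sw=6 /dev/gluster_vg/gluster_lv

I would like to know whether picking JBOD in the wizard still gives me the
equivalent of the above on top of a hardware RAID 10 LUN.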

Thank You All For Your Help !
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LA66XNZ776CFAU6XTOUILQ2OOSCRR3PP/


[ovirt-users] Re: Select host for image upload

2020-11-01 Thread C Williams
Nir,

Thank you for getting back with me !  Sorry to be slow responding.

Indeed, when I pick the host that I want to use for the upload, oVirt
ignores my choice and picks the first host in my data center.

Thank You For Your Help

On Sun, Nov 1, 2020 at 4:29 AM Nir Soffer  wrote:

>
>
> On Mon, Oct 26, 2020, 21:34 C Williams  wrote:
>
>> Hello,
>>
>> I am having problems selecting the host that I want to use for .iso and
>> other image uploads  via the oVirt Web Console. I have 20+ servers of
>> varying speeds.
>>
>> Somehow oVirt keeps wanting to select my oldest and slowest system to use
>> for image uploads.
>>
>
> oVirt picks a random host from the hosts that can access the storage
> domain. If you have a preferred host, you can specify it in
> the UI.
>
> If you select a host and oVirt ignores your selection it sounds like a bug.
>
> Nir
>
>
>> Please Help !
>>
>> Thank You !
>>
>> C Williams
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RY6SCAMTCA4W5MALU5QFGSR33F6LDXG6/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5DYX66LHDM6OSG643KGMFK3SBBH3K3BL/


[ovirt-users] Select host for image upload

2020-10-26 Thread C Williams
Hello,

I am having problems selecting the host that I want to use for .iso and
other image uploads  via the oVirt Web Console. I have 20+ servers of
varying speeds.

Somehow oVirt keeps wanting to select my oldest and slowest system to use
for image uploads.

Please Help !

Thank You !

C Williams
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RY6SCAMTCA4W5MALU5QFGSR33F6LDXG6/


[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-15 Thread C Williams
Thank You Strahil !

On Thu, Oct 15, 2020 at 8:05 AM Strahil Nikolov 
wrote:

> >Please clarify what are the disk groups that you are referring to?
> Either Raid5/6 or Raid10 with a HW controller(s).
>
>
> >Regarding your statement  "In JBOD mode, Red Hat support only 'replica 3'
> >volumes." does this also mean "replica 3" variants ex.
> >"distributed-replicate"
> Nope, As far as I know - only when you have 3 copies of the data ('replica
> 3' only).
>
> Best Regards,
> Strahil Nikolov
>
>
> On Wed, Oct 14, 2020 at 7:34 AM C Williams 
> wrote:
> > Thanks Strahil !
> >
> > More questions may follow.
> >
> > Thanks Again For Your Help !
> >
> > On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov 
> wrote:
> >> Imagine you got a host with 60 Spinning Disks -> I would recommend you
> to split it to 10/12 disk groups and these groups will represent several
> bricks (6/5).
> >>
> >> Keep in mind that when you start using many (some articles state
> hundreds , but no exact number was given) bricks , you should consider
> brick multiplexing (cluster.brick-multiplex).
> >>
> >> So, you can use as many bricks you want , but each brick requires cpu
> time (separate thread) , tcp port number and memory.
> >>
> >> In my setup I use multiple bricks in order to spread the load via LACP
> over several small (1GBE) NICs.
> >>
> >>
> >> The only "limitation" is to have your data on separate hosts , so when
> you create the volume it is extremely advisable that you follow this model:
> >>
> >> hostA:/path/to/brick
> >> hostB:/path/to/brick
> >> hostC:/path/to/brick
> >> hostA:/path/to/brick2
> >> hostB:/path/to/brick2
> >> hostC:/path/to/brick2
> >>
> >> In JBOD mode, Red Hat support only 'replica 3' volumes - just to keep
> that in mind.
> >>
> >> From my perspective , JBOD is suitable for NVMEs/SSDs while spinning
> disks should be in a raid of some type (maybe RAID10 for perf).
> >>
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
> >>
> >>
> >>
> >>
> >>
> >> Hello,
> >>
> >> I am getting some questions from others on my team.
> >>
> >> I have some hosts that could provide up to 6 JBOD disks for oVirt data
> (not arbiter) bricks
> >>
> >> Would this be workable / advisable ?  I'm under the impression there
> should not be more than 1 data brick per HCI host .
> >>
> >> Please correct me if I'm wrong.
> >>
> >> Thank You For Your Help !
> >>
> >>
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
> >>
> >
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CIXPK66DMDN24WQQWXEJJAHZ54ATQDLG/


[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-14 Thread C Williams
Strahil,

Please clarify what disk groups you are referring to.

Also, regarding your statement "In JBOD mode, Red Hat support only 'replica 3'
volumes": does this also cover "replica 3" variants such as
"distributed-replicate"?

Thank You For Your Help !

On Wed, Oct 14, 2020 at 7:34 AM C Williams  wrote:

> Thanks Strahil !
>
> More questions may follow.
>
> Thanks Again For Your Help !
>
> On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov 
> wrote:
>
>> Imagine you got a host with 60 Spinning Disks -> I would recommend you to
>> split it to 10/12 disk groups and these groups will represent several
>> bricks (6/5).
>>
>> Keep in mind that when you start using many (some articles state hundreds
>> , but no exact number was given) bricks , you should consider brick
>> multiplexing (cluster.brick-multiplex).
>>
>> So, you can use as many bricks you want , but each brick requires cpu
>> time (separate thread) , tcp port number and memory.
>>
>> In my setup I use multiple bricks in order to spread the load via LACP
>> over several small (1GBE) NICs.
>>
>>
>> The only "limitation" is to have your data on separate hosts , so when
>> you create the volume it is extremely advisable that you follow this model:
>>
>> hostA:/path/to/brick
>> hostB:/path/to/brick
>> hostC:/path/to/brick
>> hostA:/path/to/brick2
>> hostB:/path/to/brick2
>> hostC:/path/to/brick2
>>
>> In JBOD mode, Red Hat support only 'replica 3' volumes - just to keep
>> that in mind.
>>
>> From my perspective , JBOD is suitable for NVMEs/SSDs while spinning
>> disks should be in a raid of some type (maybe RAID10 for perf).
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams <
>> cwilliams3...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hello,
>>
>> I am getting some questions from others on my team.
>>
>> I have some hosts that could provide up to 6 JBOD disks for oVirt data
>> (not arbiter) bricks
>>
>> Would this be workable / advisable ?  I'm under the impression there
>> should not be more than 1 data brick per HCI host .
>>
>> Please correct me if I'm wrong.
>>
>> Thank You For Your Help !
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z4HUGVEPTXY2LGTGMO656K4KADCXRLND/


[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-14 Thread C Williams
Thanks Strahil !

More questions may follow.

Thanks Again For Your Help !
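
Just so I am sure I follow the ordering model you describe below: with two
data bricks per host across three hosts, I assume the volume would be created
something like this (hostnames and brick paths are made up for the example),
and with many bricks I would also turn on brick multiplexing:

 gluster volume create datavol replica 3 \
   hostA:/gluster_bricks/brick1/data hostB:/gluster_bricks/brick1/data hostC:/gluster_bricks/brick1/data \
   hostA:/gluster_bricks/brick2/data hostB:/gluster_bricks/brick2/data hostC:/gluster_bricks/brick2/data
 gluster volume start datavol
 gluster volume set all cluster.brick-multiplex on   # cluster-wide option: bricks share glusterfsd processes/ports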

On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov 
wrote:

> Imagine you got a host with 60 Spinning Disks -> I would recommend you to
> split it to 10/12 disk groups and these groups will represent several
> bricks (6/5).
>
> Keep in mind that when you start using many (some articles state hundreds
> , but no exact number was given) bricks , you should consider brick
> multiplexing (cluster.brick-multiplex).
>
> So, you can use as many bricks you want , but each brick requires cpu time
> (separate thread) , tcp port number and memory.
>
> In my setup I use multiple bricks in order to spread the load via LACP
> over several small (1GBE) NICs.
>
>
> The only "limitation" is to have your data on separate hosts , so when you
> create the volume it is extremely advisable that you follow this model:
>
> hostA:/path/to/brick
> hostB:/path/to/brick
> hostC:/path/to/brick
> hostA:/path/to/brick2
> hostB:/path/to/brick2
> hostC:/path/to/brick2
>
> In JBOD mode, Red Hat support only 'replica 3' volumes - just to keep that
> in mind.
>
> From my perspective , JBOD is suitable for NVMEs/SSDs while spinning disks
> should be in a raid of some type (maybe RAID10 for perf).
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
>
>
>
>
>
> Hello,
>
> I am getting some questions from others on my team.
>
> I have some hosts that could provide up to 6 JBOD disks for oVirt data
> (not arbiter) bricks
>
> Would this be workable / advisable ?  I'm under the impression there
> should not be more than 1 data brick per HCI host .
>
> Please correct me if I'm wrong.
>
> Thank You For Your Help !
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XA2CKVS7OMFJEKOXXCZB4UTXVZMEL7TB/


[ovirt-users] Number of data bricks per oVirt HCI host

2020-10-13 Thread C Williams
Hello,

I am getting some questions from others on my team.

I have some hosts that could provide up to 6 JBOD disks for oVirt data (not
arbiter) bricks

Would this be workable / advisable ?  I'm under the impression there should
not be more than 1 data brick per HCI host .

Please correct me if I'm wrong.

Thank You For Your Help !
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/


[ovirt-users] Re: Replica Question

2020-09-30 Thread C Williams
Thank You Strahil !!

On Wed, Sep 30, 2020 at 12:21 PM Strahil Nikolov 
wrote:

> In your case it seems reasonable, but you should test the 2 stripe sizes
> (128K vs 256K) before running in production. The good thing about replica
> volumes is that you can remove a brick , recreate it from cli and then add
> it back.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Wednesday, 30 September 2020 at 17:03:11 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
>
>
>
>
>
> Hello,
>
> I am planning to use (6) 1 TB disks in a hardware RAID 6 array which will
> yield 4 data disks per oVirt Gluster brick.
>
> However.
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
> ---  states
>
> "For RAID 6, the stripe unit size must be chosen such that the full stripe
> size (stripe unit * number of data disks) is between 1 MiB and 2 MiB,
> preferably in the lower end of the range. Hardware RAID controllers usually
> allow stripe unit sizes that are a power of 2. For RAID 6 with 12 disks (10
> data disks), the recommended stripe unit size is 128KiB"
>
> In my case,  each  oVirt Gluster Brick would have 4 data disks which would
> mean a 256K hardware RAID stripe size ( by what I see above)  to get the
> full stripe size unit to above 1 MiB  ( 4 data disks X 256K Hardware RAID
> stripe = 1024 K )
>
> Could I use a smaller stripe size ex. 128K -- since most of the oVirt
> virtual disk traffic will be small sharded files and a 256K stripe would
> seem to me to be pretty big.for this type of data ?
>
> Thanks For Your Help !!
>
>
>
>
>
> On Tue, Sep 29, 2020 at 8:22 PM C Williams 
> wrote:
> > Strahil,
> >
> > Once Again Thank You For Your Help !
> >
> > On Tue, Sep 29, 2020 at 11:29 AM Strahil Nikolov 
> wrote:
> >> One important step is to align the XFS to the stripe size * stripe
> width. Don't miss it or you might have issues.
> >>
> >> Details can be found at:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Tuesday, 29 September 2020 at 16:36:10 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
> >>
> >>
> >>
> >>
> >>
> >> Hello,
> >>
> >> We have decided to get a 6th server for the install. I hope to set up a
> 2x3 Distributed replica 3 .
> >>
> >> So we are not going to worry about the "5 server" situation.
> >>
> >> Thank You All For Your Help !!
> >>
> >> On Mon, Sep 28, 2020 at 5:53 PM C Williams 
> wrote:
> >>> Hello,
> >>>
> >>> More questions on this -- since I have 5 servers . Could the following
> work ?   Each server has (1) 3TB RAID 6 partition that I want to use for
> contiguous storage.
> >>>
> >>> Mountpoint for RAID 6 partition (3TB)  /brick
> >>>
> >>> Server A: VOL1 - Brick 1  directory
>  /brick/brick1 (VOL1 Data brick)
>
> >>> Server B: VOL1 - Brick 2 + VOL2 - Brick 3  directory
>  /brick/brick2 (VOL1 Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
> >>> Server C: VOL1 - Brick 3 directory
> /brick/brick3  (VOL1 Data brick)
> >>> Server D: VOL2 - Brick 1 directory
> /brick/brick1  (VOL2 Data brick)
> >>> Server E  VOL2 - Brick 2 directory
> /brick/brick2  (VOL2 Data brick)
> >>>
> >>> Questions about this configuration
> >>> 1.  Is it safe to use a  mount point 2 times ?
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
> says "Ensure that no more than one brick is created from a single mount."
> In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B
> >>>
> >>> 2.  Could I start a standard replica3 (VOL1) with 3 data bricks and
> add 2 additional data bricks plus 1 arbitrator brick (VOL2)  to create a
>  distributed-replicate cluster providing ~6TB of contiguous storage ?  .
> >>>  By contiguous storage I mean that df -h would show ~6 TB disk
> space.
> >>>
> >>> Thank You For Your Help !!
> >>>
> >>&g

[ovirt-users] Re: Replica Question

2020-09-30 Thread C Williams
Hello,

I am planning to use (6) 1 TB disks in a hardware RAID 6 array which will
yield 4 data disks per oVirt Gluster brick.

However.
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
---  states

"For RAID 6, the stripe unit size must be chosen such that the full stripe
size (stripe unit * number of data disks) is between 1 MiB and 2 MiB,
preferably in the lower end of the range. Hardware RAID controllers usually
allow stripe unit sizes that are a power of 2. For RAID 6 with 12 disks (10
data disks), the recommended stripe unit size is 128KiB"

In my case, each oVirt Gluster brick would have 4 data disks, which would
mean a 256K hardware RAID stripe unit (by what I see above) to get the full
stripe size to the 1 MiB mark (4 data disks x 256K hardware RAID stripe
unit = 1024K).

Could I use a smaller stripe unit, e.g. 128K, since most of the oVirt
virtual disk traffic will be small sharded files and a 256K stripe unit
seems pretty big for this type of data?
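
Writing out the arithmetic as I understand it (device and LV names below are
just placeholders for the sketch):

 # full stripe = stripe unit * data disks
 #   256K * 4 = 1024K = 1 MiB   (bottom of the documented 1-2 MiB window)
 #   128K * 4 =  512K           (below the documented window)
 pvcreate --dataalignment 1024K /dev/sdb
 vgcreate gluster_vg /dev/sdb
 lvcreate -l 100%FREE -n gluster_lv gluster_vg
 mkfs.xfs -f -i size=512 -d su=256k,sw=4 /dev/gluster_vg/gluster_lv

So with 128K I would fall below the range the guide recommends, which is why
I am asking whether that is still acceptable for small sharded files.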

Thanks For Your Help !!





On Tue, Sep 29, 2020 at 8:22 PM C Williams  wrote:

> Strahil,
>
> Once Again Thank You For Your Help !
>
> On Tue, Sep 29, 2020 at 11:29 AM Strahil Nikolov 
> wrote:
>
>> One important step is to align the XFS to the stripe size * stripe width.
>> Don't miss it or you might have issues.
>>
>> Details can be found at:
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Tuesday, 29 September 2020 at 16:36:10 GMT+3, C Williams <
>> cwilliams3...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hello,
>>
>> We have decided to get a 6th server for the install. I hope to set up a
>> 2x3 Distributed replica 3 .
>>
>> So we are not going to worry about the "5 server" situation.
>>
>> Thank You All For Your Help !!
>>
>> On Mon, Sep 28, 2020 at 5:53 PM C Williams 
>> wrote:
>> > Hello,
>> >
>> > More questions on this -- since I have 5 servers . Could the following
>> work ?   Each server has (1) 3TB RAID 6 partition that I want to use for
>> contiguous storage.
>> >
>> > Mountpoint for RAID 6 partition (3TB)  /brick
>> >
>> > Server A: VOL1 - Brick 1  directory
>>  /brick/brick1 (VOL1 Data brick)
>>
>> > Server B: VOL1 - Brick 2 + VOL2 - Brick 3  directory  /brick/brick2
>> (VOL1 Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
>> > Server C: VOL1 - Brick 3 directory
>> /brick/brick3  (VOL1 Data brick)
>> > Server D: VOL2 - Brick 1 directory
>> /brick/brick1  (VOL2 Data brick)
>> > Server E  VOL2 - Brick 2 directory
>> /brick/brick2  (VOL2 Data brick)
>> >
>> > Questions about this configuration
>> > 1.  Is it safe to use a  mount point 2 times ?
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
>> says "Ensure that no more than one brick is created from a single mount."
>> In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B
>> >
>> > 2.  Could I start a standard replica3 (VOL1) with 3 data bricks and add
>> 2 additional data bricks plus 1 arbitrator brick (VOL2)  to create a
>>  distributed-replicate cluster providing ~6TB of contiguous storage ?  .
>> >  By contiguous storage I mean that df -h would show ~6 TB disk
>> space.
>> >
>> > Thank You For Your Help !!
>> >
>> > On Mon, Sep 28, 2020 at 4:04 PM C Williams 
>> wrote:
>> >> Strahil,
>> >>
>> >> Thank You For Your Help !
>> >>
>> >> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
>> wrote:
>> >>> You can setup your bricks in such way , that each host has at least 1
>> brick.
>> >>> For example:
>> >>> Server A: VOL1 - Brick 1
>> >>> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
>> >>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
>> >>> Server D: VOL2 - brick 1
>> >>>
>> >>> The most optimal is to find a small system/VM for being an arbiter
>> and having a 'replica 3 arbiter 1' volume.
>> >>>
>> >>> Best Regards,
>> >>&

[ovirt-users] Re: Replica Question

2020-09-29 Thread C Williams
Strahil,

Once Again Thank You For Your Help !

On Tue, Sep 29, 2020 at 11:29 AM Strahil Nikolov 
wrote:

> One important step is to align the XFS to the stripe size * stripe width.
> Don't miss it or you might have issues.
>
> Details can be found at:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Tuesday, 29 September 2020 at 16:36:10 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
>
>
>
>
>
> Hello,
>
> We have decided to get a 6th server for the install. I hope to set up a
> 2x3 Distributed replica 3 .
>
> So we are not going to worry about the "5 server" situation.
>
> Thank You All For Your Help !!
>
> On Mon, Sep 28, 2020 at 5:53 PM C Williams 
> wrote:
> > Hello,
> >
> > More questions on this -- since I have 5 servers . Could the following
> work ?   Each server has (1) 3TB RAID 6 partition that I want to use for
> contiguous storage.
> >
> > Mountpoint for RAID 6 partition (3TB)  /brick
> >
> > Server A: VOL1 - Brick 1  directory
>  /brick/brick1 (VOL1 Data brick)
>
> > Server B: VOL1 - Brick 2 + VOL2 - Brick 3  directory  /brick/brick2
> (VOL1 Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
> > Server C: VOL1 - Brick 3 directory
> /brick/brick3  (VOL1 Data brick)
> > Server D: VOL2 - Brick 1 directory
> /brick/brick1  (VOL2 Data brick)
> > Server E  VOL2 - Brick 2 directory
> /brick/brick2  (VOL2 Data brick)
> >
> > Questions about this configuration
> > 1.  Is it safe to use a  mount point 2 times ?
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
> says "Ensure that no more than one brick is created from a single mount."
> In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B
> >
> > 2.  Could I start a standard replica3 (VOL1) with 3 data bricks and add
> 2 additional data bricks plus 1 arbitrator brick (VOL2)  to create a
>  distributed-replicate cluster providing ~6TB of contiguous storage ?  .
> >  By contiguous storage I mean that df -h would show ~6 TB disk space.
> >
> > Thank You For Your Help !!
> >
> > On Mon, Sep 28, 2020 at 4:04 PM C Williams 
> wrote:
> >> Strahil,
> >>
> >> Thank You For Your Help !
> >>
> >> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
> wrote:
> >>> You can setup your bricks in such way , that each host has at least 1
> brick.
> >>> For example:
> >>> Server A: VOL1 - Brick 1
> >>> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
> >>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
> >>> Server D: VOL2 - brick 1
> >>>
> >>> The most optimal is to find a small system/VM for being an arbiter and
> having a 'replica 3 arbiter 1' volume.
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Monday, 28 September 2020 at 20:46:16 GMT+3, Jayme <
> jay...@gmail.com> wrote:
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> It might be possible to do something similar as described in the
> documentation here:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
> -- but I'm not sure if oVirt HCI would support it. You might have to roll
> out your own GlusterFS storage solution. Someone with more Gluster/HCI
> knowledge might know better.
> >>>
> >>> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
> wrote:
> >>>> Jayme,
> >>>>
> >>>> Thank for getting back with me !
> >>>>
> >>>> If I wanted to be wasteful with storage, could I start with an
> initial replica 2 + arbiter and then add 2 bricks to the volume ? Could the
> arbiter solve split-brains for 4 bricks ?
> >>>>
> >>>> Thank You For Your Help !
> >>>>
> >>>> On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
> >>>>> You can only do HCI in multiple's of 3. You could do a 3 server HCI
> setup and add the other two servers as compute nodes or you 

[ovirt-users] Re: Replica Question

2020-09-29 Thread C Williams
Strahil,

Thank You For Your Help !
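
Just to make sure I read your layout correctly, I assume the two volumes
would be created along these lines (using the placeholder server names from
your example):

 gluster volume create VOL1 replica 3 \
   serverA:/gluster_bricks/VOL1/brick1 \
   serverB:/gluster_bricks/VOL1/brick2 \
   serverC:/gluster_bricks/VOL1/brick3
 gluster volume create VOL2 replica 3 arbiter 1 \
   serverD:/gluster_bricks/VOL2/brick1 \
   serverE:/gluster_bricks/VOL2/brick2 \
   serverB:/gluster_bricks/VOL2/arbiter
 gluster volume start VOL1
 gluster volume start VOL2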

On Tue, Sep 29, 2020 at 11:24 AM Strahil Nikolov 
wrote:

>
> More questions on this -- since I have 5 servers . Could the following
> work ?   Each server has (1) 3TB RAID 6 partition that I want to use for
> contiguous storage.
>
> Mountpoint for RAID 6 partition (3TB)  /brick
>
> Server A: VOL1 - Brick 1  directory
>  /brick/brick1 (VOL1 Data brick)
>
> Server B: VOL1 - Brick 2 + VOL2 - Brick 3  directory  /brick/brick2
> (VOL1 Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
> Server C: VOL1 - Brick 3 directory
> /brick/brick3  (VOL1 Data brick)
> Server D: VOL2 - Brick 1 directory
> /brick/brick1  (VOL2 Data brick)
> Server E  VOL2 - Brick 2 directory
> /brick/brick2  (VOL2 Data brick)
>
> Questions about this configuration
> 1.  Is it safe to use a  mount point 2 times ?
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
> says "Ensure that no more than one brick is created from a single mount."
> In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B
>
> As long as you keep the arbiter brick (VOL 2) separate from the data brick
> (VOL 1) , it will be fine. A brick is a unique combination of Gluster TSP
> node + mount point. You can use /brick/brick1 on all your nodes and the
> volume will be fine. Using "/brick/brick1" for data brick (VOL1) and
> "/brick/brick1" for arbiter brick (VOL2) on the same host IS NOT
> ACCEPTABLE. So just keep the brick names more unique and everything will be
> fine. Maybe something like this will be easier to work with, but you can
> set it to anything you want as long as you don't use same brick in 2
> volumes:
> serverA:/gluster_bricks/VOL1/brick1
> serverB:/gluster_bricks/VOL1/brick2
> serverB:/gluster_bricks/VOL2/arbiter
> serverC:/gluster_bricks/VOL1/brick3
> serverD:/gluster_bricks/VOL2/brick1
> serverE:/gluster_bricks/VOL2/brick2
>
>
> 2.  Could I start a standard replica3 (VOL1) with 3 data bricks and add 2
> additional data bricks plus 1 arbitrator brick (VOL2)  to create a
>  distributed-replicate cluster providing ~6TB of contiguous storage ?  .
>  By contiguous storage I mean that df -h would show ~6 TB disk space.
>
> No, you either use 6 data bricks (subvol1 -> 3 data disks, subvol2 -> 3
> data disks), or you use 4 data + 2 arbiter bricks (subvol 1 -> 2 data + 1
> arbiter, subvol 2 -> 2 data + 1 arbiter). The good thing is that you can
> reshape the volume once you have more disks.
>
>
> If you have only Linux VMs , you can follow point 1 and create 2 volumes
> which will be 2 storage domains in Ovirt. Then you can stripe (software
> raid 0 via mdadm or lvm native) your VMs with 1 disk from the first volume
> and 1 disk from the second volume.
>
> Actually , I'm using 4 gluster volumes for my NVMe as my network is too
> slow. My VMs have 4 disks in a raid0 (boot) and striped LV (for "/").
>
> Best Regards,
> Strahil Nikolov
>
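For the in-guest striping idea above, my reading is that it would look
roughly like this inside a Linux VM that has one virtual disk from each
gluster-backed storage domain; the device names are only an example:

 pvcreate /dev/vdb /dev/vdc            # one disk from each storage domain
 vgcreate data_vg /dev/vdb /dev/vdc
 lvcreate --type striped -i 2 -I 128k -l 100%FREE -n data_lv data_vg
 mkfs.xfs /dev/data_vg/data_lv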
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CQAOSAG3VMKIAM62UWDA67X3DOVNI6K3/


[ovirt-users] Re: Replica Question

2020-09-29 Thread C Williams
Hello,

We have decided to get a 6th server for the install. I hope to set up a 2x3
distributed replica 3 volume.

So we are not going to worry about the "5 server" situation.
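
For the record, my understanding is that the 2x3 layout can either be created
with all six bricks in one "gluster volume create ... replica 3" command or
grown later from an existing replica 3 volume by adding a second set of three
bricks, along these lines (hostnames and paths are placeholders):

 gluster volume add-brick vmstore \
   srv4:/gluster_bricks/vmstore/brick srv5:/gluster_bricks/vmstore/brick srv6:/gluster_bricks/vmstore/brick
 gluster volume rebalance vmstore start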

Thank You All For Your Help !!

On Mon, Sep 28, 2020 at 5:53 PM C Williams  wrote:

> Hello,
>
> More questions on this -- since I have 5 servers . Could the following
> work ?   Each server has (1) 3TB RAID 6 partition that I want to use for
> contiguous storage.
>
> Mountpoint for RAID 6 partition (3TB)  /brick
>
> Server A: VOL1 - Brick 1  directory
>  /brick/brick1 (VOL1 Data brick)
>
> Server B: VOL1 - Brick 2 + VOL2 - Brick 3  directory  /brick/brick2
> (VOL1 Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
> Server C: VOL1 - Brick 3 directory
> /brick/brick3  (VOL1 Data brick)
> Server D: VOL2 - Brick 1 directory
> /brick/brick1  (VOL2 Data brick)
> Server E  VOL2 - Brick 2 directory
> /brick/brick2  (VOL2 Data brick)
>
> Questions about this configuration
> 1.  Is it safe to use a  mount point 2 times ?
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
> says "Ensure that no more than one brick is created from a single mount."
> In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B
>
> 2.  Could I start a standard replica3 (VOL1) with 3 data bricks and add 2
> additional data bricks plus 1 arbitrator brick (VOL2)  to create a
>  distributed-replicate cluster providing ~6TB of contiguous storage ?  .
>  By contiguous storage I mean that df -h would show ~6 TB disk space.
>
> Thank You For Your Help !!
>
> On Mon, Sep 28, 2020 at 4:04 PM C Williams 
> wrote:
>
>> Strahil,
>>
>> Thank You For Your Help !
>>
>> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
>> wrote:
>>
>>> You can setup your bricks in such way , that each host has at least 1
>>> brick.
>>> For example:
>>> Server A: VOL1 - Brick 1
>>> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
>>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
>>> Server D: VOL2 - brick 1
>>>
>>> The most optimal is to find a small system/VM for being an arbiter and
>>> having a 'replica 3 arbiter 1' volume.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>> On Monday, 28 September 2020 at 20:46:16 GMT+3, Jayme <
>>> jay...@gmail.com> wrote:
>>>
>>>
>>>
>>>
>>>
>>> It might be possible to do something similar as described in the
>>> documentation here:
>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
>>> -- but I'm not sure if oVirt HCI would support it. You might have to roll
>>> out your own GlusterFS storage solution. Someone with more Gluster/HCI
>>> knowledge might know better.
>>>
>>> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
>>> wrote:
>>> > Jayme,
>>> >
>>> > Thank for getting back with me !
>>> >
>>> > If I wanted to be wasteful with storage, could I start with an initial
>>> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
>>> solve split-brains for 4 bricks ?
>>> >
>>> > Thank You For Your Help !
>>> >
>>> > On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>>> >> You can only do HCI in multiple's of 3. You could do a 3 server HCI
>>> setup and add the other two servers as compute nodes or you could add a 6th
>>> server and expand HCI across all 6
>>> >>
>>> >> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>>> wrote:
>>> >>> Hello,
>>> >>>
>>> >>> We recently received 5 servers. All have about 3 TB of storage.
>>> >>>
>>> >>> I want to deploy an oVirt HCI using as much of my storage and
>>> compute resources as possible.
>>> >>>
>>> >>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>> >>>
>>> >>> I have deployed replica 3s and know about replica 2 + arbiter -- but
>>> an arbiter would not be applicable here -- since I have equal storage on
>>> all of the planned bricks.

[ovirt-users] Re: Replica Question

2020-09-28 Thread C Williams
Hello,

More questions on this -- since I have 5 servers . Could the following work
?   Each server has (1) 3TB RAID 6 partition that I want to use for
contiguous storage.

Mountpoint for RAID 6 partition (3TB)  /brick

Server A: VOL1 - Brick 1                   directory /brick/brick1 (VOL1 Data brick)
Server B: VOL1 - Brick 2 + VOL2 - Brick 3  directories /brick/brick2 (VOL1 Data brick) and /brick/brick3 (VOL2 Arbitrator brick)
Server C: VOL1 - Brick 3                   directory /brick/brick3 (VOL1 Data brick)
Server D: VOL2 - Brick 1                   directory /brick/brick1 (VOL2 Data brick)
Server E: VOL2 - Brick 2                   directory /brick/brick2 (VOL2 Data brick)

Questions about this configuration
1.  Is it safe to use a  mount point 2 times ?
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
says "Ensure that no more than one brick is created from a single mount."
In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B

2.  Could I start a standard replica 3 (VOL1) with 3 data bricks and add 2
additional data bricks plus 1 arbiter brick (VOL2) to create a
distributed-replicate cluster providing ~6TB of contiguous storage?
By contiguous storage I mean that df -h would show ~6 TB of disk space.

Thank You For Your Help !!

On Mon, Sep 28, 2020 at 4:04 PM C Williams  wrote:

> Strahil,
>
> Thank You For Your Help !
>
> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
> wrote:
>
>> You can setup your bricks in such way , that each host has at least 1
>> brick.
>> For example:
>> Server A: VOL1 - Brick 1
>> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
>> Server D: VOL2 - brick 1
>>
>> The most optimal is to find a small system/VM for being an arbiter and
>> having a 'replica 3 arbiter 1' volume.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>> On Monday, 28 September 2020 at 20:46:16 GMT+3, Jayme <
>> jay...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> It might be possible to do something similar as described in the
>> documentation here:
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
>> -- but I'm not sure if oVirt HCI would support it. You might have to roll
>> out your own GlusterFS storage solution. Someone with more Gluster/HCI
>> knowledge might know better.
>>
>> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
>> wrote:
>> > Jayme,
>> >
>> > Thank for getting back with me !
>> >
>> > If I wanted to be wasteful with storage, could I start with an initial
>> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
>> solve split-brains for 4 bricks ?
>> >
>> > Thank You For Your Help !
>> >
>> > On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>> >> You can only do HCI in multiple's of 3. You could do a 3 server HCI
>> setup and add the other two servers as compute nodes or you could add a 6th
>> server and expand HCI across all 6
>> >>
>> >> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>> wrote:
>> >>> Hello,
>> >>>
>> >>> We recently received 5 servers. All have about 3 TB of storage.
>> >>>
>> >>> I want to deploy an oVirt HCI using as much of my storage and compute
>> resources as possible.
>> >>>
>> >>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>> >>>
>> >>> I have deployed replica 3s and know about replica 2 + arbiter -- but
>> an arbiter would not be applicable here -- since I have equal storage on
>> all of the planned bricks.
>> >>>
>> >>> Thank You For Your Help !!
>> >>>
>> >>> C Williams
>> >>> ___
>> >>> Users mailing list -- users@ovirt.org
>> >>> To unsubscribe send an email to users-le...@ovirt.org
>> >>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> >>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
>> >>>
>> >

[ovirt-users] Re: Replica Question

2020-09-28 Thread C Williams
Strahil,

Thank You For Your Help !

On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
wrote:

> You can setup your bricks in such way , that each host has at least 1
> brick.
> For example:
> Server A: VOL1 - Brick 1
> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
> Server D: VOL2 - brick 1
>
> The most optimal is to find a small system/VM for being an arbiter and
> having a 'replica 3 arbiter 1' volume.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
> On Monday, 28 September 2020 at 20:46:16 GMT+3, Jayme <
> jay...@gmail.com> wrote:
>
>
>
>
>
> It might be possible to do something similar as described in the
> documentation here:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
> -- but I'm not sure if oVirt HCI would support it. You might have to roll
> out your own GlusterFS storage solution. Someone with more Gluster/HCI
> knowledge might know better.
>
> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
> wrote:
> > Jayme,
> >
> > Thank for getting back with me !
> >
> > If I wanted to be wasteful with storage, could I start with an initial
> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
> solve split-brains for 4 bricks ?
> >
> > Thank You For Your Help !
> >
> > On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
> >> You can only do HCI in multiple's of 3. You could do a 3 server HCI
> setup and add the other two servers as compute nodes or you could add a 6th
> server and expand HCI across all 6
> >>
> >> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
> wrote:
> >>> Hello,
> >>>
> >>> We recently received 5 servers. All have about 3 TB of storage.
> >>>
> >>> I want to deploy an oVirt HCI using as much of my storage and compute
> resources as possible.
> >>>
> >>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
> >>>
> >>> I have deployed replica 3s and know about replica 2 + arbiter -- but
> an arbiter would not be applicable here -- since I have equal storage on
> all of the planned bricks.
> >>>
> >>> Thank You For Your Help !!
> >>>
> >>> C Williams
> >>> ___
> >>> Users mailing list -- users@ovirt.org
> >>> To unsubscribe send an email to users-le...@ovirt.org
> >>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >>> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
> >>>
> >>
> >
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CECMD2SWBSBDAFP3TFMMYWTSV3UKU72E/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DYWB3JGTNVWDPIDXUFXDSXI6JIOGNJOU/


[ovirt-users] Re: Replica Question

2020-09-28 Thread C Williams
Thanks Jayme !

I'll read up on it.

Thanks Again For Your Help !!

On Mon, Sep 28, 2020 at 1:42 PM Jayme  wrote:

> It might be possible to do something similar as described in the
> documentation here:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
> -- but I'm not sure if oVirt HCI would support it. You might have to roll
> out your own GlusterFS storage solution. Someone with more Gluster/HCI
> knowledge might know better.
>
> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
> wrote:
>
>> Jayme,
>>
>> Thank for getting back with me !
>>
>> If I wanted to be wasteful with storage, could I start with an initial
>> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
>> solve split-brains for 4 bricks ?
>>
>> Thank You For Your Help !
>>
>> On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>>
>>> You can only do HCI in multiple's of 3. You could do a 3 server HCI
>>> setup and add the other two servers as compute nodes or you could add a 6th
>>> server and expand HCI across all 6
>>>
>>> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> We recently received 5 servers. All have about 3 TB of storage.
>>>>
>>>> I want to deploy an oVirt HCI using as much of my storage and compute
>>>> resources as possible.
>>>>
>>>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>>>
>>>> I have deployed replica 3s and know about replica 2 + arbiter -- but an
>>>> arbiter would not be applicable here -- since I have equal storage on all
>>>> of the planned bricks.
>>>>
>>>> Thank You For Your Help !!
>>>>
>>>> C Williams
>>>> ___
>>>> Users mailing list -- users@ovirt.org
>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
>>>>
>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLA7LG35GTDPFSPFFEC6YYSSQNC4NPHM/


[ovirt-users] Re: Replica Question

2020-09-28 Thread C Williams
Jayme,

Thanks for getting back with me !

If I wanted to be wasteful with storage, could I start with an initial
replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
solve split-brains for 4 bricks ?

Thank You For Your Help !

On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:

> You can only do HCI in multiple's of 3. You could do a 3 server HCI setup
> and add the other two servers as compute nodes or you could add a 6th
> server and expand HCI across all 6
>
> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
> wrote:
>
>> Hello,
>>
>> We recently received 5 servers. All have about 3 TB of storage.
>>
>> I want to deploy an oVirt HCI using as much of my storage and compute
>> resources as possible.
>>
>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>
>> I have deployed replica 3s and know about replica 2 + arbiter -- but an
>> arbiter would not be applicable here -- since I have equal storage on all
>> of the planned bricks.
>>
>> Thank You For Your Help !!
>>
>> C Williams
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/57PVX55WE6M6W2LAXU4KNJ5YIJQUAKVS/


[ovirt-users] Replica Question

2020-09-28 Thread C Williams
Hello,

We recently received 5 servers. All have about 3 TB of storage.

I want to deploy an oVirt HCI using as much of my storage and compute
resources as possible.

Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?

I have deployed replica 3s and know about replica 2 + arbiter -- but an
arbiter would not be applicable here -- since I have equal storage on all
of the planned bricks.

Thank You For Your Help !!

C Williams
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/


[ovirt-users] Re: Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread C Williams
Strahil,

I have downgraded the target. The copy from the problematic volume to the
target is going on now.
Once I have the data copied, I might downgrade the problematic volume's
Gluster to 6.5.
At that point I might reattach the original ovirt domain and see if it will
work again.
But the copy is going on right now.

Thank You For Your Help !

On Mon, Jun 22, 2020 at 10:52 AM Strahil Nikolov 
wrote:

> You should ensure  that in the storage domain tab, the old  storage is not
> visible.
>
> I  still wander why yoiu didn't try to downgrade first.
>
> Best Regards,
> Strahil Nikolov
>
> On 22 June 2020 at 13:58:33 GMT+03:00, C Williams 
> wrote:
> >Strahil,
> >
> >The GLCL3 storage domain was detached prior to attempting to add the
> >new
> >storage domain.
> >
> >Should I also "Remove" it ?
> >
> >Thank You For Your Help !
> >
> >-- Forwarded message -
> >From: Strahil Nikolov 
> >Date: Mon, Jun 22, 2020 at 12:50 AM
> >Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
> >To: C Williams 
> >Cc: users 
> >
> >
> >You can't add the new volume as it contains the same data (UUID) as the
> >old
> >one , thus you need to detach the old one before adding the new one -
> >of
> >course this means downtime for all VMs on that storage.
> >
> >As you see , downgrading is more simpler. For me v6.5 was working,
> >while
> >anything above (6.6+) was causing complete lockdown.  Also v7.0 was
> >working, but it's supported  in oVirt 4.4.
> >
> >Best Regards,
> >Strahil Nikolov
> >
> >On 22 June 2020 at 7:21:15 GMT+03:00, C Williams
> >
> >wrote:
> >>Another question
> >>
> >>What version could I downgrade to safely ? I am at 6.9 .
> >>
> >>Thank You For Your Help !!
> >>
> >>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
> >>
> >>wrote:
> >>
> >>> You are definitely reading it wrong.
> >>> 1. I didn't create a new storage  domain ontop this new volume.
> >>> 2. I used cli
> >>>
> >>> Something like this  (in your case it should be 'replica 3'):
> >>> gluster volume create newvol replica 3 arbiter 1
> >>ovirt1:/new/brick/path
> >>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
> >>> gluster volume start newvol
> >>>
> >>> #Detach oldvol from ovirt
> >>>
> >>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
> >>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
> >>> cp -a /mnt/oldvol/* /mnt/newvol
> >>>
> >>> #Add only newvol as a storage domain in oVirt
> >>> #Import VMs
> >>>
> >>> I still think that you should downgrade your gluster packages!!!
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams
> >>
> >>> wrote:
> >>> >Strahil,
> >>> >
> >>> >It sounds like  you used a "System Managed Volume" for the new
> >>storage
> >>> >domain,is that correct?
> >>> >
> >>> >Thank You For Your Help !
> >>> >
> >>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams
> >
> >>> >wrote:
> >>> >
> >>> >> Strahil,
> >>> >>
> >>> >> So you made another oVirt Storage Domain -- then copied the data
> >>with
> >>> >cp
> >>> >> -a from the failed volume to the new volume.
> >>> >>
> >>> >> At the root of the volume there will be the old domain folder id
> >>ex
> >>> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
> >>> >>  in my case. Did that cause issues with making the new domain
> >>since
> >>> >it is
> >>> >> the same folder id as the old one ?
> >>> >>
> >>> >> Thank You For Your Help !
> >>> >>
> >>> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
> >>> >
> >>> >> wrote:
> >>> >>
> >>> >>> In my situation I had  only the ovirt nodes.
> >>> >>>
> >>> >>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
> >>> >
> >>> >>> wrote:
> >>> >>> >Strahil,
> >>> >>> 

[ovirt-users] Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread C Williams
Strahil,

The GLCL3 storage domain was detached prior to attempting to add the new
storage domain.

Should I also "Remove" it ?

Thank You For Your Help !

-- Forwarded message -
From: Strahil Nikolov 
Date: Mon, Jun 22, 2020 at 12:50 AM
Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
To: C Williams 
Cc: users 


You can't add the new volume as it contains the same data (UUID) as the old
one , thus you need to detach the old one before adding the new one - of
course this means downtime for all VMs on that storage.

As you see , downgrading is more simpler. For me v6.5 was working, while
anything above (6.6+) was causing complete lockdown.  Also v7.0 was
working, but it's supported  in oVirt 4.4.

Best Regards,
Strahil Nikolov

On 22 June 2020 at 7:21:15 GMT+03:00, C Williams 
wrote:
>Another question
>
>What version could I downgrade to safely ? I am at 6.9 .
>
>Thank You For Your Help !!
>
>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
>
>wrote:
>
>> You are definitely reading it wrong.
>> 1. I didn't create a new storage  domain ontop this new volume.
>> 2. I used cli
>>
>> Something like this  (in your case it should be 'replica 3'):
>> gluster volume create newvol replica 3 arbiter 1
>ovirt1:/new/brick/path
>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
>> gluster volume start newvol
>>
>> #Detach oldvol from ovirt
>>
>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
>> cp -a /mnt/oldvol/* /mnt/newvol
>>
>> #Add only newvol as a storage domain in oVirt
>> #Import VMs
>>
>> I still think that you should downgrade your gluster packages!!!
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams
>
>> wrote:
>> >Strahil,
>> >
>> >It sounds like  you used a "System Managed Volume" for the new
>storage
>> >domain,is that correct?
>> >
>> >Thank You For Your Help !
>> >
>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams 
>> >wrote:
>> >
>> >> Strahil,
>> >>
>> >> So you made another oVirt Storage Domain -- then copied the data
>with
>> >cp
>> >> -a from the failed volume to the new volume.
>> >>
>> >> At the root of the volume there will be the old domain folder id
>ex
>> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
>> >>  in my case. Did that cause issues with making the new domain
>since
>> >it is
>> >> the same folder id as the old one ?
>> >>
>> >> Thank You For Your Help !
>> >>
>> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
>> >
>> >> wrote:
>> >>
>> >>> In my situation I had  only the ovirt nodes.
>> >>>
>> >>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
>> >
>> >>> wrote:
>> >>> >Strahil,
>> >>> >
>> >>> >So should I make the target volume on 3 bricks which do not have
>> >ovirt
>> >>> >--
>> >>> >just gluster ? In other words (3) Centos 7 hosts ?
>> >>> >
>> >>> >Thank You For Your Help !
>> >>> >
>> >>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
>> >
>> >>> >wrote:
>> >>> >
>> >>> >> I  created a fresh volume  (which is not an ovirt sgorage
>> >domain),
>> >>> >set
>> >>> >> the  original  storage  domain  in maintenance and detached
>it.
>> >>> >> Then I 'cp  -a ' the data from the old to the new volume.
>Next,
>> >I
>> >>> >just
>> >>> >> added  the  new  storage domain (the old  one  was  a  kind
>of a
>> >>> >> 'backup')  - pointing to the  new  volume  name.
>> >>> >>
>> >>> >> If  you  observe  issues ,  I would  recommend  you  to
>downgrade
>> >>> >> gluster  packages one node  at  a  time  . Then you might be
>able
>> >to
>> >>> >> restore  your  oVirt operations.
>> >>> >>
>> >>> >> Best  Regards,
>> >>> >> Strahil  Nikolov
>> >>> >>
>> >>> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
>> >>> >
>> >>> >> wrote:
>> &

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Another question

What version could I downgrade to safely ? I am at 6.9 .
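
If I do go the downgrade route, I assume it would be something like the
following, one node at a time, waiting for heals to finish before moving on.
The exact package list and version/release string would need to be checked
against the repo first, so treat this only as a sketch:

 yum downgrade glusterfs-6.5-1.el7 glusterfs-server-6.5-1.el7 glusterfs-fuse-6.5-1.el7 \
     glusterfs-api-6.5-1.el7 glusterfs-libs-6.5-1.el7 glusterfs-cli-6.5-1.el7 \
     glusterfs-client-xlators-6.5-1.el7
 systemctl restart glusterd
 gluster volume heal images3 info      # wait until no entries remain before the next node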

Thank You For Your Help !!

On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov 
wrote:

> You are definitely reading it wrong.
> 1. I didn't create a new storage  domain ontop this new volume.
> 2. I used cli
>
> Something like this  (in your case it should be 'replica 3'):
> gluster volume create newvol replica 3 arbiter 1 ovirt1:/new/brick/path
> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
> gluster volume start newvol
>
> #Detach oldvol from ovirt
>
> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
> mount -t glusterfs ovirt1:/newvol /mnt/newvol
> cp -a /mnt/oldvol/* /mnt/newvol
>
> #Add only newvol as a storage domain in oVirt
> #Import VMs
>
> I still think that you should downgrade your gluster packages!!!
>
> Best Regards,
> Strahil Nikolov
>
> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams 
> wrote:
> >Strahil,
> >
> >It sounds like  you used a "System Managed Volume" for the new storage
> >domain,is that correct?
> >
> >Thank You For Your Help !
> >
> >On Sun, Jun 21, 2020 at 5:40 PM C Williams 
> >wrote:
> >
> >> Strahil,
> >>
> >> So you made another oVirt Storage Domain -- then copied the data with
> >cp
> >> -a from the failed volume to the new volume.
> >>
> >> At the root of the volume there will be the old domain folder id ex
> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
> >>  in my case. Did that cause issues with making the new domain since
> >it is
> >> the same folder id as the old one ?
> >>
> >> Thank You For Your Help !
> >>
> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
> >
> >> wrote:
> >>
> >>> In my situation I had  only the ovirt nodes.
> >>>
> >>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
> >
> >>> wrote:
> >>> >Strahil,
> >>> >
> >>> >So should I make the target volume on 3 bricks which do not have
> >ovirt
> >>> >--
> >>> >just gluster ? In other words (3) Centos 7 hosts ?
> >>> >
> >>> >Thank You For Your Help !
> >>> >
> >>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
> >
> >>> >wrote:
> >>> >
> >>> >> I  created a fresh volume  (which is not an ovirt sgorage
> >domain),
> >>> >set
> >>> >> the  original  storage  domain  in maintenance and detached  it.
> >>> >> Then I 'cp  -a ' the data from the old to the new volume. Next,
> >I
> >>> >just
> >>> >> added  the  new  storage domain (the old  one  was  a  kind  of a
> >>> >> 'backup')  - pointing to the  new  volume  name.
> >>> >>
> >>> >> If  you  observe  issues ,  I would  recommend  you  to downgrade
> >>> >> gluster  packages one node  at  a  time  . Then you might be able
> >to
> >>> >> restore  your  oVirt operations.
> >>> >>
> >>> >> Best  Regards,
> >>> >> Strahil  Nikolov
> >>> >>
> >>> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
> >>> >
> >>> >> wrote:
> >>> >> >Strahil,
> >>> >> >
> >>> >> >Thanks for the follow up !
> >>> >> >
> >>> >> >How did you copy the data to another volume ?
> >>> >> >
> >>> >> >I have set up another storage domain GLCLNEW1 with a new volume
> >>> >imgnew1
> >>> >> >.
> >>> >> >How would you copy all of the data from the problematic domain
> >GLCL3
> >>> >> >with
> >>> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all
> >the
> >>> >VMs,
> >>> >> >VM
> >>> >> >disks, settings, etc. ?
> >>> >> >
> >>> >> >Remember all of the regular ovirt disk copy, disk move, VM
> >export
> >>> >> >tools
> >>> >> >are failing and my VMs and disks are trapped on domain GLCL3 and
> >>> >volume
> >>> >> >images3 right now.
> >>> >> >
> >>> >> >Please let me know
> >>> >> >
> >>> >> >Thank You For Your Help

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Here is what I did to make my volume

 gluster volume create imgnew2a replica 3 transport tcp \
     ov12.strg.srcle.com:/bricks/brick10/imgnew2a \
     ov13.strg.srcle.com:/bricks/brick11/imgnew2a \
     ov14.strg.srcle.com:/bricks/brick12/imgnew2a

on a host with the old volume I did this

 mount -t glusterfs yy.yy.24.18:/images3/ /mnt/test/      # old (defective) volume
 ls /mnt/test
 mount
 mkdir /mnt/test1trg
 mount -t glusterfs yy.yy.24.24:/imgnew2a /mnt/test1trg   # new volume
 mount
 ls /mnt/test
 cp -a /mnt/test/* /mnt/test1trg/

When I tried to add the storage domain -- I got the errors described
previously about needing to clean out the old domain.
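
For anyone who hits the same wall: the cp -a above also copies the old storage-domain directory (5fe3ad3f-2d21-404c-832e-4dc7318ca10d here) together with its dom_md metadata, which is what the "clean it first" complaint is about. A rough sketch of the two ways around it -- the paths and the choice between them are assumptions to verify on a test domain first:

# Inspect what the copy brought over onto the new volume:
ls /mnt/test1trg
cat /mnt/test1trg/5fe3ad3f-2d21-404c-832e-4dc7318ca10d/dom_md/metadata

# Option 1 (keeps the copied VMs): use "Import Domain" in the Admin Portal
#          instead of "New Domain", so the existing metadata is registered.
# Option 2 (start empty, discarding the copy): move the copied tree aside
#          before creating a brand-new domain on the volume:
mv /mnt/test1trg/5fe3ad3f-2d21-404c-832e-4dc7318ca10d \
   /mnt/test1trg/5fe3ad3f-2d21-404c-832e-4dc7318ca10d.bak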

Thank You For Your Help !

On Mon, Jun 22, 2020 at 12:01 AM C Williams  wrote:

> Thanks Strahil
>
> I made a new gluster volume using only gluster CLI. Mounted the old volume
> and the new volume. Copied my data from the old volume to the new domain.
> Set the volume options like the old domain via the CLI. Tried to make a new
> storage domain using the paths to the new servers. However, oVirt
> complained that there was already a domain there and that I needed to clean
> it first. .
>
> What to do ?
>
> Thank You For Your Help !
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KR4ARQ5IN6LEZ2HCAPBEH5G6GA3LPRJ2/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Thanks Strahil

I made a new gluster volume using only the gluster CLI, mounted the old volume
and the new volume, and copied my data from the old volume to the new one. I
set the volume options to match the old volume via the CLI, then tried to make
a new storage domain using the paths to the new servers. However, oVirt
complained that there was already a domain there and that I needed to clean
it first.

What to do ?

Thank You For Your Help !
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQJYO6CK4WQHQLM2FJ435MXH3H2BI6JL/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Strahil,

It sounds like you used a "System Managed Volume" for the new storage
domain, is that correct?

Thank You For Your Help !

On Sun, Jun 21, 2020 at 5:40 PM C Williams  wrote:

> Strahil,
>
> So you made another oVirt Storage Domain -- then copied the data with cp
> -a from the failed volume to the new volume.
>
> At the root of the volume there will be the old domain folder id ex
> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
>  in my case. Did that cause issues with making the new domain since it is
> the same folder id as the old one ?
>
> Thank You For Your Help !
>
> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov 
> wrote:
>
>> In my situation I had  only the ovirt nodes.
>>
>> На 21 юни 2020 г. 22:43:04 GMT+03:00, C Williams 
>> написа:
>> >Strahil,
>> >
>> >So should I make the target volume on 3 bricks which do not have ovirt
>> >--
>> >just gluster ? In other words (3) Centos 7 hosts ?
>> >
>> >Thank You For Your Help !
>> >
>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov 
>> >wrote:
>> >
>> >> I  created a fresh volume  (which is not an ovirt sgorage domain),
>> >set
>> >> the  original  storage  domain  in maintenance and detached  it.
>> >> Then I 'cp  -a ' the data from the old to the new volume. Next,  I
>> >just
>> >> added  the  new  storage domain (the old  one  was  a  kind  of a
>> >> 'backup')  - pointing to the  new  volume  name.
>> >>
>> >> If  you  observe  issues ,  I would  recommend  you  to downgrade
>> >> gluster  packages one node  at  a  time  . Then you might be able to
>> >> restore  your  oVirt operations.
>> >>
>> >> Best  Regards,
>> >> Strahil  Nikolov
>> >>
>> >> На 21 юни 2020 г. 18:01:31 GMT+03:00, C Williams
>> >
>> >> написа:
>> >> >Strahil,
>> >> >
>> >> >Thanks for the follow up !
>> >> >
>> >> >How did you copy the data to another volume ?
>> >> >
>> >> >I have set up another storage domain GLCLNEW1 with a new volume
>> >imgnew1
>> >> >.
>> >> >How would you copy all of the data from the problematic domain GLCL3
>> >> >with
>> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the
>> >VMs,
>> >> >VM
>> >> >disks, settings, etc. ?
>> >> >
>> >> >Remember all of the regular ovirt disk copy, disk move, VM export
>> >> >tools
>> >> >are failing and my VMs and disks are trapped on domain GLCL3 and
>> >volume
>> >> >images3 right now.
>> >> >
>> >> >Please let me know
>> >> >
>> >> >Thank You For Your Help !
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
>> >
>> >> >wrote:
>> >> >
>> >> >> Sorry to hear that.
>> >> >> I can say that  for  me 6.5 was  working,  while 6.6 didn't and I
>> >> >upgraded
>> >> >> to 7.0 .
>> >> >> In the ended ,  I have ended  with creating a  new fresh volume
>> >and
>> >> >> physically copying the data there,  then I detached the storage
>> >> >domains and
>> >> >> attached  to the  new ones  (which holded the old  data),  but I
>> >> >could
>> >> >> afford  the downtime.
>> >> >> Also,  I can say that v7.0  (  but not 7.1  or anything later)
>> >also
>> >> >> worked  without the ACL  issue,  but it causes some trouble  in
>> >oVirt
>> >> >- so
>> >> >> avoid that unless  you have no other options.
>> >> >>
>> >> >> Best Regards,
>> >> >> Strahil  Nikolov
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> На 21 юни 2020 г. 4:39:46 GMT+03:00, C Williams
>> >> >
>> >> >> написа:
>> >> >> >Hello,
>> >> >> >
>> >> >> >Upgrading diidn't help
>> >> >> >
>> >> >> >Still acl errors trying to use a Virtual Disk from a VM
>> >> >> >
>> >> >> >[roo

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Strahil,

So you made another oVirt Storage Domain -- then copied the data with cp -a
from the failed volume to the new volume.

At the root of the volume there will be the old domain folder ID, e.g.
5fe3ad3f-2d21-404c-832e-4dc7318ca10d in my case. Did that cause issues with
making the new domain, since it is the same folder ID as the old one ?

Thank You For Your Help !

On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov 
wrote:

> In my situation I had  only the ovirt nodes.
>
> На 21 юни 2020 г. 22:43:04 GMT+03:00, C Williams 
> написа:
> >Strahil,
> >
> >So should I make the target volume on 3 bricks which do not have ovirt
> >--
> >just gluster ? In other words (3) Centos 7 hosts ?
> >
> >Thank You For Your Help !
> >
> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov 
> >wrote:
> >
> >> I  created a fresh volume  (which is not an ovirt sgorage domain),
> >set
> >> the  original  storage  domain  in maintenance and detached  it.
> >> Then I 'cp  -a ' the data from the old to the new volume. Next,  I
> >just
> >> added  the  new  storage domain (the old  one  was  a  kind  of a
> >> 'backup')  - pointing to the  new  volume  name.
> >>
> >> If  you  observe  issues ,  I would  recommend  you  to downgrade
> >> gluster  packages one node  at  a  time  . Then you might be able to
> >> restore  your  oVirt operations.
> >>
> >> Best  Regards,
> >> Strahil  Nikolov
> >>
> >> На 21 юни 2020 г. 18:01:31 GMT+03:00, C Williams
> >
> >> написа:
> >> >Strahil,
> >> >
> >> >Thanks for the follow up !
> >> >
> >> >How did you copy the data to another volume ?
> >> >
> >> >I have set up another storage domain GLCLNEW1 with a new volume
> >imgnew1
> >> >.
> >> >How would you copy all of the data from the problematic domain GLCL3
> >> >with
> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the
> >VMs,
> >> >VM
> >> >disks, settings, etc. ?
> >> >
> >> >Remember all of the regular ovirt disk copy, disk move, VM export
> >> >tools
> >> >are failing and my VMs and disks are trapped on domain GLCL3 and
> >volume
> >> >images3 right now.
> >> >
> >> >Please let me know
> >> >
> >> >Thank You For Your Help !
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
> >
> >> >wrote:
> >> >
> >> >> Sorry to hear that.
> >> >> I can say that  for  me 6.5 was  working,  while 6.6 didn't and I
> >> >upgraded
> >> >> to 7.0 .
> >> >> In the ended ,  I have ended  with creating a  new fresh volume
> >and
> >> >> physically copying the data there,  then I detached the storage
> >> >domains and
> >> >> attached  to the  new ones  (which holded the old  data),  but I
> >> >could
> >> >> afford  the downtime.
> >> >> Also,  I can say that v7.0  (  but not 7.1  or anything later)
> >also
> >> >> worked  without the ACL  issue,  but it causes some trouble  in
> >oVirt
> >> >- so
> >> >> avoid that unless  you have no other options.
> >> >>
> >> >> Best Regards,
> >> >> Strahil  Nikolov
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> На 21 юни 2020 г. 4:39:46 GMT+03:00, C Williams
> >> >
> >> >> написа:
> >> >> >Hello,
> >> >> >
> >> >> >Upgrading diidn't help
> >> >> >
> >> >> >Still acl errors trying to use a Virtual Disk from a VM
> >> >> >
> >> >> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
> >> >> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
> >> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
> >> >0-images3-access-control:
> >> >> >client:
> >> >>
> >> >>
> >>
> >>
>
> >>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >> >req(uid:107,gid:107,perm:1,ngrps:3),
> >> >

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Strahil,

So should I make the target volume on 3 bricks which do not have ovirt --
just gluster ? In other words (3) Centos 7 hosts ?

Thank You For Your Help !
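
For what it's worth, a plain replica 3 volume on three stand-alone CentOS 7 boxes is just the usual glusterd setup. A minimal sketch, with hostnames and brick paths as placeholders and only a couple of the oVirt-relevant options shown (the full option set used on images3 appears elsewhere in this thread; also pin the same gluster release as the existing nodes, since version skew is what started this thread):

# On each of the three CentOS 7 hosts:
yum install -y centos-release-gluster6
yum install -y glusterfs-server
systemctl enable --now glusterd

# From one of them, form the pool and create the volume:
gluster peer probe gl02.example.com
gluster peer probe gl03.example.com
gluster volume create tgtvol replica 3 transport tcp \
    gl01.example.com:/bricks/brick1/tgtvol \
    gl02.example.com:/bricks/brick1/tgtvol \
    gl03.example.com:/bricks/brick1/tgtvol

# oVirt expects vdsm:kvm (36:36) ownership on the volume:
gluster volume set tgtvol storage.owner-uid 36
gluster volume set tgtvol storage.owner-gid 36
gluster volume set tgtvol network.ping-timeout 30
gluster volume start tgtvol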

On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov 
wrote:

> I  created a fresh volume  (which is not an ovirt sgorage domain),  set
> the  original  storage  domain  in maintenance and detached  it.
> Then I 'cp  -a ' the data from the old to the new volume. Next,  I just
> added  the  new  storage domain (the old  one  was  a  kind  of a
> 'backup')  - pointing to the  new  volume  name.
>
> If  you  observe  issues ,  I would  recommend  you  to downgrade
> gluster  packages one node  at  a  time  . Then you might be able to
> restore  your  oVirt operations.
>
> Best  Regards,
> Strahil  Nikolov
>
> На 21 юни 2020 г. 18:01:31 GMT+03:00, C Williams 
> написа:
> >Strahil,
> >
> >Thanks for the follow up !
> >
> >How did you copy the data to another volume ?
> >
> >I have set up another storage domain GLCLNEW1 with a new volume imgnew1
> >.
> >How would you copy all of the data from the problematic domain GLCL3
> >with
> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the VMs,
> >VM
> >disks, settings, etc. ?
> >
> >Remember all of the regular ovirt disk copy, disk move, VM export
> >tools
> >are failing and my VMs and disks are trapped on domain GLCL3 and volume
> >images3 right now.
> >
> >Please let me know
> >
> >Thank You For Your Help !
> >
> >
> >
> >
> >
> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov 
> >wrote:
> >
> >> Sorry to hear that.
> >> I can say that  for  me 6.5 was  working,  while 6.6 didn't and I
> >upgraded
> >> to 7.0 .
> >> In the ended ,  I have ended  with creating a  new fresh volume and
> >> physically copying the data there,  then I detached the storage
> >domains and
> >> attached  to the  new ones  (which holded the old  data),  but I
> >could
> >> afford  the downtime.
> >> Also,  I can say that v7.0  (  but not 7.1  or anything later)  also
> >> worked  without the ACL  issue,  but it causes some trouble  in oVirt
> >- so
> >> avoid that unless  you have no other options.
> >>
> >> Best Regards,
> >> Strahil  Nikolov
> >>
> >>
> >>
> >>
> >> На 21 юни 2020 г. 4:39:46 GMT+03:00, C Williams
> >
> >> написа:
> >> >Hello,
> >> >
> >> >Upgrading diidn't help
> >> >
> >> >Still acl errors trying to use a Virtual Disk from a VM
> >> >
> >> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
> >> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
> >0-images3-access-control:
> >> >client:
> >>
> >>
>
> >>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >req(uid:107,gid:107,perm:1,ngrps:3),
> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >[Permission denied]
> >> >The message "I [MSGID: 139001]
> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
> >0-images3-access-control:
> >> >client:
> >>
> >>
>
> >>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >req(uid:107,gid:107,perm:1,ngrps:3),
> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >[Permission denied]" repeated 2 times between [2020-06-21
> >> >01:33:45.665888]
> >> >and [2020-06-21 01:33:45.806779]
> >> >
> >> >Thank You For Your Help !
> >> >
> >> >On Sat, Jun 20, 2020 at 8:59 PM C Williams 
> >> >wrote:
> >> >
> >> >> Hello,
> >> >>
> >> >> Based on the situation, I am planning to upgrade the 3 affected
> >> >hosts.
> >> >>
> >> >> My reasoning is that the hosts/bricks were attached to 6.9 at one
> >> >time.
> >> >>
> >> >> Thanks For Your Help !
> >> >>
> >> >> On Sat, Jun 20, 2020 at 8:38 PM C Williams
> >
> >> >> wrote:
> >> >>
> >

[ovirt-users] Re: Multiple Gluster ACL issues with oVirt

2020-06-21 Thread C Williams
Hello,

I look forward to hearing back about a fix for this !

Thank You All For Your Help !

On Sun, Jun 21, 2020 at 9:47 AM Sahina Bose  wrote:

> Thanks Strahil.
>
> Adding Sas and Ravi for their inputs.
>
> On Sun, 21 Jun 2020 at 6:11 PM, Strahil Nikolov 
> wrote:
>
>> Hello Sahina, Sandro,
>>
>> I have noticed that the ACL issue with Gluster (
>> https://github.com/gluster/glusterfs/issues/876) is  happening to
>> multiple oVirt users  (so far at least 5)  and I think that this  issue
>> needs greater attention.
>> Did anyone from the RHHI team managed  to reproduce the  bug  ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AGCTENWEXPTDCNZFZDYRXPRZ2QF422SL/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z5ATYKPCJ3B4K7MEDORL76HE5ASX6PKA/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Strahil,

Thanks for the follow up !

How did you copy the data to another volume ?

I have set up another storage domain GLCLNEW1 with a new volume imgnew1 .
How would you copy all of the data from the problematic domain GLCL3 with
volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the VMs, VM
disks, settings, etc. ?

Remember all of the regular ovirt disk copy, disk move, VM export  tools
are failing and my VMs and disks are trapped on domain GLCL3 and volume
images3 right now.

Please let me know

Thank You For Your Help !
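
A rough outline of the approach Strahil describes below, using the names from this thread (images3 on GLCL3, imgnew1 on GLCLNEW1); the mount points, the mount server for imgnew1 and the assumption that imgnew1 is still empty are mine:

# 1. In the Admin Portal, put GLCL3 into maintenance and detach it, so nothing
#    writes to images3 during the copy.

# 2. Copy the data at the gluster level, preserving ownership and permissions
#    (adjust the server for imgnew1 to wherever that volume actually lives):
mkdir -p /mnt/old /mnt/new
mount -t glusterfs 192.168.24.18:/images3 /mnt/old
mount -t glusterfs 192.168.24.18:/imgnew1 /mnt/new
cp -a /mnt/old/* /mnt/new/

# 3. In the Admin Portal, import a storage domain that points at imgnew1, then
#    use the domain's "VM Import" tab to register the VMs and their disks.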





On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov 
wrote:

> Sorry to hear that.
> I can say that  for  me 6.5 was  working,  while 6.6 didn't and I upgraded
> to 7.0 .
> In the ended ,  I have ended  with creating a  new fresh volume and
> physically copying the data there,  then I detached the storage domains and
> attached  to the  new ones  (which holded the old  data),  but I could
> afford  the downtime.
> Also,  I can say that v7.0  (  but not 7.1  or anything later)  also
> worked  without the ACL  issue,  but it causes some trouble  in oVirt - so
> avoid that unless  you have no other options.
>
> Best Regards,
> Strahil  Nikolov
>
>
>
>
> На 21 юни 2020 г. 4:39:46 GMT+03:00, C Williams 
> написа:
> >Hello,
> >
> >Upgrading diidn't help
> >
> >Still acl errors trying to use a Virtual Disk from a VM
> >
> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
> >[posix-acl.c:263:posix_acl_log_permit_denied] 0-images3-access-control:
> >client:
>
> >CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >req(uid:107,gid:107,perm:1,ngrps:3),
> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >[Permission denied]
> >The message "I [MSGID: 139001]
> >[posix-acl.c:263:posix_acl_log_permit_denied] 0-images3-access-control:
> >client:
>
> >CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >req(uid:107,gid:107,perm:1,ngrps:3),
> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >[Permission denied]" repeated 2 times between [2020-06-21
> >01:33:45.665888]
> >and [2020-06-21 01:33:45.806779]
> >
> >Thank You For Your Help !
> >
> >On Sat, Jun 20, 2020 at 8:59 PM C Williams 
> >wrote:
> >
> >> Hello,
> >>
> >> Based on the situation, I am planning to upgrade the 3 affected
> >hosts.
> >>
> >> My reasoning is that the hosts/bricks were attached to 6.9 at one
> >time.
> >>
> >> Thanks For Your Help !
> >>
> >> On Sat, Jun 20, 2020 at 8:38 PM C Williams 
> >> wrote:
> >>
> >>> Strahil,
> >>>
> >>> The gluster version on the current 3 gluster hosts is  6.7 (last
> >update
> >>> 2/26). These 3 hosts provide 1 brick each for the replica 3 volume.
> >>>
> >>> Earlier I had tried to add 6 additional hosts to the cluster. Those
> >new
> >>> hosts were 6.9 gluster.
> >>>
> >>> I attempted to make a new separate volume with 3 bricks provided by
> >the 3
> >>> new gluster  6.9 hosts. After having many errors from the oVirt
> >interface,
> >>> I gave up and removed the 6 new hosts from the cluster. That is
> >where the
> >>> problems started. The intent was to expand the gluster cluster while
> >making
> >>> 2 new volumes for that cluster. The ovirt compute cluster would
> >allow for
> >>> efficient VM migration between 9 hosts -- while having separate
> >gluster
> >>> volumes for safety purposes.
> >>>
> >>> Looking at the brick logs, I see where there are acl errors starting
> >from
> >>> the time of the removal of the 6 new hosts.
> >>>
> >>> Please check out the attached brick log from 6/14-18. The events
> >started
> >>> on 6/17.
> >>>
> >>> I wish I had a downgrade path.
> >>>
> >>> Thank You For The Help !!
> >>>
> >>> On Sat, Jun 20, 2020 at 7:47 PM Strahil Nikolov
> >
> >>> wrote:
> >>>
> >>>> Hi ,
> >>>>
> >>>>
> >>>> This one really looks like the ACL bug I was hit with when I
> >updated
> >

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-20 Thread C Williams
Hello,

Upgrading didn't help.

Still ACL errors when trying to use a Virtual Disk from a VM:

[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
[2020-06-21 01:33:45.665888] I [MSGID: 139001]
[posix-acl.c:263:posix_acl_log_permit_denied] 0-images3-access-control:
client:
CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
req(uid:107,gid:107,perm:1,ngrps:3),
ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
[Permission denied]
The message "I [MSGID: 139001]
[posix-acl.c:263:posix_acl_log_permit_denied] 0-images3-access-control:
client:
CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
req(uid:107,gid:107,perm:1,ngrps:3),
ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
[Permission denied]" repeated 2 times between [2020-06-21 01:33:45.665888]
and [2020-06-21 01:33:45.806779]

Thank You For Your Help !
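
A quick way to see whether all three replicas are logging the same denial, not just ov06 (assuming root ssh between the nodes and the stock brick-log location):

for h in ov06 ov07 ov08; do
    echo "== $h =="
    ssh "$h" 'grep -c posix_acl_log_permit_denied /var/log/glusterfs/bricks/*.log'
done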

On Sat, Jun 20, 2020 at 8:59 PM C Williams  wrote:

> Hello,
>
> Based on the situation, I am planning to upgrade the 3 affected hosts.
>
> My reasoning is that the hosts/bricks were attached to 6.9 at one time.
>
> Thanks For Your Help !
>
> On Sat, Jun 20, 2020 at 8:38 PM C Williams 
> wrote:
>
>> Strahil,
>>
>> The gluster version on the current 3 gluster hosts is  6.7 (last update
>> 2/26). These 3 hosts provide 1 brick each for the replica 3 volume.
>>
>> Earlier I had tried to add 6 additional hosts to the cluster. Those new
>> hosts were 6.9 gluster.
>>
>> I attempted to make a new separate volume with 3 bricks provided by the 3
>> new gluster  6.9 hosts. After having many errors from the oVirt interface,
>> I gave up and removed the 6 new hosts from the cluster. That is where the
>> problems started. The intent was to expand the gluster cluster while making
>> 2 new volumes for that cluster. The ovirt compute cluster would allow for
>> efficient VM migration between 9 hosts -- while having separate gluster
>> volumes for safety purposes.
>>
>> Looking at the brick logs, I see where there are acl errors starting from
>> the time of the removal of the 6 new hosts.
>>
>> Please check out the attached brick log from 6/14-18. The events started
>> on 6/17.
>>
>> I wish I had a downgrade path.
>>
>> Thank You For The Help !!
>>
>> On Sat, Jun 20, 2020 at 7:47 PM Strahil Nikolov 
>> wrote:
>>
>>> Hi ,
>>>
>>>
>>> This one really looks like the ACL bug I was hit with when I updated
>>> from Gluster v6.5 to 6.6 and later from 7.0 to 7.2.
>>>
>>> Did you update your setup recently ? Did you upgrade gluster also ?
>>>
>>> You have to check the gluster logs in order to verify that, so you can
>>> try:
>>>
>>> 1. Set Gluster logs to trace level (for details check:
>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
>>> )
>>> 2. Power up a VM that was already off , or retry the procedure from the
>>> logs you sent.
>>> 3. Stop the trace level of the logs
>>> 4. Check libvirt logs on the host that was supposed to power up the VM
>>> (in case a VM was powered on)
>>> 5. Check the gluster brick logs on all nodes for ACL errors.
>>> Here is a sample from my old logs:
>>>
>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:19:41.489047] I
>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>>> [Permission denied]
>>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:22:51.818796] I
>>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>>> [Permission denied]
>>> gluster_bricks-data_fast4-data_fast4.log

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-20 Thread C Williams
Hello,

Based on the situation, I am planning to upgrade the 3 affected hosts.

My reasoning is that the hosts/bricks were attached to 6.9 at one time.

Thanks For Your Help !
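
If it helps in the meantime, the trace-level capture Strahil outlines in the quoted steps below boils down to a couple of volume options (volume name assumed to be images3; TRACE is extremely verbose, so turn it back down right after reproducing the failure):

gluster volume set images3 diagnostics.brick-log-level TRACE
gluster volume set images3 diagnostics.client-log-level TRACE
# ... power the VM on / retry the failing disk operation here ...
gluster volume set images3 diagnostics.brick-log-level INFO
gluster volume set images3 diagnostics.client-log-level INFO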

On Sat, Jun 20, 2020 at 8:38 PM C Williams  wrote:

> Strahil,
>
> The gluster version on the current 3 gluster hosts is  6.7 (last update
> 2/26). These 3 hosts provide 1 brick each for the replica 3 volume.
>
> Earlier I had tried to add 6 additional hosts to the cluster. Those new
> hosts were 6.9 gluster.
>
> I attempted to make a new separate volume with 3 bricks provided by the 3
> new gluster  6.9 hosts. After having many errors from the oVirt interface,
> I gave up and removed the 6 new hosts from the cluster. That is where the
> problems started. The intent was to expand the gluster cluster while making
> 2 new volumes for that cluster. The ovirt compute cluster would allow for
> efficient VM migration between 9 hosts -- while having separate gluster
> volumes for safety purposes.
>
> Looking at the brick logs, I see where there are acl errors starting from
> the time of the removal of the 6 new hosts.
>
> Please check out the attached brick log from 6/14-18. The events started
> on 6/17.
>
> I wish I had a downgrade path.
>
> Thank You For The Help !!
>
> On Sat, Jun 20, 2020 at 7:47 PM Strahil Nikolov 
> wrote:
>
>> Hi ,
>>
>>
>> This one really looks like the ACL bug I was hit with when I updated from
>> Gluster v6.5 to 6.6 and later from 7.0 to 7.2.
>>
>> Did you update your setup recently ? Did you upgrade gluster also ?
>>
>> You have to check the gluster logs in order to verify that, so you can
>> try:
>>
>> 1. Set Gluster logs to trace level (for details check:
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
>> )
>> 2. Power up a VM that was already off , or retry the procedure from the
>> logs you sent.
>> 3. Stop the trace level of the logs
>> 4. Check libvirt logs on the host that was supposed to power up the VM
>> (in case a VM was powered on)
>> 5. Check the gluster brick logs on all nodes for ACL errors.
>> Here is a sample from my old logs:
>>
>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:19:41.489047] I
>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission
>> denied]
>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:22:51.818796] I
>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission
>> denied]
>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:24:43.732856] I
>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission
>> denied]
>> gluster_bricks-data_fast4-data_fast4.log:[2020-03-18 13:26:50.758178] I
>> [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
>> 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-
>> 4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
>> gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> req(uid:36,gid:36,perm:1,ngrps:3), ctx
>> (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission
>> denied]
>>
>>
>> In my case , the workaround was to downgrade the gluster packages on all
>> nodes (and reboot each node 1 by 1 ) if the major version is the same, but
>> if you upgraded to v7.X - then you can try the v7.0 .
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> В събота, 20 юни 2020 г., 18:48:42 ч. Гринуич+3, C Williams <
>> cwilliams3...@gmail.

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-20 Thread C Williams
Strahil,

I understand. Please keep me posted.

Thanks For The Help !

On Sat, Jun 20, 2020 at 4:36 AM Strahil Nikolov 
wrote:

> Hey C Williams,
>
> sorry for the delay,  but I couldn't get somw time to check your logs.
> Will  try a  little  bit later.
>
> Best Regards,
> Strahil  Nikolov
>
> На 20 юни 2020 г. 2:37:22 GMT+03:00, C Williams 
> написа:
> >Hello,
> >
> >Was wanting to follow up on this issue. Users are impacted.
> >
> >Thank You
> >
> >On Fri, Jun 19, 2020 at 9:20 AM C Williams 
> >wrote:
> >
> >> Hello,
> >>
> >> Here are the logs (some IPs are changed )
> >>
> >> ov05 is the SPM
> >>
> >> Thank You For Your Help !
> >>
> >> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov
> >
> >> wrote:
> >>
> >>> Check on the hosts tab , which is your current SPM (last column in
> >Admin
> >>> UI).
> >>> Then open the /var/log/vdsm/vdsm.log  and repeat the operation.
> >>> Then provide the log from that host and the engine's log (on the
> >>> HostedEngine VM or on your standalone engine).
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>> На 18 юни 2020 г. 23:59:36 GMT+03:00, C Williams
> >
> >>> написа:
> >>> >Resending to eliminate email issues
> >>> >
> >>> >-- Forwarded message -
> >>> >From: C Williams 
> >>> >Date: Thu, Jun 18, 2020 at 4:01 PM
> >>> >Subject: Re: [ovirt-users] Fwd: Issues with Gluster Domain
> >>> >To: Strahil Nikolov 
> >>> >
> >>> >
> >>> >Here is output from mount
> >>> >
> >>> >192.168.24.12:/stor/import0 on
> >>> >/rhev/data-center/mnt/192.168.24.12:_stor_import0
> >>> >type nfs4
> >>>
> >>>
>
> >>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.12)
> >>> >192.168.24.13:/stor/import1 on
> >>> >/rhev/data-center/mnt/192.168.24.13:_stor_import1
> >>> >type nfs4
> >>>
> >>>
>
> >>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
> >>> >192.168.24.13:/stor/iso1 on
> >>> >/rhev/data-center/mnt/192.168.24.13:_stor_iso1
> >>> >type nfs4
> >>>
> >>>
>
> >>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
> >>> >192.168.24.13:/stor/export0 on
> >>> >/rhev/data-center/mnt/192.168.24.13:_stor_export0
> >>> >type nfs4
> >>>
> >>>
>
> >>(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
> >>> >192.168.24.15:/images on
> >>> >/rhev/data-center/mnt/glusterSD/192.168.24.15:_images
> >>> >type fuse.glusterfs
> >>>
> >>>
>
> >>(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> >>> >192.168.24.18:/images3 on
> >>> >/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3
> >>> >type fuse.glusterfs
> >>>
> >>>
>
> >>(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> >>> >tmpfs on /run/user/0 type tmpfs
> >>> >(rw,nosuid,nodev,relatime,seclabel,size=13198392k,mode=700)
> >>> >[root@ov06 glusterfs]#
> >>> >
> >>> >Also here is a screenshot of the console
> >>> >
> >>> >[image: image.png]
> >>> >The other domains are up
> >>> >
> >>> >Import0 and Import1 are NFS . GLCL0 is gluster. They all are
> >running
> >>> >VMs
> >>> >
> >>> >Thank You For Your Help !
> >>> >
> >>> >On Thu, Jun 18, 2020 at 3:51 PM Strahil Nikolov
> >
> >>> >wrote:
> >>> >
> >>> >> I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1'
> >>> >mounted
> >>> &

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-19 Thread C Williams
Hello,

I wanted to follow up on this issue. Users are impacted.

Thank You

On Fri, Jun 19, 2020 at 9:20 AM C Williams  wrote:

> Hello,
>
> Here are the logs (some IPs are changed )
>
> ov05 is the SPM
>
> Thank You For Your Help !
>
> On Thu, Jun 18, 2020 at 11:31 PM Strahil Nikolov 
> wrote:
>
>> Check on the hosts tab , which is your current SPM (last column in Admin
>> UI).
>> Then open the /var/log/vdsm/vdsm.log  and repeat the operation.
>> Then provide the log from that host and the engine's log (on the
>> HostedEngine VM or on your standalone engine).
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> На 18 юни 2020 г. 23:59:36 GMT+03:00, C Williams 
>> написа:
>> >Resending to eliminate email issues
>> >
>> >-- Forwarded message -
>> >From: C Williams 
>> >Date: Thu, Jun 18, 2020 at 4:01 PM
>> >Subject: Re: [ovirt-users] Fwd: Issues with Gluster Domain
>> >To: Strahil Nikolov 
>> >
>> >
>> >Here is output from mount
>> >
>> >192.168.24.12:/stor/import0 on
>> >/rhev/data-center/mnt/192.168.24.12:_stor_import0
>> >type nfs4
>>
>> >(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.12)
>> >192.168.24.13:/stor/import1 on
>> >/rhev/data-center/mnt/192.168.24.13:_stor_import1
>> >type nfs4
>>
>> >(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>> >192.168.24.13:/stor/iso1 on
>> >/rhev/data-center/mnt/192.168.24.13:_stor_iso1
>> >type nfs4
>>
>> >(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>> >192.168.24.13:/stor/export0 on
>> >/rhev/data-center/mnt/192.168.24.13:_stor_export0
>> >type nfs4
>>
>> >(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
>> >192.168.24.15:/images on
>> >/rhev/data-center/mnt/glusterSD/192.168.24.15:_images
>> >type fuse.glusterfs
>>
>> >(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>> >192.168.24.18:/images3 on
>> >/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3
>> >type fuse.glusterfs
>>
>> >(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>> >tmpfs on /run/user/0 type tmpfs
>> >(rw,nosuid,nodev,relatime,seclabel,size=13198392k,mode=700)
>> >[root@ov06 glusterfs]#
>> >
>> >Also here is a screenshot of the console
>> >
>> >[image: image.png]
>> >The other domains are up
>> >
>> >Import0 and Import1 are NFS . GLCL0 is gluster. They all are running
>> >VMs
>> >
>> >Thank You For Your Help !
>> >
>> >On Thu, Jun 18, 2020 at 3:51 PM Strahil Nikolov 
>> >wrote:
>> >
>> >> I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1'
>> >mounted
>> >> at all  .
>> >> What is the status  of all storage domains ?
>> >>
>> >> Best  Regards,
>> >> Strahil  Nikolov
>> >>
>> >> На 18 юни 2020 г. 21:43:44 GMT+03:00, C Williams
>> >
>> >> написа:
>> >> >  Resending to deal with possible email issues
>> >> >
>> >> >-- Forwarded message -
>> >> >From: C Williams 
>> >> >Date: Thu, Jun 18, 2020 at 2:07 PM
>> >> >Subject: Re: [ovirt-users] Issues with Gluster Domain
>> >> >To: Strahil Nikolov 
>> >> >
>> >> >
>> >> >More
>> >> >
>> >> >[root@ov06 ~]# for i in $(gluster volume list);  do  echo $i;echo;
>> >> >gluster
>> >> >volume info $i; echo;echo;gluster volume status
>> >$i;echo;echo;echo;done
>> >> >images3
>> >> >
>> >> >
>> >> >Volume Name: images3
>> >> >Type: Replicate
>> >> >Volume ID: 0243d439-1b29-47d0-ab39-d61c2f15ae8b
>> >> >Status: Started
>> >> >Snapshot Count: 0
>> >> >Number of Bricks: 1 x 3 = 3
>>

[ovirt-users] Fwd: Issues with Gluster Domain

2020-06-18 Thread C Williams
  Resending to deal with possible email issues

-- Forwarded message -
From: C Williams 
Date: Thu, Jun 18, 2020 at 2:07 PM
Subject: Re: [ovirt-users] Issues with Gluster Domain
To: Strahil Nikolov 


More

[root@ov06 ~]# for i in $(gluster volume list);  do  echo $i;echo; gluster
volume info $i; echo;echo;gluster volume status $i;echo;echo;echo;done
images3


Volume Name: images3
Type: Replicate
Volume ID: 0243d439-1b29-47d0-ab39-d61c2f15ae8b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.24.18:/bricks/brick04/images3
Brick2: 192.168.24.19:/bricks/brick05/images3
Brick3: 192.168.24.20:/bricks/brick06/images3
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
user.cifs: off
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable


Status of volume: images3
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 192.168.24.18:/bricks/brick04/images3 49152 0  Y

Brick 192.168.24.19:/bricks/brick05/images3 49152 0  Y
6779
Brick 192.168.24.20:/bricks/brick06/images3 49152 0  Y
7227
Self-heal Daemon on localhost   N/A   N/AY
6689
Self-heal Daemon on ov07.ntc.srcle.com  N/A   N/AY
6802
Self-heal Daemon on ov08.ntc.srcle.com  N/A   N/AY
7250

Task Status of Volume images3
--
There are no active volume tasks




[root@ov06 ~]# ls -l  /rhev/data-center/mnt/glusterSD/
total 16
drwxr-xr-x. 5 vdsm kvm 8192 Jun 18 14:04 192.168.24.15:_images
drwxr-xr-x. 5 vdsm kvm 8192 Jun 18 14:05 192.168.24.18:_images3
[root@ov06 ~]#
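
A few read-only checks that go with the output above and can confirm the replica itself is consistent before blaming the storage domain (volume name images3 as shown):

gluster peer status
gluster volume heal images3 info
gluster volume heal images3 info split-brain
gluster volume status images3 clients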

On Thu, Jun 18, 2020 at 2:03 PM C Williams  wrote:

> Strahil,
>
> Here you go -- Thank You For Your Help !
>
> BTW -- I can write a test file to gluster and it replicates properly.
> Thinking something about the oVirt Storage Domain ?
>
> [root@ov08 ~]# gluster pool list
> UUIDHostnameState
> 5b40c659-d9ab-43c3-9af8-18b074ea0b83ov06Connected
> 36ce5a00-6f65-4926-8438-696944ebadb5ov07.ntc.srcle.com  Connected
> c7e7abdb-a8f4-4842-924c-e227f0db1b29localhost   Connected
> [root@ov08 ~]# gluster volume list
> images3
>
> On Thu, Jun 18, 2020 at 1:13 PM Strahil Nikolov 
> wrote:
>
>> Log to the oVirt cluster and provide the output of:
>> gluster pool list
>> gluster volume list
>> for i in $(gluster volume list);  do  echo $i;echo; gluster volume info
>> $i; echo;echo;gluster volume status $i;echo;echo;echo;done
>>
>> ls -l  /rhev/data-center/mnt/glusterSD/
>>
>> Best Regards,
>> Strahil  Nikolov
>>
>>
>> На 18 юни 2020 г. 19:17:46 GMT+03:00, C Williams 
>> написа:
>> >Hello,
>> >
>> >I recently added 6 hosts to an existing oVirt compute/gluster cluster.
>> >
>> >Prior to this attempted addition, my cluster had 3 Hypervisor hosts and
>> >3
>> >gluster bricks which made up a single gluster volume (replica 3 volume)
>> >. I
>> >added the additional hosts and made a brick on 3 of the new hosts and
>> >attempted to make a new replica 3 volume. I had  difficulty creating
>> >the
>> >new volume. So, I decided that I would make a new compute/gluster
>> >cluster
>> >for each set of 3 new hosts.
>> >
>> >I removed the 6 new hosts from the existing oVirt Compute/Gluster
>> >Cluster
>> >leaving the 3 original hosts in place with their bricks. At that point
>> >my
>> >original bricks went down and came back up . The volume showed entries
>> >that
>> >needed healing. At that point I ran gluster volume heal images3 full,
>> >etc.
>> >The volume shows no unhealed entries. I also corrected some peer
>> >errors.
>> >
>> >However, I am unable to copy disks, move disks to another domain,
>> >export
>> >disks, etc. It appears that the engine cannot locate disks properly and
>> >I
>> >ge

[ovirt-users] Fwd: Issues with Gluster Domain

2020-06-18 Thread C Williams
Resending to deal with possible email issues

Thank You For Your Help !!
-- Forwarded message -
From: C Williams 
Date: Thu, Jun 18, 2020 at 2:03 PM
Subject: Re: [ovirt-users] Issues with Gluster Domain
To: Strahil Nikolov 


Strahil,

Here you go -- Thank You For Your Help !

BTW -- I can write a test file to gluster and it replicates properly, so I'm
thinking it's something about the oVirt Storage Domain ?

[root@ov08 ~]# gluster pool list
UUIDHostnameState
5b40c659-d9ab-43c3-9af8-18b074ea0b83ov06Connected
36ce5a00-6f65-4926-8438-696944ebadb5ov07.ntc.srcle.com  Connected
c7e7abdb-a8f4-4842-924c-e227f0db1b29localhost   Connected
[root@ov08 ~]# gluster volume list
images3
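
One way to back up the "test file replicates properly" observation is to write a probe through the same fuse mount oVirt uses and check each brick directly (mount path and brick paths as shown in the volume info elsewhere in this thread):

echo probe > /rhev/data-center/mnt/glusterSD/192.168.24.18:_images3/probe.txt
ssh 192.168.24.18 ls -l /bricks/brick04/images3/probe.txt
ssh 192.168.24.19 ls -l /bricks/brick05/images3/probe.txt
ssh 192.168.24.20 ls -l /bricks/brick06/images3/probe.txt
rm /rhev/data-center/mnt/glusterSD/192.168.24.18:_images3/probe.txt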

On Thu, Jun 18, 2020 at 1:13 PM Strahil Nikolov 
wrote:

> Log to the oVirt cluster and provide the output of:
> gluster pool list
> gluster volume list
> for i in $(gluster volume list);  do  echo $i;echo; gluster volume info
> $i; echo;echo;gluster volume status $i;echo;echo;echo;done
>
> ls -l  /rhev/data-center/mnt/glusterSD/
>
> Best Regards,
> Strahil  Nikolov
>
>
> На 18 юни 2020 г. 19:17:46 GMT+03:00, C Williams 
> написа:
> >Hello,
> >
> >I recently added 6 hosts to an existing oVirt compute/gluster cluster.
> >
> >Prior to this attempted addition, my cluster had 3 Hypervisor hosts and
> >3
> >gluster bricks which made up a single gluster volume (replica 3 volume)
> >. I
> >added the additional hosts and made a brick on 3 of the new hosts and
> >attempted to make a new replica 3 volume. I had  difficulty creating
> >the
> >new volume. So, I decided that I would make a new compute/gluster
> >cluster
> >for each set of 3 new hosts.
> >
> >I removed the 6 new hosts from the existing oVirt Compute/Gluster
> >Cluster
> >leaving the 3 original hosts in place with their bricks. At that point
> >my
> >original bricks went down and came back up . The volume showed entries
> >that
> >needed healing. At that point I ran gluster volume heal images3 full,
> >etc.
> >The volume shows no unhealed entries. I also corrected some peer
> >errors.
> >
> >However, I am unable to copy disks, move disks to another domain,
> >export
> >disks, etc. It appears that the engine cannot locate disks properly and
> >I
> >get storage I/O errors.
> >
> >I have detached and removed the oVirt Storage Domain. I reimported the
> >domain and imported 2 VMs, But the VM disks exhibit the same behaviour
> >and
> >won't run from the hard disk.
> >
> >
> >I get errors such as this
> >
> >VDSM ov05 command HSMGetAllTasksStatusesVDS failed: low level Image
> >copy
> >failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none',
> >'-T', 'none', '-f', 'raw',
> >u'/rhev/data-center/mnt/glusterSD/192.168.24.18:
> _images3/5fe3ad3f-2d21-404c-832e-4dc7318ca10d/images/3ea5afbd-0fe0-4c09-8d39-e556c66a8b3d/fe6eab63-3b22-4815-bfe6-4a0ade292510',
> >'-O', 'raw',
> >u'/rhev/data-center/mnt/192.168.24.13:
> _stor_import1/1ab89386-a2ba-448b-90ab-bc816f55a328/images/f707a218-9db7-4e23-8bbd-9b12972012b6/d6591ec5-3ede-443d-bd40-93119ca7c7d5']
> >failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
> >sector 135168: Transport endpoint is not connected\\nqemu-img: error
> >while
> >reading sector 131072: Transport endpoint is not connected\\nqemu-img:
> >error while reading sector 139264: Transport endpoint is not
> >connected\\nqemu-img: error while reading sector 143360: Transport
> >endpoint
> >is not connected\\nqemu-img: error while reading sector 147456:
> >Transport
> >endpoint is not connected\\nqemu-img: error while reading sector
> >155648:
> >Transport endpoint is not connected\\nqemu-img: error while reading
> >sector
> >151552: Transport endpoint is not connected\\nqemu-img: error while
> >reading
> >sector 159744: Transport endpoint is not connected\\n')",)
> >
> >oVirt version  is 4.3.82-1.el7
> >OS CentOS Linux release 7.7.1908 (Core)
> >
> >The Gluster Cluster has been working very well until this incident.
> >
> >Please help.
> >
> >Thank You
> >
> >Charles Williams
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PAUURXLJE5NIPHOXLLXNZYEQ77JGHOH7/


[ovirt-users] Issues with Gluster Domain

2020-06-18 Thread C Williams
Hello,

I recently added 6 hosts to an existing oVirt compute/gluster cluster.

Prior to this attempted addition, my cluster had 3 Hypervisor hosts and 3
gluster bricks which made up a single gluster volume (replica 3 volume) . I
added the additional hosts and made a brick on 3 of the new hosts and
attempted to make a new replica 3 volume. I had  difficulty creating the
new volume. So, I decided that I would make a new compute/gluster cluster
for each set of 3 new hosts.

I removed the 6 new hosts from the existing oVirt Compute/Gluster Cluster
leaving the 3 original hosts in place with their bricks. At that point my
original bricks went down and came back up . The volume showed entries that
needed healing. At that point I ran gluster volume heal images3 full, etc.
The volume shows no unhealed entries. I also corrected some peer errors.

However, I am unable to copy disks, move disks to another domain, export
disks, etc. It appears that the engine cannot locate disks properly and I
get storage I/O errors.

I have detached and removed the oVirt Storage Domain. I reimported the
domain and imported 2 VMs, But the VM disks exhibit the same behaviour and
won't run from the hard disk.


I get errors such as this

VDSM ov05 command HSMGetAllTasksStatusesVDS failed: low level Image copy
failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none',
'-T', 'none', '-f', 'raw',
u'/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3/5fe3ad3f-2d21-404c-832e-4dc7318ca10d/images/3ea5afbd-0fe0-4c09-8d39-e556c66a8b3d/fe6eab63-3b22-4815-bfe6-4a0ade292510',
'-O', 'raw', 
u'/rhev/data-center/mnt/192.168.24.13:_stor_import1/1ab89386-a2ba-448b-90ab-bc816f55a328/images/f707a218-9db7-4e23-8bbd-9b12972012b6/d6591ec5-3ede-443d-bd40-93119ca7c7d5']
failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
sector 135168: Transport endpoint is not connected\\nqemu-img: error while
reading sector 131072: Transport endpoint is not connected\\nqemu-img:
error while reading sector 139264: Transport endpoint is not
connected\\nqemu-img: error while reading sector 143360: Transport endpoint
is not connected\\nqemu-img: error while reading sector 147456: Transport
endpoint is not connected\\nqemu-img: error while reading sector 155648:
Transport endpoint is not connected\\nqemu-img: error while reading sector
151552: Transport endpoint is not connected\\nqemu-img: error while reading
sector 159744: Transport endpoint is not connected\\n')",)
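
One thing that can narrow this down is reading the source image straight off the fuse mount, outside of vdsm, using the path from the failed convert above. If dd hits the same "Transport endpoint is not connected", the problem is in the gluster client/bricks rather than in the engine's copy job (read-only commands, but double-check the path before running):

SRC='/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3/5fe3ad3f-2d21-404c-832e-4dc7318ca10d/images/3ea5afbd-0fe0-4c09-8d39-e556c66a8b3d/fe6eab63-3b22-4815-bfe6-4a0ade292510'
qemu-img info "$SRC"
dd if="$SRC" of=/dev/null bs=1M count=256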

oVirt version  is 4.3.82-1.el7
OS CentOS Linux release 7.7.1908 (Core)

The Gluster Cluster has been working very well until this incident.

Please help.

Thank You

Charles Williams
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5JTOGVLEOI3M4LD7H3GRD2HXSN2W2CBK/


[ovirt-users] Re: Hyperconverged setup - storage architecture - scaling

2020-01-10 Thread C Williams
Thank You Sahina !

That is great news ! We might be asking more questions as we work through this.

Thanks Again

C Williams
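
For anyone planning the same thing, the custom playbook Sahina mentions below might start out roughly like this. The module names come from the oVirt Ansible roles, and every hostname, password variable and parameter here is an assumption to adapt (the gluster volumes themselves can be created with the usual gluster CLI first):

# add-hci-cluster.yml -- sketch only, not a tested playbook
# (parameter names per the ovirt_* modules' docs -- verify against your ansible version)
- hosts: localhost
  connection: local
  tasks:
    - name: Log in to the existing engine
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Create a new gluster-enabled cluster in the existing data center
      ovirt_cluster:
        auth: "{{ ovirt_auth }}"
        name: HCI-Cluster-1
        data_center: Default
        gluster: yes
        virt: yes

    - name: Add the three hyperconverged hosts to that cluster
      ovirt_host:
        auth: "{{ ovirt_auth }}"
        name: "{{ item }}"
        address: "{{ item }}"
        cluster: HCI-Cluster-1
        password: "{{ host_root_password }}"
      loop:
        - hci01.example.com
        - hci02.example.com
        - hci03.example.com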

On Fri, Jan 10, 2020 at 12:15 AM Sahina Bose  wrote:

>
>
> On Thu, Jan 9, 2020 at 10:22 PM C Williams 
> wrote:
>
>>   Hello,
>>
>> I did not see an answer to this ...
>>
>> "> 3. If the limit of hosts per datacenter is 250, then (in theory ) the
>> recomended way in reaching this treshold would be to create 20 separated
>> oVirt logical clusters with 12 nodes per each ( and datacenter managed from
>> one ha-engine ) ?"
>>
>> I have an existing oVirt datacenter with its own engine, hypervisors,
>> etc. Could I create hyperconverged clusters managed by my current
>> datacenter ? Ex. Cluster 1 -- 12 hyperconverged physical machines
>> (storage/compute), Cluster 2 -- 12 hyperconverged physical machines, etc.
>>
>
> Yes, you can add multiple clusters to be managed by your existing engine.
> The deployment flow would be different though, as the installation via
> cockpit also deploys the engine for the servers selected.
> You would need to create a custom ansible playbook that sets up the
> gluster volumes and add the hosts to the existing engine. (or do the
> creation of cluster and gluster volumes via the engine UI)
>
>
>> Please let me know.
>>
>> Thank You
>>
>> C Williams
>>
>> On Tue, Jan 29, 2019 at 4:21 AM Sahina Bose  wrote:
>>
>>> On Mon, Jan 28, 2019 at 6:22 PM Leo David  wrote:
>>> >
>>> > Hello Everyone,
>>> > Reading through the document:
>>> > "Red Hat Hyperconverged Infrastructure for Virtualization 1.5
>>> >  Automating RHHI for Virtualization deployment"
>>> >
>>> > Regarding storage scaling,  i see the following statements:
>>> >
>>> > 2.7. SCALING
>>> > Red Hat Hyperconverged Infrastructure for Virtualization is supported
>>> for one node, and for clusters of 3, 6, 9, and 12 nodes.
>>> > The initial deployment is either 1 or 3 nodes.
>>> > There are two supported methods of horizontally scaling Red Hat
>>> Hyperconverged Infrastructure for Virtualization:
>>> >
>>> > 1 Add new hyperconverged nodes to the cluster, in sets of three, up to
>>> the maximum of 12 hyperconverged nodes.
>>> >
>>> > 2 Create new Gluster volumes using new disks on existing
>>> hyperconverged nodes.
>>> > You cannot create a volume that spans more than 3 nodes, or expand an
>>> existing volume so that it spans across more than 3 nodes at a time
>>> >
>>> > 2.9.1. Prerequisites for geo-replication
>>> > Be aware of the following requirements and limitations when
>>> configuring geo-replication:
>>> > One geo-replicated volume only
>>> > Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for
>>> Virtualization) supports only one geo-replicated volume. Red Hat recommends
>>> backing up the volume that stores the data of your virtual machines, as
>>> this is usually contains the most valuable data.
>>> > --
>>> >
>>> > Also  in oVirtEngine UI, when I add a brick to an existing volume i
>>> get the following warning:
>>> >
>>> > "Expanding gluster volume in a hyper-converged setup is not
>>> recommended as it could lead to degraded performance. To expand storage for
>>> cluster, it is advised to add additional gluster volumes."
>>> >
>>> > Those things are raising a couple of questions that maybe for some for
>>> you guys are easy to answer, but for me it creates a bit of confusion...
>>> > I am also referring to RedHat product documentation,  because I  treat
>>> oVirt as production-ready as RHHI is.
>>>
>>> oVirt and RHHI though as close to each other as possible do differ in
>>> the versions used of the various components and the support
>>> limitations imposed.
>>> >
>>> > 1. Is there any reason for not going to distributed-replicated volumes
>>> ( ie: spread one volume across 6,9, or 12 nodes ) ?
>>> > - ie: is recomanded that in a 9 nodes scenario I should have 3
>>> separated volumes,  but how should I deal with the folowing question
>>>
>>> The reason for this limitation was a bug encountered when scaling a
>>> replica 3 volume to distribute-replica. This has since been fixed in
>>> the latest release of glusterfs.
>>>
>>> >
>>> >

[ovirt-users] Re: Hyperconverged setup - storage architecture - scaling

2020-01-09 Thread C Williams
  Hello,

I did not see an answer to this ...

"> 3. If the limit of hosts per datacenter is 250, then (in theory ) the
recomended way in reaching this treshold would be to create 20 separated
oVirt logical clusters with 12 nodes per each ( and datacenter managed from
one ha-engine ) ?"

I have an existing oVirt datacenter with its own engine, hypervisors, etc.
Could I create hyperconverged clusters managed by my current datacenter ?
Ex. Cluster 1 -- 12 hyperconverged physical machines (storage/compute),
Cluster 2 -- 12 hyperconverged physical machines, etc.

Please let me know.

Thank You

C Williams

On Tue, Jan 29, 2019 at 4:21 AM Sahina Bose  wrote:

> On Mon, Jan 28, 2019 at 6:22 PM Leo David  wrote:
> >
> > Hello Everyone,
> > Reading through the document:
> > "Red Hat Hyperconverged Infrastructure for Virtualization 1.5
> >  Automating RHHI for Virtualization deployment"
> >
> > Regarding storage scaling,  i see the following statements:
> >
> > 2.7. SCALING
> > Red Hat Hyperconverged Infrastructure for Virtualization is supported
> for one node, and for clusters of 3, 6, 9, and 12 nodes.
> > The initial deployment is either 1 or 3 nodes.
> > There are two supported methods of horizontally scaling Red Hat
> Hyperconverged Infrastructure for Virtualization:
> >
> > 1 Add new hyperconverged nodes to the cluster, in sets of three, up to
> the maximum of 12 hyperconverged nodes.
> >
> > 2 Create new Gluster volumes using new disks on existing hyperconverged
> nodes.
> > You cannot create a volume that spans more than 3 nodes, or expand an
> existing volume so that it spans across more than 3 nodes at a time
> >
> > 2.9.1. Prerequisites for geo-replication
> > Be aware of the following requirements and limitations when configuring
> geo-replication:
> > One geo-replicated volume only
> > Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for
> Virtualization) supports only one geo-replicated volume. Red Hat recommends
> backing up the volume that stores the data of your virtual machines, as
> this is usually contains the most valuable data.
> > --
> >
> > Also  in oVirtEngine UI, when I add a brick to an existing volume i get
> the following warning:
> >
> > "Expanding gluster volume in a hyper-converged setup is not recommended
> as it could lead to degraded performance. To expand storage for cluster, it
> is advised to add additional gluster volumes."
> >
> > Those things are raising a couple of questions that maybe for some for
> you guys are easy to answer, but for me it creates a bit of confusion...
> > I am also referring to RedHat product documentation,  because I  treat
> oVirt as production-ready as RHHI is.
>
> oVirt and RHHI though as close to each other as possible do differ in
> the versions used of the various components and the support
> limitations imposed.
> >
> > 1. Is there any reason for not going to distributed-replicated volumes (
> ie: spread one volume across 6,9, or 12 nodes ) ?
> > - ie: is recomanded that in a 9 nodes scenario I should have 3 separated
> volumes,  but how should I deal with the folowing question
>
> The reason for this limitation was a bug encountered when scaling a
> replica 3 volume to distribute-replica. This has since been fixed in
> the latest release of glusterfs.
>
> >
> > 2. If only one geo-replicated volume can be configured,  how should I
> deal with 2nd and 3rd volume replication for disaster recovery
>
> It is possible to have more than 1 geo-replicated volume as long as
> your network and CPU resources support this.
>
> >
> > 3. If the limit of hosts per datacenter is 250, then (in theory ) the
> recomended way in reaching this treshold would be to create 20 separated
> oVirt logical clusters with 12 nodes per each ( and datacenter managed from
> one ha-engine ) ?
> >
> > 4. In present, I have the folowing one 9 nodes cluster , all hosts
> contributing with 2 disks each  to a single replica 3 distributed
> replicated volume. They where added to the volume in the following order:
>   > node1 - disk1
> > node2 - disk1
> > ..
> > node9 - disk1
> > node1 - disk2
> > node2 - disk2
> > ..
> > node9 - disk2
> > At the moment, the volume is arbitrated, but I intend to go for full
> distributed replica 3.
> >
> > Is this a bad setup ? Why ?
> > It oviously brakes the redhat recommended rules...
> >
> > Is there anyone so kind to discuss on these things ?
> >
> > Thank you very much !
> >
> > Leo
> >
> >
> > 

[ovirt-users] Apache Directory Server

2018-01-23 Thread C Williams
Hello,

Has anyone successfully connected the ovirt-engine to Apache Directory
Server 2.0 ?

I have tried the pre-set connections offered by oVirt and have been able to
connect to the server on port 10389 after adding the port to a
serverset.port. I can query the directory and see users but I cannot log
onto the console as a user in the directory.
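One hedged way to narrow this down (DNs and host name below are hypothetical)
is to check whether the user can actually bind to ApacheDS, since a console
login performs a bind as that user rather than just a search; ApacheDS
listens on 10389 by default and its built-in admin is uid=admin,ou=system.

  # -D/-W bind as the user in question; a failure here points at the
  # directory side rather than at the oVirt profile configuration
  ldapsearch -x -H ldap://ads.example.com:10389 \
      -D "uid=jdoe,ou=users,ou=system" -W \
      -b "ou=system" "(uid=jdoe)"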

If anyone has any experience/guidance on this, please let me know.

Thank You

Charles Williams
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Fwd: Hung Snapshot

2016-10-12 Thread C Williams
Hello,

A bit more

I deleted the job from the database but the event is still recurring;
the snapshot disk is still there and is still locked.

Help with this would be appreciated.

Thank You

> On the ovirt-engine server, get a psql shell, e.g.,
>
>   psql -d engine -U postgres
>
> Data about running tasks are stored in the job table. You might find this
> query interesting:
>
>   select * from job order by start_time desc;
>
> Grab the job_id for the task you want to delete, then use the DeleteJob
> procedure, e.g.,
>
>   select DeleteJob('8424f7a9-2a4c-4567-b528-45bbc1c2534f');
>
> Give the web GUI a minute or so to catch up with you. The tasks should be
> gone.
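For the disk that is still locked, a hedged option (exact flags vary between
versions, so check -h first) is the unlock_entity.sh helper shipped with the
engine, which can list and clear stale locks left behind by a failed snapshot:

  cd /usr/share/ovirt-engine/setup/dbutils
  # query locked entities first, then unlock the specific disk if appropriate
  ./unlock_entity.sh -t all -q
  ./unlock_entity.sh -t disk <disk_id>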

-- Forwarded message --
From: C Williams <cwilliams3...@gmail.com>
Date: Wed, Oct 12, 2016 at 2:15 AM
Subject: Fwd: Hung Snapshot
To: Users@ovirt.org


Hello,

More on this

The message that I keep getting over and over is

"Failed to complete snapshot .

Hope that helps

Thank You All For Your Help !
-- Forwarded message --
From: C Williams <cwilliams3...@gmail.com>
Date: Wed, Oct 12, 2016 at 2:01 AM
Subject: Hung Snapshot
To: Users@ovirt.org


Hello,

I am using the oVirtBackup utility and one of the snapshot jobs has hung. I
am on 3.6.6.2-1.el7.centos.

How can I stop this hung snapshot ?

This is filling my engine logs.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Fwd: Hung Snapshot

2016-10-12 Thread C Williams
Hello,

More on this

The message that I keep getting over and over is

"Failed to complete snapshot .

Hope that helps

Thank You All For Your Help !
-- Forwarded message --
From: C Williams <cwilliams3...@gmail.com>
Date: Wed, Oct 12, 2016 at 2:01 AM
Subject: Hung Snapshot
To: Users@ovirt.org


Hello,

I am using the oVirtBackup utility and one of the snapshot jobs has hung. I
am on 3.6.6.2-1.el7.centos.

How can I stop this hung snapshot ?

This is filling my engine logs.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hung Snapshot

2016-10-12 Thread C Williams
Hello,

I am using the oVirtBackup utility and one of the snapshot jobs has hung. I
am on 3.6.6.2-1.el7.centos.

How can I stop this hung snapshot ?

This is filling my engine logs.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] More Hosted Engine 3.6 Questions

2016-08-29 Thread C Williams
Hello,

I have found that I can import thin-provisioned VMs from VMWare ESXi into
an oVirt (3.6 Hosted-Engine) iSCSI storage domain as thin-provisioned VMs
by doing the following:

1 . Import the thin-provisioned VMWare VM into an NFS storage domain.
2.  Export the thin-provisioned VM from the NFS storage domain into the
oVirt Export domain.
3.  Import the thin-provisioned VM into an iSCSI Storage Domain from the
oVirt Export domain

Is there a simpler way to do this using the 3.6 Import capability ? Or, do
we need to follow the above procedure to get VMs into oVirt from VMWare as
thin-provisioned VMs ?
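One possibly simpler path, offered here only as a hedged sketch (host names
and paths are hypothetical, and the output-mode spelling is -o rhev or -o rhv
depending on the virt-v2v version): virt-v2v can pull the guest straight from
vCenter/ESXi into the export domain with sparse disks, collapsing steps 1 and
2 above into a single command.

  virt-v2v -ic vpx://administrator@vcenter.example.com/Datacenter/esxi01 MyGuestVM \
      -o rhev -os nfs.example.com:/export_domain \
      -oa sparse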

Thank You For Your Help !
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine 3.6 Questions

2016-08-18 Thread C Williams
Hello,

We have an installation with 2 filers on CentOS 7.2 with the hosted engine
appliance. We use iSCSI for our backend storage.

We have some issues

We had to dedicate an entire storage domain for the sole use of the Hosted
Engine appliance. Is this required ? Or can the Hosted Engine appliance
co-exist on another storage domain with VMs ? I have not been able to migrate
the Hosted Engine VM to other storage. Because of this limitation, and to be
efficient with our storage use, we asked our storage admin to make a small
20GB iSCSI LUN for the storage domain for the appliance. However, we are
constantly getting errors regarding low disk space for this domain even
though it holds only the 10GB appliance.
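If those events are just the engine's percentage-based threshold firing on a
deliberately small domain, one hedged option is to inspect and, if
appropriate, tune the thresholds with engine-config (key names can be
confirmed with engine-config -l; the values below are examples only):

  engine-config -g WarningLowSpaceIndicator      # % free space that triggers the warning
  engine-config -g CriticalSpaceActionBlocker    # GB free below which actions are blocked
  engine-config -s WarningLowSpaceIndicator=5
  systemctl restart ovirt-engine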

Could this size problem be related to my problems with importing VMWare VMs
into our larger (multi-TB) storage using the Import tool in 3.6 ?

Thank You
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users