[ovirt-users] Re: oVirt4.4.5 and gluster op-version

2021-03-20 Thread Jiří Sléžka
Hello,

On 3/20/21 5:39 PM, Strahil Nikolov wrote:
> If all fuse clients are using gluster v8, yes.
> 
> Most probably, your oVirt nodes are the only clients, but you can always
> verify with the volume clients listing command.

yes, the oVirt nodes are the only clients, and they support op-version 8:

gluster volume status all clients
...
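For anyone reading along, a hedged sketch of filtering that listing: the sample below only mimics the clients output (the field layout varies across gluster releases), and the hostnames are invented for illustration.

```shell
# Illustrative sample standing in for `gluster volume status all clients`
# output; field layout varies across gluster releases, and the hostnames
# are made up for this sketch.
target=8
sample='Hostname                  BytesRead  BytesWritten  OpVersion
ovirt1.example.com:49152  12345      67890         8
ovirt2.example.com:49153  12345      67890         8
ovirt3.example.com:49154  12345      67890         8'
# Flag any connected client still below the target op-version.
printf '%s\n' "$sample" | awk -v t="$target" 'NR > 1 && $4 < t {print $1 " below " t}'
```

With all clients already at the target, the filter prints nothing, which is the green light to bump.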

I set the new op-version to 8 without problems:

gluster volume set all cluster.op-version 8
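A hedged way to script that bump (a sketch, assuming the usual two-column "option  value" output of "gluster volume get"; guarded so it is a no-op on machines without the gluster CLI):

```shell
# Sketch, not a verbatim transcript: raise cluster.op-version to the
# reported maximum only when it is currently lower. Assumes the usual
# two-column "option  value" output of `gluster volume get`.
bump_op_version() {
    command -v gluster >/dev/null 2>&1 || { echo "gluster CLI not found"; return 0; }
    cur=$(gluster volume get all cluster.op-version | awk '$1 == "cluster.op-version" {print $2}')
    max=$(gluster volume get all cluster.max-op-version | awk '$1 == "cluster.max-op-version" {print $2}')
    if [ -n "$cur" ] && [ -n "$max" ] && [ "$cur" -lt "$max" ]; then
        gluster volume set all cluster.op-version "$max"
    fi
    return 0
}
bump_op_version
```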

Thanx,

Jiri


> 
> Best Regards,
> Strahil Nikolov
> 
> On Fri, Mar 19, 2021 at 21:50, Jiří Sléžka
>  wrote:
> Hello,
> 
> I have just upgraded my 2-host + 1-arbiter HCI cluster to 4.4.5. Gluster is
> not managed by oVirt and is currently at op-version 70200:
> 
> gluster volume get all cluster.op-version
> cluster.op-version                      70200
> 
> after the gluster upgrade, the max-op-version is:
> 
> gluster volume get all cluster.max-op-version
> cluster.max-op-version                  8
> 
> can I (or should I) switch to op-version 8?
> 
> Thanks,
> Jiri
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5TLYX3YTLJYL3U456AGYUKUYVNQHXTLQ/


[ovirt-users] Re: Hyperconverged engine high availability?

2021-03-20 Thread David White via Users
Ah, I see.
The "host" in this context does need to be on the backend management / gluster network.

I was able to add the 2nd host, and I'm working on adding the 3rd now.

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Saturday, March 20, 2021 4:32 PM, David White via Users  
wrote:

> To clarify:
> The dialog in my screenshot keeps defaulting the "Host Address" to the
> Storage FQDN, so I keep changing it to the correct FQDN.
> 

> Sent with ProtonMail Secure Email.
> 

> ‐‐‐ Original Message ‐‐‐
> On Saturday, March 20, 2021 4:30 PM, David White  
> wrote:
> 

> > There may be a bug in the latest installer. Or I might have missed a step 
> > somewhere.
> > I did use the 4.4.5 hyperconverged deployment wizard, yes.
> > 

> > I'm currently in the Engine console right now, and I only see 1 host.
> > I've navigated to Compute -> Hosts.
> > 

> > That said, when I navigate to Compute -> Clusters -> Default, I see this 
> > message:
> > Some new hosts are detected in the cluster. You can Import them to engine 
> > or Detach them from the cluster.
> > 

> > I clicked on Import to try to import them into the engine.
> > On the next screen, I see the other two physical hosts.
> > 

> > I verified the Gluster peer address, as well as the front-end Host address, 
> > typed in the root password, and clicked OK. The system acted like it was 
> > doing stuff, but then eventually I landed back on the same "Add Hosts" 
> > screen as before:
> > 

> > [Screenshot from 2021-03-20 16-28-56.png]
> > 

> > Am I missing something?
> > 

> > Sent with ProtonMail Secure Email.
> > 

> > ‐‐‐ Original Message ‐‐‐
> > On Saturday, March 20, 2021 4:17 PM, Jayme  wrote:
> > 

> > > If you deployed with the wizard, the hosted engine should already be HA
> > > and can run on any host. If you look at the GUI you will see a crown
> > > beside each host that is capable of running the hosted engine.
> > > 

> > > On Sat, Mar 20, 2021 at 5:14 PM David White via Users  
> > > wrote:
> > > 

> > > > I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged 
> > > > cluster running on Red Hat 8.3 OS.
> > > > 

> > > > Over the course of the setup, I noticed that I had to set up the storage
> > > > for the engine separately from the gluster bricks.
> > > > 

> > > > It looks like the engine was installed onto /rhev/data-center/ on the 
> > > > first host, whereas the gluster bricks for all 3 hosts are on 
> > > > /gluster_bricks/.
> > > > 

> > > > I fear that I may already know the answer to this, but:
> > > > Is it possible to make the engine highly available?
> > > > 

> > > > Also, thinking hypothetically here, what would happen to my VMs that 
> > > > are physically on the first server, if the first server crashed? The 
> > > > engine is what handles the high availability, correct? So what if a VM 
> > > > was running on the first host? There would be nothing to automatically 
> > > > "move" it to one of the remaining healthy hosts.
> > > > 

> > > > Or am I misunderstanding something here?
> > > > 

> > > > Sent with ProtonMail Secure Email.
> > > > 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3WHFXQEWJDCVVG4RFTEGFRYVMPNJKOBG/


[ovirt-users] Re: Hyperconverged engine high availability?

2021-03-20 Thread David White via Users
To clarify:
The dialog in my screenshot keeps defaulting the "Host Address" to the
Storage FQDN, so I keep changing it to the correct FQDN.

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Saturday, March 20, 2021 4:30 PM, David White  
wrote:

> There may be a bug in the latest installer. Or I might have missed a step 
> somewhere.
> I did use the 4.4.5 hyperconverged deployment wizard, yes.
> 

> I'm currently in the Engine console right now, and I only see 1 host.
> I've navigated to Compute -> Hosts.
> 

> That said, when I navigate to Compute -> Clusters -> Default, I see this 
> message:
> Some new hosts are detected in the cluster. You can Import them to engine or 
> Detach them from the cluster.
> 

> I clicked on Import to try to import them into the engine.
> On the next screen, I see the other two physical hosts.
> 

> I verified the Gluster peer address, as well as the front-end Host address, 
> typed in the root password, and clicked OK. The system acted like it was 
> doing stuff, but then eventually I landed back on the same "Add Hosts" screen 
> as before:
> 

> [Screenshot from 2021-03-20 16-28-56.png]
> 

> Am I missing something?
> 

> Sent with ProtonMail Secure Email.
> 

> ‐‐‐ Original Message ‐‐‐
> On Saturday, March 20, 2021 4:17 PM, Jayme  wrote:
> 

> > If you deployed with the wizard, the hosted engine should already be HA and
> > can run on any host. If you look at the GUI you will see a crown beside each
> > host that is capable of running the hosted engine.
> > 

> > On Sat, Mar 20, 2021 at 5:14 PM David White via Users  
> > wrote:
> > 

> > > I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged 
> > > cluster running on Red Hat 8.3 OS.
> > > 

> > > Over the course of the setup, I noticed that I had to set up the storage
> > > for the engine separately from the gluster bricks.
> > > 

> > > It looks like the engine was installed onto /rhev/data-center/ on the 
> > > first host, whereas the gluster bricks for all 3 hosts are on 
> > > /gluster_bricks/.
> > > 

> > > I fear that I may already know the answer to this, but:
> > > Is it possible to make the engine highly available?
> > > 

> > > Also, thinking hypothetically here, what would happen to my VMs that are 
> > > physically on the first server, if the first server crashed? The engine 
> > > is what handles the high availability, correct? So what if a VM was 
> > > running on the first host? There would be nothing to automatically "move" 
> > > it to one of the remaining healthy hosts.
> > > 

> > > Or am I misunderstanding something here?
> > > 

> > > Sent with ProtonMail Secure Email.
> > > 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FAUV7NHLSOHIU4NUS6IWA66C775T6CQM/


[ovirt-users] Re: Hyperconverged engine high availability?

2021-03-20 Thread David White via Users
There may be a bug in the latest installer. Or I might have missed a step 
somewhere.
I did use the 4.4.5 hyperconverged deployment wizard, yes.

I'm currently in the Engine console right now, and I only see 1 host.
I've navigated to Compute -> Hosts.

That said, when I navigate to Compute -> Clusters -> Default, I see this 
message:
Some new hosts are detected in the cluster. You can Import them to engine or 
Detach them from the cluster.

I clicked on Import to try to import them into the engine.
On the next screen, I see the other two physical hosts.

I verified the Gluster peer address, as well as the front-end Host address, 
typed in the root password, and clicked OK. The system acted like it was doing 
stuff, but then eventually I landed back on the same "Add Hosts" screen as 
before:

[Screenshot from 2021-03-20 16-28-56.png]

Am I missing something?

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Saturday, March 20, 2021 4:17 PM, Jayme  wrote:

> If you deployed with the wizard, the hosted engine should already be HA and
> can run on any host. If you look at the GUI you will see a crown beside each
> host that is capable of running the hosted engine.
> 

> On Sat, Mar 20, 2021 at 5:14 PM David White via Users  wrote:
> 

> > I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged cluster 
> > running on Red Hat 8.3 OS.
> > 

> > Over the course of the setup, I noticed that I had to set up the storage
> > for the engine separately from the gluster bricks.
> > 

> > It looks like the engine was installed onto /rhev/data-center/ on the first 
> > host, whereas the gluster bricks for all 3 hosts are on /gluster_bricks/.
> > 

> > I fear that I may already know the answer to this, but:
> > Is it possible to make the engine highly available?
> > 

> > Also, thinking hypothetically here, what would happen to my VMs that are 
> > physically on the first server, if the first server crashed? The engine is 
> > what handles the high availability, correct? So what if a VM was running on 
> > the first host? There would be nothing to automatically "move" it to one of 
> > the remaining healthy hosts.
> > 

> > Or am I misunderstanding something here?
> > 

> > Sent with ProtonMail Secure Email.
> > 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5WDFY7JNW6X3AA3NMO3G3IMJ2P4NZI4I/


[ovirt-users] Re: Hyperconverged engine high availability?

2021-03-20 Thread Alex K
On Sat, Mar 20, 2021, 22:15 David White via Users  wrote:

> I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged cluster
> running on Red Hat 8.3 OS.
>
> Over the course of the setup, I noticed that I had to set up the storage
> for the engine separately from the gluster bricks.
>
> It looks like the engine was installed onto /rhev/data-center/ on the
> first host, whereas the gluster bricks for all 3 hosts are on
> /gluster_bricks/.
>
/rhev/data-center is a mount point of the gluster volume, which may have its
bricks in /gluster_bricks/. You can provide more info on the gluster setup
to clarify this, e.g. the output of gluster volume info.
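To illustrate the relationship, a sketch of picking the brick paths out of that output; the sample text below is invented (the volume name and brick paths are assumptions, not taken from the poster's cluster):

```shell
# Invented sample of `gluster volume info` output -- the volume name and
# brick paths are assumptions for illustration only.
sample='Volume Name: engine
Type: Replicate
Brick1: host1:/gluster_bricks/engine/engine
Brick2: host2:/gluster_bricks/engine/engine
Brick3: host3:/gluster_bricks/engine/engine'
# List the brick paths: these live under /gluster_bricks on the storage
# nodes, while the volume itself is FUSE-mounted under /rhev/data-center
# on each oVirt host.
printf '%s\n' "$sample" | awk -F': ' '/^Brick[0-9]+:/ {print $2}'
```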


> I fear that I may already know the answer to this, but:
> Is it possible to make the engine highly available?
>
Yes, one uses 3 servers to achieve HA for the guest VMs and the engine, as
long as the hosts meet the storage, network and compute requirements.

>
> Also, thinking hypothetically here, what would happen to my VMs that are
> physically on the first server, if the first server crashed? The engine is
> what handles the high availability, correct? So what if a VM was running on
> the first host? There would be nothing to automatically "move" it to one of
> the remaining healthy hosts.
>
> Or am I misunderstanding something here?
>
>
> Sent with ProtonMail Secure Email.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/76OWWM33ZAEUMVKNWPOYPIQETU3DBPFT/


[ovirt-users] Re: Hyperconverged engine high availability?

2021-03-20 Thread Jayme
If you deployed with the wizard, the hosted engine should already be HA and can
run on any host. If you look at the GUI you will see a crown beside each host
that is capable of running the hosted engine.
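For completeness, a sketch of verifying that from a host's shell; hosted-engine --vm-status is the standard oVirt check, and the guard makes this a no-op on machines where the CLI is absent:

```shell
# Sketch: query hosted-engine HA state where the oVirt CLI is present;
# otherwise just say so instead of failing.
check_he_status() {
    if command -v hosted-engine >/dev/null 2>&1; then
        hosted-engine --vm-status
    else
        echo "hosted-engine CLI not installed on this machine"
    fi
}
check_he_status
```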

On Sat, Mar 20, 2021 at 5:14 PM David White via Users 
wrote:

> I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged cluster
> running on Red Hat 8.3 OS.
>
> Over the course of the setup, I noticed that I had to set up the storage
> for the engine separately from the gluster bricks.
>
> It looks like the engine was installed onto /rhev/data-center/ on the
> first host, whereas the gluster bricks for all 3 hosts are on
> /gluster_bricks/.
>
> I fear that I may already know the answer to this, but:
> Is it possible to make the engine highly available?
>
> Also, thinking hypothetically here, what would happen to my VMs that are
> physically on the first server, if the first server crashed? The engine is
> what handles the high availability, correct? So what if a VM was running on
> the first host? There would be nothing to automatically "move" it to one of
> the remaining healthy hosts.
>
> Or am I misunderstanding something here?
>
>
> Sent with ProtonMail Secure Email.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2KHQ4XA4H2FZ3C73CAYES3IOXI7YWTV/


[ovirt-users] Hyperconverged engine high availability?

2021-03-20 Thread David White via Users
I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged cluster 
running on Red Hat 8.3 OS.

Over the course of the setup, I noticed that I had to set up the storage for the
engine separately from the gluster bricks.

It looks like the engine was installed onto /rhev/data-center/ on the first 
host, whereas the gluster bricks for all 3 hosts are on /gluster_bricks/.

I fear that I may already know the answer to this, but:
Is it possible to make the engine highly available?

Also, thinking hypothetically here, what would happen to my VMs that are 
physically on the first server, if the first server crashed? The engine is what 
handles the high availability, correct? So what if a VM was running on the 
first host? There would be nothing to automatically "move" it to one of the 
remaining healthy hosts.

Or am I misunderstanding something here?

Sent with ProtonMail Secure Email.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L6MMZSMSGIK7BTUSUECU65VZRMS4N33L/


[ovirt-users] Re: oVirt4.4.5 and gluster op-version

2021-03-20 Thread Strahil Nikolov via Users
If all fuse clients are using gluster v8, yes.
Most probably, your oVirt nodes are the only clients, but you can always verify
with the volume clients listing command.

Best Regards,
Strahil Nikolov

On Fri, Mar 19, 2021 at 21:50, Jiří Sléžka wrote:

Hello,

I have just upgraded my 2-host + 1-arbiter HCI cluster to 4.4.5. Gluster is
not managed by oVirt and is currently at op-version 70200:

gluster volume get all cluster.op-version
cluster.op-version                      70200

after the gluster upgrade, the max-op-version is:

gluster volume get all cluster.max-op-version
cluster.max-op-version                  8

can I (or should I) switch to op-version 8?

Thanks,
Jiri
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7REZ46TPMMT4XQWN7VYHJAQ6MNIF7ROF/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C442GWL6RLACXPDFAYFFH3E4HVBJV6T2/


[ovirt-users] Re: What would be the process to restore master host.

2021-03-20 Thread miguel . garcia
No, we are not using gluster. Thanks for the update.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QE3RHOSZ6P5XXD27QQLU3VSXO243AMN6/