[ovirt-users] Re: GetGlusterLocalLogicalVolumeListVDSCommand execution failed: null

2018-10-30 Thread Denis Chaplygin
Hello!



On Thu, Oct 25, 2018 at 8:12 PM Marco Lorenzo Crociani <
mar...@prismatelecomtesting.com> wrote:

> Storage nodes are not yet updated because ovirt 4.2.6.4-1.el7 depends on
>
> Compatibility Versions:
> Compute datacenter 4.2
> Storage 4.1 (because I haven't yet updated the storage cluster)
> so the Data Center is still 4.1
>
>
Could you please specify the vdsm-gluster package version?
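For example, something like the following should report it on an RPM-based
host (the package names here are just the usual ones, adjust as needed):

rpm -q vdsm-gluster glusterfs-server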
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZAAATHS62E72KSNCP2EQR23D2CA75BLY/


[ovirt-users] Re: Gluster quorum

2018-05-23 Thread Denis Chaplygin
Hello!

On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor  wrote:

>
> Is there any changes with this bug?
>
> I still haven't finished the upgrade process that I started on 9th May :(
>
> Please help me if you can.
>
>

Looks like all the required patches are already merged, so could you please
update your engine again to the latest nightly build?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


Re: [ovirt-users] Add nodes to single-node hyperconverged cluster

2018-04-18 Thread Denis Chaplygin
Hello!

On Tue, Apr 17, 2018 at 5:49 PM, Joe DiTommasso  wrote:

> Thanks! I realized yesterday that I've got a few hosts I was in the
> process of decommissioning that I can temporarily use for this. So my new
> plan is to build a 3-node cluster with junk hosts and cycle in the good
> ones.
>
>
That is definitely the best way to achieve your goal! :)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Add nodes to single-node hyperconverged cluster

2018-04-17 Thread Denis Chaplygin
Hello!

On Fri, Apr 13, 2018 at 7:02 PM, Joe DiTommasso  wrote:

> Hi, I'm currently running an out-of-date (3.6) 3-node oVirt cluster, using
> NFS storage. I'd like to upgrade to the latest release and move to a
> hyperconverged setup, but I've only got the three hosts to play with. I'm
> currently thinking of pulling one node from my current cluster, rebuilding
> it to start a new cluster, and then migrating VMs/physical hosts over to
> the new cluster. Is that something that would seem feasible? I've read the
> documentation on setting up a hyperconverged cluster, but didn't see
> anything about adding new hosts after cluster deployment. Thanks!
>
>

Well, first of all, it is very tricky and risky. We have a good reason to
require 3 hosts in a HC cluster: data redundancy and service reliability. If
you have the luxury of shutting everything down and upgrading using
backup/restore or something similar, please go that way.


But in case you really, really need to upgrade your deployment your way, you
can try the following:

* Backup first :-)
* Upgrade your OS to the maximum version supported by both 3.6 and any of the
next oVirt releases. You can also try to upgrade oVirt first.
* Remove one of the nodes from your cluster
* Manually install gluster servers on all 3 nodes and create replicated
volumes for your future engine and VMs (see the sketch after this list)
* Deploy HE on the free node, using the existing gluster volumes
* Migrate VMs to that engine.
* One by one, remove hosts from your old cluster and add them to your new
cluster
* When you finish that, back up again :) and try to continue your upgrade.
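A minimal sketch of the gluster part of that step, assuming three freshly
prepared nodes; the host names, brick paths and volume name below are
placeholders, and the usual oVirt-recommended volume options still need to be
applied:

gluster peer probe node2
gluster peer probe node3
gluster volume create engine replica 3 node1:/gluster/engine/brick node2:/gluster/engine/brick node3:/gluster/engine/brick
gluster volume start engine

The HE deployment on the free node (hosted-engine --deploy) can then be
pointed at the engine volume created above.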

The whole idea of the process described above is to provide you with a
gluster replica 3 volume from the very beginning. Technically you could use a
single-brick distributed volume to install the new engine and migrate VMs to
your new cluster, but any problem with that single-brick distributed volume
would affect all your VMs.


Another option is to build the replicated volumes on a single host. If you
have enough space to keep the data three times, you can:

* Backup (!)
* Remove one of the hosts from service and upgrade it
* After that, create replica 3 volumes for the engine and VMs, using 6
_local_ bricks
* Migrate VMs off one of your old hosts
* Upgrade that second host, add it to your new cluster and manually migrate
two bricks (one brick per volume) from the first host to the second host (see
the sketch after this list).
* Repeat for the remaining host
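A minimal sketch of the brick migration mentioned above, assuming a volume
named 'engine' and placeholder brick paths; run one replace-brick per volume
and per brick being moved:

gluster volume replace-brick engine host1:/gluster/engine/brick2 host2:/gluster/engine/brick2 commit force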

Both approaches are VERY risky and highly THEORETICAL. I don't think anyone
has ever done that, so think twice before doing it. Following either of those
scenarios requires you to deeply understand what you are doing and involves a
lot of work in the console. Seriously, think one more time before following
them.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Troubleshooting a self-hosted engine deployment

2018-01-11 Thread Denis Chaplygin
Hello!

On Thu, Jan 11, 2018 at 1:03 PM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:

> Denis, thx for your answer.
>
> [root@node-gluster203 ~]# gluster volume heal engine info
> Brick node-gluster205:/opt/gluster/engine
> Status: Connected
> Number of entries: 0
>
> Brick node-gluster203:/opt/gluster/engine
> Status: Connected
> Number of entries: 0
>
> Brick node-gluster201:/opt/gluster/engine
> Status: Connected
> Number of entries: 0
>
> Gluster volume looks OK. Adding additional info about volume options.
>
>
>

Right, the gluster volume seems to be fine. Maybe you have a connection
issue? Could you please try to execute 'hosted-engine --connect-storage'?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Troubleshooting a self-hosted engine deployment

2018-01-11 Thread Denis Chaplygin
Hello!

On Thu, Jan 11, 2018 at 9:56 AM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:

>
> We have a self-hosted engine test infra with gluster storage replica 3
> arbiter 1 (oVirt 4.1).
>

Why don't you try 4.2? :-) There are a lot of good changes in that area.


>
> RuntimeError: Volume does not exist: (u'13b5a4d0-dd26-491c-b5c0-
> 5628b56bc3a5',)
>
>

I assume you may have an issue with your gluster volume. Could you please
share the output of the command 'gluster volume heal engine info'?

Thanks in advance.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster Performance

2017-10-05 Thread Denis Chaplygin
Hello!

On Thu, Oct 5, 2017 at 12:21 AM, Bryan Sockel  wrote:

> Is there any performance loss if your gluster replica 3 servers
> are not all the same configuration?
>
>
> Server 1 - Primary
> 16 X 1.2 TB 10k Drives - Raid 10 - Stripe 256k (2.5 Drives)
> 1 CPU - Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz
> 64 GB Memory
>
> Server 2 - Replica
> 8 X 6 TB 7.5 RPM Drives - Raid 10 - Trip 512 (3.5)
> 1 CPU - Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz
> 16 GB Memory
>
>

As gluster needs to perform each 'write' operation twice and waits for the
actual completion of every op, your write IOPS will be limited by the slowest
drive.
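If you want to quantify that, a rough way to compare the raw write IOPS of
each brick's backing storage is something like the following (assuming fio is
installed; the directory is a placeholder and the test writes about 1G of
data):

fio --name=brick-write-test --directory=/path/to/brick/fs --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --size=1G --runtime=60 --time_based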



> Server 3 - Arbiter (Also Virtual machine node)
> 2 X 1.2 TB 10k Drives Raid 10
> 2 CPU - Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
> 192 GB Memory
>
>

But that is relaxed for the arbiter node: as the arbiter keeps no data, no
writes* happen on it, so it doesn't impact overall performance.


* There are actually some metadata updates on the arbiter node, which are,
obviously, write operations, but we can neglect them.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Denis Chaplygin
Hello!

On Fri, Aug 25, 2017 at 1:40 PM, Ralf Schenk  wrote:

> Hello,
>
> I'm using the DNS Balancing gluster hostname for years now, not only with
> ovirt. No software so far had a problem. And setting the hostname to only
> one Host of course breaks one advantage of a distributed/replicated Cluster
> File-System like loadbalancing the connections to the storage and/or
> failover if one host is missing. In earlier ovirt it wasn't possible to
> specify something like "backupvolfile-server" for a High-Available
> hosted-engine rollout (which I use).
>

As far as I know, backup-volfile-servers is the recommended way to keep your
filesystem mountable in case of a server failure. While the fs is mounted,
gluster will automatically provide failover. And you definitely can specify
backup-volfile-servers in the storage domain configuration.
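As an illustration (host and volume names are placeholders): for a storage
domain path like gluster1:/data, the mount options field of the storage
domain can carry something like

backup-volfile-servers=gluster2:gluster3

so the mount still succeeds even if gluster1 is down at mount time.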


> I already used live migration in such a setup. This was done with pure
> libvirt setup/virsh and later using OpenNebula.
>
>
Yes, but that was based on accessing the gluster volume as a mounted
filesystem, not directly... And I would like to exclude that from the list of
possible causes.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Denis Chaplygin
Hello!

On Fri, Aug 25, 2017 at 11:05 AM, Ralf Schenk  wrote:

>
> I replayed migration (10:38:02 local time) and recorded vdsm.log of source
> and destination as attached. I can't find anything in the gluster logs that
> shows an error. One information: my FQDN glusterfs.rxmgmt.databay.de
> points to all the gluster hosts:
>
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.121
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.125
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.127
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.122
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.124
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.123
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.126
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.128
>
> I double checked all gluster hosts. They all are configured the same
> regarding "option rpc-auth-allow-insecure on" No iptables rules on the
> host.
>

Do you use 'glusterfs.rxmgmt.databay.de' as the storage domain host name? I'm
not a gluster guru, but I'm afraid that some internal gluster client code may
go crazy when it receives a different address or several IP addresses every
time. Is it possible to try with separate names? You can create a storage
domain using 172.16.252.121, for example, and it should work, bypassing your
DNS. If that is possible, could you please do it and retry live migration?
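For a quick check outside of oVirt you could also mount the volume directly
by IP, something like the following (the volume name is a placeholder):

mount -t glusterfs -o backup-volfile-servers=172.16.252.122:172.16.252.123 172.16.252.121:/VOLNAME /mnt/test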
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-24 Thread Denis Chaplygin
Hello!

On Thu, Aug 24, 2017 at 3:55 PM, Ralf Schenk  wrote:

> nice to hear it worked for you.
>
> Attached you find the vdsm.log (from migration source) including the
> error and engine.log which looks ok.
>

Yes, the most interesting part was in the vdsm log. Do you have anything
interesting in the gluster logs?

And another thing: can you try to run that VM (or some new VM with drives on
the same SD) on the migration target host? Will it start?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-24 Thread Denis Chaplygin
Hello!

On Thu, Aug 24, 2017 at 3:07 PM, Ralf Schenk  wrote:

> Responsiveness of the VM is much better (already seen when updating OS
> packages).
>
> But I'm not able to migrate the machine live to another host in the
> cluster. The manager only states "Migration failed"
>
>
Live migration worked for me.

Could you please provide some details? Engine/vdsm logs within +/- 10 minutes
of the migration failure.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.1 gluster with libgfapi

2017-08-02 Thread Denis Chaplygin
Hello!

On Wed, Aug 2, 2017 at 11:52 AM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> I was going through gluster document it was talking about libgfapi, that
> gives better performance.
> And I also went through many bug description and comment and mail thread
> in ovirt group.
>
> But still not understood, how to enable libgfapi ?
> Is it supported in 4.1 ? I am confuse.
>
>
Unfortunately it is not there yet. All the hard work is done and I hope it
will be in the 4.1.5 release.



> Please help me to understand whether it supported or not.
> If supported, then how can I enable it ? or use it ?
>
>
It is not supported yet; I believe it will be in 4.1.5.

You will be able to enable it with command

engine-config -s LibgfApiSupported=true

I would like to emphasize once more: it is not supported yet, and the command
mentioned above will not do anything useful right now.
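Just as a note for later, once the feature is actually released: you should
be able to verify the current value with

engine-config -g LibgfApiSupported

and, after changing it, restart the engine (systemctl restart ovirt-engine)
for the setting to take effect.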
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-30 Thread Denis Chaplygin
Hello!

On Fri, Jun 30, 2017 at 5:46 PM, cmc  wrote:

> I ran 'hosted-engine --vm-start' after trying to ping the engine and
> running 'hosted-engine --vm-status' (which said it wasn't running) and
> it reported that it was 'destroying storage' and starting the engine,
> though it did not start it. I could not see any evidence from
> 'hosted-engine --vm-status' or logs that it started.


That sounds really strange. I would suspect some storage problem or something
similar. As I told you earlier, the output of --vm-status may shed light on
that issue.


> By this point I
> was in a panic to get VMs running. So I had to fire up the old bare
> metal engine. This has been a very disappointing experience. I still
> have no idea why the IDs in 'host_id' differed from the spm ID, and
>

Did you try to migrate from the bare metal engine to the hosted engine?

>
>
> 1. Why did the VMs (apart from the Hosted Engine VM) not start on
> power up of the hosts? Is it because the hosts were powered down, that
> they stay in a down state on power up of the host?
>
>
The engine is responsible for starting those VMs. As you had no engine, there
was no one to start them. The Hosted Engine tools are only responsible for
the engine VM, not for other VMs.


> 2. Now that I have connected the bare metal engine back to the
> cluster, is there a way back, or do I have to start from scratch
> again? I imagine there is no way of getting the Hosted Engine running
> again. If not, what do I need to 'clean' all the hosts of the remnants
> of the failed deployment? I can of course reinitialise the LUN that
> the Hosted Engine was on - anything else?
>

I know there is a 'bare metal to hosted engine' migration procedure, but I
doubt I know it well enough. If I remember correctly, you need to take a
backup of your bare metal engine database, run the migration preparation
script (which will handle spm_id duplications), deploy your first HE host,
restore the database from the backup, and then deploy more HE hosts. I'm not
sure those steps are exactly right; it would be better to ask Martin about
the migration process.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Help! No VMs start after reboot of cluster

2017-06-30 Thread Denis Chaplygin
Hello!

On Fri, Jun 30, 2017 at 5:34 PM, cmc  wrote:

>
> Yes, I did check that and it said it was out of global maintenance
> ('False' I think it said).
>
>
Well, then it should start the VM :-) Could you please share the
hosted-engine --vm-status output? It may contain some interesting information.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Help! No VMs start after reboot of cluster

2017-06-30 Thread Denis Chaplygin
Hello!

On Fri, Jun 30, 2017 at 4:35 PM, cmc  wrote:

> I restarted my 3 host cluster after setting it into global maintenance
> mode and then shutting down all of the nodes and then bringing them up
> again. I moved it out of global maintenance mode and no VM is running,
> including the hosted engine.
>
> Any help greatly appreciated!
>

Are you sure you are really out of global maintenance? Could you please
post hosted-engine --vm-status output?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-30 Thread Denis Chaplygin
Hello!

On Fri, Jun 30, 2017 at 4:19 PM, cmc  wrote:

> Help! I put the cluster into global maintenance, then powered off and
> powered on all of the nodes. I have taken it out of global maintenance. No
> VM has started, including the hosted engine. This is very bad. I am going
> to look through the logs to see why nothing has started. Help greatly
> appreciated.
>

Global maintenance mode turns off high availability for the hosted engine
VM. You should either cancel global maintenance or start the VM manually with
hosted-engine --vm-start.
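That is, roughly:

hosted-engine --set-maintenance --mode=none
hosted-engine --vm-start
hosted-engine --vm-status

(the first command cancels global maintenance; the last one just lets you
watch the state afterwards).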

Global maintenance was added to allow manual maintenance of the engine VM, so
in that mode the state of the engine VM and of the engine itself is not
managed, and you are free to stop the engine or the VM or both, do whatever
you like, and the hosted engine tools will not interfere. Obviously, when the
engine VM dies while the cluster is in global maintenance (or all nodes
reboot, as in your case), there is no one to restart it :)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-29 Thread Denis Chaplygin
Hello!

On Thu, Jun 29, 2017 at 1:22 PM, Martin Sivak  wrote:

> Change the ids so they are distinct. I need to check if there is a way
> to read the SPM ids from the engine as using the same numbers would be
> the best.
>

Host (SPM) ids are not shown in the UI, but you can search for them by typing
'spm_id=' into the search box; it will return the host with the specified id,
or nothing if that id is not in use.
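For example (the id value here is just an illustration), typing the following
into the Hosts search box:

spm_id = 2

or, via the REST API, something like GET /ovirt-engine/api/hosts?search=spm_id%3D2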
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt split brain resolution

2017-06-23 Thread Denis Chaplygin
Hello Abi,

On Fri, Jun 23, 2017 at 4:47 PM, Abi Askushi 
wrote:

> Hi All,
>
> I have a 3 node ovirt 4.1 setup. I lost one node due to raid controller
> issues. Upon restoration I have the following split brain, although the
> hosts have mounted the storage domains:
>
> gluster volume heal engine info split-brain
> Brick gluster0:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
> Brick gluster1:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
> Brick gluster2:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
>
>
It is definitely on the gluster side. You could try to use

gluster volume heal engine split-brain latest-mtime
/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
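After the heal command completes (and assuming it succeeds), re-running the
info command should show zero entries again:

gluster volume heal engine info split-brain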


I also added the gluster developers to this thread, so they may provide you
with better advice.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-08 Thread Denis Chaplygin
Hello Alex,


On Wed, Jun 7, 2017 at 11:39 AM, Abi Askushi 
wrote:

> Hi Sahina,
>
> Did you have the chance to check the logs and have any idea how this may
> be addressed?
>


It seems to be a VDSM issue, as VDSM uses direct IO (and it actually calls
dd) and assumes that the block size is 512. I see in the code that the block
size is defined as a constant, so it probably could be adjusted, but I think
it would be better if we ask someone who knows that part better.
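You can see the same 512-byte assumption in action with a quick test like the
following (the mount path is a placeholder, and the test file should be
removed afterwards); on storage that only accepts 4K-aligned direct IO it
typically fails with 'Invalid argument':

dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/SERVER:_VOLUME/__direct_io_test__ bs=512 count=1 oflag=direct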

Anyway, could you please file a bug on that issue? Thanks in advance.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
Hello!

On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hi Denis, understood.
> What if in the case of adding a fourth host to the running cluster, will
> the copy of data be kept only twice in any of the 4 servers ?
>

Replica volumes can be built only from 2 or 3 bricks. There is no way to make
a replica volume from 4 bricks.

But you may combine distributed volumes and replica volumes [1]:

gluster volume create test-volume replica 2 transport tcp server1:/b1
server2:/b2 server3:/b3 server4:/b4

test-volume would be like a RAID 10: you will have two replica sets, b1+b2
and b3+b4, combined into a single distributed volume. In that case you will
have only two copies of your data; part of it will be stored twice on b1 and
b2, and another part will be stored twice on b3 and b4.
You will be able to extend that distributed volume by adding new replica sets.
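For example, to grow the distributed volume above by one more replica pair
(server names and brick paths are again placeholders):

gluster volume add-brick test-volume replica 2 server5:/b5 server6:/b6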


[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move Hosted Engine disk?

2017-04-24 Thread Denis Chaplygin
Hello!

On Mon, Apr 24, 2017 at 4:34 PM, gflwqs gflwqs  wrote:

>
> I am running oVirt 4.1.1 and moving my VMs is no problem.
> However, how do I move my hosted engine disk to the new FC SAN?
> In the engine GUI I am able to click move disk, but is this enough?
>
>
Short answer: No, it's not enough. Use backup/restore.

Long answer: It would be better to reinstall Hosted Engine on the new FC SAN
and restore the database from a backup, but that operation is quite tricky.

It also requires some extra space for VMs, as you will be putting a couple of
hosts into maintenance mode and at least one host will be reinstalled.

I would recommend migrating all your VMs to non-HE hosts first. If you don't
have enough hosts for that, you should either temporarily undeploy HE from
some hosts or shut down some VMs. If neither option is possible, you may
still continue, but you may see some undesired effects.

After that, take a database backup and execute the engine-migrate-he.py
script. This script will put one of your hosts into maintenance mode, so you
will need some extra space for your VMs (including the HE VM).

Now you are safe to switch Hosted Engine to global maintenance mode, shut
down your HE VM, redeploy HE on some host using the new FC SAN and restore
your database. The engine-backup script should be executed with the
--he-remove-hosts option.

Finally, immediately after the database restoration, redeploy the existing HE
hosts, so they join the new HE cluster. It is also safe to activate the host
that was put into maintenance mode by the engine-migrate-he.py script (but if
it was a HE host, just reinstall it).
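A very rough outline of the commands involved, just as a sketch (exact
options, file names and database provisioning flags depend on your
environment and oVirt version; the backup file name is a placeholder):

engine-backup --mode=backup --file=engine.backup --log=engine-backup.log
hosted-engine --set-maintenance --mode=global
hosted-engine --deploy
engine-backup --mode=restore --file=engine.backup --log=engine-restore.log --he-remove-hosts

Run the deploy on a host attached to the new FC SAN, then redeploy the
remaining HE hosts as described above.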
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
With an arbiter volume you still have a replica 3 volume, meaning that you
have three participants in your quorum. But only two of those participants
keep the actual data. The third one, the arbiter, stores only some metadata,
not the file contents, so data is not replicated 3 times.

On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> But then quorum doesn't replicate data 3 times, does it ?
>
> Fernando
>
> On 24/04/2017 10:24, Denis Chaplygin wrote:
>
> Hello!
>
> On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Out of curiosity, why do you and people in general use more replica 3
>> than replica 2 ?
>>
>
> The answer is simple: quorum. With just two participants you don't know
> what to do when your peer is unreachable. When you have three
> participants, you are able to establish a majority. In that case, when two
> participants are able to communicate, they know that they form the
> majority, while the lesser part of the cluster knows that it should not
> accept any changes.
>
>
>> If I understand correctly this seems overkill and waste of storage as 2
>> copies of data (replica 2)  seems pretty reasonable similar to RAID 1 and
>> still in the worst case the data can be replicated after a fail. I see that
>> replica 3 helps more on performance at the cost of space.
>>
> You are absolutely right. You need two copies of data to provide data
> redundancy, and you need three (or more) members in the cluster to provide
> a distinguishable majority. Therefore we have arbiter volumes, thus solving
> that issue [1].
>
> [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
Hello!

On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Out of curiosity, why do you and people in general use more replica 3 than
> replica 2 ?
>

The answer is simple: quorum. With just two participants you don't know what
to do when your peer is unreachable. When you have three participants, you
are able to establish a majority. In that case, when two participants are
able to communicate, they know that they form the majority, while the lesser
part of the cluster knows that it should not accept any changes.


> If I understand correctly this seems overkill and waste of storage as 2
> copies of data (replica 2)  seems pretty reasonable similar to RAID 1 and
> still in the worst case the data can be replicated after a fail. I see that
> replica 3 helps more on performance at the cost of space.
>
> You are absolutely right. You need two copies of data to provide data
redundancy and you need three (or more) members in cluster to provide
distinguishable majority. Therefore we have arbiter volumes, thus solving
that issue [1].

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] Hot to force glusterfs to use RDMA?

2017-03-06 Thread Denis Chaplygin
Hello!

On Fri, Mar 3, 2017 at 12:18 PM, Arman Khalatyan  wrote:

> I think there is some bug in the vdsmd checks;
>
> OSError: [Errno 2] Mount of `10.10.10.44:/GluReplica` at
> `/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica` does not exist
>


>
> 10.10.10.44:/GluReplica.rdma   3770662912 407818240 3362844672  11%
> /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
>

I suppose that vdsm is not able to handle the .rdma suffix on the volume
path. Could you please file a bug for that issue so we can track it?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users