Hello!
On Fri, Mar 3, 2017 at 12:18 PM, Arman Khalatyan wrote:
> I think there are some bugs in the vdsmd checks;
>
> OSError: [Errno 2] Mount of `10.10.10.44:/GluReplica` at
> `/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica` does not exist
>
>
>
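A quick way to check whether that mount really exists on the host (a minimal sketch, reusing the server and path from the traceback above; adjust for your setup):

  # Is the gluster storage domain actually mounted?
  mount | grep 'glusterSD/10.10.10.44:_GluReplica'
  # If not, mounting it by hand usually surfaces the real gluster-side error
  mount -t glusterfs 10.10.10.44:/GluReplica \
      /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica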
Hello!
On Mon, Apr 24, 2017 at 4:34 PM, gflwqs gflwqs wrote:
>
> I am running oVirt 4.1.1 and moving my VMs is no problem.
> However, how do I move my hosted engine disk to the new FC SAN?
> In the engine GUI I am able to click "move disk", but is this enough?
>
>
Short answer:
Hello!
On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:
> Hi Denis, understood.
> What if in the case of adding a fourth host to the running cluster, will
> the copy of data be kept only twice in any of the 4 servers ?
>
replica volumes can be built only
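For illustration, a sketch of how a pure replica volume grows; gluster only accepts new bricks in full replica sets, so a single fourth brick cannot simply be appended to a replica 3 volume (volume name, hosts and brick paths below are hypothetical):

  # Adding capacity to a replica 3 volume means adding three bricks at once,
  # which turns it into a 2 x 3 distributed-replicate volume
  gluster volume add-brick data \
      host4:/gluster/data/brick host5:/gluster/data/brick host6:/gluster/data/brick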
Hello!
On Wed, Aug 2, 2017 at 11:52 AM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:
> I was going through the gluster documentation and it was talking about libgfapi,
> which gives better performance.
> I also went through many bug descriptions, comments and mail threads
> in the oVirt group.
>
>
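For reference, a sketch of how libgfapi access is usually switched on on the engine side (this assumes an engine version that already ships the LibgfApiSupported config key; check with -g first, and note that engine-config may ask which cluster level to apply it to):

  # Does this engine know the option, and what is its current value?
  engine-config -g LibgfApiSupported
  # Enable it and restart the engine so it takes effect
  engine-config -s LibgfApiSupported=true
  systemctl restart ovirt-engine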
Hello!
On Fri, Jun 30, 2017 at 4:35 PM, cmc wrote:
> I restarted my 3 host cluster after setting it into global maintenance
> mode and then shutting down all of the nodes and then bringing them up
> again. I moved it out of global maintenance mode and no VM is running,
>
Hello!
On Fri, Jun 30, 2017 at 5:34 PM, cmc wrote:
>
> Yes, I did check that and it said it was out of global maintenance
> ('False' I think it said).
>
>
Well, then it should start the VM :-) Could you please share the hosted-engine
--vm-status output? It may contain some
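For completeness, the commands involved, run on one of the hosted-engine hosts (these are standard hosted-engine CLI options):

  # Show each HA agent's view of the engine VM and the global maintenance flag
  hosted-engine --vm-status
  # Make sure global maintenance is really off
  hosted-engine --set-maintenance --mode=none
  # Start the engine VM by hand if the agents still do not start it
  hosted-engine --vm-start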
Hello!
On Fri, Jun 30, 2017 at 4:19 PM, cmc wrote:
> Help! I put the cluster into global maintenance, then powered off and
> powered on all of the nodes. I have taken it out of global maintenance.
> No VM has started,
> including
Hello!
On Fri, Jun 30, 2017 at 5:46 PM, cmc wrote:
> I ran 'hosted-engine --vm-start' after trying to ping the engine and
> running 'hosted-engine --vm-status' (which said it wasn't running) and
> it reported that it was 'destroying storage' and starting the engine,
> though
Hello!
On Thu, Jun 29, 2017 at 1:22 PM, Martin Sivak wrote:
> Change the ids so they are distinct. I need to check if there is a way
> to read the SPM ids from the engine as using the same numbers would be
> the best.
>
Host (SPM) ids are not shown in the UI, but you can
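If it helps, one place the ids can usually be read is the engine database; this is a sketch from memory of the schema (the vds_spm_id_map table and its column names may differ in your version, so verify before relying on it):

  # Run on the engine machine
  sudo -u postgres psql engine -c 'SELECT vds_id, vds_spm_id FROM vds_spm_id_map;'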
Hello!
On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:
> Out of curiosity, why do you and people in general use replica 3 rather than
> replica 2?
>
The answer is simple: quorum. With just two participants you don't know
what to do when your peer is
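As an illustration of the usual middle ground, an arbiter brick provides the third quorum vote while full data is kept only twice (volume name, hosts and brick paths are hypothetical):

  # Replica 3 with an arbiter: two data copies plus a metadata-only third brick
  gluster volume create data replica 3 arbiter 1 \
      host1:/gluster/data/brick host2:/gluster/data/brick host3:/gluster/data/arbiter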
On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:
> But then quorum doesn't replicate data 3 times, does it?
>
> Fernando
>
> On 24/04/2017 10:24, Denis Chaplygin wrote:
>
> Hello!
>
> On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI <
Hello Alex,
On Wed, Jun 7, 2017 at 11:39 AM, Abi Askushi
wrote:
> Hi Sahina,
>
> Did you have the chance to check the logs and have any idea how this may
> be addressed?
>
It seems to be a VDSM issue, as VDSM uses direct I/O (and it actually calls
dd) and assumes that
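A way to reproduce that kind of direct I/O access by hand and see whether the underlying storage accepts it (the path is a placeholder for your storage domain's metadata file):

  # Read the storage domain metadata with O_DIRECT, similar to what the checks do
  dd if=/rhev/data-center/mnt/glusterSD/SERVER:_VOLUME/SD_UUID/dom_md/metadata \
      of=/dev/null bs=4096 count=1 iflag=direct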
Hello Abi,
On Fri, Jun 23, 2017 at 4:47 PM, Abi Askushi
wrote:
> Hi All,
>
> I have a 3 node ovirt 4.1 setup. I lost one node due to raid controller
> issues. Upon restoration I have the following split brain, although the
> hosts have mounted the storage domains:
>
>
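A sketch of the usual first steps on the gluster side (the volume name and file path are placeholders; pick the resolution policy that matches the copy you trust):

  # List the files in split brain for a volume
  gluster volume heal VOLNAME info split-brain
  # Resolve a single file, keeping the most recently modified copy
  gluster volume heal VOLNAME split-brain latest-mtime /path/inside/volume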
Hello!
On Thu, Oct 5, 2017 at 12:21 AM, Bryan Sockel wrote:
> Is there any performance loss if your gluster replica 3 servers
> are not all the same configuration?
>
>
> Server 1 - Primary
> 16 x 1.2 TB 10k drives - RAID 10 - stripe 256k (2.5" drives)
> 1 CPU -
Hello!
On Thu, Aug 24, 2017 at 3:07 PM, Ralf Schenk wrote:
> Responsiveness of the VM is much better (already seen when updating OS
> packages).
>
> But I'm not able to migrate the machine live to another host in the
> cluster. The manager only states "Migration failed"
>
>
> Live
Hello!
On Thu, Aug 24, 2017 at 3:55 PM, Ralf Schenk wrote:
> nice to hear it worked for you.
>
> Attached you find the vdsm.log (from migration source) including the
> error and engine.log which looks ok.
>
Yes, the most interesting part was in the vdsm log. Do you have anything
Hello!
On Fri, Aug 25, 2017 at 1:40 PM, Ralf Schenk wrote:
> Hello,
>
> I've been using the DNS-balanced gluster hostname for years now, not only with
> oVirt. No software so far has had a problem. And setting the hostname to only
> one host of course breaks one advantage of a
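For what it's worth, the usual alternative to round-robin DNS for oVirt gluster domains is to point the storage domain at a single host and list the others in the domain's mount options (hostnames below are hypothetical):

  # Mount options field of the gluster storage domain in the engine UI
  backup-volfile-servers=gluster2.example.com:gluster3.example.com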
Hello!
On Fri, Aug 25, 2017 at 11:05 AM, Ralf Schenk wrote:
>
> I re-ran the migration (10:38:02 local time) and recorded vdsm.log of source
> and destination as attached. I can't find anything in the gluster logs that
> shows an error. One piece of information: my FQDN
Hello!
On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor wrote:
>
> Are there any changes with this bug?
>
> I still haven't finished the upgrade process that I started on 9th May :(
>
> Please help me if you can.
>
>
Looks like all required patches are already merged, so
Hello!
On Thu, Jan 11, 2018 at 9:56 AM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:
>
> We have a self-hosted engine test infra with gluster storage replica 3
> arbiter 1 (oVirt 4.1).
>
Why don't you try 4.2? :-) There are a lot of good changes in that area.
>
> RuntimeError:
Hello!
On Thu, Jan 11, 2018 at 1:03 PM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:
> Denis, thx for your answer.
>
> [root@node-gluster203 ~]# gluster volume heal engine info
> Brick node-gluster205:/opt/gluster/engine
> Status: Connected
> Number of entries: 0
>
> Brick
Hello!
On Tue, Apr 17, 2018 at 5:49 PM, Joe DiTommasso wrote:
> Thanks! I realized yesterday that I've got a few hosts I was in the
> process of decommissioning that I can temporarily use for this. So my new
> plan is to build a 3-node cluster with junk hosts and cycle in
Hello!
On Fri, Apr 13, 2018 at 7:02 PM, Joe DiTommasso wrote:
> Hi, I'm currently running an out-of-date (3.6) 3-node oVirt cluster, using
> NFS storage. I'd like to upgrade to the latest release and move to a
> hyperconverged setup, but I've only got the three hosts to play
Hello!
On Thu, Oct 25, 2018 at 8:12 PM Marco Lorenzo Crociani <
mar...@prismatelecomtesting.com> wrote:
> Storage nodes are not yet updated because ovirt 4.2.6.4-1.el7 depends on
>
> Compatibility Versions:
>   Compute datacenter 4.2
>   Storage 4.1 (because I haven't yet updated the storage