Adding gluster-users.

On Wed, Jan 31, 2018 at 3:55 PM, Misak Khachatryan <kmi...@gmail.com> wrote:

> Hi,
>
> here is the output from virt3, the problematic host:
>
> [root@virt3 ~]# gluster volume status
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick virt1:/gluster/brick2/data            49152     0          Y       3536
> Brick virt2:/gluster/brick2/data            49152     0          Y       3557
> Brick virt3:/gluster/brick2/data            49152     0          Y       3523
> Self-heal Daemon on localhost               N/A       N/A        Y       32056
> Self-heal Daemon on virt2                   N/A       N/A        Y       29977
> Self-heal Daemon on virt1                   N/A       N/A        Y       1788
>
> Task Status of Volume data
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick virt1:/gluster/brick1/engine          49153     0          Y       3561
> Brick virt2:/gluster/brick1/engine          49153     0          Y       3570
> Brick virt3:/gluster/brick1/engine          49153     0          Y       3534
> Self-heal Daemon on localhost               N/A       N/A        Y       32056
> Self-heal Daemon on virt2                   N/A       N/A        Y       29977
> Self-heal Daemon on virt1                   N/A       N/A        Y       1788
>
> Task Status of Volume engine
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick virt1:/gluster/brick4/iso             49154     0          Y       3585
> Brick virt2:/gluster/brick4/iso             49154     0          Y       3592
> Brick virt3:/gluster/brick4/iso             49154     0          Y       3543
> Self-heal Daemon on localhost               N/A       N/A        Y       32056
> Self-heal Daemon on virt1                   N/A       N/A        Y       1788
> Self-heal Daemon on virt2                   N/A       N/A        Y       29977
>
> Task Status of Volume iso
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> and one of the logs is attached.
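>
> I can also run heal info on the three volumes if that helps - something
> like this, if I have the syntax right:
>
> gluster volume heal data info
> gluster volume heal engine info
> gluster volume heal iso info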
>
> Thanks in advance
>
> Best regards,
> Misak Khachatryan
>
>
> On Wed, Jan 31, 2018 at 9:17 AM, Sahina Bose <sab...@redhat.com> wrote:
> > Could you provide the output of "gluster volume status" and the gluster
> > mount logs to check further?
> > Are all the hosts shown as active in the engine (that is, is the
> > monitoring working?)
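> >
> > Something like the following should collect that - the mount log path is
> > a guess based on the default oVirt glusterSD mount naming, so adjust it
> > to whatever is under /var/log/glusterfs/ on your hosts:
> >
> > gluster volume status
> > gluster volume info
> > ls -l /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log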
> >
> > On Wed, Jan 31, 2018 at 1:07 AM, Misak Khachatryan <kmi...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> After upgrading to 4.2 I'm getting "VM paused due to unknown storage
> >> error". While upgrading I had a gluster problem with one of the hosts,
> >> which I fixed by re-adding it to the gluster peers. Now I see something
> >> weird in the brick configuration, see attachment - one of the bricks
> >> uses 0% of its space.
> >>
> >> How can I diagnose this? I can't see anything wrong in the logs.
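> >>
> >> Would something like this be the right way to check? (The brick path and
> >> volume name here are just my guess, based on how the bricks are laid out.)
> >>
> >> df -h /gluster/brick2/data
> >> gluster volume heal data info
> >> gluster volume heal data info split-brain
> >> gluster peer status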
> >>
> >>
> >>
> >>
> >> Best regards,
> >> Misak Khachatryan
> >>
> >
>
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
