Hi Craig,
The steps are the same as for previous releases; there are no special steps. You can
follow [1] for more details. After upgrading you need to bump up the op-version
explicitly. Since we have not introduced a new op-version for 3.7.8, you bump up
to the 3.7.7 value:
#gluster volume set all cluster.op-version 30707
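If you want to verify that the bump took effect on every node, one way (a sketch,
assuming the default glusterd working directory) is to check the operating-version
recorded in glusterd.info:

#grep operating-version /var/lib/glusterd/glusterd.info
operating-version=30707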
On 02/20/2016 02:08 AM, Takemura, Won wrote:
I am a new user / admin of Gluster.
I am involved in a support issue where access to a gluster mount fails
and this error is displayed when users attempt to access the mount:
-bash: cd: /app/: Transport endpoint is not connected
This error is
Hi Xin,
I haven't heard about this issue in Gluster v3.7.6/3.7.8 or any other version.
Only after checking all the logs can I say whether it is an issue or something else.
Thanks,
Gaurav
- Original Message -
From: "songxin"
To: "Gaurav Garg"
Cc:
I am a new user / admin of Gluster.
I am involved in a support issue where access to a gluster mount fails and this
error is displayed when users attempt to access the mount: -bash: cd: /app/:
Transport endpoint is not connected
A reboot restores access to the gluster mount since there is an
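For what it's worth, this error typically means the glusterfs FUSE client process
backing the mount has died, so remounting is usually enough and avoids a full
reboot. A minimal sketch, using the /app mount point from the error above; the
server and volume names are hypothetical placeholders:

#umount -l /app
#mount -t glusterfs <server>:/<volname> /app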
Hi,
One of my nodes shows this for "gluster peer status":

Number of Peers: 3

Hostname: 200.145.239.172
Uuid: 2f3aac03-6b27-4572-8edd-48fbf53b7883
State: Peer in Cluster (Connected)

Hostname: 200.145.239.172
Uuid: 2f3aac03-6b27-4572-8edd-48fbf53b7883
State: Establishing Connection (Connected)

Other
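A quick way to see the duplicated entry in one view, as a sketch ("gluster pool
list" prints one UUID/hostname/state row per peer, so a UUID appearing twice is
the stale entry):

#gluster pool list
#gluster pool list | awk '{print $1}' | sort | uniq -d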
Hi Xin,
Thanks for bringing up your Gluster issue.
Abhishek (another Gluster community member) also faced the same issue. I asked for
the information below to analyse it further. Could you provide me the following?
Did you perform any manual operation with the GlusterFS configuration files
-Atin
Sent from one plus one
On 19-Feb-2016 7:47 pm, "Atin Mukherjee" wrote:
>
> Abhilash has already raised a concern and Gaurav is looking into it.
My bad, he is Abhishek!
>
> -Atin
> Sent from one plus one
>
> On 19-Feb-2016 7:07 pm, "songxin"
Abhilash has already raised a concern and Gaurav is looking into it.
-Atin
Sent from one plus one
On 19-Feb-2016 7:07 pm, "songxin" wrote:
> Hi,
>
> I created a replicate volume with 2 bricks, and I frequently reboot my two nodes
> and frequently run “peer detach” “peer
Hi,
I created a replicate volume with 2 bricks, and I frequently reboot my two nodes
and frequently run "peer detach", "add-brick", and "remove-brick".
A board ip: 10.32.0.48
B board ip: 10.32.1.144
After that, I ran "gluster peer status" on the A board and it shows as below.
Number of
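For anyone trying to reproduce, the cycle described above would look roughly like
the sketch below. The volume name and the B-board brick path are taken from later
messages in this thread; the A-board brick path is my assumption:

#gluster volume create c_glusterfs replica 2 10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick force
#gluster volume start c_glusterfs
#gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force
#gluster peer detach 10.32.1.144
#gluster peer probe 10.32.1.144
#gluster volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick force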
Are there any special steps to upgrade gluster from 3.7.3 to 3.7.8? On
CentOS 7, I was thinking I could just install the new RPMs and restart the
processes.
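That is essentially what the reply at the top describes. A minimal sketch on
CentOS 7, assuming the Gluster repo is already configured and using the
op-version bump noted earlier:

#yum update "glusterfs*"
#systemctl restart glusterd
#gluster volume set all cluster.op-version 30707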
Hi Abhishek,
The peer status output looks interesting: it has a stale entry, which technically
should not happen. A few things I need to ask:
Did you perform any manual operation with the GlusterFS configuration files which
reside in the /var/lib/glusterd/* folder?
Can you provide the output of "ls
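Presumably the listing being asked for is of the glusterd state directory; a
sketch assuming the default /var/lib/glusterd layout (the peers directory holds
one file per known peer, so a stale or duplicate peer shows up there):

#ls -l /var/lib/glusterd/peers/
#cat /var/lib/glusterd/peers/*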
Hi Gaurav,
After the failure of add-brick, the following is the output of the "gluster peer
status" command:

Number of Peers: 2

Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)

Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in
Hi Abhishek,
How are you connecting the two boards, and how are you removing one manually? I
need to know because if you are removing your 2nd board from the cluster (abrupt
shutdown), then you can't perform a remove-brick operation for the 2nd node
from the first node, and yet it is happening successfully in your
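A simple way to confirm that reachability before attempting the removal, as a
sketch (volume name and brick path as used elsewhere in this thread):

#gluster peer status
#gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force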
Hi Gaurav,
Thanks for the reply.
1. Here I removed the board manually, but this time it worked fine:
[2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
[2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS
Yes
Abhishek,
When it sometimes works fine, that means the 2nd board's network connection is
reachable from the first node. You can confirm this by executing the same
#gluster peer status command.
Thanks,
Gaurav
- Original Message -
From: "ABHISHEK PALIWAL"
To: "Gaurav Garg"
Hi Gaurav,
Yes, you are right: I am forcefully detaching the node from the slave, and when
we remove the board it gets disconnected from the other board. But my question
is: I am doing this process multiple times; sometimes it works fine, but
sometimes it gives these errors. You can see the
Hi,
I am working on a two-board setup where the boards connect to each other. Gluster
version 3.7.6 is running, and I added two bricks in replica 2 mode, but when I
manually remove (detach) one board from the setup I get the following error:
volume remove-brick c_glusterfs replica 1
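For context, the complete command as it appears elsewhere in this thread, plus
the glusterd command-history log where such results are recorded (log path
assumes a default installation):

#gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force
#tail /var/log/glusterfs/cmd_history.log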