On 8 October 2015 at 07:19, Joe Julian wrote:
>
>
> On 10/07/2015 12:06 AM, Lindsay Mathieson wrote:
>
> First up - one of the things that concerns me re gluster is the incoherent
> state of documentation. The only docs linked on the main webpage are for
> 3.2 and there is
Hello.
Thanks for the suggestion. I get an error message saying "volume set:
failed: option : cluster.consistent-metadata does not exist".
I will upgrade my servers to the latest version next week and then
let you know.
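For reference, the failing command is of this shape (the volume name below is a
placeholder; cluster.consistent-metadata is only recognized by newer GlusterFS
releases, which is presumably why the older installation rejects it):

  # placeholder volume name; the option only exists on newer releases
  gluster volume set myvol cluster.consistent-metadata on
  # after the upgrade, the reconfigured option should be listed here
  gluster volume info myvol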
Marco
On 07.10.15 03:10, Krutika Dhananjay wrote:
> Let me know
On 7 October 2015 at 21:28, sreejith kb wrote:
> gluster volume remove-brick datastore1 replica *1*
> vnb.proxmox.softlog:/glusterdata/datastore1c
> force.
Sorry, but I did try it with replica 1 as well, got the same error.
I'll try and reproduce it later and
First up - one of the things that concerns me regarding Gluster is the incoherent
state of the documentation. The only docs linked from the main webpage are for
3.2, and there is almost nothing on how to handle failure modes such as dead
disks/bricks etc., which is one of Gluster's primary functions.
My problem
Hi,
I have an 8-node trusted pool with a distributed, non-replicated volume.
The bricks are located only on 2 machines (2 bricks per node), so there are 6
"dummy" nodes. Everything is working great until one of the brick-carrying nodes
experiences a power outage.
In this case, I can
On 10/07/2015 10:28 PM, Gene Liverman wrote:
> I want to replace my existing CentOS 6 nodes with CentOS 7 ones. Is
> there a recommended way to go about this from the perspective of
> Gluster? I am running a 3 node replicated cluster (3 servers each with 1
> brick). In case it makes a
On 7 October 2015 at 21:28, sreejith kb wrote:
> gluster volume remove-brick datastore1 replica *1*
> vnb.proxmox.softlog:/glusterdata/datastore1c
> force.
I think my problem was that I was using "commit force" instead of just
"force"; I have it working now. Brain
On 8 October 2015 at 07:19, Joe Julian wrote:
> I documented this on my blog at
> https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/ which is
> still accurate for the latest version.
>
> The bug report I filed for this was closed without resolution. I assume
>
Hi,
While removing a failed brick from your existing cluster volume,
make sure to provide the correct replica number *'n-1'* when removing a brick
from a Gluster volume that has 'n' bricks.
So here you are trying to remove one brick from a volume that contains 2
bricks in
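To make that concrete with a hypothetical example (volume and brick names are
made up): on a replica 3 volume, removing one brick leaves replica 2, so the
command has to name the new count:

  # hypothetical names; going from replica 3 down to replica 2
  gluster volume remove-brick myvol replica 2 \
      server3:/bricks/myvol force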
Hi All,
In 1 minute from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda:
Hi all,
thanks for participating today. In case you missed the meeting,
remember to join next Wednesday at 12:00 UTC. More details
in the agenda:
https://public.pad.fsfe.org/p/gluster-community-meetings
Meeting summary
---
* agenda for today's meeting can be
-Atin
Sent from one plus one
On Oct 7, 2015 6:53 PM, "Mohammed Rafi K C" wrote:
>
> Hi all,
>
> thanks for participating today. In case you missed the meeting,
> remember to join next Wednesday at 12:00 UTC. More details
> in the agenda:
>
>
Hey Guys,
This is my first email to the group. I am not sure if it’s the right forum or
if there are any conventions for raising issues.
Incident Overview:
===
The Gluster daemon on individual servers crashes, with a crash dump in the log file.
This started happening after the last upgrade from
Both of the requested trace commands are below:
Core was generated by `/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid'.
Program terminated with signal 6, Aborted.
#0 0x003b91432625 in raise (sig=) at
../nptl/sysdeps/unix/sysv/linux/raise.c:64
64      return INLINE_SYSCALL (tgkill, 3,
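(For anyone wanting to gather the same information: output like the above is
typically produced by loading the core file into gdb; the core path below is an
assumption, since the actual location depends on the system's core_pattern.)

  # assumed core file location; adjust to wherever your system stores cores
  gdb /usr/sbin/glusterd /var/core/core.glusterd
  (gdb) bt                    # backtrace of the crashing thread
  (gdb) thread apply all bt   # backtraces of every thread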
This looks like a glibc corruption to me. Which distribution platform are
you running Gluster on?
-Atin
Sent from one plus one
On Oct 7, 2015 9:12 PM, "Gene Liverman" wrote:
> Both of the requested trace commands are below:
>
> Core was generated by `/usr/sbin/glusterd
>
There are a couple of answers to that question...
- The core dump is from a fully patched RHEL 6 box. This is my primary
box
- The other two nodes are fully patched CentOS 6.
--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
I need to check this; a few months back we faced a libc issue on RHEL. I
don't have the full context of it with me right now. Does anyone recollect
the issue?
-Atin
Sent from one plus one
On Oct 7, 2015 9:29 PM, "Gene Liverman" wrote:
> There are a couple of answers to that
I am trying to understand the nature of the Gluster management traffic.
This would be the traffic that can be secured with the presence of the
‘secure-access’ file.
There appears to be a bug when ‘secure-access’ is enabled and I am evaluating
the security implications of having TLS management
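For context, a minimal sketch of how the management-path TLS toggle usually
works, assuming the default glusterd working directory and that the TLS
certificates are already in place:

  # the presence of this file tells glusterd to use TLS on the management path
  # (default working directory assumed)
  touch /var/lib/glusterd/secure-access
  # glusterd needs a restart to pick the change up
  service glusterd restart    # or: systemctl restart glusterd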
I want to replace my existing CentOS 6 nodes with CentOS 7 ones. Is there a
recommended way to go about this from the perspective of Gluster? I am
running a 3 node replicated cluster (3 servers each with 1 brick). In case
it makes a difference, my bricks are on separate drives formatted as XFS so
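One common approach (a sketch only, not an official recommendation; hostnames,
volume and brick paths below are placeholders) is to replace the nodes one at a
time and let self-heal repopulate each new brick before moving on:

  # from a surviving node: add the freshly installed CentOS 7 box to the pool
  gluster peer probe newnode1
  # move the brick that lived on the old CentOS 6 node onto the new node
  gluster volume replace-brick myvol \
      oldnode1:/bricks/myvol newnode1:/bricks/myvol commit force
  # wait until self-heal finishes before touching the next node
  gluster volume heal myvol info
  # finally drop the retired node from the pool
  gluster peer detach oldnode1

Doing this one node at a time keeps two healthy replicas online throughout.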