This is still an issue for me. I don't need anyone to tear the code apart, but
I'd be grateful if someone would even chime in and say "yeah, we've seen that
too."
From: Christian Rice
Date: Sunday, August 30, 2015 at 11:18 PM
To:
Hi
We are currently running a gluster cluster with 2 nodes in replica 2. We mount
the distributed volume with the gluster fuse client and a volumebackup file for
failover.
The volumebackup file looks as follows:
volume node1
type protocol/client
option transport-type tcp
option
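The volfile in the message is cut off above; for reference, a complete client volfile of this shape would look roughly like the following. This is a sketch, not the poster's actual file: the hostnames, brick paths and volume names are illustrative.

```
volume node1
  type protocol/client
  option transport-type tcp
  option remote-host gs1.example.com
  option remote-subvolume /data/brick1
end-volume

volume node2
  type protocol/client
  option transport-type tcp
  option remote-host gs2.example.com
  option remote-subvolume /data/brick1
end-volume

volume replica
  type cluster/replicate
  subvolumes node1 node2
end-volume
```

Handing such a file to the fuse client (e.g. `glusterfs -f /path/to/volfile /mnt/point`) makes the client talk to both bricks directly, so the mount survives the loss of either single node.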
On 09/01/2015 09:51 PM, Joe Julian wrote:
Right, so unless server quorum is enabled, this shouldn't be the problem.
This is not really related to server quorum. glusterd won't [1] start
the other gluster processes until it hears from at least one other
glusterd.
-Ravi
[1]
On Tuesday 01 September 2015 09:10 AM, Yiping Peng wrote:
Even if I'm seeing disconnected nodes (also from already-in-pool nodes),
my volume is still intact and available. So I'm guessing that glusterd
has little to do with the volume/brick services?
Am I safe to kill all glusterd on all servers and
Hi,
As the subject says, GlusterForge will be shut down at the end of September 2015.
Anyone still having projects on GlusterForge should move them out.
Github is the suggested migration location. If required, a project
can be hosted under the Gluster organisation on Github.
After the project
Hi everybody,
I am looking into the snapshot tool, following this tutorial:
http://blog.gluster.org/2014/10/gluster-volume-snapshot-howto/
While I have successfully created the LVM, the gluster volume and one snapshot,
there are some questions arising for which I was hoping to find some guidance
here:
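For context, the workflow from that tutorial boils down to a handful of CLI calls. A rough sketch follows; the volume and snapshot names are placeholders, and the bricks must sit on thin-provisioned LVM for snapshots to work at all:

```shell
# Bricks must be on thinly provisioned LVM volumes.
gluster snapshot create snap1 myvol   # take a snapshot of volume "myvol"
gluster snapshot list myvol           # list snapshots of the volume
gluster snapshot info snap1           # show details of one snapshot

# Restoring rolls the volume back; the volume must be stopped first.
gluster volume stop myvol
gluster snapshot restore snap1
gluster volume start myvol
```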
> Is this setup on bare metal or does it involve virtual machines?
I'm running GlusterFS on physical machines. No virtual machines involved.
Have you checked if port 24007 is reachable on all peers?
Yes, I did "nc -z server.xxx 24007" on all servers. All succeeded.
Additionally, if you are
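The quick reachability check described above is easy to script across the whole pool. A minimal sketch, assuming `nc` is installed and with `gs1`..`gs3` as placeholder peer names to substitute:

```shell
# check_peer HOST PORT: succeed if a TCP connection opens within 5 seconds.
check_peer() {
  nc -z -w 5 "$1" "$2" 2>/dev/null
}

# Placeholder hostnames; replace with the members of your trusted pool.
for peer in gs1 gs2 gs3; do
  if check_peer "$peer" 24007; then
    echo "$peer: glusterd port 24007 reachable"
  else
    echo "$peer: glusterd port 24007 NOT reachable"
  fi
done
```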
On 09/01/2015 12:39 AM, Davy Croonen wrote:
Hi
We are currently running a gluster cluster with 2 nodes in replica 2. We mount
the distributed volume with the gluster fuse client and a volumebackup file for
failover.
The volumebackup file looks as follows:
volume node1
type
On 08/31/2015 09:03 PM, Atin Mukherjee wrote:
On 09/01/2015 01:00 AM, Merlin Morgenstern wrote:
this all makes sense and sounds a bit like a solr setup :-)
I have now added the third node as a peer
sudo gluster peer probe gs3
That indeed allowed me to mount the share manually on node2 even
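Besides probing the extra peer, the fuse mount can also be told about fallback volfile servers up front. A sketch with placeholder host and volume names (`backup-volfile-servers` is the newer, colon-separated spelling; older releases used the singular `backupvolfile-server`):

```shell
# Fetch the volfile from gs1, falling back to gs2/gs3 if gs1 is down
# at mount time. Hostnames and volume name are placeholders.
mount -t glusterfs -o backup-volfile-servers=gs2:gs3 gs1:/myvol /mnt/myvol
```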
On 09/01/2015 02:34 PM, Joe Julian wrote:
> On 08/31/2015 09:03 PM, Atin Mukherjee wrote:
>> On 09/01/2015 01:00 AM, Merlin Morgenstern wrote:
>>> this all makes sense and sounds a bit like a solr setup :-)
>>> I have now added the third node as a peer
>>> sudo gluster peer probe
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
Thanks for your answer. This clarifies a lot.
KR
Davy
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
-Atin
Sent from one plus one
On Sep 1, 2015 9:39 PM, "Joe Julian" wrote:
> On 09/01/2015 02:59 AM, Atin Mukherjee wrote:
>> On 09/01/2015 02:34 PM, Joe Julian wrote:
>>> On 08/31/2015 09:03 PM, Atin Mukherjee wrote:
On 09/01/2015 01:00 AM,
On 09/01/2015 09:21 AM, Atin Mukherjee wrote:
-Atin
Sent from one plus one
On Sep 1, 2015 9:39 PM, "Joe Julian" wrote:
> On 09/01/2015 02:59 AM, Atin Mukherjee wrote:
>> On 09/01/2015 02:34 PM, Joe Julian wrote:
AFAIK even without server quorum the behavior is the same.
-Atin
Sent from one plus one
On Sep 1, 2015 9:52 PM, "Joe Julian" wrote:
> On 09/01/2015 09:21 AM, Atin Mukherjee wrote:
> -Atin
> Sent from one plus one
> On Sep 1, 2015 9:39 PM, "Joe Julian"
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-01/gluster-meeting.2015-09-01-12.00.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-01/gluster-meeting.2015-09-01-12.00.txt
Log:
Hello
On RHEL7, I've got some problems mounting a gluster volume replicated on 2 nodes
(QA1 and QA2) over NFS. I'm using glusterfs 3.6.3-1. Same thing with 3.7.3-1.
# gluster volume info
Volume Name: data-sync
Type: Replicate
Volume ID: 91335d2a-c60a-4631-8e9c-6809dffd66f4
Status: Started
Number
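One common cause of exactly this symptom on RHEL7 is the NFS protocol version: Gluster's built-in NFS server speaks NFSv3 over TCP only, while RHEL7 clients negotiate v4 by default, so the version has to be pinned at mount time. A sketch, with the mount point as a placeholder:

```shell
# Gluster NFS is v3 over TCP only; force both on the RHEL7 client.
mount -t nfs -o vers=3,proto=tcp QA1:/data-sync /mnt/data-sync
```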
Hi,
I don't know if this is a bug or if I am doing something wrong, but it seems that
mounting a volume with acl,selinux results in poor performance.
The odd thing is that sometimes it's really slow and other times it's just
relatively slow.
I tried both on glusterfs version 3.6.2 and 3.6.3. The file
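For reference, the mount in question would look something like the following (hostname, volume and mount point are placeholders, not taken from the report). Both options add extended-attribute round-trips per file operation, which is one plausible source of the slowdown:

```shell
# POSIX ACL and SELinux label support each cost extra xattr lookups.
mount -t glusterfs -o acl,selinux gs1:/myvol /mnt/myvol
```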
On 09/01/2015 02:59 AM, Atin Mukherjee wrote:
On 09/01/2015 02:34 PM, Joe Julian wrote:
On 08/31/2015 09:03 PM, Atin Mukherjee wrote:
On 09/01/2015 01:00 AM, Merlin Morgenstern wrote:
this all makes sense and sounds a bit like a solr setup :-)
I have now added the third node as a peer
On Tuesday 01 September 2015 07:08 PM, Nicolas Repentin wrote:
Hello
On RHEL7, I've got some problems mounting a gluster volume replicated on 2
nodes (QA1 and QA2) over NFS. I'm using glusterfs 3.6.3-1. Same thing
with 3.7.3-1.
# gluster volume info
Volume Name: data-sync
Type: Replicate
Volume