I am confused about my caching problem. I’ll try to keep this as
straightforward as possible and include the basic details...
I have a sixteen-node distributed volume, one brick per node, XFS isize=512,
Debian 7/Wheezy, 32 GB RAM minimum. Every brick node is also a gluster
client, and also
I am running 2 glusterd (3.7.3) on individual machines connected by a private
network. It appears that I can only mount the (replicated) bricks if both
daemons are up; otherwise the mount fails. However, failover works once they
are running.
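A common way to make the initial mount tolerate one daemon being down is to name a backup volfile server at mount time. A minimal sketch, assuming the volume is called vol1 and the servers are server1/server2 (adjust to your setup):

```shell
# Ask the client to fetch the volume file from server2 if server1 is
# unreachable. On 3.7.x the option backup-volfile-servers takes a
# colon-separated list; older releases used backupvolfile-server.
mount -t glusterfs -o backup-volfile-servers=server2 server1:/vol1 /mnt/gluster
```

The volfile server only matters at mount time; once mounted, the client talks to all bricks directly, which is consistent with failover working after the mount succeeds.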
As I am pretty new to glusterfs, there might be some
-Atin
Sent from one plus one
On Aug 31, 2015 10:34 PM, "Merlin Morgenstern" wrote:
Thank you all for your help.
To explain the setup better, here is the goal I am trying to achieve:
- 3 servers running in a cluster, each with a webserver uploading and
serving files to visitors from a common glusterfs share.
- Server1 and Server2 have gluster-server installed
- One brick
On Monday 31 August 2015 10:42 PM, Atin Mukherjee wrote:
> 2. Server2 dies. Server1 has to reboot.
>
> In this case the service stays down. It is impossible to remount the
share without Server1. This is not acceptable for a High Availability
System and I believe also not intended, but a
I understand. So my setup may be wrong. Vijay, could you please explain
what this dummy-node setup would look like?
Do you recommend setting up glusterd on node3 and replicating to 3 servers?
In my understanding this would significantly reduce performance, as files
have to be replicated 3 times.
On 08/31/2015 10:41 AM, Vijay Bellur wrote:
this all makes sense and sounds a bit like a solr setup :-)
I have now added the third node as a peer:
sudo gluster peer probe gs3
That indeed allows me to mount the share manually on node2, even if node1 is
down.
BUT: It does not mount on reboot! It only mounts successfully if node1 is
up. I need
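For mounting at boot when node1 may be down, one option is to name the other nodes as backup volfile servers in /etc/fstab. A sketch, assuming the hostnames gs1/gs2/gs3 and the volume volume1 from this thread:

```shell
# /etc/fstab entry: fetch the volume file from gs2 or gs3 if gs1 is down;
# _netdev delays the mount attempt until the network is up.
gs1:/volume1  /data/nfs  glusterfs  defaults,_netdev,backup-volfile-servers=gs2:gs3  0  0
```

Even with this, a mount attempted very early in boot can race the local glusterd; mounting from rc.local or retrying once networking settles is a common workaround.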
Hi,
I am testing out several failure scenarios with GlusterFS. I have a 3-node
replicated gluster that I am testing with.
One test I am having trouble solving is when a host dies (i.e. as if
someone pulled the power cord out).
- Firewall off a host from the rest of the cluster
- Test time it
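The "pulled power cord" case can be simulated by silently dropping traffic, which (unlike a clean shutdown) leaves the TCP peers waiting for a timeout. A sketch, with a hypothetical peer address and volume name:

```shell
# On the node being "killed": silently drop all traffic to/from a peer
# (10.0.0.2 is a placeholder; repeat for each peer address).
iptables -A INPUT  -s 10.0.0.2 -j DROP
iptables -A OUTPUT -d 10.0.0.2 -j DROP

# Clients hang on files served by the dead brick until the ping timeout
# expires; the default is 42 seconds. It can be lowered, with care:
gluster volume set myvol network.ping-timeout 10
```

Timing the hang before and after changing network.ping-timeout gives a concrete measure of the failover window; setting it too low risks spurious disconnects under load.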
Thanks for the info Joe! responded in line
On Mon, Aug 31, 2015 at 3:00 PM, Joe Julian wrote:
> On 08/31/2015 12:35 PM, Grant Ridder wrote:
I've tried both; assuming server1 is already in the pool and server2 is
undergoing peer-probing:
server2:~$ mount server1:/vol1 mountpoint   # fails
server2:~$ mount server2:/vol1 mountpoint   # fails
Strangely enough, I *should* be able to mount server1:/vol1 on server2. But
this is not the case :(
Maybe
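When a mount fails like this, the usual first checks are the peer and volume state plus the client mount log. A sketch, using the vol1 name from above; the log file name is derived from the mount point, so adjust it to yours:

```shell
# Is the new node actually accepted into the pool and connected?
gluster peer status

# Are the brick processes for the volume up?
gluster volume status vol1

# The FUSE client log usually names the real cause; it lives under
# /var/log/glusterfs/ and is named after the mount point
# (e.g. data-nfs.log for a mount on /data/nfs).
tail -n 50 /var/log/glusterfs/data-nfs.log
```

A node still mid-probe is often rejected as a volfile server, which would explain both mounts failing until the probe completes.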
Hi guys,
I've been running GlusterFS for a couple of days and it's been nice and
steady, except for a minor problem: the peer probing on my relatively large
cluster seems to get stuck for a long time.
Last time atinm told me on IRC (I was barius.2333 on IRC) that a cluster as
large as 50+ nodes might
On 08/31/2015 01:10 PM, Yiping Peng wrote:
The "Disconnected" state of nodes changes randomly, so I randomly picked a
node and tailed the last several lines
of /var/log/glusterfs/etc-glusterfs-glusterd.vol.log (is that the right log
file?).
I can still access the cluster from servers already in the pool; both reading
and writing work fine.
The log
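One quick way to watch whether the pool is converging is to count how many peers each node currently sees as connected. A small sketch (run on any pool member; the count should stabilize at pool size minus one as probing settles):

```shell
# "gluster peer status" prints one "State: ... (Connected)" line per
# healthy peer; refresh the count every 5 seconds.
watch -n 5 "gluster peer status | grep -c '(Connected)'"
```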
On 08/31/2015 12:53 PM, Merlin Morgenstern wrote:
Trying to mount the brick on the same physical server, with the daemon
running on this server but not on the other server:
@node2:~$ sudo mount -t glusterfs gs2:/volume1 /data/nfs
Mount failed. Please check the log file for more details.
For
I believe the following events have happened in the cluster, resulting
in this situation:
1. GlusterD & the brick process on node 2 were brought down.
2. Node 1 was rebooted.
In the above case the mount will definitely fail, since the brick process
was not started: in a 2 node set up glusterd waits
On 09/01/2015 01:00 AM, Merlin Morgenstern wrote:
Even if I'm seeing disconnected nodes (also from already-in-pool nodes), my
volume is still intact and available. So I'm guessing that glusterd has
little to do with volume/brick service?
Am I safe to kill all glusterd processes on all servers and start the whole
peer-probing process over again?
If I do
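Restarting the management daemon generally does not touch the data path, since the bricks are served by separate glusterfsd processes. A sketch for Debian's init scripts (the service name comes from the glusterfs-server package and may differ on other distributions):

```shell
# glusterd only manages the pool and volume configuration;
# glusterfsd processes serve the bricks and keep running.
service glusterfs-server restart
pgrep -a glusterfsd   # brick processes should still be listed
```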