I am running into trouble while syncing (rsync, cp ...) my files to
glusterfs. After about 50K files, one machine dies and has to be rebooted.
As there are about 300K files in one directory, I am thinking about
splitting them into a directory structure in order to overcome that problem.
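One way to split a flat directory is to shard by a hash prefix of each file name; here is a minimal sketch (the function name, paths, and the two-character fan-out are my assumptions, not from the thread):

```shell
# Shard a flat directory into up to 256 subdirectories named by the
# first two hex characters of each file name's md5 hash.
shard_dir() {
    src=$1; dst=$2
    for f in "$src"/*; do
        [ -e "$f" ] || continue          # skip if the directory is empty
        b=$(basename "$f")
        sub=$(printf '%s' "$b" | md5sum | cut -c1-2)
        mkdir -p "$dst/$sub"
        mv "$f" "$dst/$sub/"
    done
}
```

rsync could then be pointed at one shard at a time, keeping each transfer well below the ~50K-file mark where the machine dies.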
e.g. (XFS mount options):
,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=131072k,nobarrier
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-24 16:11 GMT+02:00 Merlin Morgenstern
merlin.morgenst...@gmail.com:
Re your questions:
did you do some basic tuning to help anyway?
I am running Ubuntu 14.04 as a guest inside VirtualBox 5.0.3 on OS X. While
performing rsync on a glusterfs (3.7.x) mounted share with over 100,000
files, the entire machine crashes.
There is one particular /var/log/syslog entry of interest:
INFO: task rsync:4114 blocked for more than 120 seconds.
which you are trying to mount is offline.
On Thu, Aug 27, 2015 at 4:14 AM, Merlin Morgenstern
merlin.morgenst...@gmail.com wrote:
I have two gluster servers installed on separate machines. They function
as failover and I mount on each machine with the corresponding gluster
client. This only
I am running 2 glusterd (3.7.3) on individual machines connected by a private
network. It appears that I can only mount the (replicated) bricks if both
daemons are up, otherwise it fails. However, failover works - once they
are running.
As I am pretty new to glusterfs, there might be some
I have two gluster servers installed on separate machines. They function as
failover and I mount on each machine with the corresponding gluster
client. This only seems to work if both gluster servers are live.
It seems that something is misconfigured, or I have misunderstood how
gluster works.
was not started, as in a 2-node setup glusterd waits for its peers to come
>>> up before it starts the bricks. Could you check whether the brick
>>> process is running or not?
>>>
>>> Thanks,
>>> Atin
>>>
>>> On 08/31/2015 04:17 PM, Yiping Peng wrote:
I understand. So my setup is maybe wrong. Vijay, could you please explain
what this dummy node setup would look like?
Do you recommend setting up a glusterd on node3 and replicating to 3 servers?
In my understanding this would significantly reduce performance as files
have to be replicated 3 times.
this all makes sense and sounds a bit like a solr setup :-)
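If the "dummy node" is meant to avoid a full third copy of the data, GlusterFS 3.7 introduced arbiter volumes: the third brick stores only metadata and acts as the quorum tie-breaker, so files are not replicated three times in full. A sketch of creating one (host names, volume name, and brick paths are made up):

```shell
# replica 3 arbiter 1: the bricks on gs1/gs2 hold full data, the brick
# on gs3 holds only metadata and breaks split-brain ties.
sudo gluster volume create myvol replica 3 arbiter 1 \
    gs1:/bricks/brick1 gs2:/bricks/brick1 gs3:/bricks/arbiter
sudo gluster volume start myvol
```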
I have now added the third node as a peer:
sudo gluster peer probe gs3
That indeed allows me to mount the share manually on node2 even if node1 is
down.
BUT: it does not mount on reboot! It only mounts successfully if node1 is
up. I need
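The usual fix for a mount that depends on one server at boot is to list fallback volfile servers in the mount options; a sketch assuming hosts gs1/gs2/gs3, a volume named myvol, and mount point /mnt/data:

```shell
# Manual mount with fallback volfile servers (glusterfs 3.5+):
sudo mount -t glusterfs -o backup-volfile-servers=gs2:gs3 gs1:/myvol /mnt/data

# Equivalent /etc/fstab entry; _netdev delays the mount until the network is up:
# gs1:/myvol /mnt/data glusterfs defaults,_netdev,backup-volfile-servers=gs2:gs3 0 0
```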
Hi everybody,
I am looking into the snapshot tool, following this tutorial:
http://blog.gluster.org/2014/10/gluster-volume-snapshot-howto/
While having successfully created the LVM volume, gluster volume and one
snapshot, there are some questions arising where I was hoping to find some
guidance here:
----- Original Message -----
> > From: "Merlin Morgenstern" <merlin.morgenst...@gmail.com>
> > To: "gluster-users" <gluster-users@gluster.org>
> > Sent: Tuesday, September 1, 2015 3:15:43 PM
> > Subject: [Gluster-users] gluster volume sna
According to the docs, snapshots should be present at "gs:/snaps" just as
volumes are under "gs:/volume". This is not the case. I can see mounted
snaps under /var/run/gluster/snaps/UUID/...
Furthermore, the name of the snapshot is not as declared in the command,
e.g. "snap1" becomes "snap1_timestamp".
Is there a way to retrieve the snap volume name from gluster?
$ sudo gluster snapshot info returns all kinds of info:
Snapshot : snap1
Snap UUID : 2788e974-514b-4337-b41a-54b9cb5b0699
Created : 2015-09-02 14:03:59
Snap Volumes:
Snap Volume Name
me" | sed -e
> 's/^S.*://g'
>
> ??
>
> Alex
>
>
>
> On 03/09/15 16:06, Merlin Morgenstern wrote:
>
> Is there a way to retrieve the snap volume name from gluster?
>
> $ sudo gluster snapshot info returns all kinds of info:
>
> Snapshot
got it...
sudo gluster snapshot info snap1 | grep "Snap\ Volume\ Name" | sed -e
's/.*S.*:.//g'
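A variant that anchors on the field label instead of a bare `S.*:` is less likely to mangle other lines; the sample line below is made up to mirror the `snapshot info` output:

```shell
# Print only the value after "Snap Volume Name :", tolerating extra spaces;
# lines without that label produce no output at all.
extract_snap_volume() {
    sed -n 's/^Snap Volume Name[[:space:]]*:[[:space:]]*//p'
}

printf 'Snap Volume Name          : 2788e974514b4337b41a54b9cb5b0699\n' \
    | extract_snap_volume
# -> 2788e974514b4337b41a54b9cb5b0699
```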
2015-09-03 17:36 GMT+02:00 Merlin Morgenstern <merlin.morgenst...@gmail.com>
:
> Interesting hack :-)
>
> Should work but the regex seems not fitting:
> R
I have about 1M files in a GlusterFS volume with replica 2 on 3 nodes running
gluster 3.7.3.
What would be a recommended automated backup strategy for this setup?
I already considered the following:
1) glusterfs snapshots in combination with dd. This unfortunately was not
possible so far as I could not
I want to automate backups on a glusterfs 3.7.3 share with snapshots.
Creating snapshots manually works, but how are they secured to a different
server?
I was able to manage that process manually by doing the following:
sudo umount /run/gluster/snaps/7cb4b2c8f8a64ceaba62bc4ca6cd76b2/brick1
sudo dd
Thank you in advance for shedding some light on doing backups from glusterfs.
2015-09-03 13:21 GMT+02:00 Rajesh Joseph <rjos...@redhat.com>:
>
>
> ----- Original Message -----
> > From: "Merlin Morgenstern" <merlin.morgenst...@gmail.com>
> >
root root 13 Sep 2 16:04 snapd.info
-rw------- 1 root root 2478 Sep 2 16:03
trusted-2d828e6282964e0e89616b297130aa1b.tcp-fuse.vol
2015-09-02 16:31 GMT+02:00 Merlin Morgenstern <merlin.morgenst...@gmail.com>
:
> So what would be the fastest possible way to make a backup to one single
er volume status' on server to see if brick process is running.
Thank you in advance for any help
2015-09-02 14:11 GMT+02:00 Rajesh Joseph <rjos...@redhat.com>:
>
>
> ----- Original Message -----
> > From: "Merlin Morgenstern" <merlin.morgenst...@gmail.com>
>
er processes are specific to the volume.
> If you wish to stop these, you must stop the volume using
>
> # gluster v stop
>
> You could also kill them, but that might come with additional
> repercussions like data loss, etc.
>
>
>
> From:Merlin
I am running Gluster 3.7.x on 3 nodes and want to stop the service.
Unfortunately this does not seem to work:
sudo /usr/sbin/glusterd stop
user@fx2:~$ ps -ef | grep gluster
root 2334 1 0 Sep08 ?00:00:03 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/nfs -p
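`glusterd stop` is not a supported invocation of the daemon binary; on Ubuntu the management daemon is stopped through the init system, and the brick/helper processes (glusterfsd, glusterfs) belong to volumes, so they stay up until the volume itself is stopped. A sketch (the service name matches the Ubuntu 14.04 package, the volume name is an assumption):

```shell
# Stop the management daemon (Ubuntu package: glusterfs-server):
sudo service glusterfs-server stop

# Brick and NFS/self-heal helper processes are per-volume; stop the
# volume to bring them down cleanly rather than killing them:
sudo gluster volume stop myvol
```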
I am experiencing unusual CPU usage on an Ubuntu 14.04.03 box which is
supposed to be idle. It is around 25% on a 4-core system and htop says load
is about 1.0, while the underlying processes all show close to 0% CPU load.
The only services running are a glusterfsd and a glusterfs client. There is
no
I am running glusterfs on a 3-node prod setup with thinly provisioned
LVM volumes. My goal is to automate a backup process that is based on
gluster snapshots. The idea is basically to run a shell script via cron
that takes the snapshot, zips it and moves it to a remote server.
Backup works, now
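The cron-driven flow described above could be sketched as one function; every concrete name here (volume, remote host, paths, the output format parsed from `snapshot create`) is a hypothetical illustration with minimal error handling:

```shell
# Take a snapshot, archive its mounted contents, ship the archive off-box.
# Usage: backup_snapshot <volume> <user@host> [snap-mount-base]
backup_snapshot() {
    vol=$1; remote=$2; base=${3:-/run/gluster/snaps}
    snap="backup_$(date +%Y%m%d%H%M%S)"
    # gluster appends a timestamp to the requested name, so parse the
    # real snapshot name out of the create message.
    name=$(gluster snapshot create "$snap" "$vol" \
        | sed -n 's/.*Snap \([^ ]*\) created successfully.*/\1/p')
    [ -n "$name" ] || return 1
    gluster snapshot activate "$name"
    # Activated snapshots are mounted under <base>/<uuid>/ on the node.
    uuid=$(gluster snapshot info "$name" \
        | sed -n 's/^Snap UUID[[:space:]]*:[[:space:]]*//p')
    tar -C "$base/$uuid" -czf "/tmp/$name.tar.gz" .
    scp "/tmp/$name.tar.gz" "$remote:/backups/" && rm -f "/tmp/$name.tar.gz"
    printf 'y\n' | gluster snapshot delete "$name"
}
```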
I am trying to attach a brick from another server to a local gluster
development server. Therefore I have done a dd from a snapshot on production
and a dd onto the LVM volume on development. Then I deleted the .glusterfs
folder in the brick root.
Unfortunately, forming a new brick failed nevertheless with