Hello
Thanks for the suggestions, but they don’t work:
[root@mybox ~]# gluster volume create myVol1 /gfs/mybox/brick1
Wrong brick type: /gfs/mybox/brick1, use HOSTNAME:export-dir-abs-path
Usage: volume create NEW-VOLNAME [stripe COUNT] [replica COUNT [arbiter
COUNT]] [disperse [COUNT]]
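As the error says, the brick must be given as HOSTNAME:absolute-path rather
than a bare path. A minimal sketch of the corrected invocation, assuming the
host is named mybox (and that the name resolves, see below):
[root@mybox ~]# gluster volume create myVol1 mybox:/gfs/mybox/brick1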
I have a 1 x 2 = 2 volume geo-replicated to a single-brick volume in
another physical site, where I would like to set up a backup.
I could set up a backup on a mount of the volume, but a quick test shows it
is slow in this setup (presumably because there are loads of small files on
there).
Thanks Anand. That would be interesting to see indeed.
On 17 Aug 2015 4:55 am, Anand Subramanian ansub...@redhat.com wrote:
Hi Thibault,
There are a few tuneables that have helped boost ganesha performance and I
suspect these tuneables on the OS side apply to improve performance for
several
Thanks, that worked. I only needed to add the host's name to its own
hosts file, and it worked fine with the one volume.
On 17/08/2015 12:18, Kaushal M kshlms...@gmail.com wrote:
Hey Mark,
Does the address `mybox` resolve on your system? Gluster requires a
resolvable hostname to be used for
Hi Thibault,
Instead of backing up individual bricks or the entire thin logical
volume, you can take a gluster volume snapshot, and you will have a
point-in-time backup of the volume.
Gluster snapshots internally use thin LV snapshots, so you can't move
the backup out of the system. Also
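If it helps, the snapshot itself is a single command; a sketch, assuming the
volume is named myVol1 and its bricks sit on thin-provisioned LVs (a
requirement for gluster snapshots):
# take a point-in-time snapshot of the volume
gluster snapshot create mysnap1 myVol1
# list and inspect snapshots
gluster snapshot list
gluster snapshot info mysnap1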
Hi,
Since then I have set up geo-replication to a volume composed of a single
brick (no replication, no distribution), which seems to be complete / up to
date (the LAST_SYNCED column in 'gluster volume geo-replication VOLUME SLAVE
status detail' is only a few minutes old), and interestingly the master
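For completeness, the status command with concrete names filled in (the
slave host and slave volume names here are illustrative):
gluster volume geo-replication myVol1 slavehost::slavevol status detail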
Hey Mark,
Does the address `mybox` resolve on your system? Gluster requires a
resolvable hostname to be used for bricks. If it doesn't resolve, add
an entry to your /etc/hosts, e.g. '127.0.0.1 mybox'. This allows
the name to be resolved and gluster will allow you to create the
volume. You will
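A minimal sketch of that fix, assuming the host is called mybox:
[root@mybox ~]# echo '127.0.0.1 mybox' >> /etc/hosts
[root@mybox ~]# getent hosts mybox   # verify the name now resolves
[root@mybox ~]# gluster volume create myVol1 mybox:/gfs/mybox/brick1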
Hi Atin :)
I shall take Bug 1245380
[RFE] Render all mounts of a volume defunct upon access revocation
https://bugzilla.redhat.com/show_bug.cgi?id=1245380
Thanks & Regards,
Prasanna Kumar K.
- Original Message -
From: Atin Mukherjee atin.mukherje...@gmail.com
To: Kaushal M
Hi everyone,
We wanted to pull down gluster version 3.6.3, but on accessing the repo at:
http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/EPEL.repo/glusterfs-epel.repo
I noticed the URLs all point to LATEST. Shouldn't it read 3.6/3.6.3/?
Thanks, Jeremy.
On Tue, Aug 18, 2015 at 12:44 AM, M S Vishwanath Bhat msvb...@gmail.com wrote:
On 17 August 2015 at 21:44, Sankarshan Mukhopadhyay
sankarshan.mukhopadh...@gmail.com wrote:
And now, a very primitive first draft of what it could look like -
On 08/18/2015 08:45 AM, Jarvis, Jeremy wrote:
Hi everyone,
We wanted to pull down gluster version 3.6.3, but on accessing the repo at:
http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/EPEL.repo/glusterfs-epel.repo
I noticed the URLs all point to LATEST. Shouldn't it read
Hi Atin
Any news on this one?
KR
Davy
On 12 Aug 2015, at 16:41, Atin Mukherjee
atin.mukherje...@gmail.com wrote:
Davy,
I will check this with Kaleb and get back to you.
-Atin
Sent from one plus one
On Aug 12, 2015 7:22 PM, Davy Croonen
I've not got a chance to look at it yet, which I will do now. Thanks for
the reminder!
-Atin
Sent from one plus one
On Aug 17, 2015 7:19 PM, Davy Croonen davy.croo...@smartbit.be wrote:
Hi Atin
Any news on this one?
KR
Davy
On 12 Aug 2015, at 16:41, Atin Mukherjee atin.mukherje...@gmail.com
On 08/18/2015 10:41 AM, Atin Mukherjee wrote:
Just a quick update. I was wrong in saying the issue is reproducible in
3.7. What I can see is that this issue is fixed in 3.7. Now I need to find
out the patch which fixed it and backport it to 3.6. Would it be
possible for you to upgrade the setup to
Just a quick update. I was wrong in saying the issue is reproducible in
3.7. What I can see is that this issue is fixed in 3.7. Now I need to find
out the patch which fixed it and backport it to 3.6. Would it be
possible for you to upgrade the setup to 3.7 if you want a quick solution?
~Atin
On
Hi Jeremy, Atin,
I noticed the URLs all point to LATEST. Shouldn't it read 3.6/3.6.3/?
Yes, it's pointing to the latest 3.7.3 bits and it needs a correction. Thanks
for pointing this out.
I *don't* think it should be the case; that said, when 3.6.1 is released you
pull the repo file which points
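To illustrate the kind of change being discussed (a sketch; the exact
contents of glusterfs-epel.repo may differ), the baseurl would move from
LATEST to the pinned release:
# before: always follows the newest release
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
# after: pinned to 3.6.3
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/EPEL.repo/epel-$releasever/$basearch/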
On Mon, Aug 17, 2015 at 10:35 AM, sankarshan
sankarshan.mukhopadh...@gmail.com wrote:
On Mon, 17 Aug 2015 06:27:53 +0200, Niels de Vos wrote:
It would be good to have some standard structure that includes
interesting statistics.
I'm not absolutely sure about the statistics but I was thinking
On 17.8.2015 at 13:30, Miloš Kozák wrote:
On 17.8.2015 at 10:07, Ravishankar N wrote:
On 08/17/2015 12:42 PM, Miloš Kozák wrote:
I assumed that those files are different because of the different file
sizes.
In that case I reckon the writes are not successful on the 2nd node. The
brick log
On 08/17/2015 12:42 PM, Miloš Kozák wrote:
I assumed that those files are different because of the different file sizes.
In that case I reckon the writes are not successful on the 2nd node. The brick
log should give some clue. You can also check the fuse mount log for errors.
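For reference, the default log locations (exact file names vary with the
brick path and mount point):
# brick log on each server, named after the brick path
less /var/log/glusterfs/bricks/<brick-path-with-slashes-as-dashes>.log
# fuse client log, named after the mount point
less /var/log/glusterfs/<mount-point-with-slashes-as-dashes>.log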
Ok, so my MD5s are:
On 17 August 2015 at 21:44, Sankarshan Mukhopadhyay
sankarshan.mukhopadh...@gmail.com wrote:
On Mon, Aug 17, 2015 at 10:35 AM, sankarshan
sankarshan.mukhopadh...@gmail.com wrote:
On Mon, 17 Aug 2015 06:27:53 +0200, Niels de Vos wrote:
It would be good to have some standard structure that
Hi,
On 17.8.2015 at 06:34, Ravishankar N wrote:
On 08/16/2015 04:22 PM, Miloš Kozák wrote:
Hi, I have been running a glusterfs volume for a while, and everything
works just fine even after a one-node failure. However, I went for
brick replacement because my bricks were not thin-provisioned and
On 08/17/2015 11:58 AM, Miloš Kozák wrote:
Mounted by fuse.
I would not say written to only one node, because both nodes contain
something, but I am quite suspicious about the files on node2.
Not sure I follow. Is there a mismatch between the file's contents on the
two nodes? You could do an md5sum
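Something along these lines on each node, against the same file on the
brick (paths illustrative):
[root@node1 ~]# md5sum /path/to/brick/somefile
[root@node2 ~]# md5sum /path/to/brick/somefile
A mismatch would confirm the write never reached the second replica.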
I assumed that those files are different because of the different file sizes.
Ok, so my MD5s are:
[root@nodef01i fs]# md5sum *
6bc6e89a9cb46a4098db300ff6d95b46 f1607f25aa52f4fb6f98f20ef0f3f9d7
0989d12cc875387827ebbbdc67503f2b 3706a2cb0bb27ba5787b3c12388f4ebb
[root@nodef01i fs]# md5sum *