Hi,
I'm slowly finding my way around glusterfs and getting a little more
comfortable with the concepts of glusterfs itself.
I have a 2-node bare-metal cluster with a 1.5 TB replicated volume and it seems
pretty damn cool.
I have now exported it with Samba as well. I can connect with Samba to the
Just as you can't ask xfs or ext4 how many files they hold, you can't
ask Gluster either. It doesn't know. It crawls the tree, looks
at the hash of each filename, and determines whether the file should reside on a
different brick.
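Since Gluster itself can't answer this, one workaround (a sketch; the brick path /bricks/brick1 is hypothetical) is to count files directly on each brick, skipping Gluster's internal .glusterfs metadata directory:

```shell
# Count regular files on one brick, excluding Gluster's internal
# .glusterfs metadata tree (brick path is hypothetical)
find /bricks/brick1 -path '*/.glusterfs' -prune -o -type f -print | wc -l
```

On a replicated volume each file exists once per replica, so run this on one brick per replica set (or divide accordingly).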
To guess how much remains, look at how many are complete out of
I am using GlusterFS so that my storage can be replicated between several
servers with fault tolerance. I am following a similar approach to
http://www.gluster.org/community/documentation/index.php/GlusterFS_iSCSI
It works well, but in the process of
dd if=/dev/zero of=disk3 bs=2G count=25
it
I meant to ask if you are using the same underlying filesystem and
parameters for A/B comparisons. It isn't clear what A and B are in your
comparison. You said that your performance is different with and without
gluster even if you access bricks directly. That bit is puzzling. Adding a
brick to
Hi, Cong!
Well, there is not much advice I can offer from my experience. GlusterFS is
filesystem-type storage, while iSCSI is a block device. You can't export Gluster
bricks as iSCSI targets directly. At least not at this moment. Who knows, maybe the
devs will surprise us in this direction some day :) Maybe HA-LVM is your pick
I've got a 4-node distributed-replicated cluster running 3.5.2. I just
added 4TB (2TB to the distribute pair and 2TB to the replicate pair) and
then started a rebalance as I believe the rebalance needed to be initiated
for the extra space to be presented to the entire volume (correct me if I'm
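For reference, the add-brick-then-rebalance sequence described above looks roughly like this (the volume name "myvol" and brick paths are hypothetical). The raw capacity appears once the bricks are added, but a rebalance (at minimum fix-layout) is what lets the volume actually place data on the new bricks:

```shell
# Hypothetical names: volume "myvol", new bricks on server5/server6
gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1
# A full rebalance updates directory layouts and migrates existing data
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```

These are cluster-side administrative commands, shown here only as a command fragment.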
Hi there,
I have a split-brain situation on one of my volumes. The part that I'm not
clear on is that it would appear to be on the root of the volume.
[2015-04-13 22:07:35.729798] E
[afr-self-heal-common.c:2262:afr_self_heal_completion_cbk]
0-www-conf-replicate-0: background meta-data
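To see which entries the volume considers split-brain (a diagnostic command fragment; the volume name www-conf is inferred from the log line above, and the brick path is hypothetical), the usual first steps are:

```shell
# List the entries GlusterFS currently flags as split-brain
gluster volume heal www-conf info split-brain
# On a brick, dump the AFR changelog extended attributes for the
# volume root to see which copy blames which
getfattr -d -m . -e hex /bricks/www-conf-brick
```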
Just keep this in mind when revamping:
[15:17] mike2512 hey guys... i am trying to install gluster on 2 centos vms
- centos 6.6
[15:17] mike2512 Requires: libgfapi.so.0(GFAPI_3.4.0)(64bit)
[15:17] mike2512 i have followed the procedures here:
Okay, I wasn't reading carefully... Try this:
dd if=/dev/zero of=disk3 bs=1 count=1 seek=50G
That will give you a 50GB thin-provisioned (sparse) file for iSCSI.
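A quick way to confirm the file really is thin-provisioned: seek past the target size, write a single byte, then compare the apparent size to the blocks actually allocated. The file name disk3 follows the thread; the 50G size is illustrative.

```shell
# Seek past 50G and write one byte; the gap becomes a hole on
# filesystems that support sparse files
dd if=/dev/zero of=disk3 bs=1 count=1 seek=50G
# Apparent size vs. blocks actually allocated on disk
ls -lh disk3
du -h disk3
```

ls should report roughly 50G apparent size while du reports almost nothing allocated.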
Regards,
Jon Heese
On Apr 13, 2015, at 7:02 PM, Jon Heese
jonhe...@jonheese.com wrote:
Cong,
Try adding seek=2G (I think) to your dd command and change the bs and count
both to 1. This will essentially thin-provision your iSCSI volume file.
I use this method to make iSCSI volumes that live on gluster.
Regards,
Jon Heese
On Apr 13, 2015, at 5:23 PM, Yue, Cong
Wiki process:
1. user reports instructions are failing
2. click the link for the failing instructions
3. identify the problem with the instructions
4. click edit
5. edit
6. save
Average total time: 2 minutes
Static process:
1. user reports instructions are failing
2. click the link for the
CCing Dan as he has more insights on this.
~Atin
On 04/14/2015 02:52 AM, Yue, Cong wrote:
I am using GlusterFS so that my storage can be replicated between several
servers with fault tolerance. I am following a similar approach to
Hi,
If you are connected to a node via Samba and if that node goes down, you will
have to manually connect to the other Samba node,
unless the Samba nodes are clustered and you have an HA solution on top of Samba.
CTDB could be one of your options, it provides both clustering and IP failover
for
Hi All,
Last problem, I hope.
In my 14-node cluster, node 8 is present in the "gluster volume info tyty"
output but not in "gluster volume status tyty".
And when I start a volume on node 8, the others don't start; I have to
start it on one of the other nodes, but they don't see node 8 and the
- Original Message -
How about using Pelican (http://blog.getpelican.com/) for generating the
static site? Pelican is a popular Python-based static site generator. (My
website http://aravindavk.in is built with Pelican, using my own
custom theme.)
This is pretty much exactly what we have
Hi,
thanks.. this is very useful…..
Met vriendelijke groet / kind regards,
Sander Zijlstra
| Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 (0)6
43 99 12 47 | sander.zijls...@surfsara.nl | www.surfsara.nl |
On 04/13/2015 07:20 PM, Sander Zijlstra wrote:
Hi,
thanks, I missed that one….
Does this mean that I can rely on looking at “volume heal info” output
to notice issues?
I regularly see those numbers return “0” so I can rely on that meaning
that no files failed healing?
That is
Hi and thanks,
Well, I definitely don't want to ruin my change log (the extended attributes),
so
I'll have to think of a new way to integrate glusterfs in our use case.
Thanks for confirming my suspicion.
Regards
Andreas
On 04/13/15 06:16, Atin Mukherjee wrote:
On 04/11/2015 02:25 PM,
LS,
I recently upgraded to 3.6.2 and I noticed that the info command for
heal-failed is not working:
# gluster volume heal gv0 info failed
Usage: volume heal VOLNAME [{full | statistics {heal-count {replica
hostname:brickname}} |info {healed | heal-failed | split-brain}}]
[root@v39-app-01 ~]#
On 04/13/2015 06:23 PM, Sander Zijlstra wrote:
LS,
I recently upgraded to 3.6.2 and I noticed that the info command for
heal-failed is not working:
# gluster volume heal gv0 info failed
Usage: volume heal VOLNAME [{full | statistics {heal-count {replica
hostname:brickname}} |info {healed
Hi,
thanks, I missed that one….
Does this mean that I can rely on looking at “volume heal info” output to
notice issues?
I regularly see those numbers return “0” so I can rely on that meaning that no
files failed healing?
Of course, as per the bug report, I still need to check the
Run "gluster volume set help". There is a pretty good explanation of
the read subvolume preferences and options there.
Specifically, you'll want to look at the cluster.read-hash-mode option,
which has one of three values:
(0) Each client will determine which brick seems fastest, and use that
for
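For example, to inspect and change this option (a command fragment; the volume name "myvol" is hypothetical):

```shell
# Show the built-in description of the option
gluster volume set help | grep -A 3 cluster.read-hash-mode
# Pin the read subvolume policy for a volume (hypothetical name)
gluster volume set myvol cluster.read-hash-mode 1
```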
Hi everyone,
I've been running GlusterFS for a while now on EC2 instances to keep
several WordPress environments synced across multiple hosts. Up until last
week everything had been working great.
All of a sudden, on two of the environments, I have seen massive spikes of
read activity on the