Hi,
There are quite a few tuning parameters for Gluster (as seen in 'gluster
volume XYZ get all'), but I didn't find much documentation on them.
Some people do seem to set at least some of them, so the knowledge must be
somewhere...
Is there a good source of information to understand what they
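In the meantime, one way to explore them is to dump the full option list and filter it. Here is a minimal shell sketch filtering a hypothetical, abbreviated sample of 'gluster volume get <vol> all' output down to the performance.* tunables — the option values below are invented for illustration, not real defaults:

```shell
# Hypothetical, abbreviated sample of `gluster volume get <vol> all`
# output (two columns: Option, Value); the values are made up.
sample='Option                                  Value
------                                  -----
cluster.lookup-optimize                 on
performance.cache-size                  32MB
performance.io-thread-count             16
network.ping-timeout                    42'

# Keep only the performance.* tunables, printed as key=value.
printf '%s\n' "$sample" | awk '$1 ~ /^performance\./ { print $1 "=" $2 }'
```

On a live cluster you would pipe the real command through the same awk filter instead of the sample text.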
Hi Ben,
Regarding https://bugzilla.redhat.com/show_bug.cgi?id=1250241, which does
look like a serious regression for small-file performance: do you know
which versions are affected, or is there a way to find out?
Also, the patch didn't make it in: do you have visibility on whether another
patch is
On 25/09/15 17:46, Jiri Jaburek wrote:
Hello,
I'd like to ask if there's a way to remove or hide the .trashcan/
As per the current design, this is not possible, but we are planning to add
this as part of the 3.8 release.
You can track this on https://bugzilla.redhat.com/show_bug.cgi?id=1264849
Gluster 3.6.6 / CentOS 7.1 / dual Intel E5-2630v3 / 128GB RAM /
Mellanox 10G Ethernet
I just added a 3rd replica to a 2 replica volume and I'm noticing the
network throughput is very slow replicating to the new node,
~30-60MB/s. I'm on 10gig with SSD bricks and typically get 300+MB/s
for normal
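For context, the step described above — going from replica 2 to replica 3 — is typically done like this (a sketch only, to be run against the live cluster; volume and brick names are placeholders):

```shell
# Placeholder names. Raises the replica count to 3; self-heal then
# populates the new brick from the existing replicas.
gluster volume add-brick VOLNAME replica 3 newnode:/bricks/brick1

# Watch the heal backlog drain toward the new node.
gluster volume heal VOLNAME info
```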
I am _pretty_sure_ it's only 3.7+ based; IIRC on upstream, anything 3.6 should
work around it. Let me see if I can dig up the patch that fixed the issue that
led to this regression so we can be sure, but I would try the latest 3.6. For
Red Hat based distros, go with 3.0.4 (which has the MT epoll
Team,
We would like to implement Gluster in our environment. Could you please help us
get more information about this product?
Regards,
Premkumar Mani
Storage / Backup Tower
IT Service Delivery Management
Group IT - Malaysia Airlines
Desk # +603-7863-7155
H.P:+60-103644137
Malaysia Airlines
Hello,
I'd like to ask if there's a way to remove or hide the .trashcan/
directory (Gluster 3.7) from the resulting FUSE mount. The reason is
that it breaks POSIX fs expectations and requires manual exclusion hooks
everywhere, typically rsync.
I have tried disabling it via features.trash, but the
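As a stop-gap until the directory can be hidden, the rsync exclusion mentioned above looks like this — a minimal sketch, with temp directories standing in for the real FUSE mount and its target:

```shell
# Stand-ins for the real FUSE mount (SRC) and backup target (DST).
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/.trashcan/internal_op" "$SRC/data"
echo payload > "$SRC/data/file.txt"

# Anchored exclude, so only the top-level .trashcan/ is skipped.
rsync -a --exclude='/.trashcan/' "$SRC/" "$DST/"
```

The leading slash in the exclude pattern anchors it to the transfer root, so a user file that happens to be named .trashcan deeper in the tree would still be copied.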
Hi,
Where can I find the workaround info listed on the gluster.org page at
http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_NFS_Frequently_Asked_Questions
--
VMware ESX server reports an error when creating a new NFS data store.
VMware ESX server reports the following
On 09/25/2015 07:40 PM, Khoi Mai wrote:
I think I found it from your GitHub doc. The quota size does not
match with the replicate pair. I don't know if that would make the
difference. I apologize, I cannot use fpaste.org or pastebin.com
due to policies at my company.
I'm not sure
Hi Michael,
Yes, only el6 packages are available at
http://download.gluster.org/pub/gluster/glusterfs-nagios/ . I am looping the
nagios project team leads into this thread. Let's wait for them to respond.
--Humble
On Sun, Sep 20, 2015 at 2:32 PM, Prof. Dr. Michael Schefczyk <
mich...@schefczyk.net>
On 09/25/2015 07:48 AM, Khoi Mai wrote:
I have a 4 node distributed-replicated gluster farm.
Volume Name: devstatic
Type: Distributed-Replicate
Volume ID: 75832afb-f20e-4017-8d74-8550a92233fd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:
Hi all,
I have 14 nodes with a replica 2 volume. I want to remove the
replication function. Is that possible without losing data, or do I need
to make a backup of the data first?
The data is the base data for our computing, so I need a safe procedure;
if the automatic de-replication is not
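For what it's worth, the usual way to drop from replica 2 to replica 1 is a single remove-brick call naming one brick from each replica pair — a sketch only, with placeholder volume and brick names, and definitely take a backup first:

```shell
# Placeholders: VOLNAME plus one brick from each replica pair.
# The data stays on the remaining bricks; only the redundancy goes away.
gluster volume remove-brick VOLNAME replica 1 \
    server2:/bricks/b1 server4:/bricks/b2 force
```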
I think I found it from your GitHub doc. The quota size does not match
with the replicate pair. I don't know if that would make the difference.
I apologize, I cannot use fpaste.org or pastebin.com due to policies at
my company.
[root@omhq1b4e ~]# getfattr -d -m . -e hex /static/content/
The GFID looks the same. I'm not sure why 'gluster volume heal info
split-brain' is reporting split-brain when the GFID matches, and for all 4
nodes in the devstatic volume.
[root@omhq1b4f ~]# getfattr -h -d -m trusted.gfid -e hex /static/content/
getfattr: Removing leading '/' from absolute path names
#
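When comparing by eye like this, it is just the hex strings that matter; a trivial sketch of the same check, using made-up GFID values in the format 'getfattr -e hex' prints:

```shell
# Made-up trusted.gfid values, formatted as `getfattr -e hex` prints them.
gfid_node1='0x793f57c9b2f84a6bb5a8e4d21c0f9e37'
gfid_node2='0x793f57c9b2f84a6bb5a8e4d21c0f9e37'

# Equal hex strings mean the file has the same GFID on both replicas,
# so this path is not in GFID split-brain.
if [ "$gfid_node1" = "$gfid_node2" ]; then
    verdict='gfid match: not a gfid split-brain on this path'
else
    verdict='gfid mismatch: possible gfid split-brain'
fi
echo "$verdict"
```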