Hi, HU
Thanks for your help.
I tried your example (1 server, 1 client) to test the authentication
function, and it works.
But when I tested it in replication mode (multi-node), FUSE mounting works, but
NFS does not.
No node can mount the volume via NFS.
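For reference, Gluster 3.1's built-in NFS server only speaks NFSv3 over TCP, so the mount usually has to request that explicitly; a minimal sketch, where the server name, volume name, and mount point are placeholders:

```shell
# Mount a Gluster 3.1 volume via its built-in NFS server.
# NFSv3 over TCP is required; 'nolock' sidesteps kernel lock-manager issues.
# "server1", "testvol", and /mnt/gluster are placeholders.
mount -t nfs -o vers=3,proto=tcp,nolock server1:/testvol /mnt/gluster
```

If this still fails, it is worth checking that the kernel NFS server is not already bound to port 2049 on the Gluster nodes, since Gluster's NFS translator needs that port.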
Hello.
Has anyone seen error messages like this in /var/log/glusterfs/nfs.log:
tail /var/log/glusterfs/nfs.log
[2011-01-10 14:22:55.859066] I
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk] pfs-ro1-replicate-3:
background meta-data data self-heal completed on /
[2011-01-10
Hello, all
How can I turn on debug mode in 3.1?
When I start the volume from the command line, is there any way to turn on debug
messages on the brick side?
___
Gluster-users mailing list
Gluster-users@gluster.org
I am testing Infiniband for the first time. It seems that I should be able to
get a lot more speed than I am with some pretty basic tests. Maybe someone
running Infiniband can confirm that what I am seeing is way out of line, and/or
help diagnose?
I have two systems connected using 3.1.2qa3.
Hi
It seems that the node 10.18.14.240 runs both the server and the client.
If not, please post the server list and the client list here.
As you can see in the log, the nodes other than the one above are all accepted by
the server, so you can add both 10.18.14.240 and 127.0.0.1 to the
ip-allowed list to see whether it
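Assuming the "ip-allowed list" here refers to the auth.allow volume option, the suggested change could be sketched like this (the volume name is a placeholder):

```shell
# Allow the combined server/client node and loopback to connect.
# "testvol" is a placeholder volume name.
gluster volume set testvol auth.allow 10.18.14.240,127.0.0.1

# Confirm the option is now listed under "Options Reconfigured".
gluster volume info testvol
```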
Hi,
I've been using a simple 2-node replica 2 cluster for about 4 weeks; I'm
running glusterfs 3.1.1 built on Dec 9 2010 15:41:32, repository revision
v3.1.1.
I use it to serve images through Nginx.
Everything works well.
Today I added 2 new bricks and rebalanced the volume. It worked for about 4 hours;
after the
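For context, the expansion described above is typically done with commands along these lines; the hostnames, brick paths, and volume name below are placeholders, and with replica 2 the bricks must be added as a pair:

```shell
# Add a new replica pair to an existing replica-2 volume,
# then redistribute existing files across all bricks.
# "images-vol", server names, and brick paths are placeholders.
gluster volume add-brick images-vol server3:/data/brick server4:/data/brick
gluster volume rebalance images-vol start
gluster volume rebalance images-vol status
```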
On 01/09/2011 10:06 PM, Bryan McGuire wrote:
Hello,
I am looking into GlusterFS as a high availability solution for our
email servers. I am new to Infiniband, but it looks like it could
provide us with the necessary speed.
Hi Bryan
We've done this for various ISP/email hosting customers.
On 01/10/2011 07:43 PM, Christopher Hawkins wrote:
I am testing Infiniband for the first time. It seems that I should be
able to get a lot more speed than I am with some pretty basic tests.
Maybe someone running Infiniband can confirm that what I am seeing is
way out of line, and/or help
Are any apps on the mount point erroring out with:
Invalid argument
or
Stale NFS file handle?
Burnash, James wrote:
Hello.
Has anyone seen error messages like this in /var/log/glusterfs/nfs.log:
tail /var/log/glusterfs/nfs.log
[2011-01-10 14:22:55.859066] I
Hi,
For bricks use -
gluster volume set volume-name diagnostics.brick-log-level DEBUG
For more tunable options refer
http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
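The client (mount) side has a matching option; a sketch, with the volume name as a placeholder:

```shell
# Raise the mount-side log level to DEBUG as well.
gluster volume set volume-name diagnostics.client-log-level DEBUG
```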
--
Cheers,
Lakshmipathi.G
FOSS Programmer.
Thanks Joe, you nailed it. These are just test machines and in each case, just
a single 10k scsi drive. That is the throughput bottleneck... I was not able to
get more than 70MB/s sustained.
Chris
- Joe Landman land...@scalableinformatics.com wrote:
On 01/10/2011 07:43 PM, Christopher
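A quick way to confirm a single-disk bottleneck like this is to measure raw sequential write speed locally, before involving Gluster or Infiniband at all; a minimal sketch (the output path is just a scratch file):

```shell
# Write 64 MB with fdatasync so the number reflects the disk,
# not the page cache. /tmp/ddtest.bin is a scratch file.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fdatasync
```

A single 10k RPM SCSI drive typically sustains somewhere around 70-90 MB/s sequential, so a ~70 MB/s ceiling over the network points at the disk rather than the fabric. Remove /tmp/ddtest.bin when done.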
Out of curiosity, what is a typical RAID / spindle count / rpm configuration
for you that yields 2 GB/s?
- Christopher Hawkins chawk...@bplinux.com wrote:
Thanks Joe, you nailed it. These are just test machines and in each
case, just a single 10k scsi drive. That is the throughput
On Tue, Jan 11, 2011 at 5:26 AM, Joe Landman
land...@scalableinformatics.com wrote:
We tune our systems pretty hard, so we start with 2+GB/s for TB sized files
before we ever touch the next stack up. Each additional stack you traverse
takes performance away (you lose it in stack inefficiency).