Hello,
In the Translator section of this link, there are some words in the
background of the text.
http://www.gluster.org/community/documentation/index.php/GlusterFS_Concepts
--
Asias
___
Gluster-users mailing list
Gluster-users@gluster.org
Hi, my test setup:
# glusterd -V
glusterfs 3.4.0 built on Jul 25 2013 04:12:26
# smbd -V
Version 3.6.9-151.el6
samba-gluster-vfs:
git://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs.git
smb.conf:
[gvol]
comment = For samba export of volume test
vfs objects = glusterfs
For example:
1. Click Documentation:
http://www.gluster.org/community/documentation/index.php/Main_Page
2. Click Developers:
http://www.gluster.org/community/documentation/index.php/Developers
3. Click Architectural overview and Gluster internals:
Hi all,
The Samba VFS shows low performance compared with a non-VFS exported directory.
nonvfs:
[root@infi131 cifs]# dd if=/dev/zero of=test1 bs=64k
^C55548+1 records in
55548+0 records out
3640393728 bytes (3.6 GB) copied, 17.192 s, 212 MB/s
vfs:
[root@infi131 cifs]# cd /mnt/cifs
[root@infi131 cifs]# dd if=/dev/zero of=test1 bs=64k
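For an apples-to-apples comparison, the same write can be bounded with count= so both runs move identical amounts of data (a sketch; the mount points are placeholders for the non-VFS and VFS-backed shares):

```shell
# Write 4 GiB (65536 x 64 KiB) through each export and compare the reported rates
dd if=/dev/zero of=/mnt/nonvfs/test1 bs=64k count=65536
dd if=/dev/zero of=/mnt/cifs/test1 bs=64k count=65536
```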
Hi,
What does it mean when you use peer probe to add a new host, but then
afterwards the peer status is reported as Rejected yet Connected?
And of course -- how does one fix this?
gluster peer status
Number of Peers: 1
Hostname: 192.168.10.32
Uuid: 32497846-6e02-4b68-b147-6f4b936b3373
State:
Hi,
I'm getting some confusing "Incorrect brick" errors when attempting to
remove or replace a brick.
gluster volume info condor
Volume Name: condor
Type: Replicate
Volume ID: 9fef3f76-525f-4bfe-9755-151e0d8279fd
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1:
On 06/08/13 18:12, Toby Corkindale wrote:
Hi,
What does it mean when you use peer probe to add a new host, but then
afterwards the peer status is reported as Rejected yet Connected?
And of course -- how does one fix this?
gluster peer status
Number of Peers: 1
Hostname: 192.168.10.32
Uuid:
Hi,
1. Please add the following lines to the [global] section of smb.conf:
kernel oplocks = no
stat cache = no
2. service smb restart
Please try the same test and let us know if that helps to increase the
throughput.
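Putting steps 1 and 2 together, the relevant part of smb.conf would look something like this (a sketch; the rest of your [global] section stays as it is):

```ini
[global]
    ; Suggested settings for the GlusterFS VFS throughput problem
    kernel oplocks = no
    stat cache = no
```

followed by `service smb restart`.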
Raghavendra Talur
On Tue, Aug 6, 2013 at 1:00 PM, haiwei.xie-soulinfo
Nux,
I suggest you don't do this. Once the bricks are connected, the gluster
self-heal daemon already does this: every 10 minutes it checks whether there
is anything to be healed and heals it.
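Whether the daemon has anything left to do can be checked with the heal CLI (available since glusterfs 3.3; "gvol" is a placeholder volume name):

```shell
# Show entries the self-heal daemon still needs to heal
gluster volume heal gvol info

# Trigger healing of just the pending entries (no full crawl)
gluster volume heal gvol
```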
Pranith.
- Original Message -
From: Nux! n...@li.nux.ro
To: Pranith Kumar Karampuri
Toby,
What versions of gluster are on the peers? And does the cluster have
just two peers or more?
~kaushal
On Tue, Aug 6, 2013 at 4:32 PM, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
- Original Message -
From: Toby Corkindale toby.corkind...@strategicdata.com.au
To:
Hi,
Unfortunately, after adding the two params to smb.conf, the NFS client OS
core dumps during dd testing.
I reconfigured the gluster vol without any new params in smb.conf, and
performance looks good. Maybe the earlier low performance resulted from a
misconfiguration.
[root@infi131 cifs]# dd if=/dev/zero of=t1 bs=64k
Hi Gluster Gurus
I'm looking at implementing a web project that needs to be able to scale
but starting out very small and I think gluster might be solution for
my needs but I have a few questions that if answered would help clarify
things in my head.
1. Is gluster a viable solution
so are you asking if gluster would do all the load balancing for you for
static content? I bet it could if you created a volume with extremely
high replication.
This is potentially similar to the idea of using a gluster distributed volume
to replace a distributed file cache.
Curious if anyone has
Say I have a brick server that has totally crashed; what should I do then? It
seems that replace-brick only works when the old brick is online.
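One approach that is often suggested for a dead brick server (a sketch; "gvol", the hostnames, and the paths are placeholders, and the `commit force` semantics should be checked against your release) is:

```shell
# Replace the brick on the crashed server without migrating data
# ("commit force" does not need the old brick to be reachable)
gluster volume replace-brick gvol server1:/export/brick server3:/export/brick commit force

# Let self-heal repopulate the new brick from the surviving replica
gluster volume heal gvol full
```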
Hi there,
I heard there would be meetups in September and October. How can I register?
Many thanks,
Ekaterina Tatarenko
On Tue, Jul 30, 2013 at 05:29:43AM -0400, Balamurugan Arumugam wrote:
Hi All,
Recently new logging framework was introduced [1][2][3] in glusterfs
master branch. You could read more about this on doc/logging.txt. In
brief, current log target is moved to syslog and user has an option to
Update on this issue:
After I deleted the zero-length files which were located on the wrong
bricks, the issue of files losing permissions is mostly resolved. I left
my find running every 10 minutes for the last five days, though, and the
problem continues to recur with a few hundred files every
It's a mediawiki error that I cannot figure out how to fix. If there's a
mediawiki whiz in the house, please do share your wisdom :)
-JM
- Original Message -
The HTML contains the following at that spot:
<div class="MediaTransformError" style="width: 322px; height: 0px;">
Fixed
On 08/06/2013 01:37 PM, John Mark Walker wrote:
It's a mediawiki error that I cannot figure out how to fix. If there's a
mediawiki whiz in the house, please do share your wisdom :)
-JM
- Original Message -
The HTML contains the following at that spot:
div
On 06/08/13 21:25, Kaushal M wrote:
Toby,
What versions of gluster are on the peers? And does the cluster have
just two peers or more?
Version 3.3.1.
The cluster has/had two nodes; we're trying to replace one with another one.
On Tue, Aug 6, 2013 at 4:32 PM, Toby Corkindale
The solution has been found, but it's kind of ugly.
peer detach
stop gluster on new node, wipe /var/lib/gluster
restart gluster on new node
on old node, run:
for Q in `gluster volume list`; do
gluster volume reset $Q
done
peer probe
After this it successfully connected the new node.
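The sequence above can be consolidated into a sketch (assuming the 3.3 CLI; `$NEWNODE` is a placeholder for the rejected peer, and the state directory on 3.3 is /var/lib/glusterd):

```shell
# On the good node: drop the rejected peer
gluster peer detach $NEWNODE

# On the new node: stop glusterd and wipe its local state
# (destroys all gluster configuration on that node -- use with care)
service glusterd stop
rm -rf /var/lib/glusterd/*
service glusterd start

# Back on the good node: reset reconfigured volume options, then re-probe
for Q in $(gluster volume list); do
    gluster volume reset $Q
done
gluster peer probe $NEWNODE
```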
I have no idea
On 06/08/13 18:24, Toby Corkindale wrote:
Hi,
I'm getting some confusing "Incorrect brick" errors when attempting to
remove or replace a brick.
gluster volume info condor
Volume Name: condor
Type: Replicate
Volume ID: 9fef3f76-525f-4bfe-9755-151e0d8279fd
Status: Started
Number of Bricks: 1 x 2 =
On Wed, 07 Aug 2013 00:11:32 +0530
Vijay Bellur vbel...@redhat.com wrote:
On 08/06/2013 05:09 PM, haiwei.xie-soulinfo wrote:
Hi,
Unfortunately, after adding the two params to smb.conf, the NFS client OS
core dumps during dd testing.
I reconfigured the gluster vol without any new params in
On Wed, Aug 7, 2013 at 5:37 AM, Joe Julian j...@julianfamily.org wrote:
Fixed
It looks like the GlusterFS_Concepts page is still not updated ;-)
On 08/06/2013 01:37 PM, John Mark Walker wrote:
It's a mediawiki error that I cannot figure out how to fix. If there's a
mediawiki whiz in
Is this a bug or a feature?
# gluster volume create foo mel-storage01:/tmp/foo
Creation of volume foo has been successful. Please start the volume to
access data.
# gluster volume delete foo
Deleting volume foo has been successful
# gluster volume create foo mel-storage01:/tmp/foo
/tmp/foo
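The usual reason the path can't be reused straight away is that the deleted volume's extended attributes are still present on the brick directory; a common cleanup (a sketch; attribute names as used by glusterfs 3.3/3.4) is:

```shell
# Clear the volume markers glusterfs left on the old brick directory
setfattr -x trusted.glusterfs.volume-id /tmp/foo
setfattr -x trusted.gfid /tmp/foo
rm -rf /tmp/foo/.glusterfs
```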
On 08/07/2013 10:56 AM, Toby Corkindale wrote:
Is this a bug or a feature?
# gluster volume create foo mel-storage01:/tmp/foo
Creation of volume foo has been successful. Please start the volume to
access data.
# gluster volume delete foo
Deleting volume foo has been successful
# gluster