You cannot expand stripe directly. You have to use
distribute + stripe, where you scale in stripe sets.
For example, if you have 8 nodes, you create
= distribute(stripe(4)+stripe(4))
Now if you want to scale your storage cluster, you should do so
in stripe sets. Add 4 more nodes like this:
= distribute(stripe(4)+stripe(4)+stripe(4))
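As a concrete sketch, the 8-node layout above could be expressed in the client volfile like this (client1..client8 and all volume names here are assumptions; each clientN is a protocol/client volume pointing at one node):

```
volume stripe1
type cluster/stripe
subvolumes client1 client2 client3 client4
end-volume

volume stripe2
type cluster/stripe
subvolumes client5 client6 client7 client8
end-volume

volume dht
type cluster/distribute
subvolumes stripe1 stripe2
end-volume
```

Scaling in stripe sets then means defining client9..client12, adding a stripe3 volume over them, and appending stripe3 to the distribute subvolumes line.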
Peng Zhao wrote:
Hi, all,
I'm new to gluster, but found it interesting. I want to set up gluster in
a way similar to HDFS.
Here is my sample vol-file:
volume posix
type storage/posix
option directory /data1/gluster
end-volume
volume locks
type features/locks
subvolumes posix
end-volume
If I need high availability with AFR, should I use distribute + stripe + AFR,
just like distribute(stripe( afr(2) 4)+stripe( afr(2) 4)+stripe( afr(2) 4))?
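That composition (replicate pairs under stripe under distribute) is typically expressed bottom-up in the client volfile: AFR pairs first, then a stripe over the pairs, then distribute over the stripe sets. A sketch of one stripe set follows; client1..client8 are assumed protocol/client volumes, one per brick, and all volume names are made up:

```
volume afr1
type cluster/replicate
subvolumes client1 client2
end-volume

volume afr2
type cluster/replicate
subvolumes client3 client4
end-volume

volume afr3
type cluster/replicate
subvolumes client5 client6
end-volume

volume afr4
type cluster/replicate
subvolumes client7 client8
end-volume

volume stripe1
type cluster/stripe
subvolumes afr1 afr2 afr3 afr4
end-volume
```

Build stripe2 and stripe3 the same way from their own replicate pairs, then place a cluster/distribute volume on top with subvolumes stripe1 stripe2 stripe3.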
2009-07-01
eagleeyes
From: Anand Babu Periasamy
Sent: 2009-07-01 14:01:14
To: eagleeyes
Cc: gluster-users
Subject: Re:
Hello there,
sorry for the newbie question, but I couldn't find an answer in the
old messages.
I am researching GlusterFS for a personal project involving a
constantly growing storage farm.
My config is simple: several bricks and a server doing DHT using
cluster/distribute, just like
On Wednesday 01 of July 2009 03:50, Peng Zhao wrote:
[2009-07-01 09:37:36] E [xlator.c:736:xlator_init_rec] xlator:
Initialization of volume 'fuse' failed, review your volfile again
I got this error when the fuse module was not loaded, so check this first:
modprobe fuse
ls -l /dev/fuse
OK, my mistake. There was no fuse module. I built one and ran modprobe fuse. The
previous error is gone, but there are some new ones:
Here are the DEBUG-level messages:
[2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify)
on /usr/lib64/glusterfs/2.0.2/xlator/features/locks.so: undefined
Peng Zhao wrote:
[2009-07-01 18:36:25] E [socket.c:206:__socket_server_bind] server:
binding to failed: Address already in use
[2009-07-01 18:36:25] E [socket.c:209:__socket_server_bind] server:
Port is already in use
[2009-07-01 18:36:25] E [server-protocol.c:7631:init] server: failed
to
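A leftover glusterfsd (or another service) still bound to the listen port is the usual cause of this error. A quick check, assuming the pre-3.1 default server port of 6996 (an assumption; adjust to whatever your volfile configures):

```shell
# Is an old glusterfsd still running, and does anything own the port?
# Port 6996 is an assumption (glusterfs 2.0.x default); change as needed.
ps ax | grep '[g]lusterfsd' || echo "no glusterfsd process found"
(netstat -tlnp 2>/dev/null || true) | grep ':6996 ' || echo "port 6996 looks free"
```

Kill the stale process (or pick a different listen port in the server volfile) and restart glusterfsd.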
Hi,
I have a basic 2 server (serverA and serverB) replication with 1
client.
Sometimes serverB will go offline, so client is writing directly to
serverA.
However, when serverB comes back online, the client does not detect this
and ignores serverB until I restart the client. Is this
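Normally protocol/client keeps retrying the connection in the background, and AFR resumes using the brick once it reconnects (self-healing files on the next access). If the client never re-attaches, it is worth checking the transport options in the client volfile. A sketch, with heavy caveats: serverB, the remote-subvolume name, and the option name are assumptions, and option names vary between releases, so verify against your version's documentation:

```
volume remoteB
type protocol/client
option transport-type tcp
option remote-host serverB
option remote-subvolume brick
option transport-timeout 30
end-volume
```

A lower transport-timeout makes the client declare a dead connection (and later notice a revived one) sooner, at the cost of more sensitivity to transient network hiccups.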
Hi,
What are the intended use cases for gluster? Is it suitable for,
say, replacing a SAN? For example, is it good for the following?
[ ] Storing a huge volume of seldom accessed files (file archive)
[ ] Storing frequently read/write files (file server)
[ ] Storing frequently read
I'm using an unpatched fuse 2.7.4-1 and glusterfs 2.0.2-1 with the
following configs and have this result which surprised me:
# dd if=/dev/zero of=foo bs=512k count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 14.1538 seconds, 37.9 MB/s
# dd if=/dev/zero of=foo
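The last command above is cut off, but one common surprise is worth noting: dd without bs= writes in 512-byte blocks, so moving the same data takes a thousand times as many write() calls, and on a FUSE mount the per-call overhead dominates. A local illustration (plain filesystem, not gluster; the /tmp path is arbitrary):

```shell
# Same 8 MiB written twice: 16 large writes vs 16384 small ones.
dd if=/dev/zero of=/tmp/dd_bs_test bs=512k count=16 2>&1 | tail -n 1
dd if=/dev/zero of=/tmp/dd_bs_test bs=512 count=16384 2>&1 | tail -n 1
rm -f /tmp/dd_bs_test
```

On a local disk the difference is modest; over FUSE plus the network, the small-block run is typically dramatically slower.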