On Fri, 2 Mar 2012, Brian Candler wrote:
> On Fri, Mar 02, 2012 at 01:02:39PM +0200, Harald Hannelius wrote:
> > > If both are fast: then retest using a two-node replicated volume.
> > gluster volume create test replica 2 transport tcp \
> >     aethra:/data/single alcippe:/data/single
> >
> > Volume Name: test
> > Type: Replicate
> > Status: Started
> > Number of Bricks: 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: aethra:/data/single
> > Brick2: alcippe:/data/single
> >
> > # time dd if=/dev/zero bs=1M count=20000 of=/mnt/testfile
> > 20000+0 records in
> > 20000+0 records out
> > 20971520000 bytes (21 GB) copied, 426.62 s, 49.2 MB/s
> >
> > real 7m6.625s
> > user 0m0.040s
> > sys 0m12.293s
> >
> > As expected, roughly half of the single-node setup. I could live
> > with that too.
> So next is back to the four-node setup you had before. I would expect
> that to perform about the same.
So would I. But:
# time dd if=/dev/zero bs=1M count=20000 of=/gluster/testfile
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 1058.22 s, 19.8 MB/s
real 17m38.357s
user 0m0.040s
sys 0m12.501s
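One aside on the methodology (not from the thread): without a sync flag, dd's MB/s figure can include data still sitting in the page cache. A hedged variant of the tests above; the path and size here are illustrative stand-ins, not the actual mount point:

```shell
# conv=fdatasync makes dd flush data to stable storage before it reports
# its timing, so the page cache can't inflate the MB/s figure. Path and
# size are illustrative only.
dd if=/dev/zero bs=1M count=10 conv=fdatasync of=/tmp/gluster-bench.img
rm -f /tmp/gluster-bench.img
```

On a GlusterFS FUSE mount this keeps the timed run comparable between the single-brick, replica-2, and 2x2 cases.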
# gluster volume info
Volume Name: virtuals
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: adraste:/data/brick
Brick2: alcippe:/data/brick
Brick3: aethra:/data/brick
Brick4: helen:/data/brick
Options Reconfigured:
cluster.data-self-heal-algorithm: diff
cluster.self-heal-window-size: 1
performance.io-thread-count: 64
performance.cache-size: 536870912
performance.write-behind-window-size: 16777216
performance.flush-behind: on
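For reference, each of the reconfigured options above is applied with `gluster volume set`. A sketch of the syntax only, repeating values already shown above:

```shell
# Sketch of how the tunables listed above are applied; volume name and
# values are the ones already active in this setup.
gluster volume set virtuals performance.io-thread-count 64
gluster volume set virtuals performance.cache-size 536870912
gluster volume set virtuals cluster.data-self-heal-algorithm diff
```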
At the same time, Nagios tries to empty my cell phone battery when
virtual hosts stop responding to ping. That virtual host is a mail
server and it receives e-mail; I guess that sendmail+procmail+imapd
generates some I/O.

At least I got double-figure readings this time. Sometimes I get write
speeds of 5-6 MB/s.
> If you have problems with high levels of concurrency, this might be a
> problem with the number of I/O threads which gluster creates: you
> actually only get about log2 of the number of outstanding requests in
> the queue.
>
> I made a (stupid, non-production) patch which got around this problem
> in my benchmarking:
>
> http://gluster.org/pipermail/gluster-users/2012-February/009590.html
>
> IMO it would be better to be able to configure the *minimum* number of
> I/O threads to spawn. You can configure the maximum, but it will
> almost never be reached.
>
> Regards,
>
> Brian.
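Brian's log2 point can be sketched as a toy model (this is a model of the behaviour he describes, not GlusterFS's actual code; function and parameter names are made up):

```python
import math

# Toy model of the io-threads scaling described above: worker threads
# grow with log2 of the pending request queue, capped at the configured
# maximum. Names here are hypothetical, not GlusterFS internals.
def io_threads_spawned(queue_depth, max_threads=64, min_threads=1):
    if queue_depth < 2:
        return min_threads
    return max(min_threads, min(max_threads, int(math.log2(queue_depth))))

# Even 1024 outstanding requests yields only ~10 threads, so a
# configured maximum of 64 is effectively unreachable in practice.
print(io_threads_spawned(64))    # 6
print(io_threads_spawned(1024))  # 10
```

This is why a configurable *minimum* thread count, as suggested above, would matter more than the maximum: under log2 scaling, hitting a 64-thread cap would require on the order of 2^64 queued requests.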
--
Harald Hannelius | harald.hannelius/a\arcada.fi | +358 50 594 1020
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users