[Gluster-users] Hybrid drives SSHD on Gluster peers

2017-10-10 Thread WK

Anybody use them on Gluster?

They seem to be almost the same cost as spinning metal these days. In 
fact, I was trying to get some 2.5-inch 2TB drives from a vendor, and all 
they had were the FireCuda SSHDs or the really expensive "Enterprise" 
variety.


Our use case would be for VM hosting (Rep2 + Arb). I'm not sure how the 
SSD cache would pan out with the shards.
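
(For context, the kind of volume I mean is roughly the following, with
hostnames, brick paths and the shard size as placeholders rather than our
actual layout:)

  gluster volume create vmvol replica 3 arbiter 1 \
      host1:/bricks/vm1 host2:/bricks/vm1 host3:/bricks/arb1
  gluster volume set vmvol features.shard on
  gluster volume set vmvol features.shard-block-size 64MB
  gluster volume start vmvol

With sharding on, the SSHD cache on each data brick would be sitting under
lots of 64MB shard files rather than whole VM images.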


I've googled and the responses are all over the map, ranging from "can't 
hurt but probably a waste of money" to "yeah, we noticed a difference".


And of course most of those reviews were when there was a significant 
price difference.


-wk

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ZFS with SSD ZIL vs XFS

2017-10-10 Thread Bartosz Zięba

Hi,

Have you thought about using an SSD as a GlusterFS hot tier?
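
(If it helps, attaching and detaching a hot tier is roughly the following;
volume name, hosts and brick paths are placeholders, and note that tiering
was deprecated in later Gluster releases:)

  gluster volume tier <volname> attach replica 2 \
      ssd-host1:/ssd-bricks/tier1 ssd-host2:/ssd-bricks/tier1
  gluster volume tier <volname> status
  gluster volume tier <volname> detach start

The tier daemon promotes hot files to the SSD bricks and demotes cold ones
back to the slow bricks.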

Regards,
Bartosz


On 10.10.2017 19:59, Gandalf Corvotempesta wrote:

2017-10-10 18:27 GMT+02:00 Jeff Darcy :

Probably not.  If there is, it would probably favor XFS.  The developers
at Red Hat use XFS almost exclusively.  We at Facebook have a mix, but
XFS is (I think) the most common.  Whatever the developers use tends to
become "the way local filesystems work" and code is written based on
that profile, so even without intention that tends to get a bit of a
boost.  To the extent that ZFS makes different tradeoffs - e.g. using
lots more memory, very different disk access patterns - it's probably
going to have a bit more of an "impedance mismatch" with the choices
Gluster itself has made.

Ok, so XFS is the way to go :)



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] small files performance

2017-10-10 Thread Alastair Neil
I just tried setting:

performance.parallel-readdir on
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.stat-prefetch
performance.cache-invalidation
performance.md-cache-timeout 600
network.inode-lru-limit 5
performance.cache-invalidation on

and clients could not see their files with ls when accessing via a fuse
mount. The files and directories were still there, however, if you accessed
them directly. Servers are 3.10.5 and the clients are 3.10 and 3.12.

Any ideas?
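
(For anyone trying to reproduce or back this out, the options are applied
and reverted per volume, e.g., with the volume name as a placeholder:)

  gluster volume set <volname> performance.parallel-readdir on
  gluster volume get <volname> performance.parallel-readdir
  gluster volume reset <volname> performance.parallel-readdir

"volume get" shows the effective value and "volume reset" drops an option
back to its default.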


On 10 October 2017 at 10:53, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-10-10 8:25 GMT+02:00 Karan Sandha :
>
>> Hi Gandalf,
>>
>> We have multiple tuning to do for small-files which decrease the time for
>> negative lookups , meta-data caching, parallel readdir. Bumping the server
>> and client event threads will help you out in increasing the small file
>> performance.
>>
>> gluster v set <volname> group metadata-cache
>> gluster v set <volname> group nl-cache
>> gluster v set <volname> performance.parallel-readdir on (Note: readdir-ahead
>> should be on)
>>
>
> This is what I'm getting with the suggested parameters.
> I'm running fio from a mounted Gluster client:
> 172.16.0.12:/gv0 on /mnt2 type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,
> allow_other,max_read=131072)
>
>
>
> # fio --ioengine=libaio --filename=fio.test --size=256M
> --direct=1 --rw=randrw --refill_buffers --norandommap
> --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16
> --runtime=60 --group_reporting --name=fio-test
> fio-test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio,
> iodepth=16
> ...
> fio-2.16
> Starting 16 processes
> fio-test: Laying out IO file(s) (1 file(s) / 256MB)
> Jobs: 14 (f=13): [m(5),_(1),m(8),f(1),_(1)] [33.9% done] [1000KB/440KB/0KB
> /s] [125/55/0 iops] [eta 01m:59s]
> fio-test: (groupid=0, jobs=16): err= 0: pid=2051: Tue Oct 10 16:51:46 2017
>   read : io=43392KB, bw=733103B/s, iops=89, runt= 60610msec
> slat (usec): min=14, max=1992.5K, avg=177873.67, stdev=382294.06
> clat (usec): min=768, max=6016.8K, avg=1871390.57, stdev=1082220.06
>  lat (usec): min=872, max=6630.6K, avg=2049264.23, stdev=1158405.41
> clat percentiles (msec):
>  |  1.00th=[   20],  5.00th=[  208], 10.00th=[  457], 20.00th=[  873],
>  | 30.00th=[ 1237], 40.00th=[ 1516], 50.00th=[ 1795], 60.00th=[ 2073],
>  | 70.00th=[ 2442], 80.00th=[ 2835], 90.00th=[ 3326], 95.00th=[ 3785],
>  | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5800],
>  | 99.99th=[ 5997]
>   write: io=18856KB, bw=318570B/s, iops=38, runt= 60610msec
> slat (usec): min=17, max=3428, avg=212.62, stdev=287.88
> clat (usec): min=59, max=6015.6K, avg=1693729.12, stdev=1003122.83
>  lat (usec): min=79, max=6015.9K, avg=1693941.74, stdev=1003126.51
> clat percentiles (usec):
>  |  1.00th=[  724],  5.00th=[144384], 10.00th=[403456],
> 20.00th=[765952],
>  | 30.00th=[1105920], 40.00th=[1368064], 50.00th=[1630208],
> 60.00th=[1875968],
>  | 70.00th=[2179072], 80.00th=[2572288], 90.00th=[3031040],
> 95.00th=[3489792],
>  | 99.00th=[4227072], 99.50th=[4423680], 99.90th=[4751360],
> 99.95th=[5210112],
>  | 99.99th=[5996544]
> lat (usec) : 100=0.15%, 250=0.05%, 500=0.06%, 750=0.09%, 1000=0.05%
> lat (msec) : 2=0.28%, 4=0.09%, 10=0.15%, 20=0.39%, 50=1.81%
> lat (msec) : 100=1.02%, 250=1.63%, 500=5.59%, 750=6.03%, 1000=7.31%
> lat (msec) : 2000=35.61%, >=2000=39.67%
>   cpu  : usr=0.01%, sys=0.01%, ctx=8218, majf=11, minf=295
>   IO depths: 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=96.9%, 32=0.0%,
> >=64=0.0%
>  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.0%
>  complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%,
> >=64=0.0%
>  issued: total=r=5424/w=2357/d=0, short=r=0/w=0/d=0,
> drop=r=0/w=0/d=0
>  latency   : target=0, window=0, percentile=100.00%, depth=16
>
> Run status group 0 (all jobs):
>READ: io=43392KB, aggrb=715KB/s, minb=715KB/s, maxb=715KB/s,
> mint=60610msec, maxt=60610msec
>   WRITE: io=18856KB, aggrb=311KB/s, minb=311KB/s, maxb=311KB/s,
> mint=60610msec, maxt=60610msec
>
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ZFS with SSD ZIL vs XFS

2017-10-10 Thread Gandalf Corvotempesta
Last time I read about tiering in Gluster, there wasn't any performance
gain with VM workloads, and moreover it doesn't speed up writes...

On 10 Oct 2017 9:27 PM, "Bartosz Zięba" wrote:

> Hi,
>
> Have you thought about using an SSD as a GlusterFS hot tier?
>
> Regards,
> Bartosz
>
>
> On 10.10.2017 19:59, Gandalf Corvotempesta wrote:
>
>> 2017-10-10 18:27 GMT+02:00 Jeff Darcy :
>>
>>> Probably not.  If there is, it would probably favor XFS.  The developers
>>> at Red Hat use XFS almost exclusively.  We at Facebook have a mix, but
>>> XFS is (I think) the most common.  Whatever the developers use tends to
>>> become "the way local filesystems work" and code is written based on
>>> that profile, so even without intention that tends to get a bit of a
>>> boost.  To the extent that ZFS makes different tradeoffs - e.g. using
>>> lots more memory, very different disk access patterns - it's probably
>>> going to have a bit more of an "impedance mismatch" with the choices
>>> Gluster itself has made.
>>>
>> Ok, so XFS is the way to go :)
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ZFS with SSD ZIL vs XFS

2017-10-10 Thread Gandalf Corvotempesta
Any performance report to share?

On 10 Oct 2017 8:25 PM, "Dmitri Chebotarov" <4dim...@gmail.com> wrote:

>
> I've had good results with using SSD as LVM cache for gluster bricks (
> http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on
> bricks.
>
>
>
> On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy  wrote:
>
>> On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
>> > Anyone made some performance comparison between XFS and ZFS with ZIL
>> > on SSD, in gluster environment ?
>> >
>> > I've tried to compare both on another SDS (LizardFS) and I haven't
>> > seen any tangible performance improvement.
>> >
>> > Is gluster different ?
>>
>> Probably not.  If there is, it would probably favor XFS.  The developers
>> at Red Hat use XFS almost exclusively.  We at Facebook have a mix, but
>> XFS is (I think) the most common.  Whatever the developers use tends to
>> become "the way local filesystems work" and code is written based on
>> that profile, so even without intention that tends to get a bit of a
>> boost.  To the extent that ZFS makes different tradeoffs - e.g. using
>> lots more memory, very different disk access patterns - it's probably
>> going to have a bit more of an "impedance mismatch" with the choices
>> Gluster itself has made.
>>
>> If you're interested in ways to benefit from a disk+SSD combo under XFS,
>> it is possible to configure XFS with a separate journal device but I
>> believe there were some bugs encountered when doing that.  Richard
>> Wareing's upcoming Dev Summit talk on Hybrid XFS might cover those, in
>> addition to his own work on using an SSD in even more interesting ways.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster shows volume size less than created

2017-10-10 Thread Pavel Kutishchev

Hello folks,

Would someone please advise: after volume creation, GlusterFS shows the 
volume as smaller than created. Example below:


Status of volume: vol_17ec47c44ae6bd45d0db4627683b4f15
--
Brick    : Brick 
glusterfs-sas-server29.sds.default.svc.kubernetes.local:/var/lib/heketi/mounts/vg_946dbd5ccbf78dddcca3857a32f32535/brick_0af81ba1b5d4e9ddb8deb57796912106/brick

TCP Port : 49159
RDMA Port    : 0
Online   : Y
Pid  : 6376
File System  : xfs
Device   : 
/dev/mapper/vg_946dbd5ccbf78dddcca3857a32f32535-brick_0af81ba1b5d4e9ddb8deb57796912106
Mount Options    : 
rw,seclabel,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota

Inode Size   : 512
Disk Space Free  : 999.5GB
Total Disk Space : 999.5GB
Inode Count  : 524283904
Free Inodes  : 524283877


But on LVM i can see the following:


  brick_0af81ba1b5d4e9ddb8deb57796912106 vg_946dbd5ccbf78dddcca3857a32f32535 Vwi-aotz-- 1000.00g tp_0af81ba1b5d4e9ddb8deb57796912106 0.05
  tp_0af81ba1b5d4e9ddb8deb57796912106 vg_946dbd5ccbf78dddcca3857a32f32535 twi-aotz-- 1000.00g 0.05 0.03
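
(A quick way to compare the two numbers, with the brick mount point as a
placeholder:)

  df -h <brick-mountpoint>
  xfs_info <brick-mountpoint>
  lvs -o lv_name,lv_size,data_percent vg_946dbd5ccbf78dddcca3857a32f32535

The 1000.00g in lvs is the thin LV's virtual size, while Gluster reports the
usable space of the XFS filesystem on top of it, so losing roughly 0.5GB to
the XFS log and metadata seems plausible.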


--
Best regards
Pavel Kutishchev
DevOPS Engineer at
Self employed.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ZFS with SSD ZIL vs XFS

2017-10-10 Thread Gandalf Corvotempesta
2017-10-10 18:27 GMT+02:00 Jeff Darcy :
> Probably not.  If there is, it would probably favor XFS.  The developers
> at Red Hat use XFS almost exclusively.  We at Facebook have a mix, but
> XFS is (I think) the most common.  Whatever the developers use tends to
> become "the way local filesystems work" and code is written based on
> that profile, so even without intention that tends to get a bit of a
> boost.  To the extent that ZFS makes different tradeoffs - e.g. using
> lots more memory, very different disk access patterns - it's probably
> going to have a bit more of an "impedance mismatch" with the choices
> Gluster itself has made.

Ok, so XFS is the way to go :)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ZFS with SSD ZIL vs XFS

2017-10-10 Thread Dmitri Chebotarov
I've had good results with using SSD as LVM cache for gluster bricks (
http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on
bricks.
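
(Roughly what that looks like; device names, sizes and LV names below are
placeholders rather than my exact commands:)

  pvcreate /dev/nvme0n1
  vgextend vg_bricks /dev/nvme0n1
  lvcreate --type cache-pool -L 100G -n brick1_cache vg_bricks /dev/nvme0n1
  lvconvert --type cache --cachepool vg_bricks/brick1_cache vg_bricks/brick1

The existing brick LV keeps its XFS filesystem; lvconvert just attaches the
SSD-backed cache pool underneath it.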



On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy  wrote:

> On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> > Anyone made some performance comparison between XFS and ZFS with ZIL
> > on SSD, in gluster environment ?
> >
> > I've tried to compare both on another SDS (LizardFS) and I haven't
> > seen any tangible performance improvement.
> >
> > Is gluster different ?
>
> Probably not.  If there is, it would probably favor XFS.  The developers
> at Red Hat use XFS almost exclusively.  We at Facebook have a mix, but
> XFS is (I think) the most common.  Whatever the developers use tends to
> become "the way local filesystems work" and code is written based on
> that profile, so even without intention that tends to get a bit of a
> boost.  To the extent that ZFS makes different tradeoffs - e.g. using
> lots more memory, very different disk access patterns - it's probably
> going to have a bit more of an "impedance mismatch" with the choices
> Gluster itself has made.
>
> If you're interested in ways to benefit from a disk+SSD combo under XFS,
> it is possible to configure XFS with a separate journal device but I
> believe there were some bugs encountered when doing that.  Richard
> Wareing's upcoming Dev Summit talk on Hybrid XFS might cover those, in
> addition to his own work on using an SSD in even more interesting ways.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ZFS with SSD ZIL vs XFS

2017-10-10 Thread Jeff Darcy
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> Anyone made some performance comparison between XFS and ZFS with ZIL
> on SSD, in gluster environment ?
> 
> I've tried to compare both on another SDS (LizardFS) and I haven't
> seen any tangible performance improvement.
> 
> Is gluster different ?

Probably not.  If there is, it would probably favor XFS.  The developers
at Red Hat use XFS almost exclusively.  We at Facebook have a mix, but
XFS is (I think) the most common.  Whatever the developers use tends to
become "the way local filesystems work" and code is written based on
that profile, so even without intention that tends to get a bit of a
boost.  To the extent that ZFS makes different tradeoffs - e.g. using
lots more memory, very different disk access patterns - it's probably
going to have a bit more of an "impedance mismatch" with the choices
Gluster itself has made.

If you're interested in ways to benefit from a disk+SSD combo under XFS,
it is possible to configure XFS with a separate journal device but I
believe there were some bugs encountered when doing that.  Richard
Wareing's upcoming Dev Summit talk on Hybrid XFS might cover those, in
addition to his own work on using an SSD in even more interesting ways.
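
(For completeness, a sketch of that external-log setup, with devices and
mount point as placeholders; the same logdev option has to be passed at
every mount:)

  mkfs.xfs -l logdev=/dev/nvme0n1p1,size=512m /dev/sdb1
  mount -o logdev=/dev/nvme0n1p1,noatime,inode64 /dev/sdb1 /bricks/brick1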
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] ZFS with SSD ZIL vs XFS

2017-10-10 Thread Gandalf Corvotempesta
Has anyone made a performance comparison between XFS and ZFS with a ZIL
on SSD in a Gluster environment?

I've tried to compare both on another SDS (LizardFS) and I haven't
seen any tangible performance improvement.

Is Gluster different?
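
(To be concrete, by "ZIL on SSD" I mean a separate log device (SLOG) added
to the pool, something like the following, with pool and device names as
placeholders:)

  zpool add tank log /dev/nvme0n1
  # or mirrored, so a single SSD failure doesn't risk in-flight sync writes:
  zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
  zpool status tank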
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] small files performance

2017-10-10 Thread Gandalf Corvotempesta
2017-10-10 8:25 GMT+02:00 Karan Sandha :

> Hi Gandalf,
>
> We have multiple tuning to do for small-files which decrease the time for
> negative lookups , meta-data caching, parallel readdir. Bumping the server
> and client event threads will help you out in increasing the small file
> performance.
>
> gluster v set <volname> group metadata-cache
> gluster v set <volname> group nl-cache
> gluster v set <volname> performance.parallel-readdir on (Note: readdir-ahead
> should be on)
>
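
(The event-thread bump mentioned above would be something like this, with
the volume name and counts as placeholders:)

  gluster volume set <volname> server.event-threads 4
  gluster volume set <volname> client.event-threads 4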

This is what I'm getting with the suggested parameters.
I'm running fio from a mounted Gluster client:
172.16.0.12:/gv0 on /mnt2 type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)



# fio --ioengine=libaio --filename=fio.test --size=256M \
    --direct=1 --rw=randrw --refill_buffers --norandommap \
    --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 \
    --runtime=60 --group_reporting --name=fio-test
fio-test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio,
iodepth=16
...
fio-2.16
Starting 16 processes
fio-test: Laying out IO file(s) (1 file(s) / 256MB)
Jobs: 14 (f=13): [m(5),_(1),m(8),f(1),_(1)] [33.9% done] [1000KB/440KB/0KB
/s] [125/55/0 iops] [eta 01m:59s]
fio-test: (groupid=0, jobs=16): err= 0: pid=2051: Tue Oct 10 16:51:46 2017
  read : io=43392KB, bw=733103B/s, iops=89, runt= 60610msec
slat (usec): min=14, max=1992.5K, avg=177873.67, stdev=382294.06
clat (usec): min=768, max=6016.8K, avg=1871390.57, stdev=1082220.06
 lat (usec): min=872, max=6630.6K, avg=2049264.23, stdev=1158405.41
clat percentiles (msec):
 |  1.00th=[   20],  5.00th=[  208], 10.00th=[  457], 20.00th=[  873],
 | 30.00th=[ 1237], 40.00th=[ 1516], 50.00th=[ 1795], 60.00th=[ 2073],
 | 70.00th=[ 2442], 80.00th=[ 2835], 90.00th=[ 3326], 95.00th=[ 3785],
 | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5800],
 | 99.99th=[ 5997]
  write: io=18856KB, bw=318570B/s, iops=38, runt= 60610msec
slat (usec): min=17, max=3428, avg=212.62, stdev=287.88
clat (usec): min=59, max=6015.6K, avg=1693729.12, stdev=1003122.83
 lat (usec): min=79, max=6015.9K, avg=1693941.74, stdev=1003126.51
clat percentiles (usec):
 |  1.00th=[  724],  5.00th=[144384], 10.00th=[403456],
20.00th=[765952],
 | 30.00th=[1105920], 40.00th=[1368064], 50.00th=[1630208],
60.00th=[1875968],
 | 70.00th=[2179072], 80.00th=[2572288], 90.00th=[3031040],
95.00th=[3489792],
 | 99.00th=[4227072], 99.50th=[4423680], 99.90th=[4751360],
99.95th=[5210112],
 | 99.99th=[5996544]
lat (usec) : 100=0.15%, 250=0.05%, 500=0.06%, 750=0.09%, 1000=0.05%
lat (msec) : 2=0.28%, 4=0.09%, 10=0.15%, 20=0.39%, 50=1.81%
lat (msec) : 100=1.02%, 250=1.63%, 500=5.59%, 750=6.03%, 1000=7.31%
lat (msec) : 2000=35.61%, >=2000=39.67%
  cpu  : usr=0.01%, sys=0.01%, ctx=8218, majf=11, minf=295
  IO depths: 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=96.9%, 32=0.0%,
>=64=0.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
 complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%,
>=64=0.0%
 issued: total=r=5424/w=2357/d=0, short=r=0/w=0/d=0,
drop=r=0/w=0/d=0
 latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: io=43392KB, aggrb=715KB/s, minb=715KB/s, maxb=715KB/s,
mint=60610msec, maxt=60610msec
  WRITE: io=18856KB, aggrb=311KB/s, minb=311KB/s, maxb=311KB/s,
mint=60610msec, maxt=60610msec
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Peer isolation while healing

2017-10-10 Thread ML

We have a lot of small files indeed.

I'll test the different values for cluster.data-self-heal-algorithm
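
(For reference, a sketch of what I'll be trying, with the volume name as a
placeholder; the accepted values are diff, full and reset:)

  gluster volume set <volname> cluster.data-self-heal-algorithm full
  gluster volume get <volname> cluster.data-self-heal-algorithm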

Thanks!


On 09/10/2017 at 15:38, lemonni...@ulrar.net wrote:

On Mon, Oct 09, 2017 at 03:29:41PM +0200, ML wrote:

The server's load was huge during the healing (cpu at 100%), and the
disk latency increased a lot.

Depending on the file sizes, you might want to consider changing the
heal algorithm. It might be better to just re-download the whole file /
shard than to try and heal it, assuming you don't have big files. That
would free up the CPU.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users