Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-31 Thread Vitalij Satanivskij

Hello.

The patch worked as expected - no compression on L2 - but:

a) L2ARC causes a big overhead on filesystem I/O
b) after some time the errors appear again.


For now the best choice is not to use any patch on the system,

so for now we are removing the following changes from the system:

1) http://svnweb.freebsd.org/base?view=revision&sortby=file&revision=256889
2) http://svnweb.freebsd.org/base?view=revision&revision=255753


Yes, zpool status says:

cache
  gpt/cache0  ONLINE   0 0 0  block size: 512B configured, 4096B native
  gpt/cache1  ONLINE   0 0 0  block size: 512B configured, 4096B native
  gpt/cache2  ONLINE   0 0 0  block size: 512B configured, 4096B native
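
The 512B-configured vs 4096B-native mismatch above is exactly what the reverted ashift change was meant to address. A workaround commonly used on FreeBSD at the time was to recreate the cache vdevs behind 4K gnop(8) overlays so ZFS selects ashift=12. A minimal sketch, untested on this pool, with POOL and the gpt labels taken from this thread:

    # Drop the misaligned cache vdevs first (L2ARC contents are disposable).
    zpool remove POOL gpt/cache0 gpt/cache1 gpt/cache2

    # Overlay each partition with a 4096-byte-sector gnop device and re-add;
    # ZFS derives ashift from the sector size the provider reports.
    for c in cache0 cache1 cache2; do
            gnop create -S 4096 gpt/$c
            zpool add POOL cache gpt/$c.nop
    done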



But L2ARC works as expected (no errors, no noticeable performance problems).

There are many other performance problems on FreeBSD 10 since the new vmem
subsystem was introduced (at least there were no problems before),

but they are not related to L2.

I don't even know what to do with this situation.



Vitalij Satanivskij wrote:
VS I didn't apply the previous patch on the high-load server, only on a test server,
VS where I found that it's not disabling compression.
VS 
VS Thank you for the help, I will try the new patch as soon as possible.
VS 
VS SH 
VS SH Have you seen any l2_io_error or l2_cksum_bad since
VS SH applying the ashift patch?
VS SH 


Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-28 Thread Vitalij Satanivskij

So that patch is not disabling compression on L2ARC?


Steven Hartland wrote:
SH I would have expected zero for all l2_compress values.
SH 
SH Regards
SH Steve
SH 
SH - Original Message - 
SH From: Vitalij Satanivskij sa...@ukr.net
SH To: Steven Hartland kill...@multiplay.co.uk
SH Cc: Vitalij Satanivskij sa...@ukr.net; Dmitriy Makarov suppor...@ukr.net; freebsd-current@freebsd.org
SH Sent: Friday, October 25, 2013 8:32 AM
SH Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173
SH 
SH 
SH  Just after a system reboot with this patch, I found:
SH  
SH  kstat.zfs.misc.arcstats.l2_compress_successes: 6083
SH  kstat.zfs.misc.arcstats.l2_compress_zeros: 1
SH  kstat.zfs.misc.arcstats.l2_compress_failures: 296
SH  
SH  Compression on the test pool (where I'm testing this patch) is lz4.
SH  
SH  So is this OK?
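
For reference, whether a kernel built with compression disabled really stops compressing can be watched via those same counters; a minimal check:

    # With L2ARC compression disabled, l2_compress_successes should stay
    # at zero after boot; growing values mean buffers are still compressed.
    sysctl kstat.zfs.misc.arcstats.l2_compress_successes \
           kstat.zfs.misc.arcstats.l2_compress_zeros \
           kstat.zfs.misc.arcstats.l2_compress_failures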
SH  
SH  
SH  
SH  Steven Hartland wrote:
SH  SH If you are still seeing high load try commenting out the following,
SH  SH which should disable L2ARC compression:
SH  SH sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
SH  SH 	if (l2arc_compress)
SH  SH 		hdr->b_flags |= ARC_L2COMPRESS;
SH  SH 
SH 
SH 
SH 


Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-28 Thread Steven Hartland

Correct, there are clearly some other cases which set
ARC_L2COMPRESS on b_flags for writes, but I can't see
where, unless it somehow gets through from the read
cases.

Attached is a patch which removes all paths which
set ARC_L2COMPRESS on b_flags for both read and write.

Revert the last patch and try this one.

To be clear, l2_compress_failures only counts failures
to compress the data, which is not an error.

Have you seen any l2_io_error or l2_cksum_bad since
applying the ashift patch?

   Regards
   Steve

- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net


So that patch is not disabling compression on L2ARC?


Steven Hartland wrote:
SH I would have expected zero for all l2_compress values.
SH
SH Regards
SH Steve
SH
SH - Original Message - 
SH From: Vitalij Satanivskij sa...@ukr.net

SH To: Steven Hartland kill...@multiplay.co.uk
SH Cc: Vitalij Satanivskij sa...@ukr.net; Dmitriy Makarov suppor...@ukr.net; freebsd-current@freebsd.org
SH Sent: Friday, October 25, 2013 8:32 AM
SH Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173
SH
SH
SH  Just after a system reboot with this patch, I found:
SH 
SH  kstat.zfs.misc.arcstats.l2_compress_successes: 6083
SH  kstat.zfs.misc.arcstats.l2_compress_zeros: 1
SH  kstat.zfs.misc.arcstats.l2_compress_failures: 296
SH 
SH  Compression on the test pool (where I'm testing this patch) is lz4.
SH 
SH  So is this OK?
SH 
SH 
SH 
SH  Steven Hartland wrote:
SH  SH If you are still seeing high load try commenting out the following,
SH  SH which should disable L2ARC compression:
SH  SH sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
SH  SH 	if (l2arc_compress)
SH  SH 		hdr->b_flags |= ARC_L2COMPRESS;
SH  SH
SH 
SH
SH 






zfs-l2arc-no-compress.patch
Description: Binary data

Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-28 Thread Vitalij Satanivskij
I didn't apply the previous patch on the high-load server, only on a test server, where
I found that it's not disabling compression.

Thank you for the help, I will try the new patch as soon as possible.

SH 
SH Have you seen any l2_io_error or l2_cksum_bad since
SH applying the ashift patch?
SH 


Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-25 Thread Vitalij Satanivskij

Hello 

Bad news: today two servers with the applied patches got a degraded L2ARC:

L2 ARC Summary: (DEGRADED)
        Passed Headroom:                4.17m
        Tried Lock Failures:            635.53m
        IO In Progress:                 41.89k
        Low Memory Aborts:              8
        Free on Write:                  1.03m
        Writes While Full:              12.95k
        R/W Clashes:                    405.05k
        Bad Checksums:                  362.19k
        IO Errors:                      45.60k
        SPA Mismatch:                   526.37m

L2 ARC Size: (Adaptive)                 391.03  GiB
        Header Size:    0.59%           2.30    GiB

So it looks like the problem has not disappeared ^(
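
One way to catch when the degradation starts, rather than only that it happened, is to sample the two error counters over time; a minimal sketch (the interval is arbitrary):

    # Log the L2ARC error counters once a minute; the timestamps show
    # roughly when checksum/IO errors begin to climb after a reboot.
    while :; do
            date
            sysctl kstat.zfs.misc.arcstats.l2_cksum_bad \
                   kstat.zfs.misc.arcstats.l2_io_error
            sleep 60
    done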




Vitalij Satanivskij wrote:
VS 
VS First of all, thank you for the help.
VS 
VS As for the high load on the system, it looks like the L2ARC problems have little
VS impact on the load compared to other, not yet fully classified things.
VS 
VS It looks like our internal software and the libraries it uses don't like the new VMEM
VS subsystem; at least the system behavior is completely different from a 6-month-older CURRENT.
VS 
VS So for now there are no problems with L2ARC errors.
VS 
VS We will try to understand the reason for the load and fix it, or ask for help again ^).
VS 
VS 
VS 
VS 
VS Steven Hartland wrote:
VS SH First off, I just wanted to clarify that you don't need compression on the
VS SH dataset for L2ARC to use LZ4 compression; it does this by default and it is
VS SH not currently configurable.
VS SH 
VS SH Next up I believe we've found the cause of this high load and I've just
VS SH committed the fix to head:
VS SH http://svnweb.freebsd.org/base?view=revision&sortby=file&revision=256889
VS SH 
VS SH Thanks to Vitalij for testing :)
VS SH 
VS SH Dmitriy if you could test on your side too that would be appreciated.
VS SH 
VS SH Regards
VS SH Steve
VS SH 
VS SH - Original Message - 
VS SH From: Vitalij Satanivskij sa...@ukr.net
VS SH To: Allan Jude free...@allanjude.com
VS SH Cc: freebsd-current@freebsd.org
VS SH Sent: Thursday, October 10, 2013 6:03 PM
VS SH Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173
VS SH 
VS SH 
VS SH  AJ Some background on L2ARC compression for you:
VS SH  AJ 
VS SH  AJ http://wiki.illumos.org/display/illumos/L2ARC+Compression
VS SH  
VS SH  I've already seen it.
VS SH  
VS SH  
VS SH  
VS SH  AJ http://svnweb.freebsd.org/base?view=revision&revision=251478
VS SH  AJ 
VS SH  AJ Are you sure that compression on pool/zfs is off? It would normally
VS SH  AJ inherit from the parent, so double check with: zfs get compression pool/zfs
VS SH  
VS SH  Yes, compression is turned off on pool/zfs; it has been rechecked many times.
VS SH  
VS SH  
VS SH  
VS SH  AJ Is the data on pool/zfs related to the data on the root pool? if
VS SH  AJ pool/zfs were a clone, and the data is actually used in both places, the
VS SH  AJ newer 'single copy ARC' feature may come in to play:
VS SH  AJ https://www.illumos.org/issues/3145
VS SH  
VS SH  No, both pool and pool/zfs have different types of data; pool/zfs was
VS SH  created as a new empty zfs (zfs create pool/zfs)
VS SH  
VS SH  and the data was written to it from another server.
VS SH  
VS SH  
VS SH  Right now one machine works fine with L2ARC. This machine is without the
VS SH  patch for correcting ashift on cache devices.
VS SH  
VS SH  At least 3 days working with zero errors. Other servers with the same config,
VS SH  similar data and load began to report errors after 2 days of work.
VS SH  
VS SH  
VS SH  AJ 
VS SH  AJ 
VS SH  AJ 
VS SH  AJ -- 
VS SH  AJ Allan Jude
VS SH  AJ 
VS SH 
VS SH 
VS SH 
VS

Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-25 Thread Vitalij Satanivskij
Just after a system reboot with this patch, I found:

kstat.zfs.misc.arcstats.l2_compress_successes: 6083
kstat.zfs.misc.arcstats.l2_compress_zeros: 1
kstat.zfs.misc.arcstats.l2_compress_failures: 296

Compression on the test pool (where I'm testing this patch) is lz4.

So is this OK?



Steven Hartland wrote:
SH If you are still seeing high load try commenting out the following,
SH which should disable L2ARC compression:
SH sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
SH 	if (l2arc_compress)
SH 		hdr->b_flags |= ARC_L2COMPRESS;
SH 


Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-25 Thread Steven Hartland

I would have expected zero for all l2_compress values.

   Regards
   Steve

- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net

To: Steven Hartland kill...@multiplay.co.uk
Cc: Vitalij Satanivskij sa...@ukr.net; Dmitriy Makarov suppor...@ukr.net; freebsd-current@freebsd.org
Sent: Friday, October 25, 2013 8:32 AM
Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173


Just after a system reboot with this patch, I found:

kstat.zfs.misc.arcstats.l2_compress_successes: 6083
kstat.zfs.misc.arcstats.l2_compress_zeros: 1
kstat.zfs.misc.arcstats.l2_compress_failures: 296

Compression on the test pool (where I'm testing this patch) is lz4.

So is this OK?



Steven Hartland wrote:
SH If you are still seeing high load try commenting out the following,
SH which should disable L2ARC compression:
SH sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
SH 	if (l2arc_compress)
SH 		hdr->b_flags |= ARC_L2COMPRESS;
SH 






Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-22 Thread Steven Hartland

First off, I just wanted to clarify that you don't need compression on the
dataset for L2ARC to use LZ4 compression; it does this by default and it is
not currently configurable.

Next up I believe we've found the cause of this high load and I've just
committed the fix to head:
http://svnweb.freebsd.org/base?view=revision&sortby=file&revision=256889

Thanks to Vitalij for testing :)

Dmitriy if you could test on your side too that would be appreciated.

   Regards
   Steve

- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net

To: Allan Jude free...@allanjude.com
Cc: freebsd-current@freebsd.org
Sent: Thursday, October 10, 2013 6:03 PM
Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173



AJ Some background on L2ARC compression for you:
AJ 
AJ http://wiki.illumos.org/display/illumos/L2ARC+Compression


I've already seen it.



AJ http://svnweb.freebsd.org/base?view=revision&revision=251478
AJ 
AJ Are you sure that compression on pool/zfs is off? It would normally
AJ inherit from the parent, so double check with: zfs get compression pool/zfs

Yes, compression is turned off on pool/zfs; it has been rechecked many times.



AJ Is the data on pool/zfs related to the data on the root pool? if
AJ pool/zfs were a clone, and the data is actually used in both places, the
AJ newer 'single copy ARC' feature may come in to play:
AJ https://www.illumos.org/issues/3145

No, both pool and pool/zfs have different types of data; pool/zfs was created as
a new empty zfs (zfs create pool/zfs)

and the data was written to it from another server.



Right now one machine works fine with L2ARC. This machine is without the patch
for correcting ashift on cache devices.

At least 3 days working with zero errors. Other servers with the same config,
similar data and load began to report errors after 2 days of work.



AJ 
AJ 
AJ 
AJ -- 
AJ Allan Jude
AJ 






Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-22 Thread Vitalij Satanivskij

First of all, thank you for the help.

As for the high load on the system, it looks like the L2ARC problems have little impact
on the load compared to other, not yet fully classified things.

It looks like our internal software and the libraries it uses don't like the new VMEM
subsystem; at least the system behavior is completely different from a 6-month-older CURRENT.

So for now there are no problems with L2ARC errors.

We will try to understand the reason for the load and fix it, or ask for help again ^).




Steven Hartland wrote:
SH First off, I just wanted to clarify that you don't need compression on the
SH dataset for L2ARC to use LZ4 compression; it does this by default and it is
SH not currently configurable.
SH 
SH Next up I believe we've found the cause of this high load and I've just
SH committed the fix to head:
SH http://svnweb.freebsd.org/base?view=revision&sortby=file&revision=256889
SH 
SH Thanks to Vitalij for testing :)
SH 
SH Dmitriy if you could test on your side too that would be appreciated.
SH 
SH Regards
SH Steve
SH 
SH - Original Message - 
SH From: Vitalij Satanivskij sa...@ukr.net
SH To: Allan Jude free...@allanjude.com
SH Cc: freebsd-current@freebsd.org
SH Sent: Thursday, October 10, 2013 6:03 PM
SH Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173
SH 
SH 
SH  AJ Some background on L2ARC compression for you:
SH  AJ 
SH  AJ http://wiki.illumos.org/display/illumos/L2ARC+Compression
SH  
SH  I've already seen it.
SH  
SH  
SH  
SH  AJ http://svnweb.freebsd.org/base?view=revision&revision=251478
SH  AJ 
SH  AJ Are you sure that compression on pool/zfs is off? It would normally
SH  AJ inherit from the parent, so double check with: zfs get compression pool/zfs
SH  
SH  Yes, compression is turned off on pool/zfs; it has been rechecked many times.
SH  
SH  
SH  
SH  AJ Is the data on pool/zfs related to the data on the root pool? if
SH  AJ pool/zfs were a clone, and the data is actually used in both places, the
SH  AJ newer 'single copy ARC' feature may come in to play:
SH  AJ https://www.illumos.org/issues/3145
SH  
SH  No, both pool and pool/zfs have different types of data; pool/zfs was
SH  created as a new empty zfs (zfs create pool/zfs)
SH  
SH  and the data was written to it from another server.
SH  
SH  
SH  Right now one machine works fine with L2ARC. This machine is without the
SH  patch for correcting ashift on cache devices.
SH  
SH  At least 3 days working with zero errors. Other servers with the same config,
SH  similar data and load began to report errors after 2 days of work.
SH  
SH  
SH  AJ 
SH  AJ 
SH  AJ 
SH  AJ -- 
SH  AJ Allan Jude
SH  AJ 
SH 
SH 
SH 


Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-22 Thread Steven Hartland

If you are still seeing high load, try commenting out the following,
which should disable L2ARC compression:
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c

	if (l2arc_compress)
		hdr->b_flags |= ARC_L2COMPRESS;
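
After commenting those lines out, the kernel has to be rebuilt and the machine rebooted for the change to take effect; a typical sequence (the KERNCONF name is illustrative):

    cd /usr/src
    make buildkernel KERNCONF=GENERIC
    make installkernel KERNCONF=GENERIC
    shutdown -r now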

   Regards
   Steve
- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net

To: Steven Hartland kill...@multiplay.co.uk
Cc: Vitalij Satanivskij sa...@ukr.net; Allan Jude free...@allanjude.com; Dmitriy Makarov suppor...@ukr.net; freebsd-current@freebsd.org

Sent: Tuesday, October 22, 2013 3:10 PM
Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173




First of all, thank you for the help.

As for the high load on the system, it looks like the L2ARC problems have little impact
on the load compared to other, not yet fully classified things.

It looks like our internal software and the libraries it uses don't like the new VMEM
subsystem; at least the system behavior is completely different from a 6-month-older CURRENT.

So for now there are no problems with L2ARC errors.

We will try to understand the reason for the load and fix it, or ask for help again ^).




Steven Hartland wrote:
SH First off, I just wanted to clarify that you don't need compression on the
SH dataset for L2ARC to use LZ4 compression; it does this by default and it is
SH not currently configurable.
SH
SH Next up I believe we've found the cause of this high load and I've just
SH committed the fix to head:
SH http://svnweb.freebsd.org/base?view=revision&sortby=file&revision=256889
SH
SH Thanks to Vitalij for testing :)
SH
SH Dmitriy if you could test on your side too that would be appreciated.
SH
SH Regards
SH Steve
SH
SH - Original Message - 
SH From: Vitalij Satanivskij sa...@ukr.net

SH To: Allan Jude free...@allanjude.com
SH Cc: freebsd-current@freebsd.org
SH Sent: Thursday, October 10, 2013 6:03 PM
SH Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173
SH
SH
SH  AJ Some background on L2ARC compression for you:
SH  AJ
SH  AJ http://wiki.illumos.org/display/illumos/L2ARC+Compression
SH 
SH  I've already seen it.
SH 
SH 
SH 
SH  AJ http://svnweb.freebsd.org/base?view=revision&revision=251478
SH  AJ
SH  AJ Are you sure that compression on pool/zfs is off? It would normally
SH  AJ inherit from the parent, so double check with: zfs get compression pool/zfs
SH 
SH  Yes, compression is turned off on pool/zfs; it has been rechecked many times.
SH 
SH 
SH 
SH  AJ Is the data on pool/zfs related to the data on the root pool? if
SH  AJ pool/zfs were a clone, and the data is actually used in both places, the
SH  AJ newer 'single copy ARC' feature may come in to play:
SH  AJ https://www.illumos.org/issues/3145
SH 
SH  No, both pool and pool/zfs have different types of data; pool/zfs was
SH  created as a new empty zfs (zfs create pool/zfs)
SH  
SH  and the data was written to it from another server.
SH 
SH 
SH  Right now one machine works fine with L2ARC. This machine is without the
SH  patch for correcting ashift on cache devices.
SH  
SH  At least 3 days working with zero errors. Other servers with the same config,
SH  similar data and load began to report errors after 2 days of work.
SH 
SH 
SH  AJ
SH  AJ
SH  AJ
SH  AJ -- 
SH  AJ Allan Jude

SH  AJ
SH 
SH
SH 






Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-10 Thread Vitalij Satanivskij
The same situation happened yesterday again :(

What confuses me while trying to understand where I'm wrong:


First, some info.

We have a zfs pool POOL and one more zfs on it, POOL/zfs

POOL - has only primarycache enabled (ALL)
POOL/zfs - has both primary and secondary cache for ALL

POOL has compression=lz4

POOL/zfs has none


POOL - has around 9TB of data

POOL/zfs - has 1TB

The secondary cache has this configuration:

cache
  gpt/cache0  ONLINE   0 0 0
  gpt/cache1  ONLINE   0 0 0
  gpt/cache2  ONLINE   0 0 0

gpt/cache0-2 are Intel SSDSC2BW180A4 SSDs, 180GB each,

so the full raw size for L2 is 540GB (really 489GB).

First question - will the data on L2ARC be compressed or not?

Second, in the stats we see:

L2 ARC Size: (Adaptive)                 2.08    TiB

earlier it was 1.1, 1.4, ...

So: a) how can the cache be bigger than the zfs itself?
    b) in case it's not compressed (the answer to the first question), how can it be bigger than the real SSD size?


One more comment: if the L2ARC size grows above the physical size, I see these stats

kstat.zfs.misc.arcstats.l2_cksum_bad: 50907344
kstat.zfs.misc.arcstats.l2_io_error: 4547377

and growing.
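
A quick sanity check here is to compare l2_asize (the bytes actually allocated on the cache devices) against the raw capacity of those devices; a sketch using the labels above:

    # l2_asize should never exceed the summed mediasize of the cache devices.
    sysctl -n kstat.zfs.misc.arcstats.l2_asize
    for c in gpt/cache0 gpt/cache1 gpt/cache2; do
            diskinfo $c        # third field is mediasize in bytes
    done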


System is r255173 with patch from rr255173


Finally, maybe somebody has ideas about what's really happening...





Vitalij Satanivskij wrote:
VS 
VS One more question -
VS 
VS we have two counters -
VS 
VS kstat.zfs.misc.arcstats.l2_size: 1256609410560
VS kstat.zfs.misc.arcstats.l2_asize: 1149007667712
VS 
VS can anybody explain how to understand them, i.e. is l2_asize the real used space
VS on l2arc and l2_size the uncompressed size,
VS 
VS or maybe something else?
VS 
VS 
VS 
VS Vitalij Satanivskij wrote:
VS VS 
VS VS Data on the pool has a compressratio of around 1.4.
VS VS 
VS VS On different servers with the same data type and load, L2 ARC Size: (Adaptive) can be different,
VS VS 
VS VS for example 1.04TiB vs 1.45TiB.
VS VS 
VS VS But they all have the same problem - it grows over time.
VS VS 
VS VS 
VS VS What's stranger for us -
VS VS 
VS VS ARC: 80G Total, 4412M MFU, 5040M MRU, 76M Anon, 78G Header, 2195M Other
VS VS 
VS VS 78G header size and abnormal -
VS VS 
VS VS kstat.zfs.misc.arcstats.l2_cksum_bad: 210920592
VS VS kstat.zfs.misc.arcstats.l2_io_error: 7362414
VS VS 
VS VS sysctls growing every second.
VS VS 
VS VS All parts of the server (hardware parts too) are in a normal state.
VS VS 
VS VS After a reboot there are no problems for some period, until the cache size grows to some limit.
VS VS 
VS VS 
VS VS 
VS VS Mark Felder wrote:
VS VS MF On Mon, Oct 7, 2013, at 13:09, Dmitriy Makarov wrote:
VS VS MF  
VS VS MF  How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with total physical size
VS VS MF  of L2ARC devices 490GB?
VS VS MF  
VS VS MF 
VS VS MF http://svnweb.freebsd.org/base?view=revision&revision=251478
VS VS MF 
VS VS MF L2ARC compression perhaps?


Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-10 Thread Vitalij Satanivskij

Hm, more strange things on another server -

zfs-stats -L


ZFS Subsystem Report                            Thu Oct 10 12:56:54 2013


L2 ARC Summary: (DEGRADED)
        Passed Headroom:                8.34m
        Tried Lock Failures:            145.66m
        IO In Progress:                 9.76m
        Low Memory Aborts:              526
        Free on Write:                  1.70m
        Writes While Full:              29.28k
        R/W Clashes:                    341.30k
        Bad Checksums:                  865.91k
        IO Errors:                      44.19k
        SPA Mismatch:                   32.03m

L2 ARC Size: (Adaptive)                 189.28  GiB
        Header Size:    4.88%           9.24    GiB

It looks like the size has nothing to do with the IO errors.

So the question is - under what conditions can errors like Bad Checksums and IO Errors happen?

It doesn't look like a hardware problem.

All SSDs are attached to the onboard Intel SATA controller (the motherboard is a
Supermicro X9SRL-F).
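
To rule the SSDs out more firmly, SMART health and the device error logs are worth a look; a sketch, assuming smartmontools is installed and ada1-ada3 are the cache SSDs (device names are illustrative):

    # Overall health verdict plus the on-device error log for each cache SSD.
    for d in ada1 ada2 ada3; do
            smartctl -H -l error /dev/$d
    done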
 

Vitalij Satanivskij wrote:
VS The same situation happened yesterday again :(
VS 
VS What confuses me while trying to understand where I'm wrong:
VS 
VS 
VS First, some info.
VS 
VS We have a zfs pool POOL and one more zfs on it, POOL/zfs
VS 
VS POOL - has only primarycache enabled (ALL)
VS POOL/zfs - has both primary and secondary cache for ALL
VS 
VS POOL has compression=lz4
VS 
VS POOL/zfs has none
VS 
VS 
VS POOL - has around 9TB of data
VS 
VS POOL/zfs - has 1TB
VS 
VS The secondary cache has this configuration:
VS 
VS cache
VS   gpt/cache0  ONLINE   0 0 0
VS   gpt/cache1  ONLINE   0 0 0
VS   gpt/cache2  ONLINE   0 0 0
VS 
VS gpt/cache0-2 are Intel SSDSC2BW180A4 SSDs, 180GB each,
VS 
VS so the full raw size for L2 is 540GB (really 489GB).
VS 
VS First question - will the data on L2ARC be compressed or not?
VS 
VS Second, in the stats we see:
VS 
VS L2 ARC Size: (Adaptive)                 2.08    TiB
VS 
VS earlier it was 1.1, 1.4, ...
VS 
VS So: a) how can the cache be bigger than the zfs itself?
VS     b) in case it's not compressed (the answer to the first question), how can it be bigger than the real SSD size?
VS 
VS 
VS One more comment: if the L2ARC size grows above the physical size, I see these stats
VS 
VS kstat.zfs.misc.arcstats.l2_cksum_bad: 50907344
VS kstat.zfs.misc.arcstats.l2_io_error: 4547377
VS 
VS and growing.
VS 
VS 
VS System is r255173 with patch from rr255173
VS 
VS 
VS Finally, maybe somebody has ideas about what's really happening...
VS 
VS 
VS 
VS 
VS 
VS Vitalij Satanivskij wrote:
VS VS 
VS VS One more question -
VS VS 
VS VS we have two counters -
VS VS 
VS VS kstat.zfs.misc.arcstats.l2_size: 1256609410560
VS VS kstat.zfs.misc.arcstats.l2_asize: 1149007667712
VS VS 
VS VS can anybody explain how to understand them, i.e. is l2_asize the real used space on l2arc and l2_size the uncompressed size,
VS VS 
VS VS or maybe something else?
VS VS 
VS VS 
VS VS 
VS VS Vitalij Satanivskij wrote:
VS VS VS 
VS VS VS Data on the pool has a compressratio of around 1.4.
VS VS VS 
VS VS VS On different servers with the same data type and load, L2 ARC Size: (Adaptive) can be different,
VS VS VS 
VS VS VS for example 1.04TiB vs 1.45TiB.
VS VS VS 
VS VS VS But they all have the same problem - it grows over time.
VS VS VS 
VS VS VS 
VS VS VS What's stranger for us -
VS VS VS 
VS VS VS ARC: 80G Total, 4412M MFU, 5040M MRU, 76M Anon, 78G Header, 2195M Other
VS VS VS 
VS VS VS 78G header size and abnormal -
VS VS VS 
VS VS VS kstat.zfs.misc.arcstats.l2_cksum_bad: 210920592
VS VS VS kstat.zfs.misc.arcstats.l2_io_error: 7362414
VS VS VS 
VS VS VS sysctls growing every second.
VS VS VS 
VS VS VS All parts of the server (hardware parts too) are in a normal state.
VS VS VS 
VS VS VS After a reboot there are no problems for some period, until the cache size grows to some limit.
VS VS VS 
VS VS VS 
VS VS VS 
VS VS VS Mark Felder wrote:
VS VS VS MF On Mon, Oct 7, 2013, at 13:09, Dmitriy Makarov wrote:
VS VS VS MF  
VS VS VS MF  How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with total physical size
VS VS VS MF  of L2ARC devices 490GB?
VS VS VS MF  
VS VS VS MF 
VS VS VS MF http://svnweb.freebsd.org/base?view=revision&revision=251478
VS VS VS MF 
VS VS VS MF L2ARC compression perhaps?
VS VS 

Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-10 Thread Allan Jude
On 2013-10-10 05:22, Vitalij Satanivskij wrote:
 The same situation happened yesterday again :(

 What confuses me while trying to understand where I'm wrong:


 First, some info.

 We have a zfs pool POOL and one more zfs on it, POOL/zfs

 POOL - has only primarycache enabled (ALL)
 POOL/zfs - has both primary and secondary cache for ALL

 POOL has compression=lz4

 POOL/zfs has none


 POOL - has around 9TB of data

 POOL/zfs - has 1TB

 The secondary cache has this configuration:

 cache
   gpt/cache0  ONLINE   0 0 0
   gpt/cache1  ONLINE   0 0 0
   gpt/cache2  ONLINE   0 0 0

 gpt/cache0-2 are Intel SSDSC2BW180A4 SSDs, 180GB each,

 so the full raw size for L2 is 540GB (really 489GB).

 First question - will the data on L2ARC be compressed or not?

 Second, in the stats we see:

 L2 ARC Size: (Adaptive)                 2.08    TiB

 earlier it was 1.1, 1.4, ...

 So: a) how can the cache be bigger than the zfs itself?
     b) in case it's not compressed (the answer to the first question), how can it be bigger than the real SSD size?


 One more comment: if the L2ARC size grows above the physical size, I see these stats

 kstat.zfs.misc.arcstats.l2_cksum_bad: 50907344
 kstat.zfs.misc.arcstats.l2_io_error: 4547377

 and growing.


 System is r255173 with patch from rr255173


 Finally, maybe somebody has ideas about what's really happening...





 Vitalij Satanivskij wrote:
 VS 
 VS One more question -
 VS 
 VS we have two counters -
 VS 
 VS kstat.zfs.misc.arcstats.l2_size: 1256609410560
 VS kstat.zfs.misc.arcstats.l2_asize: 1149007667712
 VS 
 VS can anybody explain how to understand them, i.e. is l2_asize the real used space on l2arc and l2_size the uncompressed size,
 VS 
 VS or maybe something else?
 VS 
 VS 
 VS 
 VS Vitalij Satanivskij wrote:
 VS VS 
 VS VS Data on the pool has a compressratio of around 1.4.
 VS VS 
 VS VS On different servers with the same data type and load, L2 ARC Size: (Adaptive) can be different,
 VS VS 
 VS VS for example 1.04TiB vs 1.45TiB.
 VS VS 
 VS VS But they all have the same problem - it grows over time.
 VS VS 
 VS VS 
 VS VS What's stranger for us -
 VS VS 
 VS VS ARC: 80G Total, 4412M MFU, 5040M MRU, 76M Anon, 78G Header, 2195M Other
 VS VS 
 VS VS 78G header size and abnormal -
 VS VS 
 VS VS kstat.zfs.misc.arcstats.l2_cksum_bad: 210920592
 VS VS kstat.zfs.misc.arcstats.l2_io_error: 7362414
 VS VS 
 VS VS sysctls growing every second.
 VS VS 
 VS VS All parts of the server (hardware parts too) are in a normal state.
 VS VS 
 VS VS After a reboot there are no problems for some period, until the cache size grows to some limit.
 VS VS 
 VS VS 
 VS VS 
 VS VS Mark Felder wrote:
 VS VS MF On Mon, Oct 7, 2013, at 13:09, Dmitriy Makarov wrote:
 VS VS MF  
 VS VS MF  How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with total physical size
 VS VS MF  of L2ARC devices 490GB?
 VS VS MF  
 VS VS MF 
 VS VS MF http://svnweb.freebsd.org/base?view=revision&revision=251478
 VS VS MF 
 VS VS MF L2ARC compression perhaps?
Some background on L2ARC compression for you:

http://wiki.illumos.org/display/illumos/L2ARC+Compression

http://svnweb.freebsd.org/base?view=revision&revision=251478

Are you sure that compression on pool/zfs is off? It would normally
inherit from the parent, so double check with: zfs get compression pool/zfs
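
A recursive query makes the inheritance visible at a glance; for example:

    # SOURCE shows whether compression is set locally, inherited from a
    # parent, or left at the default, for every dataset under POOL.
    zfs get -r -o name,value,source compression POOL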

Is the data on pool/zfs related to the data on the root pool? if
pool/zfs were a clone, and the data is actually used in both places, the
newer 'single copy ARC' feature may come in to play:
https://www.illumos.org/issues/3145




-- 
Allan Jude



Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-10 Thread Vitalij Satanivskij
AJ Some background on L2ARC compression for you:
AJ 
AJ http://wiki.illumos.org/display/illumos/L2ARC+Compression

I've already seen it.


 
AJ http://svnweb.freebsd.org/base?view=revision&revision=251478
AJ 
AJ Are you sure that compression on pool/zfs is off? It would normally
AJ inherit from the parent, so double check with: zfs get compression pool/zfs

Yes, compression is turned off on pool/zfs; it has been rechecked many times.



AJ Is the data on pool/zfs related to the data on the root pool? if
AJ pool/zfs were a clone, and the data is actually used in both places, the
AJ newer 'single copy ARC' feature may come in to play:
AJ https://www.illumos.org/issues/3145

No, both pool and pool/zfs have different types of data; pool/zfs was created as
a new empty zfs (zfs create pool/zfs)

and the data was written to it from another server.


Right now one machine works fine with L2ARC. This machine is without the patch
for correcting ashift on cache devices.

At least 3 days working with zero errors. Other servers with the same config,
similar data and load began to report errors after 2 days of work.


AJ 
AJ 
AJ 
AJ -- 
AJ Allan Jude
AJ 


ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-07 Thread Dmitriy Makarov
Hi all,

On our production system on r255173 we have a problem with abnormally high system
load, caused (we are not sure) by L2ARC placed on a few SSDs, 490 GB total size.

After a fresh boot everything seems to be fine, Load Average less than 5.00.
But after some time (nearly a day or two) Load Average jumps to 10.. 20+ and the system
goes really slow; IO operations on the zfs pool grow from milliseconds to even seconds.

And L2ARC sysctls are pretty disturbing:

[frv:~]$ sysctl -a | grep l2

vfs.zfs.l2arc_write_max: 2500
vfs.zfs.l2arc_write_boost: 5000
vfs.zfs.l2arc_headroom: 8
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_feed_min_ms: 30
vfs.zfs.l2arc_noprefetch: 0
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2c_only_size: 1525206040064
vfs.cache.numfullpathfail2: 4
kstat.zfs.misc.arcstats.evict_l2_cached: 6592742547456
kstat.zfs.misc.arcstats.evict_l2_eligible: 734016778752
kstat.zfs.misc.arcstats.evict_l2_ineligible: 29462561417216
kstat.zfs.misc.arcstats.l2_hits: 576550808
kstat.zfs.misc.arcstats.l2_misses: 128158998
kstat.zfs.misc.arcstats.l2_feeds: 1524059
kstat.zfs.misc.arcstats.l2_rw_clash: 1429740
kstat.zfs.misc.arcstats.l2_read_bytes: 2896069043200
kstat.zfs.misc.arcstats.l2_write_bytes: 2405022640128
kstat.zfs.misc.arcstats.l2_writes_sent: 826642
kstat.zfs.misc.arcstats.l2_writes_done: 826642
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 1059415
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 1640
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 8580680
kstat.zfs.misc.arcstats.l2_abort_lowmem: 2096
kstat.zfs.misc.arcstats.l2_cksum_bad: 212832715
kstat.zfs.misc.arcstats.l2_io_error: 5501886
kstat.zfs.misc.arcstats.l2_size: 1587962307584
kstat.zfs.misc.arcstats.l2_asize: 1425666718720
kstat.zfs.misc.arcstats.l2_hdr_size: 82346948208
kstat.zfs.misc.arcstats.l2_compress_successes: 41707766
kstat.zfs.misc.arcstats.l2_compress_zeros: 0
kstat.zfs.misc.arcstats.l2_compress_failures: 0
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 8847701930
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 21220076
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 27619372107
kstat.zfs.misc.arcstats.l2_write_in_l2: 418007172085
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 29279
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 131001473113
kstat.zfs.misc.arcstats.l2_write_full: 63699
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 1524059
kstat.zfs.misc.arcstats.l2_write_pios: 826642
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 8433038008130560
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 96529899
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 9228464


Here is output from zfs-stats about L2:

[frv:~]$ zfs-stats -L


ZFS Subsystem Report                            Mon Oct  7 20:50:19 2013


L2 ARC Summary: (DEGRADED)
        Passed Headroom:                21.22m
        Tried Lock Failures:            8.85b
        IO In Progress:                 29.32k
        Low Memory Aborts:              2.10k
        Free on Write:                  8.59m
        Writes While Full:              63.71k
        R/W Clashes:                    1.43m
        Bad Checksums:                  213.07m
        IO Errors:                      5.51m
        SPA Mismatch:                   27.62b

L2 ARC Size: (Adaptive)                 1.44    TiB
        Header Size:    5.19%           76.70   GiB

L2 ARC Evicts:
Lock Retries:   1.64k
Upon Reading:   0

L2 ARC Breakdown:   705.25m
Hit Ratio:  81.82%  577.01m
Miss Ratio: 18.18%  128.24m
Feeds:  1.52m

L2 ARC Buffer:
Bytes Scanned:  7.49PiB
Buffer Iterations:  1.52m
List Iterations:96.55m
NULL List Iterations:   9.23m

L2 ARC Writes:
Writes Sent:100.00% 826.96k

--


In /boot/loader.conf (128GB RAM in system):

vm.kmem_size=110G
vfs.zfs.arc_max=100G
vfs.zfs.arc_min=80G
vfs.zfs.vdev.cache.size=16M
vfs.zfs.vdev.cache.max=16384

vfs.zfs.txg.timeout=10
vfs.zfs.write_limit_min=134217728

vfs.zfs.vdev.cache.bshift=14
vfs.zfs.arc_meta_limit=53687091200

vfs.zfs.l2arc_write_max=25165824
vfs.zfs.l2arc_write_boost=50331648
vfs.zfs.l2arc_noprefetch=0

In /etc/sysctl.conf:

vfs.zfs.l2arc_write_max=2500
vfs.zfs.l2arc_write_boost=5000
vfs.zfs.l2arc_noprefetch=0
vfs.zfs.l2arc_headroom=8
vfs.zfs.l2arc_feed_min_ms=30
vfs.zfs.arc_meta_limit=53687091200
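
Note that the sysctl.conf values override the larger loader.conf ones at boot. These are ordinary sysctls, so the feed rate can also be adjusted on a running system while experimenting, e.g.:

    # L2ARC write throttle, in bytes per feed interval; takes effect immediately.
    sysctl vfs.zfs.l2arc_write_max=2500
    sysctl vfs.zfs.l2arc_write_boost=5000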




How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with total physical size of L2ARC devices 490GB?

Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-07 Thread Mark Felder
On Mon, Oct 7, 2013, at 13:09, Dmitriy Makarov wrote:
 
 How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with total physical size
 of L2ARC devices 490GB?
 

http://svnweb.freebsd.org/base?view=revision&revision=251478

L2ARC compression perhaps?


Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-07 Thread Vitalij Satanivskij

Data on the pool has a compressratio of around 1.4.

On different servers with the same data type and load, L2 ARC Size: (Adaptive) can be
different,

for example 1.04TiB vs 1.45TiB.

But they all have the same problem - it grows over time.


What's stranger for us -

ARC: 80G Total, 4412M MFU, 5040M MRU, 76M Anon, 78G Header, 2195M Other

78G header size and abnormal -

kstat.zfs.misc.arcstats.l2_cksum_bad: 210920592
kstat.zfs.misc.arcstats.l2_io_error: 7362414

sysctls growing every second.

All parts of the server (hardware parts too) are in a normal state.

After a reboot there are no problems for some period, until the cache size grows to some limit.



Mark Felder wrote:
MF On Mon, Oct 7, 2013, at 13:09, Dmitriy Makarov wrote:
MF  
MF  How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with total physical size
MF  of L2ARC devices 490GB?
MF  
MF 
MF http://svnweb.freebsd.org/base?view=revision&revision=251478
MF 
MF L2ARC compression perhaps?


Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-07 Thread Vitalij Satanivskij

One more question -

we have two counters -

kstat.zfs.misc.arcstats.l2_size: 1256609410560
kstat.zfs.misc.arcstats.l2_asize: 1149007667712

can anybody explain how to understand them, i.e. is l2_asize the real used space on
l2arc and l2_size the uncompressed size,

or maybe something else?
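
That reading matches the L2ARC compression change referenced earlier in the thread: l2_size is the logical (uncompressed) size of the cached data and l2_asize is the space actually allocated on the cache devices. Dividing the two gives the effective L2ARC compression ratio; with the numbers above:

    # effective ratio = l2_size / l2_asize
    echo 'scale=2; 1256609410560 / 1149007667712' | bc    # prints 1.09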



Vitalij Satanivskij wrote:
VS 
VS Data on the pool has a compressratio of around 1.4.
VS 
VS On different servers with the same data type and load, L2 ARC Size: (Adaptive) can be different,
VS 
VS for example 1.04TiB vs 1.45TiB.
VS 
VS But they all have the same problem - it grows over time.
VS 
VS 
VS What's stranger for us -
VS 
VS ARC: 80G Total, 4412M MFU, 5040M MRU, 76M Anon, 78G Header, 2195M Other
VS 
VS 78G header size and abnormal -
VS 
VS kstat.zfs.misc.arcstats.l2_cksum_bad: 210920592
VS kstat.zfs.misc.arcstats.l2_io_error: 7362414
VS 
VS sysctls growing every second.
VS 
VS All parts of the server (hardware parts too) are in a normal state.
VS 
VS After a reboot there are no problems for some period, until the cache size grows to some limit.
VS 
VS 
VS 
VS Mark Felder wrote:
VS MF On Mon, Oct 7, 2013, at 13:09, Dmitriy Makarov wrote:
VS MF  
VS MF  How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with total physical size
VS MF  of L2ARC devices 490GB?
VS MF  
VS MF 
VS MF http://svnweb.freebsd.org/base?view=revision&revision=251478
VS MF 
VS MF L2ARC compression perhaps?