Re: ZFS secondarycache on SSD problem on r255173

2014-06-06 Thread Matthew D. Fuller
On Fri, Apr 25, 2014 at 08:26:52PM -0500 I heard the voice of
Matthew D. Fuller, and lo! it spake thus:
 On Mon, Mar 03, 2014 at 11:17:05AM +0200 I heard the voice of
 Andriy Gapon, and lo! it spake thus:
  
  I noticed that on some of our systems we were getting a clearly
  abnormal number of l2arc checksum errors accounted in l2_cksum_bad.
  [...]
  I propose the following patch which has been tested and seems to fix
  the problem without introducing any side effects:
 
 I've been running this for 2 weeks now without any cksum_bad's
 showing up (long enough that without it, I'd have expected them),
 and no other obvious side effects.

Another 6 weeks on, still working fine and still seems to fix the
issue.  Any reason not to commit it?


-- 
Matthew Fuller (MF4839)   |  fulle...@over-yonder.net
Systems/Network Administrator |  http://www.over-yonder.net/~fullermd/
   On the Internet, nobody can hear you scream.


Re: ZFS secondarycache on SSD problem on r255173

2014-04-25 Thread Matthew D. Fuller
On Mon, Mar 03, 2014 at 11:17:05AM +0200 I heard the voice of
Andriy Gapon, and lo! it spake thus:
 
 I noticed that on some of our systems we were getting a clearly
 abnormal number of l2arc checksum errors accounted in l2_cksum_bad.
 [...]
 I propose the following patch which has been tested and seems to fix
 the problem without introducing any side effects:

I've been running this for 2 weeks now without any cksum_bad's
showing up (long enough that without it, I'd have expected them), and
no other obvious side effects.

FreeBSD 11.0-CURRENT #0 r264306M

So if there are votes, I vote for making it work without that 'M'  :)


-- 
Matthew Fuller (MF4839)   |  fulle...@over-yonder.net
Systems/Network Administrator |  http://www.over-yonder.net/~fullermd/
   On the Internet, nobody can hear you scream.


Re: ZFS secondarycache on SSD problem on r255173

2014-03-11 Thread Matthew D. Fuller
On Mon, Mar 03, 2014 at 11:17:05AM +0200 I heard the voice of
Andriy Gapon, and lo! it spake thus:
 
 I noticed that on some of our systems we were getting a clearly
 abnormal number of l2arc checksum errors accounted in l2_cksum_bad.
 The hardware appeared to be in good health.

FWIW, I have a system here where I see similar things; after
sufficient uptime it shows an impressive number and proportion of
checksum errors (50% or more of the hits, eventually), since at least
sometime last fall.  No indication anywhere else of problems (CAM,
SMART, zpool status).

Haven't tried the patches.  It would take most of a month of seeing
nothing to have much confidence they did anything anyway; it's been 2
weeks now with only ~500 errors, so it takes a long time with this
system/workload to ramp up.  But it'd be a nice fix to have, so mark
me up as a user vote for landing it if the code makes sense to the
code-sense people.


-- 
Matthew Fuller (MF4839)   |  fulle...@over-yonder.net
Systems/Network Administrator |  http://www.over-yonder.net/~fullermd/
   On the Internet, nobody can hear you scream.


Re: ZFS secondarycache on SSD problem on r255173

2014-03-03 Thread Andriy Gapon
on 18/10/2013 17:57 Steven Hartland said the following:
 I think we may well need the following patch to set the minblock
 size based on the vdev ashift and not SPA_MINBLOCKSIZE.
 
 svn diff -x -p sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
 Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
 ===================================================================
 --- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        (revision 256554)
 +++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        (working copy)
 @@ -5147,7 +5147,7 @@ l2arc_compress_buf(l2arc_buf_hdr_t *l2hdr)
         len = l2hdr->b_asize;
         cdata = zio_data_buf_alloc(len);
         csize = zio_compress_data(ZIO_COMPRESS_LZ4, l2hdr->b_tmp_cdata,
 -           cdata, l2hdr->b_asize, (size_t)SPA_MINBLOCKSIZE);
 +           cdata, l2hdr->b_asize, (size_t)(1ULL << l2hdr->b_dev->l2ad_vdev->vdev_ashift));
 
         if (csize == 0) {
                 /* zero block, indicate that there's nothing to write */


This is a rather old thread and change, but I think that I have identified
another problem with 4KB cache devices.

I noticed that on some of our systems we were getting a clearly abnormal number
of l2arc checksum errors accounted in l2_cksum_bad.  The hardware appeared to be
in good health.  Using DTrace I noticed that the data seemed to be overwritten
with other data.  After more DTrace analysis I observed that sometimes
l2arc_write_buffers() would advance l2ad_hand by more than target_sz.
This meant that l2arc_write_buffers() would write beyond a region cleared by
l2arc_evict() and thus overwrite data belonging to non-evicted buffers.  Havoc
ensues.

The cache devices in question are all SSDs with logical sector size of 4KB.
I am not sure about other ZFS platforms, but on FreeBSD this fact is detected
and ashift of 12 is used for the cache vdevs.

Looking at the l2arc_write_buffers() code you can see that it properly accounts for
ashift when actually writing buffers and advancing l2ad_hand:
/*
 * Keep the clock hand suitably device-aligned.
 */
        buf_p_sz = vdev_psize_to_asize(dev->l2ad_vdev, buf_sz);
        write_psize += buf_p_sz;
        dev->l2ad_hand += buf_p_sz;

But the same is not done when selecting buffers to be written and checking that
target_sz is not exceeded.
So, if the ARC contains a lot of buffers smaller than 4K, the aligned
on-disk size of the L2ARC buffers can be considerably larger than their
unaligned size.
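
To make the overshoot concrete, here is a minimal userland sketch of the
arithmetic (an illustration only, not ZFS code; the helper name and the
ashift=12 / 512-byte-buffer numbers are assumed for the example).  The first
loop charges target_sz the unaligned buffer size while the hand advances by
the aligned size; the second charges the aligned size, which is the idea
behind the patch below:

#include <stdint.h>
#include <stdio.h>

#define ASHIFT  12                      /* 4KB sectors, as on these SSDs */

/* Round a buffer size up to the vdev allocation size, which is what
 * vdev_psize_to_asize() effectively does for an ashift=12 cache vdev. */
static uint64_t
psize_to_asize(uint64_t psize)
{
        uint64_t align = 1ULL << ASHIFT;

        return ((psize + align - 1) & ~(align - 1));
}

int
main(void)
{
        const uint64_t target_sz = 8 << 20;     /* 8MB write target */
        const uint64_t buf_sz = 512;            /* many small ARC buffers */
        uint64_t picked, hand;

        /* Unaligned accounting: stop when the sum of raw buffer sizes
         * reaches target_sz, while the device hand advances aligned. */
        for (picked = hand = 0; picked < target_sz; picked += buf_sz)
                hand += psize_to_asize(buf_sz);
        printf("unaligned accounting: hand advanced %ju of %ju allowed\n",
            (uintmax_t)hand, (uintmax_t)target_sz); /* 8x overshoot */

        /* Aligned accounting: charge the on-disk size against target_sz. */
        for (picked = hand = 0;
            hand + psize_to_asize(buf_sz) <= target_sz; picked += buf_sz)
                hand += psize_to_asize(buf_sz);
        printf("aligned accounting:   hand advanced %ju of %ju allowed\n",
            (uintmax_t)hand, (uintmax_t)target_sz);
        return (0);
}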

I propose the following patch which has been tested and seems to fix the problem
without introducing any side effects:
https://github.com/avg-I/freebsd/compare/review;l2arc-write-target-size.diff
https://github.com/avg-I/freebsd/compare/review;l2arc-write-target-size

-- 
Andriy Gapon


Re: ZFS secondarycache on SSD problem on r255173

2014-03-03 Thread Steven Hartland
- Original Message - 
From: Andriy Gapon a...@freebsd.org




[...]

I propose the following patch which has been tested and seems to fix the problem
without introducing any side effects:
https://github.com/avg-I/freebsd/compare/review;l2arc-write-target-size.diff
https://github.com/avg-I/freebsd/compare/review;l2arc-write-target-size


Looks good to me.

   Regards
   Steve





Re: ZFS secondarycache on SSD problem on r255173

2013-10-22 Thread Vitalij Satanivskij

Ok, up to now there are no errors on the l2arc.

L2 ARC Summary: (HEALTHY)
        Passed Headroom:                1.99m
        Tried Lock Failures:            144.53m
        IO In Progress:                 130.15k
        Low Memory Aborts:              7
        Free on Write:                  335.56k
        Writes While Full:              30.31k
        R/W Clashes:                    115.31k
        Bad Checksums:                  0
        IO Errors:                      0
        SPA Mismatch:                   153.15m

L2 ARC Size: (Adaptive)                 433.75  GiB
        Header Size:                    0.49%   2.12 GiB


I will test for a longer time, but it looks like the problem is gone.


Vitalij Satanivskij wrote:
VS Steven Hartland wrote:
VS SH So previously you only started seeing l2 errors after there was
VS SH a significant amount of data in l2arc? That's interesting in itself
VS SH if that's the case.
VS 
VS Yes, something around 200+GB.
VS [...]

Re: ZFS secondarycache on SSD problem on r255173

2013-10-21 Thread Steven Hartland

How's things looking, Vitalij?

- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net




Ok. The system has just been rebooted with your patch.

Trim is enabled again.

Will wait some time until the size of the used cache grows.


Steven Hartland wrote:
SH Looking at the l2arc compression code I believe that metadata is always
SH compressed with lz4, even if compression is off on all datasets.
SH 
SH This is backed up by what I'm seeing on my system here as it shows a
SH non-zero l2_compress_successes value even though I'm not using
SH compression at all.
SH 
SH I think we may well need the following patch to set the minblock
SH size based on the vdev ashift and not SPA_MINBLOCKSIZE.
SH 
SH [...]
SH 
SH Could you try this patch on your system Vitalij and see if it has any effect
SH on the number of l2_cksum_bad / l2_io_error?







Re: ZFS secondarycache on SSD problem on r255173

2013-10-21 Thread Vitalij Satanivskij

Just now I cannot say, as to trigger the problem we need at least 200+GB on the
l2arc, which usually grows in one production day.

But for some reason the server was rebooted this morning, so the cache was
flushed and there is now only 100GB.

Need to wait some more time.

At least for now there are no errors on l2.



Steven Hartland wrote:
SH How's things looking, Vitalij?
SH [...]


Re: ZFS secondarycache on SSD problem on r255173

2013-10-21 Thread Steven Hartland

So previously you only started seeing l2 errors after there was
a significant amount of data in l2arc? That's interesting in itself
if that's the case.

I wonder if it's the type of data, or something similar. Do you
run compression on any of your volumes?
zfs get compression

   Regards
   Steve
- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net





[...]







Re: ZFS secondarycache on SSD problem on r255173

2013-10-21 Thread Vitalij Satanivskij
Steven Hartland wrote:
SH So previously you only started seeing l2 errors after there was
SH a significant amount of data in l2arc? That's interesting in itself
SH if that's the case.

Yes, something around 200+GB.
 
SH I wonder if its the type of data, or something similar. Do you
SH run compression on any of your volumes?
SH zfs get compression

Just now testing runs on the following configuration:

the first zfs is the top-level pool, called disk1, which has lz4 compression
enabled and secondarycache = metadata;

the next zfs is disk1/data with compression=off and secondarycache = all.

The error was seen on a configuration like that, and also on a configuration
where secondarycache = none was set for disk1 (disk1/data still fully cached).




SH Regards
SH Steve
SH - Original Message - 
SH From: Vitalij Satanivskij sa...@ukr.net
SH [...]


Re: ZFS secondarycache on SSD problem on r255173

2013-10-19 Thread Vitalij Satanivskij
Ok. The system has just been rebooted with your patch.

Trim is enabled again.

Will wait some time until the size of the used cache grows.


Steven Hartland wrote:
SH Looking at the l2arc compression code I believe that metadata is always
SH compressed with lz4, even if compression is off on all datasets.
SH 
SH I think we may well need the following patch to set the minblock
SH size based on the vdev ashift and not SPA_MINBLOCKSIZE.
SH 
SH [...]
SH 
SH Could you try this patch on your system Vitalij and see if it has any effect
SH on the number of l2_cksum_bad / l2_io_error?

Re: ZFS secondarycache on SSD problem on r255173

2013-10-18 Thread Vitalij Satanivskij
Hello.

Yesterday the system was rebooted with vfs.zfs.trim.enabled=0.

System version 10.0-BETA1 FreeBSD 10.0-BETA1 #6 r256669, without any changes
in the code.

Uptime: 10:51  up 16:41

sysctl vfs.zfs.trim.enabled
vfs.zfs.trim.enabled: 0

Around 2 hours ago the error counters
kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
kstat.zfs.misc.arcstats.l2_io_error: 38254

began growing from zero values.

After removing the cache
2013-10-18.10:37:10 zpool remove disk1 gpt/cache0 gpt/cache1 gpt/cache2

and attaching it again

2013-10-18.10:38:28 zpool add disk1 cache gpt/cache0 gpt/cache1 gpt/cache2

the counters stopped growing (of course they are not zeroed).

Before the cache removal kstat.zfs.misc.arcstats.l2_asize was around 280GB.

The hw size of the l2 cache is 3x164G:

=>        34  351651821  ada3  GPT  (168G)
          34          6        - free -  (3.0K)
          40    8388608     1  zil2  (4.0G)
     8388648  343263200     2  cache2  (164G)
   351651848          7        - free -  (3.5K)


Any hypotheses on what else we can test/try, etc.?



Steven Hartland wrote:
SH Correct.
SH - Original Message - 
SH From: Vitalij Satanivskij sa...@ukr.net
SH 
SH 
SH  Just to be sure I understand you clearly, I need to test the following 
configuration:
SH  
SH  1) System with the ashift patch, i.e. just the latest stable/10 revision 
SH  2) vfs.zfs.trim.enabled=0 in /boot/loader.conf 
SH  
SH  So really the only difference from the default system configuration is the 
disabled TRIM functionality?
SH  
SH  
SH  
SH  Steven Hartland wrote:
SH  SH Still worth testing with the problem version installed but
SH  SH with trim disabled to see if that clears the issues, if
SH  SH nothing else it will confirm / deny if trim is involved.


Re: ZFS secondarycache on SSD problem on r255173

2013-10-18 Thread Steven Hartland

Hmm so that rules out a TRIM related issue. I wonder if the
increase in ashift has triggered a problem in compression.

What are all the values reported by:
sysctl -a kstat.zfs.misc.arcstats

   Regards
   Steve

- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net

[...]








Re: ZFS secondarycache on SSD problem on r255173

2013-10-18 Thread Vitalij Satanivskij

Right now the stats are not quite current because of another test.

The test is simply: all gpart information was destroyed on the SSDs and

they are used as raw cache devices. Just
2013-10-18.11:30:49 zpool add disk1 cache /dev/ada1 /dev/ada2 /dev/ada3

So the sizes, at least l2_size and l2_asize, are not current.

But here they are:

kstat.zfs.misc.arcstats.hits: 5178174063
kstat.zfs.misc.arcstats.misses: 57690806
kstat.zfs.misc.arcstats.demand_data_hits: 313995744
kstat.zfs.misc.arcstats.demand_data_misses: 37414740
kstat.zfs.misc.arcstats.demand_metadata_hits: 4719242892
kstat.zfs.misc.arcstats.demand_metadata_misses: 9266394
kstat.zfs.misc.arcstats.prefetch_data_hits: 1182495
kstat.zfs.misc.arcstats.prefetch_data_misses: 9951733
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 143752935
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1057939
kstat.zfs.misc.arcstats.mru_hits: 118609738
kstat.zfs.misc.arcstats.mru_ghost_hits: 1895486
kstat.zfs.misc.arcstats.mfu_hits: 4914673425
kstat.zfs.misc.arcstats.mfu_ghost_hits: 14537497
kstat.zfs.misc.arcstats.allocated: 103796455
kstat.zfs.misc.arcstats.deleted: 40168100
kstat.zfs.misc.arcstats.stolen: 20832742
kstat.zfs.misc.arcstats.recycle_miss: 15663428
kstat.zfs.misc.arcstats.mutex_miss: 1456781
kstat.zfs.misc.arcstats.evict_skip: 25960184
kstat.zfs.misc.arcstats.evict_l2_cached: 891379153920
kstat.zfs.misc.arcstats.evict_l2_eligible: 50578438144
kstat.zfs.misc.arcstats.evict_l2_ineligible: 956055729664
kstat.zfs.misc.arcstats.hash_elements: 8693451
kstat.zfs.misc.arcstats.hash_elements_max: 14369414
kstat.zfs.misc.arcstats.hash_collisions: 90967764
kstat.zfs.misc.arcstats.hash_chains: 1891463
kstat.zfs.misc.arcstats.hash_chain_max: 24
kstat.zfs.misc.arcstats.p: 73170954752
kstat.zfs.misc.arcstats.c: 85899345920
kstat.zfs.misc.arcstats.c_min: 42949672960
kstat.zfs.misc.arcstats.c_max: 85899345920
kstat.zfs.misc.arcstats.size: 85899263104
kstat.zfs.misc.arcstats.hdr_size: 1425948696
kstat.zfs.misc.arcstats.data_size: 77769994240
kstat.zfs.misc.arcstats.other_size: 6056233632
kstat.zfs.misc.arcstats.l2_hits: 21725934
kstat.zfs.misc.arcstats.l2_misses: 35876251
kstat.zfs.misc.arcstats.l2_feeds: 130197
kstat.zfs.misc.arcstats.l2_rw_clash: 110181
kstat.zfs.misc.arcstats.l2_read_bytes: 391282009600
kstat.zfs.misc.arcstats.l2_write_bytes: 1098703347712
kstat.zfs.misc.arcstats.l2_writes_sent: 130037
kstat.zfs.misc.arcstats.l2_writes_done: 130037
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 375921
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 331
kstat.zfs.misc.arcstats.l2_evict_reading: 43
kstat.zfs.misc.arcstats.l2_free_on_write: 255730
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
kstat.zfs.misc.arcstats.l2_io_error: 38254
kstat.zfs.misc.arcstats.l2_size: 136696884736
kstat.zfs.misc.arcstats.l2_asize: 131427690496
kstat.zfs.misc.arcstats.l2_hdr_size: 742951208
kstat.zfs.misc.arcstats.l2_compress_successes: 5565311
kstat.zfs.misc.arcstats.l2_compress_zeros: 0
kstat.zfs.misc.arcstats.l2_compress_failures: 0
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 325157131
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 4897854
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 115704249
kstat.zfs.misc.arcstats.l2_write_in_l2: 15114214372
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 63417
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 3291593934
kstat.zfs.misc.arcstats.l2_write_full: 47672
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 130197
kstat.zfs.misc.arcstats.l2_write_pios: 130037
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 369077156457472
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 8015080
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 79825
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.duplicate_buffers: 0
kstat.zfs.misc.arcstats.duplicate_buffers_size: 0
kstat.zfs.misc.arcstats.duplicate_reads: 0


The values of

kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
kstat.zfs.misc.arcstats.l2_io_error: 38254

are not growing since the last cache reconfiguration; we'll just wait some time
to see - maybe the problem disappears :)
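
While waiting, a small poller makes any counter growth obvious.  A sketch
using sysctlbyname(3) (my example, not something from the thread; the
60-second interval is arbitrary):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Read one of the kstat counters discussed in this thread. */
static uint64_t
read_kstat(const char *name)
{
        uint64_t v = 0;
        size_t len = sizeof(v);

        if (sysctlbyname(name, &v, &len, NULL, 0) == -1)
                perror(name);
        return (v);
}

int
main(void)
{
        uint64_t cksum, io, pcksum, pio;

        pcksum = read_kstat("kstat.zfs.misc.arcstats.l2_cksum_bad");
        pio = read_kstat("kstat.zfs.misc.arcstats.l2_io_error");
        for (;;) {
                sleep(60);
                cksum = read_kstat("kstat.zfs.misc.arcstats.l2_cksum_bad");
                io = read_kstat("kstat.zfs.misc.arcstats.l2_io_error");
                if (cksum != pcksum || io != pio)
                        printf("l2_cksum_bad +%ju, l2_io_error +%ju\n",
                            (uintmax_t)(cksum - pcksum),
                            (uintmax_t)(io - pio));
                pcksum = cksum;
                pio = io;
        }
}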






Steven Hartland wrote:
SH Hmm so that rules out a TRIM related issue. I wonder if the
SH increase in ashift has triggered a problem in compression.
SH 
SH What are all the values reported by:
SH sysctl -a kstat.zfs.misc.arcstats
SH [...]

Re: ZFS secondarycache on SSD problem on r255173

2013-10-18 Thread Steven Hartland

Looking at the l2arc compression code I believe that metadata is always
compressed with lz4, even if compression is off on all datasets.

This is backed up by what I'm seeing on my system here as it shows a
non-zero l2_compress_successes value even though I'm not using
compression at all.

I think we may well need the following patch to set the minblock
size based on the vdev ashift and not SPA_MINBLOCKSIZE.

svn diff -x -p sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        (revision 256554)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        (working copy)
@@ -5147,7 +5147,7 @@ l2arc_compress_buf(l2arc_buf_hdr_t *l2hdr)
        len = l2hdr->b_asize;
        cdata = zio_data_buf_alloc(len);
        csize = zio_compress_data(ZIO_COMPRESS_LZ4, l2hdr->b_tmp_cdata,
-           cdata, l2hdr->b_asize, (size_t)SPA_MINBLOCKSIZE);
+           cdata, l2hdr->b_asize, (size_t)(1ULL << l2hdr->b_dev->l2ad_vdev->vdev_ashift));

        if (csize == 0) {
                /* zero block, indicate that there's nothing to write */

Could you try this patch on your system Vitalij and see if it has any effect
on the number of l2_cksum_bad / l2_io_error?

   Regards
   Steve
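
For a concrete feel for what the minblocksize argument in the patch above
changes, here is a sketch of the acceptance threshold as I read the
zio_compress_data() of that era (a reconstruction from memory, so treat the
exact 12.5% rule as an assumption): a compressed result is kept only if it
fits under s_len minus one eighth, aligned down to minblocksize.  With
minblocksize = 512 on an ashift=12 device, a buffer can therefore be stored
"compressed" without freeing a single 4KB sector on disk:

#include <stddef.h>
#include <stdio.h>

/* Align x down to a power-of-two boundary a. */
#define P2ALIGN(x, a)   ((x) & ~((a) - 1))

/* Largest compressed size still counted as a win (assumed rule). */
static size_t
accept_threshold(size_t s_len, size_t minblocksize)
{
        return (P2ALIGN(s_len - (s_len >> 3), minblocksize));
}

int
main(void)
{
        size_t s_len = 8192;    /* two 4KB sectors uncompressed */

        /* 7168: accepts results that still occupy two 4KB sectors. */
        printf("minblocksize 512:  keep csize <= %zu\n",
            accept_threshold(s_len, 512));
        /* 4096: only accepts results that actually free a sector. */
        printf("minblocksize 4096: keep csize <= %zu\n",
            accept_threshold(s_len, 4096));
        return (0);
}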
- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net

To: Steven Hartland kill...@multiplay.co.uk
Cc: Vitalij Satanivskij sa...@ukr.net; Dmitriy Makarov suppor...@ukr.net; Justin T. Gibbs gi...@freebsd.org; Borja 
Marcos bor...@sarenet.es; freebsd-current@freebsd.org

Sent: Friday, October 18, 2013 3:45 PM
Subject: Re: ZFS secondarycache on SSD problem on r255173




[...]

Re: ZFS secondarycache on SSD problem on r255173

2013-10-17 Thread Vitalij Satanivskij
Hello.

The problem description is in

http://lists.freebsd.org/pipermail/freebsd-current/2013-October/045088.html

As we found later, the problem first begins with the error counters in
arcstats, then the size of the l2 grows abnormally.

After rolling back the patch everything is ok.


Justin T. Gibbs wrote:
JTG You'll have to be more specific.  I don't have that email or know what 
list on which to search.
JTG 
JTG Thanks,
JTG Justin
JTG 
JTG [...]

Re: ZFS secondarycache on SSD problem on r255173

2013-10-17 Thread Vitalij Satanivskij
Hello.

The SSDs are Intel SSD 530 series (INTEL SSDSC2BW180A4 DC12).

The controller is the onboard Intel SATA controller; the motherboard is a
Supermicro X9SRL-F, so it's the Intel C602 chipset.

All cache SSDs are connected to SATA 2 ports.

The system has an LSI MPS controller (SAS2308) with firmware version
16.00.00.00, but only HDDs (36 1TB WD RE4 drives) are
connected to it.


Steven Hartland wrote:
SH Ohh stupid question but what hardware are you running this on,
SH specifically what SSDs and what controller, and if relevant,
SH what controller firmware version?
SH 
SH I wonder if you might have bad HW / FW, such as older LSI
SH mps firmware, which is known to cause corruption with
SH some delete methods.
SH 
SH Without the ashift fixes, you'll be sending short requests
SH to the disk which will then ignore them, so I can see a
SH potential correlation there.
SH 
SH You could rule this out by disabling ZFS TRIM with
SH vfs.zfs.trim.enabled=0 in /boot/loader.conf
SH 
SH Regards
SH Steve


Re: ZFS secondarycache on SSD problem on r255173

2013-10-17 Thread Steven Hartland

Still worth testing with the problem version installed but
with trim disabled to see if that clears the issues, if
nothing else it will confirm / deny if trim is involved.

   Regards
   Steve

- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net

To: Steven Hartland kill...@multiplay.co.uk
Cc: Justin T. Gibbs gi...@freebsd.org; Vitalij Satanivskij sa...@ukr.net; freebsd-current@freebsd.org; Borja Marcos 
bor...@sarenet.es; Dmitriy Makarov suppor...@ukr.net

Sent: Thursday, October 17, 2013 7:12 AM
Subject: Re: ZFS secondarycache on SSD problem on r255173



[...]








Re: ZFS secondarycache on SSD problem on r255173

2013-10-17 Thread Vitalij Satanivskij
Just to be sure I understand you clearly, I need to test the following
configuration:

1) System with the ashift patch, i.e. just the latest stable/10 revision 
2) vfs.zfs.trim.enabled=0 in /boot/loader.conf 

So really the only difference from the default system configuration is the
disabled TRIM functionality?



Steven Hartland wrote:
SH Still worth testing with the problem version installed but
SH with trim disabled to see if that clears the issues, if
SH nothing else it will confirm / deny if trim is involved.
SH [...]


Re: ZFS secondarycache on SSD problem on r255173

2013-10-17 Thread Steven Hartland

Correct.
- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net




[...]







Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Vitalij Satanivskij
Hello.

The patch broke the cache functionality.

Look at Dmitriy's mail from Mon, 07 Oct 2013 21:09:06 +0300

with the subject ZFS L2ARC - incorrect size and abnormal system load on r255173.

As the patch is already in head and BETA, that's not good.

Yesterday we updated one machine up to beta1 and forgot about the patch. So 12 
hours and the cache is broken... :((

 

Dmitriy Makarov wrote:
DM The attached patch by Steven Hartland fixes the issue for me too. Thank you! 
DM 
DM 
DM --- Original message --- 
DM From: Steven Hartland  kill...@multiplay.co.uk  
DM Date: 18 September 2013, 01:53:10 
DM 
DM - Original Message - 
DM From: Justin T. Gibbs  
DM 
DM --- 
DM Dmitriy Makarov 

Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Steven Hartland

Have you confirmed the ashift changes are the actual cause of this
by backing out just those changes and retesting on the same hardware?

Also worth checking your disks' SMART values to confirm there are no
visible signs of HW errors.

   Regards
   Steve

- Original Message - 
From: Vitalij Satanivskij sa...@ukr.net

To: Dmitriy Makarov suppor...@ukr.net
Cc: Steven Hartland kill...@multiplay.co.uk; Justin T. Gibbs gi...@freebsd.org; Borja Marcos bor...@sarenet.es; 
freebsd-current@freebsd.org

Sent: Wednesday, October 16, 2013 9:01 AM
Subject: Re: ZFS secondarycache on SSD problem on r255173



[...]







Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Vitalij Satanivskij
Yes.

We have 15 servers, all of which had the problem while running with the ashift
patch, so we rolled back the patch (on r255173), and all of them then worked
for a week without the problem. Yesterday one of the servers was updated to
stable/10 (BETA1), which includes the patch, and after around 12 hours of
operation the L2ARC began to get errors counted in

kstat.zfs.misc.arcstats.l2_cksum_bad
kstat.zfs.misc.arcstats.l2_io_error 
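
(A quick way to watch those counters, using the standard sysctl interface:

# sysctl kstat.zfs.misc.arcstats.l2_cksum_bad \
         kstat.zfs.misc.arcstats.l2_io_error

Both should stay at 0 on a healthy cache.)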


For now the patch is disabled in our production.

Please note we have a very heavy load on the ZFS pool, so the 90GB ARC and
3x180GB L2ARC take very heavy hits.

The SSDs used for cache are Intel SSD 530 series; SMART on all devices is in
a normal state, with no bad values.


Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Steven Hartland

I'm not clear on what you rolled back there, as r255173 has nothing to do
with this. Could you clarify?

Any errors recorded in /var/log/messages?

Could you add code to record any non-zero value of zio->io_error in
l2arc_read_done, as this may give some indication of the underlying
issue.
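
(A sketch of one way to catch those without a rebuild, assuming DTrace fbt
probes and CTF type info for the zfs module are available:

# dtrace -n 'fbt:zfs:l2arc_read_done:entry
    /((struct zio *)arg0)->io_error != 0/
    { printf("l2arc io_error = %d", ((struct zio *)arg0)->io_error); }'

This only reports the error code, though; a panic plus dump would give the
full zio state.)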

Additionally, you could always put a panic in that code path too and then
create a dump so the details can be fully examined.

In terms of the slowness, that's going to be a side effect of the cache
failures.

Oh, could you also confirm that the issue doesn't exist if you:
1. Exclude r255753
2. Set vfs.zfs.max_auto_ashift=9 (see the example below)
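
(For reference, that second one is a runtime sysctl, so it can be changed
without a reboot before re-adding the cache devices:

# sysctl vfs.zfs.max_auto_ashift=9

New vdevs added after that should be created with ashift 9.)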

   Regards
   Steve

Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Vitalij Satanivskij
Steven Hartland wrote:
SH I'm not clear on what you rolled back there, as r255173 has nothing to do
SH with this. Could you clarify?

r255173 with your patch from the email dated Tue, 17 Sep 2013 23:53:12 +0100,
with the subject "Re: ZFS secondarycache on SSD problem on r255173".

The errors we get are counted in arcstats, not logged in /var/log/messages;
they were described some time ago in mails by me and Dmitriy Makarov with the
subject "ZFS L2ARC - incorrect size and abnormal system load on r255173".


On r255173 without the patch, and with vfs.zfs.max_auto_ashift=9, adding 2
SSDs to the pool as caches gives:

  cache
    gpt/cache1    ONLINE   0 0 0  block size: 512B configured, 4096B native
    gpt/cache2    ONLINE   0 0 0  block size: 512B configured, 4096B native

We have seen the same message with the default vfs.zfs.max_auto_ashift.

We will wait some time to see how it works.


Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Justin T. Gibbs
You'll have to be more specific.  I don't have that email, and I don't know
which list to search.

Thanks,
Justin


Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Justin T. Gibbs
I took a quick look at arc.c and see that the trim_map_free() calls don't take
ashift into account.  I don't know if that has anything to do with your
problem, though.  I would expect this would just make the trim less efficient,
but I need to dig further.
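
(If it helps, TRIM activity can be watched from userland, assuming this build
carries the zio_trim kstats:

# sysctl kstat.zfs.misc.zio_trim

which should show counts of successful, failed and unsupported TRIM requests.)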

--
Justin


Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Steven Hartland
- Original Message - 
From: Justin T. Gibbs gi...@freebsd.org




I took a quick look at arc.c and see that the trim_map_free()
calls don't take ashift into account.  I don't know if that
has anything to do with your problem, though.  I would expect
this would just make the trim less efficient, but I need to
dig further.


This is actually done by TRIM_ZIO_END in trim_map_free.

   Regards
   Steve




Re: ZFS secondarycache on SSD problem on r255173

2013-10-16 Thread Steven Hartland

Ohh, stupid question: what hardware are you running this on,
specifically what SSDs and what controller, and if relevant,
what controller firmware version?

I wonder if you might have bad HW / FW, such as older LSI
mps firmware, which is known to cause corruption with
some delete methods.

Without the ashift fixes, you'll be sending short requests
to the disk, which will then ignore them, so I can see a
potential correlation there.

You could rule this out by disabling ZFS TRIM with
vfs.zfs.trim.enabled=0 in /boot/loader.conf
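For example:

# echo 'vfs.zfs.trim.enabled="0"' >> /boot/loader.conf

It's a loader tunable, so a reboot is needed, after which the setting can be
confirmed with:

# sysctl vfs.zfs.trim.enabled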

   Regards
   Steve




Re: ZFS secondarycache on SSD problem on r255173

2013-09-20 Thread Justin T. Gibbs
On Sep 17, 2013, at 4:53 PM, Steven Hartland kill...@multiplay.co.uk wrote:

 - Original Message - From: Justin T. Gibbs gi...@freebsd.org
 
 
 Sorry for being slow to chime in on this thread.  I live in Boulder, CO and 
 we've had a bit of rain. :-)
 
 Hope all is well on your side, everyone safe and sound, if maybe a little
 wetter than usual.

I have been very fortunate.

 As Steven pointed out, the warning is benign, but does show that the code I 
 committed to
 -current is not optimizing the allocation size for L2ARC devices.  The fix 
 for this is to find
 the best place during pool creation/load/import to call 
 vdev_ashift_optimize() on L2ARC
 devices.  I will look into this tomorrow, but feel free to suggest a good 
 spot if you look at it
 before I can.
 
 The attached patch fixes this for me; not sure if it's the ideal place for
 it, and consideration may be needed in combination with persistent l2arc.

Persistent l2arc will require storing and checking the ashift recorded in the
label so bugs and/or quirk table changes don't confuse the system.  But for
now, with no persistent l2arc support in the tree, I think your placement is
fine.  I've submitted a request to RE so I can put your fix into head.

--
Justin


Re: ZFS secondarycache on SSD problem on r255173

2013-09-17 Thread Steven Hartland
- Original Message - 
From: Justin T. Gibbs gi...@freebsd.org




Sorry for being slow to chime in on this thread.  I live in Boulder, CO and 
we've had a bit of rain. :-)


Hope all is well on your side, everyone safe and sound, if maybe a little
wetter than usual.


As Steven pointed out, the warning is benign, but does show that the code I 
committed to
-current is not optimizing the allocation size for L2ARC devices.  The fix for 
this is to find
the best place during pool creation/load/import to call vdev_ashift_optimize() 
on L2ARC
devices.  I will look into this tomorrow, but feel free to suggest a good spot 
if you look at it
before I can.


The attached patch fixes this for me; not sure if it's the ideal place for it,
and consideration may be needed in combination with persistent l2arc.

   Regards
   Steve



zfs-l2arc-ashift.patch
Description: Binary data

Re: ZFS secondarycache on SSD problem on r255173

2013-09-16 Thread Borja Marcos

On Sep 13, 2013, at 2:18 PM, Steven Hartland wrote:

 This is a recent bit of code by Justin cc'ed, so he's likely the best person 
 to
 investigate this one.

Hmm. There is still a lot of confusion surrounding all this, and it's a time
bomb waiting to explode.

A friend had serious problems on 9.2 with the gnop trick used to force a 4 KB
block size. After a reboot, ZFS would insist on trying to find the .nop device
for the ZIL which, of course, no longer existed. In his case, for some reason,
ZFS didn't identify the labelled gpt/name or gptid/uuid devices as valid, and
the pool wouldn't attach. Curiously, it did identify the L2ARC without
problems.

The cure was to disable GPT labels using loader.conf (I don't remember if he 
disabled kern.geom.label.gptid.enable, kern.geom.label.gpt.enable or both) in
order to force it to use the classical daXpY nomenclature.
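
(In /boot/loader.conf that would be one or both of:

kern.geom.label.gptid.enable="0"
kern.geom.label.gpt.enable="0"

depending on which label scheme you want to suppress.)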

As I said, this sounds like a time bomb in the works. There seems to be some
confusion in ZFS between the different naming schemes you can use for a disk
partition right now (daXpY, gpt label, or gptid).





Borja.



Re: ZFS secondarycache on SSD problem on r255173

2013-09-16 Thread Steven Hartland

Can't say I've ever had an issue with gnop, but I haven't used it for
some time.

I did have a quick look over the weekend at your issue, and it looks
to me like the warning for the cache is a false positive, as the vdev
for the cache doesn't report an ashift in zdb, so it could well be
falling back to a default value.

I couldn't reproduce the issue for the log here; it just seems to work
for me. Can you confirm what ashift is reported for your devices
using: zdb <pool>
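
For example, assuming the pool is called disk1 as earlier in the thread:

# zdb disk1 | grep ashift

should print the ashift recorded for each vdev in the cached config.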

   Regards
   Steve


Re: ZFS secondarycache on SSD problem on r255173

2013-09-16 Thread Justin T. Gibbs
Sorry for being slow to chime in on this thread.  I live in Boulder, CO and 
we've had a bit of rain. :-)

As Steven pointed out, the warning is benign, but does show that the code I 
committed to -current is not optimizing the allocation size for L2ARC devices.  
The fix for this is to find the best place during pool creation/load/import to 
call vdev_ashift_optimize() on L2ARC devices.  I will look into this tomorrow, 
but feel free to suggest a good spot if you look at it before I can.

--
Justin

On Sep 16, 2013, at 6:43 AM, Steven Hartland kill...@multiplay.co.uk wrote:

 That's correct: the new code knows what the ashift of the underlying disk
 should be, so if it detects a vdev with a lower ashift you get this warning.
 
 I suspect the issue is that cache devices don't have an ashift, so when the
 check is done it gets a default value and hence the warning.
 
 I'd have to dig some more to see how ashift in cache devices is or isn't
 implemented.
 
   Regards
   Steve
 
 - Original Message - From: Dmitriy Makarov suppor...@ukr.net
 To: Steven Hartland kill...@multiplay.co.uk
 Cc: Borja Marcos bor...@sarenet.es; freebsd-current@freebsd.org; 
 Justin T. Gibbs gi...@freebsd.org
 Sent: Monday, September 16, 2013 1:29 PM
 Subject: Re[3]: ZFS secondarycache on SSD problem on r255173
 
 
 And I have to say that the ashift of the main pool doesn't matter.
 I've tried creating the pool with ashift 9 (the default value) and with
 ashift 12 (creating gnops over the gpart devices, exporting the pool,
 destroying the gnops, and importing the pool). The problem with the cache
 device is the same.
 
 There is no problem with ZIL devices; they report ashift: 12
 
 children[1]:
     type: 'disk'
     id: 1
     guid: 6986664094649753344
     path: '/dev/gpt/zil1'
     phys_path: '/dev/gpt/zil1'
     whole_disk: 1
     metaslab_array: 67
     metaslab_shift: 25
     ashift: 12
     asize: 4290248704
     is_log: 1
     create_txg: 22517
 
 The problem is with the cache devices only, but in the zdb output there is
 nothing at all about them.
 

Re: ZFS secondarycache on SSD problem on r255173

2013-09-13 Thread Steven Hartland

This is a recent bit of code by Justin cc'ed, so he's likely the best person to
investigate this one.

   Regards
   Steve
- Original Message - 
From: Dmitryy Makarov suppor...@ukr.net

To: freebsd-current@freebsd.org
Sent: Friday, September 13, 2013 12:16 PM
Subject: ZFS secondarycache on SSD problem on r255173


Hello, FreeBSD Current. 

I have some trouble adding an SSD drive as a secondarycache device on
10.0-CURRENT (FreeBSD 10.0-CURRENT #1 r255173):
SSD INTEL SSDSC2BW180A4

dmesg: 
ada1 at ahcich2 bus 0 scbus4 target 0 lun 0 
ada1: INTEL SSDSC2BW180A4 DC02 ATA-9 SATA 3.x device 
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) 
ada1: Command Queueing enabled 
ada1: 171705MB (351651888 512 byte sectors: 16H 63S/T 16383C) 
ada1: quirks=0x1<4K>
ada1: Previously was known as ad8 



# diskinfo -v /dev/ada1
/dev/ada1
        512                # sectorsize
        180045766656       # mediasize in bytes (167G)
        351651888          # mediasize in sectors
        4096               # stripesize
        0                  # stripeoffset
        348861             # Cylinders according to firmware.
        16                 # Heads according to firmware.
        63                 # Sectors according to firmware.
        BTDA326504N51802GN # Disk ident.

Creating partition and adding: 
# gpart create -s gpt ada1 
# gpart add -t freebsd-zfs -s3G -l zil0 ada1 
# gpart add -t freebsd-zfs -l cache0 ada1 
# zpool add disk1 log /dev/gpt/zil0 
# zpool add disk1 cache /dev/gpt/cache0 

After that I have:

  log
    gpt/zil0      ONLINE 0 0 0
  cache
    gpt/cache0    ONLINE 0 0 0  block size: 512B configured, 4096B native

But why can't ZFS properly set the sector size for the cache device
(quirks=0x1<4K>)? And, strangely, the ZIL partition on the same SSD works
fine!



Okay, I tried a workaround with GNOP:
# gnop create -S4096 /dev/gpt/cache0 
# zpool add disk1 cache /dev/gpt/cache0.nop 

Seems to be OK 
cache 
gpt/cache0.nop ONLINE 0 0 0 

But after a reboot the pool has:

status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.

  cache
    gptid/39e40c33-1c4d-11e3-ac20-002590aa9548  ONLINE 0 0 0  block size: 512B configured, 4096B native
And no gnop device was found, and not even the gpt labels (in /dev/gpt):

# gnop status
gnop: Command 'status' not available.

# gpart show -l ada1
=>        34  351651821  ada1  GPT  (167G)
          34          6        - free -  (3.0k)
          40    6291456     1  zil0  (3.0G)
     6291496  345360352     2  cache0  (164G)
   351651848          7        - free -  (3.5k)

# ls /dev/gpt/cache* 
zsh: no matches found: /dev/gpt/cache* 
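
(For what it's worth, one way to get back to a 4k cache after such a reboot -
assuming the pool name, labels and gptid shown above - would be:

# zpool remove disk1 gptid/39e40c33-1c4d-11e3-ac20-002590aa9548
# gnop create -S4096 /dev/gpt/cache0
# zpool add disk1 cache /dev/gpt/cache0.nop

though the gnop device will of course vanish again on the next reboot.)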



Please help me deal with this problem, or tell me what other information
would be useful to track it down.