Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Darren J Moffat

On 12/07/11 20:48, Mertol Ozyoney wrote:

Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.

The only vendor I know of that can do this is NetApp.

In fact, most of our functions, like replication, are not dedup aware.


For example, technically it would be possible to optimize our replication so that
it does not send data chunks if a chunk with the same checksum already exists on
the target, without enabling dedup on either the target or the source.


We already do that with 'zfs send -D':

 -D

 Perform dedup processing on the stream. Deduplicated
 streams  cannot  be  received on systems that do not
 support the stream deduplication feature.
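
For example (the pool, dataset, and host names here are only illustrative), a
deduplicated incremental send looks like:

 zfs snapshot tank/data@today
 zfs send -D -i @yesterday tank/data@today | ssh backuphost zfs receive -F backup/data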




--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mertol Ozyoney
> Sent: Wednesday, December 07, 2011 3:49 PM
> To: Brad Diggs
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup
> 
> Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.

I haven't read the code, but I can reference experimental results that seem to
defy that statement...

If you time writing a large stream of completely duplicated data to disk
without dedup...
and then time reading it back... it takes the same amount of time.

But,
if you enable dedup and repeat the same test, it goes much faster. Depending
on a lot of variables, it might be 2x-12x faster.
To me, "significantly faster than disk speed" can only mean it's benefiting
from cache.
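
For reference, this is roughly the sort of test I mean (the pool, dataset, and
file names are made up, and the sizes are arbitrary):

zfs create -o dedup=off tank/nodedup
dd if=/dev/zero of=/tank/nodedup/big bs=1M count=8192
zpool export tank && zpool import tank    # crude way to empty the ARC between write and read
time dd if=/tank/nodedup/big of=/dev/null bs=1M

# with dedup, every block hashes the same, so a single cached copy can,
# in principle, serve all of the reads
zfs create -o dedup=on tank/dedup
dd if=/dev/zero of=/tank/dedup/big bs=1M count=8192
zpool export tank && zpool import tank
time dd if=/tank/dedup/big of=/dev/null bs=1M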

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Issues with Areca 1680

2011-12-08 Thread Stephan Budach

Hi all,

I have a server built on top of an Asus board which is equipped
with an Areca 1680 HBA. Since ZFS likes raw disks, I changed its mode
from RAID to JBOD in the firmware and rebooted the host.

Now, I have 16 drives in the chassis and they show up like this:

root@vsm01:~# format
Searching for disks...done

c5t0d0: configured with capacity of 978.00MB


AVAILABLE DISK SELECTIONS:
   0. c3t1d0 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,0
   1. c3t1d1 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,1
   2. c3t1d2 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,2
   3. c3t1d3 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,3
   4. c3t1d4 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,4
   5. c3t1d5 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,5
   6. c3t1d6 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,6
   7. c3t1d7 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,7
   8. c3t2d0 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,0
   9. c3t2d1 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,1
  10. c3t2d2 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,2
  11. c3t2d3 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,3
  12. c3t2d4 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,4
  13. c3t2d5 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,5
  14. c3t2d6 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,6
  15. c3t2d7 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,7
  16. c5t0d0 
  /pci@0,0/pci1043,819e@1d,7/storage@4/disk@0,0

Looked okay to me, so I went ahead and created a zpool containing two 
mirrors like this:


root@vsm01:~# zpool create vsm_pool1_1T mirror c3t2d1 c3t1d6 mirror 
c3t2d0 c3t1d5


This went just fine and the zpool was created like this:

root@vsm01:~# zpool status vsm_pool1_1T
  pool: vsm_pool1_1T
 state: ONLINE
  scan: none requested
config:

NAMESTATE READ WRITE CKSUM
vsm_pool1_1T  ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c3t2d1  ONLINE   0 0 0
c3t1d6  ONLINE   0 0 0
  mirror-1  ONLINE   0 0 0
c3t2d0  ONLINE   0 0 0
c3t1d5  ONLINE   0 0 0

errors: No known data errors

Now, creating another zpool from the remaining 500GB drives failed with 
this weird error:


root@vsm01:~# zpool create vsm_pool2_1T mirror c3t1d4 c3t1d3 mirror 
c3t1d2 c3t1d1

invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t2d1s0 is part of active ZFS pool vsm_pool1_1T. Please see 
zpool(1M).


Does anybody have an idea of what is going wrong here? It doesn't seem to 
matter which of the drives I want to use for the new zpool; I always 
get this error message - even when trying only mirror c3t1d4 c3t1d3 
or mirror c3t1d2 c3t1d1 alone.


Thanks,
budy






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] SOLVED: Issues with Areca 1680

2011-12-08 Thread Stephan Budach

On 08.12.11 18:14, Stephan Budach wrote:

Hi all,

I have a server built on top of an Asus board which is 
equipped with an Areca 1680 HBA. Since ZFS likes raw disks, I changed 
its mode from RAID to JBOD in the firmware and rebooted the host.

Now, I have 16 drives in the chassis and they show up like this:

root@vsm01:~# format
Searching for disks...done

c5t0d0: configured with capacity of 978.00MB


AVAILABLE DISK SELECTIONS:
   0. c3t1d0 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,0
   1. c3t1d1 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,1
   2. c3t1d2 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,2
   3. c3t1d3 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,3
   4. c3t1d4 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,4
   5. c3t1d5 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,5
   6. c3t1d6 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,6
   7. c3t1d7 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,7
   8. c3t2d0 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,0
   9. c3t2d1 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,1
  10. c3t2d2 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,2
  11. c3t2d3 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,3
  12. c3t2d4 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,4
  13. c3t2d5 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,5
  14. c3t2d6 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,6
  15. c3t2d7 
  /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,7
  16. c5t0d0 
  /pci@0,0/pci1043,819e@1d,7/storage@4/disk@0,0

Looked okay to me, so I went ahead and created a zpool containing two 
mirrors like this:


root@vsm01:~# zpool create vsm_pool1_1T mirror c3t2d1 c3t1d6 mirror 
c3t2d0 c3t1d5


This went just fine and the zpool was created like this:

root@vsm01:~# zpool status vsm_pool1_1T
  pool: vsm_pool1_1T
 state: ONLINE
  scan: none requested
config:

NAMESTATE READ WRITE CKSUM
vsm_pool1_1T  ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c3t2d1  ONLINE   0 0 0
c3t1d6  ONLINE   0 0 0
  mirror-1  ONLINE   0 0 0
c3t2d0  ONLINE   0 0 0
c3t1d5  ONLINE   0 0 0

errors: No known data errors

Now, creating another zpool from the remaining 500GB drives failed 
with this weird error:


root@vsm01:~# zpool create vsm_pool2_1T mirror c3t1d4 c3t1d3 mirror 
c3t1d2 c3t1d1

invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t2d1s0 is part of active ZFS pool vsm_pool1_1T. Please see 
zpool(1M).


Does anybody have an idea of what is going wrong here? It doesn't seem to 
matter which of the drives I want to use for the new zpool; I always 
get this error message - even when trying only mirror c3t1d4 
c3t1d3 or mirror c3t1d2 c3t1d1 alone.


Thanks,
budy

Hmm… now I did it the other way round and this time it worked as 
expected. It seems these disks had been used in some other zpool 
configuration without being properly removed, so they may still have had 
some old labels on them, right?
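
If stale labels were indeed the cause, something along these lines should 
confirm it (the device name is just an example):

zdb -l /dev/dsk/c3t1d4s0    # print any ZFS labels still present on the slice

Any leftover label data can then be wiped before reusing the disk, e.g. by 
carefully zeroing the first and last few megabytes of the device with dd.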


Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Ian Collins

On 12/ 9/11 12:39 AM, Darren J Moffat wrote:

On 12/07/11 20:48, Mertol Ozyoney wrote:

Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.

The only vendor I know of that can do this is NetApp.

In fact, most of our functions, like replication, are not dedup aware.
For example, technically it would be possible to optimize our replication so that
it does not send data chunks if a chunk with the same checksum already exists on
the target, without enabling dedup on either the target or the source.

We already do that with 'zfs send -D':

   -D

   Perform dedup processing on the stream. Deduplicated
   streams  cannot  be  received on systems that do not
   support the stream deduplication feature.





Is there any more published information on how this feature works?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Mark Musante

You can see the original ARC case here:

http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt
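
If you just want to poke at a stream yourself, something like this (the dataset 
and snapshot names are only examples) shows the on-the-wire record types, 
including the write-by-reference records that stream dedup uses:

zfs send -D tank/data@snap | zstreamdump -v | less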

On 8 Dec 2011, at 16:41, Ian Collins wrote:

> On 12/ 9/11 12:39 AM, Darren J Moffat wrote:
>> On 12/07/11 20:48, Mertol Ozyoney wrote:
>>> Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.
>>> 
>>> The only vendor I know of that can do this is NetApp.
>>> 
>>> In fact, most of our functions, like replication, are not dedup aware.
>>> For example, technically it would be possible to optimize our replication so that
>>> it does not send data chunks if a chunk with the same checksum already exists on
>>> the target, without enabling dedup on either the target or the source.
>> We already do that with 'zfs send -D':
>> 
>>   -D
>> 
>>   Perform dedup processing on the stream. Deduplicated
>>   streams  cannot  be  received on systems that do not
>>   support the stream deduplication feature.
>> 
>> 
>> 
>> 
> Is there any more published information on how this feature works?
> 
> -- 
> Ian.
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] First zone creation - getting ZFS error

2011-12-08 Thread Betsy Schwartz
I would also try it without the /zones mountpoint. Putting the zone root directory 
on an alternate mountpoint caused problems for us. Try using /datastore/zones 
as the zone root home, or just put the zones in /datastore.

Solaris seems to get very easily confused when the zone root is anything out of the 
ordinary (and it really bites you at patch time!)
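
Concretely, something like this is what I have in mind - treat it as a sketch, 
reusing the dataset and zone names from the original post:

zfs inherit mountpoint datastore/zones    # drop the /zones override; back to /datastore/zones
zonecfg -z zonemaster "set zonepath=/datastore/zones/zonemaster"
zoneadm -z zonemaster install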



On Dec 7, 2011, at 9:50 PM, Ian Collins  wrote:

> On 12/ 7/11 05:12 AM, Mark Creamer wrote:
>> I'm running OI 151a. I'm trying to create a zone for the first time, and am 
>> getting an error about zfs. I'm logged in as me, then su - to root before 
>> running these commands.
>> 
>> I have a pool called datastore, mounted at /datastore
>> 
>> Per the wiki document http://wiki.openindiana.org/oi/Building+in+zones, I 
>> first created the zfs file system (note that the command syntax in the 
>> document appears to be wrong, so I did the options I wanted separately):
>> 
>> zfs create datastore/zones
>> zfs set compression=on datastore/zones
>> zfs set mountpoint=/zones datastore/zones
>> 
>> zfs list shows:
>> 
>> NAME USED  AVAIL  REFER  MOUNTPOINT
>> datastore   28.5M  7.13T  57.9K  /datastore
>> datastore/dbdata28.1M  7.13T  28.1M  /datastore/dbdata
>> datastore/zones 55.9K  7.13T  55.9K  /zones
>> rpool   27.6G   201G45K  /rpool
>> rpool/ROOT  2.89G   201G31K  legacy
>> rpool/ROOT/openindiana  2.89G   201G  2.86G  /
>> rpool/dump  12.0G   201G  12.0G  -
>> rpool/export5.53M   201G32K  /export
>> rpool/export/home   5.50M   201G32K  /export/home
>> rpool/export/home/mcreamer  5.47M   201G  5.47M  /export/home/mcreamer
>> rpool/swap  12.8G   213G   137M  -
>> 
>> Then I went about creating the zone:
>> 
>> zonecfg -z zonemaster
>> create
>> set autoboot=true
>> set zonepath=/zones/zonemaster
>> set ip-type=exclusive
>> add net
>> set physical=vnic0
>> end
>> exit
>> 
>> That all goes fine, then...
>> 
>> zoneadm -z zonemaster install
>> 
>> which returns...
>> 
>> ERROR: the zonepath must be a ZFS dataset.
>> The parent directory of the zonepath must be a ZFS dataset so that the
>> zonepath ZFS dataset can be created properly.
>> 
> That's odd, it should have worked.
> 
>> Since the zfs dataset datastore/zones is created, I don't understand what 
>> the error is trying to get me to do. Do I have to do:
>> 
>> zfs create datastore/zones/zonemaster
>> 
>> before I can create a zone in that path? That's not in the documentation, so 
>> I didn't want to do anything until someone can point out my error for me. 
>> Thanks for your help!
>> 
> You shouldn't have to, but it won't do any harm.
> 
> If you don't get any further, try zones-discuss.
> 
> -- 
> Ian.
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] First zone creation - getting ZFS error

2011-12-08 Thread Ian Collins

On 12/ 9/11 11:37 AM, Betsy Schwartz wrote:

 On Dec 7, 2011, at 9:50 PM, Ian Collins  wrote:

On 12/ 7/11 05:12 AM, Mark Creamer wrote:


Since the zfs dataset datastore/zones is created, I don't understand what the 
error is trying to get me to do. Do I have to do:

zfs create datastore/zones/zonemaster

before I can create a zone in that path? That's not in the documentation, so I 
didn't want to do anything until someone can point out my error for me. Thanks 
for your help!


You shouldn't have to, but it won't do any harm.

If you don't get any further, try zones-discuss.

I would also try it without the /zones mountpoint. Putting the zone root directory 
on an alternate mountpoint caused problems for us. Try using /datastore/zones 
as the zone root home, or just put the zones in /datastore.

Solaris seems to get very easily confused when the zone root is anything out of the 
ordinary (and it really bites you at patch time!)


It shouldn't.

On all my systems, I have:

NAME USED  AVAIL  REFER  MOUNTPOINT
rpool/zoneRoot  11.6G   214G40K  /zoneRoot

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-08 Thread Nigel W
On Mon, Dec 5, 2011 at 17:46, Jim Klimov  wrote:
> So, in contrast with Nigel's optimistic theory that
> metadata is anyway extra-redundant and should be
> easily fixable, it seems that I do still have the
> problem. It does not show itself in practice as of
> yet, but is found by scrub ;)

Hmm. Interesting.

I have re-scrubbed the pool that I referenced and haven't gotten another
error in the metadata.

> After a few days to complete the current scrub,
> I plan to run zdb as asked by Steve. If anyone else
> has some theories, suggestions or requests to dig
> up more clues - bring them on! ;)
>
Perhaps the cause of the corruption is still active.

The circumstances that led up to the discovery of the error are different
for you and me. The server I encountered it on had been running fine for
months; it was only after the crash/hang caused by attempting to add a bad
drive to the pool that I hit the issue, which in my case was found at boot.
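
If it helps, these are roughly the checks I would run to look for a still-active
cause (the pool name is only an example):

zpool status -v tank    # persistent errors or devices with non-zero error counters?
iostat -En              # per-device hard/soft/transport error counts
fmdump -eV | less       # FMA error telemetry, e.g. checksum or I/O ereports
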
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss