Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-25 Thread StorageConcepts
Hello, 
actually this is bad news. 

I always assumed that the mirror redundancy of the ZIL could also be used to 
handle bad blocks on the ZIL device (just as the main pool's self-healing does 
for data blocks).

I don't actually know how SSDs "die", but because of their wear-out 
characteristics I can imagine an increased number of bad blocks / bit errors at 
the end of life of such a device - probably undiscovered.

Because the ZIL is write-only in normal operation, you only find out whether it 
worked when you actually need it - which is bad. So my suggestion was always to 
run with a single ZIL device during pre-production, and add the ZIL mirror two 
weeks later when production starts. That way the devices don't age exactly the 
same, and the second ZIL device has two more weeks of expected lifetime (or 
even more, assuming the usual heavier writes during stress testing). 

I would call this pre-aging. However, if the second ZIL device is not used to 
recover from bad blocks, this does not make a lot of sense.

So I would say there are two bugs / missing features here: 

1) The ZIL needs to report truncated transactions on ZIL corruption.
2) The ZIL should use its mirrored counterpart to recover from blocks with bad 
checksums.

Now with OpenSolaris being closed by Oracle and Illumos having just started, I 
don't know how to handle filing bugs :) - is bugs.opensolaris.org still 
maintained?

Regards, 
Robert
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread Dr. Martin Mundschenk

Am 26.08.2010 um 04:38 schrieb Edward Ned Harvey:

> There is no such thing as reliable external disks.  Not unless you want to
> pay $1000 each, which is dumb.  You have to scrap your mini, and use
> internal (or hotswappable) disks.
> 
> Never expect a mini to be reliable.  They're designed to be small and cute.
> Not reliable.


The MacMini and the disks themselves are just fine. The problem seems to be the 
SATA-to-USB/FireWire bridges: they just stall when the load gets heavy.

Martin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (preview) Whitepaper - ZFS Pools Explained - feedback welcome

2010-08-25 Thread StorageConcepts
Thanks for the feedback. The idea is to give people new to ZFS an understanding 
of the terms and the mode of operation, to help them avoid common problems 
(wide stripe pools etc.). Also agreed that it is a little NexentaStor "tweaked" 
:)

I think I have to rework the ZIL section anyhow because of 
http://opensolaris.org/jive/thread.jspa?threadID=133294&tstart=0 - I have to do 
some experiments here - and I will also adopt a "dual command strategy", 
showing NexentaStor commands AND OpenSolaris commands whenever a command is 
shown. 

Thanks again for the good feedback.

Regards, 
Robert
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-25 Thread Neil Perrin

On 08/25/10 20:33, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin

This is a consequence of the design for performance of the ZIL code.
Intent log blocks are dynamically allocated and chained together.
When reading the intent log we read each block and checksum it
with the embedded checksum within the same block. If we can't read
a block due to an IO error then that is reported, but if the checksum
does not match then we assume it's the end of the intent log chain.
Using this design means we use the minimum number of writes.

So corruption of an intent log is not going to generate any errors.
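
To make that concrete, here is a minimal, self-contained C sketch of the
chain-walk described above (toy structures and a toy checksum, purely
illustrative, not the actual ZIL code): replay follows the chain of log
blocks and treats the first embedded-checksum mismatch as the end of the
log, so a silently corrupted block truncates the replay without raising
any error.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NBLOCKS 4
#define PAYLOAD 32

struct log_block {
	int      next;             /* index of the next block, -1 = end of chain  */
	uint64_t cksum;            /* checksum embedded when the block is written */
	char     payload[PAYLOAD]; /* log record(s)                               */
};

/* toy checksum, standing in for the real fletcher/sha checksums */
static uint64_t
checksum(const char *buf, size_t len)
{
	uint64_t sum = 0;
	for (size_t i = 0; i < len; i++)
		sum = sum * 31 + (unsigned char)buf[i];
	return (sum);
}

/* one write per record: the checksum travels inside the block itself */
static void
write_block(struct log_block *lb, int next, const char *msg)
{
	memset(lb->payload, 0, PAYLOAD);
	strncpy(lb->payload, msg, PAYLOAD - 1);
	lb->next = next;
	lb->cksum = checksum(lb->payload, PAYLOAD);
}

static void
replay(struct log_block *log, int first)
{
	for (int i = first; i != -1; i = log[i].next) {
		if (checksum(log[i].payload, PAYLOAD) != log[i].cksum)
			return; /* mismatch == assumed end of chain, no error */
		printf("replaying: %s\n", log[i].payload);
	}
}

int
main(void)
{
	struct log_block log[NBLOCKS];

	write_block(&log[0], 1, "record A");
	write_block(&log[1], 2, "record B");
	write_block(&log[2], -1, "record C");

	log[1].payload[0] ^= 0x5a;  /* silently corrupt the second block */

	replay(log, 0);             /* replays only "record A", then stops quietly */
	return (0);
}

Because a mismatch is indistinguishable from a clean end of chain, the lost
records are simply never replayed and nothing is reported, which is exactly
the situation discussed below.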



I didn't know that.  Very interesting.  This raises another question ...

It's commonly stated, that even with log device removal supported, the most
common failure mode for an SSD is to blindly write without reporting any
errors, and only detect that the device is failed upon read.  So ... If an
SSD is in this failure mode, you won't detect it?  At bootup, the checksum
will simply mismatch, and we'll chug along forward, having lost the data ...
(nothing can prevent that) ... but we don't know that we've lost data?


- Indeed, we wouldn't know we lost data.


Worse yet ... In preparation for the above SSD failure mode, it's commonly
recommended to still mirror your log device, even if you have log device
removal.  If you have a mirror, and the data on each half of the mirror
doesn't match each other (one device failed, and the other device is good)
... Do you read the data from *both* sides of the mirror, in order to
discover the corrupted log device, and correctly move forward without data
loss?

Hmm, I need to check, but if we get a checksum mismatch then I don't think we 
try the other mirror(s). This is automatic for the 'main pool', but of course 
the ZIL code is different by necessity. This problem can of course be fixed. 
(It will be a week and a bit before I can report back on this, as I'm on 
vacation.)

Neil.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dr. Martin Mundschenk
> 
> devices attached. Unfortunately the USB and sometimes the FW devices
> just die, causing the whole system to stall, forcing me to do a hard
> reboot.
> 
> Well, I wonder what are the components to build a stable system without
> having an enterprise solution: eSATA, USB, FireWire, FibreChannel?

There is no such thing as reliable external disks.  Not unless you want to
pay $1000 each, which is dumb.  You have to scrap your mini, and use
internal (or hotswappable) disks.

Never expect a mini to be reliable.  They're designed to be small and cute.
Not reliable.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-25 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Neil Perrin
> 
> This is a consequence of the design for performance of the ZIL code.
> Intent log blocks are dynamically allocated and chained together.
> When reading the intent log we read each block and checksum it
> with the embedded checksum within the same block. If we can't read
> a block due to an IO error then that is reported, but if the checksum
> does not match then we assume it's the end of the intent log chain.
> Using this design means we use the minimum number of writes: adding an
> intent log record takes just one write.
> 
> So corruption of an intent log is not going to generate any errors.

I didn't know that.  Very interesting.  This raises another question ...

It's commonly stated, that even with log device removal supported, the most
common failure mode for an SSD is to blindly write without reporting any
errors, and only detect that the device is failed upon read.  So ... If an
SSD is in this failure mode, you won't detect it?  At bootup, the checksum
will simply mismatch, and we'll chug along forward, having lost the data ...
(nothing can prevent that) ... but we don't know that we've lost data?

Worse yet ... In preparation for the above SSD failure mode, it's commonly
recommended to still mirror your log device, even if you have log device
removal.  If you have a mirror, and the data on each half of the mirror
doesn't match each other (one device failed, and the other device is good)
... Do you read the data from *both* sides of the mirror, in order to
discover the corrupted log device, and correctly move forward without data
loss?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slog and TRIM support [SEC=UNCLASSIFIED]

2010-08-25 Thread LaoTsao 老曹


The X25-M has larger capacity, and the L2ARC is mostly read and not written 
much, and you also need memory for the ARC.

The L2ARC should be about the size of your working dataset.
The ZIL is mostly written, so you want an SSD with better write performance and 
longer life, and the ZIL need be no larger than about 1/2 of physical memory.

regards


On 8/25/2010 9:18 PM, Wilkinson, Alex wrote:

 On Wed, Aug 25, 2010 at 02:54:42PM -0400, LaoTsao 老曹 wrote:

 >IMHO, U want -E for ZIL and -M for L2ARC

Why ?

-Alex

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slog and TRIM support [SEC=UNCLASSIFIED]

2010-08-25 Thread Wilkinson, Alex

On Wed, Aug 25, 2010 at 02:54:42PM -0400, LaoTsao 老曹 wrote: 

>IMHO, U want -E for ZIL and -M for L2ARC

Why ?

   -Alex

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread Freddie Cash
On Wed, Aug 25, 2010 at 12:29 PM, Dr. Martin Mundschenk
 wrote:
> I'm running a OSOL box for quite a while and I think ZFS is an amazing 
> filesystem. As a computer I use a Apple MacMini with USB and FireWire devices 
> attached. Unfortunately the USB and sometimes the FW devices just die, 
> causing the whole system to stall, forcing me to do a hard reboot.
>
> I had the worst experience with an USB-SATA bridge running an Oxford chipset, 
> in a way that the four external devices stalled randomly within a day or so. 
> I switched to a four slot raid box, also with USB bridge, but with better 
> reliability.
>
> Well, I wonder what are the components to build a stable system without 
> having an enterprise solution: eSATA, USB, FireWire, FibreChannel?

If possible to get a card to fit into a MacMini, eSATA would be a lot
better than USB or FireWire.

If there's any way to run cables from inside the case, you can "make
do" with plain SATA and longer cables.

Otherwise, you'll need to look into something other than a MacMini for
your storage box.

-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread Enrico Maria Crisostomo
I'm currently using SXCE with eSATA (with an LSI controller) and SAS disks in 
my home boxes and they run just fine. The only glitch I've had after 
LU-upgrading to the latest release is that the eSATA disks no longer spin down 
when idle.

I export file systems over NFS to my Macs: beware that Mac OS X uses decomposed 
UTF-8 characters, and I sometimes have portability issues when file names 
contain, for example, accented characters. It runs fine and performs better 
than CIFS, IMHO.

In some cases I use an OS X iSCSI initiator and COMSTAR: it runs fine, and it's 
the only solution I found if you need, for example, to use Time Machine on a 
ZFS volume.

Bye,
Enrico
-- 
Enrico M. Crisostomo

On Aug 25, 2010, at 21:29, "Dr. Martin Mundschenk"  wrote:

> Hi!
> 
> I'm running a OSOL box for quite a while and I think ZFS is an amazing 
> filesystem. As a computer I use a Apple MacMini with USB and FireWire devices 
> attached. Unfortunately the USB and sometimes the FW devices just die, 
> causing the whole system to stall, forcing me to do a hard reboot.
> 
> I had the worst experience with an USB-SATA bridge running an Oxford chipset, 
> in a way that the four external devices stalled randomly within a day or so. 
> I switched to a four slot raid box, also with USB bridge, but with better 
> reliability.
> 
> Well, I wonder what are the components to build a stable system without 
> having an enterprise solution: eSATA, USB, FireWire, FibreChannel?
> 
> Martin
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] System hangs during zfs send

2010-08-25 Thread Bryan Leaman
So I don't know if I'm on the right track, but I've been looking at the 
threadlist and findstack output from above, specifically this thread which 
seems to be what zpool-syspool is stuck on:

> 0xff000fa05c60::findstack -v
stack pointer for thread ff000fa05c60: ff000fa05860
[ ff000fa05860 _resume_from_idle+0xf1() ]
  ff000fa05890 swtch+0x145()
  ff000fa058c0 cv_wait+0x61(ff000fa05e3e, ff000fa05e40)
  ff000fa05900 delay_common+0xab(1)
  ff000fa05940 delay+0xc4(1)
  ff000fa05960 dnode_special_close+0x28(ff02e8aa2050)
  ff000fa05990 dmu_objset_evict+0x160(ff02e5b91100)
  ff000fa05a20 dsl_dataset_user_release_sync+0x52(ff02e000b928,
  ff02d0b1a868, ff02e5b9c6e0)
  ff000fa05a70 dsl_sync_task_group_sync+0xf3(ff02d0b1a868,
  ff02e5b9c6e0)
  ff000fa05af0 dsl_pool_sync+0x1ec(ff02cd540380, 9291)
  ff000fa05ba0 spa_sync+0x37b(ff02cdd40b00, 9291)
  ff000fa05c40 txg_sync_thread+0x247(ff02cd540380)
  ff000fa05c50 thread_start+8()

It seems to be trying to sync txg 9291.

dmu_objset_evict is called with:

> ff02e5b91100::print objset_t
{
os_dsl_dataset = 0xff02cd748880
os_spa = 0xff02cdd40b00
os_phys_buf = 0xff02cf426a88
os_phys = 0xff02e570d800
os_meta_dnode = 0xff02e8aa2050
os_userused_dnode = 0xff02e8aa1758
os_groupused_dnode = 0xff02e8aa1478

and the ds_snapname indeed matches the name of the snapshot I was copying with 
zfs send when the system hung:

> ff02cd748880::print dsl_dataset_t ds_snapname
ds_snapname = [ "20100824" ]

within dmu_objset_evict() it executes:

	/*
	 * We should need only a single pass over the dnode list, since
	 * nothing can be added to the list at this point.
	 */
	(void) dmu_objset_evict_dbufs(os);

	dnode_special_close(os->os_meta_dnode);
	if (os->os_userused_dnode) {
		dnode_special_close(os->os_userused_dnode);
		dnode_special_close(os->os_groupused_dnode);
	}

and in the stack, it's calling dnode_special_close+0x28(ff02e8aa2050) which 
matches the value for os->os_meta_dnode.  So I guess that means it's stuck in 
dnode_special_close() handling the os_meta_dnode?

Looking at the code for dnode_special_close() in dnode.c seems to explain the 
delay+0xc4(1) in the stack:

>  ff02e8aa2050::print dnode_t dn_holds
dn_holds = {
dn_holds.rc_count = 0x20
}

void
dnode_special_close(dnode_t *dn)
{
	/*
	 * Wait for final references to the dnode to clear.  This can
	 * only happen if the arc is asyncronously evicting state that
	 * has a hold on this dnode while we are trying to evict this
	 * dnode.
	 */
	while (refcount_count(&dn->dn_holds) > 0)
		delay(1);
	dnode_destroy(dn);
}

But now I'm reaching the limit of what I'm able to debug, as my understanding 
of the inner workings of ZFS is very limited.  Any thoughts or suggestions 
based on this analysis?  At least I've learned quite a bit about mdb. :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Halcyon ZFS and system monitoring software for OpenSolaris (beta)

2010-08-25 Thread Mike Kirk
Update: version 3.2.5 out now, with changes to better support snv_134:

http://forums.halcyoninc.com/showthread.php?t=368

If you've downloaded v3.2.4 and are on 09/06, there is no reason to upgrade.

Regards,

mike.k...@halcyoninc.com
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedup - Does "on" imply "sha256?"

2010-08-25 Thread Peter Taps
Thank you all for your help.

It appears it is better to use "on" instead of "sha256." This way, you are 
letting zfs decide the best option.

Regards,
Peter
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-08-25 Thread Andreas Grüninger
This was the information I got from the distributor, but this FAQ is newer.

Anyway, you still have the problems.

When we installed the Intel X25 we also had problems with timeouts.
We replaced the original SUN StorageTek SAS HBA (LSI based, 1068E, newest 
firmware) with an original SUN StorageTek SAS RAID HBA (SUN OEM version of 
Adaptec 5085).
No timeouts since this replacement.

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (preview) Whitepaper - ZFS Pools Explained - feedback welcome

2010-08-25 Thread LaoTsao 老曹

 dtrace is DTrace

On 8/25/2010 3:27 PM, F. Wessels wrote:

Although it's a bit much Nexenta oriented, command-wise, it's a nice 
introduction. I did find one thing, on page 28 about the ZIL: there is no "ZIL 
device"; the ZIL can be written to an optional slog device. And in the last 
line of the first paragraph, "If you can, use memory based SSD devices", at 
least change "memory" into "DRAM", since flash is also memory. Perhaps even 
better: "If you can, use a non-volatile DRAM based device."
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread LaoTsao

SAS-2 with 7200 rpm SAS 1 TB or 2 TB HDDs

--- Original message ---

From: Dr. Martin Mundschenk 
To: zfs-discuss@opensolaris.org
Sent: 25.8.'10,  15:29

Hi!

I'm running a OSOL box for quite a while and I think ZFS is an amazing 
filesystem. As a computer I use a Apple MacMini with USB and FireWire 
devices attached. Unfortunately the USB and sometimes the FW devices just 
die, causing the whole system to stall, forcing me to do a hard reboot.


I had the worst experience with an USB-SATA bridge running an Oxford 
chipset, in a way that the four external devices stalled randomly within a 
day or so. I switched to a four slot raid box, also with USB bridge, but 
with better reliability.


Well, I wonder what are the components to build a stable system without 
having an enterprise solution: eSATA, USB, FireWire, FibreChannel?


Martin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread Brandon High
On Wed, Aug 25, 2010 at 12:29 PM, Dr. Martin Mundschenk
 wrote:
> Well, I wonder what are the components to build a stable system without 
> having an enterprise solution: eSATA, USB, FireWire, FibreChannel?

I wouldn't consider anything except FC or SAS for a true enterprise solution.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread Dr. Martin Mundschenk
Hi!

I've been running an OSOL box for quite a while, and I think ZFS is an amazing 
filesystem. As a computer I use an Apple MacMini with USB and FireWire devices 
attached. Unfortunately the USB and sometimes the FireWire devices just die, 
causing the whole system to stall and forcing me to do a hard reboot.

I had the worst experience with a USB-SATA bridge running an Oxford chipset, 
where the four external devices stalled randomly within a day or so. I 
switched to a four-slot RAID box, also with a USB bridge, but with better 
reliability.

Well, I wonder what the components are to build a stable system without having 
an enterprise solution: eSATA, USB, FireWire, FibreChannel?

Martin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedup - Does "on" imply "sha256?"

2010-08-25 Thread Brandon High
On Tue, Aug 24, 2010 at 9:45 PM, Peter Taps  wrote:
> Essentially, "on" is just a pseudonym for "sha256" and "verify" is just a 
> pseudonym for "sha256,verify."
>
> Can someone please confirm if this is true?

When dedup was initially announced, dedup=on was fletcher4. There was
a problem with the implementation and the default was changed to
sha256.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (preview) Whitepaper - ZFS Pools Explained - feedback welcome

2010-08-25 Thread F. Wessels
Although it's a bit much Nexenta oriented, command-wise, it's a nice 
introduction. I did find one thing, on page 28 about the ZIL: there is no "ZIL 
device"; the ZIL can be written to an optional slog device. And in the last 
line of the first paragraph, "If you can, use memory based SSD devices", at 
least change "memory" into "DRAM", since flash is also memory. Perhaps even 
better: "If you can, use a non-volatile DRAM based device."
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-08-25 Thread Ray Van Dolson
On Wed, Aug 25, 2010 at 11:47:38AM -0700, Andreas Grüninger wrote:
> Ray
> 
> Supermicro does not support the use of SSDs behind an expander.
> 
> You must put the SSD in the head or use an interposer card see here:
> http://www.lsi.com/storage_home/products_home/standard_product_ics/sas_sata_protocol_bridge/lsiss9252/index.html
> Supermicro offers an interposer card too: AOCSMPLSISS9252 .
> 

Hmm, interesting.

FAQ #3 on this page[1] seems to indicate otherwise -- at least in the
case of the Intel X25-E (SSDSA2SH064G1GC) with firmware 8860 (which we
are running).

Ray

[1] http://www.supermicro.com/support/faqs/results.cfm?id=95
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slog and TRIM support

2010-08-25 Thread LaoTsao 老曹

 IMHO, you want the X25-E for the ZIL and the X25-M for the L2ARC.


On 8/25/2010 2:44 PM, Karl Rossing wrote:
I'm trying to pick between an Intel X25-M or Intel X25-E for a slog 
device.


At some point in the future, TRIM support will become available. 
The X25-M support TRIM while X25-E don't support trim.


Does TRIM support mater when selecting slog drives?

Thanks
Karl

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] slog and TRIM support

2010-08-25 Thread Karl Rossing
 I'm trying to pick between an Intel X25-M and an Intel X25-E for a slog 
device.

At some point in the future, TRIM support will become available. The X25-M 
supports TRIM while the X25-E doesn't.

Does TRIM support matter when selecting slog drives?

Thanks
Karl

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-08-25 Thread Andreas Grüninger
Ray

Supermicro does not support the use of SSDs behind an expander.

You must put the SSD in the head or use an interposer card; see here:
http://www.lsi.com/storage_home/products_home/standard_product_ics/sas_sata_protocol_bridge/lsiss9252/index.html
Supermicro offers an interposer card too: AOCSMPLSISS9252.

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrink zpool

2010-08-25 Thread LaoTsao 老曹

 Not possible now.

On 8/25/2010 2:34 PM, Mike DeMarco wrote:

Is it currently or near future possible to shrink a zpool "remove a disk"
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrink zpool

2010-08-25 Thread Freddie Cash
On Wed, Aug 25, 2010 at 11:34 AM, Mike DeMarco  wrote:
> Is it currently or near future possible to shrink a zpool "remove a disk"

Short answer:  no.

Long answer:  search the archives for "block pointer rewrite" for all
the gory details.  :)


-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cloud Storage

2010-08-25 Thread Wes Felter

On 8/25/10 12:42 PM, J.P. King wrote:


What I would like to achieve:

Large (by my standard) scale storage. Lets say petabyte scale...

Redundancy across machines of data

An Amazon S3 style interface


Sounds like OpenStack Swift http://openstack.org/projects/storage/

Wes Felter

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] shrink zpool

2010-08-25 Thread Mike DeMarco
Is it currently, or in the near future, possible to shrink a zpool (i.e. remove a disk)?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cloud Storage

2010-08-25 Thread LaoTsao 老曹


IMHO, maybe take a look at the ZFS appliance (7000 storage) from Oracle.
It provides a GUI for DTrace-based Analytics and web GUI management.
It supports 1/2 PB now and will support much more in the near future.
 http://www.oracle.com/us/products/servers-storage/039224.pdf
It supports local clustering and remote replication, etc.
It supports 10GbE and IB, etc.
regards


On 8/25/2010 1:42 PM, J.P. King wrote:


This is slightly off topic, so I apologise in advance.

I'm investigating the option of offering private "cloud storage".  
I've found many things which offer features that I want, but nothing 
that seems to glue them all together into a useful whole.  Thus I 
would like to pick your collective brains on the matter.  The reason 
for this mailing list is that the obvious solution to aspects of this 
is to use ZFS as the underlying filesystem, and this is the only 
storage mailing list I am subscribed to.  :-)


What I would like to achieve:

Large (by my standard) scale storage.  Lets say petabyte scale, 
although I'll start around 50-100TB.


Redundancy across machines of data.  This doesn't mean that I have to 
have synchronous mirroring or anything, but I don't want data stored 
in just one location.  I also don't require this happen at the block 
level.  I am quite happy for a system which has two copies of every 
file one two machines done at the application level.


An Amazon S3 style interface.  It doesn't have to be the same API as 
S3, but something which has the same sorts of features would be good.


Scalability.  My building blocks would be X4540's or similar.  I want 
to be able to add more of these and be able to manage the storage 
well.  I want the front end to hide that the data is half on machines 
Alpha and Aleph and half on machines Beta and Beth.


A means of managing all this storage.  I'll accept a web front end, 
but I'd rather have something scriptable with an API.


Does anyone have any thoughts, pointers, or suggestions.  If people 
have ideas that force me away from ZFS then I'm interested, although 
that does mean that this thread would drift more off topic.


Has anyone done anything like this?  Whether public cloud or private 
cloud?  In my head the model worked really well, but research didn't 
result in the solutions I thought I was going to find.


This is intended to be a service, so if it is too shonky then it won't 
meet my needs.


Oh, and free isn't a requirement, but it is definitely a bonus.

Thanks in advance,

Julian
--
Julian King
Computer Officer, University of Cambridge, Unix Support
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Cloud Storage

2010-08-25 Thread J.P. King


This is slightly off topic, so I apologise in advance.

I'm investigating the option of offering private "cloud storage".  I've 
found many things which offer features that I want, but nothing that seems 
to glue them all together into a useful whole.  Thus I would like to pick 
your collective brains on the matter.  The reason for this mailing list is 
that the obvious solution to aspects of this is to use ZFS as the 
underlying filesystem, and this is the only storage mailing list I am 
subscribed to.  :-)


What I would like to achieve:

Large (by my standards) scale storage.  Let's say petabyte scale, although 
I'll start around 50-100 TB.


Redundancy of data across machines.  This doesn't mean that I have to have 
synchronous mirroring or anything, but I don't want data stored in just 
one location.  I also don't require that this happen at the block level.  I am 
quite happy with a system which keeps two copies of every file on two 
machines, done at the application level.


An Amazon S3 style interface.  It doesn't have to be the same API as S3, 
but something which has the same sorts of features would be good.


Scalability.  My building blocks would be X4540's or similar.  I want to 
be able to add more of these and be able to manage the storage well.  I 
want the front end to hide that the data is half on machines Alpha and 
Aleph and half on machines Beta and Beth.


A means of managing all this storage.  I'll accept a web front end, but 
I'd rather have something scriptable with an API.


Does anyone have any thoughts, pointers, or suggestions?  If people have 
ideas that force me away from ZFS then I'm interested, although that does 
mean that this thread would drift further off topic.


Has anyone done anything like this?  Whether public cloud or private 
cloud?  In my head the model worked really well, but research didn't 
result in the solutions I thought I was going to find.


This is intended to be a service, so if it is too shonky then it won't 
meet my needs.


Oh, and free isn't a requirement, but it is definitely a bonus.

Thanks in advance,

Julian
--
Julian King
Computer Officer, University of Cambridge, Unix Support
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-25 Thread Daniel Whitener
Sigbjorn

Stop! Don't do it... it's a waste of time.  We tried exactly what
you're thinking of... we bought two Sun/Oracle 7000 series storage
units with 20TB of ZFS storage each, planning to use them as a backup
target for Networker.  We ran into several issues and eventually gave up
on the ZFS/Networker combo.  We've used other storage devices in the past
(virtual tape libraries) that had deduplication.  We were used to
seeing dedup ratios better than 20x on our backup data.  The ZFS
filesystem only gave us 1.03x, and it had regular issues because it
couldn't do dedup for such large filesystems very easily.  We didn't
know it ahead of time, but VTL solutions use "variable length" block
dedup, whereas ZFS uses fixed-length block dedup.  Like one of the
other posters mentioned, things just don't line up right and the dedup
ratio suffers.  Yes, compression works to some degree -- I think we got
2 or 3x on that, but it was a far cry from the 20x that we were used to
seeing on our old VTL.
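
To illustrate the alignment problem, here is a toy C sketch (toy hash and
made-up sizes, not how Networker or the 7000 series actually lays anything
out): inserting a single byte near the front of a stream shifts the contents
of every later fixed-size block, so essentially none of the per-block
checksums from the previous run match any more.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define BLK  128   /* stand-in for the 128k ZFS recordsize */
#define NBLK 64

/* toy FNV-1a hash over one fixed-size block, standing in for sha256 */
static uint64_t
blkhash(const unsigned char *p)
{
	uint64_t h = 1469598103934665603ULL;
	for (int i = 0; i < BLK; i++) {
		h ^= p[i];
		h *= 1099511628211ULL;
	}
	return (h);
}

int
main(void)
{
	size_t len = (size_t)BLK * NBLK;
	unsigned char *run1 = malloc(len);      /* "yesterday's" backup stream    */
	unsigned char *run2 = malloc(len + 1);  /* same data with 1 byte inserted */
	int i, match = 0;

	srand(42);
	for (size_t j = 0; j < len; j++)
		run1[j] = (unsigned char)rand();

	run2[0] = 0xAA;                 /* the inserted byte...              */
	memcpy(run2 + 1, run1, len);    /* ...shifts everything that follows */

	for (i = 0; i < NBLK; i++)
		if (blkhash(run1 + (size_t)i * BLK) ==
		    blkhash(run2 + (size_t)i * BLK))
			match++;

	/* typically prints "0 of 64": no fixed blocks line up after the shift */
	printf("identical fixed-size blocks after a 1-byte insert: %d of %d\n",
	    match, NBLK);

	free(run1);
	free(run2);
	return (0);
}

A variable-length engine cuts chunk boundaries based on the content itself
(typically a rolling hash), so after the inserted byte the boundaries fall
back into the same places and the rest of the stream dedups again; fixed
blocks cannot recover the alignment, which is why compression helped here
but dedup did not.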

We recently ditched the 7000 series boxes in favor of a much pricier
competitor.  It's about double the cost, but dedup ratios are better
than 20x.  Personally I love ZFS and I use it in many other places,
but we were very disappointed with the dedup ability for that type of
data.  We went to Sun with our problems and they ran it up the food
chain and word came back down from the developers that this was the
way it was designed, and it's not going to change anytime soon.  The
type of files that Networker writes out just are not friendly at all
with the dedup mechanism used in ZFS.  They gave us a few ideas and
things to tweak in Networker, but no measurable gains ever came from
any of the tweaks.

If you are considering a home-grown ZFS solution for budget reasons, go
for it - just do yourself a favor and save yourself the overhead of
"trying" to dedup.  When we disabled dedup on our 7000 series boxes,
everything worked great and compression was fine with next to no
overhead.  Unfortunately, we NEEDED at least a 10x ratio to keep the 3
week backups we were trying to do.  We couldn't even keep a 1 week
backup with the dedup performance of ZFS.

If you need more details, I'm happy to help.  We went through months
of pain trying to make it work and it just doesn't for Networker data.

best wishes
Daniel

2010/8/18 Sigbjorn Lie :
> Hi,
>
> We are considering using a ZFS based storage as a staging disk for Networker. 
> We're aiming at
> providing enough storage to be able to keep 3 months worth of backups on 
> disk, before it's moved
> to tape.
>
> To provide storage for 3 months of backups, we want to utilize the dedup 
> functionality in ZFS.
>
> I've searched around for these topics and found no success stories, however 
> those who has tried
> did not mention if they had attempted to change the blocksize to any smaller 
> than the default of
> 128k.
>
> Does anyone have any experience with this kind of setup?
>
>
> Regards,
> Sigbjorn
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] System hangs during zfs send

2010-08-25 Thread Bryan Leaman
More info:

> ::zio_state
ADDRESS  TYPE  STAGEWAITER

ff02cf248c88 NULL  OPEN -
ff02e3dec348 NULL  OPEN -
ff02e3e6d6a0 NULL  OPEN -
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] System hangs during zfs send

2010-08-25 Thread David Blasingame Oracle

What does ::zio_state show?

Dave

On 08/25/10 07:41, Bryan Leaman wrote:

Hi, I've been following these forums for a long time but this is my first post. 
 I'm looking for some advice on debugging an issue.  I've been looking at all 
the bug reports and updates though b146 but I can't find a good match.  I tried 
the fix for 6937998 but it didn't help.

Running Nexenta NCP3 and when I attempt to do a simple zfs send of my root pool 
(syspool) > /dev/null, it sends all the volume streams but then all IO hangs at 
the moment the send seems like it should be completed.  I have to restart the box 
at this point.

The following mdb output is from the hung system (from a savecore -L).  I'm 
still learning my way around mdb and kernel debugging so any suggestions on how 
to track this down would be really appreciated.  It seems like it's stuck 
waiting for txg_wait_synced.

> ::ptree
   ff02e8d97718  sshd
ff02e74c3570  sshd
 ff02e8d95e48  tcsh
  ff02d1cc3e20  bash
   ff02e7f4a720  bash
ff02e6bec900  zfs

> ff02e6bec900::walk thread
ff02d1954720

> ff02d1954720::threadlist -v
ADDR PROC  LWP CLS PRIWCHAN
ff02d1954720 ff02e6bec900 ff02cf543850   1  60 ff02cd54054a
  PC: _resume_from_idle+0xf1CMD: zfs send -Rvp sysp...@20100824
  stack pointer for thread ff02d1954720: ff0010b6ca90
  [ ff0010b6ca90 _resume_from_idle+0xf1() ]
swtch+0x145()
cv_wait+0x61()
txg_wait_synced+0x7c()
dsl_sync_task_group_wait+0xee()
dsl_dataset_user_release+0x101()
zfs_ioc_release+0x51()
zfsdev_ioctl+0x177()
cdev_ioctl+0x45()
spec_ioctl+0x5a()
fop_ioctl+0x7b()
ioctl+0x18e()
sys_syscall32+0xff()

> ff02d1954720::findstack -v
stack pointer for thread ff02d1954720: ff0010b6ca90
[ ff0010b6ca90 _resume_from_idle+0xf1() ]
  ff0010b6cac0 swtch+0x145()
  ff0010b6caf0 cv_wait+0x61(ff02cd54054a, ff02cd540510)
  ff0010b6cb40 txg_wait_synced+0x7c(ff02cd540380, 9291)
  ff0010b6cb80 dsl_sync_task_group_wait+0xee(ff02d0b1a868)
  ff0010b6cc10 dsl_dataset_user_release+0x101(ff02d1336000,
  ff02d1336400, ff02d1336c00, 1)
  ff0010b6cc40 zfs_ioc_release+0x51(ff02d1336000)
  ff0010b6ccc0 zfsdev_ioctl+0x177(b6, 5a32, 8045660, 13,
  ff02cd646588, ff0010b6cde4)
  ff0010b6cd00 cdev_ioctl+0x45(b6, 5a32, 8045660, 13,
  ff02cd646588, ff0010b6cde4)
  ff0010b6cd40 spec_ioctl+0x5a(ff02d17c3180, 5a32, 8045660, 13,
  ff02cd646588, ff0010b6cde4, 0)
  ff0010b6cdc0 fop_ioctl+0x7b(ff02d17c3180, 5a32, 8045660, 13,
  ff02cd646588, ff0010b6cde4, 0)
  ff0010b6cec0 ioctl+0x18e(3, 5a32, 8045660)
  ff0010b6cf10 sys_syscall32+0xff()

> ff02cd540380::print dsl_pool_t dp_tx
dp_tx = {
dp_tx.tx_cpu = 0xff02cd540680
dp_tx.tx_sync_lock = {
_opaque = [ 0 ]
}
dp_tx.tx_open_txg = 0x9292
dp_tx.tx_quiesced_txg = 0
dp_tx.tx_syncing_txg = 0x9291
dp_tx.tx_synced_txg = 0x9290
dp_tx.tx_sync_txg_waiting = 0x9292
dp_tx.tx_quiesce_txg_waiting = 0x9292
dp_tx.tx_sync_more_cv = {
_opaque = 0
}
dp_tx.tx_sync_done_cv = {
_opaque = 0x2
}
dp_tx.tx_quiesce_more_cv = {
_opaque = 0x1
}
dp_tx.tx_quiesce_done_cv = {
_opaque = 0
}
dp_tx.tx_timeout_cv = {
_opaque = 0
}
dp_tx.tx_exit_cv = {
_opaque = 0
}
dp_tx.tx_threads = 0x2
dp_tx.tx_exiting = 0
dp_tx.tx_sync_thread = 0xff000fa05c60
dp_tx.tx_quiesce_thread = 0xff000f9fcc60
dp_tx.tx_commit_cb_taskq = 0

> ff02cd540380::print dsl_pool_t dp_tx.tx_sync_thread
dp_tx.tx_sync_thread = 0xff000fa05c60

> 0xff000fa05c60::findstack -v
stack pointer for thread ff000fa05c60: ff000fa05860
[ ff000fa05860 _resume_from_idle+0xf1() ]
  ff000fa05890 swtch+0x145()
  ff000fa058c0 cv_wait+0x61(ff000fa05e3e, ff000fa05e40)
  ff000fa05900 delay_common+0xab(1)
  ff000fa05940 delay+0xc4(1)
  ff000fa05960 dnode_special_close+0x28(ff02e8aa2050)
  ff000fa05990 dmu_objset_evict+0x160(ff02e5b91100)
  ff000fa05a20 dsl_dataset_user_release_sync+0x52(ff02e000b928,
  ff02d0b1a868, ff02e5b9c6e0)
  ff000fa05a70 dsl_sync_task_group_sync+0xf3(ff02d0b1a868,
  ff02e5b9c6e0)
  ff000fa05af0 dsl_pool_sync+0x1ec(ff02cd540380, 9291)
  ff000fa05ba0 spa_sync+0x37b(ff02cdd40b00, 9291)
  ff000fa05c40 txg_sync_thread+0x247(ff02cd540380)
  ff000fa05c50 thread_start+8()

> ::spa
ADDR STATE NAME
ff02cdd40b00ACTIVE syspool

> ff02cdd40b00::print spa_t spa_dsl_p

[zfs-discuss] System hangs during zfs send

2010-08-25 Thread Bryan Leaman
Hi, I've been following these forums for a long time but this is my first post. 
 I'm looking for some advice on debugging an issue.  I've been looking at all 
the bug reports and updates through b146 but I can't find a good match.  I tried 
the fix for 6937998 but it didn't help.

Running Nexenta NCP3 and when I attempt to do a simple zfs send of my root pool 
(syspool) > /dev/null, it sends all the volume streams but then all IO hangs at 
the moment the send seems like it should be completed.  I have to restart the 
box at this point.

The following mdb output is from the hung system (from a savecore -L).  I'm 
still learning my way around mdb and kernel debugging so any suggestions on how 
to track this down would be really appreciated.  It seems like it's stuck 
waiting for txg_wait_synced.

> ::ptree
   ff02e8d97718  sshd
ff02e74c3570  sshd
 ff02e8d95e48  tcsh
  ff02d1cc3e20  bash
   ff02e7f4a720  bash
ff02e6bec900  zfs

> ff02e6bec900::walk thread
ff02d1954720

> ff02d1954720::threadlist -v
ADDR PROC  LWP CLS PRIWCHAN
ff02d1954720 ff02e6bec900 ff02cf543850   1  60 ff02cd54054a
  PC: _resume_from_idle+0xf1CMD: zfs send -Rvp sysp...@20100824
  stack pointer for thread ff02d1954720: ff0010b6ca90
  [ ff0010b6ca90 _resume_from_idle+0xf1() ]
swtch+0x145()
cv_wait+0x61()
txg_wait_synced+0x7c()
dsl_sync_task_group_wait+0xee()
dsl_dataset_user_release+0x101()
zfs_ioc_release+0x51()
zfsdev_ioctl+0x177()
cdev_ioctl+0x45()
spec_ioctl+0x5a()
fop_ioctl+0x7b()
ioctl+0x18e()
sys_syscall32+0xff()

> ff02d1954720::findstack -v
stack pointer for thread ff02d1954720: ff0010b6ca90
[ ff0010b6ca90 _resume_from_idle+0xf1() ]
  ff0010b6cac0 swtch+0x145()
  ff0010b6caf0 cv_wait+0x61(ff02cd54054a, ff02cd540510)
  ff0010b6cb40 txg_wait_synced+0x7c(ff02cd540380, 9291)
  ff0010b6cb80 dsl_sync_task_group_wait+0xee(ff02d0b1a868)
  ff0010b6cc10 dsl_dataset_user_release+0x101(ff02d1336000,
  ff02d1336400, ff02d1336c00, 1)
  ff0010b6cc40 zfs_ioc_release+0x51(ff02d1336000)
  ff0010b6ccc0 zfsdev_ioctl+0x177(b6, 5a32, 8045660, 13,
  ff02cd646588, ff0010b6cde4)
  ff0010b6cd00 cdev_ioctl+0x45(b6, 5a32, 8045660, 13,
  ff02cd646588, ff0010b6cde4)
  ff0010b6cd40 spec_ioctl+0x5a(ff02d17c3180, 5a32, 8045660, 13,
  ff02cd646588, ff0010b6cde4, 0)
  ff0010b6cdc0 fop_ioctl+0x7b(ff02d17c3180, 5a32, 8045660, 13,
  ff02cd646588, ff0010b6cde4, 0)
  ff0010b6cec0 ioctl+0x18e(3, 5a32, 8045660)
  ff0010b6cf10 sys_syscall32+0xff()

> ff02cd540380::print dsl_pool_t dp_tx
dp_tx = {
dp_tx.tx_cpu = 0xff02cd540680
dp_tx.tx_sync_lock = {
_opaque = [ 0 ]
}
dp_tx.tx_open_txg = 0x9292
dp_tx.tx_quiesced_txg = 0
dp_tx.tx_syncing_txg = 0x9291
dp_tx.tx_synced_txg = 0x9290
dp_tx.tx_sync_txg_waiting = 0x9292
dp_tx.tx_quiesce_txg_waiting = 0x9292
dp_tx.tx_sync_more_cv = {
_opaque = 0
}
dp_tx.tx_sync_done_cv = {
_opaque = 0x2
}
dp_tx.tx_quiesce_more_cv = {
_opaque = 0x1
}
dp_tx.tx_quiesce_done_cv = {
_opaque = 0
}
dp_tx.tx_timeout_cv = {
_opaque = 0
}
dp_tx.tx_exit_cv = {
_opaque = 0
}
dp_tx.tx_threads = 0x2
dp_tx.tx_exiting = 0
dp_tx.tx_sync_thread = 0xff000fa05c60
dp_tx.tx_quiesce_thread = 0xff000f9fcc60
dp_tx.tx_commit_cb_taskq = 0

> ff02cd540380::print dsl_pool_t dp_tx.tx_sync_thread
dp_tx.tx_sync_thread = 0xff000fa05c60

> 0xff000fa05c60::findstack -v
stack pointer for thread ff000fa05c60: ff000fa05860
[ ff000fa05860 _resume_from_idle+0xf1() ]
  ff000fa05890 swtch+0x145()
  ff000fa058c0 cv_wait+0x61(ff000fa05e3e, ff000fa05e40)
  ff000fa05900 delay_common+0xab(1)
  ff000fa05940 delay+0xc4(1)
  ff000fa05960 dnode_special_close+0x28(ff02e8aa2050)
  ff000fa05990 dmu_objset_evict+0x160(ff02e5b91100)
  ff000fa05a20 dsl_dataset_user_release_sync+0x52(ff02e000b928,
  ff02d0b1a868, ff02e5b9c6e0)
  ff000fa05a70 dsl_sync_task_group_sync+0xf3(ff02d0b1a868,
  ff02e5b9c6e0)
  ff000fa05af0 dsl_pool_sync+0x1ec(ff02cd540380, 9291)
  ff000fa05ba0 spa_sync+0x37b(ff02cdd40b00, 9291)
  ff000fa05c40 txg_sync_thread+0x247(ff02cd540380)
  ff000fa05c50 thread_start+8()

> ::spa
ADDR STATE NAME
ff02cdd40b00ACTIVE syspool

> ff02cdd40b00::print spa_t spa_dsl_pool->dp_tx.tx_sync_thread|::findstack -v
stack pointer for thread ff000fa05c60: ff000fa05860
[ ff000fa05860 _resume_from_idle+0xf

Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-25 Thread Paul Kraus
On Wed, Aug 25, 2010 at 3:27 AM, Sigbjorn Lie  wrote:
> Wow, not bad!
>
> What is your CPU penalty for enabling compression?

Not noticeable on the server side. The NBU servers are M4000 with 4
dual core CPUs, so we have (effectively) 16 CPUs and 16 GB of RAM. The
load does climb to the 7 to 8 region when the server is busiest (over
100 backup jobs running and it is the master server). This server is
overkill for NBU, but it was sized to be a V490 but by the time we
ordered it the V490 was no longer shipping so we changed over to the
M4000 with the same configuration.

We do not have good data on the CPU penalty for client side dedupe
yet. Based on recent NBU 7.0 issues, we will probably wait for 7.1 to
upgrade production.

>> We are using Netbackup with ZFS Disk Stage under Solaris 10U8,
>> no dedupe but are getting 1.9x compression ratio :-)

>> The latest release of NBU (7.0) supports both client side and
>> server side dedupe (at additional cost ;-). We are using it in test for 
>> backing up remote servers
>> across slow WAN links with very good results.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] (preview) Whitepaper - ZFS Pools Explained - feedback welcome

2010-08-25 Thread StorageConcepts
Hello list, 

Having followed this list for more than a year, I feel it has been a great way 
to get insights into ZFS. Thank you all for contributing.

Over the last months I have been writing a little "whitepaper" trying to 
consolidate the knowledge collected here. It has now reached a "beta" state and 
I would like to share the result with you. I call it

  - Whitepaper: ZFS Pooling Explained

It can be found here as a preview:

http://www.storageconcepts.de/uploads/media/StorageConcepts_Whitepaper_-_ZFS_Pooling_explained_-1e.pdf

Link here: http://www.storageconcepts.de/produkte/nexentastor/videolog/

Feedback, comments and corrections are greatly appreciated.

Regards, 
Robert
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-25 Thread Sigbjorn Lie
Wow, not bad!

What is your CPU penalty for enabling compression?


Sigbjorn



On Wed, August 18, 2010 14:11, Paul Kraus wrote:
> On Wed, Aug 18, 2010 at 7:51 AM, Peter Tribble  
> wrote:
>
>
>> I tried this with NetBackup, and decided against it pretty rapidly.
>> Basically, we
>> got hardly any dedup at all. (Something like 3%; compression gave us much 
>> better results.) Tiny
>> changes in block alignment completely ruin the possibility of significant 
>> benefit.
>
> We are using Netbackup with ZFS Disk Stage under Solaris 10U8,
> no dedupe but are getting 1.9x compression ratio :-)
>
>> Using ZFS dedup is logically the wrong place to do this; you want a decent
>> backup system that doesn't generate significant amounts of duplicate data in 
>> the first place.
>
> The latest release of NBU (7.0) supports both client side and
> server side dedupe (at additional cost ;-). We are using it in test for 
> backing up remote servers
> across slow WAN links with very good results.
>
> --
> {1-2-3-4-5-6-7-}
> Paul Kraus
> -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
> -> Sound Coordinator, Schenectady Light Opera Company (
> http://www.sloctheater.org/ )
> -> Technical Advisor, RPI Players
>
>


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-25 Thread Sigbjorn Lie
Hi,

What sort of compression ratio do you get?


Sigbjorn


On Wed, August 18, 2010 12:59, Hans Foertsch wrote:
> Hello,
>
>
> we use ZFS on Solaris 10u8 as a backup to disk solution with EMC Networker.
>
> We use the standard recordsize 128k and zfs compression.
>
>
> Dedup we can't use, because of Solaris 10.
>
>
> But we working on to use more feature and look for more improvements...
>
>
> But we are happy with this solution.
>
>
> Hans
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss