Re: [zfs-discuss] ZFS Deduplication Replication

2009-11-24 Thread Peter Brouwer, Principal Storage Architect




Hi Darren,

Could you post the -D part of the man pages? I have no access to a
system (yet) with the latest man pages.
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
has not been updated yet.

Regards
Peter

Darren J Moffat wrote:
Steven Sim wrote:

Hello;


Dedup on ZFS is an absolutely wonderful feature!


Is there a way to conduct dedup replication across boxes, from one
dedup ZFS dataset to another?

Pass the '-D' argument to 'zfs send'.
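For example, a minimal end-to-end sketch (pool, dataset and host names
are placeholders):

  # zfs snapshot tank/data@snap1
  # zfs send -D tank/data@snap1 | ssh host2 zfs recv -F backup/data

Note that '-D' deduplicates the stream itself; whether the received
dataset is stored deduplicated depends on its own 'dedup' property.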
  
  


-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-05-31 Thread Peter Brouwer, Principal Storage Architect




Hello Jorgen

I used a VIA SN18000 ITX board housed in a Chenbro mini case
(http://www.chenbro.com/corporatesite/products_detail.php?sku=79)
that can hold up to four hot-pluggable SATA drives, a DVD drive and an
internal IDE device. It can also boot from flash via IDE, though I have
not tried that yet. Neat little setup.

Peter


Jorgen Lundman wrote:

Re-surfacing an old thread. I was wondering myself if there are any
home-use commercial NAS devices with ZFS. I did find the Thecus 7700,
but it appears to come with Linux and to use ZFS via FUSE, which I
(perhaps unjustly) don't feel comfortable with :)

Perhaps we will start to see more home NAS devices with ZFS options, or
at least ones able to run EON?
Joe S wrote:

In the last few weeks, I've seen a number of new NAS devices released
from companies like HP, QNAP, VIA, LaCie, Buffalo, Iomega,
Cisco/Linksys, etc. Most of these are powered by Intel Celeron, Intel
Atom, AMD Sempron, Marvell Orion, or VIA C7 chips. I've also noticed
that most allow a maximum of 1 or 2 GB of RAM.

Is it likely that any of these will run OpenSolaris?

Has anyone else tried?

http://www.via.com.tw/en/products/embedded/nsd7800/
http://www.hp.com/united-states/digitalentertainment/mediasmart/serverdemo/index-noflash.html
http://www.qnap.com/pro_detail_feature.asp?p_id=108

I'd prefer one of these to the huge PC I have at home.



-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







Re: [zfs-discuss] Best practice for swap and root pool

2008-11-27 Thread Peter Brouwer, Principal Storage Architect






dick hoogendijk wrote:

On Thu, 27 Nov 2008 12:58:20 +
Chris Ridd [EMAIL PROTECTED] wrote:

I'm not 100% convinced it'll boot if half the mirror's not there,

Believe me, it will (been there, done that). You -have- to make sure,
though, that installgrub has been run on both disks, and that your BIOS
is able to boot from the other disk.

You can always try it out by pulling a plug from one of the disks ;-)

Or, less drastically, use the BIOS to point at one of the disks in the
mirror to boot from.
If you have used installgrub to set up stage1 and stage2 on both boot
drives, you can boot from either one.
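On x86 that amounts to something like this (the disk device names are
placeholders for your two boot disks):

  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0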


-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







Re: [zfs-discuss] ZFS for write-only media?

2008-04-22 Thread Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems






Ralf Bertling wrote:

I am not an expert, but the MTTDL is in thousands of years when using
raidz2 with a hot spare and regular scrubbing.
If you add zfs send/receive and geographically dislocated servers,
this may be better than optical media, because you detect the errors
early.

Two things to be aware of with this type of statistical figure (also
true for MTBF). First, it is an average: it gives no guarantee that a
catastrophic event will not happen in the next hour. Secondly, the
figure is defined at the subsystem level; if the box goes up in smoke,
the thousands-of-years MTTDL is of no use!
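As a back-of-envelope illustration (assuming roughly constant,
memoryless failure rates): P(loss within time t) ~= 1 - exp(-t/MTTDL),
which is about t/MTTDL for t much smaller than MTTDL. So an MTTDL of
10,000 years still leaves roughly a 1-in-10,000 chance of losing data
in any given year, and says nothing about a fire or flood taking out
the whole box.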

So the moral of the story is to always plan for a second copy of your
data if you want to design for resilience from a data-set point of view.
-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







Re: [zfs-discuss] nfs and smb performance

2008-03-27 Thread Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems




Hello abs

Would you be able to repeat the same tests with the in-kernel CIFS
service in ZFS instead of using Samba?
It would be interesting to see how the kernel CIFS performance compares
with Samba's.
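Something along these lines should enable the in-kernel server and
share a dataset (the dataset name is a placeholder):

  # svcadm enable -r smb/server
  # zfs set sharesmb=on tank/fs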

Peter

abs wrote:
hello all,
i have two xraids connected via fibre to a PowerEdge 2950. the two
xraids are configured with two raid5 volumes each, giving me a total of
four raid5 volumes, which are striped across in zfs. the read and write
speeds local to the machine are as expected, but i have noticed some
performance hits in read and write speed over nfs and samba.

here is the observation:

each filesystem is shared via nfs as well as samba.
i am able to mount via nfs and samba on a Mac OS 10.5.2 client.
i am able to mount only via nfs on a Mac OS 10.4.11 client. (there
seems to be an authentication/encryption issue between the 10.4.11
client and the solaris box in this scenario; i know this is a bug on
the client side.)

when writing a file via nfs from the 10.5.2 client, the speeds are
60-70 MB/sec.
when writing a file via samba from the 10.5.2 client, the speeds are
30-50 MB/sec.

when writing a file via nfs from the 10.4.11 client, the speeds are
20-30 MB/sec.

when writing a file via samba from a Windows XP client, the speeds are
30-40 MB/sec.

i know there is an implementation difference between nfs and samba on
the Mac OS 10.4.11 and 10.5.2 clients, but that still does not explain
the Windows scenario.

i was wondering if anyone else is experiencing similar issues, and
whether there is some tuning i can do, or whether i am just missing
something. thanks in advance.

cheers,
abs


-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







[zfs-discuss] boot from zfs, mirror config issue

2008-03-26 Thread Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems




Hello,

I ran into the following issue when configuring a system with ZFS for
root.
The restriction is that a root pool can only be a single disk or a
mirror.
Trying to set bootfs on a zpool that does not satisfy this criterion
fails, which is good.

However, adding another mirror set to the pool, as in 'zpool add
rootpool mirror disk3 disk4', is not blocked.
Once this command is executed you are toast, as you cannot remove the
extra mirror.
Executing 'zpool set bootfs' now fails.

If this is general zfs behaviour, would it not be better for zpool to
check whether bootfs is set and refuse any 'zpool add' command that
compromises the boot restrictions of the pool that bootfs is set for?
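A minimal sketch of the sequence as I understand it (the disk names and
boot dataset are placeholders):

  # zpool create rootpool mirror disk1 disk2
  # zpool set bootfs=rootpool/ROOT/mybe rootpool   # succeeds
  # zpool add rootpool mirror disk3 disk4          # not blocked
  # zpool set bootfs=rootpool/ROOT/mybe rootpool   # now fails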

-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







[zfs-discuss] zpool automount issue

2008-03-26 Thread Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems




Hello,

I ran into an issue with the automount feature of zpool.
The default behaviour is for the pool and the filesystems in it to be
mounted automatically, unless you set the mountpoint property with
'zfs set mountpoint=[legacy|/yourname]'.

When I used 'export' as the pool name I could not get it to automount.
I wonder whether 'export' is a reserved name, as it is part of the
zpool command syntax.
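A quick way to reproduce, and a possible workaround assuming an
explicit mountpoint does mount (the disk name is a placeholder):

  # zpool create export disk1
  # zfs get mounted,mountpoint export
  # zfs set mountpoint=/export export   # set the mountpoint explicitly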

Anyone seen this too?

I used the Nexenta OpenSolaris derivative.
-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l




