Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-26 Thread Arne Jansen

Geoff Nordli wrote:


> Is this the one
> (http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-pro-series-sata-ii-2-5--ssd-.html)
> with the built in supercap?



Yes.

Geoff 




Re: [zfs-discuss] ZFS on Ubuntu

2010-06-26 Thread Ben Miles
What supporting applications are there on Ubuntu for RAIDZ?


Re: [zfs-discuss] ZFS on Ubuntu

2010-06-26 Thread Freddie Cash
On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles merloc...@hotmail.com wrote:
> What supporting applications are there on Ubuntu for RAIDZ?

None.  Ubuntu doesn't officially support ZFS.

You can kind of make it work using the ZFS-FUSE project.  But it's not
stable, nor recommended.

-- 
Freddie Cash
fjwc...@gmail.com


[zfs-discuss] Accessing a zpool from another system

2010-06-26 Thread Andrej Podzimek

Hello,

I have a problem after accessing a zpool containing a boot environment from 
another system. When the zpool is accessed (imported, mounted and exported 
again) by another system, the device addresses stored in its metadata are 
overwritten. Consequently, the pool is no longer bootable, and attempting to 
boot it causes a kernel panic. The only solution is to boot from a live CD and 
do a zpool import/export in exactly the same hardware environment, so that the 
correct device addresses are restored.
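
(For clarity, the recovery procedure from the live CD is essentially just 
this, with the pool name and altroot being placeholders:

  # zpool import -f -R /mnt rpool
  # zpool export rpool
  # reboot

After that the pool boots normally again.)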

Can this be avoided? I need the zpool to be both directly bootable and 
accessible from another (virtual) machine. How can a zpool be imported and 
mounted without changing the device addresses stored in its metadata? I know 
those hardware characteristics could be important in RAIDZ scenarios, but this 
is just one partition...

Andrej





[zfs-discuss] Recomendations for Storage Pool Config

2010-06-26 Thread tie...@lotas-smartman.net
Good morning all.

This question has probably popped up before, but maybe not in this exact way...

I am planning on building a SAN for my home media centre, and already have some 
of the RAID cards I need for the build. I will be ordering the case soon, and 
then the drives. The cards I have are two 8-port PCI-Express cards (a Dell PERC 
5 and an Adaptec card...). The case will have 20 hot-swap SAS/SATA drive bays, 
and I will be adding a third RAID controller to cover the full 20 drives.

I have read something about setting up redundancy across the RAID 
controllers, i.e. having zpools spanning multiple controllers. Given that I 
won't be using the on-board RAID features of the cards, I am wondering how 
this should be set up...

I was thinking of four RAIDZ2 zpools, each with its drives laid out 2+2+1 
across the controllers. This way, I could lose a controller and not lose any 
data from the pools... But is this theory correct? If I were to use 2 TB 
drives, each zpool would be 10 TB raw and 6 TB usable, giving me a total of 
40 TB raw and 24 TB usable...
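
(Spelling out my arithmetic, in case I have it wrong:

  per vdev:  5 drives x 2 TB = 10 TB raw;  (5 - 2) x 2 TB = 6 TB usable in RAIDZ2
  4 vdevs:   4 x 10 TB = 40 TB raw;  4 x 6 TB = 24 TB usable)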

Is this overkill? Should I be worrying about losing a controller?

Thanks in advance.

--Tiernan


Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-06-26 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Thomas Maier-Komor
>
> you can probably improve overall performance by using mbuffer [1] to
> stream the data over the network. At least some people have reported
> increased performance. mbuffer will buffer the datastream and disconnect
> zfs send operations from network latencies.
>
> Get it here:
> original source: http://www.maier-komor.de/mbuffer.html
> binary package:  http://www.opencsw.org/packages/CSWmbuffer/

mbuffer is also available in opencsw / blastwave.  IMHO, easier and faster
and better than building things from source, most of the time.
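
A typical invocation looks something like this (host, dataset, and port are 
placeholders; tune -s and -m to taste):

  receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup
  sender#   zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090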



Re: [zfs-discuss] ZFS on Ubuntu

2010-06-26 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ben Miles
>
> What supporting applications are there on Ubuntu for RAIDZ?

I see; "supporting applications" is just confusing English, because your 
filesystem isn't an application.  I think you're just asking "How can I do 
RAID on Ubuntu?"  That question might be more appropriate on an Ubuntu list, 
but here's a quick answer anyway:

You can do zfs-fuse.  It's not as good as having ZFS included natively in 
your OS.

Aside from that, there is no raidz available in Ubuntu, or any Linux. 
(Well, at least theoretically you could use the Lawrence Livermore National 
Laboratory Linux port of ZFS, http://wiki.github.com/behlendorf/zfs/, which 
will let you create raidz zpools and then format a zvol with ext3/ext4.) 
But that port is missing a lot of ZFS functionality, and it is extremely new.
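
If you really wanted to go that route, it would look roughly like this (device 
names are placeholders, and the zvol device path may differ on that young port):

  # zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # zfs create -V 500G tank/vol
  # mkfs.ext4 /dev/zvol/tank/vol
  # mount /dev/zvol/tank/vol /mnt

But again, you lose most of what makes zfs interesting.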

If you don't need raidz (for example, if raid5 is OK), see man pvcreate, man 
vgcreate, man lvcreate.  It's not as good as zfs or raidz, but it does 
support software raid5.  But why would you want to do software raid5 on 
anything that doesn't have zfs?  If you're not using zfs, you should get a 
hardware raid controller with writeback cache and a BBU.
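
On Linux the raid5 layer itself usually comes from md, with LVM on top; a 
minimal sketch (device names are placeholders):

  # mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # pvcreate /dev/md0
  # vgcreate vg0 /dev/md0
  # lvcreate -L 500G -n data vg0
  # mkfs.ext4 /dev/vg0/data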



Re: [zfs-discuss] Recomendations for Storage Pool Config

2010-06-26 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole
>
> I have read something about trying to setup redundancy with the RAID
> controllers, so having zpools spanning multiple controllers. Given I
> won't be using the on-board RAID features of the cards, I am wondering
> how this should be setup.

Option 1:
Suppose you have controllers A, B, and C.
Suppose you have 8 disks per controller: 0, 1, ..., 7.
Configure your controllers for JBOD mode (so the OS can see each individual 
disk), or configure each individual disk as a 1-disk raid0 or raid1, which is 
essentially the same as JBOD.  The point is, the OS needs to see each 
individual disk.

mirror A0 B0 C0 mirror A1 B1 C1 mirror A2 B2 C2 ...

You will have the total usable capacity of 8 disks, and 16 disks of redundancy.

Option 2:

raidz A0 B0 C0 raidz A1 B1 C1 raidz A2 B2 C2 ...

You will have the total usable capacity of 16 disks, and 8 disks of redundancy.
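
Expressed as zpool commands (A0..C7 stand in for the real device names; only 
the first two vdevs are spelled out):

  # option 1
  zpool create tank mirror A0 B0 C0 mirror A1 B1 C1 ... mirror A7 B7 C7

  # option 2
  zpool create tank raidz A0 B0 C0 raidz A1 B1 C1 ... raidz A7 B7 C7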

Option 3:

With anything less than 8 disks of redundancy, you won't be protected against 
controller failure.  It's a calculated risk; a crashed controller would take 
the zpool offline and probably crash the OS.  I don't know the probability of 
data loss in that scenario, but I know none of this should be a problem if 
you *do* have the controller redundancy.



Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-26 Thread David Magda


On Jun 26, 2010, at 02:09, Arne Jansen wrote:

> Geoff Nordli wrote:
>
>> Is this the one
>> (http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-pro-series-sata-ii-2-5--ssd-.html)
>> with the built in supercap?
>
> Yes.


Crikey. Who's the genius who thinks of these URLs?


Re: [zfs-discuss] ZFS on Ubuntu

2010-06-26 Thread Roy Sigurd Karlsbakk
- Original Message -
> On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles merloc...@hotmail.com wrote:
>> What supporting applications are there on Ubuntu for RAIDZ?
>
> None.  Ubuntu doesn't officially support ZFS.
>
> You can kind of make it work using the ZFS-FUSE project.  But it's not
> stable, nor recommended.

FYI, zfs-fuse is in 10.04 by default
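
So on 10.04 it should be as simple as (untested here; device names are just 
examples):

  $ sudo apt-get install zfs-fuse
  $ sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd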

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.


Re: [zfs-discuss] Recomendations for Storage Pool Config

2010-06-26 Thread Roy Sigurd Karlsbakk
> I am planning on building a SAN for my home media centre, and already have
> some of the RAID cards I need for the build. I will be ordering the case
> soon, and then the drives. The cards I have are two 8-port PCI-Express
> cards (a Dell PERC 5 and an Adaptec card...). The case will have 20 hot-swap
> SAS/SATA drive bays, and I will be adding a third RAID controller to cover
> the full 20 drives.

First, you won't need a RAID controller. Just get something cheap with lots of 
ports. It'll suffice - ZFS will do the smart stuff.

> I was thinking of four RAIDZ2 zpools, each with its drives laid out 2+2+1
> across the controllers. This way, I could lose a controller and not lose
> any data from the pools... But is this theory correct? If I were to use
> 2 TB drives, each zpool would be 10 TB raw and 6 TB usable, giving me a
> total of 40 TB raw and 24 TB usable...

I don't see the point of using more than one pool. Just use more vdevs in the 
same pool; redundancy will be just as good as with multiple pools, and a single 
pool is far more flexible.
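
For your 20 drives that would be one pool of four RAIDZ2 vdevs, something like 
this (disk names are placeholders):

  # zpool create tank \
      raidz2 d1 d2 d3 d4 d5 \
      raidz2 d6 d7 d8 d9 d10 \
      raidz2 d11 d12 d13 d14 d15 \
      raidz2 d16 d17 d18 d19 d20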
 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.


Re: [zfs-discuss] ZFS on Ubuntu

2010-06-26 Thread Ben Miles
I tried to post this question on the Ubuntu forum.
Within 30 minutes my post was on the second page of new posts...

Yeah, I'm really not down with using Ubuntu on my server here. But I may be 
forced to.


[zfs-discuss] Creating a dataset from a directory?

2010-06-26 Thread Roy Sigurd Karlsbakk
Hi all

I've run into this a few times. A directory is filled with scientific data, 
and a subdirectory of that is used as a workspace with hundreds of gigabytes 
of temporary data. When automatic snapshotting is enabled, this obviously eats 
disk space. Now, the obvious way to solve this is to never use those areas as 
workspaces, and to teach the users to use separate areas for that, but then, 
the world is not always a nice place.

Would it be hard to add a way to create a new dataset from an existing 
directory, splitting off that directory with its subdirectories and files, and 
perhaps its snapshots as well?
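
Today the closest I can get is the manual shuffle, roughly like this (dataset 
and directory names are made up, sketch only):

  # zfs create tank/data/work.new
  # rsync -a /tank/data/work/ /tank/data/work.new/
  # rm -rf /tank/data/work
  # zfs rename tank/data/work.new tank/data/work

which copies hundreds of gigabytes around and loses the snapshot history for 
that subtree, which is exactly what I'd like to avoid.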
 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.


Re: [zfs-discuss] ZFS on Ubuntu

2010-06-26 Thread Roy Sigurd Karlsbakk
- Original Message -
> I tried to post this question on the Ubuntu forum.
> Within 30 minutes my post was on the second page of new posts...
>
> Yeah, I'm really not down with using Ubuntu on my server here. But I may
> be forced to.

As others have suggested, perhaps you should try FreeBSD?
 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er 
et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
relevante synonymer på norsk.


[zfs-discuss] /bin/cp vs /usr/gnu/bin/cp

2010-06-26 Thread Oliver Seidel
Hello,

I came across this blog post:

http://kevinclosson.wordpress.com/2007/03/15/copying-files-on-solaris-slow-or-fast-its-your-choice/

and would like to hear from you performance gurus how this 2007 article 
relates to the 2010 ZFS implementation. What should I use, and why?

Thanks,

Oliver


Re: [zfs-discuss] /bin/cp vs /usr/gnu/bin/cp

2010-06-26 Thread Dennis Clarke
> Hello,
>
> I came across this blog post:
>
> http://kevinclosson.wordpress.com/2007/03/15/copying-files-on-solaris-slow-or-fast-its-your-choice/
>
> and would like to hear from you performance gurus how this 2007 article
> relates to the 2010 ZFS implementation. What should I use, and why?

[ WARNING : red herring non sequitur follows ]

My PATH looks like so:

$ echo $PATH
/opt/SUNWspro/bin:/usr/xpg6/bin:/usr/xpg4/bin:/usr/ccs/bin:/usr/bin:/usr/sbin:/bin:/sbin

Thus I have no such issues with GNU vs. Open Group/POSIX-compliant tools.

Dennis 


Re: [zfs-discuss] /bin/cp vs /usr/gnu/bin/cp

2010-06-26 Thread ольга крыжановская
Try in /usr/bin/ksh93:
builtin cp
and benchmark that.
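
Something along these lines (paths are just examples):

  $ /usr/bin/ksh93
  $ builtin cp
  $ time cp /tank/src/bigfile /tank/dst/bigfile

builtin cp should make ksh93 use its libcmd version of cp instead of forking 
/usr/bin/cp.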

Olga

On Sat, Jun 26, 2010 at 5:18 PM, Oliver Seidel opensola...@os1.net wrote:
> Hello,
>
> I came across this blog post:
>
> http://kevinclosson.wordpress.com/2007/03/15/copying-files-on-solaris-slow-or-fast-its-your-choice/
>
> and would like to hear from you performance gurus how this 2007 article
> relates to the 2010 ZFS implementation. What should I use, and why?
>
> Thanks,
>
> Oliver




-- 
Olga Kryzhanovska
olga.kryzhanov...@gmail.com
http://twitter.com/fleyta
Solaris/BSD/C/C++ programmer


Re: [zfs-discuss] Trouble detecting Seagate LP 2TB drives

2010-06-26 Thread Brandon High
For those wanting more details, it's in a flag day from 2009/9/11:

http://hub.opensolaris.org/bin/view/Community+Group+on/2008091102

FreeNAS worked fine with the system and drives, which made for a nice
fallback plan.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Detaching a clone from a snapshot

2010-06-26 Thread Brandon High
On Fri, Jun 25, 2010 at 10:28 PM, Andrej Podzimek and...@podzimek.org wrote:
> I can't get rid of S (and of the error message in zpool status) without
> removing either C1 or C2. Is there a solution other than removing C1 or C2?

You can use send|recv to create a new copy of one or both of the clones. 
This should remove any dependency on the original snapshot.
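
Roughly, with pool/dataset names as placeholders:

  # zfs snapshot tank/c1@flatten
  # zfs send tank/c1@flatten | zfs receive tank/c1-copy

Once you're happy with the copy, you can destroy the original clone, and the 
shared snapshot along with it.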

-B

-- 
Brandon High : bh...@freaks.com