[zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-25 Thread Arne Jansen
Now the test for the Vertex 2 Pro. This was fun.
For more explanation please see the thread "Crucial RealSSD C300 and cache
flush?".
This time I made sure the device is attached via 3Gbit SATA. This is also
only a short test. I'll retest after some weeks of usage.

cache enabled, 32 buffers, 64k blocks
linear write, random data: 96 MB/s
linear read, random data: 206 MB/s
linear write, zero data: 234 MB/s
linear read, zero data: 255 MB/s
random write, random data: 84 MB/s
random read, random data: 180 MB/s
random write, zero data: 224 MB/s
random read, zero data: 190 MB/s

cache enabled, 32 buffers, 4k blocks
linear write, random data: 93 MB/s
linear read, random data: 138 MB/s
linear write, zero data: 113 MB/s
linear read, zero data: 141 MB/s
random write, random data: 41 MB/s (10300 ops/s)
random read, random data: 76 MB/s (19000 ops/s)
random write, zero data: 54 MB/s (13800 ops/s)
random read, zero data: 91 MB/s (22800 ops/s)


cache enabled, 1 buffer, 4k blocks
linear write, random data: 62 MB/s (15700 ops/s)
linear read, random data: 32 MB/s (8000 ops/s)
linear write, zero data: 64 MB/s (16100 ops/s)
linear read, zero data: 45 MB/s (11300 ops/s)
random write, random data: 14 MB/s (3400 ops/s)
random read, random data: 22 MB/s (5600 ops/s)
random write, zero data: 19 MB/s (4500 ops/s)
random read, zero data: 21 MB/s (5100 ops/s)

cache enabled, 1 buffer, 4k blocks, with cache flushes:
linear write, random data, flush after every write: 5700 ops/s
linear write, zero data, flush after every write: 5700 ops/s
linear write, random data, flush after every 4th write: 8500 ops/s
linear write, zero data, flush after every 4th write: 8500 ops/s

Some remarks:

The random op numbers have to be read with care:
 - reads occur in the same order as the preceding writes
 - the ops are not aligned to any particular boundary

The device also passed the write-loss test: after 5 repeats no
data was lost.

It doesn't make any difference whether the cache is enabled or disabled,
so it might be worth tuning zfs to not issue cache flushes.
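For anyone who wants to experiment with that, the usual knob is the
zfs_nocacheflush tunable. A minimal sketch (note: this is system-wide and
affects every pool, so it is only safe when all pool and log devices either
ignore flushes or have power-loss protection):

# echo "set zfs:zfs_nocacheflush = 1" >> /etc/system    (persistent, needs a reboot)
# echo zfs_nocacheflush/W0t1 | mdb -kw                  (live change, not persistent)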

Conclusion: This device will make an excellent slog device. I'll order
them today ;)

--Arne
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS root recovery SMI/EFI label weirdness

2010-06-25 Thread Sean .
I've been testing ZFS root recovery using 10u6 and have come across a very
odd problem.

When following this procedure, the disk I am setting up my rpool on keeps
reverting to an EFI label.

http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=ena=view

Here are the exact steps I am doing:

 boot net -s

# mount -F nfs remote-system:/ipool/snapshots /mnt

# format -e (Change the label to SMI and check after)

# prtvtoc /dev/rdsk/c1t0d0s0
* /dev/rdsk/c1t0d0s0 partition map
*
* Dimensions:
* 512 bytes/sector
* 424 sectors/track
*  24 tracks/cylinder
*   10176 sectors/cylinder
*   14089 cylinders
*   14087 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*  First SectorLast
* Partition  Tag  FlagsSector CountSector  Mount Directory
       2      5    01          0 143349312 143349311
       6      4    00          0 143349312 143349311

# zpool create -f -o failmode=continue -R /a -m legacy -o 
cachefile=/etc/zfs/zpool.cache rpool c1t0d0

# cat /mnt/rpool.2406 | zfs receive -Fdu rpool

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   5.55G  61.4G    96K  /a/rpool
rpool@2406                  0      -    96K  -
rpool/ROOT              4.40G  61.4G    20K  legacy
rpool/ROOT@2406             0      -    20K  -
rpool/ROOT/beroot       4.40G  61.4G  4.40G  /a
rpool/ROOT/beroot@2406      0      -  4.40G  -
rpool/dump              1.00G  61.4G  1.00G  -
rpool/dump@2406             0      -  1.00G  -
rpool/export             147M  61.4G   147M  /a/export
rpool/export@2406           0      -   147M  -
rpool/export/home         20K  61.4G    20K  /a/export/home
rpool/export/home@2406      0      -    20K  -
rpool/swap                16K  61.4G    16K  -
rpool/swap@2406             0      -    16K  -

# zpool set bootfs=rpool/ROOT/beroot rpool
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled 
devices

# prtvtoc /dev/rdsk/c1t0d0s0
* /dev/rdsk/c1t0d0s0 partition map
*
* Dimensions:
* 512 bytes/sector
* 143374738 sectors
* 143374671 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*   First SectorLast
*   Sector CountSector
*  34   222   255
*
*  First SectorLast
* Partition  Tag  FlagsSector CountSector  Mount Directory
       0      4    00        256 143358065 143358320
       8     11    00  143358321     16384 143374704
#


If I destroy the rpool and go back into format -e, the label is set to EFI, as
confirmed by the vtoc above!

I'm stumped by this one. Is this a known problem with 10u6?

Cheers.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root recovery SMI/EFI label weirdness

2010-06-25 Thread Sean .
I've discovered the source of the problem.

zpool create -f -o failmode=continue -R /a -m legacy -o 
cachefile=/etc/zfs/zpool.cache rpool c1t0d0

It seems a root pool must only be created on a slice. Therefore 

zpool create -f -o failmode=continue -R /a -m legacy -o 
cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0

will work. I've been reading through some of the ZFS root installation docs
and can't find a note that explicitly states this, although after a bit of
bing'ing I found a thread that confirmed it.
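For the archives, here is a recap of the sequence that works for me, assuming
SPARC (the boot net above implies it). The installboot step is taken from the
ZFS root documentation rather than from this thread:

# format -e c1t0d0      (relabel the disk SMI and put the space in slice 0)
# zpool create -f -o failmode=continue -R /a -m legacy \
    -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0
# cat /mnt/rpool.2406 | zfs receive -Fdu rpool
# zpool set bootfs=rpool/ROOT/beroot rpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0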
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Maximum zfs send/receive throughput

2010-06-25 Thread Mika Borner


It seems we are hitting a boundary with zfs send/receive over a network
link (10Gb/s). We can see peak values of up to 150MB/s, but on average
about 40-50MB/s are replicated. This is far from the bandwidth that
a 10Gb/s link can offer.

Is it possible that ZFS is giving replication too low a priority, or
throttling it too much?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-06-25 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Mika Borner
 
 It seems we are hitting a boundary with zfs send/receive over a network
 link (10Gb/s). We can see peak values of up to 150MB/s, but on average
 about 40-50MB/s are replicated. This is far away from the bandwidth
 that
 a 10Gb link can offer.
 
 Is it possible, that ZFS is giving replication a too low
 priority/throttling it too much?

I don't think this is called replication, so ... careful about
terminology.

zfs send can go as fast as your hardware is able to read.  If you'd like to
know how fast your hardware is, try this:
zfs send somefilesystem | pv -i 30 > /dev/null
(You might want to install pv from opencsw or blastwave.)

I think, in your case, you'll see something around 40-50MB/s

I will also add this much:  If you send the original snapshot of your
complete filesystem, it'll probably go very fast (much faster than 40-50
MB/s), because all those blocks are essentially sequential blocks on disk.
When you're sending incrementals, they are essentially more fragmented,
so the total throughput is lower: the disks have to perform a greater
percentage of random IO.
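You can see the difference for yourself with the same pv trick; a sketch with
made-up dataset and snapshot names (pv only counts bytes, nothing is written
anywhere):

# zfs send tank/fs@full | pv -i 30 > /dev/null
# zfs send -i tank/fs@full tank/fs@incr | pv -i 30 > /dev/null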

I have a very fast server, and my zfs send is about half as fast as yours.

In both cases, it's enormously faster than some other backup tool, like tar
or rsync or whatever.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-06-25 Thread Thomas Maier-Komor
On 25.06.2010 14:32, Mika Borner wrote:
 
 It seems we are hitting a boundary with zfs send/receive over a network
 link (10Gb/s). We can see peak values of up to 150MB/s, but on average
 about 40-50MB/s are replicated. This is far away from the bandwidth that
 a 10Gb link can offer.
 
 Is it possible, that ZFS is giving replication a too low
 priority/throttling it too much?
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

you can probably improve overall performance by using mbuffer [1] to
stream the data over the network. At least some people have reported
increased performance. mbuffer buffers the datastream and decouples the
zfs send operation from network latencies.

Get it here:
original source: http://www.maier-komor.de/mbuffer.html
binary package:  http://www.opencsw.org/packages/CSWmbuffer/
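A typical pipeline looks roughly like this (a sketch with made-up host, port,
pool, and snapshot names; buffer and block sizes are just starting points, and
the receiver has to be started first):

receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive -Fdu backup
sender#   zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O backuphost:9090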

- Thomas
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-25 Thread Thomas Burgess


 Conclusion: This device will make an excellent slog device. I'll order
 them today ;)


I have one and I love it... I sliced it though, and used 9 GB for the ZIL and
the rest for L2ARC (my server is on a smallish network with about 10 clients).

It made a huge difference in NFS performance and other things as well (for
instance, something like du runs a TON faster than before).

For the money, it's a GREAT deal.  I am very impressed.
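For reference, slicing one SSD for both jobs boils down to two zpool add
commands along these lines (a sketch; the pool name and slices are made up,
and the slices themselves are laid out beforehand with format):

# zpool add tank log c2t0d0s0
# zpool add tank cache c2t0d0s1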



 --Arne
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-25 Thread Przemyslaw Ceglowski
On 25 Jun 2010, at 15:23, Thomas Burgess wonsl...@gmail.com wrote:


Conclusion: This device will make an excellent slog device. I'll order
them today ;)


I have one and i love it...I sliced it though, used 9 gb for ZIL and the rest 
for L2ARC (my server is on a smallish network with about 10 clients)

It made a huge difference in NFS performance and other stuff as well (for 
instance, doing something like du will run a TON faster than before)

For the money, it's a GREAT deal.  I am very impressed


--Arne
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


It would be great if someone could test the SLC EX version of this drive.

---Przem
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root recovery SMI/EFI label weirdness

2010-06-25 Thread Richard Elling
On Jun 25, 2010, at 4:44 AM, Sean . wrote:

 I've discovered the source of the problem.
 
 zpool create -f -o failmode=continue -R /a -m legacy -o 
 cachefile=/etc/zfs/zpool.cache rpool c1t0d0
 
 It seems a root pool must only be created on a slice. Therefore 
 
 zpool create -f -o failmode=continue -R /a -m legacy -o 
 cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0
 
 will work. I've been reading through some of the ZFS root installation stuff 
 and can't find a note that explicitly states this although a bit of bing'ing 
 and I found a thread that confirmed this.

See the ZFS Administration Guide section on Creating a ZFS Root Pool,
first bullet
+ Disks used for the root pool must have a VTOC (SMI) label and the 
pool must be created with disk slices

 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root recovery SMI/EFI label weirdness

2010-06-25 Thread Cindy Swearingen

Sean,

If you review the doc section you included previously, you will see
that all the root pool examples include slice 0.

The slice is a long-standing boot requirement and is described in
the boot chapter, in this section:

http://docs.sun.com/app/docs/doc/819-5461/ggrko?l=ena=view

ZFS Storage Pool Configuration Requirements

The pool must exist either on a disk slice or on disk slices that are
mirrored.
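In practice that means something like the following for a mirrored root pool
(a sketch with made-up device names; both disks carry an SMI label with the
space in slice 0):

# zpool create rpool mirror c1t0d0s0 c1t1d0s0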

Thanks,

Cindy

On 06/25/10 05:44, Sean . wrote:

I've discovered the source of the problem.

zpool create -f -o failmode=continue -R /a -m legacy -o 
cachefile=/etc/zfs/zpool.cache rpool c1t0d0

It seems a root pool must only be created on a slice. Therefore 


zpool create -f -o failmode=continue -R /a -m legacy -o 
cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0

will work. I've been reading through some of the ZFS root installation stuff 
and can't find a note that explicitly states this although a bit of bing'ing 
and I found a thread that confirmed this.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-25 Thread Geoff Nordli


From: Arne Jansen
Sent: Friday, June 25, 2010 3:21 AM

Now the test for the Vertex 2 Pro. This was fun.
For more explanation please see the thread Crucial RealSSD C300 and cache
flush?
This time I made sure the device is attached via 3GBit SATA. This is also
only a
short test. I'll retest after some weeks of usage.

cache enabled, 32 buffers, 64k blocks
linear write, random data: 96 MB/s
linear read, random data: 206 MB/s
linear write, zero data: 234 MB/s
linear read, zero data: 255 MB/s
random write, random data: 84 MB/s
random read, random data: 180 MB/s
random write, zero data: 224 MB/s
random read, zero data: 190 MB/s

cache enabled, 32 buffers, 4k blocks
linear write, random data: 93 MB/s
linear read, random data: 138 MB/s
linear write, zero data: 113 MB/s
linear read, zero data: 141 MB/s
random write, random data: 41 MB/s (10300 ops/s)
random read, random data: 76 MB/s (19000 ops/s)
random write, zero data: 54 MB/s (13800 ops/s)
random read, zero data: 91 MB/s (22800 ops/s)


cache enabled, 1 buffer, 4k blocks
linear write, random data: 62 MB/s (15700 ops/s)
linear read, random data: 32 MB/s (8000 ops/s)
linear write, zero data: 64 MB/s (16100 ops/s)
linear read, zero data: 45 MB/s (11300 ops/s)
random write, random data: 14 MB/s (3400 ops/s)
random read, random data: 22 MB/s (5600 ops/s)
random write, zero data: 19 MB/s (4500 ops/s)
random read, zero data: 21 MB/s (5100 ops/s)

cache enabled, 1 buffer, 4k blocks, with cache flushes:
linear write, random data, flush after every write: 5700 ops/s
linear write, zero data, flush after every write: 5700 ops/s
linear write, random data, flush after every 4th write: 8500 ops/s
linear write, zero data, flush after every 4th write: 8500 ops/s


Some remarks:

The random op numbers have to be read with care:
 - reading occurs in the same order as the writing before
 - the ops are not aligned to any specific boundary

The device also passed the write-loss test: after 5 repeats no data was
lost.

It doesn't make any difference whether the cache is enabled or disabled, so
it might be worth tuning zfs to not issue cache flushes.

Conclusion: This device will make an excellent slog device. I'll order them
today ;)

--Arne

Arne, thanks for doing these tests, they are great to see.  

Is this the one
(http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-pro-series-sata-ii-2-5--ssd-.html)
with the built-in supercap?

Geoff 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recomendations for Storage Pool Config

2010-06-25 Thread Tiernan OToole
Good morning all.



This question has probably popped up before, but maybe not in this exact way…



I am planning on building a SAN for my home meta centre, and have some of
the RAID cards I need for the build. I will be ordering the case soon, and
then the drives. The cards I have are two 8-port PCI-Express cards (a Dell
PERC 5 and an Adaptec card…). The case will have 20 hot-swap SAS/SATA drives,
and I will be adding a third RAID controller to allow the full 20 drives.



I have read something about setting up redundancy with the RAID
controllers, i.e. having zpools spanning multiple controllers. Given I won't
be using the on-board RAID features of the cards, I am wondering how this
should be set up…



I was thinking of zpools: 2+2+1 x 4 in RAIDZ2. This way, I could lose a
controller and not lose any data from the pools… But is this theory correct?
If I were to use 2TB drives, each zpool would be 10TB raw and 6TB usable…
giving me a total of 40TB raw and 24TB usable…
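For what it's worth, one way to express that layout is a single pool with four
raidz2 vdevs, each taking 2+2+1 disks from the three controllers. A sketch with
made-up device names, where c1/c2/c3 are the three controllers:

# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0 \
    raidz2 c1t2d0 c1t3d0 c2t2d0 c2t3d0 c3t1d0 \
    raidz2 c1t4d0 c1t5d0 c2t4d0 c2t5d0 c3t2d0 \
    raidz2 c1t6d0 c1t7d0 c2t6d0 c2t7d0 c3t3d0

Losing any one controller then removes at most two disks from each raidz2
group, which the pool should survive.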



Is this overkill? Should I be worrying about losing a controller?



Thanks in advance.



--Tiernan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recomendations for Storage Pool Config

2010-06-25 Thread Cindy Swearingen

Tiernan,

Hardware redundancy is important, but I would be thinking about how you
are going to back up data in the 6-24 TB range, if you actually need
that much space.

Balance your space requirements against good redundancy and how much data
you can safely back up, because stuff happens: hardware fails, power
fails, and you can lose data.

More suggestions:

1. Test some configs for your specific data/environment.

2. Start with smaller mirrored pools, which offer redundancy, good
performance, and more flexibility.

With a SAN, I would assume you are using multiple systems. Did you mean
meta centre or media centre?

3. Consider a mirrored source pool and then create snapshots that you
send to a mirrored backup pool on another system. Mirrored pools can be
easily expanded when you need more space.

4. If you are running a recent OpenSolaris build, you could use the
zpool split command to attach and detach disks from your source pool to
replicate it on another system, in addition to doing more regular
snapshots of source data.
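A rough sketch of what items 3 and 4 can look like in practice (pool, host,
and device names are made up; the zpool split step needs a recent build):

# zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0
# zfs snapshot -r tank@backup1
# zfs send -R tank@backup1 | ssh backuphost zfs receive -Fdu backup
# zpool split tank tank2      (detaches one side of each mirror into a new pool)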

Thanks,

Cindy

On 06/25/10 13:26, Tiernan OToole wrote:

Good morning all.

 


This question has probably poped up before, but maybe not in this exact way…

 

I am planning on building a SAN for my home meta centre, and have some 
of the raid cards I need for the build. I will be ordering the case 
soon, and then the drives. The cards I have are 2 8 port PXI-Express 
cards (A dell Perc 5 and a Adaptec card…). The case will have 20 hot 
swap SAS/SATA drives, and I will be adding a third RAID controller to 
allow the full 20 drives.


 

I have read something about trying to setup redundancy with the RAID 
controllers, so having zpools spanning multiple controllers. Given I 
won’t be using the on-board RAID features of the cards, I am wondering 
how this should be setup…


 

I was thinking of zpools: 2+2+1 X 4 in ZRAID2. This way, I could lose a 
controller and not lose any data from the pools… But is this theory 
correct? If I were to use 2Tb drives, each zpool would be 10Tb RAW and 
6TB useable… giving me a total of 40Tb RAW and 24Tb usable…


 


Is this over kill? Should I be worrying about losing a controller?

 


Thanks in advance.

 


--Tiernan




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Apparent resilver slow down

2010-06-25 Thread Ian Collins
I've noticed (at least on Solaris 10) that the resilver rate appears to
slow down considerably as it nears completion.


On an eight-disk 500GB raidz2 vdev, after 28 hours zpool status reported:

spare       DEGRADED     0     0    63
  c1t6d0    DEGRADED     0     0    11  too many errors
  c5t5d0    ONLINE       0     0     0  513G resilvered

Now after 55 hours:

spare       DEGRADED     0     0    63
  c1t6d0    DEGRADED     0     0    11  too many errors
  c5t5d0    ONLINE       0     0     0  547G resilvered

Which looks like about 500GB in the first 28 hours and 34GB in the next 27.

Is this an artefact of the reporting, or something else?  547G 
resilvered on a 500GB drive is a little suspicious!


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS on Ubuntu

2010-06-25 Thread Ben Miles
How much of a difference is there in supporting applications between Ubuntu
and OpenSolaris?
I was not considering Ubuntu until OpenSolaris would not load onto my machine...

Any info would be great. I have not been able to find any sort of comparison of 
ZFS on Ubuntu and OS.

Thanks.

(My current OS install troubleshoot thread - 
http://opensolaris.org/jive/thread.jspa?messageID=488193#488193)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Ubuntu

2010-06-25 Thread Freddie Cash
On Fri, Jun 25, 2010 at 6:31 PM, Ben Miles merloc...@hotmail.com wrote:
 How much of a difference is there in supporting applications in between 
 Ubuntu and OpenSolaris?
 I was not considering Ubuntu until OpenSOlaris would not load onto my 
 machine...

 Any info would be great. I have not been able to find any sort of comparison 
 of ZFS on Ubuntu and OS.

 Thanks.

 (My current OS install troubleshoot thread - 
 http://opensolaris.org/jive/thread.jspa?messageID=488193#488193)

If you want ZFS, then go with FreeBSD instead of Ubuntu.  FreeBSD 8.1
includes ZFSv14, with patches available for ZFSv15 and ZFSv16.  You'll
get a more stable, better-performing system than trying to shoehorn
ZFS-FUSE into Ubuntu (we've tried with Debian, and ZFS-FUSE is good
for short-term testing, but not production use).


-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Ubuntu

2010-06-25 Thread Erik Trimble

On 6/25/2010 6:49 PM, Freddie Cash wrote:

On Fri, Jun 25, 2010 at 6:31 PM, Ben Miles merloc...@hotmail.com wrote:
   

How much of a difference is there in supporting applications in between Ubuntu 
and OpenSolaris?
I was not considering Ubuntu until OpenSOlaris would not load onto my machine...

Any info would be great. I have not been able to find any sort of comparison of 
ZFS on Ubuntu and OS.

Thanks.

(My current OS install troubleshoot thread - 
http://opensolaris.org/jive/thread.jspa?messageID=488193#488193)
 

If you want ZFS, then go with FreeBSD instead of Ubuntu.  FreeBSD 8.1
includes ZFSv14 with patches available for ZFSv15 and ZFSv16.  You'll
get a more stable, better performant system than trying to shoehorn
ZFS-FUSE into Ubuntu (we've tried with Debian, and ZFS-FUSE is good
for short-term testing, but not production use).
   


See a previous thread on this list (i.e. look in the archives for 
May/June) for a still-in-progress port of kernel-level ZFS to Linux.  
It's not ready yet, but they promise Real Soon Now!


That said, if you need ZFS right now, it's either FreeBSD or OpenSolaris 
(or Solaris 10).



Two other considerations from your original message:

(1) What do you mean by supporting applications?  Do you mean whether the
same applications are available on Linux and OpenSolaris?  Or do you mean
that you are writing an application (or have application source) that
was targeted for OpenSolaris/Solaris, and would like to port it to
Linux?


(2) Ubuntu is a desktop distribution. Don't be fooled by their server 
version. It's not - it has too many idiosyncrasies and bad design 
choices to be a stable server OS.  Use something like Debian, SLES, or 
RHEL/CentOS.




(also - have you tried installing the original 2009.06 stable 
OpenSolaris version? It might not have the install issues you're running 
into with the Dev branch, and give you something to do while you wait 
for the next 2010.X stable version of OpenSolaris...)


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Ubuntu

2010-06-25 Thread Rodrigo E . De León Plicet
On Fri, Jun 25, 2010 at 9:08 PM, Erik Trimble erik.trim...@oracle.com wrote:
 (2) Ubuntu is a desktop distribution. Don't be fooled by their server
 version. It's not - it has too many idiosyncrasies and bad design choices to
 be a stable server OS.  Use something like Debian, SLES, or RHEL/CentOS.

Why would you say that?

What idiosyncrasies and bad design choices are you talking about?

Just curious.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Trouble detecting Seagate LP 2TB drives

2010-06-25 Thread Brandon High
I recently installed a Seagate LP drive in an Atom ICH7 based system. The
drive is showing up in dmesg but is not available in format. Is this a known
problem? Is there a workaround for it?

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Detaching a clone from a snapshot

2010-06-25 Thread Andrej Podzimek

Hello,

Is it possible to detach a clone from its snapshot (and copy all its data 
physically)? I ran into an obscure situation where 'zfs promote' does not help.

Snapshot S has clones C1 and C2, both of which are boot environments. S has a 
data error that cannot be corrected. The error affects *one* crash dump file, 
so it's obviously benign. (The system crashed when a crash dump was being 
transferred from the dump device to /var/crash and this happened more than 
once. This is a nightly + onu system, so accidents might happen.)

If I understand it well, the original dependency graph looks like this:

C1 <- S -> C2,

and I can only achieve one of the following with 'zfs promote':

C1 -> S -> C2
C1 <- S <- C2

I can't get rid of S (and of the error message in zpool status) without 
removing either C1 or C2. Is there a solution other than removing C1 or C2?
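The only heavyweight workaround I can think of is to turn one clone into an
independent copy with a full (non-incremental) send/receive and then drop the
clone, which copies all the data instead of detaching it. A sketch with
made-up names (extra care is needed since these are boot environments):

# zfs snapshot rpool/C2@copy
# zfs send rpool/C2@copy | zfs receive rpool/C2-independent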

Andrej



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trouble detecting Seagate LP 2TB drives

2010-06-25 Thread Brandon High
On Fri, Jun 25, 2010 at 9:20 PM, Brandon High bh...@freaks.com wrote:
 I recently installed a Seagate LP drive in an Atom ICH7 based system. The
 drive is showing up dmesg but not available in format. Is this a known
 problem? Is there a work around for it?

I just found an older thread where this was discussed. Apparently
32-bit kernels can't support drives over 1TB.
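A quick way to check which kernel you are running (not from that thread, just
a standard check):

# isainfo -kv
64-bit amd64 kernel modules

A 32-bit kernel reports "32-bit i386 kernel modules" here instead.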

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss