Re: [zfs-discuss] CPU Limited on Checksums?

2011-02-08 Thread Richard Elling
On Feb 8, 2011, at 8:41 AM, Krunal Desai wrote:

> Hi all,
> 
> My system is powered by an Intel Core 2 Duo (E6600) with 8GB of RAM.
> Running into some very heavy CPU usage.

The data below does not show heavy CPU usage. Do you have data that
does show heavy CPU usage?  mpstat would be a good start.
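For example (a quick sketch; standard Solaris usage, the interval is arbitrary):

   # mpstat 5          # per-CPU usr/sys/idl plus interrupt and cross-call rates
   # mpstat 5 12       # the same, but stop after twelve 5-second samples

If idl stays high on both CPUs while the copy runs, the CPUs are not the bottleneck.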

> 
> First, a copy from one zpool to another (cp -aRv /oldtank/documents*
> /tank/documents/*), both in the same system. Load averages are around
> ~4.8. I think I used lockstat correctly, and found the following:
> 
> movax@megatron:/tank# lockstat -kIW -D 20 sleep 30
> 
> Profiling interrupt: 2960 events in 30.516 seconds (97 events/sec)

97 events/sec is quite a small load.

> 
> Count indv cuml rcnt     nsec Hottest CPU+PIL  Caller
> -------------------------------------------------------------------------
>  1518  51%  51% 0.00     1800 cpu[0]           SHA256TransformBlocks

This says that 51% of the events were SHA256TransformBlocks and that they
consumed about 1800 nanoseconds (!) of CPU time each, for a cumulative 2,732,400
nanoseconds, or 0.009% of one CPU during the sampling interval. To put this in
perspective, a write to a HDD can easily take more than 4,000,000 nanoseconds.
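Spelling out the arithmetic from the numbers above:

   1518 samples x 1800 ns   =  2,732,400 ns   (about 2.7 ms of CPU)
   2,732,400 ns / 30.516 s  ~= 0.009% of one CPU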

>   334  11%  63% 0.00     2820 cpu[0]           vdev_raidz_generate_parity_pq
>   261   9%  71% 0.00     3493 cpu[0]           bcopy_altentry
>   119   4%  75% 0.00     3033 cpu[0]           mutex_enter
>    73   2%  78% 0.00     2818 cpu[0]           i86_mwait
> 
> 
> So, obviously here it seems checksum calculation is, to put it mildly,
> eating up CPU cycles like none other. I believe it's bad(TM) to turn
> off checksums? (zfs property just has checksum=on, I guess it has
> defaulted to SHA256 checksums?)

Some ZFS checksums are always SHA-256. By default, data checksums are
Fletcher4 on most modern ZFS implementations, unless dedup is enabled.
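A quick way to check what is actually in effect (a sketch; 'tank' stands in for
your pool or dataset name):

   # zfs get checksum tank    # 'on' means the data default (Fletcher4 on recent builds)
   # zfs get dedup tank       # dedup, where supported, forces SHA-256 for deduped data

If checksum reports sha256, that would explain SHA256TransformBlocks dominating
the profile.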

> 
> Second, a copy from my desktop PC to my new zpool. (5900rpm drive over
> GigE to 2 6-drive RAID-Z2s). Load averages are around ~3.

Lockstat won't provide direct insight into the run queue (which is what is used
to calculate load average). Perhaps you'd be better off starting with prstat.
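For instance (a sketch; standard Solaris flags):

   # prstat -mLc 5     # per-thread (LWP) microstate accounting, 5-second updates

The LAT column is time spent waiting on a run queue, which is what pushes load
average up; USR/SYS show whether threads are actually burning CPU.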
 -- richard

> Again, with
> lockstat:
> 
> movax@megatron:/tank# lockstat -kIW -D 20 sleep 30
> 
> Profiling interrupt: 2919 events in 30.089 seconds (97 events/sec)
> 
> Count indv cuml rcnt     nsec Hottest CPU+PIL  Caller
> -------------------------------------------------------------------------
>  1298  44%  44% 0.00     1853 cpu[0]           i86_mwait
>   301  10%  55% 0.00     2700 cpu[0]           vdev_raidz_generate_parity_pq
>   144   5%  60% 0.00     3569 cpu[0]           bcopy_altentry
>   103   4%  63% 0.00     3933 cpu[0]           ddi_getl
>    83   3%  66% 0.00     2465 cpu[0]           mutex_enter
> 
> Here it seems as if 'i86_mwait' is occupying the top spot (is this
> because I have power-management set to poll my CPU?). Is something odd
> happening drive buffer wise? (i.e. coming in on NIC, buffered in the
> HBA somehow, and then flushed to disks?)
> 
> Either case, it seems I'm hitting a ceiling of around 65MB/s. I assume
> CPU is bottlenecking, since bonnie++ benches resulted in much better
> performance for the vdev. In the latter case though, it may just be a
> limitation of the source drive (if it can't read data faster than
> 65MB/s, I can't write faster than that...).
> 
> e: E6600 is a first-generation 65nm LGA775 CPU, clocked at 2.40GHz.
> Dual-cores, no hyper-threading.
> 
> -- 
> --khd


Re: [zfs-discuss] Sil3124 Sata controller for ZFS on Sparc OpenSolaris Nevada b130

2011-02-08 Thread Jerry Kemp

Thank you for the solid answer.

It now looks like I am seeking a 32-bit SAS card that I can put into
a Netra T1 or a V120.


Jerry


On 02/08/11 09:38, Erik Trimble wrote:

STUFF DELETED HERE



Tomas is correct. This is a hardware issue, not an OS driver one. In
order to use a card with SPARC, its firmware must be OpenBoot-aware.
Pretty much all consumer SATA cards only have PC BIOS firmware, as there
is no market for sales to SPARC folks.

However, several of the low-end SAS cards ($100 or so) also have
available OpenBoot firmware, in addition to PC BIOS firmware. In
particular, the LSI1068 series-based HBAs are a good place to look. Note
that you *might* have to flash the new OpenBoot firmware onto the card -
cards don't come with both PC-BIOS and OpenBoot firmware. Be sure to
check the OEM's web site to make sure that the card is explicitly
supported for SPARC, not just "Solaris".






Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-08 Thread David Dyer-Bennet

On 2011-02-08 21:39, Brandon High wrote:

On Tue, Feb 8, 2011 at 12:53 PM, David Dyer-Bennet  wrote:

Wait, are you saying that the handling of errors in RAIDZ and mirrors is
completely different?  That it dumps the mirror disk immediately, but
keeps trying to get what it can from the RAIDZ disk?  Because otherwise,
your assertion doesn't seem to hold up.


I think he meant that if one drive in a mirror dies completely, then
any single read error on the remaining drive is not recoverable.

With raidz2 (or a 3-way mirror for that matter), if one drive dies
completely, you still have redundancy.


Sure, a 2-way mirror has only 100% redundancy; if one dies, no more
redundancy.  Same for a RAIDZ -- if one dies, no more redundancy.  But a
4-drive RAIDZ has roughly twice the odds of having a drive die that a
2-drive mirror does. And sure, a RAIDZ2 has more redundancy -- as does a
3-way mirror.


Or a 48-way mirror (I read a report from somebody who mirrored all the 
drives in a Thumper box, just to see if he could).
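To put rough numbers on the failure-odds comparison above (assuming, purely for
illustration, that each drive independently has a small probability p of dying
during the window you care about):

   P(some drive dies in a 2-way mirror)   ~  2p
   P(some drive dies in a 4-drive RAIDZ)  ~  4p

So the wider vdev is about twice as likely to lose its redundancy, which is the
risk being weighed against RAIDZ2's ability to absorb a second failure.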


--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info


Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-08 Thread Brandon High
On Tue, Feb 8, 2011 at 12:53 PM, David Dyer-Bennet  wrote:
> Wait, are you saying that the handling of errors in RAIDZ and mirrors is
> completely different?  That it dumps the mirror disk immediately, but
> keeps trying to get what it can from the RAIDZ disk?  Because otherwise,
> your assertion doesn't seem to hold up.

I think he meant that if one drive in a mirror dies completely, then
any single read error on the remaining drive is not recoverable.

With raidz2 (or a 3-way mirror for that matter), if one drive dies
completely, you still have redundancy.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Repairing Faulted ZFS pool when zbd doesn't recognize the pool as existing

2011-02-08 Thread Chris Forgeron
Yes, a full disclosure will be made once it's back to normal (hopefully that 
event will happen).

The pool is mounted RO right now, and I can give some better stats; I had 10.3 
TB of data in that pool, all a mix of dedup and compression.

Interestingly enough, anything that wasn't being touched (i.e. any VM that wasn't 
mounted, or file that wasn't being written to) is perfectly fine so far - so that 
would mean over 95% of that 10.2 TB should be clean. I've only copied 10% of it 
so far, but it's all moved without issue.

The files that were in motion are showing great destruction - I/O errors when 
any attempt is being made to read them.

At this stage, I'm hoping that going backwards in time further with the txg's 
will give me solid files that I can work with.  George has some ideas there, 
and hopefully will have something to try around the time I'm finished copying 
the data.

This should stand as a warning that even a ZFS pool can disappear if it takes 
corruption in the right area. Hopefully there will be time and enough evidence 
for a "how did this happen" type of look.  I'm starting to beef up my second 
SAN device more as I wait for my primary pool to recover. I wanted to 
stress-test this SAN design, and I guess I have covered all the bases - right up 
to pool destruction and eventual partial recovery. Once this is all in production, 
I have to keep enough business processes running regardless of whether one of the 
pools goes down.

Many thanks to George and his continued efforts.

From: haak...@gmail.com [mailto:haak...@gmail.com] On Behalf Of Mark Alkema
Sent: Tuesday, February 08, 2011 4:38 PM
To: Chris Forgeron
Subject: Re: [zfs-discuss] Repairing Faulted ZFS pool when zbd doesn't 
recognize the pool as existing

Good for you, Chris! I'm very interested in the details.

Rgds, Mark.
2011/2/8 Chris Forgeron <cforge...@acsi.ca>:
Quick update;
 George has been very helpful, and there is progress with my zpool. I've got 
partial read ability at this point, and some data is being copied off.

It was _way_ beyond my skillset to do anything.

Once we have things resolved to a better level, I'll post more details (with a 
lot of help from George I'd say).



Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-08 Thread David Dyer-Bennet

On Tue, February 8, 2011 13:03, Roy Sigurd Karlsbakk wrote:
>> Or you could stick strictly to mirrors; 4 pools 2x2T, 2x2T, 2x750G,
>> 2x1.5T. Mirrors are more flexible, give you more redundancy, and are
>> much easier to work with.
>
> Easier to work with, yes, but a RAIDz2 will statistically be safer than a
> set of mirrors, since in many cases, you lose a drive and during
> resilver, you find bad sectors on another drive in the same VDEV,
> resulting in data corruption. With RAIDz2 (or 3), the chance of these
> errors to be on the same place on all drives is quite minimal. With a
> (striped?) mirror, a single bitflip on the 'healthy' drive will involve
> data corruption.

Wait, are you saying that the handling of errors in RAIDZ and mirrors is
completely different?  That it dumps the mirror disk immediately, but
keeps trying to get what it can from the RAIDZ disk?  Because otherwise,
your assertion doesn't seem to hold up.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] ZFS and TRIM

2011-02-08 Thread Jens Elkner
On Fri, Feb 04, 2011 at 03:30:45PM +0100, Pawel Jakub Dawidek wrote:
> On Sat, Jan 29, 2011 at 11:31:59AM -0500, Edward Ned Harvey wrote:
> > What is the status of ZFS support for TRIM?
> [...]
> My initial idea was to implement 100% reliable TRIM, so that I can
> implement secure delete using it, eg. if ZFS is placed on top of disk

Hmmm - IIRC, zfs load-balances ZIL ops over the available devs. Furthermore,
I guess almost everyone who considers SSDs for the ZIL will use at least
2 devs (i.e. since there is "no big" benefit in having a mirror dev,
many people will probably use one SSD per dev, and some more "paranoid" people
will choose to have an N-way mirror dev).

So why not turn the 2nd dev "temporarily off" (freeze), do what you want
(trim/reformat/etc) and then turn it on again? I assume, of course, that
the "load balancer" recognizes when a dev goes offline/online and
automatically uses only the available ones ...
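In zpool terms that would look something like this (a sketch only; c3t0d0 stands
in for one of the log devices, the middle step is whatever trim/secure-erase tool
is at hand, and I'm assuming, as above, that ZFS keeps using whatever log devices
remain online):

   # zpool offline tank c3t0d0     # stop sending ZIL writes to this SSD
   ... reformat / secure-erase / trim the device ...
   # zpool online tank c3t0d0      # put it back into rotation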

If one doesn't have at least 2 devs, don't bother with this home-grown
optimization ;-)

Regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768


Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-08 Thread Roy Sigurd Karlsbakk
> Or you could stick strictly to mirrors; 4 pools 2x2T, 2x2T, 2x750G,
> 2x1.5T. Mirrors are more flexible, give you more redundancy, and are
> much easier to work with.

Easier to work with, yes, but a RAIDz2 will statistically be safer than a set 
of mirrors, since in many cases, you lose a drive and during resilver, you 
find bad sectors on another drive in the same VDEV, resulting in data 
corruption. With RAIDz2 (or 3), the chance of these errors to be on the same 
place on all drives is quite minimal. With a (striped?) mirror, a single 
bitflip on the 'healthy' drive will involve data corruption.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of idioms of 
foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.


Re: [zfs-discuss] Repairing Faulted ZFS pool when zbd doesn't recognize the pool as existing

2011-02-08 Thread Chris Forgeron
Quick update;
 George has been very helpful, and there is progress with my zpool. I've got 
partial read ability at this point, and some data is being copied off.

It was _way_ beyond my skillset to do anything.

Once we have things resolved to a better level, I'll post more details (with a 
lot of help from George I'd say).


Re: [zfs-discuss] CPU Limited on Checksums?

2011-02-08 Thread Erik Trimble

On 2/8/2011 8:41 AM, Krunal Desai wrote:

Hi all,

My system is powered by an Intel Core 2 Duo (E6600) with 8GB of RAM.
Running into some very heavy CPU usage.

First, a copy from one zpool to another (cp -aRv /oldtank/documents*
/tank/documents/*), both in the same system. Load averages are around
~4.8. I think I used lockstat correctly, and found the following:

movax@megatron:/tank# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 2960 events in 30.516 seconds (97 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL  Caller
-------------------------------------------------------------------------
 1518  51%  51% 0.00     1800 cpu[0]           SHA256TransformBlocks
  334  11%  63% 0.00     2820 cpu[0]           vdev_raidz_generate_parity_pq
  261   9%  71% 0.00     3493 cpu[0]           bcopy_altentry
  119   4%  75% 0.00     3033 cpu[0]           mutex_enter
   73   2%  78% 0.00     2818 cpu[0]           i86_mwait


So, obviously here it seems checksum calculation is, to put it mildly,
eating up CPU cycles like none other. I believe it's bad(TM) to turn
off checksums? (zfs property just has checksum=on, I guess it has
defaulted to SHA256 checksums?)

Second, a copy from my desktop PC to my new zpool. (5900rpm drive over
GigE to 2 6-drive RAID-Z2s). Load averages are around ~3. Again, with
lockstat:

movax@megatron:/tank# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 2919 events in 30.089 seconds (97 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL  Caller
-------------------------------------------------------------------------
 1298  44%  44% 0.00     1853 cpu[0]           i86_mwait
  301  10%  55% 0.00     2700 cpu[0]           vdev_raidz_generate_parity_pq
  144   5%  60% 0.00     3569 cpu[0]           bcopy_altentry
  103   4%  63% 0.00     3933 cpu[0]           ddi_getl
   83   3%  66% 0.00     2465 cpu[0]           mutex_enter

Here it seems as if 'i86_mwait' is occupying the top spot (is this
because I have power-management set to poll my CPU?). Is something odd
happening drive buffer wise? (i.e. coming in on NIC, buffered in the
HBA somehow, and then flushed to disks?)

Either case, it seems I'm hitting a ceiling of around 65MB/s. I assume
CPU is bottlenecking, since bonnie++ benches resulted in much better
performance for the vdev. In the latter case though, it may just be a
limitation of the source drive (if it can't read data faster than
65MB/s, I can't write faster than that...).

e: E6600 is a first-generation 65nm LGA775 CPU, clocked at 2.40GHz.
Dual-cores, no hyper-threading.



In the second case, you're almost certainly hitting a Network 
bottleneck.  If you don't have Jumbo Frames turned on at both ends of 
the connection, and a switch that understands JF, then the CPU is 
spending a horrible amount of time doing interrupts, trying to 
re-assemble small packets.
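A quick way to check, and where the build supports it raise, the MTU (a sketch;
e1000g0 is a placeholder link name, and dladm set-linkprop needs a reasonably
recent build plus driver and switch support):

   # dladm show-link                          # current MTU per link
   # dladm set-linkprop -p mtu=9000 e1000g0   # enable jumbo frames on that link

The switch and the sending PC both need matching jumbo-frame settings for this
to help.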



I also think you might be running into an erratum around old Xeons, CR 
6588054. This was fixed in kernel patch 127128-11, included in s10u5 or 
later.


Otherwise, it might be an issue with the powersave functionality of 
certain Intel CPUs.


In either case, try putting this in your /etc/system:

set idle_cpu_prefer_mwait = 0
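* with this set to 0 the idle loop uses plain halt instead of mwait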


If that fix causes an issue (and there are reports that it occasionally does), 
you'll need to boot without the /etc/system change: append the '-a' flag to the 
end of the kernel line in the GRUB menu entry that you boot from. This will put 
you into an interactive boot, where, when it asks you which /etc/system to use, 
you specify /dev/null.
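Roughly, that means editing the GRUB entry so the kernel line ends in -a (a
sketch based on a typical OpenSolaris menu.lst; your own kernel line will differ):

   kernel$ /platform/i86pc/kernel/$ISADIR/unix -a

At the 'Name of system file' prompt during the interactive boot, enter /dev/null.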



--

Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



[zfs-discuss] CPU Limited on Checksums?

2011-02-08 Thread Krunal Desai
Hi all,

My system is powered by an Intel Core 2 Duo (E6600) with 8GB of RAM.
Running into some very heavy CPU usage.

First, a copy from one zpool to another (cp -aRv /oldtank/documents*
/tank/documents/*), both in the same system. Load averages are around
~4.8. I think I used lockstat correctly, and found the following:

movax@megatron:/tank# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 2960 events in 30.516 seconds (97 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL  Caller
-------------------------------------------------------------------------
 1518  51%  51% 0.00     1800 cpu[0]           SHA256TransformBlocks
  334  11%  63% 0.00     2820 cpu[0]           vdev_raidz_generate_parity_pq
  261   9%  71% 0.00     3493 cpu[0]           bcopy_altentry
  119   4%  75% 0.00     3033 cpu[0]           mutex_enter
   73   2%  78% 0.00     2818 cpu[0]           i86_mwait


So, obviously here it seems checksum calculation is, to put it mildly,
eating up CPU cycles like none other. I believe it's bad(TM) to turn
off checksums? (zfs property just has checksum=on, I guess it has
defaulted to SHA256 checksums?)

Second, a copy from my desktop PC to my new zpool. (5900rpm drive over
GigE to 2 6-drive RAID-Z2s). Load averages are around ~3. Again, with
lockstat:

movax@megatron:/tank# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 2919 events in 30.089 seconds (97 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL  Caller
-------------------------------------------------------------------------
 1298  44%  44% 0.00     1853 cpu[0]           i86_mwait
  301  10%  55% 0.00     2700 cpu[0]           vdev_raidz_generate_parity_pq
  144   5%  60% 0.00     3569 cpu[0]           bcopy_altentry
  103   4%  63% 0.00     3933 cpu[0]           ddi_getl
   83   3%  66% 0.00     2465 cpu[0]           mutex_enter

Here it seems as if 'i86_mwait' is occupying the top spot (is this
because I have power-management set to poll my CPU?). Is something odd
happening drive buffer wise? (i.e. coming in on NIC, buffered in the
HBA somehow, and then flushed to disks?)

Either case, it seems I'm hitting a ceiling of around 65MB/s. I assume
CPU is bottlenecking, since bonnie++ benches resulted in much better
performance for the vdev. In the latter case though, it may just be a
limitation of the source drive (if it can't read data faster than
65MB/s, I can't write faster than that...).

e: E6600 is a first-generation 65nm LGA775 CPU, clocked at 2.40GHz.
Dual-cores, no hyper-threading.

-- 
--khd


Re: [zfs-discuss] Sil3124 Sata controller for ZFS on Sparc OpenSolaris Nevada b130

2011-02-08 Thread Erik Trimble

On 2/8/2011 2:17 AM, Tomas Ögren wrote:

On 08 February, 2011 - Robert Soubie sent me these 1,1K bytes:


On 08/02/2011 07:10, Jerry Kemp wrote:

As part of a small home project, I have purchased a SIL3124 hba in
hopes of attaching an external drive/drive enclosure via eSATA.

The host in question is an old Sun Netra T1 currently running
OpenSolaris Nevada b130.

The card in question is this Sil3124 card:

http://www.newegg.com/product/product.aspx?item=N82E16816124003

although I did not purchase it from Newegg. I specifically purchased
this card as I have seen specific reports of it working under
Solaris/OpenSolaris distro's on several Solaris mailing lists.

I use a non-eSata version of this card under Solaris Express 11 for a
boot mirrored ZFS pool. And another one for a Windows 7 machine that
does backups of the server. Bios and drivers are available from the
Silicon Image site, but nothing for Solaris.

The problem itself is sparc vs x86 and firmware for the card. AFAIK,
there is no sata card with drivers for solaris sparc. Use a SAS card.

/Tomas


Tomas is correct.  This is a hardware issue, not an OS driver one. In 
order to use a card with SPARC, its firmware must be OpenBoot-aware.  
Pretty much all consumer SATA cards only have PC BIOS firmware, as there 
is no market for sales to SPARC folks.


However, several of the low-end SAS cards ($100 or so) also have 
available OpenBoot firmware, in addition to PC BIOS firmware. In 
particular, the LSI1068 series-based HBAs are a good place to look.  
Note that you *might* have to flash the new OpenBoot firmware onto the 
card - cards don't come with both PC-BIOS and OpenBoot firmware.  Be 
sure to check the OEM's web site to make sure that the card is 
explicitly supported for SPARC, not just "Solaris".
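Once a SPARC-capable HBA is in place, a quick sanity check from the OpenBoot ok
prompt (standard OBP commands; nothing here is card-specific):

   ok show-devs          \ the HBA should appear in the device tree
   ok probe-scsi-all     \ an FCode-aware SAS HBA will enumerate its disks here

Without OpenBoot (FCode) firmware the card has no OBP driver, so probe-scsi-all
will not find anything behind it.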




--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-08 Thread David Dyer-Bennet

On Mon, February 7, 2011 14:59, David Dyer-Bennet wrote:
>
> On Sat, February 5, 2011 11:54, Gaikokujin Kyofusho wrote:
>> Thank you kebabber. I will try out indiana and virtual box to play
>> around
>> with it a bit.
>>
>> Just to make sure I understand your example, if I say had a 4x2tb
>> drives,
>> 2x750gb, 2x1.5tb drives etc then i could make 3 groups (perhaps 1 raidz1
>> +
>> 1 mirrored + 1 mirrored), in terms of accessing them would they just be
>> mounted like 3 partitions or could it all be accessed like one big
>> partition?
>
> A ZFS pool can contain many vdevs; you could put the three groups you
> describe into one pool, and then assign one (or more) file-systems to that
> pool.  Putting them all in one pool seems to me the natural way to handle
> it; they're all similar levels of redundancy.  It's more flexible to have
> everything in one pool, generally.
>
> (You could also make separate pools; my experience, for what it's worth,
> argues for making pools based on redundancy and performance (and only
> worry about BIG differences), and assign file-systems to pools based on
> needs for redundancy and performance.  And for my home system I just have
> one big data pool, currently consisting of 1x1TB, 2x400GB, 2x400GB, plus
> 1TB hot spare.)

Typo; I don't in fact have a non-redundant vdev in my main data pool! 
It's *2*x1TB at the start of that list.
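As a concrete illustration of the quoted "one pool, several vdevs" layout (a
sketch only; device names are placeholders):

   # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                        mirror c2t0d0 c2t1d0 \
                        mirror c2t2d0 c2t3d0

All three vdevs then back a single pool, and every filesystem created in it
draws on the combined space.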

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Sil3124 Sata controller for ZFS on Sparc OpenSolaris Nevada b130

2011-02-08 Thread Tomas Ögren
On 08 February, 2011 - Robert Soubie sent me these 1,1K bytes:

> On 08/02/2011 07:10, Jerry Kemp wrote:
>> As part of a small home project, I have purchased a SIL3124 hba in  
>> hopes of attaching an external drive/drive enclosure via eSATA.
>>
>> The host in question is an old Sun Netra T1 currently running  
>> OpenSolaris Nevada b130.
>>
>> The card in question is this Sil3124 card:
>>
>> http://www.newegg.com/product/product.aspx?item=N82E16816124003
>>
>> although I did not purchase it from Newegg. I specifically purchased  
>> this card as I have seen specific reports of it working under  
>> Solaris/OpenSolaris distro's on several Solaris mailing lists.
>
> I use a non-eSata version of this card under Solaris Express 11 for a  
> boot mirrored ZFS pool. And another one for a Windows 7 machine that  
> does backups of the server. Bios and drivers are available from the  
> Silicon Image site, but nothing for Solaris.

The problem itself is sparc vs x86 and firmware for the card. AFAIK,
there is no sata card with drivers for solaris sparc. Use a SAS card.

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] Drive i/o anomaly

2011-02-08 Thread a . smith

It is a 4k sector drive, but I thought zfs recognised those drives and didn't
need any special configuration...?


4k drives are a big problem for ZFS, much has been posted/written  
about it. Basically, if the 4k drives report 512 byte blocks, as they  
almost all do, then ZFS does not detect and configure the pool  
correctly. If the drive actually reports the real 4k block size, ZFS  
handles this very nicely.
So the problem/fault is drives misreporting the real block size, to  
maintain compatibility with other OS's etc, and not really with ZFS.
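One way to see what ZFS decided at pool-creation time (a sketch; 'tank' is a
placeholder pool name):

   # zdb -C tank | grep ashift
           ashift: 9

ashift: 9 means the pool was laid out for 512-byte sectors (what a misreporting
4k drive advertises); ashift: 12 means the 4k size was recognised.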


cheers Andy.





Re: [zfs-discuss] Sil3124 Sata controller for ZFS on Sparc OpenSolaris Nevada b130

2011-02-08 Thread Robert Soubie

On 08/02/2011 07:10, Jerry Kemp wrote:
As part of a small home project, I have purchased a SIL3124 hba in 
hopes of attaching an external drive/drive enclosure via eSATA.


The host in question is an old Sun Netra T1 currently running 
OpenSolaris Nevada b130.


The card in question is this Sil3124 card:

http://www.newegg.com/product/product.aspx?item=N82E16816124003

although I did not purchase it from Newegg. I specifically purchased 
this card as I have seen specific reports of it working under 
Solaris/OpenSolaris distro's on several Solaris mailing lists.


I use a non-eSata version of this card under Solaris Express 11 for a 
boot mirrored ZFS pool. And another one for a Windows 7 machine that 
does backups of the server. Bios and drivers are available from the 
Silicon Image site, but nothing for Solaris.


--
Éditions de l'Âge d'Or — Stanley G. Weinbaum
http://www.lulu.com/robert_soubie
