Re: [zfs-discuss] sorry everyone was: Re: External SATA drive enclosures + ZFS?

2011-02-25 Thread Erik Trimble
On Fri, 2011-02-25 at 20:29 -0800, Yaverot wrote:
> Sorry all, didn't realize that half of Oracle would auto-reply to a public 
> mailing list since they're out of the office 9:30 Friday nights.  I'll try to 
> make my initial post each month during daylight hours in the future.
> ___


Nah, probably just a Beehive (our mail system) burp. Happens a lot.

Besides, it's 8:45 PST here, and I'm still at work. :-)

-- 
Erik Trimble
Java System Support
Mailstop:  usca22-317
Phone:  x67195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] sorry everyone was: Re: External SATA drive enclosures + ZFS?

2011-02-25 Thread Yaverot

Sorry all, didn't realize that half of Oracle would auto-reply to a public 
mailing list since they're out of the office 9:30 Friday nights.  I'll try to 
make my initial post each month during daylight hours in the future.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-25 Thread Yaverot


--- rich.t...@rite-group.com wrote:
>Space is starting to get a bit tight here, so I'm looking at adding
>a couple of TB to my home server.  I'm considering external USB or
>FireWire attached drive enclosures.  Cost is a real issue, but I also
>want the data to be managed by ZFS--so enclosures without a JBOD option
>have been discarded (i.e., I don't want to use any internal HW RAID
>controllers).

"tank" on my home file server is a raidz3 with all six drives hooked up via 
USB. Across 2 expansion card controllers. (Leaving the motherboard controller 
of mouse/keyboard, and hooking up a a fresh drive during capacity expansions.)
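
For reference, a pool like that is created in one command; a minimal sketch, with
"tank" and the device names being placeholders:

  zpool create tank raidz3 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0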

>The intent would be to put two 1TB or 2TB drives in the enclosure and use
>ZFS to create a mirrored pool out of them. 

I'd mirror across enclosures. In a home setup, even if I label things, three more 
cables will have appeared by the time I next need to plug or unplug something. I 
want my single points of failure to be "the tower", "the UPS", and "the guy in the 
mirror who can type zpool destroy", not any individual cable.

>I can't think of a reason why it wouldn't work, but I also have exactly
>zero experience with this kind of set up!

It appears to work fine with my commodity-parts setup.  I can't speak to the 
reliability of eSATA or FireWire, as they fall in the "impossible to find" 
category.

>would I be correct in thinking that I could buy two of
>the above enclosures and connect them to two different USB ports?

Don't see why not, but if you still want a single cable that can accidentally 
disconnect everything, you could hook the enclosures up through a hub and then 
use one port on the system.

>Presumably, if that is the case, I could set them up as a RAID 10
>pool controlled by ZFS?

Sure, and since you left ZFS in charge, you can upgrade them to three-way 
mirrors in the future if you desire.
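
A minimal sketch of that upgrade path, with "tank" and the device names as
placeholders: attaching a new disk to an existing mirror member resilvers it and
turns the vdev into a three-way mirror.

  zpool attach tank c1t0d0 c4t0d0   # c1t0d0 is an existing side of the mirror
  zpool status tank                 # watch the resilver, then see a 3-way mirror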

>Assuming my proposed enclosure would work, and assuming the use of
>reasonable quality 7200 RPM disks, how would you expect the performance
>to compare with the differential UltraSCSI set up I'm currently using?
>I think the DWIS is rated at either 20MB/sec or 40MB/sec, so on the
>surface, the USB attached drives would seem to be MUCH faster...

Performance is one thing I don't know; my solution works for me.  Lurking here, 
I haven't heard enough people talking in consistent terms to know where the 
bottleneck is in my system, or whether it is something to worry about.  That 
changes the moment I start talking to the server from more than one system at a 
time.
And all this is with snv_134, should that make any difference.
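
For what it's worth, two stock commands give numbers that are easy to compare
when hunting for the bottleneck; "tank" is a placeholder pool name:

  zpool iostat -v tank 5   # per-vdev bandwidth and operations every 5 seconds
  iostat -xnz 5            # per-device service times and %busy, non-zero rows only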
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-25 Thread Nathan Kroenert
 I'm with the gang on this one as far as USB being the spawn of the 
devil for mass storage you want to depend on. I'd rather scoop my eyes 
out with a red hot spoon than depend on permanently attached USB 
storage... And - don't even start me on SPARC and USB storage... It's 
like watching pitch flow... (see 
http://en.wikipedia.org/wiki/Pitch_drop_experiment). I never spent too 
much time working out why - but I never seem to get better than about 
10MB/s with SPARC+USB...


When it comes to cheap... I use cheap external SATA/USB combo enclosures 
(single drive ones) as I like the flexibility of being able to use them 
in eSATA mode nice and fast (and reliable considering the $$) or in USB 
mode should I need to split a mirror off and read it on my laptop, which 
has no eSATA port...


Also - using the single drive enclosures is by far the cheapest (at 
least here in Oz), and you get redundant power supplies, as they use 
their own mini brick AC/DC units. I'm currently very happy using 2TB 
disks in the external eSATA+USB thingies.


I had been using ASTONE external eSATA/USB units - though it seems my 
local shop has stopped carrying them... I liked them as they had 
perforated side panels, which allow the disk to stay much cooler than 
some of my other enclosures... (They also have a better 'vertical' stand if 
you want the disks to stand up rather than lie on their side.)


If your box has PCI-e slots, grab one or two $20 Silicon Image 3132 
controllers with eSATA ports and you should be golden... You will then 
be able to run between 2 and 4 disks - easily pushing them to their 
maximum platter speed - which for most of the 2TB disks is near enough 
to 100MB/s at the outer edges. You will also get considerably higher IOPS 
- particularly when they are sequential - using eSATA.


Note: All of this is with the 'cheap' view... You can most certainly buy 
much better hardware... But bang for buck - I have been happy with the 
above.


Cheers!

Nathan.


Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-25 Thread Brandon High
On Fri, Feb 25, 2011 at 4:34 PM, Rich Teer  wrote:
> Space is starting to get a bit tight here, so I'm looking at adding
> a couple of TB to my home server.  I'm considering external USB or
> FireWire attached drive enclosures.  Cost is a real issue, but I also

I would avoid USB, since it can be less reliable than other connection
methods. That's the impression I get from older posts made by Sun
devs, at least. I'm not sure how well Firewire 400 is supported, let
alone Firewire 800.

You might want to consider eSATA. Port multipliers are supported in
recent builds (128+ I think), and will give better performance than
USB. I'm not sure if PMPs are supported on Sparc, though, since it
requires support in both the controller and PMP.

Consider enclosures from other manufacturers as well. I've heard good
things about Sans Digital, but I've never used them. The 2-drive
enclosure has the same components as the item you linked but 1/2 the
cost via Newegg.

> The intent would be to put two 1TB or 2TB drives in the enclosure and use
> ZFS to create a mirrored pool out of them.  Assuming this enclosure is
> set to JBOD mode, would I be able to use this with ZFS?  The enclosure

Yes, but I think the enclosure has a SiI5744 inside it, so you'll
still have one connection from the computer to the enclosure. If that
goes, you'll lose both drives. If you're just using two drives, two
separate enclosures on separate buses may be better. Look at
http://www.sansdigital.com/towerstor/ts1ut.html for instance. There
are also larger enclosures with up to 8 drives.

> I can't think of a reason why it wouldn't work, but I also have exactly
> zero experience with this kind of set up!

Like I mentioned, USB is prone to some flakiness.

> Assuming this would work, given that I can't seem to find a 4-drive
> version of it, would I be correct in thinking that I could buy two of

You might be better off using separate enclosures for reliability.
Make sure to split the mirrors across the two devices. Use separate
USB controllers if possible, so a bus reset doesn't affect both sides.
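
A sketch of that layout with four disks, where the cXtYd0 names are placeholders
and each mirror pairs one disk from each enclosure/controller, so a bus reset or
a dead enclosure only degrades the pool:

  zpool create tank \
      mirror c2t0d0 c3t0d0 \
      mirror c2t1d0 c3t1d0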

> Assuming my proposed enclosure would work, and assuming the use of
> reasonable quality 7200 RPM disks, how would you expect the performance
> to compare with the differential UltraSCSI set up I'm currently using?
> I think the DWIS is rated at either 20MB/sec or 40MB/sec, so on the
> surface, the USB attached drives would seem to be MUCH faster...

USB 2.0 is about 30-40MB/s under ideal conditions, but doesn't support
any of the command queuing that SCSI does. I'd expect performance to
be slightly lower, and to use slightly more CPU. Most USB controllers
don't support DMA, so all I/O requires CPU time.

What about an inexpensive SAS card (eg: Supermicro AOC-USAS-L4i) and
external SAS enclosure (eg: Sans Digital TowerRAID TR4X). It would
cost about $350 for the setup.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-25 Thread Mike Tancsa
On 2/25/2011 7:34 PM, Rich Teer wrote:
> 
> One product that seems to fit the bill is the StarTech.com S352U2RER,
> an external dual SATA disk enclosure with USB and eSATA connectivity
> (I'd be using the USB port).  Here's a link to the specific product
> I'm considering:
> 
> http://ca.startech.com/product/S352U2RER-35in-eSATA-USB-Dual-SATA-Hot-Swap-External-RAID-Hard-Drive-Enclosure

I have had mixed results with their 4-bay version.  When they work, they
are great, but we have had a number of DOA/almost-DOA units.  I have had
good luck with products from
http://www.addonics.com/
(They ship to Canada as well without issue.)

Why use USB?  You will get much better performance/throughput on eSATA
(if you have good drivers, of course).  I use their SiI3124 eSATA
controller on FreeBSD as well as a number of PM units and they work great.

---Mike


-- 
---
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, m...@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada   http://www.tancsa.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-25 Thread Rich Teer
Hi all,

Space is starting to get a bit tight here, so I'm looking at adding
a couple of TB to my home server.  I'm considering external USB or
FireWire attached drive enclosures.  Cost is a real issue, but I also
want the data to be managed by ZFS--so enclosures without a JBOD option
have been discarded (i.e., I don't want to use any internal HW RAID
controllers).

One product that seems to fit the bill is the StarTech.com S352U2RER,
an external dual SATA disk enclosure with USB and eSATA connectivity
(I'd be using the USB port).  Here's a link to the specific product
I'm considering:

http://ca.startech.com/product/S352U2RER-35in-eSATA-USB-Dual-SATA-Hot-Swap-External-RAID-Hard-Drive-Enclosure

The intent would be to put two 1TB or 2TB drives in the enclosure and use
ZFS to create a mirrored pool out of them.  Assuming this enclosure is
set to JBOD mode, would I be able to use this with ZFS?  The enclosure
would be connected to either my Sun Blade 1000 or an Ultra 20.  The
SB 1000 is currently running SXCE b130; the Ultra 20 would either run
SXCE b130 or the latest version of Solaris 11 Express (or whatever it's
called!).

I can't think of a reason why it wouldn't work, but I also have exactly
zero experience with this kind of set up!

Assuming this would work, given that I can't seem to find a 4-drive
version of it, would I be correct in thinking that I could buy two of
the above enclosures and connect them to two different USB ports?
Presumably, if that is the case, I could set them up as a RAID 10
pool controlled by ZFS?

This would be replacing a D1000 array, which is mostly empty (I think I'm
only using one pair of 10K RPM 143 GB disks at the moment!).  I could add
extra disks to the D1000, but appropriate disks seem to be rare/expensive,
especially when $/GB is factored into the equation.

Assuming my proposed enclosure would work, and assuming the use of
reasonable quality 7200 RPM disks, how would you expect the performance
to compare with the differential UltraSCSI set up I'm currently using?
I think the DWIS is rated at either 20MB/sec or 40MB/sec, so on the
surface, the USB attached drives would seem to be MUCH faster...

Many thanks for any pointers received,

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Performance

2011-02-25 Thread Torrey McMahon

On 2/25/2011 3:49 PM, Tomas Ögren wrote:

On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:


>  Hi All,
>
>  In reading the ZFS Best practices, I'm curious if this statement is
>  still true about 80% utilization.

It happens at about 90% for me.. all of a sudden, the mail server got
butt slow.. killed an old snapshot to get back down to 85% or so, then it got
snappy again. S10u9 sparc.


Some of the recent updates have pushed the 80% watermark closer to 90% 
for most workloads.
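
For anyone who wants to keep an eye on that figure, the capacity percentage is in
the standard tools; "tank" is a placeholder pool name:

  zpool list tank           # the CAP column is the percent of pool space in use
  zpool get capacity tank   # the same figure as a single property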

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Performance

2011-02-25 Thread Tomas Ögren
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:

> Hi All,
>
> In reading the ZFS Best practices, I'm curious if this statement is  
> still true about 80% utilization.

It happens at about 90% for me.. all of a sudden, the mail server got
butt slow.. killed an old snapshot to get back down to 85% or so, then it got
snappy again. S10u9 sparc.

> from:
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>
> Storage Pool Performance Considerations
>
> Keep pool space under 80% utilization to maintain pool performance.  
> Currently, pool performance can degrade when a pool is very full and  
> file systems are updated frequently, such as on a busy mail server. Full  
> pools might cause a performance penalty, but no other issues.
>
> 
>
> Dave
>
>

> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Performance

2011-02-25 Thread Cindy Swearingen
Hi Dave,

Still true.

Thanks,

Cindy

On 02/25/11 13:34, David Blasingame Oracle wrote:
> Hi All,
> 
> In reading the ZFS Best practices, I'm curious if this statement is 
> still true about 80% utilization.
> 
> from:
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> 
> Storage Pool Performance Considerations
> 
> Keep pool space under 80% utilization to maintain pool performance. 
> Currently, pool performance can degrade when a pool is very full and 
> file systems are updated frequently, such as on a busy mail server. Full 
> pools might cause a performance penalty, but no other issues.
> 
> 
> 
> Dave
> 
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Performance

2011-02-25 Thread David Blasingame Oracle

Hi All,

In reading the ZFS Best practices, I'm curious if this statement is 
still true about 80% utilization.


from:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Storage Pool Performance Considerations

Keep pool space under 80% utilization to maintain pool performance. 
Currently, pool performance can degrade when a pool is very full and 
file systems are updated frequently, such as on a busy mail server. Full 
pools might cause a performance penalty, but no other issues.




Dave


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Drives

2011-02-25 Thread webdawg
Samsung Spinpoints or the Hitachis are doing great at 2TB.


-Original Message-
From: zfs-discuss-requ...@opensolaris.org
Sender: zfs-discuss-boun...@opensolaris.org
Date: Fri, 25 Feb 2011 12:00:02 
To: 
Reply-To: zfs-discuss@opensolaris.org
Subject: zfs-discuss Digest, Vol 64, Issue 53

Send zfs-discuss mailing list submissions to
zfs-discuss@opensolaris.org

To subscribe or unsubscribe via the World Wide Web, visit
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
or, via email, send a message with subject or body 'help' to
zfs-discuss-requ...@opensolaris.org

You can reach the person managing the list at
zfs-discuss-ow...@opensolaris.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of zfs-discuss digest..."


Today's Topics:

   1. What drives? (Roy Sigurd Karlsbakk)
   2. Re: What drives? (David Magda)
   3. Re: What drives? (Markus Kovero)
   4. Re: SIL3114 and sparc solaris 10 (Nathan Kroenert)


--

Message: 1
Date: Fri, 25 Feb 2011 02:11:14 +0100 (CET)
From: Roy Sigurd Karlsbakk 
To: zfs-discuss 
Subject: [zfs-discuss] What drives?
Message-ID: <19143837.181.1298596274466.JavaMail.root@zimbra>
Content-Type: text/plain; charset=utf-8

Hi all

I have about 350TB on ZFS now, and with an old box with WD Greens (those 
getting replaced as of now), I've had very few drive failures. iostat -en 
shows a minimum of errors, and all is well.

Then, the new 100TB setups are based on FASS drives (2TB WD Blacks). Those seem 
to be rather worse in terms of expected lifespan. We've replaced seven so far, 
out of 158 in total, so it's not that bad, but even the newer ones seem to show 
high error rates. Some of the drives that have been returned have shown no 
errors with WD's tools.

So, does anyone know which drives to choose for the next setup? Hitachis look 
good so far, perhaps also seagates, but right now, I'm dubious about the blacks.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all educators to avoid excessive use of idioms of 
foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.


--

Message: 2
Date: Thu, 24 Feb 2011 21:05:10 -0500
From: David Magda 
To: Roy Sigurd Karlsbakk 
Cc: zfs-discuss 
Subject: Re: [zfs-discuss] What drives?
Message-ID: <68b13b2f-0778-4362-b142-20d60bffa...@ee.ryerson.ca>
Content-Type: text/plain; charset=windows-1252

On Feb 24, 2011, at 20:11, Roy Sigurd Karlsbakk wrote:

> So, does anyone know which drives to choose for the next setup? Hitachis look 
> good so far, perhaps also seagates, but right now, I'm dubious about the 
> blacks.

There are people who have lost data on Seagates, and so swear they'll never use 
them again; there are people who have lost data on Hitachis, and so swear 
they'll never use them again; there are people…

I have a bunch of Seagates that I'm using to clone the internal drive (Seagate) 
of my iMac for offsite backup purposes. No problems, but only minimal use really.

Generally get the drive with the longest warranty period: this is usually an 
indication that the manufacturer is willing to put their money where their 
mouth is when it comes to longevity claims.

Of course WD FASS Black devices come with five year warranties, and that 
doesn't seem to be helping you much. :)

--

Message: 3
Date: Fri, 25 Feb 2011 06:45:14 +
From: Markus Kovero 
To: Roy Sigurd Karlsbakk , zfs-discuss

Subject: Re: [zfs-discuss] What drives?
Message-ID:

Content-Type: text/plain; charset="utf-8"

> So, does anyone know which drives to choose for the next setup? Hitachis look 
> good so far, perhaps also seagates, but right now, I'm dubious about the 
> blacks.

Hi! I'd go for WD RE edition. Blacks and Greens are for desktop use and 
therefore lack proper TLER settings and have useless power saving features that 
could induce errors and mysterious slowness.

Yours
Markus Kovero

--

Message: 4
Date: Fri, 25 Feb 2011 22:29:53 +1100
From: Nathan Kroenert 
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] SIL3114 and sparc solaris 10
Message-ID: <4d6792b1.7010...@tuneunix.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

  I can confirm that on *at least* 4 different cards - from different 
board OEMs - I have seen single bit ZFS checksum errors that went away 
immediately after removing the 3114 based card.

I stepped up to the 3124 (pci-x up to 133mhz) and 3132 (pci-e) and have 
never looked back.

I now throw any 3114 card I find into the bin at the first available 
opportunity as they are a pile of doom waiting to insert an exploding 
garden gnome into the unsuspecting chest cavity of your data.

Re: [zfs-discuss] Investigating a hung system

2011-02-25 Thread Tomas Ögren
On 25 February, 2011 - Mark Logan sent me these 0,6K bytes:

> Hi,
>
> I'm investigating a hung system. The machine is running snv_159 and was  
> running a full build of Solaris 11. You cannot get any response from the  
> console and you cannot ssh in, but it responds to ping.
>
> The output from ::arc shows:
> arc_meta_used  = 3836 MB
> arc_meta_limit = 3836 MB
> arc_meta_max   = 3951 MB
>
> Is it normal for arc_meta_used == arc_meta_limit?

It means that it has cached as much metadata as it's allowed to during
the current circumstances (arc size).
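
For reference, those counters can be read on a live system with the usual tools
(run as root); this just shows where the figures quoted above come from:

  echo "::arc" | mdb -k | grep meta          # arc_meta_used / _limit / _max
  kstat -p zfs:0:arcstats | grep arc_meta    # same data via kstats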

> Does this explain the hang?

No..

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using Solaris iSCSI target in VirtualBox iSCSI Initiator

2011-02-25 Thread Geoff Nordli


>-Original Message-
>From: Thierry Delaitre
>Sent: Wednesday, February 23, 2011 4:42 AM
>To: zfs-discuss@opensolaris.org
>Subject: [zfs-discuss] Using Solaris iSCSI target in VirtualBox iSCSI Initiator
>
>Hello,
>
>I’m using ZFS to export some iscsi targets for the virtual box iscsi
>initiator.
>
>It works ok if I try to install the guest OS manually.
>
>However, I’d like to be able to import my already prepared guest os vdi images
>into the iscsi devices but I can’t figure out how to do it.
>
>Each time I tried, I cannot boot.
>
>It only works if I save the manually installed guest os and re-instate the same
>as follows:
>
>dd if=/dev/dsk/c3t600144F07551C2004D619D170002d0p0 of=debian.vdi
>dd if=debian.vdi of=/dev/dsk/c3t600144F07551C2004D619D170002d0p0
>
>fdisk /dev/dsk/c3t600144F07551C2004D619D170002d0p0
>
>   Total disk size is 512 cylinders
> Cylinder size is 4096 (512 byte) blocks
>
>   Cylinders
>  Partition   Status    Type  Start   End   Length    %
>  =   ==      =   ===   ==   ===
>  1   Active    Linux native  0   463 464 91
>  2 EXT-DOS 464   511  48  9
>
>I’m wondering whether there is an issue with the disk geometry hardcoded in the
>vdi file container?
>
>Does the VDI disk geometry need to match the LUN size ?
>
>Thanks,
>
>Thierry.

Hi Thierry.

You need to convert the VDI image into a raw image before you import it into a
zvol.

Something like:  vboxmanage internalcommands converttoraw debian.vdi debian.raw

Then I run dd directly into the zvol device, not the iscsi LUN like:
/dev/zvol/rdsk/zpool_name/debian
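
Put together, the steps look roughly like this; "zpool_name/debian" and the 10G
size are placeholders to adjust for the actual image:

  vboxmanage internalcommands converttoraw debian.vdi debian.raw
  zfs create -V 10G zpool_name/debian   # only if the backing zvol doesn't exist yet
  dd if=debian.raw of=/dev/zvol/rdsk/zpool_name/debian bs=1024k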

Geoff 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SIL3114 and sparc solaris 10

2011-02-25 Thread Marion Hakanson
nat...@tuneunix.com said:
>   I can confirm that on *at least* 4 different cards - from different  board
> OEMs - I have seen single bit ZFS checksum errors that went away  immediately
> after removing the 3114 based card.
> 
> I stepped up to the 3124 (pci-x up to 133mhz) and 3132 (pci-e) and have
> never looked back.
> 
> I now throw any 3114 card I find into the bin at the first available
> opportunity as they are a pile of doom waiting to insert an exploding  garden
> gnome into the unsuspecting chest cavity of your data. 

Maybe I've just been lucky.  I have a 3114 card configured with two ports
internal and two external (E-SATA).  There is a ZFS pool configured as a
mirror of a 1TB drive on the E-SATA port in an external dock, and a 1TB
drive on a motherboard SATA port.  It's been running like this for a
couple of years, with weekly scrubs, and has so far had no errors.
The system is a 32-bit x86 running Solaris-10U6.
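
A weekly scrub like that is easy to drive from root's crontab; the pool name and
schedule here are only examples:

  # run "crontab -e" as root and add:
  0 3 * * 0 /usr/sbin/zpool scrub tank   # every Sunday at 03:00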

My 3114 card came with RAID firmware, and I re-flashed it to non-RAID,
as others have mentioned.

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Investigating a hung system

2011-02-25 Thread Mark Logan

Hi,

I'm investigating a hung system. The machine is running snv_159 and was 
running a full build of Solaris 11. You cannot get any response from the 
console and you cannot ssh in, but it responds to ping.


The output from ::arc shows:
arc_meta_used  = 3836 MB
arc_meta_limit = 3836 MB
arc_meta_max   = 3951 MB

Is it normal for arc_meta_used == arc_meta_limit?
Does this explain the hang?

Thanks,
Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SIL3114 and sparc solaris 10

2011-02-25 Thread Eric D. Mudama

On Fri, Feb 25 at 22:29, Nathan Kroenert wrote:
I don't recall if Solaris 10 (Sparc or X86) actually has the si3124 
driver, but if it does, for a cheap thrill, they are worth a bash. I 
have no problems pushing 4 disks pretty much flat out on a PCI-X 133 
3124 based card. (note that there was a pci and a pci-x version of 
the 3124, so watch out.)


Most 3124s I've seen are PCI-X natively, but they work fine in PCI
slots, albeit with less bandwidth available.

--eric

--
Eric D. Mudama
edmud...@bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What drives?

2011-02-25 Thread Tuomas Leikola
I'd pick samsung and use the savings for additional redundancy. Ymmv.
On Feb 25, 2011 8:46 AM, "Markus Kovero"  wrote:
>> So, does anyone know which drives to choose for the next setup? Hitachis
>> look good so far, perhaps also seagates, but right now, I'm dubious about
>> the blacks.
>
> Hi! I'd go for WD RE edition. Blacks and Greens are for desktop use and
> therefore lack proper TLER settings and have useless power saving features
> that could induce errors and mysterious slowness.
>
> Yours
> Markus Kovero
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SIL3114 and sparc solaris 10

2011-02-25 Thread Nathan Kroenert
 I can confirm that on *at least* 4 different cards - from different 
board OEMs - I have seen single bit ZFS checksum errors that went away 
immediately after removing the 3114 based card.
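
For anyone chasing the same symptom: the errors show up in the CKSUM column of
zpool status, and the counters can be reset once the suspect card is gone
("tank" is a placeholder pool name):

  zpool status -v tank   # per-device READ/WRITE/CKSUM counters, affected files
  zpool clear tank       # clear the error counters after swapping the card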


I stepped up to the 3124 (pci-x up to 133mhz) and 3132 (pci-e) and have 
never looked back.


I now throw any 3114 card I find into the bin at the first available 
opportunity as they are a pile of doom waiting to insert an exploding 
garden gnome into the unsuspecting chest cavity of your data.


I'd also add that I have never made an effort to determine if it was 
actually the Solaris driver that was at fault - but being that the other 
two cards I have mentioned are available for about $20 a pop, it's not 
worth my time.


I don't recall if Solaris 10 (Sparc or X86) actually has the si3124 
driver, but if it does, for a cheap thrill, they are worth a bash. I 
have no problems pushing 4 disks pretty much flat out on a PCI-X 133 
3124 based card. (note that there was a pci and a pci-x version of the 
3124, so watch out.)


Cheers!

Nathan.

On 02/24/11 02:10 AM, Andrew Gabriel wrote:

Krunal Desai wrote:
On Wed, Feb 23, 2011 at 8:38 AM, Mauricio Tavares 
 wrote:

   I see what you mean; in
http://mail.opensolaris.org/pipermail/opensolaris-discuss/2008-September/043024.html 


they claim it is supported by the uata driver. What would you suggest
instead? Also, since I have the card already, how about if I try it 
out?


My experience with SPARC is limited, but perhaps the Option ROM/BIOS
for that card is intended for x86, and not SPARC? I might be thinking of
another controller, but this could be the case. You could always try
to boot with the card; the worst that'll probably happen is boot hangs
before the OS even comes into play.


SPARC won't try to run the BIOS on the card anyway (it will only run 
OpenFirmware BIOS), but you will have to make sure the card has the 
non-RAID BIOS so that the PCI class doesn't claim it to be a RAID 
controller, which will prevent Solaris going anywhere near the card at 
all. These cards could be bought with either RAID or non-RAID BIOS, 
but RAID was more common. You can (or could some time back) download 
the RAID and non-RAID BIOS from Silicon Image and re-flash which also 
updates the PCI class, and I think you'll need a Windows system to 
actually flash the BIOS.


You might want to do a google search on "3114 data corruption" too, 
although it never hit me back when I used the cards.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss