Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Kyle McDonald
Zlotnick Fred wrote:
> On Aug 20, 2008, at 6:39 PM, Kyle McDonald wrote:
>
>>
>> My suggestion still remains though. Log your enterprise's wish for this
>> feature through as many channels as you have into Sun. This list, Sales,
>> Support, every way you can think of. Get it documented, so that when
>> they go to set priorities on RFEs there'll be more data on this one.
>
> Knock yourself out, but it's really unnecessary.  As has been amply
> documented, on this thread and others, this is already a very high
> priority for us.  It just happens to be rather difficult to do it right.
> We're working on it.  We've heard the message (years ago, actually, just
> about as soon as we shipped ZFS in S10 6/06.)  Your further encouragement
> is appreciated, but it's unlikely to speed up what already is deemed
> a high priority.
>
Cool. I love it when I'm wrong this way. :)

I don't know where I got it, but I really thought it wasn't seen as a big
deal for the larger storage customers.
Glad to see I'm wrong, because it's a real big deal for us little guys. :)

  -Kyle



Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Zlotnick Fred
On Aug 20, 2008, at 6:39 PM, Kyle McDonald wrote:

> John wrote:
>> Our "enterprise" is about 300TB.. maybe a bit more...
>>
>> You are correct that most of the time we grow and not shrink...  
>> however, we are fairly dynamic and occasionally do shrink. DBA's  
>> have been known to be off on their space requirements/requests.
>>
>>
> For the record I agree with you and I'm waiting for this feature
> also. I was only citing my recollection of the explanation given
> in the past.
>
> To add more from my memory, I think the 'Enterprise grows not shrinks'
> idea is coming from the idea that in ZFS you should be creating fewer
> data pools from a few different specific sized LUNs, and using ZFS to
> allocate filesystems and zvols from the pool, instead of customizing
> LUN sizes to create more pools each for different purposes. If true
> (if you can make all your LUNs one size, and make a few [preferably
> one, I think] data zpools per server host), then the need to reduce
> pool size is diminished.
>
> That's not realistic in the home/hobby/developer market, and I'm not
> convinced that's realistic in the enterprise either.
>> There is also the human error factor.  If someone accidentally
>> grows a zpool there is no easy way to recover that space without
>> down time.  Some of my LUNs are in the 1TB range and if that gets
>> added to the wrong zpool that space is basically stuck there until
>> I can get a maintenance window. And then I'm not sure that's even
>> possible, since my windows are only 3 hours... for example, what
>> if I add a LUN to a 20TB zpool?  What would I do to remove the
>> LUN?  I think I would have to create a new 20TB pool and move the
>> data from the original to the new zpool... so that would assume I
>> have a free 20TB and the down time.
>>
>>
> I agree here also, even with a single zpool per server. Consider a
> policy where when the pool grows you always add a RAIDz2 of ten
> 200GB LUNs. So your single data pool is currently three of these
> RAIDz2 vdevs, and an admin goes to add ten more, but forgets the
> 'raidz2', so you end up with three RAIDz2 vdevs and ten single-LUN,
> non-redundant vdevs. How do you fix that?
>
> My suggestion still remains though. Log your enterprise's wish for
> this feature through as many channels as you have into Sun. This
> list, Sales, Support, every way you can think of. Get it documented,
> so that when they go to set priorities on RFEs there'll be more data
> on this one.

Knock yourself out, but it's really unnecessary.  As has been amply
documented, on this thread and others, this is already a very high
priority for us.  It just happens to be rather difficult to do it right.
We're working on it.  We've heard the message (years ago, actually, just
about as soon as we shipped ZFS in S10 6/06.)  Your further encouragement
is appreciated, but it's unlikely to speed up what already is deemed
a high priority.

My 2 cents,
Fred
>
>
>  -Kyle

--
Fred Zlotnick
Senior Director, Open Filesystem and Sharing Technologies
Sun Microsystems, Inc.
[EMAIL PROTECTED]
x81142/+1 650 352 9298


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Kyle McDonald
John wrote:
> Our "enterprise" is about 300TB.. maybe a bit more...
>
> You are correct that most of the time we grow and not shrink... however, we 
> are fairly dynamic and occasionally do shrink. DBA's have been known to be 
> off on their space requirements/requests.
>
>   
For the record I agree with you and I'm waiting for this feature also. I
was only citing my recollection of the explanation given in the past.

To add more from my memory, I think the 'Enterprise grows not shrinks'
idea is coming from the idea that in ZFS you should be creating fewer
data pools from a few different specific sized LUNs, and using ZFS to
allocate filesystems and zvols from the pool, instead of customizing
LUN sizes to create more pools each for different purposes. If true (if
you can make all your LUNs one size, and make a few [preferably one, I
think] data zpools per server host), then the need to reduce pool size
is diminished.

That's not realistic in the home/hobby/developer market, and I'm not 
convinced that's realistic in the enterprise either.
> There is also the human error factor.  If someone accidentally grows a zpool
> there is no easy way to recover that space without down time.  Some of my
> LUNs are in the 1TB range and if that gets added to the wrong zpool that
> space is basically stuck there until I can get a maintenance window. And then
> I'm not sure that's even possible, since my windows are only 3 hours... for
> example, what if I add a LUN to a 20TB zpool?  What would I do to remove the
> LUN?  I think I would have to create a new 20TB pool and move the data from
> the original to the new zpool... so that would assume I have a free 20TB and
> the down time.
>  
>   
I agree here also, even with a single zpool per server. Consider a
policy where when the pool grows you always add a RAIDz2 of ten 200GB
LUNs. So your single data pool is currently three of these RAIDz2 vdevs,
and an admin goes to add ten more, but forgets the 'raidz2', so you end
up with three RAIDz2 vdevs and ten single-LUN, non-redundant vdevs. How
do you fix that?
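
To make the failure mode concrete, here's a rough sketch (device names
are hypothetical, and note that zpool normally warns about a mismatched
replication level, so the mistake usually also involves a reflexive -f):

    # intended: grow the pool with another RAIDz2 vdev of ten LUNs
    zpool add tank raidz2 c10t0d0 c10t1d0 c10t2d0 c10t3d0 c10t4d0 \
        c10t5d0 c10t6d0 c10t7d0 c10t8d0 c10t9d0

    # the mistake: omit 'raidz2' and force past the warning, and the same
    # ten LUNs join as ten independent, non-redundant top-level vdevs
    zpool add -f tank c10t0d0 c10t1d0 c10t2d0 c10t3d0 c10t4d0 \
        c10t5d0 c10t6d0 c10t7d0 c10t8d0 c10t9d0

    # today, about the only mitigation is to bolt a mirror onto each
    # stray vdev, one attach per LUN:
    zpool attach tank c10t0d0 c11t0d0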

My suggestion still remains though. Log your enterprise's wish for this
feature through as many channels as you have into Sun. This list, Sales,
Support, every way you can think of. Get it documented, so that when
they go to set priorities on RFEs there'll be more data on this one.

  -Kyle


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Ian Collins
Al Hopper writes: 

> 
> Interesting thread - thanks to all the contributors.  I've seen, on
> several different forums, that many CF users lean towards Sandisk for
> reliability and longevity.  Does anyone else see consensus in terms of
> CF brands? 
> 
The people to ask are probably professional photographers and their dealers;
they have been using these cards for many years. Photographers don't have
the luxury of ZFS.

My local specialist camera shop sells a lot of Sandisk cards. 

Ian 



Re: [zfs-discuss] SSD update

2008-08-20 Thread Bob Friesenhahn
On Wed, 20 Aug 2008, Al Hopper wrote:

> How about for serving up CDROM and DVD images (genunix.org)?  Even two
> 32GB drives in a ZFS mirrored config would give you 20K+ read ops/sec
> - as compared to a 10k RPM SCSI drive that starts to fall over at 400
> read IOPS.  This type of workload is way over 90% read-only - a
> perfect match for an SSD.

The logical approach for hot sites like genunix.org is to make sure 
that the servers are fitted with enough RAM that the disks are 
virtually idle.  I don't know how many servers are at genunix.org, but 
if it is just one, then it should definitely have enough RAM to store 
all the hot data.  For example, the BeleniX 0.7.1 ISOs should 
definitely be in RAM.

Perhaps the Genunix Stats page should be updated to list the hardware 
and software configuration used to serve up all those bytes.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Nathan Kroenert
I second that question, and would also ask which brands folks like for
performance and compatibility.

Ebay is killing me with vast choice and no detail... ;)

Nathan.

Al Hopper wrote:
> On Wed, Aug 20, 2008 at 12:57 PM, Neal Pollack <[EMAIL PROTECTED]> wrote:
>> Ian Collins wrote:
>>> Brian Hechinger wrote:
>>>> On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
>>>>
>>>>> Has anyone here had any luck using a CF to SATA adapter?
>>>>>
>>>>> I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card
>>>>> that I wanted to use for a boot pool and even though the BIOS reports the
>>>>> disk, Solaris B95 (or the installer) doesn't see it.
>>>>>
>>>> I tried this a while back with an IDE to CF adapter.  Real nice looking
>>>> one too.
>>>>
>>>> It would constantly cause OpenBSD to panic.
>>>>
>>>> I would recommend against using this, unless you get real lucky.  If you want
>>>> flash to boot from, buy one of the ones that is specifically made for it (not
>>>> CF, but industrial grade flash meant to be a HDD).  Those things work a LOT
>>>> better.  I can look up the details of the ones my friend uses if you'd like.
>>>>
>>> I was looking to run some tests with a CF boot drive before we get an
>>> X4540, which has a CF slot. The installer did see the attached USB sticks...
>> My team does some of the testing inside Sun for the CF boot devices.
>> We've used a number of IDE attached CF adapters, such as:
>> http://www.addonics.com/products/flash_memory_reader/ad44midecf.asp
>> and also some random models from www.frys.com.
>> We also test the CF boot feature on various Sun rack servers and blades
>> that use a CF socket.
>>
>> I have not tested the SATA adapters but would not expect issues.
>> I'd like to know if you find issues.
>>
>>
>> The IDE attached devices use the legacy ATA/IDE device driver software,
>> which had some bugs fixed for DMA and misc CF specific issues.
>> It would be interesting to see if a SATA adapter for CF, set in bios to
>> use AHCI instead of Legacy/IDE mode, would have any issues with
>> the AHCI device driver software.  I've had no reason to test this yet, since
>> the Sun HW models build the CF socket right onto the motherboard/bus.
>> I can't find a reason to worry about hot-plug, since removing the boot
>> "drive" while Solaris is running would be, um, somewhat interesting :-)
>>
>> True, the enterprise grade devices are higher quality and will last longer.
>> But do not underestimate the current (2008) device wear leveling firmware
>> that controls the CF memory usage, and hence life span.  Our in house
>> destructive life span testing shows that the commercial grade CF device
>> will last longer than the motherboard will.  The consumer grade devices
> 
> Interesting thread - thanks to all the contributors.  I've seen, on
> several different forums, that many CF users lean towards Sandisk for
> reliability and longevity.  Does anyone else see consensus in terms of
> CF brands?
> 
>> that you find in the store or on mail order, may or may not be current
>> generation, so your device lifespan will vary.  It should still be rather
>> good for a boot device,  because Solaris does very little writing to the
>> boot "disk".  You can review configuration ideas to maximize the life
>> of your CF device in this Solaris white paper for non-volatile memory;
>> http://www.sun.com/bigadmin/features/articles/nvm_boot.jsp
>>
>> I hope this helps.
>>
>> Cheers,
>>
>> Neal Pollack
>>
>>> Any further information welcome.
>>>
>>> Ian
> 
> Regards,
> 

-- 
//
// Nathan Kroenert  [EMAIL PROTECTED] //
// Systems Engineer Phone:  +61 3 9869-6255 //
// Sun Microsystems Fax:+61 3 9869-6288 //
// Level 7, 476 St. Kilda Road  Mobile: 0419 305 456//
// Melbourne 3004   VictoriaAustralia   //
//


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Al Hopper
On Wed, Aug 20, 2008 at 12:57 PM, Neal Pollack <[EMAIL PROTECTED]> wrote:
> Ian Collins wrote:
>> Brian Hechinger wrote:
>>> On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
>>>
>>>> Has anyone here had any luck using a CF to SATA adapter?
>>>>
>>>> I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card
>>>> that I wanted to use for a boot pool and even though the BIOS reports the
>>>> disk, Solaris B95 (or the installer) doesn't see it.
>>>>
>>> I tried this a while back with an IDE to CF adapter.  Real nice looking one 
>>> too.
>>>
>>> It would constantly cause OpenBSD to panic.
>>>
>>> I would recommend against using this, unless you get real lucky.  If you 
>>> want
>>> flash to boot from, buy one of the ones that is specifically made for it 
>>> (not
>>> CF, but industrial grade flash meant to be a HDD).  Those things work a LOT
>>> better.  I can look up the details of the ones my friend uses if you'd like.
>>>
>>>
>> I was looking to run some tests with a CF boot drive before we get an
>> X4540, which has a CF slot. The installer did see the attached USB sticks...
>
> My team does some of the testing inside Sun for the CF boot devices.
> We've used a number of IDE attached CF adapters, such as:
> http://www.addonics.com/products/flash_memory_reader/ad44midecf.asp
> and also some random models from www.frys.com.
> We also test the CF boot feature on various Sun rack servers and blades
> that use a CF socket.
>
> I have not tested the SATA adapters but would not expect issues.
> I'd like to know if you find issues.
>
>
> The IDE attached devices use the legacy ATA/IDE device driver software,
> which had some bugs fixed for DMA and misc CF specific issues.
> It would be interesting to see if a SATA adapter for CF, set in bios to
> use AHCI instead of Legacy/IDE mode, would have any issues with
> the AHCI device driver software.  I've had no reason to test this yet, since
> the Sun HW models build the CF socket right onto the motherboard/bus.
> I can't find a reason to worry about hot-plug, since removing the boot
> "drive" while Solaris is running would be, um, somewhat interesting :-)
>
> True, the enterprise grade devices are higher quality and will last longer.
> But do not underestimate the current (2008) device wear leveling firmware
> that controls the CF memory usage, and hence life span.  Our in house
> destructive life span testing shows that the commercial grade CF device
> will last longer than the motherboard will.  The consumer grade devices

Interesting thread - thanks to all the contributors.  I've seen, on
several different forums, that many CF users lean towards Sandisk for
reliability and longevity.  Does anyone else see consensus in terms of
CF brands?

> that you find in the store or on mail order, may or may not be current
> generation, so your device lifespan will vary.  It should still be rather
> good for a boot device,  because Solaris does very little writing to the
> boot "disk".  You can review configuration ideas to maximize the life
> of your CF device in this Solaris white paper for non-volatile memory;
> http://www.sun.com/bigadmin/features/articles/nvm_boot.jsp
>
> I hope this helps.
>
> Cheers,
>
> Neal Pollack
>
>>
>> Any further information welcome.
>>
>> Ian

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] SSD update

2008-08-20 Thread Al Hopper
On Wed, Aug 20, 2008 at 6:48 PM, Ross <[EMAIL PROTECTED]> wrote:
>> Where's the beef?
>>
>> I sense a lot of smoke and mirrors here, similar to Intel's recent CPU
>> "announcements" which don't even reveal the number of cores.  No
>> prices and funny numbers that the writers of
>> technical articles can't seem to get straight.
>>
>> Obviously these are a significant improvement for laptop drives but
>> how many laptop users have a need for 11,000 IOPs and 170MB/s?
>
> Err, laptop drives?  Who cares about laptop drives.  170MB/s writes and 
> 11,000 IOPS?  I'll take four please for my ZFS log device!
>
> Seriously, I don't even care about the cost.  Even with the smallest 
> capacity, four of those gives me 128GB of write cache supporting 680MB/s and 
> 40k IOPS.  Show me a hardware raid controller that can even come close to 
> that.  Four of those will strain even 10GB/s Infiniband.

How about for serving up CDROM and DVD images (genunix.org)?  Even two
32GB drives in a ZFS mirrored config would give you 20K+ read ops/sec
- as compared to a 10k RPM SCSI drive that starts to fall over at 400
read IOPS.  This type of workload is way over 90% read-only - a
perfect match for an SSD.

> Plus, if Intel are comparing these with 5400rpm drives and planning them for
> the laptop market, I can't see them being too expensive.  Certainly worth the
> money for ZFS.
>
> Personally I'd like to see something that mounts on a PCIe card, but if I 
> need to I'll happily start bolting 2.5" SSD's to the sides of my server cases!
>

I got to play with one of Sun's low-power prototypes and it came with
apologies for the way the el-cheapo (Transcend) SSD was mounted in the
case.  Wait for it... secured by a two-inch-wide strip of packing
tape.  It was certainly a "first" for me!  I still smile when I think
of it.

Apparently the Intel folks put some of their heavy math geeks to work
on wear leveling algorithms and came up with something that has
advanced the state of the art - from what I've heard from people in
the know.  But few details are available publicly (yet).

It's interesting how the SSD has already turned into a chip-based arms
race.  Explanation: in the old days, because of the cost and lead
time, products were only committed to Integrated Circuits (ICs), aka
chips, after the technology had matured.  This is no longer the case -
as we've seen with CMOS digital camera sensors and processing logic.
Now we have a case in point with SSDs, where it looks like the
technology leaders are out the door with chip-based implementations
of their first entry into the marketplace.

Now that's progress! :)

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] SSD update

2008-08-20 Thread Al Hopper
On Wed, Aug 20, 2008 at 5:29 PM, Tim <[EMAIL PROTECTED]> wrote:
>
>
> On Wed, Aug 20, 2008 at 5:17 PM, Bob Friesenhahn
> <[EMAIL PROTECTED]> wrote:
>>
>> On Wed, 20 Aug 2008, Al Hopper wrote:
>>
>> > It looks like Intel has a huge hit (product) on its hands with the
>> > latest SSD product announcements.  No pricing yet ... but the specs
>> > will push computer system IO bandwidth performance to numbers only
>> > possible today with extremely expensive RAM based disk subsystems.
>> >
>> > SSDs + ZFS - a marriage made in (computer) heaven!
>>
>> Where's the beef?
>>
>> I sense a lot of smoke and mirrors here, similar to Intel's recent CPU
>> "announcements" which don't even reveal the number of cores.  No
>> prices and funny numbers that the writers of technical articles can't
>> seem to get straight.
>>
>> Obviously these are a significant improvement for laptop drives but
>> how many laptop users have a need for 11,000 IOPs and 170MB/s?  It
>> seems to me that most laptops suffer from insufficient RAM and
>> low-power components which don't deliver much performance.  The CPUs
>> which come in laptops are not going to be able to process 170MB/s.
>>
>> What about the dual-ported SAS models for enterprise use?
>>
>> Bob
>> ==
>> Bob Friesenhahn
>> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
> I don't know about that.  I just went from an SSD back to a SATA drive
> because the SSD started failing in less than a month (I'm having troubles
> believing this great write-leveling they talk about is working
> properly...).  And the SATA drive is dog slow in comparison.  The biggest
> issue is seek times.  Opening apps/directories there is a VERY noticeable
> difference from the SSD to this drive.
>
> The user experience is drastically improved with the SSD imho.  Of course,
> the fact that it started giving me i/o errors after just 3 weeks means it's
> going to be RMA'd and won't find a home back in my laptop anytime soon.
>
> This was one of the 64GB OCZ Core drives for reference.

Hi Tim.  There are a lot of reports of corrupted data with the OCZ
Core drive.  They have just released a new (replacement, perhaps?)
product called "Core 2".  It's based on a Samsung product.
tomshardware.com has a review here:

http://www.tomshardware.com/reviews/flash-ssd-hard-drive,2000.html

I'm not seeing the promised reliability claims backed up by the
warranties being offered on most of the current consumer-grade SSDs.

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] SSD update

2008-08-20 Thread Marion Hakanson
[EMAIL PROTECTED] said:
> Seriously, I don't even care about the cost.  Even with the smallest
> capacity, four of those gives me 128GB of write cache supporting 680MB/s and
> 40k IOPS.  Show me a hardware raid controller that can even come close to
> that.  Four of those will strain even 10GB/s Infiniband. 

I had my sights set lower.  Our Thumper has four hot-spare drives
right now.  I'd take one or two of those out and replace them with
one or two 80GB SSD's, upgrade to S10U6 when available, and set them
up as a separate log device.  Now I've gotten rid of the horrible NFS
latencies that come from NFS-vs-ZIL interaction.
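
For the curious, that plan is just the separate intent log syntax from
recent builds (pool and device names below are hypothetical):

    # dedicate one SSD as a separate ZIL device
    zpool add tank log c4t7d0

    # or, better, mirror the log across two SSDs
    zpool add tank log mirror c4t7d0 c5t7d0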

It would only take a tiny SSD for an NFS ZIL, really.  We have an old
array with 1GB cache, and telling that to ignore cache-flush requests
from ZFS made a huge difference in NFS latency.

Regards,

Marion




Re: [zfs-discuss] SSD update

2008-08-20 Thread Ross
> Where's the beef?
> 
> I sense a lot of smoke and mirrors here, similar to Intel's recent CPU 
> "announcements" which don't even reveal the number of cores.  No 
> prices and funny numbers that the writers of
> technical articles can't seem to get straight.
>
> Obviously these are a significant improvement for laptop drives but 
> how many laptop users have a need for 11,000 IOPs and 170MB/s?

Err, laptop drives?  Who cares about laptop drives.  170MB/s writes and 11,000 
IOPS?  I'll take four please for my ZFS log device!

Seriously, I don't even care about the cost.  Even with the smallest capacity, 
four of those gives me 128GB of write cache supporting 680MB/s and 40k IOPS.  
Show me a hardware raid controller that can even come close to that.  Four of 
those will strain even 10GB/s Infiniband.

Plus, if Intel are comparing these with 5400rpm drives and planning them for
the laptop market, I can't see them being too expensive.  Certainly worth the
money for ZFS.

Personally I'd like to see something that mounts on a PCIe card, but if I need 
to I'll happily start bolting 2.5" SSD's to the sides of my server cases!

Ross


Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-20 Thread Tim
I don't think it's just b94; I recall this behavior for as long as I've
had the card.  I'd also be interested to know if the Sun driver team
has ever even tested with this card.  I realize it's probably not a top
priority, but it sure would be nice to have it working properly.

On 8/20/08, Ross Smith <[EMAIL PROTECTED]> wrote:
>
>> > Without fail, cfgadm changes the status from "disk" to "sata-port" when
>> > I
>> > unplug a device attached to port 6 or 7, but most of the time unplugging
>> > disks 0-5 results in no change in cfgadm, until I also attach disk 6 or
>> > 7.
>>
>> That does seem inconsistent, or at least, it's not what I'd expect.
>
> Yup, it was an absolute nightmare to diagnose on top of everything else.
> Definitely doesn't happen in Windows, either.  I really want somebody to try
> snv_94 on a Thumper to see if you get the same behaviour there, or whether
> it's unique to Supermicro's Marvell card.
>
>> > Often the system hung completely when you pulled one of the disks 0-5,
>> > and wouldn't respond again until you re-inserted it.
>> >
>> > I'm 99.99% sure this is a driver issue for this controller.
>>
>> Have you logged a bug on it yet?
>
> Yup, 6735931.  Added the information about it working in Windows today too.
>
> Ross
>


[zfs-discuss] Question: Disable ACL on ZFS filesystem

2008-08-20 Thread Arlina G. Capiral
All,


A system running Solaris 10 8/07 with a ZFS filesystem and an Oracle
application on it.
The customer accidentally removed one of the Oracle directories under the
ZFS filesystem and now would like to restore it.
They are using EMC Networker backup software for backup/restore.
The customer tried to restore the directory via EMC Networker as a local
user and it failed.
They tried as root and it worked fine.
During further troubleshooting, EMC found that the failure might be caused
by ZFS ACL permissions.

Now the questions I got from the customer are the following:

1. Can you disable ACLs on a ZFS filesystem?
2. If yes, what command to use, and what would be the effect?

Any recommendation on this aside from removing ACLs on ZFS?
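
For reference, these are the ACL-related knobs we have been looking at so
far; a rough sketch only, with a hypothetical dataset name (tank/oracle),
and I'm not certain they are the right ones, hence the question:

    # show the current ACL behaviour properties
    zfs get aclmode,aclinherit tank/oracle

    # make chmod(2) discard ACL entries rather than merge them
    zfs set aclmode=discard tank/oracle

    # stop new files inheriting ACL entries from parent directories
    zfs set aclinherit=discard tank/oracle

    # inspect, or strip, the ACL on an individual file
    ls -v /tank/oracle/somefile
    chmod A- /tank/oracle/somefile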

Your inputs are highly appreciated.

Thank you in advance,
Arlina-


Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-20 Thread James C. McPherson
Ross Smith wrote:
>>> Without fail, cfgadm changes the status from "disk" to "sata-port" when I
>>> unplug a device attached to port 6 or 7, but most of the time unplugging
>>> disks 0-5 results in no change in cfgadm, until I also attach disk 6 or 7.
>>
>> That does seem inconsistent, or at least, it's not what I'd expect.
>
> Yup, it was an absolute nightmare to diagnose on top of everything else.
> Definitely doesn't happen in Windows, either.  I really want somebody to try
> snv_94 on a Thumper to see if you get the same behaviour there, or
> whether it's unique to Supermicro's Marvell card.

That's a very good question.

>>> Often the system hung completely when you pulled one of the disks 0-5,
>>> and wouldn't respond again until you re-inserted it.
>>>
>>> I'm 99.99% sure this is a driver issue for this controller.
>>
>> Have you logged a bug on it yet?
> 
> Yup, 6735931.  Added the information about it working in Windows today too.


Heh... I should have recognised that; I moved it from the
triage queue to driver/sata :-)


James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Aaron Blew
I've heard (though I'd be really interested to read the studies if someone
has a link) that a lot of this human error percentage comes at the hardware
level.  Replacing the wrong physical disk in a RAID-5 disk group, bumping
cables, etc.

-Aaron



On Wed, Aug 20, 2008 at 3:40 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Wed, 20 Aug 2008, Miles Nordin wrote:
>
> >> "j" == John  <[EMAIL PROTECTED]> writes:
> >
> > j> There is also the human error factor.  If someone accidentally
> > j> grows a zpool
> >
> > or worse, accidentally adds an unredundant vdev to a redundant pool.
> > Once you press return, all you can do is scramble to find mirrors for
> > it.
>
> Not to detract from the objective to be able to re-shuffle the zfs
> storage layout, any system administration related to storage is risky
> business.  Few people should be qualified to do it.  Studies show that
> 36% of data loss is due to human error.  Once zfs mirroring, raidz, or
> raidz2 are used to virtually eliminate loss due to hardware or system
> malfunction, this 36% is increased to a much higher percentage.  For
> example, if loss due to hardware or system malfunction is reduced to
> just 1% (still a big number) then the human error factor is increased
> to a whopping 84%.  Humans are like a ticking time bomb for data.
>
> The errant command which accidentally adds a vdev could just as easily
> be a command which scrambles up or erases all of the data.
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread paul
Kyle wrote:
> ... If I recall, the low priority was based on the perceived low demand
> for the feature in enterprise organizations. As I understood it, shrinking a
> pool is perceived as being a feature most desired by home/hobby/development
> users, and that enterprises mainly only grow their pools, not shrink them.

Although it's historically clear that data tends to grow to fill the
available storage, it should be equally clear that storage resources are
neither free nor inexhaustible. The flexibility to redeploy existing
storage from one system to another considered more critical may simply
necessitate the ability to reduce the resources in one pool for transfer
to another. That should be easily and efficiently accomplishable, without
having to manually shuffle data around like a shell game between multiple
storage configurations to achieve the desired result.

Equally valid, it seems: if it's determined that some storage resources
within a pool are beginning to fail, and their capacity is not strictly
required at the moment, it would seem like a good idea to have any data
they may contain moved to other resources in the pool, and simply remove
them as storage candidates without having to replace them with alternates
that may simply not physically exist at the moment.

That being said, fixing bugs which would otherwise render the ZFS
filesystem unreliable should always trump "nice to have" features.
 
 


Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-20 Thread Ross Smith

> > Without fail, cfgadm changes the status from "disk" to "sata-port" when I
> > unplug a device attached to port 6 or 7, but most of the time unplugging
> > disks 0-5 results in no change in cfgadm, until I also attach disk 6 or 7.
> 
> That does seem inconsistent, or at least, it's not what I'd expect.

Yup, it was an absolute nightmare to diagnose on top of everything else.
Definitely doesn't happen in Windows, either.  I really want somebody to try
snv_94 on a Thumper to see if you get the same behaviour there, or whether
it's unique to Supermicro's Marvell card.

> > Often the system hung completely when you pulled one of the disks 0-5,
> > and wouldn't respond again until you re-inserted it.
> > 
> > I'm 99.99% sure this is a driver issue for this controller.
> 
> Have you logged a bug on it yet?

Yup, 6735931.  Added the information about it working in Windows today too.

Ross



Re: [zfs-discuss] SSD update

2008-08-20 Thread Ian Collins
Bob Friesenhahn writes:
> 
> The SSD drives will work well for a boot drive, or a non-volatile 
> transaction cache, but will be dramatically more expensive for storage 
> than traditional hard drives.  This must be why Intel is focusing on 
> laptop users and not on enterprise storage. 
> 
The sweet spot will be non-volatile transaction cache coupled with slow, low 
cost and low power big hard drives.  As non-volatile cache devices get 
bigger, the performance demands on the bulk storage become less. 
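
A sketch of that pairing with current Nevada syntax (device names are
hypothetical): fast non-volatile devices front a pool of big, slow drives.

    # non-volatile write cache: a mirrored separate intent log on SSDs
    zpool add tank log mirror c6t0d0 c6t1d0

    # read cache (L2ARC) on another SSD
    zpool add tank cache c6t2d0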

Ian. 


Re: [zfs-discuss] SSD update

2008-08-20 Thread Neal Pollack
Bob Friesenhahn wrote:

>>
>> SSDs + ZFS - a marriage made in (computer) heaven!
> 
> Where's the beef?
> 
> I sense a lot of smoke and mirrors here, similar to Intel's recent CPU 
> "announcements" which don't even reveal the number of cores.  No 
> prices and funny numbers that the writers of technical articles can't 
> seem to get straight.
> 
> Obviously these are a significant improvement for laptop drives but 
> how many laptop users have a need for 11,000 IOPs and 170MB/s?  It 
> seems to me that most laptops suffer from insufficient RAM and
> low-power components which don't deliver much performance.  The CPUs 
> which come in laptops are not going to be able to process 170MB/s.

I guess you have not used current-day laptops.
I've used several brands that come standard with dual-core
processors, 4 GB of RAM, and 250 GB disks.
Later this year, they will be showing off mobile quad-core
laptops.

The limiting factor on boot time and data movement is always
the darn HDD, spinning at a fixed 7200 rpm.  Using "parallel-channel"
flash SSDs will indeed improve performance significantly, and when
I can get my hands on one, I'd be happy to show you numbers
and price data.

I've been installing and testing OS's on various SSDs and CF
devices that are single channel (300x speed equiv for CF marketing),
and I can't wait to test the new parallel channel devices.

But as far as a ZFS server storage array with heavy write
operations goes?  Yeah, we'd have to talk "write data volumes over time"
vs. device life span.  But that is also set to change, as the vendors
are working on newer flash tech that can last much longer.

I still see many applications where an SSD or Flash can improve
storage system performance in the enterprise.  Just stay tuned.
Products/solutions are in progress.


> 
> What about the dual-ported SAS models for enterprise use?
> 
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-20 Thread James C. McPherson
Ross wrote:
> lol, I got bored after 13 pages and a whole day of going back through my
> notes to pick out the relevant information.
> 
> Besides, I did mention that I was using cfgadm to see what was connected
> :-p.  If you're really interested, most of my troubleshooting notes have
> been posted to the forum, but unfortunately Sun's software has split it
> into three or four pieces.  Just search for posts talking about the
> AOC-SAT2-MV8 card to find them.
> 
> Without fail, cfgadm changes the status from "disk" to "sata-port" when I
> unplug a device attached to port 6 or 7, but most of the time unplugging
> disks 0-5 results in no change in cfgadm, until I also attach disk 6 or 7.

That does seem inconsistent, or at least, it's not what I'd expect.

> Often the system hung completely when you pulled one of the disks 0-5,
> and wouldn't respond again until you re-inserted it.
> 
> I'm 99.99% sure this is a driver issue for this controller.

Have you logged a bug on it yet?


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Will Murnane
On Wed, Aug 20, 2008 at 18:40, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> The errant command which accidentally adds a vdev could just as easily
> be a command which scrambles up or erases all of the data.
True enough---but if there's a way to undo accidentally adding a vdev,
there's one source of disastrously bad human error eliminated.  If the
vdev is removable, then typing "zpool evacuate c3t4d5" to fix the
problem instead of getting backups up to date, destroying and
recreating the pool, then restoring from backups saves quite a bit of
the cost associated with human error in this case.

Think of it as the analogue of "zpool import -D": if you screw up, ZFS
has a provision to at least try to help.  The recent discussion on
accepting partial 'zfs recv' streams is a similar measure.  No system
is perfectly resilient to human error, but any simple ways in which
the resilience (especially of such a large unit as a pool!) can be
improved should be considered.
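
To illustrate with what exists today versus what this thread is asking for
(pool and device names hypothetical):

    # an existing safety net: a destroyed pool can still be recovered
    zpool destroy tank
    zpool import -D tank

    # the missing safety net: migrate data off a vdev so it can be
    # removed (hypothetical syntax, not implemented today)
    # zpool evacuate tank c3t4d5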

Will


Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-20 Thread Ross
lol, I got bored after 13 pages and a whole day of going back through my notes 
to pick out the relevant information.

Besides, I did mention that I was using cfgadm to see what was connected :-p.  
If you're really interested, most of my troubleshooting notes have been posted 
to the forum, but unfortunately Sun's software has split it into three or four 
pieces.  Just search for posts talking about the AOC-SAT2-MV8 card to find them.

Without fail, cfgadm changes the status from "disk" to "sata-port" when I 
unplug a device attached to port 6 or 7, but most of the time unplugging disks 
0-5 results in no change in cfgadm, until I also attach disk 6 or 7.
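
For anyone trying to reproduce this, the checks are plain cfgadm; the
attachment point names below are examples from my setup and will differ:

    # list SATA attachment points and whether a device shows up on each
    cfgadm | grep sata

    # what a clean unplug/replug cycle should look like
    cfgadm -c unconfigure sata1/5
    cfgadm -c configure sata1/5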

Often the system hung completely when you pulled one of the disks 0-5, and 
wouldn't respond again until you re-inserted it.

I'm 99.99% sure this is a driver issue for this controller.
 
 


Re: [zfs-discuss] SSD update

2008-08-20 Thread Bob Friesenhahn
On Wed, 20 Aug 2008, Tim wrote:
>
> I don't know about that.  I just went from an SSD back to a SATA drive
> because the SSD started failing in less than a month (I'm having troubles
> believing this great write-leveling they talk about is working
> properly...).  And the SATA drive is dog slow in comparison.  The biggest
> issue is seek times.  Opening apps/directories there is a VERY noticeable
> difference from the SSD to this drive.

The fact of the matter is that these new SSD drives are only 32GB or 
80GB in size and will become available when 2TB hard drives become 
available.  The 2TB hard drives will offer similar throughput but with 
far less IOPS.

The SSD drives will work well for a boot drive, or a non-volatile 
transaction cache, but will be dramatically more expensive for storage 
than traditional hard drives.  This must be why Intel is focusing on 
laptop users and not on enterprise storage.

In spite of many vendor claims of reliability, most enterprise users 
are going to want to see these products deployed for a few years 
before they entrust them with their critical data.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Bob Friesenhahn
On Wed, 20 Aug 2008, Miles Nordin wrote:

>> "j" == John  <[EMAIL PROTECTED]> writes:
>
> j> There is also the human error factor.  If someone accidentally
> j> grows a zpool
>
> or worse, accidentally adds an unredundant vdev to a redundant pool.
> Once you press return, all you can do is scramble to find mirrors for
> it.

Not to detract from the objective to be able to re-shuffle the zfs 
storage layout, any system administration related to storage is risky 
business.  Few people should be qualified to do it.  Studies show that 
36% of data loss is due to human error.  Once zfs mirroring, raidz, or 
raidz2 are used to virtually eliminate loss due to hardware or system 
malfunction, this 36% is increased to a much higher percentage.  For 
example, if loss due to hardware or system malfunction is reduced to 
just 1% (still a big number) then the human error factor is increased
to a whopping 84%.  Humans are like a ticking time bomb for data.
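
(Roughly, the arithmetic there: the percentages are shares of a shrinking
total.  If human error stays fixed at 36 units of loss while all other
causes shrink to about 7 units, its share becomes 36/(36+7), i.e. about
84%.)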

The errant command which accidentally adds a vdev could just as easily 
be a command which scrambles up or erases all of the data.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] SSD update

2008-08-20 Thread Tim
On Wed, Aug 20, 2008 at 5:17 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Wed, 20 Aug 2008, Al Hopper wrote:
>
> > It looks like Intel has a huge hit (product) on its hands with the
> > latest SSD product announcements.  No pricing yet ... but the specs
> > will push computer system IO bandwidth performance to numbers only
> > possible today with extremely expensive RAM based disk subsystems.
> >
> > SSDs + ZFS - a marriage made in (computer) heaven!
>
> Where's the beef?
>
> I sense a lot of smoke and mirrors here, similar to Intel's recent CPU
> "announcements" which don't even reveal the number of cores.  No
> prices and funny numbers that the writers of technical articles can't
> seem to get straight.
>
> Obviously these are a significant improvement for laptop drives but
> how many laptop users have a need for 11,000 IOPs and 170MB/s?  It
> seems to me that most laptops suffer from insufficient RAM and
> low-power components which don't deliver much performance.  The CPUs
> which come in laptops are not going to be able to process 170MB/s.
>
> What about the dual-ported SAS models for enterprise use?
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



I don't know about that.  I just went from an SSD back to a SATA drive
because the SSD started failing in less than a month (I'm having troubles
believing this great write-leveling they talk about is working
properly...).  And the SATA drive is dog slow in comparison.  The biggest
issue is seek times.  Opening apps/directories there is a VERY noticeable
difference from the SSD to this drive.

The user experience is drastically improved with the SSD imho.  Of course,
the fact that it started giving me i/o errors after just 3 weeks means it's
going to be RMA'd and won't find a home back in my laptop anytime soon.

This was one of the 64GB OCZ Core drives for reference.


--Tim


Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-20 Thread Brian D. Horn
Well, when you leave out a bunch of relevant information you also leave
people guessing! :-)

Regardless, is it possible that all of your testing was done with ZFS and not
just the "raw" disk?  If so, it is possible that ZFS isn't noticing the hot
unplugging of the disk until it tries to access the drive.  I don't know this,
but it would be consistent with what you have related to date.
 
 


Re: [zfs-discuss] SSD update

2008-08-20 Thread Bob Friesenhahn
On Wed, 20 Aug 2008, Al Hopper wrote:

> It looks like Intel has a huge hit (product) on its hands with the
> latest SSD product announcements.  No pricing yet ... but the specs
> will push computer system IO bandwidth performance to numbers only
> possible today with extremely expensive RAM based disk subsystems.
>
> SSDs + ZFS - a marriage made in (computer) heaven!

Where's the beef?

I sense a lot of smoke and mirrors here, similar to Intel's recent CPU 
"announcements" which don't even reveal the number of cores.  No 
prices and funny numbers that the writers of technical articles can't 
seem to get straight.

Obviously these are a significant improvement for laptop drives but 
how many laptop users have a need for 11,000 IOPs and 170MB/s?  It 
seems to me that most laptops suffer from insufficient RAM and
low-power components which don't deliver much performance.  The CPUs 
which come in laptops are not going to be able to process 170MB/s.

What about the dual-ported SAS models for enterprise use?

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Help! Possible with b80 and newest ZFS?

2008-08-20 Thread Ross
I wouldn't know about using newer ZFS with older builds, but I can tell you 
that b94 looks rock solid to me.  I've been running it for a few weeks on a 
live server and haven't had any crashing or instability problems at all.

Ordinarily, if you're having problems, the first thing I would try would be to 
eliminate faulty hardware.  If you have another machine handy, install b94 on 
that.  Failing that, remove unnecessary components from this computer to try to 
get it stable enough to read your zpool.

However, you also say you're running Wine, and you don't say Solaris crashes, 
just that you get kicked out to the login screen.  Did you do a clean install 
when you upgraded from the b94 you were having problems with, or did you try an 
upgrade?  I don't think Wine is included in a standard build, so it's probably 
worth doing a clean re-install of build 94 or 95 to see if that helps.

Your last post implies that you're able to run a fair amount on this computer,
so it's not dying completely.  I strongly suspect you've got a configuration or
compatibility issue here rather than a bug in Solaris.  The only way to really
find out is to run a clean install and see how that goes.
 
 


Re: [zfs-discuss] ZFS with Traditional SAN

2008-08-20 Thread Vincent Fox
>
> All,
> I'm currently working out details on an upgrade from UFS/SDS on DAS
> to ZFS on a SAN fabric.  I'm interested in hearing how ZFS has behaved
> in more traditional SAN environments using gear that scales vertically
> like EMC Clarion/HDS AMS/3PAR etc.  Do you experience issues with
> zpool integrity because of MPxIO events?  Has the zpool been reliable
> over your fabric?  Has performance been where you would have expected
> it to be?
> Thanks much,
> -Aaron

No
Yes
Yes

Search for "ZFS SAN" - I think we've had a bunch of threads on this.

I run 2-node clusters with T2000 units connected to pairs of either 3510FC
(older) or 2540 (newer) arrays over dual SAN switches.  Multipath has worked
fine; we had an entire array go offline during an attempt to replace a bad
controller, and since the ZFS pool was mirrored onto another array, there
was no disruption of service.  We use a hybrid setup where I build 5-disk
RAID-5 LUNs on each array, and then each is mirrored in ZFS with a LUN from
a different array.  Best of both worlds, IMO.  Each cluster supports about
10K users for a Cyrus mail-store; since the 10u4 fsync performance patch,
there have been no performance issues.  I wish zpool scrub ran a bit
quicker, but I think 10u6 will address that.
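
The layout, sketched with made-up device names (one RAID-5 LUN from each
array, mirrored against its counterpart on the other array):

    # c6* = RAID-5 LUNs from array A, c7* = RAID-5 LUNs from array B
    zpool create mail \
        mirror c6t0d0 c7t0d0 \
        mirror c6t1d0 c7t1d0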

I'm not sure I see the point of EMC but like me you probably already have some 
equipment and just want to use it or add to it.
 
 


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Ian Collins
John wrote:
> Our "enterprise" is about 300TB.. maybe a bit more...
>
> You are correct that most of the time we grow and not shrink... however, we 
> are fairly dynamic and occasionally do shrink. DBA's have been known to be 
> off on their space requirements/requests.
>
>   
Isn't that one of the problems ZFS solves? Grow the pool to meet the
demand rather than size it for the estimated maximum usage. Even
exported vdevs can be thin provisioned.
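
For example, the backing volume for an iSCSI-exported LUN can be created
sparse, so it consumes pool space only as data is written (names here are
hypothetical):

    # -s: sparse volume, no reservation taken up front
    zfs create -s -V 500g tank/lun0
    zfs set shareiscsi=on tank/lun0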

Ian



Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Miles Nordin
> "j" == John  <[EMAIL PROTECTED]> writes:

 j> There is also the human error factor.  If someone accidentally
 j> grows a zpool 

or worse, accidentally adds an unredundant vdev to a redundant pool.
Once you press return, all you can do is scramble to find mirrors for
it.

vdev removal is also needed to, for example, change each vdev in a
big pool of JBOD devices from mirroring to raidz2 - in general, for
reconfiguring pools' layouts without outage, not just shrinking.
This online layout reconfiguration is also a Veritas selling point, yes? Or
is that Veritas feature considered too risky for actual use?

For my home user setup, the ability to grow a single vdev by replacing
all the disks within it with bigger ones, then export/import, is
probably good enough.  Note however this is still not quite ``online''
because export/import is needed to claim the space.  Though IIRC some
post here said that's fixed in the latest Nevadas, one would have to
look at the whole stack to make sure it's truly online---can FC and
iSCSI gracefully handle a target's changing size and report it to ZFS,
or does FC/iSCSI need to be whacked, or is size change only noticed at
zpool replace/attach time?
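
The replace-to-grow dance, for reference (hypothetical device names; each
resilver must finish before you start the next):

    zpool replace tank c1t0d0 c2t0d0   # swap in a bigger disk
    zpool status tank                  # wait for the resilver to complete
    # ...repeat for every disk in the vdev, then:
    zpool export tank
    zpool import tank                  # re-reads labels, claims the new space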

The thing that really made me wish for 'pvmove' / RFE 4852783 at home
so far is the recovering-from-mistaken-add scenario.




Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Ian Collins
Neal Pollack wrote:
> Ian Collins wrote:
>> Brian Hechinger wrote:
>>> On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
>>>  
>>>> Has anyone here had any luck using a CF to SATA adapter?
>>>>
>>>> I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB
>>>> card that I wanted to use for a boot pool and even though the BIOS
>>>> reports the disk, Solaris B95 (or the installer) doesn't see it.
>>>>
>> I was looking to run some tests with a CF boot drive before we get an
>> X4540, which has a CF slot. The installer did see the attached USB
>> sticks...
>
> My team does some of the testing inside Sun for the CF boot devices.
> We've used a number of IDE attached CF adapters, such as:
> http://www.addonics.com/products/flash_memory_reader/ad44midecf.asp
> and also some random models from www.frys.com.
> We also test the CF boot feature on various Sun rack servers and blades
> that use a CF socket.
>
> I have not tested the SATA adapters but would not expect issues.
> I'd like to know if you find issues.
>
Well, the biggest issue was the device not being recognised!

The BIOS sees it and reports it as a SanDisk SDCFX3-008G, which I assume
is the label from the card.  The board is an nForce4 Asus A8N-E.

Ian


[zfs-discuss] ZFS with Traditional SAN

2008-08-20 Thread Aaron Blew
All,
I'm currently working out details on an upgrade from UFS/SDS on DAS to ZFS
on a SAN fabric.  I'm interested in hearing how ZFS has behaved in more
traditional SAN environments using gear that scales vertically like EMC
Clarion/HDS AMS/3PAR etc.  Do you experience issues with zpool integrity
because of MPxIO events?  Has the zpool been reliable over your fabric?  Has
performance been where you would have expected it to be?

Thanks much,
-Aaron


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Neal Pollack
Ian Collins wrote:
> Brian Hechinger wrote:
>> On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
>>   
>>> Has anyone here had any luck using a CF to SATA adapter?
>>>
>>> I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card 
>>> that I wanted to use for a boot pool and even though the BIOS reports the 
>>> disk, Solaris B95 (or the installer) doesn't see it.
>>> 
>> I tried this a while back with an IDE to CF adapter.  Real nice looking one 
>> too.
>>
>> It would constantly cause OpenBSD to panic.
>>
>> I would recommend against using this, unless you get real lucky.  If you want
>> flash to boot from, buy one of the ones that is specifically made for it (not
>> CF, but industrial grade flash meant to be a HDD).  Those things work a LOT
>> better.  I can look up the details of the ones my friend uses if you'd like.
>>
>>   
> I was looking to run some tests with a CF boot drive before we get an
> X4540, which has a CF slot. The installer did see the attached USB sticks...

My team does some of the testing inside Sun for the CF boot devices.
We've used a number of IDE attached CF adapters, such as:
http://www.addonics.com/products/flash_memory_reader/ad44midecf.asp
and also some random models from www.frys.com.
We also test the CF boot feature on various Sun rack servers and blades
that use a CF socket.

I have not tested the SATA adapters but would not expect issues.
I'd like to know if you find issues.


The IDE attached devices use the legacy ATA/IDE device driver software,
which had some bugs fixed for DMA and miscellaneous CF-specific issues.
It would be interesting to see if a SATA adapter for CF, set in the BIOS
to use AHCI instead of Legacy/IDE mode, would have any issues with the
AHCI device driver software.  I've had no reason to test this yet, since
the Sun HW models build the CF socket right onto the motherboard/bus.
I can't find a reason to worry about hot-plug, since removing the boot
"drive" while Solaris is running would be, um, somewhat interesting :-)

True, the enterprise grade devices are higher quality and will last longer.
But do not underestimate the current (2008) wear-leveling firmware that
controls CF memory usage, and hence life span.  Our in-house destructive
life-span testing shows that a commercial grade CF device will outlast
the motherboard.  The consumer grade devices you find in stores or via
mail order may or may not be current generation, so your device lifespan
will vary.  It should still be rather good for a boot device, because
Solaris does very little writing to the boot "disk".  You can review
configuration ideas to maximize the life of your CF device in this
Solaris white paper on non-volatile memory:
http://www.sun.com/bigadmin/features/articles/nvm_boot.jsp
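
For illustration, the kinds of knobs involved look like this (a sketch
only; the pool and zvol names are hypothetical, and swap/dump on ZFS
volumes requires a recent build):

   zfs set atime=off rpool               # stop access-time writes to the boot pool
   dumpadm -d /dev/zvol/dsk/tank/dump    # keep crash dumps off the CF device
   swap -a /dev/zvol/dsk/tank/swap       # likewise for swap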

I hope this helps.

Cheers,

Neal Pollack

> 
> Any further information welcome.
> 
> Ian
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] SSD update

2008-08-20 Thread Al Hopper
It looks like Intel has a huge hit (product) on its hands with the
latest SSD product announcements.  No pricing yet ... but the specs
will push computer system IO bandwidth performance to numbers only
possible today with extremely expensive RAM based disk subsystems.

SSDs + ZFS - a marriage made in (computer) heaven!

http://www.tomshardware.com/news/intel-idf-ssd,6205.html
http://www.theinquirer.net/gb/inquirer/news/2008/08/19/idf-2008-intel-ssd-cometh

Regards,

-- 
Al Hopper  Logical Approach Inc, Plano, TX  [EMAIL PROTECTED]
   Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] [Fwd: beadm: Unable to activate opensolaris-x (build 95)]

2008-08-20 Thread Evan Layton
Evan Layton wrote:
> Rob McMahon wrote:
>> Evan Layton wrote:
>>> Can you set BE_PRINT_ERR to see if we can get a bit more information 
>>> on what's going on here? (export BE_PRINT_ERR=true)
>>> It would also be helpful to see what "zpool status" shows as well as 
>>> what's in menu.lst
>>>
>>>
>>  > env BE_PRINT_ERR=true beadm activate opensolaris-7
>> set_bootfs: failed to set bootfs property for pool rpool: property 
>> 'bootfs' not supported on EFI labeled devices
>> be_activate: failed to set bootfs pool property for rpool/ROOT/opensolaris-7
>> beadm: Unable to activate opensolaris-7
>>  >
>>
>> Ah-ha.  I remember something about this.  But why was it supported up 
>> until 94, but no longer in 95 ?  What was the change ?  I certainly 
>> haven't been round relabeling any disks, and it's been working fine up 
>> until now.
> 
> EFI labels have never been supported for root pools so something has
> definitely changed either in ZFS or your pool. Either something has
> relabeled one of your disks to EFI or the checking in ZFS has recently
> become more restrictive. Hopefully someone on zfs-discuss will have some
> idea why this would suddenly be showing up.

As Ethan mentioned, there was a ZFS change in build 95 that now explicitly
checks for, and disallows, setting the bootfs property on EFI-labeled
disks.

So nevermind... d'oh...

> 
>>  > zpool status
>>   pool: rpool
>>  state: ONLINE
>>  scrub: none requested
>> config:
>>
>> NAME  STATE READ WRITE CKSUM
>> rpool ONLINE   0 0 0
>>   mirror  ONLINE   0 0 0
>> c5t0d0s0  ONLINE   0 0 0
>> c5t1d0s0  ONLINE   0 0 0
>>
>> errors: No known data errors
>>  >
>>
>> Looks happy enough.  Can I detach half the mirror, re-label, re-attach, 
>> resilver, and repeat ?  Or will this restriction be removed in the near 
>> future ?  This is my desktop machine, so it's not the end of the world 
>> if I trash it, but I'd rather not, and I'd also rather keep looking at 
>> new updates.
>>
>> Cheers,
>>
>> Rob
>>
> 
> I would follow the instructions Dan Price sent out last week:
> 
> If you want to remove an EFI labelled disk from your root pool, my advice
> to you would be to do the following.  Note that I have not tested this
> particular sequence, but I think it will work.  Hah.
> 
> 0) Backup your data and settings.
> 
> 1) 'zpool detach' the EFI labelled disk from your pool.  After you do this
> YOU MUST NOT REBOOT.  Your system is now in a fragile state.
> 
> 2) Run 'zpool status' to ensure that your pool now has one disk.
> 
> 3) Edit /etc/boot/solaris/filelist.ramdisk.  Remove the only line in the
> file:
> 
>   etc/zfs/zpool.cache
> 
> 4) Delete /platform/i86pc/boot_archive and /platform/i86pc/amd64/boot_archive
> 
> 5) Run 'bootadm update-archive' -- This rebuilds the boot archive,
> omitting the zpool.cache file.
> 
> It may also be necessary to do installgrub at this point.  Probably, and
> it wouldn't hurt.
> 
> 6) Reboot your system, to ensure that you have a working configuration.
> 
> 
> I hope this helps...
> 
> -evan
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] [Fwd: beadm: Unable to activate opensolaris-x (build 95)]

2008-08-20 Thread Evan Layton
Rob McMahon wrote:
> Evan Layton wrote:
>> Can you set BE_PRINT_ERR to see if we can get a bit more information 
>> on what's going on here? (export BE_PRINT_ERR=true)
>> It would also be helpful to see what "zpool status" shows as well as 
>> what's in menu.lst
>>
>>
>  > env BE_PRINT_ERR=true beadm activate opensolaris-7
> set_bootfs: failed to set bootfs property for pool rpool: property 
> 'bootfs' not supported on EFI labeled devices
> be_activate: failed to set bootfs pool property for rpool/ROOT/opensolaris-7
> beadm: Unable to activate opensolaris-7
>  >
> 
> Ah-ha.  I remember something about this.  But why was it supported up 
> until 94, but no longer in 95 ?  What was the change ?  I certainly 
> haven't been round relabeling any disks, and it's been working fine up 
> until now.

EFI labels have never been supported for root pools so something has
definitely changed either in ZFS or your pool. Either something has
relabeled one of your disks to EFI or the checking in ZFS has recently
become more restrictive. Hopefully someone on zfs-discuss will have some
idea why this would suddenly be showing up.

> 
>  > zpool status
>   pool: rpool
>  state: ONLINE
>  scrub: none requested
> config:
> 
> NAME  STATE READ WRITE CKSUM
> rpool ONLINE   0 0 0
>   mirror  ONLINE   0 0 0
> c5t0d0s0  ONLINE   0 0 0
> c5t1d0s0  ONLINE   0 0 0
> 
> errors: No known data errors
>  >
> 
> Looks happy enough.  Can I detach half the mirror, re-label, re-attach, 
> resilver, and repeat ?  Or will this restriction be removed in the near 
> future ?  This is my desktop machine, so it's not the end of the world 
> if I trash it, but I'd rather not, and I'd also rather keep looking at 
> new updates.
> 
> Cheers,
> 
> Rob
> 

I would follow the instructions Dan Price sent out last week:

If you want to remove an EFI labelled disk from your root pool, my advice
to you would be to do the following.  Note that I have not tested this
particular sequence, but I think it will work.  Hah.

0) Backup your data and settings.

1) 'zpool detach' the EFI labelled disk from your pool.  After you do this
YOU MUST NOT REBOOT.  Your system is now in a fragile state.

2) Run 'zpool status' to ensure that your pool now has one disk.

3) Edit /etc/boot/solaris/filelist.ramdisk.  Remove the only line in the
file:

etc/zfs/zpool.cache

4) Delete /platform/i86pc/boot_archive and /platform/i86pc/amd64/boot_archive

5) Run 'bootadm update-archive' -- This rebuilds the boot archive,
omitting the zpool.cache file.

It may also be necessary to do installgrub at this point.  Probably, and
it wouldn't hurt.

6) Reboot your system, to ensure that you have a working configuration.
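
Condensed into shell form, the sequence looks something like this
(untested, per the caveat above; it assumes c5t1d0 is the EFI-labelled
half of the mirror shown in the zpool status output):

   zpool detach rpool c5t1d0s0     # 1) drop the EFI-labelled disk
   zpool status rpool              # 2) confirm one disk remains
   # 3) edit /etc/boot/solaris/filelist.ramdisk and remove the
   #    etc/zfs/zpool.cache line
   rm /platform/i86pc/boot_archive /platform/i86pc/amd64/boot_archive
   bootadm update-archive          # 5) rebuild without zpool.cache
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
   init 6                          # 6) reboot and verify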


I hope this helps...

-evan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread John
Our "enterprise" is about 300TB.. maybe a bit more...

You are correct that most of the time we grow and not shrink... however, we are 
fairly dynamic and occasionally do shrink. DBAs have been known to be off on 
their space requirements/requests.

There is also the human error factor.  If someone accidentally grows a zpool, 
there is no easy way to recover that space without down time.  Some of my LUNs 
are in the 1TB range, and if one gets added to the wrong zpool that space is 
basically stuck there until I can get a maintenance window. And then I'm not 
sure that's even possible, since my windows are only 3 hours... for example, 
what if I add a LUN to a 20TB zpool?  What would I do to remove the LUN?  I 
think I would have to create a new 20TB pool and move the data from the 
original to the new zpool... so that assumes I have a free 20TB and the down 
time...
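
To make the trap concrete, the failure mode today is roughly this (pool
and LUN names invented):

   zpool add tank c10t0d0      # one mistyped command grows the pool for good
   zpool remove tank c10t0d0   # fails: currently only hot spares and
                               # cache devices can be removed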
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Help! Possible with b80 and newest ZFS?

2008-08-20 Thread Orvar Korvar
I, like several others, have severe problems with the latest builds of SXCE. 
After b93-94 or so, everything became extremely unstable, to the point of 
rendering my Solaris install totally useless. This is written from a Windows 
machine.
http://www.opensolaris.org/jive/thread.jspa?threadID=69654&tstart=0


The problem is that I need access to all my data, which is on the newest 
version of ZFS raid. I cannot use the horribly broken b9X; I must use the 
stable b80 or so. Can I, from b80, upgrade only the ZFS part so it can access 
my ZFS raid?

Is it possible? Suggestions please? SXCE is unusable with b9X for me. Can I 
install OpenSolaris and access the newest ZFS? Or, god forbid, do I have to 
install Linux and reach ZFS with FUSE?
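
One quick way to check whether a given build can read the pool at all,
before committing to a reinstall (standard commands; nothing is imported
unless you name a pool):

   zpool upgrade -v    # lists the pool versions this build understands
   zpool import        # scans for pools and complains if the on-disk
                       # version is newer than the build supports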

Solutions? Suggestions? Tips? Help! Please!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Kyle McDonald
Mario Goebbels wrote:
>> WOW! This is quite a departure from what we've been
>> told for the past 2 years...
>> 
>
> This must be misinformation.
>
> The reason there's no project (yet) is very likely because pool shrinking 
> depends strictly on the availability of bp_rewrite functionality, which is 
> still in development.
>
> The last time the topic came up, maybe a few months ago, still in 2008, the 
> discussion indicated that it's still on the plan. But as said, it relies on 
> aforementioned functionality to be present.
>
>   
I agree, it's on the plan, but in addition to the dependency on that 
feature it was at a very low priority.  If I recall, the low priority 
was based on the perceived low demand for the feature in enterprise 
organizations. As I understood it, shrinking a pool is perceived as a 
feature most desired by home/hobby/development users, and enterprises 
mainly grow their pools, not shrink them.

So if anyone in an enterprise has a need to shrink pools, they might want 
to notify their Sun support people and make their voices heard.

Unless of course I'm wrong... Which has been known to happen from time 
to time. :)

  -Kyle

> -mg
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Mario Goebbels
> WOW! This is quite a departure from what we've been
> told for the past 2 years...

This must be misinformation.

The reason there's no project (yet) is very likely because pool shrinking 
depends strictly on the availability of bp_rewrite functionality, which is 
still in development.

The last time the topic came up, maybe a few months ago, still in 2008, the 
discussion indicated that it's still on the plan. But as said, it relies on 
the aforementioned functionality being present.

-mg
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread John
WOW! This is quite a departure from what we've been told for the past 2 years...

In fact, if your comments are true and we'll never be able to shrink a ZFS 
pool, I will be, for lack of a better word, PISSED.  

Like others have said, not being able to shrink truly prevents us from 
replacing all of our Veritas... without the ability to shrink, ZFS will be 
stuck in our dev environment and our non-critical systems...
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Florin Iucha
On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
> Has anyone here had any luck using a CF to SATA adapter?
> 
> I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card that 
> I wanted to use for a boot pool and even though the BIOS reports the disk, 
> Solaris B95 (or the installer) doesn't see it.
> 
> I might give the IDE version a go (I really wanted hot-plug), otherwise I'll 
> be able to store a couple of thousand photos in my camera...

I was able to get b94 to install on an Addonics AD2SAHDCF [1]
with two Transcend Industrial 4GB CF cards.  Search the archives of this
list/forum for more details.

florin

1: http://www.addonics.com/products/flash_memory_reader/ad2sahdcf.asp

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ARCSTAT Kstat Definitions

2008-08-20 Thread Sanjeev
Ben,

Here is an attempt.

c       -> the total cache size (MRU + MFU)
p       -> the limit of the MRU portion
(c - p) -> the limit of the MFU portion

c_max, c_min -> hard limits on c
size         -> total amount of memory consumed by the ARC

memory_throttle_count -> the number of times ZFS decided to
                         throttle ARC growth

The ARC maintains ghost lists for the MRU and MFU. When it decides to
evict a buffer from the MRU/MFU, the data buffer is freed; however,
the corresponding header is moved onto one of these ghost lists.

In the future, when we have a hit on one of these ghost lists, it is an
indication to the algorithm that the corresponding list (MRU/MFU)
should have been larger to accommodate it.

mru_ghost_hits -> hits in the MRU ghost list
mfu_ghost_hits -> hits in the MFU ghost list

These are used to correct the sizes of the MRU/MFU by adjusting the value
of 'p' (arc.c:arc_adjust()). 
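
All of these counters can be dumped with the standard kstat command,
e.g. (zfs:0:arcstats is the usual module:instance:name path):

   kstat -m zfs -n arcstats      # dump every ARC counter at once
   kstat -p zfs:0:arcstats:p     # or pull a single value, such as 'p'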

I know this is not complete.

Thanks and regards,
Sanjeev.

On Wed, Aug 20, 2008 at 04:04:59AM -0700, Ben Rockwood wrote:
> Would someone "in the know" be willing to write up (preferably blog) 
> definitive definitions/explanations of all the arcstats provided via kstat?  
> I'm struggling with proper interpretation of certain values, namely "p", 
> "memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit 
> counters.  I think I've got it figured out, but I'd really like expert 
> clarification before I start tweaking.
> 
> Thanks.
> 
> benr.
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Will Murnane
On Wed, Aug 20, 2008 at 05:17, Ian Collins <[EMAIL PROTECTED]> wrote:
> Has anyone here had any luck using a CF to SATA adapter?
I have two of these:
http://cgi.ebay.com/CF-Compact-Flash-to-SATA-Adapter-mini-usb-by-i88990_W0QQitemZ290253443832QQihZ019QQcategoryZ74941QQssPageNameZWDVWQQrdZ1QQcmdZViewItem
with two of these cards:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820183189
both of which are recognized fine by Solaris.  I'm using a Marvell
controller from Supermicro.  I haven't tried installing to them, but
they show up (and hotplug) once the OS is up.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ETA on zpool vdev removal?

2008-08-20 Thread Daniel Polombo
I meant shrinking the pool. I know it's already possible to replace a disk, 
failing or not.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ARCSTAT Kstat Definitions

2008-08-20 Thread Ben Rockwood
Would someone "in the know" be willing to write up (preferably blog) definitive 
definitions/explanations of all the arcstats provided via kstat?  I'm 
struggling with proper interpretation of certain values, namely "p", 
"memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit 
counters.  I think I've got it figured out, but I'd really like expert 
clarification before I start tweaking.

Thanks.

benr.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Robert Milkowski
Hello Ian,

Wednesday, August 20, 2008, 8:57:33 AM, you wrote:

IC> Brian Hechinger wrote:
>> On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
>>   
>>> Has anyone here had any luck using a CF to SATA adapter?
>>>
>>> I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card 
>>> that I wanted to use for a boot pool and even though the BIOS reports the 
>>> disk, Solaris B95 (or the installer) doesn't see it.
>>> 
>>
>> I tried this a while back with an IDE to CF adapter.  Real nice looking one 
>> too.
>>
>> It would constantly cause OpenBSD to panic.
>>
>> I would recommend against using this, unless you get real lucky.  If you want
>> flash to boot from, buy one of the ones that is specifically made for it (not
>> CF, but industrial grade flash meant to be a HDD).  Those things work a LOT
>> better.  I can look up the details of the ones my friend uses if you'd like.
>>
>>   
IC> I was looking to run some tests with a CF boot drive before we get an
IC> X4540, which has a CF slot. The installer did see the attached USB sticks...

IC> Any further information welcome.

When it comes to the X4540 and booting from a CF card - it just works.
It is slow for writes, but that doesn't matter much. Just make sure
you put core dumps, cores, etc. somewhere other than the CF card.
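
For example (the paths and pool names here are hypothetical):

   coreadm -g /tank/cores/%f.%p -e global   # send global core files elsewhere
   dumpadm -d /dev/zvol/dsk/tank/dump       # and crash dumps likewise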


-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ETA on zpool vdev removal?

2008-08-20 Thread Glaser, David
When you say 'removing a disk' from a zpool, do you mean shrinking a zpool by 
logically taking disks away from it, or just removing a failing disk from a 
zpool?


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Daniel Polombo
Sent: Wednesday, August 20, 2008 6:20 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ETA on zpool vdev removal?

I've just recently discovered that ZFS doesn't support (yet) removing a disk 
other than a hot spare from a zpool. I've also found out that this feature has 
been on the TODO list for ages (at least since January 2006).

Is there any kind of ETA on that feature's availability? Unfortunately, there's 
no way my current company will switch to ZFS without it, and I was already 
having a hard time convincing the m...


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ETA on zpool vdev removal?

2008-08-20 Thread Daniel Polombo
I've just recently discovered that ZFS doesn't support (yet) removing a disk 
other than a hot spare from a zpool. I've also found out that this feature has 
been on the TODO list for ages (at least since January 2006).

Is there any kind of ETA on that feature's availability? Unfortunately, there's 
no way my current company will switch to ZFS without it, and I was already 
having a hard time convincing the m...
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris installer can't be run, if target ZFS pool exists.

2008-08-20 Thread jan damborsky

>> And log an RFE for having user defined properties at the pool (if one 
>> doesn't already exist).
>> 

6739057 was filed to track this.

Thank you,
Jan

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Ian Collins
Brian Hechinger wrote:
> On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
>   
>> Has anyone here had any luck using a CF to SATA adapter?
>>
>> I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card that 
>> I wanted to use for a boot pool and even though the BIOS reports the disk, 
>> Solaris B95 (or the installer) doesn't see it.
>> 
>
> I tried this a while back with an IDE to CF adapter.  Real nice looking one 
> too.
>
> It would constantly cause OpenBSD to panic.
>
> I would recommend against using this, unless you get real lucky.  If you want
> flash to boot from, buy one of the ones that is specifically made for it (not
> CF, but industrial grade flash meant to be a HDD).  Those things work a LOT
> better.  I can look up the details of the ones my friend uses if you'd like.
>
>   
I was looking to run some tests with a CF boot drive before we get an
X4540, which has a CF slot. The installer did see the attached USB sticks...

Any further information welcome.

Ian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Brian Hechinger
On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
> Has anyone here had any luck using a CF to SATA adapter?
> 
> I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card that 
> I wanted to use for a boot pool and even though the BIOS reports the disk, 
> Solaris B95 (or the installer) doesn't see it.

I tried this a while back with an IDE to CF adapter.  Real nice looking one too.

It would constantly cause OpenBSD to panic.

I would recommend against using this, unless you get real lucky.  If you want
flash to boot from, buy one of the ones that is specifically made for it (not
CF, but industrial grade flash meant to be a HDD).  Those things work a LOT
better.  I can look up the details of the ones my friend uses if you'd like.

> I might give the IDE version a go (I really wanted hot-plug), otherwise I'll 
> be able to store a couple of thousand photos in my camera...

That's the better plan. ;)

-brian
-- 
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss