Re: [zfs-discuss] ZFS in a SAN environment

2006-12-21 Thread Darren J Moffat

Bart Smaalders wrote:

Jason J. W. Williams wrote:

Not sure. I don't see an advantage to moving off UFS for boot pools. :-)

-J


Except of course that snapshots & clones will surely be a nicer
way of recovering from "adverse administrative events"...


and make live upgrade and patching so much nicer.

lucopy is often one of the most time-consuming parts of doing live upgrade.

The other HUGE advantage of ZFS root is that you don't need to prepare 
in advance for live upgrade, because file systems are cheap and easily 
added in ZFS - unlike with UFS root, where you need at least one vtoc 
slice per live upgrade boot environment you want to keep around.


--
Darren J Moffat


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread Bart Smaalders

Jason J. W. Williams wrote:

Not sure. I don't see an advantage to moving off UFS for boot pools. :-)

-J


Except of course that snapshots & clones will surely be a nicer
way of recovering from "adverse administrative events"...
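
A minimal sketch of that recovery path (dataset and snapshot names
hypothetical):

  # before the risky administrative change:
  zfs snapshot pool/fs@before-change
  # if it goes badly, roll the filesystem back:
  zfs rollback pool/fs@before-change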

-= Bart

--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread Jason J. W. Williams

Not sure. I don't see an advantage to moving off UFS for boot pools. :-)

-J

On 12/20/06, James C. McPherson <[EMAIL PROTECTED]> wrote:

Jason J. W. Williams wrote:
> I agree with others here that the kernel panic is undesired behavior.
> If ZFS would simply offline the zpool and not kernel panic, that would
> obviate my request for an informational message. It'd be pretty darn
> obvious what was going on.

What about the root/boot pool?


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
   http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson




Re: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread James C. McPherson

James C. McPherson wrote:

Jason J. W. Williams wrote:

I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was going on.


What about the root/boot pool?


The default with ufs today is onerror=panic, so having ZFS do
likewise is no backwards step.
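
For reference, UFS exposes that policy as a mount option - onerror=panic
(the default), lock, or umount. A sketch with a hypothetical device and
mount point:

  # lock the filesystem on internal errors instead of panicking
  mount -F ufs -o onerror=lock /dev/dsk/c0t0d0s6 /export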

What other mechanisms do people suggest be implemented to
guarantee the integrity of your data on zfs?

James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread James C. McPherson

Jason J. W. Williams wrote:

I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was going on.


What about the root/boot pool?


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: Re[6]: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread Jason J. W. Williams

Hi Robert,

I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was going on.

Best Regards,
Jason

On 12/20/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:

Hello Jason,

Wednesday, December 20, 2006, 1:02:36 AM, you wrote:

JJWW> Hi Robert

JJWW> I didn't take any offense. :-) I completely agree with you that zpool
JJWW> striping leverages standard RAID-0 knowledge in that if a device
JJWW> disappears your RAID group goes poof. That doesn't really require a
JJWW> notice...was just trying to be complete. :-)

JJWW> The surprise to me was that detecting block corruption did the same
JJWW> thing...since most hardware RAID controllers and filesystems do a poor
JJWW> job of detecting block-level corruption, kernel panicking on corrupt
JJWW> blocks seems to be what folks like me aren't expecting until it
JJWW> happens.

JJWW> Frankly, in about 5 years, when ZFS and its concepts are common
JJWW> knowledge, warning folks about corrupt blocks rebooting your server
JJWW> would be like notifying them what rm and mv do. However, until then
JJWW> warning them that corruption will cause a panic would definitely aid
JJWW> folks who think they understand because they have existing RAID and
JJWW> SAN knowledge, and then get bitten. Also, I think the zfsassist
JJWW> program is a great idea for newbies. I'm not sure how often it would
JJWW> be used by storage pros new to ZFS. Using the gal with the EMC DMX-3
JJWW> again as an example (sorry! O:-) ), I'm sure she's pretty experienced
JJWW> and had no problems using ZFS correctly...just was not expecting a
JJWW> kernel panic on corruption and so was taken by surprise as to what
JJWW> caused the kernel panic when it happened. A warning message when
JJWW> creating a striped pool would in my case have stuck in my brain, so
JJWW> that when the kernel panic happened, corruption of the zpool would
JJWW> have been on my top-10 list of things to expect as a cause. Anyway, this is
JJWW> probably an Emacs/vi argument to some degree. Now that I've
JJWW> experienced a panic from zpool corruption, it's at the forefront of my
JJWW> mind when designing ZFS zpools, and the warning wouldn't do much for
JJWW> me now. Though I probably would have preferred to learn from a warning
JJWW> message instead of a panic. :-)

But with other file systems you basically get the same - in many cases
a kernel crash - but in a more unpredictable way. Not that I'm fond of
the current ZFS behavior; I would really like to be able to specify, as
in UFS, whether the system has to panic or just lock the filesystem (or
the pool).

As Eric posted some time ago (I think it was Eric), it's on the list of
things to address.

However, I still agree that striped pools should be displayed (in zpool
status) with a stripe keyword, like mirror or raidz groups - that would
be less confusing for beginners.

--
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com





Re: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread Toby Thain


On 19-Dec-06, at 11:51 AM, Jonathan Edwards wrote:



On Dec 19, 2006, at 10:15, Torrey McMahon wrote:


Darren J Moffat wrote:

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to  
do corrections when the checksums (which always work) detect  
problems.





We do not have the intelligent end-to-end management to make these  
judgments. Trying to make one layer of the stack {stronger,  
smarter, faster, bigger} while ignoring the others doesn't help.  
Trying to make educated guesses as to what the user intends  
doesn't help either.


"Hi! It looks like you're writing a block"
 Would you like help?
- Get help writing the block
- Just write the block without help
- (Don't show me this tip again)

somehow I think we all know on some level that letting a system  
attempt to guess your intent will get pretty annoying after a while ..


I think what you (hilariously) describe above is a system that's *too  
stupid*, not a system that's *too smart*...


--Toby




Re: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread Matthew Ahrens

Jason J. W. Williams wrote:

"INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect
your data."


This is a bug, not a feature.  We are currently working on fixing it.

--matt


Re[6]: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread Robert Milkowski
Hello Jason,

Wednesday, December 20, 2006, 1:02:36 AM, you wrote:

JJWW> Hi Robert

JJWW> I didn't take any offense. :-) I completely agree with you that zpool
JJWW> striping leverages standard RAID-0 knowledge in that if a device
JJWW> disappears your RAID group goes poof. That doesn't really require a
JJWW> notice...was just trying to be complete. :-)

JJWW> The surprise to me was that detecting block corruption did the same
JJWW> thing...since most hardware RAID controllers and filesystems do a poor
JJWW> job of detecting block-level corruption, kernel panicking on corrupt
JJWW> blocks seems to be what folks like me aren't expecting until it
JJWW> happens.

JJWW> Frankly, in about 5 years, when ZFS and its concepts are common
JJWW> knowledge, warning folks about corrupt blocks rebooting your server
JJWW> would be like notifying them what rm and mv do. However, until then
JJWW> warning them that corruption will cause a panic would definitely aid
JJWW> folks who think they understand because they have existing RAID and
JJWW> SAN knowledge, and then get bitten. Also, I think the zfsassist
JJWW> program is a great idea for newbies. I'm not sure how often it would
JJWW> be used by storage pros new to ZFS. Using the gal with the EMC DMX-3
JJWW> again as an example (sorry! O:-) ), I'm sure she's pretty experienced
JJWW> and had no problems using ZFS correctly...just was not expecting a
JJWW> kernel panic on corruption and so was taken by surprise as to what
JJWW> caused the kernel panic when it happened. A warning message when
JJWW> creating a striped pool would in my case have stuck in my brain, so
JJWW> that when the kernel panic happened, corruption of the zpool would
JJWW> have been on my top-10 list of things to expect as a cause. Anyway, this is
JJWW> probably an Emacs/vi argument to some degree. Now that I've
JJWW> experienced a panic from zpool corruption, it's at the forefront of my
JJWW> mind when designing ZFS zpools, and the warning wouldn't do much for
JJWW> me now. Though I probably would have preferred to learn from a warning
JJWW> message instead of a panic. :-)

But with other file systems you basically get the same - in many cases
a kernel crash - but in a more unpredictable way. Not that I'm fond of
the current ZFS behavior; I would really like to be able to specify, as
in UFS, whether the system has to panic or just lock the filesystem (or
the pool).

As Eric posted some time ago (I think it was Eric), it's on the list of
things to address.

However, I still agree that striped pools should be displayed (in zpool
status) with a stripe keyword, like mirror or raidz groups - that would
be less confusing for beginners.
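
For comparison, a sketch of how zpool status renders a two-disk dynamic
stripe today - the devices sit directly under the pool name with no
grouping keyword:

  NAME        STATE     READ WRITE CKSUM
  home        ONLINE       0     0     0
    c0t0d0    ONLINE       0     0     0
    c0t1d0    ONLINE       0     0     0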

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: Re[4]: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jason J. W. Williams

Hi Robert

I didn't take any offense. :-) I completely agree with you that zpool
striping leverages standard RAID-0 knowledge in that if a device
disappears your RAID group goes poof. That doesn't really require a
notice...was just trying to be complete. :-)

The surprise to me was that detecting block corruption did the same
thing...since most hardware RAID controllers and filesystems do a poor
job of detecting block-level corruption, kernel panicking on corrupt
blocks seems to be what folks like me aren't expecting until it
happens.

Frankly, in about 5 years, when ZFS and its concepts are common
knowledge, warning folks about corrupt blocks rebooting your server
would be like notifying them what rm and mv do. However, until then
warning them that corruption will cause a panic would definitely aid
folks who think they understand because they have existing RAID and
SAN knowledge, and then get bitten. Also, I think the zfsassist
program is a great idea for newbies. I'm not sure how often it would
be used by storage pros new to ZFS. Using the gal with the EMC DMX-3
again as an example (sorry! O:-) ), I'm sure she's pretty experienced
and had no problems using ZFS correctly...just was not expecting a
kernel panic on corruption and so was taken by surprise as to what
caused the kernel panic when it happened. A warning message when
creating a striped pool would in my case have stuck in my brain, so
that when the kernel panic happened, corruption of the zpool would
have been on my top-10 list of things to expect as a cause. Anyway, this is
probably an Emacs/vi argument to some degree. Now that I've
experienced a panic from zpool corruption, it's at the forefront of my
mind when designing ZFS zpools, and the warning wouldn't do much for
me now. Though I probably would have preferred to learn from a warning
message instead of a panic. :-)

Best Regards,
Jason

On 12/19/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:

Hello Jason,

Tuesday, December 19, 2006, 11:23:56 PM, you wrote:

JJWW> Hi Robert,

JJWW> I don't think it's about assuming the admin is an idiot. It happened to
JJWW> me in development and I didn't expect it...I hope I'm not an idiot.
JJWW> :-)

JJWW> Just observing the list, a fair number of people don't expect it. The
JJWW> likelihood you'll miss this one little bit of very important
JJWW> information in the manual or man page is pretty high. So it would be
JJWW> nice if an informational message appeared saying something like:

JJWW> "INFORMATION: If a member of this striped zpool becomes unavailable or
JJWW> develops corruption, Solaris will kernel panic and reboot to protect
JJWW> your data."

JJWW> I definitely wouldn't require any sort of acknowledgment of this
JJWW> message, such as requiring a "-f" flag to continue.

First sorry for my wording - no offense to anyone was meant.

I don't know - it's like changing every tool in the system, so:

  # rm file
  INFORMATION: by removing file you won't be able to read it again

  # mv fileA fileB
  INFORMATION: by moving fileA to fileB you won't be able 

  # reboot
  INFORMATION: by rebooting server it won't be up for some time


I don't know if such behavior is desired.
If someone doesn't understand basic RAID concepts, then perhaps some
assistant utilities (GUI or CLI) are more appropriate for them, like
what Veritas did. But putting warning messages here and there to inform
the user that he probably doesn't know what he is doing isn't a good
option.

Perhaps zpool status should explicitly show stripe groups with the
word stripe, like:

  home
    stripe
      c0t0d0
      c0t1d0

So it will be clearer to people what they actually configured.
I would really hate a system informing me on every command that I
possibly don't know what I'm doing.


Maybe just a wrapper:

zfsassist redundant space-optimized disk0 disk1 disk2
zfsassist redundant speed-optimized disk0 disk1 disk2
zfsassist non-redundant disk0 disk1 disk2

you get the idea.



--
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com





Re[4]: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Robert Milkowski
Hello Jason,

Tuesday, December 19, 2006, 11:23:56 PM, you wrote:

JJWW> Hi Robert,

JJWW> I don't think it's about assuming the admin is an idiot. It happened to
JJWW> me in development and I didn't expect it...I hope I'm not an idiot.
JJWW> :-)

JJWW> Just observing the list, a fair number of people don't expect it. The
JJWW> likelihood you'll miss this one little bit of very important
JJWW> information in the manual or man page is pretty high. So it would be
JJWW> nice if an informational message appeared saying something like:

JJWW> "INFORMATION: If a member of this striped zpool becomes unavailable or
JJWW> develops corruption, Solaris will kernel panic and reboot to protect
JJWW> your data."

JJWW> I definitely wouldn't require any sort of acknowledgment of this
JJWW> message, such as requiring a "-f" flag to continue.

First sorry for my wording - no offense to anyone was meant.

I don't know - it's like changing every tool in the system, so:

  # rm file
  INFORMATION: by removing file you won't be able to read it again

  # mv fileA fileB
  INFORMATION: by moving fileA to fileB you won't be able 

  # reboot
  INFORMATION: by rebooting server it won't be up for some time


I don't know if such behavior is desired.
If someone doesn't understand basic RAID concepts, then perhaps some
assistant utilities (GUI or CLI) are more appropriate for them, like
what Veritas did. But putting warning messages here and there to inform
the user that he probably doesn't know what he is doing isn't a good
option.

Perhaps zpool status should explicitly show stripe groups with the
word stripe, like:

  home
    stripe
      c0t0d0
      c0t1d0

So it will be clearer to people what they actually configured.
I would really hate a system informing me on every command that I
possibly don't know what I'm doing.


Maybe just a wrapper:

zfsassist redundant space-optimized disk0 disk1 disk2
zfsassist redundant speed-optimized disk0 disk1 disk2
zfsassist non-redundant disk0 disk1 disk2

you get the idea.



-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: Re[2]: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jason J. W. Williams

Hi Robert,

I don't think it's about assuming the admin is an idiot. It happened to
me in development and I didn't expect it...I hope I'm not an idiot.
:-)

Just observing the list, a fair number of people don't expect it. The
likelihood you'll miss this one little bit of very important
information in the manual or man page is pretty high. So it would be
nice if an informational message appeared saying something like:

"INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect
your data."

I definitely wouldn't require any sort of acknowledgment of this
message, such as requiring a "-f" flag to continue.

Best Regards,
Jason


On 12/19/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:

Hello Jason,

Tuesday, December 19, 2006, 8:54:09 PM, you wrote:

>> > Shouldn't there be a big warning when configuring a pool
>> > with no redundancy and/or should that not require a -f flag ?
>>
>> why?  what if the redundancy is below the pool .. should we
>> warn that ZFS isn't directly involved in redundancy decisions?

JJWW> Because if the host controller port goes flaky and starts introducing
JJWW> checksum errors at the block level (a lady a few weeks ago reported
JJWW> this), ZFS will kernel panic, and most users won't expect it.  Users
JJWW> should be warned, it seems to me, of the real possibility of a kernel
JJWW> panic if they don't implement redundancy at the zpool level. Just my 2
JJWW> cents.

I don't agree - do not assume the sysadmin is a complete idiot.
Sure, let's create GUIs and other 'intelligent' creators which are for
very beginner users with no understanding at all.

Maybe we need something like vxassist (zfsassist?)?



--
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com





Re[2]: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Robert Milkowski
Hello Jason,

Tuesday, December 19, 2006, 8:54:09 PM, you wrote:

>> > Shouldn't there be a big warning when configuring a pool
>> > with no redundancy and/or should that not require a -f flag ?
>>
>> why?  what if the redundancy is below the pool .. should we
>> warn that ZFS isn't directly involved in redundancy decisions?

JJWW> Because if the host controller port goes flaky and starts introducing
JJWW> checksum errors at the block level (a lady a few weeks ago reported
JJWW> this), ZFS will kernel panic, and most users won't expect it.  Users
JJWW> should be warned, it seems to me, of the real possibility of a kernel
JJWW> panic if they don't implement redundancy at the zpool level. Just my 2
JJWW> cents.

I don't agree - do not assume the sysadmin is a complete idiot.
Sure, let's create GUIs and other 'intelligent' creators which are for
very beginner users with no understanding at all.

Maybe we need something like vxassist (zfsassist?)?



-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jason J. W. Williams

> Shouldn't there be a big warning when configuring a pool
> with no redundancy and/or should that not require a -f flag ?

why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Because if the host controller port goes flaky and starts introducing
checksum errors at the block level (a lady a few weeks ago reported
this), ZFS will kernel panic, and most users won't expect it.  Users
should be warned, it seems to me, of the real possibility of a kernel
panic if they don't implement redundancy at the zpool level. Just my 2
cents.

Best Regards,
Jason


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Richard Elling

Torrey McMahon wrote:
The first bug we'll get when adding a "ZFS is not going to be able to 
fix data inconsistency problems" error message to every pool creation or 
similar operation is going to be "Need a flag to turn off the warning 
message..."


Richard pines for ditto blocks for data...
 -- richard
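
Ditto blocks for user data did later arrive in ZFS as the per-dataset
copies property. A sketch, with a hypothetical dataset name:

  # keep two copies of every data block, even on a single-device pool
  zfs set copies=2 tank/home

This protects against localized block corruption, though not against
losing a whole device of a stripe.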



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards


On Dec 19, 2006, at 10:15, Torrey McMahon wrote:


Darren J Moffat wrote:

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to  
do corrections when the checksums (which always work) detect  
problems.





We do not have the intelligent end-to-end management to make these  
judgments. Trying to make one layer of the stack {stronger,  
smarter, faster, bigger} while ignoring the others doesn't help.  
Trying to make educated guesses as to what the user intends doesn't  
help either.


"Hi! It looks like you're writing a block"
 Would you like help?
- Get help writing the block
- Just write the block without help
- (Don't show me this tip again)

somehow I think we all know on some level that letting a system  
attempt to guess your intent will get pretty annoying after a while ..



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Darren J Moffat

Torrey McMahon wrote:

Darren J Moffat wrote:

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to do 
corrections when the checksums (which always work) detect problems.





We do not have the intelligent end-to-end management to make these 
judgments. Trying to make one layer of the stack {stronger, smarter, 
faster, bigger} while ignoring the others doesn't help. Trying to make 
educated guesses as to what the user intends doesn't help either.


The first bug we'll get when adding a "ZFS is not going to be able to 
fix data inconsistency problems" error message to every pool creation or 
similar operation is going to be "Need a flag to turn off the warning 
message..."


said "flag" is 2>/dev/null ;-)


--
Darren J Moffat


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Torrey McMahon

Darren J Moffat wrote:

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to do 
corrections when the checksums (which always work) detect problems.





We do not have the intelligent end-to-end management to make these 
judgments. Trying to make one layer of the stack {stronger, smarter, 
faster, bigger} while ignoring the others doesn't help. Trying to make 
educated guesses as to what the user intends doesn't help either.


The first bug we'll get when adding a "ZFS is not going to be able to 
fix data inconsistency problems" error message to every pool creation or 
similar operation is going to be "Need a flag to turn off the warning 
message..."





Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Darren J Moffat

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to do 
corrections when the checksums (which always work) detect problems.


--
Darren J Moffat


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Roch - PAE

Jonathan Edwards writes:
 > On Dec 19, 2006, at 07:17, Roch - PAE wrote:
 > 
 > >
 > > Shouldn't there be a big warning when configuring a pool
 > > with no redundancy and/or should that not require a -f flag ?
 > 
 > why?  what if the redundancy is below the pool .. should we
 > warn that ZFS isn't directly involved in redundancy decisions?
 > 

I think so, while pointing out the associated downside of doing that.

-r

 > ---
 > .je



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?

---
.je


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards


On Dec 18, 2006, at 17:52, Richard Elling wrote:

In general, the closer to the user you can make policy decisions, the
better decisions you can make.  The fact that we've had 10 years of
RAID arrays acting like dumb block devices doesn't mean that will
continue for the next 10 years :-)  In the interim, we will see more
and more intelligence move closer to the user.


I thought this is what the T10 OSD spec was set up to address.  We've
already got device manufacturers beginning to design and code to the
spec.

---
.je

(ps .. actually it's closer to 20+ years of RAID and dumb block  
devices ..)



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Roch - PAE

Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?

-r
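
As things stand there is no such warning. On clean disks (hypothetical
device names) this succeeds silently, creating a two-disk dynamic
stripe with no pool-level redundancy:

  zpool create tank c1t0d0 c1t1d0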

Al Hopper writes:
 > On Sun, 17 Dec 2006, Ricardo Correia wrote:
 > 
 > > On Friday 15 December 2006 20:02, Dave Burleson wrote:
 > > > Does anyone have a document that describes ZFS in a pure
 > > > SAN environment?  What will and will not work?
 > > >
 > > >  From some of the information I have been gathering
 > > > it doesn't appear that ZFS was intended to operate
 > > > in a SAN environment.
 > >
 > > This might answer your question:
 > > http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
 > 
 > The section entitled "Does ZFS work with SAN-attached devices?" does not
 > make it clear the (some would say) dire effects of not having pool
 > redundancy.  I think that FAQ should clearly spell out the downside; i.e.,
 > where ZFS will "say" (Sorry Charlie) "pool is corrupt".
 > 
 > A FAQ should always emphasize the real-world downsides to poor decisions
 > made by the reader.   Not delivering "bad news" does the reader a
 > dis-service IMHO.
 > 
 > Regards,
 > 
 > Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
 >Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
 > OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 >  OpenSolaris Governing Board (OGB) Member - Feb 2006



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-18 Thread Jason J. W. Williams

It seems to me that the optimal scenario would be network filesystems
on top of ZFS, so you can get the data portability of a SAN, but let
ZFS make all of the decisions. Short of that, ZFS on SAN-attached
JBODs would give a similar benefit. Having benefited tremendously from
being able to easily detach and re-attach storage because of a SAN,
it's difficult to give that capability up to get the maximum ZFS benefit.

Best Regards,
Jason

On 12/18/06, Richard Elling <[EMAIL PROTECTED]> wrote:

comment far below...

Jonathan Edwards wrote:
>
> On Dec 18, 2006, at 16:13, Torrey McMahon wrote:
>
>> Al Hopper wrote:
>>> On Sun, 17 Dec 2006, Ricardo Correia wrote:
>>>
>>>
 On Friday 15 December 2006 20:02, Dave Burleson wrote:

> Does anyone have a document that describes ZFS in a pure
> SAN environment?  What will and will not work?
>
>  From some of the information I have been gathering
> it doesn't appear that ZFS was intended to operate
> in a SAN environment.
>
 This might answer your question:
 http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid

>>>
>>> The section entitled "Does ZFS work with SAN-attached devices?" does not
>>> make it clear the (some would say) dire effects of not having pool
>>> redundancy.  I think that FAQ should clearly spell out the downside;
>>> i.e.,
>>> where ZFS will "say" (Sorry Charlie) "pool is corrupt".
>>>
>>> A FAQ should always emphasize the real-world downsides to poor decisions
>>> made by the reader.   Not delivering "bad news" does the reader a
>>> dis-service IMHO.
>>
>>
>> I'd say that it's clearly described in the FAQ.  If you push too hard
>> people will infer that SANs are broken if you use ZFS on top of them
>> or vice versa. The only bit that looks a little questionable to my
>> eyes is ...
>>
>>Overall, ZFS functions as designed with SAN-attached devices, but if
>>you expose simpler devices to ZFS, you can better leverage all
>>available features.
>>
>> What are "simpler devices"?  (I could take a guess ... )
>
> stone tablets in a room full of monkeys with chisels?
>
> The bottom line is ZFS wants to ultimately function as the controller cache
> and eventually eliminate the blind data algorithms that they incorporate ..

I don't get this impression at all.

> the problem is that we can't really say that explicitly since we sell,
> and much
> of the enterprise operates with enterprise class arrays and integrated data
> cache.  The trick is in balancing who does what since you've really got
> duplicate Virtualization, RAID, and caching options open to you.

In general, the closer to the user you can make policy decisions, the better
decisions you can make.  The fact that we've had 10 years of RAID arrays
acting like dumb block devices doesn't mean that will continue for the next
10 years :-)  In the interim, we will see more and more intelligence move
closer to the user.
  -- richard



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-18 Thread Al Hopper
On Mon, 18 Dec 2006, Torrey McMahon wrote:

> Al Hopper wrote:
> > On Sun, 17 Dec 2006, Ricardo Correia wrote:
> >
> >
> >> On Friday 15 December 2006 20:02, Dave Burleson wrote:
> >>
> >>> Does anyone have a document that describes ZFS in a pure
> >>> SAN environment?  What will and will not work?
> >>>
> >>>  From some of the information I have been gathering
> >>> it doesn't appear that ZFS was intended to operate
> >>> in a SAN environment.
> >>>
> >> This might answer your question:
> >> http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
> >>
> >
> > The section entitled "Does ZFS work with SAN-attached devices?" does not
> > make it clear the (some would say) dire effects of not having pool
> > redundancy.  I think that FAQ should clearly spell out the downside; i.e.,
> > where ZFS will "say" (Sorry Charlie) "pool is corrupt".
> >
> > A FAQ should always emphasize the real-world downsides to poor decisions
> > made by the reader.   Not delivering "bad news" does the reader a
> > dis-service IMHO.
>
>
> I'd say that it's clearly described in the FAQ.  If you push too hard
> people will infer that SANs are broken if you use ZFS on top of them or
> vice versa.
[  re-formatted ... but no content changed  ]

Fair enough - I'm also in receipt of pushback from the illustrious Eric
Schrock - which usually indicates that I'm on the losing side of this
argument ^H^H^H^H^H^H^H^H (sorry) discussion. :)

> The only bit that looks a little questionable to my eyes is ...
>
> Overall, ZFS functions as designed with SAN-attached devices, but if
> you expose simpler devices to ZFS, you can better leverage all
> available features.
>
> What are "simpler devices"?  (I could take a guess ... )
>

--- new comment 

Let me look at a couple of possible user "bad" assumptions and see if the
FAQ still reflects what a ZFS "convert" _might_ inadvertently do.  And
I'll try the scenarios I have in mind on Update 3.  In the case that I don't come
up with anything worthwhile, I'll still post a followup.  I think it is
always best to "fess up" to a mistake or a misleading post.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-18 Thread Richard Elling

comment far below...

Jonathan Edwards wrote:


On Dec 18, 2006, at 16:13, Torrey McMahon wrote:


Al Hopper wrote:

On Sun, 17 Dec 2006, Ricardo Correia wrote:



On Friday 15 December 2006 20:02, Dave Burleson wrote:


Does anyone have a document that describes ZFS in a pure
SAN environment?  What will and will not work?

 From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.


This might answer your question:
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid



The section entitled "Does ZFS work with SAN-attached devices?" does not
make it clear the (some would say) dire effects of not having pool
redundancy.  I think that FAQ should clearly spell out the downside;
i.e., where ZFS will "say" (Sorry Charlie) "pool is corrupt".

A FAQ should always emphasize the real-world downsides to poor decisions
made by the reader.   Not delivering "bad news" does the reader a
dis-service IMHO.



I'd say that it's clearly described in the FAQ.  If you push too hard 
people will infer that SANs are broken if you use ZFS on top of them 
or vice versa. The only bit that looks a little questionable to my 
eyes is ...


   Overall, ZFS functions as designed with SAN-attached devices, but if
   you expose simpler devices to ZFS, you can better leverage all
   available features.

What are "simpler devices"?  (I could take a guess ... )


stone tablets in a room full of monkeys with chisels?

The bottom line is ZFS wants to ultimately function as the controller cache
and eventually eliminate the blind data algorithms that they incorporate ..


I don't get this impression at all.

the problem is that we can't really say that explicitly since we sell,
and much of the enterprise operates with enterprise class arrays and
integrated data
cache.  The trick is in balancing who does what since you've really got
duplicate Virtualization, RAID, and caching options open to you.


In general, the closer to the user you can make policy decisions, the better
decisions you can make.  The fact that we've had 10 years of RAID arrays
acting like dumb block devices doesn't mean that will continue for the next
10 years :-)  In the interim, we will see more and more intelligence move
closer to the user.
 -- richard



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-18 Thread Jonathan Edwards


On Dec 18, 2006, at 16:13, Torrey McMahon wrote:


Al Hopper wrote:

On Sun, 17 Dec 2006, Ricardo Correia wrote:



On Friday 15 December 2006 20:02, Dave Burleson wrote:


Does anyone have a document that describes ZFS in a pure
SAN environment?  What will and will not work?

 From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.


This might answer your question:
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid



The section entitled "Does ZFS work with SAN-attached devices?" does
not make it clear the (some would say) dire effects of not having pool
redundancy.  I think that FAQ should clearly spell out the downside;
i.e., where ZFS will "say" (Sorry Charlie) "pool is corrupt".

A FAQ should always emphasize the real-world downsides to poor
decisions made by the reader.  Not delivering "bad news" does the
reader a dis-service IMHO.



I'd say that it's clearly described in the FAQ.  If you push too
hard people will infer that SANs are broken if you use ZFS on top
of them or vice versa. The only bit that looks a little
questionable to my eyes is ...

   Overall, ZFS functions as designed with SAN-attached devices, but
   if you expose simpler devices to ZFS, you can better leverage all
   available features.

What are "simpler devices"?  (I could take a guess ... )


stone tablets in a room full of monkeys with chisels?

The bottom line is ZFS wants to ultimately function as the controller
cache and eventually eliminate the blind data algorithms that they
incorporate .. the problem is that we can't really say that explicitly
since we sell, and much of the enterprise operates with enterprise
class arrays and integrated data cache.  The trick is in balancing who
does what since you've really got duplicate Virtualization, RAID, and
caching options open to you.

.je




Re: [zfs-discuss] ZFS in a SAN environment

2006-12-18 Thread Torrey McMahon

Al Hopper wrote:

On Sun, 17 Dec 2006, Ricardo Correia wrote:

  

On Friday 15 December 2006 20:02, Dave Burleson wrote:


Does anyone have a document that describes ZFS in a pure
SAN environment?  What will and will not work?

 From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.
  

This might answer your question:
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid



The section entitled "Does ZFS work with SAN-attached devices?" does not
make it clear the (some would say) dire effects of not having pool
redundancy.  I think that FAQ should clearly spell out the downside; i.e.,
where ZFS will "say" (Sorry Charlie) "pool is corrupt".

A FAQ should always emphasize the real-world downsides to poor decisions
made by the reader.   Not delivering "bad news" does the reader a
dis-service IMHO.



I'd say that it's clearly described in the FAQ.  If you push too hard 
people will infer that SANs are broken if you use ZFS on top of them or 
vice versa. The only bit that looks a little questionable to my eyes is ...


   Overall, ZFS functions as designed with SAN-attached devices, but if
   you expose simpler devices to ZFS, you can better leverage all
   available features.

What are "simpler devices"?  (I could take a guess ... )



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-17 Thread Eric Schrock
On Sun, Dec 17, 2006 at 07:57:20PM -0600, Al Hopper wrote:
> 
> The section entitled "Does ZFS work with SAN-attached devices?" does not
> make it clear the (some would say) dire effects of not having pool
> redundancy.  I think that FAQ should clearly spell out the downside; i.e.,
> where ZFS will "say" (Sorry Charlie) "pool is corrupt".
> 

This is not entirely true, thanks to ditto blocks.  All metadata is
written multiple times (3 times for pool metadata) regardless of the
underlying device layout.  We did this precisely because ZFS has a
tree-based layout - losing an entire pool due to a single corrupt block
is not acceptable.  If you have a corruption in three distinct blocks
across different devices, then you have some seriously busted hardware.
I would be surprised if any filesystem were able to run sensibly in such
an environment.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-17 Thread Gregory Shaw

On Dec 17, 2006, at 6:57 PM, Al Hopper wrote:


On Sun, 17 Dec 2006, Ricardo Correia wrote:


On Friday 15 December 2006 20:02, Dave Burleson wrote:

Does anyone have a document that describes ZFS in a pure
SAN environment?  What will and will not work?

 From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.


This might answer your question:
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid


The section entitled "Does ZFS work with SAN-attached devices?" does
not make it clear the (some would say) dire effects of not having pool
redundancy.  I think that FAQ should clearly spell out the downside;
i.e., where ZFS will "say" (Sorry Charlie) "pool is corrupt".

A FAQ should always emphasize the real-world downsides to poor
decisions made by the reader.  Not delivering "bad news" does the
reader a dis-service IMHO.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006



Hmmm... A question.  Are you referring to not using redundancy within  
the array, or not using a redundant pool configuration?


In the case of the former, I completely agree.

In the case of the latter using intelligent arrays, I don't see how
the 'pool corrupt' problem differs from any non-ZFS solution
today.  If you're using RAID-5 LUNs along with UFS/VxFS/SVM with no  
mirroring, you're in the same situation; corruption within the array  
will require a data restore.


Personally, I think data that requires more than RAID-5 redundancy  
should be mirrored between discrete storage arrays.  This  
configuration allows ZFS to mirror the data, while using RAID-5 (or  
better) within the controllers for best performance.
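
A sketch of that layout, assuming one LUN presented from each of two
arrays (device names hypothetical):

  # ZFS mirrors across the arrays; each array runs RAID-5 internally
  zpool create tank mirror c2t0d0 c4t0d0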


This solution isn't cheap, however.   The justification for a dual- 
array solution really depends on the data value.


-
Gregory Shaw, IT Architect
IT CTO Group, Sun Microsystems Inc.
Phone: (303)-272-8817
500 Eldorado Blvd, UBRM02-157 [EMAIL PROTECTED] (work)
Broomfield, CO 80021   [EMAIL PROTECTED] (home)
"When Microsoft writes an application for Linux, I've won." - Linus  
Torvalds






Re: [zfs-discuss] ZFS in a SAN environment

2006-12-17 Thread Al Hopper
On Sun, 17 Dec 2006, Ricardo Correia wrote:

> On Friday 15 December 2006 20:02, Dave Burleson wrote:
> > Does anyone have a document that describes ZFS in a pure
> > SAN environment?  What will and will not work?
> >
> >  From some of the information I have been gathering
> > it doesn't appear that ZFS was intended to operate
> > in a SAN environment.
>
> This might answer your question:
> http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid

The section entitled "Does ZFS work with SAN-attached devices?" does not
make it clear the (some would say) dire effects of not having pool
redundancy.  I think that FAQ should clearly spell out the downside; i.e.,
where ZFS will "say" (Sorry Charlie) "pool is corrupt".

A FAQ should always emphasize the real-world downsides to poor decisions
made by the reader.   Not delivering "bad news" does the reader a
dis-service IMHO.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-17 Thread Ricardo Correia
On Friday 15 December 2006 20:02, Dave Burleson wrote:
> Does anyone have a document that describes ZFS in a pure
> SAN environment?  What will and will not work?
>
>  From some of the information I have been gathering
> it doesn't appear that ZFS was intended to operate
> in a SAN environment.

This might answer your question:
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-15 Thread Mike Seda
I use ZFS in a SAN. I have two Sun V440s running Solaris 10 U2, which 
have LUNs assigned to them from my Sun SE 3511. So far, it has worked 
flawlessly.



Robert Milkowski wrote:

Hello Dave,

Friday, December 15, 2006, 9:02:31 PM, you wrote:

DB> Does anyone have a document that describes ZFS in a pure
DB> SAN environment?  What will and will not work?

ZFS is "just" a filesystem with "just" an integrated volume manager.
Ok, it's more than that.
The point is that if any other file system works in your SAN then ZFS
should also work. There could be some issues with some arrays and
cache flushing (I haven't been hit by that), but there's a workaround.
Other than that it should just work or should even work better due to
end-to-end data integrity - generally with SANs you've got more things
which can play with your data and ZFS can take care of it or at least
detect it.


DB>  From some of the information I have been gathering
DB> it doesn't appear that ZFS was intended to operate
DB> in a SAN environment.

I don't know why people keep saying strange things about ZFS.
Maybe it's due to the fact that ZFS is so different that they don't
know what to do with it and get confused? Or maybe, as ZFS makes cheap
storage solutions a really valuable option, people start to think it only belongs
to that segment - which is of course not true.

  




Re: [zfs-discuss] ZFS in a SAN environment

2006-12-15 Thread Robert Milkowski
Hello Dave,

Friday, December 15, 2006, 9:02:31 PM, you wrote:

DB> Does anyone have a document that describes ZFS in a pure
DB> SAN environment?  What will and will not work?

ZFS is "just" a filesystem with "just" an integrated volume manager.
Ok, it's more than that.
The point is that if any other file system works in your SAN then ZFS
should also work. There could be some issues with some arrays and
cache flushing (I haven't been hit by that), but there's a workaround.
Other than that it should just work or should even work better due to
end-to-end data integrity - generally with SANs you've got more things
which can play with your data and ZFS can take care of it or at least
detect it.
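
The workaround commonly cited is to stop ZFS from issuing cache-flush
requests that the array honors too literally; in later builds this is
the zfs_nocacheflush tunable (tunable name an assumption - check your
release, and only do this if the array cache is nonvolatile):

  # /etc/system - assumes battery-backed array cache
  set zfs:zfs_nocacheflush = 1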


DB>  From some of the information I have been gathering
DB> it doesn't appear that ZFS was intended to operate
DB> in a SAN environment.

I don't know why people keep saying strange things about ZFS.
Maybe it's due to the fact that ZFS is so different that they don't
know what to do with it and get confused? Or maybe, as ZFS makes cheap
storage solutions a really valuable option, people start to think it only belongs
to that segment - which is of course not true.

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] ZFS in a SAN environment

2006-12-15 Thread Torrey McMahon

Dave Burleson wrote:

Does anyone have a document that describes ZFS in a pure
SAN environment?  What will and will not work?

From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.


What information? ZFS works on a SAN just as well as it does in other 
environments.





[zfs-discuss] ZFS in a SAN environment

2006-12-15 Thread Dave Burleson

Does anyone have a document that describes ZFS in a pure
SAN environment?  What will and will not work?

From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.

Thanks,

Dave