[zfs-discuss] Remove a mirrored pair from a pool

2008-01-07 Thread Alex
Hi, I had a question regarding a situation I have with my ZFS pool.

I have a ZFS pool "ftp" and within it are three 250GB drives in a raidz and two
400GB drives in a simple mirror. The pool itself has more than 400GB free and I
would like to remove the 400GB drives from the server. My concern is how to
remove them without causing the entire pool to become inconsistent. Is there a
way to tell ZFS to get all the data off the 400GB mirror so the disks can safely
be removed?

Thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove a mirrored pair from a pool

2008-01-07 Thread Robert Milkowski
Hello Alex,

Monday, January 7, 2008, 11:59:42 AM, you wrote:

A Hi, I had a question regarding a situation I have with my ZFS pool.

A I have a ZFS pool "ftp" and within it are three 250GB drives in a raidz
A and two 400GB drives in a simple mirror. The pool itself has more
A than 400GB free and I would like to remove the 400GB drives from
A the server. My concern is how to remove them without causing the
A entire pool to become inconsistent. Is there a way to tell ZFS to
A get all the data off the 400GB mirror so the disks can safely be removed?

Unfortunately, you've got to manually put the data somewhere else, re-create the
pool without the mirror, and put the data back.

On-line removal of disks is coming...
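
A rough outline of the manual shuffle, assuming you have a second pool or other
scratch space to stash a copy (pool, filesystem and device names below are
placeholders):

   # copy the data out (rsync or tar to scratch space works just as well)
   zfs snapshot ftp/data@migrate
   zfs send ftp/data@migrate | zfs receive backup/data

   # destroy and re-create the pool from just the three 250GB disks
   zpool destroy ftp
   zpool create ftp raidz c1t0d0 c1t1d0 c1t2d0

   # copy the data back
   zfs send backup/data@migrate | zfs receive ftp/data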

-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] removing a separate zil device

2008-01-07 Thread Bill Moloney
This is a re-post of this issue ... I didn't get any replies to the previous
post of 12/27 ... I'm hoping someone is back from holiday
who may have some insight into this problem ... Bill

when I remove a separate zil disk from a pool, the pool continues to function,
logging synchronous writes to the disks in the pool. Status shows that the log
disk has been removed, and everything seems to work fine until I export the
pool. 

After the pool has been exported (long after the log disk was removed
and gigabytes of synchronous writes were performed successfully), 
I am no longer able to
import the pool. I get an error stating that a pool device cannot be found, 
and importing the pool cannot succeed until the missing device (the separate
zil log disk) is replaced in the system. 

There is a bug filed by Neil Perrin:
6574286 removing a slog doesn't work
regarding the problem of not being able to remove a separate ZIL device from
a pool, but no detail on the ramifications of just taking the device out of
the JBOD. 

Taking it out does not impact the immediate function of the pool,
but the inability to re-import it after this event is a significant issue. Has 
anyone found a workaround for this problem? I have data in a pool that
I cannot import because the separate ZIL is no longer available to me.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] removing a separate zil device

2008-01-07 Thread Kyle McDonald
Bill Moloney wrote:
 Taking it out does not impact the immediate function of the pool,
 but the inability to re-import it after this event is a significant issue. 
 Has 
 anyone found a workaround for this problem ? I have data in a pool that
 I cannot import because the separate zil is no longer available to me.
   
Just a guess here. The disk the ZIL was on is no longer available, but 
do you have another disk available?
I would think a 'zpool replace' might help you replace the missing disk 
with some other disk.
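
Something along these lines, maybe (pool and device names are just
placeholders):

   zpool replace mypool <missing-log-device-or-its-guid> c3t0d0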

But maybe not, if you can't import it to begin with...


   -Kyle


  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Intent logs vs Journaling

2008-01-07 Thread parvez shaikh
Hello,

I am learning ZFS, its design and layout.

I would like to understand how the intent log is different from a journal.

A journal, too, is a log of updates that ensures consistency of the file system
across crashes. The purpose of the intent log appears to be the same. I hope I
am not missing something important in these concepts.

Also, I read that updates in ZFS are intrinsically atomic, but I can't
understand how they are intrinsically atomic:
http://weblog.infoworld.com/yager/archives/2007/10/suns_zfs_is_clo.html

I would be grateful if someone could address my query.

Thanks

   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intent logs vs Journaling

2008-01-07 Thread Neil Perrin


parvez shaikh wrote:
 Hello,
 
 I am learning ZFS, its design and layout.
 
 I would like to understand how Intent logs are different from journal?
 
 Journal too are logs of updates to ensure consistency of file system 
 over crashes. Purpose of intent log also appear to be same.  I hope I am 
 not missing something important in these concepts.

There is a difference. A journal contains the necessary transactions to
make the on-disk fs consistent. The ZFS intent log is not needed for consistency.
Here's an extract from http://blogs.sun.com/perrin/entry/the_lumberjack :


ZFS is always consistent on disk due to its transaction model. Unix system 
calls can be considered as transactions which are aggregated into a transaction 
group for performance and committed together periodically. Either everything 
commits or nothing does. That is, if a power goes out, then the transactions in 
the pool are never partial. This commitment happens fairly infrequently - 
typically a few seconds between each transaction group commit.

Some applications, such as databases, need assurance that say the data they 
wrote or mkdir they just executed is on stable storage, and so they request 
synchronous semantics such as O_DSYNC (when opening a file), or execute 
fsync(fd) after a series of changes to a file descriptor. Obviously waiting 
seconds for the transaction group to commit before returning from the system 
call is not a high performance solution. Thus the ZFS Intent Log (ZIL) was born.


 
 Also I read that Updates in ZFS are intrinsically atomic,  I cant 
 understand how they are intrinsically atomic 
 http://weblog.infoworld.com/yager/archives/2007/10/suns_zfs_is_clo.html
 
 I would be grateful if someone can address my query
 
 Thanks
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] removing a separate zil device

2008-01-07 Thread Richard Elling
Perhaps this is being tracked as 6538021?
http://bugs.opensolaris.org/view_bug.do?bug_id=6538021
 -- richard

Bill Moloney wrote:
 This is a re-post of this issue ... I didn't get any replies to the previous
 post of 12/27 ... I'm hoping someone is back from holiday
 who may have some insight into this problem ... Bill

 when I remove a separate zil disk from a pool, the pool continues to function,
 logging synchronous writes to the disks in the pool. Status shows that the log
 disk has been removed, and everything seems to work fine until I export the
 pool. 

 After the pool has been exported (long after the log disk was removed
 and gigabytes of synchronous writes were performed successfully), 
 I am no longer able to
 import the pool. I get an error stating that a pool device cannot be found, 
 and importing the pool cannot succeed until the missing device (the separate
 zil log disk) is replaced in the system. 

 There is a bug filed by Neil Perrin:
 6574286 removing a slog doesn't work
 regarding the problem of not being able to remove a separate ZIL device from
 a pool, but no detail on the ramifications of just taking the device out of
 the JBOD. 

 Taking it out does not impact the immediate function of the pool,
 but the inability to re-import it after this event is a significant issue. 
 Has 
 anyone found a workaround for this problem ? I have data in a pool that
 I cannot import because the separate zil is no longer available to me.
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS speaker needed (Was: Upcoming 1/9 Meeting at Google Chicago...)

2008-01-07 Thread Eric Boutilier
Marc Temkin wrote:
 
  Eric,
 
  ...
  ... In addition we are interested in getting a speaker on the Sun ZFS
  technology. If you know of any available speakers knowledgeable with
  ZFS please let me know or pass onto them my contact information.
 
 
  Thanks,
 
  Marc Temkin
 
  Vice Chair ACM Chicago
  Marc Temkin
   
  7410 N. Talman Ave.
  Chicago, IL 60645
  773-274-6544
  [EMAIL PROTECTED]
   
   Meeting Info ==
 
   
 
  January 9, 2008 Chicago ACM Meeting at Google Chicago
 
  Reservations: [EMAIL PROTECTED]. The meeting is at 6:30 PM (presentation), 
with a social hour at 5:30 PM. The event is free. Google is located at 
20 W. Kinzie, Chicago, which is near State and Kinzie. More info is at our 
website: www.acm.org/chapters/chicago
 
  ===
 
  Talk 1 Title: An Introduction to Bloom Filters (Jon Trowbridge)
 
  Abstract: When choosing the best algorithm to solve a problem, we 
often think in terms of trade-offs. The most familiar trade-off is 
between storage space and time, but there are more exotic possibilities 
that can lead to data structures with surprising properties. This talk 
will discuss Bloom Filters, a probabilistic data structure that 
efficiently encodes set membership and allows a trade-off between 
storage space and uncertainty.
 
  Talk 2 Title: How Open Source Projects Survive Poisonous People (And 
You Can Too) (Ben Collins-Sussman and Brian Fitzpatrick)
 
  Abstract: Every open source project runs into people who are selfish, 
uncooperative, and disrespectful. These people can silently poison the 
atmosphere of a happy developer community. Come learn how to identify 
these people and peacefully de-fuse them before they derail your 
project. Told through a series of (often amusing) real-life anecdotes 
and experiences.
 
  Bios:
 
  Jon Trowbridge
 
  Senior Software Engineer
 
  Google Chicago
 
  Jon is a software engineer, a long-time advocate for free software, 
and a member of Google's Open Source Program Office. He is currently 
working on Google's Palimpsest Project, an effort to help archive and 
distribute large scientific datasets. Prior to joining Google, Jon spent 
four years at Ximian/Novell, where he worked on the GNOME desktop and 
created Beagle, a desktop search system for Linux.
 
  Ben Collins-Sussman
 
  Senior Software Engineer
 
  Google Chicago
 
  Ben is a member of Google's Open Source Program Office, working on 
projects to promote the spread of open source software both inside and 
outside the company. He is a technical lead for Google Code's open 
source project hosting service, available at http://code.google.com.
 
  He helped port Subversion to Google's Bigtable technology, which now 
runs across numerous machines and serves over 60,000 open source 
repositories. Prior to Google, Ben spent five years with Collabnet as 
one of the original designers and founders of the Subversion project.
 
  He is still active in the Subversion community and is also a 
co-author of the O'Reilly book "Version Control with Subversion". He 
received his B.S. in Mathematics from the University of Chicago, and 
enjoys speaking with Brian Fitzpatrick at various conferences on topics 
both serious and irreverent.
 
  Brian Fitzpatrick, Engineering Manager, Google
 
  Brian Fitzpatrick started his career at Google in 2005 as the first 
software engineer hired in the Chicago office. Brian leads Google's 
Chicago engineering efforts and also serves as engineering manager for 
Google Code and internal advisor for Google's open source efforts.
 
  Prior to joining Google, Brian worked at CollabNet, Apache Software 
Foundation and Apple Computer.
 
  Brian has been an active open source contributor for over ten years.
 
  He became a core Subversion developer in 2000 and was the lead 
developer of the cvs2svn utility. Brian has written articles and 
presentations on version control and software development. He co-wrote 
"Version Control with Subversion" and contributed chapters for "Unix in 
a Nutshell" and "Linux in a Nutshell".
 
  Brian has an A.B. in Classics from Loyola University Chicago with a 
major in Latin, a minor in Greek, and a concentration in Fine Arts and 
Ceramics.


Hi Marc --

Sorry for the slow reply. I was on vacation all last week. Have you
tried contacting anyone from the system/sales engineering teams in the
Chicago Sun offices?

Also (this is a long shot) if you haven't already, the zfs-discuss
list (copied) might be a good place to look. All the OpenSolaris
Community ZFS experts -- including the Sun ZFS engineers -- are
subscribers there.

Eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] removing a separate zil device

2008-01-07 Thread Eric Schrock
The problem is that the ZIL device is treated just like another toplevel
vdev.  As part of the import process, we find all vdevs and assemble the
config, and verify that the sum of all vdev GUIDs match the expected
sum.  Now, each vdev only stores enough configuration to keep track of
the toplevel vdev it's a part of, relying on the zpool.cache file or
import logic to assemble the complete pool topology.

When you go to import the pool and it can't find the log device, it
doesn't know about that toplevel vdev at all, notices that the vdev
GUID sum doesn't match, and complains in a generic way that "there must
be something out there that I don't know about".

Keeping a fully connected graph of toplevel vdevs is expensive and error
prone, but there is an open RFE for neighbor lists that would allow
discovery of other vdevs, even if entire toplevel vdevs are missing.
But there would be situations where you could construct pathological
failure modes that would have the same result.

A better solution would be making ZFS survive toplevel vdev failure
better than it does today.  In the world of ditto blocks, it should be
possible to import a pool that is missing toplevel vdevs.  I have a
workspace that implicitly allows you to do this, but there are a bunch
of issues in the SPA that need to be addressed before this could be
exposed as a first class operation.

Recovering from your current situation is doable, but tricky.  The
easiest thing to do would be compile your own ZFS kernel module that
doesn't do the vdev GUID sum check and import the pool.  This should
just cause the log device to be forgotten, but you'd definitely want to
try this out on a different pool first.  You could also create another
pool with a separate log device, export it, and then manually tweak the
label on the disk to match the expected pool guid and vdev guid.
Neither of these is straightforward, and will require some time with the
source code, but if your data is vital then it may be worthwhile.
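
A rough sketch of that second approach, with placeholder device names (the
actual label rewrite itself still requires zdb and some time in the source),
might start like this:

   # build a throwaway pool with a separate log device and export it
   zpool create scratch c5t0d0 log c5t1d0
   zpool export scratch

   # dump the labels to see the pool/vdev GUID fields that would have to be
   # rewritten to match the pool that is missing its slog
   zdb -l /dev/dsk/c5t1d0s0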

- Eric

On Mon, Jan 07, 2008 at 08:36:54AM -0800, Bill Moloney wrote:
 This is a re-post of this issue ... I didn't get any replies to the previous
 post of 12/27 ... I'm hoping someone is back from holiday
 who may have some insight into this problem ... Bill
 
 when I remove a separate zil disk from a pool, the pool continues to function,
 logging synchronous writes to the disks in the pool. Status shows that the log
 disk has been removed, and everything seems to work fine until I export the
 pool. 
 
 After the pool has been exported (long after the log disk was removed
 and gigabytes of synchronous writes were performed successfully), 
 I am no longer able to
 import the pool. I get an error stating that a pool device cannot be found, 
 and importing the pool cannot succeed until the missing device (the separate
 zil log disk) is replaced in the system. 
 
 There is a bug filed by Neil Perrin:
 6574286 removing a slog doesn't work
 regarding the problem of not being able to remove a separate ZIL device from
 a pool, but no detail on the ramifications of just taking the device out of
 the JBOD. 
 
 Taking it out does not impact the immediate function of the pool,
 but the inability to re-import it after this event is a significant issue. 
 Has 
 anyone found a workaround for this problem ? I have data in a pool that
 I cannot import because the separate zil is no longer available to me.
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, FishWorks    http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] copy on write related query

2008-01-07 Thread Nicolas Williams
On Sun, Jan 06, 2008 at 08:05:56AM -0800, sudarshan sridhar wrote:
   My exact doubt is: if COW is the default behavior of ZFS, then is the
   COWed data written to the same physical drive where the filesystem
   resides? 

Just to clarify: there is no way to disable COW in ZFS.

   If so, the physical device capacity should be more than what the file
   system size is. 
 
   I mean, in the normal filesystem scenario, a partition of 1GB with some
   filesystem (say ext2) is created, and then the user can save up to 1GB
   of data under it.
 
   Is the behavior the same with ZFS? Because I feel that since COW is the
   default, ZFS requires more than 1GB for one filesystem in order to store
   the COWed data.

   Please correct me if i am wrong.

If you are trying to update as much data as you already have on disk,
all in the same transaction, then yes, COW doubles your transient
storage requirements for completing the transaction.

Of course, no one ever updates (replaces) terabytes of data in one
transaction.  That's because the necessary bandwidth does not exist.

So the amount of free space required in order to complete any given
transaction is going to be a small fraction of the total amount of space
in the given volume.  You will only notice that you even need to have
that much space available when your volume is very close to full.  COW
or no COW, if you're close to volume full you have a problem -- you
are very likely to reach volume full and not having COW wouldn't save
you.

If you're trying to say that COW is inefficient, space-wise, what you'll
find is that the space overhead for COW is in the noise for any large
volume.
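
If you want to see the transient effect, here is a rough illustration on a
scratch pool (pool, device, and file names here are made up):

   # create a small scratch pool and fill a file
   zpool create scratch c9t0d0
   dd if=/dev/zero of=/scratch/big bs=1M count=512

   # rewrite the file in place: COW allocates new blocks first and frees the
   # old ones only after the transaction group commits, so depending on timing
   # 'zpool list' can briefly show less free space than you might expect
   dd if=/dev/urandom of=/scratch/big bs=1M count=512 conv=notrunc
   zpool list scratch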

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs pool does not remount automatically

2008-01-07 Thread Mark J Musante
On Mon, 7 Jan 2008, Andre Lue wrote:

 I usually have to do a zpool import -f pool to get it back.

What do you mean by 'usually'?

After the import, what's the output of 'zpool status'?

During reboot, are there any relevant messages in the console?


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS via Virtualized Solaris?

2008-01-07 Thread Eric L. Frederich
From what I read, one of the main things about ZFS is "don't trust the 
underlying hardware." If this is the case, could I run Solaris under 
VirtualBox or under some other emulated environment and still get the benefits 
of ZFS, such as end-to-end data integrity?

The reason I ask is that the only computer I have with the requirements to run 
ZFS is also my MythTV machine.  I can't run ZFS under Linux and I can't run 
MythTV under Solaris.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intent logs vs Journaling

2008-01-07 Thread Bill Moloney
file system journals may support a variety of availability models, ranging from
simple support for fast recovery (return to consistency) with possible data 
loss, to those that attempt to support synchronous write semantics with no data 
loss on failure, along with fast recovery

the simpler models use a persistent caching scheme for file system meta-data
that can be used to limit the possible sources of file system corruption,
avoiding a complete fsck run after a failure ... the journal specifies the only
possible sources of corruption, allowing a quick check-and-recover mechanism
... here the journal is always written with meta-data changes (at least), 
before the actual updated meta-data in question is over-written to its old
location on disk ... after a failure, the journal indicates what meta-data 
must be checked for consistency

more elaborate models may cache both data and meta-data, to support 
limited data loss, synchronous writes and fast recovery ... newer file systems
often let you choose among these features

since ZFS never updates any data or meta-data in place (anything written into a 
pool is always written to a new, unused location), it does not have the same
consistency issues that traditional file systems have to deal with ... a ZFS
pool is always in a consistent state, moving an old state to a new state only
after the new state has been completely committed to persistent store ...
the final update to a new state depends on a single atomic write that either
succeeds (moving the system to a consistent new state) or fails, leaving the
system in its current consistent state ... there can be no interim inconsistent
state

a ZFS pool builds its new state information in host memory for some period of
time (about 5 seconds), as host IOs are generated by various applications ...
at the end of this period these buffers are written to fresh locations on 
persistent store as described above, meaning that application writes are
treated asynchronously by default, and in the face of a failure, some amount of
information that has been accumulating in host memory can be lost

if an application requires synchronous writes and a guarantee of no data loss,
then ZFS must somehow get the written information to persistent store
before it returns the application write call ... this is where the intent log 
comes
in ... the system call information (including the data) involved in a 
synchronous write operation is written to the intent log on persistent store
before the application write call returns ... but the information is also
written into the host memory buffer scheduled for its 5 sec updates (just
as if it were an asynchronous write) ... at the end of the 5 sec update time 
the new host buffers are written to disk, and, once committed, the intent
log information written to the ZIL is no longer needed and can be jettisoned
(so the ZIL never needs to be very large)

if the system fails, the accumulated but not flushed host buffer information
will be lost, but the ZIL records will already be on disk for any synchronous
writes and can be replayed when the host comes back up, or the pool is
imported by some other living host ... the pool, of course, always comes up
in a consistent state, but any ZIL records can be incorporated into a new 
consistent state before the pool is fully imported for use

the ZIL is always there in host memory, even when no synchronous writes
are being done, since the POSIX fsync() call could be made on an open 
write channel at any time, requiring all to-date writes on that channel
to be committed to persistent store before it returns to the application
... it's cheaper to write the ZIL at this point than to force the entire 5 sec
buffer out prematurely

synchronous writes can clearly have a significant negative performance 
impact in ZFS (or any other system) by forcing writes to disk before having a
chance to do more efficient, aggregated writes (the 5 second type), but
the ZIL solution in ZFS provides a good trade-off with a lot of room to
choose among various levels of performance and potential data loss ...
this is especially true with the recent addition of separate ZIL device
specification ... a small, fast (nvram type) device can be designated for
ZIL use, leaving slower spindle disks for the rest of the pool 
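
for example, a separate log device can be specified when a pool is created, or
added to an existing pool later ... something like this (device names are just
placeholders):

   zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 log c4t0d0
   # or, for a pool that already exists:
   zpool add tank log c4t0d0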

hope this helps ... Bill
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS via Virtualized Solaris?

2008-01-07 Thread ian
Eric L. Frederich writes: 

From what I read, one of the main things about ZFS is Don't trust the 
underlying hardware.  If this is the case, could I run Solaris under 
VirtualBox or under some other emulated environment and still get the 
benefits of ZFS such as end to end data integrity?
 
You could probably answer that question by changing the phrase to "Don't 
trust the underlying virtual hardware"!  ZFS doesn't care if the storage is 
virtualised or not. 

Ian 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS via Virtualized Solaris?

2008-01-07 Thread Peter Schuller
 From what I read, one of the main things about ZFS is Don't trust the
  underlying hardware.  If this is the case, could I run Solaris under
  VirtualBox or under some other emulated environment and still get the
  benefits of ZFS such as end to end data integrity?

 You could probably answer that question by changing the phrase to Don't
 trust the underlying virtual hardware!  ZFS doesn't care if the storage is
 virtualised or not.

But it is worth noting that, as with hardware RAID for example, if you intend to 
take advantage of the self-healing properties of ZFS with multiple disks, you 
must expose the individual disks through the virtualization environment and use 
them individually in your mirror/raidz/raidz2 pool.
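
With VirtualBox, for example, that roughly means wrapping each physical disk
as a raw-disk VMDK and handing both to the Solaris guest; the paths and device
names below are only placeholders:

   # on the host: create raw-disk VMDKs for two whole physical disks
   VBoxManage internalcommands createrawvmdk -filename disk1.vmdk -rawdisk /dev/sdb
   VBoxManage internalcommands createrawvmdk -filename disk2.vmdk -rawdisk /dev/sdc

   # attach both VMDKs to the Solaris VM, then inside the guest:
   zpool create tank mirror c1t1d0 c1t2d0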

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller [EMAIL PROTECTED]'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS via Virtualized Solaris?

2008-01-07 Thread Peter Dunlap
You could probably run MythTV in a linux domU within a Solaris system 
(basically your same idea but virtualize the Linux instead of the 
Solaris).  The only hangup would be your TV tuner card(s).  I use MythTV 
with a separate Solaris file server but I've contemplated the 
possibility of consolidating the two systems using xVM.  If and when xVM 
supports PCI passthrough I will probably give it a shot.

-Peter

Eric L. Frederich wrote:
 From what I read, one of the main things about ZFS is Don't trust the 
 underlying hardware.  If this is the case, could I run Solaris under 
 VirtualBox or under some other emulated environment and still get the 
 benefits of ZFS such as end to end data integrity?

 The reason I ask is that the only computer I have with the requirements to 
 run ZFS is also my MythTV machine.  I can't run ZFS under Linux and I can't 
 run MythTV under Solaris.
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS via Virtualized Solaris?

2008-01-07 Thread Torrey McMahon
Peter Schuller wrote:
 From what I read, one of the main things about ZFS is Don't trust the
 underlying hardware.  If this is the case, could I run Solaris under
 VirtualBox or under some other emulated environment and still get the
 benefits of ZFS such as end to end data integrity?
 
 You could probably answer that question by changing the phrase to Don't
 trust the underlying virtual hardware!  ZFS doesn't care if the storage is
 virtualised or not.
 

 But worth noting is that, as with for example hardware RAID, if you intend to 
 take advantage of the self-healing properties of ZFS with multiple disks, you 
 must expose the individual disks to your mirror/raidz/raidz2 individually 
 through the virtualization environment and use them in your pool.

Or expose enough LUNs to take advantage of it -- two RAID LUNs in a mirror, 
for example.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Does block allocation for small writes work over iSCSI?

2008-01-07 Thread Gilberto Mautner
Hello list,
 
 
I'm thinking about this topology:
 
NFS Client ---NFS--- ZFS Host ---iSCSI--- ZFS Nodes 1, 2, 3, etc.
 
The idea here is to create a scalable NFS server by plugging in more nodes as 
more space is needed, striping data across them.
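
Concretely, each node would export a zvol over iSCSI and the head would stripe
a pool across those LUNs -- something like this (sizes, addresses and device
names are placeholders):

   # on each storage node: create a zvol and export it as an iSCSI target
   zfs create -V 500g nodepool/lun0
   zfs set shareiscsi=on nodepool/lun0

   # on the NFS head: discover the targets and build a striped pool
   iscsiadm add discovery-address 192.168.1.101:3260
   iscsiadm modify discovery --sendtargets enable
   devfsadm -i iscsi
   zpool create bigpool c2t1d0 c2t2d0 c2t3d0    # the three iSCSI LUNs
   zfs set sharenfs=on bigpool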
 
A question is: we know from the docs that ZFS optimizes random write speed by 
consolidating what would be many random writes into a single sequential 
operation.
 
I imagine that for ZFS to be able to do that, it has to have some knowledge of 
the hard disk geometry. Now, if this geometry is being abstracted away by iSCSI, 
is that optimization still valid?
 
 
Thanks
 
Gilberto
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does block allocation for small writes work over iSCSI?

2008-01-07 Thread Richard Elling
Gilberto Mautner wrote:
 Hello list,
  
  
 I'm thinking about this topology:
  
 NFS Client NFS--- zFS Host ---iSCSI--- zFS Node 1, 2, 3 etc.
  
 The idea here is to create a scalable NFS server by plugging in more 
 nodes as more space is needed, striping data across them.

I see people doing this, but, IMHO, it seems like a waste of
resources and will be generally slower than having the 
disks on the NFS server.

  
 A question is: we know from the docs that zFS optimizes random write 
 speed by consolidating what would be many random writes into a single 
 sequential operation.
  
 I imagine that for zFS be able to do that it has to have some 
 knowledge about the hard disk geography. Now, if this geography is 
 being abstracted by iSCSI, is that optimization still valid?

ZFS doesn't do any optimization for hard disk geometry.  Allocations are
made starting at the beginning and proceeding according to the slab size.
For diversity, redundant copies of metadata are spread further away, so
there may be some additional jumps, but these aren't really based on
disk geometry.  In other words, I believe the optimization is probably still
valid.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss