Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-12-14 Thread bob netherton
On 12/14/12 10:07 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) 
wrote:

Is that right?  You can't use zfs send | zfs receive to send from a newer 
version and receive on an older version?



No.  With recv you can override any property in the sending stream that can be 
set from the command line (i.e., a writable property).  Version is not one of those 
properties.  It only changes, in an upward direction, when you do a zfs 
upgrade.


For example:

#  zfs get version repo/support
NAME          PROPERTY  VALUE  SOURCE
repo/support  version   5      -


# zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test
cannot receive: cannot override received version



You can send a version 6 file system into a version 28 pool, but it will still 
be a version 6 file system.
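
Something along these lines should confirm it (pool, dataset, and snapshot names
here are hypothetical):

# zfs send oldpool/data@snap | zfs recv newpool/data
# zfs get version newpool/data

The received file system still reports its original version (6 in this example),
not the newer default of the destination pool.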



Bob



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-12-13 Thread Bob Netherton
That is a touch misleading.  This has always been the case since S10u2.  You 
have to create the pool AND the file systems at the oldest versions you want to 
support.  

I maintain a table of pool and version numbers on my blog (blogs.oracle.com/bobn) 
for this very purpose.   I got lazy the other day and made this mistake between 
11 GA and 11.1.  

Watch out for the zfs send approach, because you might be sending a newer file 
system version than the receiver supports.  Yes, I've done that too :)
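
One way to catch that ahead of time is to compare versions before sending
(dataset name is hypothetical; run the first command on the sender and the
second on the receiver):

# zfs get -H -o value version tank/data
# zfs upgrade -v

The second command lists the file system versions the receiving release
supports; if the sender's number isn't in that list, the receive won't work.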

Bob

Sent from my iPhone

On Dec 13, 2012, at 10:47 AM, Jan Owoc jso...@gmail.com wrote:

 Hi,
 
 On Thu, Dec 13, 2012 at 9:14 AM, sol a...@yahoo.com wrote:
 Hi
 
 I've just tried to use illumos (151a5) to import a pool created on Solaris
 (11.1), but it failed with an error about the pool being incompatible.
 
 Are we now at the stage where the two prongs of the zfs fork are pointing in
 incompatible directions?
 
 Yes, that is correct. The last version of Solaris with source code
 used zpool version 28. This is the last version that is readable by
 non-Solaris operating systems (FreeBSD, GNU/Linux, but also
 OpenIndiana). The filesystem, zfs, is technically at the same
 version, but you can't access it if you can't access the pool :-).
 
 If you want to access the data now, your only option is to use Solaris
 to read it, and copy it over (e.g. with zfs send | recv) onto a pool
 created with version 28.
 
 Jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-12-13 Thread Bob Netherton
At this point, the only option would be to use 11.1 to create a new pool at 
151's pool version (-o version=) and top-level dataset version (-O version=), 
recreate the file system hierarchy, and do something like an rsync.  I don't 
think there is anything more elegant, I'm afraid.  
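
Roughly, the sequence looks like this (pool name, device, paths, and the version
numbers are illustrative; pick the versions that match the release you want to
import on):

# zpool create -o version=28 -O version=5 newpool c0t1d0
# zfs create -o version=5 newpool/export
# rsync -a /oldpool/export/ /newpool/export/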

That's what I did yesterday :)

Bob

Sent from my iPhone

On Dec 13, 2012, at 12:54 PM, Jan Owoc jso...@gmail.com wrote:

 On Thu, Dec 13, 2012 at 11:44 AM, Bob Netherton bob.nether...@gmail.com 
 wrote:
 On Dec 13, 2012, at 10:47 AM, Jan Owoc jso...@gmail.com wrote:
 Yes, that is correct. The last version of Solaris with source code
 used zpool version 28. This is the last version that is readable by
 non-Solaris operating systems (FreeBSD, GNU/Linux, but also
 OpenIndiana). The filesystem, zfs, is technically at the same
 version, but you can't access it if you can't access the pool :-).
 
 That is a touch misleading.  This has always been the case since S10u2.  You 
 have to create the pool AND the file systems at the oldest versions you want 
 to support.
 
 I maintain a table of pool and version numbers on my blog (blogs.oracle.com/bobn) 
 for this very purpose.   I got lazy the other day and made this 
 mistake between 11 GA and 11.1.
 
 Watch the ZFS send approach because you might be sending a newer file system 
 version than is supported.  Yes, I've done that too :)
 
 Bob, you are correct. There is now a new version of zfs in Solaris
 11.1. I assume it's incompatible with the previous version:
 http://docs.oracle.com/cd/E26502_01/html/E29007/gjxik.html#scrolltoc
 
 Any suggestions how to help OP read his data on anything but Solaris
 11.1 or migrate it back a version?
 
 Jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-12-13 Thread Bob Netherton
Perhaps slightly more elegant: you can do the new pool/rsync thing from the 11.1 live 
CD so you don't actually have to stand up a new system to do this.   Assuming 
this is x86 and VirtualBox works on illumos, you could fire up a VM to do this 
as well. 

Bob

Sent from my iPhone

On Dec 13, 2012, at 12:54 PM, Jan Owoc jso...@gmail.com wrote:

 On Thu, Dec 13, 2012 at 11:44 AM, Bob Netherton bob.nether...@gmail.com 
 wrote:
 On Dec 13, 2012, at 10:47 AM, Jan Owoc jso...@gmail.com wrote:
 Yes, that is correct. The last version of Solaris with source code
 used zpool version 28. This is the last version that is readable by
 non-Solaris operating systems (FreeBSD, GNU/Linux, but also
 OpenIndiana). The filesystem, zfs, is technically at the same
 version, but you can't access it if you can't access the pool :-).
 
 That is a touch misleading.  This has always been the case since S10u2.  You 
 have to create the pool AND the file systems at the oldest versions you want 
 to support.
 
 I maintain a table of pool and version numbers on my blog (blogs.oracle.com/bobn) 
 for this very purpose.   I got lazy the other day and made this 
 mistake between 11 GA and 11.1.
 
 Watch the ZFS send approach because you might be sending a newer file system 
 version than is supported.  Yes, I've done that too :)
 
 Bob, you are correct. There is now a new version of zfs in Solaris
 11.1. I assume it's incompatible with the previous version:
 http://docs.oracle.com/cd/E26502_01/html/E29007/gjxik.html#scrolltoc
 
 Any suggestions how to help OP read his data on anything but Solaris
 11.1 or migrate it back a version?
 
 Jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs_arc_max values

2012-05-17 Thread Bob Netherton
I'll agree with Bob on this.  A specific use case is a VirtualBox server 
hosting lots of guests.  I even made a point of mentioning this tunable in the 
Solaris 10 Virtualization Essentials section on vbox :)

There are several other use cases as well.  
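
For reference, the tunable itself goes in /etc/system and takes effect after a
reboot; a minimal sketch, assuming you want to cap the ARC at 4 GB (size the
value for your own workload):

set zfs:zfs_arc_max = 4294967296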

Bob

Sent from my iPad

On May 17, 2012, at 9:03 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
wrote:

 On Thu, 17 May 2012, Paul Kraus wrote:
 
   Why are you trying to tune the ARC as _low_ as possible? In my
 experience the ARC gives up memory readily for other uses. The only
 place I _had_ to tune the ARC in production was a  couple systems
 running an app that checks for free memory _before_ trying to allocate
 it. If the ARC has all but 1 GB in use, the app (which is looking for
 
 On my system I adjusted the ARC down due to running user-space applications 
 with very bursty short-term large memory usage. Reducing the ARC assured that 
 there would be no contention between zfs ARC and the applications.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] About ZFS compatibility

2009-05-20 Thread bob netherton

zhihui Chen wrote:

I have created a pool on external storage with B114. Then I export this pool
and import it on another system with B110, but the import fails and shows the
error: cannot import 'tpool': pool is formatted using a newer ZFS version.
Did a big change in ZFS in B114 lead to this compatibility issue?

  

It's always a good idea to check out the release flag days to get an idea
of the impacts of changes.   
http://opensolaris.org/os/community/on/flag-days/


This one stands out:
http://opensolaris.org/os/community/on/flag-days/pages/2009041801/

It points to PSARC 2009/204, and the case materials at
http://arc.opensolaris.org/caselog/PSARC/2009/204/20090330_matthew.ahrens
give the reason for the version number bump: user quotas.
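
For anyone curious what that feature looks like in practice (dataset and user
names are hypothetical):

# zfs set userquota@alice=10G tank/home
# zfs userspace tank/home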




Bob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS/zpool Versions in Solaris 10

2009-04-21 Thread bob netherton




Since I am trying to keep my pools at a version that different updates 
can handle, I personally am glad it did not get rev'ed. I did get into 
trouble recently when SX-CE 112 created a file system on an old pool 
with a version newer than Solaris 10 likes :(




-o is your best friend ;-)   I can now get rid of all of those pre-allocated
filesystems that I used for just this purpose.   Don't know where all of the
corner cases are, but this appears to be a workaround.   Just keep a table of
the pool and filesystem versions for each release handy.
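
If you'd rather ask the system than maintain the table by hand, each release
will tell you what it supports:

# zpool upgrade -v
# zfs upgrade -v

The first lists the pool versions that release understands, the second the
file system versions.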


# zpool create -o version=10 newpool c0d0s4

I did this on nv112 and ... drum roll please ... it can be 
imported on Solaris 10.


Same thing for ZFS.

# zfs create -o version=1 rpool/legacy-file-system

Also created on nv112 and, wait for it ... mountable and totally 
usable on Solaris 10 10/08.

It's a beautiful thing.   





Bob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to increase rpool size in a VM?

2009-03-26 Thread bob netherton

Bob Doolittle wrote:

Blake wrote:

You need to use 'installgrub' to get the right boot bits in place on
your new disk.
  


I did that, but it didn't help.
I ran:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0

Is it OK to run this before resilvering has completed?



You need to install GRUB in the master boot record (MBR). 


# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0

And yes, it is safe to do while the resilvering is happening.   The master
boot record is outside of the block range of your pool.
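
If you want to keep an eye on the resilver while you wait (pool name is
whatever yours is called, rpool here):

# zpool status rpool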

Changing the boot order shouldn't be necessary (that's what findroot is 
supposed to
help take care of).   It should only be necessary if the new disk wasn't 
seen by the BIOS
in the first place or for some reason isn't selected as part of the 
normal BIOS boot sequence.


Bob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Bob Netherton

 Multiple pools on one server only makes sense if you are going to have
 different RAS for each pool for business reasons.  It's a lot easier to
 have a single pool though.  I recommend it.

A couple of other things to consider to go with that recommendation.

- never build a pool larger than you are willing to restore.   Bad
things can still happen that would require you to restore the entire
pool.  Convenience and SLAs aren't always in agreement :-)   The
advances in ZFS availability might make me look at my worst case
restore scenario a little differently, though - but there will still
be a restore case that worries me.

- as I look at the recent lifecycle improvements with zones (in the
Solaris 10 context of zones), I really like upgrade on attach.   That
means I will be slinging zones more freely.   So I need to design my
pools to match that philosophy.

- if you are using clustering technologies, pools will go hand in
hand with failover boundaries.   So if I have multiple failover
zones, I will have multiple pools.


Bob


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Virtual zfs server vs hardware zfs server

2009-02-27 Thread Bob Netherton
Bob is right.  Less chance of failure perhaps but also less  
protection.  I don't like it when my storage lies to me :)


Bob

Sent from my iPhone

On Feb 27, 2009, at 12:48 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
 wrote:



On Fri, 27 Feb 2009, Blake wrote:

Since ZFS is trying to checksum blocks, the fewer abstraction  
layers you have in between ZFS and spinning rust, the fewer points  
of error/failure.


Are you saying that ZFS checksums are responsible for the failure?

In what way does more layers of abstraction cause particular  
problems for ZFS which won't also occur with some other filesystem?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Noob: Best way to replace a disk when you're out of internal connectors?

2008-12-30 Thread Bob Netherton

 I am a bit slow today.  It seems like a dying drive should be replaced 
 ASAP. 

Completely agree with Bob on this.   I drive an 8,000 lb truck and the
tires have industrial-strength runflats.   If I get a puncture or tear
in a tire I replace it as soon as I can, not when it is convenient.
The runflats get me out of the woods or down the street.   Since you
are running RAIDZ2 then the better analogy might be half-shafts,
but you get the point.

  Since you are using RAIDZ2, replacing the drive as described 
 above should not be a problem.  Is the issue that your hardware does 
 not support hot swap and this is not a good time to shut the system 
 down?

I would recommend a system maintenance window for failing hardware as
soon as you can reasonably do it.   You still have protection for your
data, but a flaky drive needs to be replaced.
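
For completeness, the swap itself is a one-liner once the new disk is in place
(pool and device names are hypothetical; the second device is the replacement):

# zpool replace tank c1t2d0 c1t3d0
# zpool status tank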

 Regarding the proposal to replace the drive with an external USB 
 drive, this approach will surely work but since USB is only good for 
 about 11 or 12MB/second, performance of the whole raidz2 vdev would 
 surely suffer and writes would then be limited by USB speeds.  It is 
 likely to take quite a long time to resilver to the USB drive and if 
 the filesystem is busy, maybe it will never catch up.  It may perform 
 better with the dying drive.

Exactly.   The analogy here is the space-saver spare tire: use only
as a last resort :-)


Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpol mirror creation after non-mirrored zpool is setup

2008-12-14 Thread Bob Netherton


Jeff Bonwick wrote:
 On Sat, Dec 13, 2008 at 04:44:10PM -0800, Mark Dornfeld wrote:
 I have installed Solaris 10 on a ZFS filesystem that is not mirrored. Since 
 I have an identical disk in the machine, I'd like to add that disk to the 
 existing pool as a mirror. Can this be done, and if so, how do I do it?
 
 Yes:
 
 # zpool attach poolname old_disk new_disk
 
And if you want to be able to boot off of the newly attached
replica you might want to install a boot block on it.

See http://docs.sun.com/app/docs/doc/816-5166/installboot-1m

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
   <raw device of the replica>
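
For example, if the newly attached half of the mirror were c0t1d0 (a
hypothetical device name), that would be:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
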
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS fragmentation with MySQL databases

2008-11-23 Thread Bob Netherton

 This argument can be proven by basic statistics without need to resort 
 to actual testing.

A mathematical proof is not the same as the reality of how things end up getting used.

 Luckily, most data access is not completely random in nature.

Which was my point exactly.   I've never seen a purely mathematical
model put in production anywhere :-)


Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS fragmentation with MySQL databases

2008-11-22 Thread Bob Netherton

 In other words, for random access across a working set larger (by say X%) 
 than the SSD-backed L2 ARC, the cache is useless.  This should asymptotically 
 approach truth as X grows and experience shows that X=200% is where it's 
 about 99% true.
   
Ummm, before we throw around phrases like "useless", how about a little 
testing?  I like a good academic argument just like the next guy, but before 
I dismiss something completely out of hand I'd like to see some data. 

Bob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] copies set to greater than 1

2008-11-06 Thread Bob Netherton
On Thu, 2008-11-06 at 19:54 -0500, Krzys wrote:
 When the copies property is set to a value greater than 1, how does it work? Will 
 it store the second copy of the data on a different disk, or does it store it on the 
 same disk? Also, when this setting is changed at some point on a file system, will it 
 make copies of existing data or just new data that's being written from now on?

I have done this on my home directory the microsecond that it became
available :-)

It tries to make copies on multiple devices if it can.   If not (as in
my single disk laptop) it places both copies on the same disk.   It will
not duplicate any existing data, so it would be a good idea to do a
zfs create -o copies=2 ..   so that all of the data in the dataset
will have some sort of replication from the beginning. 
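
If the dataset already exists you can still turn it on, but remember it only
applies to data written from that point forward (dataset name is hypothetical):

# zfs set copies=2 tank/home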

du (and df) output reflects actual pool usage: note the 300 MB file consuming 600 MB.

# mkfile 300m f

# ls -la
total 1218860
drwxr-xr-x   2 bobn  local          3 Nov  6 19:04 .
drwxr-xr-x  81 bobn  sys          214 Nov  6 19:04 ..
-rw-------   1 bobn  local  314572800 Nov  6 19:04 f

# du -h .
 600M   .



Bob





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shared ZFS in Multi-boot?

2008-08-07 Thread Bob Netherton
On Thu, 2008-08-07 at 09:16 -0700, Daniel Templeton wrote:
 Is there a way that I can add the disk to a ZFS pool and have 
 the ZFS pool accessible to all of the OS instances?  I poked through the 
 docs and searched around a bit, but I couldn't find anything on the topic.

Yes.  I do that all of the time.   The trick here is to create the pool
and filesystems with the oldest Solaris you will use.  ZFS has very
good backward compatibility but not the reverse.

Here's a trick that will come in handy.  Create quite a few empty
ZFS filesystems in your oldest Solaris.  In my case the pool is
called throatwarbler and I have misc1, misc2, misc3, misc4, misc5, and so on.

What happens is that I will be running a newer Solaris and want a
filesystem.  Rather than reboot to the older Solaris, just rename
misc[n] to the new name.


Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread Bob Netherton
soren wrote:
 ZFS has detected that my root filesystem has a small number of errors.  Is 
 there a way to tell which specific files have been corrupted?
   
After a scrub, zpool status -v should give you a list of files with 
unrecoverable errors.
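
That is, something like (rpool here standing in for whatever the affected pool
is called):

# zpool scrub rpool
# zpool status -v rpool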


Bob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] help me....

2008-08-03 Thread Bob Netherton
On Sun, 2008-08-03 at 20:46 -0700, Rahul wrote:
 hi 
 can you give some disadvantages of the ZFS file system??

In what context ?   Relative to what ?



Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I trust ZFS?

2008-07-31 Thread Bob Netherton
On Thu, 2008-07-31 at 13:25 -0700, Ross wrote:
 Hey folks,
 
 I guess this is an odd question to be asking here, but I could do with some 
 feedback from anybody who's actually using ZFS in anger.

ZFS in anger ?   That's an interesting way of putting it :-)

 but I have some real concerns about whether I can really trust ZFS to
  keep my data alive if things go wrong.  This is a big step for us, 
 we're a 100% windows company and I'm really going out on a limb by 
 pushing Solaris.

I can appreciate how this could be considered a risk, especially if it
is your idea.   But let's put this all in perspective and you'll see
why it isn't even remotely a question.

I have put all sorts of file servers into production with things like
Online Disk Suite 1.0, NFS V1 - and slept like a baby.  Now, for the
non-historians on the list, the quality of Online Disk Suite 1.0 led
directly to the creation of the volume management marketplace and
Veritas in particular (hey - that's a joke, OK    but only
marginally).


 The question is whether I can make a server I can be confident in.  
 I'm now planning a very basic OpenSolaris server just using ZFS as a 
 NFS server, is there anybody out there who can re-assure me that such
 a server can work well and handle real life drive failures?

There are two questions in there: can it be built, and are you
comfortable with it?   Those are two different things.  The simple
answer to the first is yes, although bear in mind whether this is mission
critical (and things like NFS servers generally are, even if they are only
serving up iTunes music libraries; ask my daughter).  

Enda's point about the Marvell driver updates for Solaris 10 should
be carefully considered.  If it's just an NFS server then the vast
majority of OpenSolaris benefits won't be applicable (newer GNOME,
better packaging, better Linux interoperability, etc).  Putting
this one Solaris 10 with Live Upgrade and a service contract
would make me sleep like a baby.

Now, for the other question - if you are looking at this like an
appliance then you might not be quite as happy.  It does take a little
care and feeding, but nearly every piece of technology more complicated
than a toaster needs a little love every once in a while.   I would much
rather put a Solaris/ZFS file server into a Windows environment than a
Windows file server into a Unix environment :-)


Bob



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I trust ZFS?

2008-07-31 Thread Bob Netherton

 We haven't had any real life drive failures at work, but at home I
 took some old flaky IDE drives and put them in a pentium 3 box running
 Nevada. 

Similar story here.  Some IDE and SATA drive burps under Linux (and
please don't tell me how wonderful Reiser4 is - 'cause it's banned in
this house forever agh) and Windows.   It ate my entire iTunes
library.  Yeah, lurve that silent data corruption feature.

  Several of them were known to cause errors under Linux, so I
 mirrored them in approximately-the-same-size pairs and set up weekly
 scrubs.  Two drives out of six failed entirely, and were nicely
 retired, before I gave up on the idea and bought new disks. 

Pretty cool, eh ?

 Finally, at work we're switching everything over to ZFS because it's
 so convenient... but we keep tape backups nonetheless.  

A very good idea.  Disasters will still occur.  With enough storage,
snapshots can eliminate the routine file by file restores but a complete
meltdown is always a possibility.  So backups aren't optional, but I
find myself doing very few restores any more.


Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-13 Thread Bob Netherton
  I want to
 start testing out ZFS boot and zfs allow to minimize the delay between the
 release of U6 and my production deployment.

Good observation.  I mention this in every Solaris briefing that I do. 
Get some stick time with this capability using SXCE or OpenSolaris so
that you can reduce the time it takes to deploy whatever upcoming 
Solaris update has ZFS root (how's that for being evasive).   I said
the same thing about ZULU before the s10 11/07 timeframe.

 I don't think so, unless you mean the new openSolaris distribution. I
 evaluated that, unfortunately it's not quite ready for production
 deployment in our environment.

Out of curiosity, where did it miss the mark?   It is still very much a
work in progress, early adopter stuff, but what were the things that
kept you from deploying it?   Just curious.


Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs mount fails - directory not empty

2008-03-08 Thread Bob Netherton
Multi-boot system (s10u3, s10u4, and nevada84) having problems
mounting ZFS filesystems at boot time.   The pool is s10u3,
as are most of the filesystems.  A few of the filesystems
are nevada83.

# zfs mount -a
cannot mount '/pandora': directory is not empty

# zfs list -o name,mountpoint 
NAME                       MOUNTPOINT
pandora                    /pandora
pandora/domains            /pandora/domains
pandora/domains/fedora8    /pandora/domains/fedora8
pandora/domains/nv83a      /pandora/domains/nv83a
pandora/domains/s10u5      /pandora/domains/s10u5
pandora/domains/ub710      /pandora/domains/ub710
pandora/domains/winxp      /pandora/domains/winxp
pandora/export             /export
pandora/home               /export/home
pandora/home-restore       legacy
pandora/[EMAIL PROTECTED]  -
pandora/iso                /export/iso


All of the filesystems except the legacy and snapshots are
mounted.   The error return code is making filesystem/local
really fussy, which makes booting really fussy, etc :-)

I do notice that zfs umount is leaving the mountpoints behind.
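
For what it's worth, a quick look at what is actually sitting under the
mountpoint shows what is blocking the mount (path from the listing above):

# ls -A /pandora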


The other thing is more of a question.   I understand why the
ZFS filesystems created in nevada don't mount on s10.   But
should those cause mountall (and filesystem/local) to fail ?
I guess I could do legacy mounts for all my xVM domains, but
that seems un-ZFS like.


Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: How does ZFS write data to disks?

2007-05-11 Thread Bob Netherton
On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
 I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
  around 5 seconds. I assume it's just something to do with caching? 

Yep - the ZFS equivalent of fsflush.  Runs more often so the pipes don't
get as clogged.   We've had lots of rain here recently, so I'm sort of
sensitive to stories of clogged pipes.
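
You can watch the bursts for yourself with something like this (tank standing
in for your pool name):

# zpool iostat tank 1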

 Is this behavior ok? seems it would be better to have the disks writing
  the whole time instead of in bursts.

Perhaps - although not in all cases (probably not in most cases). 
Wouldn't it be cool to actually do some nice sequential writes to
the sweet spot of the disk bandwidth curve, but not depend on it
so much that a single random I/O here and there throws you for
a loop ?

Human analogy - it's often wiser to work smarter than harder :-)

Directly to your question - are you seeing any anomalies in file
system read or write performance (bandwidth or latency) ?

Bob



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss