Re: [zfs-discuss] SUMMARY: mounting datasets from a read-only pool with aid of tmpfs

2011-11-23 Thread Lori Alt

On 11/22/11 17:54, Jim Klimov wrote:

2011-11-23 2:26, Lori Alt wrote:

Did you try a temporary mount point?
zfs mount -o mountpoint=/whatever dataset

- lori



I do not want to lie so I'll delay with a definite answer.
I think I've tried that, but I'm not certain now. I'll try
to recreate the situation later and respond responsibly ;)

If this indeed works - that's a good idea ;)
Should it work relative to alternate root as well (just
like a default/predefined mountpoint value would)?


It does not take an alternate root into account.  Whatever you specify 
as the value of the temporary mountpoint property is exactly where it's 
mounted.




BTW, is there a way to do overlay mounts like mount -O
with zfs automount attributes?


No, there is no property for that.  You would need to specify -O to 
the mount or zfs mount command to get an overlay mount.  You don't 
need to use legacy mounts for this.  The -O option works with regular 
zfs mounts.
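
For example, a rough sketch combining both options (the pool and dataset
names here are just placeholders, not from this thread):

# zfs mount -o mountpoint=/mnt/alt tank/data     (temporary mount point; not persisted)
# zfs mount -O tank/data                         (overlay mount on a non-empty directory)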



I have to use legacy mounts
and /etc/vfstab for that now on some systems, but would
like to avoid such complications if possible...

//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] zfs send and dedupe

2011-09-07 Thread Lori Alt

On 09/ 6/11 11:45 PM, Daniel Carosone wrote:

On Tue, Sep 06, 2011 at 10:05:54PM -0700, Richard Elling wrote:

On Sep 6, 2011, at 9:01 PM, Freddie Cash wrote:


For example, does 'zfs send -D' use the same DDT as the pool?

No.

My understanding was that 'zfs send -D' would use the pool's DDT in
building its own, if present.
It does not use the pool's DDT, but it does use the SHA-256 checksums 
that have already been calculated for on-disk dedup, thus speeding the 
generation of the send stream.
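
As a rough illustration (dataset and file names are placeholders): with
dedup=on or checksum=sha256 already set on the dataset, the existing on-disk
checksums get reused when building the stream's DDT:

# zfs set dedup=on tank/fs
# zfs snapshot tank/fs@backup
# zfs send -D tank/fs@backup > /backup/fs-dedup.stream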




If blocks were known by the filesystem
to be duplicate, it would use that knowledge to skip some work seeding
its own ddt and stream back-references. This doesn't change the stream
contents vs what it would have generated without these hints, so No
still works as a short answer :)

That understanding was based on discussions and blog posts at the
time, not looking at code. At least in theory, it should help avoid
reading and checksumming extra data blocks if this knowledge can be
used, so less work regardless of measurable impact on send throughput.
(It's more about diminished impact to other concurrent activities)

The point has mostly been moot in practice, though, because I've found
zfs send -D just plain doesn't work and often generates invalid
streams, as you note. Good to know there are fixes.

--
Dan.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] zfs send and dedupe

2011-09-07 Thread Lori Alt

On 09/ 7/11 02:20 PM, Daniel Carosone wrote:

On Wed, Sep 07, 2011 at 08:47:36AM -0600, Lori Alt wrote:

On 09/ 6/11 11:45 PM, Daniel Carosone wrote:

My understanding was that 'zfs send -D' would use the pool's DDT in
building its own, if present.

It does not use the pool's DDT, but it does use the SHA-256 checksums
that have already been calculated for on-disk dedup, thus speeding the
generation of the send stream.

Ah, thanks for the clarification.  Presumably the same is true if the
pool is using checksum=sha256, without dedup?


Yes, I think so.


Still a moot point for now :)

--
Dan.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Lori Alt

 On 04/ 6/11 07:59 AM, Arjun YK wrote:

Hi,

I am trying to use ZFS for boot, and am somewhat confused about how boot 
partitions like /var should be laid out.


With old UFS, we create /var as a separate filesystem to keep various 
logs from filling up the / filesystem.


I believe that creating /var as a separate file system was a common 
practice, but not a universal one.  It really depended on the 
environment and local requirements.




With ZFS, the OS install gives the option to put /var on a 
separate dataset, but no option is given to set a quota. Maybe others 
set the quota manually.


Having a separate /var dataset gives you the option of setting a quota 
on it later.  That's why we provided the option.  It was a way of 
enabling administrators to get the same effect as having a separate /var 
slice did with ufs.  Administrators can choose to use it or not, 
depending on local requirements.
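
For example, something along these lines should work (the BE name s10be is
just a placeholder):

# zfs set quota=8G rpool/ROOT/s10be/var
# zfs get quota rpool/ROOT/s10be/var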




So, I am trying to understand what's the best practice for /var in 
ZFS. Is it exactly the same as in UFS, or is there anything different?


I'm not sure there's a defined best practice.  Maybe someone else can 
answer that question.  My guess is that in environments where, before, a 
separate ufs /var slice was used, a separate zfs /var dataset with a 
quota might now be appropriate.


Lori




Could someone share some thoughts ?


Thanks
Arjun


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Lori Alt

 On 04/ 6/11 11:42 AM, Paul Kraus wrote:

On Wed, Apr 6, 2011 at 1:14 PM, Brandon High bh...@freaks.com wrote:


The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you can't downgrade a dataset, using snv_151a
and planning to send to Nexenta as a final step will trip you up unless you
explicitly create them with a lower version.

 I thought I saw that with zpool 10 (or was it 15) the zfs send
format had been committed and you *could* send/recv between different
versions of zpool/zfs. From the Solaris 10U9 (zpool 22) manpage for zfs:

The format of the stream is committed. You will be  able
to receive your streams on future versions of ZFS.

correct.


-or- does this just mean upward compatibility? In other words, I can
send from pool 15 to pool 22 but not the other way around.

It does mean upward compatibility only, but I believe that it's the 
dataset version that matters, not the pool version, and the dataset 
version has not changed as often as the pool version:


root@v40z-brm-02:/home/lalt/ztest# zfs get version rpool/export/home
NAME   PROPERTY  VALUESOURCE
rpool/export/home  version   5-
root@v40z-brm-02:/home/lalt/ztest# zpool get version rpool
NAME   PROPERTY  VALUESOURCE
rpool  version   32   default

(someone still on the zfs team please correct me if that's wrong.)

Lori





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering last good BE

2010-10-25 Thread Lori Alt
 Try install-disc...@opensolaris.org as this is more of a liveupgrade 
issue than a zfs issue.


Lori

On 10/25/10 10:20 AM, Kartik Vashishta wrote:

I created a new BE for patching, applied patches to that BE, and then activated 
the new BE. This is the message I got:



**
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from Solaris failsafe or boot in single user mode from the Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like /mnt). 
You can use the following command to mount:
mount -Fzfs /dev/dsk/c0d0s0 /mnt
3. Run the luactivate utility without any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
4. luactivate activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.

--

I ran into problems booting off of the patched BE, so I followed the steps 
above to get to the last good BE, but I got the following message:

cannot open '/dev/dsk/c0t0d0s0': invalid dataset name


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resilver of older root pool disk

2010-09-23 Thread Lori Alt

 On 09/23/10 04:40 PM, Frank Middleton wrote:

Bumping this because no one responded. Could this be because
it's such a stupid question no one wants to stoop to answering it,
or because no one knows the answer? Trying to picture, say, what
could happen in /var (say /var/adm/messages), let alone a swap
zvol, is giving me a headache...

On 07/09/10 17:00, Frank Middleton wrote:

This is a hypothetical question that could actually happen:

Suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0
and for some reason c0t0d0s0 goes off line, but comes back
on line after a shutdown. The primary boot disk would then
be c0t0d0s0 which would have much older data than c0t1d0s0.

Under normal circumstances ZFS would know that c0t0d0s0
needs to be resilvered. But in this case c0t0d0s0 is the boot
disk. Would ZFS still be able to correctly resilver the correct
disk under these circumstances? I suppose it might depend
on which files, if any, had actually changed...


Booting from the out-of-date disk will fail with the message:

The boot device is out of date. Please try booting from '%s'

where a different device is suggested.

The reason for this is that once the kernel has gotten far enough to 
import the root pool and read all the labels (and thereby determine that 
the device being booted does not have the latest contents), it's too 
late to switch over to a different disk for booting.  So the next best 
action is to fail and let the user explicitly boot from another disk in 
the pool (using whatever means the BIOS provides for this, or by 
selecting a different disk using the OBP interface on sparc platforms).  
Once the system is up, the out-of-date disk will be resilvered.


Ideally, the boot loader should examine all the disks in the root pool 
and boot off one with the most recent contents.  I believe that there is 
a bug filed requesting that this be implemented for sparc platforms.  
It's more difficult with x86 platforms (there are issues with accessing 
more than one disk from within the current boot loader), and I don't 
know what the prospects are for implementing that logic. But in any 
case, the boot logic prevents you from unwittingly booting off an 
out-of-date disk when a more up-to-date disk is online.


Lori





Thanks -- Frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] ZFS archive image

2010-09-13 Thread Lori Alt

On 09/13/10 09:40 AM, Buck Huffman wrote:

I have a flash archive that is stored in a ZFS snapshot stream.  Is there a way 
to mount this image so I can read files from it?
No, but you can use the flar split command to split the flash archive 
into its constituent parts, one of which will be a zfs send stream that 
you can unpack with zfs recv.
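
Roughly like this (the archive and pool names are placeholders, and the exact
section file names may differ):

# flar split zfsflar.flar                  (writes one file per section into the current dir)
# zfs receive -d tank/restore < archive    (the 'archive' section holds the zfs send stream)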


Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JET and ZFS ?

2010-08-16 Thread Lori Alt


I don't know much about JET, but a jumpstart install of a system with a 
zfs root will do the necessary disk formatting.   The profile keywords 
that describe the disk layout work more or less the same for zfs as they 
do for ufs, subject to the ways that zfs is different from ufs (you 
don't need a separate swap/dump slice, for example).  'any' will format 
the drive for you, just like for ufs.
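
As a sketch, a minimal zfs-root jumpstart profile looks something like this
(sizes and device names are placeholders):

install_type  initial_install
pool          rpool auto auto auto c0t0d0s0
bootenv       installbe bename s10zfsBE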


Lori

On 08/16/10 09:21 AM, Mark A. Hodges wrote:
With JET, you can specify a ZFS install by selecting the disk slice to 
install the rpool onto (or 'any' to let the install choose a disk). 
But this appears to assume that the disk is already formatted. Or does 
it? It looks like if you specify 'any' it may or may not format 
the drive for you.


With UFS-based installs, jumpstart will format the disk per the 
jumpstart profile, of course. It's not clear how to drive this with JET.




Any experiences?



Thanks,


mark





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root flash archive issue

2010-07-13 Thread Lori Alt


The setting of the content_architectures field is likely to be 
independent of the file system type, so at least at first glance, I 
don't think that this is a zfs issue.  You might try this question at 
install-disc...@opensolaris.org.


Lori


On 07/13/10 11:45 AM, Ketan wrote:

I have created a flash archive from an Ldom on a T5220 with zfs root Solaris 
10_u8. But after creation of the flar, the info shows 
content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s but not sun4v, due to 
which I'm unable to install this flash archive on another Ldom on the same 
host. Is there any way to modify the content_architectures? And what's the 
reason for the missing sun4v architecture in the content_architectures?

flar -i zfsflar_ldom
archive_id=4506fd9b45fba5b2c5e042715da50f0a
files_archived_method=cpio
creation_date=20100713054458
creation_master=e-u013
content_name=zfsBE
creation_node=ezzz-u013
creation_hardware_class=sun4v
creation_platform=SUNW,SPARC-Enterprise-T5220
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_141444-09
rootpool=rootpool
bootfs=rootpool/ROOT/zfsBE
snapname=zflash.100713.00.07
files_compressed_method=none
files_archived_size=3869213825
files_unarchived_size=3869213825
content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s
type=FULL
   


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS crash

2010-07-07 Thread Lori Alt

On 07/ 7/10 11:33 AM, Garrett D'Amore wrote:

On Wed, 2010-07-07 at 10:09 -0700, Mark Christooph wrote:
   

I had an interesting dilemma recently and I'm wondering if anyone here can 
illuminate on why this happened.

I have a number of pools, including the root pool, in on-board disks on the 
server. I also have one pool on a SAN disk, outside the system. Last night the 
SAN crashed, and shortly thereafter, the ZFS system executed a number of cron 
jobs, most of which involved running functions on the pool that was on the SAN. 
This caused a number of problems, most notably that when the SAN eventually 
came up, those cron jobs finished, and then crashed the system again.

Only by running zfs destroy on the newly created zfs file systems that the cron 
jobs created was the system able to boot up again. As long as those corrupted 
zfs file systems remained on the SAN disk, not even the rpool would boot up 
correctly. None of the zfs file systems would mount, and most services were 
disabled. Once I destroyed the newly created zfs file systems, everything 
instantly mounted and all services started.

Question: why would those zfs file systems prevent ALL pools from mounting, 
even when they are on different disks and file systems, and prevent all 
services from starting? I thought ZFS was more resistant to this sort of thing. 
I will have to edit my scripts and add SAN-checking to make sure it is up 
before they execute to prevent this from happening again. Luckily I still had 
all the raw data that the cron jobs were working with, so I was able to quickly 
re-create what the cron jobs did originally.   Although this happened with 
Solaris 10, perhaps the discussion could be applicable to OpenSolaris as well 
(I use both).
 


This sounds like a bug.  It would certainly be informative to see the
backtrace of the panic from your logs.  Also, it would be more
informative if you were seeing this problem on OpenSolaris.  Many of us
will not be able to do much with a Solaris 10 backtrace, since we don't
have access to S10 sources.
   

Also, the logs from /var/svc/log would be helpful.

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legacy MountPoint for /rpool/ROOT

2010-07-06 Thread Lori Alt

On 07/ 6/10 10:56 AM, Ketan wrote:

I have two different servers with ZFS root, but they have different 
mountpoints for rpool/ROOT: one is /rpool/ROOT and the other is legacy.


It should be legacy.


What's the difference between the two, and which one should we keep?

And why are there 3 different zfs datasets: rpool, rpool/ROOT and 
rpool/ROOT/zfsBE?


rpool is the top-level dataset of the pool (every pool has one).

rpool/ROOT is just a grouping mechanism for the roots of all BEs on the 
system.


rpool/ROOT/zfsBE is the root file system of the BE called zfsBE.
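
If you want to make the first system match the second, something like this
should do it (untested sketch):

# zfs set mountpoint=legacy rpool/ROOT
# zfs get mountpoint rpool/ROOT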

lori



# zfs list -r rpool
NAME USED  AVAIL  REFER  MOUNTPOINT
rpool  24.1G   110G98K/rpool
rpool/ROOT6.08G   110G21K/rpool/ROOT
rpool/ROOT/zfsBE   6.08G   110G  6.08G  /

**

zfs list -r rpool
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool 34.0G  99.9G94K  /rpool
rpool/ROOT   27.9G  99.9G18K  legacy
rpool/ROOT/s10s_u6  27.9G  99.9G  27.9G  /
   


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] c5-c9 device name change prevents beadm activate

2010-06-24 Thread Lori Alt

On 06/24/10 03:27 AM, Brian Nitz wrote:

Lori,

In my case what may have caused the problem is that after a previous 
upgrade failed, I used this zfs send/recv procedure to give me (what I 
thought was) a sane rpool:


http://blogs.sun.com/migi/entry/broken_opensolaris_never

Is it possible that a zfs recv of a root pool contains the device 
names from the sending hardware?


Yes, the data installed by the zfs recv will contain the device names 
from the sending hardware.


I looked at the instructions in the blog you reference above and while 
the procedure *might* work in some circumstances, it would mostly be by 
accident.  Maybe if there is an exact match of hardware, it might work, 
but there's also metadata that describes the BEs on a system and I doubt 
whether the send/recv would restore all the information necessary to do 
that.


You might want to bring this subject up on the 
caiman-disc...@opensolaris.org alias, where needs like this can be 
addressed for real, in the supported installation tools.


Lori





On 06/23/10 18:15, Lori Alt wrote:

Cindy Swearingen wrote:



On 06/23/10 10:40, Evan Layton wrote:

On 6/23/10 4:29 AM, Brian Nitz wrote:

I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub failed:

# BE_PRINT_ERR=true beadm activate opensolarismigi-4
be_do_installgrub: installgrub failed for device c5t0d0s0.
Unable to activate opensolarismigi-4.
Unknown external error.

The reason installgrub failed is that it is attempting to install grub
on c5t0d0s0 which is where my root pool is:
# zpool status
pool: rpool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: scrub repaired 0 in 5h3m with 0 errors on Tue Jun 22 22:31:08 2010

config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c5t0d0s0 ONLINE 0 0 0

errors: No known data errors

But the raw device doesn't exist:
# ls -ls /dev/rdsk/c5*
/dev/rdsk/c5*: No such file or directory

Even though zfs pool still sees it as c5, the actual device seen by
format is c9t0d0s0


Is there any workaround for this problem? Is it a bug in install, zfs or
somewhere else in ON?



In this instance beadm is a victim of the zpool configuration reporting
the wrong device. This does appear to be a ZFS issue since the device
actually being used is not what zpool status is reporting. I'm forwarding
this on to the ZFS alias to see if anyone has any thoughts there.

-evan


Hi Evan,

I suspect that some kind of system, hardware, or firmware event changed
this device name. We could identify the original root pool device with
the zpool history output from this pool.

Brian, you could boot this system from the OpenSolaris LiveCD and
attempt to import this pool to see if that will update the device info
correctly.

If that doesn't help, then create /dev/rdsk/c5* symlinks to point to
the correct device.

I've seen this kind of device name change in a couple contexts now 
related to installs, image-updates, etc.


I think we need to understand why this is happening.  Prior to 
OpenSolaris and the new installer, we used to go to a fair amount of 
trouble to make sure that device names, once assigned, never 
changed.  Various parts of the system depended on device names 
remaining the same across upgrades and other system events.


Does anyone know why these device names are changing?  Because that 
seems like the root of the problem.  Creating symlinks with the old 
names seems like a band-aid, which could cause problems down the 
road--what if some other device on the system gets assigned that name 
on a future update?


Lori









___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] c5-c9 device name change prevents beadm activate

2010-06-23 Thread Lori Alt

Cindy Swearingen wrote:



On 06/23/10 10:40, Evan Layton wrote:

On 6/23/10 4:29 AM, Brian Nitz wrote:

I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub failed:

# BE_PRINT_ERR=true beadm activate opensolarismigi-4
be_do_installgrub: installgrub failed for device c5t0d0s0.
Unable to activate opensolarismigi-4.
Unknown external error.

The reason installgrub failed is that it is attempting to install grub
on c5t0d0s0 which is where my root pool is:
# zpool status
pool: rpool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: scrub repaired 0 in 5h3m with 0 errors on Tue Jun 22 22:31:08 2010

config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c5t0d0s0 ONLINE 0 0 0

errors: No known data errors

But the raw device doesn't exist:
# ls -ls /dev/rdsk/c5*
/dev/rdsk/c5*: No such file or directory

Even though zfs pool still sees it as c5, the actual device seen by
format is c9t0d0s0


Is there any workaround for this problem? Is it a bug in install, zfs or
somewhere else in ON?



In this instance beadm is a victim of the zpool configuration reporting
the wrong device. This does appear to be a ZFS issue since the device
actually being used is not what zpool status is reporting. I'm forwarding
this on to the ZFS alias to see if anyone has any thoughts there.

-evan


Hi Evan,

I suspect that some kind of system, hardware, or firmware event changed
this device name. We could identify the original root pool device with
the zpool history output from this pool.

Brian, you could boot this system from the OpenSolaris LiveCD and
attempt to import this pool to see if that will update the device info
correctly.

If that doesn't help, then create /dev/rdsk/c5* symlinks to point to
the correct device.

I've seen this kind of device name change in a couple contexts now 
related to installs, image-updates, etc.


I think we need to understand why this is happening.  Prior to 
OpenSolaris and the new installer, we used to go to a fair amount of 
trouble to make sure that device names, once assigned, never changed.  
Various parts of the system depended on device names remaining the same 
across upgrades and other system events.


Does anyone know why these device names are changing?  Because that 
seems like the root of the problem.  Creating symlinks with the old 
names seems like a band-aid, which could cause problems down the 
road--what if some other device on the system gets assigned that name on 
a future update?


Lori

 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unsetting the bootfs property possible? imported a FreeBSD pool

2010-05-26 Thread Lori Alt


Was a bug ever filed against zfs for not allowing the bootfs property to 
be set to "" (i.e. unset)?  We should always let that request succeed.


lori

On 05/26/10 09:09 AM, Cindy Swearingen wrote:

Hi--

I'm glad you were able to resolve this problem.

I drafted some hints in this new section:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Pool_Migration_Issues 



We had all the clues, you and Brandon got it though.

I think my brain function was missing yesterday.

Thanks,

Cindy

On 05/25/10 16:59, Reshekel Shedwitz wrote:

Cindy,

Thanks. Same goes to everyone else on this thread.

I actually solved the issue - I booted back into FreeBSD's Fixit 
mode and was still able to import the pool (wouldn't have been able 
to if I upgraded the pool version!). FreeBSD's zpool command allowed 
me to unset the bootfs property.
I guess that should have been more obvious to me. At least now I'm in 
good shape as far as this pool goes - zpool won't complain when I try 
to replace disks or add cache.
Might be worth documenting this somewhere as a gotcha when 
migrating from FreeBSD to OpenSolaris.


Thanks!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Lori Alt


First, I suggest you open a bug at  https://defect.opensolaris.org/bz  
and get a bug number.


Then, name your core dump something like bug.bugnumber and upload it 
using the instructions here:


http://supportfiles.sun.com/upload

Update the bug once you've uploaded the core  and supply the name of the 
core file.



Lori




On 05/19/10 12:40 PM, John Andrunas wrote:

OK, I got a core dump, what do I do with it now?

It is 1.2G in size.


On Wed, May 19, 2010 at 10:54 AM, John Andrunas j...@andrunas.net wrote:
   

Hmmm... no coredump even though I configured it.

Here is the trace though  I will see what I can do about the coredump

r...@cluster:/export/home/admin# zfs mount vol2/vm2

panic[cpu3]/thread=ff001f45ec60: BAD TRAP: type=e (#pf Page fault)
rp=ff001f45e950 addr=30 occurred in module zfs due to a NULL
pointer dereference

zpool-vol2: #pf Page fault
Bad kernel fault at addr=0x30
pid=1469, pc=0xf795d054, sp=0xff001f45ea48, eflags=0x10296
cr0: 8005003b<pg,wp,ne,et,ts,mp,pe>  cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
cr2: 30cr3: 500cr8: c

rdi:0 rsi: ff05208b2388 rdx: ff001f45e888
rcx:0  r8:3000900ff  r9: 198f5ff6
rax:0 rbx:  200 rbp: ff001f45ea50
r10: c0130803 r11: ff001f45ec60 r12: ff05208b2388
r13: ff0521fc4000 r14: ff050c0167e0 r15: ff050c0167e8
fsb:0 gsb: ff04eb9b8080  ds:   4b
 es:   4b  fs:0  gs:  1c3
trp:e err:2 rip: f795d054
 cs:   30 rfl:10296 rsp: ff001f45ea48
 ss:   38

ff001f45e830 unix:die+dd ()
ff001f45e940 unix:trap+177b ()
ff001f45e950 unix:cmntrap+e6 ()
ff001f45ea50 zfs:ddt_phys_decref+c ()
ff001f45ea80 zfs:zio_ddt_free+55 ()
ff001f45eab0 zfs:zio_execute+8d ()
ff001f45eb50 genunix:taskq_thread+248 ()
ff001f45eb60 unix:thread_start+8 ()

syncing file systems... done
skipping system dump - no dump device configured
rebooting...


On Wed, May 19, 2010 at 8:55 AM, Michael Schuster
michael.schus...@oracle.com  wrote:
 

On 19.05.10 17:53, John Andrunas wrote:
   

Not to my knowledge, how would I go about getting one?  (CC'ing discuss)
 

man savecore and dumpadm.

Michael
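
For reference, a rough sketch of configuring a dump device and savecore
directory so the next panic actually produces a crash dump (device and
directory names are placeholders):

# dumpadm -d /dev/zvol/dsk/rpool/dump     (dedicated dump device)
# dumpadm -s /var/crash/`hostname`        (directory savecore writes into)
# dumpadm -y                              (run savecore automatically on reboot)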
   


On Wed, May 19, 2010 at 8:46 AM, Mark J Musante mark.musa...@oracle.com wrote:
 

Do you have a coredump?  Or a stack trace of the panic?

On Wed, 19 May 2010, John Andrunas wrote:

   

Running ZFS on a Nexenta box, I had a mirror get broken and apparently
the metadata is corrupt now.  If I try and mount vol2 it works, but if
I try and mount -a or mount vol2/vm2 it instantly kernel panics and
reboots.  Is it possible to recover from this?  I don't care if I lose
the file listed below, but the other data in the volume would be
really nice to get back.  I have scrubbed the volume to no avail.  Any
other thoughts.


zpool status -xv vol2
  pool: vol2
state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
  see: http://www.sun.com/msg/ZFS-8000-8A
scrub: none requested
config:

   NAMESTATE READ WRITE CKSUM
   vol2ONLINE   0 0 0
 mirror-0  ONLINE   0 0 0
   c3t3d0  ONLINE   0 0 0
   c3t2d0  ONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

   vol2/v...@snap-daily-1-2010-05-06-:/as5/as5-flat.vmdk

--
John
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 


Regards,
markm

   



 


--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'

   



--
John

 



   


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Odd dump volume panic

2010-05-12 Thread Lori Alt

On 05/12/10 04:29 AM, Ian Collins wrote:
I just tried moving a dump volume from rpool into another pool so I 
used zfs send/receive to copy the volume (to keep some older dumps) 
then ran dumpadm -d to use the new location.  This caused a panic.  
Nothing ended up in messages and needless to say, there isn't a dump!


Creating a new volume and using that worked fine.

This was on Solaris 10 update 8.

Has anyone else seen anything like this?



The fact that a panic occurred is some kind of bug, but I'm also not 
surprised that this didn't work.  Dump volumes have specialized behavior 
and characteristics and using send/receive to move them (or any other 
way to move them) is probably not going to work.  You need to extract 
the dump from the dump zvol using savecore and then move the resulting file.
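
Roughly (the destination directory is just a placeholder):

# savecore -v                                 (extracts the dump into the configured savecore directory)
# mv /var/crash/`hostname`/* /tank/dumps/     (then move the resulting files wherever you want them)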


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split problem?

2010-04-01 Thread Lori Alt

On 03/31/10 03:50 AM, Damon Atkins wrote:

Why do we still need /etc/zfs/zpool.cache file???
(I could understand it was useful when zfs import was slow)

zpool import is now multi-threaded 
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844191), hence a 
lot faster,  each disk contains the hostname 
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6282725) , if a 
pool contains the same hostname as the server then import it.

ie This bug should not be a problem any more 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6737296 with a 
multi-threaded zpool import.

HA Storage should be changed to just do a zpool import -h mypool instead of 
using a private zpool.cache file (-h meaning: ignore if the pool was imported by a 
different host; and maybe a noautoimport property is needed on a zpool so 
clustering software can decide to import it by hand as it was)

And therefore this zpool split problem would be fixed.
   
The problem with splitting a root pool goes beyond the issue of the 
zpool.cache file.  If you look at the comments for 6939334 
http://monaco.sfbay.sun.com/detail.jsf?cr=6939334, you will see other 
files whose content is not correct when a root pool is renamed or split.


I'm not questioning your logic about whether zpool.cache is still 
needed.  I'm only pointing out that eliminating the zpool.cache file 
would not enable root pools to be split.  More work is required for that.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split problem?

2010-04-01 Thread Lori Alt
You might want to take this issue over to 
caiman-disc...@opensolaris.org, because this is more of an 
installation/management issue than a zfs issue.  Other than providing a 
mechanism for updating the zpool.cache file, the actions listed below 
are not directly related to zfs.


I believe that the Caiman team is looking at implementing a mass 
provisioning and disaster recovery mechanism (functionally similar to 
flash archives in the legacy Solaris installer).  Pool splitting could 
be another tool in their toolbox for accomplishing that goal.


Lori


On 03/31/10 06:41 PM, Damon Atkins wrote:

I assume the swap, dumpadm, grub issues are because the pool has a different name now, 
but is it still a problem if you take it to a *different system*, boot off a CD, 
and change it back to rpool? (which is most likely unsupported, i.e. no help to get 
it working)

Over 10 years ago (way before flash archive existed) I developed a script, 
used after splitting a mirror, which would remove most of the device tree and 
clean up path_to_inst etc. so it looked like the OS was just installed and about 
to do the reboot without the install CD. (Everything was still in there except 
for hardware-specific stuff; I no longer have the script and most likely would 
not do it again because it's not a supported install method.)

I still had to boot from CD on the new system and create the dev tree before 
booting off the disk for the first time, and then fix vfstab (but the vfstab fix 
should be gone with a zfs rpool)

It would be nice for Oracle/Sun to produce a separate script which resets 
system/devices back to an install-like beginning, so you can move an OS disk with 
the current password file and software from one system to another and have it 
rebuild the device tree on the new system.

 From memory (updated for zfs), something like:
zpool split rpool newrpool
mount newrpool
remove newrpool/dev and newrpool/devices of all non-packaged content (ie 
dynamically created content)
clean up newrpool/etc/path_to_inst
create /newrpool/reconfigure
remove all previous snapshots in newrpool
update beadm info inside newrpool
ensure grub is installed on the disk
   


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-22 Thread Lori Alt

On 03/22/10 05:04 PM, Brandon High wrote:
On Mon, Mar 22, 2010 at 10:26 AM, Richard Elling 
richard.ell...@gmail.com wrote:


NB. deduped streams should further reduce the snapshot size.


I haven't seen a lot of discussion on the list regarding send dedup, 
but I understand it'll use the DDT if you have dedup enabled on your 
dataset.
The send code (which is user-level) builds its own DDT no matter what, 
but it will use existing checksums if on-disk dedup is already in effect.


What's the process and penalty for using it on a dataset that is not 
already deduped?


The penalty is the cost of doing the checksums.


Does it build a DDT for just the data in the send?


Yes, currently limited to 20% of physical memory size.

Lori




-B

--
Brandon High : bh...@freaks.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   




Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Lori Alt



I think what you're saying is:  Why bother trying to back up with zfs send
when the recommended practice, fully supportable, is to use other tools for
backup, such as tar, star, Amanda, bacula, etc.   Right?

The answer to this is very simple.
#1  ...
#2  ...
 

Oh, one more thing.  zfs send is only discouraged if you plan to store the
data stream and do zfs receive at a later date.
   


This is no longer the case.  The send stream format is now versioned in 
such a way that future versions of Solaris will be able to read send 
streams generated by earlier versions of Solaris.  The comment in the 
zfs(1M) manpage discouraging the use of send streams for later 
restoration has been removed.  This versioning leverages the ZFS object 
versioning that already exists (the versioning that allows earlier 
version pools to be read by later versions of zfs), plus versioning and 
feature flags in the stream header.
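
So storing a stream and receiving it later is a supported workflow, roughly
like this (names are placeholders):

# zfs snapshot -r tank/data@archive
# zfs send -R tank/data@archive > /backup/tank-data.zstream
(later, possibly on a newer release)
# zfs receive -d newpool < /backup/tank-data.zstream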


Lori



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-03-02 Thread Lori Alt

On 03/ 2/10 11:48 AM, Freddie Cash wrote:
On Tue, Mar 2, 2010 at 7:15 AM, Kjetil Torgrim Homme 
kjeti...@linpro.no wrote:


valrh...@gmail.com writes:

 I have been using DVDs for small backups here and there for a decade
 now, and have a huge pile of several hundred. They have a lot of
 overlapping content, so I was thinking of feeding the entire stack
 into some sort of DVD autoloader, which would just read each disk, and
 write its contents to a ZFS filesystem with dedup enabled. [...] That
 would allow me to consolidate a few hundred CDs and DVDs onto probably
 a terabyte or so, which could then be kept conveniently on a hard
 drive and archived to tape.

it would be inconvenient to make a dedup copy on harddisk or tape, you
could only do it as a ZFS filesystem or ZFS send stream.  it's better to
use a generic tool like hardlink(1), and just delete files afterwards
with

Why would it be inconvenient?  This is pretty much exactly what ZFS + 
dedupe is perfect for.


Since dedupe is pool-wide, you could create individual filesystems for 
each DVD.  Or use just 1 filesystem with sub-directories.  Or just one 
filesystem with snapshots after each DVD is copied over top.


The data would be dedupe'd on write, so you would only have 1 copy of 
unique data.


To save it to tape, just zfs send it, and save the stream file.


Stream dedup is largely independent of on-disk dedup.  If the content is 
dedup'ed on disk, but you don't specify the -D to 'zfs send', the 
dedup'ed data will be re-expanded.  Even if the content is NOT dedup'ed 
on disk, the -D option will cause the blocks to be dedup'ed in the stream.


One advantage to using them both is that the 'zfs send -D' processing 
doesn't need to recalculate the block checksums if they already exist on 
disk.  This speeds up the send stream generation code by a lot.
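
Roughly, the two cases look like this (names are placeholders):

# zfs send tank/dvds@loaded > /backup/dvds-full.stream       (on-disk dedup'd data is re-expanded)
# zfs send -D tank/dvds@loaded > /backup/dvds-dedup.stream   (blocks are dedup'd within the stream)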


Also, in response to another comment about the send stream format not 
being recommended for archiving, that all depends on how you intend to 
use the send stream in the future.   The format IS supported going 
forward, and future version of zfs will continue to be capable of 
reading older send stream formats (the zfs(1M) man page has been 
modified to clarify this now).


Lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive : panic and reboot

2010-02-16 Thread Lori Alt

Hi Bruno,

I've tried to reproduce this panic you are seeing.  However, I had 
difficulty following your procedure.  See below:




On 02/08/10 15:37, Bruno Damour wrote:

On 02/ 8/10 06:38 PM, Lori Alt wrote:


Can you please send a complete list of the actions taken:  The 
commands you used to create the send stream, the commands used to 
receive the stream.  Also the output of `zfs list -t all` on both the 
sending and receiving sides.  If you were able to collect a core dump 
(it should be in /var/crash/hostname), it would be good to upload it.


The panic you're seeing is in the code that is specific to receiving 
a dedup'ed stream.  It's possible that you could do the migration if 
you turned off dedup (i.e. didn't specify -D) when creating the send 
stream.. However, then we wouldn't be able to diagnose and fix what 
appears to be a bug.


The best way to get us the crash dump is to upload it here:

https://supportfiles.sun.com/upload

We need either both vmcore.X and unix.X OR you can just send us 
vmdump.X.


Sometimes big uploads have mixed results, so if there is a problem 
some helpful hints are
on 
http://wikis.sun.com/display/supportfiles/Sun+Support+Files+-+Help+and+Users+Guide, 


specifically in section 7.

It's best to include your name or your initials or something in the 
name of the file you upload.  As

you might imagine we get a lot of files uploaded named vmcore.1

You might also create a defect report at 
http://defect.opensolaris.org/bz/


Lori


On 02/08/10 09:41, Bruno Damour wrote:

copied from opensolaris-discuss as this probably belongs here.

I kept on trying to migrate my pool with children (see previous 
threads) and had the (bad) idea to try the -d option on the receive 
part.

The system reboots immediately.

Here is the log in /var/adm/messages

Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ff014ba86e40:
Feb 8 16:07:09 amber genunix: [ID 169834 kern.notice] avl_find() 
succeeded inside avl_add()

Feb 8 16:07:09 amber unix: [ID 10 kern.notice]
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4660 genunix:avl_add+59 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c46c0 zfs:find_ds_by_guid+b9 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c46f0 zfs:findfunc+23 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c47d0 zfs:dmu_objset_find_spa+38c ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4810 zfs:dmu_objset_find+40 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4a70 zfs:dmu_recv_stream+448 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4c40 zfs:zfs_ioc_recv+41d ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4cc0 zfs:zfsdev_ioctl+175 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4d00 genunix:cdev_ioctl+45 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4d40 specfs:spec_ioctl+5a ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4dc0 genunix:fop_ioctl+7b ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4ec0 genunix:ioctl+18e ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] 
ff00053c4f10 unix:brand_sys_syscall32+1ca ()

Feb 8 16:07:09 amber unix: [ID 10 kern.notice]
Feb 8 16:07:09 amber genunix: [ID 672855 kern.notice] syncing file 
systems...

Feb 8 16:07:09 amber genunix: [ID 904073 kern.notice] done
Feb 8 16:07:10 amber genunix: [ID 111219 kern.notice] dumping to 
/dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
Feb 8 16:07:10 amber ahci: [ID 405573 kern.info] NOTICE: ahci0: 
ahci_tran_reset_dport port 3 reset port

Feb 8 16:07:35 amber genunix: [ID 10 kern.notice]
Feb 8 16:07:35 amber genunix: [ID 665016 kern.notice] ^M100% done: 
107693 pages dumped,

Feb 8 16:07:35 amber genunix: [ID 851671 kern.notice] dump succeeded
  



Hello,
I'll try to do my best.

Here are the commands :

amber ~ # zfs unmount data
amber ~ # zfs snapshot -r d...@prededup
amber ~ # zpool destroy ezdata
amber ~ # zpool create ezdata c6t1d0
amber ~ # zfs set dedup=on ezdata
amber ~ # zfs set compress=on ezdata
amber ~ # zfs send -RD d...@prededup |zfs receive ezdata/data
cannot receive new filesystem stream: destination 'ezdata/data' exists
must specify -F to overwrite it
amber ~ # zpool destroy ezdata
amber ~ # zpool create ezdata c6t1d0
amber ~ # zfs set compression=on ezdata
amber ~ # zfs set dedup=on ezdata
amber ~ # zfs send -RD d...@prededup |zfs receive -F ezdata/data
cannot receive new filesystem stream: destination has snapshots
(eg. ezdata/d...@prededup)
must destroy them to overwrite it




This send piped to recv didn't even get started because of the above error.

Are you saying that the command ran for several hours and THEN produced 
that message?


I created a hierarchy of dataset

Re: [zfs-discuss] What Happend to my OpenSolaris X86 Install?

2010-02-11 Thread Lori Alt
This sounds more like an install issue than a zfs issue.  I suggest you 
take this to caiman-disc...@opensolaris.org .


Lori

On 02/10/10 23:44, Jeff Rogers wrote:

Thanks for the tip, but it was not that. The two hard drives were running under 
RAID 1 on my Linux install, so the two drives had identical information on them 
when I installed OpenSolaris. I disabled the hardware RAID support in my BIOS to 
install OpenSolaris. Looking at the disk from the still-functional Linux OS on 
the disk, it appears that OpenSolaris only formatted some of the partitions to 
ZFS. As luck would have it, the partitions not formatted to ZFS (which are still 
EXT3) have the MBR and the rest of the file system needed to run just fine. Looking at 
the partition table under Linux I can see the partitions that are formatted ZFS, 
as they are shown but not supported with Linux.

Bummer. Looks like I will have to reformat the drives with GParted and 
reinstall OpenSolaris. I can not see how a LiveCD of GParted could gain access 
to my filesystem before I say it is okay to rewrite the MBR. I suppose this is 
a GParted issue and not a OpenSolaris one. I'll go bark up that tree now.

Does anyone have any good pointers of info on OpenSolaris, Postgres with WAL 
enabled and ZFS?
I would like to put the WAL files on a different disk than the one with the OS 
and Postgres.

Thanks

Jeff
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread Lori Alt

On 02/11/10 08:15, taemun wrote:

Can anyone comment about whether the on-boot "Reading ZFS config" is
any slower/better/whatever than deleting zpool.cache, rebooting and
manually importing?

I've been waiting more than 30 hours for this system to come up. There
is a pool with 13TB of data attached. The system locked up whilst
destroying a 934GB dedup'd dataset, and I was forced to reboot it. I
can hear hard drive activity presently - i.e. it's doing
*something* - but am really hoping there is a better way :)

Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
I think that this is a consequence of 6924390 
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390): ZFS 
destroy on de-duped dataset locks all I/O.


This bug is closed as a dup of another bug which is not readable from 
the opensolaris site (I'm not clear what makes some bugs readable and 
some not).


While trying to reproduce 6924390 (or its equivalent) yesterday, my 
system hung as yours did, and when I rebooted, it hung at Reading ZFS 
config.


Someone who knows more about the root cause of this situation (i.e., the 
bug named above) might be able tell you what's going on and how to 
recover (it might be that what's going on is that the destroy has 
resumed and you have to wait for it to complete, which I think it will, 
but it might take a long time).


Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive : panic and reboot

2010-02-08 Thread Lori Alt


Can you please send a complete list of the actions taken:  The commands 
you used to create the send stream, the commands used to receive the 
stream.  Also the output of `zfs list -t all` on both the sending and 
receiving sides.  If you were able to collect a core dump (it should be 
in /var/crash/hostname), it would be good to upload it.


The panic you're seeing is in the code that is specific to receiving a 
dedup'ed stream.  It's possible that you could do the migration if you 
turned off dedup (i.e. didn't specify -D) when creating the send 
stream.. However, then we wouldn't be able to diagnose and fix what 
appears to be a bug.


The best way to get us the crash dump is to upload it here:

https://supportfiles.sun.com/upload

We need either both vmcore.X and unix.X OR you can just send us vmdump.X.

Sometimes big uploads have mixed results, so if there is a problem some 
helpful hints are
on 
http://wikis.sun.com/display/supportfiles/Sun+Support+Files+-+Help+and+Users+Guide, 


specifically in section 7.

It's best to include your name or your initials or something in the name 
of the file you upload.  As

you might imagine we get a lot of files uploaded named vmcore.1

You might also create a defect report at http://defect.opensolaris.org/bz/

Lori


On 02/08/10 09:41, Bruno Damour wrote:

copied from opensolaris-discuss as this probably belongs here.

I kept on trying to migrate my pool with children (see previous threads) and 
had the (bad) idea to try the -d option on the receive part.
The system reboots immediately.

Here is the log in /var/adm/messages

Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ff014ba86e40:
Feb 8 16:07:09 amber genunix: [ID 169834 kern.notice] avl_find() succeeded 
inside avl_add()
Feb 8 16:07:09 amber unix: [ID 10 kern.notice]
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4660 
genunix:avl_add+59 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c46c0 
zfs:find_ds_by_guid+b9 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c46f0 
zfs:findfunc+23 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c47d0 
zfs:dmu_objset_find_spa+38c ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4810 
zfs:dmu_objset_find+40 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4a70 
zfs:dmu_recv_stream+448 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4c40 
zfs:zfs_ioc_recv+41d ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4cc0 
zfs:zfsdev_ioctl+175 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4d00 
genunix:cdev_ioctl+45 ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4d40 
specfs:spec_ioctl+5a ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4dc0 
genunix:fop_ioctl+7b ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4ec0 
genunix:ioctl+18e ()
Feb 8 16:07:09 amber genunix: [ID 655072 kern.notice] ff00053c4f10 
unix:brand_sys_syscall32+1ca ()
Feb 8 16:07:09 amber unix: [ID 10 kern.notice]
Feb 8 16:07:09 amber genunix: [ID 672855 kern.notice] syncing file systems...
Feb 8 16:07:09 amber genunix: [ID 904073 kern.notice] done
Feb 8 16:07:10 amber genunix: [ID 111219 kern.notice] dumping to 
/dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
Feb 8 16:07:10 amber ahci: [ID 405573 kern.info] NOTICE: ahci0: 
ahci_tran_reset_dport port 3 reset port
Feb 8 16:07:35 amber genunix: [ID 10 kern.notice]
Feb 8 16:07:35 amber genunix: [ID 665016 kern.notice] ^M100% done: 107693 pages 
dumped,
Feb 8 16:07:35 amber genunix: [ID 851671 kern.notice] dump succeeded
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root pool on upgraded host

2010-01-29 Thread Lori Alt

On 01/25/10 17:56, Shannon Fiume wrote:

Hi,
I installed opensolaris on a x2200 m2 with two internal drives that 
had an existing root pool with a Solaris 10 update 6. After installing 
opensolaris 2009.06 the host refused to boot. The opensolaris install 
was fine. I had to pull the second hard drive to get the host to boot. 
Then insert the second drive and relabel the old root pool to 
something other than rpool. Then the host was able to boot.


There are a few bugs open on a similar issue. Which bug or bugs should 
I update?
I looked and didn't see any bugs that matched this one very closely 
(granted, I didn't look very hard).  But then you know what kind of 
error messages to search for in bugster.  If you don't see a bug which 
is clearly the same as this, please enter a new one and include as much 
as you about the steps to reproduce and the error messages you saw.  
We'll close it as a dup if it ends up matching one we already have.


lori




Thanks in advance,

~~sa

Shannon A. Fiume
shannon (dot) fiume at sun (dot) com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Need help with repairing zpool :(

2010-01-28 Thread Lori Alt


First, you might want to send this out to caiman-disc...@opensolaris.org 
as well in order to find the experts in the OpenSolaris install 
process.  (I am familiar with zfs booting in general and the legacy 
installer, but not so much about the OpenSolaris installer).


Second, including the text of the commands you issued below and the 
messages received would give me context and help me figure out what's 
going on and how to help.


Third, booting the system from the OpenSolaris install medium (CD or 
whatever) might give you some more diagnostic flexibility while you're 
trying to figure this out.  It may be that once you're booted from the 
install CD, you can simply import the pool (like any other pool, nothing 
special about the root pool in this way) and retrieve the files you need 
from it without having to worry about booting the pool.  In that case, 
you'll want to use the -R altroot option when importing the pool and 
you will have to explicitly mount some of the datasets since some of the 
datasets in root pools have their canmount property set to noauto.
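
Roughly, something like this (the altroot path and BE name are just
examples):

# zpool import -f -R /a rpool
# zfs mount rpool/ROOT/opensolaris     (the BE root, which has canmount=noauto)
# zfs mount -a

Your files should then be reachable under /a.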


Lori

On 01/28/10 08:53, Eugene Turkulevich wrote:

...how this can happen is not the topic of this message.
Now there is a problem and I need to solve it, if it is possible.

I have one HDD device (80gb); the entire disk is used for rpool, with the system and home 
folders on it.
It is no problem to reinstall the system, but I need to save some files from the user 
dirs. And, of course, there is no backup.

So, the problem is that the zpool is broken :( When I try to start the system from this 
disk, I get a message that there is no bootable partition.
As far as I can see, the data on the disk is not touched, so the only broken information is 
the partition table. format says that the disk is unknown and proposes to enter 
cylinders/heads/etc. manually. This has been done (using the info from an identical HDD). But when 
I try to enter the partition menu of the format utility, there is a message 
that "This disk may be in use by an application that has modified the fdisk table", so 
there is no way to see/modify slices (I need the s0 slice, because rpool is on it).
Any attempt to access the s0 partition (dd if=/dev/dsk/v14t0d0s0 ...) reports an I/O 
error, but p0 is readable.
Also, it is known that the disk was formatted by the OpenSolaris installer to use the 
whole disk.

Please help me. Is it possible to access the pool on this disk?
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Lori Alt

On 01/28/10 12:05, Dick Hoogendijk wrote:

On 28-1-2010 17:35, Cindy Swearingen wrote:

Thomas,

Excellent and much better suggestion... :-)

You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.

Dick, you will need to update the BIOS to boot from the smaller disk.
It's not that great an idea after all. Creating a new ABE in the new 
root pool goes well, BUT all other file systems on rpool 
(rpool/export, export/home, etc) don't get transferred. So, attaching 
is not possible because '/export/home/me' is busy ;-)


But those could be copied by send/recv from the larger disk (current 
root pool) to the smaller disk (intended new root pool).  You won't be 
attaching anything until you can boot off the smaller disk and then it 
won't matter what's on the larger disk because attaching the larger disk 
to the root  mirror will destroy the contents of the larger disk anyway.


lori




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Lori Alt

On 01/28/10 14:08, dick hoogendijk wrote:

On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
  
But those could be copied by send/recv from the larger disk (current 
root pool) to the smaller disk (intended new root pool).  You won't be 
attaching anything until you can boot off the smaller disk and then it 
won't matter what's on the larger disk because attaching the larger disk 
to the root  mirror will destroy the contents of the larger disk anyway.



You are right of course.
Are these right values for amd64 swap/dump:
zfs create -V 2G rpool/dump
zfs create -V 2G -b 4k rpool/swap
Are these -b 4k values OK?

  
I suggest that you set the sizes of the dump and swap zvols to match the 
zvols created by install on your original boot disk (the dump zvol size 
is particularly difficult to get right because install calls the kernel to 
determine the optimal value).


The block size for rpool/dump should be 128k.  The block size for swap 
zvols should be 4k for x86 (or amd64) platforms and 8k for sparc platforms.
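
For example, on an x86 system the commands might look like this (a sketch
only; match the sizes to whatever install created on your original disk):

# zfs create -V 2G -b 128k rpool/dump
# zfs create -V 2G -b 4k rpool/swap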


Lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs streams

2010-01-27 Thread Lori Alt

On 01/25/10 16:08, Daniel Carosone wrote:

On Mon, Jan 25, 2010 at 05:42:59PM -0500, Miles Nordin wrote:
  

et You cannot import a stream into a zpool of earlier revision,
et though the reverse is possible.

This is very bad, because it means if your backup server is pool
version 22, then you cannot use it to back up pool version 15 clients:
you can backup, but then you can never restore.



It would be, yes.
  
Correct.  It would be bad if it were true, but it's not.  What matters 
when doing receives of streams is that the version of the dataset (which 
can differ between datasets on the same system and between datasets in 
the same replication stream) be less than or equal to the version of the 
zfs filesystem supported on the receiving system.  The zfs filesystem 
version supported on a system can be displayed with the command zfs 
upgrade (with no further arguments).


The zfs filesystem version is different than the zpool version 
(displayed by `zpool get version poolname`).  You can send a stream 
from one system to another even if the zpool version is lower on the 
receiving system or pool.  I verified that this works by replicating a 
dataset from a system running build 129 (zpool version 22 and zfs 
version 4 ) to a system running S10 update (zpool version 15 and zfs 
version 4).  Since they agree on the file system version, it works.
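
For reference, you can check all of these versions yourself ("rpool" and
"rpool/new" are just example names):

# zfs upgrade                    (shows the zfs filesystem version this system supports)
# zpool get version rpool        (shows the pool version)
# zfs get version rpool/new      (shows the version of an individual dataset)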


But when I try to send a stream from build 120 to S10 U6 (zfs version = 
3), I get:


# zfs recv rpool/new < /net/x4200-brm-16/export/out.zar
Jan 27 17:44:36 v20z-brm-03 zfs: Mismatched versions:  File system is 
version 4 on-disk format, which is incompatible with this software 
version 3!


The version of a zfs dataset (i.e. fileystem or zvol) is preserved 
unless modified.  So, I just did zfs send from S10 U6 (zfs version 3) to 
S10 U8 (zfs version 4).  This created a dataset and its snapshot on the 
build 129 system.  Then I checked the version of the dataset and 
snapshot that was created:


# zfs get -r version rpool/new
NAME           PROPERTY  VALUE  SOURCE
rpool/new      version   3      -
rpool/n...@s1  version   3      -

So even though the current version of the zfs filesystem on the target 
system is 4, the dataset created by the receive is 3, because that's the 
version that was sent.  Then I tried sending that dataset back to the U6 
system, and it worked.  So as long as the version of the *filesystem* is 
compatible with the target system, you can do sends from, say, S10U8 to 
S10U6, even though U8 has a higher zfs filesystem version number than U6.


Also, as someone pointed out, the stream version has to match too.  So 
if you use dedup (the -D option), that sets the dedup feature flag in 
the stream header, which makes the stream only receivable on systems 
that support stream dedup.  But if you don't use dedup, the stream can 
still be read on earlier versions of zfs.
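
For instance (the snapshot and file names here are made up):

# zfs send rpool/data@snap1 > /backup/data.plain        (readable by older zfs versions)
# zfs send -D rpool/data@snap1 > /backup/data.dedup     (requires stream-dedup support on the receiver)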


Lori




  

For backup to work the zfs send format needs to depend on the zfs
version only, not the pool version in which it's stored nor the kernel
version doing the sending.



I can send from b130 to b111, zpool 22 to 14. (Though not with the new
dedup send -D option, of course).  I don't have S10 to test.

--
Dan.

  



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] etc on separate pool

2010-01-22 Thread Lori Alt

On 01/22/10 01:55, Alexander wrote:
Is it possible to have /etc on separate zfs pool in OpenSolaris? 
The purpose is to have rw non-persistent main pool and rw persistent /etc...
I've tried to make legacy etcpool/etc file system and mount it in /etc/vfstab... 
Is it possible to extend the boot-archive in such a way that it includes most of the files necessary for mounting /etc from a separate pool? Has anyone tried such configurations?
  
There have been efforts (some ongoing) to enable what you are trying to 
do, but they involve substantial changes to Solaris configuration and 
system administration. 

As Solaris works right now, it is not supported to have /etc in a 
separate dataset, let alone a separate pool.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] image-update failed; ZFS returned an error

2010-01-04 Thread Lori Alt


Also, you might want to pursue this question at 
caiman-disc...@opensolaris.org, since that's where you'll find the 
experts on beadm.


Lori


On 01/04/10 10:46, Cindy Swearingen wrote:

Hi Garen,

Does this system have a mirrored root pool and if so, is
a p0 device included as a root pool device instead of an
s0 device?

Thanks,

Cindy


On 12/22/09 18:56, Garen Parham wrote:

Never seen this before:

# pkg image-update
DOWNLOAD  PKGS   FILESXFER (MB)
Completed  2/2 194/19464.0/64.0

PHASEACTIONS
Removal Phase4/4
Install Phase7/7
Update Phase 219/219
PHASE  ITEMS
Reading Existing Index   8/8
Indexing Packages2/2
pkg: unable to activate osol-129-1

...

# beadm activate osol-129-1
Unable to activate osol-129-1.
ZFS returned an error.

What error is that, though?  Don't see anything in dmesg.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raid-z as a boot pool

2009-12-15 Thread Lori Alt

On 12/15/09 09:26, Luca Morettoni wrote:

As reported here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsbootFAQ

we can't boot from a pool with raidz, any plan to have this feature?
At this time, there is no scheduled availability for raidz boot.  It's 
on the list of possible enhancements, but not yet under development.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] More Dedupe Questions...

2009-11-03 Thread Lori Alt

Kyle McDonald wrote:

Hi Darren,

More below...

Darren J Moffat wrote:

Tristan Ball wrote:

Obviously sending it deduped is more efficient in terms of bandwidth 
and CPU time on the recv side, but it may also be more complicated 
to achieve?


A stream can be deduped even if the on disk format isn't and vice versa.

Is the send dedup'ing more efficient if the filesystem is already 
dedup'd? If both are enabled, do they share anything?


 -Kyle



At this time, no.  But very shortly we hope to tie the two together 
better to make use of the existing checksums and duplication info 
available in the on-disk and in-kernel structures.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread Lori Alt

On 10/01/09 09:25, camps support wrote:

I did zpool import -R /tmp/z rootpool

It only mounted /export and /rootpool only had /boot and /platform.

I need to be able to get /etc and /var?
  
You need to explicitly mount the root file system (its canmount 
property is set to noauto, which means it isn't mounted automatically 
when the pool is imported).


do:

# zfs mount rootpool/ROOT/bename

for the appropriate value of bename.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive should allow to keep received system unmounted

2009-09-28 Thread Lori Alt

On 09/28/09 15:54, Igor Velkov wrote:
zfs receive should allow option to disable immediately mount of received filesystem. 

In case of original filesystem have changed mountpoints, it's hard to make clone fs with send-receive, because received filesystem immediately try to mount to old mountpoint, that locked by sourcr fs. 
In case of different host mountpoint can be locked by unrelated filesystem.


Can anybody recommend a way to avoid mountpoint conflict in that cases?
  

The -u option to zfs receive suppresses all mounts.
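
A minimal sketch (the dataset names are only examples):

# zfs send source/fs@snap | zfs receive -u target/clonefs
# zfs set mountpoint=/mnt/clonefs target/clonefs
# zfs mount target/clonefs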

lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive should allow to keep received system

2009-09-28 Thread Lori Alt

On 09/28/09 16:16, Igor Velkov wrote:

Not so good as I hope.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx zfs 
recv -vuFd xxx/xxx

invalid option 'u'
usage:
receive [-vnF] filesystem|volume|snapshot
receive [-vnF] -d filesystem

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
r...@xxx:~# uname -a
SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890

What's wrong?
  
The option was added in S10 Update 7.  I'm not sure whether the 
patch-level shown above included U7 changes or not.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-27 Thread Lori Alt

Bill Sommerfeld wrote:

On Fri, 2009-09-25 at 14:39 -0600, Lori Alt wrote:
  

The list of datasets in a root pool should look something like this:


...
  
rpool/swap  



I've had success with putting swap into other pools.  I believe others
have, as well.

  
Yes, that's true.  Swap can be in a different pool.  
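
For instance (a sketch only; the pool name and size are assumptions):

# zfs create -V 4G -b 4k tank/swap
# swap -a /dev/zvol/dsk/tank/swap

(add a corresponding swap line to /etc/vfstab to make it permanent)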



  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cloning Systems using zpool

2009-09-25 Thread Lori Alt


The whole pool.  Although you can choose to exclude individual datasets 
from the flar when creating it.



lori


On 09/25/09 12:03, Peter Pickford wrote:

Hi Lori,

Is the u8 flash support for the whole root pool or an individual BE
using live upgrade?

Thanks

Peter

2009/9/24 Lori Alt lori@sun.com:
  

On 09/24/09 15:54, Peter Pickford wrote:

Hi Cindy,

Wouldn't

touch /reconfigure
mv /etc/path_to_inst* /var/tmp/

regenerate all device information?


It might, but it's hard to say whether that would accomplish everything
needed to move a root file system from one system to another.

I just got done modifying flash archive support to work with zfs root on
Solaris 10 Update 8.  For those not familiar with it, flash archives are a
way to clone full boot environments across multiple machines.  The S10
Solaris installer knows how to install one of these flash archives on a
system and then do all the customizations to adapt it to the  local hardware
and local network environment.  I'm pretty sure there's more to the
customization than just a device reconfiguration.

So feel free to hack together your own solution.  It might work for you, but
don't assume that you've come up with a completely general way to clone root
pools.

lori

AFAIK zfs doesn't care about the device names (it scans for them),
so it would only affect things like vfstab.

I did a restore from an E2900 to a V890 and it seemed to work.

Created the pool and zfs recieve.

I would like to be able to have a zfs send of a minimal build and
install it in an abe and activate it.
I tried that in test and it seems to work.

It seems to work, but I'm just wondering what I may have missed.

I saw someone else has done this on the list and was going to write a blog.

It seems like a good way to get a minimal install on a server with
reduced downtime.

Now if I just knew how to run the installer in an ABE without there
being an OS there already, that would be cool too.

Thanks

Peter

2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:


Hi Peter,

I can't provide it because I don't know what it is.

Even if we could provide a list of items, tweaking
the device information if the systems are not identical
would be too difficult.

cs

On 09/24/09 12:04, Peter Pickford wrote:


Hi Cindy,

Could you provide a list of system specific info stored in the root pool?

Thanks

Peter

2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:


Hi Karl,

Manually cloning the root pool is difficult. We have a root pool recovery
procedure that you might be able to apply as long as the
systems are identical. I would not attempt this with LiveUpgrade
and manually tweaking.


http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery

The problem is that the amount of system-specific info stored in the root
pool and any kind of device differences might be insurmountable.

Solaris 10 ZFS/flash archive support is available with patches but not
for the Nevada release.

The ZFS team is working on a split-mirrored-pool feature and that might
be an option for future root pool cloning.

If you're still interested in a manual process, see the steps below
attempted by another community member who moved his root pool to a
larger disk on the same system.

This is probably more than you wanted to know...

Cindy



# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk
/dev/rdsk/c1t1d0s0
for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Set the bootfs property on the root pool BE.
# zpool set bootfs=altrpool/ROOT/zfsBE altrpool
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

On 09/24/09 10:06, Karl Rossing wrote:


I would like to clone the configuration on a v210 with snv_115.

The current pool looks like this:

-bash-3.2$ /usr/sbin/zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

  NAME  STATE READ WRITE CKSUM
  rpool ONLINE   0 0 0
mirror  ONLINE   0 0 0
  c1t0d0s0  ONLINE   0 0 0
  c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

After I run zpool detach rpool c1t1d0s0, how can I remount c1t1d0s0 to
/tmp/a so that I can make the changes I need prior to removing the drive
and
putting it into the new v210.

I suppose I could lucreate -n new_v210, lumount new_v210, edit what I need
to, luumount new_v210, luactivate new_v210, zpool detach rpool c1t1d0s0, and
then luactivate the original boot environment.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http

Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-25 Thread Lori Alt

On 09/25/09 13:35, David Abrahams wrote:

Hi,

Since I don't even have a mirror for my root pool rpool, I'd like to
move as much of my system as possible over to my raidz2 pool, tank.
Can someone tell me which parts need to stay in rpool in order for the
system to work normally?

Thanks.

  

The list of datasets in a root pool should look something like this:

rpool
rpool/ROOT   
rpool/ROOT/snv_124  (or whatever version you're running)
rpool/ROOT/snv_124/var   (you might not have this) 
rpool/ROOT/snv_121  (or whatever other BEs you still have)   
rpool/dump   
rpool/export 
rpool/export/home
rpool/swap 

plus any other datasets you might have added.  Datasets you've added in 
addition to the above (unless they are zone roots under 
rpool/ROOT/be-name ) can be moved to another pool.  Anything you have 
in /export or /export/ home can be moved to another pool.  Everything 
else needs to stay in the root pool.  Yes, there are contents of the 
above datasets that could be moved and  your system would still run 
(you'd have to play with mount points or symlinks to get them included 
in the Solaris name space), but such a configuration would be 
non-standard, unsupported, and probably not upgradeable.
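
For example, relocating /export to another pool might look roughly like
this (the pool and snapshot names are assumptions, and you would verify the
copy before destroying anything):

# zfs snapshot -r rpool/export@move
# zfs send -R rpool/export@move | zfs receive -u -d tank
# zfs destroy -r rpool/export
# zfs set mountpoint=/export tank/export
# zfs mount -a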


lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-25 Thread Lori Alt

I have no idea why that last mail lost its line feeds.   Trying again:




On 09/25/09 13:35, David Abrahams wrote:

Hi,

Since I don't even have a mirror for my root pool rpool, I'd like to
move as much of my system as possible over to my raidz2 pool, tank.
Can someone tell me which parts need to stay in rpool in order for the
system to work normally?

Thanks.

  

The list of datasets in a root pool should look something like this:


rpool   
rpool/ROOT
rpool/ROOT/snv_124  (or whatever version you're running)

rpool/ROOT/snv_124/var   (you might not have this)
rpool/ROOT/snv_121  (or whatever other BEs you still have)
rpool/dump  
rpool/export
rpool/export/home   
rpool/swap



plus any other datasets you might have added.  Datasets you've added in 
addition to the above (unless they are zone roots under 
rpool/ROOT/be-name ) can be moved to another pool.  Anything you have 
in /export or /export/ home can be moved to another pool.  Everything 
else needs to stay in the root pool.  Yes, there are contents of the 
above datasets that could be moved and  your system would still run 
(you'd have to play with mount points or symlinks to get them included 
in the Solaris name space), but such a configuration would be 
non-standard, unsupported, and probably not upgradeable.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cloning Systems using zpool

2009-09-24 Thread Lori Alt

On 09/24/09 15:54, Peter Pickford wrote:

Hi Cindy,

Wouldn't

touch /reconfigure
mv /etc/path_to_inst* /var/tmp/

regenerate all device information?
  
It might, but it's hard to say whether that would accomplish everything 
needed to move a root file system from one system to another.


I just got done modifying flash archive support to work with zfs root on 
Solaris 10 Update 8.  For those not familiar with it, flash archives 
are a way to clone full boot environments across multiple machines.  The 
S10 Solaris installer knows how to install one of these flash archives 
on a system and then do all the customizations to adapt it to the  local 
hardware and local network environment.  I'm pretty sure there's more to 
the customization than just a device reconfiguration. 

So feel free to hack together your own solution.  It might work for you, 
but don't assume that you've come up with a completely general way to 
clone root pools.


lori


AFAIK zfs doesn't care about the device names (it scans for them),
so it would only affect things like vfstab.

I did a restore from an E2900 to a V890 and it seemed to work.

Created the pool and zfs recieve.

I would like to be able to have a zfs send of a minimal build and
install it in an abe and activate it.
I tried that in test and it seems to work.

It seems to work, but I'm just wondering what I may have missed.

I saw someone else has done this on the list and was going to write a blog.

It seems like a good way to get a minimal install on a server with
reduced downtime.

Now if I just knew how to run the installer in an ABE without there
being an OS there already, that would be cool too.

Thanks

Peter

2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:
  

Hi Peter,

I can't provide it because I don't know what it is.

Even if we could provide a list of items, tweaking
the device information if the systems are not identical
would be too difficult.

cs

On 09/24/09 12:04, Peter Pickford wrote:


Hi Cindy,

Could you provide a list of system specific info stored in the root pool?

Thanks

Peter

2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:
  

Hi Karl,

Manually cloning the root pool is difficult. We have a root pool recovery
procedure that you might be able to apply as long as the
systems are identical. I would not attempt this with LiveUpgrade
and manually tweaking.


http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery

The problem is that the amount of system-specific info stored in the root
pool and any kind of device differences might be insurmountable.

Solaris 10 ZFS/flash archive support is available with patches but not
for the Nevada release.

The ZFS team is working on a split-mirrored-pool feature and that might
be an option for future root pool cloning.

If you're still interested in a manual process, see the steps below
attempted by another community member who moved his root pool to a
larger disk on the same system.

This is probably more than you wanted to know...

Cindy



# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk
/dev/rdsk/c1t1d0s0
for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Set the bootfs property on the root pool BE.
# zpool set bootfs=altrpool/ROOT/zfsBE altrpool
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

On 09/24/09 10:06, Karl Rossing wrote:


I would like to clone the configuration on a v210 with snv_115.

The current pool looks like this:

-bash-3.2$ /usr/sbin/zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

  NAME  STATE READ WRITE CKSUM
  rpool ONLINE   0 0 0
mirror  ONLINE   0 0 0
  c1t0d0s0  ONLINE   0 0 0
  c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

After I run zpool detach rpool c1t1d0s0, how can I remount c1t1d0s0 to
/tmp/a so that I can make the changes I need prior to removing the drive
and
putting it into the new v210.

I suppose I could lucreate -n new_v210, lumount new_v210, edit what I need
to, luumount new_v210, luactivate new_v210, zpool detach rpool c1t1d0s0, and
then luactivate the original boot environment.
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  



Re: [zfs-discuss] zfs send older version?

2009-09-16 Thread Lori Alt

Erik Trimble wrote:

Lori Alt wrote:

On 09/15/09 06:27, Luca Morettoni wrote:

On 09/15/09 02:07 PM, Mark J Musante wrote:

zfs create -o version=N pool/filesystem


Is it possible to implement, in a future version of ZFS, a released 
send command, like:


# zfs send -r2 snap ...

to send a specific release (version 2 in the example) of the metadata?

I just created a RFE for this problem in general:  6882134.  I'm not 
sure the above suggestion is the best way to solve the problem, but 
we do need some kind of support for inter-version send stream 
readability.


Lori



I haven't seen this specific problem, but it occurs to me thus:

For the reverse of the original problem, where (say) I back up a 'zfs 
send' stream to tape, then later on, after upgrading my system, I want 
to get that stream back.


Does 'zfs receive' support reading a version X stream and dumping it 
into a version X+N zfs filesystem?


If not, frankly, that's a higher priority than the reverse.



I'm afraid that my answers yesterday, and the RFE I filed, have just 
confused matters.  I've learned some more about send stream 
inter-version compatibility and found that the problem is not as 
difficult as I thought:


There are two aspects to the stream versioning issue:  the version of 
the objects stored in the stream and the version of the stream format 
itself.  I was thinking that the object versioning was the harder 
problem, but I discussed this with Matt Ahrens and he pointed out that 
the existing zfs upgrade capability already copes with reading versions 
of data object formats that are earlier than the version of the pool 
into which the objects are being read.  So as long as you're doing a 
receive into a pool that has the same or greater version than the one 
that was used to generate the send stream, the objects should be 
interpreted correctly.


The second version issue, which is the stream format, is where we 
haven't necessarily promised compatibility in the past, but so far, the 
stream format has not changed.  Changes are coming however for dedup.  
We're looking at making those changes in such a way that earlier stream 
formats will still be receivable into future releases. 

So we're considering a refinement of the current policy of not 
guaranteeing future readability of streams generated by earlier version 
of ZFS.  The time may have come where we know enough about how send 
streams fit into overall ZFS versioning that we can make them more 
useful by making a stronger assurance about future readability.  So stay 
tuned.


As for being able to read streams of a later format on an earlier 
version of ZFS, I don't think that will ever be supported.  In that 
case, we really would have to somehow convert the format of the objects 
stored within the send stream and we have no plans to implement anything 
like that. 

As for being able to selectively receive only parts of a send stream, 
that's another subject, unrelated to versioning.  As someone else 
pointed out on this thread, that's one way in which zfs send/recv is 
really not a 'backup' tool in the usual sense.  If you need that kind of 
functionality, zfs send/recv will not provide it for you. At some point, 
we could perhaps implement a way to select the datasets from within a 
send stream to receive, but given the way that objects are recorded in 
send streams, it would be very difficult to restore individual files.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send older version?

2009-09-16 Thread Lori Alt

On 09/16/09 11:56, Erik Trimble wrote:

Lori Alt wrote:

On 09/16/09 10:48, Marty Scholes wrote:

Lori Alt wrote:
 

As for being able to read streams of a later format
on an earlier version of ZFS, I don't think that will ever be
supported.  In that case, we really would have to somehow convert the
format of the objects stored within the send stream and we have no 
plans to
implement anything like that. 


If that is true, then it at least makes sense to include a zfs 
downgrade and zpool downgrade option, does it not?
  
Not necessarily.  The existence of an upgrade function doesn't 
automatically mean that downgrade should be provided.  What would a 
downgrade function do with, say, properties that weren't even defined 
in the earlier version?
Some kind of downgrade could theoretically be done (with appropriate 
messages about capabilities and fields that are not understood by the 
earlier version of the code), but I don't think its value would be 
worth the effort, at least not in comparison to other work that  
needs to be done.


Lori


In my earlier posting, I was more hypothesizing something that I've 
heard folks talking about.


Personally, I don't dump zfs streams to tape.  'zfs send/receive' is 
not (nor do I expect it to be) a dump/receive for zfs.  Backups and 
archives are to be done by appropriate backup software.


That said, it's becoming more and more common to design in a large 
disk backup device (Thor/Thumper, in particular) as either a staging 
area for backups, or as the incremental repository.  Think of it as a 
variation on VTL.  A typical config for this is with the client 
machines doing 'zfs send' over to the Thumper, and the Backup Software 
running solely on the Thumper to do archival stuff.  Doing it this way 
also means it's simple to replicate the Thumper's data elsewhere, so 
one can have redundant on-line backups, from which it is trivial to 
get back stuff quickly.   Naturally, the bane of this kind of setup is 
zfs version mismatch between the Thumper and the client(s).  I can 
easily imagine situations where the Thumper has a later version of zfs 
than a client, as well as one where the Thumper has an earlier version 
than a client (e.g. Thumper runs 2008.11, client A runs 2009.05, 
client B runs Solaris 10 Update 6).  Some method of easily dealing 
with this problem would be really good.


Personally, I would go with not changing 'zfs receive', and modifying 
'zfs send' to be able to specify a zfs filesystem version during 
stream creation. As per Lori's original RFE CR.
But that's just moving the 'downgrade' problem from one machine to 
another.  I'm going to modify or delete the RFE, because what I thought 
made sense before, doesn't.


lori






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Lori Alt

On 09/15/09 06:27, Luca Morettoni wrote:

On 09/15/09 02:07 PM, Mark J Musante wrote:

zfs create -o version=N pool/filesystem


Is it possible to implement, in a future version of ZFS, a released 
send command, like:


# zfs send -r2 snap ...

to send a specific release (version 2 in the example) of the metadata?

I just created a RFE for this problem in general:  6882134.  I'm not 
sure the above suggestion is the best way to solve the problem, but we 
do need some kind of support for inter-version send stream readability.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] check a zfs rcvd file

2009-09-04 Thread Lori Alt

On 09/04/09 09:41, dick hoogendijk wrote:

Lori Alt wrote:
The -n option does some verification.  It verifies that the record 
headers distributed throughout the stream are syntactically valid.  
Since each record header contains a length field which allows the 
next header to be found, one bad header will cause the processing of 
the stream to abort.  But it doesn't verify the content of the data 
associated with each record.


So, storing the stream in a zfs received filesystem is the better 
option. Alas, it also is the most difficult one. Storing to a file 
with zfs send -Rv is easy. The result is just a file and if you 
reboot the system all is OK. However, if I zfs receive -Fdu into a 
zfs filesystem I'm in trouble when I reboot the system. I get 
confusion on mountpoints! Let me explain:


Some time ago I backed up my rpool and my /export ; /export/home to 
/backup/snaps (with  zfs receive -Fdu). All's OK because the newly 
created zfs FS's stay unmounted 'till the next reboot(!). When I 
rebooted my system (due to a kernel upgrade) the system would not 
boot, because it had mounted the zfs FS backup/snaps/export on 
/export and backup/snaps/export/home on /export/home. The system 
itself had those FS's too, of course. So, there was a mix up. It would 
be nice if the backup FS's would not be mounted (canmount=noauto), but 
I cannot give this option when I create the zfs send | receive, can I? 
And giving this option later on is very difficult, because canmount 
is NOT recursive! And I don't want to set it manualy on all those 
backup up FS's.


I wonder how other people overcome this mountpoint issue.

The -u option to zfs recv (which was just added to support flash archive 
installs, but it's useful for other reasons too) suppresses all mounts 
of the received file systems.  So you can mount them yourself afterward 
in whatever order is appropriate, or do a 'zfs mount -a'.
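
For instance (the paths and dataset names are only placeholders):

# zfs receive -u backup/snaps/export < /backup/export.zfs
# zfs mount backup/snaps/export            (later, in whatever order suits you)
# zfs mount backup/snaps/export/home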


lori



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] check a zfs rcvd file

2009-09-04 Thread Lori Alt

On 09/04/09 10:17, dick hoogendijk wrote:

Lori Alt wrote:
The -u option to zfs recv (which was just added to support flash 
archive installs, but it's useful for other reasons too) suppresses 
all mounts of the received file systems.  So you can mount them 
yourself afterward in whatever order is appropriate, or do a 'zfs 
mount -a'.
You misunderstood my problem. It is very convenient that the 
filesystems are not mounted. I only wish they could stay that way! 
Alas, they ARE mounted (even if I don't want them to) when I *reboot* 
the system. And THAT's when things get ugly. I then have different zfs 
filesystems using the same mountpoints! The backed up ones have the 
same mountpoints as their origin :-/  - The only way to stop it is to 
*export* the backup zpool OR to change *manually* the zfs prop 
canmount=noauto in all backed up snapshots/filesystems.


As I understand I cannot give this canmount=noauto to the zfs 
receive command.

# zfs send -Rv rp...@0909 | zfs receive -Fdu backup/snaps
There is a RFE to allow zfs recv to assign properties, but I'm not sure 
whether it would help in your case.  I would have thought that 
canmount=noauto would have already been set on the sending side, 
however.  In that case, the property should be preserved when the stream 
is received.  But if for some reason you're not setting that property 
on the sending side, but want it set on the receiving side, you might 
have to write a script to set the properties for all those datasets 
after they are received.
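
Something along these lines might do it (an untested sketch; adjust the
pool/dataset name to yours):

# zfs list -H -t filesystem -o name -r backup/snaps | \
      while read fs; do zfs set canmount=noauto "$fs"; done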


lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-03 Thread Lori Alt


I agree and Cindy Swearingen and I are talking to marketing to get this 
fixed.  Thanks to all for bringing this to our attention.


lori

On 09/03/09 00:55, Ross wrote:

I agree, mailing that to all Sun customers is something I think is likely to 
turn around and bite you.

A lot of people are now going to use that to archive their data, and some of 
them are not going to be happy when months or years down the line they try to 
restore it and find that the 'zfs receive' just fails, with no hope of 
recovering their data.

And they're not going to blame the device that's corrupted a bit, they're going to come 
here blaming Sun and ZFS.  Archiving to a format with such a high risk of losing 
everything sounds like incredibly bad advice.  Especially when the article actually uses 
the term create archives for long-term storage.

Long term storage doesn't mean storage that could break with the next upgrade, or 
storage that will fail if even a single bit has been corrupted anywhere in the process. 
 Sun really need to decide what they are doing with zfs send/receive because we're getting very 
mixed messages now.
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-03 Thread Lori Alt


yes to all the comments below.  Those are all mitigating factors.  But I 
also agree with Ross and Mike and others that we should be more clear 
about when send/recv is appropriate and when it's not the best choice.  
We're looking into it.


Lori


On 09/03/09 10:06, Richard Elling wrote:

On Sep 2, 2009, at 11:55 PM, Ross wrote:

I agree, mailing that to all Sun customers is something I think is 
likely to turn around and bite you.


Some points to help clarify the situation:

1. There is no other way to archive a dataset than using a snapshot

2. You cannot build a zpool on a tape

3. The stability of the protocol is only a problem if it becomes impossible
   to run some version of OpenSolaris on hardware that is needed to
   receive the snapshot. Given the ubiquity of virtualization and the x86
   legacy, I don't think this is a problem for at least the expected lifetime
   of the storage medium.

A lot of people are now going to use that to archive their data, and 
some of them are not going to be happy when months or years down the 
line they try to restore it and find that the 'zfs receive' just 
fails, with no hope of recovering their data.


And they're not going to blame the device that's corrupted a bit, 
they're going to come here blaming Sun and ZFS.  Archiving to a 
format with such a high risk of losing everything sounds like 
incredibly bad advice.  Especially when the article actually uses the 
term create archives for long-term storage.


Long term storage doesn't mean storage that could break with the 
next upgrade, or storage that will fail if even a single bit has 
been corrupted anywhere in the process.  Sun really need to decide 
what they are doing with zfs send/receive because we're getting very 
mixed messages now.


Data integrity is a problem for all archiving systems, not just ZFS.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] check a zfs rcvd file

2009-09-03 Thread Lori Alt

On 09/03/09 14:21, dick hoogendijk wrote:

On Wed, 2 Sep 2009 13:06:35 -0500 (CDT)
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
  

Nothing prevents validating the self-verifying archive file via this
zfs recv -vn  technique.



Does this verify the ZFS format/integrity of the stream?
Or is the only way to do that to zfs recv the stream into ZFS?

  
The -n option does some verification.  It verifies that the record 
headers distributed throughout the stream are syntactically valid.  
Since each record header contains a length field which allows the next 
header to be found, one bad header will cause the processing of the 
stream to abort.  But it doesn't verify the content of the data 
associated with each record. 

We might want to implement an option to enhance zfs recv -n to calculate 
a checksum of each dataset's records as it's reading the stream and then 
verify the checksum when the dataset's END record is seen.   I'm 
looking at integrating a utility which allows the metadata in a stream 
to be dumped for debugging purposes (zstreamdump).   It also verifies 
that the data in the stream agrees with the checksum.
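
In the meantime, the header-level check looks something like this (the file
and pool names are just placeholders):

# zfs send -Rv rpool@0903 > /backup/rpool-0903.zfs
# zfs receive -vn tank/verify < /backup/rpool-0903.zfs    (parses the stream, creates nothing)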


lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-08-31 Thread Lori Alt

On 08/31/09 08:30, Henrik Bjornstrom - Sun Microsystems wrote:

Hi !

Has anyone given an answer to this that I have missed? I have a 
customer that has the same question and I want to give him a correct 
answer.


/Henrik

Ketan wrote:
I created a snapshot and subsequent clone of a zfs volume. But now i 
'm not able to remove the snapshot it gives me following error

zfs destroy newpool/ldom2/zdi...@bootimg
cannot destroy 'newpool/ldom2/zdi...@bootimg': snapshot has dependent 
clones

use '-R' to destroy the following datasets:
newpool/ldom2/zdisk0

and if I promote the clone, then the original volume becomes the 
dependent clone. Is there a way to destroy just the snapshot, leaving 
the clone and original volume intact?

no. As long as a clone exists, its origin snapshot must exist as well.

lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-31 Thread Lori Alt

On 08/29/09 05:41, Robert Milkowski wrote:

casper@sun.com wrote:

Randall Badilla wrote:
   

Hi all:
First: is it possible to modify the boot zpool rpool after OS 
installation...? I installed the OS on the whole 72GB hard disk; it 
is mirrored. If I want to decrease the rpool, for example resize it 
to a 36GB slice, can it be done?
As far as I remember, on UFS/SVM I was able to resize the boot OS disk by 
detaching the mirror (transforming it to a one-way mirror), adjusting the 
partitions, then attaching the mirror. After the sync, boot from the resized 
mirror; redo the resize on the remaining mirror, attach the 
mirror and reboot.

Downtime reduced to reboot time.

  
Yes, you can follow same procedure with zfs (details will differ of 
course).



You can actually change the partitions while you're using the slice.
But after changing the size of both slices you may need to reboot

I've used it also when going from ufs to zfs for boot.

  


But the OP wants to decrease a slice size which if it would work at 
all could lead to loss of data.
You can't decrease the size of a root pool this way (or any way, right 
now).  This is just a specific case of bug 4852783 (reduce pool 
capacity).  A fix for this is underway, but is not yet available.


Lori






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-08-19 Thread Lori Alt



On 08/19/09 14:57, Kris Kasner wrote:


Hi.

First - Thanks very much for releasing this as a patch and not making 
us wait until U8. It's very much appreciated. Things we put on hold 
can start moving again which is a good thing.




Is there further documentation on this yet? 


I just asked Cindy Swearingen, the tech writer for ZFS, about this and 
sadly, it appears that there isn't any documentation for this available 
outside of Sun yet.  The documentation for using flash archives to set 
up systems with zfs roots won't be available until the release of U8.  I 
will give you the very short summary and Cindy is going to look into 
putting some more information out on the zfs-boot page of the 
opensolaris website (which doesn't really make sense since this isnt' 
supported on OpenSolaris, but whatever).


I haven't been able to find it, and it looks like most of the 
interesting stuff is in 'pfinstall' so I can't just go look and see 
how it works.. I have a couple of questions on how zfsroot interacts 
with flash install - I'm happy to go figure it out if you can point me 
at the docs.. Failing that, hopefully someone can help me answer the 
below questions.


On our zfs root systems, we like to keep /var as a separate dataset on 
servers so we can monitor its usage. We did that before with the line:

bootenv installbe bename zfsroot dataset /var
Bottom line: whatever the configuration is of the system that you use as 
the basis for generating the flash archive, that's the configuration 
that will exist on systems installed using the flar.  If the original 
system has a separate /var, the systems created from it will have a 
separate  /var.




additional datasets I create in a finish script.

Can a zfsroot flash installed system have a separate dataset for /var?


It seems a little counterintuitive to me to say
partitioning explicit
when everything is going into a zfs pool with a single dataset - are 
there other profile tokens that make this?
Yes, I  know, but that's the way flash archive support works (even 
before zfs was added to it).





One last thing - My flash installs are working, but I'm ending up with 
/tmp as a non tmpfs filesystem (no vfstab entry for it..) I can easily 
fix this in a postinstall/finish script, but if there is a reason it's 
not happening that can be fixed in my JS profile, that would be better.
I don't know why you wouldn't have a separate /tmp.  If you're sure that 
the system from which the flar is generated has a /tmp  file system, but 
the system that results from the install of the flash archive does not 
have a separate  /tmp, there might be a bug.  Get back to me on it  
later if you confirm this.


So here's the quick summary of how to do flash archive installs of 
systems with zfs root file systems:


1.  Set up the system that you want to replicate or re-create.  I.e. 
install a system with a zfs root file system and add whatever 
customizations you want to it.  By default, ALL datasets in the root 
pool (except for swap and dump) are included in the flash archive, not 
just those that are part of the Solaris name space.


2.  Use flarcreate to create the flash archive on the system.  There is 
a new -D dataset option that can be used (repeatedly, if you wish) on 
the command to exclude datasets from the archive.


So an example command might be:

  # flarcreate -n ourU8 -D rpool/somedata /some-remote-directory/ourU8.flar

3.  Set up the system you want to install for jumpstart install (however 
you do that).  Use a profile something like this:


install_type flash_install
archive_location nfs schubert:/export/home/lalt/U8.flar
partitioning explicit
pool rpool auto auto auto mirror c0t1d0s0 c0t0d0s0

I think that covers the basics.

Lori




Thanks again.

Kris Kasner
Qualcomm Inc.

Jul 9 at 16:41, Lori Alt lori@sun.com wrote:


On 07/09/09 17:25, Mark Michael wrote:
Thanks for the info.  Hope that the pfinstall changes to support zfs 
root flash jumpstarts can be extended to support luupgrade -f at 
some point soon.


BTW, where can I find an example profile?  do I just substitute in the

  install_type flash_install
  archive_location ...

for

   install_type initial_install

??


Here's a sample:

install_type flash_install
archive_location nfs schubert:/export/home/lalt/mirror.flar
partitioning explicit
pool rpool auto auto auto mirror c0t1d0s0 c0t0d0s0



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs incremental send stream size

2009-08-06 Thread Lori Alt

On 08/06/09 12:19, Robert Lawhead wrote:

I'm puzzled by the size reported for incremental zfs send|zfs receive.  I'd expect the 
stream to be roughly the same size as the used blocks reported by zfs list.  
Can anyone explain why the stream size reported is so much larger that the used data in 
the source snapshots?  Thanks.
  
Part of the reason is that the send stream contains a lot of records for 
free blocks and free objects.  I'm working on a fix to the send stream 
format that will eliminate some of that.


Lori

% zfs list -r -t snapshot mail/00 | tail -4
mail/0...@.nightly  1.98M  -  34.1G  - 
mail/0...@0400.hourly   1.67M  -  34.1G  - 
mail/0...@0800.hourly   1.43M  -  34.1G  - 
mail/0...@1000.hourly   0  -  34.1G  - 


# zfs send -i mail/0...@.nightly mail/0...@0400.hourly | zfs receive -v -F 
mailtest/00
receiving incremental stream of mail/0...@0400.hourly into 
mailtest/0...@0400.hourly
received 17.9MB stream in 4 seconds (4.49MB/sec)
  
# zfs send -i mail/0...@0400.hourly mail/0...@0800.hourly | zfs receive -v -F mailtest/00

receiving incremental stream of mail/0...@0800.hourly into 
mailtest/0...@0800.hourly
received 15.1MB stream in 1 seconds (15.1MB/sec)
  
# zfs send -i mail/0...@0800.hourly mail/0...@1000.hourly | zfs receive -v -F mailtest/00

receiving incremental stream of mail/0...@1000.hourly into 
mailtest/0...@1000.hourly
received 13.7MB stream in 2 seconds (6.86MB/sec)
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Soon out of space (after upgrade to 2009.06)

2009-07-24 Thread Lori Alt
In general, questions about beadm and related tools should be sent or at 
least cross-posted to install-disc...@opensolaris.org. 


Lori


On 07/24/09 07:04, Jean-Noël Mattern wrote:

Axelle,

You can safely run beadm destroy opensolaris if everything's 
allright with your new opensolaris-1 boot env.


You will get back your space (something around 7.18 GB).

There's something strange with the mountpoint of rpool, which should be 
/rpool and not /a/rpool; maybe you'll have to fix this with zfs set 
mountpoint=/rpool rpool. Before doing that, be sure to back up 
what's in /rpool, and restore it after the mountpoint has been changed!


Jnm.

--

Axelle Apvrille a écrit :

Hi,
I have upgraded from 2008.11 to 2009.06. The upgrade process 
created a new boot environment (named opensolaris-1 in my case), but 
I am now getting out of space in my ZFS pool. So, can I safely erase 
the old boot environment, and if so will that get me back the disk 
space I need ?


BE            Active Mountpoint Space Policy Created
--            ------ ---------- ----- ------ -------
opensolaris   R      -          7.57G static 2009-01-03 13:18
opensolaris-1 N      /          3.20G static 2009-07-20 22:38

As you see there, my new BE has 3G only, I probably need more.

zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
rpool 41.8G  6.24G  1.42G  /a/rpool
rpool/ROOT10.4G  6.24G18K  legacy
rpool/ROOT/opensolaris7.18G  6.24G  6.75G  /
rpool/ROOT/opensolaris-1  3.20G  6.24G  7.36G  /
rpool/dump 895M  6.24G   895M  -
rpool/export  28.2G  6.24G19K  /export
rpool/export/home 28.2G  6.24G   654M  /export/home
rpool/export/home/axelle  27.5G  6.24G  27.4G  /export/home/axelle
rpool/swap 895M  7.03G  88.3M  -

Of course, I don't want to erase any critical file or data. However, 
I do not wish to boot 2008.11 any longer as 2009.06 is working.


Thanks
Axelle
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs rpool boot failed

2009-07-13 Thread Lori Alt

On 07/11/09 05:15, iman habibi wrote:

Dear Admins
I had a Solaris 10u8 installation based on a ZFS (rpool) filesystem on two 
mirrored scsi disks in a Sun Fire V880.
But after some months, when I rebooted the server with the reboot command, it 
didn't boot from the disks, and returned "can't boot from boot media".

How can I recover some data from my previous installation?
I also ran
boot disk0 (failed)
boot disk1 (failed)
I also ran probe-scsi-all, then boot from each disk; it returns failed. Why?
Thanks for any guide.
Regards
  
Someone with more knowledge of the boot proms might have to help you 
with the boot failures, but if you're looking for a way to recover data 
from the root pools, you could try booting from your installation medium 
(whether that's a local CD/DVD or a network installation image), 
escaping out of the install, and trying to import the pool.


Lori



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-09 Thread Lori Alt
Flash archive on zfs means archiving an entire root pool (minus any 
explicitly excluded datasets), not an individual BE.  These types of 
flash archives can only be installed using Jumpstart and are intended to 
install an entire system, not an individual BE.


Flash archives of a single BE could perhaps be implemented in the future.

Lori

On 07/09/09 09:56, Mark Michael wrote:

I've been hoping to get my hands on patches that permit Sol10U7 to do a 
luupgrade -f of a ZFS root-based ABE since Solaris 10 10/08.

Unfortunately, after applying patchids 119534-15 and 124630-26 to both the PBE and the 
miniroot of the OS image, I'm still getting the same ERROR: Field 2 - Invalid disk 
name (insert_abe_name_here).

The flarcreate command I used was simply 


  #  flarcreate -n root_var_no_snap /export/fssnap/flars/root_var

which created a flar file that was about 4 to 5 times the size of a UFS-based 
flar file.

I then used the command

  # luupgrade -f -n be_d70 -s /export/fssnap/os_image \
   -a /export/fssnap/flars/root_var

which then failed with the pfinstall diagnostic given above.

What am I still doing wrong?

ttfn
mm
mark.o.mich...@boeing.com
mark.mich...@es.bss.boeing.com
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-09 Thread Lori Alt

On 07/09/09 17:25, Mark Michael wrote:

Thanks for the info.  Hope that the pfinstall changes to support zfs root flash 
jumpstarts can be extended to support luupgrade -f at some point soon.

BTW, where can I find an example profile?  do I just substitute in the 


  install_type flash_install
  archive_location ...

for

   install_type initial_install

??
  

Here's a sample:

install_type flash_install
archive_location nfs schubert:/export/home/lalt/mirror.flar
partitioning explicit
pool rpool auto auto auto mirror c0t1d0s0 c0t0d0s0



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Lori Alt

On 07/08/09 13:43, Bob Friesenhahn wrote:

On Wed, 8 Jul 2009, Jerry K wrote:

It has been a while since this has been discussed, and I am hoping 
that you can provide an update, or time estimate.  As we are several 
months into Update 7, is there any chance of an Update 7 patch, or 
are we still waiting for Update 8?


I saw that a Solaris 10 patch for supporting Flash archives on ZFS 
came out about a week ago.


Correct.  These are the patches:

sparc:
119534-15 : fixes to the /usr/sbin/flarcreate and /usr/sbin/flar command
124630-26: updates to the install software

x86:
119535-15 : fixes to the /usr/sbin/flarcreate and /usr/sbin/flar command
124631-27: updates to the install software


Lori
---BeginMessage---
I received the following message about the patches for zfs flash archive 
support:



The submitted patch has been received as release ready by raid.central and
will be officially released to the Enterprise Services patch databases within
24 - 48 hours (except on weekends or holidays) or submitter will be further
notified of any issues that prevent SunService from releasing it.

Contact patch-mana...@sun.com if there are any further questions.

The patches are:

sparc:
119534-15 : fixes to the /usr/sbin/flarcreate and /usr/sbin/flar command
124630-26: updates to the install software

x86:
119535-15 : fixes to the /usr/sbin/flarcreate and /usr/sbin/flar command
124631-27: updates to the install software

A couple weeks ago, I sent out a mail about the content of these patches 
and about how they should be applied.  I have included that message 
again, below.


Lori

---


I have two pieces of information to convey with this mail.  The first is 
a summary of how flash archives work with zfs as a result of this patch.  
During the discussions of what should be implemented, there was some 
disagreement about what was needed.  I want to summarize what finally got 
implemented, just so there is no confusion.  Second, I want to bring 
everyone up to date on the state of the patch for zfs flash archive support.

Overview of ZFS Flash Archive Functionality
-
With this new functionality, it is possible to

- generate flash archives that can be used to install systems to boot off
of ZFS root pools
- perform Jumpstart initial installations of entire systems using these
zfs-type flash archives
- the flash archive backs up an entire root pool, not individual
boot environments.  Individual datasets within the pool can be
excluded using a new -D option to flarcreate and flar.

Here are the limitations:

- Jumpstart installations only.  No interactive install support for
flash archive installs of zfs-rooted systems.  No installation of
individual boot environments using Live Upgrade.
- Full initial install only.  No differential flash installs.
- No hybrid ufs/zfs archives.  Existing (ufs-type) flash archives
can still only be used to install ufs roots.  The new zfs-type
flash archive can only be used to install zfs-rooted systems.
- Although the entire root pool (minus any explicitly excluded
datasets) is archived and installed, only the BE booted at
the time of the flarcreate will be usable after the flash archive
is installed.  (except for pools archived with the -R rootdir
option, which can be used to archive a root pool other than the
one currently booted).
- The options to flarcreate and flar that include and exclude
individual files in a flash archive are not supported with zfs-type
flash archives.  Only entire datasets may be excluded from a
zfs flar (see the sketch after this list).
- The new pool created by the flash archive install will have the
same name as the pool that was used to generate the flash archive.
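
To make the dataset exclusion concrete, here is a minimal sketch (the
archive name, NFS path, and excluded dataset are all illustrative, and
the exact flarcreate syntax should be checked against the patched man
page):

# flarcreate -n zfsBE -D rpool/export/scratch \
      /net/server/export/flars/zfsBE.flar

The resulting zfs-type archive would then be referenced from a Jumpstart
profile like the flash_install sample shown earlier in this archive
(install_type flash_install, archive_location nfs ...).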



Status of the ZFS Flash Archive Patches
---------------------------------------
I have received test versions of the patches for zfs flash archive 
support (CR 6690473).

Those patches are:

sparc:
119534-15 : fixes to the /usr/sbin/flarcreate and /usr/sbin/flar command
124630-26: updates to the install software

x86:
119535-15 : fixes to the /usr/sbin/flarcreate and /usr/sbin/flar command
124631-27: updates to the install software

The patches are applied as follows:

The flarcreate/flar patch  (119534-15/119535-15) must be applied to the 
system

where the flash archive is generated.
The install software patch (124630-26/124631-27) must be applied to the
install medium (probably a netinstall image), since that is where the 
install

software resides.  A system being installed with a flash archive image will
have to be booted from a patched image so that the install software can
recognize the zfs-type flash archive and handle it correctly.
I verified these patches on both sparc and x86 platforms, and as applied to
both Update 6 and Update 7 systems and images.  On Update 6, it is also
necessary to apply the kernel update (KU) patch to the netinstall image
in order for the install to work.  The KU patch is

  

Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Lori Alt

On 07/08/09 15:57, Carl Brewer wrote:

Thankyou!  Am I right in thinking that rpool snapshots will include things like 
swap?  If so, is there some way to exclude them?  Much like rsync has --exclude?
  
By default, the zfs send -R will send all the snapshots, including 
swap and dump.  But you can do the following after taking the snapshot:


# zfs destroy rpool/d...@mmddhh
# zfs destroy rpool/s...@mmddhh

and then do the zfs send -R .  You'll get messages about the missing 
snapshots, but they can be ignored. 
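
Putting that together, a minimal sketch of the whole sequence (the
snapshot name and the pool on the removable drive are illustrative; the
receive flags mirror the -Fud pattern used elsewhere on this list):

# zfs snapshot -r rpool@20090708
# zfs destroy rpool/dump@20090708
# zfs destroy rpool/swap@20090708
# zfs send -R rpool@20090708 | zfs receive -Fud usbpool/rpool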

In order to re-create a bootable pool from your backup, there are 
additional steps required.  A full description of a procedure similar to 
what you are attempting can be found here:


http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery


Lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] possible to override/inherit mountpoint on received snapshots?

2009-07-07 Thread Lori Alt


To elaborate, the -u option to zfs receive suppresses all mounts.  The 
datasets you extract will STILL have mountpoints that might not work on 
the local system, but at least you can unpack the entire hierarchy of 
datasets and then modify mountpoints as needed to arrange to make the 
file systems mountable.


It's not a complete solution to your problem, but it should let you 
construct one.


The -u option is not documented partly because (1) it was added for the 
specific purpose of enabling flash archive installation support and (2) 
it's not a complete solution to the problem of moving dataset 
hierarchies from one system to another. However, it's turning out to be 
useful enough that we should probably document it.
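
As a sketch, using the dataset names from this thread (the new
mountpoint is only an example):

# zfs send -R ims_pool_mirror/backup@0702 | zfs receive -u -F -d ims_mirror_new
# zfs set mountpoint=/export/backup-new ims_mirror_new/backup

With -u, nothing is mounted during the receive, so mountpoints can be
adjusted before the first mount attempt.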


Lori



On 07/07/09 15:05, Richard Elling wrote:

You need the zfs receive -u option.
-- richard

Andrew Daugherity wrote:
I attempted to migrate data from one zfs pool to another, larger one 
(both pools are currently mounted on the same host) using the 
snapshot send/receive functionality.  Of course, I could use 
something like rsync/cpio/tar instead, but I'd have to first manually 
create all the target FSes, and send/receive seemed like a better 
option.


Unfortunately, this trips up on filesystems that have mountpoints 
explicitly set, as after receiving the snapshot, it attempts to mount 
the target FS at the same mountpoint as the source FS, which 
obviously fails.  The zfs receive command aborts here and doesn't 
try to receive any other FSes.


The source pool:

and...@imsfs-mirror:~$ zfs list -t filesystem -r ims_pool_mirror
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
ims_pool_mirror                     3.78T   498G  46.0K  /ims_pool_mirror
ims_pool_mirror/backup              1.99T   498G   749M  /export/backup
ims_pool_mirror/backup/bushlibrary  54.7G   498G  54.7G  /export/backup/bushlibrary
ims_pool_mirror/backup/bvcnet        429M   498G   429M  /export/backup/bvcnet
ims_pool_mirror/backup/isc           129G   498G   129G  /export/backup/isc
[several more FSes under ims_pool_mirror/backup omitted for brevity]
ims_pool_mirror/ims                 1.79T   498G  1.79T  /export/ims
ims_pool_mirror/ims/webroot         62.4M   498G  60.3M  /export/ims/webroot



I took a recursive snapshot (@0702) and attempted to copy it to the 
new pool (named ims_mirror_new for now) and ran into the above 
problem.  I hit the same issue when dropping down a level to 
ims_pool_mirror/backup, but not when going down a level below that 
(to e.g. ims_pool_mirror/backup/bushlibrary).  In a certain way this 
make sense, since it's backup that has the mountpoint set, and the 
FSes under it just inherit their mountpoints, but it isn't 
immediately obvious.


What's more confusing is that doing a dry-run succeeds (for all 
FSes), but removing the '-n' flag causes it to fail after the first FS:


r...@imsfs-mirror:/# zfs send -R ims_pool_mirror/bac...@0702 | zfs 
receive -nv -F  -d ims_mirror_new
would receive full stream of ims_pool_mirror/bac...@0702 into 
ims_mirror_new/bac...@0702
would receive full stream of ims_pool_mirror/backup/bvc...@0702 into 
ims_mirror_new/backup/bvc...@0702
would receive full stream of ims_pool_mirror/backup/z...@0702 into 
ims_mirror_new/backup/z...@0702
would receive full stream of ims_pool_mirror/backup/j...@0702 into 
ims_mirror_new/backup/j...@0702
would receive full stream of ims_pool_mirror/backup/z...@0702 into 
ims_mirror_new/backup/z...@0702
would receive full stream of ims_pool_mirror/backup/ocfs...@0702 into 
ims_mirror_new/backup/ocfs...@0702
would receive full stream of ims_pool_mirror/backup/my...@0702 into 
ims_mirror_new/backup/my...@0702
would receive full stream of ims_pool_mirror/backup/i...@0702 into 
ims_mirror_new/backup/i...@0702
would receive full stream of ims_pool_mirror/backup/bushlibr...@0702 
into ims_mirror_new/backup/bushlibr...@0702
would receive full stream of ims_pool_mirror/backup/purgat...@0702 
into ims_mirror_new/backup/purgat...@0702
would receive full stream of ims_pool_mirror/backup/l...@0702 into 
ims_mirror_new/backup/l...@0702
would receive full stream of ims_pool_mirror/backup/the...@0702 into 
ims_mirror_new/backup/the...@0702
would receive full stream of ims_pool_mirror/backup/pg...@0702 into 
ims_mirror_new/backup/pg...@0702
r...@imsfs-mirror:/# zfs send -R ims_pool_mirror/bac...@0702 | zfs 
receive -v -F  -d ims_mirror_new
receiving full stream of ims_pool_mirror/bac...@0702 into 
ims_mirror_new/bac...@0702

cannot mount '/export/backup': directory is not empty


Of course it can't, since that mountpoint is used by the source pool!

Doing the child FSes (which inherit) one at a time works just fine:

r...@imsfs-mirror:/# zfs send -R ims_pool_mirror/backup/bvc...@0702 | 
zfs receive -v  -d ims_mirror_new
receiving full stream of ims_pool_mirror/backup/bvc...@0702 into 
ims_mirror_new/backup/bvc...@0702

received 431MB stream in 10 seconds (43.1MB/sec)



Now that I 

Re: [zfs-discuss] Cloning a ZFS compact flash card

2009-06-29 Thread Lori Alt

On 06/28/09 08:41, Ross wrote:

Can't you just boot from an OpenSolaris CD, create a ZFS pool on the new 
device, and just do a ZFS send/receive directly to it?  So long as there's 
enough space for the data, a send/receive won't care at all that the systems 
are different sizes.

I don't know what you need to do to a pool to make it bootable I'm afraid, so I 
don't know if this will just work or if it'll need some more tweaking.  However 
if you have any problems you should be able to find more information about how 
ZFS boot works online.
  

A procedure for making a backup of a root pool and then restoring it is at:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SXCE, ZFS root, b101 - b103 fails with ERROR: No upgradeable file systems

2009-06-29 Thread Lori Alt

On 06/27/09 23:50, Ian Collins wrote:

Leela wrote:

So no one has any idea?
  

About what?

This was in regards to a question sent to the install-discuss alias on 
6/18 and later copied to zfs-discuss.  I have answered it on the install 
alias, if anyone is following the issue.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] core dump on zfs receive

2009-06-22 Thread Lori Alt


This is probably 6696858. The fix is known, but I don't know when it's 
expected to become available. I have asked the CR's responsible engineer 
to update it with when the fix is expected.


Lori

On 06/22/09 14:03, Charles Hedrick wrote:

I'm trying to do a simple backup. I did

zfs snapshot -r rp...@snapshot
zfs send -R rp...@snapshot | zfs receive -Fud external/rpool

zfs snapshot -r rp...@snapshot2
zfs send -RI rp...@snapshot1 rp...@snapshot2 | zfs receive -d external/rpool

The receive coredumps
$c
libc_hwcap1.so.1`strcmp+0xec(809ba50, 0, 8044938, 1020)
libzfs.so.1`recv_incremental_replication+0xb57(8088648, 8047430, 2, 8084ca8, 
80841e8, 40)
libzfs.so.1`zfs_receive_package+0x436(8088648, 0, 8047e2b, 2, 8047580, 80476c0)
libzfs.so.1`zfs_receive_impl+0x689(8088648, 8047e2b, 2, 0, 0, 8047c6c)
libzfs.so.1`zfs_receive+0x35(8088648, 8047e2b, 2, 0, 0, 8047d40)
zfs_do_receive+0x172(3, 8047d40, 8047d3c, 807187c)
main+0x2af(4, 8047d3c, 8047d50, feffb7b4)
_start+0x7d(4, 8047e1c, 8047e20, 8047e28, 8047e2b, 0)

I've tried a number of variants to arguments for send and receive, but it 
always coredumps.
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] v440 - root mirror lost after LU

2009-06-16 Thread Lori Alt

On 06/16/09 16:32, Jens Elkner wrote:

Hmmm,

just upgraded some servers to U7. Unfortunately one server's primary disk
died during the upgrade, so that luactivate was not able to activate the
s10u7 BE (Unable to determine the configuration ...). Since the rpool
is a 2-way mirror, the boot-device=/p...@1f,70/s...@2/d...@1,0:a was
simply set to /p...@1f,70/s...@2/d...@0,0:a and checked, whether the
machine still reboots unattended. As expected - no problem.

At the evening the faulty disk was replaced and the mirror resilvered via
'zpool replace rpool c1t1d0s0' (see below).  Since there was no error and
everything stated to be healthy, the s10u7 BE was luactivated (no error
here message as well) and 'init 6'.

Unfortunately, now the server was gone and no known recipe helped to
revive it (I guess, LU damaged the zpool.cache?) :

  

Even if LU somehow damaged the zpool.cache, that wouldn't explain why an
import, while booted off the net, wouldn't work.  LU would have had to 
damage

the pool's label in some way.

I notice that when you booted the system from the  net, you booted from an
Update 6 image.  It's possible that the luupgrade to U7 upgraded the pool
version to one not understood by Update 6.  One thing I suggest is booting
off the net from a U7 image and see if that allows you to import the pool.

The other suggestion I have is to remove the

/p...@1f,70/s...@2/d...@1,0:a

device (the one that didn't appear to be able to boot at all) and try 
booting

off just the other disk.  There are some problems booting from one side
of a mirror when the disk on the other side is present, but somehow not
quite right.  If that boot is successful, insert the other disk again (while
still booted, if your system supports hot-plugging), and then perform the
recovery procedure documented here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Primary_Mirror_Disk_in_a_ZFS_Root_Pool_is_Unavailable_or_Fails

making sure you perform the installboot step to install a boot block.

If you can manage to boot off one side of the mirror, but your system
does not support hot-plugging, try doing a 'zpool detach' to remove
the other disk from the root mirror, then taking the system down, inserting
the replacement disk, rebooting, then re-attach the disk to the
mirror using zpool attach, and then reinstall the boot block, as
shown in the above link.

If none of this works, I'll look into other reasons why a pool might
appear to have corrupted data and therefore be non-importable.
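
A rough sketch of that detach/re-attach path, using the device names
from your output (treat it as an outline, not a recipe):

# zpool detach rpool c1t1d0s0          # drop the replaced disk from the mirror
  (shut down, replace the disk, boot from the remaining disk)
# zpool attach rpool c1t0d0s0 c1t1d0s0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0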

Lori


Any hints, how to get the rpool back? 


Regards,
jel.


What has been tried 'til now:

{3} ok boot
Boot device: /p...@1f,70/s...@2/d...@1,0:a  File and args:
Bad magic number in disk label
Can't open disk label package

Can't open boot device

{3} ok

{3} ok boot /p...@1f,70/s...@2/d...@0,0:a
Boot device: /p...@1f,70/s...@2/d...@0,0:a  File and args:
SunOS Release 5.10 Version Generic_13-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
NOTICE:
spa_import_rootpool: error 22

Cannot mount root on /p...@1f,70/s...@2/d...@0,0:a fstype zfs

panic[cpu3]/thread=180e000: vfs_mountroot: cannot mount root

0180b950 genunix:vfs_mountroot+358 (800, 200, 0, 1875c00, 189f800, 18ca000)
  %l0-3: 010ba000 010ba208 0187bba8 011e8400
  %l4-7: 011e8400 018cc400 0600 0200
0180ba10 genunix:main+a0 (1815178, 180c000, 18397b0, 18c6800, 181b578, 1815000)
  %l0-3: 01015400 0001 70002000
  %l4-7: 0183ec00 0003 0180c000

skipping system dump - no dump device configured
rebooting...

SC Alert: Host System has Reset

{3} ok boot net -s
Boot device: /p...@1c,60/netw...@2  File and args: -s
1000 Mbps FDX Link up
Timeout waiting for ARP/RARP packet
3a000 1000 Mbps FDX Link up
SunOS Release 5.10 Version Generic_137137-09 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Booting to milestone milestone/single-user:default.
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface ce1...
Skipped interface ce1
Attempting to configure interface ce0...
Configured interface ce0
Requesting System Maintenance Mode
SINGLE USER MODE
# mount -F zfs /dev/dsk/c1t1d0s0 /mnt
cannot open '/dev/dsk/c1t1d0s0': invalid dataset name
# mount -F zfs /dev/dsk/c1t0d0s0 /mnt
cannot open '/dev/dsk/c1t0d0s0': invalid dataset name
# zpool import
  pool: pool1
id: 5088500955966129017
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

pool1   ONLINE
  mirrorONLINE

Re: [zfs-discuss] no pool_props for OpenSolaris 2009.06 with old SPARC hardware

2009-06-12 Thread Lori Alt



Frank Middleton wrote:


On 06/03/09 09:10 PM, Aurélien Larcher wrote:

PS: for the record I roughly followed the steps of this blog entry 
=  http://blogs.sun.com/edp/entry/moving_from_nevada_and_live



Thanks for posting this link! Building pkg with gdb was an
interesting exercise, but it worked, with the additional step of
making the packages and pkgadding them. Curious as to why pkg
isn't available as a pkgadd package. Is there any reason why
someone shouldn't make them available for download? It would
make it much less painful for those of us who are OBP version
deprived - but maybe that's the point :-)

During the install cycle, ran into this annoyance (doubtless this
is documented somewhere):

# zpool create rpool c2t2d0
creates a good rpool that can be exported and imported. But it
seems to create an EFI label, and, as documented, attempting to boot
results in a bad magic number error. Why does zpool silently create
an apparently useless disk configuration for a root pool? 


I can't comment on the overall procedure documented in the above blog entry,
but I can address this one issue.

The answer is that there is nothing special about the name rpool.
The thing that makes a pool a root pool is the presence of the bootfs
pool property, NOT the name of the pool.

The command above uses a whole disk specifier (c2t2d0) instead of
a slice specifier (e.g. c2t2d0s0) to specify the device for the pool.  The
zpool command does what it always does when given a whole disk
specifier:  it puts an EFI label on the disk before creating a pool.
The instructions in the blog entry show this:

zpool create -f rpool c1t1d0s0

which indicate that the c1t1d0 disk was already formatted (presumably
with an SMI label) and had a s0 slice that would then be used for
the pool.

In the blog entry, I don't see the bootfs property ever being set on the
pool, so I'm not sure how it's booting.  Perhaps the presence of the 
bootfs

command in the grub menu entry is supplying the boot dataset and thus the
system boots anyway.  If so, I think that this is a bug:  the boot 
loader should

insist on the pool having the bootfs  property set to something because
otherwise it's not necessarily a valid root pool.  The reason for being a
stickler about this is that zfs won't allow the bootfs property to be
set on a pool that isn't a valid root pool (because it has an EFI label,
or is a RAIDZ device).  That's a valuable hurdle when creating a
root pool because it prevents the user from getting into a situation where
the process of creating the root pool seemed to go fine, but then the
pool wasn't bootable.  I will look into this and file a bug if I confirm
that it's appropriate.
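
For reference, a minimal sketch of a pool that satisfies those
root-pool constraints (the BE dataset name is hypothetical; the install
and Live Upgrade software normally create this hierarchy and set the
other BE properties):

# zpool create rpool c2t2d0s0                    # a slice, so the disk keeps its SMI label
# zfs create -o mountpoint=legacy rpool/ROOT
# zfs create rpool/ROOT/myBE                     # hypothetical boot environment dataset
# zpool set bootfs=rpool/ROOT/myBE rpool         # refused if the pool is not a valid root pool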


Lori



Anyway,
it was a good opportunity to test zfs send/recv of a root pool (it
worked like a charm).

Using format -e to relable the disk so that slice 0 and slice 2
both have the whole disk resulted in this odd problem:

# zpool create -f  rpool c2t2d0s0
# zpool list
NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
rpool  18.6G  73.5K  18.6G 0%  ONLINE  -
space  1.36T   294G  1.07T21%  ONLINE  -
# zpool export rpool
# zpool import rpool
cannot import 'rpool': no such pool available
# zpool list
NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
space  1.36T   294G  1.07T21%  ONLINE  -

# zdb -l /dev/dsk/c2t2d0s0
lists 3 perfectly good looking labels.
Format says:
...
selecting c2t2d0
[disk formatted]
/dev/dsk/c2t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c2t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).

However this disk boots ZFS OpenSolaris just fine and this inability to
import an exported pool isn't a problem. Just wondering if any ZFS guru
had a comment about it. (This is with snv103 on SPARC). FWIW this is
an old ide drive connected to a sas controller via a sata/pata adapter...

Cheers -- Frank




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshots ignore everything in /export/....

2009-06-10 Thread Lori Alt

On 06/09/09 18:15, Krenz von Leiberman wrote:

When I take a snapshot of my rpool, (of which /export/... is a part of), ZFS 
ignores all the data in it and doesn't take any snapshots...

How do I make it include /export in my snapshots?

BTW, I'm running on Solaris 10 Update 6 (Or whatever is the first update to 
allow for root pools...)

Thanks.
  

So are you doing the following?

# zfs snapshot -r rp...@today

If so, what is the output then of `zfs list` ?

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] multiple devs for rpool

2009-06-10 Thread Lori Alt
A root pool is composed of one top-level vdev, which can be a mirror 
(i.e. 2 or more disks).  A raidz vdev is not supported for the root pool 
yet.  It might be supported in the future, but the timeframe is unknown 
at this time.


Lori

Colleen wrote:


As I understand it, you cannot currently use multiple disks for a rpool (IE: 
something similar to raid10). Are there plans to provide this functionality, 
and if so does anyone know what the general timeframe is?

Thanks!
 



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] multiple devs for rpool

2009-06-10 Thread Lori Alt



Carson Gaspar wrote:


Lori Alt wrote:

A root pool is composed of one top-level vdev, which can be a mirror 
(i.e. 2 or more disks).  A raidz vdev is not supported for the root 
pool yet.  It might be supported in the future, but the timeframe is 
unknown at this time.



The original poster was asking about a zpool of more than 1 mirrored 
pair (4 disks making up 2 mirrored pairs, for example). I don't know 
if that changes the answer (doubtful), but raidz/raidz2 was not being 
discussed.



one top-level vdev means that a root pool can be composed of no more 
than one mirrored pair (or mirrored triple, or whatever).  That might 
change in the future, but there is no projected date for relaxing that 
constraint.
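
In other words (device names are hypothetical):

# zpool create rpool mirror c0t0d0s0 c0t1d0s0     # one mirrored top-level vdev: usable as a root pool
# zpool add rpool mirror c0t2d0s0 c0t3d0s0        # a second top-level vdev: no longer valid for root

The second layout is the raid10-style pool the original poster asked
about, and it is exactly the part that root pools do not currently allow.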


Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replicating a root pool

2009-05-22 Thread Lori Alt

On 05/21/09 22:40, Ian Collins wrote:

Mark J Musante wrote:

On Thu, 21 May 2009, Ian Collins wrote:

I'm trying to use zfs send/receive to replicate the root pool of a 
system and I can't think of a way to stop the received copy 
attempting to mount the filesystem over the root of the destination 
pool.


If you're using build 107 or later, there's a hidden -u option 
available for zfs receive to tell it not to mount the dataset.  See 
http://tinyurl.com/crgog8 for more details.


Thanks for the tip Mark, unfortunately I'm stuck with Solaris 10 on 
this system.

Actually,  the -u option has been integrated into Update 7 of S10.

Lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Lori Alt

This sounds like a good idea to me, but it should be brought up
on the caiman-disc...@opensolaris.org mailing list, since this
is not just, or even primarily, a zfs issue.

Lori

Rich Teer wrote:


On Wed, 6 May 2009, Richard Elling wrote:

 


popular interactive installers much more simplified.  I agree that
interactive installation needs to remain as simple as possible.
   



How about offering a choice an installation time: Custom or default??

Those that don't want/need the interactive flexibility can pick default
whereas others who want more flexibility (but still want or need an
interactive installation) can pick the custom option.  Just a thought...

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bad SWAP performance from zvol

2009-04-06 Thread Lori Alt

I'm not sure where this issue stands now (am just now checking
mail after being out for a few days), but here are the block sizes
used when the install software creates swap and dump zvols:

swap:  block size is set to PAGESIZE  (4K for x86, 8K for sparc)
dump:  block size is set to 128 KB
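
For reference, a sketch of recreating the zvols by hand with those
block sizes (the 4G/2G sizes are only placeholders):

# zfs create -V 4G -b 4k rpool/swap        # PAGESIZE: 4K on x86; use -b 8k on sparc
# zfs create -V 2G -b 128k rpool/dump
# swap -a /dev/zvol/dsk/rpool/swap
# dumpadm -d /dev/zvol/dsk/rpool/dump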

Liveupgrade should use the same characteristics, though I think
there was a bug at one time where it did not.

If that does not improve dump/swap zvol performance, further
investigation should be done.  Perhaps file a bug.

Lori


On 03/31/09 03:02, casper@sun.com wrote:

I've upgraded my system from ufs to zfs (root pool).

By default, it creates a zvol for dump and swap.

It's a 4GB Ultra-45 and every late night/morning I run a job which takes 
around 2GB of memory.


With a zvol swap, the system becomes unusable and the Sun Ray client often 
goes into 26B.


So I removed the zvol swap and now I have a standard swap partition.
The performance is much better (night and day).  The system is usable and 
I don't know the job is running.


Is this expected?

Casper





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Lori Alt

No, this is an incorrect diagnosis.  The problem is that by
using the -V option, you created a volume, not a file system. 
That is, you created a raw device.  You could then newfs

a ufs file system within the volume, but that is almost certainly
not what you want.

Don't use -V when you create the oracle/prd_data/db1
dataset.  Then it will be a mountable  file system.  You
will need to give it a mount point however by setting the
mountpoint property, since the default mountpoint won't
be what you want.
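
In other words, something like:

# zfs create oracle/prd_data/db1
# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1

(or do it in one step with zfs create -o mountpoint=... oracle/prd_data/db1).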

Lori


On 03/17/09 15:45, Grant Lowe wrote:

Ok, Cindy.  Thanks. I would like to have one big pool and divide it into 
separate file systems for an Oracle database.  What I had before was a separate 
pool for each file system.  So does it look I have to go back to what I had 
before?



- Original Message 
From: cindy.swearin...@sun.com cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:20:18 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems

Grant,

If I'm following correctly, you can't mount a ZFS resource
outside of the pool from which the resource resides.

Is this a UFS directory, here:

# mkdir -p /opt/mis/oracle/data/db1

What are you trying to do?

Cindy

Grant Lowe wrote:
  

Another newbie question:

I have a new system with zfs. I create a directory:

bash-3.00# mkdir -p /opt/mis/oracle/data/db1

I do my zpool:

bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1 
c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5 
c2t5006016B306005AAd6 c2t5006016B306005AAd7 c2t5006016B306005AAd8 
c2t5006016B306005AAd9 c2t5006016B306005AAd10 c2t5006016B306005AAd11 
c2t5006016B306005AAd12 c2t5006016B306005AAd13 c2t5006016B306005AAd14 
c2t5006016B306005AAd15 c2t5006016B306005AAd16 c2t5006016B306005AAd17 
c2t5006016B306005AAd18 c2t5006016B306005AAd19
bash-3.00# zfs create oracle/prd_data
bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1

I'm trying to set a mountpoint.  But trying to mount it doesn't work.

bash-3.00# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
oracle   44.0G   653G  25.5K  /oracle
oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
oracle/prd_data/db1  22.5K   697G  22.5K  -
bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
cannot set property for 'oracle/prd_data/db1': 'mountpoint' does not apply to 
datasets of this type
bash-3.00#

What's the correct syntax?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs streams data corruption

2009-02-24 Thread Lori Alt

On 02/24/09 12:57, Christopher Mera wrote:

How is it  that flash archives can avoid these headaches?
  

Are we sure that they do avoid this headache?  A flash archive
(on ufs root) is created by doing a cpio of the root file system.
Could a cpio end up archiving a file that was mid-way through
an SQLite2 transaction?

Lori

Ultimately I'm doing this to clone ZFS root systems because at the moment Flash 
Archives are UFS only.


-Original Message-
From: Brent Jones [mailto:br...@servuhome.net] 
Sent: Tuesday, February 24, 2009 2:49 PM

To: Christopher Mera
Cc: Mattias Pantzare; Nicolas Williams; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs streams  data corruption

On Tue, Feb 24, 2009 at 11:32 AM, Christopher Mera cm...@reliantsec.net wrote:
  

Thanks for your responses..

Brent:
And I'd have to do that for every system that I'd want to clone?  There
must be a simpler way.. perhaps I'm missing something.


Regards,
Chris




Well, unless the database software itself can notice a snapshot
taking place, and flush all data to disk, pause transactions until the
snapshot is finished, then properly resume, I don't know what to tell
you.
It's an issue for all databases, Oracle, MSSQL, MySQL... how to do an
atomic backup, without stopping transactions, and maintaining
consistency.
Replication is one possible solution, dumping to a file periodically is
one, or just tolerating that your database will not be consistent
after a snapshot and have to replay logs / consistency check it after
bringing it up from a snapshot.

Once you figure that out in a filesystem agnostic way, you'll be a
wealthy person indeed.


  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs streams data corruption

2009-02-23 Thread Lori Alt


I don't know what's causing this, nor have I seen it. 


Can you send more information about the errors you
see when the system crashes and svc.configd fails?

Doing the scrub seems like a harmless and possibly
useful thing to do.  Let us know what you find out
from it.
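
For reference, the check you describe amounts to (substitute your pool
name):

# zpool scrub poolname
# zpool status -v poolname      # look for checksum errors once the scrub completes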

Lori

On 02/23/09 11:05, Christopher Mera wrote:


Hi folks,

 

I recently read up on Scott Dickson's blog with his solution for 
jumpstart/flashless cloning of ZFS root filesystem boxes.  I have to 
say that it initially looks to work out cleanly, but of course there 
are kinks to be worked out that deal with auto mounting filesystems 
mostly. 

 

The issue that I'm having is that a few days after these cloned 
systems are brought up and reconfigured they are crashing and 
svc.configd refuses to start.


 

I thought about using zpool scrub poolname  right after completing 
the stream as an integrity check. 

 


If you have any suggestions about this I'd love to hear them!

 

 


Thanks,

Christopher Mera

 






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread Lori Alt



Dave wrote:


Frank Cusack wrote:


When you try to backup the '/' part of the root pool, it will get
mounted on the altroot itself, which is of course already occupied.
At that point, the receive will fail.

So far as I can tell, mounting the received filesystem is the last
step in the process.  So I guess maybe you could replicate everything
except '/', finally replicate '/' and just ignore the error message.
I haven't tried this.  You have to do '/' last because the receive
stops at that point even if there is more data in the stream.



Wouldn't it be relatively easy to add an option to 'zfs receive' to 
ignore/not mount the received filesystem, or set the canmount option 
to 'no' when receiving? Is there an RFE for this, or has it been added 
to a more recent release already?


see:

http://bugs.opensolaris.org/view_bug.do?bug_id=6794452

now fixed in both the community edition and in Update 8
of Solaris 10.

- Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-02-19 Thread Lori Alt

On 02/18/09 21:24, Jerry Kemp wrote:

Hello Lori,

Thank you again for the quick reply.

Unfortunately, I had mistakenly anticipated a somewhat quicker 
integration than Solaris 10u8.


Approaching this from another angle, would it be possible for me to 
build a jumpstart server using a current Solaris Nevada b107/SXCE, and 
to ultimately jumpstart Solaris 10u6 clients with a ZFS root using a 
flash archive as a source?

No, you need install software that has been modified
to understand how to unpack a flash archive onto
a system that has been created with a zfs pool.

There is this:

http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs

It's not the same as flash archive support, but it accomplishes
some of the same goals.

- Lori


thank you,

Jerry Kemp

On 02/18/09 10:40, Lori Alt wrote:

Latest is that this will go into an early build of Update 8
and be available as a patch shortly thereafter (shortly
after it's putback, that is.  The patch doesn't have to wait for U8
to be released.)

I will update the CR with this information.

Lori


On 02/18/09 09:12, Jerry K wrote:

Hello Lori,

Any update to this issue, and can you speculate as to if it will be 
a patch to Solaris 10u6, or part of 10u7?


Thanks again,

Jerry


Lori Alt wrote:


This is in the process of being resolved right now.  Stay tuned
for when it will be available.  It might be a patch to Update 6.

In the meantime, you might try this:

http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs 



- Lori


On 01/09/09 12:28, Jerry K wrote:
I understand that currently, at least under Solaris 10u6, it is 
not possible to jumpstart a new system with a zfs root using a 
flash archive as a source.


Can anyone comment as to whether this restriction will pass in the 
near term, or if this is a while out (6+ months) before this will 
be possible?


Thanks,



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-02-18 Thread Lori Alt

Latest is that this will go into an early build of Update 8
and be available as a patch shortly thereafter (shortly
after it's putback, that is.  The patch doesn't have to wait for U8
to be released.)

I will update the CR with this information.

Lori


On 02/18/09 09:12, Jerry K wrote:

Hello Lori,

Any update to this issue, and can you speculate as to if it will be a 
patch to Solaris 10u6, or part of 10u7?


Thanks again,

Jerry


Lori Alt wrote:


This is in the process of being resolved right now.  Stay tuned
for when it will be available.  It might be a patch to Update 6.

In the meantime, you might try this:

http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs 



- Lori


On 01/09/09 12:28, Jerry K wrote:
I understand that currently, at least under Solaris 10u6, it is not 
possible to jumpstart a new system with a zfs root using a flash 
archive as a source.


Can anyone comment as to whether this restriction will pass in the 
near term, or if this is a while out (6+ months) before this will be 
possible?


Thanks,

Jerry


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recover data in /root

2009-02-12 Thread Lori Alt

On 02/11/09 12:14, Jonny Gerold wrote:
I have a non bootable disk and need to recover files from /root... 
When I import the disk via zpool import /root isnt mounted...


Thanks, Jonny

I need more information.  Is this a root pool?  (you say it's non-bootable,
but does that mean it's not a root pool, or that it's a root pool that is
failing to boot?).

Is the name of the pool "root"?  Are there error messages
from the import?  What does 'zfs get all <dataset-name>' report
after the import?  What's the output of 'zfs list'?
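
In the meantime, a rough sketch of one way to get at the files (the
pool, BE, and alternate-root names are all illustrative):

# zpool import -R /a rpool        # add -f only if the import complains the pool is in use
# zfs list -r rpool               # find the dataset that holds /root
# zfs mount rpool/ROOT/myBE       # BE datasets are often canmount=noauto, so mount explicitly

The files should then be reachable under the alternate root, e.g.
/a/root/...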


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root pool over more than one disks?

2009-02-11 Thread Lori Alt

On 02/11/09 01:28, Sandro wrote:

Hey Cindy

Thanks for your help.

How would I configure a 2-way mirror pool for a root pool?
Basically I'd do it this way.

zpool create pool mirror disk0 disk2 mirror disk1 disk3
  

This command does not create a valid root pool.  Root pools cannot
have more than one top-level vdev.  I.e., you can't concatenate two
mirrored vdevs.
or with an already configured root pool mirror 
zpool add rpool mirror disk1 disk3
  

Here, you need attach, not 'add'.  And even then, you need to
add the boot blocks as a result of this bug:

   * CR 6668666 - If you attach a disk to create a mirrored root pool
 after an initial installation, you will need to apply the boot
 blocks to the secondary disks. For example:

 sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c0t1d0s0
 x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0 
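
For completeness, the attach step itself looks something like this
(the second device name is illustrative), followed by the
installboot/installgrub command above once the resilver finishes:

 # zpool attach rpool c0t0d0s0 c0t1d0s0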



Lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS core contributor nominations

2009-02-05 Thread Lori Alt
I accept nomination as a core contributor to the zfs
community.

Lori Alt

On 02/02/09 08:55, Mark Shellenbaum wrote:
 The time has come to review the current Contributor and Core contributor 
 grants for ZFS.  Since all of the ZFS core contributors grants are set 
 to expire on 02-24-2009 we need to renew the members that are still 
 contributing at core contributor levels.   We should also add some new 
 members to both Contributor and Core contributor levels.

 First the current list of Core contributors:

 Bill Moore (billm)
 Cindy Swearingen (cindys)
 Lori M. Alt (lalt)
 Mark Shellenbaum (marks)
 Mark Maybee (maybee)
 Matthew A. Ahrens (ahrens)
 Neil V. Perrin (perrin)
 Jeff Bonwick (bonwick)
 Eric Schrock (eschrock)
 Noel Dellofano (ndellofa)
 Eric Kustarz (goo)*
 Georgina A. Chua (chua)*
 Tabriz Holtz (tabriz)*
 Krister Johansen (johansen)*

 All of these should be renewed at Core contributor level, except for 
 those with a *.  Those with a * are no longer involved with ZFS and 
 we should let their grants expire.

 I am nominating the following to be new Core Contributors of ZFS:

 Jonathan W. Adams (jwadams)
 Chris Kirby
 Lin Ling
 Eric C. Taylor (taylor)
 Mark Musante
 Rich Morris
 George Wilson
 Tim Haley
 Brendan Gregg
 Adam Leventhal
 Pawel Jakub Dawidek
 Ricardo Correia

 For Contributor I am nominating the following:
 Darren Moffat
 Richard Elling

 I am voting +1 for all of these (including myself)

 Feel free to nominate others for Contributor or Core Contributor.


 -Mark




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-01-09 Thread Lori Alt

This is in the process of being resolved right now.  Stay tuned
for when it will be available.  It might be a patch to Update 6.

In the meantime, you might try this:

http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs

- Lori


On 01/09/09 12:28, Jerry K wrote:
 I understand that currently, at least under Solaris 10u6, it is not 
 possible to jumpstart a new system with a zfs root using a flash archive 
 as a source.

 Can anyone comment as to whether this restriction will pass in the near 
 term, or if this is a while out (6+ months) before this will be possible?

 Thanks,

 Jerry

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs boot Solaris 10/08 whole disk or slice

2008-12-18 Thread Lori Alt

On 12/18/08 12:57, Ian Collins wrote:

Shawn joy wrote:
  

Hi All,

I see from the zfs Best practices guide 


http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

 ZFS Root Pool Considerations

* A root pool must be created with disk slices rather than whole disks. Allocate the entire disk capacity for the root pool to slice 0, for example, rather than partition the disk that is used for booting for many different uses. 


What issues are there if one would like to uses other slices on the same disk 
for data. Is this even supported in Solaris 10/08?
  


There shouldn't be any issues.

  

Correct.  You can use other slices on the disk for other pools or file
systems of other types.

Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-12-10 Thread Lori Alt

On 12/10/08 12:15, Jesus Cea wrote:


Ian Collins wrote:
  

I have ZFS root/boot in my environment, and I am interested in
separating /var in a independent dataset. How can I do it. I can use
Live Upgrade, if needed.
  

It's an install option.



But I am not installing, but doing a Live Upgrade. The machine is in
production; I can not do a reinstall. I can mess with configuration
files an create datasets and such by hand.
  

OK, here's some mail I sent to someone last week in response to
a similar question:


The separate /var directory is supported as an option
of an initial install, but right now, there's no way to
get it automatically as a result of a migration from
a ufs BE to a zfs BE.  However, you can still make it
happen:

You might possibly want to boot from the failsafe
archive while doing this, but doing it while booted
from the BE I was modifying just worked for me:

Assuming that your BE name is myBE, you should
be able to do the following:

# zfs create -o canmount=noauto pool/ROOT/myBE/var
# zfs set mountpoint=/mnt pool/ROOT/myBE/var
# zfs mount pool/ROOT/myBE/var   # this assumes you have an empty directory /mnt

# cd /var
# find . |  cpio -pmud /mnt
# zfs unmount pool/ROOT/myBE/var
# zfs inherit mountpoint pool/ROOT/myBE/var

Now reboot.

If that all works, you might want to then boot off the
failsafe archive, mount pool/ROOT/myBE to a
temporary mount point, and then delete the old
/var directory under that mount point (just to get
rid of the old one so that it doesn't cause confusion
later). 


The instructions above are intended for use when booted
from the BE you're trying to modify, but you could also
do something similar after you've created a new BE,
but before you boot from it.

LiveUpgrade should probably support doing something
like this, but then, LiveUpgrade should probably support a lot
of things.

Lori

  

- --
  

The correct sig. delimiter is -- 



I know. The issue is PGP/GNUPG/Enigmail integration.

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [install-discuss] differences.. why?

2008-12-02 Thread Lori Alt
On 12/02/08 03:21, jan damborsky wrote:
 Hi Dick,

 I am redirecting your question to zfs-discuss
 mailing list, where people are more knowledgeable
 about this problem and your question could be
 better answered.

 Best regards,
 Jan


 dick hoogendijk wrote:
   
 I have s10u6 installed on my server.
 zfs list (partly):
 NAMEUSED  AVAIL  REFER  MOUNTPOINT
 rpool  88.8G   140G  27.5K  /rpool
 rpool/ROOT 20.0G   140G18K  /rpool/ROOT
 rpool/ROOT/s10BE2  20.0G   140G  7.78G  /

 But just now, on a newly installed s10u6 system I got rpool/ROOT with a
 mountpoint legacy

 
The mount point for /rootpoolname/ROOT is supposed
to be legacy because that dataset should never be mounted.
It's just a container dataset to group all the BEs.

 The drives were different. On the latter (legacy) system it was not
 formatted (yet) (in VirtualBox). On my server I switched from UFS to
 ZFS, so I first created a rpool and than did a luupgrade into it.
 This could explain the mountpoint /rpool/ROOT but WHY the difference?
 Why can't s10u6 install the same mountpoint on the new disk?
 The server runs very well; is this legacy thing really needed?

 
When you created the rpool, did you also explicitly create the rpool/ROOT
dataset?   If you did create it and didn't set the mount point to legacy,
that explains why you ended up with your original configuration.  If
you didn't create the rpool/ROOT dataset yourself, and instead let 
LiveUpgrade
create it automatically, and LiveUpgrade set the mountpoint to 
/rpool/ROOT, then
that's a bug in LiveUpgrade (though a minor one, I think).

Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-12-02 Thread Lori Alt

On 12/02/08 09:00, Gary Mills wrote:

On Mon, Dec 01, 2008 at 04:45:16PM -0700, Lori Alt wrote:
  

   On 11/27/08 17:18, Gary Mills wrote:
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:

I'm currently working with an organisation who
want use ZFS for their   full zones. Storage is SAN attached, and they
also want to create a   separate /var for each zone, which causes issues
when the zone is   installed. They believe that a separate /var is
still good practice.
If your mount options are different for /var and /, you will need
a separate filesystem.  In our case, we use `setuid=off' and
`devices=off' on /var for security reasons.  We do the same thing
for home directories and /tmp .

For zones?

Sure, if you require different mount options in the zones.

   I looked into this and found that, using ufs,  you can indeed set up
   the zone's /var directory as a separate file system.  I  don't know
   about
   how LiveUpgrade works with that configuration (I didn't try it).
   But I was at least able to get the zone to install and boot.
   But with zfs, I couldn't even get a zone with a separate /var
   dataset to install, let alone be manageable with LiveUpgrade.
   I configured the zone like so:
   # zonecfg -z z4
   z4: No such zone configured
   Use 'create' to begin configuring a new zone.
   zonecfg:z4> create
   zonecfg:z4> set zonepath=/zfszones/z4
   zonecfg:z4> add fs
   zonecfg:z4:fs> set dir=/var
   zonecfg:z4:fs> set special=rpool/ROOT/s10x_u6wos_07b/zfszones/z4/var
   zonecfg:z4:fs> set type=zfs
   zonecfg:z4:fs> end
   zonecfg:z4> exit
   I then get this result from trying to install the zone:
   prancer# zoneadm -z z4 install
   Preparing to install zone z4.
   ERROR: No such file or directory: cannot mount /zfszones/z4/root/var



You might have to pre-create this filesystem. `special' may not be
needed at all.
  

I did pre-create the file system.  Also, I tried omitting special and
zonecfg complains. 


I think that there might need to be some changes
to zonecfg and the zone installation code to get separate
/var datasets in non-global zones to work.

Lori
  

   in non-global zone to install: the source block device or directory
   rpool/ROOT/s10x_u6wos_07b/zfszones/z1/var cannot be accessed
   ERROR: cannot setup zone z4 inherited and configured file systems
   ERROR: cannot setup zone z4 file systems inherited and configured
   from the global zone
   ERROR: cannot create zone boot environment z4
   I don't fully  understand the failures here.  I suspect that there are
   problems both in the zfs code and zones code.  It SHOULD work though.
   The fact that it doesn't seems like a bug.
   In the meantime, I guess we have to conclude that a separate /var
   in a non-global zone is not supported on zfs.  A separate /var in
   the global zone is supported  however, even when the root is zfs.



I haven't tried ZFS zone roots myself, but I do have a few comments.
ZFS filesystems are cheap because they don't require separate disk
slices.  As well, they are attribute boundaries.  Those are necessary
or convenient in some case.

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [install-discuss] differences.. why?

2008-12-02 Thread Lori Alt

On 12/02/08 11:29, dick hoogendijk wrote:

Lori Alt wrote:
  

On 12/02/08 03:21, jan damborsky wrote:


Hi Dick,

I am redirecting your question to zfs-discuss
mailing list, where people are more knowledgeable
about this problem and your question could be
better answered.

Best regards,
Jan


dick hoogendijk wrote:

  

I have s10u6 installed on my server.
zfs list (partly):
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  88.8G   140G  27.5K  /rpool
rpool/ROOT 20.0G   140G18K  /rpool/ROOT
rpool/ROOT/s10BE2  20.0G   140G  7.78G  /

But just now, on a newly installed s10u6 system I got rpool/ROOT with a
mountpoint legacy




The mount point for /rootpoolname/ROOT is supposed
to be legacy because that dataset should never be mounted.
It's just a container dataset to group all the BEs.



The drives were different. On the latter (legacy) system it was not
formatted (yet) (in VirtualBox). On my server I switched from UFS to
ZFS, so I first created a rpool and than did a luupgrade into it.
This could explain the mountpoint /rpool/ROOT but WHY the difference?
Why can't s10u6 install the same mountpoint on the new disk?
The server runs very well; is this legacy thing really needed?




When you created the rpool, did you also explicitly create the rpool/ROOT
datasets?   If you did create it and didn't set the mount point to
legacy,
that explains why you ended up with your original configuration.  If
you didn't create the rpool/ROOT dataset yourself, and instead let
LiveUpgrade
create it automatically, and LiveUpgrade set the mountpoint to
/rpool/ROOT, then
that's a bug in LiveUpgrade (though a minor one, I think).



NO, I'm quite positive all I did was zfs create rpool and after that I
did a lucreate -n zfsBE -p rpool followed by luupgrade -u -n zfsBE -s
/iso

So, it must have been LU that forgot to set the mountpoint to legacy.
  

yes, we verified that and filed a bug against LU.

What is the correct syntax to correct this situation?

  

I'm not sure you really  need to, but you should be able
to do this:

zfs unmount rpool/ROOT
zfs set mountpoint=legacy rpool/ROOT

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs boot - U6 kernel patch breaks sparc boot

2008-12-02 Thread Lori Alt

I don't want to steer you wrong under the circumstances,
so I think we need more information. 

First, is the failure the same as in the earlier part of this
thread?  I.e., when you boot, do you get a failure like this?

Warning: Fcode sequence resulted in a net stack depth change of 1
Evaluating:

Evaluating:

The file just loaded does not appear to be executable


Second, at least at first glance, this looks like more of
a generic patch problem than a problem specifically
related to zfs boot.  Since this is S10, not OpenSolaris,
perhaps you should be escalating this through the
standard support channels.  This alias probably
won't get you any really useful answers on general
problems with patching.

Lori

On 12/02/08 14:42, Vincent Fox wrote:
 The SupportTech responding to case #66153822 so far
 has only suggested boot from cdrom and patchrm 137137-09
 which tells me I'm dealing with a level-1 binder monkey.
 It's the idle node of a cluster holding 10K email accounts
 so I'm proceeding cautiously.  It is unfortunate the admin doing
 the original patching did them from multi-user but here we are.

 I am attempting to boot net:dhcp -s just to collect more info:

 My patchadd output shows 138866-01 and 137137-09 being applied OK:

 bash-3.00# patchadd /net/matlock/local/d02/patches/all_patches/138866-01
 Validating patches...

 Loading patches installed on the system...

 Done!

 Loading patches requested to install.

 Done!

 Checking patches that you specified for installation.

 Done!


 Approved patches will be installed in this order:

 138866-01 


 Checking installed patches...
 Verifying sufficient filesystem capacity (dry run method)...
 Installing patch packages...

 Patch 138866-01 has been successfully installed.
 See /var/sadm/patch/138866-01/log for details

 Patch packages installed:
   SUNWcsr

 bash-3.00# patchadd /net/matlock/local/d02/patches/all_patches/137137-09
 Validating patches...

 Loading patches installed on the system...

 Done!

 Loading patches requested to install.

 Version of package SUNWcakr from directory SUNWcakr.u in patch 137137-09 
 differs from the package installed on the system.
 Version of package SUNWcar from directory SUNWcar.u in patch 137137-09 
 differs from the package installed on the system.
 Version of package SUNWkvm from directory SUNWkvm.c in patch 137137-09 
 differs from the package installed on the system.
 Version of package SUNWkvm from directory SUNWkvm.d in patch 137137-09 
 differs from the package installed on the system.
 Version of package SUNWkvm from directory SUNWkvm.m in patch 137137-09 
 differs from the package installed on the system.
 Version of package SUNWkvm from directory SUNWkvm.u in patch 137137-09 
 differs from the package installed on the system.
 Architecture for package SUNWnxge from directory SUNWnxge.u in patch 
 137137-09 differs from the package installed on the system.
 Version of package SUNWcakr from directory SUNWcakr.us in patch 137137-09 
 differs from the package installed on the system.
 Version of package SUNWcar from directory SUNWcar.us in patch 137137-09 
 differs from the package installed on the system.
 Version of package SUNWkvm from directory SUNWkvm.us in patch 137137-09 
 differs from the package installed on the system.
 Done!

 The following requested patches have packages not installed on the system
 Package SUNWcpr from directory SUNWcpr.u in patch 137137-09 is not installed 
 on the system. Changes for package SUNWcpr will not be applied to the system.
 Package SUNWefc from directory SUNWefc.u in patch 137137-09 is not installed 
 on the system. Changes for package SUNWefc will not be applied to the system.
 Package SUNWfruip from directory SUNWfruip.u in patch 137137-09 is not 
 installed on the system. Changes for package SUNWfruip will not be applied to 
 the system.
 Package SUNWluxd from directory SUNWluxd.u in patch 137137-09 is not 
 installed on the system. Changes for package SUNWluxd will not be applied to 
 the system.
 Package SUNWs8brandr from directory SUNWs8brandr in patch 137137-09 is not 
 installed on the system. Changes for package SUNWs8brandr will not be applied 
 to the system.
 Package SUNWs8brandu from directory SUNWs8brandu in patch 137137-09 is not 
 installed on the system. Changes for package SUNWs8brandu will not be applied 
 to the system.
 Package SUNWs9brandr from directory SUNWs9brandr in patch 137137-09 is not 
 installed on the system. Changes for package SUNWs9brandr will not be applied 
 to the system.
 Package SUNWs9brandu from directory SUNWs9brandu in patch 137137-09 is not 
 installed on the system. Changes for package SUNWs9brandu will not be applied 
 to the system.
 Package SUNWus from directory SUNWus.u in patch 137137-09 is not installed on 
 the system. Changes for package SUNWus will not be applied to the system.
 Package SUNWefc from directory SUNWefc.us in patch 137137-09 is not installed 
 on the system. Changes for package SUNWefc will not be applied to the system.

Re: [zfs-discuss] Separate /var

2008-12-02 Thread Lori Alt

On 12/02/08 10:24, Mike Gerdts wrote:

On Tue, Dec 2, 2008 at 11:17 AM, Lori Alt [EMAIL PROTECTED] wrote:
  

I did pre-create the file system.  Also, I tried omitting special and
zonecfg complains.

I think that there might need to be some changes
to zonecfg and the zone installation code to get separate
/var datasets in non-global zones to work.



You could probably do something like:

zfs create rpool/zones/$zone
zfs create rpool/zones/$zone/var

zonecfg -z $zone
add fs
  set dir=/var
  set special=/zones/$zone/var
  set type=lofs
  end
...

zoneadm -z $zone install

  


I follow you up to here.  But why do the next steps?


zonecfg -z $zone
remove fs dir=/var

zfs set mountpoint=/zones/$zone/root/var rpool/zones/$zone/var

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-12-02 Thread Lori Alt

On 12/02/08 11:04, Brian Wilson wrote:

- Original Message -
From: Lori Alt [EMAIL PROTECTED]
Date: Tuesday, December 2, 2008 11:19 am
Subject: Re: [zfs-discuss] Separate /var
To: Gary Mills [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org

  

On 12/02/08 09:00, Gary Mills wrote:


On Mon, Dec 01, 2008 at 04:45:16PM -0700, Lori Alt wrote:
  
  

   On 11/27/08 17:18, Gary Mills wrote:
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:

I'm currently working with an organisation who want to use ZFS for
their full zones. Storage is SAN attached, and they also want to
create a separate /var for each zone, which causes issues when the
zone is installed. They believe that a separate /var is still good
practice.
If your mount options are different for /var and /, you will need
a separate filesystem.  In our case, we use `setuid=off' and
`devices=off' on /var for security reasons.  We do the same thing
for home directories and /tmp .

For zones?

Sure, if you require different mount options in the zones.

   I looked into this and found that, using ufs, you can indeed set up
   the zone's /var directory as a separate file system.  I don't know about
   how LiveUpgrade works with that configuration (I didn't try it).
   But I was at least able to get the zone to install and boot.
   But with zfs, I couldn't even get a zone with a separate /var
   dataset to install, let alone be manageable with LiveUpgrade.
   I configured the zone like so:
   # zonecfg -z z4
   z4: No such zone configured
   Use 'create' to begin configuring a new zone.
   zonecfg:z4 create
   zonecfg:z4 set zonepath=/zfszones/z4
   zonecfg:z4 add fs
   zonecfg:z4:fs set dir=/var
   zonecfg:z4:fs set special=rpool/ROOT/s10x_u6wos_07b/zfszones/z4/var
   zonecfg:z4:fs set type=zfs
   zonecfg:z4:fs end
   zonecfg:z4 exit
   I then get this result from trying to install the zone:
   prancer# zoneadm -z z4 install
   Preparing to install zone z4.
   ERROR: No such file or directory: cannot mount /zfszones/z4/root/var




I think you're running into the problem of defining /var as a filesystem 
that already exists under the zone root.  We had issues with that, so whenever 
I set up filesystems for zones, I don't push zfs datasets into the zone; I create 
a zfs filesystem in the global zone and lofs-mount that directory into the zone. 
 For example, I've got a pool zdisk with a filesystem down the path -
zdisk/zones/zvars/(zonename)

which mounts itself to -
/zdisk/zones/zvars/(zonename)

It's a ZFS filesystem with a quota and reservation set up, and I just do an lofs 
mount of it via these lines in the /etc/zones/(zonename).xml file -

  <filesystem special="/zdisk/zones/zvars/(zonename)" directory="/var" type="lofs">
    <fsoption name="nodevices"/>
  </filesystem>

I think that's the equivalent of the following zonecfg lines -

zonecfg:z4 add fs
zonecfg:z4:fs set dir=/var
zonecfg:z4:fs set special=/zdisk/zones/zvars/z4/var
zonecfg:z4:fs set type=lofs
zonecfg:z4:fs end

I think that to put the zfs into the zone, you need to do an add dataset instead of 
an add fs.  I tried that once and didn't completely like the results.  
The dataset was controllable inside the zone (which is what I wanted at the 
time), but it wasn't controllable from the global zone anymore.  And I couldn't 
access it from the global zone easily to get the backup software to pick it up.

Doing it this way means you have to manage the zfs datasets from the global 
zone, but that's not really an issue here.
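
(For reference, a minimal sketch of the add dataset form mentioned a few
lines above; the zone and dataset names are just placeholders:)

  zonecfg -z z4
  add dataset
    set name=zdisk/zones/zvars/z4
    end

With this form the zone sees and manages the dataset directly, which is
exactly the control/backup trade-off described above.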
  

So I tried your suggestion and it appears to work, at
least initially (I have a feeling that it will cause
problems later if I want to clone the BE using LiveUpgrade,
but first things first.)



So, create the separate filesystems you want in the global zone (without 
stacking them under the zoneroot - separate directory somewhere


Why does it have to be in a separate directory?

lori

 set up the zfs stuff you want, then lofs it into the local zone.  I've had that 
install successfully before.

Hope that's helpful in some way!

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-12-01 Thread Lori Alt

On 11/27/08 17:18, Gary Mills wrote:

On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
  

On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:


On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
  

I'm currently working with an organisation who want to use ZFS for
their full zones. Storage is SAN attached, and they also want to
create a separate /var for each zone, which causes issues when the
zone is installed. They believe that a separate /var is still good
practice.
If your mount options are different for /var and /, you will need
a separate filesystem.  In our case, we use `setuid=off' and
`devices=off' on /var for security reasons.  We do the same thing
for home directories and /tmp .

  

For zones?



Sure, if you require different mount options in the zones.
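
(As a sketch, such a dataset can be created with those options set up
front; the pool and dataset names here are only placeholders:)

  zfs create -o setuid=off -o devices=off rpool/zones/z1/var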

  

I looked into this and found that, using ufs,  you can indeed set up
the zone's /var directory as a separate file system.  I  don't know about
how LiveUpgrade works with that configuration (I didn't try it). 
But I was at least able to get the zone to install and boot.


But with zfs, I couldn't even get a zone with a separate /var
dataset to install, let alone be manageable with LiveUpgrade.
I configured the zone like so:

# zonecfg -z z4
z4: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:z4 create
zonecfg:z4 set zonepath=/zfszones/z4
zonecfg:z4 add fs
zonecfg:z4:fs set dir=/var
zonecfg:z4:fs set special=rpool/ROOT/s10x_u6wos_07b/zfszones/z4/var
zonecfg:z4:fs set type=zfs
zonecfg:z4:fs end
zonecfg:z4 exit

I then get this result from trying to install the zone:

prancer# zoneadm -z z4 install
Preparing to install zone z4.
ERROR: No such file or directory: cannot mount /zfszones/z4/root/var 
in non-global zone to install: the source block device or directory 
rpool/ROOT/s10x_u6wos_07b/zfszones/z1/var cannot be accessed

ERROR: cannot setup zone z4 inherited and configured file systems
ERROR: cannot setup zone z4 file systems inherited and configured from 
the global zone

ERROR: cannot create zone boot environment z4

I don't fully  understand the failures here.  I suspect that there are
problems both in the zfs code and zones code.  It SHOULD work though.
The fact that it doesn't seems like a bug.

In the meantime, I guess we have to conclude that a separate /var
in a non-global zone is not supported on zfs.  A separate /var in
the global zone is supported  however, even when the root is zfs.

Lori



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] MIgrating to ZFS root/boot with system in several datasets

2008-11-18 Thread Lori Alt

On 11/08/08 15:24, Jesus Cea wrote:


  

Any advice?. Suggestions/alternative approaches welcomed.
  

One obvious question - why?



Two reasons:
  

The SXCE code base really only supports BEs that are
either all in one dataset, or have everything but /var in
one dataset and /var in its own dataset (the reason for
supporting a separate /var is to be able to set a quota
on it so that growth in log files, etc. can't fill up a
root pool).
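
(A minimal illustration of that quota idea; the BE and dataset names
below are hypothetical:)

  # keep /var from growing into the rest of the root pool
  zfs set quota=8G rpool/ROOT/s10u6/var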

1. Backup policies and ZFS properties.
  

I can see wanting to do this, but right now, it's likely
to cause problems.  Maybe it will be supported
eventually.


2. I don't have enough spare space to rejoin all system slices in a
single one.
  

Since all parts of Solaris should be in the same pool
(whether they are in one dataset or multiple datasets),
you need the same amount of space either way.
So this is not a valid reason to use multiple datasets.


I'm thinking of messing with the ICF.* files. Seems easy enough to try.

  

It might work, but no guarantees.

lori

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
Love is placing your happiness in the happiness of another - Leibniz
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs version number

2008-11-06 Thread Lori Alt

Thanks for pointing this out.  This has now been corrected.

Lori

Francois Dion wrote:


First,

Congrats to whoever/everybody was involved in getting zfs booting in 
solaris 10 u6. This is killer.


Second, somebody who has admin access to this page here:
http://www.opensolaris.org/os/community/zfs/version/10/

Solaris 10 U6 is not mentioned. It is mentioned here:
http://www.opensolaris.org/os/community/zfs/version/9/

But U6 is ZFS v 10.

Thanks!
Francois



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fwd: Re: FYI: s10u6 LU issues

2008-11-04 Thread Lori Alt


Would you send the messages that appeared with
the failed ludelete?

Lori

Dick Hoogendijk wrote:


Newsgroups: comp.unix.solaris
From: Dick Hoogendijk [EMAIL PROTECTED]
Subject: Re: FYI: s10u6 LU issues

quoting cindy (Mon, 3 Nov 2008 13:01:07 -0800 (PST)):
 


Besides the release notes, I'm collecting many issues here as well:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
   



Hi Cindy,

As you are collecting weird things with ZFS I think I have one for you:
I have s10u6 fully running off ZFS, including my sparse zones.
Yesterday I did a lucreate -n zfsBE and the new snapshots/clones were
created as expected. The clones were fully mountable too and seemed in
order. That pleased me, because on UFS the zones never were handled OK
with LU and now they were.

However, today I did a ludelete zfsBE and it was/is refused.
Here are some messages from lustatus, ludelete, zfs list and df.
The BE cannot be deleted and I can't understand the errors.
I do know how to get rid of zfsBE (/etc/lutab and /etc/lu/ICF.x) and I
can remove the zfs data by removing the clone/snapshot, I guess. But
ludelete should be able to do it too, don't you think?
To be able to create == to be able to delete ;-)
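
(Roughly the manual cleanup being alluded to; the dataset name and ICF
file number below are assumptions, not values from this system:)

  # destroy the clone dataset LU created for the BE
  zfs destroy -r rpool/ROOT/zfsBE
  # remove the BE's ICF file (the number is hypothetical)
  rm /etc/lu/ICF.2
  # then delete the zfsBE line from /etc/lutab by hand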

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Custom Jumpstart and RAID-10 ZFS rpool

2008-10-29 Thread Lori Alt

Kyle McDonald wrote:

Ian Collins wrote:
  

Stephen Le wrote:
  


Is it possible to create a custom Jumpstart profile to install Nevada
on a RAID-10 rpool? 

  

No, simple mirrors only.
  

Though a finish script could add additional simple mirrors to create 
the config his example would have created.

Pretty sure that's still not RAID10 though.
  

No, this isn't possible.  The attempt to add additional
top-level vdevs will fail because it's not supported for
root pools.

One top-level vdev only.  We hope to relax this
restriction and support both multiple top-level
vdevs and RAID-Z for root pools in the future.
But that work hasn't started yet, so I have no
estimates at all for when it might be available.

lori
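
(For reference, the supported shape is a single mirrored top-level vdev
on the profile's pool line, roughly as in this sketch; the device names
are just placeholders:)

  pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0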

And any files laid down by the installer would be constrained to the 
first mirrored pair; only new files would have a chance at being 
distributed over the additional pairs.


 -Kyle

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] lu ZFS root questions

2008-10-28 Thread Lori Alt
Karl Rossing wrote:
 Currently running b93.

 I'd like to try out b101.

 I previously had b90 running on the system. I ran ludelete snv_90_zfs

 but I still see snv_90_zfs:
 $ zfs list
 NAME                          USED   AVAIL  REFER  MOUNTPOINT
 rpool                         52.9G  6.11G  31K    /rpool
 rpool/ROOT                    41.0G  6.11G  19K    /rpool/ROOT
 rpool/ROOT/snv_90_zfs         29.6G  6.11G  29.3G  /.alt.tmp.b-Ugf.mnt/
 rpool/ROOT/[EMAIL PROTECTED]   319M  -      29.6G  -
 rpool/ROOT/snv_93             11.4G  6.11G  26.4G  /
 rpool/dump                    3.95G  6.11G  3.95G  -
 rpool/swap                    8.00G  13.1G  1.05G  -


 I also did some house cleaning and deleted about 10GB of data, but I don't 
 see that reflected in zfs list or df -h.

 Should I be concerned before I do a lucreate/luupgrade?
   
Probably, because it looks like you might not
have enough space.

Was there any output from ludelete?

If you do a lustatus now, does the BE still show up?

lori
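
(One quick check, sketched here, is how much space snapshots still hold,
since that is what usually keeps deleted data from showing up as freed:)

  zfs list -t snapshot -o name,used -r rpool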
 Thanks
 Karl






 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Enable compression on ZFS root

2008-10-15 Thread Lori Alt

Richard Elling wrote:

Vincent Fox wrote:
  

Does it seem feasible/reasonable to enable compression on ZFS root disks during 
JumpStart?

Seems like it could buy some space and performance.
  



Yes.  There have been several people who do this regularly.
Glenn wrote a blog on how to do this when installing OpenSolaris 2008.05
http://blogs.sun.com/glagasse/entry/howto_enable_zfs_compression_when

I haven't used JumpStart in some time, but it appears as though the last
arguments of the pool keyword are passed to zpool create, so you should
be able to set the parameters there.  Cindy?
  

No, the last arguments are not options.  Unfortunately,
the syntax doesn't provide a way to specify compression
at the creation time.  It should, though.  Or perhaps
compression should be the default.

lori
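
(A common workaround, sketched below, is to turn compression on from a
finish script or first-boot step; only data written after that point
gets compressed:)

  # finish-script step (illustrative): descendants inherit the setting
  zfs set compression=on rpool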



 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] am I screwed?

2008-10-13 Thread Lori Alt


It would also be useful to see the output of `zfs list`
and `zfs get all rpool/ROOT/snv_99` while
booted from the failsafe archive.

- lori



dick hoogendijk wrote:

James C. McPherson wrote:
  

Please add -kv  to the end of your kernel$ line in
grub,



#GRUB kernel$ add -kv
cmdk0 at ata0 target 0 lun 0
cmdk0 is /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED]/cmdk0,0
### end: the machine hangs here

  

have you tried
mount -F zfs rpool/ROOT/snv_99 /a



# mount -F zfs rpool/ROOT/snv99 /a   (*)
filesystem 'rpool/ROOT/snv99' cannot be mounted using 'mount -F zfs'
Use 'zfs set mountpoint=/a' instead
If you must use 'mount -F zfs' or /etc/vfstab, use 'zfs set
mountpoint=legacy'.
See zfs(1M) for more information.
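
(Spelled out, the two routes that message offers are, as a rough sketch:)

  # either give the dataset an explicit mountpoint ...
  zfs set mountpoint=/a rpool/ROOT/snv99
  # ... or make it a legacy mount and mount it by hand
  zfs set mountpoint=legacy rpool/ROOT/snv99
  mount -F zfs rpool/ROOT/snv99 /a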

  

zpool status -v rpool


# zpool status -v rpool
pool:   rpool
state:  ONLINE
scrub:  none requested
config:
NAME      STATE   READ  WRITE  CKSUM
rpool     ONLINE     0      0      0
  c0d0s0  ONLINE     0      0      0

  

zpool get all rpool



# zpool get all rpool
NAME   PROPERTY       VALUE             SOURCE
rpool  size           148G              -
       used           61.1G
       available      89.9G
       capacity       41%
       altroot        /a                local
       health         ONLINE
       guid           936248...
       version        13
       bootfs         rpool/ROOT/snv99  local
       delegation     on                default
       autoreplace    off               default
       cachefile      none              local
       failmode       continue          local
       listsnapshots  off               default

(*) If I choose not to mount the boot env rpool/ROOT/snv99 rw on /a, I
cannot do a thing in my failsafe login. The dataset does not exist then.
I have to say 'y' to the question; get some error msgs that it fails but
then at least I can see the pool.

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] am I screwed?

2008-10-13 Thread Lori Alt

dick hoogendijk wrote:

Lori Alt wrote:
  

It would also be useful to see the output of `zfs list`
and `zfs get all rpool/ROOT/snv_99` while
booted from the failsafe archive



# zfs list
rpool             69.0G  76.6G  40K    /a/rpool
rpool/ROOT        22.7G  18K    legacy
rpool/ROOT/snv99  22.7G  10.8G  /a/.alt.tmp.b-yh.mnt/
  


Well, this is clearly a problem.  I've seen it before
(there are some bugs in LU that can leave the root
dataset of a BE in this state), but I've always seen the
system boot anyway because the zfs_mountroot
code in the kernel appeared to ignore the value
of the mountpoint property and just mounted the
dataset at / anyway. 


Since I don't fully understand the problem, I can't
be sure this will work, but I'm pretty sure it won't
hurt:  try setting the mountpoint of the dataset to /:

zfs set mountpoint=/  rpool/ROOT/snv99

I do mean /, not /a.  After executing the command,
the mountpoint shown with `zfs list` should be /a
(in the zfs list output, the current alternate root for
the pool is prepended to the persistent mount point
of the datasets to show the effective mount point).

Then reboot and see if your problem is solved.  If not,
we'll dig deeper with kmdb into what's happening.

Lori

rpool/dump         1.50G  1.50G  -
rpool/export       36.6G  796M   /a/export
rpool/export/home  36.8G  35.8G  /a/export/home
rpool/local        234M   234M   /a/usr/local
rpool/swap         8G     84.6G  6.74G  -

# zfs get all rpool/ROOT/snv99
type filesystem
creation wed ...
used 22.7G
available 76.6G
referenced 10.8G
compressratio 1.00x
mounted no
quota none
reservation none
recordsize 128K
mountpoint /a/.alt.tmp.b-yh.mnt/
sharefs off
checksum on
compression off
atime on
devices on
exec on
setuid on
readonly off
zoned off
snapdir hidden
aclmode groupmask
aclinherit restricted
canmount noauto
shareiscsi off
xattr on
copies 1
version 3
utf8only off
normalization none
casesensitivity sensitive
vscan off
nbmand off
sharesmb off
refquota none
refreservation none
primarycache all
secondarycache all
usedbysnapshots 11.9G
usedbydataset 10.8G
usedbychildren 0
usedbyrefreservation 0

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

