On 11/22/11 17:54, Jim Klimov wrote:
2011-11-23 2:26, Lori Alt wrote:
Did you try a temporary mount point?
zfs mount -o mountpoint=/whatever dataset
- lori
I do not want to guess, so I'll hold off on a definite answer.
I think I've tried that, but I'm not certain now. I'll try
to recreate
On 09/ 6/11 11:45 PM, Daniel Carosone wrote:
On Tue, Sep 06, 2011 at 10:05:54PM -0700, Richard Elling wrote:
On Sep 6, 2011, at 9:01 PM, Freddie Cash wrote:
For example, does 'zfs send -D' use the same DDT as the pool?
No.
My understanding was that 'zfs send -D' would use the pool's DDT in
On 09/ 7/11 02:20 PM, Daniel Carosone wrote:
On Wed, Sep 07, 2011 at 08:47:36AM -0600, Lori Alt wrote:
On 09/ 6/11 11:45 PM, Daniel Carosone wrote:
My understanding was that 'zfs send -D' would use the pool's DDT in
building its own, if present.
It does not use the pool's DDT, but it does use
On 04/ 6/11 07:59 AM, Arjun YK wrote:
Hi,
I am trying to use ZFS for boot, and kind of confused about how the
boot partitions like /var are to be laid out.
With old UFS, we create /var as a separate filesystem to avoid various
logs filling up the / filesystem
I believe that creating /var as a
On 04/ 6/11 11:42 AM, Paul Kraus wrote:
On Wed, Apr 6, 2011 at 1:14 PM, Brandon High bh...@freaks.com wrote:
The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you
Try install-disc...@opensolaris.org as this is more of a liveupgrade
issue than a zfs issue.
Lori
On 10/25/10 10:20 AM, Kartik Vashishta wrote:
I created a new BE for patching, I applied patches to that BE, I then activated
the new BE, this is the message I got:
On 09/23/10 04:40 PM, Frank Middleton wrote:
Bumping this because no one responded. Could this be because
it's such a stupid question no one wants to stoop to answering it,
or because no one knows the answer? Trying to picture, say, what
could happen in /var (say /var/adm/messages), let alone a
On 09/13/10 09:40 AM, Buck Huffman wrote:
I have a flash archive that is stored in a ZFS snapshot stream. Is there a way
to mount this image so I can read files from it?
No, but you can use the flar split command to split the flash archive
into its constituent parts, one of which will be
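A rough sketch of that, assuming Solaris 10's flar(1M) and placeholder paths:

```shell
# Split the flash archive into its sections (identification, archive, ...)
# in a scratch directory; /path/to/system.flar is a placeholder.
mkdir /tmp/flarparts && cd /tmp/flarparts
flar split /path/to/system.flar
ls    # inspect the extracted sections
```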
I don't know much about JET, but a jumpstart install of a system with a
zfs root will do the necessary disk formatting. The profile keywords
that describe the disk layout work more or less the same for zfs as they
do for ufs, subject to the ways that zfs is different from ufs (you
don't
The setting of the content_architectures field is likely to be
independent of the file system type, so at least at first glance, I
don't think that this is zfs issue. You might try this question at
install-disc...@opensolaris.org.
Lori
On 07/13/10 11:45 AM, Ketan wrote:
I have created a
On 07/ 7/10 11:33 AM, Garrett D'Amore wrote:
On Wed, 2010-07-07 at 10:09 -0700, Mark Christooph wrote:
I had an interesting dilemma recently and I'm wondering if anyone here can
illuminate on why this happened.
I have a number of pools, including the root pool, in on-board disks on the
On 07/ 6/10 10:56 AM, Ketan wrote:
I have two different servers with ZFS root, but each has a different
mountpoint for rpool/ROOT: one is /rpool/ROOT and the other is legacy.
It should be legacy.
What's
the difference between the two, and which one should we keep?
And why there is
on the
caiman-disc...@opensolaris.org alias, where needs like this can be
addressed for real, in the supported installation tools.
Lori
On 06/23/10 18:15, Lori Alt wrote:
Cindy Swearingen wrote:
On 06/23/10 10:40, Evan Layton wrote:
On 6/23/10 4:29 AM, Brian Nitz wrote:
I saw a problem
Cindy Swearingen wrote:
On 06/23/10 10:40, Evan Layton wrote:
On 6/23/10 4:29 AM, Brian Nitz wrote:
I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub failed:
# BE_PRINT_ERR=true beadm activate opensolarismigi-4
Was a bug ever filed against zfs for not allowing the bootfs property to
be set to ? We should always let that request succeed.
lori
On 05/26/10 09:09 AM, Cindy Swearingen wrote:
Hi--
I'm glad you were able to resolve this problem.
I drafted some hints in this new section:
First, I suggest you open a bug at https://defect.opensolaris.org/bz
and get a bug number.
Then, name your core dump something like bug.bugnumber and upload it
using the instructions here:
http://supportfiles.sun.com/upload
Update the bug once you've uploaded the core and supply the
On 05/12/10 04:29 AM, Ian Collins wrote:
I just tried moving a dump volume form rpool into another pool so I
used zfs send/receive to copy the volume (to keep some older dumps)
then ran dumpadm -d to use the new location. This caused a panic.
Nothing ended up in messages and needless to say,
On 03/31/10 03:50 AM, Damon Atkins wrote:
Why do we still need /etc/zfs/zpool.cache file???
(I could understand it was useful when zfs import was slow)
zpool import is now multi-threaded
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844191), hence a
lot faster, each disk
You might want to take this issue over to
caiman-disc...@opensolaris.org, because this is more of an
installation/management issue than a zfs issue. Other than providing a
mechanism for updating the zpool.cache file, the actions listed below
are not directly related to zfs.
I believe that
On 03/22/10 05:04 PM, Brandon High wrote:
On Mon, Mar 22, 2010 at 10:26 AM, Richard Elling
richard.ell...@gmail.com wrote:
NB. deduped streams should further reduce the snapshot size.
I haven't seen a lot of discussion on the list regarding send dedup,
I think what you're saying is: Why bother trying to backup with zfs
send
when the recommended practice, fully supportable, is to use other tools
for
backup, such as tar, star, Amanda, bacula, etc. Right?
The answer to this is very simple.
#1 ...
#2 ...
Oh, one more thing. zfs send
On 03/ 2/10 11:48 AM, Freddie Cash wrote:
On Tue, Mar 2, 2010 at 7:15 AM, Kjetil Torgrim Homme
kjeti...@linpro.no wrote:
valrh...@gmail.com writes:
I have been using DVDs for small
Hi Bruno,
I've tried to reproduce this panic you are seeing. However, I had
difficulty following your procedure. See below:
On 02/08/10 15:37, Bruno Damour wrote:
On 02/ 8/10 06:38 PM, Lori Alt wrote:
Can you please send a complete list of the actions taken: The
commands you used
This sounds more like an install issue than a zfs issue. I suggest you
take this to caiman-disc...@opensolaris.org .
Lori
On 02/10/10 23:44, Jeff Rogers wrote:
Thanks for the tip but it was not that. The two hard drives were running under
RAID 1 on my Linux install so the two drives have
On 02/11/10 08:15, taemun wrote:
Can anyone comment about whether the on-boot "Reading ZFS config" step is
any slower/better/whatever than deleting zpool.cache, rebooting and
manually importing?
I've been waiting more than 30 hours for this system to come up. There
is a pool with 13TB of data attached.
Can you please send a complete list of the actions taken: The commands
you used to create the send stream, the commands used to receive the
stream. Also the output of `zfs list -t all` on both the sending and
receiving sides. If you were able to collect a core dump (it should be
in
On 01/25/10 17:56, Shannon Fiume wrote:
Hi,
I installed opensolaris on a x2200 m2 with two internal drives that
had an existing root pool with a Solaris 10 update 6. After installing
opensolaris 2009.06 the host refused to boot. The opensolaris install
was fine. I had to pull the second hard
First, you might want to send this out to caiman-disc...@opensolaris.org
as well in order to find the experts in the OpenSolaris install
process. (I am familiar with zfs booting in general and the legacy
installer, but not so much about the OpenSolaris installer).
Second, including the
On 01/28/10 12:05, Dick Hoogendijk wrote:
Op 28-1-2010 17:35, Cindy Swearingen schreef:
Thomas,
Excellent and much better suggestion... :-)
You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.
On 01/28/10 14:08, dick hoogendijk wrote:
On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
But those could be copied by send/recv from the larger disk (current
root pool) to the smaller disk (intended new root pool). You won't be
attaching anything until you can boot off the smaller disk
On 01/25/10 16:08, Daniel Carosone wrote:
On Mon, Jan 25, 2010 at 05:42:59PM -0500, Miles Nordin wrote:
et You cannot import a stream into a zpool of earlier revision,
et though the reverse is possible.
This is very bad, because it means if your backup server is pool
version 22,
On 01/22/10 01:55, Alexander wrote:
Is it possible to have /etc on a separate zfs pool in OpenSolaris?
The purpose is to have rw non-persistent main pool and rw persistent /etc...
I've tried to make legacy etcpool/etc file system and mount it in /etc/vfstab...
Is it possible to extend
Also, you might want to pursue this question at
caiman-disc...@opensolaris.org, since that's where you'll find the
experts on beadm.
Lori
On 01/04/10 10:46, Cindy Swearingen wrote:
Hi Garen,
Does this system have a mirrored root pool and if so, is
a p0 device included as a root pool
On 12/15/09 09:26, Luca Morettoni wrote:
As reported here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsbootFAQ
we can't boot from a pool with raidz, any plan to have this feature?
At this time, there is no scheduled availability for raidz boot. It's
on the list of possible
Kyle McDonald wrote:
Hi Darren,
More below...
Darren J Moffat wrote:
Tristan Ball wrote:
Obviously sending it deduped is more efficient in terms of bandwidth
and CPU time on the recv side, but it may also be more complicated
to achieve?
A stream can be deduped even if the on disk format
On 10/01/09 09:25, camps support wrote:
I did zpool import -R /tmp/z rootpool
It only mounted /export and /rootpool only had /boot and /platform.
I need to be able to get to /etc and /var.
You need to explicitly mount the root file system (its canmount
property is set to noauto, which means
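The commands involved look roughly like this (pool and BE names are
assumptions; the root dataset's canmount=noauto is why the extra mount
step is needed):

```shell
# Import the pool under an alternate root, then mount the root
# filesystem by hand; canmount=noauto keeps it from mounting on import.
# "rootpool" and "myBE" are placeholder names.
zpool import -R /tmp/z rootpool
zfs mount rootpool/ROOT/myBE
# /tmp/z now holds the BE's /etc, /var, and so on.
```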
On 09/28/09 15:54, Igor Velkov wrote:
zfs receive should allow an option to disable the immediate mount of the received filesystem.
If the original filesystem's mountpoints have changed, it's hard to make a clone fs with send-receive, because the received filesystem immediately tries to mount to the old
On 09/28/09 16:16, Igor Velkov wrote:
Not so good as I hope.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx zfs
recv -vuFd xxx/xxx
invalid option 'u'
usage:
receive [-vnF] filesystem|volume|snapshot
receive [-vnF] -d filesystem
For the property
Bill Sommerfeld wrote:
On Fri, 2009-09-25 at 14:39 -0600, Lori Alt wrote:
The list of datasets in a root pool should look something like this:
...
rpool/swap
I've had success with putting swap into other pools. I believe others
have, as well.
Yes, that's true
The whole pool. Although you can choose to exclude individual datasets
from the flar when creating it.
lori
On 09/25/09 12:03, Peter Pickford wrote:
Hi Lori,
Is the u8 flash support for the whole root pool or an individual BE
using live upgrade?
Thanks
Peter
2009/9/24 Lori Alt lori
On 09/25/09 13:35, David Abrahams wrote:
Hi,
Since I don't even have a mirror for my root pool rpool, I'd like to
move as much of my system as possible over to my raidz2 pool, tank.
Can someone tell me which parts need to stay in rpool in order for the
system to work normally?
Thanks.
The
I have no idea why that last mail lost its line feeds. Trying again:
On 09/25/09 13:35, David Abrahams wrote:
Hi,
Since I don't even have a mirror for my root pool rpool, I'd like to
move as much of my system as possible over to my raidz2 pool, tank.
Can someone tell me which parts need
On 09/24/09 15:54, Peter Pickford wrote:
Hi Cindy,
Wouldn't
touch /reconfigure
mv /etc/path_to_inst* /var/tmp/
regenerate all device information?
It might, but it's hard to say whether that would accomplish everything
needed to move a root file system from one system to another.
I just
Erik Trimble wrote:
Lori Alt wrote:
On 09/15/09 06:27, Luca Morettoni wrote:
On 09/15/09 02:07 PM, Mark J Musante wrote:
zfs create -o version=N pool/filesystem
is it possible to implement in a future version of ZFS a send command
that targets a specific stream release, like:
# zfs send -r2 snap ...
to send
On 09/16/09 11:56, Erik Trimble wrote:
Lori Alt wrote:
On 09/16/09 10:48, Marty Scholes wrote:
Lori Alt wrote:
As for being able to read streams of a later format
on an earlier version of ZFS, I don't think that will ever be
supported. In that case, we really would have to somehow convert
On 09/15/09 06:27, Luca Morettoni wrote:
On 09/15/09 02:07 PM, Mark J Musante wrote:
zfs create -o version=N pool/filesystem
is it possible to implement in a future version of ZFS a send command
that targets a specific stream release, like:
# zfs send -r2 snap ...
to send a specific release (version 2 in the
On 09/04/09 09:41, dick hoogendijk wrote:
Lori Alt wrote:
The -n option does some verification. It verifies that the record
headers distributed throughout the stream are syntactically valid.
Since each record header contains a length field which allows the
next header to be found, one bad
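A minimal sketch of that dry-run check (the stream path and target name
are assumptions; with -n nothing is actually received):

```shell
# Walk the stream's record headers without changing the pool.
# /backup/rpool.zsend and tank/test are placeholder names.
zfs receive -vn tank/test < /backup/rpool.zsend
```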
On 09/04/09 10:17, dick hoogendijk wrote:
Lori Alt wrote:
The -u option to zfs recv (which was just added to support flash
archive installs, but it's useful for other reasons too) suppresses
all mounts of the received file systems. So you can mount them
yourself afterward in whatever order
I agree and Cindy Swearingen and I are talking to marketing to get this
fixed. Thanks to all for bringing this to our attention.
lori
On 09/03/09 00:55, Ross wrote:
I agree, mailing that to all Sun customers is something I think is likely to
turn around and bite you.
A lot of people are
yes to all the comments below. Those are all mitigating factors. But I
also agree with Ross and Mike and others that we should be more clear
about when send/recv is appropriate and when it's not the best choice.
We're looking into it.
Lori
On 09/03/09 10:06, Richard Elling wrote:
On
On 09/03/09 14:21, dick hoogendijk wrote:
On Wed, 2 Sep 2009 13:06:35 -0500 (CDT)
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
Nothing prevents validating the self-verifying archive file via this
zfs recv -vn technique.
Does this verify the ZFS format/integrity of the stream?
On 08/31/09 08:30, Henrik Bjornstrom - Sun Microsystems wrote:
Hi !
Has anyone given an answer to this that I have missed? I have a
customer who has the same question and I want to give him a correct
answer.
/Henrik
Ketan wrote:
I created a snapshot and subsequent clone of a zfs
On 08/29/09 05:41, Robert Milkowski wrote:
casper@sun.com wrote:
Randall Badilla wrote:
Hi all:
First: is it possible to modify the boot zpool rpool after OS
installation...? I installed the OS on the whole 72GB hard disk.. it
is mirrored, so if I want to decrease the rpool; for example
archive_location nfs schubert:/export/home/lalt/U8.flar
partitioning explicit
pool rpool auto auto auto mirror c0t1d0s0 c0t0d0s0
I think that covers the basics.
Lori
Thanks again.
Kris Kasner
Qualcomm Inc.
Jul 9 at 16:41, Lori Alt lori@sun.com wrote:
On 07/09/09 17:25, Mark Michael wrote
On 08/06/09 12:19, Robert Lawhead wrote:
I'm puzzled by the size reported for incremental zfs send|zfs receive. I'd expect the
stream to be roughly the same size as the used blocks reported by zfs list.
Can anyone explain why the stream size reported is so much larger than the used data in
In general, questions about beadm and related tools should be sent or at
least cross-posted to install-disc...@opensolaris.org.
Lori
On 07/24/09 07:04, Jean-Noël Mattern wrote:
Axelle,
You can safely run beadm destroy opensolaris if everything's
all right with your new opensolaris-1 boot
On 07/11/09 05:15, iman habibi wrote:
Dear Admins
I had solaris 10u8 installation based on ZFS (rpool)filesystem on two
mirrored scsi disks in sunfire v880.
but after some months, when I rebooted the server with the reboot command,
it didn't boot from the disks, and returned can't boot from boot media.
how can i
Flash archive on zfs means archiving an entire root pool (minus any
explicitly excluded datasets), not an individual BE. These types of
flash archives can only be installed using Jumpstart and are intended to
install an entire system, not an individual BE.
Flash archives of a single BE could
On 07/09/09 17:25, Mark Michael wrote:
Thanks for the info. Hope that the pfinstall changes to support zfs root flash
jumpstarts can be extended to support luupgrade -f at some point soon.
BTW, where can I find an example profile? do I just substitute in the
install_type flash_install
On 07/08/09 13:43, Bob Friesenhahn wrote:
On Wed, 8 Jul 2009, Jerry K wrote:
It has been a while since this has been discussed, and I am hoping
that you can provide an update, or time estimate. As we are several
months into Update 7, is there any chance of an Update 7 patch, or
are we still
On 07/08/09 15:57, Carl Brewer wrote:
Thank you! Am I right in thinking that rpool snapshots will include things like
swap? If so, is there some way to exclude them? Much like rsync has --exclude?
By default, the zfs send -R will send all the snapshots, including
swap and dump. But you
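The reply above is cut off before the workaround, but one common
approach (a sketch with assumed names, on a build where receive -u is
available) is to replicate everything and then drop the swap and dump
volumes on the receiving side:

```shell
# Replicate the whole root pool, then discard the swap and dump copies.
# "backup" is a placeholder destination pool.
zfs snapshot -r rpool@backup
zfs send -R rpool@backup | zfs receive -Fdu backup
zfs destroy -r backup/swap
zfs destroy -r backup/dump
```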
To elaborate, the -u option to zfs receive suppresses all mounts. The
datasets you extract will STILL have mountpoints that might not work on
the local system, but at least you can unpack the entire hierarchy of
datasets and then modify mountpoints as needed to arrange to make the
file
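For example (the dataset names and stream file are assumptions):

```shell
# Receive without mounting anything, then fix mountpoints at leisure.
zfs receive -u -d tank/restore < /backup/rpool.zsend
zfs set mountpoint=/mnt/restore tank/restore/ROOT/myBE
zfs mount tank/restore/ROOT/myBE
```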
On 06/28/09 08:41, Ross wrote:
Can't you just boot from an OpenSolaris CD, create a ZFS pool on the new
device, and just do a ZFS send/receive directly to it? So long as there's
enough space for the data, a send/receive won't care at all that the systems
are different sizes.
I don't know
On 06/27/09 23:50, Ian Collins wrote:
Leela wrote:
So no one has any idea?
About what?
This was in regards to a question sent to the install-discuss alias on
6/18 and later copied to zfs-discuss. I have answered it on the install
alias, if anyone is following the issue.
Lori
This is probably 6696858. The fix is known, but I don't know when it's
expected to become available. I have asked the CR's responsible engineer
to update it with when the fix is expected.
Lori
On 06/22/09 14:03, Charles Hedrick wrote:
I'm trying to do a simple backup. I did
zfs snapshot
On 06/16/09 16:32, Jens Elkner wrote:
Hmmm,
just upgraded some servers to U7. Unfortunately one server's primary disk
died during the upgrade, so that luactivate was not able to activate the
s10u7 BE (Unable to determine the configuration ...). Since the rpool
is a 2-way mirror, the
Frank Middleton wrote:
On 06/03/09 09:10 PM, Aurélien Larcher wrote:
PS: for the record I roughly followed the steps of this blog entry
= http://blogs.sun.com/edp/entry/moving_from_nevada_and_live
Thanks for posting this link! Building pkg with gdb was an
interesting exercise, but it
On 06/09/09 18:15, Krenz von Leiberman wrote:
When I take a snapshot of my rpool (of which /export/... is a part), ZFS
ignores all the data in it and doesn't take any snapshots...
How do I make it include /export in my snapshots?
BTW, I'm running on Solaris 10 Update 6 (Or whatever is the
A root pool is composed of one top-level vdev, which can be a mirror
(i.e. 2 or more disks). A raidz vdev is not supported for the root pool
yet. It might be supported in the future, but the timeframe is unknown
at this time.
Lori
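In practice, mirroring a root pool means attaching disks to its single
top-level vdev, roughly like this (slice names are assumptions;
installgrub applies to x86):

```shell
zpool create rpool c0t0d0s0             # single-disk root pool
zpool attach rpool c0t0d0s0 c0t1d0s0    # grow it into a 2-way mirror
# Make the new disk bootable (x86; sparc uses installboot instead):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```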
Colleen wrote:
As I understand it, you cannot currently
Carson Gaspar wrote:
Lori Alt wrote:
A root pool is composed of one top-level vdev, which can be a mirror
(i.e. 2 or more disks). A raidz vdev is not supported for the root
pool yet. It might be supported in the future, but the timeframe is
unknown at this time.
The original poster
On 05/21/09 22:40, Ian Collins wrote:
Mark J Musante wrote:
On Thu, 21 May 2009, Ian Collins wrote:
I'm trying to use zfs send/receive to replicate the root pool of a
system and I can't think of a way to stop the received copy
attempting to mount the filesystem over the root of the
This sounds like a good idea to me, but it should be brought up
on the caiman-disc...@opensolaris.org mailing list, since this
is not just, or even primarily, a zfs issue.
Lori
Rich Teer wrote:
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified.
I'm not sure where this issue stands now (am just now checking
mail after being out for a few days), but here are the block sizes
used when the install software creates swap and dump zvols:
swap: block size is set to PAGESIZE (4K for x86, 8K for sparc)
dump: block size is set to 128 KB
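Re-created by hand, that corresponds roughly to (the sizes and pool
name are assumptions):

```shell
# -b sets the volume block size; values mirror what the installer uses.
zfs create -V 4G -b 4k rpool/swap     # x86 PAGESIZE; use -b 8k on sparc
zfs create -V 2G -b 128k rpool/dump
```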
no, this is an incorrect diagnosis. The problem is that by
using the -V option, you created a volume, not a file system.
That is, you created a raw device. You could then newfs
a ufs file system within the volume, but that is almost certainly
not what you want.
Don't use -V when you create
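The difference can be seen directly (the names are placeholders):

```shell
zfs create -V 10G tank/vol1   # a zvol: raw device at /dev/zvol/dsk/tank/vol1
zfs create tank/fs1           # a filesystem, mounted at /tank/fs1 by default
```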
On 02/24/09 12:57, Christopher Mera wrote:
How is it that flash archives can avoid these headaches?
Are we sure that they do avoid this headache? A flash archive
(on ufs root) is created by doing a cpio of the root file system.
Could a cpio end up archiving a file that was mid-way through
I don't know what's causing this, nor have I seen it.
Can you send more information about the errors you
see when the system crashes and svc.configd fails?
Doing the scrub seems like a harmless and possibly
useful thing to do. Let us know what you find out
from it.
Lori
On 02/23/09 11:05,
Dave wrote:
Frank Cusack wrote:
When you try to backup the '/' part of the root pool, it will get
mounted on the altroot itself, which is of course already occupied.
At that point, the receive will fail.
So far as I can tell, mounting the received filesystem is the last
step in the
://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs
It's not the same as flash archive support, but it accomplishes
some of the same goals.
- Lori
thank you,
Jerry Kemp
On 02/18/09 10:40, Lori Alt wrote:
Latest is that this will go into an early build of Update 8
Lori,
Any update to this issue, and can you speculate as to if it will be a
patch to Solaris 10u6, or part of 10u7?
Thanks again,
Jerry
Lori Alt wrote:
This is in the process of being resolved right now. Stay tuned
for when it will be available. It might be a patch to Update 6
On 02/11/09 12:14, Jonny Gerold wrote:
I have a non bootable disk and need to recover files from /root...
When I import the disk via zpool import, /root isn't mounted...
Thanks, Jonny
On 02/11/09 01:28, Sandro wrote:
Hey Cindy
Thanks for your help.
How would I configure a 2-way mirror pool for a root pool?
Basically I'd do it this way.
zpool create pool mirror disk0 disk2 mirror disk1 disk3
This command does not create a valid root pool. Root pools cannot
have more
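To make the constraint concrete (slice names are assumptions): the
quoted command creates two top-level vdevs, which a root pool cannot
have; a valid root pool uses a single mirror vdev:

```shell
# Invalid for a root pool: two top-level mirror vdevs.
#   zpool create pool mirror disk0 disk2 mirror disk1 disk3
# Valid: one top-level vdev, here a 2-way mirror of slices.
zpool create rpool mirror c0t0d0s0 c0t1d0s0
```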
I accept nomination as a core contributor to the zfs
community.
Lori Alt
On 02/02/09 08:55, Mark Shellenbaum wrote:
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributors grants are set
to expire on 02-24-2009 we need
This is in the process of being resolved right now. Stay tuned
for when it will be available. It might be a patch to Update 6.
In the meantime, you might try this:
http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs
- Lori
On 01/09/09 12:28, Jerry K wrote:
I
On 12/18/08 12:57, Ian Collins wrote:
Shawn joy wrote:
Hi All,
I see from the zfs Best practices guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS Root Pool Considerations
* A root pool must be created with disk slices rather than whole disks.
On 12/10/08 12:15, Jesus Cea wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Ian Collins wrote:
I have ZFS root/boot in my environment, and I am interested in
separating /var into an independent dataset. How can I do it? I can use
Live Upgrade, if needed.
It's an install option.
On 12/02/08 03:21, jan damborsky wrote:
Hi Dick,
I am redirecting your question to zfs-discuss
mailing list, where people are more knowledgeable
about this problem and your question could be
better answered.
Best regards,
Jan
dick hoogendijk wrote:
I have s10u6 installed on my
On 12/02/08 09:00, Gary Mills wrote:
On Mon, Dec 01, 2008 at 04:45:16PM -0700, Lori Alt wrote:
On 11/27/08 17:18, Gary Mills wrote:
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
On Fri, Nov 28, 2008 at 07:39:43AM
On 12/02/08 11:29, dick hoogendijk wrote:
Lori Alt wrote:
On 12/02/08 03:21, jan damborsky wrote:
Hi Dick,
I am redirecting your question to zfs-discuss
mailing list, where people are more knowledgeable
about this problem and your question could be
better answered.
Best regards,
Jan
I don't want to steer you wrong under the circumstances,
so I think we need more information.
First, is the failure the same as in the earlier part of this
thread. I.e., when you boot, do you get a failure like this?
Warning: Fcode sequence resulted in a net stack depth change of 1
On 12/02/08 10:24, Mike Gerdts wrote:
On Tue, Dec 2, 2008 at 11:17 AM, Lori Alt [EMAIL PROTECTED] wrote:
I did pre-create the file system. Also, I tried omitting special and
zonecfg complains.
I think that there might need to be some changes
to zonecfg and the zone installation code to get
On 12/02/08 11:04, Brian Wilson wrote:
- Original Message -
From: Lori Alt [EMAIL PROTECTED]
Date: Tuesday, December 2, 2008 11:19 am
Subject: Re: [zfs-discuss] Separate /var
To: Gary Mills [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
On 12/02/08 09:00, Gary Mills wrote
On 11/27/08 17:18, Gary Mills wrote:
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
I'm currently working with an organisation who
On 11/08/08 15:24, Jesus Cea wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Any advice?. Suggestions/alternative approaches welcomed.
One obvious question - why?
Two reasons:
The SXCE code base really only supports BEs that are
either all in one dataset, or have
Thanks for pointing this out. This has now been corrected.
Lori
Francois Dion wrote:
First,
Congrats to whoever/everybody was involved getting zfs booting in
solaris 10 u6. This is killer.
Second, somebody who has admin access to this page here:
Would you send the messages that appeared with
the failed ludelete?
Lori
Dick Hoogendijk wrote:
Newsgroups: comp.unix.solaris
From: Dick Hoogendijk [EMAIL PROTECTED]
Subject: Re: FYI: s10u6 LU issues
quoting cindy (Mon, 3 Nov 2008 13:01:07 -0800 (PST)):
Besides the release notes, I'm
Kyle McDonald wrote:
Ian Collins wrote:
Stephen Le wrote:
Is it possible to create a custom Jumpstart profile to install Nevada
on a RAID-10 rpool?
No, simple mirrors only.
Though a finish script could add additional simple mirrors to create
the config his
Karl Rossing wrote:
Currently running b93.
I'd like to try out b101.
I previously had b90 running on the system. I ran ludelete snv_90_zfs
but I still see snv_90_zfs:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 52.9G 6.11G
Richard Elling wrote:
Vincent Fox wrote:
Does it seem feasible/reasonable to enable compression on ZFS root disks during
JumpStart?
Seems like it could buy some space performance.
Yes. There have been several people who do this regularly.
Glenn wrote a blog on how to do this
It would also be useful to see the output of `zfs list`
and `zfs get all rpool/ROOT/snv_99` while
booted from the failsafe archive.
- lori
dick hoogendijk wrote:
James C. McPherson wrote:
Please add -kv to the end of your kernel$ line in
grub,
#GRUB kernel$ add -kv
cmdk0
dick hoogendijk wrote:
Lori Alt wrote:
It would also be useful to see the output of `zfs list`
and `zfs get all rpool/ROOT/snv_99` while
booted from the failsafe archive
# zfs list
rpool 69.0G 76.6G 40K /a/rpool
rpool/ROOT22.7G 18K legacy