Is it possible to do a replace on the root filesystem as well?
Is it possible to do a replace on / as well?
Hello ZFS gurus and fellow fans.
As we all know, ZFS does not _yet_ support relayout of pools. Is there any
hope of this becoming available in the near future?
From my outside view it sounds like it should be possible to set a flag to
stop allocating new blocks from a
On 18 December, 2008 - Johan Hartzenberg sent me these 2,7K bytes:
Hello ZFS gurus and fellow fans.
As we all know, ZFS does not _yet_ support relayout of pools. Is there any
hope of this becoming available in the near future?
Hi,
since the hostid is stored in the label, zpool import fails if the hostid doesn't
match. Under certain circumstances (ldom failover) it means you have to
manually force the zpool import while booting. With more than 80 LDOMs on a
single host it would be great if we could configure the machine
Hi, All!
I've tried to install the bootblock with installboot and with dd too...
# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
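For what it's worth, the documented invocation for a ZFS root bootblock on
SPARC looks like this (device name hypothetical; on x86 installgrub is used
instead):
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0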
Do you use any form of compression?
I changed compression from none to gzip-9, got a message about changing
properties of the boot pool (or fs), copied and moved all files under /usr and /etc
to force compression, rebooted, and - guess what message I got.
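For reference, the change described above would have been something like
(dataset name hypothetical):
# zfs set compression=gzip-9 rpool
At the time only lzjb compression was bootable, which is presumably what the
warning was about.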
Hi, all.
I've just installed OpenSolaris 2008.11 and among the great features (zfs
being THE feature) I'm finding some minor annoyances. One of them is that I
can't create a zfs filesystem with an accented character in its name (the
encoding is utf-8):
pjl...@pc8120a:~$ zfs create
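For illustration, a command along these lines is what fails (the name is
hypothetical), since dataset component names are limited to alphanumerics
plus '_', '-', ':' and '.':
# zfs create rpool/export/home/joão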
On Thu, Dec 18, 2008 at 10:24:26AM +0200, Johan Hartzenberg wrote:
Similarly, adding a device into a raid-Z vdev seems easy to do: All future
writes include that device in the list of devices from which to allocate
blocks.
In general, I agree completely. But in practice there are limitations
Hi,
some more details to the question above.
We are using an ldom in a cluster environment, which means the ldom is relocatable
between two execution hosts. The ldom owns one zpool, named 'local', and this zpool
can be in use only from this domain. This zpool provides some zfs for an
application and
Daniel,
You can replace the disks in both of the supported root pool
configurations:
- single disk (non-redundant) root pool
- mirrored (redundant) root pool
I've tried both recently and I prefer attaching the replacement disk to
the single-disk root pool and then detaching the old disk, using
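A sketch of that procedure, with hypothetical device names:
# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool      (wait for the resilver to complete)
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
# zpool detach rpool c0t0d0s0
The installboot step (installgrub on x86) puts a boot block on the new disk so
the pool remains bootable after the old disk is detached.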
Cindy,
This is helpful! Thank you very much :)
1. Sorry for the delay in replying.
2. The reason I was originally using zfs destroy was that beadm destroy
failed.
3. Current state of affairs:
~# beadm list
BE      Active Mountpoint Space  Policy Created
--      ------ ---------- -----  ------ -------
Ethan,
1. No zones.
2. with BE_PRINT_ERR=true (sorry destroy now works)
~# beadm list
BE      Active Mountpoint Space  Policy Created
--      ------ ---------- -----  ------ -------
b101b   -      -          6.14G  static 2008-11-14 09:17
b103pre
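For reference, the invocation that now works would be roughly (BE name taken
from the listing above):
# BE_PRINT_ERR=true beadm destroy b101b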
On Thu, Dec 18, 2008 at 07:32:33AM -0800, Pedro Lobo wrote:
I've just installed OpenSolaris 2008.11 and among the great features
(zfs being THE feature) I'm finding some minor annoyances. One of them
is that I can't create a zfs filesystem with an accented character in
its name (the encoding
Torsten Weigel wrote:
Hi,
some more details to the question above.
We are using an ldom in a cluster environment, which means the ldom is
relocatable between two execution hosts. The ldom owns one zpool, named 'local',
and this zpool can be in use only from this domain. This zpool provides some
On Wed, Dec 17, 2008 at 10:02:18AM -0800, Ross wrote:
In fact, thinking about it, could this be more generic than just a USB
backup service?
Absolutely.
The tool shouldn't need to know that the backup disk is accessed via
USB, or whatever. The GUI should, however, present devices
Seymour Krebs wrote:
Ethan,
1. No zones.
2. with BE_PRINT_ERR=true (sorry destroy now works)
~# beadm list
BE      Active Mountpoint Space  Policy Created
--      ------ ---------- -----  ------ -------
b101b   -      -          6.14G
Absolutely.
The tool shouldn't need to know that the backup disk is accessed via
USB, or whatever. The GUI should, however, present devices
intelligently, not as cXtYdZ!
Yup, and that's easily achieved by simply prompting for a user-friendly
name as devices are attached. Now you could
On Thu, Dec 18, 2008 at 07:05:44PM +, Ross Smith wrote:
Absolutely.
The tool shouldn't need to know that the backup disk is accessed via
USB, or whatever. The GUI should, however, present devices
intelligently, not as cXtYdZ!
Yup, and that's easily achieved by simply prompting
Hi All,
I see from the zfs Best practices guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS Root Pool Considerations
* A root pool must be created with disk slices rather than whole disks.
Allocate the entire disk capacity for the root pool to slice 0,
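To make the slice point concrete, the difference is only in the device
argument (names hypothetical):
# zpool create rpool c0t0d0s0     a slice, as required for root pools
# zpool create tank c0t1d0        a whole disk, fine for data pools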
On Thu, Dec 18, 2008 at 7:11 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Thu, Dec 18, 2008 at 07:05:44PM +, Ross Smith wrote:
Absolutely.
The tool shouldn't need to know that the backup disk is accessed via
USB, or whatever. The GUI should, however, present devices
Shawn joy wrote:
Hi All,
I see from the zfs Best practices guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS Root Pool Considerations
* A root pool must be created with disk slices rather than whole disks.
Allocate the entire disk capacity for the
Shawn joy wrote:
Hi All,
I see from the zfs Best practices guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS Root Pool Considerations
* A root pool must be created with disk slices rather than whole
disks. Allocate the entire disk capacity for the
On Thu, Dec 18, 2008 at 07:55:14PM +, Ross Smith wrote:
On Thu, Dec 18, 2008 at 7:11 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
I was thinking more something like:
- find all disk devices and slices that have ZFS pools on them
- show users the devices and pool names (and
'rpool' is the pool of 2008.11 and 'rootpool' is the pool of snv_103.
then entered
# zfs snapshot rootpool/u01@20081218:00:30
# zfs send rootpool/u01@20081218:00:30 > /tmp/temp.snapshot
# zfs receive -F rpool/u01 < /tmp/temp.snapshot
It all worked as expected and I could still boot from the 2008.11 usb drive
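For what it's worth, the temporary file isn't strictly needed; the send can
be piped straight into the receive:
# zfs send rootpool/u01@20081218:00:30 | zfs receive -F rpool/u01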
On 12/18/08 12:57, Ian Collins wrote:
Shawn joy wrote:
Hi All,
I see from the zfs Best practices guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS Root Pool Considerations
* A root pool must be created with disk slices rather than whole disks.
Nicolas Williams wrote:
On Thu, Dec 18, 2008 at 07:55:14PM +, Ross Smith wrote:
On Thu, Dec 18, 2008 at 7:11 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
I was thinking more something like:
- find all disk devices and slices that have ZFS pools on them
- show users
Of course, you'll need some settings for this so it's not annoying if
people don't want to use it. A simple tick box on that pop-up dialog
allowing people to say "don't ask me again" would probably do.
I would like something better than that. "Don't ask me again" sucks
when much, much later
I was thinking more something like:
- find all disk devices and slices that have ZFS pools on them
- show users the devices and pool names (and UUIDs and device paths in
case of conflicts)..
I was thinking that device and pool names are too variable; you need to
be reading serial numbers
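Worth noting that the discovery step already exists in rough form: zpool
import run with no arguments scans attached devices and prints each importable
pool's name, numeric id, state and device layout:
# zpool import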
# zpool export rpool
I swapped the 2 drives around again, so that 2008.11 was in the laptop.
Booting brings up the splash screen; it starts as though it means to boot,
then shows a dark screen, then goes back to the splash screen.
Starting in text mode brings up the first few lines and then back
On Thu, Dec 18, 2008 at 12:57:54PM -0800, Richard Elling wrote:
Nicolas Williams wrote:
Device names are, but there's no harm in showing them if there's
something else that's less variable. Pool names are not very variable
at all.
I was thinking of something a little different. Don't
Is anyone out there replicating a thousand or more ZFS filesystems between
hosts using zfs send/receive?
I have been attempting to do this, but I keep producing toxic streams that
panic the receiving host. So far, about 1 in 1500 (2 out of about 3000)
incremental streams appear toxic.
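One partial safeguard (dataset names hypothetical) is to stage the stream in
a file and dry-run the receive before committing it, though a stream bad
enough to panic the kernel may get past the dry run too:
# zfs send -i tank/fs@prev tank/fs@curr > /backup/fs.incr
# zfs receive -nv backup/fs < /backup/fs.incr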
If one chooses to do this what happens if you have a disk failure.
From the ZFS Best practices guide.
The recovery process of replacing a failed disk is more complex when disks
contain both ZFS and UFS file systems on
slices.
Shawn
I have read the ZFS best practice guide located at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
However I have questions whether we support using slices for data on the
same disk as we use for ZFS boot. What issues does this create if we
have a disk failure in a
On Fri 19/12/08 14:52, Shawn Joy shawn@sun.com sent:
I have read the ZFS best practice guide located at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
However I have questions whether we support using slices for data on the
same disk as we use for ZFS boot.
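For what it's worth, the extra complexity is mostly slice bookkeeping: the
replacement disk has to be relabeled with the old VTOC before either file
system can be rebuilt, roughly (device names hypothetical, label copied from
a surviving mirror disk):
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
# zpool replace rpool c0t0d0s0 c0t1d0s0
plus newfs/restore for the UFS slices and installboot for the boot block.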
On Thu, Dec 18, 2008 at 4:57 PM, Ian Collins i...@ianshome.com wrote:
Is anyone out there replicating a thousand or more ZFS filesystems between
hosts using zfs send/receive?
I did this with about 2000 datasets on two patched x4500s running Solaris
10U5. Most directories had just a
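For anyone curious, driving that many datasets comes down to a loop over zfs
list output, something like (names and host hypothetical):
# for fs in $(zfs list -rH -o name tank/home); do
>   zfs send -i $fs@prev $fs@today | ssh backuphost zfs receive -dF backup
> done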
What version of Solaris and ZFS are you running there?
Ross wrote:
What version of Solaris and ZFS are you running there?
Solaris 10 update 6 and update 5. All the filesystems are version 1.
--
Ian.
Well, I really like the idea of an automatic service to manage send/receives to
backup devices, so if you guys don't mind, I'm going to share some other ideas
for features I think would be useful.
One of the first is that you need some kind of capacity management and snapshot
deletion.
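As a strawman, the deletion policy could start as simple as pruning the
oldest snapshot whenever the backup pool crosses a capacity threshold (pool
name hypothetical):
# zfs list -rH -t snapshot -o name -s creation backup | head -1 | xargs zfs destroy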