On 11/05/2011 01:07, Daniel Carosone wrote:
Sorry for abusing the mailing list, but I don't know how to report
bugs anymore and have no visibility of whether this is a
known/resolved issue. So, just in case it is not...
Log a support call with Oracle if you have a support contract.
I guessed you wouldn't be able to say, even if...
The only shortfall in capability that I'm aware of is the secure boot/FDE,
which we discussed previously.
I am mostly interested in the source to see how features have been
implemented and to understand the system structure. I certainly wouldn't
On 10-05-11 06:56, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
BTW, here's how to tune it:
echo arc_meta_limit/Z 0x3000 | sudo mdb -kw
echo ::arc | sudo mdb -k | grep meta_limit
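If you want the change to stick across reboots, the same limit can go into
/etc/system (a sketch, assuming your build exposes the zfs_arc_meta_limit
tunable; the value here just mirrors the mdb example above - size it for
your RAM):
  set zfs:zfs_arc_meta_limit = 0x3000
Then verify after boot with the ::arc check above.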
Technically, the bootfs ID is a string which names the root dataset, typically
rpool/ROOT/solarisReleaseNameCode. This string can be passed to the Solaris
kernel as a parameter, either manually or by the bootloader; otherwise the
default current bootfs is read from the root pool's attributes (not the
dataset's attributes!).
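For example (the BE name below is hypothetical), that pool attribute can be
inspected and changed with:
  # zpool get bootfs rpool
  # zpool set bootfs=rpool/ROOT/s10u9 rpool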
You can try a workaround - no idea if this would really work:
0) Disable stmf and iscsi/* services
1) Create your volume's clone
2) Rename the original live volume dataset to some other name
3) Rename the clone to original dataset's name
4) Promote the clone
- now to the system it SHOULD seem like the same volume, just with the older
snapshot's contents (see the sketch below).
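Something like this, untested, assuming a pool "tank", a volume "vol1" and an
existing snapshot "vol1@rollback" (all names hypothetical):
  # svcadm disable stmf                         (plus the iscsi/* services)
  # zfs clone tank/vol1@rollback tank/vol1-new  (1: clone)
  # zfs rename tank/vol1 tank/vol1-old          (2: move the live volume aside)
  # zfs rename tank/vol1-new tank/vol1          (3: give the clone the original name)
  # zfs promote tank/vol1                       (4: detach it from its origin)
  # svcadm enable stmf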
Disks that have been in use for a long time may have free space that is both
heavily fragmented and scarce, yet ZFS still tries to spread the bits around
evenly. And while it's waiting on some disks, others may be blocked as well.
Something like that...
Sorry, I did not hit this type of error...
AFAIK the pool writes during zfs receive are done by the current code (i.e.
ZFSv22 for you) based on data read from the backup stream. So unless there are
corruptions on the pool which happened to occur around the time you did your
restore, this should not be the source of your problem.
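If you want to rule corruption in or out (generic commands, not specific to
this thread), a scrub will walk and verify everything on the pool:
  # zpool scrub tank
  # zpool status -v tank   (shows scrub progress/result and any damaged files)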
Hi,
Thanks for the response. Here is my problem.
I have a zfs stream backup taken on zfs version 15. Since then I have upgraded
my OS, so the new zfs version is 22. The restore process from the old stream
backup to the new zfs pool went well, but on reboot I got an error: unable to
mount pool tank.
Keep in mind zfs_vdev_max_pending. In the latest version of S10, this is set
to 10. ZFS will not issue more than this many requests at a time to a single
LUN. Your disks may look relatively idle while ZFS
has a lot of data piled up inside just waiting to be read or written.
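If you want to experiment with it (values illustrative; the usual Evil Tuning
Guide caveats apply), it can be read and changed live with mdb, or set
persistently in /etc/system:
  # echo zfs_vdev_max_pending/D | mdb -k      (show the current value)
  # echo zfs_vdev_max_pending/W0t4 | mdb -kw  (set it to 4 on the running kernel)
  set zfs:zfs_vdev_max_pending = 4            (in /etc/system, takes effect at boot)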
I can't actually disable the STMF framework to do this but I can try renaming
things and dumping the properties from one device to another and see if it
works- it might actually do it. I will let you know.
Hello,
Trying to understand how to back up the mirrored zfs boot pool 'rpool' to tape,
and restore it if the disks are ever lost.
Backup would be done with an enterprise tool like TSM, Legato, etc.
As an example, here is the layout:
# zfs list
NAME USED AVAIL
On 05/10/11 09:45 PM, Don wrote:
Is it possible to modify the GUID associated with a ZFS volume imported
into STMF?
To clarify- I have a ZFS volume I have imported into STMF and export via
iscsi. I have a number of snapshots of this volume. I need to temporarily
go back to an older snapshot
ZFS sends a series of blocks to write from the queue; the newer disks write
them and stay dormant, while the older disks seek around to fit that piece of
data... When the old disks complete the writes, ZFS batches out a new set of
tasks to all of them.
The thing is- as far as I know the OS doesn't ask the disk to find
The press embargo on the Intel Z68 chipset has been lifted, so there's a bunch
of press on it. One feature, called Smart Response Technology (SRT), will sound
familiar to users of ZFS:
Intel's SRT functions like an actual cache. Rather than caching individual
files, Intel focuses on frequently
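The ZFS counterpart is of course the L2ARC - a read cache on fast media in
front of slower disks. A minimal sketch (device name hypothetical):
  # zpool add tank cache c4t1d0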
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Naveen surisetty
I have a zfs stream backup taken on zfs version 15. Since then I have
upgraded
my OS, so the new zfs version is 22. The restore process went well from the
old stream backup to the new zfs
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Arjun YK
Trying to understand how to back up the mirrored zfs boot pool 'rpool' to tape,
and restore it if the disks are ever lost.
Backup would be done with an enterprise tool like TSM,
On May 10, 2011, at 11:21 PM, Naveen surisetty wrote:
Hi,
Thanks for the response. Here is my problem.
I have a zfs stream backup taken on zfs version 15. Since then I have upgraded
my OS, so the new zfs version is 22. The restore process went well from the old
stream backup to the new zfs pool, but on
* Edward Ned Harvey (opensolarisisdeadlongliveopensola...@nedharvey.com) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Arjun YK
Trying to understand how to back up the mirrored zfs boot pool 'rpool' to tape,
and restore it if
So why is my system not coming up? I jumpstarted the system again, but it
panics like before. So how should I recover it and get it up?
The system was booted from the network into single-user mode, the rpool was
imported, and the following is the listing:
# zpool list
NAME   SIZE  ALLOC
Hi Ketan,
What steps led up to this problem?
I believe the boot failure messages below are related to a mismatch
between the pool version and the installed OS version.
If you're using the JumpStart installation method, then the root pool is
re-created each time, I believe. Does it also
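A quick way to check for such a mismatch (generic commands, not from Ketan's
output) is to compare the pool's version with what the installed OS supports:
  # zpool get version rpool   (what the pool is at)
  # zpool upgrade -v          (the versions this OS can handle)
If the pool's version is newer than anything in the second list, the older
kernel cannot import it.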
It turns out this was actually as simple as:
stmfadm create-lu -p guid=XXX..
I kept looking at modify-lu to change this and never thought to check the
create-lu options.
Thanks to Evaldas for the suggestion.
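For anyone following along, the whole sequence presumably looks something like
this (GUID and volume name hypothetical):
  # stmfadm list-lu -v                 (note the old LU's GUID)
  # stmfadm delete-lu 600144F0...      (removes the LU, leaves the zvol data alone)
  # stmfadm create-lu -p guid=600144F0... /dev/zvol/rdsk/tank/vol1
  # stmfadm add-view 600144F0...       (re-expose it to the initiators)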
On 2011-May-12 00:20:28 +0800, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Backup/restore of a bootable rpool to tape with a 3rd-party application like
Legato is kind of difficult, because if you need to do a bare-metal
restore, how are you going to do it?
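One common answer (a sketch, not Edward's words): stage a replication stream
as a plain file and let TSM/Legato back that file up. Bare-metal restore then
means booting install media, recreating the pool, and receiving the stream,
plus the usual boot-block and bootfs fixups:
  # zfs snapshot -r rpool@backup
  # zfs send -R rpool@backup > /stage/rpool.zsend   (back up /stage with the enterprise tool)
...and after recreating the pool from install media:
  # zfs receive -Fdu rpool < /stage/rpool.zsend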
Hello Jim,
Thanks for the reply. The following is my output before setting the bootfs parameter:
# zpool get all rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  size      68G    -
rpool  capacity  5%     -
rpool  altroot   -      default