that
shouldn't be imported on boot before the shutdown that precedes it.
Thanks,
Bogdan
-- Kees Nuyt
mirror
-- Kees Nuyt
of a corrupted block.
http://hub.opensolaris.org/bin/view/Community+Group+zfs/selfheal
-- Kees Nuyt
$3Euhlh2Y$E9qTjs62HIoipqTwY75Ox.JDVgk/9QFglv.w1rE4wE0
Hope this helps.
-- Kees Nuyt
, something like this:
- quiesce the apps / databases
- take the zfs snapshot(s)
- thaw the apps
- take the hardware snapshot
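A rough sketch of such a sequence (the pool, dataset and quiesce hooks below are invented placeholders, not from this thread):
db_quiesce                           # hypothetical hook: put the database in backup mode
zfs snapshot -r tank/db@pre-hwsnap   # recursive ZFS snapshot of the dataset tree
db_thaw                              # hypothetical hook: release the database again
After that the hardware snapshot of the underlying LUNs can be taken as usual.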
-- Kees Nuyt
On Sun, 17 Oct 2010 03:05:34 PDT, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
here are some links.
Wow, that's a great overview, thanks!
-- Kees Nuyt
pools afterwards.
-- Kees Nuyt
/ black boxes.
-- Kees Nuyt
the pool again?
-- Kees Nuyt
mountpoint (altroot).
For details see: man zpool
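For example (a hedged sketch; the pool name and path are made up):
zpool import -R /mnt/altroot tank    # import "tank" with every mountpoint relocated under /mnt/altroot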
-- Kees Nuyt
every disk in vdev (mirror, raidz, raidz2)
by a bigger one, one by one, resilvering after every
replacement, and grow the existing vdev, thus growing the
pool while keeping the same configuration.
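A hedged sketch of one such replacement step (the device names are invented):
zpool replace tank c1t3d0 c1t6d0     # swap a small disk for a bigger one
zpool status tank                    # wait for the resilver to complete
Repeat for every disk in the vdev; on builds that support it, zpool set autoexpand=on tank lets the vdev grow automatically once the last disk has been replaced.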
-- Kees Nuyt
will be obsoleted soon, because
the space for the pointer to the third instance of the data
block was needed for some other purpose (I forgot which).
-- Kees Nuyt
devices.
You'll have to un-U3 it before you can use it.
-- Kees Nuyt
. Bond
-- Kees Nuyt
it's more common to use the word JBOD to indicate a
set of individually addressable disks indeed.
JBOD isn't an extra technology ZFS needs,
it's just a way of saying it doesn't need
RAID and that standard controllers work
just fine.
-- Kees Nuyt
and just needs to be restored.
You'll have to understand the internals; the on-disk format
is documented, but not easy to grasp.
zdb is the program you'd use to analyse the zpool.
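For instance (a hedged sketch; the device and pool names are placeholders):
zdb -l /dev/rdsk/c1t0d0s0            # print the four vdev labels stored on that device
zdb -C tank                          # dump the cached configuration of pool "tank"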
thank you very much
Stephen
Good luck.
-- Kees Nuyt
detach it and import it in some other system
as an unmirrored pool.
In other words: you don't have to create a pool to access
one side of a mirror. After all, it's a mirror, so the pool
is already in place.
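On pool versions that provide it, zpool split is the supported way to turn one half of a mirror into its own pool; a hedged sketch with invented names:
zpool split tank tank2 c2t1d0        # peel c2t1d0 off the mirror as new pool "tank2"
zpool import tank2                   # run on the other system after moving the disk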
thank you all.
Good luck.
-- Kees Nuyt
you?
In Opensolaris, use Timeslider.
thanks
-- Kees Nuyt
On Sun, 07 Jun 2009 13:20:31 +0200, Kees Nuyt
k.n...@zonnet.nl wrote:
You can find accidentally deleted files in the snapshots in
the .zfs directory in the root of every zfs filesystem.
Addition: you may have to execute
# zfs set snapdir=visible yourpoolname
to see the .zfs directories
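A hedged example of getting a deleted file back that way (the filesystem and snapshot names are invented):
zfs set snapdir=visible tank/home
ls /tank/home/.zfs/snapshot/                              # one directory per snapshot
cp /tank/home/.zfs/snapshot/daily-2010-10-17/lost.txt /tank/home/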
overwritten all labels, uberblocks and pointer
blocks of the original pool.
--Stig
-- Kees Nuyt
systems overcome this decay
by not just reading, but also writing all blocks during a
scrub. In those systems, scrubbing is done semi-continuously
in the background, not on user/admin demand.
-- Kees Nuyt
if someone designs a MySQL
storage engine which is aware of zfs and uses zfs
copy-on-write primitives.
I doubt it will be called InnoDB, because InnoDB is
Oracle-owned, and probably not maintained by SUN.
-- Kees Nuyt
The mountpoint can be in a zfs in a zpool, but that doesn't
make it a zfs.
-- Kees Nuyt
in one slot at a
time, and, very important, leaving all other slots empty(!).
Repeat for as many disks as you have, seating each disk in
its own slot, and all other slots empty.
(ok, it's just hearsay, but it might be worth a try with
the first 4 disks or so).
-- Kees Nuyt
of the Opensolaris incarnations like Nexenta, Indiana,
SXCE, ...).
For the root pool including dump and swap, 16 GByte should
do for any of them, and still leave room for live upgrades.
-- Kees Nuyt
. 9.
Exactly. Or even a zpool per application, if there is more
than one application in a zone. In that sense, I would call
a zpool a unit of maintenance. Or a unit of failure.
-- Kees Nuyt
:
devfsadm -v
ZFS knows where to find devices, the name part suffices:
zpool create -f test c3d0p9
HTH
-- Kees Nuyt
thing here? or is this a bug?
My guess is /a is occupied by the mount of the just
installed root pool.
You'll have to create a new mountpoint, something like /b,
and have your zdata0 pool mount there temporarily.
-Kyle
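A hedged sketch of that workaround, assuming the pool's root dataset still uses the default mountpoint:
mkdir /b
zfs set mountpoint=/b zdata0         # mount the pool's root dataset under /b for now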
-- Kees Nuyt
is called canmount.
man zfs
/canmount
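For instance (hedged; the dataset name is invented):
zfs set canmount=noauto tank/build   # keep the dataset, but don't mount it at boot
zfs mount tank/build                 # mount it explicitly when you need it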
-- Kees Nuyt
On Sat, 31 Jan 2009 09:49:09 -0800, Frank Cusack
fcus...@fcusack.com wrote:
On January 31, 2009 10:57:11 AM +0100 Kees Nuyt k.n...@zonnet.nl wrote:
On Fri, 30 Jan 2009 16:49:15 -0800, Frank Cusack
fcus...@fcusack.com wrote:
zfs set only seems to accept an absolute path, which even if you set
where each side of
the mirror is a HW RAID set in itself.
zpool create mirror (RAID5 lun1) (RAID5 lun2)
man zpool
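Concretely, that could look like this (a hedged sketch; the LUN device names are invented):
zpool create tank mirror c2t0d0 c3t0d0   # each device is a hardware RAID5 LUN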
-- Kees Nuyt
beloved
zfs-discuss@opensolaris.org, especially large, binary ones?
They are not welcome here, and I'm pretty sure I'm not the
only one with that opinion.
Thanks in advance for your cooperation.
Regards,
-- Kees Nuyt
- MySQL desperately needs a replacement
for the InnoDB storage engine
- MySQL has been acquired by SUN
- ZFS (ZPL,DMU) is by SUN.
- performance of the MySQL/InnoDB/ZFS stack is sub-optimal.
No, I don't have any inside information.
-- Kees Nuyt
to log
in on an otherwise completely stuck system, and very useful
as such.
Typically, 24 pages in the super-users' home file system
(AKA public volume set) would be enough.
-- Kees Nuyt
me well, the default value for the
DESTROY fileattribute can be determined per volumeset
(=catalog=filesystem).
--
Andrew
-- Kees Nuyt
dependent, it is advised to receive
it immediately.
If the receiving zfs pool uses a file as its block device,
you could export the pool and bzip that file.
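A hedged sketch of that approach (sizes, paths and names are placeholders):
mkfile 1g /var/tmp/backing.img
zpool create filepool /var/tmp/backing.img
zfs send tank/data@snap1 | zfs receive filepool/data
zpool export filepool
bzip2 /var/tmp/backing.img           # compress the file that now holds the exported pool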
-- Kees Nuyt
the
recordsize they were created with.
Also, it could be chained I/O, where consecutive, adjacent
records are handled in one I/O call.
-- Kees Nuyt
it to
continue (and keep your fingers crossed)?
-- Kees Nuyt
it.
Now one wonders why zfs doesn't have a rescue like that
built-in...
-- Kees Nuyt
engine does copy on write within its data files,
so things might be different there.
-- Kees Nuyt
zfs list -r ${fsnm} | grep ${fsnm}@
Thanks,
Mike
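A hedged alternative that asks zfs for the snapshots directly instead of grepping:
zfs list -t snapshot -r ${fsnm}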
-- Kees Nuyt
the page_size is doubled.
[snip]
- Bill
-- Kees Nuyt