running a scrub often have problems after shutting down
during the scrub. I have learned to HALT the scrub before going
offline.
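For reference, a running scrub can be halted with the -s flag of zpool scrub; a minimal sketch (the pool name is an example):

```shell
# Stop (halt) a running scrub on the pool before shutting down.
zpool scrub -s rpool

# Verify that nothing is scrubbing anymore.
zpool status rpool | grep scrub
```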
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)
I've a new MB (the same as before, but this one works..) and I want to
change the way my SATA drives are connected. I had a ZFS boot mirror
connected to SATA3 and 4 and I want those drives to be on SATA1 and 2 now.
Question: will ZFS see this and boot the system OK, or will I have to
take some extra steps?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Sun, 23 Aug 2009 13:15:37 +0200
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote:
dick hoogendijk d...@nagual.nl wrote:
FULL backup to a file
zfs snapshot -r rp...@0908
zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool.0908
INCREMENTAL backup to a file
zfs
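The incremental variant is cut off above; presumably it looks something like the following (snapshot names are examples, and the -i base snapshot must already exist on both sides):

```shell
# Take a new recursive snapshot of the root pool.
zfs snapshot -r rpool@0909

# Send only the changes between @0908 and @0909 to a file.
zfs send -Rv -i rpool@0908 rpool@0909 > /net/remote/rpool/snaps/rpool.0908-0909
```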
on creating a zfs send to a
file somewhere? ZFS Root Pool Recovery from the ZFS Troubleshooting
Guide clearly mentions the creation of a -file- :
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery
) into it or can I use your example
like: zfs send rp...@0908 | zfs receive -Fd bac...@0908
On Mon, 24 Aug 2009 16:36:13 +0100
Darren J Moffat darr...@opensolaris.org wrote:
Joerg Schilling wrote:
dick hoogendijk d...@nagual.nl wrote:
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:
zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool
From the Solaris docs I understand I have to do this line:
# zfs send -Rv rp...@0908 | zfs receive -Fd backup/server/rp...@0908
I'm not quite sure about the -Fd option of receive
Is this correct?
. In other words how can I be sure of the validity of the received
file in the next command line:
# zfs send -Rv rp...@090902 > /backup/snaps/rpool.090902
I only want to know how to check the integrity of the received file.
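There is no single built-in integrity check for a stream sitting in a file, but two sketches come close (assuming your build ships zstreamdump and you have a scratch dataset to dry-run against):

```shell
# Walk the whole stream and verify its per-record checksums
# without writing any data.
zstreamdump -v < /backup/snaps/rpool.090902 > /dev/null

# Or do a dry-run receive: -n parses the stream but creates nothing.
zfs receive -nv scratch/verify < /backup/snaps/rpool.090902
```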
into ZFS?
it manually on all those backed-up FS's.
I wonder how other people overcome this mountpoint issue.
receive
command.
# zfs send -Rv rp...@0909 | zfs receive -Fdu backup/snaps
Lori Alt wrote:
On 09/04/09 10:17, dick hoogendijk wrote:
Lori Alt wrote:
The -u option to zfs recv (which was just added to support flash
archive installs, but it's useful for other reasons too) suppresses
all mounts of the received file systems. So you can mount them
yourself afterward
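A sketch of that workflow, with hypothetical pool and dataset names: receive with -u so nothing mounts, then fix up the mountpoint and mount by hand:

```shell
# Receive the replication stream without mounting anything.
zfs send -Rv rpool@0909 | zfs receive -Fdu backup/snaps

# Later, adjust the mountpoint and mount explicitly.
zfs set mountpoint=/mnt/restore backup/snaps/rpool
zfs mount backup/snaps/rpool
```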
/ROOT/b...@0902 | zfs recv -vF
# tank
What I'd like to see confirmed is that the incremental backup is
received in the -same- filesystem as the one originally backed up
(tank)
On Sat, 12 Sep 2009 07:38:43 PDT
Hamed bar...@etek.chalmers.se wrote:
Please help me. I really need help. I did a stupid thing, I know.
AFAIK no help exists in this case other than making a full
backup/restore. There is no way back to earlier ZFS versions.
it if the
first disk fails, but you may not *swap* them.
-fuse
sudo zpool create hazz0 /dev/sda1
sudo zpool destroy hazz0
sudo reboot
Now OpenSolaris is not booting; everything has vanished.
Is there any way to restore everything?
Any idea about the meaning of the verb DESTROY ?
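For the record: if nothing has overwritten the disk since, a destroyed pool can sometimes still be brought back. This is a sketch, not a guarantee:

```shell
# List pools that were destroyed but whose labels are still on disk.
zpool import -D

# Re-import the destroyed pool by name (-f if it complains about ownership).
zpool import -D -f hazz0
```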
Are there any known issues involving VirtualBox using shared folders
from a ZFS filesystem?
Bruno Sousa wrote:
Action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
metadata:0x0
metadata:0x15
Hmm, and what file(s) would this be?
Every ZFS filesystem uses system memory, but is this also true for
-NOT- mounted filesystems (with the canmount=noauto option set)?
Second question: would it make much difference to have 12 or 22 ZFS
filesystems? What's the memory footprint of a ZFS filesystem?
working vfstab alone. This behaviour is also related
to errors with zones btw. The fact that lines in vfstab are created is
neglected in reactions so far. I think that is weird.
/zoneRoot/common-10u8': legacy mountpoint
use mount(1M) to mount this filesystem
That's how LU scr*s things up.
Any known issues for the new ZFS on solaris 10 update 8?
Or is it still wiser to wait doing a zpool upgrade? Because older ABE's
can no longer be accessed then.
On Sat, 2009-10-17 at 08:11 -0700, Philip Brown wrote:
same problem here on sun x2100 amd64
It's a bootblock issue. If you really want to get back to u6 you have to
run installgrub /boot/grub/stage1 /boot/grub/stage2 from the update 6 image,
so mount it (with lumount or, easier, with zfs mount) and
On Sun, 2009-10-18 at 18:12 +0200, Sander Smeenk wrote:
Well, that's what I would expect too. It seems strange that you can't
edit or remove individual files from snapshots [...]
That would make the snapshot not a snapshot anymore. There would be
differences..
Mark Horstman wrote:
I don't see anything wrong with my /etc/vfstab. Until I get this resolved, I'm
afraid to patch and use the new BE.
It's the vfstab file in the newly created ABE that is wrongly written to.
Try to mount this new ABE and check out for yourself.
. Grab a
new iso.
It might be that I can't read, but doesn't the OP state he is booting off
the Solaris 10 update 8 DVD?
What can be newer than that one? If the miniroot really only supports
ZFS v10 then this is indeed not good (unworkable/unusable/..)
in /etc/lutab from the /etc/lu directory. You'll notice that
with lustatus the BE is gone.
Remove the ZFS datasets and snapshots for the BE you just deleted.
I've done this hack quite a few times in the past and it always worked fine.
It's not supported by SUN though.
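As a sketch of the hack described above (the BE and pool names are examples; it is unsupported, so double-check lustatus first):

```shell
# After removing the BE's line from /etc/lutab,
# list what belongs to the old boot environment.
zfs list -r rpool/ROOT/oldBE

# Then remove the BE's filesystems and snapshots recursively.
zfs destroy -r rpool/ROOT/oldBE
```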
glidic anthony wrote:
I have a solution using zfs set sharenfs=rw,nosuid zpool, but I prefer
to use the sharemgr command.
Then you prefer wrongly. ZFS filesystems are not shared this way.
Read up on ZFS and NFS.
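For reference, sharing a ZFS filesystem over NFS goes through the sharenfs property on the dataset itself (the dataset name is an example):

```shell
# Share read/write with setuid disabled, directly via the property.
zfs set sharenfs=rw,nosuid tank/export/home

# Check what is currently shared.
zfs get sharenfs tank/export/home
```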
On Wed, 2009-11-25 at 10:00 -0500, Kyle McDonald wrote:
To each their own.
[cut the rest of your reply]
In general: I stand corrected. I was rude.
On Sat, 2009-12-05 at 09:22 -0600, Bob Friesenhahn wrote:
You can also stream into a gzip or lzop wrapper in order to obtain the
benefit of incremental CRCs and some compression as well.
Can you give an example command line for this option please?
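One possible form of what Bob describes, assuming plain gzip and example snapshot names:

```shell
# Compress the replication stream on its way to the backup file.
zfs send -Rv rpool@0908 | gzip -c > /backup/snaps/rpool.0908.gz

# gzip -t verifies the embedded CRCs without writing decompressed data.
gzip -t /backup/snaps/rpool.0908.gz

# Restore path: decompress and feed the stream back to zfs receive.
gzip -dc /backup/snaps/rpool.0908.gz | zfs receive -Fdu backup/snaps
```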
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
The host identity had - of course - changed with the new motherboard
and it no longer recognised the zpool as its own. 'zpool import -f
rpool' to take ownership, reboot and it all worked no problem (which
was amazing in itself as I
On Sat, 2009-12-12 at 09:08 -0800, Richard Elling wrote:
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
The host identity had - of course - changed with the new motherboard
and it no longer recognised the zpool as its own
I just noticed that my zpool is still running v10 and my ZFS filesystems
are on v3. This is on Solaris 10U3. Before upgrading the zpool and ZFS
versions I'd like to know the versions supported by Solaris 10 update 7.
I'd rather not make my zpools inaccessible ;)
On Sat, 2010-01-16 at 07:24 -0500, Edward Ned Harvey wrote:
Personally, I use zfs send | zfs receive to an external disk. Initially a
full image, and later incrementals.
Do these incrementals go into the same filesystem that received the
original zfs stream?
Can I send a zfs send stream (ZFS pool version 22 ; ZFS filesystem
version 4) to a zfs receive stream on Solaris 10 (ZFS pool version 15 ;
ZFS filesystem version 4)?
cannot attach c5d0s0 to c4d0s0: device is too small
So I guess I installed OpenSolaris onto the smallest disk. Now I cannot
create a mirrored root, because the device is smaller.
What is the best way to correct this except starting all over with two
disks of the same size (which I don't have)?
that is an equivalent size, but not exactly the same geometry.
Which OpenSolaris release is this?
b131
And this only works if the difference is really (REALLY) small. :)
Op 28-1-2010 16:52, Thomas Maier-Komor schreef:
have you considered creating an alternate boot environment on the
smaller disk, rebooting into this new boot environment, and then
attaching the larger disk after destroying the old boot environment?
beadm might do this job for you...
What a
Op 28-1-2010 17:35, Cindy Swearingen schreef:
Thomas,
Excellent and much better suggestion... :-)
You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.
Dick, you will need to update the BIOS to
On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
But those could be copied by send/recv from the larger disk (current
root pool) to the smaller disk (intended new root pool). You won't be
attaching anything until you can boot off the smaller disk and then it
won't matter what's on the
On Thu, 2010-01-28 at 08:44 -0700, Cindy Swearingen wrote:
Or, if possible, connect another larger disk and attach it to the original
root
disk or even replace the smaller root pool disk with the larger disk.
I go for that one. But since it's a somewhat older system I only have
IDE and
Op 28-1-2010 17:35, Cindy Swearingen schreef:
Excellent and much better suggestion... :-)
It turns out not to be excellent at all.
Op 30-1-2010 20:53, Mark schreef:
I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB
SATA drives. When I install opensolaris, I assume it will want to use all or
part of one of those drives for the install. That leaves me with the remaining
part of disk 1, and all
it on /usr/perl5/mumble ? Or is this too simple a thought?
guess it will no longer be accessible from the GZ then. That would be
good, because I want to separate my webserver from my global zone.
on every OS.
And you also shouldn't forget the extra capabilities of ZFS, like
snapshots ...
I'll go with ZFS. Like someone said with 'copies=2' for extra safety.
That should do it I think.
Compression will slow my system down too much, so I'll skip that one.
I want zfs on a single drive so I use copies=2 for -some- extra safety.
But I wonder if dedup=on could mean something in this case too? That way
the same blocks would never be written more than twice. Or would that
harm the reliability of the drive and should I just use copies=2?
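A sketch of both properties on a single-drive dataset (names are examples; note that dedup needs a recent pool version and a lot of RAM for its table):

```shell
# Keep two copies of every block, even on one drive.
zfs create -o copies=2 tank/data

# Optionally deduplicate as well; already-written data is not rewritten.
zfs set dedup=on tank/data
```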
of the story ...
+1
Carefully constructed ACL's should -never- be destroyed by an
(unwanted/unexpected) chmod. Extra aclmode properties should not be so
hard to implement.
...
Or would it be called 2010.04 ? ;-)
I have some ZFS datasets that are shared through CIFS/NFS. So I created
them with sharenfs/sharesmb options.
I have full access from windows (through cifs) to the datasets, however,
all files and directories are created with (UNIX) permissions of
(--)/(d--). So, although I can access
the pool before shutdown.
Why don't you set the canmount=noauto option on the ZFS dataset?
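A sketch of that suggestion (the dataset name is an example):

```shell
# Keep the dataset importable but never mounted automatically at boot.
zfs set canmount=noauto tank/export/archive

# Mount it by hand only when it is actually needed.
zfs mount tank/export/archive
```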
to be
the truth.
If things are told often enough they have a tendency to become true, even
if they are not.
On 28-6-2010 12:13, Gabriele Bulfon wrote:
*sweat*
These systems have all been running for years now and I considered them safe...
Have I been at risk all this time?!
They're still running, are they not? So, stop sweating. g
But you're right about the changed patching service from Oracle.
It
On 1-8-2010 19:57, David Dyer-Bennet wrote:
I've kind of given up on that. This is a home production server;
it's got all my photos on it.
The uncertainty around OpenSolaris made me drop it. I'm very sorry to
say, because I loved the system. I do not want to worry all the time
though, so
If I create a ZFS mirrored zpool on FreeBSD (zfs v14) will I be able
to boot off an OpenSolaris-b131 CD and copy my data off (another) ZFS
mirror created by OpenSolaris (ZFS v22)? A simple question, but my data
is precious, so I ask beforehand. ;-)
I want to transfer a lot of ZFS data from an old OpenSolaris ZFS mirror
(v22) to a new FreeBSD-8.1 ZFS mirror (v14).
If I boot off the OpenSolaris boot CD and import both mirrors will the
copying from v22 ZFS to v14 ZFS be harmless?
I'm not sure if this is the right mailing list for this
On 13-8-2010 22:43, Gary Mills wrote:
If this information is correct,
http://opensolaris.org/jive/thread.jspa?threadID=133043
further development of ZFS will take place behind closed doors.
Opensolaris will become the internal development version of Solaris
with no public distributions.
On 14-8-2010 14:58, Russ Price wrote:
6. Abandon ZFS completely and go back to LVM/MD-RAID. I ran it for
years before switching to ZFS, and it works - but it's a bitter pill
to swallow after drinking the ZFS Kool-Aid.
Nice summary. ;-)
I switched to FreeBSD for the moment and it works very
On 14-8-2010 15:56, Constantine wrote:
Hi.
I've got the ZFS filesystem (opensolaris 2009.06) which, as I can see,
was automatically rolled back by the OS to the latest snapshot after the
power failure. The trouble is that the snapshot is too old and,
consequently, there is a question -- Can I
On 23-9-2010 10:25, casper@sun.com wrote:
I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).
I've been using ZFS on a non-ECC machine for years now without any
issues. Never had errors. Plus, like others said, other OSes have the same
problems and also run quite well. If not,
On 23-9-2010 16:34, Frank Middleton wrote:
For home use, used Suns are available at ridiculously low prices and
they seem to be much better engineered than your typical PC. Memory
failures are much more likely than winning the pick 6 lotto...
And about what SUN systems are you thinking
OK, I've got a problem I can't solve by myself. I've installed Solaris 11
using just one drive.
Now I want to create a mirror by attaching a second one to the rpool.
However, the first one has NO partition 9 but the second one does. This
way the sizes differ if I create a partition 0 (needed
On 29-11-2010 14:35, rwali...@washdcmail.com wrote:
I haven't done this on Solaris 11 Express, but this worked on
OpenSolaris 2009-06:
prtvtoc /dev/rdsk/c5t0d0s0 | fmthard -s - /dev/rdsk/c5t1d0s0
Where the first disk is the current root and the second one is the new mirror.
It works on
Op 16-5-2011 22:55 schreef Freddie Cash:
On Fri, Apr 29, 2011 at 5:17 PM, Brandon High bh...@freaks.com wrote:
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash fjwc...@gmail.com wrote:
Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
I'd suggest trying to import the pool into snv_151a (Solaris 11
Op 24-7-2011 16:51 schreef Orvar Korvar:
I don't get it. I created users with the
System - Administration - Users and Groups
menu.
I thought every user would get his own ZFS filesystem? But when I do
# zfs list
I cannot see a ZFS listing for each user. I only see this:
rpool/export