Re: [zfs-discuss] Each user has his own zfs filesystem??

2011-07-24 Thread Dick Hoogendijk

On 24-7-2011 16:51, Orvar Korvar wrote:

I don't get it. I created users with the
System -> Administration -> Users and Groups
menu.

I thought every user would get his own ZFS filesystem? But when I do
# zfs list
I cannot see a ZFS listing for each user. I only see this:
rpool/export                   60,4G   131G    32K  /export
rpool/export/home              60,4G   131G  60,4G  /export/home
rpool/export/home/michaelroot   760K   131G   760K  /export/home/michaelroot

I have at least three users on my system. I thought they would be listed
when I do zfs list?

Maybe their home directories weren't created when you added them...?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Still no way to recover a corrupted pool

2011-05-17 Thread Dick Hoogendijk

On 16-5-2011 22:55, Freddie Cash wrote:

On Fri, Apr 29, 2011 at 5:17 PM, Brandon High <bh...@freaks.com> wrote:

On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash <fjwc...@gmail.com> wrote:

Running ZFSv28 on 64-bit FreeBSD 8-STABLE.

I'd suggest trying to import the pool into snv_151a (Solaris 11
Express), which is the reference and development platform for ZFS.

Would not import in Solaris 11 Express. :( Could not even find any
pools to import, even when using zpool import -d /dev/dsk or any
other import command. Most likely due to using a FreeBSD-specific
method of labelling the disks.


That should not be the case. You either use ZFS or you don't.


What's most frustrating is that this is the third time I've built this
pool due to corruption like this, within three months.  :(


Three times in three months is not normal. You should be looking for the 
cause in your hardware, imho. ZFS is very stable in my experience, 
but I must say I have always run Solaris (10/11) with it and not 
FreeBSD. I once tried it and immediately ran into problems I'd never had 
before, so I dropped the ZFS/FreeBSD combination. ;-)



[zfs-discuss] mirrored drive

2010-11-29 Thread Dick Hoogendijk
OK, I've got a problem I can't solve by myself. I've installed Solaris 11 
using just one drive.

Now I want to create a mirror by attaching a second one to the rpool.
However, the first one has NO partition 9 but the second one does. This 
way the sizes differ if I create a partition 0 (needed because it's a 
boot disk).


How can I make the second disk look exactly the same as the first?
Or can't that be done?

Dick


Re: [zfs-discuss] mirrored drive

2010-11-29 Thread Dick Hoogendijk

On 29-11-2010 14:35, rwali...@washdcmail.com wrote:
I haven't done this on Solaris 11 Express, but this worked on 
OpenSolaris 2009-06:

  prtvtoc /dev/rdsk/c5t0d0s0 | fmthard -s - /dev/rdsk/c5t1d0s0

Where the first disk is the current root and the second one is the new mirror.
It works on Solaris 11 too. Someone else pointed out the webpage. It 
worked very well.

Disks are mirrored and running.
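For reference, the full sequence looks roughly like this (a sketch using the device names from this thread; on builds where attaching to the root pool does not install boot blocks automatically, the installgrub step is needed as well):

```shell
# Copy the partition table from the current root disk to the new disk
prtvtoc /dev/rdsk/c5t0d0s0 | fmthard -s - /dev/rdsk/c5t1d0s0
# Attach the new disk's slice 0 as a mirror of the root pool
zpool attach rpool c5t0d0s0 c5t1d0s0
# Install the boot blocks on the new disk (x86; needed on older builds)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0
# Watch the resilver finish before trusting the mirror
zpool status rpool
```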


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Dick Hoogendijk

 On 23-9-2010 10:25, casper@sun.com wrote:

I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).


I've been using ZFS on a non-ECC machine for years now without any 
issues. Never had errors. Plus, like others said, other OSes have the 
same problems and also run quite well; if not, you just don't know it. 
With ZFS you will know.

I would say - just go for it. You will never want to go back.


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-23 Thread Dick Hoogendijk

 On 23-9-2010 16:34, Frank Middleton wrote:


 For home use, used Suns are available at ridiculously low prices and
 they seem to be much better engineered than your typical PC. Memory
 failures are much more likely than winning the pick 6 lotto...


And which Sun systems are you thinking of for 'home use'?
The likelihood of memory failures might be much higher than that of 
becoming a millionaire, but in the years past I have never had one. And 
my home systems are rather cheap. Mind you, not the cheapest, but rather 
cheap. I do buy good memory though. So, to me, with a good backup I feel 
rather safe using ZFS. I also had it running for quite some time on a 
32-bit machine and that also worked out fine.


The fact that a perfectly good file cannot be read because of a bad 
checksum is a design failure imho. There should be an option to override 
this behaviour of ZFS.


My 2ct



Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Dick Hoogendijk

 On 14-8-2010 14:58, Russ Price wrote:
6. Abandon ZFS completely and go back to LVM/MD-RAID. I ran it for 
years before switching to ZFS, and it works - but it's a bitter pill 
to swallow after drinking the ZFS Kool-Aid.

Nice summary. ;-)
I switched to FreeBSD for the moment and it works very well, although I 
have some ZFS issues I did not have in the latest OpenSolaris b134 
release. The pkg system is fine too. Binary updates are a piece of cake. 
I'm no fan of LVM, and although I have some ZFS issues now I'm sure they 
will be solved. In the meantime I created some gmirrors and they do the 
job well. I'd love to see the day coming that I'm able to use ZFS again. 
Kool-Aid? An understatement. Once used to ZFS it is very difficult to do 
without. My main hopes are for FreeBSD or maybe Illumos; the latter has 
a long way to go yet.



Re: [zfs-discuss] ZFS automatic rollback and data rescue.

2010-08-14 Thread Dick Hoogendijk

 On 14-8-2010 15:56, Constantine wrote:

Hi.

I've got a ZFS filesystem (OpenSolaris 2009.06) which, as far as I can 
see, was automatically rolled back by the OS to the latest snapshot 
after a power failure. The trouble is that the snapshot is too old, and 
consequently there is a question: can I browse the pre-rollback, 
corrupted branch of the FS? And if I can, how?

Look in the (hidden) .zfs directory (mind the dot).
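For example (pool, dataset, and snapshot names here are hypothetical), the snapshots of a mounted dataset can be browsed like this:

```shell
# List the snapshots available for the dataset
ls /tank/data/.zfs/snapshot/
# Each snapshot is a read-only view of the filesystem at that point in time
ls -l /tank/data/.zfs/snapshot/before-power-failure/
```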


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-13 Thread Dick Hoogendijk

 On 13-8-2010 22:43, Gary Mills wrote:

If this information is correct,

 http://opensolaris.org/jive/thread.jspa?threadID=133043

further development of ZFS will take place behind closed doors.
Opensolaris will become the internal development version of Solaris
with no public distributions.  The community has been abandoned.
True and very sad. I changed my LAN back to FreeBSD. It does not even 
come close to OpenSolaris, but it is stable, actively developed, and open. 
And it (still) has ZFS support. I wonder for how long...



[zfs-discuss] ZFS data from snv_b134 to fbsd

2010-08-12 Thread Dick Hoogendijk
I want to transfer a lot of ZFS data from an old OpenSolaris ZFS mirror 
(v22) to a new FreeBSD 8.1 ZFS mirror (v14).
If I boot off the OpenSolaris boot CD and import both mirrors, will 
copying from v22 ZFS to v14 ZFS be harmless?
I'm not sure if this is the right mailing list for this question. Let me 
know.



Re: [zfs-discuss] Upgrading 2009.06 to something current

2010-08-10 Thread Dick Hoogendijk

 On 1-8-2010 19:57, David Dyer-Bennet wrote:
 I've kind of given up on that.  This is a home production server; 
it's got all my photos on it.


The uncertainty around OpenSolaris made me drop it. I'm very sorry to 
say, because I loved the system. I do not want to worry all the time 
though, so I changed (back) to FreeBSD/amd64 with ZFS. I must say it 
runs very well and maintenance is very easy/clear. Not much help for 
you. I do hope there will be another OpenSolaris release, but I have my 
doubts.



[zfs-discuss] ZFS OpenSolaris and FreeBSD

2010-08-10 Thread Dick Hoogendijk
 If I create a ZFS mirrored zpool on FreeBSD (ZFS v14), will I be able 
to boot off an OpenSolaris b131 CD and copy my data off (another) ZFS 
mirror created by OpenSolaris (ZFS v22)? A simple question, but my data 
is precious, so I ask beforehand. ;-)



Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Dick Hoogendijk

On 28-6-2010 12:13, Gabriele Bulfon wrote:


*sweat*
These systems have all been running for years now and I considered them safe...
Have I been at risk all this time?!


They're still running, are they not? So, stop sweating. <g>
But you're right about the changed patching service from Oracle.
It sucks big time. Security patches should be available even if the OS 
is free. You can't expect users to run unsafe systems just because they 
have not paid for the OS. After all, it's Oracle (Sun) who gives away 
the OS.


--
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Native ZFS for Linux

2010-06-12 Thread Dick Hoogendijk
On Sat, 12 Jun 2010 12:00:39 +0200, Joerg Schilling 
<joerg.schill...@fokus.fraunhofer.de> wrote:



The main problem with GPL related license debates seems to be that
very few people did read the GPL license text.


Or they simply do not want to, and just believe what they have been told 
is the truth.
If things are told often enough they have a tendency to become true, even  
if they are not.


--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | OpenSolaris 2010.xx b134
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] don't mount a zpool on boot

2010-05-29 Thread Dick Hoogendijk

On Sat, 29 May 2010 20:34:54 +0200, Kees Nuyt <k.n...@zonnet.nl> wrote:


On Thu, 20 May 2010 11:53:17 -0700, John Andrunas
j...@andrunas.net wrote:


Can I make a pool not mount on boot?  I seem to recall reading
somewhere how to do it, but can't seem to find it now.


As Tomas said, export the pool before shutdown.


Why don't you set the canmount=noauto property on the ZFS dataset?
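A minimal sketch of that property, using a hypothetical dataset name (note it applies per dataset; to keep a whole pool out of the picture you would still export it):

```shell
# Keep the dataset from being mounted automatically at boot
zfs set canmount=noauto tank/data
# Mount it explicitly only when it is actually needed
zfs mount tank/data
```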

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | OpenSolaris 2010.03 b134
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] diff between sharenfs and sharesmb

2010-05-04 Thread Dick Hoogendijk
I have some ZFS datasets that are shared through CIFS/NFS. So I created 
them with sharenfs/sharesmb options.


I have full access from Windows (through CIFS) to the datasets; however, 
all files and directories are created with (UNIX) permissions of 
(----------)/(d---------). So, although I can access the files now from 
my Windows machines, I can -NOT- access the same files with NFS.
I know I gave myself full permissions in the ACL. That's why sharesmb 
works, I guess. But what do I have to do to make -BOTH- work?


--
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] b134 - Mirrored rpool won't boot unless both mirrors are present

2010-03-28 Thread Dick Hoogendijk

On 28-3-2010 7:35, Victor Latushkin wrote:

This problem is known and fixed in later builds:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6923585

AFAIK it is going to be included in b134a as well.


It's now March 28.
For OpenSolaris 2010.03 that means only a few days remaining...
Or would it be called 2010.04? ;-)

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | OpenSolaris 2010.03 b134
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] ZFS aclmode property

2010-03-06 Thread Dick Hoogendijk

On 6-3-2010 18:41, Ralf Utermann wrote:

So from this site: we very much support the idea of adding ignore
and deny values for the aclmode property!

However, reading PSARC/2010/029, it looks like we will get
aclmode=discard for everybody and the property removed.
I hope this is not the end of the story ...


+1
Carefully constructed ACLs should -never- be destroyed by an 
(unwanted/unexpected) chmod. Extra aclmode properties should not be so 
hard to implement.


--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | OpenSolaris 2010.03 b131
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] fat32 ntfs or zfs

2010-02-27 Thread Dick Hoogendijk

On 27-2-2010 13:15, Mertol Ozyoney wrote:
This depends on what you are looking for. Generally ZFS will be more 
secure due to its checksum feature. Having seen a lot of NTFS/FAT 
drives going south due to bad sectors, I'd not classify them as very 
secure. However, NTFS and FAT can be used on nearly every OS.


And you also shouldn't forget the extra capabilities of ZFS, like 
snapshots ...


I'll go with ZFS. Like someone said, with 'copies=2' for extra safety. 
That should do it, I think.

Compression will slow my system down too much, so I'll skip that one.

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | OpenSolaris 2010.03 b131
+ All that's really worth doing is what we do for others (Lewis Carrol)



[zfs-discuss] copies=2 and dedup

2010-02-27 Thread Dick Hoogendijk
I want ZFS on a single drive, so I use copies=2 for -some- extra safety. 
But I wonder if dedup=on could mean something in this case too? That way 
the same blocks would never be written more than twice. Or would that 
harm the reliability of the drive, and should I just use copies=2?
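For the copies part, the setup is a one-liner (hypothetical dataset name; note the property only affects blocks written after it is set, and it is copies, not dedup, that controls the on-disk redundancy):

```shell
# Ask ZFS to store two copies of every block on the single-disk pool
zfs set copies=2 tank/data
# Verify the setting
zfs get copies tank/data
```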


--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | OpenSolaris 2010.03 b131
+ All that's really worth doing is what we do for others (Lewis Carrol)



[zfs-discuss] move ZFS fs to a zone

2010-02-07 Thread dick hoogendijk
# zfs list
rpool/www 3.64G   377G  3.64G  /var/www
rpool/zones   3.00G   377G24K  /zones
rpool/zones/anduin1.94G   377G24K  /zones/anduin
rpool/zones/anduin/ROOT   1.94G   377G21K  legacy
rpool/zones/anduin/ROOT/zbe   1.94G   377G  1.91G  legacy
rpool/zones/midgard   1.06G   377G24K  /zones/midgard
rpool/zones/midgard/ROOT  1.06G   377G21K  legacy
rpool/zones/midgard/ROOT/zbe  1.06G   377G  1.06G  legacy

I want to move the rpool/www ZFS filesystem to zone midgard.
Is this possible, and how is it done?
I guess it will no longer be accessible from the GZ then. That would be
good, because I want to separate my webserver from my global zone.
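One way to do this (a sketch, using the zone and dataset names from the message; the delegated dataset's mountpoint is then managed from inside the zone, so the GZ /var/www mountpoint no longer applies) is to delegate the dataset to the zone:

```shell
# Delegate the dataset to the non-global zone
zonecfg -z midgard "add dataset; set name=rpool/www; end"
# The zone sees the dataset after its next boot
zoneadm -z midgard reboot
# From inside the zone, set the mountpoint where the webserver expects it
# zfs set mountpoint=/var/www rpool/www
```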
-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | OpenSolaris 2010.03 b131
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] unionfs help

2010-02-04 Thread dick hoogendijk

Frank Cusack wrote:
 Is it possible to emulate a unionfs with zfs and zones somehow?  My zones
 are sparse zones and I want to make part of /usr writable within a zone.
 (/usr/perl5/mumble to be exact)

Why don't you just export that directory with NFS (rw) to your sparse zone
and mount it on /usr/perl5/mumble? Or is this too simple a thought?

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE



Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-30 Thread Dick Hoogendijk

On 30-1-2010 20:53, Mark wrote:

I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB 
SATA drives. When I install opensolaris, I assume it will want to use all or 
part of one of those drives for the install. That leaves me with the remaining 
part of disk 1, and all of disk 2.

Question is, how do I best install OS to maximize my ability to use ZFS 
snapshots and recover if one drive fails?
   


Install on one drive. After that, attach the second and create a mirror. 
You -NEED- redundancy.

Alternatively, I guess I could add a small USB drive to use solely for the OS  and then 
have all of the 2 750 drives for ZFS. Is that a bad idea since the OS drive will be 
standalone?
   

Very bad idea. Not safe. ZFS on one disk is asking for trouble.
Take two smaller disks for the OS (a mirrored vdev) and the two larger 
ones as a second vdev (mirrored too).



Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-29 Thread Dick Hoogendijk

On 28-1-2010 17:35, Cindy Swearingen wrote:

Thomas,

Excellent and much better suggestion... :-)

You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.


It turns out not to be excellent at all. Beadm does create a new ABE in 
the new zpool, but the old zpool remains the leading zpool. This means 
you cannot destroy it from within the newly booted BE, plus /export/... 
comes from the original zpool. The latter can be solved by unmounting 
with the -f option and remounting /export from the new BE, but the main 
issue is that the old zpool remains in charge. I'm going to try booting 
off a USB stick later this evening, and if that fails I will go the way 
of restoring the system as Cindy describes in her wonderful ZFS 
troubleshooting guide.


http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Dick


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread dick hoogendijk

Cindy Swearingen wrote:

 On some disks, the default partitioning is not optimal and you have to
 modify it so that the bulk of the disk space is in slice 0.

Yes, I know, but in this case the second disk indeed is smaller ;-(
So I wonder: should I reinstall the whole thing on this smaller disk and
then let the bigger one attach? That would mean opening up the case
and all that, because I don't have a DVD player built in.
So I thought I'd go the zfs send|recv way. What are your thoughts on this?

 Another thought is that a recent improvement was that you can attach a
 disk that is an equivalent size, but not exactly the same geometry.
 Which OpenSolaris release is this?

b131
And this only works if the difference is really (REALLY) small. :)

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u7 05/09 ZFS+



Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Dick Hoogendijk

On 28-1-2010 16:52, Thomas Maier-Komor wrote:

have you considered creating an alternate boot environment on the
smaller disk, rebooting into this new boot environment, and then
attaching the larger disk after destroying the old boot environment?

beadm might do this job for you...
   


What a great idea. Are there any special preparations I have to do on 
the second, smaller disk before I can create this ABE? It sounds like the 
simplest option after installing new hardware. ;-) I guess it's enough 
if the disk has a Sun partition on it?



Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Dick Hoogendijk

On 28-1-2010 17:35, Cindy Swearingen wrote:

Thomas,

Excellent and much better suggestion... :-)

You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.

Dick, you will need to update the BIOS to boot from the smaller disk.


Yes yes yes. It's a great idea. So, I first create this new root pool on 
the smaller disk and then I use beadm?

I can't use the same name (rpool) I guess.
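A sketch of the sequence Cindy describes (pool, BE, and device names here are hypothetical; the slice must carry a bootable label, and the BIOS boot order must be changed afterwards as she notes):

```shell
# Create the new root pool on the smaller disk
zpool create rpool2 c5d0s0
# Recreate the boot environment in the new pool via the -p option
beadm create -p rpool2 osol-small
# Make it the one to boot; beadm sets bootfs and updates GRUB
beadm activate osol-small
```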


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Dick Hoogendijk

On 28-1-2010 17:35, Cindy Swearingen wrote:

Thomas,

Excellent and much better suggestion... :-)

You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.

Dick, you will need to update the BIOS to boot from the smaller disk.
It's not that great an idea after all. Creating a new ABE in the new 
root pool goes well, BUT all other file systems on rpool (rpool/export, 
export/home, etc.) don't get transferred. So, attaching is not possible 
because '/export/home/me' is busy ;-)



Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread dick hoogendijk
On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
 But those could be copied by send/recv from the larger disk (current 
 root pool) to the smaller disk (intended new root pool).  You won't be 
 attaching anything until you can boot off the smaller disk and then it 
 won't matter what's on the larger disk because attaching the larger disk 
 to the root  mirror will destroy the contents of the larger disk anyway.

You are right of course.
Are these the right values for amd64 swap/dump:
zfs create -V 2G rpool/dump
zfs create -V 2G -b 4k rpool/swap
Are these -b 4k values OK?




Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small connect another disk

2010-01-28 Thread dick hoogendijk
On Thu, 2010-01-28 at 08:44 -0700, Cindy Swearingen wrote:
 Or, if possible, connect another larger disk and attach it to the original 
 root 
 disk or even replace the smaller root pool disk with the larger disk.

I'll go for that one. But since it's a somewhat older system I only have
IDE and SATA(150) connections, and IDE disks are rare these days.

Question: do SATA2 disks work on SATA(1) connections?



[zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-27 Thread dick hoogendijk
cannot attach c5d0s0 to c4d0s0: device is too small

So I guess I installed OpenSolaris onto the larger of the two disks. Now
I cannot create a mirrored root, because the other device is smaller.
What is the best way to correct this, other than starting all over with
two disks of the same size (which I don't have)?

Do I zfs send the stream to the smaller disk, and will the bigger one
then attach? Or is there another way? I need redundancy, so I hope to
get answers soon. ;-)



[zfs-discuss] zfs streams

2010-01-24 Thread dick hoogendijk
Can I send a zfs send stream (ZFS pool version 22; ZFS filesystem
version 4) to zfs receive on Solaris 10 (ZFS pool version 15;
ZFS filesystem version 4)?
-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.03 b131
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] Backing up a ZFS pool

2010-01-16 Thread dick hoogendijk
On Sat, 2010-01-16 at 07:24 -0500, Edward Ned Harvey wrote:

 Personally, I use zfs send | zfs receive to an external disk.  Initially a
 full image, and later incrementals.

Do these incrementals go into the same filesystem that received the
original zfs stream?
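Yes, the incrementals are received into the same target filesystem that the full stream created. A sketch with hypothetical names:

```shell
# Initial full stream creates the filesystem on the external pool
zfs send tank/data@snap1 | zfs receive backup/data
# Later incrementals between consecutive snapshots go into that
# same filesystem; the receiver must still have @snap1 unmodified
zfs send -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data
```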



[zfs-discuss] solaris 10U7

2009-12-13 Thread dick hoogendijk
I just noticed that my zpool is still running v10 and my ZFS filesystems
are on v3. This is on Solaris 10U3. Before upgrading the zpool and ZFS
versions I'd like to know the versions supported by Solaris 10 update 7.
I'd rather not make my zpools inaccessible ;)



Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread dick hoogendijk
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:

 The host identity had - of course - changed with the new motherboard
  and it no longer recognised the zpool as its own.  'zpool import -f
  rpool' to take ownership, reboot and it all worked no problem (which
  was amazing in itself as I had switched from AMD to Intel ...).

Do I understand correctly if I read this as: OpenSolaris is able to
switch between systems without reinstalling? Just a zpool import -f and
everything runs? Wow, that would be an improvement and would make
things more like *BSD/Linux.



Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread dick hoogendijk
On Sat, 2009-12-12 at 09:08 -0800, Richard Elling wrote:
 On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
 
  On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
 
  The host identity had - of course - changed with the new motherboard
  and it no longer recognised the zpool as its own.  'zpool import -f
  rpool' to take ownership, reboot and it all worked no problem (which
  was amazing in itself as I had switched from AMD to Intel ...).
 
  Do I understand correctly if I read this as: OpenSolaris is able to
  switch between systems without reinstalling? Just a zpool import -f and
  everything runs? Wow, that would be an improvement and would make
  things more like *BSD/Linux.
 
 Solaris has been able to do that for 20+ years.  Why do you think
 it should be broken now?

Because, like I said, I always understood it was very difficult to
move disks to another system and run the installed Solaris version on
that new hardware.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b129
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] ZFS send | verify | receive

2009-12-05 Thread dick hoogendijk
On Sat, 2009-12-05 at 09:22 -0600, Bob Friesenhahn wrote:

 You can also stream into a gzip or lzop wrapper in order to obtain the 
 benefit of incremental CRCs and some compression as well.

Can you give an example command line for this option, please?
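Something like the following should do it (a sketch with hypothetical names; gzip's built-in CRC lets you verify the stored stream without needing a ZFS pool at hand):

```shell
# Store the replication stream gzip-compressed
zfs send tank/data@snap | gzip > /backup/data.zfs.gz
# Verify the stored file via gzip's CRC
gzip -t /backup/data.zfs.gz
# Restore later by decompressing back into zfs receive
gzip -dc /backup/data.zfs.gz | zfs receive tank/restored
```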



Re: [zfs-discuss] sharemgr

2009-11-25 Thread dick hoogendijk
On Wed, 2009-11-25 at 10:00 -0500, Kyle McDonald wrote:

 To each their own.
[cut the rest of your reply]

In general: I stand corrected. I was rude.




Re: [zfs-discuss] sharemgr

2009-11-24 Thread dick hoogendijk

glidic anthony wrote:

 I have a solution using zfs set sharenfs=rw,nosuid on the zpool, but I
 prefer to use the sharemgr command.

Then your preference is wrong. ZFS filesystems are not shared that way.
Read up on ZFS and NFS.

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u7 05/09 ZFS+



Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-28 Thread dick hoogendijk

Ben Middleton wrote:

I'm just looking for a clean way to remove the old BE, and then remove the old 
snapshot without interfering with Live Upgrade from working in the future.
  

Remove the right line from /etc/lutab.
Remove the ICF.number and INODE.number files (where number is the 
same as in the line in /etc/lutab) from the /etc/lu directory. You'll 
notice with lustatus that the BE is gone.

Remove the ZFS datasets and snapshots for the BE you just deleted.

I've done this hack quite a few times in the past and it always worked fine.
It's not supported by Sun though.
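As a sketch only (the BE number 3 and dataset names are hypothetical, and since this is an unsupported hack, back up the Live Upgrade metadata first):

```shell
# Keep a copy of the metadata before touching it
cp /etc/lutab /etc/lutab.bak
# Drop the lines belonging to BE number 3
grep -v '^3:' /etc/lutab.bak > /etc/lutab
# Remove the matching bookkeeping files
rm /etc/lu/ICF.3 /etc/lu/INODE.3
# The BE should no longer be listed
lustatus
# Finally remove the BE's datasets and snapshots
zfs destroy -r rpool/ROOT/s10u6
```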

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.03 b125
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] zpool import single user mode incompatible version

2009-10-27 Thread dick hoogendijk

Tim Cook wrote:



On Tue, Oct 27, 2009 at 4:25 PM, Paul Lyons <paulrly...@gmail.com> wrote:


When I boot off the Solaris 10 U8 DVD I get the error that the pool is
formatted using an incompatible version.


You're booting from an old CD that has an old version of ZFS. Grab a 
new ISO.

It might be that I can't read, but doesn't the OP state he is booting 
off the Solaris 10 update 8 DVD?
What can be newer than that one? If the miniroot really only supports 
ZFS v10 then this is indeed not good (unworkable/unusable/..)


--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.03 b125
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread dick hoogendijk

Mark Horstman wrote:

I don't see anything wrong with my /etc/vfstab. Until I get this resolved, I'm 
afraid to patch and use the new BE.
  

It's the vfstab file in the newly created ABE that is written incorrectly.
Try mounting this new ABE and check for yourself.

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] ZFS snapshots rsync --delete

2009-10-18 Thread dick hoogendijk
On Sun, 2009-10-18 at 18:12 +0200, Sander Smeenk wrote:

 Well, thats what i would expect too. It seems strange that you can't
 edit or remove singular files from snapshots [...]

That would make the snapshot not a snapshot anymore. There would be
differences..



Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-17 Thread dick hoogendijk
On Sat, 2009-10-17 at 08:11 -0700, Philip Brown wrote:
 same problem here on sun x2100 amd64

It's a bootblock issue. If you really want to get back to u6 you have to
run installgrub /boot/grub/stage1 /boot/grub/stage2 from the update 6
image, so mount it (with lumount or, easier, with zfs mount) and make
sure you take the stage1/stage2 from that update.
***WARNING*** after doing so, your u6 BE will boot, but your u8 BE will
not. On activating update 8 all GRUB entries are synced, so that all BEs
are bootable. That's the way it's supposed to be. Maybe something went
wrong and only the new u8 BE understands the new bootblocks.



[zfs-discuss] zfs on s10u8

2009-10-16 Thread dick hoogendijk
Any known issues with the new ZFS on Solaris 10 update 8?
Or is it still wiser to hold off on a zpool upgrade, because older ABEs
can no longer be accessed afterwards?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-15 Thread dick hoogendijk
On Wed, 14 Oct 2009 22:07:54 PDT
Brian bmul...@gmail.com wrote:

 ERROR: svc:/system/filesystem/minimal:default failed to mount /var

You have found it yourself already. The LU packages from update 8 create
lines in /etc/vfstab for ZFS filesystems! Former LU packages never did;
they left a good working vfstab alone. This behaviour is also related
to errors with zones, btw. The fact that lines are created in vfstab has
been ignored in the replies so far. I think that is weird.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] s10u8: lots of fixes, any commentary?

2009-10-15 Thread dick hoogendijk
On Fri, 16 Oct 2009 07:06:23 +1300
Ian Collins i...@ianshome.com wrote:

 + [ /zoneRoot/common = legacy ]

ABSURD. I'm sure the zone has its own mountpoint set. And now LU
suddenly presumes a legacy mountpoint? (It has even set it
in /etc/vfstab in the new ABE.)

 cannot mount 'rpool/ROOT/10u8/zoneRoot/common-10u8': legacy mountpoint
 use mount(1M) to mount this filesystem

That's how LU scr*s things up.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] memory use

2009-10-13 Thread dick hoogendijk
Every ZFS filesystem uses system memory, but is this also true for
-NOT- mounted filesystems (with the canmount=noauto option set)?

Second question: would it make much difference to have 12 or 22 ZFS
filesystems? What's the memory footprint of a single ZFS filesystem?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] ZPOOL Metadata / Data Error - Help

2009-10-04 Thread dick hoogendijk

Bruno Sousa wrote:

Action: Restore the file in question if possible. Otherwise restore the
  entire pool from backup.
  metadata:0x0
  metadata:0x15


Hmm, and what file(s) would this be?

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)



[zfs-discuss] zfs vbox and shared folders

2009-09-27 Thread dick hoogendijk
Are there any known issues involving VirtualBox using shared folders 
from a ZFS filesystem?


--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] backup disk of rpool on solaris

2009-09-20 Thread dick hoogendijk
On Sat, 2009-09-19 at 22:03 -0400, Jeremy Kister wrote:
 I added a disk to the rpool of my zfs root:
 # zpool attach rpool c1t0d0s0 c1t1d0s0
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
 
 I waited for the resilver to complete, then i shut the system down.
 
 then i physically removed c1t0d0 and put c1t1d0 in it's place.
 
 I tried to boot the system, but it panics:

Afaik you can't remove the first disk. You've created a mirror of two
disks, either of which you can boot the system from. BUT the second disk
must remain where it is. You can set the BIOS to boot from it if the
first disk fails, but you may not *swap* them.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] Real help

2009-09-20 Thread dick hoogendijk

On Sun, 2009-09-20 at 11:41 -0700, vattini giacomo wrote:
 Hi there,i'm in a bad situation,under Ubuntu i was tring to import a solaris 
 zpool that is in /dev/sda1,while the Ubuntu is in /dev/sda5;not being able to 
 mount the solaris pool i decide to destroy the pool created like that
 sudo zfs-fuse
 sudo zpool  create hazz0 /dev/sda1
 sudo zpool destroy hazz0
 sudo reboot
  Now opensolaris is not booting everything is vanished
 Is there anyhow to restore everything?

Any idea about the meaning of the verb DESTROY?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)



[zfs-discuss] Q about incremental zfs send recv

2009-09-12 Thread dick hoogendijk
Probably a dumb (but basic) question about incremental zfs backups.
After reading the docs I'm still not sure, so I ask here.

# zfs snapshot -r rpool/ROOT/b...@0901
# zfs send rpool/ROOT/b...@0901 | zfs recv -Fdu tank
# zfs snapshot -r rpool/ROOT/b...@0902
# zfs send -i rpool/ROOT/b...@0901 rpool/ROOT/b...@0902 | zfs recv -vF tank

What I'd like to see confirmed is that the incremental backup is
received into the -same- filesystem as the originally backed-up one
(tank).
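
For reference, a dry-run sketch of a full-plus-incremental flow where the receive side uses -d, so both streams land under the same target tree. The helper only prints the pipelines; rpool/ROOT/be and tank stand in for the obfuscated names above:

```shell
# Print the full + incremental send/recv pipelines instead of running
# them, so the plan can be checked before touching real pools.
plan_backup() {
  src=$1 target=$2 first=$3 second=$4
  echo "zfs send ${src}@${first} | zfs recv -Fdu ${target}"
  echo "zfs send -i ${src}@${first} ${src}@${second} | zfs recv -du ${target}"
}
plan_backup rpool/ROOT/be tank 0901 0902
```

With -d, the incremental stream is applied to the same received dataset the full stream created (here tank/ROOT/be), provided the first snapshot still exists on both sides.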

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b122
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Problem with snv_122 Zpool issue

2009-09-12 Thread dick hoogendijk
On Sat, 12 Sep 2009 07:38:43 PDT
Hamed bar...@etek.chalmers.se wrote:

 Please help me. I really need help. I did a stupid thing i know.

Afaik no help exists in this case other than a full backup / restore.
There is no way back to former zfs versions.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b122
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] check a zfs rcvd file

2009-09-05 Thread dick hoogendijk

Lori Alt wrote:

On 09/04/09 10:17, dick hoogendijk wrote:

Lori Alt wrote:
The -u option to zfs recv (which was just added to support flash 
archive installs, but it's useful for other reasons too) suppresses 
all mounts of the received file systems.  So you can mount them 
yourself afterward in whatever order is appropriate, or do a 'zfs 
mount -a'.
You misunderstood my problem. It is very convenient that the 
filesystems are not mounted. I only wish they could stay that way! 
Alas, they ARE mounted (even if I don't want them to) when I  
*reboot* the system. And THAT's when things get ugly. I then have 
different zfs filesystems using the same mountpoints! The backed-up 
ones have the same mountpoints as their origin :-/  - The only way 
to stop it is to *export* the backup zpool OR to change *manually* 
the zfs prop canmount=noauto on all backed-up snapshots/filesystems.


As I understand it, I cannot pass this canmount=noauto to the zfs 
receive command.

# zfs send -Rv rp...@0909 | zfs receive -Fdu backup/snaps
There is a RFE to allow zfs recv to assign properties, but I'm not 
sure whether it would help in your case.  I would have thought that 
canmount=noauto would have already been set on the sending side, 
however.  In that case, the property should be preserved when the 
stream is received.

Well, I checked again today. This is what happens:

NAME PROPERTY  VALUESOURCE
tank/ROOT/daffy  canmount  on   default
NAME  PROPERTY  VALUE SOURCE
rpool/ROOT/daffy  canmount  noautolocal

As you can see the original dataset (rpool/ROOT/daffy) has 
canmount=noauto set. However, the received dataset (zfs send 
rpool/ROOT/da...@090905 | zfs receive -Fdu /tank) has this property 
changed(!) into canmount=on. So, what you state is not true. The 
property is NOT preserved.

Is this a bug?

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 B121
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] check a zfs rcvd file

2009-09-04 Thread dick hoogendijk

Lori Alt wrote:
The -n option does some verification.  It verifies that the record 
headers distributed throughout the stream are syntactically valid.  
Since each record header contains a length field which allows the next 
header to be found, one bad header will cause the processing of the 
stream to abort.  But it doesn't verify the content of the data 
associated with each record.


So, storing the stream in a zfs-received filesystem is the better 
option. Alas, it is also the most difficult one. Storing to a file with 
zfs send -Rv is easy: the result is just a file, and if you reboot the 
system all is OK. However, if I zfs receive -Fdu into a zfs filesystem, 
I'm in trouble when I reboot the system: I get confusion on mountpoints! 
Let me explain:


Some time ago I backed up my rpool and my /export and /export/home to 
/backup/snaps (with zfs receive -Fdu). All was OK because the newly 
created zfs FS's stay unmounted till the next reboot(!). When I 
rebooted my system (due to a kernel upgrade) the system would not boot, 
because it had mounted the zfs FS backup/snaps/export on /export and 
backup/snaps/export/home on /export/home. The system itself had those 
FS's too, of course, so there was a mix-up. It would be nice if the 
backup FS's would not be mounted (canmount=noauto), but I cannot give 
this option when I do the zfs send | receive, can I? And setting it 
later on is very difficult, because canmount is NOT recursive, and I 
don't want to set it manually on all those backed-up FS's.


I wonder how other people overcome this mountpoint issue.

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b122
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] check a zfs rcvd file

2009-09-04 Thread dick hoogendijk

Lori Alt wrote:
The -u option to zfs recv (which was just added to support flash 
archive installs, but it's useful for other reasons too) suppresses 
all mounts of the received file systems.  So you can mount them 
yourself afterward in whatever order is appropriate, or do a 'zfs 
mount -a'.
You misunderstood my problem. It is very convenient that the filesystems 
are not mounted. I only wish they could stay that way! Alas, they ARE 
mounted (even if I don't want them to) when I  *reboot* the system. And 
THAT's when things get ugly. I then have different zfs filesystems using 
the same mountpoints! The backed-up ones have the same mountpoints as 
their origin :-/  - The only way to stop it is to *export* the backup 
zpool OR to change *manually* the zfs prop canmount=noauto on all 
backed-up snapshots/filesystems.


As I understand it, I cannot pass this canmount=noauto to the zfs receive 
command.

# zfs send -Rv rp...@0909 | zfs receive -Fdu backup/snaps
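
One workaround, as a sketch: since canmount is not recursive and cannot be passed to receive on these builds, walk the received tree after the receive and set the property per dataset. The ZFS_CMD hook is only there so the loop can be dry-run against a stub; in real use it is simply the zfs binary:

```shell
# Set canmount=noauto on every filesystem under a received tree.
# ZFS_CMD defaults to the real zfs command; point it at a stub to dry-run.
ZFS_CMD=${ZFS_CMD:-zfs}
noauto_tree() {
  $ZFS_CMD list -H -r -t filesystem -o name "$1" |
  while read -r fs; do
    $ZFS_CMD set canmount=noauto "$fs"
  done
}
# Real use after the receive: noauto_tree backup/snaps
```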

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 B121
+ All that's really worth doing is what we do for others (Lewis Carrol)



Re: [zfs-discuss] check a zfs rcvd file

2009-09-03 Thread dick hoogendijk
On Wed, 2 Sep 2009 13:06:35 -0500 (CDT)
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
 Nothing prevents validating the self-verifying archive file via this
 zfs recv -vn  technique.

Does this verify the ZFS format/integrity of the stream?
Or is the only way to do that to zfs recv the stream into ZFS?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 B121
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] check a zfs rcvd file

2009-09-02 Thread Dick Hoogendijk


Some time ago there was some discussion on zfs send | rcvd TO A FILE.
Apart form the disadvantages which I now know someone mentioned a CHECK to  
be at least sure that the file itself was OK (without one or more bits  
that felt over). I lost this reply and would love to hear this check  
again. In other words how can I be sure of the validity of the received  
file in the next command line:


# zfs send -Rv rp...@090902 > /backup/snaps/rpool.090902

I only want to know how to check the integrity of the received file.
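
Two cheap checks, as a sketch; the pool, dataset, and file names are placeholders following the post, and note that `zfs receive -n` only walks the stream's record headers, it does not verify the payload:

```shell
# 1) Record a checksum when the stream file is written, and compare it
#    later to catch bit rot on the storage holding the file.
digest -a sha1 /backup/snaps/rpool.090902 > /backup/snaps/rpool.090902.sha1

# later, before trusting the file:
[ "$(digest -a sha1 /backup/snaps/rpool.090902)" = \
  "$(cat /backup/snaps/rpool.090902.sha1)" ] || echo "file changed on disk"

# 2) Let zfs parse the stream's record headers without writing anything:
zfs receive -vn tank/check < /backup/snaps/rpool.090902
```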

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 B121
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] incremental backup with zfs to file

2009-08-24 Thread dick hoogendijk
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:

  zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool.0908
 
 The recommended thing is to zfs send | zfs receive [...]

[cut the rest of the reply]

I want to thank everyone for the insights shared on this matter. I
learned a lot and will change the procedure to a send/recv. The
receiving system is on the exact same level of ZFS, so that's fine.

I -DO- think, however, that the advice in the mentioned link should be
rewritten to this procedure, or at least this should be clearly mentioned
as a way to go. (CINDY?)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] incremental backup with zfs to file

2009-08-24 Thread dick hoogendijk
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:

  zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool.0908
 
 The recommended thing is to zfs send | zfs receive 

I have a zpool named backup for this purpose (mirrored).

Do I create a separate FS (backup/FS) in it, or can I use your example
like: zfs send rp...@0908 | zfs receive -Fd bac...@0908

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] incremental backup with zfs to file

2009-08-24 Thread dick hoogendijk
On Mon, 24 Aug 2009 16:36:13 +0100
Darren J Moffat darr...@opensolaris.org wrote:

 Joerg Schilling wrote:
  dick hoogendijk d...@nagual.nl wrote:
  
  On Sun, 23 Aug 2009 22:05:15 -0400
  Edward Ned Harvey sola...@nedharvey.com wrote:
 
   zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool.0908
  The recommended thing is to zfs send | zfs receive 
  I have a zpool named backup for this purpose (mirrored).
 
  Do I create a seperate FS (backup/FS) into it or can I use your
  example like: zfs send rp...@0908 | zfs receive -Fd bac...@0908
  
  Unless this second pool is on a different physical location, this
  is not a backup.
 
 That depends what backup means in this particular environment and
 what the risk model is.  If fire, theft or other things that normally
 require and offsite copy aren't part of this persons risk model then
 it may well be a perfectly sufficient backup for them.
 
  A real backup is able to survive a fire, theft or similar problems.
 
 Not all secondary/offline copies of data need to survive those
 risks.
 
 This particular case could be, for example:  if backup is a pool
 made from a disk (or set of disks) that are either physically removed
 or otherwise protected from fire and/or theft then it is a backup by
 your definition. The drives may get detached when the receive is
 finished and put them into a firesafe.

Thank you for the analysis. This is the case. The drives are stored
somewhere else after the backup has been made.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] incremental backup with zfs to file

2009-08-24 Thread dick hoogendijk
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:

 ## Create Full snapshot and send it
 zfs send sourc...@uniquesnapname | ssh somehost 'zfs receive -F
 targe...@uniquesnapname'

this is what I want to do. However, I want a recursive backup of the
root pool. From the Solaris docs I understand I have to use this line:

# zfs send -Rv rp...@0908 | zfs receive -Fd backup/server/rp...@0908

I'm not quite sure about the -Fd options of receive.
Is this correct?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] incremental backup with zfs to file

2009-08-23 Thread dick hoogendijk
FULL backup to a file
zfs snapshot -r rp...@0908
zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool.0908

INCREMENTAL backup to a file
zfs snapshot -i rp...@0908 rp...@090822
zfs send -Rv rp...@090822 > /net/remote/rpool/snaps/rpool.090822

As I understand it, the latter gives a file with the changes between 0908
and 090822. Is this correct?

How do I restore those files? I know how to recreate the root pool
and how to restore the first one (.../snaps/rpool.0908)
But what is the exact zfs syntax to restore the second file on top of
the first one, containing the differences between the two?
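
A sketch of the answer, with the pool name written out as rpool and the snapshot tags as in the post. Note that -i belongs to zfs send; zfs snapshot has no -i flag, so the incremental is a plain recursive snapshot plus an incremental send:

```shell
# Take the second snapshot and produce the incremental stream file:
zfs snapshot -r rpool@090822
zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822

# Restore: replay the full stream first, then the incremental on top.
# The @0908 snapshot created by the full receive must still be intact.
zfs receive -Fdu rpool < /net/remote/rpool/snaps/rpool.0908
zfs receive -du rpool < /net/remote/rpool/snaps/rpool.090822
```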

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] incremental backup with zfs to file

2009-08-23 Thread dick hoogendijk
On Sun, 23 Aug 2009 13:15:37 +0200
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote:

 dick hoogendijk d...@nagual.nl wrote:
 
  FULL backup to a file
  zfs snapshot -r rp...@0908
  zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool.0908
 
  INCREMENTAL backup to a file
  zfs snapshot -i rp...@0908 rp...@090822
  zfs send -Rv rp...@090822 > /net/remote/rpool/snaps/rpool.090822
 
  As I understand the latter gives a file with changes between 0908
  and
  090822. Is this correct?
 
 What do you understand by incremental backup?

I do not want to run the full zfs send every time I make a
backup of my root pool. It simply takes too long and too much space.
I do, however, want to be able to restore my root pool in case of a
disaster, as well and as recently as possible.

 If you like to be able to restore single files, I recommend you to
 use star for the incrementals.

I have no need for restoring single files; I use star / rsync for that
already. I want to be able to restore my root pool in case of disk
failure. So, I can always do a zfs send of the whole root, but I
thought it might be possible to do this once, followed by incremental
differences.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] incremental backup with zfs to file

2009-08-23 Thread dick hoogendijk
On Sun, 23 Aug 2009 09:54:07 PDT
Ross myxi...@googlemail.com wrote:

 If you really want to store a backup, create another ZFS filesystem
 somewhere and do a send/receive into it.  Please don't try to dump
 zfs send to a file and store the results.

If this is true, then WHY does Sun advise creating a zfs send to a
file somewhere? ZFS Root Pool Recovery from the ZFS Troubleshooting
Guide clearly mentions the creation of a -file-:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] changing SATA ports

2009-08-07 Thread Dick Hoogendijk
I have a new MB (the same as before, but this one works..) and I want to 
change the way my SATA drives are connected. I had a ZFS boot mirror 
connected to SATA3 and 4, and I want those drives to be on SATA1 and 2 now.


Question: will ZFS see this and boot the system OK, or will I have to 
take some precautions beforehand?




Re: [zfs-discuss] zpool causing boot to hang

2009-08-01 Thread dick hoogendijk
On Fri, 31 Jul 2009 15:43:11 -0400
Mark Johnson mark.john...@sun.com wrote:

 One thing that could be related is that I was running
 a scrub when I had powered off the system. The scrub
 started up again after I had imported the pool.
 
 Anyone know if this is a known problem?

I know people running a scrub often have problems after shutting down
during the scrub. I have learned to HALT the scrub before going
offline.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] feature proposal

2009-07-31 Thread dick hoogendijk
On Wed, 29 Jul 2009 17:34:53 -0700
Roman V Shaposhnik r...@sun.com wrote:

 On the read-write front: wouldn't it be cool to be able to snapshot
 things by:
 $ mkdir .zfs/snapshot/snap-name

I've followed this thread, but I fail to see the advantages of this. I
guess I'm missing something here. Can you explain to me why the above would
be better (nice to have) than zfs snapshot whate...@now?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] feature proposal

2009-07-31 Thread dick hoogendijk
On Fri, 31 Jul 2009 18:38:16 +1000
Tristan Ball tristan.b...@leica-microsystems.com wrote:

 Because it means you can create zfs snapshots from a non solaris/non 
 local client...
 
 Like a linux nfs client, or a windows cifs client.

So if I want a snapshot of e.g. rpool/export/home/dick I can do a zfs
snapshot of rpool/export/home/dick, but what is the exact syntax for the
same snapshot using this other method?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] USF drive on S10u7

2009-07-28 Thread dick hoogendijk
What is the best way to attach a USB hard disk to Solaris 10u7?
I know some program runs to auto-detect such a device (I have
forgotten its name, because I do almost all my work on OSOL: hal).
Do I use that program, or disable it and manually attach the drive to
the system?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 05/09 | OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] USF drive on S10u7

2009-07-28 Thread dick hoogendijk
On Tue, 28 Jul 2009 09:03:14 -0600
cindy.swearin...@sun.com wrote:

 The Solaris 10 volume management service is volfs.
#svcs -a | grep vol has told me that ;-)

 If the auto-mounting doesn't occur, you can disable volfs and mount
 it manually.

I don't want the automounting to occur, so I disabled volfs.
I then did a rmformat to learn the device name, followed by a zpool
create archive /dev/rdsk/devicename

All running nicely. Thanks for the advice.
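
For the archive, the sequence described above as commands; a sketch, with an illustrative device name (take the real one from rmformat's output):

```shell
svcadm disable volfs          # stop automounting of removable media
rmformat                      # list removable devices; note e.g. c2t0d0
zpool create archive c2t0d0   # let ZFS label and use the whole USB disk
```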

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 05/09 | OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] sharenfs question

2009-07-27 Thread dick hoogendijk
On Mon, 27 Jul 2009 08:26:06 -0600
Mark Shellenbaum mark.shellenb...@sun.com wrote:

 I would suggest you open a bug on this.
 http://defect.opensolaris.org/bz/

Done. Bugzilla – Bug 10294 Submitted

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 05/09 | OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-26 Thread dick hoogendijk
On Sun, 26 Jul 2009 12:14:03 -0400
Oscar del Rio del...@mie.utoronto.ca wrote:

 dick hoogendijk wrote:
 
  r...@westmark:/# share
  -...@store/snaps   /store/snaps   sec=sys,rw=arwen,root=arwen 
  
  arwen# zfs send -Rv rp...@0906 > /net/westmark/store/snaps/rpool.0906
  zsh: permission denied: /net/westmark/store/snaps/rpool.0906
 
 try sharing with the @ network syntax.  See man share_nfs
 
 r...@192.168.xx.xx/32,ro...@192.168.xx.xx/32

Does not work! The root part is to blame for that.
This rule does work:
r...@192.168.xx.xx/32,root=arwen

I have no idea why root=arwen has to be specified as a host name, while
the node name in rw= can be given in @network form.
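
The working combination can be written out like this (a sketch; the network address is a placeholder for the poster's LAN):

```shell
# Network form for rw=, a resolvable host name for root=:
zfs set sharenfs='rw=@192.168.11.0/24,root=arwen' store/snaps
```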

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 05/09 | OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] sharenfs question

2009-07-25 Thread dick hoogendijk
# zfs create store/snaps
# zfs set sharenfs='rw=arwen,root=arwen' store/snaps
# share
-...@store/snaps   /store/snaps   sec=sys,rw=arwen,root=arwen 

arwen# zfs send -Rv rp...@0906 > /net/westmark/store/snaps/rpool.0906
zsh: permission denied: /net/westmark/store/snaps/rpool.0906

*** BOTH systems have NFSMAPID DOMAIN=nagual.nl set in the
*** file /etc/default/nfs

The NFS docs mention that the rw option can be a node name (like arwen).
But as you can see I get no access when I set rw=arwen.
And yet arwen is known!
This rule works:
# zfs set sharenfs='root=arwen' store/snaps
The snapshots are sent from arwen to the remote machine and get
root:root privileges. So that's OK.
This rule does NOT work:
# zfs set sharenfs='rw=arwen,root=arwen' store/snaps
I get a permission denied. Apparently rw=arwen is not recognized.

Is something wrong with the syntax of the way ZFS uses sharenfs?
Or have I misread the share_nfs manual?
What can be wrong with the line zfs set sharenfs='rw=arwen,root=arwen'
store/snaps ?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread dick hoogendijk
Hi, I followed the FAQ on this, but get errors I can't understand. As I
do want to make backups, I really hope someone can tell me what's wrong.
== [ what I did ]
[my remote system]

westmark# zfs create store/snaps
westmark# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
store 108K  8.24G19K  /store
store/snaps18K  8.24G18K  /store/snaps

[share to local system]
westmark# zfs set sharenfs='rw=192.168.11.22,root=root' store/snaps
# share
-...@rpool/snaps   /rpool/snaps   sec=sys,rw=192.168.11.22,root=root

# Create a recursive snapshot of the root pool.

[my local system]
arwen# zfs snapshot -r rp...@0906

Send the entire recursive snapshot:

# zfs send -Rv rp...@0906 > /net/192.168.11.22/rpool/snaps/rpool.0906

And then I get this permission denied so I MUST have done something
wrong. Can anybody point me at my error(s)?


arwen# zfs send -Rv rp...@0906 > /net/192.168.11.22/store/snaps/rpool.0906
zsh: permission denied: /net/192.168.11.22/store/snaps/rpool.0906
arwen# 


-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 15:55:02 +0200
dick hoogendijk d...@nagual.nl wrote:

 [share to local system]
 westmark# zfs set sharenfs=on store/snaps

I left out the options and changed the /store/snaps directory
permissions to 777. Now the snapshot can be sent from the host, but it
gets uid:gid permissions like nobody:nobody. Is this bad?
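[Editor's sketch: files arriving as nobody:nobody over NFSv4 usually point at a mismatched identity-mapping domain between client and server. A minimal check, assuming the SXCE/Solaris 10-style /etc/default/nfs configuration used in this thread:]

```sh
# On BOTH client and server: the NFSv4 identity domain must match
grep NFSMAPID_DOMAIN /etc/default/nfs

# After changing it, restart the mapid service and re-mount the share
svcadm restart svc:/network/nfs/mapid
```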

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 07:19:40 -0700 (PDT)
Rich Teer rich.t...@rite-group.com wrote:

 Given that data integrity is presumably important in every non-gaming
 computing use, I don't understand why people even consider not using
 ECC RAM all the time.  The hardware cost delta is a red herring:

I live in Holland and it is not easy to find motherboards that (a)
truly support ECC ram and (b) are (Open)Solaris compatible.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 10:44:36 -0400
Kyle McDonald kmcdon...@egenera.com wrote:
 ... then it seems  like a shame (or a waste?)  not to equally
 protect the data both before it's given to ZFS for writing, and after
 ZFS reads it back and returns it to you.

But that was not the question.
The question was: [quote] My question is: is there any technical
reason, in ZFS's design, that makes it particularly important for ZFS
to require ECC RAM?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 10:00:30 -0600
cindy.swearin...@sun.com wrote:

 Reproducing this will be difficult in my environment since
 our domain info is automatically setup...

Hey, no sweat ;-) I only asked because I don't want to do the whole
send again. But then again, computers don't get tired.

 I would suggest fixing the nobody problem because you might
 have other NFS-related problems accessing your files.

Right. I have set the domain name in /etc/default/nfs on all machines
of my local network. So I guess that's OK now.

The question remains what this line should actually read:

#zfs set sharenfs='rw=local-system,root=local-system' store/snaps

rw=local-system becomes arwen, that's obvious, but what about the
root part? I run zfs send ... as root on the local machine (do I have
to?), and on the remote machine /store/snaps is owned root:root.

The backup being OK is important so I appreciate your help.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 19:36:52 +0200
dick hoogendijk d...@nagual.nl wrote:

 Thank you for your support 'till now. One final question:..

Alas, it's not a final question. It still does not work. I have no idea
what else I could have forgotten. This is what I have on arwen (local)
and westmark (remote):

r...@westmark:/# share
-...@store/snaps   /store/snaps   sec=sys,rw=arwen,root=arwen 

arwen# zfs send -Rv rp...@0906 > /net/westmark/store/snaps/rpool.0906
zsh: permission denied: /net/westmark/store/snaps/rpool.0906

*** BOTH systems have NFSMAPID DOMAIN=nagual.nl set in the
*** file /etc/default/nfs

NFS services on both systems:
arwen# svcs -a | grep nfs  -- local
disabled   20:02:06 svc:/network/nfs/client:default
online 20:02:24 svc:/network/nfs/cbd:default
online 20:02:24 svc:/network/nfs/mapid:default
online 20:02:24 svc:/network/nfs/status:default
online 20:02:25 svc:/network/nfs/nlockmgr:default
online 20:02:27 svc:/network/nfs/server:default
online 20:02:27 svc:/network/nfs/rquota:default
arwen# 

r...@westmark:~# svcs -a | grep nfs  -- remote
online 20:01:25 svc:/network/nfs/mapid:default
online 20:01:25 svc:/network/nfs/status:default
online 20:01:25 svc:/network/nfs/nlockmgr:default
online 20:01:27 svc:/network/nfs/server:default
online 20:01:27 svc:/network/nfs/client:default
online 20:01:27 svc:/network/nfs/cbd:default
online 20:01:28 svc:/network/nfs/rquota:default

Where do I look next?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-19 Thread dick hoogendijk
On Sun, 19 Jul 2009 00:00:06 -0700
Brent Jones br...@servuhome.net wrote:

 No offense, but you trusted 10TB of important data, running in
 OpenSolaris from inside Virtualbox (not stable) on top of Windows XP
 (arguably not stable, especially for production) on probably consumer
 grade hardware with unknown support for any of the above products?

Running this kind of setup gives you absolutely NO guarantees at all:
virtualisation, OSOL/ZFS on WinXP. It's nice to play with and see
working, but would I TRUST precious data to it? No way!

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-19 Thread dick hoogendijk
On Sun, 19 Jul 2009 01:48:40 PDT
Ross no-re...@opensolaris.org wrote:

 As far as I can see, the ZFS Administrator Guide is sorely lacking in
 any warning that you are risking data loss if you run on consumer
 grade hardware.

And yet, ZFS is not only for NON-consumer grade hardware, is it? The
fact that many, many people run normal consumer hardware does not
rule them out from ZFS, does it? The best filesystem ever, the end of
all other filesystems, would be nothing more than a dream if that were
true. Furthermore, much so-called consumer hardware is very good these
days. My guess is that ZFS should work quite reliably on that hardware
(i.e. non-ECC memory should work fine!), but mirroring is a -must-!

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2010.02 B118
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread dick hoogendijk
On Sat, 4 Jul 2009 13:03:52 -0500 (CDT)
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Sat, 4 Jul 2009, Joerg Schilling wrote:

  Did you try to use highly performant software like star?
 
 No, because I don't want to tarnish your software's stellar 
 reputation.  I am focusing on Solaris 10 bugs today.

Blunt.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread dick hoogendijk
On Wed, 24 Jun 2009 03:14:52 PDT
Ben no-re...@opensolaris.org wrote:

 If I detach c5d1s0, add a 1TB drive, attach that, wait for it to
 resilver, then detach c5d0s0 and add another 1TB drive and attach
 that to the zpool, will that up the storage of the pool?

That will do the trick perfectly. I just did the same last week ;-)
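[Editor's sketch of the detach/attach sequence described above, using the slice names from this thread plus hypothetical names for the new drives. On builds that have the autoexpand pool property you may need to enable it; on older builds the extra capacity appears once both sides have been replaced and the pool is re-imported.]

```sh
zpool detach tank c5d1s0       # drop one half of the mirror
zpool attach tank c5d0s0 c6d0  # attach the first new 1TB drive
zpool status tank              # wait here until the resilver completes

zpool detach tank c5d0s0       # drop the old half
zpool attach tank c6d0 c7d0    # attach the second new 1TB drive
zpool status tank              # again, wait for the resilver

# On newer builds only (hedged: property added later):
# zpool set autoexpand=on tank
```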

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Turn off the time slider on some zpools

2009-06-22 Thread dick hoogendijk
On Mon, 22 Jun 2009 09:38:18 -0500
Harry Putnam rea...@newsguy.com wrote:

 How to turn off the timeslider snapshots on certain file systems?

http://wikis.sun.com/display/OpenSolarisInfo/How+to+Manage+the+Automatic+ZFS+Snapshot+Service
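[Editor's sketch: the page linked above boils down to a ZFS user property that the auto-snapshot service honours per dataset. The dataset name below is hypothetical.]

```sh
# Disable automatic (time-slider) snapshots for one filesystem;
# children inherit this unless they override it
zfs set com.sun:auto-snapshot=false rpool/export/scratch

# Or disable just one schedule, e.g. the frequent one
zfs set com.sun:auto-snapshot:frequent=false rpool/export/scratch
```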

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] recovering fs's

2009-06-22 Thread dick hoogendijk
On Mon, 22 Jun 2009 21:42:23 +0100
Matt Harrison iwasinnamuk...@genestate.com wrote:

 She's now desperate to get it back as she's realised there some 
 important work stuff hidden away in there.

Without snapshots you're lost.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] how to destroy a pool by id?

2009-06-21 Thread dick hoogendijk
On Sun, 21 Jun 2009 13:56:52 +1000 (EST)
Andre van Eyssen an...@purplecow.org wrote:

 On Sat, 20 Jun 2009, Cindy Swearingen wrote:
 
  I wish we had a zpool destroy option like this:
 
  # zpool destroy -really_dead tank2
 
 Cindy,
 
 The moment we implemented such a thing, there would be a rash of
 requests saying:
 
 a) I just destroyed my pool with -really_dead - how can I get my data 
 back??!
 b) I was able to recover my data from -really_dead - can we have 
 -ultra-nuke please?

Following your logic, there shouldn't be an rm -f * option either.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread dick hoogendijk
On Sun, 21 Jun 2009 06:35:50 PDT
Simon Breden no-re...@opensolaris.org wrote:

 If anyone can throw some light on these topics, I would be pleased to
 hear from you. Thanks a lot.

I follow this thread with much interest.
Curious to see what'll come out of it.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread dick hoogendijk
On Sun, 21 Jun 2009 14:07:49 PDT
roland no-re...@opensolaris.org wrote:

 just a side-question:
 
 I folthis thread with much interest.
 
 what are these * for ?
 
 why is followed turned into fol* on this board?

The text of my original message was:

On Sun, 21 Jun 2009 06:35:50 PDT
Simon Breden no-re...@opensolaris.org wrote:

 If anyone can throw some light on these topics, I would be pleased to
 hear from you. Thanks a lot.

I follow this thread with much interest.
Curious to see what'll come out of it.

Does the change occur again?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Mobo SATA migration to AOC-SAT2-MV8 SATA card

2009-06-20 Thread dick hoogendijk
On Fri, 19 Jun 2009 16:42:43 -0700
Jeff Bonwick jeff.bonw...@sun.com wrote:

 Yep, right again.

That is, if the boot drives are not one of those.. ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread dick hoogendijk
On Mon, 15 Jun 2009 22:51:12 +0200
Thommy M. thommy.m.malmst...@gmail.com wrote:

 IIRC there was a blog about I/O performance with ZFS stating that it
 was faster with compression ON as it didn't have to wait for so much
 data from the disks and that the CPU was fast at unpacking data. But
 sure, it uses more CPU (and probably memory).

IF at all, it certainly should not be the DEFAULT.
Compression is a choice, nothing more.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] ata - sata question

2009-06-12 Thread dick hoogendijk

casper@sun.com wrote:

I boot my OpenSolaris 2009.06 system off ONE ata drive.
I want to change that to a mirrored boot from two SATA drives.

Is it possible to FIRST make a mirror of the existing ata drive
PLUS one new sata drive and after resilvering, remove the ata drive and
replace it with another (second) SATA one?

 Yes, that's what I did.  Make sure that the sata drive is at least as big
 as the ata drive; make sure you make the appropriate Solaris FDISK
 partition and don't use an EFI label (can't boot those).

I'm never very confident with Solaris format. I love the creation of
an EFI-labeled ZFS disk, but alas, that's not possible here because it
has to boot. So, what steps exactly do I take with format: how do I see
the exact size of the new disk? Do I slice it up (s1, s2) or do I only
need an s0 slice? Please give some advice on this; it's not every day I
put in new disks.
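[Editor's sketch of the usual x86 root-mirror preparation implied by Casper's advice: SMI label, a Solaris FDISK partition, and everything in slice 0. Device names are hypothetical; adjust to what format(1M) shows on your system.]

```sh
# Give the new disk a Solaris FDISK partition spanning the whole disk
fdisk -B /dev/rdsk/c1t0d0p0

# In format(1M): select the disk, label it SMI (not EFI),
# and put all cylinders into slice 0
format c1t0d0

# Attach the slice to the root pool and wait for the resilver
zpool attach rpool c0d0s0 c1t0d0s0
zpool status rpool

# Make the new disk bootable (x86/GRUB)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
```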

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u7 05/09 ZFS+



[zfs-discuss] ata - sata question

2009-06-10 Thread dick hoogendijk
I boot my OpenSolaris 2009.06 system off ONE ata drive.
I want to change that to a mirrored boot from two SATA drives.

Is it possible to FIRST make a mirror of the existing ata drive
PLUS one new sata drive and after resilvering, remove the ata drive and
replace it with another (second) SATA one?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] ata - sata question

2009-06-10 Thread dick hoogendijk
On Wed, 10 Jun 2009 14:52:37 +0200
casper@sun.com wrote:
 make sure you make the appropriate Solaris FDISK partition and
 don't use an EFI label (can't boot those).

Thank you Casper (and James too). This EFI label is a nice reminder.
Installing grub is second nature ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] zfs send/rcv between nevada and solaris10

2009-06-07 Thread dick hoogendijk
Can I zfs send a filesystem from Solaris 10 into a file.zfs on a
machine running OpenSolaris 2009.06, store it there, and later zfs
receive it back TO the Solaris 10 machine from this file.zfs (kept on
the OSOL machine)? I ask because I'm not sure if the zpool versions
will get in the way: S10 has v10 and OpenSolaris is on v14.
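[Editor's sketch: while the stream sits on the OSOL box it is just a file, so that machine's pool version is irrelevant; what matters is that the stream is eventually received on a ZFS version at least as new as the one that produced it, so an S10-to-S10 round trip through storage elsewhere should be fine. Dataset and path names below are hypothetical.]

```sh
# On the S10 box: snapshot and send the stream to a file on the OSOL box
zfs snapshot -r tank/data@backup
zfs send -R tank/data@backup > /net/osol/store/data.zfs

# Later, restore on the S10 box (same or newer zfs version than sender)
zfs receive -Fd tank < /net/osol/store/data.zfs
```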

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] eon or nexentacore or opensolaris

2009-05-24 Thread dick hoogendijk
On Sun, 24 May 2009 18:24:35 +0900
Bogdan M. Maryniuk bogdan.maryn...@gmail.com wrote:

 Anyone add StormOS to the distrowatch, please?

If so, then you may add OSUNIX too. It seems like a nice new project,
now that BeleniX has joined the community.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / opensolaris sharing the same ROOT pool
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Zfs and b114 version

2009-05-18 Thread dick hoogendijk

Sean Sprague wrote:
 There appears to be a minor glitch in /etc/driver_aliases where a
 spurious line for qlc has appeared in /etc/driver_aliases, but I have it
 installed and running.

What's a spurious line (I'm Dutch), and how did you solve it?
By deleting the (qlc) line?

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u7 05/09 ZFS+



[zfs-discuss] zfs send/receive

2009-05-16 Thread dick hoogendijk
If I wanted to back up a server with non-global zones (all on ZFS
filesystems) with zfs send, I guess I don't have to halt the zones
first, because I create snapshots to send from. Is that right?
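[Editor's sketch: ZFS snapshots are point-in-time, and a recursive snapshot is taken atomically across the datasets it covers, so the zones can keep running while you back up. Dataset and host names below are hypothetical.]

```sh
# Snapshot all zone datasets atomically, zones still running
zfs snapshot -r rpool/zones@backup

# Stream the whole tree, with properties, to a backup host
zfs send -R rpool/zones@backup | ssh backuphost 'cat > /store/zones.zfs'
```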

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / opensolaris sharing the same ROOT pool
+ All that's really worth doing is what we do for others (Lewis Carrol)

