Re: [zfs-discuss] One dataset per user?

2010-07-28 Thread Juergen Nickelsen
Edward Ned Harvey solar...@nedharvey.com writes:

 There are legitimate specific reasons to use separate filesystems
 in some circumstances. But if you can't name one reason why it's
 better ... then it's not better for you.

Having a separate file system per user lets you set user-specific
quotas and reservations, lets you allow users to take their own
snapshots, and lets you do zfs send/recv replication of individual
home directories (for backup, or for moving them to another pool) --
and even lets you delegate that to the users themselves.
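
For example (dataset and user names made up, and the exact delegation
syntax may differ between releases -- see zfs(1M)):

# zfs create tank/home/alice
# zfs set quota=20G tank/home/alice
# zfs set reservation=2G tank/home/alice
# zfs allow alice snapshot,mount,send tank/home/alice

With that, alice can snapshot and zfs send her home directory herself,
while the quota and reservation apply to her dataset alone.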

-- 
Usenet is not a right. It is a right, a left, a jab, and a sharp
uppercut to the jaw. The postman hits! You have new mail.


Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Juergen Nickelsen
Richard Elling rich...@nexenta.com writes:

 And some time before that, I had suggested ZFS to a buddy of mine for
 his new home storage server, but he turned it down since there is no
 expansion available for a pool.

 Heck, let him buy a NetApp :-)

Definitely a possibility, given the availability and pricing of
oldish NetApp hardware on eBay. Although for home use, it is easier
to put together something adequately power-saving and silent with
OpenSolaris and PC hardware than with NetApp gear.

-- 
I wasn't so desperate yet that I actually looked into documentation.
 -- Juergen Nickelsen


Re: [zfs-discuss] zfs send/recv reliability

2010-05-28 Thread Juergen Nickelsen
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:

 On Fri, 28 May 2010, Gregory J. Benscoter wrote:
 
 I'm primarily concerned with the possibility of a bit flip. If this
 occurs, will the stream be lost? Or will only the file in which the
 bit flip occurred be degraded? Lastly, how does the reliability of
 this plan compare to more traditional backup tools like tar, cpio,
 etc.?

 The whole stream will be rejected if a single bit is flipped.  Tar and
 cpio will happily barge on through the error.

That is one of the reasons why we at work do send/recv only into
live ZFS file systems -- any error would become apparent
immediately. Not that we have seen that happen yet, and I alone have
been doing hourly sends/recvs for years, with a growing number of
ZFS file systems -- by now over a hundred.
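
For illustration, the pattern is roughly this (host and dataset names
made up); since the stream goes straight into zfs receive, any damage
shows up at replication time rather than at restore time:

sourcehost# zfs send -i tank/data@prev tank/data@now | \
ssh backuphost zfs receive backup/data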

-- 
Mr. Rowohlt, you once wrote that Swabian makes your scrotum contract.
Is that the case at this very moment? [the question itself in Swabian]
 -- Verena Schmidt


Re: [zfs-discuss] ZFS Mount Errors

2010-02-16 Thread Juergen Nickelsen
Tony MacDoodle tpsdoo...@gmail.com writes:

 Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not
 empty
 (6/6)
 svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
 failed: exit status 1

 And yes, there is data in the /data/apache file system...

I think it is complaining about entries in the *mountpoint
directory*. See this:

# mkdir /gaga
# zfs create -o mountpoint=/gaga rpool/gaga
# zfs umount rpool/gaga
# touch /gaga/boo
# zfs mount rpool/gaga
cannot mount '/gaga': directory is not empty
# rm /gaga/boo
# zfs mount rpool/gaga
# 

Regards, Juergen.


Re: [zfs-discuss] (Practical) limit on the number of snapshots?

2010-01-07 Thread Juergen Nickelsen
Lutz Schumann presa...@storageconcepts.de writes:

 When importing a pool with many snapshots (which happens during
 reboot also) the import may take a long time (example: 1
 snapshots ~ 1-2 days).

 I've not tested the new release of Solaris (snv_125++) which fixes
 this issue. So a test with osol 125++ would be nice :)

That is indeed significant. I do not know which software version our
customers' storage platform runs, but that is something to look out
for.

Thanks to you and the others for the answers!

-- 
Hello, IT... Have you tried turning it off and on again?
-- The IT Crowd


[zfs-discuss] (Practical) limit on the number of snapshots?

2010-01-05 Thread Juergen Nickelsen
Is there any limit on the number of snapshots in a file system?

The documentation -- manual page, admin guide, troubleshooting guide
-- does not mention any. That seems to confirm my assumption that
there is probably no fixed limit, but there may still be a practical
one, just as there is no limit on the number of file systems in a
pool, yet nobody would find having a million file systems practical.

I have tried creating snapshots in a file system for a few hours. An
otherwise unloaded X4250 with a nearly empty RAID-Z2 pool of six
built-in disks (146 GB, 10K rpm) managed to create a few snapshots
per second in an empty file system.

It had not visibly slowed down when it reached 36051 snapshots after
a few hours and I stopped it; to my surprise, destroying the file
system (with all these snapshots in it) took about as long. With
``iostat -xn 1'' I could see that the disk usage was still low, at
about 13% IIRC.
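
(Something along these lines reproduces the test -- not the exact
script I used, and the pool and file system names are made up:)

# i=0
# while zfs snapshot testpool/empty@bench-$i; do i=$((i+1)); done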

So 36000 snapshots in an empty file system are not a problem. Is it
different with a file system that is, say, 70% full? Or on a bigger
pool? Or with a significantly larger number of snapshots, say, a
million? I am asking for real experience here, not for theory.

Regards, Juergen.


Re: [zfs-discuss] preview of new SSD based on SandForce controller

2010-01-03 Thread Juergen Nickelsen
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) writes:

 The NetApp patents contain claims on ideas that I invented for my
 Diploma thesis work between 1989 and 1991, so the NetApp patents only
 describe prior art. The new ideas introduced with wofs include the
 ideas on how to use COW for filesystems and on how to find the most
 recent superblock on a COW filesystem. The ideas for the latter
 method were developed while discussing the wofs structure with
 Carsten Bormann at TU Berlin.

Would you perhaps be willing to share the text? It sounds quite
interesting, especially to compare it with ZFS and with NetApp's
introduction to WAFL that I read a while ago.

(And I know that discussions with Carsten Bormann can lead to
remarkable results -- not that I would want to disregard your own
part in these ideas. :-)

Regards, Juergen.

-- 
Many computer scientists are offenders; they have, for example,
committed sexual offences or are responsible for violent crimes in
their own families. In my opinion, their brains have gone out of
control amid computer technology.   -- Karl Notter


Re: [zfs-discuss] Remove the exported zpool

2009-07-06 Thread Juergen Nickelsen
Ketan no-re...@opensolaris.org writes:

 I had a pool which was exported and, due to some issues on my SAN, I
 was never able to import it again. Can anyone tell me how I can
 destroy the exported pool to free up the LUN?

I did that once; I *think* that was with the -f option to zpool
destroy.
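
If the pool can be force-imported at all, something along these lines
should do it (pool name made up; check the zpool man page before
forcing anything):

# zpool import -f mypool
# zpool destroy -f mypool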

Regards, Juergen.


Re: [zfs-discuss] Disappearing snapshots

2009-07-06 Thread Juergen Nickelsen
DL Consulting no-re...@opensolaris.org writes:

 It takes daily snapshots and sends them to another machine as a
 backup. The sending and receiving is scripted and run from a
 cronjob. The problem is that some of the snapshots disappear from
 monster after they've been sent to the backup machine.

Do not use the snapshots made for the time slider feature. They are
under the control of the auto-snapshot service, which manages them
for exactly that purpose -- the time slider -- and nothing else.

Snapshots are cheap; create your own for file system replication. 
As you always need to keep the last common snapshot on both source
and target of the replication, you want to have snapshot creation
and deletion under your own control and not under the control of a
service that is made for something else.

For my own file system replication I have written a script that
looks at the snapshots on the target side, locates the most recent
one of those, and then does an incremental replication of a newly
created snapshot relative to that last common one. The previous
common snapshot is destroyed once the replication has succeeded, so
the new snapshot becomes the last common one.
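
A stripped-down sketch of the idea (all names made up, and the real
script has considerably more error handling):

#!/bin/ksh
SRC=tank/home                    # file system on this host
DST=backup/home                  # file system on the target
TARGET=backuphost

# newest snapshot already on the target = last common snapshot
LAST=$(ssh $TARGET zfs list -H -o name -t snapshot -s creation -r $DST \
       | tail -1 | sed 's/.*@//')
NOW=repl-$(date +%Y%m%d%H%M%S)

zfs snapshot $SRC@$NOW
zfs send -i $SRC@$LAST $SRC@$NOW | ssh $TARGET zfs receive $DST &&
    zfs destroy $SRC@$LAST    # the new snapshot is now the last common one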

Once your replication gets out of sync such that the last snapshot
on the target is not the common one, you must delete snapshots on
the target until the common one is the last one again. If there is
no common snapshot any more, you have to start the replication from
scratch: delete (or rename) the file system on the target and do a
non-incremental send of a source snapshot to the target.
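
In commands (names made up): if data/myfs@common is the last snapshot
both sides still share, but the target has diverged beyond it, then

target# zfs rollback -r data/myfs@common

brings the target back so that the next incremental receive succeeds.
Alternatively, adding -F to the zfs receive on the target makes it
roll back to its most recent snapshot on its own before receiving.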

Regards, Juergen.


Re: [zfs-discuss] Virutal zfs server vs hardware zfs server

2009-03-02 Thread Juergen Nickelsen
Juergen Nickelsen n...@jnickelsen.de writes:

 Solaris Bundled Driver: * vgatext/ ** radeon
 Video
 ATI Technologies Inc
 R360 NJ [Radeon 9800 XT]

 I *think* this is the same driver used with my work laptop (which I
 don't have at hand to check, unfortunately), also with ATI graphics
 hardware.

Confirmed.
Regards, Juergen.

-- 
What you won was the obligation to pay more for something than
anybody else thought it was worth.
   -- Delainey and Rasmussen's Betty about eBay


Re: [zfs-discuss] Virutal zfs server vs hardware zfs server

2009-03-01 Thread Juergen Nickelsen
Harry Putnam rea...@newsguy.com writes:

www.jtan.com/~reader/SDDToolReport-chub-OpenSolaris.html

I see the following there:

Solaris Bundled Driver: * vgatext/ ** radeon
Video
ATI Technologies Inc
R360 NJ [Radeon 9800 XT]

I *think* this is the same driver used with my work laptop (which I
don't have at hand to check, unfortunately), also with ATI graphics
hardware.


As far as I know, the situation with ATI is that, while ATI supplies
well-performing binary drivers for MS Windows (of course) and Linux,
there is no such thing for other OSes. So OpenSolaris has to use the
standardized interfaces of the graphics hardware, which offer
comparatively low bandwidth.

This leads to very unimpressive graphics performance, up to the
point that the machine nearly freezes when large images are loaded
into the graphics adapter.


Most of my work is text-oriented (lots of XTerms and one XEmacs,
mostly) with some web browsing and the occasional GUI tool thrown
in, and this works mostly fine on the system. Even the occasional
picture processing with Gimp is okay, though not fast. (And I do not
mean merely not blindingly fast, but really not fast.)

But there are things that really are a pain, e.g. web pages that
constantly cross-fade one picture into another, for instance
http://www.strato.de/ . Normally you would not even notice such an
effect, but this page makes my laptop so slow that it takes
significant effort just to find and press the button to close the
window.

Still, I find that bearable given that I have Solaris running on the
machine (as my target platform is Solaris 10) including ZFS
goodness.


On the other hand, I understand that you want to build a server, not
a workstation type machine. Graphics performance should be
irrelevant in this case.

If it is not, you might consider another graphics adapter. To my
knowledge the situation is much better with NVIDIA hardware.

Regards, Juergen.

-- 
Unix gives you just enough rope to hang yourself -- and then a
couple of more feet, just to be sure.          -- Eric Allman


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-21 Thread Juergen Nickelsen
Ian Collins i...@ianshome.com writes:

 I suspect that a 'zfs copy' or somesuch would be a nice utility
 when wanting to shove a parent and all of its snapshots to
 another system.
   
 If that's what you want, do an incremental send (-I).

To be a bit more detailed: first create the file system on the
target machine by sending, in full, the first snapshot that you want
replicated. After that, send each of the following snapshots
incrementally, each based on the previous one.

So if you have this on host a:

tank/myfs@snap1
tank/myfs@snap2
tank/myfs@snap3
tank/myfs@snap4

and a pool named data on host b, do it like this:

a# zfs send tank/myfs@snap1 | ssh b zfs recv -d data
a# zfs send -i tank/myfs@snap1 tank/myfs@snap2 | ssh b zfs recv data/myfs
a# zfs send -i tank/myfs@snap2 tank/myfs@snap3 | ssh b zfs recv data/myfs
a# zfs send -i tank/myfs@snap3 tank/myfs@snap4 | ssh b zfs recv data/myfs

Regards, Juergen.


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-21 Thread Juergen Nickelsen
Juergen Nickelsen n...@jnickelsen.de writes:

 If that's what you want, do an incremental send (-I).

 To be a bit more detailed: first create the file system on the
 target machine by sending, in full, the first snapshot that you want
 replicated. After that, send each of the following snapshots
 incrementally, each based on the previous one.

Sorry, I was confused: I had the -i option in mind. To my knowledge,
-I automatically does what I have outlined here step by step, but it
does not yet exist in the Solaris 10 release we use.
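
(With a release that has -I, everything after the initial full send
collapses into a single command -- same names as in my previous mail:)

a# zfs send -I tank/myfs@snap1 tank/myfs@snap4 | ssh b zfs recv data/myfs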

-- 
It is easy to be blinded to the essential uselessness of computers by
the sense of accomplishment you get from getting them to run at all.
  -- Douglas Adams


Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun /

2008-10-11 Thread Juergen Nickelsen
Timh Bergström [EMAIL PROTECTED] writes:

 Unfortunately I can only agree with the doubts about running ZFS in
 production environments: I've lost ditto blocks, I've gotten
 corrupted pools and a bunch of other failures, even in
 mirror/raidz/raidz2 setups with or without hardware mirrors/raid5/6.
 Plus there is the insecurity that a sudden crash/reboot will corrupt
 or even destroy the pools, with restore from backup as the only
 advice. I've been lucky so far about getting my pools back, thanks
 to people like Victor.

With which release was that? Solaris 10 or OpenSolaris?

Regards, Juergen.


Re: [zfs-discuss] zpool import of bootable root pool renders it

2008-10-03 Thread Juergen Nickelsen
David Finberg [EMAIL PROTECTED] writes:

 JN I had a similar problem: After installing and booting OpenSolaris
 JN 2008.05, I succeeded in locking myself out through some passwd/shadow
 JN inconsistency (totally my own fault). Not a problem, I thought -- I
 JN booted from the install disk, imported the root pool, fixed the
 JN inconsistency, and rebooted. Lo, instant panic.
[...]
 I think this is 6737463

Quite possible. But the machine rebooted so fast that I could barely
see the word panic, let alone anything more specific. :-}

In a development version of a system I am not *that* surprised by a
panic like this; I was rather baffled that the system was apparently
unable to boot from the (only so slightly, I thought) externally
modified root pool.

Regards, Juergen.


Re: [zfs-discuss] Weird ZFS recv / NFS export problem

2008-10-03 Thread Juergen Nickelsen
(Haven't I already written an answer to this? Anyway, I cannot find it.)

Nils Goroll [EMAIL PROTECTED] writes:

 In a snoop I see that, when the access(2) fails, the nfsclient gets
 a Stale NFS file handle response, which gets translated to an
 ENOENT.

 What happens if you use the noac NFS mount option on the client?

No change. (I'll skip your other questions, because:)

In between a colleague of mine has found the apparent root cause of
the problem. The zfs man page reads, under zfs receive:

 If an incremental stream is received, then the destination
 file system must already exist, and its most recent snapshot
 must match the incremental stream's source. The destination
 file system is unmounted and cannot be accessed during the
 receive operation.

I still think there might be an NFS issue involved, as in my
understanding a temporary unmount on the server should not affect
the NFS mount much if even a server reboot does not. But the
exported file system being unmounted in between makes this behaviour
much more plausible and leaves us with little hope that this might
be resolved very soon.

Mounting the file system directly from the primary source is a
feasible workaround, so that problem is not an issue for me any more
at the moment. Of course, thanks for your help anyway!

Regards, Juergen.


Re: [zfs-discuss] Weird ZFS recv / NFS export problem

2008-10-03 Thread Juergen Nickelsen
(I found the saved draft of the answer I thought I had sent; I am
sending it just for completeness's sake.)


Nils Goroll [EMAIL PROTECTED] writes:

 What happens if you use the noac NFS mount option on the client?

That does not seem to change the behaviour. (I have not tried it
with this test setup, but it happens with noac in the real
scenario, too.)

 I'd not recommend to use it for production environments unless you
 really need to, but this looks like a nfs client caching issue.

The real scenario is indeed a production environment, but with
very low traffic, so we thought noac would be an option.

 Is this an nfsv3 or nfsv4 mount? What happens if you use one or the other? 

This is v3; I have not tried v4 yet. We don't have v4 in use for
reasons I don't know (but which I am sure exist and are valid for
our environment).

 Please provide nfsstat -m output.

/mnt/nfsw from repltarget:/pool1/nfsw/.zfs/snapshot
 Flags:
 
vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,acl,rsize=32768,wsize=32768,retrans=5,timeo=600
 Attr cache:acregmin=3,acregmax=60,acdirmin=30,acdirmax=60


As mentioned in the previous post, the problem is no longer an issue
for me. Still, I'd be curious to hear more about it if something
turns up.

Regards, Juergen.


[zfs-discuss] Weird ZFS recv / NFS export problem

2008-10-01 Thread Juergen Nickelsen
Hello all,

in the setup I try to build I want to have snapshots of a file
system replicated from host replsource to host repltarget and
from there NFS-mounted on host nfsclient to access snapshots
directly:

replsource# zfs create pool1/nfsw
replsource# mkdir /pool1/nfsw/lala
replsource# zfs snapshot pool1/nfsw@snap1
replsource# zfs send pool1/nfsw@snap1 | \
ssh repltarget zfs receive -d pool1

  (a pool1 exists on repltarget as well.)

repltarget# zfs set sharenfs=ro=nfsclient pool1/nfsw

nfsclient# mount repltarget:/pool1/nfsw/.zfs/snapshot /mnt/nfsw/

nfsclient# cd /mnt/nfsw/snap1
nfsclient# access ./lala
access(./lala, R_OK | X_OK) == 0

So far, so good. But now I see the following:

  (wait a bit, for instance 3 minutes, then replicate another
   snapshot)

replsource# zfs snapshot pool1/nfsw@snap2
replsource# zfs send -i pool1/nfsw@snap1 pool1/nfsw@snap2 | \
ssh repltarget zfs receive pool1/nfsw

  (the PWD of the shell on nfsclient is still /mnt/nfsw/snap1)

nfsclient# access ./lala
access(./lala, R_OK | X_OK) == -1

  (if you think that is surprising, watch this:)

nfsclient# ls /mnt/nfsw
snap1  snap2
nfsclient# access ./lala
access(./lala, R_OK | X_OK) == 0

The access program does exactly the access(2) call illustrated in
its output.

The weird thing is that a directory can be accessed, then cannot be
accessed after the exported file system on repltarget has been
updated by a zfs recv, and then can be accessed again after an ls of
the mounted directory.

In a snoop I see that, when the access(2) fails, the nfsclient gets
a Stale NFS file handle response, which gets translated to an
ENOENT.

My problem is that the application accessing the contents inside of
the NFS-mounted snapshot cannot find the content any more after the
filesystem on repltarget has been updated. Is this a known problem?
More important, is there a known workaround?

All machines are running SunOS 5.10 Generic_127128-11 i86pc. If some
more information could be helpful, I'll gladly provide it.

Regards, Juergen.


Re: [zfs-discuss] zpool import of bootable root pool renders it

2008-09-30 Thread Juergen Nickelsen
Stephen Quintero [EMAIL PROTECTED] writes:

 I am running OpenSolaris 2008.05 as a PV guest under Xen. If you
 import the bootable root pool of a VM into another Solaris VM, the
 root pool is no longer bootable.

I had a similar problem: After installing and booting OpenSolaris
2008.05, I succeeded in locking myself out through some passwd/shadow
inconsistency (totally my own fault). Not a problem, I thought -- I
booted from the install disk, imported the root pool, fixed the
inconsistency, and rebooted. Lo, instant panic.

No idea why, though; I am not that familiar with the underlying
code. I just did a reinstall.

Regards, Juergen.