Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread James Carlson
Frank Batschulat (Home) wrote:
> This just can't be an accident, there must be some coincidence and thus
> there's a good chance that these CHKSUM errors must have a common
> source, either in ZFS or in NFS?

One possible cause would be a lack of substantial exercise.  The man
page says:

     A regular file. The use of files as a backing store is
     strongly discouraged. It is designed primarily for
     experimental purposes, as the fault tolerance of a file
     is only as good as the file system of which it is a
     part. A file must be specified by a full path.

Could it be that "discouraged" and "experimental" mean "not tested as
thoroughly as you might like, and certainly not a good idea in any sort
of production environment"?
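For reference, a file-backed vdev of the sort the man page warns about is set up like this (an illustrative sketch only; the file name is made up, and per the man page above this is for experimentation, not production):

```shell
# Create a 100 MB backing file and build a throwaway pool on it.
# The pool is only as reliable as the file system holding the file.
mkfile 100m /var/tmp/zfile
zpool create testpool /var/tmp/zfile

# Tear it down when done experimenting.
zpool destroy testpool
rm /var/tmp/zfile
```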

It sounds like a bug, sure, but the fix might be to remove the option.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread James Carlson
Mike Gerdts wrote:
> This unsupported feature is supported with the use of Sun Ops Center
> 2.5 when a zone is put on a NAS Storage Library.

Ah, ok.  I didn't know that.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com


Re: [zfs-discuss] [install-discuss] Will OpenSolaris and Nevada co-exist in peace on the same root zpool

2008-09-05 Thread James Carlson
Johan Hartzenberg writes:
> I am guessing the answer is YMMV depending on the differences in versions
> of, for example, Firefox, Gnome, Thunderbird, etc., and based on how well
> these cope with settings that were changed by another, potentially newer,
> version of itself.

The answers to your questions are basically all "no."  The new
installer wants a primary partition or a whole disk.

However, there are helpful blogs from folks who've made the
transition.  Poor Ed seems to have a broken 'shift' key, but he gives
great details here:

  http://blogs.sun.com/edp/entry/moving_from_nevada_and_live

-- 
James Carlson, Solaris Networking  [EMAIL PROTECTED]
Sun Microsystems / 35 Network Drive71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677


Re: [zfs-discuss] [osol-code] /usr/bin and /usr/xpg4/bin differences

2007-12-18 Thread James Carlson
Sasidhar Kasturi writes:
> Is it that the /usr/bin binaries are more advanced than the /xpg4
> ones, or are the /xpg4 ones extensions of them?

No.  They're just different.

> If I want to make some modifications in the code, can I do it for the
> /usr/xpg4/bin commands, or should I do it for the /usr/bin commands?

There's no simple answer to that question.

If your modifications affect things that are covered by the relevant
standards, and if your modifications are not in compliance with those
standards, then you should not be changing the /usr/xpg4/bin or
/usr/xpg6/bin versions of the utility.

If your modifications affect compatibility with historic Solaris or
SunOS behavior, then you'll need to look closely at how your changes
fit into the existing /usr/bin utility.

In general, I think we'd like to see new features added to both where
possible and where conflicts are not present.  But each proposal is
different.

I'd suggest doing one (or maybe more) of the following:

  - putting together a proposal for a change, getting a sponsor
    through the usual process, and then bringing the issues up in an
    ARC review.

  - finding an expert in the area you're planning to change to help
    give you some advice.

  - getting a copy of the standards documents (most are online these
    days; see www.opengroup.org) and figuring out which issues apply
    in your case.

-- 
James Carlson, Solaris Networking  [EMAIL PROTECTED]
Sun Microsystems / 35 Network Drive71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677


Re: [zfs-discuss] hard-hang on snapshot rename

2007-01-09 Thread James Carlson
[EMAIL PROTECTED] writes:
>> Understood. Two things: does the rename loop hit any of the fs in
>> question, and does putting a "sort -r |" before the while make any
>> difference?
>
> The reason I ask is because I had a similar issue running through batch
> renames (from epoch -> human) of my snapshots.  It seemed to cause a
> system lock unless I did the batch depth-first (sort -r).

Well, it still hung, but the test itself revealed an interesting clue
to the problem.

In the previous trials, the last two file systems were left unchanged
(their snapshots were not renamed) after rebooting.  In the "sort -r"
trial, only the last one was changed.  This means that the penultimate
entry in the list is 'magical' somehow.

It's not shared via NFS, nor used for Zones, nor does it have
compression enabled.  The only thing special about that one file
system is that it has over 100K tiny files on it.

I'll do some more digging as time (and other users) permit.

-- 
James Carlson, Solaris Networking  [EMAIL PROTECTED]
Sun Microsystems / 1 Network Drive 71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677


[zfs-discuss] hard-hang on snapshot rename

2007-01-08 Thread James Carlson
[Initial version of this message originally sent to zfs-interest by
mistake.  Sorry if this appears anywhere as a duplicate.]

I was noodling around with creating a backup script for my home
system, and I ran into a problem that I'm having a little trouble
diagnosing.  Has anyone seen anything like this or have any debug
advice?

I did a "zfs create -r" to set a snapshot on all of the members of a
given pool.  Later, for reasons that are probably obscure, I wanted to
rename that snapshot.  There's no "zfs rename -r" function, so I tried
to write a crude one on my own:

    zfs list -rHo name -t filesystem pool |
    while read name; do
        zfs rename "$name@foo" "$name@bar"
    done

The results were disappointing.  The system was extremely busy for a
moment and then went completely catatonic.  Most network traffic
appeared to stop, though I _think_ network driver interrupts were
still working.  The keyboard and mouse (traditional PS/2 types; not
USB) went dead -- not even keyboard lights were working (nothing from
Caps Lock).  The disk light stopped flashing and went dark.  The CPU
temperature started to climb (as measured by an external sensor).  No
messages were written to /var/adm/messages or dmesg on reboot.

The system turned into an increasingly warm brick.  As all of my
inputs to the system were gone, I really had no good way immediately
available to debug the problem.  Thinking this was just a fluke or
perhaps something induced by hardware, I shut everything down, cooled
off, and tried again.  Three times.  The same thing happened each
time.

System details:

  - snv_55

  - Tyan 2885 motherboard with 4GB RAM (four 1GB modules) and one
Opteron 246 (model 5 step 8).

  - AMI BIOS version 080010, dated 06/14/2005.  No tweaks applied,
system is always on; no power management.

  - Silicon Image 3114 SATA controller configured for legacy (not
RAID) mode.

  - Three SATA disks in the system, no IDE as they've gone to the
great bit-bucket in the sky.  The SATA drives are one WDC
WD740GD-32F (not part of this ZFS pool), and a pair of
ST3250623NS.

  - The two Seagate drives are partitioned like this:

  0       root  wm       3 -   655      5.00GB   (653/0/0)     10490445
  1       swap  wm     656 -   916      2.00GB   (261/0/0)      4192965
  2     backup  wu       0 - 30397    232.86GB   (30398/0/0)  488343870
  3   reserved  wm     917 -   917      7.84MB   (1/0/0)          16065
  4 unassigned  wu       0              0        (0/0/0)              0
  5 unassigned  wu       0              0        (0/0/0)              0
  6 unassigned  wu       0              0        (0/0/0)              0
  7       home  wm     918 - 30397    225.83GB   (29480/0/0)  473596200
  8       boot  wu       0 -     0      7.84MB   (1/0/0)          16065
  9 alternates  wm       1 -     2     15.69MB   (2/0/0)          32130

  - For both disks: slice 0 is for an SVM mirrored root, slice 1 has
swap, slice 3 has the SVM metadata, and slice 7 is in the ZFS pool
named pool as a mirror.  No, I'm not using whole-disk or EFI.
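(For reference, a mirrored pool on those two slices would have been created with something like the following; this is an illustrative reconstruction, not the original command.)

```shell
# Mirror the s7 slices of the two Seagate drives into a pool
# named "pool"; ZFS shares each disk with SVM on the other slices.
zpool create pool mirror c4d0s7 c4d1s7
```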

  - Zpool status:

  pool: pool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
poolONLINE   0 0 0
  mirrorONLINE   0 0 0
c4d0s7  ONLINE   0 0 0
c4d1s7  ONLINE   0 0 0

  - 'zfs list -rt filesystem pool | wc -l' says 37.

  - 'iostat -E' doesn't show any errors of any kind on the drives.

  - I read through CR 6421427, but that seems to be SPARC-only.

Next step will probably be to set the 'snooping' flag and maybe hack
the bge driver to do an abort_sequence_enter() call on a magic packet
so that I can wrest control back.  Before I do something that drastic,
does anyone else have ideas?

-- 
James Carlson, Solaris Networking  [EMAIL PROTECTED]
Sun Microsystems / 1 Network Drive 71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677


Re: [zfs-discuss] hard-hang on snapshot rename

2007-01-08 Thread James Carlson
[EMAIL PROTECTED] writes:
>> I was noodling around with creating a backup script for my home
>> system, and I ran into a problem that I'm having a little trouble
>> diagnosing.  Has anyone seen anything like this or have any debug
>> advice?
>>
>> I did a "zfs create -r" to set a snapshot on all of the members of a
>> given pool.  Later, for reasons that are probably obscure, I wanted to
>> rename that snapshot.  There's no "zfs rename -r" function, so I tried
>> to write a crude one on my own:
>
> do you mean "zfs snapshot -r fsname@foo" instead of the create?

Yes; sorry.  A bit of a typo there.

> hmm, just to verify sanity, can you show the output of:
>
>     zfs list -rHo name -t filesystem pool
>
> and
>
>     zfs list -rHo name -t filesystem pool |
>     while read name; do
>         echo zfs rename "$name@foo" "$name@bar"
>     done
>
> (note the echo inserted above)

Sure, but it's not a shell problem.  I should have mentioned that when
I brought the system back up, *most* of the renames had actually taken
place, but not *all* of them.  I ended up with mostly @bar snapshots, but
with a handful of stragglers near the end of the list still at @foo.

The output looks a bit like this (not _all_ file systems shown, but
representative ones):

pool
pool/HTSData
pool/apache
pool/client
pool/csw
pool/home
pool/home/benjamin
pool/home/beth
pool/home/carlsonj
pool/home/ftp
pool/laptop
pool/local
pool/music
pool/photo
pool/sys
pool/sys/core
pool/sys/dhcp
pool/sys/mail
pool/sys/named

And then:

zfs rename pool@foo pool@bar
zfs rename pool/HTSData@foo pool/HTSData@bar
zfs rename pool/apache@foo pool/apache@bar
zfs rename pool/client@foo pool/client@bar
zfs rename pool/csw@foo pool/csw@bar
zfs rename pool/home@foo pool/home@bar
zfs rename pool/home/benjamin@foo pool/home/benjamin@bar
zfs rename pool/home/beth@foo pool/home/beth@bar
zfs rename pool/home/carlsonj@foo pool/home/carlsonj@bar
zfs rename pool/home/ftp@foo pool/home/ftp@bar
zfs rename pool/laptop@foo pool/laptop@bar
zfs rename pool/local@foo pool/local@bar
zfs rename pool/music@foo pool/music@bar
zfs rename pool/photo@foo pool/photo@bar
zfs rename pool/sys@foo pool/sys@bar
zfs rename pool/sys/core@foo pool/sys/core@bar
zfs rename pool/sys/dhcp@foo pool/sys/dhcp@bar
zfs rename pool/sys/mail@foo pool/sys/mail@bar
zfs rename pool/sys/named@foo pool/sys/named@bar

It's not a matter of the shell script not working; it's a matter of
something inside the kernel (perhaps not even ZFS but instead a driver
related to SATA?) experiencing vapor-lock.

Other heavy load on the system, though, doesn't cause this to happen.
This one operation does cause the lock-up.

-- 
James Carlson, Solaris Networking  [EMAIL PROTECTED]
Sun Microsystems / 1 Network Drive 71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677


Re: [zfs-discuss] hard-hang on snapshot rename

2007-01-08 Thread James Carlson
[EMAIL PROTECTED] writes:
>> Other heavy load on the system, though, doesn't cause this to happen.
>> This one operation does cause the lock-up.
>
> Understood. Two things: does the rename loop hit any of the fs in
> question,

No; the loop you saw is essentially what I ran.  (Other than that it
was level0new and level0 instead of foo and bar.)

Thinking it was some locking issue, I did try saving off the list in a
file (on tmpfs), and then running it through the while loop -- that
produced the same result.

> and does putting a "sort -r |" before the while make any
> difference?

I'll give it a try tonight and see.  It's a production system, so I
have to wait until all of the users are asleep or otherwise occupied
by Two And A Half Men reruns to try something hazardous like that.
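(For reference, the depth-first variant being suggested can be dry-run safely by echoing the commands instead of executing them. The dataset names below are stand-ins for the real `zfs list` output, and foo/bar are the placeholder snapshot names from the earlier loop.)

```shell
# Dry-run of the depth-first rename: pipe the dataset list through
# `sort -r` so child file systems sort before their parents, then
# echo each rename instead of running it.
printf '%s\n' pool pool/home pool/home/ftp pool/sys |
sort -r |
while read -r name; do
    echo zfs rename "$name@foo" "$name@bar"
done
```

Because "pool/anything" sorts after "pool", the reverse sort guarantees every child is handled before its parent; the first command emitted above is for pool/sys and the last for pool itself.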

-- 
James Carlson, Solaris Networking  [EMAIL PROTECTED]
Sun Microsystems / 1 Network Drive 71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-20 Thread James Carlson
Pawel Jakub Dawidek writes:
>> The goal is the same as the goal for things like compression in ZFS: no
>> application change; it is free for the applications.
>
> I like the idea, I really do, but it will be so expensive because of
> ZFS' COW model. Not only file removal or truncation will call bleaching,
> but every single file system modification... Heh, well, if privacy of
> your data is important enough, you probably don't care too much about
> performance. I for one would prefer encryption, which may turn out to be
> much faster than bleaching and also more secure.

I think the idea here is that since ZFS encourages the use of lots of
small file systems (rather than one or two very big ones), you'd have
just one or two very small file systems with crucial data marked as
needing bleach, while the others would get by with the usual
complement of detergent and switch fabric softener.

Having _every_ file modification result in dozens of I/Os would
probably be bad, but perhaps not if it's not _every_ modification
that's affected.
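As a sketch of how that might look administratively (note: "bleach" here is a hypothetical per-dataset property invented for illustration, not an existing ZFS feature):

```shell
# Hypothetical: enable bleaching only on the one small file system
# holding crucial data; every other dataset keeps normal performance.
zfs set bleach=on pool/secrets    # "bleach" is not a real ZFS property
zfs get bleach pool/secrets       # hypothetical
```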

-- 
James Carlson, KISS Network[EMAIL PROTECTED]
Sun Microsystems / 1 Network Drive 71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677