Re: [zfs-discuss] Backup complete rpool structure and data to tape

2011-05-11 Thread Glenn Lagasse
* Edward Ned Harvey (opensolarisisdeadlongliveopensola...@nedharvey.com) wrote:
  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Arjun YK
  
  Trying to understand how to backup mirrored zfs boot pool 'rpool' to tape,
  and restore it back in case the disks are lost.
  Backup would be done with an enterprise tool like tsm, legato etc.
 
 Backup/restore of bootable rpool to tape with a 3rd party application like
 legato etc is kind of difficult.  Because if you need to do a bare metal
 restore, how are you going to do it?  The root of the problem is the fact
 that you need an OS with legato in order to restore the OS.  It's a

If you're talking about Solaris 11 Express, you could create your own
liveCD using the Distribution Constructor[1] and include the backup
software on the cd image.  You'll have to customize the Distribution
Constructor to install the backup software (presumably via an SVR4
package[2]), but that's not too difficult.  Once you've created the image,
you're good to go forevermore (unless you need to update the backup
software on the image, in which case, provided you keep your Distribution
Constructor manifests around, it should be a simple edit to point at the
newer backup software package).
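
Roughly, and from memory (the manifest path and package name below are
placeholders, not the real ones), the workflow looks something like:

cp /usr/share/distro_const/slim_cd/slim_cd_x86.xml backup_cd.xml
# edit backup_cd.xml to add your backup software's SVR4 package to the
# manifest's package list, then build the ISO:
pfexec distro_const build backup_cd.xml

The resulting ISO lands wherever the manifest's build area points.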

If you're talking about S10, then that's a tougher nut to crack.

Cheers,

-- 
Glenn

[1] - http://download.oracle.com/docs/cd/E19963-01/html/820-6564/
[2] - http://download.oracle.com/docs/cd/E19963-01/html/820-6564/addpkg.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Glenn Lagasse
* Arjun YK (arju...@gmail.com) wrote:
 Hi,
 
 Let me add another query.
 I would assume it would be perfectly ok to choose any name for root
 pool, instead of 'rpool', during the OS install. Please suggest
 otherwise.

While there is nothing special about the name 'rpool' (so you *could*
change it), none of the installers allow you to specify the name of
your root pool during installation.
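
For what it's worth, renaming an ordinary, non-root pool after the fact
is just an export/import ('tank' here is a placeholder):

zpool export tank
zpool import tank newtank

Doing the same to a root pool is much more involved, since the grub
menu, the bootfs property, and the swap/dump zvol paths all embed the
old name, which is a big part of why it isn't recommended.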

Cheers,

Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a checkpoint?

2010-11-09 Thread Glenn Lagasse
* Peter Taps (ptr...@yahoo.com) wrote:
 Thank you all for your help. Looks like beadm is the utility I was
 looking for.
 
 When I run beadm list, it gives me the complete list and indicates
 which one is currently active. It doesn't tell me which one is the
 default boot. Can I assume that whatever is active is also the
 default?

As outlined in beadm(1M):

beadm  list [-a | -ds] [-H] [beName]

 Lists information about the  existing  boot  environment
 named beName, or lists information for all boot environ-
 ments if beName is not provided. The Active field  indi-
 cates  whether  the  boot  environment  is  active  now,
 represented by N; active on reboot, represented by R; or

 both, represented by NR.
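
So with a hypothetical set of BEs, the one flagged R (or NR) in the
Active column is the default at next boot, e.g.:

$ beadm list
BE              Active  Mountpoint  Space  Policy  Created
opensolaris     -       -           8.41M  static  2010-08-25 20:28
opensolaris-1   -       -           43.5M  static  2010-09-14 11:02
opensolaris-2   NR      /           7.92G  static  2010-11-01 09:15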

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Glenn Lagasse
* Harry Putnam (rea...@newsguy.com) wrote:
 Toby Thain t...@telegraphics.com.au writes:
 
  On 27/10/10 4:21 PM, Krunal Desai wrote:
  I believe he meant a memory stress test, i.e. booting with a
  memtest86+ CD and seeing if it passed. 
 
  Correct. The POST tests are not adequate.
 
 Got it. Thank you.  
 
 Short of doing such a test, I have evidence already that machine will
 predictably shutdown after 15 to 20 minutes of uptime.
 
 It seems there ought to be something, some kind of evidence and clues
 if I only knew how to look for them, in the logs.
 
 Is there not some semi standard kind of keywords to grep for that
 would indicate some clue as to the problem?

If it's a thermal problem, then no, there wouldn't be.  Thermal shutdown
is handled by the BIOS (IIRC), so there isn't any notification to the
host OS.  Certainly not on commodity PC hardware, at any rate.
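
That said, if the box did expose environmental sensors to the OS, these
are the places something would show up (on typical commodity boards
they come up empty):

fmdump -eV | less    # FMA error telemetry, if anything was captured
prtdiag -v           # fan/temperature readings on supported platforms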

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unexpected ZFS space consumption

2010-10-04 Thread Glenn Lagasse
* JR Dalrymple (j...@jrssite.com) wrote:
 I'm pretty new to ZFS and OpenSolaris as a whole. I am an experienced
 storage administrator, however my storage equipment has typically been
 NetApp or EMC branded. I administer NetApp FAS2000 and FAS3000 series
 boxes to host a VMware only virtual infrastructure so I am versed on a
 pretty high level at storage provisioning for a virtual environment.
 
 My problem is unexpected disk usage on deduplicated datasets holding
 little more than VMDKs. I experimented with deduplication on ZFS and
 compared it to deduplication on NetApp and found basically identical
 returns on a mix of backup data and user data. I was pretty excited to
 put some VMDKs of my own on to a system of my own. I have been
 disappointed with the actual results however :(
 
 Upon building VMs on this storage I found the data to consume an as
 expected OS only amount of disk space. As time went on the VMDKs
 filled out to consume their entire allocated disk space. I was hoping
 I could recover the lost physical disk space by using sdelete on the
 guests to zero out unused space on the disks, however this didn't
 happen as per du or df on the storage host. After zeroing unused disk
 space I was really hoping that the VMDKs would only consume the amount
 of disk actually filled by the guest as they did when the VMs were
 fresh. I have properly aligned VMDKs so I don't think that the problem
 lies there.
 
 I'm not sure what information to offer that might be helpful except
 the following (nfs0 is the dataset I'm working with primarily):
 
 jrdal...@yac-stor1:~$ uname -a
 SunOS yac-stor1 5.11 snv_134 i86pc i386 i86pc Solaris

 jrdal...@yac-stor1:~$ zpool list
 NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 rpool   540G   228G   312G    42%  1.26x  ONLINE  -

 jrdal...@yac-stor1:~$ zfs list
 NAME                         USED  AVAIL  REFER  MOUNTPOINT
 rpool                        329G   238G  88.5K  /rpool
 rpool/ROOT                  7.97G   238G    19K  legacy
 rpool/ROOT/opensolaris      8.41M   238G  2.85G  /
 rpool/ROOT/opensolaris-1    43.5M   238G  3.88G  /
 rpool/ROOT/opensolaris-2    7.92G   238G  5.52G  /
 rpool/dump                  2.00G   238G  2.00G  -
 rpool/export                1.03G   238G    23K  /export
 rpool/export/home           1.03G   238G    23K  /export/home
 rpool/export/home/jrdalrym  1.03G   238G  1.03G  /export/home/jrdalrym
 rpool/iscsi                  103G   238G    21K  /rpool/iscsi
 rpool/iscsi/iscsi0           103G   301G  40.5G  -
 rpool/nfs0                   153G  87.3G   153G  /rpool/nfs0
 rpool/nfs1                  49.6G   238G  40.5G  /rpool/nfs1
 rpool/nfs2                  9.99G  50.0G  9.94G  /rpool/nfs2
 rpool/swap                  2.00G   240G   100M  -

 jrdal...@yac-stor1:~$ zfs get all rpool/nfs0
 NAME        PROPERTY         VALUE                  SOURCE
 rpool/nfs0  type             filesystem             -
 rpool/nfs0  creation         Wed Aug 25 20:28 2010  -
 rpool/nfs0  used             153G                   -
 rpool/nfs0  available        87.3G                  -
 rpool/nfs0  referenced       153G                   -
 rpool/nfs0  compressratio    1.00x                  -
 rpool/nfs0  mounted          yes                    -
 rpool/nfs0  quota            240G                   local
 rpool/nfs0  reservation      none                   default
 rpool/nfs0  recordsize       128K                   default
 rpool/nfs0  mountpoint       /rpool/nfs0            default
 rpool/nfs0  sharenfs         ro...@192.168.10.0/24  local
 rpool/nfs0  checksum         on                     default
 rpool/nfs0  compression      off                    default
 rpool/nfs0  atime            on                     default
 rpool/nfs0  devices          on                     default
 rpool/nfs0  exec             on                     default
 rpool/nfs0  setuid           on                     default
 rpool/nfs0  readonly         off                    default
 rpool/nfs0  zoned            off                    default
 rpool/nfs0  snapdir          hidden                 default
 rpool/nfs0  aclmode          groupmask              default
 rpool/nfs0  aclinherit       restricted             default
 rpool/nfs0  canmount         on                     default
 rpool/nfs0  shareiscsi       off                    default
 rpool/nfs0  xattr            on                     default
 rpool/nfs0  copies           1                      default
 rpool/nfs0  version          4                      -
 rpool/nfs0  utf8only         off                    -
 rpool/nfs0  normalization    none                   -
 rpool/nfs0  casesensitivity  sensitive              -
 rpool/nfs0  vscan            off                    default
 rpool/nfs0  nbmand           off                    default
 rpool/nfs0  sharesmb         off                    default
 rpool/nfs0  refquota         none                   default
 rpool/nfs0  refreservation   none                   default
 rpool/nfs0  primarycache     all

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-05 Thread Glenn Lagasse
* Edward Ned Harvey (solar...@nedharvey.com) wrote:
  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Matt Keenan
  
  Just wondering whether mirroring a USB drive with main laptop disk for
  backup purposes is recommended or not.
  
  Plan would be to connect the USB drive, once or twice a week, let it
  resilver, and then disconnect again. Connecting USB drive 24/7 would
  AFAIK have performance issues for the Laptop.
 
 MMmmm...  If it works, sounds good.  But I don't think it'll work as
 expected, for a number of reasons, outlined below.

It used to work for James Gosling.

http://blogs.sun.com/jag/entry/solaris_and_os_x

[snip]
 
  This would have the added benefit of the USB drive being bootable.
 
 By default, AFAIK, that's not correct.  When you mirror rpool to
 another device, by default the 2nd device is not bootable, because
 it's just got an rpool in there.  No boot loader.

That's true, but easily fixed (just like for any other mirrored pool
configuration).

installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/{disk}
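
The whole cycle, with placeholder device names, is just:

zpool attach rpool c5t0d0s0 c7t0d0s0    # add the USB disk as a mirror
zpool status rpool                      # wait for the resilver to finish
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t0d0s0
zpool offline rpool c7t0d0s0            # before unplugging; 'zpool online'
                                        # it again the next time it's attached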

 Even if you do this mirror idea, which I believe will be slower and
 less reliable than zfs send | zfs receive, you still haven't gained
 anything as compared to the zfs send | zfs receive procedure, which
 is known to work reliably with optimal performance.

How about ease of use?  All you have to do is plug in the USB disk and
ZFS will 'do the right thing'.  You don't have to remember to run zfs
send | zfs receive, or bother figuring out which snapshots to send and
receive.

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Glenn Lagasse
* Brian (broco...@vt.edu) wrote:
 I am Starting to put together a home NAS server that will have the
 following roles:
 
  (1) Store TV recordings from SageTV over either iSCSI or CIFS.  Up to
      4 or 5 HD streams at a time.  These will be streamed live to the
      NAS box during recording.
  (2) Playback TV (could be the stream being recorded, could be others)
      to 3 or more extenders.
  (3) Hold a music repository.
  (4) Hold backups from windows machines, mac (time machine), linux.
  (5) Be an iSCSI target for several different Virtual Boxes.
 
 Function 4 will use compression and deduplication.  Function 5 will
 use deduplication.
 
 I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2
 mirrored boot drives.  
 
 I have been reading these forums off and on for about 6 months trying
 to figure out how to best piece together this system.
 
 I am first trying to select the CPU.  I am leaning towards AMD because
 of ECC support and power consumption.

I can't comment on most of your question, but I will point you at:

http://blogs.sun.com/mhaywood/entry/powernow_for_solaris

I *think* the CPUs you're looking at won't be an issue, but it's
something to be aware of when looking at AMD kit (especially if you
want to manage the processor speed).

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris installation Exiting (caught signal 11 ) and reboot crashes

2010-01-06 Thread Glenn Lagasse
* Richard Elling (richard.ell...@gmail.com) wrote:
 Hi Pradeep,
 This is the ZFS forum. You might have better luck on the caiman-discuss
 forum which is where the folks who work on the installers hang out.

Except, that's not where the people who work on the legacy Solaris 10
installers hang out.  To engage those folks, you should open a support
ticket.

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Planed ZFS-Features - Is there a List or something else

2009-12-09 Thread Glenn Lagasse
* R.G. Keen (k...@geofex.com) wrote:
 I didn't see remove a simple device anywhere in there.
 
 Is it:
 too hard to even contemplate doing, 
 or
 too silly a thing to do to even consider letting that happen
 or 
 too stupid a question to even consider
 or
 too easy and straightforward to do the procedure I see recommended (export 
 the whole pool, destroy the pool, remove the device, remake the pool, then 
 reimport the pool) to even bother with?

You missed:

Too hard to do correctly with current resource levels and other higher
priority work.

As always, volunteers I'm sure are welcome. :-)

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Planed ZFS-Features - Is there a List or something else

2009-12-09 Thread Glenn Lagasse
* Neil Perrin (neil.per...@sun.com) wrote:
 
 
 On 12/09/09 13:52, Glenn Lagasse wrote:
 * R.G. Keen (k...@geofex.com) wrote:
 I didn't see remove a simple device anywhere in there.
 
 Is it:
 too hard to even contemplate doing, or
 too silly a thing to do to even consider letting that happen
 or too stupid a question to even consider
 or
 too easy and straightforward to do the procedure I see recommended (export 
 the whole pool, destroy the pool, remove the device, remake the pool, then 
 reimport the pool) to even bother with?
 
 You missed:
 
 Too hard to do correctly with current resource levels and other higher
 priority work.
 
 As always, volunteers I'm sure are welcome. :-)
 
 
 This gives the impression that development is not actively working
 on it. This is not true. As has been said often it is a difficult problem

True.  I apologize for the misleading nature of my comment.  I should
have pointed out that I don't work on the ZFS project and was relating
what I believed the answer might be, based upon past list postings on
the subject.

 and has been actively worked on for a few months now. I don't think
 we are prepared to give a date as to when it will be delivered though.

Cool!

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-25 Thread Glenn Lagasse
* David Abrahams (d...@boostpro.com) wrote:
 
 on Fri Sep 25 2009, Cindy Swearingen Cindy.Swearingen-AT-Sun.COM wrote:
 
  Hi David,
 
  All system-related components should remain in the root pool, such as
  the components needed for booting and running the OS.
 
 Yes, of course.  But which *are* those?
 
  If you have datasets like /export/home or other non-system-related
  datasets in the root pool, then feel free to move them out.
 
 Well, for example, surely /opt can be moved?

Don't be so sure.

  Moving OS components out of the root pool is not tested by us and I've
  heard of one example recently of breakage when usr and var were moved
  to a non-root RAIDZ pool.
 
  It would be cheaper and easier to buy another disk to mirror your root
  pool then it would be to take the time to figure out what could move out
  and then possibly deal with an unbootable system.
 
  Buy another disk and we'll all sleep better.
 
 Easy for you to say.  There's no room left in the machine for another disk.

The question you're asking can't easily be answered.  Sun doesn't test
configs like that.  If you really want to do this, you'll pretty much
have to 'try it and see what breaks'.  And you get to keep both pieces
if anything breaks.

There's very little you can safely move, in my experience.  /export
certainly.  Anything else, not really (though ymmv).  I tried to create
a separate ZFS dataset for /usr/local.  That worked some of the time,
but it also screwed up my system a time or two during
image-updates/package installs.

On my 2010.02/123 system I see:

bin Symlink to /usr/bin
boot/
dev/
devices/
etc/
export/ Safe to move, not tied to the 'root' system
kernel/
lib/
media/
mnt/
net/
opt/
platform/
proc/
rmdisk/
root/   Could probably move root's homedir
rpool/
sbin/
system/
tmp/
usr/
var/

Other than /export, everything else is considered 'part of the root
system'.  Thus part of the root pool.

Really, if you can't add a mirror for your root pool, then make backups
of your root pool (left as an exercise to the reader) and store the
non-system-specific bits (/export) on your raidz2 pool.
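
A minimal sketch of such a backup (pool and path names are made up): a
dated recursive snapshot piped through zfs send to a file on the data
pool.

zfs snapshot -r rpool@backup-20090925
zfs send -R rpool@backup-20090925 | gzip > /tank/backup/rpool-20090925.zfs.gz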

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-25 Thread Glenn Lagasse
* David Magda (dma...@ee.ryerson.ca) wrote:
 On Sep 25, 2009, at 16:39, Glenn Lagasse wrote:
 
 There's very little you can safely move in my experience.  /export
 certainly.  Anything else, not really (though ymmv).  I tried to
 create
 a seperate zfs dataset for /usr/local.  That worked some of the time,
 but it also screwed up my system a time or two during
 image-updates/package installs.
 
 I'd be very surprised (disappointed?) if /usr/local couldn't be
 detached from the rpool. Given that in many cases it's an NFS mount,
 I'm curious to know why it would need to be part of the rpool. If it
 is a 'dependency' I would consider that a bug.

It can be detached; however, one issue I ran into was packages that
installed into /usr/local causing problems when those packages were
upgraded.  Essentially, files ended up in the /usr/local directory on
the root pool itself, and upon reboot the filesystem service went into
maintenance because it couldn't mount the ZFS /usr/local dataset on top
of the now non-empty /usr/local location in the root pool.  I didn't
have time to investigate it fully.  At that point, spinning /usr/local
off into its own ZFS dataset just didn't seem worth the hassle.
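
For anyone who wants to try it anyway, this is roughly the setup and
the recovery step when it wedges (the dataset name is mine, nothing
standard):

zfs create -o mountpoint=/usr/local rpool/usrlocal
# if filesystem/local drops to maintenance after an image-update:
svcs -x svc:/system/filesystem/local:default
# clear out whatever got written under the empty /usr/local mountpoint,
# then:
svcadm clear svc:/system/filesystem/local:default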

Others' mileage may vary.

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fed up with ZFS causing data loss

2009-07-30 Thread Glenn Lagasse
* Rob Terhaar (rob...@robbyt.net) wrote:
 I'm sure this has been discussed in the past. But its very hard to
 understand, or even patch incredibly advanced software such as ZFS
 without a deep understanding of the internals.

It's also very hard for the primary ZFS developers to satisfy everyone's
itch :-)

 It will take quite a while before anyone can start understanding a
 file system which was developed behind closed doors for nearly a
 decade, and then released into opensource land via tarballs thrown
 over the wall. Only until recently the source has become more
 available to normal humans via projects such as indiana.

I don't think you've got your facts straight.  OpenSolaris was launched
in June 2005.  ZFS was integrated October 31st, 2005 after being in
development (of a sort) from October 31st 2001[1].  It hasn't been
developed behind closed doors for nearly a decade.  Four years at most,
and it was available for all to see (in much better form than 'tarballs
thrown over the wall') LONG before Indiana was even a gleam in Ian's
eye.

 Saying if you don't like it, patch it is an ignorant cop-out, and a
 troll response to people's problems with software.

And people seemingly expecting that the ZFS team (or any technology team
working on OpenSolaris) has infinite cycles to solve everyone's itches
are equally ignorant, imo.  OpenSolaris (the project) is meant to be a
community project, as in allowing contributions from entities outside
of sun.com.  So saying 'patches welcome' is mostly an appropriate
response (depending on how it's presented), because patches are in fact
welcome.  That's sort of how open source works (at least in my
experience).  If the primary developers aren't scratching your itch,
then you (or someone you can get to do the work for you) can fix your
own problems and contribute the fixes back to the community as a whole,
where everyone wins.

Cheers,

-- 
Glenn

1 - http://blogs.sun.com/bonwick/entry/zfs_the_last_word_in
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Glenn Lagasse
* Shannon Fiume (shannon.fi...@sun.com) wrote:
 Hi,
 
 I just installed 2009.06 and found that compression isn't enabled by
 default when filesystems are created. Does it make sense to have an
 RFE open for this? (I'll open one tonight if need be.) We keep telling
 people to turn on compression. Are there any situations where turning
 on compression doesn't make sense, like rpool/swap? what about
 rpool/dump?

That would be enhancement request #86.

http://defect.opensolaris.org/bz/show_bug.cgi?id=86
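
Not that it's hard to flip on by hand in the meantime; set it near the
top of the pool and everything below inherits it, e.g.:

zfs set compression=on rpool/export    # inherited by all child datasets

Swap and dump are the usual exceptions people leave alone.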

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24x1TB ZFS system. Best practices for OS install without wasting space.

2009-06-01 Thread Glenn Lagasse
Hi Ray,

* Ray Van Dolson (rvandol...@esri.com) wrote:
 So we have a 24x1TB system (from Silicon Mechanics).  It's using an LSI
 SAS card so we don't have any hardware RAID virtual drive type options.
 
 Solaris 10 05/09
 
 I was hoping we could set up one large zpool (RAIDZ) from the installer
 and set up a small zfs filesystem off of that for the OS leaving the
 rest of the zpool for our data.
 
 However, it sounds like this isn't possible... and that a ZFS root
 install must be done on a mirrored set of slices or disks.  So it looks
 like either way we might be losing out on minimum 2TB (two disks).
 
 Obviously we could throw in a couple smaller drives internally, or
 elsewhere... but are there any other options here?

Currently, no.  ZFS can only boot from single or mirrored zpools.
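
So in practice the layout ends up as a small mirrored root plus a
separate data pool.  One way to carve up the remaining 22 drives, with
placeholder device names:

zpool create tank \
    raidz2 c0t2d0  c0t3d0  c0t4d0  c0t5d0  c0t6d0  c0t7d0 \
           c0t8d0  c0t9d0  c0t10d0 c0t11d0 c0t12d0 \
    raidz2 c0t13d0 c0t14d0 c0t15d0 c0t16d0 c0t17d0 c0t18d0 \
           c0t19d0 c0t20d0 c0t21d0 c0t22d0 c0t23d0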

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-09 Thread Glenn Lagasse
* Orvar Korvar (knatte_fnatte_tja...@yahoo.com) wrote:
 Seagate7,
 
 You are not using ZFS correctly. You have misunderstood how it is
 used. If you dont follow the manual (which you havent) then any
 filesystem will cause problems and corruption, even ZFS or ntfs or
 FAT32, etc. You must use ZFS correctly. Start by reading the manual.
 
 For ZFS to be able to repair errors, you must use two drives or more.
 This is clearly written in the manual. If you only use one drive then
 ZFS can not repair errors. If you use one drive, then ZFS can only
 detect errors, but not repair errors. This is also clearly written in
 the manual.

Or, you can set copies > 1 on your ZFS filesystems.  This at least
protects you against data corruption on a single drive, but not if
the entire drive goes belly up.
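
For example (the dataset name is just an example):

zfs set copies=2 rpool/export/home
# note: only blocks written after the change get the extra copy

Existing data would need to be rewritten (or restored from a backup) to
pick up the additional copies.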

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is scrubbing safe in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread Glenn Lagasse
* David Dyer-Bennet (d...@dd-b.net) wrote:
 
 On Fri, January 23, 2009 09:52, casper@sun.com wrote:
 
 Which leaves me wondering, how safe is running a scrub?  Scrub is one of
 the things that made ZFS so attractive to me, and my automatic reaction
 when I first hook up the data disks during a recovery is run a scrub!.
 
 
  If your memory is bad, anything can happen.  A scrub can rewrite bad
  data; but it can be the case that the disk is fine but the memory is
  bad.  Then, if the data is replicated it can be copied and rewritten;
  it is then possible to write incorrect data (and if they need to recompute
  the checksum, then oops)
 
 The memory was ECC, so it *should* have mostly detected problems early
 enough to avoid writing bad data.  And so far nothing has been detected as
 bad in the pool during light use.  But I haven't yet run a scrub since
 fixing the memory, so I have no idea what horrors may be lurking in wait.
 
 The pool is two mirror vdevs, and then I have two backups on external hard
 drives, and then I have two sets of optical disks of the photos, one of
 them off-site (I'd lose several months of photos if I had to fall back to
 the optical disks, I'm a bit behind there).  So I'm not yet in great fear
 of actually losing anything, and have very little risk of actually losing
 a LOT.
 
 But what I'm wondering is, are there known bugs in 101b that make
 scrubbing inadvisable with that code?  I'd love to *find out* what horrors
 may be lurking.

There's nothing in the release notes for 2008.11 (based on 101b) about
issues running scrub.  I've been using 101b for some time now and
haven't seen or heard of any issues running scrub.

There are always bugs.  But I'm pretty certain there isn't a known 'zfs
scrub is inadvisable under any and all conditions' bug lying about.
I've certainly not heard of such a thing (and it would be pretty big
news for 2008.11 if it were true).
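
So fire away, e.g. (pool name is yours, obviously):

zpool scrub tank
zpool status -v tank    # shows scrub progress and any errors found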

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs boot / root in Nevada build 101

2008-10-29 Thread Glenn Lagasse
* Peter Baer Galvin ([EMAIL PROTECTED]) wrote:
 This seems like a n00b question but I'm stuck.
 
 Nevada build 101. Doing fresh install (in vmware fusion). I don't see
 any way to select zfs as the root file system. Looks to me like UFS is
 the default, but I don't see any option box to allow that to be
 changed to zfs. What am I missing?! Thanks.

You need to use the text mode installer, that's the only installer that
has support for ZFS root installations in SXCE.

Cheers,

Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to remove any references to a zpool that's gone

2008-09-18 Thread Glenn Lagasse
Hey Mark,

* Mark J Musante ([EMAIL PROTECTED]) wrote:
 Hi Glenn,

 Where is it hanging?  Could you provide a stack trace?  It's possible  
 that it's just a bug and not a configuration issue.

I'll have to recreate the situation (I won't be able to do so until next
week).  I had a zpool status (and subsequently a zpool destroy) command
that hung; subsequent zfs commands would also hang.  I couldn't even do
a zpool export (which someone privately told me should work).  What
worked was to reboot (actually I had to power the machine off
physically; init and reboot did nothing), after which I could export the
'broken' pool.  So I'm not sure where the bug is, but this shouldn't be
too hard to replicate: I believe running zpool status with this type of
setup will cause a hang, and then you're stuck until you power off the
machine and reboot to do the export.  I'll report back next week once I
replicate this.
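
For the record, this is roughly how I plan to grab the stacks next week
(assuming the hung command is zpool, as before):

pstack `pgrep -x zpool`      # userland stack of the hung command
# kernel-side view of the same threads, as root:
echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k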

Thanks,

Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! Possible with b80 and newest ZFS?

2008-08-21 Thread Glenn Lagasse
* Orvar Korvar ([EMAIL PROTECTED]) wrote:
 I dont think the mother board is on the HCL. But everything worked fine in 
 b90.
 
 I realize I havent provided all necessary info. Here is more info.
 http://www.opensolaris.org/jive/thread.jspa?threadID=69654tstart=0
 
 The thing is, Ive upgraded ZFS to the newest version with b95. And b95
 is very unstable, internet dies suddenly, movie playback kicks me out
 to login screen, wine dies upon startup, etc. So now I dont know what
 to do. I want an older build, 80-90 and access to my ZFS which is the
 latest version from b95. Impossible equation?

Ok, so you upgraded your ZFS pool to the new version introduced in build
95 (I think that's when it was introduced, regardless).  And now you
want to roll back because you're having problems with build 95.  I think
your only option is to save your data and reinstall an earlier build.
There is no mechanism (that I know of, though I'm not a ZFS expert) to
'downgrade' a pool.  When you upgrade the version of a ZFS pool, you are
then required to run the build that has support for that pool version.

Out of curiosity, why did you upgrade the pool in the first place?  It
would have run just fine in build 95 (and later).
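
In general it's worth checking what an upgrade actually buys you before
committing to it, e.g. ('tank' is a placeholder):

zpool get version tank    # current on-disk version of the pool
zpool upgrade -v          # versions this build knows about
zpool upgrade tank        # only then, and only if you won't need to go back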

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss