Re: ZFS question

2013-03-21 Thread Jeremy Chadwick
On Wed, Mar 20, 2013 at 10:10:20PM -0700, Reed A. Cartwright wrote:
 {snipped stuff about CAM and mps and ZFS deadman}

 Jeremy, I have a question about enabling kernel dumps based on my
 current swap config.

 I currently have a 1TB drive split into 4 geli-encrypted swap
 partitions (FreeBSD doesn't like swap partitions over ~250 GB, and I
 have lots of RAM).

 These partitions are UFS-swap partitions and are
 not backed by any mirroring or ZFS.
 
 So, how do I best enable crash dumps?  If I need to remove encryption,
 I can do that.

I have zero familiarity with geli(8), gbde(8), and file-based swap.

My gut feeling is that you cannot use this to achieve a proper kernel
panic dump, but I have not tried it.

You can force a kernel panic via sysctl debug.kdb.panic=1.  I'm not
sure if an automatic memory dump to swap happens with the stock GENERIC
kernel however.  I can talk more about that if needed (it involves
adding some options to your kernel config, and one rc.conf variable).

Regarding enabling crash dumps as a general concept:

In rc.conf you need to have dumpdev=auto (or point it at a specific
disk slice, but auto works just fine assuming you have a swap or
dump device defined in /etc/fstab -- see the savecore(8) man page).  Full
details are in rc.conf(5).  How this works:

After a system reboots, during rc script startup, rc.d/savecore runs
savecore which examines the configured dumpdev for headers + tries to
detect if there was previously a kernel panic.  If it finds one, it
begins pulling the data out of swap and writing the results directly to
/var/crash in a series of files (again, see savecore(8)).  It does this
***before*** swapon(8) is run via rc.d/swapon (the reason why should be
obvious).  After it finishes, swapon is run (meaning anything
previously written to the swap slice is effectively lost), and the
system continues through the rest of the rc scripts.
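A minimal sketch of the pieces described above (the device name is an example, not taken from this thread -- adjust to your own disk layout):

```sh
# /etc/fstab -- a dedicated swap slice savecore can scan at boot
# Device        Mountpoint  FStype  Options  Dump  Pass
/dev/ada0p3     none        swap    sw       0     0

# /etc/rc.conf -- let rc.d/savecore find the dump device automatically
dumpdev="auto"
dumpdir="/var/crash"    # the default; shown here only for clarity
```

After the next panic and reboot, rc.d/savecore should extract the dump from that slice into /var/crash before rc.d/swapon re-enables swapping.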

Purely for educational purposes: to examine system rc script order, see
rcorder(8) or run rcorder /etc/rc.d/*.

-- 
| Jeremy Chadwick   j...@koitsu.org |
| UNIX Systems Administratorhttp://jdc.koitsu.org/ |
| Mountain View, CA, US|
| Making life hard for others since 1977. PGP 4BD6C0CB |
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: ZFS question

2013-03-21 Thread Jeremy Chadwick
{thread snip}

For those following/interested in this conversation, it's been moved to
freebsd-fs:

http://lists.freebsd.org/pipermail/freebsd-fs/2013-March/016812.html
http://lists.freebsd.org/pipermail/freebsd-fs/2013-March/016813.html

And the longer/more recent analysis I did of the problem is here:

http://lists.freebsd.org/pipermail/freebsd-fs/2013-March/016814.html

-- 
| Jeremy Chadwick   j...@koitsu.org |
| UNIX Systems Administratorhttp://jdc.koitsu.org/ |
| Mountain View, CA, US|
| Making life hard for others since 1977. PGP 4BD6C0CB |


Re: ZFS question

2013-03-20 Thread Reed A. Cartwright
Several people, including me, have an issue like this with 9.1.  Your
best bet is to try 9.0.

On Wed, Mar 20, 2013 at 5:49 PM, Quartz qua...@sneakertech.com wrote:
 I'm experiencing fatal issues with pools hanging my machine requiring a
 hard-reset. I'm new to freebsd and these mailing lists in particular, is
 this the place to ask for help?

 __
 it has a certain smooth-brained appeal



-- 
Reed A. Cartwright, PhD
Assistant Professor of Genomics, Evolution, and Bioinformatics
School of Life Sciences
Center for Evolutionary Medicine and Informatics
The Biodesign Institute
Arizona State University


Re: ZFS question

2013-03-20 Thread Quartz



Several people, including me, have an issue like this with 9.1.  Your
best bet is to try 9.0.


Hmm... interesting. Is there any consensus as to what's going on?

Before anyone jumps to conclusions though, lemme just post the whole 
issue so we're on the same page (apologies if it turns out this isn't 
the right mailing list for this):





I have a raidz2 comprised of six sata drives connected via my 
motherboard's intel southbridge sata ports. All of the bios raid options 
are disabled and the drives are in straight ahci mode (hotswap enabled). 
The system (accounts, home dir, etc) is installed on a separate 7th 
drive formatted as normal ufs, connected to a separate non-intel 
motherboard port.


As part of my initial stress testing, I'm simulating failures by popping 
the sata cable to various drives in the 6x pool. If I pop two drives, 
the pool goes into 'degraded' mode and everything works as expected. I 
can zero and replace the drives, etc, no problem. However, when I pop a 
third drive, the machine becomes VERY unstable. I can nose around the 
boot drive just fine, but anything involving i/o that so much as sneezes 
in the general direction of the pool hangs the machine. Once this 
happens I can log in via ssh, but that's pretty much it. I've 
reinstalled and tested this over a dozen times, and it's perfectly 
repeatable:


`ls` the dir where the pool is mounted? hang.
I'm already in the dir, and try to `cd` back to my home dir? hang.
zpool destroy? hang.
zpool replace? hang.
zpool history? hang.
shutdown -r now? gets halfway through, then hang.
reboot -q? same as shutdown.

The machine never recovers (at least, not inside 35 minutes, which is 
the most I'm willing to wait). Reconnecting the drives has no effect. My 
only option is to hard reset the machine with the front panel button. 
Googling for info suggested I try changing the pool's failmode setting 
from 'wait' to 'continue', but that doesn't appear to make any 
difference. For reference, this is a virgin 9.1-RELEASE installed off 
the dvd image with no ports or packages or any extra anything.


I don't think I'm doing anything wrong procedure wise. I fully 
understand and accept that a raidz2 with three dead drives is toast, but 
I will NOT accept having it take down the rest of the machine with it. 
As it stands, I can't even reliably look at what state the pool is in. I 
can't even nuke the pool and start over without taking the whole machine 
offline.


__
it has a certain smooth-brained appeal


Re: ZFS question

2013-03-20 Thread Jeremy Chadwick
(Please keep me CC'd as I'm not subscribed to -questions)


Lots to say about this.

1. freebsd-fs is the proper list for filesystem-oriented questions of
this sort, especially for ZFS.

2. The issue you've described is experienced by some, and **not**
experienced by even more/just as many, so please keep that in mind.
Each/every person's situation/environment/issue has to be treated
separately/as unique.

3. You haven't provided any useful details, even in your follow-up post
here:

http://lists.freebsd.org/pipermail/freebsd-questions/2013-March/249958.html

All you've provided is a general overview with no technical details,
no actual data.  You need to provide that data verbatim.  You need to
provide:

- Contents of /boot/loader.conf
- Contents of /etc/sysctl.conf
- Output from zpool status
- Output from zpool get all
- Output from zfs get all
- Output from dmesg (probably the most important)
- Output from sysctl vfs.zfs kstat.zfs

I particularly tend to assist with disk-level problems, so if this turns
out to be a disk-level issue (and NOT a controller or controller driver
issue), I can help quite a bit with that.

4. I would **not** suggest rolling back to 9.0.  That recommendation
solves nothing -- if there is truly a bug/livelock issue, then it
needs to be tracked down.  By rolling back, if there is an issue, you're
effectively ensuring it'll never get investigated or fixed, which means
you can probably expect to see this in 9.2, 9.3, or even 10.x onward.

If you can't deal with the instability, or don't have the
time/cycles/interest to help track it down, that's perfectly okay too:
my recommendation is to go back to UFS (there's no shame in that).

Else, as always, I strongly recommend running stable/9 (keep reading).

5. stable/9 (a.k.a. FreeBSD 9.1-STABLE) just recently (~5 days ago)
MFC'd an Illumos ZFS feature solely to help debug/troubleshoot this
exact type of situation: introduction of the ZFS deadman thread.
Reference materials for what that is:

http://svnweb.freebsd.org/base?view=revision&revision=248369
http://svnweb.freebsd.org/base?view=revision&revision=247265
https://www.illumos.org/issues/3246

The purpose of this feature (enabled by default) is to induce a kernel
panic when ZFS I/O stalls/hangs for unexpectedly long periods of time
(configurable via vfs.zfs.deadman_synctime).
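For reference, a hedged sketch of how that tunable can be inspected or set (the sysctl name comes from the commits above; the value shown is only an example, not a recommendation):

```sh
# Inspect the current deadman timeout (seconds of stalled I/O before panic)
sysctl vfs.zfs.deadman_synctime

# To change it at boot, a line like this in /boot/loader.conf
# (example value; check the commit/man page for valid ranges):
# vfs.zfs.deadman_synctime=1000
```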

Once the panic happens (assuming your system is configured with a slice
dedicated to swap (ZFS-backed swap = bad bad bad) and use of
dumpdev=auto in rc.conf), upon reboot the system should extract the
crash dump from swap and save it into /var/crash.  At that point kernel
developers on the -fs list can help tell you *exactly* what to do with
kgdb(1) that can shed some light on what happened/where the issue may
lie.

All that's assuming that the issue truly is ZFS waiting for I/O and not
something else (like ZFS internally spinning hard in its own code).

Good luck, and let us know how you want to proceed.

-- 
| Jeremy Chadwick   j...@koitsu.org |
| UNIX Systems Administratorhttp://jdc.koitsu.org/ |
| Mountain View, CA, US|
| Making life hard for others since 1977. PGP 4BD6C0CB |



Re: ZFS question

2013-03-20 Thread Reed A. Cartwright
Note that my issue seems to do with an interaction between the CAM
system and the MPS driver in 9.1.  Thus it is more than likely
different than what you are experiencing Quartz.

Now that ZFS deadman has been incorporated into stable, I'll probably
give 9.1 (i.e. 9/stable) another try.

Jeremy, I have a question about enabling kernel dumps based on my
current swap config.

I currently have a 1TB drive split into 4 geli-encrypted swap
partitions (FreeBSD doesn't like swap partitions over ~250 GB, and I
have lots of RAM).  These partitions are UFS-swap partitions and are
not backed by any mirroring or ZFS.

So, how do I best enable crash dumps?  If I need to remove encryption,
I can do that.

On Wed, Mar 20, 2013 at 9:45 PM, Jeremy Chadwick j...@koitsu.org wrote:
 {snipped full quote of Jeremy's reply, above}



-- 
Reed A. Cartwright, PhD
Assistant Professor of Genomics, Evolution, and Bioinformatics
School of Life Sciences
Center for Evolutionary Medicine and Informatics
The Biodesign Institute
Arizona State University


Re: ZFS question

2013-03-20 Thread Quartz



1. freebsd-fs is the proper list for filesystem-oriented questions of
this sort, especially for ZFS.


Ok, I'm assuming I should subscribe to that list and post there then?



2. The issue you've described is experienced by some, and **not**
experienced by even more/just as many, so please keep that in mind.


Well, that's a given. Presumably if zfs was flat out totally broken, 9.x 
wouldn't have been released or I would've already found a million pages 
about this via google. I'm assuming my problem is a corner case and 
there might've been a bug/regression, or I fundamentally don't 
understand how this works.




3. You haven't provided any useful details, even in your follow-up post
here:


I got the impression that there wasn't a lot of overlap between the 
mailing lists and the forums, so I wanted to post in both simultaneously.




- Contents of /boot/loader.conf
- Contents of /etc/sysctl.conf
- Output from zpool get all
- Output from zfs get all
- Output from sysctl vfs.zfs kstat.zfs


I'm running a *virgin* 9.1 with no installed software or modifications 
of any kind (past setting up a non-root user). All of these will be at 
their install defaults (with the possible exception of the failmode 
setting, but that didn't help when I tried it the first time, so I 
didn't bother during later re-installs).




- Output from zpool status


There isn't a lot of detail to be had here: after I pop the 3rd 
drive, zfs/zpool commands almost always cause the system to hang, so I'm 
not sure if I can get anything out of them. Prior to the hang it will 
just tell you I have a six-drive raidz2 with two of the drives 
removed, so I'm not sure how that will be terribly useful.


I can tell you though that I'm creating the array with the following 
command:

zpool create -f array raidz2 ada{2,3,4,5,6,7}

There are eight drives in the machine at the moment, and I'm not messing 
with partitions yet because I don't want to complicate things. (I will 
eventually be going that route though as the controller tends to 
renumber drives in a first-come-first-serve order that makes some things 
difficult).
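One common way around that renumbering (a sketch, not from this thread; the label names are made up) is to give each disk a GPT label once and build the pool against the stable /dev/gpt/* names instead of the raw adaX names:

```sh
# Label each disk once (repeat per disk; device/label names are examples)
gpart create -s gpt ada2
gpart add -t freebsd-zfs -l disk2 ada2

# Then create the pool against the labels, which survive reordering
zpool create array raidz2 gpt/disk2 gpt/disk3 gpt/disk4 \
    gpt/disk5 gpt/disk6 gpt/disk7
```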




- Output from dmesg (probably the most important)


When? ie; right after boot, or after I've hot plugged a few drives, or 
yanked them, or created a pool, or what?




I particularly tend to assist with disk-level problems,


This machine is using a pile of spare seagate 250gb drives, if that 
makes any difference.




By rolling back, if there is an issue, you're
effectively ensuring it'll never get investigated or fixed,


That's why I asked for clarification, to see if it was a known 
regression in 9.1 or something similar.





or don't have the
time/cycles/interest to help track it down,


I have plenty of all that, for better or worse :)



that's perfectly okay too:
my recommendation is to go back to UFS (there's no shame in that).


At the risk of being flamed off the list, I'll switch to debian if it 
comes to that. I use freebsd exclusively for zfs.




Else, as always, I strongly recommend running stable/9 (keep reading).


My problem with tracking -stable is the relative volatility. If I'm 
trying to debug a problem it's not always easy or possible to keep 
consistent/known versions of things. With -release I know exactly what 
I'm getting and it cuts out a lot of variables.




just recently (~5 days ago)
MFC'd an Illumos ZFS feature solely to help debug/troubleshoot this
exact type of situation: introduction of the ZFS deadman thread.


Yes, I already discovered this from various solaris threads I encountered.



The purpose of this feature (enabled by default) is to induce a kernel
panic when ZFS I/O stalls/hangs


This doesn't really help my situation though. If I wanted a panic I'd 
just set failmode=panic.




All that's assuming that the issue truly is ZFS waiting for I/O and not
something else


Well, everything I've read so far indicates that zfs has issues when 
dealing with un-writable pools, so I assume that's what's going on here.


__
it has a certain smooth-brained appeal


Re: ZFS question

2012-02-18 Thread George Kontostanos
On Sat, Feb 18, 2012 at 12:35 PM, Denis Fortin for...@acm.org wrote:
 Good morning,

 On a small system using FreeBSD 9.0-RELEASE, ZFS is reporting an issue on a
 pool that I am not certain is really an issue, but I don't know how to
 investigate...

 Here is the situation: I have created a ZFS pool on an external 1TB Maxstor
 USB drive.

 The ZFS pool sees little or no activity, I haven't started using it for real
 yet.

 The drive spins down frequently because of lack of activity, and takes quite
 a few seconds to spin up.

 Now, I frequently get errors in the 'zpool status' thus (like, a couple of
 times per day):

 [denis@datasink] ~ zpool status -v
   pool: maxstor
  state: ONLINE
 status: One or more devices has experienced an unrecoverable error.  An
         attempt was made to correct the error.  Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the errors
         using 'zpool clear' or replace the device with 'zpool replace'.
    see: http://www.sun.com/msg/ZFS-8000-9P
   scan: scrub repaired 0 in 0h0m with 0 errors on Sat Feb 18 08:49:41 2012
 config:

         NAME                                      STATE     READ WRITE CKSUM
         maxstor                                   ONLINE       0     0     0
           gptid/64a30ca9-56ad-11e1-80c4-24ce7c30  ONLINE       1     0     0

 errors: No known data errors
 [denis@datasink] ~ zpool iostat -v maxstor
                                             capacity     operations    bandwidth
 pool                                      alloc   free   read  write   read  write
 ----------------------------------------  -----  -----  -----  -----  -----  -----
 maxstor                                   1.10M   928G      0      0    455  1.11K
   gptid/64a30ca9-56ad-11e1-80c4-24ce7c30  1.10M   928G      0      0    455  1.11K
 ----------------------------------------  -----  -----  -----  -----  -----  -----

 I know that this sounds bad for the drive, but I cannot find anywhere in my
 logs (/var/log/messages, dmesg, etc) a reference to this supposed
 'unrecoverable error' that the drive has had, and the resilvering *always*
 works.

 I am wondering whether it might not simply be a timeout issue, that is: the
 drive is taking too long to spin up, which causes a timeout and a read error
 to be reported, which then disappears completely once the drive has spun up.

 Does anybody have a suggestion about how I could go about investigating this
 issue?  Shouldn't there be a log of the 'unrecoverable error' somewhere?

 Thank you all,

 Denis


The power management settings put your drive to sleep after some time
of inactivity.

Unfortunately, the only way I have found to adjust this is via a
Windows PC utility from the vendor. (You can download it from their
website.)

To work around the problem you can export the pool when you don't use
it and import it back again. If that is not possible, you can schedule
a 5-minute cron job to query the pool status and keep the drive awake.
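Such a keep-alive cron entry might look like this (pool name from the thread; the interval is an example):

```sh
# /etc/crontab -- poll the pool every 5 minutes so the drive never
# idles long enough to spin down (output discarded)
*/5  *  *  *  *  root  /sbin/zpool status maxstor > /dev/null 2>&1
```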

Regards
-- 
George Kontostanos
Aicom telecoms ltd
http://www.aisecure.net


Re: ZFS Question

2010-08-16 Thread Elias Chrysocheris
On 8/15/2010 6:17 PM, Elias Chrysocheris wrote:
 On Monday 16 of August 2010 01:56:10 Depo Catcher wrote:

 Hi, I'm building a new file server.  Right now I'm on FreeBSD 6.4/UFS2
 and going to go to 8.1 with ZFS.

 Right now I have 3 disks, but one of them has data on it.  I'd like to
 setup a RaidZ but have a question on how to do this:
 Basically, I need to setup a mirror with the two empty drives, copy the
 data over and then add the third.  Is that even possible?
  
 Do you want to add the third drive as another mirror of the other two, or
 do you just want to add it, let's say, as another storage part of your
 system?

 Regards,
 Elias


Yes, add it for storage (i.e. RAID 5).

Well, I don't know if you can add a hard drive and make it a stripe (RAID 5) 
with others that already have data...

Perhaps you could install the system on a free hard drive, then add the other 
two and make them a mirror. Then you could keep your data in the mirrored pool, 
and the operating system, along with any data that you don't care to have 
mirrored, on the single drive.

But as far as I can understand this is not what you asked for...

Regards
Elias
 


Re: ZFS Question

2010-08-15 Thread Sam Fourman Jr.
On Sun, Aug 15, 2010 at 5:56 PM, Depo Catcher depocatc...@gmail.com wrote:

 Hi, I'm building a new file server.  Right now I'm on FreeBSD 6.4/UFS2 and
 going to go to 8.1 with ZFS.


In a few weeks, ZFS v15 will be MFC'd to RELENG_8; this is a much more
mature and stable ZFS.
I would suggest that you run RELENG_8 after the zfsv15 MFC.


-- 

Sam Fourman Jr.
Fourman Networks
http://www.fourmannetworks.com


Re: ZFS Question

2010-08-15 Thread David Rawling

 On 16/08/2010 8:56 AM, Depo Catcher wrote:
Hi, I'm building a new file server.  Right now I'm on FreeBSD 6.4/UFS2 and 
going to go to 8.1 with ZFS.


Right now I have 3 disks, but one of them has data on it.  I'd like to setup 
a RaidZ but have a question on how to do this:
Basically, I need to setup a mirror with the two empty drives, copy the data 
over and then add the third.  Is that even possible?
That kind of expansion cannot be done with FreeBSD ZFS (yet - I believe it was 
being worked on in OpenSolaris and it would have filtered to FreeBSD). Once 
the pool uses a given RAID level, I believe that's set in stone.


What might work is this - paraphrased because I'm not 100% sure of the 
specific commands:


* Create a large (multiple GB) file on your existing disk - let's assume 
that's /disk1/file0 (dd if=/dev/zero of=/disk1/file0 bs=1024 count=104857600 
would be 100GB)
* Create a 3 disk RAIDZ1 pool using /dev/disk2, /dev/disk3 and /disk1/file0 
(zpool create tank raidz1 ...)

* Delete the file (the pool will be degraded)
* Copy data to the degraded pool
* Replace the missing disk file with /dev/disk1 (zpool replace?)
* Scrub the pool for consistency checks (then reset the counters so you can 
track the current state).
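The steps above might look roughly like this (paraphrased, as the author says he is not 100% sure of the specifics; pool and device names are examples, and I've swapped in `zpool offline` for deleting the file vdev, which is a bit safer -- double-check everything against zpool(8) before trusting data to it):

```sh
# 1. Create a large sparse backing file on the existing disk
truncate -s 100G /disk1/file0

# 2. Build the raidz1 pool from the two empty disks plus the file vdev
zpool create tank raidz1 /dev/disk2 /dev/disk3 /disk1/file0

# 3. Take the file vdev offline; the pool keeps running, degraded
zpool offline tank /disk1/file0

# 4. Copy the data over, then replace the file vdev with the real disk
zpool replace tank /disk1/file0 /dev/disk1

# 5. Consistency check, then reset the error counters
zpool scrub tank
zpool clear tank
```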


You'll want a backup just in case, though, so is there perhaps a case for 
getting 1 more disk and building the set clean? That way the old disk becomes 
a backup.


Dave.

--
David Rawling
PD Consulting And Security
Mob: +61 412 135 513
Email: d...@pdconsec.net



Re: zfs question

2010-08-10 Thread David Rawling

 On 9/08/2010 2:52 AM, krad wrote:

On 8 August 2010 16:51, Adam Vande More amvandem...@gmail.com wrote:

On Sun, Aug 8, 2010 at 10:37 AM, Dick Hoogendijk d...@nagual.nl wrote:

  On 8-8-2010 14:27, Matthew Seaman wrote:

Yes. It works very well.
On amd64 you'll get a pretty reasonable setup out of the box (so to
speak) which will work fine for most purposes.

One other thing comes to mind. I want a very robust, fast, rock solid
*server*.
It will be a file-, email- and webserver mostly.

Instead of using two ZFS mirrors I could also go for gmirror (I'm not
familiar with it, but it's been around for quite some time so it should be
very stable). I don't get the data integrity that way, but my files would be
safe, no?

Also, using gmirror I could use normal BSD UFS filesystems and normal
swap files devided across all disks?
Or am I wrong, thinking this way.

I'm not into fancy stuff; it has to be robust, fast and safe.


You do not *need* amd64; however, it would be the best choice.  I wouldn't even
mess around with gmirror.  It's great and I love it, but it has some serious
drawbacks compared to zfs mirroring.  One is there is no integrity
checking, and two is that a full resync is required on an unclean disconnect.

http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/Mirror

--
Adam Vande More

You could add a gjournal layer in there as well for better data integrity.
I think you can do softupdates + journal as well now, although I have never
used it.

If you're after a rock solid server, then to be brutally honest it is less 
important to decide what you run than it is to choose something that you know 
well.


Since you have 4 years of Solaris/OpenSolaris experience recently, you are 
likely to know ZFS better than gmirror.


So I ask you to ponder - at four o'clock in the morning, with mail down, web 
servers down and all the disks holding your files failing to mount - which 
file system or disk structure would you prefer to try to troubleshoot?


Dave.

--
David Rawling
Principal Consultant
PD Consulting And Security
Mob: +61 412 135 513
Email: d...@pdconsec.net



Re: zfs question

2010-08-10 Thread Dick Hoogendijk

 On 10-8-2010 16:00, David Rawling wrote:

 On 9/08/2010 2:52 AM, krad wrote:
So I ask you to ponder - at four o'clock in the morning, with mail 
down, web servers down and all the disks holding your files failing to 
mount - which file system or disk structure would you prefer to try to 
troubleshoot?

ZFS. No question about it. Thank you for this eye opener. ;-)


Re: zfs question

2010-08-08 Thread Matthew Seaman
On 08/08/2010 12:43:48, Dick Hoogendijk wrote:
  Years back I ran FreeBSD, so I have some experience. The last couple of
 years I ran Solaris, followed by OpenSolaris. I am very satisfied.
 However, considering the troubles after Oracle took over, I have rebuilt
 my server system under FreeBSD-8.1 (now running as a virtual machine
 under VirtualBox). All works very well and smoothly, so I'm going to
 transfer this VM to a real separate harddisk.
 
 I have a couple of questions:
 
 [1] Transferring the VM is best done using dump/restore, I guess? (This
 after a minimal installation of FreeBSD 8.1 on the new harddisk?)

Yes, that would be a pretty good way of doing your vtophys migration.

 My server has five disks: 1 PATA (160Gb), 2 SATA2 (500Gb) and 2 SATA2
 (1Tb). The first is disabled at the moment and the others are ZFS mirrors
 under OpenSolaris. They are not usable from FreeBSD because the zfs
 versions don't match. I will have to rebuild. ;-)

% zpool list -H -o version zroot
14
% zfs list -H -o version /
3

Those are the latest available under 8-STABLE -- 8.1-RELEASE will be the
same.

 However, I'm a bit worried about the status of ZFS on FreeBSD-8.1 I
 don't want my system to boot off ZFS like I have now on OpenSolaris-b134
 
 I think it is wisest to have the 160Gb IDE drive installed for FreeBSD
 system drive w/ UFS2 and after that create two ZFS mirrors from my SATA
 drives.

Hmmm... well, booting FreeBSD off ZFS works perfectly well.  Apart from
the lack of support in sysinstall, I can't see any good reasons to avoid
it.  However, it's your system, and booting from UFS also works very
well, so do whatever pleases you.

There's more of a question over whether it's a good idea to put swap
onto zfs -- I think the recommendation is still to prefer using a raw
partition or gmirror for that.

 Is ZFS (v14) ready for production on FreeBSD-8.1 and if yes, will I
 still need special settings? The server system is 64bits and has 3Gb
 memory.

Yes.   It works very well.

On amd64 you'll get a pretty reasonable setup out of the box (so to
speak) which will work fine for most purposes.  Of course, if your
system has particularly demanding IO patterns, then you may have to
tweak some loader.conf or sysctl parameters to get the best results.
But that's hardly unique to ZFS.

 I hope to get some answers or good reading points.

The FreeBSD Wiki entries on ZFS are very useful to read:

http://wiki.freebsd.org/ZFS  (and the links from that page)

especially the recipes for installing various different ZFS based
configurations: eg. http://wiki.freebsd.org/RootOnZFS

Cheers,

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
JID: matt...@infracaninophile.co.uk   Kent, CT11 9PW





Re: zfs question

2010-08-08 Thread krad
On 8 August 2010 13:27, Matthew Seaman m.sea...@infracaninophile.co.uk wrote:

 On 08/08/2010 12:43:48, Dick Hoogendijk wrote:
   Years back I ran FreeBSD, so I have some experience. The last couple of
  years I ran Solaris, followed by OpenSolaris. I am very satisfied.
  However, considering the troubles after Oracle took over, I have rebuilt
  my server system under FreeBSD-8.1 (now running as a virtual machine
  under VirtualBox). All works very well and smoothly, so I'm going to
  transfer this VM to a real separate hard disk.
 
  I have a couple of questions:
 
  [1] Transferring the VM is best done using dump/restore, I guess? (This
  after a minimal installation of FreeBSD 8.1 on the new hard disk?)

 {snipped: full quote of Matthew Seaman's reply, above}

If you want an easy ZFS-root install, use the PC-BSD installer: it supports
ZFS installation and can install plain FreeBSD.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: zfs question

2010-08-08 Thread Elias Chrysocheris
On Sunday 08 of August 2010 14:43:48 Dick Hoogendijk wrote:
 {snipped: introduction, quoted above}
 I have a couple of questions:
 
 [1] Transferring the VM is best done using dump/restore, I guess? (This
 after a minimal installation of FreeBSD 8.1 on the new hard disk?)

That's the way I've done it once. It worked for me, so I believe everything
will go fine for you, too.

 
 {snipped: remainder of the question, quoted above}

I have a FreeBSD amd64 machine that has been ZFS-only since FreeBSD 8.0-RELEASE.
The ZFS pool was at v13 then. It still works fine, even after the update to
FreeBSD 8.1-RELEASE with ZFS pool v14. I have no problems, even though I pulled
the plug by accident (twice...). The system runs fine and the boot partition is
also on ZFS.

There is no problem if you want to use UFS for the boot partition. I think it
is a matter of taste.

Whatever your choice, I believe you'll stay happy using ZFS.

Best regards
Elias


Re: zfs question

2010-08-08 Thread Dick Hoogendijk

 On 8-8-2010 14:27, Matthew Seaman wrote:

Yes. It works very well.
On amd64 you'll get a pretty reasonable setup out of the box (so to
speak) which will work fine for most purposes.  Of course, if your
system has particularly demanding IO patterns, then you may have to
tweak some loader.conf or sysctl parameters to get the best results.
But that's hardly unique to ZFS.
Yes, you're quite right. ;-) But now you mention it: my virtual
installation under VirtualBox is i386. So, I guess it's better to
reinstall, because the server is amd64. I also think that will be better
for future use of ZFS (it needs 64 bits to be happy).

Am I right in believing I need the amd64 version (w/ ZFS) above the i386
one?




Re: zfs question

2010-08-08 Thread Dick Hoogendijk

 On 8-8-2010 14:27, Matthew Seaman wrote:

Yes. It works very well.
On amd64 you'll get a pretty reasonable setup out of the box (so to
speak) which will work fine for most purposes.
One other thing comes to mind. I want a very robust, fast, rock-solid
*server*.

It will be a file, email and web server mostly.

Instead of using two ZFS mirrors I could also go for gmirror (I'm not
familiar with it, but it's been around for quite some time, so it should
be very stable). I don't get the data integrity that way, but my files
would be safe, no?

Also, using gmirror I could use normal BSD UFS filesystems and normal
swap files divided across all disks?

Or am I wrong, thinking this way?

I'm not into fancy stuff; it has to be robust, fast and safe.



Re: zfs question

2010-08-08 Thread Adam Vande More
On Sun, Aug 8, 2010 at 10:37 AM, Dick Hoogendijk d...@nagual.nl wrote:

 {snipped: quote of Dick Hoogendijk's message, above}


You do not *need* amd64; however, it would be the best choice.  I wouldn't even
mess around with gmirror.  It's great and I love it, but it has some serious
drawbacks compared to ZFS mirroring.  One is that there is no integrity
checking, and two is that a full resync is required after an unclean disconnect.

http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/Mirror
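For comparison, creating the equivalent ZFS mirror is a one-liner (disk names are illustrative):

```shell
# A two-way mirror vdev: checksummed, self-healing, and it resilvers only
# the blocks actually in use after an unclean disconnect.
zpool create tank mirror /dev/ada1 /dev/ada2
zpool status tank     # shows the mirror vdev and its members
zpool scrub tank      # walk the pool and verify/repair checksums
```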

-- 
Adam Vande More


Re: zfs question

2010-08-08 Thread krad
On 8 August 2010 16:51, Adam Vande More amvandem...@gmail.com wrote:

 {snipped: full quote of Adam Vande More's reply, above}


You could add a gjournal layer in there as well for better data integrity.
I think you can do softupdates + journaling (SU+J) as well now, although I
have never used it.
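Sketches of both options mentioned above, with hypothetical device names; note that enabling SU+J via tunefs requires a newer FreeBSD (9.0 and later):

```shell
# gjournal: insert a journal below the filesystem, then newfs with -J
gjournal label /dev/ada1
newfs -J /dev/ada1.journal

# SU+J: enable soft-updates journaling on an existing, unmounted UFS fs
tunefs -j enable /dev/ada2p2
```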


Re: ZFS Question

2009-04-08 Thread Julien Cigar
On Wed, 2009-04-08 at 08:57 -0700, Amaru Netapshaak wrote:
 Hello,
 
 I am interested in using something like ZFS for its distributed nature. I
 run a file server with Samba acting as a PDC. I also run a second server
 as a BDC.  What I would like is a method for keeping both servers' shared
 data drives in sync when both the PDC and BDC are running.
 
 I am currently doing an incremental update twice daily to the BDC using
 rsync over SSH.  It works, but it's just not good enough: if the PDC
 goes down, anything created or altered after midnight or so isn't
 propagated to the BDC.
 
 I understand I can use ZFS to accomplish this easily, but from what I've
 read, you still need to manually push updates to the backup server over
 SSH via cron.  So I would still have windows of time where the file
 systems would not be in sync... am I heading in the wrong direction
 here? I am beginning to think I am.
 
 I've been afraid of NFS for some time, remembering back to the days when
 it was just not safe to use NFS.  I may have carried that fear on
 irrationally... is NFS a viable solution to my problem these days?
 
 Thanks for the advice!
 

You could use ggated/ggatec together with gmirror.
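A rough sketch of that combination, with made-up addresses and device names: export a disk from the secondary with ggated, attach it on the primary with ggatec, and mirror it against a local disk:

```shell
# On the secondary (exporting host):
echo "192.168.0.10 RW /dev/ada1" > /etc/gg.exports   # allow the primary
ggated                                               # serve the export

# On the primary:
ggatec create -o rw 192.168.0.11 /dev/ada1           # remote disk -> /dev/ggate0
gmirror label -v data /dev/ada1 /dev/ggate0          # mirror local + remote
newfs -U /dev/mirror/data
```

Writes then hit both hosts synchronously; network latency and gmirror's full resync after a disconnect are the trade-offs.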

-- 
Julien Cigar
Belgian Biodiversity Platform
http://www.biodiversity.be
Université Libre de Bruxelles (ULB)
Campus de la Plaine CP 257
Bâtiment NO, Bureau 4 N4 115C (Niveau 4)
Boulevard du Triomphe, entrée ULB 2
B-1050 Bruxelles
Mail: jci...@ulb.ac.be
@biobel: http://biobel.biodiversity.be/person/show/471
Tel : 02 650 57 52



Re: ZFS question...

2008-04-10 Thread Dan Nelson
In the last episode (Apr 10), Wael Nasreddine said:
 Hello list,
 
 I have 3 external USB hard disks hooked to my server, serving media
 files via NFS, SSHFS and Samba to my local network, laptops and
 PlayStation 2. The sizes of these hard disks are 160GB, 500GB and
 750GB. The 160GB has no space left (my archive of movies is on it), the
 500GB will soon run out of space (it holds my archive of TV series and
 anime), but the 750GB is almost empty; it has only a few gigs of my MP3
 collection. Anyway, I hate to have movies/series everywhere, so I
 thought of combining them into one big array... RAID0 isn't an option;
 RAID5 could be, but since the smallest one is 160GB the size of the
 array would be 320GB, which is ridiculous in my case. So I thought of
 putting ZFS over the 3 drives, but I don't know what size I should
 expect, and how/where can I mirror, or is mirroring not possible for me?

You don't necessarily need ZFS for this; gmirror would work just as
well.  You can split your 750GB drive into three
partitions/slices/whatevers:

160GB - mirror this with your physical 160GB disk
500GB - mirror this with your physical 500GB disk
90GB - leftover unmirrored, use at your peril

ZFS would let you take those two mirrored vdevs and stripe them into a
single pool, but then again you could use gstripe or gconcat for that. 
The main benefit of ZFS would be if you regularly crash the system:
fscking a 750GB UFS filesystem could take a while.
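A sketch of that split-and-mirror layout, using gpart syntax and hypothetical device names (da0 = 160GB, da1 = 500GB, da2 = 750GB):

```shell
# Carve the 750GB disk into pieces matching the two smaller disks...
gpart create -s gpt da2
gpart add -t freebsd-ufs -s 160G da2    # -> da2p1
gpart add -t freebsd-ufs -s 500G da2    # -> da2p2
gpart add -t freebsd-ufs da2            # -> da2p3, ~90GB leftover

# ...then mirror each piece against the matching whole disk.
gmirror label m160 /dev/da0 /dev/da2p1
gmirror label m500 /dev/da1 /dev/da2p2
```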

-- 
Dan Nelson
[EMAIL PROTECTED]


Re: ZFS question...

2008-04-10 Thread Wael Nasreddine
This One Time, at Band Camp, Dan Nelson [EMAIL PROTECTED] said, On Thu, Apr 
10, 2008 at 01:14:02PM -0500:
 You don't necessarily need ZFS for this; gmirror would work just as
 well.  You can split your 750GB drive into three
 partitions/slices/whatevers:

 160GB - mirror this with your physical 160GB disk
 500GB - mirror this with your physical 500GB disk
 90GB - leftover unmirrored, use at your peril

 ZFS would let you take those two mirrored vdevs and stripe them into a
 single pool, but then again you could use gstripe or gconcat for that. 
 The main benefit to ZFS would be if you regularly crash the system;
 fscking a 750gb UFS filesystem could take a while.
That's not the desired behaviour, actually; what I want is to gain the
maximum space without the possibility of losing data. I hear that ZFS
is excellent at recovering data, so I'm trying to figure out the
perfect installation with these drives while, of course, keeping the
data safe... RAID0 is good for not wasting space at all, but then again
if one drive fails I'll lose everything :(

What do you think, guys? Should I do something, or is it better just to
leave them the way they are (every drive has its own, currently ext3,
FS)?

-- 
Wael Nasreddine
http://wael.nasreddine.com
PGP: 1024D/C8DD18A2 06F6 1622 4BC8 4CEB D724  DE12 5565 3945 C8DD 18A2

/o\ These days the necessities of life cost you about three times what they
/o\ used to, and half the time they aren't even fit to drink.




Re: ZFS question...

2008-04-10 Thread Dan Nelson
In the last episode (Apr 10), Wael Nasreddine said:
 {snipped: quoted text from the two previous messages}
The above config will give you RAID1, not RAID0, since you're
mirroring each small drive onto a part of your large drive.  You'll end
up with 160+500 = 660GB of mirrored storage, with 90GB of unmirrored
space left over.  If you use ZFS, you would do something like this
(replace /dev/md* with your USB devices, obviously):

# mdconfig -a -t swap -s 160G
md1
# mdconfig -a -t swap -s 500G
md2
# mdconfig -a -t swap -s 750G
md3
# disklabel -R /dev/md3 /dev/stdin <<DONE
 d: 160G * unknown
 e: 500G * unknown
 f: * * unknown
DONE
# zpool create usb mirror /dev/md1 /dev/md3d mirror /dev/md2 /dev/md3e
# zpool list usb
NAME   SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
usb    655G   112K   655G     0%   ONLINE   -
# zpool status usb
  pool: usb
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        usb         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            md1     ONLINE       0     0     0
            md3d    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            md2     ONLINE       0     0     0
            md3e    ONLINE       0     0     0

errors: No known data errors
# df -k /usb
Filesystem  1024-blocks  Used      Avail  Capacity  Mounted on
usb           676085632     0  676085632        0%  /usb

-- 
Dan Nelson
[EMAIL PROTECTED]