On Tue, Nov 13, 2012 at 6:16 PM, Karl Wagner k...@mouse-hole.com wrote:
On 2012-11-13 17:42, Peter Tribble wrote:
Given storage provisioned off a SAN (I know, but sometimes that's
what you have to work with), what's the best way to expand a pool?
Specifically, I can either grow existing
when compared with dynamic stripes, mirrors, and
hardware raid LUNs.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
can recover anything you have enough redundancy for. Which
means everything, up to the redundancy of the vdev. Beyond that,
you may be able to recover dittoed data (of which metadata is just
one example) even if you've lost an entire vdev.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http
?
This is annoying, rather than critical: the system is out of service
and I can reconstruct the data if necessary. Although knowing
how to fix this would be generally useful in the future...
Thanks,
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
to intervene manually.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook t...@cook.ms wrote:
On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble peter.trib...@gmail.com
wrote:
On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook t...@cook.ms wrote:
Every scrub I've ever done that has found an error required manual
fixing.
Every
to slot that copy of the data
instantly into service if the primary copy fails.
For tar, you can substitute a free or commercial backup solution.
It works the same way.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
, so anything where the ACL is critical gets stored
on ufs [yuck].)
Also, aclmode is no longer listed in the usage message you see
if you do 'zfs get'.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Tue, Sep 13, 2011 at 8:34 PM, Paul B. Henson hen...@acm.org wrote:
On 9/13/2011 5:21 AM, Peter Tribble wrote:
Update 10 has been out for about 3 weeks.
Where was any announcement posted? I haven't heard anything about it. As far
as I can tell, the Oracle site still only has update 9
a bottleneck. Something like vdbench, although there are others.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
to help with this little project?
I'm definitely interested in emulating arcstat in jkstat. OK, I have
an old version,
but it's pretty much out of date and I need to refresh it.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
I've never seen ZFS run out of inodes, though.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
to keep safe,
you don't have to do it on the whole pool.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
playing with replacements
for sar. Top is still pretty useful.
For zfs, zpool iostat has some utility, but I find fsstat to be pretty useful.
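(As a quick sketch, with a made-up pool name: 'zpool iostat -v tank 5'
shows per-vdev activity every five seconds, while 'fsstat zfs 5' gives a
rolling summary of filesystem operations across all zfs filesystems.)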
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
in the log files, which are pretty big
but compress really well, so having both enabled works out nicely.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
if you make the same selections
you
With the new Oracle policies, it seems unlikely that you will be able to
reinstall the OS and achieve what you had before.
And what policies have Oracle introduced that mean you can't reinstall
your system?
--
-Peter Tribble
http://www.petertribble.co.uk
be a good idea.
(You are, I presume, using regular scrubs to catch latent errors.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
had old style file systems and exported these as a whole, iostat -x came
in handy; however, with zpools, this is not the case anymore, right?
fsstat?
Typically along the lines of
fsstat /tank/* 1
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
On Tue, Mar 30, 2010 at 10:42 PM, Eric Schrock eric.schr...@oracle.com wrote:
On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote:
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now, I've
failed drive? And
can I hotspare it manually? I could do a straight replace, but that
isn't quite the same thing.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
a short blurb on what
the state is, and what the options are.
Of course they can't. If they're in the know, then they're almost certainly
not in a position to talk about it in public. Asking here does not help,
as it would hardly be wise for anyone from Sun/Oracle to give any response.
--
-Peter
-optimal configuration
ought to have delivered.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
and
other zones, but that's relatively harmless.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
for other apps?
Also, what happens if a drive fails?
Swap it for a new one ;-)
(somewhat more complex with the dual layout as I described it).
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
into an installed
image. Yes, you can get rid of it, but the idea that you could pull
drives from a failed system and put them into any old system they
might happen to fit in and expect it to just work has always been
optimistic. The advantage of zfs is that it abstracts a lot of that away.
--
-Peter Tribble
in f2 will only
match the same data in f15 if they're aligned, which is only going to happen if
f1 ends on a block boundary.
Besides, you still have to read all the data off the disk, manipulate
it, and write
it all back.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
of modifications by applications
that aren't aware of your scheme?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
that library?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
creation' gives you seconds since the epoch, which you can convert
using a utility of your choice.
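(For example, something along these lines should do it - tank/home is
just a placeholder dataset name:
perl -e 'print scalar localtime shift' $(zfs get -H -p -o value creation tank/home)
The -p flag gives the raw number of seconds, and -H -o value strips the
headers so only the number gets passed to perl.)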
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
on the fact that
(unlike chown -h) the chmod command follows symlinks and there's
no way to disable that behaviour.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
.)
If not, I was looking at interposing my own readdir() (that's assuming
the application is using readdir()) that actually returns the entries in
the desired order. However, I'm having a bit of trouble hacking this
together (the current source doesn't compile in isolation on my S10
machine).
--
-Peter
and you soon forget that it's there
(until you have to
deal with one of the alternatives, which throws it into sharp relief).
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
for mirrored boot disks should prove
obsolete.
Why? Is the possibility of component or path failure and data corruption
so close to zero?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
to combine them.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Sat, Mar 28, 2009 at 11:06 AM, Michael Shadle mike...@gmail.com wrote:
On Sat, Mar 28, 2009 at 1:37 AM, Peter Tribble peter.trib...@gmail.com
wrote:
zpool add tank raidz1 disk_1 disk_2 disk_3 ...
(The syntax is just like creating a pool, only with add instead of create.)
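(A sketch with made-up disk names, to show the parallel:
zpool create tank raidz1 disk_1 disk_2 disk_3 disk_4
zpool add tank raidz1 disk_5 disk_6 disk_7 disk_8
The second command grows the existing pool by a second raidz1 vdev.)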
so I can add
you want it.
If you want random I/O performance, raidz isn't a good choice. For most
things, hardware raid ought to give you more IOPS. You mentioned mail
and file serving, which isn't an obvious match for raidz (which works better
for capacity and throughput).
--
-Peter Tribble
http
is obviously stuck in molasses.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
real    0.610
user    0.058
sys     0.551
I don't know whether that explains all of the problem, but it's clear
that having ACLs
on files and directories has a definite cost.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
will be able to read from disks b
and d. Is this understanding correct?
No. That quote is part of the discussion of ditto blocks.
See the following:
http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
http://www.petertribble.co.uk/Solaris/jkstat.html
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
statistics exposed as kstats, though, which would
make it easier to analyse them with existing tools.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
above, you need to match 4480002, which on
my machine is the following line in /etc/mnttab:
swap    /tmp    tmpfs   xattr,dev=4480002       1232289278
so that's /tmp (not a zfs filesystem, but you should get the idea).
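(Put another way, 'grep dev=4480002 /etc/mnttab' is a quick way to find
which mount a given dev id belongs to.)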
--
-Peter Tribble
http://www.petertribble.co.uk/ - http
On Sun, Jan 18, 2009 at 8:25 PM, Richard Elling richard.ell...@sun.com wrote:
Peter Tribble wrote:
See fsstat, which is based upon kstats. One of the things I want to do with
JKstat is correlate filesystem operations with underlying disk operations.
The
hard part is actually connecting
-user
and ran
$ zpool import disco
The disc was mounted, but none of the hundreds of snapshots was there.
Did I miss something?
How do you know the snapshots are gone?
Note that the zfs list command no longer shows snapshots by default.
You need 'zfs list -t all' for that.
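(For instance, 'zfs list -t snapshot -r disco' should show just the
snapshots under that pool, if they're still there.)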
--
-Peter Tribble
shows that there is an fdisk partition.
If you're going to use it then you'll need to at the very least put a
label on it.
format - partition should offer to label it.
You can then set the size of s0 (to be the same as s2, if you want to use the
full disk), and write the label again.
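(If you want to double-check the result, 'prtvtoc /dev/rdsk/c1t1d0s2'
will print the label back - the device name here is only an example.)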
--
-Peter
then zfs will do it all for you; you
just need to define partitions/slices if you're going to use slices.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
remember the keep it small and simple thing?
Hm. I thought the '-t all' worked with the revised zfs list. The problem I
have with that is that you need to type different commands to get the
same output depending on which machine you're on, as '-t all' doesn't
work on older systems.
--
-Peter Tribble
update any further?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
that
don't have the same number of disks?
One risk is that you mistyped the command, when you actually meant
to specify a balanced configuration.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
/mnt/zfs1/Integration
and use that for the Integration mountpoint. Then in GroupWS, 'ln -s
../Integration .'.
That way, if you look at Integration in /ws/com you get to something
that exists.
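To spell out those two steps (the dataset name here is made up):
zfs set mountpoint=/mnt/zfs1/Integration pool/integration
cd GroupWS && ln -s ../Integration .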
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote:
Am I right in thinking though that for every raidz1/2 vdev, you're
effectively losing the storage of one/two disks in that vdev?
Well yeah - you've got to have some allowance for redundancy.
--
-Peter Tribble
http
On Wed, Sep 17, 2008 at 10:11 AM, gm_sjo [EMAIL PROTECTED] wrote:
2008/9/17 Peter Tribble:
On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote:
Am I right in thinking though that for every raidz1/2 vdev, you're
effectively losing the storage of one/two disks in that vdev?
Well
of drives, more
vdevs implies narrower stripes, but that's a side-effect rather than a cause.
For what it's worth, we put all the disks on our thumpers into a single pool -
mostly it's 5x 8+1 raidz1 vdevs with a hot spare and 2 drives for the OS and
would happily go much bigger.
--
-Peter Tribble
http
, but I can't see zfs having issues.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
it more or less as is.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
disk5 \
raidz1 disk6 disk7 disk8 disk9 disk10 \
raidz1 disk11 disk12 disk13 disk14 disk15 \
spare disk16
Gives you a single pool containing 3 raidz vdevs (each 4 data + 1 parity)
and a hot spare.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
drives?
What I have is a local zfs pool from the free space on the internal
drives, so I'm only using a partition and the drive's write cache
should be off, so my theory here is that zfs_nocacheflush shouldn't
have any effect because there's no drive cache in use...
--
-Peter Tribble
http
On Sat, Jul 12, 2008 at 12:23 AM, Ian Collins [EMAIL PROTECTED] wrote:
Peter Tribble wrote:
(The backup problem is the real stumbling block. And backup is an area ripe
for disruptive innovation.)
Is it down to the volume of data, or many small files?
Many small files. We could handle many more
.
(The backup problem is the real stumbling block. And backup is an area ripe
for disruptive innovation.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
the intelligent
controllers in
some of Sun's JBOD units (the S1, and the 3000 series) fail to recognize drives
that work perfectly well elsewhere.
I'm slightly disappointed that there wasn't a model for 2.5 inch
drives in there,
though.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http
enable mpxio on the mpt or fibre
interfaces using
'stmsboot -D mpt' or 'stmsboot -D fp'.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
and mirror that to another raidz. (Or create a raidz out of mirrored
drives.) You can't do that. You can't layer raidz and mirroring.
You'll either have to use raidz for the lot, or just use mirroring:
zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0
mirror c1t6d0 c1t8d0
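Or, if you went the raidz route instead, the same six disks would give:
zpool create temparray raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t8d0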
--
-Peter
have redundant data. The extra performance is
just a side-effect.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
of powerpath and use mpxio instead.
The problem seems to be that the clariion arrays are active/passive and
zfs trips up if it tries to use one of the passive links. Using mpxio hides
this and works fine. And powerpath on the (active/active) DMX-4 seems
to be OK too.
--
-Peter Tribble
http
read access.
That said, it's a difficult workload. My limited experience of (the rather more
expensive) Veritas on (rather more expensive) big arrays is that they don't
handle it particularly well either.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
where 16G minimum is reasonable, ZFS is fine. But the
bulk of the installed base of machines accessed by users is still
in the 512M-1G range - and Sun are still selling 512M machines.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
/10 million file point at the most - we're looking at restructuring the
directory hierarchy for the filesystems that are beyond this so we can back them
up in pieces.
How about NFS access?
Seems to work fine.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
On Mon, Jun 16, 2008 at 5:20 PM, dick hoogendijk [EMAIL PROTECTED] wrote:
On Mon, 16 Jun 2008 16:21:26 +0100
Peter Tribble [EMAIL PROTECTED] wrote:
The *real* common thread is that you need ridiculous amounts
of memory to get decent performance out of ZFS
That's FUD. Older systems might
. (SunBlade
150 with 1G of RAM, if you want specifics.)
The zfs root box is significantly slower all around. Not only is
initial I/O slower, but it seems much less able to cache data.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
command I only see 12 drives - I was expecting that when
3510 FC JBOD array is connected to a host over two loops, it should have
seen 24 drives (two entries for each drive).
What am I missing ?
Unlike sparc, mpxio is enabled by default on x86. Are you already
multipathed?
--
-Peter Tribble
be to delete the
snapshots. With that cycle, you're deleting 6000 snapshots a day,
and while snapshot creation is free, my experience is that snapshot
deletion is not.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
directories that break it up into smaller chunks.
(Some sort of hashing scheme appears to be indicated. Unfortunately our
applications fall into two classes: everything in one huge directory,
or a hashing
scheme that results in many thousands of top-level directories.)
--
-Peter Tribble
http
) and 127729 (Sparc).
I think you have sparc and x86 swapped over.
Looking at an S10U5 box I have here, 127728-06 is integrated.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Sun, Apr 20, 2008 at 4:39 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Sun, 20 Apr 2008, Peter Tribble wrote:
My experience so far is that anything past a terabyte and 10 million
files,
and any backup software struggles.
What is the cause of the struggling? Does the backup host
, Mar 29, 2008 at 05:14:20PM +, Peter Tribble wrote:
A brief search didn't show anything relevant, so here
goes:
Would it be feasible to support a scrub per-filesystem
rather than per-pool?
The reason is that on a large system, a scrub of a pool can
take excessively long
, and the data regularly read anyway; for the
quiet ones they're neither read nor backed up, so it
would be nice to be able to validate those.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
the users can make use of
at the moment.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
.
May not be relevant, but still worth checking - I have a 2530 (which ought
to be the same, only SAS instead of FC), and got fairly poor performance
at first. Things improved significantly when I got the LUNs properly
balanced across the controllers.
--
-Peter Tribble
http://www.petertribble.co.uk
On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Fri, 15 Feb 2008, Peter Tribble wrote:
May not be relevant, but still worth checking - I have a 2530 (which ought
to be the same, only SAS instead of FC), and got fairly poor performance
at first. Things
(Single Channel) or LPe11002-E (dual channel) HBAs?
Did you encounter any problems with configuring this?
My experience in this area is that powerpath doesn't get along with zfs
(I couldn't import the pool); using MPxIO worked fine.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http
confirmation that it's helping and
hasn't introduced any other regressions.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
of random read I/O per vdev). I
would love to see better ways of backing up huge numbers of files.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
appears to be that you export the pool and import
it again.
Now, what if that system had been using ZFS root? I have a
hardware failure, I replace the raid card, the devid of the boot
device changes.
Will the system still boot properly?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http
On 9/24/07, Paul B. Henson [EMAIL PROTECTED] wrote:
On Sat, 22 Sep 2007, Peter Tribble wrote:
filesystem per user on the server, just to see how it would work. While
managing 20,000 filesystems with the automounter was trivial, the attempt
to manage 20,000 zfs filesystems wasn't entirely
files in user home
directories). This has been fixed, I believe, but only very recently in S10.]
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
path='/dev/dsk/c1t0d0s7'
devid='id1,[EMAIL PROTECTED]/h'
whole_disk=0
metaslab_array=13
metaslab_shift=32
ashift=9
asize=448412778496
--
-Peter Tribble
http://www.petertribble.co.uk/ - http
On 9/13/07, Solaris [EMAIL PROTECTED] wrote:
Try exporting the pool then import it. I have seen this after moving disks
between systems, and on a couple of occasions just rebooting.
Doesn't work. (How can you export something that isn't imported
anyway?)
--
-Peter Tribble
http
On 9/13/07, Mike Lee [EMAIL PROTECTED] wrote:
have you tried zpool clear?
Not yet. Let me give it a try:
# zpool clear storage
cannot open 'storage': pool is unavailable
Bother...
Thanks anyway!
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
On 9/13/07, Eric Schrock [EMAIL PROTECTED] wrote:
On Thu, Sep 13, 2007 at 06:36:33PM +0100, Peter Tribble wrote:
Doesn't work. (How can you export something that isn't imported
anyway?)
The pool is imported, or else 'zpool status' wouldn't show it at all.
It's just faulted. So when
On 9/13/07, Eric Schrock [EMAIL PROTECTED] wrote:
On Thu, Sep 13, 2007 at 07:54:12PM +0100, Peter Tribble wrote:
There must be a better way of handling this. It should have just
brought it online first time around, without all the fiddling around
(that feels like voodoo to me).
Yes
ranging from 0.3TB to 1.2TB).
Why multiple pools rather than a single large pool?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
than that to
be comfortable.
The pool size is essentially irrelevant. For other parameters,
I would expect that if it helped general performance then
it's going to help backup performance too.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com
was that the paths would change
but everything else would be fine.
Unfortunately not :-(
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
this or are we going to have to
start over?
If we do start over, is powerpath going to behave itself
or might this sort of issue bite us again in the future?
Thanks for any help or suggestions from any
powerpath experts.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com