On Wed, Apr 14, 2010 at 09:58:50AM -0700, Richard Elling wrote:
On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
From my experience dealing with 4TB you stop writing after 80% of zpool
utilization
YMMV. I have routinely completely filled zpools. There have been some
improvements in
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short periods of time off and on. (It lights nagios up
like a
On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote:
I realize that I did things in the wrong order. I should have removed the
oldest snapshot first, then on to the newest, and then removed the data in the
FS itself.
For the problem in question, this is irrelevant. As discussed in the
On Mon, Apr 12, 2010 at 09:32:50AM -0600, Tim Haley wrote:
Try explicitly enabling fmd to send to syslog in
/usr/lib/fm/fmd/plugins/syslog-msgs.conf
Wow, so useful, yet so well hidden I never even knew to look for it.
Please can this be on by default? Please?
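If memory serves (worth double-checking against the shipped file), the plugin's
properties are set with setprop lines, so the change would be something like:

  setprop syslogd true

in /usr/lib/fm/fmd/plugins/syslog-msgs.conf, followed by a restart of the fmd
service (svcadm restart svc:/system/fmd:default) to pick it up.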
--
Dan.
On Mon, Apr 12, 2010 at 06:17:47PM -0500, Harry Putnam wrote:
But, I'm too unskilled in solaris and zfs admin to be risking a total
melt down if I try that before gaining a more thorough understanding.
Grab virtualbox or something similar and set yourself up a test
environment. In general, and
On Mon, Apr 12, 2010 at 08:01:27PM -0700, Peter Tripp wrote:
So I decided I would attach the disks to a 2nd system (with working fans) where
I could backup the data to tape. So here's where I got dumb...I ran 'zpool
export'. Of course, I never actually ended up attaching the disks to another
On Sun, Apr 11, 2010 at 07:03:29PM -0400, Edward Ned Harvey wrote:
Heck, even if the faulted pool spontaneously sent the server into an
ungraceful reboot, even *that* would be an improvement.
Please look at the pool property failmode. Both of the preferences
you have expressed are available,
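For reference, failmode takes wait, continue, or panic; a minimal sketch with a
hypothetical pool name:

  # zpool get failmode tank
  # zpool set failmode=continue tank   # return EIO to new I/O instead of hanging
  # zpool set failmode=panic tank      # take the box down on catastrophic pool failure

wait (the default) blocks I/O until the device returns.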
On Sat, Apr 10, 2010 at 11:50:05AM -0500, Bob Friesenhahn wrote:
Huge synchronous bulk writes are pretty rare since usually the
bottleneck is elsewhere, such as the ethernet.
Also, large writes can go straight to the pool, and the zil only logs
the intent to commit those blocks (ie, link them
On Sat, Apr 10, 2010 at 12:56:04PM -0500, Tim Cook wrote:
At that price, for the 5-in-3 at least, I'd go with supermicro. For $20
more, you get what appears to be a far more solid enclosure.
My intent with that link was only to show an example, not make a
recommendation. I'm glad others have
On Sat, Apr 10, 2010 at 02:51:45PM -0500, Harry Putnam wrote:
[Note: This discussion started in another thread
Subject: about backup and mirrored pools
but the subject has been significantly changed so started a new
thread]
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
On Sat, Apr 10, 2010 at 06:20:54PM -0500, Bob Friesenhahn wrote:
Since he is already using mirrors, he already has enough free space
since he can move one disk from each mirror to the main pool (which
unfortunately, can't be the boot 'rpool' pool), send the data, and then
move the second
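One possible reading of that sequence, sketched with hypothetical pool and disk
names:

  # zpool detach oldpool c1t3d0               # free one side of the old mirror
  # zpool add -f mainpool c1t3d0              # grow the main pool (-f: new vdev is unmirrored for now)
  # zfs snapshot -r oldpool@move
  # zfs send -R oldpool@move | zfs recv -d mainpool
  # zpool destroy oldpool                     # frees the remaining disk, c1t2d0
  # zpool attach mainpool c1t3d0 c1t2d0       # re-mirror the new vdev with it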
On Fri, Apr 09, 2010 at 10:21:08AM -0700, Eric Andersen wrote:
If I could find a reasonable backup method that avoided external
enclosures altogether, I would take that route.
I'm tending to like bare drives.
If you have the chassis space, there are 5-in-3 bays that don't need
extra drive
On Sun, Apr 04, 2010 at 07:13:58AM -0700, Kevin wrote:
I am trying to recover a raid set, there are only three drives that
are part of the set. I attached a disk and discovered it was bad.
It was never part of the raid set.
Are you able to tell us more precisely what you did with this disk?
On Thu, Apr 08, 2010 at 12:14:55AM -0700, Erik Trimble wrote:
Daniel Carosone wrote:
Go with the 2x7 raidz2. When you start to really run out of space,
replace the drives with bigger ones.
While that's great in theory, there's getting to be a consensus that 1TB
7200RPM 3.5 Sata drives
On Thu, Apr 08, 2010 at 03:48:54PM -0700, Erik Trimble wrote:
Well
To be clear, I don't disagree with you; in fact for a specific part of
the market (at least) and a large part of your commentary, I agree. I
just think you're overstating the case for the rest.
The problem is (and this
On Thu, Apr 08, 2010 at 08:36:43PM -0700, Richard Elling wrote:
On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote:
As for error rates, this is something zfs should not be afraid
of. Indeed, many of us would be happy to get drives with less internal
ECC overhead and complexity for greater
Go with the 2x7 raidz2. When you start to really run out of space,
replace the drives with bigger ones. You will run out of space
eventually regardless; this way you can replace 7 at a time, not 14 at
a time. With luck, each replacement will last you long enough that
the next replacement will
On Tue, Apr 06, 2010 at 01:44:20PM -0400, Tony MacDoodle wrote:
I am trying to understand how refreservation works with snapshots.
If I have a 100G zfs pool
I have 4 20G volume groups in that pool.
refreservation = 20G on all volume groups.
Now when I want to do a snapshot
On Wed, Apr 07, 2010 at 01:52:23AM +1000, taemun wrote:
I was wondering if someone could explain why the DDT is seemingly
(from empirical observation) kept in a huge number of individual blocks,
randomly written across the pool, rather than just a large binary chunk
somewhere.
It's not really
On Wed, Apr 07, 2010 at 06:27:09AM +1000, Daniel Carosone wrote:
You have reminded me.. I wrote some patches to the zfs manpage to help
clarify this issue, while travelling, and never got around to posting
them when I got back. I'll dig them up off my netbook later today.
http
On Tue, Apr 06, 2010 at 06:53:04PM -0700, Richard Elling wrote:
Disagree. Swap is a perfectly fine workload for SSDs. Under ZFS,
even more so. I'd really like to squash this rumour and thought we
were making progress on that front :-( Today, there are millions or
thousands of
On Sun, Apr 04, 2010 at 11:46:16PM -0700, Willard Korfhage wrote:
Looks like it was RAM. I ran memtest+ 4.00, and it found no problems.
Then why do you suspect the ram?
Especially with 12 disks, another likely candidate could be an
overloaded power supply. While there may be problems showing
On Mon, Apr 05, 2010 at 07:43:26AM -0400, Edward Ned Harvey wrote:
Is the database running locally on the machine? Or at the other end of
something like nfs? You should have better performance using your present
config than just about any other config ... By enabling the log devices,
such as
On Mon, Apr 05, 2010 at 06:32:13PM -0700, Learner Study wrote:
I'm wondering what is the correct flow when both raid5 and de-dup are
enabled on a storage volume
I think we should do de-dup first and then raid5 ... is that
understanding correct?
Not really. Strictly speaking, ZFS
On Mon, Apr 05, 2010 at 06:58:57PM -0700, Learner Study wrote:
Hi Jeff:
I'm a bit confused...did you say Correct to my orig email or the
reply from Daniel...
Jeff is replying to your mail, not mine.
It looks like he's read your question a little differently. By that
reading, you are
On Mon, Apr 05, 2010 at 09:46:58PM -0500, Tim Cook wrote:
On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage
opensola...@familyk.org wrote:
It certainly has symptoms that match a marginal power supply, but I
measured the power consumption some time ago and found it comfortably within
the
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
By the way, I see that now one of the disks is listed as degraded - too many
errors. Is there a good way to identify exactly which of the disks it is?
It's hidden in iostat -E, of all places.
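For example (hypothetical device name; -n adds descriptive device names, and the
device argument limits the output to the suspect disk):

  # iostat -En c0t0d0

The per-device block includes the soft/hard/transport error counters along with
vendor, product and serial number, which makes matching it to a physical disk
much easier.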
--
Dan.
On Tue, Apr 06, 2010 at 12:29:35AM -0500, Tim Cook wrote:
On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone d...@geek.com.au wrote:
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
By the way, I see that now one of the disks is listed as degraded - too
many errors
On Thu, Apr 01, 2010 at 12:38:29AM +0100, Robert Milkowski wrote:
So I wasn't saying that it can work or that it can work in all
circumstances but rather I was trying to say that it probably shouldn't
be dismissed on a performance argument alone as for some use cases
It would be of great
On Mon, Mar 29, 2010 at 06:38:47PM -0400, David Magda wrote:
A new ARC case:
I read this earlier this morning. Welcome news indeed!
I have some concerns about the output format, having worked with
similar requirements in the past. In particular: as part of the
monotone VCS when reporting
On Tue, Mar 30, 2010 at 12:37:15PM +1100, Daniel Carosone wrote:
There will also need to be clear rules on output ordering, with
respect to renames, where multiple changes have happened to renamed
files.
Separately, but relevant in particular to the above due to the
potential for races: what
On Mon, Mar 29, 2010 at 01:10:22PM -0700, F. Wessels wrote:
The caiman installer allows you to control the size of the partition
on the boot disk but it doesn't allow you (at least I couldn't
figure out how) to control the size of the slices. So you end up with
slice0 filling the entire
On Tue, Mar 30, 2010 at 03:13:45PM +1100, Daniel Carosone wrote:
You can:
- install to a partition that's the size you want rpool
- expand the partition to the full disk
- expand the s2 slice to the full disk
- leave the s0 slice for rpool alone
- make another slice for l2arc
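The last piece, as a sketch (assuming the extra slice ended up as s1 and the
l2arc is for a separate, non-root pool called tank):

  # zpool add tank cache c0t0d0s1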
There's been some talk about alignment lately, both for flash and WD disks.
What's missing, at least from my perspective, is a clear and
unambiguous test so users can verify that their zfs pools are aligned
correctly. This should be a test that sees through all the layers of
BIOS and SMI/EFI and
On Mon, Mar 29, 2010 at 12:21:39PM +1100, Daniel Carosone wrote:
#1. Use xxd (or similar) to examine the contents of the raw disk
This relies on knowing what to look for, and how that is aligned to
the start of the partition and to the metaslab addresses and offsets
that determine the writes
On Sun, Mar 28, 2010 at 09:32:02PM -0700, Richard Elling wrote:
This is documented in the ZFS on disk format doc.
Yep, I've been there in the meantime.. ;-)
Use prtvtoc or format to see the beginning of the slice relative to the
beginning of the partition. I dunno how you tell the start of
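For the first part, something like (hypothetical device name):

  # prtvtoc /dev/rdsk/c0t0d0s2

lists each slice's first sector and sector count, relative to the start of the
Solaris partition.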
On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote:
You can't share a device (either as ZIL or L2ARC) between multiple pools.
Discussion here some weeks ago suggested that an L2ARC device
was used for all ARC evictions, regardless of the pool.
I'd very much like an
On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote:
not sure if 32bit BSD supports 48bit LBA
Solaris is the only otherwise-modern OS with this daft limitation.
--
Dan.
On Sat, Mar 27, 2010 at 08:47:26PM +1100, Daniel Carosone wrote:
On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote:
not sure if 32bit BSD supports 48bit LBA
Solaris is the only otherwise-modern OS with this daft limitation.
OK, it's not due to LBA48, but the 1TB limitation
On Tue, Mar 23, 2010 at 07:22:59PM -0400, Frank Middleton wrote:
On 03/22/10 11:50 PM, Richard Elling wrote:
Look again, the checksums are different.
Whoops, you are correct, as usual. Just 6 bits out of 256 different...
Look which bits are different - digits 24, 53-56 in both cases.
On Wed, Mar 24, 2010 at 08:02:06PM +0100, Svein Skogen wrote:
Maybe someone should look at implementing the zfs code for the XScale
range of io-processors (such as the IOP333)?
NetBSD runs on (many of) those.
NetBSD has an (in-progress, still-some-issues) ZFS port.
Hopefully they will converge
On Mon, Mar 22, 2010 at 10:58:05PM -0700, homerun wrote:
if I access the datapool over the network (smb, nfs, ftp, sftp, etc.)
I get only 200 KB/s max.
Compared to rpool, which gives XX MB/s speeds to and from the network, it is slow.
Any ideas what the reasons might be and how to try to find
On Sat, Mar 20, 2010 at 09:50:10PM -0700, Erik Trimble wrote:
Nah, the 8x2.5-in-2 are $220, while the 5x3.5-in-3 are $120.
And they have a sas expander inside, unlike every other variant of
these I've seen so far. Cabling mess win.
--
Dan.
On Sun, Mar 21, 2010 at 08:59:29PM -0400, Edward Ned Harvey wrote:
ln -s .zfs/snapshot snapshots
Voila. All Windows or Mac or Linux or whatever users are able to
easily access snapshots.
Not being a CIFS user, could you clarify/confirm for me.. is this
just a presentation issue, ie
On Fri, Mar 19, 2010 at 06:34:50PM +1100, taemun wrote:
A pool with a 4-wide raidz2 is a completely nonsensical idea.
No, it's not - not completely.
It has the same amount of accessible storage as two striped mirrors. And
would be slower in terms of IOPS, and be harder to upgrade in the
On Fri, Mar 19, 2010 at 12:59:39AM -0700, homerun wrote:
Thanks for comments
So possible choices are:
1) 2 2-way mirrors
2) 4 disks raidz2
BTW, can raidz have a spare? So is there one more possible choice:
3) 3-disk raidz with 1 spare?
raidz2 is basically this, with a pre-silvered
On Thu, Mar 18, 2010 at 03:36:22AM -0700, Kashif Mumtaz wrote:
I did another test on both machines, and write performance on ZFS is
extraordinarily slow.
-
In ZFS, data was being written at around 1037 kw/s while the disk remained busy
On Thu, Mar 18, 2010 at 05:21:17AM -0700, Tonmaus wrote:
No, because the parity itself is not verified.
Aha. Well, my understanding was that a scrub basically means reading
all data and comparing it with the parities, which means that these have
to be re-computed. Is that correct?
A scrub
As noted, the ratio calculation applies over the data attempted to
dedup, not the whole pool. However, I saw a commit go by just in the
last couple of days about the dedupratio calculation being misleading,
though I didn't check the details. Presumably this will be reported
differently from the
On Thu, Mar 18, 2010 at 09:54:28PM -0700, Tonmaus wrote:
(and the details of how much and how low have changed a few times
along the version trail).
Is there any documentation about this, besides source code?
There are change logs and release notes, and random blog postings
along the way
On Wed, Mar 17, 2010 at 10:15:53AM -0500, Bob Friesenhahn wrote:
Clearly there are many more reads per second occurring on the zfs
filesystem than the ufs filesystem.
yes
Assuming that the application-level requests are really the same
From the OP, the workload is a find /.
So, ZFS makes
On Wed, Mar 17, 2010 at 08:43:13PM -0500, David Dyer-Bennet wrote:
My own stuff is intended to be backed up by a short-cut combination --
zfs send/receive to an external drive, which I then rotate off-site (I
have three of a suitable size). However, the only way that actually
works so
On Wed, Mar 10, 2010 at 02:54:18PM +0100, Svein Skogen wrote:
Are there any good options for encapsulating/decapsulating a zfs send
stream inside FEC (Forward Error Correction)? This could prove very
useful both for backup purposes, and for long-haul transmissions.
I used par2 for this for
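A rough sketch of that approach (paths and redundancy level are arbitrary):

  # zfs send tank@backup > /backup/tank.zfs
  # par2 create -r10 /backup/tank.zfs.par2 /backup/tank.zfs   # ~10% recovery blocks
  # par2 verify /backup/tank.zfs.par2                         # later: check (or repair) before zfs recv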
On Thu, Mar 11, 2010 at 07:23:43PM +1100, Daniel Carosone wrote:
You have reminded me to go back and look again, and either find that
whatever issue was at fault last time was transient and now gone, or
determine what it actually was and get it resolved.
In case you want to: http
On Thu, Mar 11, 2010 at 02:00:41AM -0800, Svein Skogen wrote:
I can't help but keep wondering if not some sort of FEC wrapper
(optional of course) might solve both the backup and some of the
long-distance-transfer (where retransmissions really isn't wanted)
issues.
Retransmissions aren't
On Tue, Mar 02, 2010 at 03:14:04PM -0800, Richard Elling wrote:
That is just a shorthand for snapshotting (snapshooting? :-) datasets.
:-)
There still is no pool snapshot feature.
One could pick nits about zpool split ..
--
Dan.
In addition to all the other good advice in the thread, I will
emphasise the benefit of having smaller snapshot granularity. I have
found this to be one of the most valuable and compelling reasons when
I have chosen to create a separate filesystem.
If there's data that changes often and I
For rpool, which has SMI labels and fdisk partitions, you need to
expand the size of those, and then ZFS will notice (with or without
autoexpand, depending on version).
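On builds new enough to have the autoexpand property, the tail end of that looks
something like this (hypothetical device name):

  # zpool set autoexpand=on rpool
  # zpool online -e rpool c0t0d0s0    # ask ZFS to grow into the enlarged slice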
--
Dan.
On Tue, Mar 02, 2010 at 02:04:52PM -0800, Erik Trimble wrote:
I don't believe that is true for VM installations like Vladimir's,
though I certainly could be wrong.
I think you are :-)
Vladimir - I would say your best option is to simply back up your data
from the OpenSolaris VM, and do
On Mon, Mar 01, 2010 at 09:22:38AM -0800, Richard Elling wrote:
Once again, I'm assuming that each DDT entry corresponds to a
record (slab), so to be exact, I would need to know the number of
slabs (which doesn't currently seem possible). I'd be satisfied
with a guesstimate based on
Is there anything that is safe to use as a ZIL, faster than the
Mtron but more appropriate for home than a Stec?
ACARD ANS-9010, as mentioned several times here recently (also sold as
hyperdrive5)
--
Dan.
On Sun, Feb 28, 2010 at 07:36:30PM -0800, Bill Sommerfeld wrote:
To avoid this in the future, set PKG_CACHEDIR in your environment to
point at a filesystem which isn't cloned by beadm -- something outside
rpool/ROOT, for instance.
+1 - I've just used a dataset mounted at /var/pkg/download,
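A sketch of that variant (the dataset name is arbitrary):

  # zfs create -o mountpoint=/var/pkg/download rpool/pkgcache

Because the dataset lives outside rpool/ROOT, beadm clones of the boot
environment no longer carry the package cache along with them.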
On Wed, Feb 24, 2010 at 10:57:08AM +, li...@di.cx wrote:
2 x SuperMicro AOC-SAT2-MV8 SATA controllers (so 16 ports in total,
plus 6 on the motherboard)
What about case space for the disks?
Disks: 3x40GB
rpool mirror and spare on shelf. 3 way mirror if you really want and have the
On Tue, Feb 23, 2010 at 12:09:20PM -0800, Erik Trimble wrote:
I've got stacks of both v20z/v40z hardware, plus a whole raft of IBM
xSeries (/not/ System X) machines which really, really, really need an
SSD for improved I/O. At this point, I'd kill for a parallel SCSI -
SATA adapter
On Fri, Feb 19, 2010 at 01:15:17PM -0600, David Dyer-Bennet wrote:
On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote:
Anybody know what the proper geometry is for a WD1600BEKT-6-1A13? It's
not even in the data sheets any more!
any such geometry has been entirely fictitious since
On Fri, Feb 19, 2010 at 11:51:29PM +0100, Ragnar Sundblad wrote:
On 19 feb 2010, at 23.40, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to
On Wed, Feb 17, 2010 at 11:37:54PM -0500, Ethan wrote:
It seems to me that you could also use the approach of 'zpool replace' for
That is true. It seems like it would then have to rebuild from parity for every
drive, though, which I think would take rather a long while, wouldn't it?
No longer than
On Thu, Feb 18, 2010 at 12:42:58PM -0500, Ethan wrote:
On Thu, Feb 18, 2010 at 04:14, Daniel Carosone d...@geek.com.au wrote:
Although I do notice that right now, it imports just fine using the p0
devices using just `zpool import q`, no longer having to use import -d with
the directory
On Thu, Feb 18, 2010 at 10:39:48PM -0600, Bob Friesenhahn wrote:
This sounds like an initial 'silver' rather than a 'resilver'.
Yes, in particular it will be entirely sequential.
ZFS resilver is in txg order and involves seeking.
What I am interested in is the answer to these sort of
I have a machine whose purpose is to be a backup server. It has a
pool for holding backups from other machines, using zfs send|recv.
Call the pool dpool. Inside there are datasets for hostname/poolname,
for each of the received pools. All hosts have an rpool, some have
other pools as well. So
On Wed, Feb 17, 2010 at 12:31:27AM -0500, Ethan wrote:
And I just realized - yes, labels 2 and 3 are in the wrong place relative to
the end of the drive; I did not take into account the overhead taken up by
truecrypt when dd'ing the data. The raw drive is 1500301910016 bytes; the
truecrypt
On Wed, Feb 17, 2010 at 03:37:59PM -0500, Ethan wrote:
On Wed, Feb 17, 2010 at 15:22, Daniel Carosone d...@geek.com.au wrote:
I have not yet successfully imported. I can see two ways of making progress
forward. One is forcing zpool to attempt to import using slice 2 for each
disk rather than
On Wed, Feb 17, 2010 at 04:48:23PM -0500, Ethan wrote:
It looks like using p0 is exactly what I want, actually. Are s2 and p0 both
the entire disk?
No. s2 depends on there being a solaris partition table (Sun or EFI),
and if there's also an fdisk partition table (disk shared with other
OS), s2
On Wed, Feb 17, 2010 at 05:28:03PM -0500, Dennis Clarke wrote:
Good theory, however, this disk is fully external with its own power.
It can still be commanded to offline state.
--
Dan.
On Wed, Feb 17, 2010 at 04:44:19PM -0500, Ethan wrote:
There was no partitioning on the truecrypt disks. The truecrypt volumes
occupied the whole raw disks (1500301910016 bytes each). The devices that I
gave to the zpool on linux were the whole raw devices that truecrypt exposed
(1500301647872
On Wed, Feb 17, 2010 at 06:15:25PM -0500, Ethan wrote:
Success!
Awesome. Let that scrub finish before celebrating completely, but
this looks like a good place to stop and consider what you want for an
end state.
--
Dan.
On Wed, Feb 17, 2010 at 02:38:04PM -0500, Miles Nordin wrote:
copies=2 has proven to be mostly useless in practice.
I disagree. Perhaps my cases fit under the weasel-word mostly, but
single-disk laptops are a pretty common use-case.
If there were a real-world device that tended to randomly
On Mon, Feb 15, 2010 at 09:11:02PM -0600, Tracey Bernath wrote:
On Mon, Feb 15, 2010 at 5:51 PM, Daniel Carosone d...@geek.com.au wrote:
Just be clear: mirror ZIL by all means, but don't mirror l2arc, just
add more devices and let them load-balance. This is especially true
if you're
On Tue, Feb 16, 2010 at 06:20:05PM +0100, Juergen Nickelsen wrote:
Tony MacDoodle tpsdoo...@gmail.com writes:
Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not
empty
(6/6)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
failed: exit
On Tue, Feb 16, 2010 at 02:53:18PM -0800, Christo Kutrovsky wrote:
looking to answer myself the following question:
Do I need to rollback all my NTFS volumes on iSCSI to the last available
snapshot every time there's a power failure involving the ZFS storage server
with a disabled ZIL.
No,
On Tue, Feb 16, 2010 at 06:28:05PM -0800, Richard Elling wrote:
The problem is that MTBF measurements are only one part of the picture.
Murphy's Law says something will go wrong, so also plan on backups.
+n
Imagine this scenario:
You lost 2 disks, and unfortunately you lost the 2 sides of
On Tue, Feb 16, 2010 at 10:06:13PM -0500, Ethan wrote:
This is the current state of my pool:
et...@save:~# zpool import
pool: q
id: 5055543090570728034
state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or
On Wed, Feb 17, 2010 at 02:30:28PM +1100, Daniel Carosone wrote:
c9t4d0s8 UNAVAIL corrupted data
c9t5d0s2 ONLINE
c9t2d0s8 UNAVAIL corrupted data
c9t1d0s8 UNAVAIL corrupted data
c9t0d0s8 UNAVAIL corrupted data
- zdb
On Tue, Feb 16, 2010 at 04:47:11PM -0800, Christo Kutrovsky wrote:
One of the ideas that sparkled is have a max devices property for
each data set, and limit how many mirrored devices a given data set
can be spread on. I mean if you don't need the performance, you can
limit (minimize) the
On Tue, Feb 16, 2010 at 11:39:39PM -0500, Ethan wrote:
If slice 2 is the whole disk, why is zpool trying to use slice 8 for all
but one disk?
Because it's finding at least part of the labels for the pool member there.
Please check the partition tables of all the disks, and use zdb -l on
the
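Concretely, something along these lines, using one of the device names from the
import listing above:

  # prtvtoc /dev/rdsk/c9t1d0s2     # slice layout (or use format -> partition -> print)
  # zdb -l /dev/dsk/c9t1d0s8       # dump whatever ZFS labels are visible on slice 8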
On Tue, Feb 16, 2010 at 10:33:26PM -0600, David Dyer-Bennet wrote:
Here's what I've started: I've created a mirrored pool called rp2 on
the new disks, and I'm zfs send -R a current snapshot over to the new
disks. In fact it just finished. I've got an altroot set, and
obviously I gave
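For reference, that sort of copy usually looks roughly like this (names guessed
to match the description; making the new pool bootable additionally needs the
bootfs property set and installgrub run, which is outside this sketch):

  # zpool create -R /rp2 rp2 mirror c2t0d0s0 c2t1d0s0
  # zfs snapshot -r rpool@migrate
  # zfs send -R rpool@migrate | zfs recv -F -d rp2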
On Mon, Feb 15, 2010 at 01:45:57PM +0100, Bogdan Ćulibrk wrote:
One more thing regarding SSDs: would it be useful to throw in an additional
SAS/SATA drive to serve as L2ARC? I know an SSD is the most logical
thing to use as L2ARC, but will a conventional drive be of *any* help as
L2ARC?
Only in
On Sun, Feb 14, 2010 at 11:08:52PM -0600, Tracey Bernath wrote:
Now, to add the second SSD ZIL/L2ARC for a mirror.
Just be clear: mirror ZIL by all means, but don't mirror l2arc, just
add more devices and let them load-balance. This is especially true
if you're sharing ssd writes with ZIL, as
On Fri, Feb 12, 2010 at 09:50:32AM -0500, Mark J Musante wrote:
The other option is to zfs send the snapshot to create a copy
instead of a clone.
One day, in the future, I hope there might be a third option, somewhat
as an optimisation.
With dedup and bp-rewrite, a new operation could be
On Fri, Feb 12, 2010 at 11:26:33AM -0800, Richard Elling wrote:
Mathing around a bit, for a 300 GB L2ARC (apologies for the tab separation):
size (GB)          300
size (sectors)     585937500
labels (sectors)   9232
available
On Thu, Feb 11, 2010 at 02:50:06PM -0500, Tony MacDoodle wrote:
I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the
mirror and plunk it in another system?
Yes. If you have a recent opensolaris, there is zpool split
specifically to help this use case.
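For example, with hypothetical pool names:

  # zpool split tank tank2      # peel one side off each mirror into a new, exported pool
  # zpool import tank2          # on this box, or on the other system after moving the disks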
Otherwise, you can
On Thu, Feb 11, 2010 at 10:55:20PM -0500, Tony MacDoodle wrote:
I am getting the following message when I try and remove a snapshot from a
clone:
bash-3.00# zfs destroy data/webser...@sys_unconfigd
cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones
use '-R' to
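The -R form, sketched here with a placeholder name since the real snapshot name
is elided above, takes the dependent clones down along with the snapshot, so be
sure that is what you want:

  # zfs destroy -R data/webserver@sys_unconfigd

If a clone's contents need to survive, zfs promote on that clone re-parents the
snapshot onto it; the original filesystem, rather than the snapshot, then
becomes the dependent.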
Until zfs-crypto arrives, I am using a pool for sensitive data inside
several files encrypted via lofi crypto. The data is also valuable,
of course, so the pool is mirrored, with one file on each of several
pools (laptop rpool, and a couple of usb devices, not always
connected).
These backing
On Wed, Feb 10, 2010 at 12:37:46PM -0500, rwali...@washdcmail.com wrote:
I don't disagree with any of the facts you list, but I don't think the
alternatives are fully described by Sun vs. much cheaper retail parts.
We face exactly this same decision with buying RAM for our servers
(maybe
On Wed, Feb 10, 2010 at 05:36:10PM -0600, David Dyer-Bennet wrote:
That's all about *ME* picking the suitable base snapshot, as I understand
it.
Correct.
I understood the recent reference to be suggesting that I didn't have
to, that zfs would figure it out for me. Which still appears to me
On Wed, Feb 10, 2010 at 10:48:57PM -0600, David Dyer-Bennet wrote:
But I see how it could indeed be useful in
theory to send just a *little* extra if you weren't sure quite what was
needed but could guess pretty closely.
I think it's mostly for the benefit of retrying the same command, if
On Tue, Feb 09, 2010 at 08:26:42AM -0800, Richard Elling wrote:
zdb -D poolname will provide details on the DDT size. FWIW, I have a
pool with 52M DDT entries and the DDT is around 26GB.
I wish -D was documented; I had forgotten about it and only found the
(expensive) -S variant, which
On Mon, Feb 01, 2010 at 12:22:55PM -0800, Lutz Schumann wrote:
Created a pool on head1 containing just the cache
device (c0t0d0).
This is not possible, unless there is a bug. You
cannot create a pool
with only a cache device. I have verified this on
b131:
# zpool create
On Mon, Feb 08, 2010 at 11:24:56AM -0800, Lutz Schumann wrote:
Only with the zdb(1M) tool but note that the
checksums are NOT of files
but of the ZFS blocks.
Thanks - blocks, right (doh) - that's what I was missing. Damn, it would be so
nice :(
If you're comparing the current data to a
On Mon, Feb 08, 2010 at 11:28:11PM +0100, Lasse Osterild wrote:
OK, thanks. I know that the amount of used space will vary, but what's
the usefulness of the total size when, e.g. in my pool above, 4 x 1G
(roughly, depending on recordsize) are reserved for parity? It's not
like it's usable for