From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave U.Random
If I am going to make a new install of Solaris 10, does it give me the option to slice and dice my disks and to issue zpool commands?
No way that I know of, to install Solaris
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave U.Random
My personal preference, assuming 4 disks, since the OS is mostly reads and only a little bit of writes, is to create a 4-way mirrored 100G partition for the OS, and the
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Sunday, June 19, 2011 11:03 AM
I was planning, in the near future, to go run iozone on some system with and without the disk cache enabled according to format -e. If my hypothesis is right, it shouldn't significantly affect
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Saturday, June 18, 2011 7:47 PM
Actually, all of the data I've gathered recently shows that the number of
IOPS does not significantly increase for HDDs running random workloads.
However the response time does :-(
Could you
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marty Scholes
On a busy array it is hard even to use the LEDs as indicators.
Offline the disk. Light stays off.
Use dd to read the disk. Light stays on.
That should make it easy enough.
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 16, 2011 10:27 PM
Is it still the case, as it once was, that allocating anything other
than whole disks as vdevs forces NCQ / write cache off on the drive
(either or both, forget which, guess write cache)?
I will only
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
or is it completely random, leaving me with some trial and error to work out what disk is on what port?
It's highly desirable to have drives with lights on them. So you can
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 16, 2011 11:05 PM
the [sata] channel is idle, blocked on command completion, while
the heads seek.
I'm interested in proving this point. Because I believe it's false.
Just hand waving for the moment ... Presenting the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
can you have one vdev that is a duplicate of another vdev? By that I mean, say you had 2x 7-disk raid-z2 vdevs: instead of them both being used in one large pool, could you have
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nomen Nescio
Has there been any change to the server hardware with respect to number of drives since ZFS has come out? Many of the servers around still have an even number of drives (2, 4)
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 16, 2011 10:27 PM
Is it still the case, as it once was, that allocating anything other
than whole disks as vdevs forces NCQ / write cache off on the drive
(either or both, forget which, guess write cache)?
I will only
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
In that case what 'option' would you choose - smaller raid-z vdevs or larger raid-z2 vdevs?
The more redundant disks you have, the more protection you get, and the
smaller
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
That would suck worse.
Don't mind Richard. He is of the mind that ZFS is perfect for everything
just the way it is, and anybody who wants anything different should adjust
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
The ZFS install will be mirrored, but I am not sure how to configure the 15 data disks from a performance (inc. resilvering) vs protection vs usable space perspective;
3x 5
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Rasmus Fauske
I want to replace some slow consumer drives with new edc re4 ones but
when I do a replace it needs to scan the full pool and not only that
disk set (or just the old drive)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Simon Walter
I'm looking to create a NAS with versioning for non-technical users
(Windows and Mac). I want the users to be able to simply save a file,
and a revision/snapshot is created. I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
A college friend of mine is using Debian Linux on his desktop,
and wondered if he could tap into ZFS goodness without adding
another server in his small quiet apartment or
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
See FEC suggestion from another poster ;)
Well, of course, all storage media have built-in hardware FEC. At least disk and tape, for sure. But naturally you can't always trust it
From: David Magda [mailto:dma...@ee.ryerson.ca]
Sent: Saturday, June 11, 2011 9:04 AM
If one is saving streams to a disk, it may be worth creating parity files for them (especially if the destination file system is not ZFS):
Parity is just a really simple form of error detection. It's not
From: David Magda [mailto:dma...@ee.ryerson.ca]
Sent: Saturday, June 11, 2011 9:38 AM
These parity files use a forward error correction-style system that can be
used to perform data verification, and allow recovery when data is lost or
corrupted.
http://en.wikipedia.org/wiki/Parchive
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Besides, the format
is not public and subject to change, I think. So future compatibility
is not guaranteed.
That is not correct.
Years ago, there was a comment in the man
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jonathan Walker
New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a file using zfs send -R
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Thomas Hobbes
I am testing Solaris Express 11 with napp-it on two machines. In both cases the same problem: enabling encryption on a folder and filling it with data will result in errors
Based on observed behavior measuring performance of dedup, I would say some chunk of data and its associated metadata seem to have approximately the same
warmness in the cache. So when the data gets evicted, the associated
metadata tends to be evicted too. So whenever you have a cache miss,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
Here's how you calculate (average) how long a random IOP takes:
seek time + ((60 / RPM) / 2)
1 random IOP takes [8.5ms + 4.13ms] = 12.6ms, which translates to 78 IOPS
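A quick way to sanity-check that arithmetic is a few lines of Python; the 8.5 ms seek time and 7200 RPM below are assumed example values rather than figures taken from the post.

# Back-of-envelope check of the random-IOPS formula quoted above.
def random_iop_time(seek_ms, rpm):
    rotational_ms = (60.0 / rpm) / 2 * 1000   # average rotational latency: half a revolution
    return seek_ms + rotational_ms

svc = random_iop_time(seek_ms=8.5, rpm=7200)   # assumed example drive
print(f"{svc:.1f} ms per random IOP -> {1000.0 / svc:.0f} IOPS")
# ~12.7 ms -> ~79 IOPS, in the same ballpark as the 78 IOPS quoted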
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 02, 2011 9:03 PM
Separately, with only 4G of RAM, I think an L2ARC is likely about a wash, since L2ARC entries also consume RAM.
True, the L2ARC requires some ARC consumption to support it, but for typical user data, it's a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Anonymous
Hi. I have a development system on Intel commodity hardware with a 500G ZFS root mirror. I have another 500G drive, same as the other two. Is there any way to use this disk to good
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
Theoretically, you'll get a 50% read increase, but I doubt it'll be that high in practice.
In my benchmarking, I found a 2-way mirror reads 1.97x the speed of a single
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
So here's what I'm going to do. With arc_meta_limit at 7680M, of which 100M was consumed naturally, that leaves me 7580 to play with. Call it 7500M.
Divide by 412 bytes
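That arithmetic is easy to reproduce; a minimal sketch, assuming (as the post does) roughly 412 bytes of ARC metadata per DDT entry:

# Rough head-room estimate for DDT entries, following the excerpt above.
arc_meta_budget = 7500 * 1024 * 1024   # "call it 7500M"
bytes_per_ddt_entry = 412              # figure used in the post; varies by release
print(f"{arc_meta_budget // bytes_per_ddt_entry:,} DDT entries fit")   # ~19 million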
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
(1) I'll push the recordsize back up to 128k, and then repeat this test with something slightly smaller than 128k. Say, 120k.
Good news. :-) Changing the recordsize made
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
4 mirrors of 2 = sustained bandwidth of 4 disks
raidz2 with 8 disks = sustained bandwidth of 6 disks
Correction:
4 mirrors of 2 = sustained read bandwidth of 8 disks,
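The correction above is just disk counting; here is a minimal sketch of that reasoning, assuming every spindle streams at the same rate and ignoring controller and bus limits.

# Idealized sustained-bandwidth comparison, in units of one disk's streaming rate.
def mirror_pool(vdevs, way):
    return vdevs * way, vdevs      # reads use every copy; writes hit all copies of one vdev

def raidz2_pool(disks):
    return disks - 2, disks - 2    # roughly the data disks, for both reads and writes

print("4x 2-way mirrors:", mirror_pool(4, 2))   # (8, 4) -> read, write
print("8-disk raidz2:   ", raidz2_pool(8))      # (6, 6)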
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eugen Leitl
How bad would raidz2 do on mostly sequential writes and reads
(Athlon64 single-core, 4 GByte RAM, FreeBSD 8.2)?
The best way to go is striping mirrored pools, right?
As far
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Brian
I have a raidz2 pool with one disk that seems to be going bad; several errors are noted in iostat. I have an RMA for the drive, however - now I am wondering how I proceed. I need to
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, May 26, 2011 8:19 PM
Once your data is dedup'ed, by whatever means, access to it is the
same. You need enough memory+l2arc to indirect references via
DDT.
I don't think this is true. The reason you need arc+l2arc to store
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
On 26-05-11 13:38, Edward Ned Harvey wrote:
Perhaps a property could be
set, which would store the DDT exclusively on that device.
Oh yes please, let me put my DDT
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Wednesday, May 25, 2011 10:10 PM
These are additional
iops that dedup creates, not ones that it substitutes for others in
roughly equal number.
Hey ZFS developers - Of course there are many ways to possibly address these
issues.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Daniel Carosone
On Wed, May 25, 2011 at 10:59:19PM +0200, Roy Sigurd Karlsbakk wrote:
The systems where we have had issues, are two 100TB boxes, with some
160TB raw storage each, so
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Both the necessity to read and write the primary storage pool... That's very hurtful.
Actually, I'm seeing two different modes of degradation:
(1) Previously described
Hey, I got another question for ZFS developers -
Given: If you enable dedup and write a bunch of data, and then disable
dedup, the formerly written data will remain dedup'd.
Given: The zdb -S command, which simulates dedup to provide dedup statistics without actually enabling dedup.
I've finally returned to this dedup testing project, trying to get a handle
on why performance is so terrible. At the moment I'm re-running tests and
monitoring memory_throttle_count, to see if maybe that's what's causing the
limit. But while that's in progress and I'm still thinking...
I
From: Matthew Ahrens [mailto:mahr...@delphix.com]
Sent: Wednesday, May 25, 2011 6:50 PM
The DDT is a ZAP object, so it is an on-disk hashtable, free of O(log(n))
rebalancing operations. It is written asynchronously, from syncing
context. That said, for each block written (unique or not),
When I search around, I see that nexenta has ndmp, and solaris 10 does not,
and there was at least some talk about supporting ndmp in opensolaris ...
So ...
Is ndmp present in solaris 11 express? Is it an installable 3rd party
package? How would you go about supporting ndmp if you wanted to?
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nico Williams
When you have two filesystems with similar contents, and the history
of each is useless in deciding how to do a bi-directional
synchronization, then you need a way to diff
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eugen Leitl
This would enable applications—without needing any further
in-filesystem code—to perform a Merkle Tree sync, which would range
from noticeably more efficient to dramatically more
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
New problem:
I'm following all the advice I summarized into the OP of this thread, and
testing on a test system. (A laptop). And it's just not working. I am
jumping
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
1) The process is rather slow (I think due to dedup involved -
even though, by my calculations, the whole DDT can fit in
my 8Gb RAM),
Please see:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Evaldas Auryla
Is there an easy way to map these sas-addresses to the physical disks in the enclosure?
Of course in the ideal world, when a disk needs to be pulled, hardware would
know about
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
New problem:
I'm following all the advice I summarized into the OP of this thread, and
testing on a test system. (A laptop). And it's just not working. I am
jumping
From: Richard Elling [mailto:richard.ell...@gmail.com]
In one of my systems, I have 1TB mirrors, 70% full, which can be sequentially completely read/written in 2 hrs. But the resilver took 12 hours of idle time. Supposing you had a 70% full pool of raidz3, 2TB disks, using 10 disks +
From: Sandon Van Ness [mailto:san...@van-ness.com]
ZFS resilver can take a very long time depending on your usage pattern.
I do disagree with some things he said though... like a 1TB drive being
able to be read/written in 2 hours? I seriously doubt this. Just reading
1 TB in 2 hours means
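Whether that claim is plausible comes down to a one-line division; this assumes a decimal terabyte.

# Sustained rate implied by reading 1 TB in 2 hours.
rate = 1e12 / (2 * 3600) / 1e6
print(f"{rate:.0f} MB/s")   # ~139 MB/s, which is why the figure drew skepticism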
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
All drives have a very high DOA rate according to Newegg. The
way they package drives for shipping is exactly how Seagate
specifically says NOT to pack them here
8 months
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
On one hand, I've read that as current drives get larger (while their random IOPS/MBPS don't grow nearly as fast with new generations), it is becoming more and more reasonable to
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Donald Stahl
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Wait
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Jeremy
Finally, the send/recv protocol is not guaranteed to be compatible
between ZFS versions.
Years ago, there was a comment in the man page that said this. Here it is:
The format
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Naveen surisetty
I have a zfs stream backup taken on zfs version 15. Currently I have upgraded my OS, so the new zfs version is 22. The restore process went well from the old stream backup to the new zfs
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Naveen surisetty
I have a zfs stream backup taken on zfs version 15. Currently I have upgraded my OS, so the new zfs version is 22. The restore process went well from the old stream backup to the new zfs
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Arjun YK
Trying to understand how to back up a mirrored zfs boot pool 'rpool' to tape, and restore it in case the disks are lost.
Backup would be done with an enterprise tool like tsm,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
So now I'll change meta_max and
see if it helps...
Oh, know what? Nevermind.
I just looked at the source, and it seems arc_meta_max is just a gauge for
you to use, so you
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
in my previous
post my arc_meta_used was bigger than my arc_meta_limit (by about 50%)
I have the same thing. But as I sit here and run more and more extensive
tests on it
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
BTW, here's how to tune it:
echo arc_meta_limit/Z 0x3000 | sudo mdb -kw
echo ::arc | sudo mdb -k | grep meta_limit
arc_meta_limit= 768 MB
Well
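For what it's worth, the value written with /Z is just the desired limit in bytes, expressed as a hex literal; a tiny sketch of the conversion, using the 768 MB size echoed back in the excerpt.

# Convert a target arc_meta_limit (in MB) to the hex value mdb -kw expects.
target_mb = 768
print(hex(target_mb * 1024 * 1024))   # 0x30000000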
From: Erik Trimble [mailto:erik.trim...@oracle.com]
(1) I'm assuming you run your script repeatedly in the same pool,
without deleting the pool. If that is the case, that means that a run of
X+1 should dedup completely with the run of X. E.g. a run with 12
blocks will dedup the first
From: Garrett D'Amore [mailto:garr...@nexenta.com]
Just another data point. The ddt is considered metadata, and by default the arc will not allow more than 1/4 of it to be used for metadata. Are you still sure it fits?
That's interesting. Is it tunable? That could certainly start to
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
That could certainly start to explain why my
arc size arcstats:c never grew to any size I thought seemed reasonable...
Also now that I'm looking closer at arcstats
From: Garrett D'Amore [mailto:garr...@nexenta.com]
It is tunable, I don't remember the exact tunable name...
Arc_metadata_limit
or some such.
There it is:
echo ::arc | sudo mdb -k | grep meta_limit
arc_meta_limit= 286 MB
Looking at my chart earlier in this discussion, it
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
But I'll go tune and test with this knowledge, just to be sure.
BTW, here's how to tune it:
echo arc_meta_limit/Z 0x3000 | sudo mdb -kw
echo ::arc | sudo mdb -k | grep
New problem:
I'm following all the advice I summarized into the OP of this thread, and
testing on a test system. (A laptop). And it's just not working. I am
jumping into the dedup performance abyss far, far earlier than predicted...
My test system is a laptop with 1.5G ram, c_min = 150M,
See below. Right around 400,000 blocks, dedup is suddenly an order of
magnitude slower than without dedup.
40   10.7sec   136.7sec   143 MB   195 MB
80   21.0sec   465.6sec   287 MB   391 MB
The interesting thing is - In
From: Richard Elling [mailto:richard.ell...@gmail.com]
--- To calculate size of DDT ---
zdb -S poolname
Look at total blocks allocated. It is rounded, and uses a suffix like K,
M, G but it's in decimal (powers of 10) notation, so you have to remember
that... So I
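As a hedged illustration of what to do with that block count: multiply it by a per-entry size. The 376-byte figure below comes from the ::sizeof ddt_entry_t discussion later in this summary and should be treated as an assumption, since it varies by release.

# Rough DDT size estimate from the zdb -S "total blocks allocated" figure.
def ddt_bytes(total_blocks, bytes_per_entry=376):
    return total_blocks * bytes_per_entry

# Example: a reported "1.5M" means 1.5 million blocks (decimal suffixes).
print(f"{ddt_bytes(1.5e6) / 2**30:.2f} GiB")   # ~0.53 GiB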
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
zdb -DD poolname
This just gives you the -S output, and the -D output all in one go. So I
Sorry, zdb -DD only works for pools that are already dedup'd.
If you want
From: Garrett D'Amore [mailto:garr...@nexenta.com]
We have customers using dedup with lots of vm images... in one extreme
case they are getting dedup ratios of over 200:1!
I assume you're talking about a situation where there is an initial VM image,
and then to clone the machine, the
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Using the standard c_max value of 80%, remember that this is 80% of the
TOTAL system RAM, including that RAM normally dedicated to other
purposes. So long as the total amount of RAM you expect to dedicate to
ARC usage (for all ZFS uses,
From: Karl Wagner [mailto:k...@mouse-hole.com]
so there's an ARC entry referencing each individual DDT entry in the L2ARC?!
I had made the assumption that DDT entries would be grouped into at least minimum block sized groups (8k?), which would have led to a much more reasonable ARC
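To see why that reaction makes sense, here is an illustrative comparison; both byte figures are assumptions for the sake of the example, not numbers confirmed in this thread.

# ARC bookkeeping cost if every DDT entry in L2ARC needs its own ARC reference,
# versus entries packed into 8 KiB blocks. All sizes are assumed/illustrative.
ddt_entries = 20_000_000
arc_ref_bytes = 176          # assumed per-L2ARC-buffer ARC overhead
entry_bytes = 376            # ddt_entry_t size discussed later in this summary

per_entry = ddt_entries * arc_ref_bytes / 2**30
per_8k = (ddt_entries * entry_bytes / 8192) * arc_ref_bytes / 2**30
print(f"one ARC ref per entry:    {per_entry:.1f} GiB")    # ~3.3 GiB
print(f"one ARC ref per 8K block: {per_8k:.2f} GiB")       # ~0.15 GiB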
From: Brandon High [mailto:bh...@freaks.com]
On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Generally speaking, dedup doesn't work on VM images. (Same is true for ZFS or netapp or anything else.) Because the VM images are all
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
If you have to use the 4k recordsize, it is likely to consume 32x more
memory than the default 128k recordsize of ZFS. At this rate, it becomes
increasingly difficult
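The 32x figure is just the ratio of the two recordsizes; a small check with an assumed dataset size and per-entry cost.

# Why a 4k recordsize inflates dedup memory ~32x versus 128k: the same data
# splits into 32x as many blocks, hence 32x as many DDT entries.
dataset_bytes = 2**40        # 1 TiB of unique data (example)
per_entry = 376              # assumed bytes of DDT metadata per entry
for rs in (128 * 1024, 4 * 1024):
    blocks = dataset_bytes // rs
    print(f"recordsize {rs // 1024:>3}k: {blocks:>12,} blocks, {blocks * per_entry / 2**30:5.1f} GiB of DDT")
# 128k ->  8,388,608 blocks, ~2.9 GiB;  4k -> 268,435,456 blocks, ~94.0 GiB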
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of TianHong Zhao
There seems to be a few threads about zpool hang, do we have a
workaround to resolve the hang issue without rebooting ?
In my case, I have a pool with disks from external
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
another dedup question. I just installed an ssd disk as l2arc. This
is a backup server with 6 GB RAM (ie I don't often read the same data
again), basically it has a large
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Rich Teer
Also related to this is a performance question. My initial test involved copying a 50 MB zfs file system to a new disk, which took 2.5 minutes to complete. That strikes me as
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Rich Teer
Not such a silly question. :-) The USB1 port was indeed the source of
much of the bottleneck. The same 50 MB file system took only 8 seconds
to copy when I plugged the drive
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Friday, April 29, 2011 12:49 AM
The lower bound of ARC size is c_min
# kstat -p zfs::arcstats:c_min
I see there is another character in the plot: c_max
c_max seems to be 80% of system ram (at least on my systems).
I assume
This is a summary of a much longer discussion, "Dedup and L2ARC memory requirements (again)".
Sorry, even this summary is long. But the results vary enormously based on
individual usage, so any rule of thumb metric that has been bouncing
around on the internet is simply not sufficient. You need to go
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
ZFS's problem is that it needs ALL the resources for EACH pool ALL the time, and can't really share them well if it expects to keep performance from tanking... (no pun intended)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ray Van Dolson
Are any of you out there using dedupe ZFS file systems to store VMware
VMDK (or any VM tech. really)? Curious what recordsize you use and
what your hardware specs /
From: Tim Cook [mailto:t...@cook.ms]
ZFS's problem is that it needs ALL the resources for EACH pool ALL the time, and can't really share them well if it expects to keep performance from tanking... (no pun intended)
That's true, but on the flipside, if you don't have adequate resources
From: Tim Cook [mailto:t...@cook.ms]
That's patently false. VM images are the absolute best use-case for dedup
outside of backup workloads. I'm not sure who told you/where you got the
idea that VM images are not ripe for dedup, but it's wrong.
Well, I got that idea from this list. I said
From: Richard Elling [mailto:richard.ell...@gmail.com]
Worse yet, your arc consumption could be so large that PROCESSES don't fit in ram anymore. In this case, your processes get pushed out to swap space, which is really bad.
This will not happen. The ARC will be asked to shrink
From: Edward Ned Harvey
I saved the core and ran again. This time it spewed leaked space messages for an hour, and completed. But the final result was physically impossible (it counted up 744k total blocks, which means something like 3Megs per block in my 2.39T used pool. I checked
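The "physically impossible" conclusion is simple division: the implied average block size is far above the 128 KB maximum recordsize.

# Average block size implied by 744k blocks in a 2.39T pool.
avg = 2.39e12 / 744_000
print(f"{avg / 1e6:.1f} MB per block")   # ~3.2 MB, versus a 128 KB maximum recordsize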
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
What does it mean / what should you do, if you run that command, and it
starts spewing messages like this?
leaked space: vdev 0, offset 0x3bd8096e00, size 7168
And one
From: Neil Perrin [mailto:neil.per...@oracle.com]
The size of these structures will vary according to the release you're
running.
You can always find out the size for a particular system using ::sizeof
within
mdb. For example, as super user :
: xvm-4200m2-02 ; echo ::sizeof ddt_entry_t |
From: Erik Trimble [mailto:erik.trim...@oracle.com]
OK, I just re-looked at a couple of things, and here's what I /think/ is
the correct numbers.
I just checked, and the current size of this structure is 0x178, or 376
bytes.
Each ARC entry, which points to either an L2ARC item (of any
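The hex-to-decimal step there is trivial but worth writing down once.

# 0x178 bytes is the structure size reported above.
print(0x178)   # 376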
From: Brandon High [mailto:bh...@freaks.com]
Sent: Thursday, April 28, 2011 5:33 PM
On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Correct me if I'm wrong, but the dedup sha256 checksum happens in addition to (not instead
From: Tomas Ögren [mailto:st...@acc.umu.se]
zdb -bb pool
Oy - this is scary - Thank you by the way for that command - I've been
gathering statistics across a handful of systems now ...
What does it mean / what should you do, if you run that command, and it
starts spewing messages like this?
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lamp Zy
One of my drives failed in Raidz2 with two hot spares:
What zpool zfs version are you using? What OS version?
Are all the drives precisely the same size (Same make/model number?)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
(BTW, is there any way to get a measurement of number of blocks consumed
per zpool? Per vdev? Per zfs filesystem?) *snip*.
you need to use zdb to see what the current
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin
No, that's not true. The DDT is just like any other ZFS metadata and can be split over the ARC, cache device (L2ARC) and the main pool devices. An infrequently referenced DDT
There are a lot of conflicting references on the Internet, so I'd really
like to solicit actual experts (ZFS developers or people who have physical
evidence) to weigh in on this...
After searching around, the reference I found to be the most seemingly
useful was Erik's post here:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Rossing
So i figured out after a couple of scrubs and fmadm faulty that drive
c9t15d0 was bad.
My pool now looks like this:
NAME STATE READ WRITE CKSUM
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Julian King
Actually I think our figures more or less agree. 12 disks = 7 mbits
48 disks = 4x7mbits
I know that sounds like terrible performance to me. Any time I benchmark
disks, a cheap
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
I have a zpool with one dataset and a handful of snapshots. I cannot delete two of the snapshots. The message I get is "dataset is busy". Neither fuser nor lsof show anything
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nomen Nescio
Hi ladies and gents, I've got a new Solaris 10 development box with ZFS
mirror root using 500G drives. I've got several extra 320G drives and I'm
wondering if there's any way I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Giovanni Tirloni
We have production servers with 9 vdevs (mirrored) doing `zfs send` daily to backup servers with 7 vdevs (each 3-disk raidz1). Some backup servers that receive datasets