On Jun 6, 2012, at 8:01 AM, Sašo Kiselkov wrote:
On 06/06/2012 04:55 PM, Richard Elling wrote:
On Jun 6, 2012, at 12:48 AM, Sašo Kiselkov wrote:
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
On Jun 6, 2012, at 8:22 AM, Sašo Kiselkov wrote:
On 06/06/2012 05:01 PM, Sašo Kiselkov wrote:
I'll try and load the machine with dd(1) to the max to see if access
patterns of my software have something to do with it.
Tried and tested, any and all write I/O to the pool causes this xcall
On May 31, 2012, at 9:45 AM, Antonio S. Cofiño wrote:
Markus,
After Jim's answer I have started to read about the well-known issue.
Is it just mpt causing the errors or also mpt_sas?
Both drivers are causing the reset storm (See my answer to Jim's e-mail).
No. Resets are corrective
On May 30, 2012, at 9:25 AM, Antonio S. Cofiño wrote:
Dear All,
It may be this not the correct mailing list, but I'm having a ZFS issue when
a disk is failing.
The system is a supermicro motherboard X8DTH-6F in a 4U chassis
(SC847E1-R1400LPB) and an external SAS2 JBOD
On May 30, 2012, at 1:07 PM, Sašo Kiselkov wrote:
On 05/25/2012 08:40 PM, Richard Elling wrote:
See the solution at https://www.illumos.org/issues/644
-- richard
And predictably, I'm back with another n00b question regarding this
array. I've put a pair of LSI-9200-8e controllers
On May 29, 2012, at 8:12 AM, Cindy Swearingen wrote:
Hi--
You don't say what release this is, but I think that seeing the checksum
error accumulation on the spare was a zpool status formatting bug that
I have seen myself. This is fixed in a later Solaris release.
Once again, Cindy beats me
On May 29, 2012, at 6:10 AM, Jim Klimov wrote:
Also note that ZFS IO often is random even for reads, since you
have to read metadata and file data often from different dispersed
locations.
This is true for almost all other file systems, too. For example, in UFS,
metadata is stored in fixed
On May 28, 2012, at 12:46 PM, Lionel Cons wrote:
On Mon, May 28, 2012 at 9:06 PM, Iwan Aucamp aucam...@gmail.com wrote:
I'm getting sub-optimal performance with an mmap based database (mongodb)
which is running on zfs of Solaris 10u9.
System is Sun-Fire X4270-M2 with 2xX5680 and 72GB (6 *
Hi Dhiraj,
On May 27, 2012, at 11:28 PM, Dhiraj Bhandare wrote:
Hi All
I would like to create a sample application for ZFS using C++/C and libzfs.
I am very new to ZFS, and I would like to have some information about the ZFS API.
Even some sample code will be useful.
Looking for help and
question below...
On May 28, 2012, at 1:25 PM, Iwan Aucamp wrote:
On 05/28/2012 10:12 PM, Andrew Gabriel wrote:
On 05/28/12 20:06, Iwan Aucamp wrote:
I'm thinking of doing the following:
- relocating mmaped (mongo) data to a zfs filesystem with only
metadata cache
- reducing zfs arc
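A minimal sketch of the relocation idea above, assuming a hypothetical dataset name tank/mongo (the real pool and dataset names will differ): primarycache=metadata keeps only ZFS metadata in the ARC, so the mmapped file data is not cached twice by the page cache and the ARC.
# zfs create -o primarycache=metadata tank/mongo
# zfs get primarycache tank/mongo        (verify the setting took effect)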
On May 28, 2012, at 5:48 AM, Nathan Kroenert wrote:
Hi folks,
Looking to get some larger drives for one of my boxes. It runs exclusively
ZFS and has been using Seagate 2TB units up until now (which are 512 byte
sector).
Anyone offer up suggestions of either 3 or preferably 4TB drives
On May 28, 2012, at 2:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives.
[Apologies to the list, this has expanded past ZFS, if someone complains, we can
move the thread to another illumos dev list]
On May 28, 2012, at 2:18 PM, Lionel Cons wrote:
On 28 May 2012 22:10, Richard Elling richard.ell...@gmail.com wrote:
The only recommendation which will lead to results
On May 28, 2012, at 9:21 PM, Stephan Budach wrote:
Hi all,
just to wrap this issue up: as FMA didn't report any other error than the one
which led to the degradation of the one mirror, I detached the original drive
from the zpool which flagged the mirror vdev as ONLINE (although there
On May 27, 2012, at 12:52 PM, Stephan Budach wrote:
Hi,
today I issued a scrub on one of my zpools and after some time I noticed that
one of the vdevs became degraded due to some drive having cksum errors. The
spare kicked in and the drive got resilvered, but why does the spare drive
See the solution at https://www.illumos.org/issues/644
-- richard
On May 25, 2012, at 10:07 AM, Sašo Kiselkov wrote:
I'm currently trying to get a SuperMicro JBOD with dual SAS expander
chips running in MPxIO, but I'm a total amateur to this and would like
to ask about how to detect whether
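The question is cut off above, but the usual way to check whether MPxIO has taken over a device on illumos/Solaris is along these lines (a sketch, not a complete procedure):
# mpathadm list lu        (multipathed logical units and their operational path counts)
# stmsboot -L             (mapping from non-STMS to STMS device names)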
On May 25, 2012, at 1:53 PM, zfs user wrote:
On 5/23/12 11:28 PM, Richard Elling wrote:
The man page is clear on this topic, IMHO
Indeed, even in snv_117 the zpool man page says that. But the
console/dmesg message was also quite clear, so go figure whom
to trust (or fear) more
On May 23, 2012, at 2:56 PM, Jim Klimov wrote:
Thanks again,
2012-05-24 1:01, Richard Elling wrote:
At least the textual error message infers that if a hotspare
were available for the pool, it would kick in and invalidate
the device I am scrubbing to update into the pool after the
DD
, while the exams are our last chance to
learn something at all =)
2012-05-24 10:28, Richard Elling wrote:
You have not made a case for why this hybrid and failure-prone
procedure is required. What problem are you trying to solve?
Bigger-better-faster? ;)
The original proposal in this thread
comments far below...
On May 22, 2012, at 1:42 AM, Jim Klimov wrote:
2012-05-22 7:30, Daniel Carosone wrote:
On Mon, May 21, 2012 at 09:18:03PM -0500, Bob Friesenhahn wrote:
On Mon, 21 May 2012, Jim Klimov wrote:
This is so far a relatively raw idea and I've probably missed
something. Do
On May 17, 2012, at 5:28 AM, Paul Kraus wrote:
On Wed, May 16, 2012 at 3:35 PM, Paynter, Richard
richard.payn...@infocrossing.com wrote:
Does anyone know what the minimum value for zfs_arc_max should be set to?
Does it depend on the amount of memory on the system, and – if so – is there
a
On May 17, 2012, at 12:19 PM, Paynter, Richard wrote:
I have a customer who wants to set zfs_arc_max to 1G for a 16G system, and 2G
for a 32G system. Both of these are SAP and/or Oracle db servers. They
apparently want to maximize the amount of memory available for the
applications.
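For reference, a sketch of how zfs_arc_max is normally capped on Solaris 10, assuming the 1 GB figure above; the value is in bytes, goes in /etc/system, and takes effect after a reboot:
set zfs:zfs_arc_max = 0x40000000    (1 GB; 0x80000000 would be 2 GB)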
On May 17, 2012, at 7:34 PM, Mohamed Magdy wrote:
HGY
-- Forwarded message --
From: Mohamed Magdy mohamedmagd...@gmail.com
Date: Fri, May 18, 2012 at 4:28 AM
Subject: Zpool import FAULTED
To: zfs-discuss@opensolaris.org
Dears,
I need help to get my pool online
On May 16, 2012, at 12:35 PM, Paynter, Richard
richard.payn...@infocrossing.com wrote:
Does anyone know what the minimum value for zfs_arc_max should be set to?
Does it depend on the amount of memory on the system, and – if so – is there
a formula, or percentage, to use to determine what
comments below...
On May 12, 2012, at 8:10 AM, Jim Klimov wrote:
2012-05-12 7:01, Jim Klimov wrote:
Overall the applied question is whether the disk will
make it back into the live pool (ultimately with no
continuous resilvering), and how fast that can be done -
I don't want to risk the big
On May 12, 2012, at 4:52 AM, Jim Klimov wrote:
2012-05-11 14:22, Jim Klimov wrote:
What conditions can cause the reset of the resilvering
process? My lost-and-found disk can't get back into the
pool because of resilvers restarting...
FOLLOW-UP AND NEW QUESTIONS
Here is a new piece of
On May 7, 2012, at 1:53 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
Has someone done real-world measurements which indicate that raidz*
actually provides better sequential read or write
On May 5, 2012, at 8:04 AM, Bob Friesenhahn wrote:
On Fri, 4 May 2012, Erik Trimble wrote:
predictable, and the backing store is still only giving 1 disk's IOPS. The
RAIDZ* may, however, give you significantly more throughput (in MB/s) than a
single disk if you do a lot of sequential read
On May 4, 2012, at 5:25 AM, Roman Matiyenko wrote:
Hi all,
I have a bad bad problem with our brand new server!
The lengthy details are below but to cut the story short, on the same
hardware (3 x LSI 9240-8i, 20 x 3TB 6gb HDDs) I am getting ZFS
sequential writes of 1.4GB/s on Solaris 10
On May 2, 2012, at 2:40 AM, Fred Liu wrote:
Still a fully supported product from Oracle:
http://www.oracle.com/us/products/servers-storage/storage/storage-
software/qfs-software/overview/index.html
Yeah. But it seems there have been no more updates since the Sun acquisition.
Don't know Oracle's roadmap
On Apr 29, 2012, at 7:59 PM, Fred Liu wrote:
On Apr 26, 2012, at 12:27 AM, Fred Liu wrote:
“zfs 'userused@' properties” and “'zfs userspace' command” are good enough
to gather usage statistics.
I think I mix that with NetApp. If my memory is correct, we have to set
quotas to get
more comments...
On May 1, 2012, at 10:41 AM, Ray Van Dolson wrote:
On Tue, May 01, 2012 at 07:18:18AM -0700, Bob Friesenhahn wrote:
On Mon, 30 Apr 2012, Ray Van Dolson wrote:
I'm trying to run some IOzone benchmarking on a new system to get a
feel for baseline performance.
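A sketch of a simple baseline run, assuming iozone is installed and /pool/testfile is a hypothetical path on the pool under test; pick -s well above RAM so the ARC does not mask disk performance:
# iozone -i 0 -i 1 -r 128k -s 64g -f /pool/testfile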
On Apr 26, 2012, at 12:27 AM, Fred Liu wrote:
“zfs 'userused@' properties” and “'zfs userspace' command” are good enough to
gather usage statistics.
I think I mix that with NetApp. If my memory is correct, we have to set
quotas to get usage statistics under DataOnTAP.
Further, if we can
On Apr 25, 2012, at 11:00 PM, Nico Williams wrote:
On Thu, Apr 26, 2012 at 12:10 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
Reboot requirement is a lame client implementation.
And lame protocol design. You could possibly migrate
On Apr 25, 2012, at 8:14 AM, Eric Schrock wrote:
ZFS will always track per-user usage information even in the absence of
quotas. See the the zfs 'userused@' properties and 'zfs userspace' command.
tip: zfs get -H -o value -p userused@username filesystem
Yes, and this is the logical size, not
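A short sketch of querying those properties, assuming a hypothetical dataset tank/home and user alice:
# zfs userspace tank/home                             (per-user summary)
# zfs get -H -o value -p userused@alice tank/home     (exact byte count for one user)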
On Apr 25, 2012, at 5:48 AM, Paul Archer wrote:
This may fall into the realm of a religious war (I hope not!), but recently
several people on this list have said/implied that ZFS was only acceptable
for production use on FreeBSD (or Solaris, of course) rather than Linux with
ZoL.
I'm
On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
9:59am, Richard Elling wrote:
On Apr 25, 2012, at 5:48 AM, Paul Archer wrote:
This may fall into the realm of a religious war (I hope not!), but
recently several people on this list have
said/implied that ZFS was only acceptable
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
11:26am, Richard Elling wrote:
On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
The point of a clustered filesystem was to be able to spread our data
out among all nodes and still have access
from any node without having to run
On Apr 25, 2012, at 2:26 PM, Paul Archer wrote:
2:20pm, Richard Elling wrote:
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
Interesting, something more complex than NFS to avoid the
complexities of NFS? ;-)
We have data coming in on multiple nodes (with local
On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
On Wed, Apr 25, 2012 at 5:22 PM, Richard Elling
richard.ell...@gmail.com wrote:
Unified namespace doesn't relieve you of 240 cross-mounts (or equivalents).
FWIW,
automounters were invented 20+ years ago to handle this in a nearly seamless
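For context, a minimal sketch of the classic automounter setup being referred to, assuming home directories exported from a hypothetical host named fileserver: the auto_master line delegates /home to the auto_home map, and the wildcard entry mounts each user's directory on demand.
/home   auto_home   -nobrowse            (line in /etc/auto_master)
*       fileserver:/export/home/&        (line in /etc/auto_home)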
On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
On 4/25/12 6:57 PM, Paul Kraus wrote:
On Wed, Apr 25, 2012 at 9:07 PM, Nico Williamsn...@cryptonector.com wrote:
On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
richard.ell...@gmail.com wrote:
Nothing's changed. Automounter + data
On Apr 24, 2012, at 8:35 AM, Jim Klimov wrote:
On 2012-04-24 19:14, Tim Cook wrote:
Personally unless the dataset is huge and you're using z3, I'd be
scrubbing once a week. Even if it's z3, just do a window on Sundays or
something so that you at least make it through the whole dataset at
On Apr 17, 2012, at 12:25 AM, Jim Klimov wrote:
2012-04-17 5:15, Richard Elling wrote:
For the archives...
Write-back cache enablement is toxic for file systems that do not issue
cache flush commands, such as Solaris' UFS. In the early days of ZFS,
on Solaris 10 or before ZFS was bootable
For the archives...
On Apr 16, 2012, at 3:37 PM, Peter Jeremy wrote:
On 2012-Apr-14 02:30:54 +1000, Tim Cook t...@cook.ms wrote:
You will however have an issue replacing them if one should fail. You need
to have the same block count to replace a device, which is why I asked for a
http://wesunsolve.net/bugid/id/6563887
-- richard
On Apr 14, 2012, at 6:04 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
I thought ZFSv20-something added a if the blockcount is within 10%,
On Apr 11, 2012, at 1:34 AM, Ian Collins wrote:
I use an application with a fairly large receive data buffer (256MB) to
replicate data between sites.
I have noticed the buffer becoming completely full when receiving snapshots
for some filesystems, even over a slow (~2MB/sec) WAN
On Apr 9, 2012, at 7:10 AM, Paul Kraus wrote:
Sorry for the off topic post, but I figure there is experience
here. I have a total of ten J4400 chassis all loaded with SATA drives.
Has anyone noticed a tendency for drives in specific slots to fail
more often than others? I have seen more
On Apr 6, 2012, at 4:58 PM, Marion Hakanson wrote:
a...@blackandcode.com said:
I'm spec'ing out a Thumper-esque solution and having trouble finding my
favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The
Seagate Constellations seem pretty reasonable given the market
On Apr 7, 2012, at 4:15 PM, Jim Klimov wrote:
I'm not familiar with the J4400 at all, but isn't Sun/Oracle using, like NetApp,
interposer cards and thus handling the SATA drives more or less like SAS
ones?
Out of curiosity, are there any third-party hardware vendors
that make
On Apr 4, 2012, at 12:08 PM, Jan-Aage Frydenbø-Bruvoll wrote:
Dear List,
I am struggling with a storage pool on a server, where I would like to
offline a device for replacement. The pool consists of two-disk stripes set
up in mirrors (yep, stupid, but we were running out of VDs on the
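As a point of reference, the basic offline-and-replace sequence looks like the sketch below (hypothetical pool and device names; the poster's actual difficulty is in the truncated part of the message):
# zpool offline tank c3t2d0
# zpool replace tank c3t2d0 c3t7d0    (swap in the new device and start the resilver)
# zpool status tank                   (watch resilver progress)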
On Mar 29, 2012, at 4:33 AM, Borja Marcos wrote:
On Mar 29, 2012, at 11:59 AM, Ian Collins wrote:
Does zfs receive produce any warnings? Have you tried adding -v?
Thank you very much Ian and Carsten. Well, adding a -v gave me a clue. Turns
out that one of the old snapshots had a
I see nothing unusual in the lockstat data. I think you're barking up
the wrong tree.
-- richard
On Mar 25, 2012, at 10:51 PM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 1:19 PM, Richard Elling
richard.ell...@richardelling.com wrote:
Apologies to the ZFSers, this thread really belongs
On Mar 25, 2012, at 8:58 PM, Yuri Vorobyev wrote:
Hello.
What are the best practices for choosing the ZFS volume volblocksize setting for
VMware VMFS-5?
VMFS-5 block size is 1Mb. Not sure how it corresponds with ZFS.
Zero correlation.
What I see on the wire from VMFS is 16KB random reads
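If the workload really is dominated by 16KB I/O, one way to experiment (a sketch only, with a hypothetical zvol name; volblocksize can only be set at creation time) is to match the zvol block size to what is seen on the wire:
# zfs create -V 500g -o volblocksize=16k tank/vmfs-lun
# zfs get volblocksize tank/vmfs-lun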
On Mar 26, 2012, at 4:18 PM, Bob Friesenhahn wrote:
On Mon, 26 Mar 2012, Andrew Gabriel wrote:
I just played and knocked this up (note the stunning lack of comments,
missing optarg processing, etc)...
Give it a list of files to check...
This is a cool program, but programmers were
On Mar 24, 2012, at 10:29 PM, Aubrey Li wrote:
Hi,
I'm migrating a webserver(apache+php) from RHEL to solaris. During the
stress testing comparison, I found under the same session number of client
request, CPU% is ~70% on RHEL while CPU% is full on solaris.
After some investigation, zfs
This is the wrong forum for general purpose performance tuning. So I won't
continue this much farther. Notice the huge number of icsw, that is a bigger
symptom than locks.
-- richard
On Mar 25, 2012, at 6:24 AM, Aubrey Li wrote:
SET minf mjf xcal intr ithr csw icsw migr smtx srw syscl
On Mar 25, 2012, at 6:26 AM, Jeff Bacon wrote:
In general, mixing SATA and SAS directly behind expanders (eg without
SAS/SATA intereposers) seems to be bad juju that an OS can't fix.
In general I'd agree. Just mixing them on the same box can be problematic,
I've noticed - though I think as
On Mar 25, 2012, at 10:25 AM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 12:48 AM, Richard Elling
richard.ell...@richardelling.com wrote:
This is the wrong forum for general purpose performance tuning. So I won't
continue this much farther. Notice the huge number of icsw, that is a
bigger
On Mar 25, 2012, at 6:51 PM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 4:18 AM, Jim Mauro james.ma...@oracle.com wrote:
If you're chasing CPU utilization, specifically %sys (time in the kernel),
I would start with a time-based kernel profile.
#dtrace -n 'profile-997hz /arg0/ { @[stack()] =
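The one-liner is cut off above; purely for illustration, a commonly used complete form of that kind of time-based kernel profile looks like this (the 30-second tick is an arbitrary choice):
# dtrace -n 'profile-997hz /arg0/ { @[stack()] = count(); } tick-30s { exit(0); }'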
Apologies to the ZFSers, this thread really belongs elsewhere.
On Mar 25, 2012, at 10:11 PM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 11:34 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Mar 25, 2012, at 6:51 PM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 4:18 AM, Jim Mauro james.ma
Thanks for sharing, Jeff!
Comments below...
On Mar 24, 2012, at 4:33 PM, Jeff Bacon wrote:
2012-03-21 16:41, Paul Kraus wrote:
I have been running ZFS in a mission critical application since
zpool version 10 and have not seen any issues with some of the vdevs
in a zpool full while others
On Mar 22, 2012, at 3:03 AM, Jim Klimov wrote:
2012-03-21 22:53, Richard Elling wrote:
...
This is why a single
vdev's random-read performance is equivalent to the random-read
performance of
a single drive.
It is not as bad as that. The actual worst case number for a HDD
comments below...
On Mar 21, 2012, at 10:40 AM, Marion Hakanson wrote:
p...@kraus-haus.org said:
Without knowing the I/O pattern, saying 500 MB/sec. is meaningless.
Achieving 500MB/sec. with 8KB files and lots of random accesses is really
hard, even with 20 HDDs. Achieving 500MB/sec. of
On Mar 18, 2012, at 11:16 AM, Jim Klimov wrote:
Hello all,
I was asked if it is possible to convert a ZFS pool created
explicitly with ashift=12 (via the tweaked binary) and filled
with data back into ashift=9 so as to use the slack space
from small blocks (BP's, file tails, etc.)
copy
On Mar 16, 2012, at 3:06 PM, Brandon High wrote:
On Fri, Mar 16, 2012 at 2:35 PM, Philip Brown p...@bolthole.com wrote:
if there isn't a process visible doing this via ps, I'm wondering how
one might check if a zfs filesystem or snapshot is rendered busy in
this way, interfering with an
Hi Andy
On Feb 14, 2012, at 10:37 AM, andy thomas wrote:
On one of our servers, we have a RAIDz1 ZFS pool called 'maths2' consisting
of 7 x 300 Gb disks which in turn contains a single ZFS filesystem called
'home'.
Yesterday, using the 'ls' command to list the directories within this
Hi Andy,
On Feb 14, 2012, at 12:41 PM, andy thomas wrote:
On Tue, 14 Feb 2012, Richard Elling wrote:
Hi Andy
On Feb 14, 2012, at 10:37 AM, andy thomas wrote:
On one of our servers, we have a RAIDz1 ZFS pool called 'maths2' consisting
of 7 x 300 Gb disks which in turn contains
On Feb 10, 2012, at 9:12 AM, Simon Casady wrote:
I have a file that I can't delete, change permissions or owner. ls -v
does not show any ACLs on the file, not even those for normal unix rw
etc.
permissions from ls -l show -rwx--
chmod gave a 'not owner' error for the owner !!
and
On Feb 1, 2012, at 4:09 AM, Jim Klimov wrote:
2012-02-01 6:22, Ragnar Sundblad wrote:
That is almost what I do, except that I only have one HBA.
We haven't seen many HBAs fail during the years, none actually, so we
thought it was overkill to double those too. But maybe we are wrong?
Thanks for the info, James!
On Jan 31, 2012, at 6:58 PM, James C. McPherson wrote:
On 1/02/12 12:40 PM, Ragnar Sundblad wrote:
...
I still don't really get what stmsboot -u actually does (and if - and if
so how much - this differs between x86 and sparc).
Would it be impolite to ask you to
Hi Edmund,
On Jan 31, 2012, at 5:43 PM, Edmund White wrote:
You will definitely want to have a Smart Array card (p411 or p811) on hand
to update the firmware on the enclosure. Make sure you're on firmware
version 0131. You may also want to update the disk firmware at the same
time.
I
Hi Ivan,
On Jan 26, 2012, at 8:25 PM, Ivan Rodriguez wrote:
Dear fellows,
We have a backup server with a zpool size of 20 TB, we transfer
information using zfs snapshots every day (we have around 300 fs on
that pool),
the storage is a dell md3000i connected by iscsi, the pool is
On Jan 24, 2012, at 7:52 AM, Jim Klimov wrote:
2012-01-24 13:05, Mickaël CANÉVET wrote:
Hi,
Unless I misunderstood something, zfs send of a volume that has
compression activated uncompresses it. So if I do a zfs send|zfs receive
from a compressed volume to a compressed volume, my data are
On Jan 21, 2012, at 6:32 AM, Jim Klimov wrote:
2012-01-21 0:33, Jim Klimov wrote:
2012-01-13 4:12, Jim Klimov wrote:
As I recently wrote, my data pool has experienced some
unrecoverable errors. It seems that a userdata block
of deduped data got corrupted and no longer matches the
stored
On Jan 16, 2012, at 8:08 AM, David Magda wrote:
On Mon, January 16, 2012 01:19, Richard Elling wrote:
[1] http://www.usenix.org/event/fast10/tech/full_papers/zhang.pdf
Yes. Netapp has funded those researchers in the past. Looks like a FUD
piece to me.
Lookout everyone, the memory system
On Jan 17, 2012, at 4:11 AM, Anonymous Remailer (austria) wrote:
I have a desktop system with 2 ZFS mirrors. One drive in one mirror is
starting to produce read errors and slowing things down dramatically. I
detached it and the system is running fine. I can't tell which drive it is
though! The
On Jan 15, 2012, at 7:04 AM, Jim Klimov wrote:
Does raidzN actually protect against bitrot?
That's a kind of radical, possibly offensive, question formula
that I have lately.
Simple answer: no. raidz provides data protection. Checksums verify
data is correct. Two different parts of the
On Jan 14, 2012, at 6:36 AM, Stefan Ring wrote:
Inspired by the paper End-to-end Data Integrity for File Systems: A
ZFS Case Study [1], I've been thinking if it is possible to devise a way,
in which a minimal in-memory data corruption would cause massive data
loss.
For enterprise-class
On Jan 15, 2012, at 8:49 PM, Bob Friesenhahn wrote:
On Sun, 15 Jan 2012, Edward Ned Harvey wrote:
Such failures can happen undetected with or without ECC memory. It's simply
less likely with ECC. The whole thing about ECC memory... It's just doing
parity. It's a very weak checksum. If
On Jan 12, 2012, at 4:12 PM, Jim Klimov wrote:
As I recently wrote, my data pool has experienced some
unrecoverable errors. It seems that a userdata block
of deduped data got corrupted and no longer matches the
stored checksum. For whatever reason, raidz2 did not
help in recovery of this
On Jan 12, 2012, at 2:34 PM, Jim Klimov wrote:
I guess I have another practical rationale for a second
checksum, be it ECC or not: my scrubbing pool found some
unrecoverable errors. Luckily, for those files I still
have external originals, so I rsynced them over. Still,
there is one file
On Jan 11, 2012, at 5:01 AM, Jim Klimov wrote:
Hello all, I found this dialog on the zfs-de...@zfsonlinux.org list,
and I'd like someone to confirm-or-reject the discussed statement.
Paraphrasing in my words and understanding:
Labels, including Uberblock rings, are fixed 256KB in size each,
On Jan 9, 2012, at 7:23 PM, Jesus Cea wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 07/01/12 13:39, Jim Klimov wrote:
I have transitioned a number of systems roughly by the same
procedure as you've outlined. Sadly, my notes are not in English so
they wouldn't be of much help
On Jan 9, 2012, at 5:44 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
To put things in proper perspective, with 128K filesystem blocks, the
worst case file fragmentation as a percentage is
On Jan 7, 2012, at 8:59 AM, Jim Klimov wrote:
I wonder if it is possible (currently or in the future as an RFE)
to tell ZFS to automatically read-ahead some files and cache them
in RAM and/or L2ARC?
See discussions on the ZFS intelligent prefetch algorithm. I think Ben
Rockwood's
description
On Jan 8, 2012, at 5:10 PM, Jim Klimov wrote:
2012-01-09 4:14, Richard Elling wrote:
On Jan 7, 2012, at 8:59 AM, Jim Klimov wrote:
I wonder if it is possible (currently or in the future as an RFE)
to tell ZFS to automatically read-ahead some files and cache them
in RAM and/or L2ARC?
See
Note: more analysis of the GPFS implementations is needed, but that will take
more
time than I'll spend this evening :-) Quick hits below...
On Jan 7, 2012, at 7:15 PM, Tim Cook wrote:
On Sat, Jan 7, 2012 at 7:37 PM, Richard Elling richard.ell...@gmail.com
wrote:
Hi Jim,
On Jan 6, 2012
Hi Jim,
On Jan 6, 2012, at 3:33 PM, Jim Klimov wrote:
Hello all,
I have a new idea up for discussion.
Several RAID systems have implemented spread spare drives
in the sense that there is not an idling disk waiting to
receive a burst of resilver data filling it up, but the
capacity of
On Jan 7, 2012, at 7:12 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
For smaller systems such as laptops or low-end servers,
which can house 1-2 disks, would it make sense to dedicate
a 2-4Gb
Hi Grant,
On Jan 4, 2012, at 2:59 PM, grant lowe wrote:
Hi all,
I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB of
memory. Right now I've been trying to load test the box with
bonnie++. I can only seem to get 80 to 90 K writes, but can't seem to get more
than
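A sketch of a typical bonnie++ invocation for this kind of load test, assuming a hypothetical mount point /pool/bench; -s should be at least twice RAM (256 GB here, for a 128 GB box) so the ARC cannot cache the whole working set, and -u is required when running as root:
# bonnie++ -d /pool/bench -s 256g -u root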
On Jan 5, 2012, at 10:19 AM, Tim Cook wrote:
Speaking of illumos, what exactly is the deal with the zfs discuss mailing
list? There's all of 3 posts that show up for all of 2011. Am I missing
something, or is there just that little traction currently?
On Jan 5, 2012, at 6:53 AM, sol wrote:
if a bug fixed in Illumos is never reported to Oracle by a customer,
it would likely never get fixed in Solaris either
:-(
I would have liked to think that there was some good-will between the ex- and
current-members of the zfs team, in the sense
On Jan 4, 2012, at 8:49 AM, Peter Radig wrote:
Thanks. The guys from Oracle are currently looking at some new code that was
introduced in arc_reclaim_thread() between b151a and b175.
Closed source strategy loses again!
-- richard
Peter Radig, Ahornstrasse 34, 85774 Unterföhring,
On Dec 30, 2011, at 5:57 AM, Hung-Sheng Tsao (laoTsao) wrote:
now s11 supports shadow migration, just for this purpose, AFAIK
not sure NexentaStor supports shadow migration
The shadow property is closed source. Once you go there, you are locked into
Oracle.
-- richard
--
ZFS and
On Dec 29, 2011, at 10:31 PM, Ray Van Dolson wrote:
Hi all;
We have a dev box running NexentaStor Community Edition 3.1.1 w/ 24GB
(we don't run dedupe on production boxes -- and we do pay for Nexenta
licenses on prd as well) RAM and an 8.5TB pool with deduplication
enabled (1.9TB or so in
On Dec 29, 2011, at 1:29 PM, Nico Williams wrote:
On Thu, Dec 29, 2011 at 2:06 PM, sol a...@yahoo.com wrote:
Richard Elling wrote:
many of the former Sun ZFS team
regularly contribute to ZFS through the illumos developer community.
Does this mean that if they provide a bug fix via illumos
On Dec 27, 2011, at 7:46 PM, Tim Cook wrote:
On Tue, Dec 27, 2011 at 9:34 PM, Nico Williams n...@cryptonector.com wrote:
On Tue, Dec 27, 2011 at 8:44 PM, Frank Cusack fr...@linetwo.net wrote:
So with a de facto fork (illumos) now in place, is it possible that two
zpools will report the same
On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote:
Not exactly. What is dedup'ed is the stream only, which is in fact not very
efficient. Real dedup aware replication is taking the necessary steps to
avoid sending a block that exists on the other storage system.
These exist outside of ZFS (eg
On Dec 4, 2011, at 8:50 AM, Ryan Wehler wrote:
A certification does not mean that any specific implementation operates
without errors. A failed part,
noisy environment, or other influences will affect any specific
implementation.
Would it not be more prudent to re-run the tests after a
On Dec 1, 2011, at 5:08 PM, Ryan Wehler wrote:
During the diagnostics of my SAN failure last week we thought we had seen a
backplane failure due to high error counts with 'lsiutil'. However, even
with a new backplane and ruling out failed cards (MPXIO or singular) or bad
cables I'm still