Hi,
I have a pool of 22 1TB SATA disks in a RAIDZ3 configuration. It is filled with
files of an average size of 2MB. I filled it randomly to resemble the expected
workload in production use.
Problems arise when I try to scrub/resilver this pool. This operation takes the
better part of a week
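For reference, scrub progress and the knob most often mentioned in these threads can be checked as follows; "tank" is a placeholder pool name, and zfs_scrub_delay only exists on builds with the newer scrub code, so take this as a sketch rather than a recipe:

  # zpool status -v tank                    # shows scrub progress and an estimated time to completion
  # echo zfs_scrub_delay/W0t0 | mdb -kw     # makes the scrubber more aggressive, at the cost of foreground I/O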
Hi,
I have a setup with thousands of filesystems, each containing several
snapshots. For a good percentage of these filesystems I want to create
a snapshot once every hour, for others once every 2 hours, and so forth.
I built some tools to do this, no problem so far.
While examining disk load on
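The tooling for this can be as small as a cron-driven shell loop; below is a minimal sketch under assumed names (the schedule file and snapshot prefix are hypothetical, not Arne's actual tools):

  #!/bin/sh
  # take a timestamped snapshot of every dataset listed in the hourly schedule
  STAMP=`date +%Y-%m-%d-%H%M`
  for fs in `cat /etc/zfs-snap-hourly.list`; do
      zfs snapshot "$fs@hourly-$STAMP"
  done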
Brent Jones wrote:
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been trying for more than a year, and
watching dozens, if not hundreds of threads.
Getting halfway decent performance from NFS and ZFS is impossible
unless you disable the ZIL.
Arne Jansen wrote:
Hi,
I have a setup with thousands of filesystems, each containing several
snapshots. For a good percentage of these filesystems I want to create
a snapshot once every hour, for others once every 2 hours, and so forth.
I built some tools to do this, no problem so far.
While
thomas wrote:
Very interesting. This could be useful for a number of us. Would you be willing
to share your work?
No problem. I'll contact you off-list.
Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 11:51 PM, Arne Jansen sensi...@gmx.net wrote:
Andrey Kuzmin wrote:
As to your results, it sounds almost too good to be true. As Bob
has pointed out, h/w design targeted hundreds of IOPS
, 2010 at 12:03 AM, Arne Jansen sensi...@gmx.net wrote:
Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 11:51 PM, Arne Jansen sensi...@gmx.net wrote:
Andrey Kuzmin
Andrey Kuzmin wrote:
As to your results, it sounds almost too good to be true. As Bob has
pointed out, h/w design targeted hundreds of IOPS, and it was hard to
believe it could scale 100x. Fantastic.
Hundreds of IOPS is not quite true, even with hard drives. I just tested
a Hitachi 15k drive and it
Darren J Moffat wrote:
But the following document says Recursive ZFS snapshots are created
quickly as one atomic operation. The snapshots are created together (all
at once) or not created at all.
http://docs.sun.com/app/docs/doc/819-5461/gdfdt?a=view
I've looked at the code again - I miss
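For context, the atomicity that document describes is what a recursive snapshot provides; a single command such as the following (hypothetical dataset name) creates snapshots of the dataset and all its descendants in one transaction group, or not at all:

  zfs snapshot -r tank/home@2010-06-11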
Darren J Moffat wrote:
On 11/06/2010 11:42, Arne Jansen wrote:
Darren J Moffat wrote:
But the following document says Recursive ZFS snapshots are created
quickly as one atomic operation. The snapshots are created together
(all
at once) or not created at all.
http://docs.sun.com/app/docs/doc
Thomas Nau wrote:
Dear all
We ran into a nasty problem the other day. One of our mirrored zpools
hosts several ZFS filesystems. After a reboot (all FS mounted at that
time and in use) the machine panicked (console output further down). After
detaching one of the mirrors, the pool fortunately
Marcelo Leal wrote:
Hello there,
I think you should share it with the list, if you can; it seems like
interesting work. ZFS has some issues with snapshots and spa_sync performance
for snapshot deletion.
I'm a bit reluctant to post it to the list where it can still be found
years from now.
Hi,
I know it's been discussed here more than once, and I read the
Evil Tuning Guide, but I didn't find a definitive statement:
There is absolutely no sense in having slog devices larger than
main memory, because it will never be used, right?
ZFS will rather flush the txg to disk than
Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Arne Jansen
There is absolutely no sense in having slog devices larger than
main memory, because it will never be used, right?
Also: A TXG is guaranteed
Roy Sigurd Karlsbakk wrote:
There is absolutely no sense in having slog devices larger than
main memory, because it will never be used, right?
ZFS will rather flush the txg to disk than read it back from
the zil? So there is a guideline to have enough slog to hold about 10
seconds of zil,
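As a rough illustration of that guideline (the load figure is purely an assumption, not from the thread):

  sustained synchronous write load (assumed):   200 MB/s
  time data may sit in the slog (~txg window):  10 s
  slog capacity that can actually be used:      200 MB/s * 10 s = ~2 GB

so even a fairly small slog device is already more than ZFS will ever fill.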
There have been many threads in the past asking about ZIL
devices. Most of them end up recommending the Intel X-25
as an adequate device. Nevertheless there is always the
warning about them not heeding cache flushes. But what use
is a ZIL that ignores cache flushes? If I'm willing to
tolerate that
Bob Friesenhahn wrote:
On Tue, 15 Jun 2010, Arne Jansen wrote:
In case of a power failure I will likely lose about as many writes as
I would with SSDs, a few milliseconds' worth.
I agree with your concerns, but the data loss may span as much as 30
seconds rather than just a few milliseconds.
Wait
Christopher George wrote:
So why buy SSD for ZIL at all?
For the record, not all SSDs ignore cache flushes. There are at least
two SSDs sold today that guarantee synchronous write semantics; the
Sun/Oracle LogZilla and the DDRdrive X1. Also, I believe it is more
LogZilla? Are these
Arve Paalsrud wrote:
Not to forget the Deneva Reliability disks from OCZ that just got
released. See
http://www.oczenterprise.com/details/ocz-deneva-reliability-2-5-emlc-ssd.html
The Deneva Reliability family features a built-in supercapacitor (SF-1500
models) that acts as a temporary
David Markey wrote:
I have done a similar deployment.
However, we gave each student their own ZFS filesystem, each of which had
a .zfs directory in it.
Don't host 50k filesystems on a single pool. It's more pain than it's
worth.
MichaelHoy wrote:
I’ve posted a query regarding the visibility of snapshots via CIFS here
(http://opensolaris.org/jive/thread.jspa?threadID=130577&tstart=0);
however, I'm beginning to suspect that it may be a more fundamental ZFS
question, so I'm asking the same question here.
At what level
David Magda wrote:
On Wed, June 16, 2010 11:02, David Magda wrote:
[...]
Yes, I understood it as suck, and that link is for ZIL. For L2ARC SSD
numbers see:
s/suck/such/
ah, I tried to make sense from 'suck' in the sense of 'just writing
sequentially' or something like that ;)
:)
David Magda wrote:
On Wed, June 16, 2010 10:44, Arne Jansen wrote:
David Magda wrote:
I'm not sure you'd get the same latency and IOps with disk that you can
with a good SSD:
http://blogs.sun.com/brendan/entry/slog_screenshots
[...]
Please keep in mind I'm talking about a usage as ZIL
Bob Friesenhahn wrote:
On Wed, 16 Jun 2010, Arne Jansen wrote:
Please keep in mind I'm talking about a usage as ZIL, not as L2ARC or
main
pool. Because the ZIL issues nearly sequential writes, and due to the
NVRAM protection of the RAID controller the disk can leave the write
cache enabled
David Magda wrote:
On Wed, June 16, 2010 15:15, Arne Jansen wrote:
I double checked before posting: I can nearly saturate a 15k disk if I
make full use of the 32 queue slots, giving 137 MB/s or 34k IOPS. Times
3 nearly matches the above-mentioned 114k IOPS :)
34K*3 = 102K. 12K isn't
at 5:36 AM, Arne Jansen sensi...@gmx.net wrote:
artiepen wrote:
40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2,
and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up
to 40 very rarely.
As far as random vs. sequential. Correct me if I'm wrong
Curtis E. Combs Jr. wrote:
Um...I started 2 commands in 2 separate ssh sessions:
in ssh session one:
iostat -xn 1 > stats
in ssh session two:
mkfile 10g testfile
when the mkfile was finished I did the dd command...
on the same zpool1 and zfs filesystem... that's it, really
No, this doesn't
David Magda wrote:
On Fri, June 18, 2010 08:29, Sendil wrote:
I can create 400+ filesystems, one for each user,
but will this affect my system performance during system boot?
Is this recommended, or is there an alternative approach for this issue?
You can create a dataset for each user, and
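As a rough sketch of the per-user dataset approach (pool, path, user names and the quota are all hypothetical):

  #!/bin/sh
  # one ZFS filesystem per user, each with its own quota
  for user in alice bob carol; do
      zfs create -o quota=10G "tank/home/$user"
  done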
Sandon Van Ness wrote:
Sounds to me like something is wrong, as on my 20-disk backup machine
with 20 1TB disks in a single raidz2 vdev I get the following with dd on
sequential reads/writes:
writes:
r...@opensolaris: 11:36 AM :/data# dd bs=1M count=10 if=/dev/zero
of=./100gb.bin
10+0
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to the spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC based, write
Roy Sigurd Karlsbakk wrote:
I have read people are having problems with lengthy boot times with lots of
datasets. We're planning to do extensive snapshotting on this system, so there
might be close to a hundred snapshots per dataset, perhaps more. With 200 users
and perhaps 10-20 shared
David Magda wrote:
On Jun 21, 2010, at 05:00, Roy Sigurd Karlsbakk wrote:
So far the plan is to keep it in one pool for design and
administration simplicity. Why would you want to split up (net) 40TB
into more pools? Seems to me that'll mess up things a bit, having to
split up SSDs for use
Roy Sigurd Karlsbakk wrote:
Hi all
I plan to set up a new system with four Crucial RealSSD 256GB SSDs for both SLOG
and L2ARC. The plan is to use four small slices for the SLOG, striping two
mirrors. I have seen questions in here about the theoretical benefit of doing
this, but I haven't seen
Roy Sigurd Karlsbakk wrote:
- mirroring l2arc won't gain anything, as it doesn't contain any
information that cannot be rebuilt if a device is lost. Further, if a
device is lost,
the system just uses the remaining devices. So I wouldn't waste any
space mirroring l2arc, I'll just stripe them.
I
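Put together, the layout discussed here (mirrored slog slices, striped L2ARC slices) would look roughly like the following; pool and device names are placeholders only:

  # two mirrored log (slog) pairs; ZFS stripes writes across the two mirrors
  zpool add tank log mirror c0t0d0s0 c0t1d0s0 mirror c0t2d0s0 c0t3d0s0
  # cache (L2ARC) devices are simply striped; mirroring them buys nothing
  zpool add tank cache c0t0d0s1 c0t1d0s1 c0t2d0s1 c0t3d0s1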
Wes Felter wrote:
On 6/19/10 3:56 AM, Arne Jansen wrote:
while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to spec page has supercaps built in) as a shared slog and L2ARC
device
IMO it might be better to use the smallest (50GB, maybe overprovisioned
down to ~20GB) Vertex 2
Paul B. Henson wrote:
On Sun, 20 Jun 2010, Arne Jansen wrote:
In my experience the boot time mainly depends on the number of datasets,
not the number of snapshots. 200 datasets is fairly easy (we have 7000,
but did some boot-time tuning).
What kind of boot tuning are you referring to? We've
Arne Jansen wrote:
Paul B. Henson wrote:
On Sun, 20 Jun 2010, Arne Jansen wrote:
In my experience the boot time mainly depends on the number of datasets,
not the number of snapshots. 200 datasets is fairly easy (we have 7000,
but did some boot-time tuning).
What kind of boot tuning are you
Hi,
Roy Sigurd Karlsbakk wrote:
Crucial RealSSD C300 has been released and is showing good numbers for use as
ZIL and L2ARC. Does anyone know if this unit flushes its cache on request, as
opposed to Intel units etc.?
I had a chance to get my hands on a Crucial RealSSD C300/128GB yesterday and
Arne Jansen wrote:
Hi,
Roy Sigurd Karlsbakk wrote:
Crucial RealSSD C300 has been released and is showing good numbers for use as
ZIL and L2ARC. Does anyone know if this unit flushes its cache on request,
as opposed to Intel units etc.?
I had a chance to get my hands on a Crucial RealSSD
Arne Jansen wrote:
Hi,
Roy Sigurd Karlsbakk wrote:
Crucial RealSSD C300 has been released and is showing good numbers for use as
ZIL and L2ARC. Does anyone know if this unit flushes its cache on request,
as opposed to Intel units etc.?
Also the IOPS with cache flushes is quite low, 385
Ross Walker wrote:
Raidz is definitely made for sequential IO patterns, not random. To get good
random IO with raidz you need a zpool with X raidz vdevs, where X = desired
IOPS / IOPS of a single drive.
I have seen statements like this repeated several times, though
I haven't been able to find
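Taken at face value, the rule of thumb quoted above works out like this (numbers are purely illustrative):

  target random IOPS for the pool:          1500
  random IOPS of a single 7200 rpm drive:   ~100
  raidz vdevs needed:  X = 1500 / 100 = 15 vdevs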
Now the test for the Vertex 2 Pro. This was fun.
For more explanation please see the thread "Crucial RealSSD C300 and cache
flush?".
This time I made sure the device is attached via 3 Gbit SATA. This is also
only a short test. I'll retest after some weeks of usage.
cache enabled, 32 buffers, 64k
Geoff Nordli wrote:
Is this the one
(http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-pro-series-sata-ii-2-5--ssd-.html)
with the built-in supercap?
Yes.
Geoff
Edward Ned Harvey wrote:
Due to recent experiences, and discussion on this list, my colleague and
I performed some tests:
Using Solaris 10, fully upgraded. (zpool version 15 is the latest, which does not
have the log device removal that was introduced in zpool version 19.) In any way
possible, you lose an
Daniel Carosone wrote:
Something similar would be useful, and much more readily achievable,
from ZFS from such an application, and many others. Rather than a way
to compare reliably between two files for identity, I'd like a way to
compare identity of a single file between two points in
Jordan McQuown wrote:
I’m curious to know what other people are running for HDs in white box
systems. I’m currently looking at Seagate Barracudas and Hitachi
Deskstars. I’m looking at the 1tb models. These will be attached to an
LSI expander in a sc847e2 chassis driven by an LSI 9211-8i HBA.
Edward Ned Harvey wrote:
From: Robert Milkowski [mailto:mi...@task.gda.pl]
[In raidz] The issue is that each zfs filesystem block is basically
spread across
n-1 devices.
So every time you want to read back a single fs block you need to wait
for all n-1 devices to provide you with a part of it
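To make the quoted point concrete with illustrative numbers: in a 9-disk raidz1, a 128 KB filesystem block is split into eight ~16 KB data chunks plus parity, so a random read of that one block has to wait for all eight data disks, and the vdev as a whole delivers roughly the random-read IOPS of a single disk rather than eight times that.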
Giovanni Tirloni wrote:
On Thu, Sep 2, 2010 at 10:18 AM, Jeff Bacon ba...@walleyesoftware.com
mailto:ba...@walleyesoftware.com wrote:
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory
Hi,
currently I'm trying to debug a very strange phenomenon on a nearly full
pool (96%). Here are the symptoms: over NFS, a find on the pool takes
a very long time, up to 30s (!) for each file. Locally, the performance
is quite normal.
What I have found out so far: it seems that every NFS write
, but as a side effect reads also
came to a nearly complete halt.
--
Arne
Neil.
On 09/09/10 09:00, Arne Jansen wrote:
Hi,
currently I'm trying to debug a very strange phenomenon on a nearly full
pool (96%). Here are the symptoms: over NFS, a find on the pool takes
a very long time, up to 30s
Richard Elling wrote:
On Sep 9, 2010, at 10:09 AM, Arne Jansen wrote:
Hi Neil,
Neil Perrin wrote:
NFS often demands its transactions are stable before returning.
This forces ZFS to do the system call synchronously. Usually the
ZIL (code) allocates and writes a new block in the intent log
interested could give it some testing
and/or review. If there are no objections, I'll send a formal
webrev soon.
Thanks,
Arne
On 10.10.2012 21:38, Arne Jansen wrote:
Hi,
We're currently working on a feature to send zfs streams in a portable
format that can be received on any filesystem
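As far as one can tell from the thread, the intended usage mirrors zfs send; the sketch below is only an assumption about the interface (dataset names are made up, and the exact options are defined by the webrev, not here):

  # full send of a snapshot in the portable (FITS) format
  zfs fits-send tank/data@snap1 > /backup/data-snap1.fits
  # incremental between two snapshots, assuming an -i option analogous to zfs send
  zfs fits-send -i tank/data@snap1 tank/data@snap2 | ssh backuphost 'cat > data-snap2.fits'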
On 10/18/2012 10:19 PM, Andrew Gabriel wrote:
Arne Jansen wrote:
We have finished a beta version of the feature.
What does FITS stand for?
Filesystem Incremental Transport Stream
(or Filesystem Independent Transport Stream)
On 19.10.2012 10:47, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
On 10/18/2012 10:19 PM, Andrew Gabriel wrote:
Arne Jansen wrote:
We have finished a beta version of the feature.
What does FITS stand for?
Filesystem Incremental Transport Stream
(or Filesystem Independent
On 19.10.2012 11:16, Irek Szczesniak wrote:
On Wed, Oct 17, 2012 at 2:29 PM, Arne Jansen sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send/
It adds a command 'zfs fits-send
On 19.10.2012 12:17, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
Is this an attempt to create a competition for TAR?
Not really. We'd have preferred tar if it had been powerful enough.
It's more an alternative to rsync for incremental updates. I really
like the send
On 19.10.2012 13:53, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
On 19.10.2012 12:17, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
Is this an attempt to create a competition for TAR?
Not really. We'd have preferred tar if it had been powerful enough
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send
On 10/20/2012 01:10 AM, Tim Cook wrote:
On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net
On 10/20/2012 01:21 AM, Matthew Ahrens wrote:
On Fri, Oct 19, 2012 at 1:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
Please don't bother changing libzfs (and proliferating the copypasta
there) -- do it like
On 20.10.2012 22:24, Tim Cook wrote:
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/20/2012 01:10 AM, Tim Cook wrote:
On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net
On 22.10.2012 06:32, Matthew Ahrens wrote:
On Sat, Oct 20, 2012 at 1:24 PM, Tim Cook t...@cook.ms
mailto:t...@cook.ms wrote:
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/20/2012 01:10 AM, Tim Cook wrote