On Mar 16, 2013, at 7:01 PM, Andrew Werchowiecki
andrew.werchowie...@xpanse.com.au wrote:
It's a home setup; the performance penalty from splitting the cache devices
is non-existent, and that workaround sounds like some pretty crazy amount of
overhead where I could instead just have a
On Mar 15, 2013, at 6:09 PM, Marion Hakanson hakan...@ohsu.edu wrote:
Greetings,
Has anyone out there built a 1-petabyte pool?
Yes, I've done quite a few.
I've been asked to look
into this, and was told low performance is fine, workload is likely
to be write-once, read-occasionally,
On Feb 26, 2013, at 12:33 AM, Tiernan OToole lsmart...@gmail.com wrote:
Thanks all! I will check out FreeNAS and see what it can do... I will also
check my RAID Card and see if it can work with JBOD... fingers crossed... The
machine has a couple internal SATA ports (think there are 2, could
On Feb 21, 2013, at 8:02 AM, John D Groenveld jdg...@elvis.arl.psu.edu wrote:
# zfs list -t vol
NAME          USED   AVAIL  REFER  MOUNTPOINT
rpool/dump    4.00G  99.9G  4.00G  -
rpool/foo128  66.2M   100G    16K  -
rpool/swap    4.00G  99.9G  4.00G  -
# zfs destroy rpool/foo128
cannot
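When a zvol refuses to be destroyed, the usual suspects are child snapshots or clones, or the volume being in use (swap, dump, iSCSI, or an open device node). A hedged sketch of the checks, reusing the dataset name above (output will vary):
# zfs list -r -t all rpool/foo128        # any snapshots or clones hanging off it?
# swap -l                                # is it in use as a swap device?
# dumpadm                                # is it the dump device?
# fuser /dev/zvol/rdsk/rpool/foo128      # is something holding the device open?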
On Feb 20, 2013, at 2:49 PM, Markus Grundmann mar...@freebsduser.eu wrote:
Hi!
My name is Markus and I'm living in Germany. I'm new to this list and I have a
simple question
related to zfs. My favorite operating system is FreeBSD and I'm very happy to
use zfs on them.
It's possible to
On Feb 20, 2013, at 3:27 PM, Tim Cook t...@cook.ms wrote:
On Wed, Feb 20, 2013 at 5:09 PM, Richard Elling richard.ell...@gmail.com
wrote:
On Feb 20, 2013, at 2:49 PM, Markus Grundmann mar...@freebsduser.eu wrote:
Hi!
My name is Markus and I'm living in Germany. I'm new to this list and I
On Feb 16, 2013, at 10:16 PM, Bryan Horstmann-Allen b...@mirrorshades.net
wrote:
+--
| On 2013-02-17 18:40:47, Ian Collins wrote:
|
One of its main advantages is that it has been platform agnostic. We see
Solaris,
On Feb 6, 2013, at 5:17 PM, Gregg Wonderly gregg...@gmail.com wrote:
This is one of the greatest annoyances of ZFS. I don't really understand
how a zvol's space cannot be accurately enumerated from top to bottom of
the tree in 'df' output etc. Why does a zvol divorce the space used from
On Jan 29, 2013, at 6:08 AM, Robert Milkowski rmilkow...@task.gda.pl wrote:
From: Richard Elling
Sent: 21 January 2013 03:51
VAAI has 4 features, 3 of which have been in illumos for a long time. The
remaining
feature (SCSI UNMAP) was done by Nexenta and exists in their NexentaStor
product
On Jan 20, 2013, at 8:16 AM, Edward Harvey imaginat...@nedharvey.com wrote:
But, by talking about it, we're just smoking pipe dreams. Cuz we all know
zfs is developmentally challenged now. But one can dream...
I disagree that ZFS is developmentally challenged. There is more development
now
On Jan 20, 2013, at 4:51 PM, Tim Cook t...@cook.ms wrote:
On Sun, Jan 20, 2013 at 6:19 PM, Richard Elling richard.ell...@gmail.com
wrote:
On Jan 20, 2013, at 8:16 AM, Edward Harvey imaginat...@nedharvey.com wrote:
But, by talking about it, we're just smoking pipe dreams. Cuz we all know
On Jan 19, 2013, at 7:16 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
If almost all of the I/Os are
bloom filters are a great fit for this :-)
-- richard
On Jan 19, 2013, at 5:59 PM, Nico Williams n...@cryptonector.com wrote:
I've wanted a system where dedup applies only to blocks being written
that have a good chance of being dups of others.
I think one way to do this would be to
On Jan 17, 2013, at 9:35 PM, Thomas Nau thomas@uni-ulm.de wrote:
Thanks for all the answers (more inline)
On 01/18/2013 02:42 AM, Richard Elling wrote:
On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
mailto:bfrie...@simple.dallas.tx.us wrote:
On Wed, 16 Jan
On Jan 18, 2013, at 4:40 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-01-18 06:35, Thomas Nau wrote:
If almost all of the I/Os are 4K, maybe your ZVOLs should use a
volblocksize of 4K? This seems like the most obvious improvement.
4k might be a little small. 8k will have less metadata
volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI.
The pool is made of SAS2 disks (11 x 3-way mirrored) plus mirrored STEC RAM
ZIL
SSDs and 128G of main memory
The iSCSI access pattern (1 hour daytime average) looks like the following
(Thanks to Richard Elling for the dtrace
On Jan 17, 2013, at 8:35 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-01-17 16:04, Bob Friesenhahn wrote:
If almost all of the I/Os are 4K, maybe your ZVOLs should use a
volblocksize of 4K? This seems like the most obvious improvement.
Matching the volume block size to what the clients
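For reference, volblocksize is fixed at zvol creation time, so matching it to the dominant client I/O size means creating a new volume and migrating the data; a minimal sketch (pool, volume name, and size are made up):
# zfs create -V 200G -o volblocksize=8k tank/xen-vol
# zfs get volblocksize tank/xen-vol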
Kiselkov skiselkov...@gmail.com wrote:
On 01/08/2013 04:27 PM, mark wrote:
On Jul 2, 2012, at 7:57 PM, Richard Elling wrote:
FYI, HP also sells an 8-port IT-style HBA (SC-08Ge), but it is hard to
locate
with their configurators. There might be a more modern equivalent cleverly
hidden
On Jan 7, 2013, at 1:20 PM, Marion Hakanson hakan...@ohsu.edu wrote:
Greetings,
We're trying out a new JBOD here. Multipath (mpxio) is not working,
and we could use some feedback and/or troubleshooting advice.
Sometimes the mpxio detection doesn't work properly. You can try to
whitelist
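On newer illumos and Solaris releases the whitelisting is typically done in /kernel/drv/scsi_vhci.conf; a sketch only, with placeholder vendor/product strings (the vendor ID is space-padded to 8 characters, and a reconfiguration reboot is usually required):
scsi-vhci-failover-override =
    "VENDOR  PRODUCT-XYZ", "f_sym";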
On Jan 5, 2013, at 9:42 AM, Russ Poyner rpoy...@engr.wisc.edu wrote:
I'm configuring a box with 24x 3Tb consumer SATA drives, and wondering about
the best way to configure the pool. The customer wants capacity on the cheap,
and I want something I can service without sweating too much about
On Jan 4, 2013, at 11:12 AM, Robert Milkowski rmilkow...@task.gda.pl wrote:
Illumos is not so good at dealing with huge memory systems, but perhaps
it is also more stable.
Well, I guess that it depends on your environment, but generally I would
expect S11 to be more stable if only
On Jan 3, 2013, at 8:38 PM, Geoff Nordli geo...@gnaa.net wrote:
Thanks Richard, Happy New Year.
On 13-01-03 09:45 AM, Richard Elling wrote:
On Jan 2, 2013, at 8:45 PM, Geoff Nordli geo...@gnaa.net wrote:
I am looking at the performance numbers for the Oracle VDI admin guide.
http
On Jan 2, 2013, at 8:45 PM, Geoff Nordli geo...@gnaa.net wrote:
I am looking at the performance numbers for the Oracle VDI admin guide.
http://docs.oracle.com/html/E26214_02/performance-storage.html
From my calculations for 200 desktops running Windows 7 knowledge user (15
iops) with a
On Jan 3, 2013, at 12:33 PM, Eugen Leitl eu...@leitl.org wrote:
On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
Happy $holidays,
I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
Just a little update on the home NAS project.
I've set the pool sync to
On Jan 2, 2013, at 2:03 AM, Eugen Leitl eu...@leitl.org wrote:
On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
On Dec 30, 2012, at 9:02 AM, Eugen Leitl eu...@leitl.org wrote:
The system is a MSI E350DM-E33 with 8 GByte PC1333 DDR3
memory, no ECC. All the systems have Intel
On Dec 30, 2012, at 9:02 AM, Eugen Leitl eu...@leitl.org wrote:
Happy $holidays,
I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
a raidz3 (no compression nor dedup) with reasonable bonnie++
1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and 291 MByte/s
Seq-Read @ 53%
On Dec 6, 2012, at 5:30 AM, Matt Van Mater matt.vanma...@gmail.com wrote:
I'm unclear on the best way to warm data... do you mean to simply `dd
if=/volumes/myvol/data of=/dev/null`? I have always been under the
impression that ARC/L2ARC has rate limiting on how much data can be added to
On Dec 5, 2012, at 5:41 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2012-12-05 04:11, Richard Elling wrote:
On Nov 29, 2012, at 1:56 AM, Jim Klimov jimkli...@cos.ru
mailto:jimkli...@cos.ru wrote:
I've heard a claim that ZFS relies too much on RAM caching, but
implements no sort of priorities
On Dec 5, 2012, at 7:46 AM, Matt Van Mater matt.vanma...@gmail.com wrote:
I don't have anything significant to add to this conversation, but wanted to
chime in that I also find the concept of a QOS-like capability very appealing
and that Jim's recent emails resonate with me. You're not
bug fix below...
On Dec 5, 2012, at 1:10 PM, Richard Elling richard.ell...@gmail.com wrote:
On Dec 5, 2012, at 7:46 AM, Matt Van Mater matt.vanma...@gmail.com wrote:
I don't have anything significant to add to this conversation, but wanted to
chime in that I also find the concept of a QOS
On Nov 29, 2012, at 1:56 AM, Jim Klimov jimkli...@cos.ru wrote:
I've heard a claim that ZFS relies too much on RAM caching, but
implements no sort of priorities (indeed, I've seen no knobs to
tune those) - so that if the storage box receives many different
types of IO requests with different
On Dec 1, 2012, at 6:54 PM, Nikola M. minik...@gmail.com wrote:
On 12/ 2/12 03:24 AM, Nikola M. wrote:
It is using Solaris Zones and throttling their disk usage on that level,
so you separate workload processes on separate zones.
Or even put KVM machines under the zones (Joyent and OI support
On Nov 23, 2012, at 11:56 AM, Fabian Keil freebsd-lis...@fabiankeil.de wrote:
Just in case your GNU/Linux experiments don't work out, you could
also try ZFS on Geli on FreeBSD which works reasonably well.
For illumos-based distros or Solaris 11, using ZFS with lofi has been
well discussed
On Nov 13, 2012, at 12:08 PM, Peter Tripp pe...@psych.columbia.edu wrote:
Hi folks,
I'm in the market for a couple of JBODs. Up until now I've been relatively
lucky with finding hardware that plays very nicely with ZFS. All my gear
currently in production uses LSI SAS controllers
On Oct 22, 2012, at 6:52 AM, Chris Nagele nag...@wildbit.com wrote:
If it stays there after it decreases in size, it might be similar to:
7111576 arc shrinks in the absence of memory pressure
After it dropped, it did build back up. Today is the first day that
these servers are
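A low-impact way to watch this behaviour over time is the arcstats kstat; a minimal sketch (the 5-second interval is arbitrary):
# kstat -p zfs:0:arcstats:size 5      # current ARC size in bytes
# kstat -p zfs:0:arcstats:c 5         # current ARC target size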
On Oct 19, 2012, at 4:59 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
At some point, people
On Oct 19, 2012, at 1:04 AM, Michel Jansens michel.jans...@ulb.ac.be wrote:
On 10/18/12 21:09, Michel Jansens wrote:
Hi,
I've been using a Solaris 10 update 9 machine for some time to replicate
filesystems from different servers through zfs send|ssh zfs receive.
This was done to store
On Oct 19, 2012, at 6:37 AM, Eugen Leitl eu...@leitl.org wrote:
Hi,
I would like to give a short talk at my organisation in order
to sell them on zfs in general, and on zfs-all-in-one and
zfs as remote backup (zfs send).
Googling will find a few shorter presos. I have full-day presos on
On Oct 19, 2012, at 12:16 AM, James C. McPherson j...@opensolaris.org wrote:
On 19/10/12 04:50 PM, Jim Klimov wrote:
Hello all,
I have one more thought - or a question - about the current
strangeness of rpool import: is it supported, or does it work,
to have rpools on multipathed devices?
On Oct 12, 2012, at 5:50 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
Pedantically, a pool can be made in a file, so it works the same...
Pool can only be made
Hi John,
comment below...
On Oct 11, 2012, at 3:10 AM, Carsten John cj...@mpi-bremen.de wrote:
Hello everybody,
I just wanted to share my experience with a (partially) broken SSD that was
in use in a ZIL mirror.
We experienced a dramatic performance problem with one of our zpools,
On Oct 11, 2012, at 6:03 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
Read it again; he asked, On that note, is there a minimal user-mode zfs thing
that would
On Oct 11, 2012, at 2:58 PM, Phillip Wagstrom phillip.wagst...@gmail.com
wrote:
On Oct 11, 2012, at 4:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire disk for ZFS and not to
On Oct 10, 2012, at 9:29 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
If the recipient system
On Oct 7, 2012, at 3:50 PM, Johannes Totz johan...@jo-t.de wrote:
On 05/10/2012 15:01, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tiernan OToole
I am in the process of
On Oct 5, 2012, at 1:57 PM, Albert Shih albert.s...@obspm.fr wrote:
Hi all,
I'm actually running ZFS under FreeBSD. I've a question about how many
disks I «can» have in one pool.
At this moment I'm running with one server (FreeBSD 9.0) with 4 MD1200
(Dell) meaning 48 disks. I've
On Oct 4, 2012, at 8:58 AM, Jan Owoc jso...@gmail.com wrote:
Hi,
I have a machine whose zpools are at version 28, and I would like to
keep them at that version for portability between OSes. I understand
that 'zpool status' asks me to upgrade, but so does 'zpool status -x'
(the man page
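For what it's worth, a pool can also be created at a fixed version to keep it portable across OSes; a hedged example (pool name and devices are made up):
# zpool create -o version=28 portable mirror c0t0d0 c0t1d0
# zpool get version portable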
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber dswa...@druber.com wrote:
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber dswa...@druber.com wrote:
This whole thread has been fascinating. I really wish we (OI) had the two
following things
Thanks Neil, we always appreciate your comments on ZIL implementation.
One additional comment below...
On Oct 4, 2012, at 8:31 AM, Neil Perrin neil.per...@oracle.com wrote:
On 10/04/12 05:30, Schweiss, Chip wrote:
Thanks for all the input. It seems information on the performance of the
On Oct 4, 2012, at 1:33 PM, Schweiss, Chip c...@innovates.com wrote:
Again thanks for the input and clarifications.
I would like to clarify the numbers I was talking about with regard to ZIL
performance specs I saw discussed on other forums. Right now I'm getting
streaming performance
If you've been hiding under a rock, not checking your email, then you might
not have heard about the Next Big Whopper Event for ZFS Fans: ZFS Day!
The agenda is now set and the teams are preparing to descend towards San
Francisco's Moscone Center vortex for a full day of ZFS. I'd love to see
On Sep 26, 2012, at 10:54 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Here's another one.
Two identical servers are sitting side by side. They could be connected to
each other via anything (presently using
On Sep 26, 2012, at 4:28 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 09/26/2012 01:14 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Got me
On Sep 25, 2012, at 12:30 PM, Jim Klimov jimkli...@cos.ru wrote:
Hello all,
With the original old ZFS iSCSI implementation there was
a shareiscsi property for the zvols to be shared out,
and I believe all configuration pertinent to the iSCSI
server was stored in the pool options (I may be
On Sep 25, 2012, at 11:17 AM, Jason Usher jushe...@yahoo.com wrote:
Ok - but from a performance point of view, I am only using
ram/cpu resources for the deduping of just the individual
filesystems I enabled dedupe on, right ? I hope that
turning on dedupe for just one filesystem did not
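Dedup is indeed a per-dataset property, so a quick way to confirm where it is actually enabled (names are illustrative):
# zfs set dedup=on tank/dedup-only
# zfs get -r -s local,inherited dedup tank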
On Sep 25, 2012, at 1:32 PM, Jim Klimov jimkli...@cos.ru wrote:
2012-09-26 0:21, Richard Elling wrote:
Does this mean that importing a pool with iSCSI zvols
on a fresh host (LiveCD instance on the same box, or
via failover of shared storage to a different host)
will not be able
On Sep 25, 2012, at 1:46 PM, Jim Klimov jimkli...@cos.ru wrote:
2012-09-24 21:08, Jason Usher wrote:
Ok, thank you. The problem with this is, the
compressratio only goes to two significant digits, which
means if I do the math, I'm only getting an
approximation. Since we may use these
On Sep 24, 2012, at 10:08 AM, Jason Usher jushe...@yahoo.com wrote:
Oh, and one other thing ...
--- On Fri, 9/21/12, Jason Usher jushe...@yahoo.com wrote:
It shows the allocated number of bytes used by the
filesystem, i.e.
after compression. To get the uncompressed size,
multiply
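Regarding that arithmetic: on builds that have them, the logicalused and logicalreferenced properties report the uncompressed figure directly, avoiding the two-significant-digit rounding; a quick sketch (dataset name is made up):
# zfs get used,compressratio,logicalused tank/data
uncompressed size  ~=  used x compressratio   (approximate only)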
On Sep 20, 2012, at 10:05 PM, Stefan Ring stefan...@gmail.com wrote:
On Fri, Sep 21, 2012 at 6:31 AM, andy thomas a...@time-domain.co.uk wrote:
I have a ZFS filesystem and create weekly snapshots over a period of 5 weeks
called week01, week02, week03, week04 and week05 respectively. Ny
Hi Bogdan,
On Sep 21, 2012, at 4:00 AM, Bogdan Ćulibrk b...@default.rs wrote:
Greetings,
I'm trying to achieve selective output of the zfs list command for a specific
user, to show only delegated datasets. Does anyone know how to achieve this?
There are several ways, but no builtin way, today. Can you
On Sep 18, 2012, at 7:31 AM, Eugen Leitl eu...@leitl.org wrote:
Can I actually have a year's worth of snapshots in
zfs without too much performance degradation?
I've got 6 years of snapshots with no degradation :-)
In general, there is not a direct correlation between snapshot count and
On Sep 15, 2012, at 6:03 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Sat, 15 Sep 2012, Dave Pooser wrote:
The problem: so far the send/recv appears to have copied 6.25TB of
5.34TB.
That... doesn't look right. (Comparing zfs list -t snapshot and looking at
the 5.34 ref
On Sep 12, 2012, at 12:44 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
I send a replication data stream from one host to another. (and receive).
I discovered that after receiving, I need to remove the auto-snapshot
For illumos-based distributions, there are written and written@ properties
that show the amount of data written to each snapshot. This helps clear up
the confusion over the way the used property is accounted.
https://www.illumos.org/issues/1645
-- richard
On Aug 29, 2012, at 11:12 AM,
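A minimal example of reading the written/written@ properties mentioned above (dataset and snapshot names are made up):
# zfs get written tank/home            # space written since the most recent snapshot
# zfs get written@monday tank/home     # space written since the snapshot 'monday'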
On Aug 24, 2012, at 6:50 AM, Sašo Kiselkov wrote:
This is something I've been looking into in the code and my take on your
proposed points is this:
1) This requires many and deep changes across much of ZFS's architecture
(especially the ability to sustain tlvdev failures).
2) Most of this
On Aug 13, 2012, at 2:24 AM, Sašo Kiselkov wrote:
On 08/13/2012 10:45 AM, Scott wrote:
Hi Saso,
thanks for your reply.
If all disks are the same, is the root pointer the same?
No.
Also, is there a signature or something unique to the root block that I can
search for on the disk?
On Aug 13, 2012, at 8:59 PM, Scott wrote:
On Mon, Aug 13, 2012 at 10:40:45AM -0700, Richard Elling wrote:
On Aug 13, 2012, at 2:24 AM, Sašo Kiselkov wrote:
On 08/13/2012 10:45 AM, Scott wrote:
Hi Saso,
thanks for your reply.
If all disks are the same, is the root pointer the same
On Aug 9, 2012, at 4:11 AM, joerg.schill...@fokus.fraunhofer.de (Joerg
Schilling) wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
On 08/09/2012 01:05 PM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
To me it seems that the open-sourced ZFS community is not open,
On Aug 2, 2012, at 5:40 PM, Nigel W wrote:
On Thu, Aug 2, 2012 at 3:39 PM, Richard Elling richard.ell...@gmail.com
wrote:
On Aug 1, 2012, at 8:30 AM, Nigel W wrote:
Yes. +1
The L2ARC as it is currently implemented is not terribly useful for
storing the DDT anyway because each DDT
On Aug 1, 2012, at 2:41 PM, Peter Jeremy wrote:
On 2012-Aug-01 21:00:46 +0530, Nigel W nige...@nosun.ca wrote:
I think a fantastic idea for dealing with the DDT (and all other
metadata for that matter) would be an option to put (a copy of)
metadata exclusively on a SSD.
This is on my
On Aug 1, 2012, at 8:30 AM, Nigel W wrote:
On Wed, Aug 1, 2012 at 8:33 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 08/01/2012 04:14 PM, Jim Klimov wrote:
chances are that
some blocks of userdata might be more popular than a DDT block and
would push it out of L2ARC as well...
Which
On Aug 1, 2012, at 12:21 AM, Suresh Kumar wrote:
Dear ZFS-Users,
I am using Solaris x86 10u10. All the devices which belong to my zpool
are in the available state.
But I am unable to import the zpool.
#zpool import tXstpool
cannot import 'tXstpool': one or more devices is currently
On Jul 31, 2012, at 8:05 PM, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
I believe what you meant to say was dedup with HDDs sux. If you had
used fast SSDs instead of HDDs
On Aug 1, 2012, at 8:04 AM, Jesse Jamez wrote:
Hello,
I recently rebooted my workstation and the disk names changed causing my ZFS
pool to be unavailable.
What OS and release?
I did not make any hardware changes. My first question is the obvious one: did
I lose my data? Can I recover
On Jul 31, 2012, at 10:07 AM, Nigel W wrote:
On Tue, Jul 31, 2012 at 9:36 AM, Ray Arachelian r...@arachelian.com wrote:
On 07/31/2012 09:46 AM, opensolarisisdeadlongliveopensolaris wrote:
Dedup: First of all, I don't recommend using dedup under any
circumstance. Not that it's unstable or
On Jul 30, 2012, at 10:20 AM, Roy Sigurd Karlsbakk wrote:
- Original message -
On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
r...@karlsbakk.net wrote:
Also keep in mind that if you have an SLOG (ZIL on a separate
device), and then lose this SLOG (disk crash etc), you will
On Jul 30, 2012, at 12:25 PM, Tim Cook wrote:
On Mon, Jul 30, 2012 at 12:44 PM, Richard Elling richard.ell...@gmail.com
wrote:
On Jul 30, 2012, at 10:20 AM, Roy Sigurd Karlsbakk wrote:
- Original message -
On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
r...@karlsbakk.net
On Jul 29, 2012, at 7:07 AM, Jim Klimov wrote:
Hello, list
For several times now I've seen statements on this list implying
that a dedicated ZIL/SLOG device catching sync writes for the log,
also allows for more streamlined writes to the pool during normal
healthy TXG syncs, than is the
On Jul 29, 2012, at 1:53 PM, Jim Klimov wrote:
2012-07-30 0:40, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
For several times now I've seen statements on this list implying
think is 4KB might look very different coming out of ESXi. Use nfssvrtop
or one of the many dtrace one-liners for observing NFS traffic to see what is
really on the wire. And I'm very interested to know if you see 16KB reads
during the write-only workload.
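One such one-liner, sketched with the nfsv3 DTrace provider, histograms the read/write sizes actually requested on the wire (run on the NFS server; assumes NFSv3 traffic):
# dtrace -n 'nfsv3:::op-read-start,nfsv3:::op-write-start
    { @[probename] = quantize(args[2]->count); }'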
more below...
From: Richard Elling
Important question, what is the interconnect? iSCSI? FC? NFS?
-- richard
On Jul 24, 2012, at 9:44 AM, matth...@flash.shanje.com wrote:
Working on a POC for high IO workloads, and I’m running into a bottleneck
that I’m not sure I can solve. Testbed looks like this:
SuperMicro 6026-6RFT+
On Jul 22, 2012, at 10:18 PM, Yuri Vorobyev wrote:
Hello.
I'm faced with a strange performance problem with a new disk shelf.
We have been using a ZFS system with SATA disks for a while.
What OS and release?
-- richard
It is Supermicro SC846-E16 chassis, Supermicro X8DTH-6F motherboard with 96Gb
On Jul 16, 2012, at 2:43 AM, Michael Hase wrote:
Hello list,
did some bonnie++ benchmarks for different zpool configurations
consisting of one or two 1tb sata disks (hitachi hds721010cla332, 512
bytes/sector, 7.2k), and got some strange results, please see
attachments for exact numbers
Thanks Sašo!
Comments below...
On Jul 10, 2012, at 4:56 PM, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256.
No need to supplant, there are 8 bits for enumerating hash
On Jul 11, 2012, at 10:11 AM, Bob Friesenhahn wrote:
On Wed, 11 Jul 2012, Richard Elling wrote:
The last studio release suitable for building OpenSolaris is available in
the repo.
See the instructions at
http://wiki.illumos.org/display/illumos/How+To+Build+illumos
Not correct as far as I
On Jul 11, 2012, at 10:23 AM, Sašo Kiselkov wrote:
Hi Richard,
On 07/11/2012 06:58 PM, Richard Elling wrote:
Thanks Sašo!
Comments below...
On Jul 10, 2012, at 4:56 PM, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
On Jul 11, 2012, at 1:06 PM, Bill Sommerfeld wrote:
on a somewhat less serious note, perhaps zfs dedup should contain Chinese
Lottery code (see http://tools.ietf.org/html/rfc3607 for one explanation)
which asks the sysadmin to report a detected sha-256 collision to
eprint.iacr.org or the
To amplify what Mike says...
On Jul 10, 2012, at 5:54 AM, Mike Gerdts wrote:
ls(1) tells you how much data is in the file - that is, how many bytes
of data that an application will see if it reads the whole file.
du(1) tells you how many disk blocks are used. If you look at the
stat
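A tiny illustration of the difference using a sparse file (path is arbitrary):
# dd if=/dev/zero of=/tmp/sparse bs=1 count=1 seek=1073741824
# ls -l /tmp/sparse      # reports roughly 1 GB of logical data
# du -h /tmp/sparse      # reports only the few blocks actually allocated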
First things first, the panic is a bug. Please file one with your OS supplier.
More below...
On Jul 6, 2012, at 4:55 PM, Ian Collins wrote:
On 07/ 7/12 11:29 AM, Brian Wilson wrote:
On 07/ 6/12 04:17 PM, Ian Collins wrote:
On 07/ 7/12 08:34 AM, Brian Wilson wrote:
Hello,
I'd like a sanity
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI target
and initiator behaviour.
-- richard
On Jun 28, 2012, at 10:47 PM, Ian Collins wrote:
I'm trying to work out the cause and a remedy for a very sick iSCSI pool on a
Solaris 11 host.
The volume is exported from
On Jun 25, 2012, at 10:55 AM, Philip Brown wrote:
I ran into something odd today:
zfs destroy -r random/filesystem
is mindbogglingly slow. But it seems to me, it shouldn't be.
It's slow, because the filesystem has two snapshots on it. Presumably, it's
busy rolling back the snapshots.
but
On Jun 20, 2012, at 4:08 AM, Jim Klimov wrote:
Also by default if you don't give the whole drive to ZFS, its cache
may be disabled upon pool import and you may have to reenable it
manually (if you only actively use this disk for one or more ZFS
pools - which play with caching nicely).
This
On Jun 20, 2012, at 5:08 PM, Jim Klimov wrote:
2012-06-21 1:58, Richard Elling wrote:
On Jun 20, 2012, at 4:08 AM, Jim Klimov wrote:
Also by default if you don't give the whole drive to ZFS, its cache
may be disabled upon pool import and you may have to reenable it
The behaviour
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
By the way, when you format, start with cylinder 1; do not use 0.
There is no requirement for skipping cylinder 0 for root on Solaris, and there
never has been.
-- richard
--
ZFS and performance consulting
http://www.RichardElling.com
[Phil beat me to it]
Yes, the 0s are a result of integer division in DTrace/kernel.
On Jun 14, 2012, at 9:20 PM, Timothy Coalson wrote:
Indeed they are there, shown with 1 second interval. So, it is the
client's fault after all. I'll have to see whether it is somehow
possible to get the
On Jun 14, 2012, at 1:35 PM, Robert Milkowski wrote:
The client is using async writes, that include commits. Sync writes do
not need commits.
What happens is that the ZFS transaction group commit occurs at more-
or-less regular intervals, likely 5 seconds for more modern ZFS
systems. When
On Jun 13, 2012, at 4:51 PM, Daniel Carosone wrote:
On Wed, Jun 13, 2012 at 05:56:56PM -0500, Timothy Coalson wrote:
client: ubuntu 11.10
/etc/fstab entry: server:/mainpool/storage /mnt/myelin nfs
bg,retry=5,soft,proto=tcp,intr,nfsvers=3,noatime,nodiratime,async 0
0
Hi Tim,
On Jun 14, 2012, at 12:20 PM, Timothy Coalson wrote:
Thanks for the script. Here is some sample output from 'sudo
./nfssvrtop -b 512 5' (my disks are 512B-sector emulated and the pool
is ashift=9, some benchmarking didn't show much difference with
ashift=12 other than giving up 8%
On Jun 11, 2012, at 6:05 AM, Jim Klimov wrote:
2012-06-11 5:37, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kalle Anka
Assume we have 100 disks in one zpool. Assume it takes 5 hours to scrub
one
disk. If I
On Jun 6, 2012, at 12:48 AM, Sašo Kiselkov wrote:
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of