On 02/27/2013 12:32 PM, Ahmed Kamal wrote:
How is the quality of the ZFS Linux port today? Is it comparable to Illumos
or at least FreeBSD? Can I trust production data to it?
Can't speak from personal experience, but a colleague of mine has been
running the PPA builds on Ubuntu and has had, well, less
On 02/26/2013 09:33 AM, Tiernan OToole wrote:
As a follow-up question on data deduplication: the machine, to start, will
have about 5GB RAM. I read somewhere that 20TB of storage would require about
8GB RAM, depending on block size...
The typical wisdom is that 1TB of dedup'ed data = 1GB of RAM.
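By that rule of thumb, a quick sanity check against the numbers above
(assuming the full 20TB ends up dedup'ed and mostly unique):

  20TB x 1GB/TB = ~20GB of RAM just to keep the DDT resident

so 5GB (or even 8GB) would leave much of the DDT on disk, with the
corresponding performance hit on writes.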
On 02/26/2013 03:51 PM, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
this list is going to get shut down by Oracle next month.
Whose description still reads "everything ZFS running
On 02/26/2013 05:57 PM, Eugen Leitl wrote:
On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
I can't seem to find this list. Do you have a URL
On 02/21/2013 04:02 PM, Markus Grundmann wrote:
On 02/21/2013 03:34 PM, Jan Owoc wrote:
Does this do what you want? (zpool destroy is already undo-able) Jan
Jan, that's not what I want.
I want to set a property that enables/disables all modifications via zpool
commands (e.g. zfs destroy, zfs
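For reference, the undo Jan refers to looks roughly like this (pool name
hypothetical):

  # zpool destroy tank
  # zpool import -D           (lists destroyed, not-yet-overwritten pools)
  # zpool import -D tank      (re-imports the destroyed pool)

There is no such safety net for zfs destroy of individual datasets, which
is presumably what worries Markus.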
On 02/21/2013 12:27 AM, Peter Wood wrote:
Will adding another vdev hurt the performance?
In general, the answer is: no. ZFS will try to balance writes to
top-level vdevs in a fashion that assures even data distribution. If
your data is equally likely to be hit in all places, then you will not
On 02/17/2013 06:40 AM, Ian Collins wrote:
Toby Thain wrote:
Signed up, thanks.
The ZFS list has been very high value and I thank everyone whose wisdom
I have enjoyed, especially people like you, Sašo, Mr Elling, Mr
Friesenhahn, Mr Harvey, the distinguished Sun and Oracle engineers who
post
On 02/16/2013 06:44 PM, Tim Cook wrote:
We've got Oracle employees on the mailing list, that while helpful, in no
way have the authority to speak for company policy. They've made that
clear on numerous occasions. And that doesn't change the fact that we
literally have heard NOTHING from
On 02/16/2013 09:49 PM, John D Groenveld wrote:
Boot with kernel debugger so you can see the panic.
Sadly, though, without access to the source code, all he can do at that
point is log a support ticket with Oracle (assuming he has paid his
support fees) and hope it will get picked up by somebody
On 02/16/2013 10:47 PM, James C. McPherson wrote:
On 17/02/13 06:54 AM, Sašo Kiselkov wrote:
On 02/16/2013 09:49 PM, John D Groenveld wrote:
Boot with kernel debugger so you can see the panic.
Sadly, though, without access to the source code, all he can do at that
point is log a support
On 02/15/2013 03:39 PM, Tyler Walter wrote:
As someone who has zero insider information and feels that there isn't
much push at Oracle to develop or release new zfs features, I have to
assume it's not coming. The only way I see it becoming a reality is if
someone in the illumos community
On 02/13/2013 04:30 PM, Kiley, Heather L (IS) wrote:
I am trying to replace a failed disk on my zfs system.
I replaced the disk and while the physical drive status is now OK, my logical
drive is still failed.
When I do a zpool status, the new disk comes up as unavailable:
spare
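The preview cuts off at the spare line; the usual way out of this state is
something like the following (pool and device names hypothetical):

  # zpool replace tank c0t3d0     (re-issue the replace against the new disk)
  # zpool status tank             (wait for the resilver to complete)
  # zpool detach tank c0t9d0      (then detach the hot spare)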
On 02/10/2013 01:01 PM, Koopmann, Jan-Peter wrote:
Why should it?
I believe currently only Nexenta but correct me if I am wrong
The code was mainlined a while ago, see:
https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/comstar/lu/stmf_sbd/sbd.c#L3702-L3730
On 02/11/2013 04:53 PM, Borja Marcos wrote:
Hello,
I'm updating Devilator, the performance data collector for Orca and FreeBSD
to include ZFS monitoring. So far I am graphing the ARC and L2ARC size, L2ARC
writes and reads, and several hit/misses data pairs.
Any suggestions to improve
On 01/31/2013 11:16 PM, Albert Shih wrote:
Hi all,
I'm not sure if the problem is with FreeBSD or ZFS or both, so I cross-post
(I know it's bad).
Well, I have a server running FreeBSD 9.0 with (not counting /, which is on
different disks) a zfs pool with 36 disks.
The performance is very very good on
On 02/05/2013 05:04 PM, Sašo Kiselkov wrote:
On 01/31/2013 11:16 PM, Albert Shih wrote:
Hi all,
I'm not sure if the problem is with FreeBSD or ZFS or both, so I cross-post
(I know it's bad).
Well, I have a server running FreeBSD 9.0 with (not counting /, which is on
different disks) a zfs pool with 36
On 01/29/2013 02:59 PM, Robert Milkowski wrote:
It also has a lot of performance improvements and general bug fixes
in
the Solaris 11.1 release.
Performance improvements such as?
Dedup'ed ARC for one.
The zero block is automatically dedup'ed in memory.
Improvements to ZIL performance.
Zero-copy
On 01/29/2013 03:08 PM, Robert Milkowski wrote:
From: Richard Elling
Sent: 21 January 2013 03:51
VAAI has 4 features, 3 of which have been in illumos for a long time. The
remaining
feature (SCSI UNMAP) was done by Nexenta and exists in their NexentaStor
product,
but the CEO made a
On 01/22/2013 12:30 PM, Darren J Moffat wrote:
On 01/21/13 17:03, Sašo Kiselkov wrote:
Again, what significant features did they add besides encryption? I'm
not saying they didn't, I'm just not aware of that many.
Just a few examples:
Solaris ZFS already has support for 1MB block size
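For context, the large-block support mentioned here is exposed through the
recordsize property; a minimal sketch (dataset name hypothetical):

  # zfs set recordsize=1M tank/data
  # zfs get recordsize tank/data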
On 01/22/2013 02:20 PM, Michel Jansens wrote:
Maybe 'shadow migration' ? (eg: zfs create -o shadow=nfs://server/dir
pool/newfs)
Hm, interesting, so it works as a sort of replication system, except
that the data needs to be read-only and you can start accessing it on
the target before the
On 01/22/2013 02:39 PM, Darren J Moffat wrote:
On 01/22/13 13:29, Darren J Moffat wrote:
Since I'm replying here are a few others that have been introduced in
Solaris 11 or 11.1.
and another one I can't believe I missed since I was one of the people
that helped design it and I did
On 01/22/2013 04:32 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Darren J Moffat [mailto:darr...@opensolaris.org]
Support for SCSI UNMAP - both issuing it and honoring it when it is the
backing store of an iSCSI target.
When I search for scsi unmap, I come up
On 01/22/2013 05:00 PM, casper@oracle.com wrote:
Some vendors call this (and things like it) Thin Provisioning; I'd say
it is more accurately communication between 'disk' and filesystem about
in-use blocks.
In some cases, users of disks are charged by bytes in use; when not using
SCSI
On 01/22/2013 05:34 PM, Darren J Moffat wrote:
On 01/22/13 16:02, Sašo Kiselkov wrote:
On 01/22/2013 05:00 PM, casper@oracle.com wrote:
Some vendors call this (and things like it) Thin Provisioning; I'd say
it is more accurately communication between 'disk' and filesystem about
in-use
On 01/22/2013 10:45 PM, Jim Klimov wrote:
On 2013-01-22 14:29, Darren J Moffat wrote:
Preallocated ZVOLs - for swap/dump.
Or is it also supported to disable COW for such datasets, so that
the preallocated swap/dump zvols might remain contiguous on the
faster tracks of the drive (i.e. like a
On 01/22/2013 11:22 PM, Jim Klimov wrote:
On 2013-01-22 23:03, Sašo Kiselkov wrote:
On 01/22/2013 10:45 PM, Jim Klimov wrote:
On 2013-01-22 14:29, Darren J Moffat wrote:
Preallocated ZVOLs - for swap/dump.
Or is it also supported to disable COW for such datasets, so that
the preallocated
On 01/21/2013 02:28 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
I disagree that ZFS is developmentally challenged.
As an IT consultant, 8 years ago before I heard of ZFS, it was always easy
to sell Ontap, as long
On 01/22/2013 03:56 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
as far as incompatibility among products, I've yet to come
across it
I was talking about ... install solaris 11, and it's using a new version
of zfs
On 01/08/2013 04:27 PM, mark wrote:
On Jul 2, 2012, at 7:57 PM, Richard Elling wrote:
FYI, HP also sells an 8-port IT-style HBA (SC-08Ge), but it is hard to
locate
with their configurators. There might be a more modern equivalent cleverly
hidden somewhere difficult to find.
-- richard
On 01/07/2013 09:32 PM, Tim Fletcher wrote:
On 07/01/13 14:01, Andrzej Sochon wrote:
Hello *Sašo*!
I found you here:
http://mail.opensolaris.org/pipermail/zfs-discuss/2012-May/051546.html
“How about reflashing LSI firmware to the card? I read on Dell's spec
sheets that the card runs an
On 11/14/2012 11:14 AM, Michel Jansens wrote:
Hi,
I've ordered a new server with:
- 4x600GB Toshiba 10K SAS2 Disks
- 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no
SAS/SATA problems). Specs:
http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
I
We've got a SC847E26-RJBOD1. Takes a bit of getting used to that you
have to wire it yourself (plus you need to buy a pair of internal
SFF-8087 cables to connect the back and front backplanes - incredibly,
SuperMicro doesn't provide those out of the box), but other than that,
never had a problem
On 11/07/2012 12:39 PM, Tiernan OToole wrote:
Morning all...
I have a dedicated server in a data center in Germany, and it has two 3TB
drives, but only software RAID. I have got them to install VMware ESXi and
so far everything is going ok... I have the 2 drives as standard data
stores...
On 11/07/2012 01:16 PM, Eugen Leitl wrote:
I'm very interested, as I'm currently working on an all-in-one with
ESXi (using N40L for prototype and zfs send target, and a Supermicro
ESXi box for production with guests, all booted from USB internally
and zfs snapshot/send source).
Well, seeing
On 10/25/2012 05:59 AM, Jerry Kemp wrote:
I have just acquired a new JBOD box that will be used as a media
center/storage for home use only on my x86/x64 box running OpenIndiana
b151a7 currently.
It's strictly a JBOD, no hw raid options, with an eSATA port to each drive.
I am looking for
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and
are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
cards with LSI, because buying directly from them
On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and
are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
cards with LSI
On 10/25/2012 04:28 PM, Patrick Hahn wrote:
On Thu, Oct 25, 2012 at 10:13 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards
On 10/25/2012 05:40 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100
and
are essentially rebranded LSI 9200-8e
On 09/26/2012 01:14 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Got me wondering: how many reads of a block from spinning rust
suffice for it to ultimately
On 09/26/2012 05:08 PM, Matt Van Mater wrote:
I've looked on the mailing list (the evil tuning wikis are down) and
haven't seen a reference to this seemingly simple question...
I have two OCZ Vertex 4 SSDs acting as L2ARC. I have a spare Crucial SSD
(about 1.5 years old) that isn't getting
On 09/26/2012 05:18 PM, Matt Van Mater wrote:
If the added device is slower, you will experience a slight drop in
per-op performance, however, if your working set needs another SSD,
overall it might improve your throughput (as the cache hit ratio will
increase).
Thanks for your fast
On 09/25/2012 09:38 PM, Jim Klimov wrote:
2012-09-11 16:29, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
My first thought was everything is
hitting in
On 09/21/2012 01:34 AM, Jason Usher wrote:
Hi,
I have a ZFS filesystem with compression turned on. Does the used property
show me the actual data size, or the compressed data size? If it shows me
the compressed size, where can I see the actual data size?
It shows the allocated number
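A quick way to see both numbers (dataset name hypothetical; newer builds
also expose a logicalused property for the uncompressed size):

  $ zfs get used,compressratio tank/fs

Multiplying used by compressratio gives a reasonable approximation of the
uncompressed data size.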
Have you tried a zpool clear and subsequent scrub to see if the error
pops up again?
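For reference, that sequence is just (pool name hypothetical):

  # zpool clear tank          (reset the error counters)
  # zpool scrub tank          (re-verify every block against its checksum)
  # zpool status -v tank      (check whether any errors reappear)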
Cheers,
--
Saso
On 09/20/2012 09:45 AM, Stephan Budach wrote:
Hi,
a couple of days ago we had an issue with one of our FC switches which led
to a switch restart. Due to this issue the zpool vdevs had been
On 09/18/2012 04:31 PM, Eugen Leitl wrote:
I'm currently thinking about rolling a variant of
http://www.napp-it.org/napp-it/all-in-one/index_en.html
with remote backup (via snapshot and send) to 2-3
other (HP N40L-based) zfs boxes for production in
our organisation. The systems
On 09/11/2012 03:32 PM, Dan Swartzendruber wrote:
I think you may have a point. I'm also inclined to enable prefetch caching
per Saso's comment, since I don't have massive throughput - latency is more
important to me.
I meant to say the exact opposite: enable prefetch caching only if your
On 09/11/2012 03:41 PM, Dan Swartzendruber wrote:
LOL, I actually was unclear, not you. I understood what you were saying,
sorry for being unclear. I have 4 disks in raid10, so my max random read
throughput is theoretically somewhat faster than the L2ARC device, but I
never really do that
On 09/11/2012 04:06 PM, Dan Swartzendruber wrote:
Thanks a lot for clarifying how this works.
You're very welcome.
Since I'm quite happy
having an SSD in my workstation, I will need to purchase another SSD :) I'm
wondering if it makes more sense to buy two SSDs of half the size (e.g.
On 09/05/2012 05:06 AM, Yaverot wrote:
What is the smallest sized drive I may use to replace this dead drive?
That information has to be someplace because ZFS will say that drive Q is too
small. Is there an easy way to query that information?
I use fdisk to find this out. For instance say
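The tail of that reply is cut off in the archive; the kind of check meant is
roughly (pool and device names hypothetical):

  # zpool list tank                    (SIZE column: the pool's view)
  # prtvtoc /dev/rdsk/c0t2d0s0         (sector count of the candidate disk)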
On 08/30/2012 12:07 PM, Anonymous wrote:
Hi. I have a spare off the shelf consumer PC and was thinking about loading
Solaris on it for a development box since I use Studio @work and like it
better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
has only one drive. If ZFS
On 08/30/2012 04:08 PM, Nomen Nescio wrote:
Hi. I have a spare off the shelf consumer PC and was thinking about loading
Solaris on it for a development box since I use Studio @work and like it
better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
has only one drive. If
On 08/30/2012 04:22 PM, Anonymous wrote:
On 08/30/2012 12:07 PM, Anonymous wrote:
Hi. I have a spare off the shelf consumer PC and was thinking about loading
Solaris on it for a development box since I use Studio @work and like it
better than gcc. I was thinking maybe it isn't so smart to use
On 08/26/2012 07:40 AM, Yuri Vorobyev wrote:
Can someone with Supermicro JBOD equipped with SAS drives and LSI
HBA do this sequential read test?
Did that on a SC847 with 45 drives, read speeds around 2GB/s aren't a
problem.
Don't forget to set primarycache=none on testing dataset.
There's
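A minimal version of the suggested test (pool/dataset names hypothetical,
and assuming compression is off so the zeros aren't collapsed):

  # zfs create -o primarycache=none tank/bench
  # dd if=/dev/zero of=/tank/bench/big bs=1M count=16384
  # dd if=/tank/bench/big of=/dev/null bs=1M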
On 08/27/2012 10:37 AM, Yuri Vorobyev wrote:
Is there any way to disable ARC for testing and leave prefetch enabled?
No. The reason is quite simply that prefetch is a mechanism separate
from your direct application's read requests. Prefetch runs ahead of
your anticipated read requests and
On 08/27/2012 12:58 PM, Yuri Vorobyev wrote:
27.08.2012 14:43, Sašo Kiselkov wrote:
Is there any way to disable ARC for testing and leave prefetch enabled?
No. The reason is quite simply that prefetch is a mechanism separate
from your direct application's read requests. Prefetch runs
On 08/27/2012 09:02 PM, Mark Wolek wrote:
RAIDz set, lost a disk, replaced it... lost another disk during resilver.
Replaced it, ran another resilver, and now it shows all disks with too many
errors.
Safe to say this is getting rebuilt and restored, or is there hope to recover
some of
On 08/25/2012 11:53 AM, Jim Klimov wrote:
No they're not; here's l2arc_buf_hdr_t, a per-buffer structure held
for buffers which were moved to L2ARC:
typedef struct l2arc_buf_hdr {
        l2arc_dev_t     *b_dev;
        uint64_t        b_daddr;
} l2arc_buf_hdr_t;
That's about 16 bytes of overhead per block, or 3.125% (16/512 for 512-byte blocks)
This is something I've been looking into in the code, and my take on your
proposed points is this:
1) This requires many and deep changes across much of ZFS's architecture
(especially the ability to sustain tlvdev failures).
2) Most of this can be achieved (except for cache persistency) by
On 08/24/2012 05:13 PM, Scott Aitken wrote:
Hi all,
I know the easiest answer to this question is don't do it in the first
place, and if you do, you should have a backup, however I'll ask it
regardless.
Is there a way to backup the ZFS metadata on each member device of a pool
to another
Oh man, that's a million-billion points you made. I'll try to run
through each quickly.
On 08/24/2012 05:43 PM, Jim Klimov wrote:
First of all, thanks for reading and discussing! :)
No problem at all ;)
2012-08-24 17:50, Sašo Kiselkov wrote:
This is something I've been looking
On 08/25/2012 12:22 AM, Jim Klimov wrote:
2012-08-25 0:42, Sašo Kiselkov wrote:
Oh man, that's a million-billion points you made. I'll try to run
through each quickly.
Thanks...
I still do not have the feeling that you've fully got my
idea, or, alternately, that I correctly understand ARC
On 08/20/2012 08:55 PM, Ernest Dipko wrote:
Is there any way to recover the data within a zpool after a zpool create -f
was issued on the disks?
We had a pool that contained two internal disks (mirrored) and we added a
zvol to it out of an existing pool for some temporary space. After the
On 08/20/2012 10:15 PM, Jim Klimov wrote:
2012-08-20 23:39, Sašo Kiselkov wrote:
We then tried to recreate the pool, which was successful - but
without data…
A zpool create overwrites all labels on a device (that's why you had to
add -f, which essentially means "blame me if all goes wrong
On 08/13/2012 03:02 AM, Scott wrote:
Hi all,
I have a 5 disk raidz array in a state of disrepair. Suffice to say three
disks are ok, while two are missing all their labels. (Both ends of the
disks were overwritten). The data is still intact.
There are 4 labels on a zfs-labeled disk, two
On 08/13/2012 10:00 AM, Sašo Kiselkov wrote:
On 08/13/2012 03:02 AM, Scott wrote:
Hi all,
I have a 5 disk raidz array in a state of disrepair. Suffice to say three
disks are ok, while two are missing all their labels. (Both ends of the
disks were overwritten). The data is still intact
On 08/13/2012 10:45 AM, Scott wrote:
Hi Saso,
thanks for your reply.
If all disks are the same, is the root pointer the same?
No.
Also, is there a signature or something unique to the root block that I can
search for on the disk? I'm going through the On-disk specification at the
On 08/13/2012 12:48 PM, Ray Arachelian wrote:
While attempting to fix the last of my damaged zpools, there's one that
consists of 4 drives + one 60G file. The file happened by accident - I
attempted to add a partition off an SSD drive but missed the cache
keyword. Of course, once this is
On 08/13/2012 02:01 PM, Ray Arachelian wrote:
On 08/13/2012 06:50 AM, Sašo Kiselkov wrote:
See the -d option to zpool import. -- Saso
Many thanks for this, it worked very nicely, though the first time
I ran it, it failed. So what -d does is to substitute /dev. In
order for it to work, you
On 08/09/2012 12:52 PM, Joerg Schilling wrote:
Jim Klimov jimkli...@cos.ru wrote:
In the end, the open-sourced ZFS community got no public replies
from Oracle regarding collaboration or lack thereof, and decided
to part ways and implement things independently from Oracle.
AFAIK main ZFS
On 08/09/2012 01:05 PM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
To me it seems that the open-sourced ZFS community is not open, or could
you
point me to their mailing list archives?
Jörg
z...@lists.illumos.org
Well, why then has there been a discussion
On 08/09/2012 01:11 PM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
On 08/09/2012 01:05 PM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
To me it seems that the open-sourced ZFS community is not open, or
could you
point me to their mailing
On 08/07/2012 02:18 AM, Christopher George wrote:
I mean this as constructive criticism, not as angry bickering. I totally
respect you guys doing your own thing.
Thanks, I'll try my best to address your comments...
Thanks for your kind reply, though there are some points I'd like to
address,
On 08/07/2012 04:08 PM, Bob Friesenhahn wrote:
On Tue, 7 Aug 2012, Sašo Kiselkov wrote:
MLC is so much cheaper that you can simply slap on twice as much and use
the rest for ECC, mirroring or simply overprovisioning sectors. The
common practice for extending the lifecycle of MLC is by short
On 08/07/2012 12:12 AM, Christopher George wrote:
Is your DDRdrive product still supported and moving?
Yes, we now exclusively target ZIL acceleration.
We will be at the upcoming OpenStorage Summit 2012,
and encourage those attending to stop by our booth and
say hello :-)
On 08/03/2012 03:18 PM, Justin Stringfellow wrote:
While this isn't causing me any problems, I'm curious as to why this is
happening...:
$ dd if=/dev/random of=ob bs=128k count=1
while true
Can you check whether this happens from /dev/urandom as well?
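For comparison, the difference (if any) should show up in the size of the
output file:

  $ dd if=/dev/urandom of=ob bs=128k count=1
  $ ls -l ob

If memory serves, Solaris caps how much a single read from /dev/random may
return, which would explain a short read at bs=128k; /dev/urandom has no
such per-read cap.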
--
Saso
On 08/01/2012 12:04 PM, Jim Klimov wrote:
Probably DDT is also stored with 2 or 3 copies of each block,
since it is metadata. It was not in the last ZFS on-disk spec
from 2006 that I found, for some apparent reason ;)
That's probably because it's extremely big (dozens, hundreds or even
On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Availability of the DDT is IMHO crucial to a deduped pool, so
I won't be surprised to see it forced to triple
On 08/01/2012 04:14 PM, Jim Klimov wrote:
2012-08-01 17:55, Sašo Kiselkov wrote:
On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Availability of the DDT is IMHO
On 07/29/2012 04:07 PM, Jim Klimov wrote:
Hello, list
Hi Jim,
Several times now I've seen statements on this list implying
that a dedicated ZIL/SLOG device catching sync writes for the log
also allows for more streamlined writes to the pool during normal
healthy TXG syncs than is
On 07/29/2012 06:01 PM, Jim Klimov wrote:
2012-07-29 19:50, Sašo Kiselkov wrote:
On 07/29/2012 04:07 PM, Jim Klimov wrote:
Several times now I've seen statements on this list implying
that a dedicated ZIL/SLOG device catching sync writes for the log
also allows for more streamlined
On 07/25/2012 05:49 PM, Habony, Zsolt wrote:
Hello,
There is a feature of zfs (autoexpand, or zpool online -e) whereby it can
consume an enlarged LUN immediately and increase the zpool size.
That would be a very useful (vital) feature in an enterprise environment.
Though when I tried
Hi,
Have you had a look at iostat -E (error counters) to make sure you don't
have faulty cabling? I've had bad cables trip me up once in a manner similar
to your situation here.
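For reference, the check suggested here (device name hypothetical):

  $ iostat -En c0t1d0

Non-zero Transport Errors in the output are the classic signature of flaky
cabling.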
Cheers,
--
Saso
On 07/23/2012 07:18 AM, Yuri Vorobyev wrote:
Hello.
I am faced with a strange performance problem with a new
On 07/12/2012 07:16 PM, Tim Cook wrote:
Saso: yes, it's absolutely worth implementing a higher-performing hashing
algorithm. I'd suggest simply ignoring the people that aren't willing to
acknowledge basic mathematics rather than lashing out. No point in feeding
the trolls. The PETABYTES of
On 07/12/2012 09:52 PM, Sašo Kiselkov wrote:
"I have far too much time to explain"
P.S. that should have read "I have taken far too much time explaining".
Men are crap at multitasking...
Cheers,
--
Saso
On 07/11/2012 02:18 AM, John Martin wrote:
On 07/10/12 19:56, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512
On 07/11/2012 05:20 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256
at 9:19 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
Fletcher is a checksum, not a hash. It can and often will produce
collisions, so you need to set your dedup to verify (do a bit-by-bit
comparison prior to deduplication) which can result in significant write
amplification (every write
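For reference, the property syntax under discussion (pool name hypothetical):

  # zfs set dedup=on tank              (hash-only dedup, sha256)
  # zfs set dedup=verify tank          (dedup with bit-for-bit verification)
  # zfs set dedup=sha256,verify tank   (explicit hash plus verification)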
On 07/11/2012 10:41 AM, Ferenc-Levente Juhos wrote:
I was under the impression that the hash (or checksum) used for data
integrity is the same as the one used for deduplication,
but now I see that they are different.
They are the same in use, i.e. once you switch dedup on, that implies
On 07/11/2012 10:47 AM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
write in case verify finds the blocks are different). With hashes, you
can leave verify off, since hashes are extremely unlikely (~10^-77) to
produce collisions.
This is how a lottery works. The
On 07/11/2012 11:02 AM, Darren J Moffat wrote:
On 07/11/12 00:56, Sašo Kiselkov wrote:
* SHA-512: simplest to implement (since the code is already in the
kernel) and provides a modest performance boost of around 60%.
FIPS 180-4 introduces SHA-512/t support and explicitly SHA-512/256
On 07/11/2012 10:50 AM, Ferenc-Levente Juhos wrote:
Actually, although as you pointed out the chance of an sha256
collision is minimal, it can still happen; that would mean
that the dedup algorithm discards a block that it thinks is a duplicate.
It's probably better anyway to do
On 07/11/2012 11:53 AM, Tomas Forsman wrote:
On 11 July, 2012 - Sašo Kiselkov sent me these 1.4K bytes:
Oh jeez, I can't remember how many times this flame war has been going
on on this list. Here's the gist: SHA-256 (or any good hash) produces a
near uniform random distribution of output.
On 07/11/2012 12:00 PM, casper@oracle.com wrote:
You do realize that the age of the universe is only on the order of
around 10^18 seconds, don't you? Even if you had a trillion CPUs each
chugging along at 3.0 GHz for all this time, the number of processor
cycles you will have executed
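Spelling that arithmetic out: 10^12 CPUs x 3x10^9 cycles/s x 10^18 s comes
to about 3x10^39 cycles in total, while 2^256 is roughly 1.2x10^77; even at
one attempt per cycle, the search space exceeds the available cycles by
some 37 orders of magnitude.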
On 07/11/2012 12:24 PM, Justin Stringfellow wrote:
Suppose you find a weakness in a specific hash algorithm; you use this
to create hash collisions and now imagine you store the hash collisions
in a zfs dataset with dedup enabled using the same hash algorithm.
Sorry, but isn't this
On 07/11/2012 12:32 PM, Ferenc-Levente Juhos wrote:
Saso, I'm not flaming at all, I happen to disagree, but still I understand
that chances are very very very slim, but as one poster already said, this
is how the lottery works. I'm not saying one should make an exhaustive
search with
On 07/11/2012 12:37 PM, Ferenc-Levente Juhos wrote:
Precisely, I said the same thing a few posts before:
dedup=verify solves that. And as I said, one could use
dedup=<hash algorithm>,verify with
an inferior hash algorithm (that is much faster) with the purpose of
reducing the number of dedup
On 07/11/2012 01:09 PM, Justin Stringfellow wrote:
The point is that hash functions are many-to-one, and I think the point
was that verify isn't really needed if the hash function is good
enough.
This is a circular argument really, isn't it? Hash algorithms are never
perfect, but
On 07/11/2012 01:36 PM, casper@oracle.com wrote:
This assumes you have low volumes of deduplicated data. As your dedup
ratio grows, so does the performance hit from dedup=verify. At, say,
dedupratio=10.0x, on average, every write results in 10 reads.
I don't follow.
If dedupratio
On 07/11/2012 01:42 PM, Justin Stringfellow wrote:
This assumes you have low volumes of deduplicated data. As your dedup
ratio grows, so does the performance hit from dedup=verify. At, say,
dedupratio=10.0x, on average, every write results in 10 reads.
Well you can't make an omelette without