On 01/07/2011 10:26 AM, Darren J Moffat wrote:
On 06/01/2011 23:07, David Magda wrote:
On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
Fletcher is faster than SHA-256, so I think that must be what you're
asking about: can Fletcher+Verification be faster than
SHA-256+NoVerification? Or do
On 01/07/2011 01:15 PM, Darren J Moffat wrote:
On 07/01/2011 11:56, Sašo Kiselkov wrote:
On 01/07/2011 10:26 AM, Darren J Moffat wrote:
On 06/01/2011 23:07, David Magda wrote:
On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
Fletcher is faster than SHA-256, so I think that must be what
On 04/08/2011 05:20 PM, Mark Sandrock wrote:
On Apr 8, 2011, at 7:50 AM, Evaldas Auryla evaldas.aur...@edqm.eu wrote:
On 04/ 8/11 01:14 PM, Ian Collins wrote:
You have built-in storage failover with an AR cluster;
and they do NFS, CIFS, iSCSI, HTTP and WebDav
out of the box.
And you
On 04/08/2011 06:59 PM, Darren J Moffat wrote:
On 08/04/2011 17:47, Sašo Kiselkov wrote:
In short, I think the X4540 was an elegant and powerful system that
definitely had its market, especially in my area of work (digital video
processing - heavy on latency, throughput and IOPS - an area
On 04/08/2011 07:22 PM, J.P. King wrote:
No, I haven't tried a S7000, but I've tried other kinds of network
storage and from a design perspective, for my applications, it doesn't
even make a single bit of sense. I'm talking about high-volume real-time
video streaming, where you stream
On 04/09/2011 01:41 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Julian King
Actually I think our figures more or less agree. 12 disks = 7 mbits
48 disks = 4x7mbits
I know that sounds like terrible
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor
read/write ops using iostat, but that doesn't tell me how contiguous the
data
On 05/19/2011 03:35 PM, Tomas Ögren wrote:
On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes:
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed
On 05/19/2011 07:47 PM, Richard Elling wrote:
On May 19, 2011, at 5:35 AM, Sašo Kiselkov wrote:
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed (8-10
On 05/24/2011 03:08 PM, a.sm...@ukgrid.net wrote:
Hi,
see the seeksize script at this URL:
http://prefetch.net/articles/solaris.dtracetopten.html
Not used it but looks neat!
cheers Andy.
I already did and it does the job just fine. Thank you for your kind
suggestion.
BR,
--
Saso
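For the archives, a minimal DTrace sketch in the spirit of that
seeksize.d script (assumes the io provider; it quantizes per-device seek
distance in 512-byte blocks, so purely sequential readers cluster near 0):
# dtrace -n '
io:::start
/last[args[1]->dev_statname] != 0/
{
	this->cur = args[0]->b_blkno;
	this->prev = last[args[1]->dev_statname];
	@seek[args[1]->dev_statname] = quantize(this->cur > this->prev ?
	    this->cur - this->prev : this->prev - this->cur);
}
io:::start
{
	last[args[1]->dev_statname] = args[0]->b_blkno +
	    args[0]->b_bcount / 512;
}'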
Hi All,
I'd like to ask about whether there is a method to enforce a certain txg
commit frequency on ZFS. I'm doing a large amount of video streaming
from a storage pool while also slowly continuously writing a constant
volume of data to it (using a normal file descriptor, *not* in O_SYNC).
When
On 06/26/2011 06:17 PM, Richard Elling wrote:
On Jun 24, 2011, at 5:29 AM, Sašo Kiselkov wrote:
Hi All,
I'd like to ask about whether there is a method to enforce a certain txg
commit frequency on ZFS. I'm doing a large amount of video streaming
from a storage pool while also slowly
On 06/29/2011 02:33 PM, Sašo Kiselkov wrote:
Also there is a buffer-size limit, like this (384 MB):
set zfs:zfs_write_limit_override = 0x18000000
or on command-line like this:
# echo zfs_write_limit_override/W0t402653184 | mdb -kw
Currently my value for this is 0. How should I set it? I'm
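For reference, the current value can be read back at runtime with mdb
(E prints a 64-bit unsigned decimal):
# echo zfs_write_limit_override/E | mdb -k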
On 06/27/2011 11:59 AM, Jim Klimov wrote:
I'd like to ask about whether there is a method to enforce a
certain txg
commit frequency on ZFS.
Well, there is a timer frequency based on TXG age (i.e. 5 sec
by default now), in /etc/system like this:
set zfs:zfs_txg_synctime = 5
When
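For reference, a runtime equivalent of the above, as a sketch; the
tunable's name and units vary between builds (zfs_txg_synctime in
seconds vs. zfs_txg_synctime_ms in milliseconds):
# echo zfs_txg_synctime/W0t5 | mdb -kw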
On 06/30/2011 01:10 PM, Jim Klimov wrote:
2011-06-30 11:47, Sašo Kiselkov wrote:
On 06/30/2011 02:49 AM, Jim Klimov wrote:
2011-06-30 2:21, Sašo Kiselkov wrote:
On 06/29/2011 02:33 PM, Sašo Kiselkov wrote:
Also there is a buffer-size limit, like this (384 MB):
set zfs:zfs_write_limit_override
On 06/30/2011 01:33 PM, Jim Klimov wrote:
2011-06-30 15:22, Sašo Kiselkov wrote:
I tried increasing this
value to 2000 or 3000, but without an effect - perhaps I need to set it
at pool mount time or in /etc/system. Could somebody with more
knowledge
of these internals please chime
On 06/30/2011 11:56 PM, Sašo Kiselkov wrote:
On 06/30/2011 01:33 PM, Jim Klimov wrote:
2011-06-30 15:22, Sašo Kiselkov wrote:
I tried increasing this
value to 2000 or 3000, but without an effect - perhaps I need to set it
at pool mount time or in /etc/system. Could somebody with more
On 11/30/2011 02:40 PM, Edmund White wrote:
Absolutely.
I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
running NexentaStor.
On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
internal disks (boot, a handful of large disks, Pliant SSDs for
On 07/01/2011 12:01 AM, Sašo Kiselkov wrote:
On 06/30/2011 11:56 PM, Sašo Kiselkov wrote:
Hm, it appears I'll have to do some reboots and more extensive testing.
I tried tuning various settings and then returned everything back to the
defaults. Yet, now I can ramp the number of concurrent
On 01/17/2012 01:06 AM, David Magda wrote:
Kind of off topic, but I figured of some interest to the list. There will be
a new file system in Windows 8 with some features that we all know and love
in ZFS:
As mentioned previously, one of our design goals was to detect and correct
Hi,
I'm getting weird errors while trying to install openindiana 151a on a
Dell R715 with a PERC H200 (based on an LSI SAS 2008). Any time the OS
tries to access the drives (for whatever reason), I get this dumped into
syslog:
genunix: WARNING: Device
On 05/16/2012 09:45 AM, Koopmann, Jan-Peter wrote:
Hi,
are those DELL branded WD disks? DELL tends to manipulate the
firmware of the drives so that power handling with Solaris fails.
If this is the case here:
Easiest way to make it work is to modify /kernel/drv/sd.conf and
add an entry
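For the archives, such an sd.conf entry looks roughly like this (a
sketch; the vendor/product strings are hypothetical and must match the
inquiry data your drives actually report, vendor field padded to 8
characters):
sd-config-list = "WD      WD2002FYPS", "power-condition:false";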
On 05/16/2012 10:17 AM, Koopmann, Jan-Peter wrote:
One thing came up while trying this - I'm on a text install
image system, so my / is a ramdisk. Any ideas how I can change
the sd.conf on the USB disk or reload the driver configuration on
the fly? I tried looking for the file on the USB
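For the archives: sd.conf can usually be re-read without a reboot using
the stock update_drv utility (takes effect on the next attach of the
affected devices):
# update_drv -vf sd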
I'm currently trying to get a SuperMicro JBOD with dual SAS expander
chips running in MPxIO, but I'm a total amateur to this and would like
to ask about how to detect whether MPxIO is working (or not).
My SAS topology is:
*) One LSI SAS2008-equipped HBA (running the latest IT firmware from
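For the archives, the stock tools for checking MPxIO state (the device
path below is a placeholder):
# stmsboot -L
# mpathadm list lu
# mpathadm show lu /dev/rdsk/c0tXXXXXXXXd0s2
A multipathed drive shows up once in format, with mpathadm listing two
paths for it, rather than as two separate disks.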
On 05/25/2012 07:35 PM, Jim Klimov wrote:
Sorry I can't comment on MPxIO, except that I thought zfs could by
itself discern two paths to the same drive, if only to protect
against double-importing the disk into a pool.
Unfortunately, it isn't the same thing. MPxIO provides redundant
signaling to
On 05/25/2012 08:40 PM, Richard Elling wrote:
See the solution at https://www.illumos.org/issues/644 -- richard
Good Lord, that was it! It never occurred to me that the drives had a
say in this. Thanks a billion!
Cheers,
--
Saso
On 05/07/2012 05:42 AM, Greg Mason wrote:
I am currently trying to get two of these things running Illumian. I don't
have any particular performance requirements, so I'm thinking of using some
sort of supported hypervisor, (either RHEL and KVM or VMware ESXi) to get
around the driver
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One oddity is the box has two SATA
SSDs which also show up in the card's BIOS, but present OK
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do
On 05/28/2012 01:12 PM, Ian Collins wrote:
On 05/28/12 11:01 PM, Sašo Kiselkov wrote:
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian
On 05/25/2012 08:40 PM, Richard Elling wrote:
See the solution at https://www.illumos.org/issues/644
-- richard
And predictably, I'm back with another n00b question regarding this
array. I've put a pair of LSI-9200-8e controllers in the server and
attached the cables to the enclosure to each of
On 05/30/2012 10:53 PM, Richard Elling wrote:
On May 30, 2012, at 1:07 PM, Sašo Kiselkov wrote:
On 05/25/2012 08:40 PM, Richard Elling wrote:
See the solution at https://www.illumos.org/issues/644
-- richard
And predictably, I'm back with another n00b question regarding this
array. I've
On 05/30/2012 10:53 PM, Richard Elling wrote:
Those ereports are consistent with faulty cabling. You can trace all of the
cables and errors using tools like lsiutil, sg_logs, kstats, etc.
Unfortunately,
it is not really possible to get into this level of detail over email, and it
can
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (10
xcalls a second).
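For the archives, two stock ways to localize such a storm (the CPU id in
the predicate is just an example matching the report above):
# mpstat 1    (watch the xcal column per CPU)
# dtrace -n 'sysinfo:::xcalls /cpu == 31/ { @[stack()] = count(); }'
The aggregation prints the kernel stacks issuing the cross-calls on exit.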
On 06/06/2012 04:55 PM, Richard Elling wrote:
On Jun 6, 2012, at 12:48 AM, Sašo Kiselkov wrote:
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana
On 06/06/2012 05:01 PM, Sašo Kiselkov wrote:
I'll try and load the machine with dd(1) to the max to see if access
patterns of my software have something to do with it.
Tried and tested, any and all write I/O to the pool causes this xcall
storm issue, writing more data to it only exacerbates
On 06/06/2012 09:43 PM, Jim Mauro wrote:
I can't help but be curious about something, which perhaps you verified but
did not post.
What the data here shows is;
- CPU 31 is buried in the kernel (100% sys).
- CPU 31 is handling a moderate-to-high rate of xcalls.
What the data does not
Seems the problem is somewhat more egregious than I thought. The xcall
storm causes my network drivers to stop receiving IP multicast packets
and subsequently my recording applications record bad data, so
ultimately, this kind of isn't workable... I need to somehow resolve
this... I'm running four
On 06/12/2012 03:57 PM, Sašo Kiselkov wrote:
Seems the problem is somewhat more egregious than I thought. The xcall
storm causes my network drivers to stop receiving IP multicast packets
and subsequently my recording applications record bad data, so
ultimately, this kind of isn't workable... I
On 06/12/2012 05:21 PM, Matt Breitbach wrote:
I saw this _exact_ problem after I bumped ram from 48GB to 192GB. Low
memory pressure seemed to be the culprit. Happened usually during storage
vmotions or something like that which effectively nullified the data in the
ARC (sometimes 50GB of
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
So the xcalls are a necessary part of memory reclaiming, when one needs to tear
down the TLB entry mapping the physical memory (which can from here on be
repurposed).
So the xcalls are just part of this. They should not cause trouble, but they do.
On 06/12/2012 06:06 PM, Jim Mauro wrote:
So try unbinding the mac threads; it may help you here.
How do I do that? All I can find on interrupt fencing and the like is to
simply set certain processors to no-intr, which moves all of the
interrupts away but doesn't prevent the xcall storm
On 06/12/2012 05:58 PM, Andy Bowers - Performance Engineering wrote:
find where your NICs are bound to
mdb -k
::interrupts
create a processor set including those cpus [ so just the nic code will
run there ]
andy
Tried and didn't help, unfortunately. I'm still seeing drops. What's
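For the archives, the commands involved (CPU ids 30-31 are just
examples; pick the ones ::interrupts shows for your NIC):
# echo ::interrupts | mdb -k
# psrset -c 30 31    (creates a pset; only threads bound to it run there)
# psradm -i 30 31    (alternatively, mark the CPUs no-intr)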
On 06/12/2012 07:19 PM, Roch Bourbonnais wrote:
Try with this /etc/system tunings :
set mac:mac_soft_ring_thread_bind=0
set mac:mac_srs_thread_bind=0
set zfs:zio_taskq_batch_pct=50
Thanks for the recommendations, I'll try and see whether it helps, but
this is going to take me a while
On 06/15/2012 03:35 PM, Johannes Totz wrote:
On 15/06/2012 13:22, Sašo Kiselkov wrote:
On 06/15/2012 02:14 PM, Hans J Albertsson wrote:
I've got my root pool on a mirror on 2 512 byte blocksize disks. I
want to move the root pool to two 2 TB disks with 4k blocks. The
server only has room
On 06/13/2012 03:43 PM, Roch wrote:
Sašo Kiselkov writes:
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
So the xcalls are a necessary part of memory reclaiming, when one needs to
tear down the TLB entry mapping the physical memory (which can from here on
be repurposed).
So
On 06/18/2012 12:05 AM, Richard Elling wrote:
You might try some of the troubleshooting techniques described in Chapter 5
of the DTrace book by Brendan Gregg and Jim Mauro. It is not clear from your
description that you are seeing the same symptoms, but the technique should
apply.
--
On 06/19/2012 11:05 AM, Sašo Kiselkov wrote:
On 06/18/2012 07:50 PM, Roch wrote:
Are we hitting :
7167903 Configuring VLANs results in single threaded soft ring fanout
Confirmed, it is definitely this.
Hold the phone, I just tried unconfiguring all of the VLANs in the
system and went
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3 candidates, so I went out and did some
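For anyone wanting to reproduce the relative throughput numbers on their
own hardware, OpenSSL ships a benchmark mode (userland numbers, so only
a rough proxy for an in-kernel implementation):
# openssl speed sha256 sha512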
On 07/11/2012 02:18 AM, John Martin wrote:
On 07/10/12 19:56, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512
On 07/11/2012 05:20 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256
at 9:19 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
Fletcher is a checksum, not a hash. It can and often will produce
collisions, so you need to set your dedup to verify (do a bit-by-bit
comparison prior to deduplication) which can result in significant write
amplification (every write
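For reference, the property values being discussed, in standard zfs
syntax ("tank/fs" is a placeholder):
# zfs set dedup=sha256 tank/fs            (hash only, no verify)
# zfs set dedup=sha256,verify tank/fs     (hash plus byte-for-byte verify)
# zfs set dedup=fletcher4,verify tank/fs  (checksum; verify is mandatory)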
On 07/11/2012 10:41 AM, Ferenc-Levente Juhos wrote:
I was under the impression that the hash (or checksum) used for data
integrity is the same as the one used for deduplication,
but now I see that they are different.
They are the same in use, i.e. once you switch dedup on, that implies
On 07/11/2012 10:47 AM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
write in case verify finds the blocks are different). With hashes, you
can leave verify off, since hashes are extremely unlikely (~10^-77) to
produce collisions.
This is how a lottery works. the
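For scale, the back-of-envelope math: a single pair of random blocks
collides with probability 2^-256, i.e. about 10^-77. Across n unique
blocks the birthday bound is roughly n^2 / 2^257, so even a pool with
2^50 blocks (about a quadrillion) has collision odds on the order of
10^-47.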
On 07/11/2012 11:02 AM, Darren J Moffat wrote:
On 07/11/12 00:56, Sašo Kiselkov wrote:
* SHA-512: simplest to implement (since the code is already in the
kernel) and provides a modest performance boost of around 60%.
FIPS 180-4 introduces SHA-512/t support and explicitly SHA-512/256
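(For anyone wanting to play with the truncated variant: newer OpenSSL
builds expose it directly, e.g. "openssl dgst -sha512-256 somefile",
where somefile is any test input.)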
On 07/11/2012 10:50 AM, Ferenc-Levente Juhos wrote:
Actually, although as you pointed out the chance of an SHA-256
collision is minimal, it can still happen; that would mean
that the dedup algorithm discards a block that it thinks is a duplicate.
It's probably better anyway to do
On 07/11/2012 11:53 AM, Tomas Forsman wrote:
On 11 July, 2012 - Sašo Kiselkov sent me these 1,4K bytes:
Oh jeez, I can't remember how many times this flame war has been going
on on this list. Here's the gist: SHA-256 (or any good hash) produces a
near uniform random distribution of output.
On 07/11/2012 12:00 PM, casper@oracle.com wrote:
You do realize that the age of the universe is only on the order of
around 10^18 seconds, don't you? Even if you had a trillion CPUs each
chugging along at 3.0 GHz for all this time, the number of processor
cycles you will have executed
On 07/11/2012 12:24 PM, Justin Stringfellow wrote:
Suppose you find a weakness in a specific hash algorithm; you use this
to create hash collisions and now imagined you store the hash collisions
in a zfs dataset with dedup enabled using the same hash algorithm.
Sorry, but isn't this
On 07/11/2012 12:32 PM, Ferenc-Levente Juhos wrote:
Saso, I'm not flaming at all, I happen to disagree, but still I understand
that
chances are very very very slim, but as one poster already said, this is
how
the lottery works. I'm not saying one should make an exhaustive search with
On 07/11/2012 12:37 PM, Ferenc-Levente Juhos wrote:
Precisely, I said the same thing a few posts before:
dedup=verify solves that. And as I said, one could use
dedup=<hash algorithm>,verify with
an inferior hash algorithm (that is much faster) with the purpose of
reducing the number of dedup
On 07/11/2012 01:09 PM, Justin Stringfellow wrote:
The point is that hash functions are many to one and I think the point
was that verify wasn't really needed if the hash function is good
enough.
This is a circular argument really, isn't it? Hash algorithms are never
perfect, but
On 07/11/2012 01:36 PM, casper@oracle.com wrote:
This assumes you have low volumes of deduplicated data. As your dedup
ratio grows, so does the performance hit from dedup=verify. At, say,
dedupratio=10.0x, on average, every write results in 10 reads.
I don't follow.
If dedupratio
On 07/11/2012 01:42 PM, Justin Stringfellow wrote:
This assumes you have low volumes of deduplicated data. As your dedup
ratio grows, so does the performance hit from dedup=verify. At, say,
dedupratio=10.0x, on average, every write results in 10 reads.
Well you can't make an omelette without
On 07/11/2012 01:51 PM, Eugen Leitl wrote:
As a napp-it user who needs to upgrade from NexentaCore, I recently saw
"preferred for OpenIndiana live, but running under Illumian, NexentaCore and
Solaris 11 (Express)"
as a system recommendation for napp-it.
I wonder about the future
On 07/11/2012 03:39 PM, David Magda wrote:
On Tue, July 10, 2012 19:56, Sašo Kiselkov wrote:
However, before I start out on a pointless endeavor, I wanted to probe
the field of ZFS users, especially those using dedup, on whether their
workloads would benefit from a faster hash algorithm (and
On 07/11/2012 03:57 PM, Gregg Wonderly wrote:
Since there is a finite number of bit patterns per block, have you tried to
just calculate the SHA-256 or SHA-512 for every possible bit pattern to see
if there is ever a collision? If you found an algorithm that produced no
collisions for any
On 07/11/2012 03:58 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
I really mean no disrespect, but this comment is so dumb I could swear
my IQ dropped by a few tenths of a point just by reading
On 07/11/2012 04:19 PM, Gregg Wonderly wrote:
But this is precisely the kind of observation that some people seem to miss
out on the importance of. As Tomas suggested in his post, if this was true,
then we could have a huge compression ratio as well. And even if there was
10% of the bit
On 07/11/2012 04:22 PM, Bob Friesenhahn wrote:
On Wed, 11 Jul 2012, Sašo Kiselkov wrote:
the hash isn't used for security purposes. We only need something that's
fast and has a good pseudo-random output distribution. That's why I
looked toward Edon-R. Even though it might have security
On 07/11/2012 04:23 PM, casper@oracle.com wrote:
On Tue, 10 Jul 2012, Edward Ned Harvey wrote:
CPUs are not getting much faster. But IO is definitely getting faster.
It's best to keep ahead of that curve.
It seems that per-socket CPU performance is doubling every year.
That
On 07/11/2012 04:27 PM, Gregg Wonderly wrote:
Unfortunately, the government imagines that people are using their home
computers to compute hashes and try and decrypt stuff. Look at what is
happening with GPUs these days. People are hooking up 4 GPUs in their
computers and getting huge
On 07/11/2012 04:30 PM, Gregg Wonderly wrote:
This is exactly the issue for me. It's vital to always have verify on. If
you don't have the data to prove that every possible block combination
hashes uniquely for the small bit space we are talking about,
then how in the world can
On 07/11/2012 04:36 PM, Justin Stringfellow wrote:
Since there is a finite number of bit patterns per block, have you tried to
just calculate the SHA-256 or SHA-512 for every possible bit pattern to see
if there is ever a collision? If you found an algorithm that produced no
collisions
On 07/11/2012 04:39 PM, Ferenc-Levente Juhos wrote:
As I said several times before, to produce hash collisions, or to calculate
rainbow tables (as a previous poster theorized), you only need the
following.
You don't need to reproduce all possible blocks.
1. SHA256 produces a 256 bit hash
On 07/11/2012 04:54 PM, Ferenc-Levente Juhos wrote:
You don't have to store all hash values:
a. Just memorize the first one SHA256(0)
b. start counting
c. bang: by the time you get to 2^256 you get at least one collision.
Just one question: how long do you expect this is going to take on average?
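For the record, the arithmetic: 2^256 is about 1.2 x 10^77. A trillion
machines each doing a billion hashes per second get through 10^21
hashes/s, so counting that far would take on the order of 10^56 seconds,
versus the ~10^18 seconds the universe has existed.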
On 07/11/2012 04:56 PM, Gregg Wonderly wrote:
So, if I had a block collision on my ZFS pool that used dedup, and it had my
bank balance of $3,212.20 on it, and you tried to write your bank balance of
$3,292,218.84 and got the same hash, no verify, and thus you got my
block/balance and now
On 07/11/2012 05:10 PM, David Magda wrote:
On Wed, July 11, 2012 09:45, Sašo Kiselkov wrote:
I'm not convinced waiting makes much sense. The SHA-3 standardization
process' goals are different from ours. SHA-3 can choose to go with
something that's slower, but has a higher security margin. I
On 07/11/2012 05:33 PM, Bob Friesenhahn wrote:
On Wed, 11 Jul 2012, Sašo Kiselkov wrote:
The reason why I don't think this can be used to implement a practical
attack is that in order to generate a collision, you first have to know
the disk block that you want to create a collision
On 07/11/2012 05:58 PM, Gregg Wonderly wrote:
You're entirely sure that there could never be two different blocks that can
hash to the same value and have different content?
Wow, can you just send me the cash now and we'll call it even?
You're the one making the positive claim and I'm
On 07/11/2012 06:23 PM, Gregg Wonderly wrote:
What I'm saying is that I am getting conflicting information from your
rebuttals here.
Well, let's address that then:
I (and others) say there will be collisions that will cause data loss if
verify is off.
Saying that there will be without any
On 07/11/2012 10:06 PM, Bill Sommerfeld wrote:
On 07/11/12 02:10, Sašo Kiselkov wrote:
Oh jeez, I can't remember how many times this flame war has been going
on on this list. Here's the gist: SHA-256 (or any good hash) produces a
near uniform random distribution of output. Thus, the chances
On 07/12/2012 07:16 PM, Tim Cook wrote:
Sašo: yes, it's absolutely worth implementing a higher-performing hashing
algorithm. I'd suggest simply ignoring the people that aren't willing to
acknowledge basic mathematics rather than lashing out. No point in feeding
the trolls. The PETABYTES of
On 07/12/2012 09:52 PM, Sašo Kiselkov wrote:
I have far too much time to explain
P.S. that should have read "I have taken far too much time explaining."
Men are crap at multitasking...
Cheers,
--
Saso
Hi,
Have you had a look at iostat -E (error counters) to make sure you don't
have faulty cabling? I've had bad cables trip me up once in a manner
similar to your situation here.
Cheers,
--
Saso
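(Concretely:
# iostat -En
prints the soft/hard/transport error counters per device.)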
On 07/23/2012 07:18 AM, Yuri Vorobyev wrote:
Hello.
I'm facing a strange performance problem with a new
On 07/25/2012 05:49 PM, Habony, Zsolt wrote:
Hello,
There is a feature of ZFS (autoexpand, or zpool online -e) by which it
can consume an increased LUN immediately and grow the zpool size.
That would be a very useful (vital) feature in an enterprise environment.
Though when I tried
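For reference, the feature in question, in standard zpool syntax ("tank"
and the disk name are placeholders):
# zpool set autoexpand=on tank
# zpool online -e tank c0t0d0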
On 07/29/2012 04:07 PM, Jim Klimov wrote:
Hello, list
Hi Jim,
Several times now I've seen statements on this list implying
that a dedicated ZIL/SLOG device catching sync writes for the log,
also allows for more streamlined writes to the pool during normal
healthy TXG syncs, than is
On 07/29/2012 06:01 PM, Jim Klimov wrote:
2012-07-29 19:50, Sašo Kiselkov wrote:
On 07/29/2012 04:07 PM, Jim Klimov wrote:
Several times now I've seen statements on this list implying
that a dedicated ZIL/SLOG device catching sync writes for the log,
also allows for more streamlined
On 08/01/2012 12:04 PM, Jim Klimov wrote:
Probably DDT is also stored with 2 or 3 copies of each block,
since it is metadata. It was not in the last ZFS on-disk spec
from 2006 that I found, for some apparent reason ;)
That's probably because it's extremely big (dozens, hundreds or even
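(For reference, the DDT's actual size and refcount distribution can be
inspected with stock zdb; "tank" is a placeholder pool name:
# zdb -DD tank )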
On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Availability of the DDT is IMHO crucial to a deduped pool, so
I won't be surprised to see it forced to triple
On 08/01/2012 04:14 PM, Jim Klimov wrote:
2012-08-01 17:55, Sašo Kiselkov wrote:
On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Availability of the DDT is IMHO
On 08/03/2012 03:18 PM, Justin Stringfellow wrote:
While this isn't causing me any problems, I'm curious as to why this is
happening...:
$ dd if=/dev/random of=ob bs=128k count=1
$ while true
Can you check whether this happens from /dev/urandom as well?
--
Saso
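(If memory serves, random(7D) caps a single /dev/random read at 1040
bytes, which would explain short reads with bs=128k; /dev/urandom has no
such cap, so a quick cross-check is:
# dd if=/dev/urandom of=ob bs=128k count=1 )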
On 08/07/2012 12:12 AM, Christopher George wrote:
Is your DDRdrive product still supported and moving?
Yes, we now exclusively target ZIL acceleration.
We will be at the upcoming OpenStorage Summit 2012,
and encourage those attending to stop by our booth and
say hello :-)
On 08/07/2012 02:18 AM, Christopher George wrote:
I mean this as constructive criticism, not as angry bickering. I totally
respect you guys doing your own thing.
Thanks, I'll try my best to address your comments...
Thanks for your kind reply, though there are some points I'd like to
address,
On 08/07/2012 04:08 PM, Bob Friesenhahn wrote:
On Tue, 7 Aug 2012, Sašo Kiselkov wrote:
MLC is so much cheaper that you can simply slap on twice as much and use
the rest for ECC, mirroring or simply overprovisioning sectors. The
common practice for extending the lifecycle of MLC is by short
On 08/09/2012 12:52 PM, Joerg Schilling wrote:
Jim Klimov jimkli...@cos.ru wrote:
In the end, the open-sourced ZFS community got no public replies
from Oracle regarding collaboration or lack thereof, and decided
to part ways and implement things independently from Oracle.
AFAIK main ZFS
On 08/09/2012 01:05 PM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
To me it seems that the open-sourced ZFS community is not open, or could
you
point me to their mailing list archives?
Jörg
z...@lists.illumos.org
Well, why then has there been a discussion
On 08/09/2012 01:11 PM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
On 08/09/2012 01:05 PM, Joerg Schilling wrote:
Sašo Kiselkov skiselkov...@gmail.com wrote:
To me it seems that the open-sourced ZFS community is not open, or
could you
point me to their mailing