On 07/11/2012 01:51 PM, Eugen Leitl wrote:
As a napp-it user who needs to upgrade from NexentaCore, I recently saw
"preferred for OpenIndiana live but running under Illumian, NexentaCore and
Solaris 11 (Express)"
as a system recommendation for napp-it.
I wonder about the future
On 07/11/2012 03:39 PM, David Magda wrote:
On Tue, July 10, 2012 19:56, Sašo Kiselkov wrote:
However, before I start out on a pointless endeavor, I wanted to probe
the field of ZFS users, especially those using dedup, on whether their
workloads would benefit from a faster hash algorithm (and
On 07/11/2012 03:57 PM, Gregg Wonderly wrote:
Since there is a finite number of bit patterns per block, have you tried to
just calculate the SHA-256 or SHA-512 for every possible bit pattern to see
if there is ever a collision? If you found an algorithm that produced no
collisions for any
On 07/11/2012 03:58 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
I really mean no disrespect, but this comment is so dumb I could swear
my IQ dropped by a few tenths of a point just by reading
On 07/11/2012 04:19 PM, Gregg Wonderly wrote:
But this is precisely the kind of observation that some people seem to miss
out on the importance of. As Tomas suggested in his post, if this was true,
then we could have a huge compression ratio as well. And even if there was
10% of the bit
On 07/11/2012 04:22 PM, Bob Friesenhahn wrote:
On Wed, 11 Jul 2012, Sašo Kiselkov wrote:
the hash isn't used for security purposes. We only need something that's
fast and has a good pseudo-random output distribution. That's why I
looked toward Edon-R. Even though it might have security
On 07/11/2012 04:23 PM, casper@oracle.com wrote:
On Tue, 10 Jul 2012, Edward Ned Harvey wrote:
CPUs are not getting much faster. But IO is definitely getting faster.
It's best to keep ahead of that curve.
It seems that per-socket CPU performance is doubling every year.
That
On 07/11/2012 04:27 PM, Gregg Wonderly wrote:
Unfortunately, the government imagines that people are using their home
computers to compute hashes and try and decrypt stuff. Look at what is
happening with GPUs these days. People are hooking up 4 GPUs in their
computers and getting huge
On 07/11/2012 04:30 PM, Gregg Wonderly wrote:
This is exactly the issue for me. It's vital to always have verify on. If
you don't have the data to prove that every possible block combination
hashes uniquely for the small bit space we are talking about, then how in
the world can
On 07/11/2012 04:36 PM, Justin Stringfellow wrote:
Since there is a finite number of bit patterns per block, have you tried to
just calculate the SHA-256 or SHA-512 for every possible bit pattern to see
if there is ever a collision? If you found an algorithm that produced no
collisions
On 07/11/2012 04:39 PM, Ferenc-Levente Juhos wrote:
As I said several times before, to produce hash collisions. Or to calculate
rainbow tables (as a previous user theorized it) you only need the
following.
You don't need to reproduce all possible blocks.
1. SHA256 produces a 256 bit hash
On 07/11/2012 04:54 PM, Ferenc-Levente Juhos wrote:
You don't have to store all hash values:
a. Just memorize the first one SHA256(0)
b. start counting
c. bang: by the time you get to 2^256 you get at least a collision.
Just one question: how long do you expect this to take on average?
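Ferenc's counting argument can be quantified with the birthday bound. The sketch below is a rough approximation (not from the thread; the block count is illustrative), showing why an accidental SHA-256 collision in a dedup table is not a practical concern:

```python
def collision_probability(n_blocks: int, hash_bits: int = 256) -> float:
    """Birthday-bound approximation: P(collision) ~= n(n-1)/2 / 2^bits."""
    pairs = n_blocks * (n_blocks - 1) / 2
    return pairs / 2.0 ** hash_bits

# A pool with 2^38 unique 128K records (tens of PiB of data):
p = collision_probability(2 ** 38)
print(p)
```

By the same birthday bound, the brute-force count above is expected to hit its first collision only after roughly 2^128 hash evaluations, far beyond any feasible computation.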
On 07/11/2012 04:56 PM, Gregg Wonderly wrote:
So, if I had a block collision on my ZFS pool that used dedup, and it had my
bank balance of $3,212.20 on it, and you tried to write your bank balance of
$3,292,218.84 and got the same hash, no verify, and thus you got my
block/balance and now
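The failure mode Gregg describes only bites with verify off. A toy sketch of a dedup write path (names and structure are hypothetical, not the actual ZFS code) shows how verify=on turns a hash collision into a harmless miss:

```python
import hashlib

dedup_table = {}  # digest -> stored block (toy stand-in for the DDT)

def dedup_write(block: bytes, verify: bool = True) -> str:
    """Return 'dedup' if the block is shared with an existing copy,
    or 'stored' if it is written out as new data."""
    digest = hashlib.sha256(block).digest()
    existing = dedup_table.get(digest)
    if existing is not None:
        # With verify on, a colliding-but-different block is caught by a
        # byte-for-byte compare here instead of being silently shared.
        if not verify or existing == block:
            return "dedup"
    dedup_table[digest] = block
    return "stored"
```

With verify off, a (hypothetical) colliding block would be handed the first writer's data; with verify on, the compare fails and the new block is stored intact.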
On 07/11/2012 05:10 PM, David Magda wrote:
On Wed, July 11, 2012 09:45, Sašo Kiselkov wrote:
I'm not convinced waiting makes much sense. The SHA-3 standardization
process' goals are different from ours. SHA-3 can choose to go with
something that's slower, but has a higher security margin. I
On 07/11/2012 05:33 PM, Bob Friesenhahn wrote:
On Wed, 11 Jul 2012, Sašo Kiselkov wrote:
The reason why I don't think this can be used to implement a practical
attack is that in order to generate a collision, you first have to know
the disk block that you want to create a collision
On 07/11/2012 05:58 PM, Gregg Wonderly wrote:
You're entirely sure that there could never be two different blocks that can
hash to the same value and have different content?
Wow, can you just send me the cash now and we'll call it even?
You're the one making the positive claim and I'm
On 07/11/2012 06:23 PM, Gregg Wonderly wrote:
What I'm saying is that I am getting conflicting information from your
rebuttals here.
Well, let's address that then:
I (and others) say there will be collisions that will cause data loss if
verify is off.
Saying that there will be without any
On 07/11/2012 10:06 PM, Bill Sommerfeld wrote:
On 07/11/12 02:10, Sašo Kiselkov wrote:
Oh jeez, I can't remember how many times this flame war has been going
on on this list. Here's the gist: SHA-256 (or any good hash) produces a
near uniform random distribution of output. Thus, the chances
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3 candidates, so I went out and did some
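The relative-speed claim is easy to check on any 64-bit machine with a quick, unscientific timing sketch; absolute numbers vary with CPU and hash implementation, so this only illustrates how one might compare:

```python
import hashlib
import timeit

block = b"\x00" * (128 * 1024)  # one 128K ZFS-sized record

for name in ("sha256", "sha512"):
    algo = getattr(hashlib, name)
    secs = timeit.timeit(lambda a=algo: a(block).digest(), number=200)
    print(f"{name}: {secs:.4f}s for 200 x 128K")
```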
On 06/19/2012 11:05 AM, Sašo Kiselkov wrote:
On 06/18/2012 07:50 PM, Roch wrote:
Are we hitting :
7167903 Configuring VLANs results in single threaded soft ring fanout
Confirmed, it is definitely this.
Hold the phone, I just tried unconfiguring all of the VLANs in the
system and went
On 06/18/2012 12:05 AM, Richard Elling wrote:
You might try some of the troubleshooting techniques described in Chapter 5
of the DTrace book by Brendan Gregg and Jim Mauro. It is not clear from your
description that you are seeing the same symptoms, but the technique should
apply.
--
On 06/13/2012 03:43 PM, Roch wrote:
Sašo Kiselkov writes:
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
So the xcall are necessary part of memory reclaiming, when one needs to
tear down the TLB entry mapping the physical memory (which can from here on
be repurposed).
So
On 06/15/2012 03:35 PM, Johannes Totz wrote:
On 15/06/2012 13:22, Sašo Kiselkov wrote:
On 06/15/2012 02:14 PM, Hans J Albertsson wrote:
I've got my root pool on a mirror on 2 512 byte blocksize disks. I
want to move the root pool to two 2 TB disks with 4k blocks. The
server only has room
Seems the problem is somewhat more egregious than I thought. The xcall
storm causes my network drivers to stop receiving IP multicast packets
and subsequently my recording applications record bad data, so
ultimately, this kind of isn't workable... I need to somehow resolve
this... I'm running four
On 06/12/2012 03:57 PM, Sašo Kiselkov wrote:
Seems the problem is somewhat more egregious than I thought. The xcall
storm causes my network drivers to stop receiving IP multicast packets
and subsequently my recording applications record bad data, so
ultimately, this kind of isn't workable... I
On 06/12/2012 05:21 PM, Matt Breitbach wrote:
I saw this _exact_ problem after I bumped ram from 48GB to 192GB. Low
memory pressure seemed to be the culprit. Happened usually during storage
vmotions or something like that which effectively nullified the data in the
ARC (sometimes 50GB of
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
So the xcall are necessary part of memory reclaiming, when one needs to tear
down the TLB entry mapping the physical memory (which can from here on be
repurposed).
So the xcall are just part of this. Should not cause trouble, but they do.
On 06/12/2012 06:06 PM, Jim Mauro wrote:
So try unbinding the mac threads; it may help you here.
How do I do that? All I can find on interrupt fencing and the like is to
simply set certain processors to no-intr, which moves all of the
interrupts and it doesn't prevent the xcall storm
On 06/12/2012 05:58 PM, Andy Bowers - Performance Engineering wrote:
find where your NICs are bound to
mdb -k
::interrupts
create a processor set including those cpus [ so just the nic code will
run there ]
andy
Tried and didn't help, unfortunately. I'm still seeing drops. What's
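Andy's steps map onto something like the following (a sketch from memory of the usual Solaris workflow; the CPU IDs are placeholders you would read off the ::interrupts output):

```
# mdb -k
> ::interrupts
  ... note which CPUs service the NIC interrupts ...
> $q
# psrset -c 28 29 30 31
```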
On 06/12/2012 07:19 PM, Roch Bourbonnais wrote:
Try with this /etc/system tunings :
set mac:mac_soft_ring_thread_bind=0
set mac:mac_srs_thread_bind=0
set zfs:zio_taskq_batch_pct=50
Thanks for the recommendations, I'll try and see whether it helps, but
this is going to take me a while
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (10
xcalls a second).
On 06/06/2012 04:55 PM, Richard Elling wrote:
On Jun 6, 2012, at 12:48 AM, Sašo Kiselkov wrote:
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana
On 06/06/2012 05:01 PM, Sašo Kiselkov wrote:
I'll try and load the machine with dd(1) to the max to see if access
patterns of my software have something to do with it.
Tried and tested, any and all write I/O to the pool causes this xcall
storm issue, writing more data to it only exacerbates
On 06/06/2012 09:43 PM, Jim Mauro wrote:
I can't help but be curious about something, which perhaps you verified but
did not post.
What the data here shows is;
- CPU 31 is buried in the kernel (100% sys).
- CPU 31 is handling a moderate-to-high rate of xcalls.
What the data does not
On 05/25/2012 08:40 PM, Richard Elling wrote:
See the solution at https://www.illumos.org/issues/644
-- richard
And predictably, I'm back with another n00b question regarding this
array. I've put a pair of LSI-9200-8e controllers in the server and
attached the cables to the enclosure to each of
On 05/30/2012 10:53 PM, Richard Elling wrote:
On May 30, 2012, at 1:07 PM, Sašo Kiselkov wrote:
On 05/25/2012 08:40 PM, Richard Elling wrote:
See the solution at https://www.illumos.org/issues/644
-- richard
And predictably, I'm back with another n00b question regarding this
array. I've
On 05/30/2012 10:53 PM, Richard Elling wrote:
Those ereports are consistent with faulty cabling. You can trace all of the
cables and errors using tools like lsiutil, sg_logs, kstats, etc.
Unfortunately,
it is not really possible to get into this level of detail over email, and it
can
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One oddity is the box has two SATA
SSDs which also show up in the card's BIOS, but present OK
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do
On 05/28/2012 01:12 PM, Ian Collins wrote:
On 05/28/12 11:01 PM, Sašo Kiselkov wrote:
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian
On 05/07/2012 05:42 AM, Greg Mason wrote:
I am currently trying to get two of these things running Illumian. I don't
have any particular performance requirements, so I'm thinking of using some
sort of supported hypervisor, (either RHEL and KVM or VMware ESXi) to get
around the driver
I'm currently trying to get a SuperMicro JBOD with dual SAS expander
chips running in MPxIO, but I'm a total amateur to this and would like
to ask about how to detect whether MPxIO is working (or not).
My SAS topology is:
*) One LSI SAS2008-equipped HBA (running the latest IT firmware from
On 05/25/2012 07:35 PM, Jim Klimov wrote:
Sorry I can't comment on MPxIO, except that I thought zfs could by
itself discern two paths to the same drive, if only to protect
against double-importing the disk into pool.
Unfortunately, it isn't the same thing. MPxIO provides redundant
signaling to
On 05/25/2012 08:40 PM, Richard Elling wrote:
See the solution at https://www.illumos.org/issues/644 -- richard
Good Lord, that was it! It never occurred to me that the drives had a
say in this. Thanks a billion!
Cheers,
--
Saso
Hi,
I'm getting weird errors while trying to install openindiana 151a on a
Dell R715 with a PERC H200 (based on an LSI SAS 2008). Any time the OS
tries to access the drives (for whatever reason), I get this dumped into
syslog:
genunix: WARNING: Device
On 05/16/2012 09:45 AM, Koopmann, Jan-Peter wrote:
Hi,
are those DELL branded WD disks? DELL tends to manipulate the
firmware of the drives so that power handling with Solaris fails.
If this is the case here:
Easiest way to make it work is to modify /kernel/drv/sd.conf and
add an entry
On 05/16/2012 09:45 AM, Koopmann, Jan-Peter wrote:
Hi,
are those DELL branded WD disks? DELL tends to manipulate the firmware of
the drives so that power handling with Solaris fails. If this is the case
here:
Easiest way to make it work is to modify /kernel/drv/sd.conf and add an
entry
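For reference, such an sd.conf override typically looks like the fragment below; the vendor/product strings are placeholders that must match the inquiry data of the actual drives (check with `format` or `iostat -En`), so treat this as a hedged sketch rather than a drop-in fix:

```
# /kernel/drv/sd.conf -- VID (8 chars, space-padded) and PID are placeholders
sd-config-list = "WD      WD2002FYPS-1", "power-condition:false";
```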
On 05/16/2012 10:17 AM, Koopmann, Jan-Peter wrote:
One thing came up while trying this - I'm on a text install
image system, so my / is a ramdisk. Any ideas how I can change
the sd.conf on the USB disk or reload the driver configuration on
the fly? I tried looking for the file on the USB
On 01/17/2012 01:06 AM, David Magda wrote:
Kind of off topic, but I figured of some interest to the list. There will be
a new file system in Windows 8 with some features that we all know and love
in ZFS:
As mentioned previously, one of our design goals was to detect and correct
On 07/01/2011 12:01 AM, Sašo Kiselkov wrote:
On 06/30/2011 11:56 PM, Sašo Kiselkov wrote:
Hm, it appears I'll have to do some reboots and more extensive testing.
I tried tuning various settings and then returned everything back to the
defaults. Yet, now I can ramp the number of concurrent
On 11/30/2011 02:40 PM, Edmund White wrote:
Absolutely.
I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
running NexentaStor.
On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
internal disks (boot, a handful of large disks, Pliant SSDs for
On 06/30/2011 01:10 PM, Jim Klimov wrote:
2011-06-30 11:47, Sašo Kiselkov writes:
On 06/30/2011 02:49 AM, Jim Klimov wrote:
2011-06-30 2:21, Sašo Kiselkov writes:
On 06/29/2011 02:33 PM, Sašo Kiselkov wrote:
Also there is a buffer-size limit, like this (384Mb):
set zfs:zfs_write_limit_override
On 06/30/2011 01:33 PM, Jim Klimov wrote:
2011-06-30 15:22, Sašo Kiselkov writes:
I tried increasing this
value to 2000 or 3000, but without an effect - perhaps I need to set it
at pool mount time or in /etc/system. Could somebody with more
knowledge
of these internals please chime
On 06/30/2011 11:56 PM, Sašo Kiselkov wrote:
On 06/30/2011 01:33 PM, Jim Klimov wrote:
2011-06-30 15:22, Sašo Kiselkov writes:
I tried increasing this
value to 2000 or 3000, but without an effect - perhaps I need to set it
at pool mount time or in /etc/system. Could somebody with more
On 06/29/2011 02:33 PM, Sašo Kiselkov wrote:
Also there is a buffer-size limit, like this (384Mb):
set zfs:zfs_write_limit_override = 0x18000000
or on command-line like this:
# echo zfs_write_limit_override/W0t402653184 | mdb -kw
Currently my value for this is 0. How should I set it? I'm
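As a unit sanity check on the two forms in the quote: the /etc/system value is hex bytes, the mdb write pokes decimal bytes, and 384 MiB is the same quantity either way:

```python
# 384 MiB expressed in the notations used in the thread
mib = 1024 * 1024
limit = 384 * mib
print(hex(limit), limit)  # 0x18000000 402653184
```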
On 06/27/2011 11:59 AM, Jim Klimov wrote:
I'd like to ask about whether there is a method to enforce a
certain txg
commit frequency on ZFS.
Well, there is a timer frequency based on TXG age (i.e 5 sec
by default now), in /etc/system like this:
set zfs:zfs_txg_synctime = 5
When
On 06/26/2011 06:17 PM, Richard Elling wrote:
On Jun 24, 2011, at 5:29 AM, Sašo Kiselkov wrote:
Hi All,
I'd like to ask about whether there is a method to enforce a certain txg
commit frequency on ZFS. I'm doing a large amount of video streaming
from a storage pool while also slowly
Hi All,
I'd like to ask about whether there is a method to enforce a certain txg
commit frequency on ZFS. I'm doing a large amount of video streaming
from a storage pool while also slowly continuously writing a constant
volume of data to it (using a normal file descriptor, *not* in O_SYNC).
When
On 05/24/2011 03:08 PM, a.sm...@ukgrid.net wrote:
Hi,
see the seeksize script on this URL:
http://prefetch.net/articles/solaris.dtracetopten.html
Not used it but looks neat!
cheers Andy.
I already did and it does the job just fine. Thank you for your kind
suggestion.
BR,
--
Saso
On 05/19/2011 07:47 PM, Richard Elling wrote:
On May 19, 2011, at 5:35 AM, Sašo Kiselkov wrote:
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed (8-10
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor
read/write ops using iostat, but that doesn't tell me how contiguous the
data
On 05/19/2011 03:35 PM, Tomas Ögren wrote:
On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes:
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed
On 04/09/2011 01:41 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Julian King
Actually I think our figures more or less agree. 12 disks = 7 mbits
48 disks = 4x7mbits
I know that sounds like terrible
On 04/08/2011 05:20 PM, Mark Sandrock wrote:
On Apr 8, 2011, at 7:50 AM, Evaldas Auryla evaldas.aur...@edqm.eu wrote:
On 04/ 8/11 01:14 PM, Ian Collins wrote:
You have built-in storage failover with an AR cluster;
and they do NFS, CIFS, iSCSI, HTTP and WebDav
out of the box.
And you
On 04/08/2011 06:59 PM, Darren J Moffat wrote:
On 08/04/2011 17:47, Sašo Kiselkov wrote:
In short, I think the X4540 was an elegant and powerful system that
definitely had its market, especially in my area of work (digital video
processing - heavy on latency, throughput and IOPS - an area
On 04/08/2011 07:22 PM, J.P. King wrote:
No, I haven't tried a S7000, but I've tried other kinds of network
storage and from a design perspective, for my applications, it doesn't
even make a single bit of sense. I'm talking about high-volume real-time
video streaming, where you stream
On 01/07/2011 10:26 AM, Darren J Moffat wrote:
On 06/01/2011 23:07, David Magda wrote:
On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
Fletcher is faster than SHA-256, so I think that must be what you're
asking about: can Fletcher+Verification be faster than
Sha256+NoVerification? Or do
On 01/07/2011 01:15 PM, Darren J Moffat wrote:
On 07/01/2011 11:56, Sašo Kiselkov wrote:
On 01/07/2011 10:26 AM, Darren J Moffat wrote:
On 06/01/2011 23:07, David Magda wrote:
On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
Fletcher is faster than SHA-256, so I think that must be what