e impossible to find it via brute-force.
On Wed, Jul 11, 2012 at 4:49 PM, Sašo Kiselkov wrote:
> On 07/11/2012 04:39 PM, Ferenc-Levente Juhos wrote:
As I said several times before, to produce hash collisions, or to calculate
rainbow tables (as a previous user theorized), you only need the
following.
You don't need to reproduce all possible blocks.
1. SHA256 produces a 256-bit hash.
2. That means it produces a 256-bit value, in other words a value
between 0 and 2^256 - 1.
3. If you start counting from 0 to 2^256 and for each number calculate the
SHA256 you will get at least one hash collision (guaranteed by the
pigeonhole principle: that's 2^256 + 1 inputs for only 2^256 possible
hash values).
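The counting argument above can be made concrete on a deliberately weakened
hash. A minimal sketch (the function names are mine, and the digest is
truncated to 2 bytes so the collision space is 2^16 and a collision is
reachable in milliseconds rather than aeons):

```python
import hashlib

def truncated_sha256(data: bytes, nbytes: int = 2) -> bytes:
    # Keep only the first `nbytes` bytes of the SHA-256 digest,
    # shrinking the output space to 2**(8*nbytes) values.
    return hashlib.sha256(data).digest()[:nbytes]

def find_collision(nbytes: int = 2):
    # Count 0, 1, 2, ... and hash each counter; by the pigeonhole
    # principle a repeat MUST occur within 2**(8*nbytes) + 1 inputs
    # (and by the birthday bound it shows up far sooner).
    seen = {}
    counter = 0
    while True:
        digest = truncated_sha256(str(counter).encode(), nbytes)
        if digest in seen:
            return seen[digest], counter  # two inputs, same hash
        seen[digest] = counter
        counter += 1
```

The same loop run against the full 256-bit digest would be exactly the
brute-force search discussed earlier in the thread: guaranteed to terminate
in principle, hopeless in practice.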
Precisely, I said the same thing a few posts before:
dedup=verify solves that. And as I said, one could use dedup=verify with
an inferior hash algorithm (that is much faster) with the purpose of
reducing the number of dedup candidates.
For that matter one could use a trivial CRC32, if the two blocks that
match are compared byte-for-byte afterwards anyway.
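A sketch of that idea against a hypothetical in-memory block store (this is
illustrative code, not ZFS internals): the weak, fast CRC32 only nominates
candidates, and the byte-for-byte verify step guarantees correctness, so a
checksum collision can cost an extra comparison but never lose data.

```python
import zlib

# crc32 value -> list of previously stored blocks with that checksum
blocks_by_crc = {}

def dedup_store(block: bytes) -> bool:
    """Store a block; return True if it deduplicated against an
    existing block, False if it was newly stored."""
    key = zlib.crc32(block)
    for candidate in blocks_by_crc.get(key, []):
        if candidate == block:   # verify: full byte-for-byte compare
            return True          # genuine duplicate, nothing stored
    # Either a brand-new checksum or a CRC32 collision with a
    # different block -- both are stored as new data.
    blocks_by_crc.setdefault(key, []).append(block)
    return False
```

The cheap checksum does the filtering; correctness rests entirely on the
final comparison, which is the point being made about dedup=verify.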
Wed, Jul 11, 2012 at 11:10 AM, Sašo Kiselkov wrote:
> On 07/11/2012 10:50 AM, Ferenc-Levente Juhos wrote:
> > Actually, although as you pointed out the chances of an SHA256
> > collision are minimal, it can still happen; that would mean
> > that the dedup
be "bulletproof" and faster.
On Wed, Jul 11, 2012 at 10:50 AM, Ferenc-Levente Juhos wrote:
> Actually, although as you pointed out the chances of an SHA256
> collision are minimal, it can still happen; that would mean
> that the dedup algorithm discards a block that it wrongly considers a
> duplicate
have brought up the subject; now I know that if I
enable the dedup feature I must set it to sha256,verify.
On Wed, Jul 11, 2012 at 10:41 AM, Ferenc-Levente Juhos wrote:
I was under the impression that the hash (or checksum) used for data
integrity is the same as the one used for deduplication,
but now I see that they are different.
On Wed, Jul 11, 2012 at 10:23 AM, Sašo Kiselkov wrote:
> On 07/11/2012 09:58 AM, Ferenc-Levente Juhos wrote:
Hello all,
what about the fletcher2 and fletcher4 algorithms? According to the zfs man
page on oracle, fletcher4 is the current default.
Shouldn't the fletcher algorithms be much faster than any of the SHA
algorithms?
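For reference, fletcher4 is only a handful of additions per 32-bit word,
which is why it is far cheaper than the many rounds of bit-mixing SHA-256
performs. A rough sketch of a fletcher4-style checksum, modeled on the
public ZFS source (assumptions: four 64-bit accumulators over little-endian
32-bit words, and input length a multiple of 4 bytes):

```python
import struct

MASK64 = (1 << 64) - 1  # accumulators wrap modulo 2**64

def fletcher4(data: bytes) -> tuple:
    # Four running sums, each feeding the next, updated once per
    # 32-bit little-endian word of the input.
    a = b = c = d = 0
    for (word,) in struct.iter_unpack('<I', data):
        a = (a + word) & MASK64
        b = (b + a) & MASK64
        c = (c + b) & MASK64
        d = (d + c) & MASK64
    return (a, b, c, d)
```

Being a simple linear sum, a fletcher checksum is trivial to collide
deliberately, which is why it suits on-disk integrity checking but not
dedup without a verify step.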
On Wed, Jul 11, 2012 at 9:19 AM, Sašo Kiselkov wrote:
> On 07/11/2012 05:20 AM,
Of course you don't see any difference, this is how it should work.
'ls' will never report the compressed size, because it's not aware of it.
Nothing is aware of the compression and decompression that takes place
on-the-fly, except of course zfs.
That's the reason why you could gain in write and read speed.
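That transparency is easy to observe from the shell (hypothetical dataset
name tank/data; the properties shown are standard ZFS ones):

```shell
zfs set compression=on tank/data     # enable on-the-fly compression
cp big.log /tank/data/
ls -l /tank/data/big.log             # reports the logical (uncompressed) size
du -h /tank/data/big.log             # reports blocks actually allocated
zfs get compressratio tank/data      # ratio achieved by transparent compression
```

Only du and the zfs properties reflect the compressed on-disk footprint;
every application reading the file sees the original bytes.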