>Hi,
>
>
>When cracking a burst, for all DP generated (with 12lsb == 0), about
>30% of them will find a match on disk.

The probability of finding an endpoint on disk is the same as the
probability of producing a merge when adding a new chain to the table.
This only holds if your lookup chain starts at round 0; for lookup chains
starting at rounds 1-7, the probability gets smaller. IIRC our merge
ratio at generation time was 60%.
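
To make the round dependence concrete, here is a minimal lookup-side
sketch in Python, assuming a table with 8 rounds and a distinguished-point
rule of 12 zero LSBs as described in this thread. The step() mixer and its
constants are toy placeholders, not the real A5/1-based chain function.
A chain started at round Rs only walks rounds Rs..7, so it has fewer
states at which it can coincide with a table chain.

DP_MASK    = (1 << 12) - 1       # distinguished point: 12 LSBs are zero
NUM_ROUNDS = 8
MASK64     = (1 << 64) - 1

def step(state, rnd):
    """Toy stand-in for one application of round `rnd` of the chain function."""
    state = (state + (rnd + 1) * 0x9E3779B97F4A7C15) & MASK64
    state = ((state ^ (state >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    state = ((state ^ (state >> 27)) * 0x94D049BB133111EB) & MASK64
    return state ^ (state >> 31)

def run_to_dp(state, rnd, max_steps=1 << 16):
    """Apply round `rnd` until the state is a distinguished point."""
    for _ in range(max_steps):
        state = step(state, rnd)
        if state & DP_MASK == 0:
            return state
    return None                  # pathological chain, give up

def lookup_endpoints(sample):
    """One candidate end point per possible starting round Rs = 0..7."""
    endpoints = []
    for rs in range(NUM_ROUNDS):
        state = sample
        for rnd in range(rs, NUM_ROUNDS):    # only rounds Rs..7 remain
            state = run_to_dp(state, rnd)
            if state is None:
                break
        if state is not None:
            endpoints.append((rs, state))
    return endpoints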

>This seems like _a_lot_ !
>
>Especially since when you look at all the end points in the table,
>only about 1/750000 of all the possible 52 bits (64-12) values are
>present.

A lookup chain has (8 - Rs) * 2^12 chances to merge with a table chain,
where Rs is the starting round. When Rs == 0, the number of chances to
merge is 2^32.5 (table height) * 2^12 * 2^3 = 2^47.5.
That 2^47.5 is not the expected 60% of 2^52; the missing factor of
roughly 2^3.5 can be attributed to the affinity of A5/1 for producing
merges when it is clocked with the majority rule. The 2^3.5 also fits
well with the keyspace reduction of the 100-extra-clockings optimization.
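
Just restating those figures numerically (all numbers are taken from the
paragraph above, nothing here is measured):

import math

table_height     = 2 ** 32.5               # chains stored in the table
lookup_positions = (8 - 0) * 2 ** 12       # (8 - Rs) * 2^12 states, Rs = 0

merge_chances = table_height * lookup_positions
expected      = 0.6 * 2 ** 52              # "60% of 2^52"

print(f"merge chances : 2^{math.log2(merge_chances):.1f}")    # 2^47.5
print(f"60% of 2^52   : 2^{math.log2(expected):.1f}")         # 2^51.3
print(f"gap           : 2^{math.log2(expected / merge_chances):.1f}")
# the remaining gap of roughly 2^3.5..2^4 is what gets attributed to
# A5/1's merge affinity under majority-rule clocking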

Each end point on disk is the thin end of a funnel, and you can only
stack funnels at their wide end (which represents the rest of the chain).
That is why the density of the end points is at 60% of the maximum even
though only 2^32.5 of the 2^52 possible end points are stored in the
table.

>
>This is particularly annoying because :
>
> - For each hit, we have to replay the chain to see if we find the
>bitstream we're looking for.

This is less of a problem if you take into account that checking a
false positive means investing a certain amount of time to recompute the
chain from its start point. That chain is computed on a GPU, which can
calculate 1 or 100k chains in essentially the same time, because you
either use only 1 ALU or you use all 320. You cannot use more than one
ALU to speed up the calculation of a single chain.
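
As a sketch of the batching pattern this suggests (the helper names, the
batch size and the toy step() below are mine, not Kraken's actual code):

BATCH  = 100_000        # enough work to keep all ALUs busy
MASK64 = (1 << 64) - 1

def step(state, rnd):
    """Toy stand-in for the A5/1-based round function."""
    state = (state ^ ((rnd + 1) * 0x9E3779B97F4A7C15)) & MASK64
    state = ((state ^ (state >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    return state ^ (state >> 31)

pending = []            # (start_point, start_round, observed_sample)

def queue_candidate(start_point, start_round, observed_sample):
    """Collect disk hits instead of checking each one on its own."""
    pending.append((start_point, start_round, observed_sample))
    return flush() if len(pending) >= BATCH else []

def flush(chain_len=1 << 12):
    """Recompute all queued chains; on real hardware this would be a single
    GPU launch, since 1 or 100k chains cost about the same wall-clock time."""
    confirmed = []
    for start_point, start_round, sample in pending:
        state = start_point
        for _ in range(chain_len):
            state = step(state, start_round)
            if state == sample:
                # true positive: the chain really contains the observed sample
                confirmed.append((start_point, start_round))
                break
    pending.clear()
    return confirmed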

The above is not entirely true: on a CPU you can only compute a few
hundred chains in parallel, and the bitslice implementation is a factor
of 2 or 3 slower than the latency-optimized implementation, with which a
GPU can only compute a few thousand chains in parallel.

> - If the disk hit rate was very low, we could make an 'index' with
>just the endpoints (and this would be about 37% of the current size),
>and then just put that part on very fast drive (SSD) and leave the
>rest on HDD.

There is a technique called 'checkpoints' that can help eliminate false
hits
(http://sites.uclouvain.be/security/download/slides/AvoineJO-2005-indocrypt-slides.pdf)
but it is no silver bullet, and it has to be considered at table
generation time.
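
For reference, a minimal sketch of the checkpoint idea from those slides,
with hypothetical checkpoint positions and a toy step() function; the
extra bits have to be recorded while the chains are generated, which is
exactly why it has to be planned at table generation time:

MASK64      = (1 << 64) - 1
CHAIN_LEN   = 1 << 12
CHECKPOINTS = (1000, 3000)       # fixed columns along the chain (made up)

def step(state):
    """Toy stand-in for the chain's step function."""
    state = ((state ^ (state >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    return state ^ (state >> 31)

def checkpoint_bit(state):
    return state & 1             # e.g. the least significant bit

def generate_chain(start):
    """Return (end_point, checkpoint_bits); the bits are stored with the chain."""
    state, bits = start, {}
    for i in range(1, CHAIN_LEN + 1):
        state = step(state)
        if i in CHECKPOINTS:
            bits[i] = checkpoint_bit(state)
    return state, bits

def checkpoints_match(sample, column, stored_bits):
    """Walk the online chain from `column` (where the sample is assumed to
    sit) towards the end point; any mismatching checkpoint bit proves the
    disk hit is a false alarm without recomputing from the start point."""
    state = sample
    for i in range(column, CHAIN_LEN):
        if i in CHECKPOINTS and checkpoint_bit(state) != stored_bits[i]:
            return False
        state = step(state)
    return True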