I see the sample tables in the data folder and the test application, but I
couldn't figure out how the test scenario works. Would you explain how
this test vector works, and what is tested against what?
Thanks



>
> ---------- Forwarded message ----------
> From: Sascha Krissler <[email protected]>
> Date: Tue, Sep 29, 2009 at 10:56 PM
> Subject: Re: [A51] Fwd: all tables
> To: [email protected]
>
> > Thanks, that's what I was looking for. So with an HP supercomputer
> > consisting of:
> > SSD + i7 quad-core + 3x GTX 295, we will have nearly 1000 chains per
> > second, and two of these computers equal all the computing resources
> > that we have on this project now? Am I correct? After all, does
>
> This is correct from the point of view of the network server, which only
> sees nodes that report status.
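A quick sanity check of that throughput claim, using the ~0.00107 chains per (core * MHz * second) figure quoted further down in this thread. The GTX 295 specs below (2 x 240 shader cores at 576 MHz) are my assumption, not from the mail:

```python
# Rough sanity check of the "nearly 1000 chains/sec" estimate for
# 3x GTX 295. Card specs are my assumption, not from the mail.
cores_per_card = 2 * 240   # GTX 295 is a dual-GPU card (assumed)
mhz = 576                  # stock GTX 295 core clock (assumed)
rate = 0.00107             # chains per (core * MHz * s), from the GTX 260 figure
est = 3 * cores_per_card * mhz * rate
print(est)  # in the ballpark of "nearly 1000" chains/sec
```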
>
> > overclocking the GPUs help? I also have another important question.
>
> You can get a 20% increase from overclocking, at least with a single
> GTX 260 in an open case, ensuring proper cooling.
>
> > Assuming I have produced several chains, is there any way I can test,
> > with an imaginary known plaintext, whether there is a chance of finding
> > it in the limited set of produced chains? How can one be sure the
> > tables being built are correct, other than by following the project's
> > instructions?
>
> You can download one of the A5/1 implementations that are not from our
> project and then generate some ciphertext. Also read the reference
> implementation in svn://reference/a51.cpp to verify that the tables
> produce data to speed up a reverse function of A5/1. If you assume that
> the reference implementation is correct, then you can verify the actual
> generated table as described here:
>
> http://reflextor.com/trac/a51/wiki/RunningTheProgram#Checkingvaluesagainstthereferenceimplementation
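For experimenting along these lines, here is a minimal, unverified Python sketch of an A5/1 keystream generator. The register lengths, feedback taps, and majority clocking follow the published description of the cipher; the code itself is my own rough transcription (the key and frame number in the demo are arbitrary), not the project's reference implementation, so check its output against svn://reference/a51.cpp before trusting it:

```python
# Minimal A5/1 keystream sketch (my own transcription of the published
# algorithm, NOT the project's reference implementation -- verify first).

def maj(a, b, c):
    """Majority of three bits."""
    return (a & b) | (a & c) | (b & c)

class A51:
    # (register length, feedback taps, clocking bit) for the three LFSRs
    SPECS = [(19, (13, 16, 17, 18), 8),
             (22, (20, 21), 10),
             (23, (7, 20, 21, 22), 10)]

    def __init__(self, key, frame):
        self.regs = [0, 0, 0]
        # mix in 64 key bits, then 22 frame bits, clocking all registers
        for i in range(64):
            self._clock_all((key >> i) & 1)
        for i in range(22):
            self._clock_all((frame >> i) & 1)
        # 100 warm-up clocks with majority clocking, output discarded
        for _ in range(100):
            self._clock_majority()

    def _step(self, r, spec, inbit):
        length, taps, _ = spec
        fb = inbit
        for t in taps:            # feedback = XOR of tap bits (+ input bit)
            fb ^= (r >> t) & 1
        return ((r << 1) | fb) & ((1 << length) - 1)

    def _clock_all(self, inbit):
        for i in range(3):
            self.regs[i] = self._step(self.regs[i], self.SPECS[i], inbit)

    def _clock_majority(self):
        # only registers whose clocking bit agrees with the majority advance
        bits = [(self.regs[i] >> self.SPECS[i][2]) & 1 for i in range(3)]
        m = maj(*bits)
        for i in range(3):
            if bits[i] == m:
                self.regs[i] = self._step(self.regs[i], self.SPECS[i], 0)

    def keystream(self, n=228):
        out = []
        for _ in range(n):
            self._clock_majority()
            bit = 0
            for length, _, _ in self.SPECS:   # output = XOR of the three MSBs
                bit ^= 1 & (self.regs[self.SPECS.index((length, _, _))]
                            if False else 0)  # placeholder, replaced below
            out.append(bit)
        return out

    def keystream(self, n=228):  # noqa: F811 -- clean version
        out = []
        for _ in range(n):
            self._clock_majority()
            bit = 0
            for i in range(3):
                bit ^= (self.regs[i] >> (self.SPECS[i][0] - 1)) & 1
            out.append(bit)
        return out

ks = A51(0x0123456789ABCDEF, 0x2F4).keystream()
print(len(ks))  # 228 bits per GSM frame (114 downlink + 114 uplink)
```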
>
>
> >
> > Best,
> >
> > On Tue, Sep 29, 2009 at 7:34 PM, Sascha Krissler
> > <sascha.kriss...@web.de> wrote:
> > Dividing the number of chains needed by the available resources gives
> > you the time needed to finish the tables. So given 2000 chains/sec,
> > which is the speed of all nodes reporting status, and 2^37 chains
> > needed, that would be roughly 114 weeks. If we want to be done by
> > Christmas, we need about nine times as much power.
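As a quick check of that schedule arithmetic, here is a sketch recomputing it from the two figures quoted in the mail (2000 chains/sec aggregate, 2^37 chains needed):

```python
# Time to finish the tables = chains needed / aggregate chain rate,
# using only the figures quoted in the mail.
chains_needed = 2 ** 37        # total chains for the table set
chains_per_sec = 2000          # aggregate speed of all reporting nodes
weeks = chains_needed / chains_per_sec / (7 * 24 * 3600)
print(round(weeks))  # 114 weeks at this rate, i.e. over two years
```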
> >
> > A GTX 260 (700 MHz) with 216 cores gives 162 chains/sec peak; a
> > 9600M GT (500 MHz) with 32 cores gives 20 chains/sec peak. So you get
> > around 0.00107 (GTX 260) and 0.00125 (9600M GT) chains per
> > (core * MHz * second). The frequency is not really the core frequency,
> > but since the shader frequency is usually linked to the GPU frequency,
> > I am used to calculating with the GPU frequency.
> > More new text below.
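That per-(core * MHz) normalization can be reproduced directly; a sketch using only the numbers quoted in the mail:

```python
# Normalize measured chain rates to chains per (core * MHz * second),
# using the figures from the mail.
def rate(chains_per_sec, cores, mhz):
    return chains_per_sec / (cores * mhz)

gtx260 = rate(162, 216, 700)   # measured peak on a GTX 260
m9600 = rate(20, 32, 500)      # measured peak on a 9600M GT
print(round(gtx260, 5), round(m9600, 5))  # 0.00107 0.00125
```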
> >
> > > I really didn't understand your answer. Anyway, according to the
> > > table structure, can we divide the number of chains needed by the
> > > amount of computing resources (cores, any other factors?) and come
> > > up with a parametric number?
> > >
> > > ---------- Forwarded message ----------
> > > From: *Sascha Krissler*<[email protected]>
> > >
> > > Since the tables will be uploaded, there is no need to do this.
> >
> > It does not make sense to compute the tables yourself, since they
> > will already be produced with the current network.
> >
> > > If you want to decrypt messages without the network, you will
> > > probably want to use an FPGA of the proper size, and you would need
> > > some very fast SSDs. Take a look at the TableStructure node in the
> > > trac wiki
> >
> > With the network that is proposed, you distribute the precomputation
> > time and disk accesses across several nodes. If you wanted to do this
> > all on your own, you would need a lot of computing power and hardware
> > that can do many I/O operations per unit of time. If you want to do
> > all the precomputation yourself, you would need 380 fast GPUs, which
> > will need 38 kW of power, and you would have to do 2.5 million disk
> > accesses, which would take about 40 minutes with one hard disk
> > (assuming 1 ms access time).
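Spelling out that back-of-envelope arithmetic as a sketch, using only the figures in the mail (the per-GPU wattage is implied by 380 GPUs drawing 38 kW):

```python
# Back-of-envelope costs of doing all precomputation on one node,
# using the figures from the mail.
gpus = 380
watts_per_gpu = 100                  # implied by 380 GPUs -> 38 kW
print(gpus * watts_per_gpu / 1000)   # 38.0 kW total draw

accesses = 2_500_000                 # disk accesses during table use
access_time_ms = 1.0                 # assumed seek time per access
minutes = accesses * access_time_ms / 1000 / 60
print(round(minutes))  # about 42 minutes on a single disk
```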
> >
> > > for some computation. If you used a hundred GPUs to do the
> > > precalculation during the lookup, you would need a several-kW power
> > > line.
> > >
> > > > If somebody wants to build all the tables in-house, how do we
> > > > compute the needed resources and time? I want to simplify things
> > > > by having a formula to plug in the number of cores, frequency
> > > > (considering that overclocking is possible and is also a variable)
> > > > and the other factors all together. All ideas are appreciated.
> > > >
> > >
> >
>
>
_______________________________________________
A51 mailing list
[email protected]
http://lists.lists.reflextor.com/cgi-bin/mailman/listinfo/a51
