Thanks, that's what I was looking for. So with an HP "supercomputer"
consisting of an SSD, an i7 quad core, and 3x GTX 295, we would get nearly
1000 chains per second, and two of these machines would equal all the
computing resources this project currently has. Am I correct? Also, does
overclocking the GPUs help? I have another important question: assuming I
have produced several chains, is there any way to test, using an imaginary
known plaintext, whether there is a chance of finding it in the limited set
of produced chains? How can one be sure the tables being built are correct,
other than by following the project's instructions?
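One common way to answer the last question empirically is to pick random
"imaginary" keys, run the normal table lookup against the chains that have
been produced, and measure the hit rate. A toy sketch of that idea, with a
made-up hash-based step function and a tiny 16-bit state space standing in
for the real chain computation (none of this is project code):

```python
# Toy rainbow-chain coverage test: build some chains, then estimate how
# often a random "known" target value can be recovered from them.
import hashlib
import random

STATE_BITS = 16
MASK = (1 << STATE_BITS) - 1
CHAIN_LEN = 50
N_CHAINS = 400

def step(state, pos):
    # Made-up round function standing in for cipher output plus a
    # position-dependent reduction.
    h = hashlib.sha1(f"{state}:{pos}".encode()).digest()
    return int.from_bytes(h[:4], "big") & MASK

def endpoint(start):
    s = start
    for pos in range(CHAIN_LEN):
        s = step(s, pos)
    return s

# Precompute: map each chain's endpoint back to its start point.
random.seed(42)
table = {}
for _ in range(N_CHAINS):
    start = random.randrange(1 << STATE_BITS)
    table[endpoint(start)] = start

def found_in_tables(target):
    """Return True if target occurs as an intermediate value of some chain."""
    for guess_pos in range(1, CHAIN_LEN + 1):
        # Assume target is the value right after step guess_pos - 1;
        # walk it forward to the would-be endpoint.
        s = target
        for pos in range(guess_pos, CHAIN_LEN):
            s = step(s, pos)
        start = table.get(s)
        if start is None:
            continue
        # Regenerate from the start to rule out false alarms (merges).
        s = start
        for pos in range(guess_pos):
            s = step(s, pos)
        if s == target:
            return True
    return False

# Coverage estimate with imaginary known targets.
trials = 200
hits = sum(found_in_tables(random.randrange(1 << STATE_BITS))
           for _ in range(trials))
print(f"estimated coverage: {hits}/{trials}")
```

The measured hit rate approximates the table coverage, so it also gives a
rough answer to "what are the chances of finding a known plaintext in the
limited set of produced chains."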

Best,

On Tue, Sep 29, 2009 at 7:34 PM, Sascha Krissler <[email protected]> wrote:

> dividing the number of chains needed by the available resources gives
> you the time needed to finish the tables. so given 2000 chains/sec, which
> is the combined speed of all nodes reporting status, and 2^37 chains
> needed, that works out to about 114 weeks. if we want to be done by xmas,
> we need roughly nine times as much power.
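The division above can be sketched as follows; note that with exactly 2^37
chains and 2000 chains/sec it comes out at roughly 114 weeks (the "12 weeks
to Christmas" figure is my assumption from the date of this mail):

```python
# Time to finish the tables = chains needed / combined chains per second.
chains_needed = 2 ** 37
chains_per_sec = 2000  # combined rate of all nodes reporting status

seconds = chains_needed / chains_per_sec
weeks = seconds / (7 * 24 * 3600)
print(f"about {weeks:.0f} weeks at current rates")

# Christmas is roughly 12 weeks away, so the required speed-up is:
speedup = weeks / 12
print(f"need roughly {speedup:.0f}x the current power")
```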
>
> a gtx260 (700 MHz) with 216 cores gives 162 chains/sec peak; a 9600M GT
> (500 MHz) with 32 cores gives 20 chains/sec peak. so you get around
> 0.00107 (gtx260) and 0.00125 (9600M GT) chains per (core * MHz * second).
> strictly speaking the relevant clock is the shader frequency, not the GPU
> core frequency, but since the shader clock is usually linked to the GPU
> clock, i got used to calculating with the GPU frequency.
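The normalization above can be written out as a small helper. The
240-core / 600 MHz card at the end is purely hypothetical, just to show how
the figure could be used to predict other hardware:

```python
# Chains per (core * MHz * second), from the measured peak rates above.
def chains_per_core_mhz_sec(chains_per_sec, cores, mhz):
    return chains_per_sec / (cores * mhz)

gtx260 = chains_per_core_mhz_sec(162, 216, 700)  # ~0.00107
m9600gt = chains_per_core_mhz_sec(20, 32, 500)   # 0.00125

# Hypothetical card: 240 cores at 600 MHz, scaled from the GTX 260 figure.
predicted = gtx260 * 240 * 600
print(f"gtx260={gtx260:.5f}  9600M-GT={m9600gt:.5f}  "
      f"predicted ~{predicted:.0f} chains/sec")
```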
> more new text below.
>
> > i really didn't understand your answer. anyway, according to the table
> > structure, can we divide the number of chains that need to be produced
> > by the amount of computing resources (cores; any other factors?) and
> > come up with a parametric number?
> >
> > ---------- Forwarded message ----------
> > From: *Sascha Krissler* <[email protected]>
> >
> > since the tables will be uploaded, there is no need to do this.
>
> it does not make sense to compute the tables yourself, since they will
> already be produced by the current network.
>
> > if you want to decrypt messages without the network, you will probably
> > want to use an FPGA with the proper size and you would need some
> > very fast SSDs. take a look at the TableStructure node in the trac
> > wiki
>
> with the network that is proposed, you distribute the precomputation time
> and the disk accesses across several nodes. if you wanted to do this all
> on your own, you would need a lot of computing power and hardware that can
> do many IO operations per unit of time. to do all the precomputation
> yourself, you would need 380 fast GPUs, which would draw 38 kW of power,
> and you would have to do 2.5 million disk accesses, which would take about
> 40 minutes with one hard disk (assuming 1 ms access time).
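As a back-of-envelope check of those figures (the 100 W per GPU is my
assumption, inferred from 38 kW / 380 GPUs; the 1 ms seek time is the one
stated above, and 2.5 million seeks at 1 ms comes out closer to 40 minutes
than half an hour):

```python
# Power draw for doing all the precomputation alone.
gpus = 380
watts_per_gpu = 100  # assumed: 38 kW / 380 GPUs
power_kw = gpus * watts_per_gpu / 1000

# Total random-access time on a single disk.
seeks = 2_500_000
seek_time_s = 0.001  # 1 ms per access, as assumed above
disk_minutes = seeks * seek_time_s / 60
print(f"{power_kw:.0f} kW, about {disk_minutes:.0f} minutes of disk seeks")
```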
>
> > for some computation. if you used a hundred GPUs to do the
> > precalculation during the lookup, you would need a several-kW power
> > line.
> >
> > > if somebody wants to build all the tables in house, how do we compute
> > > the needed resources and time? i want to simplify things by having a
> > > formula that combines the number of cores, the frequency (overclocking
> > > is possible, so this is also a variable), and other factors. all ideas
> > > are appreciated
> > >
> >
> > _______________________________________________
> > A51 mailing list
> > [email protected]
> > http://lists.lists.reflextor.com/cgi-bin/mailman/listinfo/a51
>
>
>
>
