On Tue, Nov 03, 2009 at 11:17:47AM +1100, Scott Bragg wrote:
> Hi,
> 
> I just got back after leaving my computer for a week or so generating a
> table using the old version, so I have two 846Mb files (data.end.tbl,
> data.start.tbl). Should I just keep this process going until it completes or
> compile the new version?

Use the new version and keep the old files.
If you leave the machine unattended again, set --operations to a higher value.

> 
> Here's my invocation command:
> 
> ./a51table --condition rounds:rounds=32 --roundfunc
> xor:condition=distinguished_point::bits=15:generator=lfsr::tablesize=32::advance=143136
> --implementation sharedmem --algorithm A51 --device cuda:operations=512
> --work random:prefix=11,0 --consume file:prefix=data:append --logger normal
> generate --chains 380000000 --chainlength 3000000 --intermediate
> filter:runlength=512
> 
> Other questions:
> 
> RE: GTX260. I just installed this card before I left and I'm getting about
> 105chains/sec with --operations 512. However I see approximate benchmarks on
> the website that say the GTX260 can do about 165chains/sec. I'm using Ubuntu
> Jaunty, compiz and 2 monitors at 1900x1024. I can increase --operations to
> 768 (114chains/sec) or 1024 (120chains/sec) but I do notice some slowdown

165 chains/sec can be reached with --operations 8192 and overclocking to
691 MHz, and it requires the GTX260 variant with 216 cores.
The new client performs about 10% better when --operations is quite small.
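As a sanity check on these rates, it is easy to translate chains/sec into an expected wall-clock time for a whole table. The sketch below is just the arithmetic; the chain counts and rates are the ones discussed in this thread, not fixed properties of the tool:

```python
def days_to_finish(chains, chains_per_sec):
    """Rough wall-clock estimate (in days) for generating `chains`
    chains at a sustained rate of `chains_per_sec`."""
    return chains / chains_per_sec / 86400.0

# e.g. a 380,000,000-chain table at 165 chains/sec:
eta = days_to_finish(380_000_000, 165)  # roughly 27 days
```

This ignores restarts and the slowdown from driving displays on the same GPU, so real runs will take somewhat longer.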

> 
> RE the--network
> nickname=<your_name>:password=<your_passwd>:host=reflextor.com:port=80
> option - do I have to create an account somewhere first or just make up a
> user/pass?

No registration is needed; you do not even have to supply a nickname or
a password. The first one to use a nickname owns it.

> 
> RE other gpu's and cpu's: I have 2 machines with simple Geforce 9500 cards
> that do about 20chains/sec, and a couple of core quad's that for most of the
> time are barely loaded (I see new posts about a CPU implementation). Since
> 20chains/sec would take 10-12months to generate a table  with say 380 000
> 000 chains if I were to utilise these gpu's (and perhaps the core quads)
> should I generate/contribute smaller tables on these (say 32 000 000 chains)
> as I'd prefer to keep rolling some sort of new table each month instead of
> keeping one process going for a year.

Use all resources for a single table, passing the same --advance parameter
to every client invocation; later you can merge and sort the files.
The CPU and Cell implementations are not yet ready for production use and
will be announced when that changes.
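The merge+sort step can be sketched in a few lines. This is an illustrative assumption, not the project's actual file format: it treats each chain as a pair of 64-bit values (start point, end point) and sorts the combined table by end point, which is the order a lookup needs:

```python
import struct

# Assumed record layout: one chain = 64-bit start point, 64-bit end point.
RECORD = struct.Struct("<QQ")

def load_chains(path):
    """Read all (start, end) records from one partial table file."""
    with open(path, "rb") as f:
        data = f.read()
    return [RECORD.unpack_from(data, i) for i in range(0, len(data), RECORD.size)]

def merge_tables(paths, out_path):
    """Concatenate several partial tables produced with the same
    --advance value and write them out sorted by end point."""
    chains = []
    for p in paths:
        chains.extend(load_chains(p))
    chains.sort(key=lambda c: c[1])  # sort on the end point
    with open(out_path, "wb") as f:
        for start, end in chains:
            f.write(RECORD.pack(start, end))
```

For multi-gigabyte files an external (on-disk) sort would be used instead of sorting in memory, but the principle is the same.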

If someone only has relatively slow hardware and still wants to contribute,
they can: when the project is nearing completion and they are left with a
half-finished table, they can pass it along to somebody with more horsepower,
who can then finish the remaining 50% in half a month. That means transferring
something around 3 GB at this point.
If too many tables end up being produced, the overall success probability
simply increases, so there is no hard cutoff. (My suggestion.)
_______________________________________________
A51 mailing list
[email protected]
http://lists.lists.reflextor.com/cgi-bin/mailman/listinfo/a51
