Hi Carl!
Yep, einst...@home is still using the boinc_zip code on the client side
(for our standard application). On the server side (validator) we are
using zlib and zziplib. For our most recent application, however, we are
using the gzip compression of the Core Client and have no compression
So, am I correct in assuming that if multiple upload servers are used,
they are all presumed to be on the same storage network, or that the server
code would need customization to cross-check with all upload servers to
respond to file size queries? And that customization to support an
Hi everyone,
I have a few questions related to c...@boinc:
1) Looking at client/app_start.cpp one can tell that setting avg_ncpus
affects the priority of a CPU (host) task. I suppose that setting this value
to 1 using a modified cuda plan_class will a) enable idle priority and b)
occupy a
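For context on the avg_ncpus question, here is a minimal, self-contained sketch of the kind of scheduler-side bookkeeping a modified cuda plan class might do. The struct and function below are illustrative stand-ins, not BOINC's actual code (the real HOST_USAGE and app_plan() live in the scheduler sources and differ in detail); the assumption being tested in the thread is that avg_ncpus = 1 reserves a whole CPU for the GPU task.

```cpp
#include <string>

// Illustrative stand-in for BOINC's HOST_USAGE; field names mirror the
// real struct but this is not the actual scheduler code.
struct HostUsage {
    double avg_ncpus = 0;   // average fraction of a CPU the app uses
    double ncudas = 0;      // CUDA devices used
};

// Hypothetical app_plan() for a modified cuda plan class that claims a
// full CPU (avg_ncpus = 1.0) instead of the usual small fraction.
bool app_plan_cuda(const std::string& plan_class, HostUsage& hu) {
    if (plan_class != "cuda") return false;
    hu.ncudas = 1;
    hu.avg_ncpus = 1.0;  // claim a whole CPU rather than e.g. 0.05
    return true;
}
```

Whether avg_ncpus = 1 also changes the process priority set in client/app_start.cpp is exactly the open question here; this sketch only shows the resource-claim side.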
I don't really understand the attraction to multiple, distributed upload
servers.
Taking SETI as our model, you could spot an upload server or two at
PAIX, or on Campus, and you could take uploads at gigabit rates.
Some sort of daemon on the upload server would then send work back to
the main
Having the validator pull from the upload server(s) makes a lot more
sense than a daemon on the upload server pushing work to the local storage.
Sorry for missing the obvious.
David Anderson wrote:
In principle, upload servers should be able to share storage, or not.
This feature was put in
Lynn W. Taylor wrote:
Some sort of daemon on the upload server would then send work back to
the main site as quickly as possible without saturating the link.
... but you're still limited to 80 or 90 megabytes.
It doesn't solve the problem (too many simultaneous connections); it just
moves
It's not a strong attraction, since no projects use them.
But possible factors are:
- increased availability
- political reasons, e.g. wanting to give colleagues or
partner institutions a role (this was the original CPDN motivation)
Increased throughput is a non-factor since, as you point out,
You do get some increased throughput if the problem is dropped connections
and packets, and the distributed upload servers have sufficiently better
connections, and the link to the final upload server has sufficient
bandwidth to handle the load if the connections are carefully controlled
(i.e.
You do get some increased throughput if the problem is dropped connections
and packets, and the distributed upload servers have sufficiently better
connections, and the link to the final upload server has sufficient
bandwidth to handle the load if the connections are carefully controlled
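The point about dropped packets limiting throughput can be made concrete with the standard back-of-envelope TCP model, throughput ≈ MSS / (RTT · √loss) (the Mathis approximation). The numbers below are illustrative, not SETI measurements:

```cpp
#include <cmath>

// Mathis approximation: steady-state TCP throughput in bytes/sec for a
// single connection. mss in bytes, rtt in seconds, loss as a fraction
// (e.g. 0.01 = 1% packet loss).
double tcp_throughput(double mss, double rtt, double loss) {
    return mss / (rtt * std::sqrt(loss));
}

// Illustrative: tcp_throughput(1460.0, 0.1, 0.01) -> 146000 bytes/sec,
// i.e. ~146 KB/s per connection regardless of raw link capacity.
```

A distributed upload server closer to its clients (lower RTT, lower loss) raises this per-connection ceiling, which is the throughput gain being described; it does nothing for the aggregate bandwidth of the final link.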
SETI is seeing several conditions:
Connect failed - the initial ACK in the connection request is either
rejected (by the upload server) or lost in the traffic. In the majority of
cases the upload server has been denying the connection request.
HTTP Error -184 - BOINC sent the request and received the
You absolutely do get the increased throughput due to reduced load on
the connection, but you can get that without adding more hardware.
I like the idea of having the validator pull from the upload server(s)
wherever they are, as that moves hardware instead of adding it, and you
can throttle
Hey! An excellent thoughtful thread! And the answer is...? ...
Nicolás Alvarez wrote:
On Tuesday, 14 Jul 2009 09:41:10, Carl Christensen wrote:
With BOINC credits the simple rule is that there's no pleasing anyone. I
[...]
Even more: think what would have happened if they had released the GPU
On Tuesday, 21 Jul 2009 16:54:01, Martin wrote:
My thought is that we must have a semantic shift so that what is
usefully utilised is rewarded, and not just *time spent* (perhaps busily
but uselessly spinning wheels) on whatever hardware.
The GPU and CPU apps don't necessarily make the same amount
If you really want to open that can of worms, how about the fact that a
floating point add (which counts as one floating point operation) is
dramatically easier than a floating point cos().
... and the fact that an AMD processor might do adds dramatically faster
than an Intel, but do cos() slower.
A
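The cost gap between those two "one FLOP" operations is easy to demonstrate with a small, self-contained timing sketch. Results vary by CPU and compiler, so treat any ratio as illustrative:

```cpp
#include <chrono>
#include <cmath>

// Wall-clock time, in milliseconds, for n double additions. 'volatile'
// keeps the compiler from optimizing the loop away.
double time_add_ms(int n) {
    volatile double acc = 0.0;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; i++) acc = acc + 1.000001;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

// Wall-clock time, in milliseconds, for n cos() calls.
double time_cos_ms(int n) {
    volatile double acc = 0.5;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; i++) acc = std::cos(acc);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

A naive FLOP count credits both loops identically, which is exactly the problem with time- or FLOP-based credit across mixed workloads and architectures.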
Oliver Bock wrote:
Hi everyone,
I have a few questions related to c...@boinc:
1) Looking at client/app_start.cpp one can tell that setting avg_ncpus
affects the priority of a CPU (host) task. I suppose that setting this value
to 1 using a modified cuda plan_class will a) enable idle
Nicolás Alvarez wrote:
On Tuesday, 21 Jul 2009 16:54:01, Martin wrote:
My thought is that we must have a semantic shift so that what is
usefully utilised is rewarded, and not just *time spent* (perhaps busily
but uselessly spinning wheels) on whatever hardware.
The GPU and CPU apps don't
Lynn W. Taylor wrote:
If you really want to open that can of worms, how about the fact that a
floating point add (which counts as one floating point operation) is
dramatically easier than a floating point cos().
... and the fact that an AMD processor might do adds dramatically faster
than an
Lynn W. Taylor wrote:
This is actually pretty encouraging, because it looks like you have a
repeatable test case.
Very curious indeed...
I've just done a search on my logs spanning back to Nov 2008 for
13.240.68.208, and nothing found.
Or do I need to enable http_debug? Where? (If you want
Martin wrote:
Lynn W. Taylor wrote:
This is actually pretty encouraging, because it looks like you have a
repeatable test case.
Very curious indeed...
I've just done a search on my logs spanning back to Nov 2008 for
13.240.68.208, and nothing found.
Or do I need to enable
On Jul 21, 2009, at 1:19 PM, Lynn W. Taylor wrote:
I don't see a way out, short of doing exactly what Eric Korpela's
script
tries to do -- normalize FLOPS credit to that predicted by the
(imperfect) benchmarks.
Solution: Give up on cross-project credit parity. It's an impossible
goal.
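What the quoted normalization amounts to, as a hedged sketch (the formula and names below are illustrative, not Eric Korpela's actual script): scale each host's claimed credit so that, on average, credit per CPU-hour matches what the benchmarks predict.

```cpp
// Hypothetical normalization: if a host's average claimed credit per
// hour drifts from the benchmark-predicted rate, scale its claims back
// toward the prediction. All names here are illustrative.
double normalize_credit(double claimed, double host_avg_per_hour,
                        double benchmark_pred_per_hour) {
    if (host_avg_per_hour <= 0) return claimed;  // no history yet
    return claimed * (benchmark_pred_per_hour / host_avg_per_hour);
}
```

The catch, as the thread notes, is that the benchmarks themselves are imperfect, so this only pins credit to a different imperfect yardstick rather than achieving true cross-project parity.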