Hello.
Thank you for your reply, even though I didn't get the answer I was
asking about. Don't bother answering it now: I discovered by
experiment that it was, in fact, the right tag for what I wanted to do.
In general I agree with your points; however, the advantage of my setup
is that I can specify separate resource shares per resource (and not
per project, as it's done now). The second positive side is that, with
my setup, I was able to run Milkyway on CPU and GPU concurrently even
though MW doesn't support the ATI platform yet (and I would be able to
do the same with any project that has a GPU-optimized app but an older
server version, or a GPU platform/app not yet implemented on the
server). And I can (or rather, could) do all this now, with existing
CCs, not some upcoming version in the future.
I think, from your response, that my idea was a bit misunderstood. I
did not mean that running a core client for each resource is THE
solution. What I have in mind is that, instead of one scheduler
covering all the resources and their combinations, it wouldn't hurt
to start a "scheduler thread" for each resource, each relying on a
"central" resource matrix from which it could discern the available
resources, updating the matrix on task start/end.
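To make the idea concrete, here is a minimal sketch of what I mean. All
names and structure here are mine, invented for illustration; this is
not actual BOINC code, just per-resource scheduler threads sharing a
lock-protected central resource matrix:

```python
import threading

class ResourceMatrix:
    """Hypothetical central matrix: tracks free units per resource."""

    def __init__(self, totals):
        self._free = dict(totals)       # e.g. {"cpu": 4, "ati_gpu": 1}
        self._lock = threading.Lock()

    def claim(self, resource, n=1):
        """Try to reserve n units; return True on success."""
        with self._lock:
            if self._free.get(resource, 0) >= n:
                self._free[resource] -= n
                return True
            return False

    def release(self, resource, n=1):
        with self._lock:
            self._free[resource] = self._free.get(resource, 0) + n

def scheduler_thread(matrix, resource, run_task):
    # One such loop per resource: each thread consults only the shared
    # matrix, so a new resource type adds a thread, not extra cases
    # inside one monolithic scheduler.
    if matrix.claim(resource):
        try:
            run_task()
        finally:
            matrix.release(resource)

matrix = ResourceMatrix({"cpu": 4, "ati_gpu": 1})
done = []
threads = [
    threading.Thread(target=scheduler_thread,
                     args=(matrix, r, lambda r=r: done.append(r)))
    for r in ("cpu", "ati_gpu")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(done))   # prints ['ati_gpu', 'cpu']
```

The point of the sketch is only the division of labour: the threads
never look at each other, only at the matrix.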
Now, while my experiment was a success, I discovered another bug (or
is it a feature?): when I set <suppress_net_info> in cc_config.xml,
core clients from 5.10.xx up to 6.10.14 ignored the
--allow_multiple_clients command-line parameter. The only way I could
get the whole kaboodle to work was to rename/move the cc_config.xml
file(s) somewhere else before starting the clients, move them back
once the clients were running, and issue a "reread cc_config.xml"
command via boinccmd. After a couple of restarts (I was trying to
write a script to automate these moves and the cc_config rereading),
even this workaround stopped working; now all CC versions I tried
simply ignore the --allow_multiple_clients parameter no matter whether
cc_config.xml is present or absent. I suspect the culprit is lurking
in one of the other .xml files. As I was able to trace this bug down
almost to the version where this feature was implemented, I wonder:
was it tested thoroughly enough?
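For what it's worth, this is roughly the script I was trying to write.
It is only a sketch: the data directories, ports, and the BOINC_BIN /
BOINCCMD names are my examples and would need adapting to the actual
installation.

```shell
#!/bin/sh
# Workaround sketch: hide cc_config.xml (so <suppress_net_info> cannot
# interfere with --allow_multiple_clients at startup), start the client,
# then restore the config and ask the client to reread it.
BOINC_BIN=${BOINC_BIN:-boinc}
BOINCCMD=${BOINCCMD:-boinccmd}

start_client() {    # $1 = client data dir, $2 = GUI RPC port
    dir=$1
    port=$2
    if [ -f "$dir/cc_config.xml" ]; then
        mv "$dir/cc_config.xml" "$dir/cc_config.xml.off"
    fi
    "$BOINC_BIN" --allow_multiple_clients --dir "$dir" \
        --gui_rpc_port "$port" &
    sleep 5    # give the client time to come up
    if [ -f "$dir/cc_config.xml.off" ]; then
        mv "$dir/cc_config.xml.off" "$dir/cc_config.xml"
    fi
    "$BOINCCMD" --host "localhost:$port" --read_cc_config
}

# Example: one client per resource, each with its own dir and port.
# start_client /var/lib/boinc-cpu 31416
# start_client /var/lib/boinc-gpu 31417
```

As described above, this worked for a while and then stopped, which is
why I suspect the client is caching the setting somewhere outside
cc_config.xml.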
BR,
Vid Vidmar

On Wed, Oct 14, 2009 at 6:47 PM, David Anderson <[email protected]> wrote:
> I see no advantages to the "one client per resource" approach,
> and it has many disadvantages:
> - can't enforce memory usage limit
> - doesn't handle apps that use 2 resources (e.g. GPU/CPU)
> - requires rewrite of manager
> - increases number of scheduler RPCs
> etc.
>
> You imply that scheduling in 6.10.x is not "stable and well behaved".
> If you can provide specific examples of this, with appropriate message logs,
> then we can fix the problem.
>
> -- David
>
> vid vidmar wrote:
>>
>> Hello.
>> Recently I have gotten a strong feeling that BOINC CC development is
>> heading in the wrong direction. I have been thinking about it a lot,
>> and I am now convinced that cramming support and scheduling for
>> different resources (CPUs, GPUs, and others yet to come) into one CC
>> is the wrong thing to do, as it forecloses expansion: each added
>> resource adds complexity to the code and the whole process. Instead
>> of "internalizing", I think "externalizing" would be much better.
>> Therefore I started an experiment in which I have a CC running for
>> each resource (thanks to --allow_multiple_clients), and I think it's
>> working far better than anything a single (multiple-resource-aware)
>> CC is doing (or will be able to do). FYI, experimentation has led me
>> to believe that in 6.4.7 or thereabouts the scheduler is most stable
>> and well behaved (or was it 5.10.x? but that had other issues).
>> The only problem I have now with this setup is that I cannot attach
>> two different resources to a single project. The trouble is that each
>> CC instance is seen by the server (which relies on IP and domain) as
>> the same computer (now think what happens when the result lists don't
>> match). This server-side behaviour is ruining what I think could be
>> something much more powerful and extensible than what is currently
>> being developed.
>> So, to get to the subject line: would the use of the
>> <suppress_net_info> tag in cc_config.xml prevent such
>> "wrongly correct" identification? Would each CC then be seen as a
>> unique computer, allowing me to clear this last obstacle without
>> needing to find, fix and recompile the code?
>> Best regards,
>> Vid Vidmar
>>
>> P.S. If this is the second such email from me, I apologize.
>> _______________________________________________
>> boinc_dev mailing list
>> [email protected]
>> http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
>> To unsubscribe, visit the above URL and
>> (near bottom of page) enter your email address.
>
>