That would be a step in the right direction.

However, I can imagine a couple of cluster setups that aren't covered by that (i.e. that may still result in identical host CPIDs), like running BOINC clients in
- chroot / jails
- multiple VMs where the VM's inner network interface has a constant MAC address (the VMs then can't talk to each other or even to the host, but that doesn't necessarily matter)

I'm not sure it's worth thinking about these cases, though.


For our particular case I can say that the cluster currently connects using a 6.12.34 client, which reports only 1 CPU / core. According to the scheduler logs we do get "User has another host with same CPID" events, even several of these within 60 s when all previous requests were answered with delays ("Not sending work - last request too recent"), i.e. they probably come from different clients.

How did the 6.12.34 client calculate the host CPID?
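
Just so we're talking about the same thing, here is how I picture the proposed scheme from your mail below, as a minimal sketch using OpenSSL's MD5 (the helper name is made up; this is not the actual client code):

#include <openssl/md5.h>
#include <cstdio>
#include <string>

// Proposed host CPID: MD5 over the MAC address concatenated with the
// client's data directory path, so that several client instances on
// one machine end up with distinct CPIDs.
// (Hypothetical helper, not the actual BOINC implementation.)
std::string make_host_cpid(const std::string& mac, const std::string& data_dir) {
    std::string input = mac + data_dir;
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5(reinterpret_cast<const unsigned char*>(input.data()), input.size(), digest);
    char hex[2 * MD5_DIGEST_LENGTH + 1];
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++) {
        snprintf(hex + 2 * i, 3, "%02x", digest[i]);
    }
    return std::string(hex);
}

Note that this wouldn't help in the VM case above: if the VMs share both the MAC address and the default data directory path (e.g. /var/lib/boinc-client), the resulting CPIDs would still be identical.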

Best,
Bernd


On 01.02.14 00:42, David Anderson wrote:
Currently the host CPID is the MD5 of the MAC address.
How about if we change it to MD5(MAC address, path of data directory)?
That way we'd get different CPIDs for different client instances
on a given machine.
-- David

On 31-Jan-2014 1:57 AM, Bernd Machenschalk wrote:

However, I'm afraid this doesn't work for us: other clusters run one client per
"slot" (more or less continuously), i.e. per CPU core. Therefore we enabled
<multiple_clients_per_host> in the project config, which (as far as I understand
the code) prevents the scheduler from searching for matching hosts when
attaching a new client.
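
For reference, the relevant bit of our project's config.xml looks roughly like this (schematic; all other options omitted, and the exact placement is from memory):

<boinc>
  <config>
    <multiple_clients_per_host/>
  </config>
</boinc>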

_______________________________________________
boinc_dev mailing list
boinc_dev@ssl.berkeley.edu
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
