Hi! FWIW, we get reports of this as well on Einstein@Home, this one is for BOINC 7.0.62:
http://einstein.phys.uwm.edu/forum_thread.php?id=10134&nowrap=true#124926

> On the server side it appears to be a request for 0 seconds of CPU and 0 seconds of GPU work

Confirmed. We see this in the scheduler log:

2013-06-03 10:46:02.4045 [PID=30856] Request: [USER#xxxxx] [HOST#6119565] [IP xxx.xxx.xxx.38] client 7.0.62
2013-06-03 10:46:02.4187 [PID=30856] [send] effective_ncpus 1 max_jobs_on_host_cpu 999999 max_jobs_on_host 999999
2013-06-03 10:46:02.4187 [PID=30856] [send] effective_ngpus 1 max_jobs_on_host_gpu 999999
2013-06-03 10:46:02.4187 [PID=30856] [send] Not using matchmaker scheduling; Not using EDF sim
2013-06-03 10:46:02.4187 [PID=30856] [send] CPU: req 0.00 sec, 0.00 instances; est delay 0.00
2013-06-03 10:46:02.4187 [PID=30856] [send] CUDA: req 0.00 sec, 0.00 instances; est delay 0.00
2013-06-03 10:46:02.4187 [PID=30856] [send] work_req_seconds: 0.00 secs
2013-06-03 10:46:02.4187 [PID=30856] [send] available disk 23.89 GB, work_buf_min 0
2013-06-03 10:46:02.4187 [PID=30856] [send] active_frac 0.999999 on_frac 0.999944 DCF 1.226839
2013-06-03 10:46:02.4222 [PID=30856] Sending reply to [HOST#6119565]: 0 results, delay req 60.00
2013-06-03 10:46:02.4225 [PID=30856] Scheduler ran 0.024 seconds

The polling interval seems to be once per minute.

Cheers
HBE

-----------------------------------------------------------------
Heinz-Bernd Eggenstein
Max Planck Institute for Gravitational Physics
Callinstrasse 38
D-30167 Hannover, Germany
Tel.: +49-511-762-19466 (Room 037)

From: Eric J Korpela <[email protected]>
To: "[email protected]" <[email protected]>
Date: 06/03/2013 07:17 PM
Subject: [boinc_dev] BOINC 7.0.64 weirdness?
Sent by: "boinc_dev" <[email protected]>

Some BOINC v7 clients are getting into a weird state where they contact the server every few minutes to request no work. I haven't been able to reproduce it, but people have reported that it goes away when they select "read config file", even if they don't have a config file. Here's a not very detailed log that someone sent me. On the server side it appears to be a request for 0 seconds of CPU and 0 seconds of GPU work (essentially the same as requesting an update when no work is required).

5/31/2013 10:30:23 AM | SETI@home | Starting task 23jn12ab.24163.21032.12.11.50_1 using setiathome_enhanced version 609 (cuda23) in slot 2
5/31/2013 10:30:25 AM | SETI@home | Started upload of 23oc12ac.25717.18472.12.11.50_0_0
5/31/2013 10:30:28 AM | SETI@home | Finished upload of 23oc12ac.25717.18472.12.11.50_0_0
5/31/2013 11:31:14 AM | SETI@home | Computation for task 23jn12ab.24163.21032.12.11.50_1 finished
5/31/2013 11:31:14 AM | SETI@home | Starting task 23oc12ac.25646.13973.15.12.45_0 using setiathome_v7 version 700 (cuda32) in slot 2
5/31/2013 11:31:18 AM | SETI@home | Started upload of 23jn12ab.24163.21032.12.11.50_1_0
5/31/2013 11:31:25 AM | SETI@home | Finished upload of 23jn12ab.24163.21032.12.11.50_1_0
5/31/2013 11:31:28 AM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 11:31:28 AM | SETI@home | Reporting 2 completed tasks
5/31/2013 11:31:28 AM | SETI@home | Not requesting tasks
5/31/2013 11:31:30 AM | SETI@home | Scheduler request completed
5/31/2013 11:33:26 AM | SETI@home | Computation for task 26mr10ab.24819.17826.5.11.138_0 finished
5/31/2013 11:33:26 AM | SETI@home | Starting task 23oc12ac.25646.13973.15.12.40_0 using setiathome_v7 version 700 in slot 1
5/31/2013 11:33:29 AM | SETI@home | Started upload of 26mr10ab.24819.17826.5.11.138_0_0
5/31/2013 11:33:32 AM | SETI@home | Finished upload of 26mr10ab.24819.17826.5.11.138_0_0
5/31/2013 11:36:35 AM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 11:36:35 AM | SETI@home | Reporting 1 completed tasks
5/31/2013 11:36:35 AM | SETI@home | Not requesting tasks
5/31/2013 11:36:37 AM | SETI@home | Scheduler request completed
5/31/2013 1:28:09 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 1:28:09 PM | SETI@home | Not requesting tasks
5/31/2013 1:28:11 PM | SETI@home | Scheduler request completed
5/31/2013 1:33:15 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 1:33:15 PM | SETI@home | Not requesting tasks
5/31/2013 1:33:19 PM | SETI@home | Scheduler request completed
5/31/2013 1:38:23 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 1:38:23 PM | SETI@home | Not requesting tasks
5/31/2013 1:38:26 PM | SETI@home | Scheduler request completed
5/31/2013 1:43:30 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 1:43:30 PM | SETI@home | Not requesting tasks
5/31/2013 1:43:33 PM | SETI@home | Scheduler request completed
5/31/2013 1:48:37 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 1:48:37 PM | SETI@home | Not requesting tasks
5/31/2013 1:48:40 PM | SETI@home | Scheduler request completed
5/31/2013 1:53:45 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 1:53:45 PM | SETI@home | Not requesting tasks
5/31/2013 1:53:48 PM | SETI@home | Scheduler request completed
5/31/2013 1:58:52 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 1:58:52 PM | SETI@home | Not requesting tasks
5/31/2013 1:58:55 PM | SETI@home | Scheduler request completed
5/31/2013 2:30:19 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 2:30:19 PM | SETI@home | Not requesting tasks
5/31/2013 2:30:21 PM | SETI@home | Scheduler request completed
5/31/2013 2:35:26 PM | SETI@home | Sending scheduler request: To fetch work.
5/31/2013 2:35:26 PM | SETI@home | Not requesting tasks
5/31/2013 2:35:28 PM | SETI@home | Scheduler request completed

_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and (near bottom of page) enter your email address.
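For projects that want to gauge how many hosts are stuck in this state, a rough standalone sketch follows. It is not part of BOINC; it simply scans a scheduler log formatted like the excerpt quoted above and counts, per host, the requests whose work_req_seconds is 0.00. The regular expressions only match the line layout shown earlier and may need adjusting for other log settings.

    #!/usr/bin/env python3
    # Illustrative sketch, not a BOINC tool: count zero-second work requests
    # per host in a scheduler log shaped like the excerpt above.
    import re
    import sys
    from collections import Counter

    request_re = re.compile(r'\[PID=(\d+)\] Request: .*\[HOST#(\d+)\]')
    workreq_re = re.compile(r'\[PID=(\d+)\] \[send\] work_req_seconds: ([0-9.]+)')

    host_by_pid = {}           # most recent HOST# seen for each scheduler PID
    zero_requests = Counter()  # HOST# -> number of 0.00-second work requests

    for line in sys.stdin:
        m = request_re.search(line)
        if m:
            host_by_pid[m.group(1)] = m.group(2)
            continue
        m = workreq_re.search(line)
        if m and float(m.group(2)) == 0.0:
            host = host_by_pid.get(m.group(1), 'unknown')
            zero_requests[host] += 1

    for host, n in zero_requests.most_common(20):
        print(f'HOST#{host}: {n} zero-second work requests')

Usage would be something like "python3 zero_requests.py < scheduler.log" (the script and log file names here are placeholders; pipe in whichever scheduler log your project keeps).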
