I have a single Mesos master and 19 slaves. I have several Jenkins
servers making on-demand requests through the jenkins-mesos plugin, and it
all seems to be working correctly: Mesos slaves are assigned to the
Jenkins servers, they execute jobs, and eventually they detach.
Except.
Except that the Jenkins servers are getting spammed roughly every 1 or 2
seconds with this in /var/log/jenkins/jenkins.log:
Jan 30, 2015 2:59:15 PM org.jenkinsci.plugins.mesos.JenkinsScheduler matches
WARNING: Ignoring disk resources from offer
Jan 30, 2015 2:59:15 PM org.jenkinsci.plugins.mesos.JenkinsScheduler matches
INFO: Ignoring ports resources from offer
Jan 30, 2015 2:59:15 PM org.jenkinsci.plugins.mesos.JenkinsScheduler matches
INFO: Offer not sufficient for slave request:
[name: "cpus"
type: SCALAR
scalar {
value: 1.6
}
role: "*"
, name: "mem"
type: SCALAR
scalar {
value: 455.0
}
role: "*"
, name: "disk"
type: SCALAR
scalar {
value: 32833.0
}
role: "*"
, name: "ports"
type: RANGES
ranges {
range {
begin: 31000
end: 32000
}
}
role: "*"
]
[]
Requested for Jenkins slave:
cpus: 0.2
mem: 704.0
attributes:
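If I'm reading those numbers right, the offer is being rejected purely on
memory: each offer carries about 455 MB but the plugin is asking for 704 MB
per Jenkins slave, so cpus pass and mem fails. A throwaway sketch of the
comparison I assume matches() is doing (my own illustration, not the plugin
source):

public class OfferCheck {
    public static void main(String[] args) {
        double offerCpus = 1.6, offerMem = 455.0;     // from the offer above
        double requestCpus = 0.2, requestMem = 704.0; // from the slave request

        boolean cpusOk = offerCpus >= requestCpus;    // 1.6 >= 0.2  -> true
        boolean memOk  = offerMem  >= requestMem;     // 455 >= 704  -> false

        // Declined: memory is short by ~249 MB even though cpus easily fit.
        System.out.println("offer sufficient: " + (cpusOk && memOk));
    }
}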
The Mesos master side is also filling its logs with entries like:
I0130 14:59:43.789172 10828 master.cpp:2344] Processing reply for offers: [
20150129-120204-1408111020-5050-10811-O665754 ] on slave
20150129-120204-1408111020-5050-10811-S2 at slave(1)@172.17.238.75:5051
(ci00bldslv15v.ss.corp.cnp.tnsi.com) for framework
20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at
[email protected]:54503
I0130 14:59:43.789654 10828 master.cpp:2344] Processing reply for offers: [
20150129-120204-1408111020-5050-10811-O665755 ] on slave
20150129-120204-1408111020-5050-10811-S13 at slave(1)@172.17.238.98:5051
(ci00bldslv12v.ss.corp.cnp.tnsi.com) for framework
20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at
[email protected]:54503
I0130 14:59:43.790004 10828 master.cpp:2344] Processing reply for offers: [
20150129-120204-1408111020-5050-10811-O665756 ] on slave
20150129-120204-1408111020-5050-10811-S11 at slave(1)@172.17.238.95:5051
(ci00bldslv11v.ss.corp.cnp.tnsi.com) for framework
20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at
[email protected]:54503
I0130 14:59:43.790349 10828 master.cpp:2344] Processing reply for offers: [
20150129-120204-1408111020-5050-10811-O665757 ] on slave
20150129-120204-1408111020-5050-10811-S7 at slave(1)@172.17.238.108:5051
(ci00bldslv19v.ss.corp.cnp.tnsi.com) for framework
20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at
[email protected]:54503
I0130 14:59:43.790670 10828 master.cpp:2344] Processing reply for offers: [
20150129-120204-1408111020-5050-10811-O665758 ] on slave
20150129-120204-1408111020-5050-10811-S14 at slave(1)@172.17.238.78:5051
(ci00bldslv06v.ss.corp.cnp.tnsi.com) for framework
20150129-120204-1408111020-5050-10811-0001 (Jenkins Scheduler) at
[email protected]:54503
I0130 14:59:43.791192 10828 hierarchical_allocator_process.hpp:563] Recovered
cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000] (total
allocatable: cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000]) on
slave 20150129-120204-1408111020-5050-10811-S2 from framework
20150129-120204-1408111020-5050-10811-0001
I0130 14:59:43.791507 10828 hierarchical_allocator_process.hpp:563] Recovered
cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000] (total
allocatable: cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000]) on
slave 20150129-120204-1408111020-5050-10811-S13 from framework
20150129-120204-1408111020-5050-10811-0001
I0130 14:59:43.791857 10828 hierarchical_allocator_process.hpp:563] Recovered
cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000] (total
allocatable: cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000]) on
slave 20150129-120204-1408111020-5050-10811-S11 from framework
20150129-120204-1408111020-5050-10811-0001
I0130 14:59:43.792145 10828 hierarchical_allocator_process.hpp:563] Recovered
cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000] (total
allocatable: cpus(*):1.6; mem(*):453; disk(*):32961; ports(*):[31000-32000]) on
slave 20150129-120204-1408111020-5050-10811-S7 from framework
20150129-120204-1408111020-5050-10811-0001
I0130 14:59:43.792417 10828 hierarchical_allocator_process.hpp:563] Recovered
cpus(*):1.6; mem(*):455; disk(*):32833; ports(*):[31000-32000] (total
allocatable: cpus(*):1.6; mem(*):455; disk(*):32833; ports(*):[31000-32000]) on
slave 20150129-120204-1408111020-5050-10811-S14 from framework
20150129-120204-1408111020-5050-10811-0001
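As far as I can tell this is the normal decline/re-offer loop: the plugin
declines each offer it can't use, the allocator "recovers" the resources and
re-offers them almost immediately, and the same rejection gets logged again a
second or two later. I gather a framework can slow that loop down by
declining with a refuse_seconds filter - something like this against the
Mesos Java API (just a sketch of the mechanism; I don't know whether the
jenkins-mesos plugin exposes it):

import org.apache.mesos.Protos.Filters;
import org.apache.mesos.Protos.OfferID;
import org.apache.mesos.SchedulerDriver;

public class DeclineExample {
    // Decline an offer we can't use and ask the master not to re-offer
    // these resources to this framework for 60 seconds (value is arbitrary).
    static void declineWithBackoff(SchedulerDriver driver, OfferID offerId) {
        Filters filters = Filters.newBuilder()
                .setRefuseSeconds(60)
                .build();
        driver.declineOffer(offerId, filters);
    }
}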
Is that normal? It's certainly not desirable, especially as Jenkins is
also writing a new config.xml into the config-history directory on
every iteration and filling up the disk:
jenkins/config-history/config/2015-01-30_10-49-10
jenkins/config-history/config/2015-01-30_10-49-11
jenkins/config-history/config/2015-01-30_10-49-12
jenkins/config-history/config/2015-01-30_10-49-13
jenkins/config-history/config/2015-01-30_10-49-14
jenkins/config-history/config/2015-01-30_10-49-15
jenkins/config-history/config/2015-01-30_10-49-16
jenkins/config-history/config/2015-01-30_10-49-17
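My guess (unverified) at why the history piles up: every time the mesos
plugin saves the global config as part of its offer handling, anything hooked
into Jenkins' SaveableListener extension point - which I believe is how the
config-history plugin records changes - writes out a fresh snapshot. Roughly
the shape of such a listener, purely to illustrate the mechanism:

import hudson.Extension;
import hudson.XmlFile;
import hudson.model.Saveable;
import hudson.model.listeners.SaveableListener;

@Extension
public class LogEverySave extends SaveableListener {
    // Fires on every config save, so one save per offer cycle means
    // one new history entry per offer cycle.
    @Override
    public void onChange(Saveable o, XmlFile file) {
        System.err.println("config saved: " + file.getFile());
    }
}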
Any advice? I'm not too concerned about the log spamming, but the
version history spamming is serious.
Thanks
Bob
--
Senior Software Engineer