[jira] [Commented] (HBASE-8416) Region Server Spinning on JMX requests
[ https://issues.apache.org/jira/browse/HBASE-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13641858#comment-13641858 ]

Ron Buckley commented on HBASE-8416:
------------------------------------

We're good with the fix in hadoop common.

Region Server Spinning on JMX requests
--------------------------------------
                Key: HBASE-8416
                URL: https://issues.apache.org/jira/browse/HBASE-8416
            Project: HBase
         Issue Type: Bug
         Components: regionserver
   Affects Versions: 0.94.4
           Reporter: Ron Buckley
           Priority: Critical
            Fix For: 0.98.0, 0.94.8, 0.95.2

This morning one of our region servers (we have 44) stopped responding to the '/jmx' request. (It's working for regular activity.) Additionally, the region server is now using all the CPU on the host, running all 8 cores at 100%.

A full jstack is at: http://pastebin.com/dGTmTEN7

Right now, there are 37 threads stuck here:

"38565532@qtp-228776471-196" prio=10 tid=0x2aaacc4f2800 nid=0x7f57 runnable [0x54a48000]
   java.lang.Thread.State: RUNNABLE
    at java.util.HashMap.get(HashMap.java:303)
    at org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase.getAttribute(MetricsDynamicMBeanBase.java:137)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:666)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:638)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:315)
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:293)
    at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:193)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1056)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
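The 37 threads above are all "stuck" inside java.util.HashMap.get(). That is the classic failure mode of reading an unsynchronized HashMap while another thread resizes it: a rehash can leave a bucket chain circular, so get() loops forever and burns a full core per reader, matching the 100%-CPU symptom. A minimal sketch of the kind of remedy a fix in Hadoop common could take, guarding the shared attribute map with a thread-safe map; the class and method names below are illustrative, not the actual patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Hypothetical attribute cache: a ConcurrentHashMap tolerates reads that
// race with inserts/resizes, whereas a plain HashMap can corrupt its
// bucket chain and spin forever in get() (the frame in the stack trace).
public class SafeAttributeCache {
    private final Map<String, Long> attributes = new ConcurrentHashMap<>();

    public void setAttribute(String name, long value) {
        attributes.put(name, value);
    }

    public Long getAttribute(String name) {
        return attributes.get(name);
    }

    public static void main(String[] args) throws InterruptedException {
        SafeAttributeCache cache = new SafeAttributeCache();
        CountDownLatch done = new CountDownLatch(1);

        // Writer: keeps adding attributes, forcing internal resizes.
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) cache.setAttribute("metric-" + i, (long) i);
            done.countDown();
        });

        // Readers: hammer get() while the map is resizing. With a plain
        // HashMap this read/resize race can loop forever; here it is safe.
        Runnable reader = () -> {
            while (done.getCount() > 0) cache.getAttribute("metric-42");
        };
        Thread r1 = new Thread(reader);
        Thread r2 = new Thread(reader);

        writer.start(); r1.start(); r2.start();
        writer.join(); r1.join(); r2.join();
        System.out.println("done: " + cache.getAttribute("metric-42"));
    }
}
```

Synchronizing getAttribute() would also work; ConcurrentHashMap just avoids serializing the read-heavy /jmx path.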
[jira] [Created] (HBASE-8416) Region Server Spinning on JMX requests
Ron Buckley created HBASE-8416:
----------------------------------

             Summary: Region Server Spinning on JMX requests
                 Key: HBASE-8416
                 URL: https://issues.apache.org/jira/browse/HBASE-8416
             Project: HBase
          Issue Type: Bug
          Components: regionserver
    Affects Versions: 0.94.4
            Reporter: Ron Buckley

This morning one of our region servers (we have 44) stopped responding to the '/jmx' request. (It's working for regular activity.) Additionally, the region server is now using all the CPU on the host, running all 8 cores at 100%.

A full jstack is at: http://pastebin.com/dGTmTEN7

Right now, there are 37 threads stuck here:

"38565532@qtp-228776471-196" prio=10 tid=0x2aaacc4f2800 nid=0x7f57 runnable [0x54a48000]
   java.lang.Thread.State: RUNNABLE
    at java.util.HashMap.get(HashMap.java:303)
    at org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase.getAttribute(MetricsDynamicMBeanBase.java:137)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:666)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:638)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:315)
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:293)
    at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:193)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1056)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
[jira] [Commented] (HBASE-8416) Region Server Spinning on JMX requests
[ https://issues.apache.org/jira/browse/HBASE-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640365#comment-13640365 ]

Ron Buckley commented on HBASE-8416:
------------------------------------

We're running CDH 4.1.1, which lists 2.0.0-mr1-cdh4.1.1

Region Server Spinning on JMX requests
--------------------------------------
                Key: HBASE-8416
                URL: https://issues.apache.org/jira/browse/HBASE-8416
            Project: HBase
         Issue Type: Bug
         Components: regionserver
   Affects Versions: 0.94.4
           Reporter: Ron Buckley

This morning one of our region servers (we have 44) stopped responding to the '/jmx' request. (It's working for regular activity.) Additionally, the region server is now using all the CPU on the host, running all 8 cores at 100%.

A full jstack is at: http://pastebin.com/dGTmTEN7

Right now, there are 37 threads stuck here:

"38565532@qtp-228776471-196" prio=10 tid=0x2aaacc4f2800 nid=0x7f57 runnable [0x54a48000]
   java.lang.Thread.State: RUNNABLE
    at java.util.HashMap.get(HashMap.java:303)
    at org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase.getAttribute(MetricsDynamicMBeanBase.java:137)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:666)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:638)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:315)
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:293)
    at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:193)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1056)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
[jira] [Updated] (HBASE-3996) d
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ron Buckley updated HBASE-3996:
-------------------------------

    Summary: d  (was: Support multiple tables and scanners as input to the mapper in map/reduce jobs)

d
-
                Key: HBASE-3996
                URL: https://issues.apache.org/jira/browse/HBASE-3996
            Project: HBase
         Issue Type: Improvement
         Components: mapreduce
           Reporter: Eran Kutner
           Assignee: Bryan Baugher
           Priority: Critical
            Fix For: 0.95.0, 0.94.5
        Attachments: 3996-0.94.txt, 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v13.txt, 3996-v14.txt, 3996-v15.txt, 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 3996-v9.txt, HBase-3996.patch

It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. I propose a new MultiTableInputFormat class that would allow doing this.
[jira] [Updated] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ron Buckley updated HBASE-3996:
-------------------------------

    Summary: Support multiple tables and scanners as input to the mapper in map/reduce jobs  (was: d)

Support multiple tables and scanners as input to the mapper in map/reduce jobs
------------------------------------------------------------------------------
                Key: HBASE-3996
                URL: https://issues.apache.org/jira/browse/HBASE-3996
            Project: HBase
         Issue Type: Improvement
         Components: mapreduce
           Reporter: Eran Kutner
           Assignee: Bryan Baugher
           Priority: Critical
            Fix For: 0.95.0, 0.94.5
        Attachments: 3996-0.94.txt, 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v13.txt, 3996-v14.txt, 3996-v15.txt, 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 3996-v9.txt, HBase-3996.patch

It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. I propose a new MultiTableInputFormat class that would allow doing this.
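The idea behind HBASE-3996 is conceptually simple: results from several scans, possibly over different tables, flow into a single map function as one record stream, so one job replaces several. The sketch below illustrates only that concept, using plain lists as stand-ins for HBase scanners; all names are made up for illustration and none of this is the HBase API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Conceptual sketch of a multi-scan input: each "scan" (here, a list of
// row keys) becomes one input source, and every row from every source is
// handed to the same map step. A real MultiTableInputFormat would produce
// one input split per (table, scan) region range instead.
public class MultiScanSketch {
    static List<String> mapOverScans(List<List<String>> scans) {
        List<String> mapped = new ArrayList<>();
        for (List<String> scan : scans) {   // one input source per scan
            for (String row : scan) {       // all rows feed one mapper
                mapped.add("mapped:" + row);
            }
        }
        return mapped;
    }

    public static void main(String[] args) {
        List<String> tableA = Arrays.asList("a1", "a2");  // stand-in for a scan of table A
        List<String> tableB = Arrays.asList("b1");        // stand-in for a scan of table B
        System.out.println(mapOverScans(Arrays.asList(tableA, tableB)));
    }
}
```

In the versions listed under Fix For, this shipped as a real input format; if memory serves, TableMapReduceUtil grew an initMultiTableMapperJob overload taking a List of Scan objects, each optionally tagged with its table name.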
[jira] [Resolved] (HBASE-6905) Regionserver Metrics reporting as negative numbers
[ https://issues.apache.org/jira/browse/HBASE-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ron Buckley resolved HBASE-6905.
--------------------------------
    Resolution: Duplicate

This was already fixed in HBASE-5283.

Regionserver Metrics reporting as negative numbers
--------------------------------------------------
                Key: HBASE-6905
                URL: https://issues.apache.org/jira/browse/HBASE-6905
            Project: HBase
         Issue Type: Bug
   Affects Versions: 0.92.1
           Reporter: Ron Buckley
           Priority: Minor

Our HBase cluster has been up since May. Over the weekend one of our monitors started barking because the readRequestsCount is being reported as negative.

requestsPerSecond=17, numberOfOnlineRegions=81, numberOfStores=91, numberOfStorefiles=115, storefileIndexSizeMB=2, rootIndexSizeKB=2497, totalStaticIndexSizeKB=197243, totalStaticBloomSizeKB=218013, memstoreSizeMB=286, readRequestsCount=-1820030847, writeRequestsCount=326458652, compactionQueueSize=0, flushQueueSize=0, usedHeapMB=6303, maxHeapMB=10214, blockCacheSizeMB=4137.7, blockCacheFreeMB=969.52, blockCacheCount=60445, blockCacheHitCount=3040378952, blockCacheMissCount=501619888, blockCacheEvictedCount=134409709, blockCacheHitRatio=85%, blockCacheHitCachingRatio=95%, hdfsBlocksLocalityIndex=100

These are the current counts our region servers are reporting.

hadoopsup@wchddb01pxdu:~/hbaseTools$ ./getHBaseMetric.sh readRequestsCount | sort -k +2 -t = -n
wchddb41pxdu.prod.oclc.org:9004 readRequestsCount=-2140115894
wchddb18pxdu.prod.oclc.org:9004 readRequestsCount=-2129290048
wchddb33pxdu.prod.oclc.org:9004 readRequestsCount=-1938060269
wchddb22pxdu.prod.oclc.org:9004 readRequestsCount=-1819997297
wchddb37pxdu.prod.oclc.org:9004 readRequestsCount=-1361713407
wchddb39pxdu.prod.oclc.org:9004 readRequestsCount=-1346438303
wchddb15pxdu.prod.oclc.org:9004 readRequestsCount=23724328
wchddb17pxdu.prod.oclc.org:9004 readRequestsCount=26673900
wchddb31pxdu.prod.oclc.org:9004 readRequestsCount=26753467
wchddb08pxdu.prod.oclc.org:9004 readRequestsCount=29632942
wchddb50pxdu.prod.oclc.org:9004 readRequestsCount=42054983
wchddb43pxdu.prod.oclc.org:9004 readRequestsCount=91783220
wchddb45pxdu.prod.oclc.org:9004 readRequestsCount=276019000
wchddb12pxdu.prod.oclc.org:9004 readRequestsCount=332978804
wchddb14pxdu.prod.oclc.org:9004 readRequestsCount=414414026
wchddb25pxdu.prod.oclc.org:9004 readRequestsCount=714715497
wchddb46pxdu.prod.oclc.org:9004 readRequestsCount=854608441
wchddb28pxdu.prod.oclc.org:9004 readRequestsCount=1056464575
wchddb40pxdu.prod.oclc.org:9004 readRequestsCount=1142317805
wchddb11pxdu.prod.oclc.org:9004 readRequestsCount=1192703412
wchddb07pxdu.prod.oclc.org:9004 readRequestsCount=1324556090
wchddb23pxdu.prod.oclc.org:9004 readRequestsCount=1345624237
wchddb47pxdu.prod.oclc.org:9004 readRequestsCount=1354096261
wchddb38pxdu.prod.oclc.org:9004 readRequestsCount=1494019278
wchddb27pxdu.prod.oclc.org:9004 readRequestsCount=1556667925
wchddb10pxdu.prod.oclc.org:9004 readRequestsCount=1578493607
wchddb21pxdu.prod.oclc.org:9004 readRequestsCount=1621338673
wchddb29pxdu.prod.oclc.org:9004 readRequestsCount=1623831234
wchddb48pxdu.prod.oclc.org:9004 readRequestsCount=1643042108
wchddb24pxdu.prod.oclc.org:9004 readRequestsCount=1662372209
wchddb34pxdu.prod.oclc.org:9004 readRequestsCount=1709785225
wchddb20pxdu.prod.oclc.org:9004 readRequestsCount=1710768413
wchddb42pxdu.prod.oclc.org:9004 readRequestsCount=1710792044
wchddb30pxdu.prod.oclc.org:9004 readRequestsCount=1742485411
wchddb26pxdu.prod.oclc.org:9004 readRequestsCount=1819557634
wchddb49pxdu.prod.oclc.org:9004 readRequestsCount=1856456352
wchddb44pxdu.prod.oclc.org:9004 readRequestsCount=1858474961
wchddb06pxdu.prod.oclc.org:9004 readRequestsCount=1894403007
wchddb13pxdu.prod.oclc.org:9004 readRequestsCount=1938887475
wchddb09pxdu.prod.oclc.org:9004 readRequestsCount=1970495619
wchddb05pxdu.prod.oclc.org:9004 readRequestsCount=1998165666
wchddb16pxdu.prod.oclc.org:9004 readRequestsCount=2003054336
wchddb04pxdu.prod.oclc.org:9004 readRequestsCount=2015847806
wchddb32pxdu.prod.oclc.org:9004 readRequestsCount=2134753437
[jira] [Created] (HBASE-6905) Regionserver Metrics reporting as negative numbers
Ron Buckley created HBASE-6905:
----------------------------------

             Summary: Regionserver Metrics reporting as negative numbers
                 Key: HBASE-6905
                 URL: https://issues.apache.org/jira/browse/HBASE-6905
             Project: HBase
          Issue Type: Bug
    Affects Versions: 0.92.1
            Reporter: Ron Buckley
            Priority: Minor

Our HBase cluster has been up since May. Over the weekend one of our monitors started barking because the readRequestsCount is being reported as negative.

requestsPerSecond=17, numberOfOnlineRegions=81, numberOfStores=91, numberOfStorefiles=115, storefileIndexSizeMB=2, rootIndexSizeKB=2497, totalStaticIndexSizeKB=197243, totalStaticBloomSizeKB=218013, memstoreSizeMB=286, readRequestsCount=-1820030847, writeRequestsCount=326458652, compactionQueueSize=0, flushQueueSize=0, usedHeapMB=6303, maxHeapMB=10214, blockCacheSizeMB=4137.7, blockCacheFreeMB=969.52, blockCacheCount=60445, blockCacheHitCount=3040378952, blockCacheMissCount=501619888, blockCacheEvictedCount=134409709, blockCacheHitRatio=85%, blockCacheHitCachingRatio=95%, hdfsBlocksLocalityIndex=100

These are the current counts our region servers are reporting.

hadoopsup@wchddb01pxdu:~/hbaseTools$ ./getHBaseMetric.sh readRequestsCount | sort -k +2 -t = -n
wchddb41pxdu.prod.oclc.org:9004 readRequestsCount=-2140115894
wchddb18pxdu.prod.oclc.org:9004 readRequestsCount=-2129290048
wchddb33pxdu.prod.oclc.org:9004 readRequestsCount=-1938060269
wchddb22pxdu.prod.oclc.org:9004 readRequestsCount=-1819997297
wchddb37pxdu.prod.oclc.org:9004 readRequestsCount=-1361713407
wchddb39pxdu.prod.oclc.org:9004 readRequestsCount=-1346438303
wchddb15pxdu.prod.oclc.org:9004 readRequestsCount=23724328
wchddb17pxdu.prod.oclc.org:9004 readRequestsCount=26673900
wchddb31pxdu.prod.oclc.org:9004 readRequestsCount=26753467
wchddb08pxdu.prod.oclc.org:9004 readRequestsCount=29632942
wchddb50pxdu.prod.oclc.org:9004 readRequestsCount=42054983
wchddb43pxdu.prod.oclc.org:9004 readRequestsCount=91783220
wchddb45pxdu.prod.oclc.org:9004 readRequestsCount=276019000
wchddb12pxdu.prod.oclc.org:9004 readRequestsCount=332978804
wchddb14pxdu.prod.oclc.org:9004 readRequestsCount=414414026
wchddb25pxdu.prod.oclc.org:9004 readRequestsCount=714715497
wchddb46pxdu.prod.oclc.org:9004 readRequestsCount=854608441
wchddb28pxdu.prod.oclc.org:9004 readRequestsCount=1056464575
wchddb40pxdu.prod.oclc.org:9004 readRequestsCount=1142317805
wchddb11pxdu.prod.oclc.org:9004 readRequestsCount=1192703412
wchddb07pxdu.prod.oclc.org:9004 readRequestsCount=1324556090
wchddb23pxdu.prod.oclc.org:9004 readRequestsCount=1345624237
wchddb47pxdu.prod.oclc.org:9004 readRequestsCount=1354096261
wchddb38pxdu.prod.oclc.org:9004 readRequestsCount=1494019278
wchddb27pxdu.prod.oclc.org:9004 readRequestsCount=1556667925
wchddb10pxdu.prod.oclc.org:9004 readRequestsCount=1578493607
wchddb21pxdu.prod.oclc.org:9004 readRequestsCount=1621338673
wchddb29pxdu.prod.oclc.org:9004 readRequestsCount=1623831234
wchddb48pxdu.prod.oclc.org:9004 readRequestsCount=1643042108
wchddb24pxdu.prod.oclc.org:9004 readRequestsCount=1662372209
wchddb34pxdu.prod.oclc.org:9004 readRequestsCount=1709785225
wchddb20pxdu.prod.oclc.org:9004 readRequestsCount=1710768413
wchddb42pxdu.prod.oclc.org:9004 readRequestsCount=1710792044
wchddb30pxdu.prod.oclc.org:9004 readRequestsCount=1742485411
wchddb26pxdu.prod.oclc.org:9004 readRequestsCount=1819557634
wchddb49pxdu.prod.oclc.org:9004 readRequestsCount=1856456352
wchddb44pxdu.prod.oclc.org:9004 readRequestsCount=1858474961
wchddb06pxdu.prod.oclc.org:9004 readRequestsCount=1894403007
wchddb13pxdu.prod.oclc.org:9004 readRequestsCount=1938887475
wchddb09pxdu.prod.oclc.org:9004 readRequestsCount=1970495619
wchddb05pxdu.prod.oclc.org:9004 readRequestsCount=1998165666
wchddb16pxdu.prod.oclc.org:9004 readRequestsCount=2003054336
wchddb04pxdu.prod.oclc.org:9004 readRequestsCount=2015847806
wchddb32pxdu.prod.oclc.org:9004 readRequestsCount=2134753437
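The negative counts above are consistent with a signed 32-bit counter wrapping past Integer.MAX_VALUE (2147483647): the busiest servers, up since May, simply overflowed first. Assuming the counter was a Java int (an inference from the magnitudes shown, not something the ticket states), this sketch demonstrates the wraparound and how the true unsigned count can be recovered by widening to a long:

```java
// Signed 32-bit overflow: incrementing past Integer.MAX_VALUE wraps to
// Integer.MIN_VALUE, which is exactly how a long-running request counter
// turns negative overnight.
public class CounterOverflow {
    public static void main(String[] args) {
        int counter = Integer.MAX_VALUE;   // 2147483647
        counter++;                         // wraps around
        System.out.println(counter);

        // The original unsigned count can be recovered by masking into a long.
        // -1820030847 is the readRequestsCount value quoted in the report.
        int reported = -1820030847;
        long actual = reported & 0xFFFFFFFFL;
        System.out.println(actual);
    }
}
```

Moving such counters to a 64-bit long (or exporting them as longs over JMX) pushes the wrap point out past any realistic uptime, which is the shape of fix one would expect here.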