[GitHub] storm issue #2517: [STORM-2901] Reuse ZK connection for getKeySequenceNumber
Github user danny0405 commented on the issue: https://github.com/apache/storm/pull/2517 @HeartSaVioR Really, thanks for your careful review work; I have applied your suggestions. It's weird that I built storm-core successfully with the command `mvn clean install -Pall-tests`. This is part of the build log:
```
3636 [main] WARN o.a.s.v.ConfigValidation - storm.messaging.netty.max_retries is a deprecated config please see class org.apache.storm.Config.STORM_MESSAGING_NETTY_MAX_RETRIES for more information.
Running org.apache.storm.submitter-test
Tests run: 1, Passed: 13, Failures: 0, Errors: 0
Running org.apache.storm.messaging.netty-unit-test
Tests run: 5, Passed: 24, Failures: 0, Errors: 0
Running org.apache.storm.cluster-test
Tests run: 10, Passed: 92, Failures: 0, Errors: 0
Running integration.org.apache.storm.integration-test
Tests run: 13, Passed: 45, Failures: 0, Errors: 0
Running org.apache.storm.scheduler-test
Tests run: 4, Passed: 64, Failures: 0, Errors: 0
Running org.apache.storm.messaging.netty-integration-test
Tests run: 1, Passed: 1, Failures: 0, Errors: 0
Running org.apache.storm.transactional-test
Tests run: 6, Passed: 108, Failures: 0, Errors: 0
Running org.apache.storm.grouping-test
Tests run: 3, Passed: 7, Failures: 0, Errors: 0
Running org.apache.storm.security.auth.auth-test
Tests run: 14, Passed: 100, Failures: 0, Errors: 0
Running integration.org.apache.storm.trident.integration-test
Tests run: 9, Passed: 141, Failures: 0, Errors: 0
Running org.apache.storm.metrics-test
Tests run: 8, Passed: 66, Failures: 0, Errors: 0
Running org.apache.storm.security.auth.auto-login-module-test
Tests run: 6, Passed: 22, Failures: 0, Errors: 0
Running org.apache.storm.trident.tuple-test
Tests run: 6, Passed: 36, Failures: 0, Errors: 0
Running org.apache.storm.serialization-test
Tests run: 1, Passed: 0, Failures: 0, Errors: 0
Running org.apache.storm.nimbus-test
Tests run: 41, Passed: 773, Failures: 0, Errors: 0
Running org.apache.storm.drpc-test
Tests run: 7, Passed: 14, Failures: 0, Errors: 0
Running org.apache.storm.versioned-store-test
Tests run: 2, Passed: 5, Failures: 0, Errors: 0
Running org.apache.storm.scheduler.multitenant-scheduler-test
Tests run: 17, Passed: 233, Failures: 0, Errors: 0
Running org.apache.storm.trident.state-test
Tests run: 5, Passed: 37, Failures: 0, Errors: 0
Running integration.org.apache.storm.testing4j-test
Tests run: 7, Passed: 26, Failures: 0, Errors: 0
Running org.apache.storm.security.auth.nimbus-auth-test
Tests run: 5, Passed: 39, Failures: 0, Errors: 0
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.2.201409121644:report (report) @ storm-core ---
[INFO] Skipping JaCoCo execution due to missing execution data file:/Users/danny0405/study/storm2/storm/storm-core/target/jacoco.exec
[INFO]
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ storm-core ---
[INFO] Building jar: /Users/danny0405/study/storm2/storm/storm-core/target/storm-core-2.0.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-site-plugin:3.5.1:attach-descriptor (attach-descriptor) @ storm-core ---
[INFO]
[INFO] --- maven-dependency-plugin:2.8:copy-dependencies (copy-dependencies) @ storm-core ---
[INFO] Copying jgrapht-core-0.9.0.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/jgrapht-core-0.9.0.jar
[INFO] Copying tools.logging-0.2.3.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/tools.logging-0.2.3.jar
[INFO] Copying netty-3.9.0.Final.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/netty-3.9.0.Final.jar
[INFO] Copying carbonite-1.5.0.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/carbonite-1.5.0.jar
[INFO] Copying httpclient-4.3.3.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/httpclient-4.3.3.jar
[INFO] Copying hiccup-0.3.6.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/hiccup-0.3.6.jar
[INFO] Copying joda-time-2.3.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/joda-time-2.3.jar
[INFO] Copying tools.namespace-0.2.11.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/tools.namespace-0.2.11.jar
[INFO] Copying clj-stacktrace-0.2.8.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/clj-stacktrace-0.2.8.jar
[INFO] Copying commons-compress-1.4.1.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/commons-compress-1.4.1.jar
[INFO] Copying jackson-core-2.9.2.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/jackson-core-2.9.2.jar
[INFO] Copying disruptor-3.3.2.jar to /Users/danny0405/study/storm2/storm/storm-core/target/dependency/disruptor-3.3.2.jar
[INFO]
```
[GitHub] storm issue #2203: STORM-2153: New Metrics Reporting API
Github user roshannaik commented on the issue: https://github.com/apache/storm/pull/2203 Just wanted to unblock things here ... will post summarized numbers from my runs by tomorrow; here is a very brief summary. I ran both TVL and ConstSpoutIdBoltNullBoltTopo (with and without ACKing) in single worker mode. **With ACKING**: The latency and throughput numbers are slightly better for this patch. **Without ACKING**: With both binaries, the worker dies and restarts after every few minutes of running if we let the topo run without any throttling (a known issue that will get fixed in STORM-2306 for Storm 2.0). The numbers taken while the worker is running generally indicate that this patch is most likely slower wrt peak throughput. IMO, for Storm 1.x this is not an issue, as at higher throughputs the worker is going to keep dying anyway. But for Storm 2.0, once we have STORM-2306, it will be possible to measure throughput more accurately, and a fix may be necessary. In short, my runs indicate that things are looking good for this patch wrt Storm 1.x. We may need to revisit perf for Storm 2.0. As mentioned before, will post summarized numbers later. ---
[GitHub] storm issue #2203: STORM-2153: New Metrics Reporting API
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/2203 @arunmahadevan @roshannaik is going to do some performance tests. We could wait for his input for a day or a bit more. ---
[GitHub] storm issue #2504: STORM-2887: store metrics into RocksDB
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/2504 @agresch Could you push your documentation to this branch? Then I could see it and try it out from my side. ---
[GitHub] storm pull request #2504: STORM-2887: store metrics into RocksDB
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/2504#discussion_r162768150 --- Diff: pom.xml --- @@ -324,6 +324,7 @@ 0.9.12 2.3.5 2.3.0 +5.8.6 --- End diff -- Actually I don't have experience with RocksDB. If you see that RocksDB 5.8.6 runs properly, let's just use that version. ---
[GitHub] storm pull request #2504: STORM-2887: store metrics into RocksDB
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/2504#discussion_r162768100 --- Diff: storm-server/pom.xml --- @@ -64,6 +64,10 @@ auto-service true + +org.rocksdb +rocksdbjni --- End diff -- Good. We are good to go then. ---
[GitHub] storm pull request #2504: STORM-2887: store metrics into RocksDB
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/2504#discussion_r162768064 --- Diff: storm-server/src/main/java/org/apache/storm/metricstore/rocksdb/RocksDbStore.java --- @@ -0,0 +1,639 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.apache.storm.metricstore.rocksdb; + +import com.codahale.metrics.Meter; +import java.io.File; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.atomic.AtomicReference; + +import org.apache.storm.DaemonConfig; +import org.apache.storm.metric.StormMetricsRegistry; +import org.apache.storm.metricstore.AggLevel; +import org.apache.storm.metricstore.FilterOptions; +import org.apache.storm.metricstore.Metric; +import org.apache.storm.metricstore.MetricException; +import org.apache.storm.metricstore.MetricStore; +import org.apache.storm.utils.ObjectReader; +import org.rocksdb.BlockBasedTableConfig; +import org.rocksdb.IndexType; +import org.rocksdb.Options; +import org.rocksdb.ReadOptions; +import org.rocksdb.RocksDB; +import org.rocksdb.RocksDBException; +import org.rocksdb.RocksIterator; +import org.rocksdb.WriteBatch; +import org.rocksdb.WriteOptions; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + + +public class RocksDbStore implements MetricStore, AutoCloseable { +private static final Logger LOG = LoggerFactory.getLogger(RocksDbStore.class); +private static final int MAX_QUEUE_CAPACITY = 4000; +static final int INVALID_METADATA_STRING_ID = 0; +RocksDB db; +private ReadOnlyStringMetadataCache readOnlyStringMetadataCache = null; +private BlockingQueue queue = new LinkedBlockingQueue(MAX_QUEUE_CAPACITY); +private RocksDbMetricsWriter metricsWriter = null; +private MetricsCleaner metricsCleaner = null; +private Meter failureMeter = null; + +interface RocksDbScanCallback { +boolean cb(RocksDbKey key, RocksDbValue val); // return false to stop scan +} + +/** + * Create metric store instance using the configurations provided via the config map. 
+ * + * @param config Storm config map + * @throws MetricException on preparation error + */ +public void prepare(Map config) throws MetricException { +validateConfig(config); + +this.failureMeter = StormMetricsRegistry.registerMeter("RocksDB:metric-failures"); + +RocksDB.loadLibrary(); +boolean createIfMissing = ObjectReader.getBoolean(config.get(DaemonConfig.STORM_ROCKSDB_CREATE_IF_MISSING), false); + +try (Options options = new Options().setCreateIfMissing(createIfMissing)) { +// use the hash index for prefix searches +BlockBasedTableConfig tfc = new BlockBasedTableConfig(); +tfc.setIndexType(IndexType.kHashSearch); +options.setTableFormatConfig(tfc); +options.useCappedPrefixExtractor(RocksDbKey.KEY_SIZE); + +String path = getRocksDbAbsoluteDir(config); +LOG.info("Opening RocksDB from {}", path); +db = RocksDB.open(options, path); +} catch (RocksDBException e) { +String message = "Error opening RockDB database"; +LOG.error(message, e); +throw new MetricException(message, e); +} + +// create thread to delete old metrics and metadata +Integer retentionHours = Integer.parseInt(config.get(DaemonConfig.STORM_ROCKSDB_METRIC_RETENTION_HOURS).toString()); +Integer deletionPeriod = 0; +if (config.containsKey(DaemonConfig.STORM_ROCKSDB_METRIC_DELETION_PERIOD_HOURS)) { +deletionPeriod =
[GitHub] storm issue #2203: STORM-2153: New Metrics Reporting API
Github user arunmahadevan commented on the issue: https://github.com/apache/storm/pull/2203 +1 again. @HeartSaVioR , based on the TVL numbers I interpret that the performance numbers (throughput and latency) are comparable to 1.x branch. In that case can we merge this patch? If we find any other issues during RC testing we can take a call. ---
[GitHub] storm issue #2504: STORM-2887: store metrics into RocksDB
Github user agresch commented on the issue: https://github.com/apache/storm/pull/2504 Other than creating a storm.home constant, I added the changes you requested. I am having trouble getting the HTML generated for the new md file so I can validate the formatting. I ran "jekyll serve -w" and was able to see an updated link for my page in index.md on the HTML side, but it did not generate any HTML for storm-metricstore.md. I could use some advice on how this is supposed to work. Thanks. ---
[GitHub] storm issue #2519: STORM-2903: Fix possible NullPointerException in Abstract...
Github user arunmahadevan commented on the issue: https://github.com/apache/storm/pull/2519 +1 ---
[GitHub] storm issue #2519: STORM-2903: Fix possible NullPointerException in Abstract...
Github user omkreddy commented on the issue: https://github.com/apache/storm/pull/2519 @arunmahadevan thanks for the review. Yes, printing the token should be sufficient. Reverting the PR to the first version. ---
[GitHub] storm pull request #2519: STORM-2903: Fix possible NullPointerException in A...
Github user arunmahadevan commented on a diff in the pull request: https://github.com/apache/storm/pull/2519#discussion_r162705370 --- Diff: external/storm-autocreds/src/main/java/org/apache/storm/common/AbstractAutoCreds.java --- @@ -215,9 +215,17 @@ private void addTokensToUGI(Subject subject) { if (allTokens != null) { for (Token token : allTokens) { try { + +if (token == null) { +LOG.debug("Ignoring null token"); +continue; +} + LOG.debug("Current user: {}", UserGroupInformation.getCurrentUser()); -LOG.debug("Token from credential: {} / {}", token.toString(), - token.decodeIdentifier().getUser()); +LOG.debug("Token from Credentials : {}", token); + +if (token.decodeIdentifier() != null) --- End diff -- @omkreddy , toString already handles it and just printing the token would take care of decoding and printing the user information. https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L429 ---
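The resolution being agreed on above (skip null tokens and log the token itself, letting `Token.toString()` handle identifier decoding) can be sketched with plain strings standing in for Hadoop `Token` objects; this is an illustration of the loop's shape, not the merged patch:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative stand-in for the addTokensToUGI loop: guard against null
// entries, and log the element itself instead of dereferencing a nested
// field (the dereference was the source of the NullPointerException).
class NullSafeTokenLoop {
    // Returns the number of non-null tokens processed; 'log' collects the
    // lines that would otherwise go to LOG.debug.
    static int addTokens(List<String> tokens, StringBuilder log) {
        int added = 0;
        for (String token : tokens) {
            if (token == null) {
                log.append("Ignoring null token\n");
                continue; // skip instead of throwing
            }
            log.append("Token from Credentials : ").append(token).append('\n');
            added++;
        }
        return added;
    }
}
```

The key point is that the guard happens before any method call on the element, so a null entry degrades to a debug line rather than a crash.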
[GitHub] storm issue #2519: MINOR: Fix possible NullPointerException in AbstractAutoC...
Github user omkreddy commented on the issue: https://github.com/apache/storm/pull/2519 PR for master: https://github.com/apache/storm/pull/2520 ---
[GitHub] storm pull request #2520: STORM-2903: Fix possible NullPointerException in A...
GitHub user omkreddy opened a pull request: https://github.com/apache/storm/pull/2520 STORM-2903: Fix possible NullPointerException in AbstractHadoopAutoCreds and doc cleanups You can merge this pull request into a Git repository by running: $ git pull https://github.com/omkreddy/storm AUTO-CREDS-CLEANUP-MASTER Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2520.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2520 commit fd080d99d5c4d483e6a1149ad3edc0b5a7732412 Author: Manikumar Reddy Date: 2018-01-19T18:10:37Z STORM-2903: Fix possible NullPointerException in AbstractHadoopAutoCreds and doc cleanups ---
[GitHub] storm issue #2519: MINOR: Fix possible NullPointerException in AbstractAutoC...
Github user omkreddy commented on the issue: https://github.com/apache/storm/pull/2519 @satishd @arunmahadevan Actually the exception is from token.decodeIdentifier(), which can return null. Updated the PR with the review comments. ---
[GitHub] storm pull request #2519: MINOR: Fix possible NullPointerException in Abstra...
Github user arunmahadevan commented on a diff in the pull request: https://github.com/apache/storm/pull/2519#discussion_r162679780 --- Diff: external/storm-autocreds/src/main/java/org/apache/storm/common/AbstractAutoCreds.java --- @@ -216,8 +216,7 @@ private void addTokensToUGI(Subject subject) { for (Token token : allTokens) { try { LOG.debug("Current user: {}", UserGroupInformation.getCurrentUser()); -LOG.debug("Token from credential: {} / {}", token.toString(), - token.decodeIdentifier().getUser()); +LOG.debug("Extracted Token from Credentials : {}", token); --- End diff -- It appears that the token.toString() does some decoding so might print the user info. ---
[GitHub] storm issue #2519: MINOR: Fix possible NullPointerException in AbstractAutoC...
Github user arunmahadevan commented on the issue: https://github.com/apache/storm/pull/2519 +1, Thanks for the patch. Can you also associate it with a JIRA and raise a patch for master branch as well? ---
[GitHub] storm pull request #2519: MINOR: Fix possible NullPointerException in Abstra...
Github user satishd commented on a diff in the pull request: https://github.com/apache/storm/pull/2519#discussion_r162677211 --- Diff: external/storm-autocreds/src/main/java/org/apache/storm/common/AbstractAutoCreds.java --- @@ -216,8 +216,7 @@ private void addTokensToUGI(Subject subject) { for (Token token : allTokens) { try { LOG.debug("Current user: {}", UserGroupInformation.getCurrentUser()); -LOG.debug("Token from credential: {} / {}", token.toString(), - token.decodeIdentifier().getUser()); +LOG.debug("Extracted Token from Credentials : {}", token); --- End diff -- token.toString() will not print token.decodeIdentifier().getUser()) which may be useful in debugging. `UserGroupInformation.getCurrentUser().addToken(token)` ignores null tokens. Better to log and skip when `token` is null like below instead of changing the current log statements. ``` for (Token token : allTokens) { try { if(token == null) { LOG.debug("Ignoring null token"); continue; } LOG.debug("Current user: {}", UserGroupInformation.getCurrentUser()); LOG.debug("Token from credential: {} / {}", token.toString(), token.decodeIdentifier().getUser()); UserGroupInformation.getCurrentUser().addToken(token); LOG.info("Added delegation tokens to UGI."); } catch (IOException e) { LOG.error("Exception while trying to add tokens to ugi", e); } } ``` ---
[GitHub] storm issue #2519: MINOR: Fix possible NullPointerException in AbstractAutoC...
Github user omkreddy commented on the issue: https://github.com/apache/storm/pull/2519 @arunmahadevan @HeartSaVioR Observed below exception while testing Hive token mechanism. ``` Caused by: java.lang.NullPointerException at org.apache.storm.common.AbstractAutoCreds.addTokensToUGI(AbstractAutoCreds.java:219) ~[storm-autocreds-1.2.0.3.1.0.0-526.jar:1.2.0.3.1.0.0-526] at org.apache.storm.common.AbstractAutoCreds.populateSubject(AbstractAutoCreds.java:118) ~[storm-autocreds-1.2.0.3.1.0.0-526.jar:1.2.0.3.1.0.0-526] at org.apache.storm.security.auth.AuthUtils.populateSubject(AuthUtils.java:228) ~[storm-core-1.2.0.3.1.0.0-526.jar:1.2.0.3.1.0.0-526] ... 10 more 2018-01-19 16:23:26.157 o.a.s.util main [ERROR] Halting process: ("Error on initialization") ``` ---
[GitHub] storm pull request #2504: STORM-2887: store metrics into RocksDB
Github user agresch commented on a diff in the pull request: https://github.com/apache/storm/pull/2504#discussion_r162668430 --- Diff: storm-server/src/main/java/org/apache/storm/metricstore/rocksdb/RocksDbStore.java --- @@ -0,0 +1,639 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.apache.storm.metricstore.rocksdb; + +import com.codahale.metrics.Meter; +import java.io.File; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.atomic.AtomicReference; + +import org.apache.storm.DaemonConfig; +import org.apache.storm.metric.StormMetricsRegistry; +import org.apache.storm.metricstore.AggLevel; +import org.apache.storm.metricstore.FilterOptions; +import org.apache.storm.metricstore.Metric; +import org.apache.storm.metricstore.MetricException; +import org.apache.storm.metricstore.MetricStore; +import org.apache.storm.utils.ObjectReader; +import org.rocksdb.BlockBasedTableConfig; +import org.rocksdb.IndexType; +import org.rocksdb.Options; +import org.rocksdb.ReadOptions; +import org.rocksdb.RocksDB; +import org.rocksdb.RocksDBException; +import org.rocksdb.RocksIterator; +import org.rocksdb.WriteBatch; +import org.rocksdb.WriteOptions; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + + +public class RocksDbStore implements MetricStore, AutoCloseable { +private static final Logger LOG = LoggerFactory.getLogger(RocksDbStore.class); +private static final int MAX_QUEUE_CAPACITY = 4000; +static final int INVALID_METADATA_STRING_ID = 0; +RocksDB db; +private ReadOnlyStringMetadataCache readOnlyStringMetadataCache = null; +private BlockingQueue queue = new LinkedBlockingQueue(MAX_QUEUE_CAPACITY); +private RocksDbMetricsWriter metricsWriter = null; +private MetricsCleaner metricsCleaner = null; +private Meter failureMeter = null; + +interface RocksDbScanCallback { +boolean cb(RocksDbKey key, RocksDbValue val); // return false to stop scan +} + +/** + * Create metric store instance using the configurations provided via the config map. 
+ * + * @param config Storm config map + * @throws MetricException on preparation error + */ +public void prepare(Map config) throws MetricException { +validateConfig(config); + +this.failureMeter = StormMetricsRegistry.registerMeter("RocksDB:metric-failures"); + +RocksDB.loadLibrary(); +boolean createIfMissing = ObjectReader.getBoolean(config.get(DaemonConfig.STORM_ROCKSDB_CREATE_IF_MISSING), false); + +try (Options options = new Options().setCreateIfMissing(createIfMissing)) { +// use the hash index for prefix searches +BlockBasedTableConfig tfc = new BlockBasedTableConfig(); +tfc.setIndexType(IndexType.kHashSearch); +options.setTableFormatConfig(tfc); +options.useCappedPrefixExtractor(RocksDbKey.KEY_SIZE); + +String path = getRocksDbAbsoluteDir(config); +LOG.info("Opening RocksDB from {}", path); +db = RocksDB.open(options, path); +} catch (RocksDBException e) { +String message = "Error opening RockDB database"; +LOG.error(message, e); +throw new MetricException(message, e); +} + +// create thread to delete old metrics and metadata +Integer retentionHours = Integer.parseInt(config.get(DaemonConfig.STORM_ROCKSDB_METRIC_RETENTION_HOURS).toString()); +Integer deletionPeriod = 0; +if (config.containsKey(DaemonConfig.STORM_ROCKSDB_METRIC_DELETION_PERIOD_HOURS)) { +deletionPeriod =
[GitHub] storm pull request #2519: MINOR: Fix possible NullPointerException in Abstra...
GitHub user omkreddy opened a pull request: https://github.com/apache/storm/pull/2519 MINOR: Fix possible NullPointerException in AbstractAutoCreds and doc cleanups You can merge this pull request into a Git repository by running: $ git pull https://github.com/omkreddy/storm AUTO-CREDS-CLEANUP Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2519.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2519 commit 469759e48dfc2bc9d3ac6ee6bf26fcca6707b175 Author: Manikumar Reddy Date: 2018-01-19T15:51:49Z MINOR: Fix possible NullPointerException in AbstractAutoCreds logs and doc cleanups ---
[GitHub] storm issue #2203: STORM-2153: New Metrics Reporting API
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/2203 Again I don't have much time to do a detailed perf test. I increased the rate to 10 (with max spout 5000), which none of the branches can even catch up with. I lowered the rate to 85000 (with max spout 5000), which looks like it keeps up with the rate. Pasting raw values.

>> 1.2.0 (b8f76af)

1.

uptime | acked | acked/sec | failed | 99% | 99.9% | min | max | mean | stddev | user | sys | gc | mem
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
31 | 437,240 | 14,104.52 | 0 | 6,048,186,367 | 6,358,564,863 | 1,769,996,288 | 6,715,080,703 | 4,566,189,266.37 | 885,893,684.01 | 162,030 | 5,320 | 0 | 848.94
61 | 2,313,000 | 77,100.00 | 0 | 12,280,922,111 | 12,708,741,119 | 4,894,752,768 | 13,186,891,775 | 8,458,374,872.10 | 1,323,986,260.94 | 303,800 | 15,480 | 32,612 | 1,283.47
91 | 2,357,320 | 78,577.33 | 0 | 17,137,926,143 | 18,119,393,279 | 7,637,827,584 | 18,605,932,543 | 11,163,048,118.57 | 2,259,041,300.64 | 303,190 | 16,620 | 32,479 | 1,266.13
121 | 2,341,220 | 78,040.67 | 0 | 22,179,479,551 | 22,682,796,031 | 9,277,800,448 | 22,984,785,919 | 13,752,645,112.96 | 3,421,493,870.62 | 300,820 | 16,360 | 32,251 | 1,534.90
151 | 2,458,660 | 81,955.33 | 0 | 25,685,917,695 | 26,491,224,063 | 11,333,009,408 | 27,514,634,239 | 16,290,749,165.18 | 4,629,241,671.60 | 301,850 | 17,960 | 32,979 | 1,361.38
181 | 2,707,140 | 90,238.00 | 0 | 24,998,051,839 | 25,635,586,047 | 9,554,624,512 | 26,675,773,439 | 16,656,627,804.48 | 4,532,406,841.16 | 314,300 | 18,370 | 34,126 | 1,326.70
211 | 2,758,320 | 91,944.00 | 0 | 22,833,790,975 | 23,471,325,183 | 6,601,834,496 | 24,729,616,383 | 15,588,408,898.78 | 4,512,253,905.57 | 309,680 | 19,180 | 31,032 | 1,425.91
241 | 2,707,380 | 90,246.00 | 0 | 20,854,079,487 | 22,162,702,335 | 3,615,490,048 | 23,018,340,351 | 13,927,540,232.07 | 4,854,717,857.60 | 308,130 | 19,870 | 29,631 | 1,861.17
271 | 2,728,860 | 90,962.00 | 0 | 22,632,464,383 | 24,008,196,095 | 845,152,256 | 25,216,155,647 | 12,358,302,921.24 | 5,927,445,795.86 | 306,950 | 19,100 | 31,501 | 1,787.01
301 | 2,667,500 | 88,916.67 | 0 | 24,763,170,815 | 26,038,239,231 | 17,498,112 | 27,447,525,375 | 11,183,029,224.17 | 7,159,481,478.40 | 308,470 | 19,980 | 31,305 | 2,342.57

2.

uptime | acked | acked/sec | failed | 99% | 99.9% | min | max | mean | stddev | user | sys | gc | mem
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
30 | 578,720 | 19,290.67 | 0 | 7,402,946,559 | 7,662,993,407 | 266,469,376 | 8,065,646,591 | 5,158,004,864.46 | 1,074,126,219.89 | 262,310 | 9,350 | 9,773 | 1,602.33
60 | 2,372,600 | 79,086.67 | 0 | 11,668,553,727 | 12,054,429,695 | 4,987,027,456 | 12,423,528,447 | 7,402,521,102.49 | 1,520,006,340.32 | 307,880 | 14,850 | 33,679 | 1,237.30
90 | 2,437,040 | 81,234.67 | 0 | 15,216,934,911 | 15,653,142,527 | 5,872,025,600 | 16,617,832,447 | 9,132,024,518.75 | 2,534,964,885.31 | 308,240 | 15,100 | 35,179 | 754.82
120 | 2,543,240 | 84,774.67 | 0 | 16,475,226,111 | 16,944,988,159 | 5,528,092,672 | 17,800,626,175 | 10,760,658,547.64 | 2,897,544,833.24 | 311,050 | 17,130 | 33,127 | 1,370.09
150 | 2,636,720 | 87,890.67 | 0 | 14,310,965,247 | 14,982,053,887 | 2,990,538,752 | 16,139,681,791 | 9,655,823,414.65 | 3,002,342,820.97 | 311,070 | 18,610 | 32,034 | 1,531.01
180 | 2,752,440 | 91,748.00 | 0 | 14,571,012,095 | 15,997,075,455 | 14,172,160 | 16,995,319,807 | 8,147,697,251.82 | 4,013,817,552.85 | 306,620 | 19,910 | 29,342 | 1,275.41
210 | 2,713,300 | 90,443.33 | 0 | 15,032,385,535 | 16,022,241,279 | 46,727,168 | 18,001,952,767 | 6,401,270,339.36 | 4,479,415,024.72 | 301,180 | 20,950 | 27,750 | 1,848.82
240 | 2,692,920 | 89,764.00 | 0 | 15,820,914,687 | 16,827,547,647 | 32,653,312 | 17,582,522,367 | 4,715,121,892.86 | 5,286,625,289.02 | 303,560 | 21,070 | 27,904 | 1,181.89
271 | 2,726,920 | 87,965.16 | 0 | 14,579,400,703 | 15,619,588,095 | 29,425,664 | 16,668,164,095 | 3,519,416,574.71 | 4,638,221,636.04 | 302,440 | 22,290 | 27,634 | 2,023.50
301 | 2,769,580 | 92,319.33 | 0 | 7,050,625,023 | 7,931,428,863 | 7,737,344 | 8,464,105,471 | 960,653,059.65 | 1,605,263,196.59 | 277,020 | 28,700 | 17,844 | 1,250.71

3.

uptime | acked | acked/sec | failed | 99% | 99.9% | min | max | mean | stddev | user | sys | gc | mem
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
30 | 541,280 | 18,042.67 | 0 | 5,599,395,839 | 5,922,357,247 | 474,218,496 | 6,257,901,567 | 4,213,619,216.15 | 685,663,910.05 | 157,680 | 5,270 | 0 | 1,275.05
60 | 2,407,560 | 80,252.00 | 0 | 10,225,713,151 | 10,812,915,711 | 3,972,005,888 | 11,307,843,583 | 6,705,396,513.71 | 1,066,545,879.38 | 302,660 | 13,700 | 32,120 | 1,532.04
90 | 2,613,980 | 87,132.67 | 0 | 12,658,409,471 | 13,774,094,335 |
[GitHub] storm issue #2504: STORM-2887: store metrics into RocksDB
Github user agresch commented on the issue: https://github.com/apache/storm/pull/2504 I'll put up a commit for the remaining issues (or comment if I have further questions). ---
[GitHub] storm pull request #2504: STORM-2887: store metrics into RocksDB
Github user agresch commented on a diff in the pull request: https://github.com/apache/storm/pull/2504#discussion_r162634637 --- Diff: storm-server/pom.xml --- @@ -64,6 +64,10 @@ auto-service true + +org.rocksdb +rocksdbjni --- End diff -- I tested on a mac and a RHEL vm. The rocksDB jar also contains a win64 jni dll. If an error is thrown creating the metrics store, everything should be treated as a noop. ---
[GitHub] storm pull request #2504: STORM-2887: store metrics into RocksDB
Github user agresch commented on a diff in the pull request: https://github.com/apache/storm/pull/2504#discussion_r162630427 --- Diff: pom.xml --- @@ -324,6 +324,7 @@ 0.9.12 2.3.5 2.3.0 +5.8.6 --- End diff -- No. It was the latest version at the time. Open to advice/suggestions. 5.8.7 and 5.9.2 now also exist. ---
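For reference, the pattern the two diffs above are discussing — declaring the rocksdbjni version once as a property in the parent pom and resolving the storm-server dependency through it — would look roughly like this. The property name `rocksdb.version` is assumed for illustration; the diffs only show the raw version string:

```xml
<!-- parent pom.xml: single place to bump the RocksDB version -->
<properties>
    <rocksdb.version>5.8.6</rocksdb.version>
</properties>

<!-- storm-server/pom.xml: dependency resolved via the shared property -->
<dependency>
    <groupId>org.rocksdb</groupId>
    <artifactId>rocksdbjni</artifactId>
    <version>${rocksdb.version}</version>
</dependency>
```

With this shape, moving to 5.8.7 or 5.9.2 later is a one-line change in the parent pom.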
[GitHub] storm pull request #2475: STORM-2862: More flexible logging in multilang
Github user hmcc commented on a diff in the pull request: https://github.com/apache/storm/pull/2475#discussion_r162613466 --- Diff: storm-client/src/jvm/org/apache/storm/utils/DefaultShellLogHandler.java --- @@ -0,0 +1,100 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.storm.utils; + +import org.apache.storm.multilang.ShellMsg; +import org.apache.storm.task.TopologyContext; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Handle output from non-JVM processes. + */ +public class DefaultShellLogHandler implements ShellLogHandler { +public static final Logger LOG = LoggerFactory.getLogger(DefaultShellLogHandler.class); --- End diff -- Yep, makes sense - thanks! ---
[GitHub] storm pull request #2517: [STORM-2901] Reuse ZK connection for getKeySequenc...
Github user danny0405 commented on a diff in the pull request: https://github.com/apache/storm/pull/2517#discussion_r162608557 --- Diff: storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java --- @@ -1263,24 +1281,23 @@ private void setupStormCode(Mapconf, String topoId, String tmpJ String codeKey = ConfigUtils.masterStormCodeKey(topoId); String confKey = ConfigUtils.masterStormConfKey(topoId); NimbusInfo hostPortInfo = nimbusHostPortInfo; + if (tmpJarLocation != null) { //in local mode there is no jar try (FileInputStream fin = new FileInputStream(tmpJarLocation)) { store.createBlob(jarKey, fin, new SettableBlobMeta(BlobStoreAclHandler.DEFAULT), subject); } -if (store instanceof LocalFsBlobStore) { -clusterState.setupBlobstore(jarKey, hostPortInfo, getVersionForKey(jarKey, hostPortInfo, conf)); -} } topoCache.addTopoConf(topoId, subject, topoConf); -if (store instanceof LocalFsBlobStore) { -clusterState.setupBlobstore(confKey, hostPortInfo, getVersionForKey(confKey, hostPortInfo, conf)); -} - topoCache.addTopology(topoId, subject, topology); + if (store instanceof LocalFsBlobStore) { -clusterState.setupBlobstore(codeKey, hostPortInfo, getVersionForKey(codeKey, hostPortInfo, conf)); +if (tmpJarLocation != null) { +clusterState.setupBlobstore(jarKey, hostPortInfo, getVersionForKey(jarKey, hostPortInfo, getOrCreateZkClient())); --- End diff -- Done. ---
[GitHub] storm pull request #2517: [STORM-2901] Reuse ZK connection for getKeySequenc...
Github user danny0405 commented on a diff in the pull request: https://github.com/apache/storm/pull/2517#discussion_r162608136 --- Diff: storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java --- @@ -1149,6 +1155,18 @@ public void setAuthorizationHandler(IAuthorizer authorizationHandler) { this.authorizationHandler = authorizationHandler; } +private CuratorFramework getOrCreateZkClient() { --- End diff -- Done. ---
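The "create the ZK client once and reuse it on every later call" shape that a `getOrCreateZkClient()` helper implies (which is the point of this PR's title) can be sketched generically. `CuratorFramework` is replaced by a type parameter so the sketch is self-contained; none of this is the actual Nimbus code:

```java
import java.util.function.Supplier;

// Generic lazy-initialized, reused client handle. The first caller pays the
// connection cost; every later call returns the same instance, which is the
// fix for opening a fresh ZK connection per getKeySequenceNumber call.
class LazyClient<T> {
    private final Supplier<T> factory;
    private volatile T client;

    LazyClient(Supplier<T> factory) {
        this.factory = factory;
    }

    // Double-checked locking: cheap volatile read on the hot path,
    // synchronized block only for the one-time creation.
    T getOrCreate() {
        T local = client;
        if (local == null) {
            synchronized (this) {
                local = client;
                if (local == null) {
                    client = local = factory.get(); // connect only on first use
                }
            }
        }
        return local;
    }
}
```

In the real daemon the factory would build a `CuratorFramework` from the Storm conf, and the instance would be closed on shutdown; both details are omitted here.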