[GitHub] storm-site issue #5: Style tables so they appear similar to how they look on...
Github user erikdw commented on the issue: https://github.com/apache/storm-site/pull/5 :shipit: ! Thanks @srdo. FYI, I hand-hacked my view of the site using Google Chrome's developer tools, and verified that the additions you propose make it look good: https://user-images.githubusercontent.com/441/40030501-a4daa4fa-5786-11e8-8039-b9fca9dbb9cc.png ---
[GitHub] storm pull request #2636: 1.1.x branch - Add SSL functionality
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2636#discussion_r184219125 --- Diff: external/storm-cassandra/src/main/java/org/apache/storm/cassandra/client/ClusterFactory.java --- @@ -67,7 +83,57 @@ protected Cluster make(Map<String, Object> stormConf) { .setConsistencyLevel(cassandraConf.getConsistencyLevel()); cluster.withQueryOptions(options); +SslProps sslProps = cassandraConf.getSslProps(); +configureIfSsl(cluster, sslProps); return cluster.build(); } + +/** + * If sslProps passed, then set SSL configuration in clusterBuilder. + * + * @param clusterBuilder cluster builder + * @param sslProps SSL properties + * @return + */ +private static Builder configureIfSsl(Builder clusterBuilder, SslProps sslProps) { +if (sslProps == null || !sslProps.isSsl()) { +return clusterBuilder; +} + +SSLContext sslContext = getSslContext(sslProps.getTruststorePath(), sslProps.getTruststorePassword(), sslProps.getKeystorePath(), +sslProps.getKeystorePassword()); + +// Default cipher suites supported by C* --- End diff -- Any purpose for this comment being retained? ---
[GitHub] storm issue #2637: Map of Spout configurations from `storm-kafka` to `storm-...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2637 @hmcl : would you happen to have a chance (or desire) to skim through this PR Hugo? I know that you contributed heavily to storm-kafka-client, so thought you might have some thoughts on this. No worries if not. ---
[GitHub] storm pull request #2634: [STORM-3021] Fix wrong usages of Config.TOPOLOGY_W...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2634#discussion_r181535091 --- Diff: storm-client/src/jvm/org/apache/storm/Config.java --- @@ -372,9 +374,10 @@ /** * How many executors to spawn for ackers. * - * By not setting this variable or setting it as null, Storm will set the number of acker executors - * to be equal to the number of workers configured for this topology. If this variable is set to 0, - * then Storm will immediately ack tuples as soon as they come off the spout, effectively disabling reliability. + * By not setting this variable or setting it as null, Storm will set the number of acker executor to be equal to --- End diff -- nit: please restore pluralization of `executor`. i.e., `number of acker executors`. ---
[GitHub] storm issue #2550: STORM-2937: Overwrite latest storm-kafka-client 1.x-branc...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2550 @HeartSaVioR : I just updated all the commit messages in-line with @hmcl's proposal. I also noticed that I missed the addition of the `serialVersionUID` to Fields.java so I put that in and rebased. That's the only non-message change in the PR. ---
[GitHub] storm issue #2550: STORM-2937: Overwrite latest storm-kafka-client 1.x-branc...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2550

> First of all, I need to make this clear: I'm not claiming that this PR should be changed. I'm OK to merge this in as it is, and I'll do it once 24hrs policy is met.

Let me first add just a bit more info to my PR's commit messages per @hmcl's suggestions.

> What we want to discuss is the standard rule of Storm, hence it should not be resolved in offline discussion, but discussion thread in dev list should be initiated and discussion should take place in there. Standard rule should not be brought from decision among part of us.

Totally agree. Sorry that my use of "offline" was unclear. I meant offline from *this* channel of discussion. A dev-list email thread sounds apropos. [1] So maybe we start a thread there?

[1] Except that the format of the mails on the list is pretty ugly, and thus it's harder to cleanly express certain things. e.g., my previous message here would not be nearly as clean on the email list. Not sure why the lists seemingly don't support HTML emails. ---
[GitHub] storm issue #2550: STORM-2937: Overwrite latest storm-kafka-client 1.x-branc...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2550 @hmcl : now we're going down the rathole that I knew existed if we didn't take this offline. 😄 Before I get to the question of using command-line `git` to see the related commits, I must note my agreement that all commits should be prefixed with the same ID (e.g., `STORM-2937:`) and perhaps a common brief description [1]. That would provide a convention for seeing that the commits are all part of the same JIRA. That *might* suffice for satisfying your question. As for how I personally would enumerate the commits in the `git log` for this PR as you requested, I cannot show you *yet* because this PR hasn't been merged. So if you'll excuse the reference to a separate repo, I'll show you how I enumerate the commits for a PR in a different repo: * https://github.com/mesos/storm/pull/233/commits We can see that the commits there are: * f9fa0f6 * 03030db When I look at my "git log" for purposes of seeing the flow of history, I use `git l` [2]. This output looks like so: ``` * 4119581 - (base/master) avoid hardcoding storm-core version in storm-shim-1x/pom.xml, just use the property defined in the project's base pom.xml (2018-01-23 18:56:03 -0800) * 256d5f0 - Merge pull request #231 from JessicaLHartog/logviewer-sidecar-flag (2017-12-20 11:40:06 -0800) |\ | * 18832dd - Conditionally perform logviewer sidecar creation depending on configuration. (2017-12-13 15:28:21 -0800) | * 6dd4973 - Rename now defunct configuration parameter (2017-12-13 14:46:29 -0800) * | d2cedb8 - Merge pull request #233 (2017-12-20 11:30:34 -0800) |\ \ | * | 03030db - Update protobuf version to be compatable with newer versions of Mesos (2017-12-18 15:16:37 -0800) | * | f9fa0f6 - Updating storm version (2017-12-18 15:16:37 -0800) |/ / * | 7a1e50c - Merge pull request #232 from JessicaLHartog/issue-222 (2017-12-18 15:12:29 -0800) |\ \ | |/ |/| ``` I acknowledge that it's a bit hard to parse visually at first!
But it shows me that there are 2 commits that went into the merge of PR #233: * 03030db * f9fa0f6 Then you can put those commits in a sequence when you run various git commands. Again, I **do** agree that it's best practice to squash. I'm just trying to argue that it shouldn't be a blanket unwavering policy with no exceptions. Otherwise I believe we're choosing ease of commit stewardship over ease of reading and maintaining the actual code. [1] I kind of dislike having the *exact same* headline with nothing unique for each commit because it forces you to look more deeply to see what each commit does (e.g., open each commit separately in GitHub, or use git show for each commit, or use git log and scroll to the commits you are looking at). That is a minor nit with the convention you proposed though. [2] Definition of my `git l` alias ``` % cat ~/.gitconfig ... [alias] ... l = log --date=iso-local --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cd) %C(bold blue)<%an>%Creset' --abbrev-commit ``` ---
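For anyone following along without a graph alias, the same enumeration can be reproduced with plain `git log` range syntax: the commits a merge brought in are exactly those reachable from its second parent but not its first. A self-contained sketch in a throwaway repo (all names hypothetical):

```shell
set -e
cd "$(mktemp -d)"
git init -q .
# shorthand so the demo works without global git identity config
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
g commit -q --allow-empty -m 'base'
default=$(git symbolic-ref --short HEAD)
git checkout -q -b pr-branch
echo a > a.txt && git add a.txt && g commit -qm 'PR commit 1'
echo b > b.txt && git add b.txt && g commit -qm 'PR commit 2'
git checkout -q "$default"
g merge -q --no-ff --no-edit pr-branch
# the merge's second-parent history minus its first-parent history
# is exactly the set of commits the PR contributed
git log --oneline HEAD^1..HEAD^2
```

Applied to the example above, the analogous command would be `git log --oneline d2cedb8^1..d2cedb8^2`, listing f9fa0f6 and 03030db.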
[GitHub] storm issue #2550: STORM-2937: Overwrite latest storm-kafka-client 1.x-branc...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2550 To be clear, I'm largely aligned with what you're both saying. I might not understand what precise problem you face that makes not squashing such a problem, but I too would appreciate the aesthetic clarity of a cleaner git log than we currently have, *so long as* it doesn't lose valuable information. i.e., the stance of *always* squashing *will* cause information loss in the git log. ---
[GitHub] storm issue #2550: STORM-2937: Overwrite latest storm-kafka-client 1.x-branc...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2550 @hmcl @HeartSaVioR : perhaps we can take this offline? (I admit to being guilty of starting this discussion in these PRs.) I am not grasping the actual problems that you encounter that make squashing so important. I have read everything you wrote here and in the other thread where we discussed it. But I still cannot see what *exactly* is the problem for you, nor why all these other big projects have gone to this information-losing approach. All commits of a PR *are* bunched together. They can all be referenced as a group in `git revert` or `git cherry-pick`. Please note I do appreciate that you are willing to allow for divergence from a strict "squash all the things" standard when it is valuable. NOTE: I abhor super noisy PRs that merge tons of bug-fixy commits, especially since that encourages bad commit messages as people usually quickly write "fixed foo" with no details when they are iteratively committing. So it's a bit uncomfortable to find myself defending *not* squashing. ---
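To make the "referenced as a group" claim concrete, here is a throwaway-repo sketch (hypothetical names): a merge commit lets you revert every commit of a PR in one step by naming the mainline parent with `-m 1`:

```shell
set -e
cd "$(mktemp -d)"
git init -q .
# shorthand so the demo works without global git identity config
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
g commit -q --allow-empty -m 'base'
default=$(git symbolic-ref --short HEAD)
git checkout -q -b feature
echo one > one.txt && git add one.txt && g commit -qm 'feature commit 1'
echo two > two.txt && git add two.txt && g commit -qm 'feature commit 2'
git checkout -q "$default"
g merge -q --no-ff --no-edit feature
# revert the whole merged PR (both commits) relative to the mainline parent
g revert -m 1 --no-edit HEAD
ls
```

`git cherry-pick -m 1 <merge-sha>` works analogously, applying the merge's combined change relative to its first parent.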
[GitHub] storm pull request #2550: STORM-2937: Overwrite latest storm-kafka-client 1....
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2550 STORM-2937: Overwrite latest storm-kafka-client 1.x-branch into 1.0.x-branch

Mimics @HeartSaVioR 's work in STORM-2936 and PR #2549, but applies back to the more divergent 1.0.x-branch. Big thanks to @HeartSaVioR , @srdo, @hmcl for guidance and @jessicalhartog & @srishtyagrawal for pairing to figure out all the odds and ends here. And for the storm community's flexibility and willingness to address this kind of issue!

* origin commit sha (1.x-branch): 74ca795
  * That is the same as in STORM-2936 and PR #2549
* changed 2 things in external/storm-kafka-client/pom.xml:
  * module version had to be synced with 1.0.x-branch current version
  * storm-core dependency's scope changed from undefined `${provided.scope}` to `provided`
* updated storm's base pom.xml with the same kafka client version as 1.x-branch: 0.10.1.0
* overwriting requires a few somewhat isolated changes to storm-core:
  * added InterfaceStability.java
  * overwrote Time.java with that from 1.x-branch
    * needed both for tests in storm-kafka-client, and for the Time.nanoTime() invocation
  * backported changes to SlotTest.java to leverage the Time.java changes
    * needed since startSimulatingAutoAdvanceOnSleep() was ditched in the Time.java changes
  * backported changes to Fields.java to allow equality comparisons in tests
    * otherwise we received confusing test failures claiming 2 seemingly identical items were different
  * backported changes to the trident interface for partition handling, which were needed for KafkaTridentSpoutEmitter
* also copied back the docs

NOTE: there were other minor-ish things I spotted in 1.x-branch that I intentionally didn't fix for now (typos, ^M characters, trailing whitespace, etc.). I didn't want this to entail any more changes than necessary.
For quick reference, here are the files outside of storm-kafka-client that have been modified or added: ``` % git diff origin/1.0.x-branch | grep '^diff' | awk '{print $3}' | grep -v '^external/storm-kafka-client/' docs/index.md docs/storm-kafka-client.md external/storm-eventhubs/src/main/java/org/apache/storm/eventhubs/trident/OpaqueTridentEventHubEmitter.java external/storm-kafka/src/jvm/org/apache/storm/kafka/trident/TridentKafkaEmitter.java pom.xml storm-core/src/jvm/org/apache/storm/annotation/InterfaceStability.java storm-core/src/jvm/org/apache/storm/trident/spout/IOpaquePartitionedTridentSpout.java storm-core/src/jvm/org/apache/storm/trident/spout/OpaquePartitionedTridentSpoutExecutor.java storm-core/src/jvm/org/apache/storm/trident/topology/state/TransactionalState.java storm-core/src/jvm/org/apache/storm/tuple/Fields.java storm-core/src/jvm/org/apache/storm/utils/Time.java storm-core/test/jvm/org/apache/storm/daemon/supervisor/SlotTest.java ``` You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2937 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2550.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2550 commit 533ef7ad2ba057bf746bc152ac84151516b744b1 Author: Erik Weathers <erikdw@...> Date: 2018-02-07T00:10:36Z copied external/storm-kafka-client from 1.x-branch at SHA 74ca795 (this doesn't compile, intentionally) commit e42ae4de8659c240609c7230a89e750a2e16df07 Author: Erik Weathers <erikdw@...> Date: 2018-02-07T01:44:49Z update storm-kafka-client pom.xml and base pom.xml as necessary for using storm-kafka-client from 1.x-branch in 1.0.x-branch Also added InterfaceStability.java which is another needed change. 
commit f758123b57c8c60e2842a56b947416ca7cdf988e Author: Hugo Louro <hmclouro@...> Date: 2017-03-10T21:13:31Z backport trident interface changes commit 5d9ad1eaa6c6bca8edf5eceffb57e89e27975838 Author: Erik Weathers <erikdw@...> Date: 2018-02-07T03:47:22Z backport changes to Fields to allow it to be checked for equality in storm-kafka-client unit tests commit 0ddc411540f317e63f6a493eb668f49996bc45f3 Author: Erik Weathers <erikdw@...> Date: 2018-02-07T04:02:43Z copy Time.java from 1.x-branch to allow use of nanoTime() in storm-kafka-client, and also update SlotTest to use try-with-resources since new Time implementation ditched startSimulatingAutoAdvanceOnSleep() (this was a selective cherry-pick of a03137ed, retaining only those changes needed) commit ad736050727f0b38247a16948f6c880b4782e734 Author: Erik Weathers <erikdw@...> Date: 2018-02-07T08:16:37Z add storm-kafka-client doc from 1.x-branch, and link to it from index.md ---
[GitHub] storm issue #2549: STORM-2936 Overwrite latest storm-kafka-client 1.x-branch...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2549 Also can you please tell me how you "archived" the couple of items that you did? ---
[GitHub] storm issue #2549: STORM-2936 Overwrite latest storm-kafka-client 1.x-branch...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2549 I know we've talked with @hmcl about the "to squash or not to squash?" question -- IMNSHO this is a perfect example of where squashing is a bad idea. i.e., I have *no* idea what "fix compilation issues due to storm-core" actually entailed by just looking at this giant commit. ---
[GitHub] storm issue #2433: [STORM-2693] Heartbeats and assignments promotion for sto...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2433 @revans2 : thanks, please just loop us in and we'll carve out time to provide feedback. ---
[GitHub] storm issue #2433: [STORM-2693] Heartbeats and assignments promotion for sto...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2433

> My consideration is the workers started by parent supervisor, how do the workers know their parent supervisor's heartbeat/assignment port? If i passed it as starting argument, the port will be invalid if its parent supervisor collapse or restarts.

For storm-on-mesos, the Supervisor + Workers run in a per-topology container on each host. The Supervisor is the container's init process, so if it dies then the Workers die with it. So the problematic scenario you outlined doesn't exist for that use-case. ---
[GitHub] storm issue #2433: [STORM-2693] Heartbeats and assignments promotion for sto...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2433 @HeartSaVioR : really @revans2 noticed the change's implications for storm-on-mesos, so he should get the credit. :-) He's rightly suggested that we create some tests to codify everything that might break us -- that is understandably difficult, but I'm collecting a list of things we'll need to test for. And awesome, `-c supervisor.thrift.port=` will work great for us, thanks for confirming! (I was able to look through the `bin/storm.py` code and see that the `-c/--config` handling ends up putting elements into `-Dstorm.options=` for any executed storm java class.) ---
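For readers curious about the mechanism mentioned above, here is a simplified, hypothetical sketch of the `-c key=value` folding; the real `bin/storm.py` does more (value escaping, other flags), so this only shows the shape of the behavior:

```shell
# Hypothetical sketch: collect repeated `-c key=value` CLI flags and fold
# them into a single -Dstorm.options= define for the launched JVM daemon.
build_storm_options() {
  opts=""
  while [ "$#" -gt 0 ]; do
    if [ "$1" = "-c" ]; then
      shift
      # comma-join successive key=value pairs
      opts="${opts:+$opts,}$1"
    fi
    shift
  done
  printf -- '-Dstorm.options=%s\n' "$opts"
}
build_storm_options supervisor -c supervisor.thrift.port=6628
```

With that in place, each supervisor launch can carry its own port override on the command line rather than in a shared storm.yaml.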
[GitHub] storm issue #2433: [STORM-2693] Heartbeats and assignments promotion for sto...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2433 @HeartSaVioR : > By the way, I was not aware of the discussion in storm-mesos, so don't know which works should be done in Storm side, and how these are coupled with "this issue". Maybe better to only pointing out relevant things, and discuss in dev. mailing list if necessary for things out of topic. I think you're misinterpreting [Jessica's comments](https://github.com/apache/storm/pull/2433#issuecomment-357140893). She wasn't trying to bog down this discussion with an attempt to resolve the other breakages of storm-on-mesos, I believe she was just providing some context to [Bobby's early statement](https://github.com/apache/storm/pull/2433#issuecomment-356661822) about storm-on-mesos being broken a few times by changes like this in storm-core. Furthermore, I believe she was also clarifying that storm-on-mesos is currently *totally* broken (as explained in the context she provided) in the 1.1+ branches, so this proposed change technically is not *breaking* storm-on-mesos, since it's already broken. Our goal is simply to prevent even more breaking changes. > As far as I understand in your comment, only concern with this issue is specifying Supervisor's thrift port, which shouldn't be random in range but just using specified port. If I understand correctly, the patch already does that (via configuration), and storm-mesos could launch Supervisor instance with overriding supervisor thrift port. Makes sense? That has potential to work -- can you please clarify something though? Is it possible to specify this setting (`supervisor.thrift.port`) as a CLI parameter to the supervisor as it is launched? If that works then awesome, that means the requirement Jessica outlined is already satisfied since we can simply specify that setting when we launch each supervisor. However, if the option instead must be in the storm configuration yaml file, then it is insufficient.
That is because storm-on-mesos *must* be able to have different ports for every supervisor, but every supervisor shares the same config file in storm-on-mesos. ---
[GitHub] storm issue #2510: [STORM-2892] fix PATH substitution in flux tests
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2510 @srdo : makes sense, thanks for the info ---
[GitHub] storm issue #2510: [STORM-2892] fix PATH substitution in flux tests
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2510 @srdo : that is my understanding as well. Sorry, I didn't realize you backported this to all of the other active branches already, is that done by default for trivial changes like this? ---
[GitHub] storm issue #2512: [STORM-2894] fix some random typos
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2512 @srdo : I assume for cleanliness / ease-of-reading purposes? I am a proponent of squashing commits when there are tons of slight tweaks being made to the same code. I do like separate commits for logically separate things, though, since they help one understand the code changes in more bite-sized chunks instead of mammoth, unreadably-long commits. ---
[GitHub] storm issue #2512: [STORM-2894] fix some random typos
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2512 @srdo : I actually spent extra effort to have 2 commits. :-) This was to ensure that there are no hidden changes amidst the renaming of the file (I am a strong proponent of trivial-to-read commits/diffs). With the 2 separate commits, in raw CLI `git` the changes are easy to follow. When I made the change in 1 commit at first, I believe it was hard to see within the `git show` output what lines had been touched. I might be misremembering a bit, but I definitely remember that it was clear in the `git status` that a non-content-changing rename was done, so long as I didn't touch the file contents simultaneously. ---
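The behavior being relied on is easy to demonstrate in a throwaway repo (hypothetical file names): a commit containing only a `git mv` is reported by git as a pure rename, which is exactly the signal that gets muddied when content edits land in the same commit:

```shell
set -e
cd "$(mktemp -d)"
git init -q .
# shorthand so the demo works without global git identity config
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
printf 'line one\nline two\n' > OldName.java
git add OldName.java
g commit -qm 'add OldName.java'
git mv OldName.java NewName.java
g commit -qm 'rename only, no content changes'
# a rename-only commit shows up as a pure rename in the stat output
git show -M --stat --format= HEAD
```

If the rename commit had also edited the file's contents, the output would instead show a delete plus an add (or a partial-similarity rename), forcing readers to diff the two files to find the real changes.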
[GitHub] storm issue #2510: [STORM-2892] fix PATH substitution in flux tests
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2510 @srdo & @revans2 : thx for the quick review-and-merge here! Follow-up question: since this has been present since the Flux stuff was introduced, it impacts all active branches. Should I backport and send PRs for each active branch? I presume the cherry-pick will be clean, so maybe you can just do cherry-picks instead? Or we just ignore it (I can just skip tests when I do builds, or patch my PATH to avoid the problem). ---
[GitHub] storm issue #2511: [STORM-2893] fix storm distribution build by using POSIX ...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2511 @revans2 & @srdo : thx for the quick review-and-merge! ---
[GitHub] storm pull request #2511: [STORM-2893] fix storm distribution build by using...
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2511 [STORM-2893] fix storm distribution build by using POSIX tar There is a known issue with the maven-assembly-plugin version 2.5+ where it fails with errors like the following: ``` [ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:2.6:single (default) on project final-package: Execution default of goal org.apache.maven.plugins:maven-assembly-plugin:2.6:single failed: group id '906175167' is too big ( > 2097151 ). Use STAR or POSIX extensions to overcome this limit -> [Help 1] ``` This became an issue with change 84a4314d96b9e4e377a3d5d81d0a042d96a0625e when the maven-assembly-plugin was upgraded from version 2.2.2 to 2.6, as inherited from the apache pom version 18. To fix this we set the assembly plugin's `tarLongFileMode` configuration setting to `posix`. You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2893 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2511.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2511 commit 08d7d8db51df3b2620461f7167cae787c9cf96dc Author: Erik Weathers <erikdw@...> Date: 2018-01-10T07:18:48Z [STORM-2893] fix storm distribution build by using POSIX tar There is a known issue with the maven-assembly-plugin version 2.5+ where it fails with errors like the following: ``` [ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:2.6:single (default) on project final-package: Execution default of goal org.apache.maven.plugins:maven-assembly-plugin:2.6:single failed: group id '906175167' is too big ( > 2097151 ). 
Use STAR or POSIX extensions to overcome this limit -> [Help 1] ``` This became an issue with change 84a4314d96b9e4e377a3d5d81d0a042d96a0625e when the maven-assembly-plugin was upgraded from version 2.2.2 to 2.6, as inherited from the apache pom version 18. To fix this we set the assembly plugin's `tarLongFileMode` configuration setting to `posix`. ---
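For reference, the fix described above boils down to one plugin configuration element; a sketch of the relevant `pom.xml` fragment (placement within the build section assumed):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <!-- Use POSIX tar extensions so large uid/gid values and long
         file names do not abort the assembly (the error quoted above) -->
    <tarLongFileMode>posix</tarLongFileMode>
  </configuration>
</plugin>
```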
[GitHub] storm pull request #2510: [STORM-2892] fix PATH substitution in flux tests
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2510 [STORM-2892] fix PATH substitution in flux tests The tests fail when the PATH environment variable has a trailing colon, despite that being a valid PATH. This happens because it is substituted directly into the resultant YAML file, which results in invalid YAML (since you cannot end a map's value with ":" in the raw text). So the solution is to wrap the map value with double-quotes. Also fix 2 typos in comments in this file. You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2892 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2510.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2510 commit b070bd1077398910ee3f818980d1d1705969485c Author: Erik Weathers <erikdw@...> Date: 2018-01-10T06:10:42Z [STORM-2892] fix PATH substitution in flux tests The tests fail when the PATH environment variable has a trailing colon, despite that being a valid PATH. This happens because it is substituted directly into the resultant YAML file, which results in invalid YAML (since you cannot end a map's value with ":" in the raw text). So the solution is to wrap the map value with double-quotes. Also fix 2 typos in comments in this file. ---
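The YAML failure mode described above can be seen in miniature (hypothetical key name and PATH value): an unquoted substituted value ending in `:` is invalid YAML, because the trailing colon is parsed as a mapping indicator, while double-quoting makes it a plain string:

```yaml
# Invalid: the substituted PATH ends with ':', so the parser rejects it
# ("mapping values are not allowed here" or similar).
env.path: /usr/bin:/bin:

# Valid: double-quoting makes the trailing ':' part of an ordinary string.
env.path: "/usr/bin:/bin:"
```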
[GitHub] storm issue #2468: [STORM-2690] resurrect invocation of ISupervisor.assigned...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2468 @ptgoetz : I've created PRs for the branches: * [`1.0.x-branch`](https://github.com/apache/storm/pull/2470) * [`1.1.x-branch`](https://github.com/apache/storm/pull/2471) * [`1.x-branch`](https://github.com/apache/storm/pull/2472) ---
[GitHub] storm pull request #2471: [STORM-2690] resurrect invocation of ISupervisor.a...
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2471 [STORM-2690] resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible This commit fixes the storm-mesos integration for the interaction between the Storm core's Supervisor daemon and the MesosSupervisor that implements the ISupervisor interface. _(Backport of #2468 to `1.1.x-branch`.)_ You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2690__1.1.x-branch Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2471.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2471 commit a4215f37a41030b96ce57338ff23aede12131ccd Author: Erik Weathers <eri...@gmail.com> Date: 2017-12-19T21:04:09Z [STORM-2690] resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible This commit fixes the storm-mesos integration for the interaction between the Storm core's Supervisor daemon and the MesosSupervisor that implements the ISupervisor interface. ---
[GitHub] storm pull request #2472: [STORM-2690] resurrect invocation of ISupervisor.a...
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2472 [STORM-2690] resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible This commit fixes the storm-mesos integration for the interaction between the Storm core's Supervisor daemon and the MesosSupervisor that implements the ISupervisor interface. _(Backport of #2468 to `1.x-branch`.)_ You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2690__1.x-branch Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2472.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2472 commit a884c6f2e7df01edc4fcaa7d1b392f09f4ff3c2a Author: Erik Weathers <eri...@gmail.com> Date: 2017-12-19T21:04:09Z [STORM-2690] resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible This commit fixes the storm-mesos integration for the interaction between the Storm core's Supervisor daemon and the MesosSupervisor that implements the ISupervisor interface. ---
[GitHub] storm pull request #2470: [STORM-2690] resurrect invocation of ISupervisor.a...
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2470 [STORM-2690] resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible This commit fixes the storm-mesos integration for the interaction between the Storm core's Supervisor daemon and the MesosSupervisor that implements the ISupervisor interface. _(Backport of #2468 to `1.0.x-branch`.)_ You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2690__1.0.x-branch Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2470.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2470 commit a21dcf6c16f794e1ac1287b9fcd3b4e8a9418824 Author: Erik Weathers <eri...@gmail.com> Date: 2017-12-19T21:04:09Z [STORM-2690] resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible This commit fixes the storm-mesos integration for the interaction between the Storm core's Supervisor daemon and the MesosSupervisor that implements the ISupervisor interface. ---
[GitHub] storm issue #2468: [STORM-2690] resurrect invocation of ISupervisor.assigned...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2468 @ptgoetz : yep! I've been testing on `1.0.x-branch`. I can send PRs for all the active main branches (`1.0.x-branch`, `1.1.x-branch`, `1.x-branch`). ---
[GitHub] storm issue #2468: [STORM-2690] resurrect invocation of ISupervisor.assigned...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2468 @revans2 : can you please review this when you have a chance? Notably we mostly need this back in the [1.0.x-branch](https://github.com/apache/storm/tree/1.0.x-branch), since (as [we discussed](https://github.com/mesos/storm/issues/222#issuecomment-352514556)) storm 1.1+ is fundamentally incompatible with storm-mesos at this point. ---
[GitHub] storm pull request #2468: [STORM-2690] resurrect invocation of ISupervisor.a...
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2468 [STORM-2690] resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible This commit fixes the storm-mesos integration for the interaction between the Storm core's Supervisor daemon and the MesosSupervisor that implements the ISupervisor interface. You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2690 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2468.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2468 commit 5e3c966b23bc3b6d924a52ea62a32b194978af24 Author: Erik Weathers <eri...@gmail.com> Date: 2017-12-19T04:41:35Z [STORM-2690] resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible This commit fixes the storm-mesos integration for the interaction between the Storm core's Supervisor daemon and the MesosSupervisor that implements the ISupervisor interface. ---
[GitHub] storm issue #2203: STORM-2153: New Metrics Reporting API
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2203 @ptgoetz: thanks for your work on this. But may I please request that you add some description about what you've actually done? There is a lot of discussion in STORM-2153, and a ton of different topics are covered in that ticket, so it would be greatly beneficial if you could put some prose about what you've implemented. Thanks! --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] storm issue #2188: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2188 @HeartSaVioR : please review at your convenience. ---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @HeartSaVioR : FYI, here's the PR for the backport: * https://github.com/apache/storm/pull/2188 ---
[GitHub] storm pull request #2188: STORM-2506: Print mapping between Task ID and Kafk...
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2188 STORM-2506: Print mapping between Task ID and Kafka Partitions You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2506--1.x-branch Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2188.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2188 commit 282840028f3285321de54077e5690805db39b5d2 Author: Srishty Agrawal <sagra...@groupon.com> Date: 2017-03-29T19:35:57Z STORM-2506: Print mapping between Task ID and Kafka Partitions ---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @HeartSaVioR : FYI, I resolved the conflicts in my local repo, I'm gonna build and then send a new PR, will link from here. ---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @HeartSaVioR : I didn't have time tonight. I'll try to get to it tomorrow. ---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @HeartSaVioR : I'll try to take a stab at it later tonight. I'll let you know if I find time. ---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @HeartSaVioR : thanks a lot! @srishtyagrawal is on leave for a month. Are there any 1.x releases planned before late July? ---
[GitHub] storm issue #2117: STORM-2515: Fix most checkstyle violations in storm-kafka...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2117 @srdo : thanks for your perspective on this. I wonder if we could do this:

* we adhere to the no-uppercase-acronyms rule we are debating now, thus creating some breaking changes
  * as stated previously, we make an effort to backport these changes into storm-1.0.x and/or storm-1.x, keeping both the new name and the old name for the methods, and then we mark the previous methods as deprecated.
* maybe we can add the feature to `storm_checkstyle.xml` that *allows* checkstyle-suppression annotations? Or maybe we just defer that for now.

---
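[Editor's note] The backport-and-deprecate approach described above could look roughly like this sketch. The `pollTimeoutMS`/`pollTimeoutMs` names are hypothetical illustrations, not actual storm-kafka-client methods:

```java
public class Main {
    static class SpoutConfigExample {
        // New, style-compliant name (no consecutive capital letters).
        int pollTimeoutMs() {
            return 200;
        }

        // Old name kept in the 1.x branches so existing topologies keep
        // compiling; it simply delegates to the renamed method and is
        // marked deprecated to signal removal in 2.x.
        @Deprecated
        int pollTimeoutMS() {
            return pollTimeoutMs();
        }
    }

    public static void main(String[] args) {
        // During the deprecation window both spellings return the same value.
        System.out.println(new SpoutConfigExample().pollTimeoutMS());
    }
}
```

Callers can then migrate to the new name at their own pace before upgrading past the branch that still carries the deprecated alias.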
[GitHub] storm issue #2117: STORM-2515: Fix most checkstyle violations in storm-kafka...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2117 @srdo : I might be in the minority (though it's hard to tell, since as you noted the email thread didn't get much traction), but I don't think there's much advantage in changing existing ***external*** APIs just for adherence with this specific coding style standard. I should note that we're not actually using `storm-kafka-client` in our users' topologies yet because we haven't been able to upgrade to Storm 1.0+ yet (partially because of backwards-incompatible changes in code and behavior). In my view, breaking of backwards compatibility should be done only when actually necessary; e.g., changing package from `backtype` to `org.apache` was a good reason, since we needed to make the project properly namespaced. Whereas coding style adherence is not a very convincing reason. Maybe someone with a view on the number & scale of breaking changes in Storm 2.0 as compared to 1.0 could speak up? I'm fairly ignorant about the "gestalt" of what is happening in Storm 2.0 and so don't know the answer to that question. i.e., if everyone is going to have to make a bunch of changes *anyways* then it's probably "no big deal" to slap in this method name casing change. But if 2.0 isn't highly divergent as compared to 1.0 for topology authors, then I'd argue this isn't worth it. ---
[GitHub] storm issue #2117: STORM-2515: Fix most checkstyle violations in storm-kafka...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2117 @srdo : my opinion runs contrary to others it seems. For every breaking API change my team & company incur a bunch of coordination work. We have many independent internal engineering teams that are entirely responsible for their own topology code, so if there is a breaking API change we need to work with each of them to get them to modify their code appropriately. Rather than breaking an API to adhere to a belatedly-applied code standard, we could take the approach of allowing explicit exemptions from the uppercase-in-identifier standard via annotations:

* https://stackoverflow.com/a/22556386

This would require another modification to the base checkstyle.xml file, since Google's defaults don't allow this explicit one-off suppression via annotations. If we don't have the stomach for using this suppression mechanism, then we should at the minimum backport the changed API identifier as an available method in every active storm version branch. That will help minimize upgrade pains since people can make the changes when upgrading to 1.1.x, and not have to make this particular change when upgrading to 2.x. ---
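[Editor's note] For reference, the annotation-based suppression mechanism referenced above requires wiring up two checkstyle modules. A minimal sketch (the module names are real checkstyle modules; the surrounding layout is assumed to match storm_checkstyle.xml's structure):

```xml
<!-- SuppressWarningsFilter at the Checker level consumes suppressions
     recorded by SuppressWarningsHolder inside the TreeWalker. -->
<module name="Checker">
    <module name="SuppressWarningsFilter"/>
    <module name="TreeWalker">
        <module name="SuppressWarningsHolder"/>
        <module name="AbbreviationAsWordInName"/>
    </module>
</module>
```

With that in place, a one-off exemption on an existing method is written as `@SuppressWarnings("checkstyle:abbreviationaswordinname")` (checkstyle matches the check name case-insensitively).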
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 [Green ✔️](https://travis-ci.org/apache/storm/builds/233734011?utm_source=github_status_medium=notification), woot! ---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @srishtyagrawal : thanks! So there's still a [test failure](https://travis-ci.org/apache/storm/jobs/233447284):

```
Tests run: 37, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2,126.119 sec <<< FAILURE! - in TestSuite
testTumbleCount(org.apache.storm.st.tests.window.TumblingWindowTest)  Time elapsed: 222.363 sec  <<< FAILURE!
java.lang.AssertionError: Expecting min 5 bolt emits, found: 4
```

I wonder if this is a transient issue, or if it's caused by your change of the debug log line for emitting tuples? ---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 Technically, and I think canonically, there's no problem with a method and a variable having the exact same name. What's bad in the existing code is that `taskId()` means something *entirely* different. ---
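[Editor's note] A minimal sketch of that point: Java keeps fields and methods in separate namespaces, so a field and a method sharing a name is legal and unambiguous. The names below are illustrative, not the actual KafkaUtils code:

```java
public class Main {
    private final int taskId;  // the actual task ID, stored per instance

    Main(int taskId) {
        this.taskId = taskId;
    }

    // Same name as the field: legal, because the parentheses make every use
    // unambiguous. The real complaint in the PR is different -- the existing
    // taskId() method returns a log prefix, not the task ID.
    int taskId() {
        return taskId;
    }

    public static void main(String[] args) {
        System.out.println(new Main(7).taskId());
    }
}
```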
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @srdo : ah, I forgot that `final` was different from `static` + `final`, and that `static` + `final` is *really* what a constant should be. So, in that list that I [posted above](https://github.com/apache/storm/pull/2109#issuecomment-302000708) of the [identifiers that are violating the abbreviation standard](https://gist.github.com/erikdw/775de855745c4926ebe131651c04cde8), the following ones are all probably improperly declared as just `final` instead of `static` + `final`:

```
% (find . -name 'checkstyle-result.xml' -exec cat {} \;) | grep 'consecutive capital letters' | cut -d';' -f2 | cut -d'&' -f1 | sort | uniq -c | sort | grep -v '[a-z]'
   1 ATTEMPTS_INTERVAL_TIME
   1 BLOBSTORE_MAX_KEY_SEQUENCE_SUBTREE
   1 CHANNEL_ALIVE_INTERVAL_MS
   1 CLI
   1 COORD_STREAM
   1 DRPC
   1 INITIAL_SEQUENCE_NUMBER
   1 INT_CAPACITY
   1 MAX_NUM
   1 MAX_RETRY_ATTEMPTS
   1 MAX_ROUNDS
   1 ONE_HUNDRED
   1 OR
   1 PQ_SIZE
   1 PREPARE_ID
   1 SCM
   1 SPOUT_ID
   1 TOPOLOGY_WORKER_DEFAULT_MEMORY_ALLOCATION
   2 BLOBSTORE_SUBTREE
   2 STORM_CODE_SUFFIX
   2 STORM_CONF_SUFFIX
   2 STORM_JAR_SUFFIX
   7 LOG
```

Those seem easy enough to fix! So I withdraw my objection to `ignoreFinal` being `true`, thanks for pointing out my mistake. ---
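[Editor's note] The distinction being drawn here, as a sketch: only `static final` fields are class-level constants, which is what the UPPER_SNAKE_CASE convention (and checkstyle's constant handling) applies to; a bare `final` field is fixed per instance but is still an ordinary field. The field names are illustrative:

```java
public class Main {
    // A true constant: one value, shared by the whole class.
    static final int MAX_RETRY_ATTEMPTS = 5;

    // Merely final: assigned once per instance in the constructor, so it is
    // an ordinary field, not a constant -- camelCase naming applies.
    final int retryBudget;

    Main(int retryBudget) {
        this.retryBudget = retryBudget;
    }

    public static void main(String[] args) {
        System.out.println(MAX_RETRY_ATTEMPTS + new Main(3).retryBudget);
    }
}
```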
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @revans2 : agree that we should be careful in making changes away from the base "google_checks.xml", and I did already [argue above](https://github.com/apache/storm/pull/2109#discussion_r116677826) that we should be using `taskId` for the variable (that was implied when I said we should rename that *really* badly named `taskId()` method to `taskPrefix()`). So that should unblock Srishty for now. But let me make a few points:

* The "standard" we are using here is just the *Google* standard; it's not some industry-wide thing that everyone strictly follows. I'm not sure whether we should be trying to follow it strictly, or just using it as a jumping-off point for our own standard.
* I'm pretty sure most would agree that `ignoreFinal` *should* be `false` (unlike the google default), because otherwise your constants won't be allowed to be named `FOO_BAR`; they'd be named `Foo_Bar` or `FooBar`, neither of which stands out as being a constant. So that's at least one more deviation I strongly believe we should be making. And I'm sure there are more; the Google "standard" seems to be a bit wacky IMHO.
* If your statement about "documentation" and "HBase" (I'm not familiar with that issue) is about how we're obscuring what we've changed from the Google defaults, I do have sympathy for that view. That's also because it's going to be harder to upgrade our version of checkstyle later, since we'll need our changes to be applied to a newer google_checks.xml file.
Though I will note that it's quite easy to craft a command to show the changes we've made so far:

```
% diff <(curl -s https://raw.githubusercontent.com/checkstyle/checkstyle/checkstyle-7.7/src/main/resources/google_checks.xml) <(curl -s https://raw.githubusercontent.com/apache/storm/master/storm-buildtools/storm_checkstyle.xml)
1a2,19
>
>
6a25,36
> The original file came from here:
> https://raw.githubusercontent.com/checkstyle/checkstyle/checkstyle-7.7/src/main/resources/google_checks.xml
> It has been slightly modified for use in Apache Storm, as follows:
> * 4 space indents instead of 2
> * line-length limit is 140 instead of 100
> Once checkstyle has the ability to override selected configuration elements from within the Maven
> pom.xml file, then we can remove this file in favor of overriding the provided google_checks.xml file.
> See this issue to track that functionality:
> https://github.com/checkstyle/checkstyle/issues/2873
> -->
>
55c85
<
---
>
158c188
<
---
>
160c190
<
---
>
163c193
<
---
>
```

---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 @srishtyagrawal : we didn't *choose* to set it to 1, we just inherited the default of the google style in [google_checks.xml](https://github.com/checkstyle/checkstyle/blob/checkstyle-7.7/src/main/resources/google_checks.xml#L167). I'm personally fine with 3; you can make that change too. I would note that the checkstyle check is flagging constants (`final` fields) too, per [my analysis of the checkstyle-result.xml files](https://gist.github.com/erikdw/775de855745c4926ebe131651c04cde8). That is a result of another deviation from the default, where `ignoreFinal` is [false for google_checks.xml](https://github.com/checkstyle/checkstyle/blob/checkstyle-7.7/src/main/resources/google_checks.xml#L166), instead of [true as is the default](http://checkstyle.sourceforge.net/config_naming.html#AbbreviationAsWordInName). We should *definitely* change that to `true`! We might want to do a comparison of all the defaults against what we've inherited from The GOOG. ---
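[Editor's note] Both settings under discussion are properties of the AbbreviationAsWordInName module. Restoring the checkstyle defaults would look roughly like this sketch (the property names are the real checkstyle ones; the chosen values reflect the suggestions in the comment above):

```xml
<module name="AbbreviationAsWordInName">
    <!-- checkstyle's own default is true; google_checks.xml overrides it to
         false, which makes the check flag `final` constants like FOO_BAR -->
    <property name="ignoreFinal" value="true"/>
    <!-- raise the allowed abbreviation length from google's 1 back toward
         checkstyle's default of 3, per the discussion above -->
    <property name="allowedAbbreviationLength" value="3"/>
</module>
```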
[GitHub] storm pull request #2109: STORM-2506: Print mapping between Task ID and Kafk...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2109#discussion_r116677826 --- Diff: external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java --- @@ -139,8 +139,8 @@ public void onPartitionsRevoked(Collection partitions) { @Override public void onPartitionsAssigned(Collection partitions) { -LOG.info("Partitions reassignment. [consumer-group={}, consumer={}, topic-partitions={}]", -kafkaSpoutConfig.getConsumerGroupId(), kafkaConsumer, partitions); +LOG.info("Partitions reassignment. [task-ID={}, consumer-group={}, consumer={}, topic-partitions={}]", --- End diff -- Thanks Srishty. Regarding `taskId()` being a function name as the justification for `taskID` being the variable name: in at least one case the name of the function is bad and should be changed. i.e., KafkaUtils.java's `taskId()` should be `taskPrefix()` or something else. It's *not* the "Task ID". ---
[GitHub] storm issue #2109: STORM-2506: Print mapping between Task ID and Kafka Parti...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2109 These failed build errors should go away once #2112 (my checkstyle-fixing PR) is merged and you rebase. ---
[GitHub] storm pull request #2109: STORM-2506: Print mapping between Task ID and Kafk...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2109#discussion_r116337786 --- Diff: external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java --- @@ -139,8 +139,8 @@ public void onPartitionsRevoked(Collection partitions) { @Override public void onPartitionsAssigned(Collection partitions) { -LOG.info("Partitions reassignment. [consumer-group={}, consumer={}, topic-partitions={}]", -kafkaSpoutConfig.getConsumerGroupId(), kafkaConsumer, partitions); +LOG.info("Partitions reassignment. [task-ID={}, consumer-group={}, consumer={}, topic-partitions={}]", --- End diff -- Minor thing, just pointing it out, though it's probably fine this way: the "task ID" is referred to in 3 different styles in these log lines through this PR:

* `taskId`
* `Task-ID`
* `task-ID`

I think each is consistent within their respective log lines. Just wondering if there's any value to it being consistent across them. Also wonder if there's any existing convention in other log lines in the code. ---
[GitHub] storm pull request #2109: STORM-2506: Print mapping between Task ID and Kafk...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2109#discussion_r116337306 --- Diff: external/storm-kafka/src/jvm/org/apache/storm/kafka/ZkCoordinator.java --- @@ -75,9 +79,9 @@ private static DynamicBrokersReader buildReader(Map stormConf, SpoutConfig spout @Override public void refresh() { try { -LOG.info(taskId(_taskIndex, _totalTasks) + "Refreshing partition manager connections"); +LOG.info(taskId(_taskIndex, _totalTasks, _taskID) + " Refreshing partition manager connections"); --- End diff -- thanks for noticing and fixing these spacing issues in the log lines! ---
[GitHub] storm issue #2112: [STORM-2510] update checkstyle configuration to lower vio...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2112 @revans2 : ah, thanks for pointing out the need for a license. I copied how google-cloud-intellij is doing it: * https://github.com/GoogleCloudPlatform/google-cloud-intellij/blob/ba8c85810f65443d9ff411ad30448b377afa6012/google_checks.xml I also increased the line limit to 140 and adjusted the maxAllowedViolations accordingly. ---
[GitHub] storm issue #2112: [STORM-2510] update checkstyle configuration to lower vio...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2112 @vinodkc & @revans2 & @hmcc & @srishtyagrawal : please 👍 when you have a chance. This is a follow-up to the [discussions](https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22096038) in PR #2083, and also an offline email thread about the changes in PR #2093. ---
[GitHub] storm pull request #2112: [STORM-2510] update checkstyle configuration to lo...
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2112 [STORM-2510] update checkstyle configuration to lower violations

1. Update from checkstyle-6.11.2 to checkstyle-7.7.
2. Tweak google_checks.xml from checkstyle-7.7 to have the following changes that are more in line with the way the storm code is written:
   * 4 space indents instead of 2
   * line-length limit is 120 instead of 100
3. Exclude the generated thrift code in storm-client from being checked.
4. Update all maxAllowedViolations to be in sync with the current number of violations, through use of the `update-all-pom-violations.bash` script that is attached to STORM-2510.

NOTE: this also fixes STORM-2507. You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2510 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2112.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2112

commit 88cf6f7b418a6edb9ea32b66382261dd9e2570dd Author: Erik Weathers <eri...@gmail.com> Date: 2017-05-12T09:25:21Z [STORM-2510] commit original unmodified google_checks.xml from checkstyle-7.7 Just ran this command:

```
% curl -o storm-buildtools/storm_checkstyle.xml https://raw.githubusercontent.com/checkstyle/checkstyle/checkstyle-7.7/src/main/resources/google_checks.xml
```

commit 29c761908ec8305d36535c1d2e3d1c0c3d6c95d1 Author: Erik Weathers <eri...@gmail.com> Date: 2017-05-12T09:27:10Z [STORM-2510] update checkstyle configuration to lower violations

1. Update from checkstyle-6.11.2 to checkstyle-7.7.
2. Tweak google_checks.xml from checkstyle-7.7 to have the following changes that are more in line with the way the storm code is written:
   * 4 space indents instead of 2
   * line-length limit is 120 instead of 100
3. Exclude the generated thrift code in storm-client from being checked.
4. Update all maxAllowedViolations to be in sync with the current number of violations, through use of the `update-all-pom-violations.bash` script that is attached to STORM-2510.

NOTE: this also fixes STORM-2507. ---
[GitHub] storm pull request #:
Github user erikdw commented on the pull request: https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22100450 In storm-client/test/jvm/org/apache/storm/utils/ConfigUtilsTest.java on line 66: @vinodkc : yup, that's what I did for determining these violation values. I'll file a ticket and send a PR tomorrow after I figure out the exclusion syntax for the checkstyle maven plugin. ---
[GitHub] storm pull request #:
Github user erikdw commented on the pull request: https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22100422 In storm-client/test/jvm/org/apache/storm/utils/ConfigUtilsTest.java on line 66: I'm about to fall asleep, so I don't have time to post all the info. I'm gonna file a JIRA ticket related to tweaking checkstyle and then capture the stuff I've been doing in that. The long and short of it is that I believe we'll have far fewer failures overall if we change to 4-space indents, *assuming* we exclude the generated thrift code, which is 2-space indented. The storm-sql code is also 2-space indented. But most other code is 4-space indented. Ignoring storm-client and storm-sql, in all the rest of the code there are "only" **9001** violations if we use 4-space indent, versus **41818** if we go to 2-space indent. ---
[GitHub] storm pull request #:
Github user erikdw commented on the pull request: https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22098737 In storm-client/test/jvm/org/apache/storm/utils/ConfigUtilsTest.java on line 66: @vinodkc : whoa! I'm surprised that there are vendor-specific things built into the plugin. I should have googled around and seen if this was an existing thing (I'll use the excuse that the key "configLocation" is misleading). Thanks for explaining. BTW, that naming they used (`_checks.xml`) is not ideal if these only apply for checkstyle. Do you happen to know if it's possible to override specific rules? Ah, I searched, and it's not yet: * https://github.com/checkstyle/checkstyle/issues/2873 I'm running an analysis of the checkstyle-result.xml files. I'll report back with the situation regarding violations of the default 2-space indent level, as compared to what it would be if we changed to 4-space. I'm also going to analyze the line-length violations. ---
[GitHub] storm pull request #:
Github user erikdw commented on the pull request: https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22096706 In storm-client/test/jvm/org/apache/storm/utils/ConfigUtilsTest.java on line 66: (BTW, @hmcc: sorry to hijack your review here -- just saw it come through and was like: errp, let's figure out whether we really want these kinds of "fixes"!) ---
[GitHub] storm pull request #:
Github user erikdw commented on the pull request: https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22096665 In storm-client/test/jvm/org/apache/storm/utils/ConfigUtilsTest.java on line 66: @vinodkc (got the right vinod this time!)... so earlier I said I couldn't find the file. Upon more closely reading the change I noticed the [reference to "google_checks.xml"](https://github.com/apache/storm/blob/9755ff547de3247fe4aa1b60a778983145f43f76/pom.xml#L1100). I'm *guessing* that maybe you forgot to add it into the commit, so it's present in your personal repo, but not in the master? https://github.com/vinodkc/storm-1/blob/0db616045007a02d55f0eec54296587c8b6ed256/pom.xml#L1098 ---
[GitHub] storm pull request #:
Github user erikdw commented on the pull request: https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22096565 In storm-client/test/jvm/org/apache/storm/utils/ConfigUtilsTest.java on line 66: @vinodc : dang, I'm sorry for incorrectly addressing you!! :) Thanks for redirecting to @vinodkc. ---
[GitHub] storm pull request #:
Github user erikdw commented on the pull request: https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22096048 In storm-client/test/jvm/org/apache/storm/utils/ConfigUtilsTest.java on line 66: Per my comments in the email and the [other PR related to fixing these](https://github.com/apache/storm/pull/2105), changing everything from 4-space indent to 2-space indent is going to obscure authorship for little value, and aesthetically I personally prefer 4-space indents in Java code. If I figure out how to make checkstyle be OK with 4-space indents, can we do that instead of having to adapt all of the code? (I haven't done any analysis of how the code is indented in general, so I might be making a bad assumption that it's generally already 4-space indented.) ---
[GitHub] storm pull request #:
Github user erikdw commented on the pull request: https://github.com/apache/storm/commit/63567ae4e9fe573467db41e13696cbc50176b27a#commitcomment-22096038 In storm-client/test/jvm/org/apache/storm/utils/ConfigUtilsTest.java on line 66: @vinodc & @revans2 : was it intentional to choose 2-space indents, or was that just some default? Also, I wonder what the char line-length limits are with checkstyle. The [original PR #2093](https://github.com/apache/storm/pull/2093/files) doesn't seem to have any config file choosing these values; it seems to be inherited / implicit somehow. ---
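For reference, both knobs asked about above are ordinary Checkstyle modules: google_checks.xml sets 2-space indentation and a 100-char line limit, and a project-local copy of the config can override them. A hedged sketch follows; the module and property names are real Checkstyle modules, but the chosen values are illustrative, and module placement varies by Checkstyle version (in versions of this era `LineLength` lives under `TreeWalker`):

```xml
<!-- Illustrative override of the google_checks.xml defaults:
     4-space indents and an explicit line-length limit. -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="LineLength">
      <property name="max" value="120"/>
    </module>
    <module name="Indentation">
      <property name="basicOffset" value="4"/>  <!-- google_checks default: 2 -->
      <property name="caseIndent" value="4"/>
      <property name="arrayInitIndent" value="4"/>
      <property name="lineWrappingIndentation" value="8"/>
      <property name="throwsIndent" value="8"/>
    </module>
  </module>
</module>
```

Pointing the maven-checkstyle-plugin `configLocation` at such a local file (instead of the bundled google_checks.xml) would let the build accept 4-space-indented code without reformatting it.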
[GitHub] storm issue #2106: [STORM-2507] Temp Fix-Master branch build failure due to ...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2106 👍 ---
[GitHub] storm pull request #2105: [STORM-2507] Master branch build failure due to ad...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2105#discussion_r115647872 --- Diff: storm-webapp/src/main/java/org/apache/storm/daemon/drpc/DRPCServer.java --- @@ -45,170 +46,180 @@ import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import com.codahale.metrics.Meter; -import com.google.common.annotations.VisibleForTesting; - public class DRPCServer implements AutoCloseable { -private static final Logger LOG = LoggerFactory.getLogger(DRPCServer.class); -private final static Meter meterShutdownCalls = StormMetricsRegistry.registerMeter("drpc:num-shutdown-calls"); - -//TODO in the future this might be better in a common webapp location -public static void addRequestContextFilter(ServletContextHandler context, String configName, Map<String, Object> conf) { -IHttpCredentialsPlugin auth = AuthUtils.GetHttpCredentialsPlugin(conf, (String)conf.get(configName)); -ReqContextFilter filter = new ReqContextFilter(auth); -context.addFilter(new FilterHolder(filter), "/*", FilterMapping.ALL); -} - -private static ThriftServer mkHandlerServer(final DistributedRPC.Iface service, Integer port, Map<String, Object> conf) { -ThriftServer ret = null; -if (port != null && port >= 0) { -ret = new ThriftServer(conf, new DistributedRPC.Processor<>(service), -ThriftConnectionType.DRPC); -} -return ret; -} -private static ThriftServer mkInvokeServer(final DistributedRPCInvocations.Iface service, int port, Map<String, Object> conf) { -return new ThriftServer(conf, new DistributedRPCInvocations.Processor<>(service), -ThriftConnectionType.DRPC_INVOCATIONS); -} - -private static Server mkHttpServer(Map<String, Object> conf, DRPC drpc) { -Integer drpcHttpPort = (Integer) conf.get(DaemonConfig.DRPC_HTTP_PORT); -Server ret = null; -if (drpcHttpPort != null && drpcHttpPort >= 0) { -LOG.info("Starting RPC HTTP servers..."); -String filterClass = (String) (conf.get(DaemonConfig.DRPC_HTTP_FILTER)); -@SuppressWarnings("unchecked") -Map<String, 
String> filterParams = (Map<String, String>) (conf.get(DaemonConfig.DRPC_HTTP_FILTER_PARAMS)); -FilterConfiguration filterConfiguration = new FilterConfiguration(filterClass, filterParams); -final List filterConfigurations = Arrays.asList(filterConfiguration); -final Integer httpsPort = ObjectReader.getInt(conf.get(DaemonConfig.DRPC_HTTPS_PORT), 0); -final String httpsKsPath = (String) (conf.get(DaemonConfig.DRPC_HTTPS_KEYSTORE_PATH)); -final String httpsKsPassword = (String) (conf.get(DaemonConfig.DRPC_HTTPS_KEYSTORE_PASSWORD)); -final String httpsKsType = (String) (conf.get(DaemonConfig.DRPC_HTTPS_KEYSTORE_TYPE)); -final String httpsKeyPassword = (String) (conf.get(DaemonConfig.DRPC_HTTPS_KEY_PASSWORD)); -final String httpsTsPath = (String) (conf.get(DaemonConfig.DRPC_HTTPS_TRUSTSTORE_PATH)); -final String httpsTsPassword = (String) (conf.get(DaemonConfig.DRPC_HTTPS_TRUSTSTORE_PASSWORD)); -final String httpsTsType = (String) (conf.get(DaemonConfig.DRPC_HTTPS_TRUSTSTORE_TYPE)); -final Boolean httpsWantClientAuth = (Boolean) (conf.get(DaemonConfig.DRPC_HTTPS_WANT_CLIENT_AUTH)); -final Boolean httpsNeedClientAuth = (Boolean) (conf.get(DaemonConfig.DRPC_HTTPS_NEED_CLIENT_AUTH)); - -//TODO a better way to do this would be great. -DRPCApplication.setup(drpc); -ret = UIHelpers.jettyCreateServer(drpcHttpPort, null, httpsPort); - -UIHelpers.configSsl(ret, httpsPort, httpsKsPath, httpsKsPassword, httpsKsType, httpsKeyPassword, httpsTsPath, httpsTsPassword, httpsTsType, -httpsNeedClientAuth, httpsWantClientAuth); - -ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS); -context.setContextPath("/"); -ret.setHandler(context); - -ServletHolder jerseyServlet = context.addServlet(ServletContainer.class, "/*"); -jerseyServlet.setInitOrder(1); -jerseyServlet.setInitParameter("javax.ws.rs.Application", DRPCApplication.class.getName()); - -UIHelpers.configFilters(context, filterConf
[GitHub] storm pull request #2105: [STORM-2507] Master branch build failure due to ad...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2105#discussion_r115638968 --- Diff: storm-webapp/src/main/java/org/apache/storm/daemon/drpc/DRPCServer.java --- @@ -45,170 +46,180 @@ import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import com.codahale.metrics.Meter; -import com.google.common.annotations.VisibleForTesting; - public class DRPCServer implements AutoCloseable { -private static final Logger LOG = LoggerFactory.getLogger(DRPCServer.class); -private final static Meter meterShutdownCalls = StormMetricsRegistry.registerMeter("drpc:num-shutdown-calls"); - -//TODO in the future this might be better in a common webapp location -public static void addRequestContextFilter(ServletContextHandler context, String configName, Map<String, Object> conf) { -IHttpCredentialsPlugin auth = AuthUtils.GetHttpCredentialsPlugin(conf, (String)conf.get(configName)); -ReqContextFilter filter = new ReqContextFilter(auth); -context.addFilter(new FilterHolder(filter), "/*", FilterMapping.ALL); -} - -private static ThriftServer mkHandlerServer(final DistributedRPC.Iface service, Integer port, Map<String, Object> conf) { -ThriftServer ret = null; -if (port != null && port >= 0) { -ret = new ThriftServer(conf, new DistributedRPC.Processor<>(service), -ThriftConnectionType.DRPC); -} -return ret; -} -private static ThriftServer mkInvokeServer(final DistributedRPCInvocations.Iface service, int port, Map<String, Object> conf) { -return new ThriftServer(conf, new DistributedRPCInvocations.Processor<>(service), -ThriftConnectionType.DRPC_INVOCATIONS); -} - -private static Server mkHttpServer(Map<String, Object> conf, DRPC drpc) { -Integer drpcHttpPort = (Integer) conf.get(DaemonConfig.DRPC_HTTP_PORT); -Server ret = null; -if (drpcHttpPort != null && drpcHttpPort >= 0) { -LOG.info("Starting RPC HTTP servers..."); -String filterClass = (String) (conf.get(DaemonConfig.DRPC_HTTP_FILTER)); -@SuppressWarnings("unchecked") -Map<String, 
String> filterParams = (Map<String, String>) (conf.get(DaemonConfig.DRPC_HTTP_FILTER_PARAMS)); -FilterConfiguration filterConfiguration = new FilterConfiguration(filterClass, filterParams); -final List filterConfigurations = Arrays.asList(filterConfiguration); -final Integer httpsPort = ObjectReader.getInt(conf.get(DaemonConfig.DRPC_HTTPS_PORT), 0); -final String httpsKsPath = (String) (conf.get(DaemonConfig.DRPC_HTTPS_KEYSTORE_PATH)); -final String httpsKsPassword = (String) (conf.get(DaemonConfig.DRPC_HTTPS_KEYSTORE_PASSWORD)); -final String httpsKsType = (String) (conf.get(DaemonConfig.DRPC_HTTPS_KEYSTORE_TYPE)); -final String httpsKeyPassword = (String) (conf.get(DaemonConfig.DRPC_HTTPS_KEY_PASSWORD)); -final String httpsTsPath = (String) (conf.get(DaemonConfig.DRPC_HTTPS_TRUSTSTORE_PATH)); -final String httpsTsPassword = (String) (conf.get(DaemonConfig.DRPC_HTTPS_TRUSTSTORE_PASSWORD)); -final String httpsTsType = (String) (conf.get(DaemonConfig.DRPC_HTTPS_TRUSTSTORE_TYPE)); -final Boolean httpsWantClientAuth = (Boolean) (conf.get(DaemonConfig.DRPC_HTTPS_WANT_CLIENT_AUTH)); -final Boolean httpsNeedClientAuth = (Boolean) (conf.get(DaemonConfig.DRPC_HTTPS_NEED_CLIENT_AUTH)); - -//TODO a better way to do this would be great. -DRPCApplication.setup(drpc); -ret = UIHelpers.jettyCreateServer(drpcHttpPort, null, httpsPort); - -UIHelpers.configSsl(ret, httpsPort, httpsKsPath, httpsKsPassword, httpsKsType, httpsKeyPassword, httpsTsPath, httpsTsPassword, httpsTsType, -httpsNeedClientAuth, httpsWantClientAuth); - -ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS); -context.setContextPath("/"); -ret.setHandler(context); - -ServletHolder jerseyServlet = context.addServlet(ServletContainer.class, "/*"); -jerseyServlet.setInitOrder(1); -jerseyServlet.setInitParameter("javax.ws.rs.Application", DRPCApplication.class.getName()); - -UIHelpers.configFilters(context, filterConf
[GitHub] storm issue #2101: [STORM-2191] shorten classpaths by using wildcards
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2101 @revans2 : here's the backport to 1.x-branch. ---
[GitHub] storm pull request #2101: [STORM-2191] shorten classpaths by using wildcards
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2101 [STORM-2191] shorten classpaths by using wildcards Backport of PR #2094 to 1.x-branch. You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2191__1.x-branch Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2101.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2101 commit 9767255b2627119f14f930f6b39fe5d5e7fd663d Author: Erik Weathers <eri...@gmail.com> Date: 2017-04-28T07:50:18Z [STORM-2191] shorten classpaths by using wildcards This commit resolves cherry-pick conflicts for backporting change from storm master branch. Related commit in master: 2ceef0dce21895c4ef0f0b8eb5bc8248fad25e13 Instead of fully enumerating all JARs in the lib directories, we just use a Java classpath wildcard to allow the JVM to autodiscover all JARs in the classpath. This affects both the Worker and LogWriter processes, as well as Storm daemons such as the Nimbus, UI, Logviewer, and Supervisor. This change results in shorter commands, so that you can actually see the full content of the Worker command in `ps` output on Linux. Prior to this change Worker commands were easily longer than 4096 bytes, which is the default Linux kernel limit for commands being recorded into the process table. Longer commands get truncated, though they do get executed. 
An example of the change in Worker classpath length can be seen here: Before: ``` -cp STORM_DIR/lib-worker/asm-5.0.3.jar:STORM_DIR/lib-worker/carbonite-1.5.0.jar:STORM_DIR/lib-worker/chill-java-0.8.0.jar:STORM_DIR/lib-worker/clojure-1.7.0.jar:STORM_DIR/lib-worker/commons-codec-1.6.jar:STORM_DIR/lib-worker/commons-collections-3.2.2.jar:STORM_DIR/lib-worker/commons-io-2.5.jar:STORM_DIR/lib-worker/commons-lang-2.5.jar:STORM_DIR/lib-worker/commons-logging-1.1.3.jar:STORM_DIR/lib-worker/curator-client-2.12.0.jar:STORM_DIR/lib-worker/curator-framework-2.12.0.jar:STORM_DIR/lib-worker/disruptor-3.3.2.jar:STORM_DIR/lib-worker/guava-16.0.1.jar:STORM_DIR/lib-worker/httpclient-4.3.3.jar:STORM_DIR/lib-worker/httpcore-4.4.1.jar:STORM_DIR/lib-worker/jgrapht-core-0.9.0.jar:STORM_DIR/lib-worker/jline-0.9.94.jar:STORM_DIR/lib-worker/json-simple-1.1.jar:STORM_DIR/lib-worker/kryo-3.0.3.jar:STORM_DIR/lib-worker/kryo-shaded-3.0.3.jar:STORM_DIR/lib-worker/libthrift-0.9.3.jar:STORM_DIR/lib-worker/log4j-api-2.8.jar:STORM_DIR/lib-worker/log4j-core-2.8.jar:STORM_DIR/lib-worker/log4j-ove r-slf4j-1.6.6.jar:STORM_DIR/lib-worker/log4j-slf4j-impl-2.8.jar:STORM_DIR/lib-worker/minlog-1.3.0.jar:STORM_DIR/lib-worker/netty-3.9.0.Final.jar:STORM_DIR/lib-worker/objenesis-2.1.jar:STORM_DIR/lib-worker/reflectasm-1.10.1.jar:STORM_DIR/lib-worker/servlet-api-2.5.jar:STORM_DIR/lib-worker/slf4j-api-1.7.21.jar:STORM_DIR/lib-worker/snakeyaml-1.11.jar:STORM_DIR/lib-worker/storm-client-2.0.0-SNAPSHOT.jar:STORM_DIR/lib-worker/sysout-over-slf4j-1.0.2.jar:STORM_DIR/lib-worker/zookeeper-3.4.6.jar:STORM_DIR/conf:STORM_DIR/storm-local/supervisor/stormdist/foo-topology-1-1-1493359573/stormjar.jar ``` After: ``` -cp STORM_DIR/lib-worker/*:STORM_DIR/extlib/*:STORM_DIR/conf:STORM_DIR/storm-local/supervisor/stormdist/foo-topology-1-1-1493359573/stormjar.jar ``` This change also includes additional documentation about the use of classpaths in Storm and provides some guidance for using the various features for using external 
libraries. For more details on this problem and a discussion about this solution's merits, please see [STORM-2191](https://issues.apache.org/jira/browse/STORM-2191). commit b04011ee8c40b8971592e74e85c18778989bb932 Author: Erik Weathers <eri...@gmail.com> Date: 2017-05-04T21:57:56Z [STORM-2191] address revans2's suggestions, and fix typo (extra ']') in markdown in Setting-up-a-Storm-cluster.md commit b780df2312835021de4c29c673696cfac7dff767 Author: Erik Weathers <eri...@gmail.com> Date: 2017-05-04T22:01:12Z [STORM-2191] add link to the new Classpath-handling doc page from the index page ---
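The Before/After classpaths in the description can be demonstrated in miniature. The sketch below uses made-up jar names (not Storm's actual dependency list) just to show why the wildcard form stays short while shell-side enumeration grows with the jar count:

```shell
# Placeholder jars standing in for lib-worker/ contents (names are made up)
mkdir -p lib-worker
for i in 1 2 3 4 5; do : > "lib-worker/dep-$i.jar"; done

# Before: enumerate every jar into the -cp argument
before=$(printf '%s:' lib-worker/*.jar); before=${before%:}

# After: a single wildcard entry; quoting keeps the shell from expanding it,
# so the JVM (Java 6+) expands it into all jars in that directory at startup
after='lib-worker/*'

echo "before: ${#before} chars; after: ${#after} chars"
```

The classpath wildcard is expanded by the `java` launcher itself, so its length on the command line is constant no matter how many jars the directory holds.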
[GitHub] storm pull request #2094: [STORM-2191] shorten classpaths by using wildcards
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2094#discussion_r114898011 --- Diff: docs/Setting-up-a-Storm-cluster.md --- @@ -102,9 +102,9 @@ The time to allow any given healthcheck script to run before it is marked failed storm.health.check.timeout.ms: 5000 ``` -### Configure external libraries and environmental variables (optional) +### Configure external libraries and environment variables (optional) -If you need support from external libraries or custom plugins, you can place such jars into the extlib/ and extlib-daemon/ directories. Note that the extlib-daemon/ directory stores jars used only by daemons (Nimbus, Supervisor, DRPC, UI, Logviewer), e.g., HDFS and customized scheduling libraries. Accordingly, two environmental variables STORM_EXT_CLASSPATH and STORM_EXT_CLASSPATH_DAEMON can be configured by users for including the external classpath and daemon-only external classpath. +If you need support from external libraries or custom plugins, you can place such jars into the extlib/ and extlib-daemon/ directories. Note that the extlib-daemon/ directory stores jars used only by daemons (Nimbus, Supervisor, DRPC, UI, Logviewer), e.g., HDFS and customized scheduling libraries. Accordingly, two environment variables STORM_EXT_CLASSPATH and STORM_EXT_CLASSPATH_DAEMON can be configured by users for including the external classpath and daemon-only external classpath. See [Classpath handling](Classpath-handling.html)] for more details on using external libraries. --- End diff -- I also just added a reference from the index page. ---
[GitHub] storm pull request #2094: [STORM-2191] shorten classpaths by using wildcards
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2094#discussion_r114897599 --- Diff: docs/Setting-up-a-Storm-cluster.md --- @@ -102,9 +102,9 @@ The time to allow any given healthcheck script to run before it is marked failed storm.health.check.timeout.ms: 5000 ``` -### Configure external libraries and environmental variables (optional) +### Configure external libraries and environment variables (optional) -If you need support from external libraries or custom plugins, you can place such jars into the extlib/ and extlib-daemon/ directories. Note that the extlib-daemon/ directory stores jars used only by daemons (Nimbus, Supervisor, DRPC, UI, Logviewer), e.g., HDFS and customized scheduling libraries. Accordingly, two environmental variables STORM_EXT_CLASSPATH and STORM_EXT_CLASSPATH_DAEMON can be configured by users for including the external classpath and daemon-only external classpath. +If you need support from external libraries or custom plugins, you can place such jars into the extlib/ and extlib-daemon/ directories. Note that the extlib-daemon/ directory stores jars used only by daemons (Nimbus, Supervisor, DRPC, UI, Logviewer), e.g., HDFS and customized scheduling libraries. Accordingly, two environment variables STORM_EXT_CLASSPATH and STORM_EXT_CLASSPATH_DAEMON can be configured by users for including the external classpath and daemon-only external classpath. See [Classpath handling](Classpath-handling.html)] for more details on using external libraries. --- End diff -- Thanks for the suggestions @revans2 ! I've pushed a commit that adds references to the new page to those 2 pages you pointed at. I also fixed a typo in the markdown. ---
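The extlib / extlib-daemon / env-var scheme quoted in the diff can be summarized in a short sketch. The variable names come from the documentation above, but the /opt paths below are hypothetical examples, not Storm defaults:

```shell
# Hedged sketch of supplying external jars to Storm processes:
#   - jars in extlib/ are seen by workers and daemons
#   - jars in extlib-daemon/ are seen by daemons only (Nimbus, Supervisor, ...)
#   - these two env vars append further entries to those classpaths
export STORM_EXT_CLASSPATH='/opt/storm-extras/*'
export STORM_EXT_CLASSPATH_DAEMON='/opt/storm-daemon-extras/*'

echo "worker extras: $STORM_EXT_CLASSPATH"
echo "daemon extras: $STORM_EXT_CLASSPATH_DAEMON"
```

The values are quoted so the wildcard reaches the JVM unexpanded; `storm` launcher scripts read these variables when assembling the final `-cp` argument.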
[GitHub] storm issue #2083: STORM-2421: support lists of childopts in DaemonConfig.
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2083 @hmcc : I put a comment in the bug -- the problem isn't sufficiently specified IMHO. ---
[GitHub] storm pull request #2092: STORM-2493: update documents to reflect the change...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2092#discussion_r113879015 --- Diff: docs/storm-pmml.md --- @@ -0,0 +1,37 @@ +#Storm PMML Bolt --- End diff -- I'm anal like that. ;-) ---
[GitHub] storm pull request #2092: STORM-2493: update documents to reflect the change...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/2092#discussion_r113874334 --- Diff: docs/storm-pmml.md --- @@ -0,0 +1,37 @@ +#Storm PMML Bolt --- End diff -- nit: spacing is not consistent here with the other docs that you've added (and the existing ones). i.e., `#Foo` should be `# Foo`. ---
[GitHub] storm issue #2094: [STORM-2191] shorten classpaths by using wildcards
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/2094 @revans2 & @harshach & @hmcl : Here's my proposed change. I'm probably misinterpreting some of the usages or behaviors of all of these various external-library features, but I tried my best to figure them out after extensively reading the tickets and commits that added the features. Also, I'm fine with not renaming `get_jars_full` and `getFullJars`; I just thought the names were misleading after this change. ---
[GitHub] storm pull request #2094: [STORM-2191] shorten classpaths by using wildcards
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2094 [STORM-2191] shorten classpaths by using wildcards Instead of fully enumerating all JARs in the lib directories, we just use a Java classpath wildcard to allow the JVM to autodiscover all JARs in the classpath. This affects both the Worker and LogWriter processes, as well as Storm daemons such as the Nimbus, UI, Logviewer, and Supervisor. This change results in shorter commands, so that you can actually see the full content of the Worker command in `ps` output on Linux. Prior to this change Worker commands were easily longer than 4096 bytes, which is the default Linux kernel limit for commands being recorded into the process table. Longer commands get truncated, though they do get executed. An example of the change in Worker classpath length can be seen here: Before: ``` -cp STORM_DIR/lib-worker/asm-5.0.3.jar:STORM_DIR/lib-worker/carbonite-1.5.0.jar:STORM_DIR/lib-worker/chill-java-0.8.0.jar:STORM_DIR/lib-worker/clojure-1.7.0.jar:STORM_DIR/lib-worker/commons-codec-1.6.jar:STORM_DIR/lib-worker/commons-collections-3.2.2.jar:STORM_DIR/lib-worker/commons-io-2.5.jar:STORM_DIR/lib-worker/commons-lang-2.5.jar:STORM_DIR/lib-worker/commons-logging-1.1.3.jar:STORM_DIR/lib-worker/curator-client-2.12.0.jar:STORM_DIR/lib-worker/curator-framework-2.12.0.jar:STORM_DIR/lib-worker/disruptor-3.3.2.jar:STORM_DIR/lib-worker/guava-16.0.1.jar:STORM_DIR/lib-worker/httpclient-4.3.3.jar:STORM_DIR/lib-worker/httpcore-4.4.1.jar:STORM_DIR/lib-worker/jgrapht-core-0.9.0.jar:STORM_DIR/lib-worker/jline-0.9.94.jar:STORM_DIR/lib-worker/json-simple-1.1.jar:STORM_DIR/lib-worker/kryo-3.0.3.jar:STORM_DIR/lib-worker/kryo-shaded-3.0.3.jar:STORM_DIR/lib-worker/libthrift-0.9.3.jar:STORM_DIR/lib-worker/log4j-api-2.8.jar:STORM_DIR/lib-worker/log4j-core-2.8.jar:STORM_DIR/lib-worker/log4j-ove 
r-slf4j-1.6.6.jar:STORM_DIR/lib-worker/log4j-slf4j-impl-2.8.jar:STORM_DIR/lib-worker/minlog-1.3.0.jar:STORM_DIR/lib-worker/netty-3.9.0.Final.jar:STORM_DIR/lib-worker/objenesis-2.1.jar:STORM_DIR/lib-worker/reflectasm-1.10.1.jar:STORM_DIR/lib-worker/servlet-api-2.5.jar:STORM_DIR/lib-worker/slf4j-api-1.7.21.jar:STORM_DIR/lib-worker/snakeyaml-1.11.jar:STORM_DIR/lib-worker/storm-client-2.0.0-SNAPSHOT.jar:STORM_DIR/lib-worker/sysout-over-slf4j-1.0.2.jar:STORM_DIR/lib-worker/zookeeper-3.4.6.jar:STORM_DIR/conf:STORM_DIR/storm-local/supervisor/stormdist/foo-topology-1-1-1493359573/stormjar.jar ``` After: ``` -cp STORM_DIR/lib-worker/*:STORM_DIR/extlib/*:STORM_DIR/conf:STORM_DIR/storm-local/supervisor/stormdist/foo-topology-1-1-1493359573/stormjar.jar ``` This change also includes additional documentation about the use of classpaths in Storm and provides some guidance for using the various features for using external libraries. For more details on this problem and a discussion about this solution's merits, please see [STORM-2191](https://issues.apache.org/jira/browse/STORM-2191). You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2191 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2094.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2094 commit 2ceef0dce21895c4ef0f0b8eb5bc8248fad25e13 Author: Erik Weathers <eri...@gmail.com> Date: 2017-04-28T07:50:18Z [STORM-2191] shorten classpaths by using wildcards Instead of fully enumerating all JARs in the lib directories, we just use a Java classpath wildcard to allow the JVM to autodiscover all JARs in the classpath. This affects both the Worker and LogWriter processes, as well as Storm daemons such as the Nimbus, UI, Logviewer, and Supervisor. 
This change results in shorter commands, so that you can actually see the full content of the Worker command in `ps` output on Linux. Prior to this change Worker commands were easily longer than 4096 bytes, which is the default Linux kernel limit for commands being recorded into the process table. Longer commands get truncated, though they do get executed. An example of the change in Worker classpath length can be seen here: Before: ``` -cp STORM_DIR/lib-worker/asm-5.0.3.jar:STORM_DIR/lib-worker/carbonite-1.5.0.jar:STORM_DIR/lib-worker/chill-java-0.8.0.jar:STORM_DIR/lib-worker/clojure-1.7.0.jar:STORM_DIR/lib-worker/commons-codec-1.6.jar:STORM_DIR/lib-worker/commons-collections-3.2.2.jar:STORM_DIR/lib-worker/commons-io-2.5.jar:STORM_DIR/lib-worker/commons-lang-2.5.jar:STORM_DIR/lib-worker/commons-logging-1.1.3.jar:STORM_DIR/lib-worker/curator-client-2.12.0.jar:STORM_DIR/lib-worker/curator-framework-2.12.0.jar:STORM_DIR/lib-
[GitHub] storm pull request #2091: fix typo in storm-client pom file: kyro -> kryo
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2091 fix typo in storm-client pom file: kyro -> kryo @revans2 : typo I found while poking around and trying to understand the shaded kryo jar, as per [our discussion in STORM-2191](https://issues.apache.org/jira/browse/STORM-2191?focusedCommentId=15984124=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15984124). You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm kyro-typo-in-pom-comment Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2091.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2091 commit dc19b6dd2299b54295888acca997c63d962600b1 Author: Erik Weathers <eri...@gmail.com> Date: 2017-04-26T06:18:51Z fix typo in storm-client pom file: kyro -> kryo ---
[GitHub] storm pull request #2088: [STORM-2486] Prevent cd from printing target direc...
GitHub user erikdw opened a pull request: https://github.com/apache/storm/pull/2088 [STORM-2486] Prevent cd from printing target directory to avoid breaking classpath @harshach &/or @hmcl &/or @revans2 &/or @HeartSaVioR : can you please review this when you get a chance? I spoke to Hugo and Harsha about this last week. I put a bunch of details into [STORM-2486](https://issues.apache.org/jira/browse/STORM-2486). You can merge this pull request into a Git repository by running: $ git pull https://github.com/erikdw/storm STORM-2486 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2088.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2088 ---
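The failure mode behind this PR's title can be sketched. An assumption here: the usual way `cd` ends up printing the target directory is a non-empty CDPATH, which POSIX requires `cd` to report on stdout, polluting any classpath captured via command substitution (the directory names below are invented for the demo):

```shell
# Hedged sketch of the "cd prints its target" pitfall:
export CDPATH=/tmp
mkdir -p /tmp/storm-cdpath-demo
cd /   # ensure the target dir is reachable only via CDPATH

# Broken: with CDPATH in effect, `cd` echoes "/tmp/storm-cdpath-demo"
# and so does `pwd`, so $captured holds two lines instead of one
captured=$(cd storm-cdpath-demo && pwd)

# One fix: silence cd's stdout so only pwd's output is captured
safe=$(cd storm-cdpath-demo > /dev/null && pwd)

echo "captured: $captured"
echo "safe: $safe"
```

A launcher script that builds `STORM_BASE_DIR=$(cd "$(dirname "$0")/.." && pwd)`-style paths would see the extra line leak straight into the classpath; redirecting `cd`'s stdout (or unsetting CDPATH) avoids it.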
[GitHub] storm issue #1874: STORM-2286 Storm Rebalance command should support arbitra...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/1874 I'm unable to reconcile the statements about the ratio always being 1:1 with the content of your "For example" paragraph. Those examples sound like you'd have a different number of executors than tasks. Am I misreading them? In any case, I'm not a clojure expert, so you probably wanna get a core contributor (e.g., Bobby Evans, Jungtaek Kim, someone else) to look at this. However, another Q that comes to mind is: does this behavior also exist in the master branch at HEAD? i.e., the Java-based code. If so then you'll wanna do this change in Java too, right? ---
[GitHub] storm issue #1874: STORM-2286 Strom Rebalance command should support arbitra...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/1874 @danny0405 : given that I personally find no value in having this separate task vs. executor distinction that Storm supports (i.e., allowing for a different ratio than 1:1 for tasks to executors), I'm probably not the best person to review this. :) But it would help if you gave some specific examples, instead of only talking abstractly about the problem. Also just FYI, you have a typo in the headline for this PR as well as the storm ticket (Strom --> Storm). ---
[GitHub] storm issue #1131: STORM-822: Kafka Spout New Consumer API
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/1131 @jianbzhou : you can file a STORM JIRA ticket yourself actually -- you just need to [create an Apache JIRA account](https://issues.apache.org/jira/secure/Signup!default.jspa). ---
[GitHub] storm issue #1851: Kafka spout should consume from latest when ZK partition ...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/1851 @danny0405 : thanks for the adjustment! :) ---
[GitHub] storm issue #1131: STORM-822: Kafka Spout New Consumer API
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/1131 > Btw, I just send the latest code to you via email. @jianbzhou: can you please clarify the "you" in this statement? > we have communicated via email for a while and going forward let's talk in this thread so everyone is in same page. @jianbzhou : wouldn't it be more appropriate to open a new STORM- JIRA and then communicate via that JIRA? As opposed to private emails with diffs, or adding comments into a merged and done PR? ---
[GitHub] storm pull request #1851: Kafka spout should consume from latest when ZK par...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/1851#discussion_r94543681 --- Diff: external/storm-kafka/src/jvm/org/apache/storm/kafka/PartitionManager.java --- @@ -199,7 +199,12 @@ private void fill() { try { msgs = KafkaUtils.fetchMessages(_spoutConfig, _consumer, _partition, offset); } catch (TopicOffsetOutOfRangeException e) { -offset = KafkaUtils.getOffset(_consumer, _partition.topic, _partition.partition, kafka.api.OffsetRequest.EarliestTime()); +long partitionLatestOffset = KafkaUtils.getOffset(_consumer, _partition.topic, _partition.partition, kafka.api.OffsetRequest.LatestTime()); +if (partitionLatestOffset < offset) { +offset = partitionLatestOffset; +}else{ --- End diff -- nit: `}else{` --> `} else {` (i.e., please be consistent with other code for the spacing style) ---
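For readers following the diff above: the new logic clamps the fetch offset down to the partition's latest offset when the requested offset has run past the end. The sketch below applies the requested `} else {` spacing; note the `else` branch (falling back to the earliest offset) is my assumption, since the quoted diff is truncated at that point:

```java
// Sketch of the offset-out-of-range handling from the diff above, with the
// spacing nit applied. The else branch (fall back to the earliest available
// offset) is an ASSUMPTION -- the quoted diff is cut off before it.
public class OffsetClamp {
    static long nextOffset(long current, long latest, long earliest) {
        if (latest < current) {
            return latest;    // requested offset is past the end: clamp to latest
        } else {
            return earliest;  // assumed: offset fell off the retention window
        }
    }

    public static void main(String[] args) {
        System.out.println(nextOffset(100, 50, 10));  // 50
    }
}
```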
[GitHub] storm issue #1817: STORM-2237: Nimbus reports bad supervisor heartbeat - unk...
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/1817 @knusbaum : can you please put in some details about what is wrong, how it's fixed, etc.? ---
[GitHub] storm issue #1406: [STORM-433] [WIP] Executor queue backlog metric
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/1406 @abhishekagarwal87 & @HeartSaVioR : any update on this? Getting visibility into queue depth in storm is a major reason for us to even consider upgrading off of the storm-0.9.x branch. Do you need help? We can potentially dedicate time in our next quarter towards helping. ---
[GitHub] storm issue #913: Fix typo in get_jars_full
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/913 Pointer to the new PR referenced above: https://github.com/apache/storm/pull/924 ---
[GitHub] storm issue #1744: STORM-1276: line for line translation of nimbus to java
Github user erikdw commented on the issue: https://github.com/apache/storm/pull/1744 @revans2 : Since I didn't have any substantive blocking comments I'm ok with merging it. And it's been long enough that I think you're probably ok to merge away. Any hidden problems will hopefully be surfaced by subsequent testing by others when this is in the mainline. ---
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/1744#discussion_r90198668 --- Diff: storm-core/src/jvm/org/apache/storm/daemon/nimbus/Nimbus.java --- [Quoted diff context (Apache license header and import block of the new Nimbus.java) omitted; the review comment body was truncated in this archive.] ---
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/1744#discussion_r90193679 --- Diff: storm-core/src/jvm/org/apache/storm/daemon/nimbus/Nimbus.java --- [Quoted diff context (Apache license header and import block of the new Nimbus.java) omitted; the review comment body was truncated in this archive.] ---
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/1744#discussion_r90195674 --- Diff: storm-core/src/jvm/org/apache/storm/daemon/nimbus/Nimbus.java --- [Quoted diff context (Apache license header and import block of the new Nimbus.java) omitted; the review comment body was truncated in this archive.] ---
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/1744#discussion_r90198469 --- Diff: storm-core/src/jvm/org/apache/storm/daemon/nimbus/Nimbus.java --- [Quoted diff context (Apache license header and import block of the new Nimbus.java) omitted; the review comment body was truncated in this archive.] ---
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/1744#discussion_r90196514 --- Diff: storm-core/src/jvm/org/apache/storm/daemon/nimbus/Nimbus.java --- [Quoted diff context (Apache license header and import block of the new Nimbus.java) omitted; the review comment body was truncated in this archive.] ---
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request: https://github.com/apache/storm/pull/1744#discussion_r9024 --- Diff: storm-core/src/jvm/org/apache/storm/daemon/nimbus/Nimbus.java --- [Quoted diff context (Apache license header and import block of the new Nimbus.java) omitted; the review comment body was truncated in this archive.] ---
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request:

    https://github.com/apache/storm/pull/1744#discussion_r90192685

--- Diff: storm-core/src/jvm/org/apache/storm/daemon/nimbus/Nimbus.java ---
@@ -0,0 +1,3640 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.nimbus;
+
+import static org.apache.storm.metric.StormMetricsRegistry.registerMeter;
+import static org.apache.storm.utils.Utils.OR;

[diff excerpt continues with the new file's java.*, javax.*, and org.apache.storm.* import block; the review comment itself is cut off in this archive]
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request:

    https://github.com/apache/storm/pull/1744#discussion_r90191719

--- Diff: storm-core/src/jvm/org/apache/storm/daemon/nimbus/Nimbus.java ---
@@ -0,0 +1,3640 @@

[diff excerpt repeats the same Nimbus.java license header and import block quoted in the previous message; the review comment itself is cut off in this archive]
[GitHub] storm pull request #1744: STORM-1276: line for line translation of nimbus to...
Github user erikdw commented on a diff in the pull request:

    https://github.com/apache/storm/pull/1744#discussion_r89243737

--- Diff: storm-core/src/jvm/org/apache/storm/blobstore/BlobStore.java ---
@@ -304,6 +307,41 @@ public void readBlobTo(String key, OutputStream out, Subject who) throws IOExcep
 }

 /**
+ * Read a stored topology helper method
+ * @param topoId the id of the topology to read
+ * @param who who to read it as
+ * @return the deserialized topology.
+ * @throws IOException on any error while reading the blob.
+ * @throws AuthorizationException if who is not allowed to read the blob
+ * @throws KeyNotFoundException if the blob could not be found
+ */
+public StormTopology readTopology(String topoId, Subject who) throws KeyNotFoundException, AuthorizationException, IOException {
+    return Utils.deserialize(readBlob(ConfigUtils.masterStormCodeKey(topoId), who), StormTopology.class);
+}
+
+/**
+ * Read a stored topology config helper method
+ * @param topoId the id of the topology whos conf we are reading
--- End diff --

nit: whos -> whose

--- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
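The helper under review wraps two existing operations — reading a blob by the topology's well-known key and deserializing the bytes into a typed object — behind a single call. A rough sketch of that delegation pattern with stand-in types (the real code uses `Utils.deserialize`, `ConfigUtils.masterStormCodeKey`, and Thrift-generated classes; the key suffix and map-backed store below are illustrative assumptions, not the Storm API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

/** Stand-in for BlobStore: keyed byte[] storage plus a typed read helper. */
public class BlobStoreSketch {
    private final Map<String, byte[]> blobs = new HashMap<>();

    public void put(String key, byte[] data) {
        blobs.put(key, data);
    }

    /** Analogous to readBlob(key, who): fetch raw bytes, failing if the key is absent. */
    public byte[] readBlob(String key) throws IOException {
        byte[] data = blobs.get(key);
        if (data == null) {
            throw new IOException("key not found: " + key);
        }
        return data;
    }

    /**
     * Analogous to the readTopologyConf helper: combine the key convention,
     * the raw read, and the deserialization in one call so callers don't
     * repeat the three-step dance.
     */
    public Object readTopologyConf(String topoId) throws IOException, ClassNotFoundException {
        String key = topoId + "-stormconf.ser";  // stand-in for ConfigUtils.masterStormConfKey
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(readBlob(key)))) {
            return in.readObject();              // stand-in for Utils.deserialize
        }
    }

    public static void main(String[] args) throws Exception {
        BlobStoreSketch store = new BlobStoreSketch();

        // Store a serialized config map under the conventional key.
        Map<String, Object> conf = new HashMap<>();
        conf.put("topology.workers", 4);
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(conf);
        }
        store.put("demo-1-1460514403-stormconf.ser", bytes.toByteArray());

        // One call instead of key-build + read + deserialize at every call site.
        System.out.println(store.readTopologyConf("demo-1-1460514403"));
    }
}
```

The value of the helper is exactly what the Javadoc in the diff describes: callers get a deserialized object back and the key-naming convention stays in one place.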
[GitHub] storm pull request #1755: error log for blobstore was missing a space betwee...
GitHub user erikdw opened a pull request:

    https://github.com/apache/storm/pull/1755

    error log for blobstore was missing a space between string 'key' and …

…the actual key value

Example:

    2016-11-01 06:09:58.001 o.a.s.b.BlobStoreUtils [ERROR] Could not download blob with keysmoketest-l-1-1460514403-stormconf.ser

Notice how 'key' and 'smoketest...' run together?

Solution: make the error log similar to the one in the following function for when a blob couldn't be updated.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/erikdw/storm fix-error-log

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/storm/pull/1755.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1755

commit b635b0c8ad6437e1a2c265f3d0d3960ef9771747
Author: Erik Weathers <eri...@gmail.com>
Date: 2016-11-01T06:16:47Z

    error log for blobstore was missing a space between string 'key' and the actual key value

    Example:

        2016-11-01 06:09:58.001 o.a.s.b.BlobStoreUtils [ERROR] Could not download blob with keysmoketest-l-1-1460514403-stormconf.ser

    Notice how 'key' and 'smoketest...' run together?

    Solution: make the error log similar to the one in the following function for when a blob couldn't be updated.
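The whole fix described in this PR amounts to restoring the separator between the literal word "key" and the key value in one log format string. A minimal before/after sketch (the helper methods below are illustrative, not the actual BlobStoreUtils code, and the exact separator in the merged patch may differ):

```java
public class BlobLogFormat {
    // Broken format from the PR description: "key" and the key value run together,
    // producing logs like "...blob with keysmoketest-l-1-1460514403-stormconf.ser".
    public static String brokenMessage(String key) {
        return "Could not download blob with key" + key;
    }

    // Fixed format: a separator keeps the label and the key value readable,
    // matching the style of the neighboring blob-update error log.
    public static String fixedMessage(String key) {
        return "Could not download blob with key: " + key;
    }

    public static void main(String[] args) {
        String key = "smoketest-l-1-1460514403-stormconf.ser";
        System.out.println(brokenMessage(key));
        System.out.println(fixedMessage(key));
    }
}
```

A one-character bug like this is easy to miss precisely because string concatenation compiles fine either way; only the rendered log line reveals it.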
[GitHub] storm pull request #1443: Log.warn if found a message in kafka topic larger ...
Github user erikdw commented on a diff in the pull request:

    https://github.com/apache/storm/pull/1443#discussion_r82918924

--- Diff: external/storm-kafka/src/jvm/org/apache/storm/kafka/KafkaUtils.java ---
@@ -218,6 +219,11 @@ public static ByteBufferMessageSet fetchMessages(KafkaConfig config, SimpleConsu
 }
 } else {
 msgs = fetchResponse.messageSet(topic, partitionId);
+if (msgs.sizeInBytes() > 0 && msgs.validBytes() == 0) {
+LOG.warn(String.format("Found a message larger than the maximum fetch size (%d bytes) of this consumer on topic " +
+"%s partition %d at fetch offset %d. Increase the fetch size, or decrease the maximum message size the broker will allow."
+, config.fetchSizeBytes, partition.topic, partition.partition, offset));
--- End diff --

Nit: Comma at the front of the line is aesthetically displeasing to me (i.e., ugly). ;-)

More importantly, what is the value in `msgs.sizeInBytes()` if it's not the message size? i.e., in the [review comments you said](https://github.com/apache/storm/pull/1443#issuecomment-221819857):

> It seems we can not get the actual size of the message.

So I wonder what the value is in `msgs.sizeInBytes()` then?
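The condition being reviewed is a heuristic: the fetch returned bytes (`sizeInBytes() > 0`) but none of them decode into a complete message (`validBytes() == 0`), which implies a single message exceeded the consumer's configured fetch size. A self-contained sketch of that check and the warning text, with the comma moved off the front of the line as the nit suggests (the method and parameter names here are stand-ins, not the storm-kafka API):

```java
public class OversizeFetchCheck {
    /**
     * Heuristic from the diff under review: the broker returned a non-empty
     * byte range (sizeInBytes > 0) but no complete message could be decoded
     * from it (validBytes == 0), so one message must be larger than the
     * consumer's fetch size.
     */
    public static boolean looksLikeOversizedMessage(long sizeInBytes, long validBytes) {
        return sizeInBytes > 0 && validBytes == 0;
    }

    /** Warning text from the diff, with trailing (not leading) comma placement. */
    public static String warning(int fetchSizeBytes, String topic, int partition, long offset) {
        return String.format(
            "Found a message larger than the maximum fetch size (%d bytes) of this consumer "
                + "on topic %s partition %d at fetch offset %d. Increase the fetch size, "
                + "or decrease the maximum message size the broker will allow.",
            fetchSizeBytes, topic, partition, offset);
    }

    public static void main(String[] args) {
        // Partial bytes of an oversized message: warn.
        if (looksLikeOversizedMessage(2048, 0)) {
            System.out.println(warning(2048, "events", 3, 1_000_000L));
        }
        // Fully decodable fetch: no warning.
        System.out.println(looksLikeOversizedMessage(2048, 2048));
    }
}
```

This also illustrates the reviewer's second question: `sizeInBytes()` is the size of the raw fetched byte range, not of any one message, which is why the log can only report the fetch size rather than the actual message size.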
[GitHub] storm pull request: STORM-433: Give users visibility to the depth ...
Github user erikdw commented on the pull request:

    https://github.com/apache/storm/pull/236#issuecomment-217228874

@abhishekagarwal87 : awesome news! Any way you can post a screenshot of the UI you are currently proposing? At least please do so with the PR when you send it. That could help others to brainstorm how to put such values into the UI. Maybe you're instead asking for suggestions on how to handle *obtaining* the instantaneous values?
[GitHub] storm pull request: STORM-433: Give users visibility to the depth ...
Github user erikdw commented on the pull request:

    https://github.com/apache/storm/pull/236#issuecomment-217026345

@knusbaum : I see you closed this. Are you planning to open a new PR for publishing these stats? It would be a big help to us: we often wonder at various behaviors we witness from our users' topologies, and the only mechanism we have right now for getting visibility into the queue depths is getting every topology owner to use a custom metrics consumer to publish the metrics, which we would then need to provide fancy aggregation on top of. Having it in the Storm UI and also in the API stats would be very, very helpful.