Github user asfgit closed the pull request at:
https://github.com/apache/storm/pull/2292
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2292
+1 again.
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2199
@Ethanlm
I would like to be clear about `Artifactory` before continuing the review. I
have no idea what this is, but whether it is an open specification or
proprietary to a specific company, if we can
Github user Ethanlm commented on the issue:
https://github.com/apache/storm/pull/2199
I refactored the code. It simplifies configurations a lot.
`getConfigLoader()` will choose the `IConfigLoader` implementation based on
the scheme of `scheduler.config.loader.uri`. It doesn't require
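As a rough illustration of the scheme-based dispatch described here, a
minimal sketch; the loader class names are hypothetical, not necessarily
those in the PR:

```java
import java.net.URI;
import java.util.Collections;
import java.util.Map;

// Hypothetical sketch of choosing a loader from the URI scheme of
// scheduler.config.loader.uri; class names are illustrative only.
interface IConfigLoader {
    Map<String, Object> load();
}

class FileConfigLoader implements IConfigLoader {
    private final URI uri;
    FileConfigLoader(URI uri) { this.uri = uri; }
    public Map<String, Object> load() { return Collections.emptyMap(); } // stub
}

class ArtifactoryConfigLoader implements IConfigLoader {
    private final URI uri;
    ArtifactoryConfigLoader(URI uri) { this.uri = uri; }
    public Map<String, Object> load() { return Collections.emptyMap(); } // stub
}

public class ConfigLoaderFactory {
    // Dispatch on the scheme of the configured loader URI.
    public static IConfigLoader getConfigLoader(String uriString) {
        if (uriString == null) {
            return null; // no loader configured
        }
        URI uri = URI.create(uriString);
        switch (uri.getScheme()) {
            case "file":
                return new FileConfigLoader(uri);
            case "http":
            case "https":
                return new ArtifactoryConfigLoader(uri);
            default:
                throw new IllegalArgumentException(
                        "Unsupported scheme: " + uri.getScheme());
        }
    }
}
```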
Hello Jungtaek,
I confirm that we currently do not have multiple Nimbus nodes.
I want to clarify that the Nimbus process never crashed: it keeps printing
this error in its log:
2017-08-06 03:44:01.777 o.a.s.t.ProcessFunction pool-14-thread-1 [ERROR]
Internal error processing getClusterInfo
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2293
@kevinconaway
The TVL topology tries to keep up the acked/second rate, so unless the
topology can't keep up, it should be similar to the rate we set (10).
So this tries to lock
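For context, a throughput-vs-latency harness like TVL typically paces the
spout toward a target rate; a minimal pacing sketch under that assumption
(hypothetical code, not TVL's actual implementation):

```java
// Minimal pacing sketch (hypothetical; not the actual TVL code):
// emit at a fixed target rate by sleeping until each tuple's due time.
public class PacedEmitter {
    public static void main(String[] args) throws InterruptedException {
        final long ratePerSec = 10_000;                // target tuples/sec
        final long intervalNanos = 1_000_000_000L / ratePerSec;
        long next = System.nanoTime();
        for (long i = 0; i < 100_000; i++) {
            long now = System.nanoTime();
            if (now < next) {
                long waitNanos = next - now;
                Thread.sleep(waitNanos / 1_000_000, (int) (waitNanos % 1_000_000));
            }
            emit(i);                                   // stand-in for collector.emit(...)
            next += intervalNanos;
        }
    }
    private static void emit(long seq) { /* no-op stand-in */ }
}
```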
Blob files (meta, data) live in the storm local directory. ZK only has the
list of blob keys and which live nimbuses have each file. So if you lose the
storm local directory, you just can't restore blobs, unless other nimbuses
have these blobs so the current nimbus can pull them.
(I guess you have only one nimbus,
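A schematic of the recovery path described above, with hypothetical names;
ZooKeeper holds only the keys and the set of nimbuses per blob, so the bytes
themselves must be pulled from a peer's local dir:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Schematic only (hypothetical names): ZK knows which nimbuses hold a
// blob; the blob bytes themselves live in each nimbus's local dir.
public class BlobRecoverySketch {
    interface ZkView {
        List<String> blobKeys();                   // keys only, no data
        List<String> nimbusesWithBlob(String key); // who has the bytes
    }
    interface PeerNimbus {
        Optional<byte[]> download(String key);     // pull from peer's local dir
    }

    static boolean recoverBlob(String key, ZkView zk,
                               Function<String, PeerNimbus> connect) {
        for (String nimbus : zk.nimbusesWithBlob(key)) {
            Optional<byte[]> data = connect.apply(nimbus).download(key);
            if (data.isPresent()) {
                // write data to the local blob store here
                return true;
            }
        }
        return false; // single-nimbus cluster: nothing to pull from
    }
}
```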
Github user Angus-Slalom commented on the issue:
https://github.com/apache/storm/pull/2294
Created for pull request https://github.com/apache/storm/pull/2157
Github user Angus-Slalom commented on the issue:
https://github.com/apache/storm/pull/2157
Created a pull request on master at
https://github.com/apache/storm/pull/2294
GitHub user Angus-Slalom opened a pull request:
https://github.com/apache/storm/pull/2294
STORM-2517 add interface for Writer, make AbstractHDFSWriter properties
protected
You can merge this pull request into a Git repository by running:
$ git pull
Hello Jungtaek,
I can do what you suggest (i.e. moving the storm local dir to a place which
isn't in /tmp), but since the issue occurs rarely (once per month), I doubt
I'll be able to give feedback soon.
What is puzzling to me is that in order to recover from such an issue, we
have to stop everything, then
Github user kevinconaway commented on the issue:
https://github.com/apache/storm/pull/2293
Can you explain the results a bit? From my reading, the number of tuples
acked/second is nearly the same
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/2282
+1
Github user revans2 commented on the issue:
https://github.com/apache/storm/pull/2289
@roshannaik I just added support for a new default reporter that writes the
data out in a fixed-width format that should be much more human-readable,
with formatting, and it is added as a
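For a sense of what fixed-width output looks like, a generic sketch (not the
PR's actual reporter; the metric names and values are illustrative):

```java
// Generic fixed-width formatting sketch (not the PR's actual reporter).
public class FixedWidthReport {
    public static void main(String[] args) {
        System.out.printf("%-20s %12s %12s%n", "metric", "count", "mean_ms");
        System.out.printf("%-20s %,12d %12.1f%n", "acked", 3_015_580, 84.2);
        System.out.printf("%-20s %,12d %12.1f%n", "failed", 0, 0.0);
    }
}
```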
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2293
Attached perf. test numbers.
TVL, rate 10, topology.max.spout.pending=5000
> before patch
uptime: 241 acked: 3,015,580 acked/sec: 100,519.33 failed:0 99%:
GitHub user HeartSaVioR opened a pull request:
https://github.com/apache/storm/pull/2293
STORM-2231 Fix multi-threads issue on executor send queue
[STORM-2231](https://issues.apache.org/jira/browse/STORM-2231) is an issue
reporting broken thread-safety in the output collector. We
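The usual symptom behind such reports is multiple user threads calling emit
on a collector that is not thread-safe; a generic sketch of serializing
access (the common workaround, not necessarily the PR's fix):

```java
// Generic sketch of serializing emits from multiple threads onto a
// non-thread-safe collector (a common workaround, not the PR's fix).
import java.util.List;

public class SynchronizedEmitter {
    private final Object lock = new Object();
    private final UnsafeCollector collector;

    SynchronizedEmitter(UnsafeCollector collector) { this.collector = collector; }

    void emit(List<Object> tuple) {
        synchronized (lock) {          // one emitter at a time
            collector.emit(tuple);
        }
    }

    interface UnsafeCollector {        // stand-in for the output collector
        void emit(List<Object> tuple);
    }
}
```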
Github user bijanfahimi commented on a diff in the pull request:
https://github.com/apache/storm/pull/2282#discussion_r135051662
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/EmptyKafkaTupleListener.java
---
@@ -0,0 +1,54 @@
+/*
+ *
Github user srdo commented on a diff in the pull request:
https://github.com/apache/storm/pull/2282#discussion_r135038754
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/EmptyKafkaTupleListener.java
---
@@ -0,0 +1,54 @@
+/*
+ * Licensed to
Github user bijanfahimi commented on a diff in the pull request:
https://github.com/apache/storm/pull/2282#discussion_r135037987
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/EmptyKafkaTupleListener.java
---
@@ -0,0 +1,54 @@
+/*
+ *
Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2282#discussion_r135033155
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/EmptyKafkaTupleListener.java
---
@@ -0,0 +1,33 @@
+package
Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2282#discussion_r135032704
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/EmptyKafkaTupleListener.java
---
@@ -0,0 +1,54 @@
+/*
+ * Licensed to
Github user revans2 commented on the issue:
https://github.com/apache/storm/pull/2289
@roshannaik The latency reported is a simulation of Kafka or something like
it. The start time is not when the message is emitted by the spout. The
start time is when the message would have been
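In other words, latency is measured from a simulated arrival time rather
than the emit time, so spout-side queuing counts against it. Roughly, with
hypothetical names:

```java
// Rough sketch (hypothetical names): latency is measured from when the
// message *would have* arrived from the upstream system, not from emit.
public class SimulatedLatency {
    public static void main(String[] args) {
        long runStartNanos = System.nanoTime();
        long intervalNanos = 100_000;          // simulated arrival spacing
        long messageIndex = 42;
        long simulatedArrival = runStartNanos + messageIndex * intervalNanos;
        long completionNanos = System.nanoTime();
        double latencyMs = (completionNanos - simulatedArrival) / 1_000_000.0;
        System.out.printf("latency: %.1f ms%n", latencyMs);
    }
}
```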
Github user bijanfahimi commented on a diff in the pull request:
https://github.com/apache/storm/pull/2282#discussion_r134996598
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaTupleListener.java
---
@@ -0,0 +1,78 @@
+/*
+ * Licensed
Alexandre,
I found that your storm local dir is set to "/tmp/storm", parts or all of
which could be removed at any time.
Could you move the path to a non-temporary place and try to replicate?
Thanks,
Jungtaek Lim (HeartSaVioR)
On Thu, Aug 24, 2017 at 6:40 PM, Alexandre Vermeerbergen
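For reference, the local dir is controlled by the `storm.local.dir` key in
storm.yaml; a minimal example moving it off /tmp (the target path here is
just an example):

```yaml
# storm.yaml: keep Storm state out of /tmp so the OS can't purge it
storm.local.dir: "/var/storm"
```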
Github user srdo commented on a diff in the pull request:
https://github.com/apache/storm/pull/2282#discussion_r134980276
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaTupleListener.java
---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the
Github user bijanfahimi commented on a diff in the pull request:
https://github.com/apache/storm/pull/2282#discussion_r134977497
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaTupleListener.java
---
@@ -0,0 +1,78 @@
+/*
+ * Licensed
Hello Jungtaek,
Thank you very much for your answer.
Please find attached the full Nimbus log (gzipped) related to this issue.
Please note that the last ERROR repeats forever until we "repair" Storm.
From the logs, it could be that the issue began close to when a topology
was restarted
Github user roshannaik commented on the issue:
https://github.com/apache/storm/pull/2289
It would be nice if the latencies were trimmed to one digit after the
decimal point.
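That trimming is a one-line format change wherever the report prints a
latency value, e.g.:

```java
// Trim a latency value to one digit after the decimal point.
public class TrimLatency {
    public static void main(String[] args) {
        double latencyMs = 84.24637;
        System.out.println(String.format("%.1f", latencyMs)); // "84.2"
    }
}
```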
Github user sakanaou commented on the issue:
https://github.com/apache/storm/pull/2292
Ok, I will update the PR so it can be applied to master easily.
Hi Alexandre, I missed this mail since I was on vacation.
I followed the stack trace, but it's hard to analyze without context. Do you
mind providing the full nimbus log?
Thanks,
Jungtaek Lim (HeartSaVioR)
On Wed, Aug 16, 2017 at 4:12 AM, Alexandre Vermeerbergen wrote:
> Hello,
>
>
Github user roshannaik commented on the issue:
https://github.com/apache/storm/pull/2289
The latency reported in the TVL report differs from what is shown in the UI
by a factor of 10k. A division error, maybe?
Here is a sample report indicating a mean latency of 84,246 ms, but the
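A 10,000x discrepancy usually suggests a unit mix-up rather than a logic
bug. Purely as a hypothetical illustration (not a diagnosis of this PR):

```java
// Hypothetical illustration: if latency is recorded in 100 ns ticks but
// printed as if it were milliseconds, the report is inflated by 10,000x.
public class UnitMixup {
    public static void main(String[] args) {
        long rawValue = 84_246;                         // recorded in 100 ns ticks
        System.out.println(rawValue + " ms");           // wrong: prints 84246 ms
        double actualMs = rawValue * 100 / 1_000_000.0; // 100 ns ticks -> ms
        System.out.printf("%.1f ms%n", actualMs);       // right: 8.4 ms
    }
}
```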