Github user liu-zhaokun commented on the issue:
https://github.com/apache/storm/pull/2170
@HeartSaVioR
It says "Note that if this is set to something with a secret (as when using
digest authentication) then it should only be set in the
storm-cluster-auth.yaml file." in
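As a hedged illustration of the quoted note: assuming the key name is storm.zookeeper.topology.auth.payload (the string behind the Config constant discussed elsewhere in this thread; an assumption, not confirmed here), a secret payload would live in storm-cluster-auth.yaml rather than the world-readable storm.yaml:

```yaml
# storm-cluster-auth.yaml -- kept separate because it contains a secret.
# Key name assumed from Config.STORM_ZOOKEEPER_TOPOLOGY_AUTH_PAYLOAD;
# the value is a placeholder, not a real credential.
storm.zookeeper.topology.auth.payload: "user:some-secret"
```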
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2165
Oops, I left a review comment, so please treat this as revoking my +1.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user HeartSaVioR commented on a diff in the pull request:
https://github.com/apache/storm/pull/2165#discussion_r124716042
--- Diff: bin/storm.py ---
@@ -94,7 +94,7 @@ def init_storm_env():
CONFFILE = ""
JAR_JVM_OPTS = shlex.split(os.getenv('STORM_JAR_JVM_OPTS',
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2165
OK then I would also +1 given that the patch is confirmed to be manually
tested.
---
Github user asfgit closed the pull request at:
https://github.com/apache/storm/pull/2178
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2178
Just confirmed that this issue also affects 1.x-branch.
---
Github user asfgit closed the pull request at:
https://github.com/apache/storm/pull/2166
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2142
@chengxinglin Kind reminder.
@revans2 If @chengxinglin doesn't respond and you think this is a
blocker, could you craft a pull request after waiting a bit?
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2149
@adityasharad
I guess @revans2 put up a pull request to your repo. Could you please
merge and update this PR? Thanks in advance!
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2179
+1
@MichealShin Could you change the commit title to include STORM-2601 so
that we can track easily? Thanks in advance!
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2178
@srdo
+1
I guess that 1.x branch is also affected, right?
---
I submitted a topology locally without any problem, but in production mode
I couldn't; as you can see in the UI, there are zero values in all columns
except the execute columns.
After some time I got DRPCExecutionException(msg: request timed out) in
the terminal.
my configurations are
Machine A and Machine B
storm.yaml in
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2170
If this is needed, why not add it to storm.yaml.example and comment it out?
I don't think users want to copy a line from a template file and paste it
into their storm.yaml.
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2173
@revans2 @harshach
I've just played with get_wildcard_dir() and found another issue. Please
refer below:
```
>>> import os
>>> def get_wildcard_dir(path):
... if
```
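The session above is cut off before the interesting part. As a purely hypothetical sketch (not the actual bin/storm.py code; the body below is invented for illustration), here is the kind of issue such a function can have: if the result is only assigned inside existence checks, a nonexistent path raises UnboundLocalError instead of returning a value.

```python
import os

# Hypothetical reconstruction for illustration only -- not the real
# bin/storm.py implementation.
def get_wildcard_dir(path):
    if os.path.isdir(path):
        ret = [os.path.join(path, "*")]   # expand a directory to dir/*
    elif os.path.exists(path):
        ret = [path]                      # a plain file is used as-is
    return ret  # UnboundLocalError when the path does not exist
```

Calling this on a path that does not exist crashes rather than returning a sensible default.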
Github user liu-zhaokun commented on the issue:
https://github.com/apache/storm/pull/2180
@HeartSaVioR
I want to use my own payload by setting the configuration named
"STORM_ZOOKEEPER_TOPOLOGY_AUTH_PAYLOAD", but it doesn't work. The payload of
any topology is always a UUID.
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2180
Sorry, I don't know the details of this part and am not sure this is a bug.
Have you faced a specific issue regarding this bug?
cc. @revans2 I guess you might know about the detail, though the code
Github user liu-zhaokun commented on the issue:
https://github.com/apache/storm/pull/2166
@HeartSaVioR
Please help me merge it.
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2166
Sorry I already reviewed this but forgot to comment. +1
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2172
Filed an issue for versioning state:
https://issues.apache.org/jira/browse/STORM-2605
---
I'm confused about something I'm facing now. I submitted a topology in local
mode and it worked well, but in production it didn't, due to the garbage
collector, which means it needs more RAM!! How did the topology work well
locally with one machine, and when I used production to distribute work
among two machines for
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/1674
@nilday Sure, please take your time. Thanks for keeping your interest in
contributing. :)
---
Github user nilday commented on the issue:
https://github.com/apache/storm/pull/1674
@HeartSaVioR I'll rebase it as soon as I can. As the storm-core has been
restructured, it may take a while.
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/1674
This feature looks like a good thing to adopt. If @nilday is still willing to
bring this into the Storm codebase, I'd like to spend some time reviewing it
as well.
@nilday Could you rebase
Hi Alexandre,
I don't know much of storm-kafka-client, but at a glance I can't find any
misuse of HashMap in KafkaSpout, so I'd rather suspect that the OffsetManager
is really huge. If you are willing to dig more into the KafkaSpout OOME issue,
you can get more information on KafkaSpout for tracking with
Hi Hugo,
Thanks for your concern about our troubles with the new storm-kafka-client.
Our "bench" is based on the live production data of our cloud supervision
system, collecting at least 1 million metrics/min in our Kafka broker
cluster (currently based on Kafka 0.10.1.0, with "compatibility
Hi Alexandre,
In my benchmarks the storm-kafka-client spout improves throughput by 70% and
latency by 40% vs the storm-kafka implementation. I am surprised by your
findings substantiating the opposite. Can you share your benchmark where you
compare the performance of both implementations?
As
I need to know the configuration for DRPC; as I read more about it, every
time I read different settings.
Now if I have 2 machines, Machine A and B: on machine A I will run nimbus,
drpc, and ui, but on B I will run a supervisor.
In the code I set drpc.servers to the IP address of A, and the drpc client
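For a setup like the one described (nimbus, DRPC server, and UI on machine A; supervisor on machine B), a minimal storm.yaml sketch might look like the following. The IP address is a placeholder, and the keys are the standard Storm config names as I understand them, not values taken from this thread:

```yaml
# storm.yaml, distributed to BOTH machines.
# 10.0.0.1 stands in for machine A's address (placeholder).
nimbus.seeds: ["10.0.0.1"]
drpc.servers:
  - "10.0.0.1"
```

The DRPC client would then connect to machine A on drpc.port (3772 by default, if I recall correctly).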
> On Jun 28, 2017, at 4:01 PM, Jungtaek Lim wrote:
>
> If my memory is right, when releasing 1.1.0 we postponed resolving some
> critical issues for storm-kafka-client, and seems like it still haven't
> been sorted out. I even think these issues can be effectively blocker for
-1 to deprecate the old storm-kafka.
I don't feel storm-kafka-client is stable given that it has some critical
issues, and the module doesn't have enough volunteer committers to be
stabilized as fast as it should be.
If my memory is right, when releasing 1.1.0 we postponed resolving some
critical
Hi Alexandre,
About issue 1:
This issue is not by design. It is a side effect of the spout internally
using the KafkaConsumer's subscribe API instead of the assign API. Support
for using the assign API was added a while back, but has a bug that is
preventing the spout from starting when
Hi Alexandre,
Thanks for your input.
I think we’re very much on the same page. The new Kafka spout needs to be on par
with the old one in terms of performance and stability before we even think
about deprecation, let alone removal. I’ve heard a lot of complaints both
publicly and privately about
Hello,
If that matters, our current experience with StormKafkaClient is
disappointing (see my recent posts "Lag issues using Storm 1.1.1 latest
build with StormKafkaClient 1.1.1 vs old StormKafka spouts" in this mailing
list).
Our current experience is that the old StormKafka spout always beats
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/2155
Yeah... I also think that's the ideal way to do it. Squash at the end and
have a new commit addressing each batch of code review comments.
---
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/2155
I'll squash the commits soon; I just wanted people to have a chance to
review the changes without having to read the entire diff again
---
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/2155
@hmcl I'm not sure how we can do that. The fields in Builder are not
static, so if we move the default definitions there, we'd have to create a
builder and fish out the default values in the tests. The
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/2155
@srdo thanks for addressing the code comments. It LGTM, but I forgot to
publish the following comment - sorry for the extra overhead. Do you want to
address it in this patch as well?
Most of
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/2155
@hmcl Addressed your comments, and updated the README to reflect the API
changes.
---
> On Jun 28, 2017, at 1:16 PM, Hugo Da Cruz Louro
> wrote:
>
> I still need to go over the entire discussion thread in more detail, but one
> thing I would like to bring up right away is the proposal to DEPRECATE, and
> eventually remove, the KafkaSpout with the old
Github user srdo commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124604957
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -116,217 +141,57 @@
private boolean
> On Jun 28, 2017, at 10:48 AM, Bobby Evans wrote:
>
> +1.
> If the 1.1 and 1.2 lines start to become difficult to maintain we can look at
> putting them in maintenance mode too once we have a 2.x release.
> I am a little nervous about merging a new feature into
Github user srdo commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124601205
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -79,16 +78,46 @@
I still need to go over the entire discussion thread in more detail, but one
thing I would like to bring up right away is the proposal to DEPRECATE, and
eventually remove, the KafkaSpout with the old Kafka Consumer APIs. The
storm-kafka-client KafkaSpout is getting stabilized, and I think we are
Github user srdo commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124600901
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -116,217 +141,57 @@
private boolean
Your KafkaConsumer instance (which boils down to your KafkaSpout) can be in one
of two states:
1 - Has committed to Kafka
Here, EARLIEST fetches from the first offset. LATEST fetches from the last
offset. UNCOMMITTED_EARLIEST and UNCOMMITTED_LATEST will fetch from the last
committed offset -
Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124580202
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -116,217 +141,57 @@
private boolean
Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124586447
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -116,217 +141,57 @@
private boolean
Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124586375
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -116,217 +141,57 @@
private boolean
Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124580583
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -116,217 +141,57 @@
private boolean
Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124572749
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -79,16 +78,46 @@
Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2155#discussion_r124594357
--- Diff:
external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java
---
@@ -116,217 +141,57 @@
private boolean
No, the description is accurate.
EARLIEST and LATEST are for unconditionally starting at the beginning or
end of the subscribed partitions. So if you configure a spout to use either
of these, it will start at the earliest or latest offset on each partition
every time you start it. Example: Say
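The strategy semantics described above can be sketched in a few lines of Python (an illustrative model only, not the actual Java implementation in storm-kafka-client; the function name and signature are invented for the example):

```python
# Illustrative model of the FirstPollOffsetStrategy semantics described
# above. `committed` is the last committed offset, or None if nothing has
# ever been committed for the partition.
def starting_offset(strategy, committed, earliest, latest):
    if strategy == "EARLIEST":
        return earliest   # unconditionally start at the beginning
    if strategy == "LATEST":
        return latest     # unconditionally start at the end
    if committed is not None:
        return committed  # UNCOMMITTED_*: resume from the last commit
    # No commit yet: fall back to the earliest or latest offset.
    return earliest if strategy == "UNCOMMITTED_EARLIEST" else latest
```

For example, a freshly deployed spout (no commit yet) with UNCOMMITTED_EARLIEST starts at the earliest offset, while a restarted spout with a commit at offset 42 resumes at 42 regardless of which UNCOMMITTED_* variant is used.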
+1.
If the 1.1 and 1.2 lines start to become difficult to maintain we can look at
putting them in maintenance mode too once we have a 2.x release.
I am a little nervous about merging a new feature into 1.x branch without first
going to master, but I hope that it will not be too much work to port
Github user markthegrea commented on the pull request:
https://github.com/apache/storm/commit/ca17c4ff10231a5d93deb3d4ac934140ccec674d#commitcomment-22810673
How is this enabled?
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/1970
I updated this PR to apply the last change of STORM-2369. I'll simply
rebase again after STORM-2369 is merged. And will also craft the PR for master
branch.
---
More about this thread: we noticed that with StormKafkaClient 1.1.x latest,
we get an OutOfMemoryError after ~2 hours of running our simple test topology.
We reproduce it every time, so we decided to generate a heap dump before the
OutOfMemoryError and viewed the result using Eclipse MAT.
The results
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/1950
@arunmahadevan Ah, please review #2172 as well when you revisit this.
Thanks in advance!
---
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2172
Just updated the PR to make it equivalent to #1950
---
Is there any help, please?
On Wednesday, June 28, 2017, sam mohel wrote:
> I submitted two topologies in production mode . First one has a data set
with size 215 MB and worked well and gave me the results . Second topology
has a data set with size 170 MB with same
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/1950
@arunmahadevan Thanks I've addressed latest review comment and also
squashed.
---
Github user arunmahadevan commented on a diff in the pull request:
https://github.com/apache/storm/pull/1950#discussion_r124484454
--- Diff:
external/storm-redis/src/main/java/org/apache/storm/redis/state/RedisKeyValueState.java
---
@@ -316,10 +340,18 @@ private Long
Oops, sent my last mail too fast, let me continue it:
Hello,
Coming back to my original post in this list, we have 3 issues with latest
1.1.x StormKafkaClient spout with our setup:
Issue#1:
Initial lag (which we didn't have using the classic Storm Kafka spout)
For this issue, my understanding of
Hello,
Coming back to my original post in this list, we have two issues with
latest 1.1.x StormKafkaClient spout with our setup:
Issue#1:
Initial lag (which we didn't have using the classic Storm Kafka spout)
For this issue, my understanding of Kristopher's answer is that this is
"by design" of
That's great news that metrics work is ready!
I'm +1 to Taylor's proposal, but in order to respect semantic versioning, I
propose some modifications from Taylor's proposal:
- create 1.1.x-branch with target version 1.1.1-SNAPSHOT and port back only
bug fixes to the 1.1.x-branch
- change the
I submitted two topologies in production mode. The first one has a data set
of size 215 MB and worked well and gave me the results. The second topology
has a data set of size 170 MB with the same configurations, but it stopped
working after some time and didn't complete its result.
The error I got is drpc
The storm-kafka-client document explains these two values almost
identically, except for the last word.
https://github.com/apache/storm/blob/master/docs/storm-kafka-client.md
- UNCOMMITTED_EARLIEST (DEFAULT) means that the kafka spout polls
records from the last committed offset, if any. If