GitHub user HeartSaVioR opened a pull request:
https://github.com/apache/storm/pull/2161
STORM-2556 Break down AutoCreds implementations into two kinds of classes
* Nimbus plugin: implements INimbusCredentialPlugin, ICredentialsRenewer
* Worker & Topology
Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2160
This can be cherry-picked to 1.x-branch.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
GitHub user HeartSaVioR opened a pull request:
https://github.com/apache/storm/pull/2160
STORM-2555 handle impersonation properly for HBase delegation token
Please refer [STORM-2555](http://issues.apache.org/jira/browse/STORM-2555)
for more details.
There had been code
It would obviously be ideal if Flux could be made to support object
creation via builders, but if that's not possible I think leaving the
KafkaSpoutConfig constructor public is a decent workaround. We are still
getting some of the benefits of the Builder pattern, even if Flux doesn't
use
Hi,
Flux is simply a mechanism for enabling Java object creation from a descriptor
file. If Flux does not support creating classes that follow the Builder design
pattern, that is a Flux limitation and has to be fixed. It is not reasonable to
impose that no one can write a builder because
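The compromise the thread converges on can be sketched as follows: a config class that keeps its constructor public (so descriptor-driven tools like Flux can instantiate it directly) while still offering a fluent Builder. All names here are illustrative, not the actual KafkaSpoutConfig API.

```java
// Sketch only: a config class usable both via its public constructor
// (e.g. from a Flux descriptor) and via a Builder. Names are hypothetical.
class SpoutConfigSketch {
    private final String bootstrapServers;
    private final int maxRetries;

    // Public constructor: lets reflection-based tools create the object
    // without going through the Builder.
    public SpoutConfigSketch(String bootstrapServers, int maxRetries) {
        this.bootstrapServers = bootstrapServers;
        this.maxRetries = maxRetries;
    }

    public String getBootstrapServers() { return bootstrapServers; }
    public int getMaxRetries() { return maxRetries; }

    // Builder: the recommended fluent API, avoiding telescoping constructors.
    public static class Builder {
        private String bootstrapServers = "localhost:9092";
        private int maxRetries = 3;

        public Builder bootstrapServers(String servers) {
            this.bootstrapServers = servers;
            return this;
        }

        public Builder maxRetries(int retries) {
            this.maxRetries = retries;
            return this;
        }

        public SpoutConfigSketch build() {
            return new SpoutConfigSketch(bootstrapServers, maxRetries);
        }
    }
}
```

This keeps the Builder's readability for Java callers while leaving a plain constructor path open for Flux.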
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/2159
As I recall (it's been a while), the eventlogging functionality enabled
with "topology.eventlogger.executors" in storm.yaml will make both spouts and
bolts send whatever they emit to an event logger
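For reference, the setting mentioned above lives in storm.yaml; the value below is illustrative, not a recommendation:

```yaml
# Run one event logger executor per topology. With event logging enabled,
# tuples emitted by spouts and bolts are also sent to the event logger task.
topology.eventlogger.executors: 1
```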
Hi Stig/Hugo,
That constructor is indeed public. I actually made that change but forgot about
it.
https://github.com/apache/storm/commit/5ff7865cf0b86f40e99b54e789fa60b8843191aa
The reason for making that change is to make it work with flux.
I think changing flux code to access private
Which thread(s) perform the tuple fanout from a source bolt to the subscriber
bolts, please?
Thanks
Madhav.
@Harsha @Stig, I agree with you. Let’s make manual partition assignment the de
facto implementation. I have already adjusted the KafkaTridentSpout code to
reflect @Stig’s changes, and things seem to be working very well for Trident as
well. I am tracking that on
Github user askprasanna commented on the issue:
https://github.com/apache/storm/pull/2159
Is there a reason the same fix should not also be applied to
storm-kafka-client?
---
Github user askprasanna commented on the issue:
https://github.com/apache/storm/pull/2159
My colleague reported an issue with his topology: a runtime exception
around KafkaSpoutMessageId not being serializable. He had
debug events enabled in his topology. I found
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/2159
@askprasanna Why does it have to be serializable? It seems that it has been
working until now. What changed that is enforcing this class to be serializable?
---
It looks public to me?
https://github.com/apache/storm/blob/38e997ed96ce6627cabb4054224d7044fd2b40f9/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpoutConfig.java#L461
I think it's good to be able to avoid telescoping constructors, while at the
same time not having a
GitHub user askprasanna opened a pull request:
https://github.com/apache/storm/pull/2159
STORM-2552: KafkaSpoutMessageId should be serializable
Equivalent to STORM-1848 and https://github.com/apache/storm/pull/1428
You can merge this pull request into a Git repository by running:
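The failure described in this thread can be sketched as follows: a message-id class whose fields round-trip through Java serialization. The class and helper names are hypothetical, not Storm's actual KafkaSpoutMessageId; the sketch only illustrates the kind of NotSerializableException-style runtime failure that motivates implementing Serializable.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical message-id sketch. Implementing Serializable lets the
// object survive Java serialization; without it, serialization paths
// that receive the id fail at runtime.
class MessageIdSketch implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String topic;
    private final int partition;
    private final long offset;

    public MessageIdSketch(String topic, int partition, long offset) {
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
    }

    public long offset() { return offset; }

    // Round-trips the id through Java serialization. A non-Serializable
    // field here would throw NotSerializableException at write time.
    public static MessageIdSketch roundTrip(MessageIdSketch id) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(id);
            }
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (MessageIdSketch) ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```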
Turn on garbage collection logging on your workers. We have found that GC is
typically the culprit in cases like this. A large stop-the-world GC pause
makes all threads wait for it, so the heartbeats time out and things get
rescheduled.
Once you know it is GC then you need to take a
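One way to turn on GC logging for worker JVMs is via worker.childopts in storm.yaml. The flags below are the pre-JDK-9 style (newer JVMs use -Xlog:gc*); heap size and log path are illustrative:

```yaml
# storm.yaml: pass GC-logging flags to every worker JVM.
# Heap size and log location are examples only.
worker.childopts: "-Xmx2g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:artifacts/gc.log"
```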
I dug in to track the source of the error. The first error message I found is
in nimbus.log: "o.a.s.d.nimbus [INFO] Executor T1-1-1497424747:[8 8] not
alive". Nimbus detects that some executors are not alive and therefore makes a
reassignment, which causes the worker restarts.
I did not find any error in the