@Arun, there is also STORM-2844<https://issues.apache.org/jira/browse/STORM-2844>, which I just filed. I am working on a fix and will discuss potential solutions in the JIRA, because this bug was introduced while fixing STORM-2666<https://issues.apache.org/jira/browse/STORM-2666>.

Thanks,
Hugo

On Dec 5, 2017, at 10:18 AM, Arun Mahadevan <ar...@apache.org> wrote:

Looks like now we are only waiting on the Kafka spout issues below:

https://github.com/apache/storm/pull/2428
https://github.com/apache/storm/pull/2438

Maybe we should include the metrics changes as well?
https://github.com/apache/storm/pull/2203


Can we try to get the above merged ASAP and start the 1.2.0 release process?

Thanks,
Arun



On 11/21/17, 3:18 AM, "generalbas....@gmail.com on behalf of Stig Rohde 
Døssing" <generalbas....@gmail.com on behalf of stigdoess...@gmail.com> wrote:

Alexandre,

It's a bug in the way I tried to fix the NPE you had a few days ago in
https://github.com/apache/storm/pull/2428. I missed that using
setKey/setValue actually builds a new KafkaSpoutConfig.Builder instead of
just setting a field, and the change I made to the copy constructor means
that if the value deserializer is set in kafkaProps (which it is when using
KafkaSpoutConfig.builder), using setKey/Value is ignored.
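The failure mode Stig describes can be illustrated with a toy sketch (this is NOT the real KafkaSpoutConfig code; all names below are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the pitfall: setValue(...) returns a NEW builder via a
// copy constructor, and the copy constructor lets a value already present in
// the props map win over the explicitly set field, so the explicit setter is
// silently ignored.
public class BuilderPitfall {
    static class Builder {
        final Map<String, Object> props;
        final String valueDeserializer;

        Builder(Map<String, Object> props, String valueDeserializer) {
            this.props = props;
            // Bug: the props entry shadows the explicitly set field.
            this.valueDeserializer = props.containsKey("value.deserializer")
                    ? (String) props.get("value.deserializer")
                    : valueDeserializer;
        }

        Builder setValue(String deserializer) {
            // Builds a fresh builder instead of mutating a field.
            return new Builder(props, deserializer);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("value.deserializer", "StringDeserializer");
        Builder b = new Builder(props, null).setValue("CustomDeserializer");
        // The explicitly chosen custom deserializer is lost:
        System.out.println(b.valueDeserializer); // prints StringDeserializer
    }
}
```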

I've amended the fix to STORM-2826 and added a few more tests. The new jar
is at
https://drive.google.com/file/d/1DgJWjhWwczYgZS82YGd63V3GT2G_v9fd/view?usp=sharing
.

As far as I know, there is no way for you to get the subscribed topics from the subscription.

2017-11-21 11:04 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Hello Stig,

Here's an update of my tests with storm 1.2.0 preview:
- I accept the limitation on the stability of the string format returned by
getTopicsString(), as I have adapted our code to detect both 1.1.0-style &
1.2.0-style. Isn't there a clean way to get the list of topics other than
our fragile parsing?
- My ~15 topologies have been running for 24 hours with the Storm 1.2.0 preview + our own Kafka spout deriving from the storm-kafka-client 1.2.0 preview settings; I have seen no stability or performance issues (but that's not yet a large-scale test).
- When I tried to switch one of our topologies to your storm-kafka-client,
I was surprised to get no stats on the topology.
 Then I noticed exceptions for all messages read by the spout:

java.lang.ClassCastException: java.lang.String cannot be cast to com.dassault_systemes.infra.monitoring.model.Event
        at com.dassault_systemes.storm.eval


And also:

2017-11-21 08:28:40.958 o.a.s.k.s.KafkaSpout Thread-5-eventFromAdminTopic-executor[12 12] [INFO] Kafka Spout opened with the following configuration:
KafkaSpoutConfig{kafkaProps={key.deserializer=class org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=class org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, request.timeout.ms=1200000, group.id=Storm_RealTimeSupervision_9XkvRUExS2GFNAZNcBjQug_defaultStormTopic_alerting_administration, bootstrap.servers=ows-171-33-69-118.eu-west-2.compute.outscale.com:9092, auto.commit.interval.ms=60000, session.timeout.ms=120000, auto.offset.reset=earliest}, key=org.apache.kafka.common.serialization.StringDeserializer@61dc4a48, value=com.acme_systemes.storm.evaluator.spout.EventKafkaDeserializer@5d6e1916, pollTimeoutMs=200, offsetCommitPeriodMs=30000, maxUncommittedOffsets=10000000, firstPollOffsetStrategy=LATEST, subscription=org.apache.storm.kafka.spout.ManualPartitionSubscription@4ff512c9, translator=com.acme_systemes.storm.evaluator.spout.EventKafkaRecordTranslator@593c16f5, retryService=KafkaSpoutRetryExponentialBackoff{delay=TimeInterval{length=0, timeUnit=SECONDS}, ratio=TimeInterval{length=2, timeUnit=MILLISECONDS}, maxRetries=2147483647, maxRetryDelay=TimeInterval{length=10, timeUnit=SECONDS}}, tupleListener=EmptyKafkaTupleListener}

This latter log is very strange: it shows that our custom deserializer was indeed taken into account in a field called "value=...", but not in kafkaProps, where value.deserializer remained set to StringDeserializer.

Our Kafka spout initialization code is the following one:

       KafkaSpoutConfig<String, Event> spoutConfigForMainTopic =
KafkaSpoutConfig
               .builder(elasticKafkaBrokers, KafkaTopics.MAIN)
               .setValue(EventKafkaDeserializer.class)
               .setGroupId(consumerId + "_" + KafkaTopics.MAIN)
               .setFirstPollOffsetStrategy(strategy)
               .setProp(kafkaConsumerProp)
               .setRecordTranslator(
                       new EventKafkaRecordTranslator(true))
               .build();

We then noticed the following discussion:

http://mail-archives.apache.org/mod_mbox/storm-user/201709.mbox/%3ccag09er3yzsxw84u6xvqevdojy-j2hf-jrnnxhk845bzo5d4...@mail.gmail.com%3e

So if I understand correctly, there's a breaking change between Storm 1.1.0 and 1.2.0 in the way Kafka deserializers are registered.

Can you confirm?

I can change our code, but I would like to keep the ability to run it with either 1.1.0 or 1.2.0 for a while: is there a way I can register Kafka deserializers for storm-kafka-client that works with both 1.1.0 and 1.2.0?
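For what it's worth, one approach that might work across both versions is to register the deserializer as a plain Kafka consumer property instead of through the setKey/setValue builder methods, since the kafkaProps map is handed to the underlying KafkaConsumer in both releases. This is only a sketch under that assumption, untested against either version; the helper name is invented:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: register the value deserializer as a raw consumer property, the
// same way a bare KafkaConsumer would be configured. The literal key is
// Kafka's standard "value.deserializer" config name.
public class CrossVersionDeserializerProps {
    public static Map<String, Object> consumerProps(Class<?> valueDeserializer) {
        Map<String, Object> props = new HashMap<>();
        props.put("value.deserializer", valueDeserializer.getName());
        return props;
    }
}
```

The resulting map could then be handed to the builder through the existing .setProp(kafkaConsumerProp) call shown in your snippet, bypassing setValue entirely.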

Best regards,
Alexandre Vermeerbergen







2017-11-20 18:40 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

Happy to hear that it's at least running for you now :) Eager to hear your results.

Regarding the getTopicsString format, the Subscription interface doesn't specify the string format (and it was never intended to be parsed in this way). In my opinion the format is an implementation detail, so we should be free to change it at any time.

2017-11-20 15:01 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Hello All,

Good news today: I found & solved what was preventing my topologies from consuming since I had upgraded from storm-kafka-client 1.1.0 to storm-kafka-client.1.2.0-lastestsnapopfromstig.

The reason comes from our own BasicKafkaSpout class, which is our homebrewed Kafka spout based on the same configuration settings as the official storm-kafka-client Kafka spout.

The issue came from the fact that storm-kafka-client 1.1.0 (at least) doesn't expose a way to get the list of consumed topics, so we had to parse the value returned by KafkaSpoutConfig<K, V>.getSubscription().getTopicsString() in order to extract the list of topics.

Here's where the issue lies:

    public BasicKafkaSpout(KafkaSpoutConfig<K, V> config) {
        this.kafkaBrokers = (String) config.getKafkaProps().get(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
        this.consumerId = config.getConsumerGroupId();
        logger.debug("topics are: {}", config.getSubscription()
                .getTopicsString());
        String topicsStr = config.getSubscription().getTopicsString();
        this.topics = topicsStr.substring(1, topicsStr.length() - 1).split(",");

        => Bug here with the storm-kafka-client 1.2.0 snapshot, because if there is only one topic, the value of topicsStr, which was "[mytopic]" with 1.1.0, is now "mytopic"

and here's a fixed (and ugly) version of the same code snippet which works with both 1.1.0 & the 1.2.0 snapshot:

    public BasicKafkaSpout(KafkaSpoutConfig<K, V> config) {
        this.kafkaBrokers = (String) config.getKafkaProps().get(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
        this.consumerId = config.getConsumerGroupId();
        logger.debug("topics are: {}", config.getSubscription()
                .getTopicsString());
        String topicsStr = config.getSubscription().getTopicsString();
        if (topicsStr.startsWith("[") && topicsStr.endsWith("]")) {
            this.topics = topicsStr.substring(1, topicsStr.length() - 1).split(",");
        }
        else {
            this.topics = topicsStr.split(",");
        }
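Distilled into a self-contained helper (our own naming, not part of the spout API), the tolerant parsing amounts to:

```java
// Tolerant parser for Subscription.getTopicsString() output:
// storm-kafka-client 1.1.0 returned "[topicA,topicB]", while the 1.2.0
// snapshot returns "topicA,topicB" without brackets.
public class TopicsStringParser {
    public static String[] parse(String topicsStr) {
        String s = topicsStr;
        if (s.startsWith("[") && s.endsWith("]")) {
            s = s.substring(1, s.length() - 1); // strip 1.1.0-style brackets
        }
        return s.split(",");
    }

    public static void main(String[] args) {
        // Both formats yield the same topic list.
        System.out.println(java.util.Arrays.toString(parse("[mytopic]")));
        System.out.println(java.util.Arrays.toString(parse("mytopic")));
    }
}
```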

Maybe it would be good to either restore this string representation as it was in 1.1.0 or, if that is not your intention, to document the change.

Now I guess your question is: "why on earth did you implement your own Kafka Spout?"

The answer is simple: we were waiting for 1.2.0, which contains a fix to storm-kafka-client so that Spout statistics are visible in the Storm UI when the spout is used in auto-commit mode. Our own spout was designed on purpose to have the same config as storm-kafka-client, so as to make the switch from our implementation to the official storm-kafka-client 1.2.0 as simple as possible (as a matter of fact, it's a simple property to change on our side). We couldn't wait for 1.2.0 because we had to switch to Kafka 0.10.2.0 as soon as possible.

Next steps for me related to the Storm 1.2.0 preview tests:
* Try our topologies with the official storm-kafka-client 1.2.0 in order to test for regressions
* Try our topologies with the official storm-kafka-client 1.2.0 on our pre-production in order to test for performance regressions.

Hope it helps, stay tuned :)

Best regards,
Alexandre Vermeerbergen


2017-11-20 8:49 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

Alexandre,

Could you also post the BasicKafkaSpout source file? I'm curious what it's doing.

2017-11-20 7:50 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Hello Jungtaek,

OK, I will activate these traces, but since we need to capture the Spouts' initialization traces, how should I activate them?

Indeed, if I use one of the techniques shown here
https://community.hortonworks.com/articles/36151/debugging-an-apache-storm-topology.html
then I'm afraid I have to wait until the topology is deployed before setting its trace level.

Would you please clarify how to activate traces enabling us to get the spout's initialization?
(Please be as specific as possible for our current case - I can even modify some code if that's the only way to get this "early" activation, but then please specify which lines of code I need to add.)

Best regards,
Alexandre Vermeerbergen


2017-11-20 1:20 GMT+01:00 Jungtaek Lim <kabh...@gmail.com>:

It would be much appreciated if you could change the topology log level to the following: level: DEBUG, logger name: 'ROOT' or 'org.apache', timeout: long enough (say 1800 or 3600), and kill the worker which contains the Spout in the UI or console.

The above instruction enables logging at DEBUG level, and the Kafka spout gets restarted, so we can see the initialization phase of the Kafka spout.

Thanks,
Jungtaek Lim (HeartSaVioR)

On Nov 20, 2017 at 8:09 AM, Alexandre Vermeerbergen <avermeerber...@gmail.com> wrote:

Hello Stig,

Thanks again for your latest fix.

I no longer have any exception when submitting my topologies, but then they read nothing from my Kafka topics.

So I made another test: I reverted from your latest storm-kafka-client 1.2.0 snapshot jar to storm-kafka-client-1.1.0.jar in my topologies, but I kept an installation based on the Storm 1.2.0 snapshot => Kafka consumption is OK.

If I revert again to your latest storm-kafka-client 1.2.0 snapshot jar, then Kafka consumption never starts.

I have no exceptions in the workers' artifact logs; I'm unsure what to try now...

Any trace which I could activate?

Best regards,
Alexandre Vermeerbergen






2017-11-19 16:48 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

Thanks, I've addressed the issue here https://github.com/apache/storm/pull/2428 and uploaded a new jar at the same link here https://drive.google.com/file/d/1DgJWjhWwczYgZS82YGd63V3GT2G_v9fd/view?usp=sharing. I went over the PR that made these changes, and I don't believe anything else breaks backward compatibility, but we'll see.

2017-11-19 15:48 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Hi Stig,

Here's the source of our "BasicKafkaSpout" class's constructor (I can send the full source if needed):


    public BasicKafkaSpout(KafkaSpoutConfig<K, V> config) {
        this.kafkaBrokers = (String) config.getKafkaProps().get(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
        this.consumerId = config.getConsumerGroupId();
        logger.debug("topics are: {}", config.getSubscription()
                .getTopicsString());
        String topicsStr = config.getSubscription().getTopicsString();
        this.topics = topicsStr.substring(1, topicsStr.length() - 1).split(",");
        switch (config.getFirstPollOffsetStrategy()) {
        case UNCOMMITTED_EARLIEST:
        case EARLIEST:
            this.strategy = "earliest";
            break;
        case UNCOMMITTED_LATEST:
        case LATEST:
        default:
            this.strategy = "latest";
        }
        this.keyDeserializer = config.getKeyDeserializer().getClass();
        this.valueDeserializer = config.getValueDeserializer().getClass();
        this.translator = config.getTranslator();
        this.consumerProps = new Properties();
        if (config.getKafkaProps() != null) {
            consumerProps.putAll(config.getKafkaProps());
        }
    }

line 72 is the following one:

        this.keyDeserializer = config.getKeyDeserializer().getClass();
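Given the builder change discussed earlier in the thread, getKeyDeserializer() may return null when the deserializer only lives in kafkaProps, which would explain the NPE on that line. A defensive lookup could be sketched as follows (a hypothetical helper with invented names, not the actual fix):

```java
import java.util.Map;

// Sketch: prefer the typed accessor's result; when it is null, fall back to
// the raw "key.deserializer" consumer property before touching getClass().
public class DeserializerClassLookup {
    public static Object keyDeserializerClass(Object typedDeserializer,
                                              Map<String, Object> kafkaProps) {
        if (typedDeserializer != null) {
            return typedDeserializer.getClass();
        }
        return kafkaProps.get("key.deserializer"); // may itself be null
    }
}
```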

Hope it helps,

Alexandre Vermeerbergen

2017-11-19 15:37 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

Alexandre,

I'm sorry this is giving you so much trouble. Looking at the stack trace you posted, it seems like the NPE is coming from BasicKafkaSpout line 72. Can you post that line (and maybe some of the surrounding code)?

2017-11-19 15:13 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Hi Stig,

After having included commons-lang-2.5.jar in my topologies, I still have 11 topologies failing, with this new message, this time a NullPointerException:

Running: /usr/local/jdk/bin/java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/usr/local/Storm/storm-stable -Dstorm.log.dir=/usr/local/Storm/storm-stable/logs -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /usr/local/Storm/storm-stable/*:/usr/local/Storm/storm-stable/lib/*:/usr/local/Storm/storm-stable/extlib/*:/usr/local/Storm/StormTopologiesBJ.jar:/usr/local/Storm/storm-stable/conf:/usr/local/Storm/storm-stable/bin -Dstorm.jar=/usr/local/Storm/StormTopologiesBJ.jar -Dstorm.dependency.jars= -Dstorm.dependency.artifacts={} com.acme.storm.evaluator.SLAEventsInterceptorTopology slaEventsInterceptor
1939 [main] WARN  c.a.s.u.StormUtils - Couldn't read /usr/local/Storm/flux/slaEventsInterceptor_kafkaConsumer.properties file: /usr/local/Storm/flux/slaEventsInterceptor_kafkaConsumer.properties (No such file or directory)
1978 [main] WARN  o.a.s.k.s.KafkaSpoutConfig - Do not set enable.auto.commit manually. Instead use KafkaSpoutConfig.Builder.setProcessingGuarantee. This will be treated as an error in the next major release. For now the spout will be configured to behave like it would have in pre-1.2.0 releases.
1982 [main] INFO  c.a.s.u.StormUtils - Use Kafka Spout: com.acme.storm.evaluator.spout.BasicKafkaSpout
Exception in thread "main" java.lang.NullPointerException
        at com.acme.storm.evaluator.spout.BasicKafkaSpout.<init>(BasicKafkaSpout.java:72)
        at com.acme.storm.util.StormUtils.getKafkaSpout(StormUtils.java:888)
        at com.acme.storm.evaluator.SLAEventsInterceptorTopology.main(SLAEventsInterceptorTopology.java:120)

and here's the line from our source from which this exception was triggered:

        IRichSpout kafkaSpoutForMainTopic = StormUtils
                .getKafkaSpout(spoutConfigForMainTopic, kafkaSpoutClass);

Looks like the upgrade from 1.1.0 (our current production setup) to 1.2.0 is a bit more difficult than I expected. But anyway, I greatly appreciate Storm, so I hope my feedback will help keep it as good as possible :)

Please let me know if further details are required to solve this issue with the storm-kafka-client 1.2.0 snapshot.

Best regards,
Alexandre


2017-11-19 13:26 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

Using Maven and controlling dependencies aren't mutually exclusive. Tools like Nexus (https://help.sonatype.com/display/NXRM2/Procurement+Suite) allow you to control access to dependencies without manually putting jars in an SCM.

I realize this doesn't help you right now, just thought I'd mention it so you don't get the idea that I'm advocating a build process that blindly includes new dependencies.

2017-11-19 13:05 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Believe it or not, we don't internally use Maven: we strictly control our dependencies.

So okay, I have delivered a "storm-commons-lang2.jar" in our SCM system in order to specifically add it to our topologies' big jars depending on storm-kafka-client (feeling sad).

I'll tell you the outcome ASAP.

Best regards,
Alexandre Vermeerbergen


2017-11-19 12:57 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

I also don't think that adding dependencies constitutes a breaking change, nor should it. Dependency management tooling like Maven will handle pulling the right dependencies automatically.

2017-11-19 12:34 GMT+01:00 Jungtaek Lim <kabh...@gmail.com>:

Alexandre,

There's so much pain in using a relocated artifact in another module in a project. You need to refer to the class by its relocated name, and your IDE (at least IntelliJ) complains and the compilation in the IDE will not succeed. (The build via Maven will succeed though.)
There was the same issue on 2.0.0 as well, and for now we don't shade anything in 2.0.0. We need to discuss which things to shade before discussing releasing, indeed.

And normally we have been allowing adding dependencies in minor releases. I understand someone could say that it is breaking dependencies, and should lead to another relocation as well. But my 2 cents: the requirement is too restrictive, and may end up with Guava-like versioning (releasing major versions sooner...).
I think relocating is not an ideal solution, and eventually we may have a nicer solution (like classloader isolation) and then we could sort out all the related issues. Before that, I could not block adding dependencies in a minor release if it is necessary.

Thanks,
Jungtaek Lim (HeartSaVioR)

On Nov 19, 2017 at 8:02 PM, Stig Rohde Døssing <stigdoess...@gmail.com> wrote:

Alexandre,

Thanks for trying. I think this time it's not a problem with storm-kafka-client. It has a dependency on commons-lang 2.5, which is declared in the storm-kafka-client pom. We usually recommend that people use Maven/Gradle/Ivy or similar systems to ensure that they get all dependencies when they build their topologies. In such a case Maven (or similar) would see that your topology depends on storm-kafka-client, and also download and include commons-lang in the topology jar. When you put together dependencies manually you need to include commons-lang manually as well.

You can get the commons-lang 2.5 jar here

http://central.maven.org/maven2/commons-lang/commons-lang/2.5/commons-lang-2.5.jar

2017-11-19 11:49 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Hello Stig,

Thank you very much for your quick answer & fix.
Unfortunately, I still have topologies failing to start, but with a different exception this time:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
        at org.apache.storm.kafka.spout.NamedTopicFilter.getTopicsString(NamedTopicFilter.java:66)
        at org.apache.storm.kafka.spout.ManualPartitionSubscription.getTopicsString(ManualPartitionSubscription.java:70)
        at com.acme.storm.evaluator.spout.BasicKafkaSpout.<init>(BasicKafkaSpout.java:59)
        at com.acme.storm.util.StormUtils.getKafkaSpout(StormUtils.java:888)
        at com.acme.storm.evaluator.SLAEventsInterceptorTopology.main(SLAEventsInterceptorTopology.java:120)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.lang.StringUtils
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

Looks like you introduced a dependency on commons-lang that wasn't part of Storm, isn't it?

Keep faith, next time will be better :)

Best regards,
Alexandre

2017-11-19 10:44 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

I've put up a fix here https://github.com/apache/storm/pull/2426. There's an updated storm-kafka-client jar at https://drive.google.com/file/d/1DgJWjhWwczYgZS82YGd63V3GT2G_v9fd/view?usp=sharing if you'd like to try it out.

2017-11-19 10:17 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

Oops. The exception is not intentional, it's a bug. In 1.2.0 we check for the "enable.auto.commit" key in the kafkaConsumerProp map, and if it is set we warn in the log that it shouldn't be, because users should use the KafkaSpoutConfig.Builder.setProcessingGuarantee method instead. When the property is set we try to set the processing guarantee to match the pre-1.2.0 behavior, but I made a mistake and assumed the property is a boolean when it might actually be either a boolean or a string. I'll put up a fix ASAP.
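The type-tolerant parsing described above could look roughly like this (a sketch of the idea with invented names, not the actual patch):

```java
// enable.auto.commit may be supplied as a Boolean or as a String ("false"),
// so a plain (Boolean) cast throws ClassCastException for String values.
public class AutoCommitFlag {
    public static boolean parse(Object value) {
        if (value instanceof Boolean) {
            return (Boolean) value;
        }
        // Kafka accepts string-typed config values, so tolerate them too.
        return Boolean.parseBoolean(value.toString());
    }
}
```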

2017-11-19 9:55 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Hello Stig,

Here's my first feedback on this Storm 1.2.0 preview on my Supervision system based on Storm: I have 11 topologies KO (not even able to start), and 4 topologies which seem to be unaffected.

Details:
- I used the binaries posted on dropbox by Jungtaek and your binary for storm-kafka-client
- Rebuilt all our topologies using these storm-core.jar & storm-kafka-client.jar from this storm-1.2.0-snapshot (and I got absolutely no error at build time). Meaning that I have new "big jar" files including this newer storm-kafka-client.jar (except for 1 topology which depends on another build system)
- The 11 topologies which fail to start show up the following trace when they are submitted:

Running: java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/usr/local/Storm/storm-stable -Dstorm.log.dir=/usr/local/Storm/storm-stable/logs -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /usr/local/Storm/storm-stable/*:/usr/local/Storm/storm-stable/lib/*:/usr/local/Storm/storm-stable/extlib/*:/usr/local/Storm/StormTopologiesBJ.jar:/usr/local/Storm/storm-stable/conf:/usr/local/Storm/storm-stable/bin -Dstorm.jar=/usr/local/Storm/StormTopologiesBJ.jar -Dstorm.dependency.jars= -Dstorm.dependency.artifacts={} com.acme.storm.evaluator.SLAEventsInterceptorTopology slaEventsInterceptor Latest
1022 [main] INFO  c.a.s.e.SLAEventsInterceptorTopology - Topology : 'slaEventsInterceptor', KafkaOffsetTimeStrategy = 'Latest'
1026 [main] WARN  c.a.s.u.StormUtils - Couldn't read /usr/local/Storm/flux/slaEventsInterceptor_kafkaConsumer.properties file: /usr/local/Storm/flux/slaEventsInterceptor_kafkaConsumer.properties (No such file or directory)
1037 [main] WARN  o.a.s.k.s.KafkaSpoutConfig - Do not set enable.auto.commit manually. Instead use KafkaSpoutConfig.Builder.setProcessingGuarantee. This will be treated as an error in the next major release. For now the spout will be configured to behave like it would have in pre-1.2.0 releases.
Exception in thread "main" java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Boolean
        at org.apache.storm.kafka.spout.KafkaSpoutConfig.setAutoCommitMode(KafkaSpoutConfig.java:721)
        at org.apache.storm.kafka.spout.KafkaSpoutConfig.<init>(KafkaSpoutConfig.java:97)
        at org.apache.storm.kafka.spout.KafkaSpoutConfig$Builder.build(KafkaSpoutConfig.java:671)
        at com.acme.storm.evaluator.SLAEventsInterceptorTopology.main(SLAEventsInterceptorTopology.java:118)

The part of our application which triggered the above stack trace is the following one:

        KafkaSpoutConfig<String, Event> spoutConfigForMainTopic = KafkaSpoutConfig
                .builder(elasticKafkaBrokers, KafkaTopics.MAIN)
                .setValue(EventKafkaDeserializer.class)
                .setGroupId(consumerId + "_" + KafkaTopics.MAIN)
                .setFirstPollOffsetStrategy(strategy)
                .setProp(kafkaConsumerProp)
                .setRecordTranslator(
                        new SLAEventsInterceptorKafkaRecordTranslator())
                .build();

I understand that there's a WARN message, but if it's a warning, then why do we exit with such a brutal exception?

Best regards,
Alexandre Vermeerbergen




2017-11-18 14:54 GMT+01:00 Stig Rohde Døssing <stigdoess...@gmail.com>:

Alexandre,

I've uploaded the 1.2.0-SNAPSHOT storm-kafka-client jar here https://drive.google.com/file/d/1DgJWjhWwczYgZS82YGd63V3GT2G_v9fd/view?usp=sharing.
You can probably keep the 1.1.0 versions of storm-hbase and storm-hdfs, since the core Storm API hasn't changed.

If you want to build the jars yourself, clone the repository at https://github.com/apache/storm and check out the 1.x-branch branch. You can build the entire project by running "mvn clean install -DskipTests" in the root. The individual module jars will be available in the target directories for each module, e.g. storm-kafka-client.jar will be in external/storm-kafka-client/target.

If you need to build the regular Storm distribution (the tarball Jungtaek sent you), you can go to storm-dist/binary and run "mvn clean install -Dgpg.skip". The tar/zip will be in storm-dist/binary/target.

Further reference in case you need it: https://github.com/apache/storm/blob/master/DEVELOPER.md#build-the-code-and-run-the-tests.

2017-11-18 12:57 GMT+01:00 Alexandre Vermeerbergen <avermeerber...@gmail.com>:

Hello Jungtaek,

Thanks for the link to the 1.2.0 preview binaries.
However, since we heavily depend on Storm Kafka Client, would you please either add it or remind me how to build the jars of this external lib, at the same "snapshot" version as the rest?
On a side note, some of our topologies also rely on storm-hbase & storm-hdfs: does it matter if these latter ones stay at the 1.1.0 version for this test? Indeed, I want to focus on Storm 1.2.0 "core" + Storm Kafka Client 1.2.0.

Best regards,
Alexandre Vermeerbergen


2017-11-18 7:41 GMT+01:00 Jungtaek Lim <kabh...@gmail.com>:

Alexandre,

https://www.dropbox.com/s/mg2gnunk24oesyc/apache-storm-1.2.0-SNAPSHOT.tar.gz?dl=0

The above link is a custom binary distribution of the current 1.x-branch (SNAPSHOT of 1.2.0). Could you run the test in your environment first and let us know about the result? Regardless of including metrics V2 in 1.2.0 or not, your test report should be valuable for us.

And please refer to my analysis of the current metrics:
https://cwiki.apache.org/confluence/display/STORM/Limitations+of+current+metrics+feature
to see why we want to move toward Metrics V2. The current PR on Metrics V2 is an initial state and could cover only some of the issues in the list, but I expect we will address other issues as well based on the PR.

Thanks in advance!
Jungtaek Lim (HeartSaVioR)

On Nov 18, 2017 at 6:40 AM, Alexandre Vermeerbergen <avermeerber...@gmail.com> wrote:

Hello Hugo,

As I already posted, I was getting ready to test the upcoming RC, so yeah: as soon as I have some binaries for testing, I can run any Storm 1.2.0 preview on my pre-production (well stressed) environment. Is there some URL from where I can download such "snapshot" binaries?

Best regards,
Alexandre Vermeerbergen

PS: By the way, I have no interest in the metrics-related enhancements you mentioned - I can understand it is of interest for people aware of it and who are expecting "something" related to it - but I would be sad to see a postponing of the imminent Storm 1.2.0 release for something we do not use at all.
PPS: I would love to learn more about these "metrics"; maybe it's something worthwhile which I never quite understood... any link with a clear explanation of this feature?


2017-11-17 17:44 GMT+01:00 Hugo Da Cruz Louro <hlo...@hortonworks.com>:

I am also in agreement that we should not delay the 1.2.0 release for too long, but in order to release it a few things need to be kept in mind:

- If we want to avoid releasing 1.3.0 soon or at all, that means that we should include in 1.2.0 as many important features (e.g. metricsV2) as possible within a reasonable time frame. Also, to avoid as much back porting as possible one should really get all features that we foresee we want to maintain into 1.2.0 right away, and then simply maintain them there. However it's a fact of life that if a bug is found and it is a blocker, it will have to be back ported.

- STORM-2153 details that the new metrics requirements were driven by the users. Therefore if this feature is highly important and sought after, it may not make much sense to release 1.2.0 without it. If we do release 1.2.0 without the metrics, does it mean that metrics will go only in 2.0? I would conjecture that most production deployments will take a while to upgrade to 2.0 even after it is released. That means that they will still be running Storm without the benefit of the new metrics.

- Several fixes have gone into storm-kafka-client. There have been a lot of changes and I wonder to what degree they have been system tested in addition to the existing unit tests.

@Alexandre, since you are using storm-kafka-client and have filed some bugs and driven some feature requests, I would like to ask if you could help us, within what is reasonably possible for you, with the following:

- system test the latest storm-kafka-client changes in your test/pre-production environment
- provide some information about your setup:
     - how you are using storm-kafka-client
     - how it is performing
     - Kafka brokers, number of topics/partitions, Storm parallelism, and some info about your network
- if it is reasonable to do so, share some of your tests such that we can also test them at our end
- tell us specific things that you would like us to test

Thanks,
Hugo

On Nov 16, 2017, at 11:49 PM, Stig Rohde Døssing <stigdoess...@gmail.com> wrote:

I agree with Jungtaek: if metrics v2 can go in very soon it should go in, otherwise I'd rather release 1.2.0 now and work on getting 2.0.0 ready for release.

1.x and master have drifted pretty far apart, and it's causing a lot of porting work at this point. 1.x still has a lot of Clojure code, storm-core hasn't been split yet in that branch and it's also targeting JDK 1.7. It seems rare at this point that a PR cherry-picks cleanly onto 1.x from master.

I've linked the list of issues that are only fixed in 2.0.0 and not 1.x, to illustrate how far ahead 2.0.0 is.

https://pste.eu/p/9CJT.html

2017-11-17 5:41 GMT+01:00 Arun Iyer <ai...@hortonworks.com>:

Hi Taylor,

Is it https://github.com/apache/storm/pull/2203 ?

I think it would be great to get this into the 1.2 release. Can we try to address the issues in a week or so and get this in?

Thanks,
Arun

On 11/17/17, 7:13 AM, "P. Taylor Goetz" <ptgo...@gmail.com> wrote:

The original idea for the 1.2 release was that it would introduce the metrics v2 work (and there is additional work required there, so I understand a desire not to delay the release). Do we want to stick with that, or deviate? If the latter, would we do a 1.3 release for metrics?

As far as a 1.1.2 release, I'm fine with releasing that at any time.

-Taylor

On Nov 15, 2017, at 6:24 PM, Jungtaek Lim <kabh...@gmail.com> wrote:

I think we could start the release phase for both 1.1.2 and 1.2.0 when https://github.com/apache/storm/pull/2423 is merged.

Thanks,
Jungtaek Lim (HeartSaVioR)

On Nov 16, 2017 at 7:00 AM, Alexandre Vermeerbergen <avermeerber...@gmail.com> wrote:

Hello,

I'd love to see Storm 1.2.0 released: it's a perfect schedule for me to test any Release Candidate that might be available, if it happens soon.

Best regards,
Alexandre

2017-11-15 7:29 GMT+01:00 Arun Mahadevan <ar...@apache.org>:

Hi,

Looks like we are only waiting on https://issues.apache.org/jira/browse/STORM-2546.

Are there any other issues which are blockers for Storm 1.2.0? Would be great to see the 1.2.0 release out soon as it has a lot of critical fixes.

Thanks,
Arun


































