Hello Storm developers,

I have been running the Storm 1.2.0 preview for about 2 weeks with 15
topologies, 6 Supervisors, 1 Nimbus, 3 ZooKeepers, and the Kafka 0.10 libs,
all with the storm-kafka-client spout instead of our own Kafka 0.10 spout.

I noticed that bolt statistics are going a bit nuts, with capacities
reaching the hundreds or more while everything seems to be running fine.
This looks like it is tied to topologies relying on tick tuples - not sure,
just a guess.

See what kind of ridiculous capacities I can reach:

Id                          Executors  Tasks  Emitted   Transferred  Capacity (last 10m)  Execute latency (ms)  Executed  Process latency (ms)  Acked     Failed
aggregate                   2          2      130660    130660       0.000                0.027                 130620    0.031                 130580    0
alertsToKafka               3          3      0         0            0.010                0.054                 2238580   337.080               2238420   0
checkUnknown                20         20     145840    145840       120.391              52060.445             19060     15062.337             18780     0
evaluateTriggers            17         17     2815520   2815520      130.115              112.998               4755580   28.311                4756320   0
eventTimeFilter             1          1      13983880  13983880     51.324               21.711                13989060  37.980                13989260  0
filter                      6          6      9103880   9091640      249.974              67.828                13990120  0.063                 13989800  0
filterEventsWithDimensions  3          3      4754520   4754520      233.222              143.596               8958840   234.242               8960180   0
flappingDetection           8          8      2229860   2229860      0.003                0.020                 2229820   0.017                 2229880   0

(The Error Host / Error Port / Last error / Error Time columns were empty and are omitted.)

(sorry for the lost formatting)

We have a capacity of 249.974 for "filter"?
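For context on why such values look impossible: as far as I understand, the Storm UI computes a bolt's capacity roughly as executed count × execute latency divided by the window length in ms, i.e. the fraction of the window an executor spent inside execute(). A quick sketch of that formula (the numbers below are made up for illustration, not taken from the table above):

```java
// Sketch of (my understanding of) the Storm UI "Capacity (last 10m)" figure.
// All numbers here are hypothetical, not the ones reported in the table.
public class CapacitySketch {

    // capacity = executed * executeLatencyMs / windowMs:
    // the fraction of the window an executor spent executing tuples.
    static double capacity(long executed, double executeLatencyMs, long windowMs) {
        return executed * executeLatencyMs / windowMs;
    }

    public static void main(String[] args) {
        long windowMs = 10 * 60 * 1000L; // the 10-minute UI window
        // 10,000 tuples at 30 ms each over 10 minutes -> 0.5, i.e. busy half the time.
        System.out.println(capacity(10_000, 30.0, windowMs));
        // A healthy bolt stays below 1.0; a value like 249.974 would mean ~250x more
        // execution time than wall-clock time, which is why it reads like a stats bug.
    }
}
```

If that formula is right, either the executed counter or the measured latency feeding it must be inflated for these bolts.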


Is it a known issue?

Best regards,
Alexandre




2017-12-01 22:38 GMT+01:00 Alexandre Vermeerbergen <[email protected]>:

> Hello Jungtaek,
>
> Thanks for your answer.
>
> Actually, the sooner Storm 1.2.0 can be released (now that we have a
> working storm-kafka-client with an impressive list of fixed issues), the
> better.
>
> Is it realistic to target the 15th of December as the latest date for
> starting the RC?
>
> Best regards,
> Alexandre Vermeerbergen
>
>
> 2017-11-30 0:33 GMT+01:00 Jungtaek Lim <[email protected]>:
>
>> We're getting closer to merging Metrics V2 in, so unless it takes more
>> than a couple of weeks, we will include it in 1.2.0 as well.
>> I think STORM-2835 could come in a couple of days, so that's not an issue
>> for releasing 1.2.0.
>>
>> -Jungtaek Lim (HeartSaVioR)
>>
>> On Thu, Nov 30, 2017 at 1:01 AM, Alexandre Vermeerbergen <
>> [email protected]> wrote:
>>
>> > Hello Storm Dev team,
>> >
>> > Our tests with the Storm 1.2.0 preview on our (large) preproduction
>> > environment have been running fine for 1 week.
>> >
>> > This sounds like a good opportunity to ask whether a Storm 1.2.0 release
>> > could come soon, so that we could target it for our next production
>> > upgrade.
>> >
>> > Yet I noticed this new JIRA:
>> > https://issues.apache.org/jira/browse/STORM-2835
>> >
>> > Could it be important enough that we need it to be included in Storm
>> > 1.2.0?
>> >
>> > Best regards,
>> > Alexandre Vermeerbergen
>> >
>> >
>> >
>> > 2017-11-22 17:43 GMT+01:00 Alexandre Vermeerbergen <[email protected]>:
>> >
>> > > Hello All,
>> > >
>> > > It seems that the "[Discuss] Release Storm 1.2.0" thread reached some
>> > > limit, as my latest posts to it aren't being received.
>> > >
>> > > Anyway, here's the follow-up on my tests of the storm-1.2.0 preview
>> > > with the latest version of Stig's storm-kafka-client:
>> > >
>> > > It works!
>> > >
>> > > But... to make my topologies compatible with both 1.1.0 and 1.2.0, I
>> > > had to write an ugly method based on reflection so as to activate
>> > > "acks" on the official Kafka spout when used in auto-commit mode:
>> > >
>> > >     /**
>> > >      * Toggle the mode that makes the Kafka spout able to send acks, if it's supported.
>> > >      * Support for this toggle was introduced in Storm 1.2.0, so we internally use
>> > >      * reflection in order to keep our topologies compatible with pre-1.2.0.
>> > >      * @param builder A Kafka spout config builder.
>> > >      * @return The spout config builder that was passed as argument, with the toggle set if supported.
>> > >      */
>> > >     public static Builder<?, ?> fixKafkaSpoutAcking(Builder<?, ?> builder) {
>> > >         try {
>> > >             Method m = builder.getClass().getMethod("setTupleTrackingEnforced", boolean.class);
>> > >             m.invoke(builder, true);
>> > >         } catch (NoSuchMethodException | SecurityException | IllegalAccessException
>> > >                 | IllegalArgumentException | InvocationTargetException e) {
>> > >             // Assume we're running with storm-kafka-client 1.1.0, which lacks this method.
>> > >         }
>> > >         return builder;
>> > >     }
>> > > Next step for me: now that basic functional tests are OK, I want to
>> > > run the tests on my PPD environments, which have a really big data
>> > > rate, and compare them with my PRD one at the same volume... stay tuned!
>> > >
>> > > Thanks a lot to Stig for his very responsive fixes on
>> > > storm-kafka-client!
>> > >
>> > > Best regards,
>> > > Alexandre
>> > >
>> >
>>
>
>
