Github user roshannaik commented on a diff in the pull request:

    https://github.com/apache/storm/pull/2241#discussion_r129729641
  
    --- Diff: conf/defaults.yaml ---
    @@ -253,11 +247,16 @@ topology.trident.batch.emit.interval.millis: 500
     topology.testing.always.try.serialize: false
     topology.classpath: null
     topology.environment: null
    -topology.bolts.outgoing.overflow.buffer.enable: false
    -topology.disruptor.wait.timeout.millis: 1000
    -topology.disruptor.batch.size: 100
    -topology.disruptor.batch.timeout.millis: 1
    -topology.disable.loadaware.messaging: false
    +topology.bolts.outgoing.overflow.buffer.enable: false # TODO: Roshan : Whats this ?
    +topology.disruptor.wait.timeout.millis: 1000  # TODO: Roshan: not used, but we may/not want this behavior
    +topology.transfer.buffer.size: 50000
    +topology.transfer.batch.size: 10
    +topology.executor.receive.buffer.size: 50000
    +topology.producer.batch.size: 1000  # TODO: Roshan:  rename
    +topology.flush.tuple.freq.millis: 5000
    +topology.spout.recvq.skips: 3  # Check recvQ once every N invocations of Spout's nextTuple() [when ACKs disabled]
    +
    +topology.disable.loadaware.messaging: true   # load aware messaging reduces throughput by ~20%.
    --- End diff --
    
    My experiments indicated that grouping (even without load awareness) was more expensive than it logically ought to be; with load awareness the cost was much higher.
    Taking a 20% hit on top of an existing bottleneck seems too costly. I am OK with reverting this, but I think load-aware messaging needs to be reviewed to understand why it is so expensive.
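    For anyone who wants to compare the two modes while reviewing this, here is a minimal sketch of overriding the new default for a single topology rather than editing defaults.yaml. The config key string is the one from the diff above; the topology name and the spout/bolt wiring are placeholders, and this assumes the standard `org.apache.storm.Config`/`StormSubmitter` API.
    
    ```java
    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.topology.TopologyBuilder;
    
    public class LoadAwareOverrideExample {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // ... register spouts and bolts here before submitting ...
    
            Config conf = new Config();
            // Re-enable load-aware messaging for this topology only,
            // overriding the cluster-wide default proposed in this diff.
            conf.put("topology.disable.loadaware.messaging", false);
    
            StormSubmitter.submitTopology("loadaware-comparison", conf, builder.createTopology());
        }
    }
    ```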

