Hi Neelesh, you can implement the feature and contribute it to Storm.  
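
For reference, here is a rough sketch of what such a per-customer throttling spout could look like. The class name, the fixed per-second budget, and the way the per-customer buffers get filled are just assumptions for illustration, not anything that exists in Storm today:

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch only: holds back tuples for customers that exceed a per-second
// budget instead of dropping them. How the per-customer queues get filled
// (e.g. from a Kafka consumer thread) is deliberately left out.
public class PerCustomerThrottlingSpout extends BaseRichSpout {

    private static final int MAX_PER_SECOND = 100;    // assumed budget per customer

    private SpoutOutputCollector collector;
    private Map<String, Queue<String>> pending;        // customer -> buffered messages
    private Map<String, Integer> emittedThisSecond;    // customer -> tuples emitted in window
    private long windowStart;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        this.pending = new HashMap<String, Queue<String>>();
        this.emittedThisSecond = new HashMap<String, Integer>();
        this.windowStart = System.currentTimeMillis();
    }

    @Override
    public void nextTuple() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= 1000) {               // reset budgets every second
            emittedThisSecond.clear();
            windowStart = now;
        }
        boolean emitted = false;
        for (Map.Entry<String, Queue<String>> e : pending.entrySet()) {
            String customer = e.getKey();
            Integer used = emittedThisSecond.get(customer);
            int count = (used == null) ? 0 : used;
            if (count < MAX_PER_SECOND && !e.getValue().isEmpty()) {
                String msg = e.getValue().poll();
                collector.emit(new Values(customer, msg), customer + ":" + msg.hashCode());
                emittedThisSecond.put(customer, count + 1);
                emitted = true;
            }
        }
        if (!emitted) {
            Utils.sleep(1);                            // don't busy-spin while throttled
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("customer", "message"));
    }
}

A real version would of course pull from Kafka (filling the pending queues from a consumer thread) and tie ack()/fail() back to the per-customer offsets you mentioned.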

--  
Best Regards!
肖康(Kang Xiao)


On Monday, March 10, 2014, at 1:05, Neelesh wrote:

> Hi Kang, thanks for the response. I don't want to drop messages, but rather not 
> emit them as tuples until the throttling rates reach a certain level. But you are 
> right that I could use a modified version of KafkaSpout by maintaining an offset 
> per customer and not incrementing the corresponding offset until the throttling 
> rates for that customer allow it. There are some interesting problems to be 
> solved with this approach, though.
>    Thanks!
>      
> On Mar 9, 2014 7:57 AM, "Kang Xiao" <[email protected]> wrote:
> > Hi Neelesh  
> >  
> > Do you mean that you need custom throttling logic that drops some of the 
> > large-volume messages? Maybe you can implement a spout to do that.  
> >  
> > --  
> > Best Regards!
> >  
> > 肖康(Kang Xiao, <[email protected]>)
> > Distributed Software Engineer
> >  
> >  
> > On Thursday, March 6, 2014, at 15:25, Neelesh wrote:
> >  
> > > Hi,
> > >     We're evaluating Storm as our real-time stream processing 
> > > infrastructure. We are a SaaS company, and the stream has messages from 
> > > different customers. Some of our customers may generate a large volume of 
> > > messages, starving messages from other customers. We have custom 
> > > throttling logic built on top of Akka in another context, but we were 
> > > wondering if there is a way to plug a custom throttling strategy 
> > > into Storm. topology.max.spout.pending does not serve our purpose. Any 
> > > pointers are appreciated.
> > >  
> > > Thanks  
> > > -Neelesh
