Was the number of workers or the number of ackers changed across your
experiments? What numbers did you use?
When you have many executors, increasing the number of ackers reduces the
complete latency.
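For reference, a minimal sketch of the two knobs in question (Config setters
as in the 0.9.x backtype.storm API; the numbers are illustrative, not a
recommendation):

    import backtype.storm.Config;

    public class TopoTuning {
        public static Config baseConfig() {
            Config conf = new Config();
            conf.setNumWorkers(4); // topology.workers
            conf.setNumAckers(4);  // topology.acker.executors; more ackers
                                   // can cut complete latency
            return conf;
        }
    }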
Thanks and Regards,
Devang
On 20 May 2015 03:15, "Jeffery Maass" wrote:
> Maybe the difference has t
The fail method of the spout gets called on message timeouts. It will not
stop or suspend processing of the current message flow; everything happens
asynchronously. Read about ackers in the Storm docs.
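To illustrate (the class and its bookkeeping scheme are hypothetical, not
from this thread): the acker calls back into the spout asynchronously, and
the spout alone decides what to do with a failed message id.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import backtype.storm.topology.base.BaseRichSpout;

    public abstract class ReplayingSpout extends BaseRichSpout {
        // Tuples emitted but not yet acked, keyed by message id.
        protected final Map<Object, Object> pending = new ConcurrentHashMap<>();

        @Override
        public void ack(Object msgId) {
            pending.remove(msgId); // fully processed; forget it
        }

        @Override
        public void fail(Object msgId) {
            // Called on timeout or explicit fail; other in-flight tuples are
            // unaffected. A later nextTuple() call can re-emit pending.get(msgId).
        }
    }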
Thanks and Regards,
Devang
On 10 Jun 2015 08:45, "Ganesh Chandrasekaran"
wrote:
> I am also seeing t
One observation from my side: topologies requiring multiple workers are
distributed evenly across the cluster, but for topologies running on a single
worker, Storm tries to first fill the worker slots of a single supervisor
(which can be any supervisor in the cluster).
Thanks,
Devang
On 15 Jun 2015 02:27, "Matthias J. Sax"
What is your max spout pending value for the topology?
Also observe the CPU usage, i.e. how many cycles it is spending on the
process.
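For reference, a sketch of setting it (standard Config setter; the value is
illustrative):

    import backtype.storm.Config;

    public class PendingConfig {
        public static Config withPendingCap() {
            Config conf = new Config();
            // Caps un-acked tuples per spout task: too high can overload the
            // bolts and the CPU, too low starves throughput.
            conf.setMaxSpoutPending(1000);
            return conf;
        }
    }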
Thanks and Regards,
Devang
On 19 Jun 2015 02:46, "Fang Chen" wrote:
> Tried; no effect.
>
> Thanks,
> Fang
>
> On Mon, Jun 15, 2015 at 9:33 PM, Binh Nguyen Van
How many Kafka partitions exist on the topic?
How many Kafka spout instances are there within the topology?
Play around with the number of consumers subscribing to the Kafka topic, and
observe the latency, throughput and CPU usage as you change these parameters.
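As a starting point, a sketch of matching spout parallelism to the partition
count (storm-kafka's 0.9.x-era API; the topic, zkRoot and id are
placeholders):

    import backtype.storm.topology.TopologyBuilder;
    import storm.kafka.KafkaSpout;
    import storm.kafka.SpoutConfig;
    import storm.kafka.ZkHosts;

    public class KafkaTopology {
        public static TopologyBuilder build(int numPartitions) {
            SpoutConfig spoutConf = new SpoutConfig(
                    new ZkHosts("zk1:2181"), "my-topic", "/kafka-spout", "my-id");
            TopologyBuilder builder = new TopologyBuilder();
            // One spout executor per partition: fewer leaves partitions
            // shared, more leaves executors idle.
            builder.setSpout("kafka-spout", new KafkaSpout(spoutConf), numPartitions);
            return builder;
        }
    }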
Thanks and Regards,
Devang
On May 19, 2016 3:09 AM,
> ...written in Go,
> not sure if it's related to the Java process.
>
> --
> Alexandre Wilhelm
>
> On May 19, 2016 at 5:15:09 AM, Devang Shah (devangsha...@gmail.com) wrote:
>
> How many Kafka partitions exist on the topic?
>
> How many Kafka spout instances are there within the topology?
Hello Storm group,
We are in the process of migrating from Storm 0.9.3 to 1.0.1 but are facing
an issue with DRPC, where we see a "frame size greater than 16MB" exception
while processing a DRPC request. We have NOT changed the default
SimpleTransportPlugin.
Solutions tried so far:
1. Updated the n
We had to extend the SimpleTransportPlugin to override the default
frame-size limit.
Does anyone know of a cleaner solution?
PlainSaslTransportPlugin was introduced as its replacement, but it doesn't
seem to work.
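For the record, a rough sketch of the workaround (whether getMaxBufferSize()
is the exact overridable hook should be verified against your Storm
version's source; treat the method name as an assumption):

    import org.apache.storm.security.auth.SimpleTransportPlugin;

    public class LargeFrameTransportPlugin extends SimpleTransportPlugin {
        @Override
        protected int getMaxBufferSize() {
            // Raise the frame limit past the 16MB our DRPC requests hit.
            return 64 * 1024 * 1024;
        }
    }

Then point storm.yaml's storm.thrift.transport at the subclass (as far as I
know, that is the key the Thrift servers read).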
Thanks and Regards,
Devang
On Sep 15, 2016 8:17 PM,
We have a use case in our application whereby we want to restart the topology
a specified number of times in case of any critical exception. If it cannot
recover in that specified number of tries, then we eventually want to kill
the topology.
Now, as I understand, Storm restarts the worker in case of an
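For the "eventually kill the topology" step, a minimal sketch of doing it
from code via the Nimbus Thrift client (0.9.x backtype packages; the
topology name and wait time are placeholders):

    import java.util.Map;
    import backtype.storm.generated.KillOptions;
    import backtype.storm.utils.NimbusClient;
    import backtype.storm.utils.Utils;

    public class TopologyKiller {
        public static void kill(String topologyName) throws Exception {
            Map conf = Utils.readStormConfig();
            KillOptions opts = new KillOptions();
            opts.set_wait_secs(10); // let in-flight tuples drain first
            NimbusClient.getConfiguredClient(conf).getClient()
                    .killTopologyWithOpts(topologyName, opts);
        }
    }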
Hi Team,
I am facing an issue with one of our failover tests. Storm fails all the
messages after a worker restart.
Steps done:
0. 1 spout, 3 bolts, 5 ackers
1. Pre-load tibems with 50k messages
2. Start the topology
3. Let it run for a brief time, then kill the worker where the spout is
executing (
> -Taylor
>
>
> On Oct 23, 2014, at 12:44 PM, Devang Shah wrote:
>
> Hi Team,
>
> I am facing an issue with one of our failover tests. Storm fails all the
> messages after a worker restart.
>
> Steps done:
> 0. 1 spout, 3 bolts, 5 ackers
> 1. Pre-load tibems with 50k messages
It seems to be a bug in Storm unless someone confirms otherwise.
How can I file a bug against Storm?
On 25 Oct 2014 07:51, "Devang Shah" wrote:
> You are correct, Taylor. Sorry, I missed mentioning all the details.
>
> We have topology.max.spout.pending set to 1000 and we have
r Rao" wrote:
> Yes, it is the bug raised by Denigel; it is fixed in 0.9.3, please use
> that. Or use ZeroMQ in place of Netty and your problem will be resolved.
> On 27 Oct 2014 20:52, "Devang Shah" wrote:
>
>> It seems to be a bug in Storm unless someone confirms otherwise
> Which storm version are you using?
> You may want to check STORM-404 and STORM-329.
>
> Sean
>
>
> On Mon, Nov 3, 2014 at 9:27 AM, Devang Shah
> wrote:
>
>> Thanks much for notifying.
>>
>> Would you know the bug id? I did refer to the change log of 0.9.
>> https://issues.apache.org/jira/browse/STORM-406
>>
>>
>>
>>
>>
>> *From:* Devang Shah [mailto:devangsha...@gmail.com]
>> *Sent:* 04 November 2014 10:23
>> *To:* user@storm.apache.org
>> *Subject:* Re: Storm failing all the tuples post worker restart
Maybe try reading it from the Storm UI using the REST API.
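For instance, a minimal sketch of pulling the summary over HTTP (the host
and port are placeholders; /api/v1/topology/summary is the documented UI
route):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class UiStats {
        public static String topologySummary(String uiHost) throws Exception {
            URL url = new URL("http://" + uiHost + ":8080/api/v1/topology/summary");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                for (String line; (line = in.readLine()) != null; ) {
                    body.append(line); // JSON: per-topology uptime, counts, etc.
                }
            }
            return body.toString();
        }
    }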
Thanks and Regards,
Devang
On 16 Nov 2014 15:37, "Vladi Feigin" wrote:
> Hi,
>
> Maybe you can override the spout ack/fail methods and take it from there,
> in case you use ackers.
> Vladi
>
> On Fri, Nov 14, 2014 at 4:35 PM, Vadim Smirnov wrot
topology.debug is set to true; please check storm.yaml.
Disable that, and try putting log statements before and after the emit calls
and take it from there. Initially, try sending a single message to check that
the topology behaves as expected.
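For illustration (the logger and names are mine, not from the thread):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import backtype.storm.Config;
    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    public class DebugHints {
        private static final Logger LOG = LoggerFactory.getLogger(DebugHints.class);

        public static Config quietConfig() {
            Config conf = new Config();
            conf.setDebug(false); // topology.debug=true floods the worker logs
            return conf;
        }

        // Targeted logging around a single emit instead of global debug.
        static void tracedEmit(BasicOutputCollector collector, Tuple input) {
            LOG.info("before emit: {}", input);
            collector.emit(new Values(input.getValue(0)));
            LOG.info("after emit");
        }
    }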
Thanks and Regards,
Devang
On 15 Nov 2014 11:07, "Minqi Jiang" wrote:
We had faced similar issues and identified the cause to be heavy processing
in the bolts. GC was hogging the CPU cycles, and the workers didn't get time
to communicate with Zookeeper, which eventually ended in worker restarts. Try
using the concurrent mark-sweep (CMS) GC algorithm.
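For example (the flags and heap size are illustrative; the same string can
also go in storm.yaml as worker.childopts):

    import backtype.storm.Config;

    public class GcConfig {
        public static Config withCms() {
            Config conf = new Config();
            // Per-topology worker JVM flags: switch the collector to CMS.
            conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS,
                     "-Xmx2g -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled");
            return conf;
        }
    }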
Thanks and Regards,
Devang
On 15 Nov 2014 03:
Not required.
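For reference, the usual way to spot tick tuples in execute() so they can be
handled separately (0.9.x-style check against Constants; later releases ship
a TupleUtils helper for the same test):

    import backtype.storm.Constants;
    import backtype.storm.tuple.Tuple;

    public class TickTuples {
        public static boolean isTick(Tuple tuple) {
            return Constants.SYSTEM_COMPONENT_ID.equals(tuple.getSourceComponent())
                && Constants.SYSTEM_TICK_STREAM_ID.equals(tuple.getSourceStreamId());
        }
    }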
Thanks and Regards,
Devang
On 18 Nov 2014 21:10, "Niels Basjes" wrote:
> Hi,
>
> If you write a bolt that receives tick tuples from Storm,
> should you ack such a tick tuple or not?
>
> --
> Best regards / Met vriendelijke groeten,
>
> Niels Basjes
>
Can you post your topology configuration here: the number of workers, the
number of instances of each spout/bolt, max spout pending, etc.?
What processing are you doing in bolt B? Connecting to an external
service? Try replacing all the code in the execute method of bolt B with a
log statement and check if
_SIZE, 32768);
> config.put(Config.TOPOLOGY_DEBUG, true);
>
> thanks,
> Clay
>
> On Thu, Dec 4, 2014 at 6:22 AM, Devang Shah
> wrote:
>
>> Can you post your topology configuration here: the number of workers, the
>> number of instances of each spout/bolt, max spout pending, etc.?
Hi All,
Can anyone please shed some light on when Nimbus high availability
(clustering Nimbus so that an active one can be chosen by election) will be
available?
Any alternate solution would also be appreciated.
Thanks and Regards,
Devang
Parth Brahmbhatt"
wrote:
> The PR is open and we have made progress addressing most of the concerns
> raised, see https://github.com/apache/storm/pull/354. That being said I
> am not sure about the timeline.
>
> Thanks
> Parth
>
> From: Devang Shah
> Reply-To:
You can also use the power of Zookeeper to share data across JVMs, much
like what Storm and Kafka do.
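As an illustration, a sketch with Apache Curator (one common ZooKeeper
client; the thread itself doesn't prescribe it, and the path and connect
string are placeholders). Best suited to small, infrequently written shared
state:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class SharedState {
        public static void main(String[] args) throws Exception {
            CuratorFramework zk = CuratorFrameworkFactory.newClient(
                    "zk1:2181", new ExponentialBackoffRetry(1000, 3));
            zk.start();
            String path = "/app/shared";
            if (zk.checkExists().forPath(path) == null) {
                // Znode contents are visible to every JVM on the ensemble.
                zk.create().creatingParentsIfNeeded().forPath(path, "v1".getBytes());
            } else {
                zk.setData().forPath(path, "v1".getBytes());
            }
            byte[] value = zk.getData().forPath(path);
            System.out.println(new String(value));
            zk.close();
        }
    }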
Thanks and Regards,
Devang
On 6 May 2015 09:07, "Supun Kamburugamuva" wrote:
> I think you will need something like redis.
>
> Supun..
> On May 5, 2015 8:17 PM, "Huy Le Van" wrote:
>
>> Coul