your reply. I am using annotation-based configuration and using
> Spring Boot.
>
> Any idea how to do it using annotations?
>
>
>
> On Tue, Sep 29, 2015 at 6:41 PM, Ravi Sharma wrote:
>
> > Bolts and Spouts are created by Storm and not known to Spring Con
Nick,
Look into your queue sizing, both network-bound and in-memory.
I also try to use this pattern:
say I have spout S1 and two bolts B1 and B2 doing something for it (S1 ->
B1 -> B2).
Let's say I have to run the bolts in parallel (meaning 2 instances of B1 and 2
instances of B2),
and assume I hav
Hi Steve,
Storm's basic design is to process streams (open-ended, or say with no end) in
real time. There are a few hacky ways of stopping the cluster once the file is
finished, but I guess none of them will be good looking.
Basically your Storm cluster should be running all the time and waiting for
more messa
> I was able to make an integration with Spring, but the problem is that I
>>> have to autowire for every bolt and spout. That means that even if I
>>> parallelize the spout and bolt, it will get started for each instance. Is there
>>> some way t
ly within a single JVM AFAIK. The local
>> cluster is useful for development, testing your topology, etc. The real
>> deployment has to go through nimbus, run on workers started by supervisors
>> on one or more nodes, etc. Kind of difficult to simulate all that on a
>> single
cluster, I can initialize my context there and somehow make it
>> available to all spouts and bolts. Basically some shared location
>> where my application context can be initialized (once and only once) and
>> this context can be accessed by
>> all instances of Spout
No 100% right answers, you will have to test and see what will fit.
Personally I would suggest multiple spouts in one topology, and if you have N
nodes where the topology will be running, then each spout (reading from one queue)
should run N times in parallel.
If 2 queues and say 4 nodes,
then one topology,
4 Sp
ultiple spouts... What
> if something goes wrong in one spout or its associated bolts, does it
> impact the other spout as well?
>
> Thanks
> Ankur
>
> On Sun, Oct 11, 2015 at 10:21 PM, Ravi Sharma wrote:
>
>> No 100% right answers, you will have to test and see what will fi
nto mysql ? I
>> >> believe
>> >> > I will get acknowledgement inside the fail method in my Spout . So
>> If I
>> >> > reprocess it using 2 bolts , I believe it will again be sent to Bolt
>> >> > for
>> >> > saving i
Hi Rajiv,
I am not sure if this will increase throughput in any way. You still have
fixed resources and the work done is still the same; it's just that instead of
using the bolt's main thread you are spawning a new thread.
I see it as all negative, because all your work must be done on the bolt thread;
that's how you will scale
ur task concurrency, while of course it won't be exactly the
> same. This type of client is mostly there to allow you to do useful work
> while the request is running and thus improve your response time.
>
> That said you can always test to find out :)
>
>
>
> On Sat, Oc
client could be handling many
> thousands of connections with just a couple of threads.
>
> But agreed that it's uncommon that you benefit from it.
>
> On Sat, Oct 17, 2015 at 9:43 AM, Ravi Sharma wrote:
>
>> Even in those cases it's better to increase parallelism
nnot be passed like above.
>>
>> So the problem is only to make this context available once per JVM.
>> Hence I thought I would wrap it in a singleton and make it available to
>> all spouts and bolts per JVM.
>>
>> Once I have this context initialized a
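The singleton-per-JVM idea described above can be sketched as follows. This is an illustrative, hypothetical `ContextHolder` (the name and the init logic are mine, not from the thread); in a real topology the held value would be a Spring `ApplicationContext`, but a plain `Map` stands in here so the sketch has no Spring dependency. Each spout/bolt calls `get()` from `prepare()`/`open()`, and only the first caller in a worker JVM pays the initialization cost.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical per-JVM holder: initialized lazily, exactly once, shared by
// every spout/bolt executor running inside the same worker JVM.
public final class ContextHolder {
    private static final AtomicInteger initCount = new AtomicInteger();
    // In a real topology this would be a Spring ApplicationContext.
    private static volatile Map<String, Object> context;

    private ContextHolder() {}

    // Double-checked locking: the first caller builds the context; later
    // callers (other spout/bolt instances in the same JVM) reuse it.
    public static Map<String, Object> get() {
        Map<String, Object> local = context;
        if (local == null) {
            synchronized (ContextHolder.class) {
                local = context;
                if (local == null) {
                    local = new ConcurrentHashMap<>();
                    local.put("initCount", initCount.incrementAndGet());
                    context = local;
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        // Simulate several bolt instances asking for the context.
        Map<String, Object> a = ContextHolder.get();
        Map<String, Object> b = ContextHolder.get();
        System.out.println(a == b);             // same instance
        System.out.println(a.get("initCount")); // initialized exactly once
    }
}
```

Note that this gives you one context per worker JVM, not per topology: with multiple workers, each worker initializes its own copy.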
Calculated in previous Bolt/Spout or calculated in previous run?
Ravi
On Sat, Nov 7, 2015 at 11:27 AM, Miguel Ángel Fernández Fernández <
miguelangelprogramac...@gmail.com> wrote:
> In a trident scenario, a realtime operation needs to know the previous
> calculated result.
>
> My current solutio
Hi All,
I would like to send some extra information back to the spout when a tuple
fails in some bolt, so that the spout can decide whether it wants to replay it
or just put the message into a queue outside Storm for admins to view.
So is there any way I can attach some more information when sending back a
fail
tion or a simple boolean flag which tells the spout that it needs
> to be replayed. For the ones which don't need to be replayed it takes the
> default value of false.
>
>
> Like I said before, it is a very simple thought, but I could think
> this may work based on the info you provided and
}
>
>
> Now, when the _collector.fail method is called, the spout's fail method gets
> invoked:
>
> public void fail(Object msgId) {
>     Bean b1 = (Bean) msgId;
>     String failureReason = b1.getFailureReason();
> }
>
>
> *You will see the failureReas
Hi Ziang,
I think you should be able to define it, but you will have to make sure
that you won't go into an infinite loop.
Ravi.
On Tue, Mar 22, 2016 at 3:44 AM, Xiang Wang wrote:
> Could anyone help?
> Thanks.
>
>
> ---
> Xiang Wang PhD Candidate
> Database Rese
1.0.2
On Wed, Sep 7, 2016 at 2:22 PM, davo...@crossing-technologies.com <
davo...@crossing-technologies.com> wrote:
> Ùuuh
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
> Original message
> From: Manu Zhang
> Date: 9/7/16 14:15 (GMT+01:00)
> To: user
> Subject: Fwd: [meetu
Hi T.I.
A few things on why the spout is responsible for replay rather than the various bolts:
1. Ack and fail messages carry only the message ID. Usually your spout
generates the message ID and knows what tuple/message is linked to it (via the
source, i.e. JMS etc.). If an ack or fail happens then the spout can do various
things
Hi Guys,
Recently I have written a small framework for integration tests (including a
Flux YAML file) and thought of sharing it with you all. Maybe it can help
someone.
https://github.com/ping2ravi/storm-integration-test
Thanks
Ravi.
Hi ivan,
I assume you are trying to do a per-user stream so that you can process each
user's events in the same sequence as they arrive. Is this a correct assumption?
If yes, then within Storm you can manage this even if you read from one
Kafka topic using one spout and output events on one stream. Just re
From this documentation: http://storm.apache.org/releases/1.0.2/flux.html
storm jar mytopology.jar org.apache.storm.flux.Flux --local config.yaml
On Thu, Sep 22, 2016 at 2:50 AM, Joaquin Menchaca
wrote:
> What is the minimal storm.yaml configuration do I need for `storm jar ...
> remote`?
>
Hi,
I am using Storm 1.2.1, Kafka 0.10.2.1 and storm-kafka-client 1.2.1, with
Kafka client 0.10.2.1.
I am able to run the topology and read messages from Kafka, but when I
go to the Storm UI and click on my topology name, it shows me a "Loading topology
summary" message for a minute or so and then sh
Hi Kai,
Seems like tuple timeout errors (no failed tuples in bolts, but the spout
reports failures). What's the value for max spout pending?
Set it to a smaller number like 10 just to test, and then see how high
you can go based on what you are doing in your topology.
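The knob being suggested here is Storm's max-spout-pending setting, which caps how many tuples can be in flight (un-acked) per spout task. A sketch of setting it in storm.yaml, with the test value of 10 from the advice above:

```yaml
# Cap un-acked tuples per spout task; tune upward once timeouts stop.
topology.max.spout.pending: 10
# Related knob: raise the tuple timeout if bolts legitimately take long.
topology.message.timeout.secs: 30
```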
Thanks
Ravi.
On Thu, Mar 7, 2019 at 2