terests your question (or
> no one can answer your question, who knows). Open source is not free; it
> requires time and effort instead of money, and that's why open-source
> technical support is a valid business model.
>
> Hope this helps to answer your question.
>
> Thanks,
>
I'm wondering as well.
I tried it out a few weeks ago and thought it was powerful and easy to
integrate, but my own questions went unanswered for days.
Looking forward to a clear answer here.
On Sun, Oct 16, 2016 at 1:14 PM, Adrienne Kole wrote:
> Hi ,
>
> I am
Does anyone have a good idea here?
On Tue, Oct 11, 2016 at 11:05 PM, Cheney Chen <tbcql1...@gmail.com> wrote:
> Hi there,
>
> I tried out Local DRPC similar to the example https://github.com/apache/storm/blob/0a5a069c6223bdb1e7b9318f208c1b29cb302560/examples/storm-starter/src
Hi there,
I tried out Local DRPC similar to the example
https://github.com/apache/storm/blob/0a5a069c6223bdb1e7b9318f208c1b29cb302560/examples/storm-starter/src/jvm/org/apache/storm/starter/trident/TridentKafkaWordCount.java
It's working well. Eventually I need to query state remotely from hosts
Hi there,
I'm playing with Trident. I have one particular use case and don't have
much of a clue yet. I'd like to see if anyone has a good idea.
Here it is:
Input data is: (batchId, docId)
Output required:
batch1 {
docId1: count1
docId2: count2
}
Detailed input:
batch1 doc1
batch2 doc1
batch1 doc2
batch1 doc1
Wondering if anyone has any clues?
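[Editor's note] A minimal sketch of the aggregation being asked for, in plain Java with no Storm dependency. In Trident this would typically be a groupBy on the batch and doc fields feeding a Count aggregator; the class and method names below are illustrative, not Storm API:

```java
import java.util.*;

public class BatchDocCounter {
    // Count occurrences of each docId within each batchId:
    // (batchId, docId) pairs in, batchId -> {docId -> count} out.
    public static Map<String, Map<String, Integer>> count(List<String[]> pairs) {
        Map<String, Map<String, Integer>> result = new TreeMap<>();
        for (String[] p : pairs) {
            result.computeIfAbsent(p[0], k -> new TreeMap<>())
                  .merge(p[1], 1, Integer::sum);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String[]> input = Arrays.asList(
            new String[]{"batch1", "doc1"},
            new String[]{"batch2", "doc1"},
            new String[]{"batch1", "doc2"},
            new String[]{"batch1", "doc1"});
        // Matches the example above: batch1 {doc1=2, doc2=1}, batch2 {doc1=1}
        System.out.println(count(input));
    }
}
```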
On Wed, Oct 5, 2016 at 12:10 PM, Cheney Chen <tbcql1...@gmail.com> wrote:
> Hi there,
>
> I'm using storm 1.0.1. I have a use case like this: given one signal
> (received from a Kafka spout), fetch a bunch of users (in MongoDB), then do
Hi there,
I'm using storm 1.0.1. I have a use case like this: given one signal
(received from a Kafka spout), fetch a bunch of users (from MongoDB), then do
stream processing.
I'd prefer Trident, since the lambda-like API is so convenient, but I don't
have a good idea for iterating through MongoDB.
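[Editor's note] One common pattern for "iterate a collection when a signal arrives" is a paged fetch inside the function that handles the signal. The sketch below is plain Java with an in-memory stub standing in for a real MongoDB skip/limit (or cursor) query; `fetchAll` and the pager are hypothetical names, not Storm or MongoDB API:

```java
import java.util.*;
import java.util.function.BiFunction;

public class SignalDrivenFetch {
    // Drain a collection page by page. 'page' takes (offset, limit) and
    // returns the next page of users; an empty page signals the end.
    // In a real topology this would wrap a MongoDB query.
    public static List<String> fetchAll(
            BiFunction<Integer, Integer, List<String>> page, int limit) {
        List<String> all = new ArrayList<>();
        int offset = 0;
        while (true) {
            List<String> batch = page.apply(offset, limit);
            if (batch.isEmpty()) break;
            all.addAll(batch);
            offset += batch.size();
        }
        return all;
    }

    public static void main(String[] args) {
        List<String> users = Arrays.asList("u1", "u2", "u3", "u4", "u5");
        // In-memory stub for the database: slice the list by (offset, limit).
        List<String> got = fetchAll((off, lim) ->
            users.subList(Math.min(off, users.size()),
                          Math.min(off + lim, users.size())), 2);
        System.out.println(got);
    }
}
```

Each fetched page could then be emitted as its own tuple (or Trident batch) downstream, so a huge user set never has to sit in memory at once.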
r event volume and application logic)
>
> For things like unique counting you can use in-memory approach like we did
> (Hendrix) or use something like Redis with structures like set and
> hyperloglog.
>
> Thanks,
> Ambud
>
> On Sep 14, 2016 1:38 AM, "Cheney Chen"
>>
>>>>>
>>>>>
>>>>> If BoltA crashes due to some reason while replaying, only then should
>>>>> the Spout receive this as a failure, and the whole tuple tree should
>>>>> be replayed.
>>>>>
>>>>>
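[Editor's note] The quoted suggestion about unique counting (in-memory structures, or Redis with sets / hyperloglog) can be illustrated with the exact set-based variant below, plain Java only. For high cardinality, Redis's PFADD/PFCOUNT (HyperLogLog) would give an approximate count in bounded memory instead:

```java
import java.util.*;

public class UniqueCount {
    // Exact in-memory unique counting with a HashSet. Memory grows with
    // cardinality; HyperLogLog trades a small error for fixed memory.
    public static int uniques(List<String> events) {
        return new HashSet<>(events).size();
    }

    public static void main(String[] args) {
        System.out.println(uniques(Arrays.asList("a", "b", "a", "c", "b")));
    }
}
```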
Hi there,
We're using storm 1.0.1, and I'm checking through
http://storm.apache.org/releases/1.0.1/Guaranteeing-message-processing.html
I have questions about the two scenarios below.
Assume topology: S (spout) --> BoltA --> BoltB
1. S: anchored emit, BoltA: anchored emit
Suppose BoltB's processing fails.
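[Editor's note] For context on scenario 1: the Guaranteeing-message-processing page linked above describes how Storm's acker tracks each tuple tree as a single 64-bit value, XOR-ing in every anchored edge id on emit and XOR-ing it out on ack; the tree is complete only when the value returns to zero, and a fail (or timeout) anywhere triggers the spout's fail callback and a full replay from S. A minimal simulation of that bookkeeping (not the real acker code; ids here are made up):

```java
import java.util.*;

public class AckerSim {
    // One entry per tuple tree, keyed by the spout tuple's root id.
    // Every anchored emit XORs an edge id in; every ack XORs it out.
    private final Map<Long, Long> trees = new HashMap<>();

    public void emit(long rootId, long edgeId) {
        trees.merge(rootId, edgeId, (a, b) -> a ^ b);
    }

    public void ack(long rootId, long edgeId) {
        trees.merge(rootId, edgeId, (a, b) -> a ^ b);
    }

    // The tree is fully acked exactly when the XOR value is back to zero.
    public boolean complete(long rootId) {
        return trees.getOrDefault(rootId, -1L) == 0L;
    }

    public static void main(String[] args) {
        AckerSim acker = new AckerSim();
        long root = 1L, spoutToA = 7L, aToB = 13L;
        acker.emit(root, spoutToA);   // S anchored-emits to BoltA
        acker.emit(root, aToB);       // BoltA anchored-emits to BoltB
        acker.ack(root, spoutToA);    // BoltA acks its input
        // Still incomplete: if BoltB fails (or times out) instead of
        // acking, the spout's fail() fires and the whole tree replays.
        System.out.println(acker.complete(root));
        acker.ack(root, aToB);        // BoltB acks
        System.out.println(acker.complete(root));
    }
}
```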