Hi all,
I would like to know if there is a way to get the offset/partition of each
message when using the KafkaSpout?
Thanks in advance
--
Florian HUSSONNOIS
> instead of the current 2.2.3), and the worst part, which gave me a headache
> before I understood the errors I had when integrating this, was that it was
> compiled under Java 1.6!
>
> So my question is: do you think I can find, in its current state, a
> proper way to integrate Cassandra?
Thanks in advance for any feedback.
> Sent: Tuesday, September 29, 2015 4:28 PM
> Subject: Field Group Hash Computation
>
> Hi,
> I have a field grouping based on 2 fields. I have 32 consumers for the
> tuple, and I see most of the time that, out of 64 bolts, the field group
> always lands on 8 of them. Of the 8, 2 have more than 60% of the data.
> The data for the field grouping can have 20 different combinations.
>
> Do you know the way to compute the hash of the fields used for the
> grouping? One of the group's mails indicates that the approach is:
>
> It calls "hashCode" on the list of selected values and mods it by the
> number of consumer tasks. You can play around with that function to see
> if something about your data is causing something degenerative to happen
> and cause skew.
>
> I saw the Clojure code, but I am not sure how to understand this.
>
> Thanks
> Kashyap
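The description in the quoted mail can be sketched in plain Java. This is a minimal approximation for experimenting with your own keys, not Storm's actual implementation; the key values and the task count below are made up:

```java
import java.util.Arrays;
import java.util.List;

class FieldGroupingSkew {
    // Approximation of fields-grouping target selection as described above:
    // hashCode() of the List of selected field values, reduced modulo the
    // number of target tasks (floorMod keeps the result non-negative).
    static int chooseTask(List<?> groupingValues, int numTasks) {
        return Math.floorMod(groupingValues.hashCode(), numTasks);
    }

    public static void main(String[] args) {
        int numTasks = 64;
        int[] counts = new int[numTasks];
        // Hypothetical data: 20 (sensor-id, type) combinations, as in the mail
        for (int sensor = 0; sensor < 5; sensor++) {
            for (int type = 0; type < 4; type++) {
                List<String> key = Arrays.asList("sensor-" + sensor, "type-" + type);
                counts[chooseTask(key, numTasks)]++;
            }
        }
        int used = 0;
        for (int c : counts) {
            if (c > 0) used++;
        }
        System.out.println("distinct tasks used: " + used + " of " + numTasks);
    }
}
```

Feeding your real 20 key combinations into `chooseTask` shows directly whether their hash codes collapse onto a few task indices, which would explain the observed skew.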
("sensor-id", "type", "ts")
Another solution could be to use a Filter, but is there a way to get the
batch ID?
A nice feature would be to have this operation directly on the Stream
class:
stream.distinct(new Fields("type")) and stream.partitionDistinct(new
Fields("type")).
Thank you in advance.
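In the meantime, the partition-local variant can be emulated with a small stateful keep/drop check, in the spirit of a Trident Filter's boolean isKeep contract. This is a sketch; DistinctByField is a hypothetical helper, and it assumes per-partition in-memory state is acceptable:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper emulating partitionDistinct(new Fields("type")):
// keep a tuple only the first time its field value is seen on this partition.
class DistinctByField {
    private final Set<Object> seen = new HashSet<>();

    // Mirrors the boolean keep/drop contract of a Trident Filter
    boolean isKeep(Object fieldValue) {
        return seen.add(fieldValue); // add() returns false for duplicates
    }
}

class DistinctDemo {
    public static void main(String[] args) {
        DistinctByField distinct = new DistinctByField();
        System.out.println(distinct.isKeep("sensor"));  // true: first occurrence
        System.out.println(distinct.isKeep("sensor"));  // false: duplicate, drop
        System.out.println(distinct.isKeep("gateway")); // true: new value
    }
}
```

Note the obvious caveat: the seen-set grows without bound, so for a long-running stream it would need to be scoped to a batch or bounded (e.g. an LRU map).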
Hi,
You should ack the input tuple after emitting new ones:

try {
    // parse the JSON string
    ...
    // then emit
} catch (Throwable t) {
    /* nothing to recover */
} finally {
    collector.ack(tuple);
}

Hope this will fix your issue.
On 21 Aug 2015 at 02:17, Jason Chen jason.c...@hulu.com wrote:
Hi all.
Is it possible for a bolt to receive input tuples from a different
spout/bolt?
For instance, Bolt C receives input tuples from Spout A and input tuples
from Bolt B to be processed. How should I implement it? I mean writing the
Java code for Bolt C and also its topology.
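Assuming the standard Storm TopologyBuilder API, the wiring side could be sketched as below. SpoutA, BoltB, BoltC, the component ids, and the choice of shuffle grouping are all placeholders; the point is that a bolt declares one grouping call per input stream it subscribes to:

```java
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout-a", new SpoutA());
builder.setBolt("bolt-b", new BoltB()).shuffleGrouping("spout-a");

// Bolt C subscribes to both Spout A and Bolt B: one grouping per input
builder.setBolt("bolt-c", new BoltC())
       .shuffleGrouping("spout-a")
       .shuffleGrouping("bolt-b");
```

Inside BoltC.execute(Tuple tuple), the two inputs can be told apart with tuple.getSourceComponent(), which returns the id of the emitting component ("spout-a" or "bolt-b" here).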
which seems to have been submitted to
allow for an external ZooKeeper with a local cluster topology.
https://issues.apache.org/jira/browse/STORM-213
Thank you,
Clay
Hi,
I would like to know if it is possible (or whether it is a good idea) to use
the executorData map from the TopologyContext for sharing objects (services)
between tasks.
How can I register objects into that map before the 'prepare' methods are
invoked?
Thank you for your time.
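For the sharing part, one pattern is to register the object lazily from the first prepare() that runs, using TopologyContext's getExecutorData/setExecutorData accessors. A sketch under those assumptions; SharedService and the "shared-service" key are hypothetical names:

```java
@Override
public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
    // Lazy registration: the first task of this executor to reach prepare()
    // creates the shared service; subsequent tasks reuse it. The lock guards
    // against two tasks racing on the null check.
    synchronized (SharedService.class) {
        SharedService service = (SharedService) context.getExecutorData("shared-service");
        if (service == null) {
            service = new SharedService(); // hypothetical service class
            context.setExecutorData("shared-service", service);
        }
        this.service = service;
    }
}
```

Note the scope: executorData is per executor, so this only shares the object between tasks running in the same executor, and there is no user hook that runs before prepare() where it could be populated earlier.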
worker nodes' log files
are getting really big. What is the best way to free up some
space? Should I just erase them, or move them somewhere else as a backup?
Thanks,
Nikos
--
Nikolaos Romanos Katsipoulakis,
University of Pittsburgh, PhD candidate
. However, the
tuples never seem to reach the last bolt.
Looking into the Storm UI, we have observed that 100% of our stream is
redirected to the bolt, but its execute method is never called.
We have no errors in the worker logs.
Thank you very much for your help.
* or
*b-1---*
What did I do wrong?
Thanks for your help
Denis
--
Thanks
Deepak