You can join like this in the main function:
Stream joinStreamInner =
    topology.join(streams, joinFields,
        new Fields(RequestId, ColumnMapId, FFValue, SFValue,
            TFValue),
        // JoinType.mixed(JoinType.INNER, JoinType.OUTER)
        JoinType.INNER)
    .each(new
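For context, a more complete version of a join like the one above might look like the following sketch. This assumes Trident's `TridentTopology.join(List<Stream>, List<Fields>, Fields, JoinType)` overload and 0.9-era package names; the stream variables and field names here are hypothetical, not the poster's actual code.

```java
import java.util.Arrays;
import java.util.List;

import storm.trident.JoinType;
import storm.trident.Stream;
import storm.trident.TridentTopology;
import backtype.storm.tuple.Fields;

public class JoinSketch {
    // Joins two Trident streams on a shared key field (names illustrative).
    public static Stream buildJoin(TridentTopology topology,
                                   Stream first, Stream second) {
        List<Stream> streams = Arrays.asList(first, second);
        // One Fields entry per stream: the join key in each input stream.
        List<Fields> joinFields = Arrays.asList(
                new Fields("requestId"), new Fields("requestId"));
        // Output fields: the join key first, then the remaining fields
        // drawn from the joined streams.
        return topology.join(streams, joinFields,
                new Fields("requestId", "columnMapId", "ffValue"),
                JoinType.INNER);
    }
}
```

The commented-out `JoinType.mixed(...)` line in the quoted code would instead pass a `List<JoinType>`, letting each non-leading stream be joined INNER or OUTER independently.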
#9 - 3pts
#10 - 2pts
Thanks
Milinda
On Mon, May 19, 2014 at 9:48 PM, Neville Li neville@gmail.com wrote:
#11 - 3 pts.
#1 - 2 pts.
On Mon, May 19, 2014 at 3:17 PM, Binh Nguyen Van binhn...@gmail.com wrote:
#9 - 4 pts.
#8 - 2 pts.
On Mon, May 19, 2014 at 9:57 AM,
Hi all,
In my topology I observed that one of the supervisor machines gets
repeatedly disconnected from ZooKeeper, and it prints the following
error:
EndOfStreamException: Unable to read additional data from client sessionid
0x146193a4b70073d, likely client has closed socket
at
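Sessions often expire like this when the client JVM pauses (for example during a long GC). One common mitigation, as a sketch, is to raise the ZooKeeper timeouts in storm.yaml; the keys below are real Storm settings, but the values are illustrative, not recommendations:

```yaml
# storm.yaml (values illustrative; defaults vary by Storm version)
storm.zookeeper.session.timeout: 30000     # ms before ZK expires the session
storm.zookeeper.connection.timeout: 30000  # ms to wait when connecting
storm.zookeeper.retry.times: 5             # reconnect attempts
storm.zookeeper.retry.interval: 1000       # ms between attempts
```

If the disconnects correlate with GC pauses in the worker logs, tuning the JVM (as discussed in the GC-logging thread below) is usually the better fix than ever-larger timeouts.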
You can do something like this: -Xloggc:<Your Storm Install
Dir>/logs/gc-worker-%ID%.log
On Wed, May 14, 2014 at 2:01 PM, Sean Allen s...@monkeysnatchbanana.com wrote:
Is anyone logging GC events for workers in their cluster?
Outside of Storm, the following JVM options are pretty standard
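As a sketch, here is how the standard JDK 7/8 GC-logging flags could be passed to each worker JVM via Storm's worker.childopts setting (the log path is illustrative; %ID% is substituted with the worker's port by Storm):

```yaml
# storm.yaml -- GC logging flags for worker JVMs (JDK 7/8-style flags)
worker.childopts: >
  -Xloggc:/var/log/storm/gc-worker-%ID%.log
  -verbose:gc
  -XX:+PrintGCDetails
  -XX:+PrintGCDateStamps
  -XX:+UseGCLogFileRotation
  -XX:NumberOfGCLogFiles=5
  -XX:GCLogFileSize=10M
```

Rotation matters here because long-running workers can otherwise grow a single GC log without bound.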
Hi Jing,
Was message.max.bytes changed in your Kafka server config to be higher than
the default value (1000000 bytes, about 1 MB)?
-Nathan
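For reference, a sketch of the broker-side settings involved (Kafka 0.8-era property names in server.properties; the values are illustrative):

```properties
# server.properties
# Largest message the broker will accept:
message.max.bytes=2000000
# Replication fetches must be able to carry such messages too:
replica.fetch.max.bytes=2000000
```

Consumers (including a Kafka spout) must use a fetch size at least as large as message.max.bytes, or oversized messages will stall consumption, which matches the symptom in this thread.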
On Mon, May 19, 2014 at 5:54 PM, Tao, Jing j...@webmd.net wrote:
I finally found the root cause. Turns out the spout was reading a
message that exceeded the max
Got the issue resolved.
1. I was not anchoring to the incoming tuple, so effectively all the bolts
after impactBolt were not transactional. The ack of impactBolt was
causing the spout's ack to be called. A proper DAG was not created, so the
number I was seeing in WIP was not the true number of tuples
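For anyone hitting the same issue: anchoring means passing the input tuple as the first argument to emit, so the new tuple joins the input's tuple tree. A sketch using 0.9-era Storm APIs follows; the class and field names are hypothetical, not the poster's actual impactBolt.

```java
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

// Hypothetical bolt illustrating anchored vs. unanchored emits.
public class ImpactBolt extends BaseRichBolt {
    private OutputCollector collector;

    public void prepare(Map conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
    }

    public void execute(Tuple input) {
        // Anchored emit: the new tuple is tied to `input`, so the spout's
        // ack() fires only after the whole downstream tuple tree is acked.
        collector.emit(input, new Values(input.getValue(0)));

        // Unanchored emit (the bug described above) would cut the tuple
        // tree here, acking the spout as soon as this bolt acks:
        // collector.emit(new Values(input.getValue(0)));

        collector.ack(input);
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("value"));
    }
}
```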
#9 - 5pts
2014-05-20 18:43 GMT+02:00 Tom Brown tombrow...@gmail.com:
#9 - 5pts
On Tue, May 20, 2014 at 7:20 AM, Milinda Pathirage
mpath...@umail.iu.edu wrote:
#9 - 3pts
#10 - 2pts
Thanks
Milinda
On Mon, May 19, 2014 at 9:48 PM, Neville Li neville@gmail.com
wrote:
#11 - 3
Yes, and I did not set the max message size properly on the Spout.
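For completeness, the spout-side knob in storm-kafka (0.8-plus) is the fetchSizeBytes field on the spout config. A sketch follows; the topic, ZooKeeper host, and consumer id are hypothetical:

```java
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

ZkHosts hosts = new ZkHosts("zkhost:2181");
SpoutConfig spoutConfig =
        new SpoutConfig(hosts, "myTopic", "/kafka", "myConsumerId");
// Must be >= the broker's message.max.bytes, or oversized messages
// stall the spout exactly as described in this thread:
spoutConfig.fetchSizeBytes = 2 * 1024 * 1024;
spoutConfig.bufferSizeBytes = 2 * 1024 * 1024;
KafkaSpout spout = new KafkaSpout(spoutConfig);
```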
From: Nathan Leung [mailto:ncle...@gmail.com]
Sent: Tuesday, May 20, 2014 10:43 AM
To: user
Subject: Re: Kafka Spout 0.8-plus stops consuming messages after a while
Hi Jing,
Was message.max.bytes changed in your Kafka server
Hi!
I've been thinking about Nathan Marz's lambda architecture with the
components:
1. Kafka as message bus, the entry point of raw data.
2. Camus to dump data into HDFS (the batch layer).
3. And Storm to dump data into HBase (the speed layer).
I guess this is the classical architecture (the
The two bolts that emit and transfer 0 are most likely your
persistentAggregates. They're sinks, so they don't emit or transfer
anything.
I forget the exact definition of capacity, but it indicates when that bolt
is taking too long to process. If it's greater than one, it's a
bottleneck. Its
I'm a researcher and need your help to make a simple project on
Twitter using Storm, as I'm new to open source generally.
I searched and found Storm-Election. As I'm new, is it simple for me? I
want to know what algorithm is used, so that I can modify the algorithm or
use another one.
Executed refers to the number of incoming tuples processed.
Capacity is determined by (executed * latency) / window (time duration).
The UI should give you a description of those stats if you hover over the
table headers.
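The formula above is easy to sanity-check by hand. A minimal sketch (the class and parameter names are ours, not Storm's; Storm UI computes this over a 10-minute window):

```java
// Sketch of the capacity formula quoted above.
public class Capacity {
    // executed: tuples executed in the window; latencyMs: average execute
    // latency in ms; windowMs: the stats window in ms (Storm UI uses 10 min).
    public static double capacity(long executed, double latencyMs,
                                  double windowMs) {
        return (executed * latencyMs) / windowMs;
    }

    public static void main(String[] args) {
        // 1200 tuples at 8.5 ms average latency over a 10-minute window:
        double c = capacity(1200, 8.5, 10 * 60 * 1000);
        // Well under 1.0, so this bolt is not a bottleneck.
        System.out.println(c);
    }
}
```

A value near or above 1.0 means the bolt spends the whole window executing and is the place to add parallelism.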
On Tue, May 20, 2014, at 03:36 PM, Raphael Hsieh wrote:
I reattached the