Something is definitely broken in your run or in your measurement method,
and it's not your hardware that is at fault. The machine on which those numbers
were run had lots of cores, but the cores were not fast at all. Even my mid-2015
MacBook Pro has faster cores than that machine, which had
Actually, what I am asking about is dynamically scaling up the parallelism,
as supported in Heron; do we have a quick answer from Storm? Thanks!
On Fri, Mar 30, 2018 at 2:28 PM, Jude Huang Zhipeng wrote:
> Hi Storm Community,
>
> I wonder if anybody has tried to scale a Storm topology up or down. Is there any
>
Hi Storm Community,
I wonder if anybody has tried to scale a Storm topology up or down. Is there any
Storm feature that supports dynamically scaling up topology worker nodes?
I heard that this feature is supported in Twitter Heron; did I overlook
something in Storm? Thanks!
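For reference, the closest built-in Storm feature is the `storm rebalance` CLI command, which can change a running topology's worker count and per-component parallelism without resubmitting it. A minimal sketch, assuming a live Storm cluster; `mytopology` and `mybolt` are placeholder names:

```shell
# Rebalance a running topology: wait 30 seconds for in-flight tuples to
# drain, then redistribute across 5 workers and raise the "mybolt"
# component to 10 executors.
storm rebalance mytopology -w 30 -n 5 -e mybolt=10
```

Note that rebalance can only grow component parallelism up to the number of tasks set at submission time, so it redistributes existing tasks rather than adding new ones.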
Regards,
Jude
On Fri, Mar 30, 2018 at 5:01 PM, Jude Huang Zhipeng wrote:
> Hi Storm Community,
>
> Has anybody had experience integrating Prometheus with Storm? Is
> this a drawback of using Storm instead of Twitter Heron? Thanks in advance!
>
> Regards,
> Jude
>
>
Please be very careful with any benchmark. You are doing the right thing
by trying to reproduce it. In the article they are talking about a
"microbenchmark". I have no idea what they set up to do that. If you are a
Hortonworks customer, I would suggest that you talk to their support about
that. I
Surely they work on a way more powerful cluster, but the topology is composed
of just one spout: no parallelization, no bolts, for a total of one worker, so
one thread in a JVM. Even if I had 100 cores like them, it shouldn't make any
difference. Please correct me if I'm wrong.
Such a topology
Hi Storm Community,
Has anybody had experience integrating Prometheus with Storm? Is
this a drawback of using Storm instead of Twitter Heron? Thanks in advance!
Regards,
Jude
For their test, they were using 4 worker nodes (servers), each with 24 vCores,
for a total of 96 vCores.
Most laptops max out at 8 vCores and are typically at 4-6 vCores.
Jacob Johansen
On Fri, Mar 30, 2018 at 9:18 AM, Alessio Pagliari wrote:
> Hi everybody,
>
> I’m trying to
Hi everybody,
I’m trying to do some preliminary tests with Storm, to understand how far it
can go. For now I’m focusing on understanding what its maximum
throughput is in terms of tuples per second. I saw the benchmark done by the
guys at Hortonworks (ref:
This should work for the most part. The new metrics API is just a thin
wrapper around the Codahale metrics API, which has no restrictions on when
you can add or remove metrics. It is separate from Storm, and it really is
just a naming convention that is used to store which worker, component,
etc, a