Hi Amir,

It would be great if you could link to the details of your benchmark environment when making such claims. Compared to which IBM system? What are the characteristics of your machines? How was the software configured? Where is the implementation code? etc.


In general, the Beam Runner also adds some overhead compared to native Flink jobs. There are many factors that could affect the results. I don't know the Linear Road benchmark, but 150 times sounds unrealistic.

Timo


On 16/09/16 at 10:02, amir bahmanyari wrote:
FYI: we, at a well-known IT department, have been actively measuring Beam Flink
Runner performance, using MIT's Linear Road benchmark to stress the Flink cluster
servers. The results so far do not even come close to those of the previous
streaming engines we have benchmarked. Our optimistic assumption when we started
was that Beam runners (Flink, for instance) would leave Storm & IBM in smoke.
Wrong. What IBM managed to perform is 150 times better than Flink, needless to
mention Storm and Hortonworks. As an example, IBM handled 150 expressways in
3.5 hours. In the identical topology, with everything else fixed, the Beam Flink
Runner in a Flink cluster handled 10 expressways in 17 hours at its best so far.

I have followed every single performance tuning recommendation out there, and
none improved it even a bit. It works fine with 1 expressway. Sorry, but those
are our findings so far, unless we are doing something wrong. I posted all the
details to this forum but never got any solid response that would make a
difference in our observations. Therefore, we assume that what we are seeing is
the reality, which we have to report to our superiors. Please prove us wrong.
We still have some time.

Thanks,
Amir-

From: Fabian Hueske <fhue...@gmail.com>
To: "dev@flink.apache.org" <dev@flink.apache.org>
Sent: Friday, September 16, 2016 12:31 AM
Subject: Re: Performance and Latency Chart for Flink
Hi,

I am not aware of periodic performance runs for Flink releases.
I know of a few benchmarks that have been published at different points in
time, such as [1], [2], and [3] (you'll probably find more).

In general, fair benchmarks that compare different systems (if there is
such a thing) are very difficult, and the results often depend on the use case.
IMO, the best option is to run your own benchmarks if you have a concrete
use case.

Best, Fabian

[1] 08/2015:
http://data-artisans.com/high-throughput-low-latency-and-exactly-once-stream-processing-with-apache-flink/
[2] 12/2015:
https://yahooeng.tumblr.com/post/135321837876/benchmarking-streaming-computation-engines-at
[3] 02/2016:
http://data-artisans.com/extending-the-yahoo-streaming-benchmark/


2016-09-16 5:54 GMT+02:00 Chawla,Sumit <sumitkcha...@gmail.com>:

Hi

Is there any performance run that is done for each Flink release? Or are you
aware of any third-party evaluation of performance metrics for Flink?
I am interested in seeing how performance has improved from release to
release, and how Flink's performance compares with that of its competitors.

Regards
Sumit Chawla




--
Freundliche Grüße / Kind Regards

Timo Walther

Follow me: @twalthr
https://www.linkedin.com/in/twalthr
