[GitHub] [incubator-mxnet] marcoabreu commented on issue #15757: [Discussion] Unified performance tests and dashboard

2019-08-22 Thread GitBox
URL: https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-523790181
 
 
   It doesn't reflect reality insofar as users would not run a CPU-only build 
on a p3.16xlarge but on a c5 instead.
   
   Right, they were run on the same instance, but I'm not sure (Intel, please 
confirm) whether the CPUs in a c5 might perform differently. In general I would 
doubt it and say that the relative results are still relevant, just not 
accurate.
   
   To be honest, I don't think that would make sense. A user looks at 
throughput/$ (or latency, or whatever metric they optimize for). CPU instances 
are way cheaper but might underperform in a direct comparison. If you normalize 
the results by cost, you get a picture that's much closer to how a real user 
will actually use MXNet. In the end, we're optimizing for real use cases, so we 
should make the benchmarks and environment as close to reality as possible.
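   
   As a minimal sketch, assuming illustrative throughput numbers and hourly 
prices (placeholders, not measured results or current AWS rates), such a cost 
normalization could look like this:
   
      # Sketch: cost-normalized comparison of benchmark results.
      # Throughput numbers and hourly prices below are placeholders.
      results = {                        # samples/second per (instance, build)
          ("c5.18xlarge", "cpu-mkl"): 950.0,
          ("p3.16xlarge", "gpu"): 7200.0,
      }
      hourly_price_usd = {               # assumed on-demand prices, not quotes
          "c5.18xlarge": 3.06,
          "p3.16xlarge": 24.48,
      }
      for (instance, build), samples_per_sec in results.items():
          # samples processed per dollar = throughput / (price per second)
          samples_per_dollar = samples_per_sec / (hourly_price_usd[instance] / 3600.0)
          print(f"{instance} ({build}): {samples_per_dollar:,.0f} samples/$")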
   
   Correct, that's what I meant :)
   
   
   I didn't check in detail, and sorry if my proposal introduces too much 
complexity, but what do you think about measuring not just one sequential 
execution but rather the performance a fully utilized system is capable of 
handling (think of a service)? For example, a high batch size with one process 
(throughput optimized) versus batch size one with many processes (latency 
optimized).
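   
   As a rough sketch of those two modes (run_inference below is a hypothetical 
placeholder for a model forward pass, not an MXNet API):
   
      # Sketch: throughput-optimized (one process, large batches) vs.
      # latency-optimized (many processes, batch size one) benchmarking.
      # run_inference is a placeholder workload, not an MXNet API.
      import time
      from multiprocessing import Pool

      def run_inference(batch_size):
          time.sleep(0.001 * batch_size)   # stand-in for a forward pass
          return batch_size

      def throughput_mode(total_samples=4096, batch_size=256):
          start = time.time()
          done = 0
          while done < total_samples:
              done += run_inference(batch_size)
          return total_samples / (time.time() - start)

      def latency_mode(total_samples=4096, num_workers=8):
          start = time.time()
          with Pool(num_workers) as pool:
              pool.map(run_inference, [1] * total_samples)
          return total_samples / (time.time() - start)

      if __name__ == "__main__":
          print("throughput-optimized:", round(throughput_mode()), "samples/s")
          print("latency-optimized:  ", round(latency_mode()), "samples/s")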



[GitHub] [incubator-mxnet] marcoabreu commented on issue #15757: [Discussion] Unified performance tests and dashboard

2019-08-21 Thread GitBox
URL: https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-523756398
 
 
   It's not necessarily only about frugality: as far as I know, the c5.18xlarge
   contains different processors than the p3.16xlarge. So the results don't
   really reflect reality - but I also don't think they will make a big
   difference. Still, in the future we should let apples stay apples and pears
   be pears :)
   
   Lin Yuan wrote on Wed., Aug. 21, 2019, 22:38:
   
   > @marcoabreu  You are right. We should be
   > more frugal :) @ChaiBapchya 
   >
   



[GitHub] [incubator-mxnet] marcoabreu commented on issue #15757: [Discussion] Unified performance tests and dashboard

2019-08-21 Thread GitBox
URL: https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-523754985
 
 
   Erm, why are we running CPU-only benchmarks on a p3.16xlarge?
   
   Lin Yuan wrote on Wed., Aug. 21, 2019, 22:03:
   
   > @pengzhao-intel  There was some
   > mistake in the earlier results due to CPU sharing. Chai has re-run
   > profiling and collected the updated results here:
   >
   >
   > https://docs.google.com/spreadsheets/d/1GpdNquQb71Is5B-li99JDuiLeEZd-eSjHIIowzGrwxc/edit?usp=sharing
   >
   > Please check the three sheets: Shape (1024, 1024), Shape (1, 1) and
   > Shape (1, 100), corresponding to three different input shapes. The
   > runtime numbers are the 50th percentile of 100 runs. There are comparisons
   > between int64/int32 and int64mkl/int32mkl. Please feel free to ping
   > @ChaiBapchya or me should you have any
   > question.
   >
   > Thanks!
   >
   
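   A minimal sketch of how such a per-shape percentile could be collected 
(placeholder operator and timing loop, not the actual profiling script behind 
the linked spreadsheet):

      # Sketch: 50th percentile (median) of 100 timed runs per input shape.
      # mx.nd.dot is just an example op; swap in the operator under test.
      import time
      import numpy as np
      import mxnet as mx

      def time_op(shape, runs=100):
          data = mx.nd.random.uniform(shape=shape)
          timings = []
          for _ in range(runs):
              start = time.perf_counter()
              out = mx.nd.dot(data, data.T)
              out.wait_to_read()   # force execution; MXNet is asynchronous
              timings.append(time.perf_counter() - start)
          return np.percentile(timings, 50)

      for shape in [(1024, 1024), (1, 1), (1, 100)]:
          print(shape, "p50 =", round(time_op(shape) * 1e6, 1), "us")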



[GitHub] [incubator-mxnet] marcoabreu commented on issue #15757: [Discussion] Unified performance tests and dashboard

2019-08-05 Thread GitBox
URL: https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-518404430
 
 
   We can't use the CI system for performance measurements since it does not 
provide a consistent environment, for various reasons (efficiency, 
maintainability, etc.). Thus, we need a separate system whose sole purpose is 
to be entirely consistent.
   
   I'm also afraid that using tests to measure performance could be misleading, 
since tests might get extended or altered over time. I'd propose dedicated 
benchmarks instead.
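   
   As a sketch of what such a dedicated benchmark could look like, kept separate 
from the test suite and emitting machine-readable output a dashboard could 
ingest (the metric names and the run_workload function are assumptions, not an 
existing MXNet script):

      # Sketch: standalone benchmark entry point with dashboard-friendly output.
      # run_workload is a placeholder for the model/operator under benchmark.
      import json
      import platform
      import time

      def run_workload():
          time.sleep(0.01)   # placeholder workload

      def benchmark(runs=100):
          timings = []
          for _ in range(runs):
              start = time.perf_counter()
              run_workload()
              timings.append(time.perf_counter() - start)
          timings.sort()
          return {
              "benchmark": "example_workload",
              "p50_ms": timings[len(timings) // 2] * 1000,
              "p90_ms": timings[int(len(timings) * 0.9)] * 1000,
              "host": platform.node(),
              "timestamp": time.time(),
          }

      if __name__ == "__main__":
          print(json.dumps(benchmark(), indent=2))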

