marcoabreu commented on issue #15757: [Discussion] Unified performance tests 
and dashboard
URL: 
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-523790181
 
 
   It doesn't reflect reality insofar as users would not run a CPU-only 
build on a p3.16xlarge but on a c5 instead.
   
   Right, they were run on the same instance, but I'm not sure (Intel, please 
confirm) whether the CPUs in a c5 might perform differently. In general I would 
doubt it and say that the relative results are still relevant, just not 
accurate.
   
   I don't think it would make sense, to be honest. A user looks at throughput/$ 
(or latency, or whatever metric they optimize for). CPU instances are way 
cheaper, but might underperform in a direct comparison. If you normalize 
these results by cost, you get a picture that is much closer to 
how a real user will actually use MXNet. In the end, we're optimizing for 
real use cases, so we should make the benchmarks and environment as close 
to reality as possible.
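   To make the cost-normalization idea concrete, here is a minimal sketch. The 
hourly prices and throughput numbers below are placeholders I made up for 
illustration, not real MXNet benchmark results or authoritative AWS pricing:

```python
# Sketch: comparing instances by throughput per dollar rather than
# raw throughput. Prices and sample rates are assumed placeholders.

HOURLY_PRICE_USD = {
    "p3.16xlarge": 24.48,  # assumed on-demand price, placeholder
    "c5.18xlarge": 3.06,   # assumed on-demand price, placeholder
}

def throughput_per_dollar(samples_per_sec: float, instance: str) -> float:
    """Samples processed per dollar of instance time."""
    return samples_per_sec * 3600.0 / HOURLY_PRICE_USD[instance]

# A GPU instance that is 4x faster in absolute terms can still lose
# on throughput/$ once instance cost is factored in.
gpu = throughput_per_dollar(4000.0, "p3.16xlarge")
cpu = throughput_per_dollar(1000.0, "c5.18xlarge")
```

   With these made-up numbers the CPU instance comes out ahead per dollar even 
though the GPU instance is faster in absolute throughput, which is exactly the 
distinction the normalized dashboard view would surface.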
   
   Correct, that's what I meant :)
   
   
   I didn't check in detail, and sorry if my proposal introduces too much 
complexity, but what do you think about measuring not the performance of a 
single sequential execution but instead the performance a fully utilized 
system is capable of handling (think of a service)? For example, high batch 
size with one process (throughput-optimized) vs. batch size one with many 
processes (latency-optimized).
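   The two measurement modes above could be sketched roughly like this. The 
`infer` function is a dummy stand-in for a real forward pass (its sleep-based 
cost model is an assumption for illustration), and threads stand in for the 
"many processes" case to keep the sketch self-contained:

```python
# Sketch of throughput-optimized vs. latency-optimized measurement.
# `infer` is a placeholder workload, not a real MXNet call.
import time
from concurrent.futures import ThreadPoolExecutor

def infer(batch_size: int) -> int:
    """Stand-in for one forward pass; cost grows with batch size."""
    time.sleep(0.001 * batch_size)  # pretend work
    return batch_size

def throughput_mode(total_samples: int, batch_size: int) -> float:
    """One worker, large batches: samples/sec of a saturated system."""
    start = time.perf_counter()
    done = 0
    while done < total_samples:
        done += infer(batch_size)
    return done / (time.perf_counter() - start)

def latency_mode(total_samples: int, workers: int) -> float:
    """Many concurrent workers, batch size one: samples/sec when each
    request is served individually, as a latency-oriented service would."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(infer, [1] * total_samples))
    return total_samples / (time.perf_counter() - start)
```

   Reporting both numbers side by side would cover the service-style use case 
as well as the batch-processing one.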

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services