Github user hvanhovell commented on the pull request:

    https://github.com/apache/spark/pull/11209#issuecomment-184772881
  
    I added a synthetic benchmark to check the performance. Throughput should be highest when we hash large chunks of data, in this case byte arrays of 8223 bytes. The array size is chosen so that xxHash64 and Murmur3 both have to deal with non-word-aligned input.
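    To make the alignment point concrete: 8223 is not a multiple of the word size, so neither hash can consume the input purely in full words and both are left with a tail that has to be handled byte-wise. A quick sketch of the arithmetic (variable names are mine, just for illustration):

        val rowSize = 8223L
        val fullWords = rowSize / 8  // 1027 aligned 8-byte words
        val tailBytes = rowSize % 8  // 7-byte unaligned tail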
    
    I calculated the throughput as follows (please correct me if you think this approach is wrong):
    
        (numRows * numIterations * rowSize) / AvgTime
    
    For the largest case this comes out to 10.2 GB/s:
    
        // numRows * numIterations * rowSize / avgTimeInSeconds
        val bytesPerSecond = ((1L << 10) * (1L << 11) * 8223L) / 1.569D
        val gbPerSecond = bytesPerSecond / (1024 * 1024 * 1024)
        >> gbPerSecond: Double = 10.236167543021033
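
    The same calculation as a small self-contained helper (the function name and signature are mine, just for illustration; GB here means GiB, i.e. 2^30 bytes):

        def throughputGBps(numRows: Long, numIterations: Long, rowSizeBytes: Long, avgTimeSeconds: Double): Double = {
          val totalBytes = numRows * numIterations * rowSizeBytes
          totalBytes / avgTimeSeconds / (1L << 30)
        }

        throughputGBps(1L << 10, 1L << 11, 8223L, 1.569)  // ≈ 10.24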

