Impressive indeed.
I wonder how quickly a fully configured z13 could do it.
Sadly, I suspect we will never know.
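For perspective, a quick back-of-envelope on the figures quoted below (100 TB across 207 EC2 nodes in 23 minutes) gives the sustained throughput the record implies; this is just my own arithmetic on the article's numbers, not anything from the benchmark report:

```python
# Back-of-envelope throughput for the Databricks sort record,
# using the numbers from the linked article:
# 100 TB of 100-byte records on 207 EC2 VMs in 23 minutes.
total_bytes = 100 * 10**12   # 100 TB (decimal terabytes)
seconds = 23 * 60            # 23 minutes
nodes = 207

aggregate_gb_s = total_bytes / seconds / 10**9          # cluster-wide GB/s
per_node_mb_s = total_bytes / seconds / nodes / 10**6   # MB/s per node

print(f"aggregate: {aggregate_gb_s:.1f} GB/s")   # ~72.5 GB/s
print(f"per node:  {per_node_mb_s:.1f} MB/s")    # ~350 MB/s
```

Roughly 72 GB/s in aggregate, or about 350 MB/s sustained per node, which each machine had to both read and write (plus shuffle over the network) to finish in that window.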

On 16 January 2015 at 18:37, John McKown <[email protected]>
wrote:

> http://opensource.com/business/15/1/apache-spark-new-world-record
> <quote>
> In October 2014, Databricks participated in the Sort Benchmark and set a
> new world record for sorting 100 terabytes (TB) of data, or 1 trillion
> 100-byte records. The team used Apache Spark <http://spark.apache.org/> on
> 207 EC2 virtual machines and sorted 100 TB of data in 23 minutes.
> </quote>
>
> Impressive to me.
>
> --
> While a transcendent vocabulary is laudable, one must be eternally careful
> so that the calculated objective of communication does not become ensconced
> in obscurity.  In other words, eschew obfuscation.
>
> 111,111,111 x 111,111,111 = 12,345,678,987,654,321
>
> Maranatha! <><
> John McKown
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to [email protected] with the message: INFO IBM-MAIN
>

