Every time we run a test cycle on our Jenkins cluster, we generate hundreds
of XML reports covering all the tests we have (e.g.
`streaming/target/test-reports/org.apache.spark.streaming.util.WriteAheadLogSuite.xml`).
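As a rough illustration of what those reports hold, here is a hedged sketch that pulls the suite-level pass/fail counters out of a JUnit-style XML report. The report content below is made up for the example, not a real Jenkins artifact:

```shell
# Hedged sketch: the pass/fail counts a JUnit-style XML report carries.
# The report body below is a fabricated example.
report=$(mktemp)
cat > "$report" <<'EOF'
<testsuite name="org.apache.spark.streaming.util.WriteAheadLogSuite" tests="12" failures="1" errors="0"/>
EOF
# Pull the suite-level counters out of the root element.
tests=$(grep -o 'tests="[0-9]*"' "$report")
failures=$(grep -o 'failures="[0-9]*"' "$report")
echo "$tests $failures"
rm -f "$report"
```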
These reports contain interesting information about whether tests succeeded
or failed.
Right now, the following logs are archived onto the master:
local log_files=$(
  find . \
    -name unit-tests.log -o \
    -path ./sql/hive/target/HiveCompatibilitySuite.failed -o \
    -path ./sql/hive/target/HiveCompatibilitySuite.hiveFailed -o \
    -path
How about all of them (https://amplab.cs.berkeley.edu/jenkins/view/Spark/)?
Roughly how much data per day would that be if we uploaded all the logs for
all these builds?
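One way to get at that number: measure the log volume a single test cycle leaves in the workspace, then multiply by builds per day. A hedged sketch; the workspace layout and file contents below are made up for the example:

```shell
# Hedged sketch: estimate log volume per test cycle. The workspace and
# file contents are fabricated; multiply the per-cycle figure by the
# number of builds per day to get a daily total.
workspace=$(mktemp -d)
mkdir -p "$workspace/streaming/target/test-reports"
printf 'dummy log line\n' > "$workspace/streaming/target/unit-tests.log"
printf '<testsuite/>\n' > "$workspace/streaming/target/test-reports/Suite.xml"

# Sum the sizes of everything we would archive, in bytes.
bytes=$(find "$workspace" -name 'unit-tests.log' -o -name '*.xml' \
        | xargs wc -c | tail -n 1 | awk '{print $1}')
echo "bytes per cycle: $bytes"
rm -rf "$workspace"
```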
Also, would Databricks be willing to offer up an S3 bucket for this purpose?
Nick
On Mon Dec 15 2014 at 11:48:44 AM shane knapp wrote:
i have no problem w/storing all of the logs. :)
i also have no problem w/donated S3 buckets. :)
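If a donated bucket did materialize, the upload step might look something like the sketch below. The bucket name and build tag are hypothetical, and `--dryrun` keeps `aws s3 sync` (if installed) from actually transferring anything:

```shell
# Hedged sketch: uploading one build's reports to a donated S3 bucket.
bucket="s3://spark-jenkins-logs"             # hypothetical bucket name
build_tag="spark-master-test-20141215-42"    # hypothetical Jenkins build id
cmd="aws s3 sync ./target/test-reports $bucket/$build_tag --dryrun"
echo "$cmd"
```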
On Mon, Dec 15, 2014 at 2:39 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote: