Hi,

This is rather a broad question.

We would like to run a set of stress tests against our Spark clusters to verify that a build performs as expected before the cluster is deployed. The reasoning behind this is that users reported different run times for the same ML workload on two supposedly identical clusters, with one cluster performing much worse than the other. This was eventually traced to a wrong BIOS setting at the hardware level and had nothing to do with Spark itself. So rather than spending a good while on a wild-goose chase, we would like to put the Spark application through some test cycles up front. We have some ideas (one rough sketch is in the P.S. below), but we would appreciate other feedback.

The current version is CHDS 5.2.

Thanks,

Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
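
P.S. For concreteness, here is a minimal sketch of the kind of synthetic test cycle we have in mind, assuming a Scala job submitted with spark-submit; the class name, row count and partition count are illustrative only, not a settled design. The idea is simply to time a shuffle-heavy job and print a checksum, so that two supposedly identical clusters can be compared run for run before any real ML workload goes on.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._   // pair RDD operations on Spark 1.x

object ClusterStressCheck {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ClusterStressCheck")
    val sc = new SparkContext(conf)

    // Illustrative sizing only -- tune rows/partitions to the cluster under test.
    val rows = 50000000L
    val partitions = 200

    val start = System.nanoTime()

    // Generate key/value pairs, force a wide shuffle, then run an action and
    // keep a checksum so successive runs can also be checked for consistency.
    val checksum = sc.parallelize(0L until rows, partitions)
      .map(i => (i % 1000, i * 2654435761L))
      .reduceByKey(_ + _)
      .map(_._2)
      .reduce(_ + _)

    val elapsedSec = (System.nanoTime() - start) / 1e9
    println(f"checksum=$checksum elapsed=$elapsedSec%.1f seconds")

    sc.stop()
  }
}

The thought is to run something like this a few times in a row on each cluster and compare the elapsed times; a hardware-level problem such as the BIOS setting above should show up as a consistent gap between clusters long before the real jobs do.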
We would like to run a set of stress tests against our Spark clusters to ensure that the build performs as expected before deploying the cluster. Reasoning behind this is that the users were reporting some ML jobs running on two equal clusters reporting back different times, one cluster was behaving much worse than other using the same workload. This was eventually traced to wrong BIOS setting at hardware level and did not have anything to do with Spark itself. So rather spending a good while doing wild-goose chase, we would like to take spark app through some tests cycles. We have some ideas but appreciate some other feedbacks. The current version is CHDS 5.2. Thanks Dr Mich Talebzadeh LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>* http://talebzadehmich.wordpress.com *Disclaimer:* Use it at your own risk. Any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on this email's technical content is explicitly disclaimed. The author will in no case be liable for any monetary damages arising from such loss, damage or destruction.