How do I increase the readTimeoutMillis parameter in spark-shell? In the middle of a cassandraCount job, it aborts with the following exception:
java.io.IOException: Exception during execution of SELECT count(*) FROM "test"."hello" WHERE token("cid") > ? AND token("cid") <= ? ALLOW FILTERING: [ip.us-east-2.compute.internal/X.X.X.X] Timed out waiting for server response

I also ran nodetool status, and it reports every node as UN.

Thanks,
kant
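P.S. I'm guessing readTimeoutMillis is exposed through the connector property spark.cassandra.read.timeoutMS, and that I would pass it at launch like the sketch below, but I'm not sure that's the right knob (the connector version and host shown here are just placeholders from my setup):

```shell
# Assumption: readTimeoutMillis maps to spark.cassandra.read.timeoutMS
# (milliseconds; I believe the default is 120000, but please correct me).
spark-shell \
  --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.0 \
  --conf spark.cassandra.connection.host=X.X.X.X \
  --conf spark.cassandra.read.timeoutMS=240000
```

Is that the correct property, or does the timeout have to be set somewhere else (e.g. on the SparkConf before the session is created)?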