Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/18555
@gatorsmile
Thank you very much for reviewing the code again, and thank you for
walking me through Spark step by step.
Here are the results of running each test case on the master branch without
any extra changes:
```
test("verify spark.blockManager.port configuration")
current behavior:
Caused by: java.lang.IllegalArgumentException:
requirement failed: startPort should be between 1024 and 65535 (inclusive),
or 0 for a random free port.
test("verify spark.executor.memory configuration exception")
current behavior:
Caused by: java.lang.NumberFormatException: Size must be specified as bytes
(b),
kibibytes (k), mebibytes (m), gibibytes (g), tebibytes (t), or
pebibytes(p).
E.g. 50b, 100k, or 250m.
test("verify listenerbus.eventqueue.capacity configuration exception")
current behavior:
Caused by: java.lang.IllegalArgumentException:
The capacity of listener bus event queue must not be negative
test("verify shuffle file buffer size configuration")
current behavior:
Caused by: java.lang.IllegalArgumentException: Buffer size <= 0
test("verify spark.shuffle.spill.diskWriteBufferSize configuration")
current behavior:
while (dataRemaining > 0) spins in a dead loop, or Caused by:
java.lang.NegativeArraySizeException
```
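The port-range failure quoted above can be illustrated with a minimal sketch of that kind of validation. This is a hypothetical helper for illustration only, not Spark's actual code; the class and method names are invented:

```java
// Hypothetical re-creation of the startPort requirement quoted above
// (not Spark's actual Utils code): valid values are 0, meaning a random
// free port, or anything in 1024..65535.
public class PortCheck {
    static void checkStartPort(int startPort) {
        if (startPort != 0 && (startPort < 1024 || startPort > 65535)) {
            throw new IllegalArgumentException(
                "requirement failed: startPort should be between 1024 and 65535 "
                    + "(inclusive), or 0 for a random free port.");
        }
    }

    public static void main(String[] args) {
        checkStartPort(0);     // random free port: accepted
        checkStartPort(4040);  // in range: accepted
        boolean threw = false;
        try {
            checkStartPort(80); // privileged port: rejected
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        System.out.println(threw); // prints "true"
    }
}
```

With a guard like this, an out-of-range value fails fast at configuration time instead of surfacing later as an obscure runtime error.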
```
test("verify spark.shuffle.registration.timeout configuration exception")
current behavior: no exception was thrown.
**Analysis**
However, Caused by: java.util.concurrent.TimeoutException:
Timeout waiting for task is reported in registerWithShuffleServer
test("verify spark.task.cpus configuration exception")
current behavior: no exception was thrown.
**Analysis**
when spark.task.cpus = -1, every launched task runs
executorData.freeCores -= scheduler.CPUS_PER_TASK
so the executor reports more and more idle cores
test("verify spark.reducer.maxReqSizeShuffleToMem configuration exception")
current behavior: no exception was thrown.
**Analysis**
// Fetch remote shuffle blocks to disk when the request is too large. Since
the shuffle data is
// already encrypted and compressed over the wire(w.r.t. the related
configs), we can just fetch
// the data and write it to file directly.
```
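The freeCores drift described for spark.task.cpus = -1 can be sketched in a few lines. This is an illustrative model, assuming the scheduler keeps executing the quoted subtraction with a negative, unvalidated value; the class and field names here are hypothetical, not Spark's:

```java
// Sketch of the freeCores drift described above: the scheduler repeatedly
// runs executorData.freeCores -= scheduler.CPUS_PER_TASK, so a negative
// spark.task.cpus makes the executor appear to gain cores per task.
public class FreeCoresDrift {
    static int freeCoresAfter(int freeCores, int cpusPerTask, int tasksLaunched) {
        for (int i = 0; i < tasksLaunched; i++) {
            freeCores -= cpusPerTask; // subtracting a negative value adds cores
        }
        return freeCores;
    }

    public static void main(String[] args) {
        // spark.task.cpus = -1 slips through unchecked
        int after = freeCoresAfter(4, -1, 3);
        System.out.println(after); // prints 7: the executor looks ever more idle
    }
}
```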
> Suggestion: I will remove these checkValue calls and their unit tests.
```
test("verify spark.shuffle.registration.maxAttempts configuration
exception")
current behavior: no exception was thrown.
test("verify spark.task.maxFailures configuration exception")
current behavior: no exception was thrown.
test("verify metrics.maxListenerClassesTimed configuration exception")
current behavior: no exception was thrown.
test("verify spark.ui.retained configuration exception")
current behavior: no exception was thrown.
test("verify spark.files.maxPartitionBytes configuration exception")
current behavior: no exception was thrown.
test("verify spark.files.openCostInBytes configuration exception")
current behavior: no exception was thrown.
test("verify spark.shuffle.accurateBlockThreshold configuration exception")
current behavior: no exception was thrown.
```
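For context, the checkValue pattern under discussion attaches a validator and an error message to a configuration value, so that an invalid setting fails at read time. A minimal stand-alone analogue (hypothetical helper, not Spark's actual ConfigBuilder API):

```java
import java.util.function.IntPredicate;

// Minimal stand-alone analogue of a checkValue-style guard: validate a
// config value against a predicate and fail fast with a clear message.
public class ConfigCheck {
    static int checkValue(int value, IntPredicate valid, String errorMsg) {
        if (!valid.test(value)) {
            throw new IllegalArgumentException(errorMsg);
        }
        return value;
    }

    public static void main(String[] args) {
        // e.g. a maxFailures-style setting that must be positive
        int ok = checkValue(4, v -> v > 0, "value must be positive");
        boolean threw = false;
        try {
            checkValue(-1, v -> v > 0, "value must be positive");
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        System.out.println(ok + " " + threw); // prints "4 true"
    }
}
```

Whether such a guard is worth keeping for the settings above is exactly the trade-off in the suggestion: the invalid values are silently accepted today, so the guard adds safety but also code and tests to maintain.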
Do you agree with my suggestion?