Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/22273
> Yeah but for people who don't know these tests or how the _have_xxx flags
> work, having an additional positive confirmation that the Arrow tests are run
> is nice. We already are doing this for pandas here.
Hmm .. I thought the current information is enough to indicate which Arrow or Pandas version we use and test. For instance:
```
Skipped tests in pyspark.sql.tests with pypy:
test_createDataFrame_column_name_encoding (pyspark.sql.tests.ArrowTests) ... skipped 'Pandas >= 0.19.2 must be installed; however, it was not found.'
...
Skipped tests in pyspark.sql.tests with python2.7:
test_createDataFrame_column_name_encoding (pyspark.sql.tests.ArrowTests) ... skipped 'Pandas >= 0.19.2 must be installed; however, your version was 0.16.0.'
...
```
Yea, looks like we are already doing this for Pandas .. but actually I would rather remove that test too.
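For readers unfamiliar with how those skip messages come about, here is a minimal, self-contained sketch of the `_have_xxx`-flag pattern the quoted comment refers to. The names (`_have_pandas`, `_pandas_requirement_message`) and the exact messages are illustrative assumptions rather than the actual PySpark test sources:

```python
# Hypothetical sketch of a version-gated skip producing messages like the
# ones in the log above; not the actual pyspark.sql.tests code.
import unittest
from distutils.version import LooseVersion

_minimum_pandas_version = "0.19.2"

try:
    import pandas
    if LooseVersion(pandas.__version__) < LooseVersion(_minimum_pandas_version):
        # Pandas is installed but too old.
        _have_pandas = False
        _pandas_requirement_message = (
            "Pandas >= %s must be installed; however, your version was %s."
            % (_minimum_pandas_version, pandas.__version__))
    else:
        _have_pandas = True
        _pandas_requirement_message = None
except ImportError:
    # Pandas is not installed at all.
    _have_pandas = False
    _pandas_requirement_message = (
        "Pandas >= %s must be installed; however, it was not found."
        % _minimum_pandas_version)


@unittest.skipIf(not _have_pandas, _pandas_requirement_message)
class ArrowTests(unittest.TestCase):
    def test_createDataFrame_column_name_encoding(self):
        # Body elided; the point is the class-level skip above, whose
        # reason string shows up in the "skipped '...'" output.
        pass


if __name__ == "__main__":
    unittest.main(verbosity=2)
```

Running this with `verbosity=2` prints the skip reason next to each test, which is where the "Pandas >= 0.19.2 must be installed; ..." lines in the per-interpreter summaries come from.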