Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13165#issuecomment-220515182
@sun-rui @felixcheung Right. It seems I finally made it work. I made gists and
uploaded a PDF file of the Spark UI.
Let me tell you the test results first.
Here is the stdout output for the tests on Windows 7 32-bit:
[output.msg](https://gist.github.com/HyukjinKwon/6a10719d2ca67e04ece2b23a8f92dc62).
Here is the stderr output for the tests on Windows 7 32-bit:
[output.err](https://gist.github.com/HyukjinKwon/54984d57ee18236d46e965d07b31f77a).
Here is the [Spark UI
PDF](https://drive.google.com/open?id=0B7RfLjRU7QTnVVA2bkVMVFkzNEE).
1. I ran the tests after building Spark on Windows according to
[`./R/WINDOWS.md`](https://github.com/apache/spark/blob/master/R/WINDOWS.md).
2. It seems `$HADOOP_HOME` needs to be set.
3. It seems `winutils.exe` is required (it is included in the official Hadoop
binaries), even though the tests only read files from the local file system.
A minimal sketch of this environment setup is shown after the command below.
4. Then I ran the tests with the command below:
```bash
cd bin
spark-submit2.cmd --conf spark.hadoop.fs.default.name="file:///" ^
  ..\R\pkg\tests\run-all.R > output.msg 2> output.err
```
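
For reference, this is a rough sketch of the environment setup assumed in steps 2 and 3 before running the command above; the `C:\hadoop` path is only a placeholder, not the exact location used for this run.

```cmd
:: Minimal environment sketch for steps 2 and 3 (paths are placeholders).
:: winutils.exe is expected at %HADOOP_HOME%\bin\winutils.exe.
set HADOOP_HOME=C:\hadoop
set PATH=%HADOOP_HOME%\bin;%PATH%
```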