[
https://issues.apache.org/jira/browse/SPARK-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xin Ren updated SPARK-16233:
----------------------------
Description:
Running
{code}
./R/run-tests.sh
{code}
produces the following error:
{code}
xin:spark xr$ ./R/run-tests.sh
Warning: Ignoring non-spark config property: SPARK_SCALA_VERSION=2.11
Loading required package: methods
Attaching package: ‘SparkR’
The following object is masked from ‘package:testthat’:
describe
The following objects are masked from ‘package:stats’:
cov, filter, lag, na.omit, predict, sd, var, window
The following objects are masked from ‘package:base’:
as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
rank, rbind, sample, startsWith, subset, summary, transform, union
binary functions: ...........
functions on binary files: ....
broadcast variables: ..
functions in client.R: .....
test functions in sparkR.R: .....Re-using existing Spark Context. Call
sparkR.session.stop() or restart R to create a new Spark Context
....Re-using existing Spark Context. Call sparkR.session.stop() or restart R to
create a new Spark Context
...........
include an external JAR in SparkContext: Warning: Ignoring non-spark config
property: SPARK_SCALA_VERSION=2.11
..
include R packages:
MLlib functions: .........................SLF4J: Failed to load class
"org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.
.27-Jun-2016 1:51:25 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig:
Compression: SNAPPY
27-Jun-2016 1:51:25 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet block size to 134217728
27-Jun-2016 1:51:25 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet page size to 1048576
27-Jun-2016 1:51:25 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet dictionary page size to 1048576
27-Jun-2016 1:51:25 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Dictionary is on
27-Jun-2016 1:51:25 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Validation is off
27-Jun-2016 1:51:25 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Writer version is: PARQUET_1_0
27-Jun-2016 1:51:25 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Maximum row group padding size is 0 bytes
27-Jun-2016 1:51:25 PM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 65,622
27-Jun-2016 1:51:25 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [label]
BINARY: 1 values, 21B raw, 23B comp, 1 pages, encodings: [PLAIN, RLE,
BIT_PACKED]
27-Jun-2016 1:51:25 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 87B for [terms,
list, element, list, element] BINARY: 2 values, 42B raw, 43B comp, 1 pages,
encodings: [PLAIN, RLE]
27-Jun-2016 1:51:25 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for
[hasIntercept] BOOLEAN: 1 values, 1B raw, 3B comp, 1 pages, encodings: [PLAIN,
BIT_PACKED]
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig:
Compression: SNAPPY
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet block size to 134217728
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet page size to 1048576
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet dictionary page size to 1048576
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Dictionary is on
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Validation is off
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Writer version is: PARQUET_1_0
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Maximum row group padding size is 0 bytes
27-Jun-2016 1:51:26 PM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 49
27-Jun-2016 1:51:26 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 90B for [labels,
list, element] BINARY: 3 values, 50B raw, 50B comp, 1 pages, encodings: [PLAIN,
RLE]
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig:
Compression: SNAPPY
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet block size to 134217728
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet page size to 1048576
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet dictionary page size to 1048576
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Dictionary is on
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Validation is off
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Writer version is: PARQUET_1_0
27-Jun-2016 1:51:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Maximum row group padding size is 0 bytes
27-Jun-2016 1:51:26 PM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 92
27-Jun-2016 1:51:26 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 61B for
[vectorCol] BINARY: 1 values, 18B raw, 20B comp, 1 pages, encodings: [PLAIN,
RLE, BIT_PACKED]
27-Jun-2016 1:51:26 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 126B for
[prefixesToRewrite, key_value, key] BINARY: 2 values, 61B raw, 61B comp, 1
pages, encodings: [PLAIN, RLE]
27-Jun-2016 1:51:26 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 58B for
[prefixesToRewrite, key_value, value] BINARY: 2 values, 15B raw, 17B comp, 1
pages, encodings: [RLE, PLAIN_DICTIONARY], dic { 1 entries, 12B raw, 1B comp}
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig:
Compression: SNAPPY
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet block size to 134217728
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet page size to 1048576
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet dictionary page size to 1048576
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Dictionary is on
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Validation is off
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Writer version is: PARQUET_1_0
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Maximum row group padding size is 0 bytes
27-Jun-2016 1:51:27 PM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 54
27-Jun-2016 1:51:27 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 122B for
[columnsToPrune, list, element] BINARY: 2 values, 59B raw, 59B comp, 1 pages,
encodings: [PLAIN, RLE]
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig:
Compression: SNAPPY
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet block size to 134217728
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet page size to 1048576
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet dictionary page size to 1048576
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Dictionary is on
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Validation is off
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Writer version is: PARQUET_1_0
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Maximum row group padding size is 0 bytes
27-Jun-2016 1:51:27 PM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 56
27-Jun-2016 1:51:27 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for
[intercept] DOUBLE: 1 values, 8B raw, 10B comp, 1 pages, encodings: [PLAIN,
BIT_PACKED]
27-Jun-2016 1:51:27 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 45B for
[coefficients, type] INT32: 1 values, 10B raw, 12B comp, 1 pages, encodings:
[PLAIN, RLE, BIT_PACKED]
27-Jun-2016 1:51:27 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 30B for
[coefficients, size] INT32: 1 values, 7B raw, 9B comp, 1 pages, encodings:
[PLAIN, RLE, BIT_PACKED]
27-Jun-2016 1:51:27 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for
[coefficients, indices, list, element] INT32: 1 values, 13B raw, 15B comp, 1
pages, encodings: [PLAIN, RLE]
27-Jun-2016 1:51:27 PM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for
[coefficients, values, list, element] DOUBLE: 3 values, 37B raw, 38B comp, 1
pages, encodings: [PLAIN, RLE]
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig:
Compression: SNAPPY
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet block size to 134217728
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet page size to 1048576
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Parquet dictionary page size to 1048576
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Dictionary is on
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Validation is off
27-Jun-2016 1:51:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat:
Writer
ver.................................................................W..........
parallelize() and collect(): .............................
...................................................................................................................................................................................................................................................................
SerDe functionality: ...................
partitionBy, groupByKey, reduceByKey etc.: ....................
SparkSQL functions:
.........................................................S................................................................................................................................................................................................................................................................S......................................................................................................................................................................1.....................................S
.....................
tests RDD function take(): ................
the textFile() function: .............
functions in utils.R: ....................................
Windows-specific tests: S
Skipped ------------------------------------------------------------------------
1. create DataFrame from RDD (@test_sparkSQL.R#200) - Hive is not build with
SparkSQL, skipped
2. test HiveContext (@test_sparkSQL.R#1003) - Hive is not build with SparkSQL,
skipped
3. enableHiveSupport on SparkSession (@test_sparkSQL.R#2395) - Hive is not
build with SparkSQL, skipped
4. sparkJars tag in SparkContext (@test_Windows.R#21) - This test is only for
Windows, skipped
Warnings -----------------------------------------------------------------------
1. spark.naiveBayes (@test_mllib.R#390) - `not()` is deprecated.
Failed -------------------------------------------------------------------------
1. Error: read/write ORC files (@test_sparkSQL.R#1705) -------------------------
org.apache.spark.sql.AnalysisException: The ORC data source must be used with
Hive support enabled;
at
org.apache.spark.sql.execution.datasources.DataSource.lookupDataSource(DataSource.scala:137)
at
org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:78)
at
org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:78)
at
org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:414)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at
org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141)
at
org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:86)
at
org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:38)
at
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
1: write.df(df, orcPath, "orc", mode = "overwrite") at
/Users/xin/workspace/spark/R/lib/SparkR/tests/testthat/test_sparkSQL.R:1705
2: write.df(df, orcPath, "orc", mode = "overwrite")
3: .local(df, path, ...)
4: callJMethod(write, "save", path)
5: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
6: stop(readString(conn))
DONE ===========================================================================
Error: Test failures
Execution halted
Had test failures; see logs.{code}
Cause: most probably these tests are still using 'createDataFrame(sqlContext...)', which is deprecated. The test method invocations should be updated accordingly.
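A minimal sketch of the kind of update this would need, assuming the SparkSession-based API introduced in 2.0 (the sample data frame is only a placeholder, not taken from the test file):
{code}
# Old, deprecated invocation style the tests appear to still use:
#   df <- createDataFrame(sqlContext, localDF)

# Updated invocation against the 2.0 API:
library(SparkR)
sparkR.session()                                      # replaces the sqlContext setup
localDF <- data.frame(a = 1:3, b = c("x", "y", "z"))  # placeholder data
df <- createDataFrame(localDF)                        # no sqlContext argument
{code}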
was:
Running
{code}
./R/run-tests.sh
{code}
produces the following error:
{code}
15. Error: create DataFrame from list or data.frame (@test_sparkSQL.R#277) -----
java.lang.NoClassDefFoundError: org/apache/spark/sql/execution/datasources/PreInsertCastAndRename$
at
org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:69)
at
org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
at
org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
at
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at
org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:533)
at
org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:293)
at org.apache.spark.sql.api.r.SQLUtils$.createDF(SQLUtils.scala:135)
at org.apache.spark.sql.api.r.SQLUtils.createDF(SQLUtils.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at
org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141)
at
org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:86)
at
org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:38)
at
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
1: createDataFrame(l, c("a", "b")) at
/Users/quickmobile/workspace/spark/R/lib/SparkR/tests/testthat/test_sparkSQL.R:277
2: dispatchFunc("createDataFrame(data, schema = NULL, samplingRatio = 1.0)", x,
...)
3: f(x, ...)
4: callJStatic("org.apache.spark.sql.api.r.SQLUtils", "createDF", srdd,
schema$jobj,
sparkSession)
5: invokeJava(isStatic = TRUE, className, methodName, ...)
6: stop(readString(conn))
DONE ===========================================================================
Execution halted
{code}
Cause: most probably these tests are still using 'createDataFrame(sqlContext...)', which is deprecated. The test method invocations should be updated accordingly.
> test_sparkSQL.R is failing
> --------------------------
>
> Key: SPARK-16233
> URL: https://issues.apache.org/jira/browse/SPARK-16233
> Project: Spark
> Issue Type: Bug
> Components: SparkR, Tests
> Affects Versions: 2.0.0
> Reporter: Xin Ren
> Priority: Minor
>
> 1. create DataFrame from RDD (@test_sparkSQL.R#200) - Hive is not build with
> SparkSQL, skipped
> 2. test HiveContext (@test_sparkSQL.R#1003) - Hive is not build with
> SparkSQL, skipped
> 3. enableHiveSupport on SparkSession (@test_sparkSQL.R#2395) - Hive is not
> build with SparkSQL, skipped
> 4. sparkJars tag in SparkContext (@test_Windows.R#21) - This test is only for
> Windows, skipped
> Warnings
> -----------------------------------------------------------------------
> 1. spark.naiveBayes (@test_mllib.R#390) - `not()` is deprecated.
> Failed
> -------------------------------------------------------------------------
> 1. Error: read/write ORC files (@test_sparkSQL.R#1705)
> -------------------------
> org.apache.spark.sql.AnalysisException: The ORC data source must be used with
> Hive support enabled;
> at
> org.apache.spark.sql.execution.datasources.DataSource.lookupDataSource(DataSource.scala:137)
> at
> org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:78)
> at
> org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:78)
> at
> org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:414)
> at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
> at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at
> org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141)
> at
> org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:86)
> at
> org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:38)
> at
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
> at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
> at
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
> at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
> at
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
> at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
> at
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
> at
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:745)
> 1: write.df(df, orcPath, "orc", mode = "overwrite") at
> /Users/xin/workspace/spark/R/lib/SparkR/tests/testthat/test_sparkSQL.R:1705
> 2: write.df(df, orcPath, "orc", mode = "overwrite")
> 3: .local(df, path, ...)
> 4: callJMethod(write, "save", path)
> 5: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
> 6: stop(readString(conn))
> DONE
> ===========================================================================
> Error: Test failures
> Execution halted
> Had test failures; see logs.{code}
> Cause: most probably these tests are still using 'createDataFrame(sqlContext...)', which is deprecated. The tests' method invocations should be updated to the SparkSession-based API.
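>
> A minimal sketch of the suspected deprecated pattern and its SparkSession-based replacement (assuming the standard SparkR 2.0 API and the built-in 'faithful' dataset; this is not the actual test code):
> {code}
> library(SparkR)
>
> sparkR.session()                  # Spark 2.x entry point; reuses an existing session if one is running
>
> # Deprecated pre-2.0 pattern that the tests likely still invoke:
> #   df <- createDataFrame(sqlContext, faithful)
>
> # SparkSession-based replacement drops the sqlContext argument:
> df <- createDataFrame(faithful)
> head(df)
>
> sparkR.session.stop()
> {code}
> Note that the one hard failure ("read/write ORC files") is a separate issue: per the AnalysisException above, the ORC data source requires a Spark build with Hive support enabled, so that test cannot pass against a non-Hive build regardless of the API update.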