See <https://builds.apache.org/job/carbondata-master-spark-2.1/1441/display/redirect>
------------------------------------------
[...truncated 84.29 KB...]
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ carbondata-hadoop ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 5 source files to <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/test-classes>
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/test/java/org/apache/carbondata/hadoop/test/util/StoreCreator.java>: <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/test/java/org/apache/carbondata/hadoop/test/util/StoreCreator.java> uses unchecked or unsafe operations.
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/test/java/org/apache/carbondata/hadoop/test/util/StoreCreator.java>: Recompile with -Xlint:unchecked for details.
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-hadoop ---
[INFO] Surefire report directory: <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/surefire-reports>

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
log4j:WARN No appenders could be found for logger (org.apache.carbondata.core.util.CarbonProperties).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Running org.apache.carbondata.hadoop.test.util.ObjectSerializationUtilTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in org.apache.carbondata.hadoop.test.util.ObjectSerializationUtilTest
Running org.apache.carbondata.hadoop.ft.CarbonInputMapperTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.084 sec - in org.apache.carbondata.hadoop.ft.CarbonInputMapperTest
Running org.apache.carbondata.hadoop.ft.InputFilesTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.28 sec - in org.apache.carbondata.hadoop.ft.InputFilesTest

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[JENKINS] Recording test results
[INFO]
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-hadoop ---
[INFO] Building jar: <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/carbondata-hadoop-1.2.0-SNAPSHOT.jar>
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ carbondata-hadoop ---
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration (default-prepare-agent-integration) @ carbondata-hadoop ---
[INFO] argLine set to -javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=<https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/jacoco-it.exec>
[INFO]
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-hadoop ---
[INFO] Starting audit...
Audit done.
[INFO]
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-hadoop ---
[WARNING] sourceDirectory is not specified or does not exist value=<https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/main/scala>
Saving to outputFile=<https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/scalastyle-output.xml>
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ carbondata-hadoop ---
[INFO] Loading execution data file <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/jacoco.exec>
[INFO] Analyzed bundle 'Apache CarbonData :: Hadoop' with 29 classes
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration (default-report-integration) @ carbondata-hadoop ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:check (default-check) @ carbondata-hadoop ---
[INFO] Loading execution data file <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/jacoco.exec>
[INFO] Analyzed bundle 'carbondata-hadoop' with 0 classes
[INFO] All coverage checks have been met.
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ carbondata-hadoop ---
[INFO] Installing <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/carbondata-hadoop-1.2.0-SNAPSHOT.jar> to /home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-hadoop/1.2.0-SNAPSHOT/carbondata-hadoop-1.2.0-SNAPSHOT.jar
[INFO] Installing <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/pom.xml> to /home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-hadoop/1.2.0-SNAPSHOT/carbondata-hadoop-1.2.0-SNAPSHOT.pom
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache CarbonData :: Spark Common 1.2.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-spark-common ---
[INFO] Deleting <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/target>
[INFO]
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ carbondata-spark-common ---
[INFO] argLine set to -javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=<https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/target/jacoco.exec>
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ carbondata-spark-common ---
[INFO]
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ carbondata-spark-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/resources>
[INFO] Copying 0 resource
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-scala-plugin:2.15.2:compile (default) @ carbondata-spark-common ---
[INFO] Checking for multiple versions of scala
[INFO] includes = [**/*.java,**/*.scala,]
[INFO] excludes = []
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/java>:-1: info: compiling
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala>:-1: info: compiling
[INFO] Compiling 76 source files to <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/target/classes> at 1505924033435
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableAddColumnRDD.scala>:52: warning: no valid targets for annotation on value newColumns - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]   @transient newColumns: Seq[ColumnSchema],
[INFO]              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableDropColumnRDD.scala>:49: warning: no valid targets for annotation on value newColumns - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]   @transient newColumns: Seq[ColumnSchema],
[INFO]              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonRDD.scala>:34: warning: no valid targets for annotation on value sc - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]   abstract class CarbonRDD[T: ClassTag](@transient sc: SparkContext,
[INFO]                                                    ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonRDD.scala>:70: warning: no valid targets for annotation on value sc - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]   @transient sc: SparkContext,
[INFO]              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonScanRDD.scala>:54: warning: no valid targets for annotation on value sc - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]   @transient sc: SparkContext,
[INFO]              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonScanRDD.scala>:58: warning: no valid targets for annotation on value serializedTableInfo - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]   @transient serializedTableInfo: Array[Byte],
[INFO]              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonScanRDD.scala>:59: warning: no valid targets for annotation on value tableInfo - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]   @transient tableInfo: TableInfo, inputMetricsStats: InitInputMetrics)
[INFO]              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala>:127: warning: non-variable type argument Any in type pattern scala.collection.Map[Any,Any] is unchecked since it is eliminated by erasure
[INFO]   case m: scala.collection.Map[Any, Any] =>
[INFO]           ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/PartitionDropper.scala>:57: warning: match may not be exhaustive.
[INFO] It would fail on the following inputs: HASH, RANGE_INTERVAL
[INFO]   val targetPartitionId = partitionInfo.getPartitionType match {
[INFO]                                                          ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala>:69: warning: match may not be exhaustive.
[INFO] It would fail on the following inputs: ARRAY, BYTE, BYTE_ARRAY, FLOAT, MAP, NULL, SHORT_INT, STRUCT
[INFO]   dataType match {
[INFO]            ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataTypeConverterUtil.scala>:81: warning: match may not be exhaustive.
[INFO] It would fail on the following inputs: BOOLEAN, BYTE, BYTE_ARRAY, MAP, NULL, SHORT_INT
[INFO]   dataType match {
[INFO]            ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/spark/rdd/DataLoadPartitionCoalescer.scala>:193: warning: match may not be exhaustive.
[INFO] It would fail on the following input: None
[INFO]   hostMapPartitionIds.get(loc) match {
[INFO]                              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/spark/rdd/DataLoadPartitionCoalescer.scala>:190: warning: match may not be exhaustive.
[INFO] It would fail on the following input: None
[INFO]   partitionIdMapHosts.get(partitionId) match {
[INFO]                                      ^
[INFO] #
[ERROR] # A fatal error has been detected by the Java Runtime Environment:
[INFO] #
[ERROR] # Internal Error (output.cpp:1593), pid=31207, tid=0x00007f47e59bf700
[INFO] # guarantee((int)(blk_starts[i+1] - blk_starts[i]) >= (current_offset - blk_offset)) failed: shouldn't increase block size
[INFO] #
[INFO] # JRE version: Java(TM) SE Runtime Environment (8.0_144-b01) (build 1.8.0_144-b01)
[INFO] # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode linux-amd64 compressed oops)
[INFO] # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
[INFO] #
[ERROR] # An error report file with more information is saved as:
[INFO] # <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hs_err_pid31207.log>
[INFO] #
[INFO] # Compiler replay data is saved as:
[INFO] # <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/replay_pid31207.log>
[INFO] #
[INFO] # If you would like to submit a bug report, please visit:
[INFO] #   http://bugreport.java.com/bugreport/crash.jsp
[INFO] #
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache CarbonData :: Parent ........................ SUCCESS [ 10.279 s]
[INFO] Apache CarbonData :: Common ........................ SUCCESS [ 20.182 s]
[INFO] Apache CarbonData :: Core .......................... SUCCESS [03:37 min]
[INFO] Apache CarbonData :: Processing .................... SUCCESS [01:04 min]
[INFO] Apache CarbonData :: Hadoop ........................ SUCCESS [ 39.158 s]
[INFO] Apache CarbonData :: Spark Common .................. FAILURE [ 44.521 s]
[INFO] Apache CarbonData :: Spark2 ........................ SKIPPED
[INFO] Apache CarbonData :: Spark Common Test ............. SKIPPED
[INFO] Apache CarbonData :: Assembly ...................... SKIPPED
[INFO] Apache CarbonData :: Hive .......................... SKIPPED
[INFO] Apache CarbonData :: presto ........................ SKIPPED
[INFO] Apache CarbonData :: Spark2 Examples ............... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 06:44 min
[INFO] Finished at: 2017-09-20T16:14:28+00:00
[INFO] Final Memory: 106M/1411M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on project carbondata-spark-common: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 134(Exit value: 134) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on project carbondata-spark-common: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 134(Exit value: 134)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
	at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
	at org.jvnet.hudson.maven3.launcher.Maven33Launcher.main(Maven33Launcher.java:129)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:330)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:238)
	at jenkins.maven3.agent.Maven33Main.launch(Maven33Main.java:176)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at hudson.maven.Maven3Builder.call(Maven3Builder.java:139)
	at hudson.maven.Maven3Builder.call(Maven3Builder.java:70)
	at hudson.remoting.UserRequest.perform(UserRequest.java:153)
	at hudson.remoting.UserRequest.perform(UserRequest.java:50)
	at hudson.remoting.Request$2.run(Request.java:336)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.maven.plugin.MojoExecutionException: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 134(Exit value: 134)
	at org_scala_tools_maven.ScalaMojoSupport.execute(ScalaMojoSupport.java:350)
	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
	... 31 more
Caused by: org.apache.commons.exec.ExecuteException: Process exited with an error: 134(Exit value: 134)
	at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:346)
	at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:149)
	at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:136)
	at org_scala_tools_maven_executions.JavaMainCallerByFork.run(JavaMainCallerByFork.java:80)
	at org_scala_tools_maven.ScalaCompilerSupport.compile(ScalaCompilerSupport.java:124)
	at org_scala_tools_maven.ScalaCompilerSupport.doExecute(ScalaCompilerSupport.java:80)
	at org_scala_tools_maven.ScalaMojoSupport.execute(ScalaMojoSupport.java:342)
	... 33 more
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
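The forked scalac JVM died with the HotSpot internal error shown above, and commons-exec reported exit value 134. Exit statuses above 128 encode death by signal (status minus 128), so 134 corresponds to signal 6 (SIGABRT), which is what HotSpot raises after a fatal internal error. A minimal sketch of decoding that status with standard shell tools:

```shell
#!/bin/sh
# Exit statuses above 128 mean the child process was terminated by a
# signal: signal number = status - 128.
status=134
signal=$((status - 128))
echo "exit status $status => signal $signal"
# 'kill -l N' prints the symbolic name of signal N; signal 6 is ABRT,
# the signal a JVM raises via abort() after a fatal internal error.
kill -l "$signal"
```

This is diagnosis only; the crash itself is inside the JVM's JIT compiler (note the `replay_pid31207.log` it saved), not in the CarbonData sources being compiled.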
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :carbondata-spark-common
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/processing/pom.xml> to org.apache.carbondata/carbondata-processing/1.2.0-SNAPSHOT/carbondata-processing-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/processing/target/carbondata-processing-1.2.0-SNAPSHOT.jar> to org.apache.carbondata/carbondata-processing/1.2.0-SNAPSHOT/carbondata-processing-1.2.0-SNAPSHOT.jar
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark2/pom.xml> to org.apache.carbondata/carbondata-spark2/1.2.0-SNAPSHOT/carbondata-spark2-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/assembly/pom.xml> to org.apache.carbondata/carbondata-assembly/1.2.0-SNAPSHOT/carbondata-assembly-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/pom.xml> to org.apache.carbondata/carbondata-spark-common/1.2.0-SNAPSHOT/carbondata-spark-common-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/pom.xml> to org.apache.carbondata/carbondata-parent/1.2.0-SNAPSHOT/carbondata-parent-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/core/pom.xml> to org.apache.carbondata/carbondata-core/1.2.0-SNAPSHOT/carbondata-core-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/core/target/carbondata-core-1.2.0-SNAPSHOT.jar> to org.apache.carbondata/carbondata-core/1.2.0-SNAPSHOT/carbondata-core-1.2.0-SNAPSHOT.jar
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/pom.xml> to org.apache.carbondata/carbondata-hadoop/1.2.0-SNAPSHOT/carbondata-hadoop-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/carbondata-hadoop-1.2.0-SNAPSHOT.jar> to org.apache.carbondata/carbondata-hadoop/1.2.0-SNAPSHOT/carbondata-hadoop-1.2.0-SNAPSHOT.jar
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/presto/pom.xml> to org.apache.carbondata/carbondata-presto/1.2.0-SNAPSHOT/carbondata-presto-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/hive/pom.xml> to org.apache.carbondata/carbondata-hive/1.2.0-SNAPSHOT/carbondata-hive-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/common/pom.xml> to org.apache.carbondata/carbondata-common/1.2.0-SNAPSHOT/carbondata-common-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/common/target/carbondata-common-1.2.0-SNAPSHOT.jar> to org.apache.carbondata/carbondata-common/1.2.0-SNAPSHOT/carbondata-common-1.2.0-SNAPSHOT.jar
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common-test/pom.xml> to org.apache.carbondata/carbondata-spark-common-test/1.2.0-SNAPSHOT/carbondata-spark-common-test-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/examples/spark2/pom.xml> to org.apache.carbondata/carbondata-examples-spark2/1.2.0-SNAPSHOT/carbondata-examples-spark2-1.2.0-SNAPSHOT.pom
Sending e-mails to: commits@carbondata.apache.org
channel stopped
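Because the failure is a compiler-JVM crash rather than a source error, the standard recovery is the resume command Maven prints above, optionally after enabling core dumps as the HotSpot error report suggests. A hedged sketch of that retry; the goal `install` here is an assumed placeholder, since the log elides the actual goals as `<goals>`:

```shell
#!/bin/sh
# Allow the JVM to write a core file on the next crash, per the hint in
# the HotSpot error report ("ulimit -c unlimited").
ulimit -c unlimited 2>/dev/null || true

# Resume the reactor from the failed module (-rf = --resume-from) instead
# of rebuilding the modules that already succeeded. 'install' stands in
# for whatever goals this Jenkins job actually runs.
resume_cmd="mvn install -rf :carbondata-spark-common"
echo "$resume_cmd"
```

Running the echoed command (with `-X` appended, as Maven's hint suggests) would also capture full debug logging on the retry.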