Re: SparkSQL 1.4 can't accept registration of UDF?

2015-07-16 Thread Okehee Goh
The same issue (a custom UDF jar added through 'add jar' is not
recognized) is observed on Spark 1.4.1.

Instead of executing

beeline> add jar udf.jar

my workaround is either
1) to pass udf.jar with "--jars" when starting the ThriftServer
(this didn't work in AWS EMR's Spark 1.4.0.b), or
2) to add the custom UDF jar to SPARK_CLASSPATH (this works in AWS EMR).
Example commands for both are sketched below.
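
A minimal sketch of both workarounds (the jar path /home/hadoop/udf.jar is
hypothetical):

# 1) Pass the jar when starting the Thrift server:
./sbin/start-thriftserver.sh --jars /home/hadoop/udf.jar

# 2) Or put the jar on SPARK_CLASSPATH before starting it
#    (SPARK_CLASSPATH is deprecated but still honored in Spark 1.4):
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/udf.jar
./sbin/start-thriftserver.sh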

Thanks,


On Tue, Jul 14, 2015 at 9:29 PM, Okehee Goh  wrote:
> The command "list jar" doesn't seem to be accepted in beeline with Spark's
> ThriftServer in either Spark 1.3.1 or Spark 1.4.
>
> 0: jdbc:hive2://localhost:1> list jar;
>
> Error: org.apache.spark.sql.AnalysisException: cannot recognize input
> near 'list' 'jar' ''; line 1 pos 0 (state=,code=0)
>
> Thanks
>
> On Tue, Jul 14, 2015 at 8:46 PM, prosp4300  wrote:
>>
>>
>>
>> What's the result of "list jar" in both 1.3.1 and 1.4.0? Please check
>> whether there is any difference.
>>
>>
>>
>> At 2015-07-15 08:10:44, "ogoh"  wrote:
>>>Hello,
>>>I am using SparkSQL along with the ThriftServer so that we can access it
>>>using Hive queries.
>>>With Spark 1.3.1, I can register a UDF, but Spark 1.4.0 doesn't work for
>>>that, even though the UDF jar is the same.
>>>Below are the logs.
>>>I appreciate any advice.
>>>
>>>
>>>== With Spark 1.4
>>>Beeline version 1.4.0 by Apache Hive
>>>
>>>0: jdbc:hive2://localhost:1> add jar
>>>hdfs:///user/hive/lib/dw-udf-2015.06.06-SNAPSHOT.jar;
>>>
>>>0: jdbc:hive2://localhost:1> create temporary function parse_trace as
>>>'com.mycom.dataengine.udf.GenericUDFParseTraceAnnotation';
>>>
>>>15/07/14 23:49:43 DEBUG transport.TSaslTransport: writing data length: 206
>>>
>>>15/07/14 23:49:43 DEBUG transport.TSaslTransport: CLIENT: reading data
>>>length: 201
>>>
>>>Error: org.apache.spark.sql.execution.QueryExecutionException: FAILED:
>>>Execution Error, return code 1 from
>>>org.apache.hadoop.hive.ql.exec.FunctionTask (state=,code=0)
>>>
>>>
>>>== With Spark 1.3.1:
>>>
>>>Beeline version 1.3.1 by Apache Hive
>>>
>>>0: jdbc:hive2://localhost:10001> add jar
>>>hdfs:///user/hive/lib/dw-udf-2015.06.06-SNAPSHOT.jar;
>>>
>>>+-+
>>>
>>>| Result  |
>>>
>>>+-+
>>>
>>>+-+
>>>
>>>No rows selected (1.313 seconds)
>>>
>>>0: jdbc:hive2://localhost:10001> create temporary function parse_trace as
>>>'com.mycom.dataengine.udf.GenericUDFParseTraceAnnotation';
>>>
>>>+-+
>>>
>>>| result  |
>>>
>>>+-+
>>>
>>>+-+
>>>
>>>No rows selected (0.999 seconds)
>>>
>>>
>>>=== The logs of ThriftServer of Spark 1.4.0
>>>
>>>15/07/14 23:49:43 INFO SparkExecuteStatementOperation: Running query
>>> 'create
>>>temporary function parse_trace as
>>>'com.quixey.dataengine.udf.GenericUDFParseTraceAnnotation''
>>>
>>>15/07/14 23:49:43 INFO ParseDriver: Parsing command: create temporary
>>>function parse_trace as
>>>'com.quixey.dataengine.udf.GenericUDFParseTraceAnnotation'
>>>
>>>15/07/14 23:49:43 INFO ParseDriver: Parse Completed
>>>
>>>15/07/14 23:49:43 INFO PerfLogger: >>from=org.apache.hadoop.hive.ql.Driver>
>>>
>>>15/07/14 23:49:43 INFO PerfLogger: >>from=org.apache.hadoop.hive.ql.Driver>
>>>
>>>15/07/14 23:49:43 INFO Driver: Concurrency mode is disabled, not creating a
>>>lock manager
>>>
>>>15/07/14 23:49:43 INFO PerfLogger: >>from=org.apache.hadoop.hive.ql.Driver>
>>>
>>>15/07/14 23:49:43 INFO PerfLogger: >>from=org.apache.hadoop.hive.ql.Driver>
>>>
>>>15/07/14 23:49:43 INFO ParseDriver: Parsing command: create temporary
>>>function parse_trace as
>>>'com.quixey.dataengine.udf.GenericUDFParseTraceAnnotation'
>>>
>>>15/07/14 23:49:43 INFO ParseDriver: Parse Completed
>>>
>>>15/07/14 23:49:43 INFO PerfLogger: >>start=1436917783106 end=1436917783106 duration=0
>>>from=org.apache.hadoop.hive.ql.Driver>
>>>
>>>15/07/14 23:49:43 INFO PerfLogger: >>from=org.apache.hadoop.hive.ql.Driver>
>>

Re: SparkSQL 1.4 can't accept registration of UDF?

2015-07-14 Thread Okehee Goh
The command "list jar" doesn't seem to be accepted in beeline with Spark's
ThriftServer in either Spark 1.3.1 or Spark 1.4.

0: jdbc:hive2://localhost:1> list jar;

Error: org.apache.spark.sql.AnalysisException: cannot recognize input
near 'list' 'jar' ''; line 1 pos 0 (state=,code=0)

Thanks

On Tue, Jul 14, 2015 at 8:46 PM, prosp4300  wrote:
>
>
>
> What's the result of "list jar" in both 1.3.1 and 1.4.0? Please check
> whether there is any difference.
>
>
>
> At 2015-07-15 08:10:44, "ogoh"  wrote:
>>Hello,
>>I am using SparkSQL along with the ThriftServer so that we can access it
>>using Hive queries.
>>With Spark 1.3.1, I can register a UDF, but Spark 1.4.0 doesn't work for
>>that, even though the UDF jar is the same.
>>Below are the logs.
>>I appreciate any advice.
>>
>>
>>== With Spark 1.4
>>Beeline version 1.4.0 by Apache Hive
>>
>>0: jdbc:hive2://localhost:1> add jar
>>hdfs:///user/hive/lib/dw-udf-2015.06.06-SNAPSHOT.jar;
>>
>>0: jdbc:hive2://localhost:1> create temporary function parse_trace as
>>'com.mycom.dataengine.udf.GenericUDFParseTraceAnnotation';
>>
>>15/07/14 23:49:43 DEBUG transport.TSaslTransport: writing data length: 206
>>
>>15/07/14 23:49:43 DEBUG transport.TSaslTransport: CLIENT: reading data
>>length: 201
>>
>>Error: org.apache.spark.sql.execution.QueryExecutionException: FAILED:
>>Execution Error, return code 1 from
>>org.apache.hadoop.hive.ql.exec.FunctionTask (state=,code=0)
>>
>>
>>== With Spark 1.3.1:
>>
>>Beeline version 1.3.1 by Apache Hive
>>
>>0: jdbc:hive2://localhost:10001> add jar
>>hdfs:///user/hive/lib/dw-udf-2015.06.06-SNAPSHOT.jar;
>>
>>+-+
>>
>>| Result  |
>>
>>+-+
>>
>>+-+
>>
>>No rows selected (1.313 seconds)
>>
>>0: jdbc:hive2://localhost:10001> create temporary function parse_trace as
>>'com.mycom.dataengine.udf.GenericUDFParseTraceAnnotation';
>>
>>+-+
>>
>>| result  |
>>
>>+-+
>>
>>+-+
>>
>>No rows selected (0.999 seconds)
>>
>>
>>=== The logs of ThriftServer of Spark 1.4.0
>>
>>15/07/14 23:49:43 INFO SparkExecuteStatementOperation: Running query
>> 'create
>>temporary function parse_trace as
>>'com.quixey.dataengine.udf.GenericUDFParseTraceAnnotation''
>>
>>15/07/14 23:49:43 INFO ParseDriver: Parsing command: create temporary
>>function parse_trace as
>>'com.quixey.dataengine.udf.GenericUDFParseTraceAnnotation'
>>
>>15/07/14 23:49:43 INFO ParseDriver: Parse Completed
>>
>>15/07/14 23:49:43 INFO PerfLogger: >from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO PerfLogger: >from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO Driver: Concurrency mode is disabled, not creating a
>>lock manager
>>
>>15/07/14 23:49:43 INFO PerfLogger: >from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO PerfLogger: >from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO ParseDriver: Parsing command: create temporary
>>function parse_trace as
>>'com.quixey.dataengine.udf.GenericUDFParseTraceAnnotation'
>>
>>15/07/14 23:49:43 INFO ParseDriver: Parse Completed
>>
>>15/07/14 23:49:43 INFO PerfLogger: >start=1436917783106 end=1436917783106 duration=0
>>from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO PerfLogger: >from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO HiveMetaStore: 2: get_database: default
>>
>>15/07/14 23:49:43 INFO audit: ugi=anonymous ip=unknown-ip-addr
>>cmd=get_database: default
>>
>>15/07/14 23:49:43 INFO HiveMetaStore: 2: Opening raw store with
>>implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
>>
>>15/07/14 23:49:43 INFO ObjectStore: ObjectStore, initialize called
>>
>>15/07/14 23:49:43 INFO MetaStoreDirectSql: MySQL check failed, assuming we
>>are not on mysql: Lexical error at line 1, column 5.  Encountered: "@"
>> (64),
>>after : "".
>>
>>15/07/14 23:49:43 INFO Query: Reading in results for query
>>"org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is
>>closing
>>
>>15/07/14 23:49:43 INFO ObjectStore: Initialized ObjectStore
>>
>>15/07/14 23:49:43 INFO FunctionSemanticAnalyzer: analyze done
>>
>>15/07/14 23:49:43 INFO Driver: Semantic Analysis Completed
>>
>>15/07/14 23:49:43 INFO PerfLogger: >start=1436917783106 end=1436917783114 duration=8
>>from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO Driver: Returning Hive schema:
>>Schema(fieldSchemas:null, properties:null)
>>
>>15/07/14 23:49:43 INFO PerfLogger: >start=1436917783106 end=1436917783114 duration=8
>>from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO PerfLogger: >from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO Driver: Starting command: create temporary function
>>parse_trace as 'com.quixey.dataengine.udf.GenericUDFParseTraceAnnotation'
>>
>>15/07/14 23:49:43 INFO PerfLogger: >start=1436917783105 end=1436917783115 duration=10
>>from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO PerfLogger: >from=org.apache.hadoop.hive.ql.Driver>
>>
>>15/07/14 23:49:43 INFO PerfLogger: >

Re: SparkSQL : using Hive UDF returning Map throws "Error: scala.MatchError: interface java.util.Map (of class java.lang.Class) (state=,code=0)"

2015-06-05 Thread Okehee Goh
I will. It will be great if a simple UDF can return a complex type.
Thanks!
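
For reference, a minimal sketch of the same array-to-map logic written as a
Hive GenericUDF, which declares its map<string,string> return type through an
ObjectInspector (the class name and argument handling are illustrative, not
code from this thread):

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

public final class GenericUDFArrayToMap extends GenericUDF {

    private ListObjectInspector listOI;

    @Override
    public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
        if (arguments.length != 1 || !(arguments[0] instanceof ListObjectInspector)) {
            throw new UDFArgumentException("array_to_map expects a single array argument");
        }
        listOI = (ListObjectInspector) arguments[0];
        // Declare the map<string,string> return type explicitly.
        return ObjectInspectorFactory.getStandardMapObjectInspector(
                PrimitiveObjectInspectorFactory.javaStringObjectInspector,
                PrimitiveObjectInspectorFactory.javaStringObjectInspector);
    }

    @Override
    public Object evaluate(DeferredObject[] arguments) throws HiveException {
        Object list = arguments[0].get();
        if (list == null) {
            return null;
        }
        // Key each element by its index, as in the simple UDF quoted below.
        Map<String, String> map = new HashMap<String, String>();
        int n = listOI.getListLength(list);
        for (int i = 0; i < n; i++) {
            Object element = listOI.getListElement(list, i);
            map.put(Integer.toString(i), element == null ? null : element.toString());
        }
        return map;
    }

    @Override
    public String getDisplayString(String[] children) {
        return "array_to_map(" + (children.length > 0 ? children[0] : "") + ")";
    }
}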

On Fri, Jun 5, 2015 at 12:17 AM, Cheng, Hao  wrote:
> Confirmed: with the latest master, we don't support complex data types for
> simple Hive UDFs. Do you mind filing an issue in JIRA?
>
> -Original Message-
> From: Cheng, Hao [mailto:hao.ch...@intel.com]
> Sent: Friday, June 5, 2015 12:35 PM
> To: ogoh; user@spark.apache.org
> Subject: RE: SparkSQL : using Hive UDF returning Map throws "Error:
> scala.MatchError: interface java.util.Map (of class java.lang.Class)
> (state=,code=0)"
>
> Which version of Hive jar are you using? Hive 0.13.1 or Hive 0.12.0?
>
> -Original Message-
> From: ogoh [mailto:oke...@gmail.com]
> Sent: Friday, June 5, 2015 10:10 AM
> To: user@spark.apache.org
> Subject: SparkSQL : using Hive UDF returning Map throws "Error:
> scala.MatchError: interface java.util.Map (of class java.lang.Class)
> (state=,code=0)"
>
>
> Hello,
> I tested some custom UDFs on SparkSQL's ThriftServer & Beeline (Spark 1.3.1).
> Some UDFs work fine (taking an array parameter and returning an int or
> string type).
> But my UDF returning a map type throws an error:
> "Error: scala.MatchError: interface java.util.Map (of class java.lang.Class)
> (state=,code=0)"
>
> I converted the code into Hive's GenericUDF, hoping that a complex type
> parameter (array of map) and a complex return type (map) might be supported
> by Hive's GenericUDF even if not by a simple UDF.
> But SparkSQL doesn't seem to support GenericUDF (error message: "Error:
> java.lang.IllegalAccessException: Class
> org.apache.spark.sql.hive.HiveFunctionWrapper can not access ..").
>
> Below is my example UDF code returning a MAP type.
> I appreciate any advice.
> Thanks
>
> --
>
> import java.util.ArrayList;
> import java.util.HashMap;
> import java.util.Map;
>
> import org.apache.hadoop.hive.ql.exec.UDF;
>
> public final class ArrayToMap extends UDF {
>
>     // Keys each array element by its index as a string,
>     // e.g. ["a", "b"] -> {"0": "a", "1": "b"}.
>     public Map<String, String> evaluate(ArrayList<String> arrayOfString) {
>         // TODO: add code to handle all index problems
>         Map<String, String> map = new HashMap<String, String>();
>
>         int count = 0;
>         for (String element : arrayOfString) {
>             map.put(count + "", element);
>             count++;
>         }
>         return map;
>     }
> }
>
>
>
>
>
>
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/SparkSQL-using-Hive-UDF-returning-Map-throws-rror-scala-MatchError-interface-java-util-Map-of-class--tp23164.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.



Re: SparkSQL : using Hive UDF returning Map throws "Error: scala.MatchError: interface java.util.Map (of class java.lang.Class) (state=,code=0)"

2015-06-05 Thread Okehee Goh
It is Spark 1.3.1.e (an AWS release; I think it is close to Spark
1.3.1 with some bug fixes).

My report about GenericUDF not working in SparkSQL was wrong. I tested
with an open-source GenericUDF and it worked fine; only my GenericUDF
that returns a Map type didn't work. Sorry for the false report.



On Thu, Jun 4, 2015 at 9:35 PM, Cheng, Hao  wrote:
> Which version of Hive jar are you using? Hive 0.13.1 or Hive 0.12.0?
>
> -Original Message-
> From: ogoh [mailto:oke...@gmail.com]
> Sent: Friday, June 5, 2015 10:10 AM
> To: user@spark.apache.org
> Subject: SparkSQL : using Hive UDF returning Map throws "Error:
> scala.MatchError: interface java.util.Map (of class java.lang.Class)
> (state=,code=0)"
>
>
> Hello,
> I tested some custom UDFs on SparkSQL's ThriftServer & Beeline (Spark 1.3.1).
> Some UDFs work fine (taking an array parameter and returning an int or
> string type).
> But my UDF returning a map type throws an error:
> "Error: scala.MatchError: interface java.util.Map (of class java.lang.Class)
> (state=,code=0)"
>
> I converted the code into Hive's GenericUDF, hoping that a complex type
> parameter (array of map) and a complex return type (map) might be supported
> by Hive's GenericUDF even if not by a simple UDF.
> But SparkSQL doesn't seem to support GenericUDF (error message: "Error:
> java.lang.IllegalAccessException: Class
> org.apache.spark.sql.hive.HiveFunctionWrapper can not access ..").
>
> Below is my example UDF code returning a MAP type.
> I appreciate any advice.
> Thanks
>
> --
>
> import java.util.ArrayList;
> import java.util.HashMap;
> import java.util.Map;
>
> import org.apache.hadoop.hive.ql.exec.UDF;
>
> public final class ArrayToMap extends UDF {
>
>     // Keys each array element by its index as a string,
>     // e.g. ["a", "b"] -> {"0": "a", "1": "b"}.
>     public Map<String, String> evaluate(ArrayList<String> arrayOfString) {
>         // TODO: add code to handle all index problems
>         Map<String, String> map = new HashMap<String, String>();
>
>         int count = 0;
>         for (String element : arrayOfString) {
>             map.put(count + "", element);
>             count++;
>         }
>         return map;
>     }
> }
>
>
>
>
>
>
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/SparkSQL-using-Hive-UDF-returning-Map-throws-rror-scala-MatchError-interface-java-util-Map-of-class--tp23164.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.



Re: SparkSQL can't read S3 path for hive external table

2015-06-01 Thread Okehee Goh
Thanks, Michael and Akhil.
Yes, it worked with Spark 1.3.1 along with AWS EMR AMI 3.7.
Sorry I didn't update the status.
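
For context, the table in question would have been defined roughly like this
(a sketch reconstructed from the error below; the column list is a placeholder,
while the partition columns, Parquet format, and S3 location come from the
stack trace):

create external table api_search (query string)
partitioned by (pdate string, phour string)
stored as parquet
location 's3://test-emr/datawarehouse/api_s3_perf/api_search';

SPARK-6351 (linked below) fixed the path qualification in 1.3.1, so the s3://
location is no longer checked against the default HDFS filesystem.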


On Mon, Jun 1, 2015 at 5:17 AM, Michael Armbrust  wrote:
> This sounds like a problem that was fixed in Spark 1.3.1.
>
> https://issues.apache.org/jira/browse/SPARK-6351
>
> On Mon, Jun 1, 2015 at 5:44 PM, Akhil Das 
> wrote:
>>
>> This thread has various methods of accessing S3 from Spark; it might help
>> you.
>>
>> Thanks
>> Best Regards
>>
>> On Sun, May 24, 2015 at 8:03 AM, ogoh  wrote:
>>>
>>>
>>> Hello,
>>> I am using Spark 1.3 in AWS.
>>> SparkSQL can't recognize a Hive external table on S3.
>>> The following is the error message.
>>> I appreciate any help.
>>> Thanks,
>>> Okehee
>>> --
>>> 15/05/24 01:02:18 ERROR thriftserver.SparkSQLDriver: Failed in [select
>>> count(*) from api_search where pdate='2015-05-08']
>>> java.lang.IllegalArgumentException: Wrong FS:
>>>
>>> s3://test-emr/datawarehouse/api_s3_perf/api_search/pdate=2015-05-08/phour=00,
>>> expected: hdfs://10.128.193.211:9000
>>> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:647)
>>> at
>>> org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:467)
>>> at
>>>
>>> org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$6.apply(newParquet.scala:252)
>>> at
>>>
>>> org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$6.apply(newParquet.scala:251)
>>> at
>>>
>>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>>
>>>
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/SparkSQL-can-t-read-S3-path-for-hive-external-table-tp23002.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.



Re: Generating a schema in Spark 1.3 failed while using DataTypes.

2015-04-02 Thread Okehee Goh
Michael,
You are right. The build brought in "org.scala-lang:scala-library:2.10.1"
from another package (as below). It works fine after excluding the old
Scala version; a sketch of the exclusion follows the dependency listing.
Thanks a lot,
Okehee

== dependency:

|    +--- org.apache.kafka:kafka_2.10:0.8.1.1
|    |    +--- com.yammer.metrics:metrics-core:2.2.0
|    |    |    \--- org.slf4j:slf4j-api:1.7.2 -> 1.7.7
|    |    +--- org.xerial.snappy:snappy-java:1.0.5
|    |    +--- org.apache.zookeeper:zookeeper:3.3.4 -> 3.4.5
|    |    |    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.7
|    |    |    +--- log4j:log4j:1.2.15
|    |    |    +--- jline:jline:0.9.94
|    |    |    \--- org.jboss.netty:netty:3.2.2.Final
|    |    +--- net.sf.jopt-simple:jopt-simple:3.2 -> 4.6
|    |    +--- org.scala-lang:scala-library:2.10.1
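
A minimal sketch of the Gradle-side fix, assuming the kafka dependency is
declared directly in build.gradle (configuration names are illustrative):

dependencies {
    compile('org.apache.kafka:kafka_2.10:0.8.1.1') {
        // Keep the transitive scala-library 2.10.1 off the classpath so the
        // scala-library version Spark was built against wins.
        exclude group: 'org.scala-lang', module: 'scala-library'
    }
}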

On Thu, Apr 2, 2015 at 4:45 PM, Michael Armbrust 
wrote:

> This looks to me like you have incompatible versions of scala on your
> classpath?
>
> On Thu, Apr 2, 2015 at 4:28 PM, Okehee Goh  wrote:
>
>> Yes, below is the stack trace.
>> Thanks,
>> Okehee
>>
>> java.lang.NoSuchMethodError: scala.reflect.NameTransformer$.LOCAL_SUFFIX_STRING()Ljava/lang/String;
>>  at scala.reflect.internal.StdNames$CommonNames.<init>(StdNames.scala:97)
>>  at scala.reflect.internal.StdNames$Keywords.<init>(StdNames.scala:203)
>>  at scala.reflect.internal.StdNames$TermNames.<init>(StdNames.scala:288)
>>  at scala.reflect.internal.StdNames$nme$.<init>(StdNames.scala:1045)
>>  at scala.reflect.internal.SymbolTable.nme$lzycompute(SymbolTable.scala:16)
>>  at scala.reflect.internal.SymbolTable.nme(SymbolTable.scala:16)
>>  at scala.reflect.internal.StdNames$class.$init$(StdNames.scala:1041)
>>  at scala.reflect.internal.SymbolTable.<init>(SymbolTable.scala:16)
>>  at scala.reflect.runtime.JavaUniverse.<init>(JavaUniverse.scala:16)
>>  at scala.reflect.runtime.package$.universe$lzycompute(package.scala:17)
>>  at scala.reflect.runtime.package$.universe(package.scala:17)
>>  at org.apache.spark.sql.types.NativeType.<init>(dataTypes.scala:337)
>>  at org.apache.spark.sql.types.StringType.<init>(dataTypes.scala:351)
>>  at org.apache.spark.sql.types.StringType$.<init>(dataTypes.scala:367)
>>  at org.apache.spark.sql.types.StringType$.<clinit>(dataTypes.scala)
>>  at org.apache.spark.sql.types.DataTypes.<clinit>(DataTypes.java:30)
>>  at com.quixey.dataengine.dataprocess.parser.ToTableRecord.generateTableSchemaForSchemaRDD(ToTableRecord.java:282)
>>  at com.quixey.dataengine.dataprocess.parser.ToUDMTest.generateTableSchemaTest(ToUDMTest.java:132)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>  at java.lang.reflect.Method.invoke(Method.java:483)
>>  at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:85)
>>  at org.testng.internal.Invoker.invokeMethod(Invoker.java:696)
>>  at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:882)
>>  at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1189)
>>  at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:124)
>>  at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108)
>>  at org.testng.TestRunner.privateRun(TestRunner.java:767)
>>  at org.testng.TestRunner.run(TestRunner.java:617)
>>  at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
>>  at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:343)
>>  at org.testng.SuiteRunner.privateRun(SuiteRunner.java:305)
>>  at org.testng.SuiteRunner.run(SuiteRunner.java:254)
>>  at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
>>  at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
>>  at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
>>  at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
>>  at org.testng.TestNG.run(TestNG.java:1057)
>>  at org.gradle.api.internal.tasks.testing.testng.TestNGTestClassProcessor.stop(TestNGTestClassProcessor.java:115)
>>  at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:57)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessor

Re: Generating a schema in Spark 1.3 failed while using DataTypes.

2015-04-02 Thread Okehee Goh
Yes, below is the stack trace.
Thanks,
Okehee

java.lang.NoSuchMethodError: scala.reflect.NameTransformer$.LOCAL_SUFFIX_STRING()Ljava/lang/String;
at scala.reflect.internal.StdNames$CommonNames.<init>(StdNames.scala:97)
at scala.reflect.internal.StdNames$Keywords.<init>(StdNames.scala:203)
at scala.reflect.internal.StdNames$TermNames.<init>(StdNames.scala:288)
at scala.reflect.internal.StdNames$nme$.<init>(StdNames.scala:1045)
at scala.reflect.internal.SymbolTable.nme$lzycompute(SymbolTable.scala:16)
at scala.reflect.internal.SymbolTable.nme(SymbolTable.scala:16)
at scala.reflect.internal.StdNames$class.$init$(StdNames.scala:1041)
at scala.reflect.internal.SymbolTable.<init>(SymbolTable.scala:16)
at scala.reflect.runtime.JavaUniverse.<init>(JavaUniverse.scala:16)
at scala.reflect.runtime.package$.universe$lzycompute(package.scala:17)
at scala.reflect.runtime.package$.universe(package.scala:17)
at org.apache.spark.sql.types.NativeType.<init>(dataTypes.scala:337)
at org.apache.spark.sql.types.StringType.<init>(dataTypes.scala:351)
at org.apache.spark.sql.types.StringType$.<init>(dataTypes.scala:367)
at org.apache.spark.sql.types.StringType$.<clinit>(dataTypes.scala)
at org.apache.spark.sql.types.DataTypes.<clinit>(DataTypes.java:30)
at com.quixey.dataengine.dataprocess.parser.ToTableRecord.generateTableSchemaForSchemaRDD(ToTableRecord.java:282)
at com.quixey.dataengine.dataprocess.parser.ToUDMTest.generateTableSchemaTest(ToUDMTest.java:132)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:85)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:696)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:882)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1189)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:124)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:343)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:305)
at org.testng.SuiteRunner.run(SuiteRunner.java:254)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
at org.testng.TestNG.run(TestNG.java:1057)
at org.gradle.api.internal.tasks.testing.testng.TestNGTestClassProcessor.stop(TestNGTestClassProcessor.java:115)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.stop(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.stop(TestWorker.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
at java.util.concurrent.ThreadPoolExecutor.r
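
For reference, a minimal sketch of the kind of Java schema construction that
exercises this code path (field names are hypothetical; DataTypes is Spark
1.3's org.apache.spark.sql.types.DataTypes):

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public final class SchemaExample {
    public static StructType buildSchema() {
        // Referencing DataTypes.StringType runs the static initializer that
        // failed above when an older scala-library shadowed the one Spark
        // was compiled against.
        List<StructField> fields = new ArrayList<StructField>();
        fields.add(DataTypes.createStructField("id", DataTypes.StringType, true));
        fields.add(DataTypes.createStructField("count", DataTypes.LongType, false));
        return DataTypes.createStructType(fields);
    }
}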