OK, I did find my error. The missing step:

  mvn install

I should have republished (mvn install) all of the other modules first.

A build with mvn -pl resolves the other modules from the local
repository, so the latest code that I had git pull'ed was not actually
being used (except for the sql/core module itself).

The tests pass after properly performing the mvn install before running
with mvn -pl sql/core.
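
For reference, the sequence that fixed it for me looks roughly like the
following (profiles as in my environment; -DskipTests on the install is
just an optional shortcut to speed up the republish):

  # republish all modules to the local Maven repository (~/.m2)
  mvn -Pyarn -Pcdh5 install -DskipTests

  # then run the single suite against the freshly installed artifacts
  mvn -Pyarn -Pcdh5 test -pl sql/core \
    -DwildcardSuites=org.apache.spark.sql.SQLQuerySuite

Alternatively, Maven's -am (--also-make) flag should rebuild the modules
that sql/core depends on within the same invocation, which would avoid
the stale-artifact problem without a separate install step.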

2014-07-24 12:04 GMT-07:00 Stephen Boesch <java...@gmail.com>:

>
> Are other developers seeing the following error for the recently added
> substr() method? If not, any ideas why the following test invocation
> would be failing for me, i.e. how it would need to be tweaked?
>
> mvn -Pyarn -Pcdh5 test -pl sql/core
> -DwildcardSuites=org.apache.spark.sql.SQLQuerySuite
>
> (note: cdh5 is a custom profile for CDH 5.0.0, but it should not affect
> these results)
>
> Only the test("SPARK-2407 Added Parser of SQL SUBSTR()") fails: all of the
> other 33 tests pass.
>
> SQLQuerySuite:
> - SPARK-2041 column name equals tablename
> - SPARK-2407 Added Parser of SQL SUBSTR() *** FAILED ***
>   Exception thrown while executing query:
>   == Logical Plan ==
>   java.lang.UnsupportedOperationException
>   == Optimized Logical Plan ==
>   java.lang.UnsupportedOperationException
>   == Physical Plan ==
>   java.lang.UnsupportedOperationException
>   == Exception ==
>   java.lang.UnsupportedOperationException
>   java.lang.UnsupportedOperationException
>   at org.apache.spark.sql.catalyst.analysis.EmptyFunctionRegistry$.lookupFunction(FunctionRegistry.scala:33)
>   at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$5$$anonfun$applyOrElse$3.applyOrElse(Analyzer.scala:131)
>   at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$5$$anonfun$applyOrElse$3.applyOrElse(Analyzer.scala:129)
>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:165)
>   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:183)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>   at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>   at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>   at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>   at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)
>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:168)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1(QueryPlan.scala:52)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1$$anonfun$apply$1.apply(QueryPlan.scala:66)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.immutable.List.foreach(List.scala:318)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at
>
