Hi yixu2001,

Can you please provide the detailed steps to reproduce this issue?
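
For example, something along the lines of the sketch below would help (the table
name, schema, paths and the UPDATE step are placeholders, not your actual scenario;
"cc" is assumed to be the CarbonSession from your spark-shell log):

  // placeholder repro template -- please fill in the real steps from your environment
  cc.sql("CREATE TABLE IF NOT EXISTS public.prod_offer_inst_cab (id STRING, qty INT) STORED BY 'carbondata'")
  cc.sql("LOAD DATA INPATH 'hdfs://<namenode>/<path>/sample.csv' INTO TABLE public.prod_offer_inst_cab")
  // any UPDATE / DELETE / compaction run before the failing query is especially relevant,
  // since the NPE is raised while reading the segment update status metadata
  cc.sql("UPDATE public.prod_offer_inst_cab SET (qty) = (qty + 1) WHERE id = '1'")
  cc.sql("select count(*) from public.prod_offer_inst_cab").show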

-Regards
Kumar Vishal

On Thu, Apr 12, 2018 at 9:16 PM, yixu2001 <yixu2...@163.com> wrote:

> dev
>
>  version: spark 2.1.1, carbondata 1.3.1
>
>
> yixu2001
>
> From: yixu2001
> Date: 2018-04-12 14:17
> To: dev
> Subject: carbon java.lang.NullPointerException
> dev
> spark 2.1.1  carbondata 1.1.1
>
> scala> cc.sql("select count(*) from public.prod_offer_inst_cab").show;
> 18/04/12 11:47:17 AUDIT CarbonMetaStoreFactory: [hdd340][ip_crm][Thread-1]File based carbon metastore is enabled
> java.lang.NullPointerException
>   at org.apache.carbondata.core.mutate.CarbonUpdateUtil.getSegmentBlockNameKey(CarbonUpdateUtil.java:801)
>   at org.apache.carbondata.core.statusmanager.SegmentUpdateStatusManager.populateMap(SegmentUpdateStatusManager.java:117)
>   at org.apache.carbondata.core.statusmanager.SegmentUpdateStatusManager.<init>(SegmentUpdateStatusManager.java:106)
>   at org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy.driverSideCountStar(CarbonLateDecodeStrategy.scala:96)
>   at org.apache.spark.sql.execution.strategy.CarbonLateDecodeStrategy.apply(CarbonLateDecodeStrategy.scala:81)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
>   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>   at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
>   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>   at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
>   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>   at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
>   at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
>   at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
>   at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
>   at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2814)
>   at org.apache.spark.sql.Dataset.head(Dataset.scala:2127)
>   at org.apache.spark.sql.Dataset.take(Dataset.scala:2342)
>   at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
>   at org.apache.spark.sql.Dataset.show(Dataset.scala:638)
>   at org.apache.spark.sql.Dataset.show(Dataset.scala:597)
>   at org.apache.spark.sql.Dataset.show(Dataset.scala:606)
>   ... 50 elided
>
>
>
>
> SegmentUpdateStatusManager.java (populateMap):
>
>   String blockIdentifier = CarbonUpdateUtil
>       .getSegmentBlockNameKey(blockDetails.getSegmentName(),
>           blockDetails.getActualBlockName());
>
> return null
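>
> To illustrate (a minimal sketch only, not the actual CarbonData source): if the
> update status metadata hands back a null segment name or block name, any string
> operation on it inside getSegmentBlockNameKey would surface as exactly this
> NullPointerException:
>
>   // hypothetical sketch of the failure mode, not CarbonData code;
>   // the key is assumed to be derived from the block name string
>   def segmentBlockNameKey(segmentName: String, actualBlockName: String): String =
>     segmentName + "/" + actualBlockName.substring(0, actualBlockName.lastIndexOf("-"))
>
>   segmentBlockNameKey("0", "part-0-0_batchno0-0-1")  // fine
>   segmentBlockNameKey("0", null)                     // NullPointerException, as in the trace above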
>
> yixu2001
>
