[jira] [Updated] (CARBONDATA-4302) String query error

2021-10-11 Thread SeaAndHill (Jira)


 [ https://issues.apache.org/jira/browse/CARBONDATA-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

SeaAndHill updated CARBONDATA-4302:
---
Attachment: string query error.png

> String query error
> --
>
> Key: CARBONDATA-4302
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4302
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.5.1
>Reporter: SeaAndHill
>Priority: Blocker
> Attachments: string query error.png
>
>






[jira] [Created] (CARBONDATA-4302) String query error

2021-10-11 Thread SeaAndHill (Jira)
SeaAndHill created CARBONDATA-4302:
--

 Summary: String query error
 Key: CARBONDATA-4302
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4302
 Project: CarbonData
  Issue Type: Bug
  Components: sql
Affects Versions: 1.5.1
Reporter: SeaAndHill
 Attachments: string query error.png







[jira] [Created] (CARBONDATA-3872) IndexOutOfBoundsException in ResizableArray

2020-06-24 Thread SeaAndHill (Jira)
SeaAndHill created CARBONDATA-3872:
--

 Summary: IndexOutOfBoundsException in ResizableArray
 Key: CARBONDATA-3872
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3872
 Project: CarbonData
  Issue Type: Bug
  Components: sql
Affects Versions: 1.5.1
Reporter: SeaAndHill
 Attachments: carbondata.png

When CarbonData runs an IN query whose IN clause is a subquery, it fails at runtime with an array index out of bounds error. The corresponding Spark version is 2.2.1 and the Hadoop version is 2.7.2.
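
For reference, a minimal sketch of the query shape being described. The table and column names (events, blacklist, user_id) are hypothetical illustrations, not taken from the report, and both tables are assumed to already be registered as carbon tables in the session:

// Minimal sketch of the failing query shape described above.
// Assumptions (not from the report): carbon tables `events` and
// `blacklist`, both with a `user_id` column, already registered.
import org.apache.spark.sql.SparkSession

object InSubqueryRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("carbondata-in-subquery-repro")
      .getOrCreate()

    // The failure is reported when the IN predicate is a subquery,
    // not a literal value list.
    spark.sql(
      """SELECT user_id, count(*) AS cnt
        |FROM events
        |WHERE user_id IN (SELECT user_id FROM blacklist)
        |GROUP BY user_id""".stripMargin).show()

    spark.stop()
  }
}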





[jira] [Commented] (CARBONDATA-3612) Caused by: java.io.IOException: Problem in loading segment blocks: null

2019-12-15 Thread SeaAndHill (Jira)


[ https://issues.apache.org/jira/browse/CARBONDATA-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16996924#comment-16996924 ]

SeaAndHill commented on CARBONDATA-3612:


[~zzcclp] I have sent a request to add you on WeChat; please accept it.

> Caused by: java.io.IOException: Problem in loading segment blocks: null
> ---
>
> Key: CARBONDATA-3612
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3612
> Project: CarbonData
>  Issue Type: Bug
>  Components: core, data-load
>Affects Versions: 1.5.1
>Reporter: SeaAndHill
>Priority: Major
>

[jira] [Commented] (CARBONDATA-3612) Caused by: java.io.IOException: Problem in loading segment blocks: null

2019-12-15 Thread SeaAndHill (Jira)


[ https://issues.apache.org/jira/browse/CARBONDATA-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16996912#comment-16996912 ]

SeaAndHill commented on CARBONDATA-3612:


[~chenliang613] Could you please take a look at this, or assign it to someone?

> Caused by: java.io.IOException: Problem in loading segment blocks: null
> ---
>
> Key: CARBONDATA-3612
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3612
> Project: CarbonData
>  Issue Type: Bug
>  Components: core, data-load
>Affects Versions: 1.5.1
>Reporter: SeaAndHill
>Priority: Major
>

[jira] [Created] (CARBONDATA-3613) Carbondata 1.5.1 upgrade to 1.6.1 guide

2019-12-07 Thread SeaAndHill (Jira)
SeaAndHill created CARBONDATA-3613:
--

 Summary: Carbondata 1.5.1 upgrade to 1.6.1 guide
 Key: CARBONDATA-3613
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3613
 Project: CarbonData
  Issue Type: Wish
  Components: docs
Reporter: SeaAndHill


I am using CarbonData 1.5.1 and now want to move to version 1.6.1. How should I upgrade to the new version? Is it just a matter of replacing the jar?
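
For concreteness only (this is a sketch under stated assumptions, not an official upgrade procedure): if the upgrade does amount to replacing the carbondata assembly jar on the Spark classpath (e.g. via spark-submit --jars), a minimal sanity check afterwards might look like the following. The table name default.events is hypothetical:

// Post-upgrade sanity check, assuming the carbondata assembly jar on
// the Spark classpath has been replaced with the 1.6.1 build.
// `default.events` is a hypothetical table written under 1.5.1.
import org.apache.spark.sql.SparkSession

object UpgradeSanityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("carbondata-upgrade-sanity-check")
      .getOrCreate()

    // Verify that tables written by 1.5.1 are still readable.
    spark.sql("SELECT count(*) FROM default.events").show()

    spark.stop()
  }
}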





[jira] [Created] (CARBONDATA-3612) Caused by: java.io.IOException: Problem in loading segment blocks: null

2019-12-07 Thread SeaAndHill (Jira)
SeaAndHill created CARBONDATA-3612:
--

 Summary: Caused by: java.io.IOException: Problem in loading 
segment blocks: null
 Key: CARBONDATA-3612
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3612
 Project: CarbonData
  Issue Type: Bug
  Components: core, data-load
Affects Versions: 1.5.1
Reporter: SeaAndHill


at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
at org.apache.spark.sql.execution.exchange.ShuffleExchange.doExecute(ShuffleExchange.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:252)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:141)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:141)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:386)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at org.apache.spark.sql.execution.exchange.ShuffleExchange.prepareShuffleDependency(ShuffleExchange.scala:88)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:124)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:115)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
... 35 more
Caused by: java.io.IOException: Problem in loading segment blocks: null
at org.apache.carbondata.core.indexstore.BlockletDataMapIndexStore.getAll(BlockletDataMapIndexStore.java:193)
at org.apache.carbondata.core.indexstore.blockletindex.BlockletDataMapFactory.getDataMaps(BlockletDataMapFactory.java:144)
at org.apache.carbondata.core.datamap.TableDataMap.prune(TableDataMap.java:139)
at org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:493)
at org.apache.carbondata.hadoop.api.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:412)
at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:529)
at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:220)
at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalGetPartitions(CarbonScanRDD.scala:127)
at org.apache.carbondata.spark.rdd.CarbonRDD.getPartitions(CarbonRDD.scala:66)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:91)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$.prepareShuffleDependency(ShuffleExchange.scala:264)
at org.apache.spark.sql.execution.exchange.ShuffleExchange.prepareShuffleDependency(ShuffleExchange.scala:87)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:124)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:115)
at org.apache.spark.sql.catalyst