[ https://issues.apache.org/jira/browse/CARBONDATA-1957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320135#comment-16320135 ]

Geetika Gupta commented on CARBONDATA-1957:
-------------------------------------------

This bug has been resolved by this PR: 
https://github.com/apache/carbondata/pull/1742
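
For reference, a minimal way to re-check the fix (a sketch, assuming a build that includes the PR and the uniqdata table and datamap from the steps quoted below) is to re-run the failing statement and then query the aggregate:

create datamap uniqdata_agg on table uniqdata using 'preaggregate' as
select cust_id, cust_name, avg(decimal_column1) from uniqdata
group by cust_id, cust_name;

-- with the fix in place, the datamap load should complete and this query
-- should return averages instead of "DataLoad failure"
select cust_id, cust_name, avg(decimal_column1) from uniqdata
group by cust_id, cust_name;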

> create datamap query fails on table having dictionary_include
> -------------------------------------------------------------
>
>                 Key: CARBONDATA-1957
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1957
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.3.0
>         Environment: spark2.1
>            Reporter: Geetika Gupta
>             Fix For: 1.3.0
>
>         Attachments: 2000_UniqData.csv
>
>
> I created a datamap using the following command:
> create datamap uniqdata_agg on table uniqdata using 'preaggregate' as
> select cust_id, cust_name, avg(decimal_column1) from uniqdata
> group by cust_id, cust_name;
> It throws the following error:
> Error: java.lang.Exception: DataLoad failure: (state=,code=0)
> Steps to reproduce:
> CREATE TABLE uniqdata(CUST_ID int, CUST_NAME string, ACTIVE_EMUI_VERSION string,
> DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint,
> DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),
> Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int)
> STORED BY 'org.apache.carbondata.format'
> TBLPROPERTIES('DICTIONARY_INCLUDE'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')
> Load command:
> LOAD DATA INPATH 'HDFS_URL/BabuStore/Data/uniqdata/2000_UniqData.csv' INTO TABLE uniqdata
> OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE',
> 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')
> Create datamap command:
> create datamap uniqdata_agg on table uniqdata using 'preaggregate' as
> select cust_id, cust_name, avg(decimal_column1) from uniqdata
> group by cust_id, cust_name;
> The above command throws the following exception:
> Error: java.lang.Exception: DataLoad failure: (state=,code=0)
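> Note: the precision values in the logs below (2922, 3128, 2517, 2439, 2415) are far
> larger than anything a decimal(30,10) column can hold, which suggests the
> pre-aggregate load is reading corrupted precision metadata for DECIMAL_COLUMN1;
> consistent with the issue title, this only happens when the decimal columns are in
> DICTIONARY_INCLUDE. A possible workaround (an untested sketch; uniqdata_nodict is a
> hypothetical table name) is to leave the decimal columns out of DICTIONARY_INCLUDE:
> CREATE TABLE uniqdata_nodict(
>   -- same columns as uniqdata above
>   CUST_ID int, CUST_NAME string, ACTIVE_EMUI_VERSION string, DOB timestamp,
>   DOJ timestamp, BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint,
>   DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),
>   Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int)
> STORED BY 'org.apache.carbondata.format'
> TBLPROPERTIES('DICTIONARY_INCLUDE'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')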
> Here are the logs:
> 18/01/02 11:46:58 ERROR ParallelReadMergeSorterImpl: SafeParallelSorterPool:uniqdata_uniqdata_agg
> java.lang.IllegalArgumentException: requirement failed: Decimal precision 2922 exceeds max precision 38
>       at scala.Predef$.require(Predef.scala:224)
>       at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113)
>       at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:426)
>       at org.apache.spark.sql.types.Decimal.apply(Decimal.scala)
>       at org.apache.spark.sql.catalyst.expressions.UnsafeRow.getDecimal(UnsafeRow.java:409)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_0$(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at org.apache.carbondata.spark.rdd.LazyRddIterator.next(NewCarbonDataLoadRDD.scala:514)
>       at org.apache.carbondata.spark.rdd.LazyRddIterator.next(NewCarbonDataLoadRDD.scala:477)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.getBatch(InputProcessorStepImpl.java:239)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.next(InputProcessorStepImpl.java:200)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.next(InputProcessorStepImpl.java:129)
>       at org.apache.carbondata.processing.loading.steps.DataConverterProcessorStepImpl$1.next(DataConverterProcessorStepImpl.java:97)
>       at org.apache.carbondata.processing.loading.steps.DataConverterProcessorStepImpl$1.next(DataConverterProcessorStepImpl.java:83)
>       at org.apache.carbondata.processing.loading.sort.impl.ParallelReadMergeSorterImpl$SortIteratorThread.run(ParallelReadMergeSorterImpl.java:218)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> 18/01/02 11:46:58 ERROR ForwardDictionaryCache: SafeParallelSorterPool:uniqdata_uniqdata_agg Error loading the dictionary: null
> (the above message is logged four times)
> 18/01/02 11:46:58 ERROR ParallelReadMergeSorterImpl: SafeParallelSorterPool:uniqdata_uniqdata_agg
> java.lang.IllegalArgumentException: requirement failed: Decimal precision 3128 exceeds max precision 38
> (stack trace identical to the first Decimal precision trace above)
> 18/01/02 11:46:58 ERROR ParallelReadMergeSorterImpl: SafeParallelSorterPool:uniqdata_uniqdata_agg
> java.lang.IllegalArgumentException: requirement failed: Decimal precision 2517 exceeds max precision 38
> (stack trace identical to the first Decimal precision trace above)
> 18/01/02 11:46:58 ERROR ParallelReadMergeSorterImpl: SafeParallelSorterPool:uniqdata_uniqdata_agg
> java.lang.IllegalArgumentException: requirement failed: Decimal precision 2439 exceeds max precision 38
> (stack trace identical to the first Decimal precision trace above)
> 18/01/02 11:46:58 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
> 18/01/02 11:46:58 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
> 18/01/02 11:46:58 ERROR ShuffleBlockFetcherIterator: Error occurred while fetching local blocks
> java.lang.InterruptedException
>       at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>       at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>       at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:339)
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.fetchLocalBlocks(ShuffleBlockFetcherIterator.scala:262)
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.initialize(ShuffleBlockFetcherIterator.scala:292)
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.<init>(ShuffleBlockFetcherIterator.scala:120)
>       at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:45)
>       at org.apache.spark.sql.execution.ShuffledRowRDD.compute(ShuffledRowRDD.scala:169)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.carbondata.spark.rdd.LazyRddIterator.hasNext(NewCarbonDataLoadRDD.scala:504)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.internalHasNext(InputProcessorStepImpl.java:179)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.hasNext(InputProcessorStepImpl.java:171)
>       at org.apache.carbondata.processing.loading.steps.DataConverterProcessorStepImpl$1.hasNext(DataConverterProcessorStepImpl.java:94)
>       at org.apache.carbondata.processing.loading.sort.impl.ParallelReadMergeSorterImpl$SortIteratorThread.run(ParallelReadMergeSorterImpl.java:217)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> 18/01/02 11:46:58 ERROR ParallelReadMergeSorterImpl: SafeParallelSorterPool:uniqdata_uniqdata_agg
> org.apache.spark.shuffle.FetchFailedException
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:357)
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:332)
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:54)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>       at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>       at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
>       at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>       at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>       at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>       at org.apache.carbondata.spark.rdd.LazyRddIterator.hasNext(NewCarbonDataLoadRDD.scala:509)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.internalHasNext(InputProcessorStepImpl.java:179)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.hasNext(InputProcessorStepImpl.java:171)
>       at org.apache.carbondata.processing.loading.steps.DataConverterProcessorStepImpl$1.hasNext(DataConverterProcessorStepImpl.java:94)
>       at org.apache.carbondata.processing.loading.sort.impl.ParallelReadMergeSorterImpl$SortIteratorThread.run(ParallelReadMergeSorterImpl.java:217)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.InterruptedException
>       at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>       at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>       at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:339)
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.fetchLocalBlocks(ShuffleBlockFetcherIterator.scala:262)
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.initialize(ShuffleBlockFetcherIterator.scala:292)
>       at org.apache.spark.storage.ShuffleBlockFetcherIterator.<init>(ShuffleBlockFetcherIterator.scala:120)
>       at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:45)
>       at org.apache.spark.sql.execution.ShuffledRowRDD.compute(ShuffledRowRDD.scala:169)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.carbondata.spark.rdd.LazyRddIterator.hasNext(NewCarbonDataLoadRDD.scala:504)
>       ... 7 more
> 18/01/02 11:46:58 ERROR ParallelReadMergeSorterImpl: SafeParallelSorterPool:uniqdata_uniqdata_agg
> java.lang.IllegalArgumentException: requirement failed: Decimal precision 2415 exceeds max precision 38
> (stack trace identical to the first Decimal precision trace above)
> 18/01/02 11:46:58 INFO SortDataRows: [Executor task launch worker-27][partitionID:uniqdata;queryID:7124297644577] File based sorting will be used
> 18/01/02 11:46:58 INFO ParallelReadMergeSorterImpl: [Executor task launch worker-27][partitionID:uniqdata;queryID:7124297644577] Record Processed For table: uniqdata_uniqdata_agg
> 18/01/02 11:46:58 INFO NewDataFrameLoaderRDD: DataLoad failure
> org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
>       at org.apache.carbondata.processing.loading.sort.AbstractMergeSorter.checkError(AbstractMergeSorter.java:39)
>       at org.apache.carbondata.processing.loading.sort.impl.ParallelReadMergeSorterImpl.sort(ParallelReadMergeSorterImpl.java:117)
>       at org.apache.carbondata.processing.loading.steps.SortProcessorStepImpl.execute(SortProcessorStepImpl.java:62)
>       at org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.execute(DataWriterProcessorStepImpl.java:87)
>       at org.apache.carbondata.processing.loading.DataLoadExecutor.execute(DataLoadExecutor.java:50)
>       at org.apache.carbondata.spark.rdd.NewDataFrameLoaderRDD$$anon$2.<init>(NewCarbonDataLoadRDD.scala:392)
>       at org.apache.carbondata.spark.rdd.NewDataFrameLoaderRDD.internalCompute(NewCarbonDataLoadRDD.scala:355)
>       at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:60)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: requirement failed: Decimal precision 2922 exceeds max precision 38
>       at scala.Predef$.require(Predef.scala:224)
>       at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113)
>       at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:426)
>       at org.apache.spark.sql.types.Decimal.apply(Decimal.scala)
>       at org.apache.spark.sql.catalyst.expressions.UnsafeRow.getDecimal(UnsafeRow.java:409)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_0$(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at org.apache.carbondata.spark.rdd.LazyRddIterator.next(NewCarbonDataLoadRDD.scala:514)
>       at org.apache.carbondata.spark.rdd.LazyRddIterator.next(NewCarbonDataLoadRDD.scala:477)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.getBatch(InputProcessorStepImpl.java:239)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.next(InputProcessorStepImpl.java:200)
>       at org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.next(InputProcessorStepImpl.java:129)
>       at org.apache.carbondata.processing.loading.steps.DataConverterProcessorStepImpl$1.next(DataConverterProcessorStepImpl.java:97)
>       at org.apache.carbondata.processing.loading.steps.DataConverterProcessorStepImpl$1.next(DataConverterProcessorStepImpl.java:83)
>       at org.apache.carbondata.processing.loading.sort.impl.ParallelReadMergeSorterImpl$SortIteratorThread.run(ParallelReadMergeSorterImpl.java:218)
>       ... 3 more
> 18/01/02 11:46:58 ERROR NewDataFrameLoaderRDD: [Executor task launch worker-27][partitionID:uniqdata;queryID:7124297644577]
> org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
> (stack trace and Decimal-precision cause identical to the traces above)
> 18/01/02 11:46:58 INFO AbstractDataLoadProcessorStep: [Executor task launch worker-27][partitionID:uniqdata;queryID:7124297644577] Total rows processed in step Data Writer: 0
> 18/01/02 11:46:58 INFO AbstractDataLoadProcessorStep: [Executor task launch worker-27][partitionID:uniqdata;queryID:7124297644577] Total rows processed in step Sort Processor: 0
> 18/01/02 11:46:58 INFO AbstractDataLoadProcessorStep: [Executor task launch worker-27][partitionID:uniqdata;queryID:7124297644577] Total rows processed in step Data Converter: 0
> 18/01/02 11:46:58 INFO AbstractDataLoadProcessorStep: [Executor task launch worker-27][partitionID:uniqdata;queryID:7124297644577] Total rows processed in step Input Processor: 0
> 18/01/02 11:46:58 INFO UnsafeMemoryManager: [Executor task launch worker-27][partitionID:uniqdata;queryID:7124297644577] Total memory used after task 7124349470619 is 1542 Current tasks running now are : [5476760201267, 3629759666115, 5715198073676]
> 18/01/02 11:46:58 ERROR Executor: Exception in task 0.0 in stage 30.0 (TID 1034)
> org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
> (stack trace and Decimal-precision cause identical to the traces above)
> 18/01/02 11:46:58 INFO CarbonLoaderUtil: LocalFolderDeletionPool:uniqdata_uniqdata_agg Deleted the local store location: /tmp/7124352642546_0 : Time taken: 1
> 18/01/02 11:46:58 WARN TaskSetManager: Lost task 0.0 in stage 30.0 (TID 1034, localhost, executor driver): org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
> (stack trace and Decimal-precision cause identical to the traces above)
> 18/01/02 11:46:58 ERROR TaskSetManager: Task 0 in stage 30.0 failed 1 times; aborting job
> 18/01/02 11:46:58 INFO TaskSchedulerImpl: Removed TaskSet 30.0, whose tasks have all completed, from pool
> 18/01/02 11:46:58 INFO TaskSchedulerImpl: Cancelling stage 30
> 18/01/02 11:46:58 INFO DAGScheduler: ResultStage 30 (collect at CarbonDataRDDFactory.scala:987) failed in 0.057 s due to Job aborted due to stage failure: Task 0 in stage 30.0 failed 1 times, most recent failure: Lost task 0.0 in stage 30.0 (TID 1034, localhost, executor driver): org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
> (stack trace and Decimal-precision cause identical to the traces above)
> Driver stacktrace:
> 18/01/02 11:46:58 INFO DAGScheduler: Job 17 failed: collect at CarbonDataRDDFactory.scala:987, took 0.155142 s
> 18/01/02 11:46:58 ERROR CarbonDataRDDFactory$: pool-23-thread-27 load data frame failed
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 30.0 failed 1 times, most recent failure: Lost task 0.0 in stage 30.0 (TID 1034, localhost, executor driver): org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
> (executor-side stack trace and Decimal-precision cause identical to the traces above)
> Driver stacktrace:
>       at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
>       at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
>       at scala.Option.foreach(Option.scala:257)
>       at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
>       at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
>       at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>       at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>       at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
>       at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadDataFrame(CarbonDataRDDFactory.scala:987)
>       at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:351)
>       at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:433)
>       at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:223)
>       at org.apache.spark.sql.execution.command.DataCommand.run(package.scala:71)
>       at org.apache.spark.sql.execution.command.preaaggregate.PreAggregateUtil$.startDataLoadForDataMap(PreAggregateUtil.scala:529)
>       at org.apache.spark.sql.execution.command.preaaggregate.CreatePreAggregateTableCommand.processData(CreatePreAggregateTableCommand.scala:140)
>       at org.apache.spark.sql.execution.command.datamap.CarbonCreateDataMapCommand.processData(CarbonCreateDataMapCommand.scala:116)
>       at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:86)
>       at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>       at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>       at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>       at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>       at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>       at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
>       at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
>       at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
>       at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
>       at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
>       at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
>       at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
>       at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:220)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>       at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
> (frames identical to the CarbonDataLoadingException trace above, ending with "... 3 more")
> Caused by: java.lang.IllegalArgumentException: requirement failed: Decimal precision 2922 exceeds max precision 38
> (frames identical to the Decimal precision trace above, ending with "... 3 more")
> 18/01/02 11:46:58 INFO CarbonDataRDDFactory$: pool-23-thread-27 DataLoad failure:
> 18/01/02 11:46:58 ERROR CarbonDataRDDFactory$: pool-23-thread-27
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 30.0 failed 1 times, most recent failure: Lost task 0.0 in stage 30.0 (TID 1034, localhost, executor driver): org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
> (executor-side stack trace and Decimal-precision cause identical to the traces above)
> Driver stacktrace:
>       at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
>       at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>       at 
> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
>       at scala.Option.foreach(Option.scala:257)
>       at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
>       at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
>       at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
>       at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
>       at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>       at 
> org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
>       at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
>       at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>       at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>       at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
>       at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadDataFrame(CarbonDataRDDFactory.scala:987)
>       at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:351)
>       at 
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:433)
>       at 
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:223)
>       at 
> org.apache.spark.sql.execution.command.DataCommand.run(package.scala:71)
>       at 
> org.apache.spark.sql.execution.command.preaaggregate.PreAggregateUtil$.startDataLoadForDataMap(PreAggregateUtil.scala:529)
>       at 
> org.apache.spark.sql.execution.command.preaaggregate.CreatePreAggregateTableCommand.processData(CreatePreAggregateTableCommand.scala:140)
>       at 
> org.apache.spark.sql.execution.command.datamap.CarbonCreateDataMapCommand.processData(CarbonCreateDataMapCommand.scala:116)
>       at 
> org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:86)
>       at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>       at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>       at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
>       at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at 
> org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
>       at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
>       at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
>       at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
>       at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
>       at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
>       at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
>       at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
>       at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:220)
>       at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>       at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>       at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
>  
>       at 
> org.apache.carbondata.processing.loading.sort.AbstractMergeSorter.checkError(AbstractMergeSorter.java:39)
>       at 
> org.apache.carbondata.processing.loading.sort.impl.ParallelReadMergeSorterImpl.sort(ParallelReadMergeSorterImpl.java:117)
>       at 
> org.apache.carbondata.processing.loading.steps.SortProcessorStepImpl.execute(SortProcessorStepImpl.java:62)
>       at 
> org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.execute(DataWriterProcessorStepImpl.java:87)
>       at 
> org.apache.carbondata.processing.loading.DataLoadExecutor.execute(DataLoadExecutor.java:50)
>       at 
> org.apache.carbondata.spark.rdd.NewDataFrameLoaderRDD$$anon$2.<init>(NewCarbonDataLoadRDD.scala:392)
>       at 
> org.apache.carbondata.spark.rdd.NewDataFrameLoaderRDD.internalCompute(NewCarbonDataLoadRDD.scala:355)
>       at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:60)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       ... 3 more
> Caused by: java.lang.IllegalArgumentException: requirement failed: Decimal 
> precision 2922 exceeds max precision 38
>       at scala.Predef$.require(Predef.scala:224)
>       at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113)
>       at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:426)
>       at org.apache.spark.sql.types.Decimal.apply(Decimal.scala)
>       at 
> org.apache.spark.sql.catalyst.expressions.UnsafeRow.getDecimal(UnsafeRow.java:409)
>       at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_0$(Unknown
>  Source)
>       at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown
>  Source)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at 
> org.apache.carbondata.spark.rdd.LazyRddIterator.next(NewCarbonDataLoadRDD.scala:514)
>       at 
> org.apache.carbondata.spark.rdd.LazyRddIterator.next(NewCarbonDataLoadRDD.scala:477)
>       at 
> org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.getBatch(InputProcessorStepImpl.java:239)
>       at 
> org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.next(InputProcessorStepImpl.java:200)
>       at 
> org.apache.carbondata.processing.loading.steps.InputProcessorStepImpl$InputProcessorIterator.next(InputProcessorStepImpl.java:129)
>       at 
> org.apache.carbondata.processing.loading.steps.DataConverterProcessorStepImpl$1.next(DataConverterProcessorStepImpl.java:97)
>       at 
> org.apache.carbondata.processing.loading.steps.DataConverterProcessorStepImpl$1.next(DataConverterProcessorStepImpl.java:83)
>       at 
> org.apache.carbondata.processing.loading.sort.impl.ParallelReadMergeSorterImpl$SortIteratorThread.run(ParallelReadMergeSorterImpl.java:218)
>       ... 3 more
> 18/01/02 11:46:58 INFO HdfsFileLock: pool-23-thread-27 HDFS lock 
> path:hdfs://localhost:54311/opt/carbonStore/28dec/uniqdata_uniqdata_agg/tablestatus.lock
> 18/01/02 11:46:58 INFO CarbonLoaderUtil: pool-23-thread-27 Acquired lock for 
> table28dec.uniqdata_uniqdata_agg for table status updation
> 18/01/02 11:46:58 INFO HdfsFileLock: pool-23-thread-27 Deleted the lock file 
> hdfs://localhost:54311/opt/carbonStore/28dec/uniqdata_uniqdata_agg/tablestatus.lock
> 18/01/02 11:46:58 INFO CarbonLoaderUtil: pool-23-thread-27 Table unlocked 
> successfully after table status updation28dec.uniqdata_uniqdata_agg
> 18/01/02 11:46:58 INFO CarbonDataRDDFactory$: pool-23-thread-27 
> ********starting clean up**********
> 18/01/02 11:46:58 INFO CarbonDataRDDFactory$: pool-23-thread-27 ********clean 
> up done**********
> 18/01/02 11:46:58 AUDIT CarbonDataRDDFactory$: 
> [geetika-Vostro-15-3568][geetika][Thread-1655]Data load is failed for 
> 28dec.uniqdata_uniqdata_agg
> 18/01/02 11:46:58 WARN CarbonDataRDDFactory$: pool-23-thread-27 Cannot write 
> load metadata file as data load failed
> 18/01/02 11:46:58 ERROR CarbonLoadDataCommand: pool-23-thread-27 
> java.lang.Exception: DataLoad failure: 
>       at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:491)
>       at 
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:433)
>       at 
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:223)
>       at 
> org.apache.spark.sql.execution.command.DataCommand.run(package.scala:71)
>       at 
> org.apache.spark.sql.execution.command.preaaggregate.PreAggregateUtil$.startDataLoadForDataMap(PreAggregateUtil.scala:529)
>       at 
> org.apache.spark.sql.execution.command.preaaggregate.CreatePreAggregateTableCommand.processData(CreatePreAggregateTableCommand.scala:140)
>       at 
> org.apache.spark.sql.execution.command.datamap.CarbonCreateDataMapCommand.processData(CarbonCreateDataMapCommand.scala:116)
>       at 
> org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:86)
>       at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>       at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>       at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
>       at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at 
> org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
>       at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
>       at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
>       at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
>       at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
>       at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
>       at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
>       at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
>       at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:220)
>       at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>       at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>       at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
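
For context on the root cause visible above: "Decimal precision 2922 exceeds max precision 38" is raised by Spark's own guard in Decimal.set (Decimal.scala:113), which requires that a value's actual precision fit the declared precision, and Spark caps decimal precision at 38. The frames above it suggest that UnsafeRow.getDecimal is decoding the avg(decimal_column1) aggregation buffer with a mismatched precision/scale, so the stored bytes get reinterpreted as an implausibly wide decimal (2922 digits, far beyond anything a decimal(30,10) column can hold) before the guard fires. Below is a minimal, standalone sketch of that guard only, not CarbonData code; it assumes a Spark 2.1 dependency on the classpath, and DecimalPrecisionDemo is a hypothetical name used purely for illustration:

    import org.apache.spark.sql.types.Decimal

    object DecimalPrecisionDemo {
      def main(args: Array[String]): Unit = {
        // A BigDecimal with 39 significant digits, one more than Spark's cap of 38.
        val tooWide = BigDecimal("1" * 39)
        // Decimal.apply(value, precision, scale) delegates to Decimal.set, whose
        // require(...) throws java.lang.IllegalArgumentException:
        // "requirement failed: Decimal precision 39 exceeds max precision 38".
        Decimal(tooWide, 38, 0)
      }
    }

Reproducing the guard this way only shows where the exception originates; the defect itself appears to lie in how the pre-aggregate load path derives the target decimal precision/scale, which is what the fix referenced at the top of this comment addresses.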


