[jira] [Commented] (CARBONDATA-1290) [branch-1.1] delete problem

2017-07-12 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083789#comment-16083789
 ] 

Ashwini K commented on CARBONDATA-1290:
---

Delete is working fine for me. Could you please share the table schema and the 
data file you are using?

> [branch-1.1] delete problem
> ---
>
> Key: CARBONDATA-1290
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1290
> Project: CarbonData
>  Issue Type: Bug
>Reporter: sehriff 【FFCS研究院】
>
> 1. The max function does not return the right result:
> scala> cc.sql("select * from qqdata2.fullappend where id=1999").show(false)
> +----+--------+----------+---------+------+----+----------+----+
> |id  |qqnum   |nick      |age      |gender|auth|qunnum    |mvcc|
> +----+--------+----------+---------+------+----+----------+----+
> |1999|19991999|2009-05-27|1999c1999|1     |1   |1999dd1999|1   |
> +----+--------+----------+---------+------+----+----------+----+
> scala> cc.sql("select max(id) from qqdata2.fullappend ").show(false)
> +-------+
> |max(id)|
> +-------+
> |999    |
> +-------+
> 2. Delete error:
> scala> cc.sql("delete from qqdata2.fullappend where id>1 and id<10").show
> 17/07/11 17:32:33 AUDIT ProjectForDeleteCommand:[Thread-1] Delete data 
> request has been received for qqdata2.fullappend.
> [Stage 21:> (0 + 2) / 
> 2]17/07/11 17:32:52 WARN TaskSetManager: Lost task 1.0 in stage 21.0 (TID 40, 
> executor 2): java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.carbondata.core.mutate.CarbonUpdateUtil.getRequiredFieldFromTID(CarbonUpdateUtil.java:67)
> at 
> org.apache.carbondata.core.mutate.CarbonUpdateUtil.getSegmentWithBlockFromTID(CarbonUpdateUtil.java:76)
> at 
> org.apache.spark.sql.execution.command.deleteExecution$$anonfun$4.apply(IUDCommands.scala:555)
> at 
> org.apache.spark.sql.execution.command.deleteExecution$$anonfun$4.apply(IUDCommands.scala:552)
> at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
> at 
> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:150)
> at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
> at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
> at org.apache.spark.scheduler.Task.run(Task.scala:99)
> at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1184) Incorrect value displays in double data type.

2017-07-04 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073379#comment-16073379
 ] 

Ashwini K commented on CARBONDATA-1184:
---

The attached data file has a format problem. However, I am unable to reproduce 
the issue with similar data. Please attach the correct file.

> Incorrect value displays in double data type. 
> --
>
> Key: CARBONDATA-1184
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1184
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Priority: Minor
> Attachments: 100_olap_C20.csv
>
>
> An incorrect value is displayed to the user for the double data type.
> Steps to reproduce:
> 1:Create table:
> create table VMALL_DICTIONARY_EXCLUDE (imei string,deviceInformationId 
> int,MAC string,deviceColor string,device_backColor string,modelId 
> string,marketName string,AMSize string,ROMSize string,CUPAudit 
> string,CPIClocked string,series string,productionDate timestamp,bomCode 
> string,internalModels string, deliveryTime string, channelsId string, 
> channelsName string , deliveryAreaId string, deliveryCountry string, 
> deliveryProvince string, deliveryCity string,deliveryDistrict string, 
> deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, 
> ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity 
> string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, 
> Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion 
> string, Active_BacVerNumber string, Active_BacFlashVer string, 
> Active_webUIVersion string, Active_webUITypeCarrVer 
> string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
> Active_phonePADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, 
> Latest_DAY Decimal(30,10), Latest_HOUR string, Latest_areaId string, 
> Latest_country string, Latest_province string, Latest_city string, 
> Latest_district string, Latest_street string, Latest_releaseId string, 
> Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber 
> string, Latest_BacFlashVer string, Latest_webUIVersion string, 
> Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, 
> Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, 
> Latest_operatorId string, gamePointDescription string,gamePointId 
> double,contractNumber BigInt) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei');
> 2:Load Data:
> LOAD DATA INPATH 'hdfs://localhost:54310/100_olap_C20.csv' INTO table 
> VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE', 
> 'FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription');
> 3: Run Select Query.
> select gamePointId from VMALL_DICTIONARY_EXCLUDE;
> 4: Result:
> 0: jdbc:hive2://localhost:1> select gamePointId from 
> VMALL_DICTIONARY_EXCLUDE;
> +-----------------------+--+
> |      gamePointId      |
> +-----------------------+--+
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |

[jira] [Assigned] (CARBONDATA-1143) Incorrect Data load while loading data into struct of struct

2017-07-04 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-1143:
-

Assignee: Ashwini K

> Incorrect Data load while loading data into struct of struct
> 
>
> Key: CARBONDATA-1143
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1143
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
> Environment: spark 2.1
>Reporter: Vandana Yadav
>Assignee: Ashwini K
>Priority: Minor
> Attachments: structinstructnull.csv
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Incorrect Data load while loading data into struct of struct
> Steps to reproduce:
> 1) Create table:
> create table structinstruct(id int, structelem struct<id1:int, structelem:struct<id2:int, name:string>>) stored by 'carbondata';
> 2)Load data:
> load data inpath 'hdfs://localhost:54310/structinstructnull.csv' into table 
> structinstruct options('delimiter'=',' , 
> 'fileheader'='id,structelem','COMPLEX_DELIMITER_LEVEL_1'='#', 
> 'COMPLEX_DELIMITER_LEVEL_2'='|');
> 3)Query executed:
> select * from structinstruct;
> 4) Actual result:
> +-------+--------------------------------------------------------+--+
> |  id   |                      structelem                        |
> +-------+--------------------------------------------------------+--+
> | 1     | {"id1":111,"structelem":{"id2":1001,"name":"abc"}}    |
> | 2     | {"id1":222,"structelem":{"id2":2002,"name":"xyz"}}    |
> | NULL  | {"id1":333,"structelem":{"id2":3003,"name":"def"}}    |
> | 4     | {"id1":null,"structelem":{"id2":4004,"name":"pqr"}}   |
> | 5     | {"id1":555,"structelem":{"id2":null,"name":"ghi"}}    |
> | 6     | {"id1":666,"structelem":{"id2":6006,"name":"null"}}   |
> | 7     | {"id1":null,"structelem":{"id2":1001,"name":null}}    |
> +-------+--------------------------------------------------------+--+
> 7 rows selected (1.023 seconds)
> 5) Expected result: in the last row, "id2" should be null, as no such 
> value (1001) is provided in the CSV for that row.
> 6) Data in CSV:
> 1,111#1001|abc
> 2,222#2002|xyz
> null,333#3003|def
> 4,null#4004|pqr
> 5,555#null|ghi
> 6,666#6006|null
> 7,null



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-1184) Incorrect value displays in double data type.

2017-07-04 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-1184:
-

Assignee: Ashwini K

> Incorrect value displays in double data type. 
> --
>
> Key: CARBONDATA-1184
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1184
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Assignee: Ashwini K
>Priority: Minor
> Attachments: 100_olap_C20.csv
>
>
> An incorrect value is displayed to the user for the double data type.
> Steps to reproduce:
> 1:Create table:
> create table VMALL_DICTIONARY_EXCLUDE (imei string,deviceInformationId 
> int,MAC string,deviceColor string,device_backColor string,modelId 
> string,marketName string,AMSize string,ROMSize string,CUPAudit 
> string,CPIClocked string,series string,productionDate timestamp,bomCode 
> string,internalModels string, deliveryTime string, channelsId string, 
> channelsName string , deliveryAreaId string, deliveryCountry string, 
> deliveryProvince string, deliveryCity string,deliveryDistrict string, 
> deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, 
> ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity 
> string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, 
> Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion 
> string, Active_BacVerNumber string, Active_BacFlashVer string, 
> Active_webUIVersion string, Active_webUITypeCarrVer 
> string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
> Active_phonePADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, 
> Latest_DAY Decimal(30,10), Latest_HOUR string, Latest_areaId string, 
> Latest_country string, Latest_province string, Latest_city string, 
> Latest_district string, Latest_street string, Latest_releaseId string, 
> Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber 
> string, Latest_BacFlashVer string, Latest_webUIVersion string, 
> Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, 
> Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, 
> Latest_operatorId string, gamePointDescription string,gamePointId 
> double,contractNumber BigInt) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei');
> 2:Load Data:
> LOAD DATA INPATH 'hdfs://localhost:54310/100_olap_C20.csv' INTO table 
> VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE', 
> 'FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription');
> 3: Run Select Query.
> select gamePointId from VMALL_DICTIONARY_EXCLUDE;
> 4: Result:
> 0: jdbc:hive2://localhost:1> select gamePointId from 
> VMALL_DICTIONARY_EXCLUDE;
> +-----------------------+--+
> |      gamePointId      |
> +-----------------------+--+
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |

[jira] [Commented] (CARBONDATA-1184) Incorrect value displays in double data type.

2017-07-06 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076374#comment-16076374
 ] 

Ashwini K commented on CARBONDATA-1184:
---

Hi, as per the table structure, the value from the CSV file is read as a string 
and stored as a Double object. The output format you are seeing, 
"9.223372036854776E18", is the default print format for Double. Please use a 
number formatter on the client side to get the required format.
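
For illustration, a minimal Scala sketch (not part of the issue; the value and 
formatter pattern below are only examples) showing that this rendering is the 
default java.lang.Double behavior, and how a client-side formatter changes it:

{code}
import java.text.DecimalFormat

object DoubleFormatDemo {
  def main(args: Array[String]): Unit = {
    // A large double near Long.MaxValue, as seen in the query output.
    val gamePointId: Double = 9.223372036854776e18

    // Double.toString uses scientific notation for magnitudes >= 1e7.
    println(gamePointId)              // prints 9.223372036854776E18

    // A client-side formatter renders the same value in plain notation.
    val fmt = new DecimalFormat("#,##0.##")
    println(fmt.format(gamePointId))  // prints 9,223,372,036,854,775,808
  }
}
{code}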

> Incorrect value displays in double data type. 
> --
>
> Key: CARBONDATA-1184
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1184
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Assignee: Ashwini K
>Priority: Minor
> Attachments: 100_olap_C20.csv
>
>
> An incorrect value is displayed to the user for the double data type.
> Steps to reproduce:
> 1:Create table:
> create table VMALL_DICTIONARY_EXCLUDE (imei string,deviceInformationId 
> int,MAC string,deviceColor string,device_backColor string,modelId 
> string,marketName string,AMSize string,ROMSize string,CUPAudit 
> string,CPIClocked string,series string,productionDate timestamp,bomCode 
> string,internalModels string, deliveryTime string, channelsId string, 
> channelsName string , deliveryAreaId string, deliveryCountry string, 
> deliveryProvince string, deliveryCity string,deliveryDistrict string, 
> deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, 
> ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity 
> string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, 
> Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion 
> string, Active_BacVerNumber string, Active_BacFlashVer string, 
> Active_webUIVersion string, Active_webUITypeCarrVer 
> string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
> Active_phonePADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, 
> Latest_DAY Decimal(30,10), Latest_HOUR string, Latest_areaId string, 
> Latest_country string, Latest_province string, Latest_city string, 
> Latest_district string, Latest_street string, Latest_releaseId string, 
> Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber 
> string, Latest_BacFlashVer string, Latest_webUIVersion string, 
> Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, 
> Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, 
> Latest_operatorId string, gamePointDescription string,gamePointId 
> double,contractNumber BigInt) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei');
> 2:Load Data:
> LOAD DATA INPATH 'hdfs://localhost:54310/100_olap_C20.csv' INTO table 
> VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE', 
> 'FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription');
> 3: Run Select Query.
> select gamePointId from VMALL_DICTIONARY_EXCLUDE;
> 4: Result:
> 0: jdbc:hive2://localhost:1> select gamePointId from 
> VMALL_DICTIONARY_EXCLUDE;
> +-----------------------+--+
> |      gamePointId      |
> +-----------------------+--+
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |

[jira] [Commented] (CARBONDATA-1148) Can't load data to carbon_table

2017-06-19 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055143#comment-16055143
 ] 

Ashwini K commented on CARBONDATA-1148:
---

Is this resolved? If not, please share additional details on the table structure 
and the load data (CSV file) for which you are getting this error.

> Can't load data to carbon_table
> 
>
> Key: CARBONDATA-1148
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1148
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.1.0
> Environment: HDP  2.6
> Spark 2.1.0.2.6.0.3-8
> HDFS  2.7.3.2.6
> YARN  2.7.3
> Hive  1.2.1.2.6
> Java  1.8.0_112
> Scala 2.11.8
> CarbonData    1.1.0 (carbondata_2.11-1.1.0-shade-hadoop2.7.3.jar)
>Reporter: lonly
>Priority: Critical
>  Labels: carbon, spark
>
> scala> carbon.sql("LOAD DATA INPATH 
> 'hdfs://hmly10:8020/testdata/carbondata/sample.csv' INTO TABLE 
> carbon.test_table")
> 17/06/09 15:53:11 WARN TaskSetManager: Lost task 0.0 in stage 6.0 (TID 6, 
> hmly11, executor 1): java.lang.ClassCastException: cannot assign instance of 
> scala.collection.immutable.List$SerializationProxy to field 
> org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type 
> scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
>   at 
> java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
>   at 
> java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2024)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
>   at 
> scala.collection.immutable.List$SerializationProxy.readObject(List.scala:479)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1909)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
>   at 
> scala.collection.immutable.List$SerializationProxy.readObject(List.scala:479)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1909)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)

[jira] [Assigned] (CARBONDATA-1360) Update is not working properly for complex datatype

2017-09-20 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-1360:
-

Assignee: Ashwini K

> Update is not working properly for complex datatype
> ---
>
> Key: CARBONDATA-1360
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1360
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.1.1
> Environment: Spark 2.1
>Reporter: SWATI RAO
>Assignee: Ashwini K
>Priority: Minor
> Attachments: structofarray.csv
>
>
> Steps to reproduce :
> create table STRUCT_OF_ARRAY_update1 (CUST_ID string, YEAR int, MONTH int, 
> AGE int, GENDER string, EDUCATED string, IS_MARRIED string, STRUCT_OF_ARRAY 
> struct<ID: int,CHECK_DATE: timestamp,SNo: array<int>,sal1: 
> array<double>,state: array<string>,date1: array<timestamp>>,CARD_COUNT 
> int,DEBIT_COUNT int, CREDIT_COUNT int, DEPOSIT double, HQ_DEPOSIT double) 
> STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('NO_INVERTED_INDEX'='STRUCT_OF_ARRAY');
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (1.137 seconds)
> LOAD DATA INPATH 
> 'hdfs://localhost:54311/BabuStore/TestData/Data/complex/structofarray.csv' 
> INTO table STRUCT_OF_ARRAY_update1 options ('DELIMITER'=',', 'QUOTECHAR'='"', 
> 'FILEHEADER'='CUST_ID,YEAR,MONTH,AGE,GENDER,EDUCATED,IS_MARRIED,STRUCT_OF_ARRAY,CARD_COUNT,DEBIT_COUNT,CREDIT_COUNT,DEPOSIT,HQ_DEPOSIT','COMPLEX_DELIMITER_LEVEL_1'='$','COMPLEX_DELIMITER_LEVEL_2'='&');
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (3.82 seconds)
> update STRUCT_OF_ARRAY_update1 
> set(struct_of_array)=('{"ID":123457790,"CHECK_DATE":null,"SNo":[1099,3000],"sal1":[1099.123,3999.234],"state":["United
>  States","HI"],"date1":[null,null]},77,112,145,4.123030672E8,7.028563114E8') 
> where cust_id in ('Cust0999') ;
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (3.329 seconds)
> select struct_of_array from STRUCT_OF_ARRAY_update1 where cust_id in 
> ('Cust0999') ;
> +------------------------------------------------------------------------------------------+--+
> |                                      struct_of_array                                       |
> +------------------------------------------------------------------------------------------+--+
> | {"ID":null,"CHECK_DATE":null,"SNo":[null],"sal1":[null],"state":[null],"date1":[null]}    |
> +------------------------------------------------------------------------------------------+--+
> 1 row selected (0.433 seconds)
>  
> *No column of the structure gets updated*
> When we update using this query:
> 0: jdbc:hive2://localhost:1> update STRUCT_OF_ARRAY_update1 
> set(struct_of_array)=(8) ;
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (2.82 seconds)
> 0: jdbc:hive2://localhost:1> select struct_of_array from 
> STRUCT_OF_ARRAY_update1 where cust_id in ('Cust0999') ;
> +---------------------------------------------------------------------------------------+--+
> |                                    struct_of_array                                     |
> +---------------------------------------------------------------------------------------+--+
> | {"ID":8,"CHECK_DATE":null,"SNo":[null],"sal1":[null],"state":[null],"date1":[null]}   |
> +---------------------------------------------------------------------------------------+--+
> 1 row selected (0.191 seconds)
> *The 1st column of the structure gets updated*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1445) if 'carbon.update.persist.enable'='false', it will fail to update data

2017-09-06 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155211#comment-16155211
 ] 

Ashwini K commented on CARBONDATA-1445:
---

This issue is fixed as part of JIRA#1293 and PR 
https://github.com/apache/carbondata/pull/1161/

> if 'carbon.update.persist.enable'='false', it will fail to update data 
> ---
>
> Key: CARBONDATA-1445
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1445
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load, spark-integration, sql
>Affects Versions: 1.2.0
> Environment: CarbonData master branch, Spark 2.1.1
>Reporter: Zhichao  Zhang
>Assignee: Ashwini K
>Priority: Minor
>
> When updating data with 'carbon.update.persist.enable'='false', the update 
> fails.
> I debugged the code and found that in the method LoadTable.processData, 
> 'dataFrameWithTupleId' calls the UDF 'getTupleId()', which is defined in 
> CarbonEnv.init() as 'sparkSession.udf.register("getTupleId", () => "")'. It 
> returns a blank string to 'CarbonUpdateUtil.getRequiredFieldFromTID', so an 
> ArrayIndexOutOfBoundsException occurs.
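> To illustrate the failure mode, here is a minimal sketch (the tuple-ID layout 
> below is illustrative only; the real format is defined by TupleIdEnum) of why 
> splitting a blank tuple ID raises the exception:
> {code}
> object TupleIdSplitDemo {
>   // getRequiredFieldFromTID essentially splits the tuple ID on "/" and
>   // indexes into the parts (segment id, block id, ...).
>   def requiredField(tid: String, index: Int): String =
>     tid.split("/")(index)
> 
>   def main(args: Array[String]): Unit = {
>     // A well-formed tuple ID (hypothetical layout) works:
>     println(requiredField("0/part-0-0/0/0", 1))
> 
>     // The stub UDF returns "", so split yields Array("") and any index > 0
>     // throws java.lang.ArrayIndexOutOfBoundsException: 1, as in the trace.
>     println(requiredField("", 1))
>   }
> }
> {code}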
> *the plans (logical and physical) for dataFrameWithTupleId :*
> == Parsed Logical Plan ==
> 'Project [unresolvedalias('stringField3, None), unresolvedalias('intField, 
> None), unresolvedalias('longField, None), unresolvedalias('int2Field, None), 
> unresolvedalias('stringfield1-updatedColumn, None), 
> unresolvedalias('stringfield2-updatedColumn, None), UDF('tupleId) AS 
> segId#286]
> +- Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
> UDF:getTupleId() AS tupleId#262, concat(stringField1#111, _test) AS 
> stringfield1-updatedColumn#263, concat(stringField2#112, _test) AS 
> stringfield2-updatedColumn#264]
>+- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
>   +- 
> Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
>  CarbonDatasourceHadoopRelation [ Database name :default, Table name 
> :study_carbondata, Schema 
> :Some(StructType(StructField(stringField1,StringType,true), 
> StructField(stringField2,StringType,true), 
> StructField(stringField3,StringType,true), 
> StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
> StructField(int2Field,IntegerType,true))) ]
> == Analyzed Logical Plan ==
> stringField3: string, intField: int, longField: bigint, int2Field: int, 
> stringfield1-updatedColumn: string, stringfield2-updatedColumn: string, 
> segId: string
> Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
> stringfield1-updatedColumn#263, stringfield2-updatedColumn#264, 
> UDF(tupleId#262) AS segId#286]
> +- Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
> UDF:getTupleId() AS tupleId#262, concat(stringField1#111, _test) AS 
> stringfield1-updatedColumn#263, concat(stringField2#112, _test) AS 
> stringfield2-updatedColumn#264]
>+- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
>   +- 
> Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
>  CarbonDatasourceHadoopRelation [ Database name :default, Table name 
> :study_carbondata, Schema 
> :Some(StructType(StructField(stringField1,StringType,true), 
> StructField(stringField2,StringType,true), 
> StructField(stringField3,StringType,true), 
> StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
> StructField(int2Field,IntegerType,true))) ]
> == Optimized Logical Plan ==
> CarbonDictionaryCatalystDecoder [CarbonDecoderRelation(Map(int2Field#116 -> 
> int2Field#116, longField#115L -> longField#115L, stringField2#112 -> 
> stringField2#112, stringField1#111 -> stringField1#111, stringField3#113 -> 
> stringField3#113, intField#114 -> 
> intField#114),CarbonDatasourceHadoopRelation [ Database name :default, Table 
> name :study_carbondata, Schema 
> :Some(StructType(StructField(stringField1,StringType,true), 
> StructField(stringField2,StringType,true), 
> StructField(stringField3,StringType,true), 
> StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
> StructField(int2Field,IntegerType,true))) ])], 
> ExcludeProfile(ArrayBuffer(stringField2#112, stringField1#111)), 
> CarbonAliasDecoderRelation(), true
> +- Project [stringField3#113, intField#114, longField#115, int2Field#116, 
> concat(stringField1#111, _test) AS stringfield1-updatedColumn#263, 
> concat(stringField2#112, _test) AS stringfield2-updatedColumn#264, 
> UDF(UDF:getTupleId()) AS segId#286]
>+- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
>   +- 
> 

[jira] [Assigned] (CARBONDATA-1445) if 'carbon.update.persist.enable'='false', it will fail to update data

2017-09-06 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-1445:
-

Assignee: Ashwini K

> if 'carbon.update.persist.enable'='false', it will fail to update data 
> ---
>
> Key: CARBONDATA-1445
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1445
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load, spark-integration, sql
>Affects Versions: 1.2.0
> Environment: CarbonData master branch, Spark 2.1.1
>Reporter: Zhichao  Zhang
>Assignee: Ashwini K
>Priority: Minor
>
> When updating data with 'carbon.update.persist.enable'='false', the update 
> fails.
> I debugged the code and found that in the method LoadTable.processData, 
> 'dataFrameWithTupleId' calls the UDF 'getTupleId()', which is defined in 
> CarbonEnv.init() as 'sparkSession.udf.register("getTupleId", () => "")'. It 
> returns a blank string to 'CarbonUpdateUtil.getRequiredFieldFromTID', so an 
> ArrayIndexOutOfBoundsException occurs.
> *the plans (logical and physical) for dataFrameWithTupleId :*
> == Parsed Logical Plan ==
> 'Project [unresolvedalias('stringField3, None), unresolvedalias('intField, 
> None), unresolvedalias('longField, None), unresolvedalias('int2Field, None), 
> unresolvedalias('stringfield1-updatedColumn, None), 
> unresolvedalias('stringfield2-updatedColumn, None), UDF('tupleId) AS 
> segId#286]
> +- Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
> UDF:getTupleId() AS tupleId#262, concat(stringField1#111, _test) AS 
> stringfield1-updatedColumn#263, concat(stringField2#112, _test) AS 
> stringfield2-updatedColumn#264]
>+- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
>   +- 
> Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
>  CarbonDatasourceHadoopRelation [ Database name :default, Table name 
> :study_carbondata, Schema 
> :Some(StructType(StructField(stringField1,StringType,true), 
> StructField(stringField2,StringType,true), 
> StructField(stringField3,StringType,true), 
> StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
> StructField(int2Field,IntegerType,true))) ]
> == Analyzed Logical Plan ==
> stringField3: string, intField: int, longField: bigint, int2Field: int, 
> stringfield1-updatedColumn: string, stringfield2-updatedColumn: string, 
> segId: string
> Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
> stringfield1-updatedColumn#263, stringfield2-updatedColumn#264, 
> UDF(tupleId#262) AS segId#286]
> +- Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
> UDF:getTupleId() AS tupleId#262, concat(stringField1#111, _test) AS 
> stringfield1-updatedColumn#263, concat(stringField2#112, _test) AS 
> stringfield2-updatedColumn#264]
>+- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
>   +- 
> Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
>  CarbonDatasourceHadoopRelation [ Database name :default, Table name 
> :study_carbondata, Schema 
> :Some(StructType(StructField(stringField1,StringType,true), 
> StructField(stringField2,StringType,true), 
> StructField(stringField3,StringType,true), 
> StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
> StructField(int2Field,IntegerType,true))) ]
> == Optimized Logical Plan ==
> CarbonDictionaryCatalystDecoder [CarbonDecoderRelation(Map(int2Field#116 -> 
> int2Field#116, longField#115L -> longField#115L, stringField2#112 -> 
> stringField2#112, stringField1#111 -> stringField1#111, stringField3#113 -> 
> stringField3#113, intField#114 -> 
> intField#114),CarbonDatasourceHadoopRelation [ Database name :default, Table 
> name :study_carbondata, Schema 
> :Some(StructType(StructField(stringField1,StringType,true), 
> StructField(stringField2,StringType,true), 
> StructField(stringField3,StringType,true), 
> StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
> StructField(int2Field,IntegerType,true))) ])], 
> ExcludeProfile(ArrayBuffer(stringField2#112, stringField1#111)), 
> CarbonAliasDecoderRelation(), true
> +- Project [stringField3#113, intField#114, longField#115, int2Field#116, 
> concat(stringField1#111, _test) AS stringfield1-updatedColumn#263, 
> concat(stringField2#112, _test) AS stringfield2-updatedColumn#264, 
> UDF(UDF:getTupleId()) AS segId#286]
>+- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
>   +- 
> Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
>  CarbonDatasourceHadoopRelation [ Database name :default, Table name 
> :study_carbondata, Schema 
> 

[jira] [Updated] (CARBONDATA-1496) Array type : insert into table support

2017-10-19 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K updated CARBONDATA-1496:
--
Description: # Source table data containing Array data needs to be converted 
from the Spark datatype to string, as carbon takes a string as the input row  
(was: Source table data containing Array data needs to convert from spark 
datatype to string , as carbon takes string as input row)
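
As a rough illustration (not the project's actual converter; the delimiter and 
column names are assumptions), flattening a Spark array column into the 
delimiter-separated string form that the load path consumes:

{code}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, concat_ws}

object ArrayToStringDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]").appName("ArrayToStringDemo").getOrCreate()
    import spark.implicits._

    // A source table with an array column.
    val src = Seq((1, Seq("sam", "sam1")), (2, Seq("jane", "jane1")))
      .toDF("f1", "f2")

    // Flatten the array into one string per field using the complex
    // delimiter (here assumed to be '$'), since carbon takes string rows.
    val asString = src.withColumn("f2", concat_ws("$", col("f2")))
    asString.show(false)  // 1 | sam$sam1, 2 | jane$jane1
  }
}
{code}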

> Array type : insert into table support
> --
>
> Key: CARBONDATA-1496
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1496
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: data-load
>Reporter: Venkata Ramana G
> Fix For: 1.3.0
>
>
> # Source table data containing Array data needs to convert from spark 
> datatype to string , as carbon takes string as input row



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1501) Update Array values

2017-10-23 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216300#comment-16216300
 ] 

Ashwini K commented on CARBONDATA-1501:
---

Issue: when we update an Array data type, it corrupts the other fields with 
additional delimiters. Below is a snapshot before and after the update 
statement:
0: jdbc:hive2://localhost:1> LOAD DATA local INPATH '/rand.csv' INTO table 
testarray options ('DELIMITER'=',', 'QUOTECHAR'='"', 
'FILEHEADER'='f1,f2,f3','COMPLEX_DELIMITER_LEVEL_1'='#','COMPLEX_DELIMITER_LEVEL_2'='$');
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (0.811 seconds)
0: jdbc:hive2://localhost:1> select * from testarray ;
+-------+-----------------------+------------+--+
|  f1   |          f2           |     f3     |
+-------+-----------------------+------------+--+
| NULL  | ["f2"]                | [null]     |
| 1     | ["sam","sam1"]        | [251,251]  |
| 2     | ["jane","jane1"]      | [262,262]  |
| 3     | ["dianne","dianne1"]  | [273,273]  |
+-------+-----------------------+------------+--+
4 rows selected (0.124 seconds)

0: jdbc:hive2://localhost:1> update testarray set (f3) = ('252\\$253') 
where (f1)=(1) ;
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (0.78 seconds)
0: jdbc:hive2://localhost:1> select * from testarray ;
+-------+-----------------------+-------------+--+
|  f1   |          f2           |     f3      |
+-------+-----------------------+-------------+--+
| 1     | ["sam\","sam1\"]      | [null,253]  |
| NULL  | ["f2"]                | [null]      |
| 2     | ["jane","jane1"]      | [262,262]   |
| 3     | ["dianne","dianne1"]  | [273,273]   |
+-------+-----------------------+-------------+--+

> Update Array values
> ---
>
> Key: CARBONDATA-1501
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1501
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: core, spark-integration
>Reporter: Venkata Ramana G
>Priority: Minor
> Fix For: 1.3.0
>
>
> Update Array values.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1501) Update Array values

2017-10-23 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216301#comment-16216301
 ] 

Ashwini K commented on CARBONDATA-1501:
---

This issue is fixed as part of PR 1390.

> Update Array values
> ---
>
> Key: CARBONDATA-1501
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1501
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: core, spark-integration
>Reporter: Venkata Ramana G
>Priority: Minor
> Fix For: 1.3.0
>
>
> Update Array values.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1496) Array type : insert into table support

2017-10-24 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216441#comment-16216441
 ] 

Ashwini K commented on CARBONDATA-1496:
---

Issue: insert into is corrupting data during the insert.

0: jdbc:hive2://localhost:1> insert into testarray select * from array_t2 ;
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (0.777 seconds)
0: jdbc:hive2://localhost:1> select * from testarray ;
+-------+-------------------------+--------------+--+
|  f1   |           f2            |      f3      |
+-------+-------------------------+--------------+--+
| NULL  | ["f2\"]                 | [null]       |
| 1     | ["sam\","sam1\"]        | [null,null]  |
| 2     | ["jane\","jane1\"]      | [null,null]  |
| 3     | ["dianne\","dianne1\"]  | [null,null]  |
+-------+-------------------------+--------------+--+
4 rows selected (0.09 seconds)
0: jdbc:hive2://localhost:1> select * from array_t2 ;
+-------+-----------------------+----------------+--+
|  f1   |          f2           |       f3       |
+-------+-----------------------+----------------+--+
| NULL  | ["f2"]                | ["f3"]         |
| 1     | ["sam","sam1"]        | ["251","251"]  |
| 2     | ["jane","jane1"]      | ["262","262"]  |
| 3     | ["dianne","dianne1"]  | ["273","273"]  |
+-------+-----------------------+----------------+--+


> Array type : insert into table support
> --
>
> Key: CARBONDATA-1496
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1496
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: data-load
>Reporter: Venkata Ramana G
> Fix For: 1.3.0
>
>
> # Source table data containing Array data needs to be converted from the 
> Spark datatype to string, as carbon takes a string as the input row



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-1501) Update Array values

2017-10-23 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-1501:
-

Assignee: Ashwini K

> Update Array values
> ---
>
> Key: CARBONDATA-1501
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1501
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: core, spark-integration
>Reporter: Venkata Ramana G
>Assignee: Ashwini K
>Priority: Minor
> Fix For: 1.3.0
>
>
> Update Array values.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (CARBONDATA-1654) NullPointerException when insert overwrite table

2017-10-30 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K updated CARBONDATA-1654:
--
Comment: was deleted

(was: hi Can you please update carbondata_table schema ?)

> NullPointerException when insert overwrite table
> 
>
> Key: CARBONDATA-1654
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1654
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
> Environment: spark 2.1.1 carbondata 1.2.0
>Reporter: cen yuhai
>Priority: Critical
>
> carbon.sql("insert overwrite table carbondata_table select * from hive_table 
> where dt = '2017-10-10' ").collect
> CarbonData wants to find directory Segment_1, but there is Segment_2.
> {code}
> [Stage 0:>  (0 + 504) / 
> 504]17/10/28 19:11:28 WARN [org.glassfish.jersey.internal.Errors(191) -- 
> SparkUI-174]: The following warnings have been detected: WARNING: The 
> (sub)resource method stageData in 
> org.apache.spark.status.api.v1.OneStageResource contains empty path 
> annotation.
> 17/10/28 19:25:20 ERROR 
> [org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile(141) 
> -- main]: main Exception occurred:File does not exist: 
> hdfs://bipcluster/user/master/carbon/store/dm_test/carbondata_table/Fact/Part0/Segment_1
> 17/10/28 19:25:22 ERROR 
> [org.apache.spark.sql.execution.command.LoadTable(143) -- main]: main 
> java.lang.NullPointerException
> at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.isDirectory(AbstractDFSCarbonFile.java:88)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteRecursive(CarbonUtil.java:364)
> at 
> org.apache.carbondata.core.util.CarbonUtil.access$100(CarbonUtil.java:93)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:326)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteFoldersAndFiles(CarbonUtil.java:322)
> at 
> org.apache.carbondata.spark.load.CarbonLoaderUtil.recordLoadMetadata(CarbonLoaderUtil.java:331)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.updateStatus$1(CarbonDataRDDFactory.scala:595)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:1107)
> at 
> org.apache.spark.sql.execution.command.LoadTable.processData(carbonTableSchema.scala:1046)
> at 
> org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:754)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.processData(carbonTableSchema.scala:651)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.run(carbonTableSchema.scala:637)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
> at org.apache.spark.sql.Dataset.(Dataset.scala:180)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:619)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:36)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:41)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:43)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:45)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw.(:47)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw.(:49)
> at $line23.$read$$iw$$iw$$iw$$iw.(:51)
> at $line23.$read$$iw$$iw$$iw.(:53)
> at $line23.$read$$iw$$iw.(:55)
> at $line23.$read$$iw.(:57)
> at $line23.$read.(:59)
> at $line23.$read$.(:63)
> at $line23.$read$.()
> at $line23.$eval$.$print$lzycompute(:7)
> at $line23.$eval$.$print(:6)
> at $line23.$eval.$print()
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> 

[jira] [Commented] (CARBONDATA-1654) NullPointerException when insert overwrite table

2017-10-30 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224449#comment-16224449
 ] 

Ashwini K commented on CARBONDATA-1654:
---

Hi, can you please share the carbondata_table schema?

> NullPointerException when insert overwrite table
> 
>
> Key: CARBONDATA-1654
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1654
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
> Environment: spark 2.1.1 carbondata 1.2.0
>Reporter: cen yuhai
>Priority: Critical
>
> carbon.sql("insert overwrite table carbondata_table select * from hive_table 
> where dt = '2017-10-10' ").collect
> CarbonData wants to find directory Segment_1, but there is Segment_2.
> {code}
> [Stage 0:>  (0 + 504) / 
> 504]17/10/28 19:11:28 WARN [org.glassfish.jersey.internal.Errors(191) -- 
> SparkUI-174]: The following warnings have been detected: WARNING: The 
> (sub)resource method stageData in 
> org.apache.spark.status.api.v1.OneStageResource contains empty path 
> annotation.
> 17/10/28 19:25:20 ERROR 
> [org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile(141) 
> -- main]: main Exception occurred:File does not exist: 
> hdfs://bipcluster/user/master/carbon/store/dm_test/carbondata_table/Fact/Part0/Segment_1
> 17/10/28 19:25:22 ERROR 
> [org.apache.spark.sql.execution.command.LoadTable(143) -- main]: main 
> java.lang.NullPointerException
> at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.isDirectory(AbstractDFSCarbonFile.java:88)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteRecursive(CarbonUtil.java:364)
> at 
> org.apache.carbondata.core.util.CarbonUtil.access$100(CarbonUtil.java:93)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:326)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteFoldersAndFiles(CarbonUtil.java:322)
> at 
> org.apache.carbondata.spark.load.CarbonLoaderUtil.recordLoadMetadata(CarbonLoaderUtil.java:331)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.updateStatus$1(CarbonDataRDDFactory.scala:595)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:1107)
> at 
> org.apache.spark.sql.execution.command.LoadTable.processData(carbonTableSchema.scala:1046)
> at 
> org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:754)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.processData(carbonTableSchema.scala:651)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.run(carbonTableSchema.scala:637)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
> at org.apache.spark.sql.Dataset.(Dataset.scala:180)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:619)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:36)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:41)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:43)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:45)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw.(:47)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw.(:49)
> at $line23.$read$$iw$$iw$$iw$$iw.(:51)
> at $line23.$read$$iw$$iw$$iw.(:53)
> at $line23.$read$$iw$$iw.(:55)
> at $line23.$read$$iw.(:57)
> at $line23.$read.(:59)
> at $line23.$read$.(:63)
> at $line23.$read$.()
> at $line23.$eval$.$print$lzycompute(:7)
> at $line23.$eval$.$print(:6)
> at $line23.$eval.$print()
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> 

[jira] [Commented] (CARBONDATA-1654) NullPointerException when insert overwrite table

2017-10-30 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224446#comment-16224446
 ] 

Ashwini K commented on CARBONDATA-1654:
---

Hi, can you please share the carbondata_table schema?

> NullPointerException when insert overwrite table
> 
>
> Key: CARBONDATA-1654
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1654
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
> Environment: spark 2.1.1 carbondata 1.2.0
>Reporter: cen yuhai
>Priority: Critical
>
> carbon.sql("insert overwrite table carbondata_table select * from hive_table 
> where dt = '2017-10-10' ").collect
> CarbonData wants to find directory Segment_1, but there is Segment_2.
> {code}
> [Stage 0:>  (0 + 504) / 
> 504]17/10/28 19:11:28 WARN [org.glassfish.jersey.internal.Errors(191) -- 
> SparkUI-174]: The following warnings have been detected: WARNING: The 
> (sub)resource method stageData in 
> org.apache.spark.status.api.v1.OneStageResource contains empty path 
> annotation.
> 17/10/28 19:25:20 ERROR 
> [org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile(141) 
> -- main]: main Exception occurred:File does not exist: 
> hdfs://bipcluster/user/master/carbon/store/dm_test/carbondata_table/Fact/Part0/Segment_1
> 17/10/28 19:25:22 ERROR 
> [org.apache.spark.sql.execution.command.LoadTable(143) -- main]: main 
> java.lang.NullPointerException
> at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.isDirectory(AbstractDFSCarbonFile.java:88)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteRecursive(CarbonUtil.java:364)
> at 
> org.apache.carbondata.core.util.CarbonUtil.access$100(CarbonUtil.java:93)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:326)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteFoldersAndFiles(CarbonUtil.java:322)
> at 
> org.apache.carbondata.spark.load.CarbonLoaderUtil.recordLoadMetadata(CarbonLoaderUtil.java:331)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.updateStatus$1(CarbonDataRDDFactory.scala:595)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:1107)
> at 
> org.apache.spark.sql.execution.command.LoadTable.processData(carbonTableSchema.scala:1046)
> at 
> org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:754)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.processData(carbonTableSchema.scala:651)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.run(carbonTableSchema.scala:637)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
> at org.apache.spark.sql.Dataset.(Dataset.scala:180)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:619)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:36)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:41)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:43)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.(:45)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw.(:47)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw.(:49)
> at $line23.$read$$iw$$iw$$iw$$iw.(:51)
> at $line23.$read$$iw$$iw$$iw.(:53)
> at $line23.$read$$iw$$iw.(:55)
> at $line23.$read$$iw.(:57)
> at $line23.$read.(:59)
> at $line23.$read$.(:63)
> at $line23.$read$.()
> at $line23.$eval$.$print$lzycompute(:7)
> at $line23.$eval$.$print(:6)
> at $line23.$eval.$print()
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> 

[jira] [Commented] (CARBONDATA-1654) NullPointerException when insert overwrite table

2017-10-30 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224862#comment-16224862
 ] 

Ashwini K commented on CARBONDATA-1654:
---

Please share the table definition.

> NullPointerException when insert overwrite table
> 
>
> Key: CARBONDATA-1654
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1654
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
> Environment: spark 2.1.1 carbondata 1.2.0
>Reporter: cen yuhai
>Priority: Critical
>
> carbon.sql("insert overwrite table carbondata_table select * from hive_table 
> where dt = '2017-10-10' ").collect
> CarbonData wants to find directory Segment_1, but there is Segment_2.
> {code}
> [Stage 0:>  (0 + 504) / 
> 504]17/10/28 19:11:28 WARN [org.glassfish.jersey.internal.Errors(191) -- 
> SparkUI-174]: The following warnings have been detected: WARNING: The 
> (sub)resource method stageData in 
> org.apache.spark.status.api.v1.OneStageResource contains empty path 
> annotation.
> 17/10/28 19:25:20 ERROR 
> [org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile(141) 
> -- main]: main Exception occurred:File does not exist: 
> hdfs://bipcluster/user/master/carbon/store/dm_test/carbondata_table/Fact/Part0/Segment_1
> 17/10/28 19:25:22 ERROR 
> [org.apache.spark.sql.execution.command.LoadTable(143) -- main]: main 
> java.lang.NullPointerException
> at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.isDirectory(AbstractDFSCarbonFile.java:88)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteRecursive(CarbonUtil.java:364)
> at 
> org.apache.carbondata.core.util.CarbonUtil.access$100(CarbonUtil.java:93)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:326)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteFoldersAndFiles(CarbonUtil.java:322)
> at 
> org.apache.carbondata.spark.load.CarbonLoaderUtil.recordLoadMetadata(CarbonLoaderUtil.java:331)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.updateStatus$1(CarbonDataRDDFactory.scala:595)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:1107)
> at 
> org.apache.spark.sql.execution.command.LoadTable.processData(carbonTableSchema.scala:1046)
> at 
> org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:754)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.processData(carbonTableSchema.scala:651)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.run(carbonTableSchema.scala:637)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:180)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:619)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:36)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:41)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:43)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:45)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:47)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:49)
> at $line23.$read$$iw$$iw$$iw$$iw.<init>(<console>:51)
> at $line23.$read$$iw$$iw$$iw.<init>(<console>:53)
> at $line23.$read$$iw$$iw.<init>(<console>:55)
> at $line23.$read$$iw.<init>(<console>:57)
> at $line23.$read.<init>(<console>:59)
> at $line23.$read$.<init>(<console>:63)
> at $line23.$read$.<clinit>(<console>)
> at $line23.$eval$.$print$lzycompute(<console>:7)
> at $line23.$eval$.$print(<console>:6)
> at $line23.$eval.$print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> 

[jira] [Commented] (CARBONDATA-1654) NullPointerException when insert overwrite table

2017-10-31 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16226343#comment-16226343
 ] 

Ashwini K commented on CARBONDATA-1654:
---

Hi, I created the same tables you shared and tried the insert overwrite. It 
executed without any error. I am using Hadoop 2.7.2 and Spark 2.1, and I ran 
the statement through beeline. From the error you have shared, there seems to 
be an SSH configuration problem; I had a similar error some time back. Can you 
check your SSH configuration?
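
For reference, a minimal sketch of the kind of statements I ran (table and 
column names here are placeholders, not the reporter's exact schema):

{code}
// Sketch only: run from spark-shell with a CarbonSession named carbon.
// Table and column names are placeholders for the reporter's actual schema.
carbon.sql("CREATE TABLE IF NOT EXISTS dm_test.carbondata_table (id INT, name STRING, dt STRING) STORED BY 'carbondata'")
carbon.sql("INSERT OVERWRITE TABLE dm_test.carbondata_table SELECT id, name, dt FROM hive_table WHERE dt = '2017-10-10'").collect
carbon.sql("SELECT count(*) FROM dm_test.carbondata_table").show(false)
{code}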

> NullPointerException when insert overwrite table
> 
>
> Key: CARBONDATA-1654
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1654
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
> Environment: spark 2.1.1 carbondata 1.2.0
>Reporter: cen yuhai
>Priority: Critical
>
> carbon.sql("insert overwrite table carbondata_table select * from hive_table 
> where dt = '2017-10-10' ").collect
> CarbonData wants to find the directory Segment_1, but only Segment_2 exists.
> {code}
> [Stage 0:>  (0 + 504) / 
> 504]17/10/28 19:11:28 WARN [org.glassfish.jersey.internal.Errors(191) -- 
> SparkUI-174]: The following warnings have been detected: WARNING: The 
> (sub)resource method stageData in 
> org.apache.spark.status.api.v1.OneStageResource contains empty path 
> annotation.
> 17/10/28 19:25:20 ERROR 
> [org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile(141) 
> -- main]: main Exception occurred:File does not exist: 
> hdfs://bipcluster/user/master/carbon/store/dm_test/carbondata_table/Fact/Part0/Segment_1
> 17/10/28 19:25:22 ERROR 
> [org.apache.spark.sql.execution.command.LoadTable(143) -- main]: main 
> java.lang.NullPointerException
> at 
> org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.isDirectory(AbstractDFSCarbonFile.java:88)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteRecursive(CarbonUtil.java:364)
> at 
> org.apache.carbondata.core.util.CarbonUtil.access$100(CarbonUtil.java:93)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:326)
> at 
> org.apache.carbondata.core.util.CarbonUtil$2.run(CarbonUtil.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at 
> org.apache.carbondata.core.util.CarbonUtil.deleteFoldersAndFiles(CarbonUtil.java:322)
> at 
> org.apache.carbondata.spark.load.CarbonLoaderUtil.recordLoadMetadata(CarbonLoaderUtil.java:331)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.updateStatus$1(CarbonDataRDDFactory.scala:595)
> at 
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:1107)
> at 
> org.apache.spark.sql.execution.command.LoadTable.processData(carbonTableSchema.scala:1046)
> at 
> org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:754)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.processData(carbonTableSchema.scala:651)
> at 
> org.apache.spark.sql.execution.command.LoadTableByInsert.run(carbonTableSchema.scala:637)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
> at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:180)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:619)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:36)
> at 
> $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:41)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:43)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:45)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:47)
> at $line23.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:49)
> at $line23.$read$$iw$$iw$$iw$$iw.<init>(<console>:51)
> at $line23.$read$$iw$$iw$$iw.<init>(<console>:53)
> at $line23.$read$$iw$$iw.<init>(<console>:55)
> at $line23.$read$$iw.<init>(<console>:57)
> at $line23.$read.<init>(<console>:59)
> at $line23.$read$.<init>(<console>:63)
> at $line23.$read$.<clinit>(<console>)
> at $line23.$eval$.$print$lzycompute(<console>:7)
> at $line23.$eval$.$print(<console>:6)
> at $line23.$eval.$print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> 

[jira] [Assigned] (CARBONDATA-1341) data load does not fail for bigdecimal data type even though the action is fail and the data is not in the valid range

2017-12-21 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-1341:
-

Assignee: Ashwini K

> data load does not fail for bigdecimal data type even though the action is 
> fail and the data is not in the valid range
> --
>
> Key: CARBONDATA-1341
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1341
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Mohammad Shahid Khan
>Assignee: Ashwini K
>Priority: Minor
>
> Create table t1(c1 Decimal(3,2), c2 String) stored by 'carbondata'
> For column c1, the allowed values are 
> 2
> 2.1
> 2.11
> Invalid data
> 2.111
> Actual: when we load / insert the value 2.111 and the bad-record action is 
> fail, Carbon stores it as null.
> Expected: the load should fail if the bad-record action is fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1341) data load does not fail for bigdecimal data type even though the action is fail and the data is not in the valid range

2017-12-21 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299943#comment-16299943
 ] 

Ashwini K commented on CARBONDATA-1341:
---

Hi, in the current version of the code the invalid value 2.111 is rounded to 
the nearest representable value, 2.11. I think this is the correct behavior. 
Can you please confirm whether this JIRA can be marked as resolved?
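
For the record, a minimal sketch of the scenario I verified (spark-shell with a 
CarbonSession named carbon; the CSV path below is a hypothetical placeholder):

{code}
// Sketch only: the scenario from this issue.
carbon.sql("CREATE TABLE t1 (c1 DECIMAL(3,2), c2 STRING) STORED BY 'carbondata'")
// 2.111 exceeds scale 2; with the current code it is rounded, not rejected:
carbon.sql("INSERT INTO t1 SELECT 2.111, 'x'")
carbon.sql("SELECT c1 FROM t1").show(false)   // prints 2.11
// Load path with the bad-record action set to FAIL ('/tmp/decimal.csv' is hypothetical):
carbon.sql("LOAD DATA INPATH '/tmp/decimal.csv' INTO TABLE t1 OPTIONS('BAD_RECORDS_ACTION'='FAIL')")
{code}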

> data load does not fail for bigdecimal data type even though the action is 
> fail and the data is not in the valid range
> --
>
> Key: CARBONDATA-1341
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1341
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Mohammad Shahid Khan
>Assignee: Ashwini K
>Priority: Minor
>
> Create table t1(c1 Decimal(3,2), c2 String) stored by 'carbondata'
> For column c1, the allowed values are 
> 2
> 2.1
> 2.11
> Invalid data
> 2.111
> Actual: when we load / insert the value 2.111 and the bad-record action is 
> fail, Carbon stores it as null.
> Expected: the load should fail if the bad-record action is fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CARBONDATA-2059) Compaction support for complex type

2018-01-19 Thread Ashwini K (JIRA)
Ashwini K created CARBONDATA-2059:
-

 Summary: Compaction support for complex type 
 Key: CARBONDATA-2059
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2059
 Project: CarbonData
  Issue Type: Sub-task
Reporter: Ashwini K






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (CARBONDATA-2059) Compaction support for complex type

2018-01-19 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-2059:
-

Assignee: Ashwini K

> Compaction support for complex type 
> 
>
> Key: CARBONDATA-2059
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2059
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: Ashwini K
>Assignee: Ashwini K
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (CARBONDATA-778) Alter table support for complex type

2018-01-02 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-778:


Assignee: Ashwini K

> Alter table support for complex type
> 
>
> Key: CARBONDATA-778
> URL: https://issues.apache.org/jira/browse/CARBONDATA-778
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: Manish Gupta
>Assignee: Ashwini K
>Priority: Minor
>
> Alter table needs to support adding and dropping complex-type columns and 
> changing their data type.
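
An illustrative sketch (not part of the original issue text) of the kind of DDL 
this sub-task should enable, using hypothetical table and column names:

{code}
// Hypothetical examples of the targeted statements:
carbon.sql("ALTER TABLE t1 ADD COLUMNS (address STRUCT<city:STRING, pin:INT>)")
carbon.sql("ALTER TABLE t1 DROP COLUMNS (address)")
{code}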



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)