[jira] [Closed] (CARBONDATA-663) Major compaction is not working properly as per the configuration

2017-02-09 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-663.

Resolution: Fixed

> Major compaction is not working properly as per the configuration
> -
>
> Key: CARBONDATA-663
> URL: https://issues.apache.org/jira/browse/CARBONDATA-663
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark - 2.1
>Reporter: Anurag Srivastava
>Assignee: Rahul Kumar
> Attachments: logs, sample_str_more1.csv, show_segment.png, 
> show_segments_after_compaction.png
>
>
> I have set the property *carbon.major.compaction.size = 3* and loaded data of 
> about 5 MB. When I perform compaction the segments get compacted, but they 
> should not have been, because the loaded data is larger than the configured 
> compaction size. Here are the queries:
> *create table :* create table test_major_compaction(id Int,name string)stored 
> by 'carbondata';
> *Load Data :* Load two segments.
> LOAD DATA inpath 'hdfs://localhost:54310/sample_str_more1.csv' INTO table 
> test_major_compaction options('DELIMITER'=',', 'FILEHEADER'='id, 
> name','QUOTECHAR'='"');
> *Show segments :* show segments for table test_major_compaction;
> !https://issues.apache.org/jira/secure/attachment/12848287/show_segment.png!
> *Alter Table :* ALTER TABLE test_major_compaction COMPACT 'MAJOR';
> *Show segments :* Again see the segments :
> show segments for table test_major_compaction;
> !https://issues.apache.org/jira/secure/attachment/12848286/show_segments_after_compaction.png!
> I have attached all the data with this issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (CARBONDATA-583) Replace Function is not working for string/char

2017-02-09 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-583.

Resolution: Not A Bug
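
Since Spark SQL in these versions does not provide a replace function (hence the 
"undefined function" error), regexp_replace can be used to get the same result, 
for example:

select regexp_replace('aaabbccaabb', 'aaa', 't');
-- expected output: tbbccaabb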

> Replace Function is not working  for string/char
> 
>
> Key: CARBONDATA-583
> URL: https://issues.apache.org/jira/browse/CARBONDATA-583
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6.2
> spark 2.1
>Reporter: Anurag Srivastava
>Assignee: Rahul Kumar
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am running "replace" function but it is giving error : "undefined function 
> replace".
> Query : select replace('aaabbccaabb', 'aaa', 't');
> Expected Result : "tbbccaabb"
> Result : Error: org.apache.spark.sql.AnalysisException: undefined function 
> replace; line 1 pos 30 (state=,code=0) 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-23 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-597.


Resolution: Resolved

> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6.2
> spark 2.1
>Reporter: Anurag Srivastava
>Assignee: Mohammad Shahid Khan
> Attachments: createTable.png, ErrorLog.png, loaddata.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> I am running CarbonData with the Thrift server and I can create the table and 
> load data, but when I run *select * from table_name;* it gives the error: 
> *Block B-tree loading failed*.
> *Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> !https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> !https://issues.apache.org/jira/secure/attachment/12845771/loaddata.png!
> *Select Query :* select * from uniqdata;
> Please find the stack trace attached.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-672) Complex data type is not working while fetching it from Database

2017-01-20 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-672:
-
Description: 
I created a table with a complex data type and then loaded data into it. The 
data loaded successfully, but when I try to fetch it I get an error.

*Create Table :*

create table ct10(id Int,data array)stored by 'carbondata';

*Load Data :*

LOAD DATA inpath 'hdfs://localhost:54310/ct4.csv' INTO table ct1 
options('DELIMITER'=',', 'FILEHEADER'='id, 
data','QUOTECHAR'='"','COMPLEX_DELIMITER_LEVEL_1'='#');

*Data Query :*

Select * from ct10;

*Error :*

!https://issues.apache.org/jira/secure/attachment/12848539/complex_data_type.png!


However, when we use the query *select id from ct10;*, it works fine.

!https://issues.apache.org/jira/secure/attachment/12848538/pass_fetch.png!

I am attaching the screenshot, CSV, and log.
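
For reference, a minimal self-contained sketch of the scenario (the array 
element type is missing from the DDL above, most likely eaten by the markup, so 
array<string> is assumed here, and ct.csv is a hypothetical one-row file):

-- ct.csv contents (the '#' matches COMPLEX_DELIMITER_LEVEL_1):
--   1,a#b#c
create table ct10(id Int, data array<string>) stored by 'carbondata';
LOAD DATA inpath 'hdfs://localhost:54310/ct.csv' INTO table ct10
options('DELIMITER'=',', 'FILEHEADER'='id, data', 'QUOTECHAR'='"', 'COMPLEX_DELIMITER_LEVEL_1'='#');
select * from ct10;   -- fails as described above
select id from ct10;  -- works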

  was:
I created a table with a complex data type and then loaded data into it. The 
data loaded successfully, but when I try to fetch it I get an error.

*Create Table :*

create table ct10(id Int,data array)stored by 'carbondata';

*Load Data :*

LOAD DATA inpath 'hdfs://localhost:54310/ct4.csv' INTO table ct1 
options('DELIMITER'=',', 'FILEHEADER'='id, 
data','QUOTECHAR'='"','COMPLEX_DELIMITER_LEVEL_1'='#');

*Data Query :*

Select * from ct10;

*Error :*


However, when we use the query *select id from ct10;*, it works fine.

I am attaching the screenshot, CSV, and log.


> Complex data type is not working while fetching it from Database
> 
>
> Key: CARBONDATA-672
> URL: https://issues.apache.org/jira/browse/CARBONDATA-672
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark - 2.1
>Reporter: Anurag Srivastava
> Attachments: complex_data_type.png, ct5.csv, log, pass_fetch.png
>
>
> I created a table with a complex data type and then loaded data into it. The 
> data loaded successfully, but when I try to fetch it I get an error.
> *Create Table :*
> create table ct10(id Int,data array)stored by 'carbondata';
> *Load Data :*
> LOAD DATA inpath 'hdfs://localhost:54310/ct4.csv' INTO table ct1 
> options('DELIMITER'=',', 'FILEHEADER'='id, 
> data','QUOTECHAR'='"','COMPLEX_DELIMITER_LEVEL_1'='#');
> *Data Query :*
> Select * from ct10;
> *Error :*
> !https://issues.apache.org/jira/secure/attachment/12848539/complex_data_type.png!
> However, when we use the query *select id from ct10;*, it works fine.
> !https://issues.apache.org/jira/secure/attachment/12848538/pass_fetch.png!
> I am attaching the screenshot, CSV, and log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-672) Complex data type is not working while fetching it from Database

2017-01-20 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-672:


 Summary: Complex data type is not working while fetching it from 
Database
 Key: CARBONDATA-672
 URL: https://issues.apache.org/jira/browse/CARBONDATA-672
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
 Environment: Spark - 2.1
Reporter: Anurag Srivastava
 Attachments: complex_data_type.png, ct5.csv, log, pass_fetch.png

I created a table with a complex data type and then loaded data into it. The 
data loaded successfully, but when I try to fetch it I get an error.

*Create Table :*

create table ct10(id Int,data array)stored by 'carbondata';

*Load Data :*

LOAD DATA inpath 'hdfs://localhost:54310/ct4.csv' INTO table ct1 
options('DELIMITER'=',', 'FILEHEADER'='id, 
data','QUOTECHAR'='"','COMPLEX_DELIMITER_LEVEL_1'='#');

*Data Query :*

Select * from ct10;

*Error :*


However, when we use the query *select id from ct10;*, it works fine.

I am attaching the screenshot, CSV, and log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-603) Unable to use filter with Date Data Type

2017-01-20 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-603.

Resolution: Fixed

Closed. For reference:

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-603?focusedWorklogId=36045&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-36045
 ]

ASF GitHub Bot logged work on CARBONDATA-603:
-

Author: ASF GitHub Bot
Created on: 20/Jan/17 08:12
Start Date: 20/Jan/17 08:12
Worklog Time Spent: 10m
  *Work Description: Github user asfgit closed the pull request at:*

https://github.com/apache/incubator-carbondata/pull/551


Issue Time Tracking
---

Worklog Id: (was: 36045)
Time Spent: 3h 50m  (was: 3h 40m)

> Unable to use filter with Date Data Type
> 
>
> Key: CARBONDATA-603
> URL: https://issues.apache.org/jira/browse/CARBONDATA-603
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6.2
> spark 2.1
>Reporter: Anurag Srivastava
>Assignee: kumar vishal
> Attachments: 2000_UniqData.csv, Date.png
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> I am creating a table with the *DATE* data type and loading data into it from 
> a CSV file.
> After that, when I run a select query with a *WHERE* clause on the date 
> column, the value is converted to NULL and the result contains null values.
> *Create Table :*   CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> *Select Query :* Select cust_id, cust_name, dob from uniqdata where 
> dob='1975-06-22';
> The same query works fine on Hive. I am attaching the CSV with this.
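> For reference, the same filter written with an explicit cast (just a sketch; 
> once the fix is in, the implicit string-to-date comparison above should behave 
> the same way):
> Select cust_id, cust_name, dob from uniqdata where dob = cast('1975-06-22' as date);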
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-630) Unable to use string function on string/char data type column

2017-01-20 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-630.


Resolution: Resolved


> Unable to use string function on string/char data type column
> -
>
> Key: CARBONDATA-630
> URL: https://issues.apache.org/jira/browse/CARBONDATA-630
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
> Environment: SPARK-2.1.0
>Reporter: Anurag Srivastava
>Priority: Minor
> Fix For: 1.0.0-incubating
>
> Attachments: 2000_UniqData.csv, exception.png, Executor log
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am trying to execute string functions such as reverse, concat, lower, and 
> upper on a string/char column, but they give an error, whereas passing a 
> literal string value works.
> *Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata_char OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query :*  select Lower(cust_name) from uniqdata_char;
> After running the query I get the following error.
> !https://issues.apache.org/jira/secure/attachment/12847185/exception.png!
> But when I run:
> select Lower('TESTING') from uniqdata_char;
> it works fine.
> I have attached the CSV and executor log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-663) Major compaction is not working properly as per the configuration

2017-01-19 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-663:
-
Summary: Major compaction is not working properly as per the configuration  
(was: Major compaction is not woking properly)

> Major compaction is not working properly as per the configuration
> -
>
> Key: CARBONDATA-663
> URL: https://issues.apache.org/jira/browse/CARBONDATA-663
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark - 2.1
>Reporter: Anurag Srivastava
> Attachments: logs, sample_str_more1.csv, show_segment.png, 
> show_segments_after_compaction.png
>
>
> I have set the property *carbon.major.compaction.size = 3* and loaded data of 
> about 5 MB. When I perform compaction the segments get compacted, but they 
> should not have been, because the loaded data is larger than the configured 
> compaction size. Here are the queries:
> *create table :* create table test_major_compaction(id Int,name string)stored 
> by 'carbondata';
> *Load Data :* Load two segments.
> LOAD DATA inpath 'hdfs://localhost:54310/sample_str_more1.csv' INTO table 
> test_major_compaction options('DELIMITER'=',', 'FILEHEADER'='id, 
> name','QUOTECHAR'='"');
> *Show segments :* show segments for table test_major_compaction;
> !https://issues.apache.org/jira/secure/attachment/12848287/show_segment.png!
> *Alter Table :* ALTER TABLE test_major_compaction COMPACT 'MAJOR';
> *Show segments :* Again see the segments :
> show segments for table test_major_compaction;
> !https://issues.apache.org/jira/secure/attachment/12848286/show_segments_after_compaction.png!
> I have attached all the data with this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-663) Major compaction is not woking properly

2017-01-19 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-663:
-
Description: 
I have set the property *carbon.major.compaction.size = 3* and loaded data of 
about 5 MB. When I perform compaction the segments get compacted, but they 
should not have been, because the loaded data is larger than the configured 
compaction size. Here are the queries:

*create table :* create table test_major_compaction(id Int,name string)stored 
by 'carbondata';

*Load Data :* Load two segments.

LOAD DATA inpath 'hdfs://localhost:54310/sample_str_more1.csv' INTO table 
test_major_compaction options('DELIMITER'=',', 'FILEHEADER'='id, 
name','QUOTECHAR'='"');

*Show segments :* show segments for table test_major_compaction;

!https://issues.apache.org/jira/secure/attachment/12848287/show_segment.png!

*Alter Table :* ALTER TABLE test_major_compaction COMPACT 'MAJOR';

*Show segments :* Again see the segments :
show segments for table test_major_compaction;

!https://issues.apache.org/jira/secure/attachment/12848286/show_segments_after_compaction.png!

I have attached all the data with this issue.
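
A consolidated sketch of the repro (assuming carbon.major.compaction.size is 
set in carbon.properties and is a threshold in MB, so segments larger than it 
should be left out of a major compaction):

-- carbon.properties (assumed configuration)
-- carbon.major.compaction.size=3
create table test_major_compaction(id Int, name string) stored by 'carbondata';
-- load the same ~5 MB file twice to create two segments
LOAD DATA inpath 'hdfs://localhost:54310/sample_str_more1.csv' INTO table test_major_compaction
options('DELIMITER'=',', 'FILEHEADER'='id, name', 'QUOTECHAR'='"');
LOAD DATA inpath 'hdfs://localhost:54310/sample_str_more1.csv' INTO table test_major_compaction
options('DELIMITER'=',', 'FILEHEADER'='id, name', 'QUOTECHAR'='"');
show segments for table test_major_compaction;
ALTER TABLE test_major_compaction COMPACT 'MAJOR';
-- expected: the 5 MB segments are not merged, since they exceed the 3 MB threshold;
-- observed: they are compacted anyway (see the attached screenshots)
show segments for table test_major_compaction;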

  was:
I have set the property *carbon.major.compaction.size = 3* and loaded data of 
about 5 MB. When I perform compaction the segments get compacted, but they 
should not have been, because the loaded data is larger than the configured 
compaction size. Here are the queries:

*create table :* create table test_major_compaction(id Int,name string)stored 
by 'carbondata';

*Load Data :* Load two segments.

LOAD DATA inpath 'hdfs://localhost:54310/sample_str_more1.csv' INTO table 
test_major_compaction options('DELIMITER'=',', 'FILEHEADER'='id, 
name','QUOTECHAR'='"');

*Show segments :* show segments for table test_major_compaction;

*Alter Table :* ALTER TABLE test_major_compaction COMPACT 'MAJOR';

*Show segments :* Again see the segments :
show segments for table test_major_compaction;

I have attached all the data with this issue.


> Major compaction is not woking properly
> ---
>
> Key: CARBONDATA-663
> URL: https://issues.apache.org/jira/browse/CARBONDATA-663
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark - 2.1
>Reporter: Anurag Srivastava
> Attachments: logs, sample_str_more1.csv, show_segment.png, 
> show_segments_after_compaction.png
>
>
> I have set the property *carbon.major.compaction.size = 3* and loaded data of 
> about 5 MB. When I perform compaction the segments get compacted, but they 
> should not have been, because the loaded data is larger than the configured 
> compaction size. Here are the queries:
> *create table :* create table test_major_compaction(id Int,name string)stored 
> by 'carbondata';
> *Load Data :* Load two segments.
> LOAD DATA inpath 'hdfs://localhost:54310/sample_str_more1.csv' INTO table 
> test_major_compaction options('DELIMITER'=',', 'FILEHEADER'='id, 
> name','QUOTECHAR'='"');
> *Show segments :* show segments for table test_major_compaction;
> !https://issues.apache.org/jira/secure/attachment/12848287/show_segment.png!
> *Alter Table :* ALTER TABLE test_major_compaction COMPACT 'MAJOR';
> *Show segments :* Again see the segments :
> show segments for table test_major_compaction;
> !https://issues.apache.org/jira/secure/attachment/12848286/show_segments_after_compaction.png!
> I have attached all the data with this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-663) Major compaction is not woking properly

2017-01-19 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-663:


 Summary: Major compaction is not woking properly
 Key: CARBONDATA-663
 URL: https://issues.apache.org/jira/browse/CARBONDATA-663
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
 Environment: Spark - 2.1
Reporter: Anurag Srivastava
 Attachments: logs, sample_str_more1.csv, show_segment.png, 
show_segments_after_compaction.png

I have set the property *carbon.major.compaction.size = 3* and loaded data of 
about 5 MB. When I perform compaction the segments get compacted, but they 
should not have been, because the loaded data is larger than the configured 
compaction size. Here are the queries:

*create table :* create table test_major_compaction(id Int,name string)stored 
by 'carbondata';

*Load Data :* Load two segments.

LOAD DATA inpath 'hdfs://localhost:54310/sample_str_more1.csv' INTO table 
test_major_compaction options('DELIMITER'=',', 'FILEHEADER'='id, 
name','QUOTECHAR'='"');

*Show segments :* show segments for table test_major_compaction;

*Alter Table :* ALTER TABLE test_major_compaction COMPACT 'MAJOR';

*Show segments :* Again see the segments :
show segments for table test_major_compaction;

I have attached all the data with this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Description: 
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!


I have attached a screenshot of the log, the executor log, and the CSV.
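
For clarity, rand() in Spark SQL returns a double in [0.0, 1.0) and rand(1) is 
the seeded variant, so the update itself is straightforward; a quick check of 
the result could look like this (sketch):

Update uniqdata set (decimal_column1) = (rand());
select decimal_column1 from uniqdata limit 5;  -- should show the newly generated values instead of failing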

  was:
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!


I have attached screen shot of log and CSV with this.


> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Assignee: ravikiran
>Priority: Minor
> Attachments: 2000_UniqData.csv, error_random_function.png, 
> executor_log
>
>
> I am using the update functionality with *rand(1)* and *rand()*, which return 
> a seeded (deterministic) value and a random value respectively.
> But when I run the query it gives an error.
> *Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ("TABLE_BLOCKSIZE"= "256 MB");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation 

[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Attachment: executor_log

> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Assignee: ravikiran
>Priority: Minor
> Attachments: 2000_UniqData.csv, error_random_function.png, 
> executor_log
>
>
> I am using the update functionality with *rand(1)* and *rand()*, which return 
> a seeded (deterministic) value and a random value respectively.
> But when I run the query it gives an error.
> *Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ("TABLE_BLOCKSIZE"= "256 MB");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
> java.lang.ArrayIndexOutOfBoundsException: 1
> !https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!
> I have attached screen shot of log and CSV with this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-641) DICTIONARY_EXCLUDE is not working with 'DATE' column

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-641.


Resolution: Resolved

> DICTIONARY_EXCLUDE is not working with 'DATE' column
> 
>
> Key: CARBONDATA-641
> URL: https://issues.apache.org/jira/browse/CARBONDATA-641
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.0-incubating
> Environment: Spark - 1.6 and Spark - 2.1
>Reporter: Anurag Srivastava
>Assignee: Mohammad Shahid Khan
> Fix For: 1.0.0-incubating
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> I am trying to create a table with *"DICTIONARY_EXCLUDE"*, and this property 
> is not working for the *"DATE"* data type.
> *Query :*  CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Expected Result :* Table created.
> *Actual Result :* Error: 
> org.apache.carbondata.spark.exception.MalformedCarbonCommandException: 
> DICTIONARY_EXCLUDE is unsupported for date data type column: dob 
> (state=,code=0)
> But it works fine if I use 'TIMESTAMP' in place of 'DATE'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Description: 
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!


I have attached screen shot of log and CSV with this.

  was:
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!


I have attached screen shot of log and CSV with this.


> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, error_random_function.png
>
>
> I am using the update functionality with *rand(1)* and *rand()*, which return 
> a seeded (deterministic) value and a random value respectively.
> But when I run the query it gives an error.
> *Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ("TABLE_BLOCKSIZE"= "256 MB");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due 

[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Description: 
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!


I have attached screen shot of log and CSV with this.

  was:
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!


I have attached a screenshot of the log and the CSV.


> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, error_random_function.png
>
>
> I am using the update functionality with *rand(1)* and *rand()*, which return 
> a seeded (deterministic) value and a random value respectively.
> But when I run the query it gives an error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error

[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Description: 
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!


I have attached a screenshot of the log and the CSV.

  was:
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!





> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, error_random_function.png
>
>
> I am using the update functionality with *rand(1)* and *rand()*, which return 
> a seeded (deterministic) value and a random value respectively.
> But when I run the query it gives an error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed

[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Description: 
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1

!https://issues.apache.org/jira/secure/attachment/12847788/error_random_function.png!




  was:
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1






> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, error_random_function.png
>
>
> I am using the update functionality with *rand(1)* and *rand()*, which return 
> a seeded (deterministic) value and a random value respectively.
> But when I run the query it gives an error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 3.0 (TID 205, 192

[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Attachment: 2000_UniqData.csv
error_random_function.png

> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, error_random_function.png
>
>
> I am using the update functionality with *rand(1)* and *rand()*, which return 
> a seeded (deterministic) value and a random value respectively.
> But when I run the query it gives an error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
> java.lang.ArrayIndexOutOfBoundsException: 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Attachment: (was: error.png)

> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
>
> I am using the update functionality with *rand(1)* and *rand()*, which return 
> a seeded (deterministic) value and a random value respectively.
> But when I run the query it gives an error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
> java.lang.ArrayIndexOutOfBoundsException: 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Description: 
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1





  was:
I am using the update functionality with *rand(1)* and *rand()*, which return a 
seeded (deterministic) value and a random value respectively.

But when I run the query it gives an error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1


!https://issues.apache.org/jira/secure/attachment/12847787/error.png!



> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
>
> I am using update functionality with the *rand(1)* and *rand()* which return 
> deterministic value or random value.
> But as I run query it gives error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
> java.lang.ArrayIndexOutOfBoundsException: 1
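A quick way to narrow this down (a hedged sketch, not part of the original report): rand() and rand(1) are standard Spark SQL functions and should evaluate fine in a plain SELECT on the same table; if they do, the ArrayIndexOutOfBoundsException is specific to the update code path rather than to the functions themselves.

*Sanity check :* select rand(), rand(1) from uniqdata limit 5;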



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Description: 
I am using update functionality with the *rand(1)* and *rand()* which return 
deterministic value or random value.

But as I run query it gives error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1


!https://issues.apache.org/jira/secure/attachment/12847787/error.png!


  was:
I am using update functionality with the *rand(1)* and *rand()* which return 
deterministic value or random value.

But as I run query it gives error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1




> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
>
> I am using update functionality with the *rand(1)* and *rand()* which return 
> deterministic value or random value.
> But as I run query it gives error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
> java.lang.ArrayIndexOutOfBoundsException: 1
> !https://issues.apache.org/jira/secure/attachment/12847787/error.png!

[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Attachment: error.png

> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: error.png
>
>
> I am using update functionality with the *rand(1)* and *rand()* which return 
> deterministic value or random value.
> But as I run query it gives error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
> java.lang.ArrayIndexOutOfBoundsException: 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-649:
-
Description: 
I am using update functionality with the *rand(1)* and *rand()* which return 
deterministic value or random value.

But as I run query it gives error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));

Expected Result : Update column with random value.

Actual Result :Error: java.lang.RuntimeException: Update operation failed. Job 
aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
java.lang.ArrayIndexOutOfBoundsException: 1



  was:
I am using update functionality with the *rand(1)* and *rand()* which return 
deterministic value or random value.

But as I run query it gives error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
Expected Result : Table created.

Actual Result : Error: 
org.apache.carbondata.spark.exception.MalformedCarbonCommandException: 
DICTIONARY_EXCLUDE is unsupported for date data type column: dob (state=,code=0)


> Rand() function is not working while updating data
> --
>
> Key: CARBONDATA-649
> URL: https://issues.apache.org/jira/browse/CARBONDATA-649
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6
>Reporter: Anurag Srivastava
>Priority: Minor
>
> I am using update functionality with the *rand(1)* and *rand()* which return 
> deterministic value or random value.
> But as I run query it gives error.
> *Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query-1 :* Update uniqdata  set (decimal_column1) = (rand());
> *Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
> Expected Result : Update column with random value.
> Actual Result :Error: java.lang.RuntimeException: Update operation failed. 
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 3.0 (TID 205, 192.168.2.140): 
> java.lang.ArrayIndexOutOfBoundsException: 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-649) Rand() function is not working while updating data

2017-01-17 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-649:


 Summary: Rand() function is not working while updating data
 Key: CARBONDATA-649
 URL: https://issues.apache.org/jira/browse/CARBONDATA-649
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
 Environment: Spark 1.6
Reporter: Anurag Srivastava
Priority: Minor


I am using update functionality with the *rand(1)* and *rand()* which return 
deterministic value or random value.

But as I run query it gives error.

*Create Table :* CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query-1 :* Update uniqdata  set (decimal_column1) = (rand());

*Query-2 :* Update uniqdata  set (decimal_column1) = (rand(1));
Expected Result : Table created.

Actual Result : Error: 
org.apache.carbondata.spark.exception.MalformedCarbonCommandException: 
DICTIONARY_EXCLUDE is unsupported for date data type column: dob (state=,code=0)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-396) Implement test cases for datastorage package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-396.

Resolution: Later

> Implement test cases for datastorage package
> 
>
> Key: CARBONDATA-396
> URL: https://issues.apache.org/jira/browse/CARBONDATA-396
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-543) Implement unit test cases for DataBlockIteratorImpl, IntermediateFileMerger and SortDataRows classes

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-543.

Resolution: Later

> Implement unit test cases for DataBlockIteratorImpl, IntermediateFileMerger 
> and SortDataRows classes
> 
>
> Key: CARBONDATA-543
> URL: https://issues.apache.org/jira/browse/CARBONDATA-543
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-551) Implement unit test cases for classes in processing package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-551.

Resolution: Later

> Implement unit test cases for classes in processing package
> ---
>
> Key: CARBONDATA-551
> URL: https://issues.apache.org/jira/browse/CARBONDATA-551
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-541) Implement unit test cases for processing.newflow.dictionary package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-541.

Resolution: Later

> Implement unit test cases for processing.newflow.dictionary package
> ---
>
> Key: CARBONDATA-541
> URL: https://issues.apache.org/jira/browse/CARBONDATA-541
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-515) Implement unit test cases for processing.newflow.converter package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-515.

Resolution: Later

> Implement unit test cases for processing.newflow.converter package
> --
>
> Key: CARBONDATA-515
> URL: https://issues.apache.org/jira/browse/CARBONDATA-515
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-496) Implement unit test cases for core.carbon.datastore package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-496.


Resolved


> Implement unit test cases for core.carbon.datastore package
> ---
>
> Key: CARBONDATA-496
> URL: https://issues.apache.org/jira/browse/CARBONDATA-496
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
> Fix For: 1.0.0-incubating
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-475) Implement unit test cases for core.carbon.querystatics package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-475.


Resolved

> Implement unit test cases for core.carbon.querystatics package
> --
>
> Key: CARBONDATA-475
> URL: https://issues.apache.org/jira/browse/CARBONDATA-475
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
> Fix For: 1.0.0-incubating
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-413) Implement unit test cases for scan.expression package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-413.


Resolved

> Implement unit test cases for scan.expression package
> -
>
> Key: CARBONDATA-413
> URL: https://issues.apache.org/jira/browse/CARBONDATA-413
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Anurag Srivastava
>Priority: Trivial
> Fix For: 1.0.0-incubating
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-474) Implement unit test cases for core.datastorage package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-474.


Resolved

> Implement unit test cases for core.datastorage package
> --
>
> Key: CARBONDATA-474
> URL: https://issues.apache.org/jira/browse/CARBONDATA-474
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Anurag Srivastava
>Priority: Trivial
> Fix For: 1.0.0-incubating
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-494) Implement unit test cases for filter.executer package

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-494.

Resolution: Later

> Implement unit test cases for filter.executer package
> -
>
> Key: CARBONDATA-494
> URL: https://issues.apache.org/jira/browse/CARBONDATA-494
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-340) Implement test cases for load package in core module

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava closed CARBONDATA-340.


Resolved

> Implement test cases for load package in core module
> 
>
> Key: CARBONDATA-340
> URL: https://issues.apache.org/jira/browse/CARBONDATA-340
> Project: CarbonData
>  Issue Type: Test
>Reporter: Anurag Srivastava
>Priority: Trivial
> Fix For: 1.0.0-incubating
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-603) Unable to use filter with Date Data Type

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-603:
-
Environment: 
spark 1.6.2
spark 2.1

> Unable to use filter with Date Data Type
> 
>
> Key: CARBONDATA-603
> URL: https://issues.apache.org/jira/browse/CARBONDATA-603
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6.2
> spark 2.1
>Reporter: Anurag Srivastava
>Assignee: kumar vishal
> Attachments: 2000_UniqData.csv, Date.png
>
>
> I am creating a table with the *DATE* data type and loading data into the 
> table from a CSV.
> After that, when I run a select query with a *WHERE* clause on the date column, 
> the value is converted to NULL and the query returns only null values.
> *Create Table :*   CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> *Select Query :* Select cust_id, cust_name, dob from uniqdata where 
> dob='1975-06-22';
> It is working fine on hive. I am attaching CSV with this.
>  
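A hedged workaround that may be worth trying (a sketch, not from the report; it assumes the same uniqdata table): cast the literal explicitly so the comparison happens on the date type instead of on a string.

*Query :* select cust_id, cust_name, dob from uniqdata where dob = cast('1975-06-22' as date);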



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-612) Bucket table option does not throw error while using with Spark-1.6

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-612:
-
Environment: Spark 1.6.2

> Bucket table option does not throw error while using with Spark-1.6
> ---
>
> Key: CARBONDATA-612
> URL: https://issues.apache.org/jira/browse/CARBONDATA-612
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 1.6.2
>Reporter: Anurag Srivastava
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I am trying to use the bucket feature on Spark 1.6 and it creates the table on 
> Spark 1.6, even though bucket functionality is only supported from Spark 2.x.
> *Query :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("bucketnum"="2", 
> "bucketcolumns"="cust_name,DOB","tableName"="uniqdata"); 
> It creates the table successfully in Spark 1.6. But, as mentioned earlier, bucket 
> functionality was introduced in Spark 2.x, so when we try to create a bucketed 
> table in Spark 1.6, it should raise an error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Environment: 
spark 1.6.2
spark 2.1

> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6.2
> spark 2.1
>Reporter: Anurag Srivastava
>Assignee: Mohammad Shahid Khan
> Attachments: createTable.png, ErrorLog.png, loaddata.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> I am running CarbonData with the Thrift server and I am able to create a table and 
> load data, but when I run *select * from table_name;* it gives me the error: 
> *Block B-tree loading failed*.
> *Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> !https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> !https://issues.apache.org/jira/secure/attachment/12845771/loaddata.png!
> *Select Query :* select * from uniqdata;
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-583) Replace Function is not working for string/char

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-583:
-
Environment: 
spark 1.6.2
spark 2.1

  was:spark 1.6.2


> Replace Function is not working  for string/char
> 
>
> Key: CARBONDATA-583
> URL: https://issues.apache.org/jira/browse/CARBONDATA-583
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6.2
> spark 2.1
>Reporter: Anurag Srivastava
>Assignee: Rahul Kumar
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am running "replace" function but it is giving error : "undefined function 
> replace".
> Query : select replace('aaabbccaabb', 'aaa', 't');
> Expected Result : "tbbccaabb"
> Result : Error: org.apache.spark.sql.AnalysisException: undefined function 
> replace; line 1 pos 30 (state=,code=0) 
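Since replace() is reported as undefined in these Spark versions, a hedged alternative (a sketch, not from the report) is the built-in regexp_replace function, which should produce the expected string:

*Query :* select regexp_replace('aaabbccaabb', 'aaa', 't');
*Expected Result :* "tbbccaabb"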



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-583) Replace Function is not working for string/char

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-583:
-
Environment: spark 1.6.2  (was: cluster)

> Replace Function is not working  for string/char
> 
>
> Key: CARBONDATA-583
> URL: https://issues.apache.org/jira/browse/CARBONDATA-583
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6.2
>Reporter: Anurag Srivastava
>Assignee: Rahul Kumar
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am running "replace" function but it is giving error : "undefined function 
> replace".
> Query : select replace('aaabbccaabb', 'aaa', 't');
> Expected Result : "tbbccaabb"
> Result : Error: org.apache.spark.sql.AnalysisException: undefined function 
> replace; line 1 pos 30 (state=,code=0) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-586) Create table with 'Char' data type but it workes as 'String' data type

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-586:
-
Environment: 
spark 1.6.2
spark 2.1

  was:Spark - 1.6, spark - 2.1


> Create table with 'Char' data type but it workes as 'String' data type
> --
>
> Key: CARBONDATA-586
> URL: https://issues.apache.org/jira/browse/CARBONDATA-586
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6.2
> spark 2.1
>Reporter: Anurag Srivastava
>Assignee: QiangCai
>Priority: Minor
>
> I am trying to use the Char data type with the latest CarbonData version and the 
> table is created successfully. When I started loading data into it, I found 
> that it accepts values longer than the declared size. 
> I have checked this with Hive, and there it works correctly.
> EX :- 
> 1. *Carbon Data :* 
> 1.1 create table test_carbon (name char(10)) stored by 
> 'org.apache.carbondata.format';
> 1.2 desc test_carbon;
> *Output :* 
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | name      | string     |          |
> +-----------+------------+----------+
> 1.3 LOAD DATA INPATH 'hdfs://localhost:54310/test.csv' into table test_carbon 
> OPTIONS ('FILEHEADER'='name');
> 1.4 select * from test_carbon;
> *Output :* 
> +--------------------+
> | name               |
> +--------------------+
> | Anurag Srivasrata  |
> | Robert             |
> | james james        |
> +--------------------+
> 2. *Hive :* 
> 2.1 create table test_hive (name char(10));
> 2.2 desc test_hive;
> *Output :* 
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | name      | char(10)   | NULL     |
> +-----------+------------+----------+
> 2.3 LOAD DATA INPATH 'hdfs://localhost:54310/test.csv' into table test_hive;
> 2.4 select * from test_hive;
> *Output :* 
> +-------------+
> | name        |
> +-------------+
> | james jame  |
> | Anurag Sri  |
> | Robert      |
> +-------------+
> Since Hive truncates the string to the declared length for the Char data type, 
> CarbonData should behave the same way.
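Until char(n) is enforced, a hedged way to emulate Hive's truncation on the carbon table (a sketch, assuming the same test_carbon table) is an explicit substring in the query:

*Query :* select substring(name, 1, 10) as name from test_carbon;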



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-586) Create table with 'Char' data type but it workes as 'String' data type

2017-01-16 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-586:
-
Environment: Spark - 1.6, spark - 2.1  (was: Cluster)

> Create table with 'Char' data type but it workes as 'String' data type
> --
>
> Key: CARBONDATA-586
> URL: https://issues.apache.org/jira/browse/CARBONDATA-586
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: Spark - 1.6, spark - 2.1
>Reporter: Anurag Srivastava
>Assignee: QiangCai
>Priority: Minor
>
> I am trying to use the Char data type with the latest CarbonData version and the 
> table is created successfully. When I started loading data into it, I found 
> that it accepts values longer than the declared size. 
> I have checked this with Hive, and there it works correctly.
> EX :- 
> 1. *Carbon Data :* 
> 1.1 create table test_carbon (name char(10)) stored by 
> 'org.apache.carbondata.format';
> 1.2 desc test_carbon;
> *Output :* 
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | name      | string     |          |
> +-----------+------------+----------+
> 1.3 LOAD DATA INPATH 'hdfs://localhost:54310/test.csv' into table test_carbon 
> OPTIONS ('FILEHEADER'='name');
> 1.4 select * from test_carbon;
> *Output :* 
> +--------------------+
> | name               |
> +--------------------+
> | Anurag Srivasrata  |
> | Robert             |
> | james james        |
> +--------------------+
> 2. *Hive :* 
> 2.1 create table test_hive (name char(10));
> 2.2 desc test_hive;
> *Output :* 
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | name      | char(10)   | NULL     |
> +-----------+------------+----------+
> 2.3 LOAD DATA INPATH 'hdfs://localhost:54310/test.csv' into table test_hive;
> 2.4 select * from test_hive;
> *Output :* 
> +-------------+
> | name        |
> +-------------+
> | james jame  |
> | Anurag Sri  |
> | Robert      |
> +-------------+
> Since Hive truncates the string to the declared length for the Char data type, 
> CarbonData should behave the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-641) DICTIONARY_EXCLUDE is not working with 'DATE' column

2017-01-15 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-641:
-
Description: 
I am trying to create a table with *"DICTIONARY_EXCLUDE"* and this property is 
not working for *"DATE"* Data Type.

*Query :*  CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Expected Result :* Table created.

*Actual Result :* Error: 
org.apache.carbondata.spark.exception.MalformedCarbonCommandException: 
DICTIONARY_EXCLUDE is unsupported for date data type column: dob (state=,code=0)

But it is working fine if I use 'TIMESTAMP' in place of 'DATE'.

  was:
I am trying to create a table with *"DICTIONARY_EXCLUDE"* and this property is 
now working for *"DATE"* Data Type.

*Query :*  CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Expected Result :* Table created.

*Actual Result :* Error: 
org.apache.carbondata.spark.exception.MalformedCarbonCommandException: 
DICTIONARY_EXCLUDE is unsupported for date data type column: dob (state=,code=0)

But is is working fine, If I use 'TIMESTAMP' in place of 'DATE'.


> DICTIONARY_EXCLUDE is not working with 'DATE' column
> 
>
> Key: CARBONDATA-641
> URL: https://issues.apache.org/jira/browse/CARBONDATA-641
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.0-incubating
> Environment: Spark - 1.6 and Spark - 2.1
>Reporter: Anurag Srivastava
>
> I am trying to create a table with *"DICTIONARY_EXCLUDE"* and this property 
> is not working for *"DATE"* Data Type.
> *Query :*  CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
> string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");
> *Expected Result :* Table created.
> *Actual Result :* Error: 
> org.apache.carbondata.spark.exception.MalformedCarbonCommandException: 
> DICTIONARY_EXCLUDE is unsupported for date data type column: dob 
> (state=,code=0)
> But it is working fine if I use 'TIMESTAMP' in place of 'DATE'.
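For reference, a hedged sketch of the workaround mentioned above: the same table definition with DOB and DOJ declared as timestamp, which is reported to be accepted (the table name is changed here to avoid a clash; everything else is carried over from the report):

CREATE TABLE uniqdata_ts_dictionary (CUST_ID int,CUST_NAME string,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= "256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");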



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CARBONDATA-640) Insert Query with Hardcoded values is not working

2017-01-15 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823527#comment-15823527
 ] 

Anurag Srivastava edited comment on CARBONDATA-640 at 1/16/17 6:36 AM:
---

Hi Vyom,

Can you please use this query ?

*Query :* insert into employees select t.* from ( select 
'harry','h2399','v788232',99823230205 ) t;

This query will insert data into your table.

!https://issues.apache.org/jira/secure/attachment/12847578/insert_data.png!


was (Author: anuragknoldus):
Hi Vyom,

Can you please use this query ?

*Query :* insert into employees select t.* from ( select 
'harry','h2399','v788232',99823230205 ) t;

This query will insert data into your table.



> Insert Query with Hardcoded values is not working
> -
>
> Key: CARBONDATA-640
> URL: https://issues.apache.org/jira/browse/CARBONDATA-640
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Reporter: Vyom Rastogi
>Priority: Minor
> Attachments: insert_data.png
>
>
> 1)Creating table employees,Managers
> create table employees(name string, empid string, mgrid string, mobileno 
> bigint) stored by 'carbondata';
> 2)create table managers(name string, empid string, mgrid string, mobileno 
> bigint) stored by 'carbondata';
> Insert into Select Queries
> insert into managers select 'harry','h2399','v788232',99823230205;
> Error Description:
> Error: org.apache.spark.sql.AnalysisException: Failed to recognize predicate 
> ''. Failed rule: 'regularBody' in statement; line 1 pos 65 
> (state=,code=0)
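A hedged extension of the same workaround for inserting several hard-coded rows at once (a sketch; the second row's values are illustrative and not from the report):

insert into managers select t.* from (
  select 'harry','h2399','v788232',99823230205
  union all
  select 'tom','t1001','v788232',99823230206
) t;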



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CARBONDATA-640) Insert Query with Hardcoded values is not working

2017-01-15 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15823527#comment-15823527
 ] 

Anurag Srivastava commented on CARBONDATA-640:
--

Hi Vyom,

Can you please use this query ?

*Query :* insert into employees select t.* from ( select 
'harry','h2399','v788232',99823230205 ) t;

This query will insert data into your table.



> Insert Query with Hardcoded values is not working
> -
>
> Key: CARBONDATA-640
> URL: https://issues.apache.org/jira/browse/CARBONDATA-640
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Reporter: Vyom Rastogi
>Priority: Minor
>
> 1)Creating table employees,Managers
> create table employees(name string, empid string, mgrid string, mobileno 
> bigint) stored by 'carbondata';
> 2)create table managers(name string, empid string, mgrid string, mobileno 
> bigint) stored by 'carbondata';
> Insert into Select Queries
> insert into managers select 'harry','h2399','v788232',99823230205;
> Error Description:
> Error: org.apache.spark.sql.AnalysisException: Failed to recognize predicate 
> ''. Failed rule: 'regularBody' in statement; line 1 pos 65 
> (state=,code=0)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-640) Insert Query with Hardcoded values is not working

2017-01-15 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-640:
-
Attachment: insert_data.png

> Insert Query with Hardcoded values is not working
> -
>
> Key: CARBONDATA-640
> URL: https://issues.apache.org/jira/browse/CARBONDATA-640
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Reporter: Vyom Rastogi
>Priority: Minor
> Attachments: insert_data.png
>
>
> 1)Creating table employees,Managers
> create table employees(name string, empid string, mgrid string, mobileno 
> bigint) stored by 'carbondata';
> 2)create table managers(name string, empid string, mgrid string, mobileno 
> bigint) stored by 'carbondata';
> Insert into Select Queries
> insert into managers select 'harry','h2399','v788232',99823230205;
> Error Description:
> Error: org.apache.spark.sql.AnalysisException: Failed to recognize predicate 
> ''. Failed rule: 'regularBody' in statement; line 1 pos 65 
> (state=,code=0)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-641) DICTIONARY_EXCLUDE is not working with 'DATE' column

2017-01-15 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-641:


 Summary: DICTIONARY_EXCLUDE is not working with 'DATE' column
 Key: CARBONDATA-641
 URL: https://issues.apache.org/jira/browse/CARBONDATA-641
 Project: CarbonData
  Issue Type: Bug
  Components: core
Affects Versions: 1.0.0-incubating
 Environment: Spark - 1.6 and Spark - 2.1
Reporter: Anurag Srivastava


I am trying to create a table with *"DICTIONARY_EXCLUDE"* and this property is 
not working for the *"DATE"* data type.

*Query :*  CREATE TABLE uniqdata_date_dictionary (CUST_ID int,CUST_NAME 
string,ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB","DICTIONARY_EXCLUDE"="DOB,DOJ");

*Expected Result :* Table created.

*Actual Result :* Error: 
org.apache.carbondata.spark.exception.MalformedCarbonCommandException: 
DICTIONARY_EXCLUDE is unsupported for date data type column: dob (state=,code=0)

But it is working fine if I use 'TIMESTAMP' in place of 'DATE'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-630) Unable to use string function on string/char data type column

2017-01-12 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-630:
-
Description: 
I am trying to execute string function like: reverse, concat, lower, upper with 
the string/char column but it is giving error and when I am giving direct 
string value to it, it is working.

*Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata_char OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query :*  select Lower(cust_name) from uniqdata_char;

After running the query I am getting error.

!https://issues.apache.org/jira/secure/attachment/12847185/exception.png!

But when I am running :
select Lower('TESTING') from uniqdata_char;
It is working fine.

I have attached CSV and Executor log with it.

  was:
I am trying to execute string function like: reverse, concat, lower, upper with 
the string/char column but it is giving error and when I am giving direct 
string value to it, it is working.

*Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata_char OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query :*  select Lower(cust_name) from uniqdata_char;

After running the query I am getting error.

!https://issues.apache.org/jira/secure/attachment/12847185/exception.png!

But when I am running :
select Lower('TESTING') from uniqdata_char;
It is working fine.


> Unable to use string function on string/char data type column
> -
>
> Key: CARBONDATA-630
> URL: https://issues.apache.org/jira/browse/CARBONDATA-630
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
> Environment: SPARK-2.1.0
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, Executor log, exception.png
>
>
> I am trying to execute string function like: reverse, concat, lower, upper 
> with the string/char column but it is giving error and when I am giving 
> direct string value to it, it is working.
> *Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata_char OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query :*  select Lower(cust_name) from uniqdata_char;
> After running the query I am getting error.
> !https://issues.apache.org/jira/secure/attachment/12847185/exception.png!
> But when I am running :
> select Lower('TESTING') from uniqdata_char;
> It is working fine.
> I have attached CSV and Executor log with it.
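A hedged thing to try (a sketch, not verified against this version): cast the char column to string explicitly before applying the function, so the function operates on a plain string type:

*Query :* select lower(cast(cust_name as string)) from uniqdata_char;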



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-630) Unable to use string function on string/char data type column

2017-01-12 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-630:
-
Attachment: Executor log

> Unable to use string function on string/char data type column
> -
>
> Key: CARBONDATA-630
> URL: https://issues.apache.org/jira/browse/CARBONDATA-630
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
> Environment: SPARK-2.1.0
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, Executor log, exception.png
>
>
> I am trying to execute string function like: reverse, concat, lower, upper 
> with the string/char column but it is giving error and when I am giving 
> direct string value to it, it is working.
> *Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata_char OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query :*  select Lower(cust_name) from uniqdata_char;
> After running the query I am getting error.
> !https://issues.apache.org/jira/secure/attachment/12847185/exception.png!
> But when I am running :
> select Lower('TESTING') from uniqdata_char;
> It is working fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-630) Unable to use string function on string/char data type column

2017-01-12 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-630:
-
Component/s: (was: data-query)
 sql

> Unable to use string function on string/char data type column
> -
>
> Key: CARBONDATA-630
> URL: https://issues.apache.org/jira/browse/CARBONDATA-630
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
> Environment: SPARK-2.1.0
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, exception.png
>
>
> I am trying to execute string function like: reverse, concat, lower, upper 
> with the string/char column but it is giving error and when I am giving 
> direct string value to it, it is working.
> *Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata_char OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query :*  select Lower(cust_name) from uniqdata_char;
> After running the query I am getting error.
> !https://issues.apache.org/jira/secure/attachment/12847185/exception.png!
> But when I am running :
> select Lower('TESTING') from uniqdata_char;
> It is working fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-630) Unable to use string function on string/char data type column

2017-01-12 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-630:
-
Description: 
I am trying to execute string function like: reverse, concat, lower, upper with 
the string/char column but it is giving error and when I am giving direct 
string value to it, it is working.

*Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata_char OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query :*  select Lower(cust_name) from uniqdata_char;

After running the query I am getting error.

!https://issues.apache.org/jira/secure/attachment/12847185/exception.png!

But when I am running :
select Lower('TESTING') from uniqdata_char;
It is working fine.

  was:
I am trying to execute string function like: reverse, concat, lower, upper with 
the string/char column but it is giving error and when I am giving direct 
string value to it, it is working.

*Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata_char OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query :*  select Lower(cust_name) from uniqdata_char;

After running the query I am getting error.

But when I am running :
select Lower('TESTING') from uniqdata_char;
It is working fine.


> Unable to use string function on string/char data type column
> -
>
> Key: CARBONDATA-630
> URL: https://issues.apache.org/jira/browse/CARBONDATA-630
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: SPARK-2.1.0
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, exception.png
>
>
> I am trying to execute string function like: reverse, concat, lower, upper 
> with the string/char column but it is giving error and when I am giving 
> direct string value to it, it is working.
> *Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata_char OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query :*  select Lower(cust_name) from uniqdata_char;
> After running the query I am getting error.
> !https://issues.apache.org/jira/secure/attachment/12847185/exception.png!
> But when I am running :
> select Lower('TESTING') from uniqdata_char;
> It is working fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-630) Unable to use string function on string/char data type column

2017-01-12 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-630:
-
Description: 
I am trying to execute string functions such as reverse, concat, lower, and upper on a string/char column, but it gives an error; when I pass a literal string value directly, it works.

*Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata_char OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query :*  select Lower(cust_name) from uniqdata_char;

After running the query, I get an error.

But when I run:
select Lower('TESTING') from uniqdata_char;
it works fine.

  was:
I am trying to execute string function like: reverse, concat, lower, upper with 
the string/char column but it is giving error and when I am giving direct 
string value to it, it is working.

*Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');

*Load Data : * LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata_char OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query :*  select Lower(cust_name) from uniqdata_char;

After running the query I am getting error.

But when I am running :
select Lower('TESTING') from uniqdata_char;
It is working fine.


> Unable to use string function on string/char data type column
> -
>
> Key: CARBONDATA-630
> URL: https://issues.apache.org/jira/browse/CARBONDATA-630
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: SPARK-2.1.0
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, exception.png
>
>
> I am trying to execute string function like: reverse, concat, lower, upper 
> with the string/char column but it is giving error and when I am giving 
> direct string value to it, it is working.
> *Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata_char OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');
> *Query :*  select Lower(cust_name) from uniqdata_char;
> After running the query I am getting error.
> But when I am running :
> select Lower('TESTING') from uniqdata_char;
> It is working fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-630) Unable to use string function on string/char data type column

2017-01-12 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-630:


 Summary: Unable to use string function on string/char data type 
column
 Key: CARBONDATA-630
 URL: https://issues.apache.org/jira/browse/CARBONDATA-630
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
 Environment: SPARK-2.1.0
Reporter: Anurag Srivastava
Priority: Minor
 Attachments: 2000_UniqData.csv, exception.png

I am trying to execute string functions such as reverse, concat, lower, and upper on a string/char column, but it gives an error; when I pass a literal string value directly, it works.

*Create Table :* CREATE TABLE uniqdata_char (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION char(30), DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');

*Load Data : * LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata_char OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='""','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','MAXCOLUMNS'='12');

*Query :*  select Lower(cust_name) from uniqdata_char;

After running the query, I get an error.

But when I run:
select Lower('TESTING') from uniqdata_char;
it works fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-615) Update query store wrong value for Date data type

2017-01-09 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-615:
-
Description: 
I am trying to update the DOB column, which has the Date data type. It stores the day before the date I specified in the update for the DOB column.

*Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB Date, DOJ Date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format';

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_PASS'='true');

*Update Query :*  update uniqdata set (dob)=(to_date('2016-12-01')) where 
cust_name = 'CUST_NAME_01999';

*Expected Result :* It should update DOB column with *2016-12-01*.

*Actual Result :* It is updating DOB column with *2016-11-30*.

!https://issues.apache.org/jira/secure/attachment/12846515/update_dob.png!



  was:
I am trying to update DOB column with Date Data Type. It is storing a day 
before date which I have mentioned for updating in DOB column.

*Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB Date, DOJ Date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format';

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_PASS'='true');

*Update Query :*  update uniqdata set (dob)=(to_date('2016-12-01')) where 
cust_name = 'CUST_NAME_01999';

*Expected Result :* It should update DOB column with *2016-12-01*.

*Actual Result :* It is updating DOB column with *2016-11-30*.

!!




> Update query store wrong value for Date data type
> -
>
> Key: CARBONDATA-615
> URL: https://issues.apache.org/jira/browse/CARBONDATA-615
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, update_dob.png
>
>
> I am trying to update DOB column with Date Data Type. It is storing a day 
> before date which I have mentioned for updating in DOB column.
> *Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB Date, DOJ Date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format';
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_PASS'='true');
> *Update Query :*  update uniqdata set (dob)=(to_date('2016-12-01')) where 
> cust_name = 'CUST_NAME_01999';
> *Expected Result :* It should update DOB column with *2016-12-01*.
> *Actual Result :* It is updating DOB column with *2016-11-30*.
> !https://issues.apache.org/jira/secure/attachment/12846515/update_dob.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
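The one-day shift reported in CARBONDATA-615 is the usual symptom of a timezone conversion happening between parsing and storage. A quick way to narrow it down (a diagnostic sketch; the queries assume nothing beyond standard Spark SQL) is to compare what to_date itself returns with what comes back from the table:

select to_date('2016-12-01');
select cust_name, dob, cast(dob as string) from uniqdata where cust_name = 'CUST_NAME_01999';

If the first query already shows 2016-11-30, the shift happens at parse time (the JVM user.timezone on driver and executors is worth checking); if it only appears in the second, the shift happens when the value is written or read back.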


[jira] [Updated] (CARBONDATA-615) Update query store wrong value for Date data type

2017-01-09 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-615:
-
Description: 
I am trying to update the DOB column, which has the Date data type. It stores the day before the date I specified in the update for the DOB column.

*Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB Date, DOJ Date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format';

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_PASS'='true');

*Update Query :*  update uniqdata set (dob)=(to_date('2016-12-01')) where 
cust_name = 'CUST_NAME_01999';

*Expected Result :* It should update DOB column with *2016-12-01*.

*Actual Result :* It is updating DOB column with *2016-11-30*.

!!



  was:
I am trying to update DOB column with Date Data Type. It is storing a day 
before date which I have mentioned for updating in DOB column.

*Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB Date, DOJ Date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format';

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_PASS'='true');

*Update Query :*  update uniqdata set (dob)=(to_date('2016-12-01')) where 
cust_name = 'CUST_NAME_01999';

*Expected Result :* It should update DOB column with *2016-12-01*.

*Actual Result :* It is updating DOB column with *2016-11-30*.




> Update query store wrong value for Date data type
> -
>
> Key: CARBONDATA-615
> URL: https://issues.apache.org/jira/browse/CARBONDATA-615
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
>Priority: Minor
> Attachments: 2000_UniqData.csv, update_dob.png
>
>
> I am trying to update DOB column with Date Data Type. It is storing a day 
> before date which I have mentioned for updating in DOB column.
> *Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB Date, DOJ Date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format';
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_PASS'='true');
> *Update Query :*  update uniqdata set (dob)=(to_date('2016-12-01')) where 
> cust_name = 'CUST_NAME_01999';
> *Expected Result :* It should update DOB column with *2016-12-01*.
> *Actual Result :* It is updating DOB column with *2016-11-30*.
> !!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-615) Update query store wrong value for Date data type

2017-01-09 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-615:


 Summary: Update query store wrong value for Date data type
 Key: CARBONDATA-615
 URL: https://issues.apache.org/jira/browse/CARBONDATA-615
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
Reporter: Anurag Srivastava
Priority: Minor
 Attachments: 2000_UniqData.csv, update_dob.png

I am trying to update the DOB column, which has the Date data type. It stores the day before the date I specified in the update for the DOB column.

*Create Table :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB Date, DOJ Date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format';

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_PASS'='true');

*Update Query :*  update uniqdata set (dob)=(to_date('2016-12-01')) where 
cust_name = 'CUST_NAME_01999';

*Expected Result :* It should update DOB column with *2016-12-01*.

*Actual Result :* It is updating DOB column with *2016-11-30*.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-612) Bucket table option does not throw error while using with Spark-1.6

2017-01-09 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-612:
-
Summary: Bucket table option does not throw error while using with 
Spark-1.6  (was: Bucket table option does not through error while using with 
Spark-1.6)

> Bucket table option does not throw error while using with Spark-1.6
> ---
>
> Key: CARBONDATA-612
> URL: https://issues.apache.org/jira/browse/CARBONDATA-612
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
>Priority: Minor
>
> I am trying to use bucket feature on Spark-1.6 and It create table on 
> Spark-1.6 but as I know that Bucket functionality support from Spark-2.x.
> *Query :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("bucketnum"="2", 
> "bucketcolumns"="cust_name,DOB","tableName"="uniqdata"); 
> It creates table successfully in Spark-1.6 . But as I mention earlier bucket 
> functionality introduce in Spark-2.x, so when we are trying to create table 
> with bucket in Spark-1.6, it should provide error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-612) Bucket table option does not through error while using with Spark-1.6

2017-01-09 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-612:
-
Summary: Bucket table option does not through error while using with 
Spark-1.6  (was: Bucket is not support in Spark-1.6)

> Bucket table option does not through error while using with Spark-1.6
> -
>
> Key: CARBONDATA-612
> URL: https://issues.apache.org/jira/browse/CARBONDATA-612
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
>Priority: Minor
>
> I am trying to use bucket feature on Spark-1.6 and It create table on 
> Spark-1.6 but as I know that Bucket functionality support from Spark-2.x.
> *Query :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("bucketnum"="2", 
> "bucketcolumns"="cust_name,DOB","tableName"="uniqdata"); 
> It creates table successfully in Spark-1.6 . But as I mention earlier bucket 
> functionality introduce in Spark-2.x, so when we are trying to create table 
> with bucket in Spark-1.6, it should provide error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-612) Bucket is not support in Spark-1.6

2017-01-09 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-612:
-
Description: 
I am trying to use the bucket feature on Spark-1.6, and it does create the table on Spark-1.6, but as far as I know bucket functionality is only supported from Spark-2.x onwards.

*Query :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("bucketnum"="2", 
"bucketcolumns"="cust_name,DOB","tableName"="uniqdata"); 

It creates the table successfully in Spark-1.6. But, as mentioned earlier, bucket functionality was introduced in Spark-2.x, so when we try to create a table with buckets in Spark-1.6, it should give an error message.

  was:
I am trying to use bucket feature on Spark-1.6 and It create table on Spark-1.6 
but as I know that Bucket functionality support from Spark-2.x.

So when we are trying to create table with bucket in Spark-1.6, it should 
provide error message.


> Bucket is not support in Spark-1.6
> --
>
> Key: CARBONDATA-612
> URL: https://issues.apache.org/jira/browse/CARBONDATA-612
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
>Priority: Minor
>
> I am trying to use bucket feature on Spark-1.6 and It create table on 
> Spark-1.6 but as I know that Bucket functionality support from Spark-2.x.
> *Query :* CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("bucketnum"="2", 
> "bucketcolumns"="cust_name,DOB","tableName"="uniqdata"); 
> It creates table successfully in Spark-1.6 . But as I mention earlier bucket 
> functionality introduce in Spark-2.x, so when we are trying to create table 
> with bucket in Spark-1.6, it should provide error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-612) Bucket is not support in Spark-1.6

2017-01-09 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-612:


 Summary: Bucket is not support in Spark-1.6
 Key: CARBONDATA-612
 URL: https://issues.apache.org/jira/browse/CARBONDATA-612
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
Reporter: Anurag Srivastava
Priority: Minor


I am trying to use the bucket feature on Spark-1.6, and it does create the table on Spark-1.6, but as far as I know bucket functionality is only supported from Spark-2.x onwards.

So when we try to create a table with buckets in Spark-1.6, it should give an error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
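One way to see what was actually recorded for the table (a sketch; DESCRIBE FORMATTED is standard Hive/Spark SQL, but what CarbonData prints for bucket properties here is an assumption, not verified):

describe formatted uniqdata;

If no bucketing information appears in the detailed table information, the TBLPROPERTIES were silently ignored, which supports the request that Spark-1.6 should reject the statement with an error instead.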


[jira] [Updated] (CARBONDATA-603) Unable to use filter with Date Data Type

2017-01-06 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-603:
-
Attachment: 2000_UniqData.csv

> Unable to use filter with Date Data Type
> 
>
> Key: CARBONDATA-603
> URL: https://issues.apache.org/jira/browse/CARBONDATA-603
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: 2000_UniqData.csv, Date.png
>
>
> I am creating table with *DATE* Data Type and loading data with CSV into the 
> table.
> After that as I run the select query with *WHERE* clause, it converted value 
> as NULL and provide me result with Null Value.
> *Create Table :*   CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> *Select Query :* Select cust_id, cust_name, dob from uniqdata where 
> dob='1975-06-22';
> It is working fine on hive. I am attaching CSV with this.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-603) Unable to use filter with Date Data Type

2017-01-06 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-603:
-
Description: 
I am creating a table with the *DATE* data type and loading data into it from a CSV.

After that, when I run a select query with a *WHERE* clause, it converts the value to NULL and returns NULL in the result.

*Create Table :*   CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

*Select Query :* Select cust_id, cust_name, dob from uniqdata where 
dob='1975-06-22';


It is working fine on hive. I am attaching CSV with this.
 

  was:
I am creating table with *DATE* Data Type and loading data with CSV into the 
table.

After that as I run the select query with *WHERE* clause, it converted value as 
NULL and provide me result with Null Value.

*Create Table :*   CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

*Select Query :* Select cust_id, cust_name, dob from uniqdata where 
dob='1975-06-22';


It is working fine on hive. I am attaching CSV 
 


> Unable to use filter with Date Data Type
> 
>
> Key: CARBONDATA-603
> URL: https://issues.apache.org/jira/browse/CARBONDATA-603
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: Date.png
>
>
> I am creating table with *DATE* Data Type and loading data with CSV into the 
> table.
> After that as I run the select query with *WHERE* clause, it converted value 
> as NULL and provide me result with Null Value.
> *Create Table :*   CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
> bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> *Select Query :* Select cust_id, cust_name, dob from uniqdata where 
> dob='1975-06-22';
> It is working fine on hive. I am attaching CSV with this.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
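Two equivalent filters can help localize where the date comparison in CARBONDATA-603 goes wrong (a sketch using only standard Spark SQL casts):

select cust_id, cust_name, dob from uniqdata where dob = cast('1975-06-22' as date);
select cust_id, cust_name, dob from uniqdata where cast(dob as string) = '1975-06-22';

If the string comparison returns rows while the date comparison does not, the literal-to-date coercion is the likely culprit rather than the stored values.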


[jira] [Created] (CARBONDATA-603) Unable to use filter with Date Data Type

2017-01-06 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-603:


 Summary: Unable to use filter with Date Data Type
 Key: CARBONDATA-603
 URL: https://issues.apache.org/jira/browse/CARBONDATA-603
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
Reporter: Anurag Srivastava
 Attachments: Date.png

I am creating a table with the *DATE* data type and loading data into it from a CSV.

After that, when I run a select query with a *WHERE* clause, it converts the value to NULL and returns NULL in the result.

*Create Table :*   CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB date, DOJ date, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB");

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

*Select Query :* Select cust_id, cust_name, dob from uniqdata where 
dob='1975-06-22';


It is working fine on hive. I am attaching CSV 
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Description: 
I am running CarbonData with the thrift server and I am able to create a table and load data, but when I run *select * from table_name;*, it gives me the error *Block B-tree loading failed*.


*Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
("TABLE_BLOCKSIZE"= "256 MB");

!https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

!https://issues.apache.org/jira/secure/attachment/12845771/loaddata.png!

*Select Query :* select * from uniqdata;

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 

  was:
I am running Carbon Data with thrift server and I am able to Create Table and 
Load Data but as I run *select * from table_name;*, Its giving me error : 
*Block B-tree loading failed*.


*Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
("TABLE_BLOCKSIZE"= "256 MB");

!https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 


> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png, createTable.png, loaddata.png
>
>
> I am running Carbon Data with thrift server and I am able to Create Table and 
> Load Data but as I run *select * from table_name;*, Its giving me error : 
> *Block B-tree loading failed*.
> *Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> !https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> !https://issues.apache.org/jira/secure/attachment/12845771/loaddata.png!
> *Select Query :* select * from uniqdata;
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
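Because the failure in CARBONDATA-597 happens while loading the block B-tree (table metadata) rather than while scanning rows, two cheaper checks are worth running before the full select (a sketch; SHOW SEGMENTS is CarbonData DDL, the rest is an assumption about where to look):

show segments for table uniqdata;
select count(*) from uniqdata;

If SHOW SEGMENTS reports the load as successful but count(*) still fails, the segment's index/metadata files under the configured store path are the place to inspect; a store location configured differently for the thrift server and for the load is one plausible cause of this kind of B-tree loading failure.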


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Description: 
I am running CarbonData with the thrift server and I am able to create a table and load data, but when I run *select * from table_name;*, it gives me the error *Block B-tree loading failed*.


*Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
("TABLE_BLOCKSIZE"= "256 MB");

!https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!

*Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into 
table uniqdata OPTIONS ('DELIMITER'=',' 
,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 

  was:
I am running Carbon Data with thrift server and I am able to Create Table and 
Load Data but as I run *select * from table_name;*, Its giving me error : 
*Block B-tree loading failed*.


*Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
("TABLE_BLOCKSIZE"= "256 MB");

!!



PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 


> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png, createTable.png, loaddata.png
>
>
> I am running Carbon Data with thrift server and I am able to Create Table and 
> Load Data but as I run *select * from table_name;*, Its giving me error : 
> *Block B-tree loading failed*.
> *Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> !https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Attachment: loaddata.png

> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png, createTable.png, loaddata.png
>
>
> I am running Carbon Data with thrift server and I am able to Create Table and 
> Load Data but as I run *select * from table_name;*, Its giving me error : 
> *Block B-tree loading failed*.
> *Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> !https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Attachment: (was: LoadData.png)

> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png, createTable.png, loaddata.png
>
>
> I am running Carbon Data with thrift server and I am able to Create Table and 
> Load Data but as I run *select * from table_name;*, Its giving me error : 
> *Block B-tree loading failed*.
> *Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> !https://issues.apache.org/jira/secure/attachment/12845768/createTable.png!
> *Load Data :* LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' 
> into table uniqdata OPTIONS ('DELIMITER'=',' 
> ,'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Description: 
I am running CarbonData with the thrift server and I am able to create a table and load data, but when I run *select * from table_name;*, it gives me the error *Block B-tree loading failed*.


*Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
("TABLE_BLOCKSIZE"= "256 MB");

!!



PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 

  was:
I am running Carbon Data with thrift server and I am able to Create Table and 
Load Data but as I run *select * from table_name;*, Its giving me error : 
*Block B-tree loading failed*.


*Create Table :* 



PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 


> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png, LoadData.png, createTable.png
>
>
> I am running Carbon Data with thrift server and I am able to Create Table and 
> Load Data but as I run *select * from table_name;*, Its giving me error : 
> *Block B-tree loading failed*.
> *Create Table :*  CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
> char(30),ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, 
> INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> !!
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Description: 
I am running CarbonData with the thrift server and I am able to create a table and load data, but when I run *select * from table_name;*, it gives me the error *Block B-tree loading failed*.


*Create Table :* 



PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 

  was:
I am running Carbon Data with thrift server and I am able to Create Table and 
Load Data but as I run *select * from table_name;*, Its giving me error : 
*Block B-tree loading failed*.

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 


> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png, LoadData.png, createTable.png
>
>
> I am running Carbon Data with thrift server and I am able to Create Table and 
> Load Data but as I run *select * from table_name;*, Its giving me error : 
> *Block B-tree loading failed*.
> *Create Table :* 
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Attachment: createTable.png
LoadData.png

> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png, LoadData.png, createTable.png
>
>
> I am running Carbon Data with thrift server and I am able to Create Table and 
> Load Data but as I run *select * from table_name;*, Its giving me error : 
> *Block B-tree loading failed*.
> *Create Table :* 
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Description: 
I am running CarbonData with the thrift server and I am able to create a table and load data, but when I run *select * from table_name;*, it gives me the error *Block B-tree loading failed*.

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 

  was:
I am running Carbon Data with beeline and I am able to Create Table and Load 
Data but as I run *select * from table_name;*, Its giving me error : *Block 
B-tree loading failed*.

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 


> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png
>
>
> I am running Carbon Data with thrift server and I am able to Create Table and 
> Load Data but as I run *select * from table_name;*, Its giving me error : 
> *Block B-tree loading failed*.
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Description: 
I am running CarbonData with beeline and I am able to create a table and load data, but when I run *select * from table_name;*, it gives me the error *Block B-tree loading failed*.

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 

  was:
I am running Carbon Data with beeline and I am able to Create Table and Load 
Data but as I run *select * from table_name;*, Its giving me error : *Block 
B-tree loading failed*

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 


> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png
>
>
> I am running Carbon Data with beeline and I am able to Create Table and Load 
> Data but as I run *select * from table_name;*, Its giving me error : *Block 
> B-tree loading failed*.
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Priority: Major  (was: Blocker)

> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png
>
>
> I am running Carbon Data with beeline and I am able to Create Table and Load 
> Data but as I run *select * from table_name;*, Its giving me error : *Block 
> B-tree loading failed*
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Priority: Blocker  (was: Major)

> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
>Priority: Blocker
> Attachments: ErrorLog.png
>
>
> I am running Carbon Data with beeline and I am able to Create Table and Load 
> Data but as I run *select * from table_name;*, Its giving me error : *Block 
> B-tree loading failed*
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-597:


 Summary: Unable to fetch data with "select" query
 Key: CARBONDATA-597
 URL: https://issues.apache.org/jira/browse/CARBONDATA-597
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
Reporter: Anurag Srivastava
 Attachments: ErrorLog.png

I am running CarbonData with beeline and I am able to create a table and load data, but when I run *select * from table_name;*, it gives me the error *Block B-tree loading failed*.

PFA for stack Trace.



 

 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-597:
-
Description: 
I am running CarbonData with beeline and I am able to create a table and load data, but when I run *select * from table_name;*, it gives me the error *Block B-tree loading failed*.

PFA for stack Trace.

!https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!

 

 

  was:
I am running Carbon Data with beeline and I am able to Create Table and Load 
Data but as I run *select * from table_name;*, Its giving me error : *Block 
B-tree loading failed*

PFA for stack Trace.



 

 


> Unable to fetch data with "select" query
> 
>
> Key: CARBONDATA-597
> URL: https://issues.apache.org/jira/browse/CARBONDATA-597
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
>Reporter: Anurag Srivastava
> Attachments: ErrorLog.png
>
>
> I am running Carbon Data with beeline and I am able to Create Table and Load 
> Data but as I run *select * from table_name;*, Its giving me error : *Block 
> B-tree loading failed*
> PFA for stack Trace.
> !https://issues.apache.org/jira/secure/attachment/12845764/ErrorLog.png!
>  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-586) Create table with 'Char' data type but it workes as 'String' data type

2017-01-02 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-586:


 Summary: Create table with 'Char' data type but it workes as 
'String' data type
 Key: CARBONDATA-586
 URL: https://issues.apache.org/jira/browse/CARBONDATA-586
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 1.0.0-incubating
 Environment: Cluster
Reporter: Anurag Srivastava
Priority: Minor


I am trying to use the Char data type with the latest CarbonData version, and the table is created successfully. But when I started loading data into it, I found that it accepts data longer than the declared size.

I have checked this with Hive, and there it works fine.

EX :- 

1. *Carbon Data :* 

1.1 create table test_carbon (name char(10)) stored by 
'org.apache.carbondata.format';

1.2 desc test_carbon;

*Output :* 
+-----------+------------+----------+
| col_name  | data_type  | comment  |
+-----------+------------+----------+
| name      | string     |          |
+-----------+------------+----------+

1.3 LOAD DATA INPATH 'hdfs://localhost:54310/test.csv' into table test_carbon 
OPTIONS ('FILEHEADER'='name');

1.4 select * from test_carbon;

*Output :* 
+--------------------+
| name               |
+--------------------+
| Anurag Srivasrata  |
| Robert             |
| james james        |
+--------------------+

2. *Hive :* 

2.1 create table test_hive (name char(10));

2.2 desc test_hive;

*Output :* 
+-----------+------------+----------+
| col_name  | data_type  | comment  |
+-----------+------------+----------+
| name      | char(10)   | NULL     |
+-----------+------------+----------+


2.3 LOAD DATA INPATH 'hdfs://localhost:54310/test.csv' into table test_hive;

2.4 select * from test_hive;

*Output :* 
+--------------+
| name         |
+--------------+
| james jame   |
| Anurag Sri   |
| Robert       |
+--------------+

So, since Hive truncates the remaining string for the Char data type, CarbonData should behave like Hive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
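Until char(N) is actually enforced, the Hive behaviour can be approximated at query time (a workaround sketch, not the fix the issue asks for):

select substring(name, 1, 10) as name from test_carbon;

This only emulates the truncation on read; the stored value still keeps the full string, which is exactly the behaviour the issue reports.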


[jira] [Updated] (CARBONDATA-583) Replace Function is not working for string/char

2017-01-02 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-583:
-
Environment: cluster
Component/s: data-load

> Replace Function is not working  for string/char
> 
>
> Key: CARBONDATA-583
> URL: https://issues.apache.org/jira/browse/CARBONDATA-583
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: cluster
>Reporter: Anurag Srivastava
>Assignee: Rahul Kumar
>Priority: Minor
>
> I am running "replace" function but it is giving error : "undefined function 
> replace".
> Query : select replace('aaabbccaabb', 'aaa', 't');
> Expected Result : "tbbccaabb"
> Result : Error: org.apache.spark.sql.AnalysisException: undefined function 
> replace; line 1 pos 30 (state=,code=0) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-583) Replace Function is not working for string/char

2017-01-02 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-583:


 Summary: Replace Function is not working  for string/char
 Key: CARBONDATA-583
 URL: https://issues.apache.org/jira/browse/CARBONDATA-583
 Project: CarbonData
  Issue Type: Bug
Affects Versions: 1.0.0-incubating
Reporter: Anurag Srivastava
Priority: Minor


I am running "replace" function but it is giving error : "undefined function 
replace".

Query : select replace('aaabbccaabb', 'aaa', 't');

Expected Result : "tbbccaabb"

Result : Error: org.apache.spark.sql.AnalysisException: undefined function 
replace; line 1 pos 30 (state=,code=0) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
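Spark SQL ships regexp_replace, which covers this case as a drop-in while replace is unavailable (note that the second argument is treated as a regular expression, so metacharacters need escaping):

select regexp_replace('aaabbccaabb', 'aaa', 't');

Expected output: tbbccaabb.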


[jira] [Created] (CARBONDATA-551) Implement unit test cases for classes in processing package

2016-12-21 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-551:


 Summary: Implement unit test cases for classes in processing 
package
 Key: CARBONDATA-551
 URL: https://issues.apache.org/jira/browse/CARBONDATA-551
 Project: CarbonData
  Issue Type: Test
Reporter: Anurag Srivastava
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-422) [Bad Records]Select query failed with "NullPointerException" after data-load with options as MAXCOLUMN and BAD_RECORDS_ACTION

2016-12-21 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-422:
-
Attachment: (was: Screenshot2.png)

> [Bad Records]Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> -
>
> Key: CARBONDATA-422
> URL: https://issues.apache.org/jira/browse/CARBONDATA-422
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 0.1.1-incubating
> Environment: 3 node Cluster
>Reporter: SOURYAKANTA DWIVEDY
>Priority: Minor
>
> Description : Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> Steps:
> 1. Create table
> 2. Load data into the table with the BAD_RECORDS_ACTION option [ Create Table -- 
> columns - 9, CSV columns - 10, Header - 9 ]
> 3. Do a select * query; it will pass
> 4. Then load data into the table with the BAD_RECORDS_ACTION and MAXCOLUMNS options [ 
> Create Table -- columns - 9, CSV columns - 10, Header - 9, MAXCOLUMNS -- 9 ]
> 5. Do a select * query; it will fail with "NullPointerException"
> Log :- 
> ---
> 0: jdbc:hive2://ha-cluster/default> create table emp3(ID int,Name string,DOJ 
> timestamp,Designation string,Salary double,Dept string,DOB timestamp,Addr 
> string,Gender string) STORED BY 'org.apache.carbondata.format';
> +-+--+
> | result |
> +-+--+
> +-+--+
> No rows selected (0.589 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (2.415 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> +---+---+---+--+--+---+---++-+--+
> | id | name | doj | designation | salary | dept | dob | addr | gender |
> +---+---+---+--+--+---+---++-+--+
> | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
> | 1 | AAA | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 2 | BBB | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 3 | CCC | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 4 | DDD | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 5 | EEE | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | 6 | FFF | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 7 | GGG | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 8 | HHH | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 9 | III | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 10 | JJJ | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | NULL | Name | NULL | Designation | NULL | Dept | NULL | Addr | Gender |
> +---+---+---+--+--+---+---++-+--+
> 12 rows selected (0.418 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (1.424 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> Error: java.io.IOException: java.lang.NullPointerException (state=,code=0)
> 0: jdbc:hive2://ha-cluster/default>
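
A hedged isolation check (not part of the original report): the CSV described above has 
10 columns while MAXCOLUMNS is set to 9, so one way to narrow the failure down is to 
rerun the second load with MAXCOLUMNS at or above the real column count and query again:

LOAD DATA inpath 'hdfs://hacluster/chetan/emp11.csv' into table emp3 
options('DELIMITER'=',', 'QUOTECHAR'='"', 
'FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender', 
'MAXCOLUMNS'='10', 'BAD_RECORDS_ACTION'='FORCE');
select * from emp3;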



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CARBONDATA-422) [Bad Records]Select query failed with "NullPointerException" after data-load with options as MAXCOLUMN and BAD_RECORDS_ACTION

2016-12-21 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-422:
-
Comment: was deleted

(was: Hello SOURYAKANTA DWIVEDY ,

I have tested your same query with the latest build and get the result :

Query : LOAD DATA inpath 'hdfs://localhost:54310/test1234567.csv' into table 
emp5 options('DELIMITER'=',', 
'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
 'BAD_RECORDS_ACTION'='FORCE');

Result : 

!https://issues.apache.org/jira/secure/attachment/12844241/Screenshot2.png!

Could you please verify this with latest build and close the bug?)

> [Bad Records]Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> -
>
> Key: CARBONDATA-422
> URL: https://issues.apache.org/jira/browse/CARBONDATA-422
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 0.1.1-incubating
> Environment: 3 node Cluster
>Reporter: SOURYAKANTA DWIVEDY
>Priority: Minor
> Attachments: Screenshot2.png
>
>
> Description : Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> Steps:
> 1. Create table
> 2. Load data into table with BAD_RECORDS_ACTION option [ Create Table -- 
> columns -9 ,CSV coulmn - 10 , Header - 9]
> 3. Do select * query ,it will pass
>  4. Then Load data into table with BAD_RECORDS_ACTION and MAXCOLUMN option [ 
> Create Table -- columns -9 ,CSV coulmn - 10 , Header - 9,MAXCOLUMNS -- 9]
> 5. Do select * query ,it will fail with "NullPointerException"
> Log :- 
> ---
> 0: jdbc:hive2://ha-cluster/default> create table emp3(ID int,Name string,DOJ 
> timestamp,Designation string,Salary double,Dept string,DOB timestamp,Addr 
> string,Gender string) STORED BY 'org.apache.carbondata.format';
> +-+--+
> | result |
> +-+--+
> +-+--+
> No rows selected (0.589 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (2.415 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> +---+---+---+--+--+---+---++-+--+
> | id | name | doj | designation | salary | dept | dob | addr | gender |
> +---+---+---+--+--+---+---++-+--+
> | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
> | 1 | AAA | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 2 | BBB | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 3 | CCC | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 4 | DDD | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 5 | EEE | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | 6 | FFF | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 7 | GGG | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 8 | HHH | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 9 | III | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 10 | JJJ | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | NULL | Name | NULL | Designation | NULL | Dept | NULL | Addr | Gender |
> +---+---+---+--+--+---+---++-+--+
> 12 rows selected (0.418 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (1.424 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> Error: java.io.IOException: java.lang.NullPointerException (state=,code=0)
> 0: jdbc:hive2://ha-cluster/default>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CARBONDATA-422) [Bad Records]Select query failed with "NullPointerException" after data-load with options as MAXCOLUMN and BAD_RECORDS_ACTION

2016-12-21 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766923#comment-15766923
 ] 

Anurag Srivastava edited comment on CARBONDATA-422 at 12/21/16 12:23 PM:
-

Hello SOURYAKANTA DWIVEDY ,

I have tested the same query with the latest build and got the following result :

Query : LOAD DATA inpath 'hdfs://localhost:54310/test1234567.csv' into table 
emp5 options('DELIMITER'=',', 
'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
 'BAD_RECORDS_ACTION'='FORCE');

Result : 

!https://issues.apache.org/jira/secure/attachment/12844241/Screenshot2.png!

Could you please verify this with latest build and close the bug?


was (Author: anuragknoldus):
Hello SOURYAKANTA DWIVEDY ,

I have tested your same query with the latest build and get the result :

Query : 

Result : 

!https://issues.apache.org/jira/secure/attachment/12844241/Screenshot2.png!

Could you please verify this with latest build and close the bug?

> [Bad Records]Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> -
>
> Key: CARBONDATA-422
> URL: https://issues.apache.org/jira/browse/CARBONDATA-422
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 0.1.1-incubating
> Environment: 3 node Cluster
>Reporter: SOURYAKANTA DWIVEDY
>Priority: Minor
> Attachments: Screenshot2.png
>
>
> Description : Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> Steps:
> 1. Create table
> 2. Load data into table with BAD_RECORDS_ACTION option [ Create Table -- 
> columns -9 ,CSV coulmn - 10 , Header - 9]
> 3. Do select * query ,it will pass
>  4. Then Load data into table with BAD_RECORDS_ACTION and MAXCOLUMN option [ 
> Create Table -- columns -9 ,CSV coulmn - 10 , Header - 9,MAXCOLUMNS -- 9]
> 5. Do select * query ,it will fail with "NullPointerException"
> Log :- 
> ---
> 0: jdbc:hive2://ha-cluster/default> create table emp3(ID int,Name string,DOJ 
> timestamp,Designation string,Salary double,Dept string,DOB timestamp,Addr 
> string,Gender string) STORED BY 'org.apache.carbondata.format';
> +-+--+
> | result |
> +-+--+
> +-+--+
> No rows selected (0.589 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (2.415 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> +---+---+---+--+--+---+---++-+--+
> | id | name | doj | designation | salary | dept | dob | addr | gender |
> +---+---+---+--+--+---+---++-+--+
> | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
> | 1 | AAA | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 2 | BBB | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 3 | CCC | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 4 | DDD | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 5 | EEE | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | 6 | FFF | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 7 | GGG | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 8 | HHH | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 9 | III | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 10 | JJJ | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | NULL | Name | NULL | Designation | NULL | Dept | NULL | Addr | Gender |
> +---+---+---+--+--+---+---++-+--+
> 12 rows selected (0.418 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (1.424 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> Error: java.io.IOException: java.lang.NullPointerException (state=,code=0)
> 0: jdbc:hive2://ha-cluster/default>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CARBONDATA-422) [Bad Records]Select query failed with "NullPointerException" after data-load with options as MAXCOLUMN and BAD_RECORDS_ACTION

2016-12-21 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766923#comment-15766923
 ] 

Anurag Srivastava commented on CARBONDATA-422:
--

Hello SOURYAKANTA DWIVEDY ,

I have tested the same query with the latest build and got the following result :

Query : 

Result : 

!https://issues.apache.org/jira/secure/attachment/12844241/Screenshot2.png!

Could you please verify this with latest build and close the bug?

> [Bad Records]Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> -
>
> Key: CARBONDATA-422
> URL: https://issues.apache.org/jira/browse/CARBONDATA-422
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 0.1.1-incubating
> Environment: 3 node Cluster
>Reporter: SOURYAKANTA DWIVEDY
>Priority: Minor
> Attachments: Screenshot2.png
>
>
> Description : Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> Steps:
> 1. Create table
> 2. Load data into table with BAD_RECORDS_ACTION option [ Create Table -- 
> columns -9 ,CSV coulmn - 10 , Header - 9]
> 3. Do select * query ,it will pass
>  4. Then Load data into table with BAD_RECORDS_ACTION and MAXCOLUMN option [ 
> Create Table -- columns -9 ,CSV coulmn - 10 , Header - 9,MAXCOLUMNS -- 9]
> 5. Do select * query ,it will fail with "NullPointerException"
> Log :- 
> ---
> 0: jdbc:hive2://ha-cluster/default> create table emp3(ID int,Name string,DOJ 
> timestamp,Designation string,Salary double,Dept string,DOB timestamp,Addr 
> string,Gender string) STORED BY 'org.apache.carbondata.format';
> +-+--+
> | result |
> +-+--+
> +-+--+
> No rows selected (0.589 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (2.415 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> +---+---+---+--+--+---+---++-+--+
> | id | name | doj | designation | salary | dept | dob | addr | gender |
> +---+---+---+--+--+---+---++-+--+
> | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
> | 1 | AAA | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 2 | BBB | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 3 | CCC | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 4 | DDD | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 5 | EEE | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | 6 | FFF | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 7 | GGG | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 8 | HHH | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 9 | III | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 10 | JJJ | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | NULL | Name | NULL | Designation | NULL | Dept | NULL | Addr | Gender |
> +---+---+---+--+--+---+---++-+--+
> 12 rows selected (0.418 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (1.424 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> Error: java.io.IOException: java.lang.NullPointerException (state=,code=0)
> 0: jdbc:hive2://ha-cluster/default>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-422) [Bad Records]Select query failed with "NullPointerException" after data-load with options as MAXCOLUMN and BAD_RECORDS_ACTION

2016-12-21 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-422:
-
Attachment: (was: Screenshot1.png)

> [Bad Records]Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> -
>
> Key: CARBONDATA-422
> URL: https://issues.apache.org/jira/browse/CARBONDATA-422
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 0.1.1-incubating
> Environment: 3 node Cluster
>Reporter: SOURYAKANTA DWIVEDY
>Priority: Minor
> Attachments: Screenshot2.png
>
>
> Description : Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> Steps:
> 1. Create table
> 2. Load data into table with BAD_RECORDS_ACTION option [ Create Table -- 
> columns -9 ,CSV coulmn - 10 , Header - 9]
> 3. Do select * query ,it will pass
>  4. Then Load data into table with BAD_RECORDS_ACTION and MAXCOLUMN option [ 
> Create Table -- columns -9 ,CSV coulmn - 10 , Header - 9,MAXCOLUMNS -- 9]
> 5. Do select * query ,it will fail with "NullPointerException"
> Log :- 
> ---
> 0: jdbc:hive2://ha-cluster/default> create table emp3(ID int,Name string,DOJ 
> timestamp,Designation string,Salary double,Dept string,DOB timestamp,Addr 
> string,Gender string) STORED BY 'org.apache.carbondata.format';
> +-+--+
> | result |
> +-+--+
> +-+--+
> No rows selected (0.589 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (2.415 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> +---+---+---+--+--+---+---++-+--+
> | id | name | doj | designation | salary | dept | dob | addr | gender |
> +---+---+---+--+--+---+---++-+--+
> | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
> | 1 | AAA | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 2 | BBB | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 3 | CCC | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 4 | DDD | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 5 | EEE | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | 6 | FFF | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 7 | GGG | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 8 | HHH | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 9 | III | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 10 | JJJ | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | NULL | Name | NULL | Designation | NULL | Dept | NULL | Addr | Gender |
> +---+---+---+--+--+---+---++-+--+
> 12 rows selected (0.418 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (1.424 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> Error: java.io.IOException: java.lang.NullPointerException (state=,code=0)
> 0: jdbc:hive2://ha-cluster/default>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-422) [Bad Records]Select query failed with "NullPointerException" after data-load with options as MAXCOLUMN and BAD_RECORDS_ACTION

2016-12-21 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-422:
-
Attachment: Screenshot2.png

> [Bad Records]Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> -
>
> Key: CARBONDATA-422
> URL: https://issues.apache.org/jira/browse/CARBONDATA-422
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 0.1.1-incubating
> Environment: 3 node Cluster
>Reporter: SOURYAKANTA DWIVEDY
>Priority: Minor
> Attachments: Screenshot2.png
>
>
> Description : Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> Steps:
> 1. Create table
> 2. Load data into table with BAD_RECORDS_ACTION option [ Create Table -- 
> columns -9 ,CSV coulmn - 10 , Header - 9]
> 3. Do select * query ,it will pass
>  4. Then Load data into table with BAD_RECORDS_ACTION and MAXCOLUMN option [ 
> Create Table -- columns -9 ,CSV coulmn - 10 , Header - 9,MAXCOLUMNS -- 9]
> 5. Do select * query ,it will fail with "NullPointerException"
> Log :- 
> ---
> 0: jdbc:hive2://ha-cluster/default> create table emp3(ID int,Name string,DOJ 
> timestamp,Designation string,Salary double,Dept string,DOB timestamp,Addr 
> string,Gender string) STORED BY 'org.apache.carbondata.format';
> +-+--+
> | result |
> +-+--+
> +-+--+
> No rows selected (0.589 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (2.415 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> +---+---+---+--+--+---+---++-+--+
> | id | name | doj | designation | salary | dept | dob | addr | gender |
> +---+---+---+--+--+---+---++-+--+
> | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
> | 1 | AAA | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 2 | BBB | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 3 | CCC | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 4 | DDD | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 5 | EEE | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | 6 | FFF | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 7 | GGG | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 8 | HHH | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 9 | III | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 10 | JJJ | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | NULL | Name | NULL | Designation | NULL | Dept | NULL | Addr | Gender |
> +---+---+---+--+--+---+---++-+--+
> 12 rows selected (0.418 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (1.424 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> Error: java.io.IOException: java.lang.NullPointerException (state=,code=0)
> 0: jdbc:hive2://ha-cluster/default>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-422) [Bad Records]Select query failed with "NullPointerException" after data-load with options as MAXCOLUMN and BAD_RECORDS_ACTION

2016-12-21 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-422:
-
Attachment: Screenshot1.png

> [Bad Records]Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> -
>
> Key: CARBONDATA-422
> URL: https://issues.apache.org/jira/browse/CARBONDATA-422
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 0.1.1-incubating
> Environment: 3 node Cluster
>Reporter: SOURYAKANTA DWIVEDY
>Priority: Minor
> Attachments: Screenshot1.png
>
>
> Description : Select query failed with "NullPointerException" after data-load 
> with options as MAXCOLUMN and BAD_RECORDS_ACTION
> Steps:
> 1. Create table
> 2. Load data into table with BAD_RECORDS_ACTION option [ Create Table -- 
> columns -9 ,CSV coulmn - 10 , Header - 9]
> 3. Do select * query ,it will pass
>  4. Then Load data into table with BAD_RECORDS_ACTION and MAXCOLUMN option [ 
> Create Table -- columns -9 ,CSV coulmn - 10 , Header - 9,MAXCOLUMNS -- 9]
> 5. Do select * query ,it will fail with "NullPointerException"
> Log :- 
> ---
> 0: jdbc:hive2://ha-cluster/default> create table emp3(ID int,Name string,DOJ 
> timestamp,Designation string,Salary double,Dept string,DOB timestamp,Addr 
> string,Gender string) STORED BY 'org.apache.carbondata.format';
> +-+--+
> | result |
> +-+--+
> +-+--+
> No rows selected (0.589 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (2.415 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> +---+---+---+--+--+---+---++-+--+
> | id | name | doj | designation | salary | dept | dob | addr | gender |
> +---+---+---+--+--+---+---++-+--+
> | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
> | 1 | AAA | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 2 | BBB | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 3 | CCC | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 4 | DDD | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 5 | EEE | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | 6 | FFF | NULL | Trainee | 1.0 | IT | NULL | Pune | Male |
> | 7 | GGG | NULL | SE | 3.0 | NW | NULL | Bangalore | Female |
> | 8 | HHH | NULL | SSE | 4.0 | DATA | NULL | Mumbai | Female |
> | 9 | III | NULL | TL | 6.0 | OPER | NULL | Delhi | Male |
> | 10 | JJJ | NULL | STL | 8.0 | MAIN | NULL | Chennai | Female |
> | NULL | Name | NULL | Designation | NULL | Dept | NULL | Addr | Gender |
> +---+---+---+--+--+---+---++-+--+
> 12 rows selected (0.418 seconds)
> 0: jdbc:hive2://ha-cluster/default> LOAD DATA inpath 
> 'hdfs://hacluster/chetan/emp11.csv' into table emp3 options('DELIMITER'=',', 
> 'QUOTECHAR'='"','FILEHEADER'='ID,Name,DOJ,Designation,Salary,Dept,DOB,Addr,Gender','MAXCOLUMNS'='9',
>  'BAD_RECORDS_ACTION'='FORCE');
> +-+--+
> | Result |
> +-+--+
> +-+--+
> No rows selected (1.424 seconds)
> 0: jdbc:hive2://ha-cluster/default> select * from emp3;
> Error: java.io.IOException: java.lang.NullPointerException (state=,code=0)
> 0: jdbc:hive2://ha-cluster/default>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CARBONDATA-514) Select string type columns will return error.

2016-12-20 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766331#comment-15766331
 ] 

Anurag Srivastava edited comment on CARBONDATA-514 at 12/21/16 7:10 AM:


Hello Cao, Lionel,  

I have created a table as you did, with the same columns and data types. After that I 
ran all the commands, and the command for which you are getting the error runs 
fine with the latest build : 
Query :  cc.sql("select vin, count(*) as cnt from carbontest_002 group by 
vin").show 

Output : 
!https://issues.apache.org/jira/secure/attachment/12844191/Screenshot.png!

So could you please test it with latest code and close the bug?


was (Author: anuragknoldus):
Hello Cao, Lionel,  

I have created a table as you did with same column and data type, after that I 
run all the command and for that command for which you are getting error, is 
running fine with the latest build : 
Query :  cc.sql("select vin, count as cnt from carbontest_002 group by 
vin").show 

Output : 
!https://issues.apache.org/jira/secure/attachment/12844191/Screenshot.png!

So could you please test it with latest code and close the bug?

> Select string type columns will return error.
> -
>
> Key: CARBONDATA-514
> URL: https://issues.apache.org/jira/browse/CARBONDATA-514
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
>Reporter: Cao, Lionel
> Attachments: Screenshot.png
>
>
> The data successfully loaded and count(*) is OK, but when I tried to query 
> the detail data, it returns below error:
> scala> cc.sql("desc carbontest_002").show 
> +-+-+---+ 
> | col_name|data_type|comment| 
> +-+-+---+ 
> |  vin|   string|   | 
> |data_date|   string|   | 
> +-+-+---+ 
> scala> cc.sql("load data inpath 
> 'hdfs://nameservice2/user/appuser/lucao/mydata4.csv' into table 
> default.carbontest_002 OPTIONS('DELIMITER'=',')") 
> WARN  07-12 16:30:30,241 - main skip empty input file: 
> hdfs://nameservice2/user/appuser/lucao/mydata4.csv/_SUCCESS 
> AUDIT 07-12 16:30:34,338 - [*.com][appuser][Thread-1]Data load request has 
> been received for table default.carbontest_002 
> AUDIT 07-12 16:30:38,410 - [*.com][appuser][Thread-1]Data load is successful 
> for default.carbontest_002 
> res12: org.apache.spark.sql.DataFrame = [] 
> scala> cc.sql("select count(*) from carbontest_002") 
> res14: org.apache.spark.sql.DataFrame = [_c0: bigint] 
> scala> res14.show 
> +---+ 
> |_c0| 
> +---+ 
> |100| 
> +---+ 
> scala> cc.sql("select vin, count(*) as cnt from carbontest_002 group by 
> vin").show 
> WARN  07-12 16:32:04,250 - Lost task 1.0 in stage 20.0 (TID 40, *.com): 
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Integer 
> at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106) 
> at 
> org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getInt(rows.scala:41)
>  
> at 
> org.apache.spark.sql.catalyst.expressions.GenericMutableRow.getInt(rows.scala:248)
>  
> at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Source) 
> at 
> org.apache.spark.sql.CarbonScan$$anonfun$1$$anon$1.next(CarbonScan.scala:155) 
> at 
> org.apache.spark.sql.CarbonScan$$anonfun$1$$anon$1.next(CarbonScan.scala:149) 
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:512)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.(TungstenAggregationIterator.scala:686)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>  
> at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>  
> at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>  
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
> at org.apache.spark.scheduler.Task.run(Task.scala:89) 
> at org.apache.spark.executor.Ex

[jira] [Commented] (CARBONDATA-514) Select string type columns will return error.

2016-12-20 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766331#comment-15766331
 ] 

Anurag Srivastava commented on CARBONDATA-514:
--

Hello Cao, Lionel,  

I have created a table as you did, with the same columns and data types. After that I 
ran all the commands, and the command for which you are getting the error runs 
fine with the latest build : 
Query :  cc.sql("select vin, count(*) as cnt from carbontest_002 group by 
vin").show 

Output : 
!https://issues.apache.org/jira/secure/attachment/12844191/Screenshot.png!

So could you please test it with latest code and close the bug?

> Select string type columns will return error.
> -
>
> Key: CARBONDATA-514
> URL: https://issues.apache.org/jira/browse/CARBONDATA-514
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
>Reporter: Cao, Lionel
> Attachments: Screenshot.png
>
>
> The data successfully loaded and count(*) is OK, but when I tried to query 
> the detail data, it returns below error:
> scala> cc.sql("desc carbontest_002").show 
> +-+-+---+ 
> | col_name|data_type|comment| 
> +-+-+---+ 
> |  vin|   string|   | 
> |data_date|   string|   | 
> +-+-+---+ 
> scala> cc.sql("load data inpath 
> 'hdfs://nameservice2/user/appuser/lucao/mydata4.csv' into table 
> default.carbontest_002 OPTIONS('DELIMITER'=',')") 
> WARN  07-12 16:30:30,241 - main skip empty input file: 
> hdfs://nameservice2/user/appuser/lucao/mydata4.csv/_SUCCESS 
> AUDIT 07-12 16:30:34,338 - [*.com][appuser][Thread-1]Data load request has 
> been received for table default.carbontest_002 
> AUDIT 07-12 16:30:38,410 - [*.com][appuser][Thread-1]Data load is successful 
> for default.carbontest_002 
> res12: org.apache.spark.sql.DataFrame = [] 
> scala> cc.sql("select count(*) from carbontest_002") 
> res14: org.apache.spark.sql.DataFrame = [_c0: bigint] 
> scala> res14.show 
> +---+ 
> |_c0| 
> +---+ 
> |100| 
> +---+ 
> scala> cc.sql("select vin, count(*) as cnt from carbontest_002 group by 
> vin").show 
> WARN  07-12 16:32:04,250 - Lost task 1.0 in stage 20.0 (TID 40, *.com): 
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Integer 
> at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106) 
> at 
> org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getInt(rows.scala:41)
>  
> at 
> org.apache.spark.sql.catalyst.expressions.GenericMutableRow.getInt(rows.scala:248)
>  
> at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Source) 
> at 
> org.apache.spark.sql.CarbonScan$$anonfun$1$$anon$1.next(CarbonScan.scala:155) 
> at 
> org.apache.spark.sql.CarbonScan$$anonfun$1$$anon$1.next(CarbonScan.scala:149) 
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:512)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.(TungstenAggregationIterator.scala:686)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>  
> at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>  
> at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>  
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
> at org.apache.spark.scheduler.Task.run(Task.scala:89) 
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  
> at java.lang.Thread.run(Thread.java:745) 
> ERROR 07-12 16:32:04,516 - Task 1 in stage 20.0 failed 4 times; aborting job 
> WARN  07-12 16:32:04,600 - Lost task 0.1 in stage 20.0 (TID 45, *): 
> TaskKilled (killed intentionally) 
> ERROR 07-12 16:32:04,604 - Listener SQLListener threw an exception 
>

[jira] [Updated] (CARBONDATA-514) Select string type columns will return error.

2016-12-20 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-514:
-
Attachment: Screenshot.png

> Select string type columns will return error.
> -
>
> Key: CARBONDATA-514
> URL: https://issues.apache.org/jira/browse/CARBONDATA-514
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
>Reporter: Cao, Lionel
> Attachments: Screenshot.png
>
>
> The data successfully loaded and count(*) is OK, but when I tried to query 
> the detail data, it returns below error:
> scala> cc.sql("desc carbontest_002").show 
> +-+-+---+ 
> | col_name|data_type|comment| 
> +-+-+---+ 
> |  vin|   string|   | 
> |data_date|   string|   | 
> +-+-+---+ 
> scala> cc.sql("load data inpath 
> 'hdfs://nameservice2/user/appuser/lucao/mydata4.csv' into table 
> default.carbontest_002 OPTIONS('DELIMITER'=',')") 
> WARN  07-12 16:30:30,241 - main skip empty input file: 
> hdfs://nameservice2/user/appuser/lucao/mydata4.csv/_SUCCESS 
> AUDIT 07-12 16:30:34,338 - [*.com][appuser][Thread-1]Data load request has 
> been received for table default.carbontest_002 
> AUDIT 07-12 16:30:38,410 - [*.com][appuser][Thread-1]Data load is successful 
> for default.carbontest_002 
> res12: org.apache.spark.sql.DataFrame = [] 
> scala> cc.sql("select count(*) from carbontest_002") 
> res14: org.apache.spark.sql.DataFrame = [_c0: bigint] 
> scala> res14.show 
> +---+ 
> |_c0| 
> +---+ 
> |100| 
> +---+ 
> scala> cc.sql("select vin, count(*) as cnt from carbontest_002 group by 
> vin").show 
> WARN  07-12 16:32:04,250 - Lost task 1.0 in stage 20.0 (TID 40, *.com): 
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Integer 
> at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106) 
> at 
> org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getInt(rows.scala:41)
>  
> at 
> org.apache.spark.sql.catalyst.expressions.GenericMutableRow.getInt(rows.scala:248)
>  
> at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Source) 
> at 
> org.apache.spark.sql.CarbonScan$$anonfun$1$$anon$1.next(CarbonScan.scala:155) 
> at 
> org.apache.spark.sql.CarbonScan$$anonfun$1$$anon$1.next(CarbonScan.scala:149) 
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:512)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.(TungstenAggregationIterator.scala:686)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
>  
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>  
> at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>  
> at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>  
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
> at org.apache.spark.scheduler.Task.run(Task.scala:89) 
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  
> at java.lang.Thread.run(Thread.java:745) 
> ERROR 07-12 16:32:04,516 - Task 1 in stage 20.0 failed 4 times; aborting job 
> WARN  07-12 16:32:04,600 - Lost task 0.1 in stage 20.0 (TID 45, *): 
> TaskKilled (killed intentionally) 
> ERROR 07-12 16:32:04,604 - Listener SQLListener threw an exception 
> java.lang.NullPointerException 
> at 
> org.apache.spark.sql.execution.ui.SQLListener.onTaskEnd(SQLListener.scala:167)
>  
> at 
> org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:42)
>  
> at 
> org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
>  
> at 
> org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
>  
> at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.sc

[jira] [Created] (CARBONDATA-543) Implement unit test cases for DataBlockIteratorImpl, IntermediateFileMerger and SortDataRows classes

2016-12-19 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-543:


 Summary: Implement unit test cases for DataBlockIteratorImpl, 
IntermediateFileMerger and SortDataRows classes
 Key: CARBONDATA-543
 URL: https://issues.apache.org/jira/browse/CARBONDATA-543
 Project: CarbonData
  Issue Type: Test
Reporter: Anurag Srivastava
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-541) Implement unit test cases for processing.newflow.dictionary package

2016-12-18 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-541:


 Summary: Implement unit test cases for 
processing.newflow.dictionary package
 Key: CARBONDATA-541
 URL: https://issues.apache.org/jira/browse/CARBONDATA-541
 Project: CarbonData
  Issue Type: Test
Reporter: Anurag Srivastava
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CARBONDATA-324) Decimal and Bigint type columns contains Null, after load data

2016-12-15 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15750746#comment-15750746
 ] 

Anurag Srivastava commented on CARBONDATA-324:
--

!https://issues.apache.org/jira/secure/attachment/12843370/Screenshot%20from%202016-10-19%2010-54-06.png!

> Decimal and Bigint type columns contains Null, after load data
> --
>
> Key: CARBONDATA-324
> URL: https://issues.apache.org/jira/browse/CARBONDATA-324
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Harmeet Singh
> Attachments: Screenshot from 2016-10-19 10-54-06.png
>
>
> Using the Thrift server and Beeline client, I am trying to create a table and 
> load the data from a CSV. My table contains BigInt and Decimal column types. 
> After loading the data using the LOAD DATA command, the BigInt and Decimal 
> columns contain null values. Below are the steps:
> Step 1: 
> > create database wednesday;
> > use wednesday;
> > CREATE TABLE one (id int, age iNt, name String, salary decimal, data 
> > bigInt, weight double, dob timeStamp) STORED BY 'carbondata';
> Step 2: 
> Create a csv file which contains column values as below: 
> id, age, name, salary, data, weight, dob
> 1, 54, james, 90, 292092, 34.2, 2016-05-04 22:55:00
> Step 3: 
> Load the data from CSV file as below: 
> > LOAD DATA INPATH 'hdfs://localhost:54310/home/harmeet/sample3.csv' INTO 
> > TABLE one;
> Step 4: 
> Select the data from table one; the BigInt and Decimal columns contain null 
> values. 
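
A minimal check sketch (not part of the original report), assuming the same wednesday 
database and table one from the steps above; it only narrows down whether the nulls are 
limited to the salary (decimal) and data (bigint) columns:

use wednesday;
desc one;
select id, salary, data, weight from one;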



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CARBONDATA-324) Decimal and Bigint type columns contains Null, after load data

2016-12-15 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-324:
-
Comment: was deleted

(was: 
!https://issues.apache.org/jira/secure/attachment/12843370/Screenshot%20from%202016-10-19%2010-54-06.png!)

> Decimal and Bigint type columns contains Null, after load data
> --
>
> Key: CARBONDATA-324
> URL: https://issues.apache.org/jira/browse/CARBONDATA-324
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Harmeet Singh
> Attachments: Screenshot from 2016-10-19 10-54-06.png
>
>
> Using the Thrift server and Beeline client, I am trying to create a table and 
> load the data from a CSV. My table contains BigInt and Decimal column types. 
> After loading the data using the LOAD DATA command, the BigInt and Decimal 
> columns contain null values. Below are the steps:
> Step 1: 
> > create database wednesday;
> > use wednesday;
> > CREATE TABLE one (id int, age iNt, name String, salary decimal, data 
> > bigInt, weight double, dob timeStamp) STORED BY 'carbondata';
> Step 2: 
> Create a csv file which contains column values as below: 
> id, age, name, salary, data, weight, dob
> 1, 54, james, 90, 292092, 34.2, 2016-05-04 22:55:00
> Step 3: 
> Load the data from CSV file as below: 
> > LOAD DATA INPATH 'hdfs://localhost:54310/home/harmeet/sample3.csv' INTO 
> > TABLE one;
> Step 4: 
> Select the data from table one; the BigInt and Decimal columns contain null 
> values. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CARBONDATA-408) Unable to create view from a table

2016-12-13 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15747470#comment-15747470
 ] 

Anurag Srivastava commented on CARBONDATA-408:
--

This is not a bug; CarbonData currently does not support views.

> Unable to create view from a table
> --
>
> Key: CARBONDATA-408
> URL: https://issues.apache.org/jira/browse/CARBONDATA-408
> Project: CarbonData
>  Issue Type: Bug
>Reporter: SWATI RAO
>Priority: Trivial
>
> When we tried to execute the following query to create view in carbon :
> create view emp_view AS Select name,sal from demo2;
> NOTE :demo2 table contains following columns: 
> id Int,
> name String, 
> sal decimal
> we got the following exception:
> Error: org.apache.spark.sql.execution.QueryExecutionException: FAILED: 
> SemanticException [Error 10004]: Line 1:31 Invalid table alias or column 
> reference 'name': (possible column names are: col) (state=,code=0)
> whereas we are able to create a view in Hive using the same query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CARBONDATA-442) SELECT query result mismatched with Hive result

2016-12-13 Thread Anurag Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15747459#comment-15747459
 ] 

Anurag Srivastava commented on CARBONDATA-442:
--

This is not a bug. The column order in the CREATE TABLE statement differs from the 
FILEHEADER order in the LOAD DATA statement (for example, contractNumber is the last 
column in CREATE TABLE but follows oxSingleNumber in the FILEHEADER); they should 
match. Please close it.

> SELECT query result mismatched with Hive result
> 
>
> Key: CARBONDATA-442
> URL: https://issues.apache.org/jira/browse/CARBONDATA-442
> Project: CarbonData
>  Issue Type: Bug
>Reporter: SWATI RAO
>
> => I created the table using the following command : 
> create table Carbon_automation_test5 (imei string,deviceInformationId int,MAC 
> string,deviceColor string,device_backColor string,modelId string,marketName 
> string,AMSize string,ROMSize string,CUPAudit string,CPIClocked string,series 
> string,productionDate string,bomCode string,internalModels string, 
> deliveryTime string, channelsId string,channelsName string , deliveryAreaId 
> string, deliveryCountry string, deliveryProvince string, deliveryCity 
> string,deliveryDistrict string, deliveryStreet string,oxSingleNumber string, 
> ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, 
> ActiveProvince string, Activecity string, ActiveDistrict string, ActiveStreet 
> string, ActiveOperatorId string, Active_releaseId string, Active_EMUIVersion 
> string,Active_operaSysVersion string, Active_BacVerNumber string, 
> Active_BacFlashVer string,Active_webUIVersion string, Active_webUITypeCarrVer 
> string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
> Active_phonePADPartitionedVersions string,Latest_YEAR int, Latest_MONTH int, 
> Latest_DAY int, Latest_HOUR string, Latest_areaId string, Latest_country 
> string, Latest_province string, Latest_city string,Latest_district string, 
> Latest_street string, Latest_releaseId string,Latest_EMUIVersion string, 
> Latest_operaSysVersion string, Latest_BacVerNumber string,Latest_BacFlashVer 
> string, Latest_webUIVersion string, Latest_webUITypeCarrVer 
> string,Latest_webTypeDataVerNumber string, Latest_operatorsVersion 
> string,Latest_phonePADPartitionedVersions string, Latest_operatorId 
> string,gamePointDescription string, gamePointId int,contractNumber int) 
> stored by 'org.apache.carbondata.format' 
> => Load csv to table : 
> LOAD DATA INPATH 'hdfs://localhost:54310/user/hduser/100_olap.csv' INTO table 
> Carbon_automation_test5 OPTIONS('DELIMITER'= ',' ,'QUOTECHAR'= '"', 
> 'FILEHEADER'= 
> 'imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription')
> => Now executed the SELECT query : 
> SELECT Carbon_automation_test5.AMSize AS AMSize, 
> Carbon_automation_test5.ActiveCountry AS ActiveCountry, 
> Carbon_automation_test5.Activecity AS Activecity , 
> SUM(Carbon_automation_test5.gamePointId) AS Sum_gamePointId FROM ( SELECT 
> AMSize,ActiveCountry,gamePointId, Activecity FROM (select * from 
> Carbon_automation_test5) SUB_QRY ) Carbon_automation_test5 INNER JOIN ( 
> SELECT ActiveCountry, Activecity, AMSize FROM (select * from 
> Carbon_automation_test5) SUB_QRY ) Carbon_automation_vmall_test1 ON 
> Carbon_automation_test5.AMSize = Carbon_automation_vmall_test1.AMSize WHERE 
> NOT(Carbon_automation_test5.AMSize <= '3RAM size') GROUP BY 
> Carbon_automation_test5.AMSize, Carbon_automation_test5.ActiveCountry, 
> Carbon_automation_test5.Activecity ORDER BY Carbon_automation_test5.AMSize 
> ASC, Carbon_automation_test5.ActiveCountry ASC, 
> Carbon_automation_test5.Activecity ASC;
> +------------+----------------+-------------+------------------+
> |   AMSize   | ActiveCountry  | Activecity  | Sum_gamePointId  |
> +------------+----------------+-------------+------------------+
> | 4RAM size  | Chinese        | changsha    | 200860           |
> | 4RAM size  | Chinese   

[jira] [Created] (CARBONDATA-515) Implement unit test cases for processing.newflow.converter package

2016-12-08 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-515:


 Summary: Implement unit test cases for 
processing.newflow.converter package
 Key: CARBONDATA-515
 URL: https://issues.apache.org/jira/browse/CARBONDATA-515
 Project: CarbonData
  Issue Type: Test
Reporter: Anurag Srivastava
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-496) Implement unit test cases for core.carbon.datastore package

2016-12-05 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-496:


 Summary: Implement unit test cases for core.carbon.datastore 
package
 Key: CARBONDATA-496
 URL: https://issues.apache.org/jira/browse/CARBONDATA-496
 Project: CarbonData
  Issue Type: Test
Reporter: Anurag Srivastava
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-494) Implement unit test cases for filter.executer package

2016-12-05 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-494:


 Summary: Implement unit test cases for filter.executer package
 Key: CARBONDATA-494
 URL: https://issues.apache.org/jira/browse/CARBONDATA-494
 Project: CarbonData
  Issue Type: Test
Reporter: Anurag Srivastava
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-474) Implement unit test cases for core.datastorage package

2016-12-04 Thread Anurag Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Srivastava updated CARBONDATA-474:
-
Summary: Implement unit test cases for core.datastorage package  (was: 
Implement unit test cases for core.datastorage.store.columnar package)

> Implement unit test cases for core.datastorage package
> --
>
> Key: CARBONDATA-474
> URL: https://issues.apache.org/jira/browse/CARBONDATA-474
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Anurag Srivastava
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

