[jira] [Created] (CARBONDATA-1037) NullPointerException thrown when executing select * on the old table after an Alter Table Rename operation

2017-05-07 Thread Priyal Sachdeva (JIRA)
Priyal Sachdeva created CARBONDATA-1037:
---

 Summary: NullPointerException thrown when executing select * on the old table after an Alter Table Rename operation
 Key: CARBONDATA-1037
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1037
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.1.0
 Environment: 3 node cluster SUSE 11 SP4
Reporter: Priyal Sachdeva
 Fix For: NONE
 Attachments: Null_Pointer_exception-Error.JPG, show_tables.JPG

create database Priyal;

Use Priyal;

Create Table

CREATE TABLE uniqdata111785 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES('DICTIONARY_INCLUDE'='INTEGER_COLUMN1,CUST_ID');


Load Data into Table

LOAD DATA INPATH 'hdfs://hacluster/user/Priyal/2000_UniqData.csv' into table 
uniqdata111785 OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_LOGGER_ENABLE'='TRUE', 
'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

LOAD DATA INPATH 'hdfs://hacluster/user/Priyal/2000_UniqData.csv' into table 
uniqdata111785 OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_LOGGER_ENABLE'='TRUE', 
'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

Alter Table Rename

alter table Priyal.uniqdata111785 RENAME TO  uniqdata1117856;

Select query on Old Table

select * from Priyal.uniqdata111785 limit 10;

0: jdbc:hive2://172.168.100.199:23040> select * from Priyal.uniqdata111785 
limit 10;
Error: java.lang.NullPointerException (state=,code=0)


Show tables;

0: jdbc:hive2://172.168.100.199:23040> show tables;
+-----------+------------------+--------------+--+
| database  | tableName        | isTemporary  |
+-----------+------------------+--------------+--+
| priyal    | uniqdata1117856  | false        |
+-----------+------------------+--------------+--+

Expected Output: A "Table does not exist" error message should be returned.
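For what it's worth, the symptom looks consistent with a catalog cache that still holds an invalidated entry for the old name after the rename, so the lookup returns null instead of failing cleanly. The following is a toy Python model of that suspected failure mode; the class, field names, and paths are invented for illustration and are not CarbonData's actual code:

```python
class Catalog:
    """Toy catalog: rename updates the backing store but leaves a stale,
    nulled-out cache entry behind, mimicking the suspected failure mode."""

    def __init__(self):
        self.store = {"uniqdata111785": {"path": "/store/uniqdata111785"}}
        self.cache = dict(self.store)  # cached lookups

    def rename(self, old, new):
        self.store[new] = self.store.pop(old)
        self.cache[old] = None  # stale entry: invalidated but not removed

    def select_buggy(self, name):
        meta = self.cache.get(name)
        return meta["path"]  # dereferences None -> analogue of the NPE

    def select_fixed(self, name):
        meta = self.store.get(name)
        if meta is None:
            raise LookupError(f"Table {name} does not exist")
        return meta["path"]

cat = Catalog()
cat.rename("uniqdata111785", "uniqdata1117856")
try:
    cat.select_buggy("uniqdata111785")
except TypeError:
    print("buggy path: dereferenced a stale null entry")
try:
    cat.select_fixed("uniqdata111785")
except LookupError as e:
    print("fixed path:", e)
```

With the fixed lookup, the query on the old name surfaces the expected "Table does not exist" error rather than a null dereference.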



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CARBONDATA-1036) Add Support to Read from CarbonData as a source - Flink Integration

2017-05-07 Thread Sangeeta Gulia (JIRA)
Sangeeta Gulia created CARBONDATA-1036:
--

 Summary: Add Support to Read from CarbonData as a source - Flink 
Integration
 Key: CARBONDATA-1036
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1036
 Project: CarbonData
  Issue Type: New Feature
  Components: flink-integration
Reporter: Sangeeta Gulia
Priority: Minor








[jira] [Created] (CARBONDATA-1035) 8. IUD on partition table

2017-05-07 Thread QiangCai (JIRA)
QiangCai created CARBONDATA-1035:


 Summary: 8. IUD on partition table
 Key: CARBONDATA-1035
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1035
 Project: CarbonData
  Issue Type: Sub-task
Reporter: QiangCai








Re: [jira] [Created] (CARBONDATA-1030) Support reading specified segment or carbondata file

2017-05-07 Thread Liang Chen
Hi

+1 for this feature.
How about query syntax like the one below:
carbon.sql("select * from carbontable in segmentid(0,3,5,7) where filter
conditions").show()
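To sketch what the feature amounts to (an illustrative Python model; the segment layout, ids, and function names here are assumptions, not CarbonData's actual API): reading specified segments is essentially pruning the table's segment list down to the requested ids before the scan starts.

```python
def prune_segments(all_segments, requested_ids):
    """Keep only the segments named in the query's segmentid(...) list.
    Unknown ids are reported rather than silently ignored."""
    by_id = {seg["id"]: seg for seg in all_segments}
    missing = [i for i in requested_ids if i not in by_id]
    if missing:
        raise ValueError(f"unknown segment ids: {missing}")
    return [by_id[i] for i in requested_ids]

# Hypothetical table with 8 segments, one data file each.
segments = [{"id": i, "files": [f"Fact/Part0/Segment_{i}/part-0.carbondata"]}
            for i in range(8)]

selected = prune_segments(segments, [0, 3, 5, 7])
files = [f for seg in selected for f in seg["files"]]  # only these are scanned
```

The where-clause filters would then apply only to rows read from the pruned file list, which is what makes this useful for incremental processing.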

Regards
Liang


2017-05-05 22:33 GMT+08:00 Jin Zhou (JIRA) :

> Jin Zhou created CARBONDATA-1030:
> 
>
>  Summary: Support reading specified segment or carbondata file
>  Key: CARBONDATA-1030
>  URL: https://issues.apache.org/
> jira/browse/CARBONDATA-1030
>  Project: CarbonData
>   Issue Type: Improvement
> Reporter: Jin Zhou
> Priority: Minor
>
>
> We can query whole table in SQL way currently, but reading specified
> segments or data files is useful in some scenarios such as incremental data
> processing.
>
>
>



-- 
Regards
Liang


[jira] [Created] (CARBONDATA-1034) FilterUnsupportedException thrown when a select's equality filter on an int column uses a negative value beyond the int range

2017-05-07 Thread Chetan Bhat (JIRA)
Chetan Bhat created CARBONDATA-1034:
---

 Summary: FilterUnsupportedException thrown when a select's equality filter on an int column uses a negative value beyond the int range
 Key: CARBONDATA-1034
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1034
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.1.0
 Environment: 3 node cluster
Reporter: Chetan Bhat
 Attachments: file.csv

In Beeline, the user creates a table and loads data into it.
The user then executes a select with an equality filter on an int column, where the filter value is a negative number beyond the int range.
0: jdbc:hive2://172.168.100.199:23040> CREATE table mycube21 (column1 STRING, 
column2 STRING,column3 INT, column4 INT,column5 INT, column6 INT) stored by 
'org.apache.carbondata.format' 
TBLPROPERTIES("columnproperties.column1.shared_column"="shared.column1","columnproperties.column2.shared_column"="shared.column2");
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (0.059 seconds)
0: jdbc:hive2://172.168.100.199:23040> LOAD DATA INPATH 
'hdfs://hacluster/chetan/file.csv' INTO TABLE mycube21 
OPTIONS('DELIMITER'=',','QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='');
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (1.198 seconds)
0: jdbc:hive2://172.168.100.199:23040> select * from mycube21 where 
column4=-9223372036854775808;


Actual Result: a FilterUnsupportedException is thrown for the select whose equality filter on an int column uses a negative value beyond the int range.
0: jdbc:hive2://172.168.100.199:23040> select * from mycube21 where 
column4=-9223372036854775808;
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 
0 in stage 936.0 failed 4 times, most recent failure: Lost task 0.3 in stage 
936.0 (TID 42603, linux-53, executor 1): 
org.apache.spark.util.TaskCompletionListenerException: 
java.util.concurrent.ExecutionException: 
org.apache.carbondata.core.scan.expression.exception.FilterUnsupportedException:
 java.lang.Long cannot be cast to java.lang.Integer
at 
org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace: (state=,code=0)


Expected Result: no exception should be thrown. The select query should return the correct result set (0 rows).
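The stack trace ("java.lang.Long cannot be cast to java.lang.Integer") suggests the 64-bit literal is being narrowed onto a 32-bit column. A hedged sketch of the expected handling, in illustrative Python rather than Carbon's actual filter code: when the equality literal lies outside the column type's range, the filter can be folded to an empty result instead of attempting the cast.

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1  # 32-bit signed int range

def eval_int_equals(column_values, literal):
    """Equality filter on an int column. A literal outside the 32-bit
    range can never match any stored value, so return an empty result
    instead of narrowing the literal and throwing."""
    if not (INT_MIN <= literal <= INT_MAX):
        return []  # fold to empty result set: no cast, no exception
    return [v for v in column_values if v == literal]

rows = [7, -42, 2147483647]
empty = eval_int_equals(rows, -9223372036854775808)  # the reported query -> []
match = eval_int_equals(rows, -42)                   # in-range literal -> [-42]
```

This matches the expected result above: the query with the out-of-range literal returns 0 rows rather than failing the task.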





[jira] [Created] (CARBONDATA-1033) bucket table with an array-type column is created but exception thrown when select performed

2017-05-07 Thread Chetan Bhat (JIRA)
Chetan Bhat created CARBONDATA-1033:
---

 Summary: bucket table with an array-type column is created, but an exception is thrown when a select is performed
 Key: CARBONDATA-1033
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1033
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.1.0
 Environment: 3 node cluster 
Reporter: Chetan Bhat


The user tries to create a bucket table with an array-type column.
The table creation succeeds, as shown below.
0: jdbc:hive2://172.168.100.199:23040> CREATE TABLE uniqData_t4(ID Int, date 
Timestamp, country String,name String, phonetype String, serialname String, 
salary Int,mobile array)USING org.apache.spark.sql.CarbonSource 
OPTIONS("bucketnumber"="1", "bucketcolumns"="name","tableName"="uniqData_t4");
+---------+--+
| Result  |
+---------+--+
+---------+--+
No rows selected (0.061 seconds)

The user then executes select queries on the bucket table whose schema includes the array-type column.

Actual Issue:
When the user performs a select query on the bucket table with the array-type column, an UncheckedExecutionException is thrown.
0: jdbc:hive2://172.168.100.199:23040> select count(*) from uniqData_t4;
Error: org.spark_project.guava.util.concurrent.UncheckedExecutionException: 
java.lang.Exception: Do not have default and uniqdata_t4 (state=,code=0)
0: jdbc:hive2://172.168.100.199:23040> select * from uniqData_t4;
Error: org.spark_project.guava.util.concurrent.UncheckedExecutionException: 
java.lang.Exception: Do not have default and uniqdata_t4 (state=,code=0)


Expected: creation of a bucket table with an array-type column should fail. If the table is created, the select query should return the correct result set without throwing an exception.
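One way to get the fail-fast behavior expected above is to validate the schema at CREATE time. The sketch below is illustrative Python under the assumption that complex types are simply unsupported in bucketed tables; the function, the type names, and the `array<string>` element type are hypothetical, not CarbonData's actual validation code:

```python
COMPLEX_TYPES = {"array", "struct", "map"}

def validate_bucket_table(columns, bucket_columns):
    """Reject unsupported bucket-table schemas at CREATE time so the
    failure surfaces as a clear DDL error instead of an exception at
    query time. `columns` is a list of (name, datatype) pairs."""
    names = {name for name, _ in columns}
    for name, dtype in columns:
        base = dtype.split("<", 1)[0].lower()  # array<string> -> array
        if base in COMPLEX_TYPES:
            raise ValueError(
                f"column '{name}' has complex type '{dtype}', "
                "which is not supported in a bucketed table")
    for b in bucket_columns:
        if b not in names:
            raise ValueError(f"bucket column '{b}' not found in schema")

# Passes: simple types only (hypothetical schema).
validate_bucket_table([("ID", "int"), ("name", "string")], ["name"])
```

With this check in place, the CREATE TABLE above would be rejected with a descriptive error rather than producing a table that later fails on select.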


