[jira] [Updated] (CARBONDATA-1046) Single_pass_loading is throwing an error in Spark1.6 in automation

2017-05-15 Thread SWATI RAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SWATI RAO updated CARBONDATA-1046:
--
Request participants:   (was: )
 Summary: Single_pass_loading is throwing an error in Spark1.6 
in automation  (was: Single_pass_loading is throwing an error in Spark1.6)

> Single_pass_loading is throwing an error in Spark1.6 in automation
> --
>
> Key: CARBONDATA-1046
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1046
> Project: CarbonData
>  Issue Type: Bug
> Environment: Spark1.6
>Reporter: SWATI RAO
> Fix For: 1.1.0
>
> Attachments: 7000_UniqData.csv
>
>
> Steps to Reproduce :
> Create Table :
> CREATE TABLE uniqdata_INCLUDEDICTIONARY (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> +-+--+
> | Result  |
> +-+--+
> +-+--+
> No rows selected (1.709 seconds)
> Load Query :
> LOAD DATA INPATH 
> 'hdfs://hadoop-master:54310/BabuStore/Data/uniqdata/7000_UniqData.csv' into 
> table uniqdata_INCLUDEDICTIONARY OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_LOGGER_ENABLE'='TRUE', 
> 'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_Pass'='true');
> Stack Trace :
> INFO  11-05 13:54:45,047 - Running query 'LOAD DATA INPATH 
> 'hdfs://hadoop-master:54310/BabuStore/Data/uniqdata/7000_UniqData.csv' into 
> table uniqdata_INCLUDEDICTIONARY OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_LOGGER_ENABLE'='TRUE', 
> 'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1','SINGLE_Pass'='true')'
>  with 44e92bcb-f9e1-4b2e-835e-a82eae525fe4
> INFO  11-05 13:54:45,047 - pool-31-thread-3 Query [LOAD DATA INPATH 
> 'HDFS://HADOOP-MASTER:54310/BABUSTORE/DATA/UNIQDATA/7000_UNIQDATA.CSV' INTO 
> TABLE UNIQDATA_INCLUDEDICTIONARY OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_LOGGER_ENABLE'='TRUE', 
> 'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,DOUBLE_COLUMN1,DOUBLE_COLUMN2,INTEGER_COLUMN1','SINGLE_PASS'='TRUE')]
> INFO  11-05 13:54:45,065 - pool-31-thread-3 HDFS lock 
> path:hdfs://192.168.2.145:54310/opt/olapcontent/default/uniqdata_includedictionary/meta.lock
> INFO  11-05 13:54:45,097 - Successfully able to get the table metadata file 
> lock
> INFO  11-05 13:54:45,099 - pool-31-thread-3 Initiating Direct Load for the 
> Table : (default.uniqdata_includedictionary)
> AUDIT 11-05 13:54:45,100 - [hadoop-master][hduser][Thread-150]Data load 
> request has been received for table default.uniqdata_includedictionary
> AUDIT 11-05 13:54:45,100 - [hadoop-master][hduser][Thread-150]Data is loading 
> with New Data Flow for table default.uniqdata_includedictionary
> ERROR 11-05 13:54:45,104 - Dictionary server Dictionary Server Start Failed
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
>   at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
>   at 
> io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
>   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
>   at 
> io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
>   at 
> 
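The BindException above means the dictionary server tried to bind a port that was already occupied, for example by a dictionary server left over from a previous load. A common mitigation, shown here as a minimal Python sketch rather than CarbonData's actual Netty-based server code, is to probe successive ports until a free one is found:

```python
import socket

def bind_server(start_port, max_tries=10):
    """Try successive ports; return (socket, port) for the first free one."""
    for port in range(start_port, start_port + max_tries):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("127.0.0.1", port))
            s.listen(1)
            return s, port
        except OSError:        # "Address already in use" -> try the next port
            s.close()
    raise OSError("no free port in [%d, %d)" % (start_port, start_port + max_tries))
```

Binding port 0 instead lets the OS pick any free port, at the cost of having to publish the chosen port to clients.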

[GitHub] carbondata issue #900: [CARBONDATA 1040] Add notes of unsupported update tab...

2017-05-15 Thread sgururajshetty
Github user sgururajshetty commented on the issue:

https://github.com/apache/carbondata/pull/900
  
LGTM
@chenliang613 kindly review 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #821: [CARBONDATA-921] resolved bug for unable to select ou...

2017-05-15 Thread anubhav100
Github user anubhav100 commented on the issue:

https://github.com/apache/carbondata/pull/821
  
@chenliang613 @cenyuhai can this be merged now? There are some other PRs 
dependent on this PR.




[jira] [Assigned] (CARBONDATA-1035) 8. IUD on partition table

2017-05-15 Thread QiangCai (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

QiangCai reassigned CARBONDATA-1035:


Assignee: QiangCai

> 8. IUD on partition table
> -
>
> Key: CARBONDATA-1035
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1035
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: core, data-load, data-query
>Reporter: QiangCai
>Assignee: QiangCai
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (CARBONDATA-941) 7. Compaction of Partition Table

2017-05-15 Thread QiangCai (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

QiangCai closed CARBONDATA-941.
---
Resolution: Fixed
  Assignee: QiangCai

Because the partition id is used as the task id, the existing code can be reused.

> 7. Compaction of Partition Table
> 
>
> Key: CARBONDATA-941
> URL: https://issues.apache.org/jira/browse/CARBONDATA-941
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: core, data-load, data-query
>Reporter: QiangCai
>Assignee: QiangCai
>
> Compact the same partition across segments.
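The reuse works because files belonging to one partition share a partition id across segments, so grouping by that id yields exactly one compaction task per partition. A hypothetical sketch (the names are illustrative, not CarbonData's API):

```python
from collections import defaultdict

def group_for_compaction(files):
    """files: iterable of (segment_id, partition_id, filename).

    The partition id doubles as the compaction task id: one task
    merges that partition's files across all segments.
    """
    tasks = defaultdict(list)
    for seg, part, name in files:
        tasks[part].append((seg, name))
    return dict(tasks)
```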





[GitHub] carbondata pull request #902: [CARBONDATA-938] Prune partitions for query ba...

2017-05-15 Thread QiangCai
Github user QiangCai closed the pull request at:

https://github.com/apache/carbondata/pull/902




[jira] [Assigned] (CARBONDATA-1034) FilterUnsupportedException thrown for select from table where = filter for int column has negative of value larger than int max range

2017-05-15 Thread Srigopal Mohanty (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srigopal Mohanty reassigned CARBONDATA-1034:


Assignee: (was: Srigopal Mohanty)

> FilterUnsupportedException thrown for select from table where = filter for 
> int column has negative of value larger than int max range
> -
>
> Key: CARBONDATA-1034
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1034
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.1.0
> Environment: 3 node cluster
>Reporter: Chetan Bhat
> Attachments: file.csv
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In Beeline, the user creates a table and loads data into it.
> The user then runs a select with a where-equals filter on an int column whose 
> literal is a negative value of magnitude larger than the int max range.
> 0: jdbc:hive2://172.168.100.199:23040> CREATE table mycube21 (column1 STRING, 
> column2 STRING,column3 INT, column4 INT,column5 INT, column6 INT) stored by 
> 'org.apache.carbondata.format' 
> TBLPROPERTIES("columnproperties.column1.shared_column"="shared.column1","columnproperties.column2.shared_column"="shared.column2");
> +-+--+
> | Result  |
> +-+--+
> +-+--+
> No rows selected (0.059 seconds)
> 0: jdbc:hive2://172.168.100.199:23040> LOAD DATA INPATH 
> 'hdfs://hacluster/chetan/file.csv' INTO TABLE mycube21 
> OPTIONS('DELIMITER'=',','QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='');
> +-+--+
> | Result  |
> +-+--+
> +-+--+
> No rows selected (1.198 seconds)
> 0: jdbc:hive2://172.168.100.199:23040> select * from mycube21 where 
> column4=-9223372036854775808;
> Actual Result : FilterUnsupportedException is thrown for the select with a 
> where-equals filter on an int column using the out-of-range negative literal.
> 0: jdbc:hive2://172.168.100.199:23040> select * from mycube21 where 
> column4=-9223372036854775808;
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: 
> Task 0 in stage 936.0 failed 4 times, most recent failure: Lost task 0.3 in 
> stage 936.0 (TID 42603, linux-53, executor 1): 
> org.apache.spark.util.TaskCompletionListenerException: 
> java.util.concurrent.ExecutionException: 
> org.apache.carbondata.core.scan.expression.exception.FilterUnsupportedException:
>  java.lang.Long cannot be cast to java.lang.Integer
> at 
> org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
> at org.apache.spark.scheduler.Task.run(Task.scala:112)
> at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace: (state=,code=0)
> 0: jdbc:hive2://172.168.100.199:23040> select c1_int from test_boundary where 
> c1_int in (2.147483647E9,2345.0,1234.0);
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: 
> Task 0 in stage 787.0 failed 4 times, most recent failure: Lost task 0.3 in 
> stage 787.0 (TID 9758, linux-49, executor 2): 
> org.apache.spark.util.TaskCompletionListenerException: 
> java.util.concurrent.ExecutionException: 
> org.apache.carbondata.core.scan.expression.exception.FilterUnsupportedException:
>  java.lang.Long cannot be cast to java.lang.Integer
> at 
> org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
> at org.apache.spark.scheduler.Task.run(Task.scala:112)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> The issue also occurs with the queries below.
> select c1_int from test_boundary where c1_int not in 
> (2.147483647E9,2345.0,1234.0);
> select c1_int+0.100 from Test_Boundary where c1_int < 2.147483647E9 ;
> select * from (select c1_int from Test_Boundary where c1_int between 
> -2.147483648E9 and 2.147483647E9) e ;
> Expected Result : No exception should be thrown; only an error message 
> should be displayed.
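Note that -9223372036854775808 is Long.MIN_VALUE, so the parsed literal is a Long that cannot fit in an int, and the narrowing cast in the filter layer is what raises the ClassCastException. A minimal sketch of the expected behavior (illustrative, not CarbonData's actual filter code): a literal outside the int range can never equal an int value, so the filter can short-circuit to an empty result instead of casting.

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def int_column_equals(values, literal):
    """Evaluate `col = literal` for a 32-bit int column.

    A literal outside the int range can never match, so return an
    empty result instead of attempting a narrowing cast (which is
    what throws in the report above).
    """
    if not (INT_MIN <= literal <= INT_MAX):
        return []                      # no int value can equal it
    return [v for v in values if v == int(literal)]
```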





[jira] [Closed] (CARBONDATA-774) Not like operator does not work properly in carbondata

2017-05-15 Thread Vinod Rohilla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Rohilla closed CARBONDATA-774.


Issue Fixed

> Not like operator does not work properly in carbondata
> --
>
> Key: CARBONDATA-774
> URL: https://issues.apache.org/jira/browse/CARBONDATA-774
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Priority: Trivial
> Fix For: 1.1.0
>
> Attachments: CSV.tar.gz
>
>
> The NOT LIKE operator result does not match Hive's.
> Steps to reproduce:
> 1: Create table in Hive
> CREATE TABLE uniqdata_h (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
> 2:Load Data in hive
> a)load data local inpath '/opt/TestData/Data/uniqdata/2000_UniqData.csv' into 
> table uniqdata_h
> b)load data local inpath '/opt/TestData/Data/uniqdata/4000_UniqData.csv' into 
> table uniqdata_h
> c)load data local inpath '/opt/TestData/Data/uniqdata/6000_UniqData.csv' into 
> table uniqdata_h
> d)load data local inpath '/opt/TestData/Data/uniqdata/7000_UniqData.csv' into 
> table uniqdata_h
> e)load data local inpath '/opt/TestData/Data/uniqdata/3000_1_UniqData.csv' 
> into table uniqdata_h
> 3: Run the Query:
> select CUST_ID from uniqdata_h where CUST_ID NOT LIKE 100079
> 4:Result in Hive
> +--+--+
> | CUST_ID  |
> +--+--+
> | 8999 |
> | 9000 |
> | 9001 |
> | 9002 |
> | 9003 |
> | 9004 |
> | 9005 |
> | 9006 |
> | 9007 |
> | 9008 |
> | 9009 |
> | 9010 |
> | 9011 |
> | 9012 |
> | 9013 |
> | 9014 |
> | 9015 |
> | 9016 |
> | 9017 |
> | 9018 |
> | 9019 |
> | 9020 |
> | 9021 |
> | 9022 |
> | 9023 |
> | 9024 |
> | 9025 |
> | 9026 |
> | 9027 |
> | 9028 |
> | 9029 |
> | 9030 |
> | 9031 |
> | 9032 |
> | 9033 |
> | 9034 |
> | 9035 |
> | 9036 |
> | 9037 |
> | 9038 |
> | 9039 |
> | 9040 |
> | 9041 |
> | 9042 |
> | 9043 |
> | 9044 |
> | 9045 |
> | 9046 |
> | 9047 |
> | 9048 |
> | 9049 |
> | 9050 |
> | 9051 |
> | 9052 |
> | 9053 |
> | 9054 |
> | 9055 |
> | 9056 |
> | 9057 |
> | 9058 |
> | 9059 |
> | 9060 |
> | 9061 |
> | 9062 |
> | 9063 |
> | 9064 |
> | 9065 |
> | 9066 |
> | 9067 |
> | 9068 |
> | 9069 |
> | 9070 |
> | 9071 |
> | 9072 |
> | 9073 |
> | 9074 |
> | 9075 |
> | 9076 |
> | 9077 |
> | 9078 |
> | 9079 |
> | 9080 |
> | 9081 |
> | 9082 |
> | 9083 |
> | 9084 |
> | 9085 |
> | 9086 |
> | 9087 |
> | 9088 |
> | 9089 |
> | 9090 |
> | 9091 |
> | 9092 |
> | 9093 |
> | 9094 |
> | 9095 |
> | 9096 |
> | 9097 |
> | 9098 |
> | 9099 |
> | 9100 |
> | 9101 |
> | 9102 |
> | 9103 |
> | 9104 |
> | 9105 |
> | 9106 |
> | 9107 |
> | 9108 |
> | 9109 |
> | 9110 |
> | 9111 |
> | 9112 |
> | 9113 |
> | 9114 |
> | 9115 |
> | 9116 |
> | 9117 |
> | 9118 |
> | 9119 |
> | 9120 |
> | 9121 |
> | 9122 |
> | 9123 |
> | 9124 |
> | 9125 |
> | 9126 |
> | 9127 |
> | 9128 |
> | 9129 |
> | 9130 |
> | 9131 |
> | 9132 |
> | 9133 |
> | 9134 |
> | 9135 |
> | 9136 |
> | 9137 |
> | 9138 |
> | 9139 |
> | 9140 |
> | 9141 |
> | 9142 |
> | 9143 |
> | 9144 |
> | 9145 |
> | 9146 |
> | 9147 |
> | 9148 |
> | 9149 |
> | 9150 |
> | 9151 |
> | 9152 |
> | 9153 |
> | 9154 |
> | 9155 |
> | 9156 |
> | 9157 |
> | 9158 |
> | 9159 |
> | 9160 |
> | 9161 |
> | 9162 |
> | 9163 |
> | 9164 |
> | 9165 |
> | 9166 |
> | 9167 |
> | 9168 |
> | 9169 |
> | 9170 |
> | 9171 |
> | 9172 |
> | 9173 |
> | 9174 |
> | 9175 |
> | 9176 |
> | 9177 |
> | 9178 |
> | 9179 |
> | 9180 |
> | 9181 |
> | 9182 |
> | 9183 |
> | 9184 |
> | 9185 |
> | 9186 |
> | 9187 |
> | 9188 |
> | 9189 |
> | 9190 |
> | 9191 |
> | 9192 |
> | 9193 |
> | 9194 |
> | 9195 |
> | 9196 |
> | 9197 |
> | 9198 |
> | 9199 |
> | 
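Hive evaluates `CUST_ID NOT LIKE 100079` by casting the int column to string; since the pattern has no wildcards, LIKE degenerates to string equality, so every row except CUST_ID 100079 qualifies. A small model of those LIKE semantics, written as an illustration rather than Carbon's implementation:

```python
import re

def sql_like(value, pattern):
    """SQL LIKE: % matches any run of chars, _ matches one; others are literal."""
    regex = "".join(".*" if c == "%" else "." if c == "_" else re.escape(c)
                    for c in pattern)
    return re.fullmatch(regex, value) is not None

def not_like(cust_ids, literal):
    # Hive casts the int column to string before applying LIKE.
    p = str(literal)
    return [c for c in cust_ids if not sql_like(str(c), p)]
```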

[jira] [Closed] (CARBONDATA-717) AND operator does not work properly in carbondata

2017-05-15 Thread Vinod Rohilla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Rohilla closed CARBONDATA-717.

Resolution: Fixed

Fixed

> AND operator does not work properly in carbondata
> -
>
> Key: CARBONDATA-717
> URL: https://issues.apache.org/jira/browse/CARBONDATA-717
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Priority: Minor
> Attachments: sample.csv
>
>
> An incorrect result is displayed to the user when using the AND operator.
> Note: the issue exists when ID is taken as a String; with ID as an int it 
> works fine.
> Steps to Reproduce:
> 1: Create a table:
>  CREATE TABLE IF NOT EXISTS t4 (ID string, name string) STORED BY 
> 'carbondata';
> 2: Load the data:
> LOAD DATA LOCAL INPATH '/home/Desktop/sample.csv' into table t4;
> 3: Total records in the table:
> 0: jdbc:hive2://localhost:1> select * from t4;
> +-++--+
> | ID  |  name  |
> +-++--+
> | 1   | david  |
> | 2   | eason  |
> | 3   | jarry  |
> +-++--+
> 4: SELECT * FROM t4 where id>=1 and id <3;
> 0: jdbc:hive2://localhost:1> SELECT * FROM t4 where id>=1 and id <3;
> +-++--+
> | ID  |  name  |
> +-++--+
> | 2   | eason  |
> | 3   | jarry  |
> +-++--+
> Expected Result: It should show the below result:
> 0: jdbc:hive2://localhost:1> SELECT * FROM t4 where id>=1 and id <3;
> +-++--+
> | ID  |  name  |
> +-++--+
> | 1   | david  |
> | 2   | eason  |
> +-++--+
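The expected behavior coerces the string ID to a number before comparing, so `id >= 1 and id < 3` keeps rows 1 and 2. A minimal model of that coercion, assuming Spark-style numeric casting of the string column (not Carbon's actual evaluation code):

```python
rows = [("1", "david"), ("2", "eason"), ("3", "jarry")]

def filter_and(rows):
    # Coerce the string ID to int, then apply both predicates (AND).
    return [(i, name) for i, name in rows if 1 <= int(i) < 3]
```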





[jira] [Commented] (CARBONDATA-717) AND operator does not work properly in carbondata

2017-05-15 Thread Vinod Rohilla (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010356#comment-16010356
 ] 

Vinod Rohilla commented on CARBONDATA-717:
--

Issue Fixed.

> AND operator does not work properly in carbondata
> -
>
> Key: CARBONDATA-717
> URL: https://issues.apache.org/jira/browse/CARBONDATA-717
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.0.0-incubating
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Priority: Minor
> Attachments: sample.csv
>
>
> An incorrect result is displayed to the user when using the AND operator.
> Note: the issue exists when ID is taken as a String; with ID as an int it 
> works fine.
> Steps to Reproduce:
> 1: Create a table:
>  CREATE TABLE IF NOT EXISTS t4 (ID string, name string) STORED BY 
> 'carbondata';
> 2: Load the data:
> LOAD DATA LOCAL INPATH '/home/Desktop/sample.csv' into table t4;
> 3: Total records in the table:
> 0: jdbc:hive2://localhost:1> select * from t4;
> +-++--+
> | ID  |  name  |
> +-++--+
> | 1   | david  |
> | 2   | eason  |
> | 3   | jarry  |
> +-++--+
> 4: SELECT * FROM t4 where id>=1 and id <3;
> 0: jdbc:hive2://localhost:1> SELECT * FROM t4 where id>=1 and id <3;
> +-++--+
> | ID  |  name  |
> +-++--+
> | 2   | eason  |
> | 3   | jarry  |
> +-++--+
> Expected Result: It should show the below result:
> 0: jdbc:hive2://localhost:1> SELECT * FROM t4 where id>=1 and id <3;
> +-++--+
> | ID  |  name  |
> +-++--+
> | 1   | david  |
> | 2   | eason  |
> +-++--+





[jira] [Created] (CARBONDATA-1053) Support Reading Char Data Type in Hive From CarbonTable

2017-05-15 Thread anubhav tarar (JIRA)
anubhav tarar created CARBONDATA-1053:
-

 Summary: Support Reading Char Data Type in Hive From CarbonTable
 Key: CARBONDATA-1053
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1053
 Project: CarbonData
  Issue Type: Improvement
  Components: hive-integration
Reporter: anubhav tarar
Assignee: anubhav tarar
Priority: Trivial








[GitHub] carbondata pull request #915: [CARBONDATA-946] Spark 2x tupleId support for ...

2017-05-15 Thread nareshpr
GitHub user nareshpr opened a pull request:

https://github.com/apache/carbondata/pull/915

[CARBONDATA-946] Spark 2x tupleId support for IUD Feature



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nareshpr/incubator-carbondata 
spark2xIUDtupleId

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/915.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #915


commit 9b817c6dad626d5636f6c6e5d01550595e082f55
Author: nareshpr 
Date:   2017-05-15T07:40:08Z

Spark 2x tupleId support for IUD Feature






[jira] [Updated] (CARBONDATA-1052) Complete result does not display in Sort_column table.

2017-05-15 Thread Vinod Rohilla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Rohilla updated CARBONDATA-1052:
--
Summary: Complete result does not display in Sort_column table.  (was: 
Complete result does not display in Short_column table.)

> Complete result does not display in Sort_column table.
> --
>
> Key: CARBONDATA-1052
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1052
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Priority: Minor
> Attachments: newdata.csv
>
>
> An incomplete result is displayed to the user on a select query.
> Steps to reproduce:
> 1: Create table:
> CREATE TABLE sorttable4_offheap_safe (empno int, empname String, designation 
> String, doj Timestamp, workgroupcategory int, workgroupcategoryname String, 
> deptno int, deptname String, projectcode int, projectjoindate Timestamp, 
> projectenddate Timestamp,attendance int,utilization int,salary int) STORED BY 
> 'org.apache.carbondata.format' 
> tblproperties('sort_columns'='workgroupcategory, empname');
> 2: Load Data
> LOAD DATA local inpath 'hdfs://localhost:54310/newdata.csv' INTO TABLE 
> sorttable4_offheap_safe OPTIONS('DELIMITER'= ',', 'QUOTECHAR'= '\"');
> 3: Select Query & result set:
> 0: jdbc:hive2://localhost:1> Select  * from sorttable4_offheap_safe;
> ++---+---++++-+---+--+++-+--+-+--+
> | empno  |  empname  |  designation   
>|  doj   | workgroupcategory  | workgroupcategoryname  
> | deptno  |   deptname| 
> projectcode  |projectjoindate | projectenddate | attendance  
> | utilization  | salary  |
> ++---+---++++-+---+--+++-+--+-+--+
> | 15012  | Robert| Component Engineer 
>| 1980-06-29 01:31:41.0  | 2067   | NM 
> | 10963   | Component Engineer| 
> 121  | 2015-07-13 17:34:48.0  | 2016-09-01 09:13:53.0  | 33691   
> | 6341805  | 66463   |
> | 15013  | Richard   | Conference Coordinator 
>| 2010-05-27 13:04:06.0  | 10408  | AL 
> | 76341   | Conference Coordinator| 
> 131  | 2015-07-15 07:56:36.0  | 2016-11-25 13:14:17.0  | 61992   
> | 58731| 59651   |
> | 15027  | Jonathon  | Flight Nurse   
>| 1999-05-14 09:17:19.0  | 10635  | MA 
> | 14845   | Flight Nurse  | 
> 271  | 2015-09-29 23:19:06.0  | 2016-04-28 19:55:40.0  | 57908   
> | 161869   | 78633   |
> | 15010  | Sue   | Clinical Reviewer  
>| 2004-01-02 20:04:47.0  | 11623  | IA 
> | 34148   | Clinical Reviewer | 
> 101  | 2015-09-16 09:49:18.0  | 2016-03-24 06:31:12.0  | 46362   
> | 6696935  | 32713   |
> | 15011  | Carlos| Commercial Print Management Consultant 
>| 1993-07-03 04:17:06.0  | 20091  | RI 
> | 32071   | Commercial Print Management Consultant| 
> 111  | 2015-07-09 04:04:15.0  | 2016-03-13 14:42:45.0  | 19847   
> | 6654411  | 18747   |
> | 15028  | Daniell   | Front Desk Agent   
>| 1981-04-19 13:14:14.0  | 20192  | PR 
> | 31469   | Front Desk Agent  | 
> 281  | 2015-03-08 22:12:39.0  | 2016-12-20 11:59:57.0  | 98291   
> | 4566010  | 61481   |
> | 15020  | Kathryn   | Dispatcher 
>| 1981-08-21 04:34:26.0  | 21670  | ME 
> | 2543| Dispatcher| 
> 201  | 2015-11-28 

[jira] [Created] (CARBONDATA-1052) Complete result does not display in Short_column table.

2017-05-15 Thread Vinod Rohilla (JIRA)
Vinod Rohilla created CARBONDATA-1052:
-

 Summary: Complete result does not display in Short_column table.
 Key: CARBONDATA-1052
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1052
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
 Environment: Spark 2.1
Reporter: Vinod Rohilla
 Attachments: newdata.csv

An incomplete result is displayed to the user on a select query.

Steps to reproduce:

1: Create table:

CREATE TABLE sorttable4_offheap_safe (empno int, empname String, designation 
String, doj Timestamp, workgroupcategory int, workgroupcategoryname String, 
deptno int, deptname String, projectcode int, projectjoindate Timestamp, 
projectenddate Timestamp,attendance int,utilization int,salary int) STORED BY 
'org.apache.carbondata.format' tblproperties('sort_columns'='workgroupcategory, 
empname');

2: Load Data

LOAD DATA local inpath 'hdfs://localhost:54310/newdata.csv' INTO TABLE 
sorttable4_offheap_safe OPTIONS('DELIMITER'= ',', 'QUOTECHAR'= '\"');

3: Select Query & result set:

0: jdbc:hive2://localhost:1> Select  * from sorttable4_offheap_safe;
++---+---++++-+---+--+++-+--+-+--+
| empno  |  empname  |  designation 
 |  doj   | workgroupcategory  | workgroupcategoryname  | 
deptno  |   deptname| 
projectcode  |projectjoindate | projectenddate | attendance  | 
utilization  | salary  |
++---+---++++-+---+--+++-+--+-+--+
| 15012  | Robert| Component Engineer   
 | 1980-06-29 01:31:41.0  | 2067   | NM | 
10963   | Component Engineer| 
121  | 2015-07-13 17:34:48.0  | 2016-09-01 09:13:53.0  | 33691   | 
6341805  | 66463   |
| 15013  | Richard   | Conference Coordinator   
 | 2010-05-27 13:04:06.0  | 10408  | AL | 
76341   | Conference Coordinator| 
131  | 2015-07-15 07:56:36.0  | 2016-11-25 13:14:17.0  | 61992   | 
58731| 59651   |
| 15027  | Jonathon  | Flight Nurse 
 | 1999-05-14 09:17:19.0  | 10635  | MA | 
14845   | Flight Nurse  | 
271  | 2015-09-29 23:19:06.0  | 2016-04-28 19:55:40.0  | 57908   | 
161869   | 78633   |
| 15010  | Sue   | Clinical Reviewer
 | 2004-01-02 20:04:47.0  | 11623  | IA | 
34148   | Clinical Reviewer | 
101  | 2015-09-16 09:49:18.0  | 2016-03-24 06:31:12.0  | 46362   | 
6696935  | 32713   |
| 15011  | Carlos| Commercial Print Management Consultant   
 | 1993-07-03 04:17:06.0  | 20091  | RI | 
32071   | Commercial Print Management Consultant| 
111  | 2015-07-09 04:04:15.0  | 2016-03-13 14:42:45.0  | 19847   | 
6654411  | 18747   |
| 15028  | Daniell   | Front Desk Agent 
 | 1981-04-19 13:14:14.0  | 20192  | PR | 
31469   | Front Desk Agent  | 
281  | 2015-03-08 22:12:39.0  | 2016-12-20 11:59:57.0  | 98291   | 
4566010  | 61481   |
| 15020  | Kathryn   | Dispatcher   
 | 1981-08-21 04:34:26.0  | 21670  | ME | 
2543| Dispatcher| 
201  | 2015-11-28 11:17:36.0  | 2016-02-11 23:25:53.0  | 69271   | 
5991103  | 20462   |
| 15019  | Donald| Director of Guidance 
 | 1997-05-14 03:26:26.0  | 26266  | NJ | 
9497| Director of Guidance  | 
191  | 2015-03-13 10:10:36.0  | 2016-08-12 10:24:11.0  | 51480   | 
9364519  | 98681   |
| 15026  | Tracy |