[GitHub] [carbondata] ajantha-bhat commented on pull request #3785: [CARBONDATA-3843] Fix merge index issue in streaming table

2020-06-03 Thread GitBox


ajantha-bhat commented on pull request #3785:
URL: https://github.com/apache/carbondata/pull/3785#issuecomment-638607552


   @QiangCai, please check



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (CARBONDATA-3843) Fix Merge index is not created for normal segment on streaming table

2020-06-03 Thread Ajantha Bhat (Jira)
Ajantha Bhat created CARBONDATA-3843:


 Summary: Fix Merge index is not created for normal segment on 
streaming table
 Key: CARBONDATA-3843
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3843
 Project: CarbonData
  Issue Type: Bug
Reporter: Ajantha Bhat
Assignee: Ajantha Bhat


Problem:

Merge index files are not created for normal segments on a streaming table.

Solution:

For a streaming table, allow merge index creation for all segment types except the streaming segment (ROW_V1).
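
The intent can be illustrated with a minimal, hypothetical sketch (this is not the actual patch; the helper and the string-based format check are assumptions for illustration only):

```
// Illustrative only. Assumes the segment's storage format is available as a
// string, e.g. "ROW_V1" for the streaming segment and "COLUMNAR_V3" for a
// normal segment.
public final class MergeIndexEligibility {

  private MergeIndexEligibility() {
  }

  // On a streaming table, a merge index can be created for every segment
  // except the streaming segment itself, which is stored in ROW_V1 format.
  public static boolean canCreateMergeIndex(String segmentFileFormat) {
    return !"ROW_V1".equalsIgnoreCase(segmentFileFormat);
  }
}
```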



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3785: [WIP] Fix merge index issue in streaming table

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3785:
URL: https://github.com/apache/carbondata/pull/3785#issuecomment-638412987


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1407/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3785: [WIP] Fix merge index issue in streaming table

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3785:
URL: https://github.com/apache/carbondata/pull/3785#issuecomment-638412548


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3131/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#issuecomment-638235543


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1406/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#issuecomment-638234676


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3130/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3785: [WIP] Fix merge index issue in streaming table

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3785:
URL: https://github.com/apache/carbondata/pull/3785#issuecomment-638184776


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3129/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3785: [WIP] Fix merge index issue in streaming table

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3785:
URL: https://github.com/apache/carbondata/pull/3785#issuecomment-638183666


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1405/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] ajantha-bhat commented on pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#issuecomment-638155466


   retest this please



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] akashrn5 commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


akashrn5 commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434515732



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();
+rowCountInSplits.add(sum);
+for (int i = 1; i < splits.size(); i++) {
+  // prepare a summation array of row counts in each blocklet,
+  // this is used for pruning with pagination vales.
+  // At current index, it contains sum of rows of all the blocklet from 
previous + current.
+  sum += ((CarbonInputSplit) splits.get(i)).getDetailInfo().getRowCount();
+  rowCountInSplits.add(sum);
+}
+  }
+
+  /**
+   * Pagination query with from and to range.
+   *
+   * @param fromRowNumber must be greater 0 (as row id starts from 1)
+   *  and less than equal to toRowNumber
+   * @param toRowNumber must be greater than 0 (as row id starts from 1)
+   *and greater than equals to fromRowNumber and not outside 
the total rows
+   * @return array of rows between fromRowNumber and toRowNumber (inclusive)
+   * @throws Exception
+   */
+  public Object[] read(long fromRowNumber, long toRowNumber)
+  throws IOException, InterruptedException {
+if (isClosed) {
+  throw new RuntimeException("Pagination Reader is closed. please build 
again");
+}
+if (fromRowNumber < 1) {
+  throw new IllegalArgumentException("from row id:" + fromRowNumber + " is 
less than 1");
+}
+if (fromRowNumber > toRowNumber) {
+  throw new IllegalArgumentException(

Review comment:
   same as above





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] akashrn5 commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


akashrn5 commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434515638



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();
+rowCountInSplits.add(sum);
+for (int i = 1; i < splits.size(); i++) {
+  // prepare a summation array of row counts in each blocklet,
+  // this is used for pruning with pagination vales.
+  // At current index, it contains sum of rows of all the blocklet from 
previous + current.
+  sum += ((CarbonInputSplit) splits.get(i)).getDetailInfo().getRowCount();
+  rowCountInSplits.add(sum);
+}
+  }
+
+  /**
+   * Pagination query with from and to range.
+   *
+   * @param fromRowNumber must be greater 0 (as row id starts from 1)
+   *  and less than equal to toRowNumber
+   * @param toRowNumber must be greater than 0 (as row id starts from 1)
+   *and greater than equals to fromRowNumber and not outside 
the total rows
+   * @return array of rows between fromRowNumber and toRowNumber (inclusive)
+   * @throws Exception
+   */
+  public Object[] read(long fromRowNumber, long toRowNumber)
+  throws IOException, InterruptedException {
+if (isClosed) {
+  throw new RuntimeException("Pagination Reader is closed. please build 
again");
+}
+if (fromRowNumber < 1) {
+  throw new IllegalArgumentException("from row id:" + fromRowNumber + " is 
less than 1");

Review comment:
   Just `from row id` in the error message doesn't make much sense, right?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] akashrn5 commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


akashrn5 commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434515260



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();

Review comment:
   `summationOfRowsInEachBlocklet` would be more meaningful, I guess.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
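
For readers following this thread: `rowCountInSplits` is a running (prefix) sum of per-blocklet row counts, which lets the pagination reader locate the blocklets covering a requested row range with a binary search. A self-contained sketch of that idea (not the PR code; all names here are made up):

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public final class PrefixSumPruning {

  // Builds a running sum: element i holds the total rows of blocklets 0..i.
  static List<Long> buildPrefixSums(List<Long> rowCountPerBlocklet) {
    List<Long> sums = new ArrayList<>(rowCountPerBlocklet.size());
    long sum = 0;
    for (long rows : rowCountPerBlocklet) {
      sum += rows;
      sums.add(sum);
    }
    return sums;
  }

  // Index of the blocklet that contains the given 1-based row number.
  static int blockletIndexOf(List<Long> prefixSums, long rowNumber) {
    int pos = Collections.binarySearch(prefixSums, rowNumber);
    // Exact hit: rowNumber is the last row of that blocklet.
    // Otherwise binarySearch returns (-(insertion point) - 1).
    return pos >= 0 ? pos : -pos - 1;
  }

  public static void main(String[] args) {
    // Blocklets with 100, 50 and 200 rows -> prefix sums 100, 150, 350.
    List<Long> prefixSums = buildPrefixSums(Arrays.asList(100L, 50L, 200L));
    int from = blockletIndexOf(prefixSums, 120); // rows 101..150 -> blocklet 1
    int to = blockletIndexOf(prefixSums, 300);   // rows 151..350 -> blocklet 2
    System.out.println("read blocklets " + from + " to " + to);
  }
}
```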




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#issuecomment-638130343







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434412447



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();

Review comment:
   `sum` is understandable.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3786: [CARBONDATA-3842] Fix incorrect results on mv with limit (Missed code during mv refcatory)

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3786:
URL: https://github.com/apache/carbondata/pull/3786#issuecomment-638067919


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3127/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3786: [CARBONDATA-3842] Fix incorrect results on mv with limit (Missed code during mv refcatory)

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3786:
URL: https://github.com/apache/carbondata/pull/3786#issuecomment-638066931


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1403/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434414946



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();
+rowCountInSplits.add(sum);
+for (int i = 1; i < splits.size(); i++) {
+  // prepare a summation array of row counts in each blocklet,
+  // this is used for pruning with pagination vales.
+  // At current index, it contains sum of rows of all the blocklet from 
previous + current.
+  sum += ((CarbonInputSplit) splits.get(i)).getDetailInfo().getRowCount();
+  rowCountInSplits.add(sum);
+}
+  }
+
+  /**
+   * Pagination query with from and to range.
+   *
+   * @param fromRowNumber must be greater 0 (as row id starts from 1)
+   *  and less than equal to toRowNumber
+   * @param toRowNumber must be greater than 0 (as row id starts from 1)
+   *and greater than equals to fromRowNumber and not outside 
the total rows
+   * @return array of rows between fromRowNumber and toRowNumber (inclusive)
+   * @throws Exception
+   */
+  public Object[] read(long fromRowNumber, long toRowNumber)
+  throws IOException, InterruptedException {
+if (isClosed) {
+  throw new RuntimeException("Pagination Reader is closed. please build 
again");
+}
+if (fromRowNumber < 1) {
+  throw new IllegalArgumentException("from row id:" + fromRowNumber + " is 
less than 1");

Review comment:
   The argument itself is fromRowNumber; the error message is enough.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
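
For context, a standalone sketch of the range checks implied by the documented contract of `read(fromRowNumber, toRowNumber)` (1-based, inclusive, `from <= to`, `to` within the total row count). The class, method name and messages are illustrative, not the PR's code:

```
public final class PageRangeCheck {

  private PageRangeCheck() {
  }

  // Validates a pagination range against the documented contract.
  static void validateRange(long fromRowNumber, long toRowNumber, long totalRows) {
    if (fromRowNumber < 1) {
      throw new IllegalArgumentException(
          "fromRowNumber: " + fromRowNumber + " must be at least 1");
    }
    if (fromRowNumber > toRowNumber) {
      throw new IllegalArgumentException("fromRowNumber: " + fromRowNumber
          + " must not be greater than toRowNumber: " + toRowNumber);
    }
    if (toRowNumber > totalRows) {
      throw new IllegalArgumentException(
          "toRowNumber: " + toRowNumber + " exceeds total rows: " + totalRows);
    }
  }
}
```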




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434414895



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();
+rowCountInSplits.add(sum);
+for (int i = 1; i < splits.size(); i++) {
+  // prepare a summation array of row counts in each blocklet,
+  // this is used for pruning with pagination vales.
+  // At current index, it contains sum of rows of all the blocklet from 
previous + current.
+  sum += ((CarbonInputSplit) splits.get(i)).getDetailInfo().getRowCount();
+  rowCountInSplits.add(sum);
+}
+  }
+
+  /**
+   * Pagination query with from and to range.
+   *
+   * @param fromRowNumber must be greater 0 (as row id starts from 1)
+   *  and less than equal to toRowNumber
+   * @param toRowNumber must be greater than 0 (as row id starts from 1)
+   *and greater than equals to fromRowNumber and not outside 
the total rows
+   * @return array of rows between fromRowNumber and toRowNumber (inclusive)
+   * @throws Exception
+   */
+  public Object[] read(long fromRowNumber, long toRowNumber)
+  throws IOException, InterruptedException {
+if (isClosed) {
+  throw new RuntimeException("Pagination Reader is closed. please build 
again");
+}
+if (fromRowNumber < 1) {
+  throw new IllegalArgumentException("from row id:" + fromRowNumber + " is 
less than 1");
+}
+if (fromRowNumber > toRowNumber) {
+  throw new IllegalArgumentException(

Review comment:
   The argument itself is fromRowNumber; the error message is enough.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434414895



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();
+rowCountInSplits.add(sum);
+for (int i = 1; i < splits.size(); i++) {
+  // prepare a summation array of row counts in each blocklet,
+  // this is used for pruning with pagination vales.
+  // At current index, it contains sum of rows of all the blocklet from 
previous + current.
+  sum += ((CarbonInputSplit) splits.get(i)).getDetailInfo().getRowCount();
+  rowCountInSplits.add(sum);
+}
+  }
+
+  /**
+   * Pagination query with from and to range.
+   *
+   * @param fromRowNumber must be greater 0 (as row id starts from 1)
+   *  and less than equal to toRowNumber
+   * @param toRowNumber must be greater than 0 (as row id starts from 1)
+   *and greater than equals to fromRowNumber and not outside 
the total rows
+   * @return array of rows between fromRowNumber and toRowNumber (inclusive)
+   * @throws Exception
+   */
+  public Object[] read(long fromRowNumber, long toRowNumber)
+  throws IOException, InterruptedException {
+if (isClosed) {
+  throw new RuntimeException("Pagination Reader is closed. please build 
again");
+}
+if (fromRowNumber < 1) {
+  throw new IllegalArgumentException("from row id:" + fromRowNumber + " is 
less than 1");
+}
+if (fromRowNumber > toRowNumber) {
+  throw new IllegalArgumentException(

Review comment:
   The argument itself is fromRowNumber; the error message is enough.

##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF 

[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434414148



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();
+rowCountInSplits.add(sum);
+for (int i = 1; i < splits.size(); i++) {
+  // prepare a summation array of row counts in each blocklet,
+  // this is used for pruning with pagination vales.
+  // At current index, it contains sum of rows of all the blocklet from 
previous + current.
+  sum += ((CarbonInputSplit) splits.get(i)).getDetailInfo().getRowCount();
+  rowCountInSplits.add(sum);
+}
+  }
+
+  /**
+   * Pagination query with from and to range.
+   *
+   * @param fromRowNumber must be greater 0 (as row id starts from 1)
+   *  and less than equal to toRowNumber
+   * @param toRowNumber must be greater than 0 (as row id starts from 1)
+   *and greater than equals to fromRowNumber and not outside 
the total rows
+   * @return array of rows between fromRowNumber and toRowNumber (inclusive)
+   * @throws Exception
+   */
+  public Object[] read(long fromRowNumber, long toRowNumber)
+  throws IOException, InterruptedException {
+if (isClosed) {
+  throw new RuntimeException("Pagination Reader is closed. please build 
again");
+}
+if (fromRowNumber < 1) {
+  throw new IllegalArgumentException("from row id:" + fromRowNumber + " is 
less than 1");
+}
+if (fromRowNumber > toRowNumber) {
+  throw new IllegalArgumentException(
+  "from row id:" + fromRowNumber + " is greater than to row id:" + 
toRowNumber);
+}
+if (toRowNumber > getTotalRows()) {
+  throw new IllegalArgumentException(

Review comment:
   The user should know what to pass; getTotalRows() is provided for this reason. 
If a wrong value is passed by mistake, we should not assume and return rows till 
the end.





This is an 

[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434413651



##
File path: 
sdk/sdk/src/test/java/org/apache/carbondata/sdk/file/PaginationCarbonReaderTest.java
##
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.File;
+import java.io.IOException;
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.metadata.datatype.DataTypes;
+import org.apache.carbondata.core.metadata.datatype.Field;
+import org.apache.carbondata.core.util.CarbonProperties;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang.RandomStringUtils;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Test suite for {@link CSVCarbonWriter}
+ */
+public class PaginationCarbonReaderTest {

Review comment:
   Already there. Please check.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434412447



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();

Review comment:
   `sum` is understandable. Two people have reviewed this before!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434412721



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();
+rowCountInSplits.add(sum);
+for (int i = 1; i < splits.size(); i++) {
+  // prepare a summation array of row counts in each blocklet,
+  // this is used for pruning with pagination vales.
+  // At current index, it contains sum of rows of all the blocklet from 
previous + current.
+  sum += ((CarbonInputSplit) splits.get(i)).getDetailInfo().getRowCount();
+  rowCountInSplits.add(sum);
+}
+  }
+
+  /**
+   * Pagination query with from and to range.
+   *
+   * @param fromRowNumber must be greater 0 (as row id starts from 1)
+   *  and less than equal to toRowNumber
+   * @param toRowNumber must be greater than 0 (as row id starts from 1)

Review comment:
   ok





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434411308



##
File path: docs/sdk-guide.md
##
@@ -766,6 +771,41 @@ public VectorSchemaRoot getArrowVectors() throws 
IOException;
 public static ArrowRecordBatch byteArrayToArrowBatch(byte[] batchBytes, 
BufferAllocator bufferAllocator) throws IOException;
 ```
 
+### Class org.apache.carbondata.sdk.file.PaginationCarbonReader
+```
+/**
+* Pagination query with from and to range.
+*
+* @param fromRowNumber must be greater 0 (as row id starts from 1)
+*  and less than equal to toRowNumber
+* @param toRowNumber must be greater than 0 (as row id starts from 1)
+*and greater than equals to fromRowNumber and not outside the 
total rows
+* @return array of rows between fromRowNumber and toRowNumber (inclusive)
+* @throws Exception
+*/
+public Object[] read(long fromRowNumber, long toRowNumber) throws IOException, 
InterruptedException;
+```
+
+```
+/**
+* Get total rows in the folder or a list of CarbonData files.
+* It is based on the snapshot of files taken while building the reader.
+*
+* @return total rows from all the files in the reader.
+*/
+public long getTotalRows();
+```
+
+```
+/**
+* Closes the pagination reader, drops the cache and snapshot.
+* Need to build reader again if the files need to be read again.
+* Suggest to call this when new files are added in the folder.

Review comment:
   * Call this when all the pagination queries are finished and the cache can be dropped.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
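
A hedged end-to-end usage sketch of the API quoted above. The `read`, `getTotalRows` and `close` calls follow the documented signatures; how the pagination reader is obtained from the builder is an assumption here (`withPaginationSupport()` is used only for illustration):

```
import java.io.IOException;

import org.apache.carbondata.sdk.file.CarbonReader;
import org.apache.carbondata.sdk.file.PaginationCarbonReader;

public class PaginationReadExample {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Assumed builder call; check the final PR for the actual way to build
    // a PaginationCarbonReader.
    PaginationCarbonReader reader = (PaginationCarbonReader) CarbonReader
        .builder("/path/to/carbondata/files")
        .withPaginationSupport()
        .build();
    long totalRows = reader.getTotalRows();
    long pageSize = 1000;
    for (long from = 1; from <= totalRows; from += pageSize) {
      long to = Math.min(from + pageSize - 1, totalRows);
      Object[] rows = reader.read(from, to);  // 1-based, inclusive range
      System.out.println("fetched " + rows.length + " rows");
    }
    // Drop the cache and the file snapshot once all pages are served.
    reader.close();
  }
}
```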




[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434409645



##
File path: docs/sdk-guide.md
##
@@ -766,6 +771,41 @@ public VectorSchemaRoot getArrowVectors() throws 
IOException;
 public static ArrowRecordBatch byteArrayToArrowBatch(byte[] batchBytes, 
BufferAllocator bufferAllocator) throws IOException;
 ```
 
+### Class org.apache.carbondata.sdk.file.PaginationCarbonReader
+```
+/**
+* Pagination query with from and to range.
+*
+* @param fromRowNumber must be greater 0 (as row id starts from 1)
+*  and less than equal to toRowNumber
+* @param toRowNumber must be greater than 0 (as row id starts from 1)
+*and greater than equals to fromRowNumber and not outside the 
total rows

Review comment:
   ok









[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434408720



##
File path: docs/sdk-guide.md
##
@@ -766,6 +771,41 @@ public VectorSchemaRoot getArrowVectors() throws 
IOException;
 public static ArrowRecordBatch byteArrayToArrowBatch(byte[] batchBytes, 
BufferAllocator bufferAllocator) throws IOException;
 ```
 
+### Class org.apache.carbondata.sdk.file.PaginationCarbonReader
+```
+/**
+* Pagination query with from and to range.
+*
+* @param fromRowNumber must be greater 0 (as row id starts from 1)
+*  and less than equal to toRowNumber

Review comment:
   ok









[GitHub] [carbondata] ajantha-bhat commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


ajantha-bhat commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434407280



##
File path: docs/configuration-parameters.md
##
@@ -141,6 +141,7 @@ This section provides the details of all the configurations 
required for the Car
 | carbon.load.all.segment.indexes.to.cache | true | Setting this configuration 
to false, will prune and load only matched segment indexes to cache using 
segment metadata information such as columnid and it's minmax values, which 
decreases the usage of driver memory.  |
 | carbon.secondary.index.creation.threads | 1 | Specifies the number of 
threads to concurrently process segments during secondary index creation. This 
property helps fine tuning the system when there are a lot of segments in a 
table. The value range is 1 to 50. |
 | carbon.si.lookup.partialstring | true | When true, it includes starts with, 
ends with and contains. When false, it includes only starts with secondary 
indexes. |
+| carbon.max.pagination.lru.cache.size.in.mb | -1 | Maximum memory **(in MB)** 
upto which the SDK pagination reader can cache the blocklet rows. Suggest to 
configure as multiple of blocklet size. Default value of -1 means there is no 
memory limit for caching. Only integer values greater than 0 are accepted. |

Review comment:
   multiple of blocklet means N * blocklet size in MB.
   No need to give an example.
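   As a hedged illustration of setting this property (the 256 below is only an
   example value standing in for N * blocklet size; the constant name is the one
   used in the PaginationCarbonReader code quoted elsewhere in this thread):

```java
import org.apache.carbondata.core.constants.CarbonCommonConstants;
import org.apache.carbondata.core.util.CarbonProperties;

// Illustration only: cap the pagination LRU cache at an example value of 256 MB
// before building the pagination reader.
public class PaginationCacheConfigSketch {
  public static void main(String[] args) {
    CarbonProperties.getInstance().addProperty(
        CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB, "256");
  }
}
```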









[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3785: [WIP] Fix merge index issue in streaming table

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3785:
URL: https://github.com/apache/carbondata/pull/3785#issuecomment-638037693


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1402/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3785: [WIP] Fix merge index issue in streaming table

2020-06-03 Thread GitBox


CarbonDataQA1 commented on pull request #3785:
URL: https://github.com/apache/carbondata/pull/3785#issuecomment-638037020


   Build Failed  with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3126/
   







[GitHub] [carbondata] akashrn5 commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


akashrn5 commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434379315



##
File path: 
sdk/sdk/src/test/java/org/apache/carbondata/sdk/file/PaginationCarbonReaderTest.java
##
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.File;
+import java.io.IOException;
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.metadata.datatype.DataTypes;
+import org.apache.carbondata.core.metadata.datatype.Field;
+import org.apache.carbondata.core.util.CarbonProperties;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang.RandomStringUtils;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Test suite for {@link PaginationCarbonReader}
+ */
+public class PaginationCarbonReaderTest {

Review comment:
   can you also add test cases for negative scenarios, which throw exceptions?
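   A rough sketch of such a negative check (reader construction is omitted because
   the pagination builder call is not part of this excerpt, and the exact exception
   type is not asserted since it is not visible here):

```java
import org.apache.carbondata.sdk.file.PaginationCarbonReader;

import org.junit.Assert;

// Hypothetical negative-scenario check: the documented contract says fromRowNumber
// must be greater than 0, so a read starting at 0 is expected to fail.
public class PaginationNegativeSketch {

  static void assertInvalidFromRowRejected(PaginationCarbonReader reader) {
    try {
      reader.read(0, 10);
      Assert.fail("expected an exception for fromRowNumber = 0");
    } catch (Exception e) {
      // expected: an invalid range should be rejected
    }
  }
}
```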

##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.common.annotations.InterfaceStability;
+import org.apache.carbondata.core.cache.CarbonLRUCache;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.indexstore.BlockletDetailInfo;
+import org.apache.carbondata.hadoop.CarbonInputSplit;
+import org.apache.carbondata.sdk.file.cache.BlockletRows;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * CarbonData SDK reader with pagination support
+ */
+@InterfaceAudience.User
+@InterfaceStability.Evolving
+public class PaginationCarbonReader extends CarbonReader {
+  // Splits based the file present in the reader path when the reader is built.
+  private List<InputSplit> allBlockletSplits;
+
+  // Rows till the current splits stored as list.
+  private List<Long> rowCountInSplits;
+
+  // Reader builder used to create the pagination reader, used for building 
split level readers.
+  private CarbonReaderBuilder readerBuilder;
+
+  private boolean isClosed;
+
+  // to store the rows of each blocklet in memory based LRU cache.
+  // key: unique blocklet id
+  // value: BlockletRows
+  private CarbonLRUCache cache =
+  new 
CarbonLRUCache(CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB,
+  
CarbonCommonConstants.CARBON_MAX_PAGINATION_LRU_CACHE_SIZE_IN_MB_DEFAULT);
+
+  /**
+   * Call {@link #builder(String)} to construct an instance
+   */
+
+  PaginationCarbonReader(List<InputSplit> splits, CarbonReaderBuilder 
readerBuilder) {
+// Initialize super class with no readers.
+// Based on the splits identified for pagination query, readers will be 
built for the query.
+super(null);
+this.allBlockletSplits = splits;
+this.readerBuilder = readerBuilder;
+// prepare the mapping.
+rowCountInSplits = new ArrayList<>(splits.size());
+long sum = ((CarbonInputSplit) 
splits.get(0)).getDetailInfo().getRowCount();
+rowCountInSplits.add(sum);
+for (int i = 1; i < splits.size(); i++) {
+  // prepare a summation array of row counts in each blocklet,
+   

[GitHub] [carbondata] akashrn5 commented on a change in pull request #3770: [CARBONDATA-3829] Support pagination in SDK reader

2020-06-03 Thread GitBox


akashrn5 commented on a change in pull request #3770:
URL: https://github.com/apache/carbondata/pull/3770#discussion_r434325061



##
File path: docs/configuration-parameters.md
##
@@ -141,6 +141,7 @@ This section provides the details of all the configurations 
required for the Car
 | carbon.load.all.segment.indexes.to.cache | true | Setting this configuration 
to false, will prune and load only matched segment indexes to cache using 
segment metadata information such as columnid and it's minmax values, which 
decreases the usage of driver memory.  |
 | carbon.secondary.index.creation.threads | 1 | Specifies the number of 
threads to concurrently process segments during secondary index creation. This 
property helps fine tuning the system when there are a lot of segments in a 
table. The value range is 1 to 50. |
 | carbon.si.lookup.partialstring | true | When true, it includes starts with, 
ends with and contains. When false, it includes only starts with secondary 
indexes. |
+| carbon.max.pagination.lru.cache.size.in.mb | -1 | Maximum memory **(in MB)** 
upto which the SDK pagination reader can cache the blocklet rows. Suggest to 
configure as multiple of blocklet size. Default value of -1 means there is no 
memory limit for caching. Only integer values greater than 0 are accepted. |

Review comment:
   For `configure as multiple of blocklet size`, can you mention an example in brackets?

##
File path: docs/sdk-guide.md
##
@@ -766,6 +771,41 @@ public VectorSchemaRoot getArrowVectors() throws 
IOException;
 public static ArrowRecordBatch byteArrayToArrowBatch(byte[] batchBytes, 
BufferAllocator bufferAllocator) throws IOException;
 ```
 
+### Class org.apache.carbondata.sdk.file.PaginationCarbonReader
+```
+/**
+* Pagination query with from and to range.
+*
+* @param fromRowNumber must be greater 0 (as row id starts from 1)
+*  and less than equal to toRowNumber

Review comment:
   `less than or equal to`

##
File path: docs/sdk-guide.md
##
@@ -766,6 +771,41 @@ public VectorSchemaRoot getArrowVectors() throws 
IOException;
 public static ArrowRecordBatch byteArrayToArrowBatch(byte[] batchBytes, 
BufferAllocator bufferAllocator) throws IOException;
 ```
 
+### Class org.apache.carbondata.sdk.file.PaginationCarbonReader
+```
+/**
+* Pagination query with from and to range.
+*
+* @param fromRowNumber must be greater 0 (as row id starts from 1)
+*  and less than equal to toRowNumber
+* @param toRowNumber must be greater than 0 (as row id starts from 1)
+*and greater than equals to fromRowNumber and not outside the 
total rows

Review comment:
   change `greater than equals to fromRowNumber and not outside the total 
rows`  to `greater than or equal to fromRowNumber and should not cross the 
total rows count value`

##
File path: docs/sdk-guide.md
##
@@ -766,6 +771,41 @@ public VectorSchemaRoot getArrowVectors() throws 
IOException;
 public static ArrowRecordBatch byteArrayToArrowBatch(byte[] batchBytes, 
BufferAllocator bufferAllocator) throws IOException;
 ```
 
+### Class org.apache.carbondata.sdk.file.PaginationCarbonReader
+```
+/**
+* Pagination query with from and to range.
+*
+* @param fromRowNumber must be greater 0 (as row id starts from 1)
+*  and less than equal to toRowNumber
+* @param toRowNumber must be greater than 0 (as row id starts from 1)
+*and greater than equals to fromRowNumber and not outside the 
total rows
+* @return array of rows between fromRowNumber and toRowNumber (inclusive)
+* @throws Exception
+*/
+public Object[] read(long fromRowNumber, long toRowNumber) throws IOException, 
InterruptedException;
+```
+
+```
+/**
+* Get total rows in the folder or a list of CarbonData files.
+* It is based on the snapshot of files taken while building the reader.
+*
+* @return total rows from all the files in the reader.
+*/
+public long getTotalRows();
+```
+
+```
+/**
+* Closes the pagination reader, drops the cache and snapshot.
+* Need to build reader again if the files need to be read again.
+* Suggest to call this when new files are added in the folder.

Review comment:
   ```suggestion
   * Do not call this unless new files are added in the folder or you want to drop the cache.
   ```

##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/PaginationCarbonReader.java
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless 

[GitHub] [carbondata] Indhumathi27 opened a new pull request #3786: [CARBONDATA-3842] Fix incorrect results on mv with limit (Missed code during mv refactory)

2020-06-03 Thread GitBox


Indhumathi27 opened a new pull request #3786:
URL: https://github.com/apache/carbondata/pull/3786


### Why is this PR needed?
The issue was already fixed in 
[PR-3652](https://github.com/apache/carbondata/pull/3651), but this code was missed 
during the mv code refactoring.

### What changes were proposed in this PR?
   Copy subsume Flag and FlagSpec to subsumerPlan while rewriting with 
summarydatasets.
   Update the flagSpec as per the mv attributes and copy to relation.
   
### Does this PR introduce any user interface change?
- No
   
### Is any new testcase added?
- No
   
   
   


