[GitHub] [carbondata] akashrn5 commented on a change in pull request #3988: [CARBONDATA-4037] Improve the table status and segment file writing

2020-12-02 Thread GitBox


akashrn5 commented on a change in pull request #3988:
URL: https://github.com/apache/carbondata/pull/3988#discussion_r534107457



##
File path: core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
##
@@ -162,15 +162,25 @@ public static void writeSegmentFile(String tablePath, String segmentId, String t
 * corresponding partitions.
 */
   public static void writeSegmentFile(String tablePath, final String taskNo, String location,
-      String timeStamp, List<String> partitionNames, boolean isMergeIndexFlow) throws IOException {
-    String tempFolderLoc = timeStamp + ".tmp";
-    String writePath = CarbonTablePath.getSegmentFilesLocation(tablePath) + "/" + tempFolderLoc;
+      String timeStamp, List<String> partitionNames, boolean isMergeIndexFlow,
+      boolean readFileFooterFromCarbonDataFile) throws IOException {

Review comment:
   please rename `readFileFooterFromCarbonDataFile` to something more meaningful; it is confusing now.

##
File path: core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
##
@@ -198,20 +208,17 @@ public boolean accept(CarbonFile file) {
     folderDetails.setRelative(isRelative);
     folderDetails.setPartitions(partitionNames);
     folderDetails.setStatus(SegmentStatus.SUCCESS.getMessage());
-    for (CarbonFile file : carbonFiles) {
-      if (file.getName().endsWith(CarbonTablePath.MERGE_INDEX_FILE_EXT)) {
-        folderDetails.setMergeFileName(file.getName());
-      } else {
-        folderDetails.getFiles().add(file.getName());
-      }
-    }
+    setIndexFileNamesToFolderDetails(folderDetails, carbonFiles);
     segmentFile.addPath(location, folderDetails);
     String path = null;
     if (isMergeIndexFlow) {
       // in case of merge index flow, tasks are launched per partition and all the tasks
       // will be written to the same tmp folder, in that case taskNo is not unique.
       // To generate a unique fileName UUID is used
       path = writePath + "/" + CarbonUtil.generateUUID() + CarbonTablePath.SEGMENT_EXT;
+      if (readFileFooterFromCarbonDataFile) {
+        path = writePath + "/" + timeStamp + CarbonTablePath.SEGMENT_EXT;

Review comment:
   can you please check why this change was added? If it is a valid change, please add a proper comment explaining the scenario in which it is required.
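
   For context, a minimal sketch (not the PR's code) contrasting the two naming schemes the diff introduces; `writePath`, `timeStamp`, and the boolean flag are the variables from the hunk above:

   ```java
   // Sketch only: the two naming branches for the temporary segment file.
   static String segmentFilePath(String writePath, String timeStamp,
       boolean readFileFooterFromCarbonDataFile) {
     if (readFileFooterFromCarbonDataFile) {
       // Deterministic name derived from the load timestamp; safe only if a single
       // file is written per load, which is the scenario the reviewer asks to justify.
       return writePath + "/" + timeStamp + CarbonTablePath.SEGMENT_EXT;
     }
     // Merge-index flow: taskNo is not unique across the partition tasks writing to
     // the same .tmp folder, so a random UUID keeps the file names from colliding.
     return writePath + "/" + CarbonUtil.generateUUID() + CarbonTablePath.SEGMENT_EXT;
   }
   ```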

##
File path: core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
##
@@ -439,6 +430,73 @@ public boolean accept(CarbonFile file) {
 return null;
   }
 
+  /**

Review comment:
   please check if the flow is in the below order in case of SI:
   
   in progress - merge index MT - update or generate segment file for MT - call load SI - (all SI functions) - success MT
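
   For readability, the same ordering as a sketch; every method name below is a hypothetical stand-in, not an actual CarbonData API:

   ```java
   // Hypothetical outline of the load ordering the reviewer asks to verify.
   void loadWithSecondaryIndex() {
     markMainTableSegmentInProgress();  // MT segment -> in progress
     mergeIndexFilesForMainTable();     // write the merge index for the MT segment
     writeOrUpdateSegmentFileForMT();   // segment file must record the merged index
     loadSecondaryIndexes();            // all SI loads run against the final MT files
     markMainTableSegmentSuccess();     // MT -> success only after SI completes
   }
   ```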

##
File path: core/src/main/java/org/apache/carbondata/core/writer/CarbonIndexFileMergeWriter.java
##
@@ -140,33 +141,36 @@ private String mergeCarbonIndexFilesOfSegment(String segmentId,
 groupIndexesBySegment(fileStore.getCarbonIndexMapWithFullPath());
 SegmentFileStore.FolderDetails folderDetails = null;
 for (Map.Entry<String, List<String>> entry : indexLocationMap.entrySet()) {
-  String mergeIndexFile =
-      writeMergeIndexFile(null, partitionPath, entry.getValue(), segmentId, uuid);
-  folderDetails = new SegmentFileStore.FolderDetails();
-  folderDetails.setMergeFileName(mergeIndexFile);
-  folderDetails.setStatus("Success");
-  if (partitionPath.startsWith(tablePath)) {
-    partitionPath = partitionPath.substring(tablePath.length() + 1);
-    List<String> partitions = new ArrayList<>(Arrays.asList(partitionPath.split("/")));
+  Map<String, List<String>> mergeToIndexFileMap = fileStore.getCarbonMergeFileToIndexFilesMap();

Review comment:
   please cross verify; I think this change is not required if, in the compact merge-index flow, you decide for new tables whether to merge index files based on the segment file, and fall back to listing files only in the absence of a segment file.
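
   A hedged sketch of that alternative, assuming the `FolderDetails` accessors seen elsewhere in this thread (`getLocationMap`, `getMergeFileName`); treat the exact signatures as assumptions:

   ```java
   // Sketch: decide from the segment file whether a merge index already exists,
   // falling back to a file listing only when there is no segment file at all.
   static boolean segmentAlreadyMerged(String tablePath, Segment segment) throws IOException {
     if (segment.getSegmentFileName() == null) {
       return false; // legacy segment: only here would a file listing be needed
     }
     SegmentFileStore fileStore = new SegmentFileStore(tablePath, segment.getSegmentFileName());
     // A recorded merge file name means the merge index was already written.
     return fileStore.getSegmentFile().getLocationMap().values().stream()
         .anyMatch(folder -> folder.getMergeFileName() != null);
   }
   ```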

##
File path: core/src/main/java/org/apache/carbondata/core/mutate/CarbonUpdateUtil.java
##
@@ -672,7 +678,7 @@ public static boolean isMaxQueryTimeoutExceeded(long fileTimestamp) {
 
     long minutesElapsed = (difference / (1000 * 60));
 
-    return minutesElapsed > maxTime;
+    return minutesElapsed >= maxTime;

Review comment:
   please revert this
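
   For reference, the one-character change moves the expiry boundary; with a limit of 60 minutes, a file becomes expired strictly after 60 minutes under `>`, but exactly at 60 minutes under `>=`:

   ```java
   long maxTime = 60;         // limit in minutes
   long minutesElapsed = 60;  // exactly at the limit
   boolean before = minutesElapsed > maxTime;   // false: not yet expired
   boolean after = minutesElapsed >= maxTime;   // true: already expired
   ```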

##
File path: core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
##
@@ -2558,8 +2558,11 @@ public static long getCarbonIndexSize(SegmentFileStore fileStore,
   // Get the total size of carbon data and the total size of carbon index
   public static HashMap<String, Long> getDataSizeAndIndexSize(String tablePath,
       Segment segment) throws IOException {
+    SegmentFileStore fileStore = null;
     if (segment.getSegmentFileName() != null) {
-      SegmentFileStore fileStore = new SegmentFileStore(tablePath,

[GitHub] [carbondata] akashrn5 commented on a change in pull request #3988: [CARBONDATA-4037] Improve the table status and segment file writing

2020-11-03 Thread GitBox


akashrn5 commented on a change in pull request #3988:
URL: https://github.com/apache/carbondata/pull/3988#discussion_r515746959



##
File path: core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
##
@@ -361,6 +361,11 @@ private CarbonCommonConstants() {
 
   public static final String CARBON_INDEX_SCHEMA_STORAGE_DATABASE = "DATABASE";
 
+  @CarbonProperty
+  public static final String CARBON_INDEXFILES_DELETETIME = "carbon.index.delete.time";

Review comment:
   agree with @ajantha-bhat; we can reuse the query execution timeout for the delete as well, since it is also one hour.

##
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/cleanfiles/TestCleanFileCommand.scala
##
@@ -0,0 +1,170 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.testsuite.cleanfiles
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.test.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.datastore.filesystem.{CarbonFile, CarbonFileFilter}
+import org.apache.carbondata.core.datastore.impl.FileFactory
+import org.apache.carbondata.core.index.Segment
+import org.apache.carbondata.core.metadata.{CarbonMetadata, SegmentFileStore}
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.core.util.path.CarbonTablePath
+
+class TestCleanFileCommand extends QueryTest with BeforeAndAfterAll {
+
+  var count = 0
+
+  override protected def beforeAll(): Unit = {
+    sql("DROP TABLE IF EXISTS cleantest")
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.MAX_QUERY_EXECUTION_TIME, "0")
+  }
+
+  override protected def afterAll(): Unit = {
+    sql("DROP TABLE IF EXISTS cleantest")
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.MAX_QUERY_EXECUTION_TIME,
+        CarbonCommonConstants.DEFAULT_MAX_QUERY_EXECUTION_TIME.toString)
+  }
+
+  test("test clean files command for index files") {
+    sql("drop table if exists cleantest")
+    sql("create table cleantest(id int, issue date) STORED AS carbondata")
+    sql("insert into table cleantest select '1','2000-02-01'")
+    assert(getIndexFileCountFromSegmentPath("default_cleantest", "0") == 1)
+    sql("clean files for table cleantest")
+    assert(getIndexFileCountFromSegmentPath("default_cleantest", "0") == 0)
+    sql("drop table if exists cleantest")
+  }
+
+  test("test clean files command for index files with SI") {
+    sql("drop table if exists cleantest")
+    sql("create table cleantest(id int, issue date, name string) STORED AS carbondata")
+    sql("insert into table cleantest select '1','2000-02-01', 'abc' ")
+    sql("create index indextable1 on table cleantest (name) AS 'carbondata'")
+    assert(getIndexFileCountFromSegmentPath("default_cleantest", "0") == 1)
+    assert(getIndexFileCountFromSegmentPath("default_indextable1", "0") == 1)
+    sql("clean files for table cleantest")
+    assert(getIndexFileCountFromSegmentPath("default_cleantest", "0") == 0)
+    assert(getIndexFileCountFromSegmentPath("default_indextable1", "0") == 0)
+    sql("drop table if exists cleantest")
+  }
+
+  test("test clean files command on partition table") {
+    sql("drop table if exists cleantest")
+    sql("create table cleantest(id int, issue date) STORED AS carbondata " +
+        "partitioned by (name string)")
+    sql("insert into table cleantest select '1','2000-02-01', 'abc' ")
+    assert(getIndexFileCountFromSegmentPath("default_cleantest", "0") == 1)
+    sql("clean files for table cleantest")
+    assert(getIndexFileCountFromSegmentPath("default_cleantest", "0") == 0)
+    sql("drop table if exists cleantest")
+  }
+
+  test("test clean files command for index files after update") {
+    sql("drop table if exists cleantest")
+    sql("create table cleantest(id int, name string) STORED AS carbondata")
+    sql("insert into table cleantest select '1', 'abc' ")
+    sql("insert into table cleantest select '2', 'abc' ")
+    assert(getIndexFileCountFromSegmentPath("default_cleantest", "0") == 1)
+

[GitHub] [carbondata] akashrn5 commented on a change in pull request #3988: [CARBONDATA-4037] Improve the table status and segment file writing

2020-10-28 Thread GitBox


akashrn5 commented on a change in pull request #3988:
URL: https://github.com/apache/carbondata/pull/3988#discussion_r513210534



##
File path: core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
##
@@ -361,6 +361,11 @@ private CarbonCommonConstants() {
 
   public static final String CARBON_INDEX_SCHEMA_STORAGE_DATABASE = "DATABASE";
 
+  @CarbonProperty
+  public static final String CARBON_INDEXFILES_DELETETIME = "carbon.index.delete.time";

Review comment:
   1. Since this is a user-exposed property, millisecond granularity is not good; please change it to seconds.
   2.
   ```suggestion
  public static final String CARBON_INDEXFILES_DELETETIME_IN_SECONDS = "carbon.index.files.delete.time.seconds";
   ```
   3. please add proper comments.
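
   A sketch of how a seconds-level property could be consumed, converting to milliseconds internally; the key is the suggested one above, and the "3600" default is a placeholder, not a value from the PR:

   ```java
   // Sketch only: read the user-facing value in seconds, convert internally.
   // "3600" is a placeholder default, not a value from the PR.
   long deleteTimeSeconds = Long.parseLong(CarbonProperties.getInstance()
       .getProperty("carbon.index.files.delete.time.seconds", "3600"));
   long deleteTimeMillis = deleteTimeSeconds * 1000L;
   ```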

##
File path: core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
##
@@ -464,6 +472,38 @@ public boolean accept(CarbonFile file) {
 return null;
   }
 
+  public static List<String> getMergedIndexFiles(CarbonFile[] indexFiles) throws IOException {

Review comment:
   can you please add a comment to this method with an example, so that it is clear why it is required; right now it is a little confusing. Also add comments at the caller if required. Then we will have one more review.
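
   For illustration, the requested comment might look like this; the file names and wording are illustrative only:

   ```java
   /**
    * Returns the names of .carbonindex files whose contents are already covered by
    * a .carbonindexmerge file in the same segment. Example: after a merge-index job,
    * a segment can contain both 0_1605000000000.carbonindex and its merged
    * counterpart 0_1605000000000.carbonindexmerge; the plain .carbonindex entry is
    * then redundant, so it is skipped while writing the segment file and can later
    * be deleted by clean files.
    */
   ```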

##
File path: core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
##
@@ -306,11 +306,15 @@ public static boolean writeSegmentFile(CarbonTable carbonTable, Segment segment)
   folderDetails.setStatus(SegmentStatus.SUCCESS.getMessage());
   folderDetails.setRelative(false);
   segmentFile.addPath(segment.getSegmentPath(), folderDetails);
+  List<String> mergedIndexFiles = getMergedIndexFiles(indexFiles);

Review comment:
   I think this code is duplicated in about three places, so please see if you can refactor it.
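
   One possible extraction, sketched only from the accessors visible in these diffs (the helper name is hypothetical):

   ```java
   // Hypothetical shared helper for the duplicated call sites: record the merge
   // index file and add only those index files not already covered by it.
   static void addIndexFilesToFolderDetails(SegmentFileStore.FolderDetails folderDetails,
       CarbonFile[] indexFiles, List<String> mergedIndexFiles) {
     for (CarbonFile indexFile : indexFiles) {
       String name = indexFile.getName();
       if (name.endsWith(CarbonTablePath.MERGE_INDEX_FILE_EXT)) {
         folderDetails.setMergeFileName(name);
       } else if (!mergedIndexFiles.contains(name)) {
         folderDetails.getFiles().add(name);
       }
     }
   }
   ```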

##
File path: core/src/main/java/org/apache/carbondata/core/readcommitter/TableStatusReadCommittedScope.java
##
@@ -91,6 +95,19 @@ public TableStatusReadCommittedScope(AbsoluteTableIdentifier identifier,
 
     segment.setSegmentMetaDataInfo(fileStore.getSegmentFile().getSegmentMetaDataInfo());
   }
 }
+List<String> index = new ArrayList<>(indexFiles.keySet());
+CarbonFile[] carbonIndexFiles = new CarbonFile[index.size()];
+for (int i = 0; i < index.size(); i++) {
+  carbonIndexFiles[i] = FileFactory.getCarbonFile(index.get(i));
+}

Review comment:
   same as above

##
File path: integration/spark/src/main/scala/org/apache/carbondata/api/CarbonStore.scala
##
@@ -353,6 +356,79 @@ object CarbonStore {
 }
   }
 
+  /**
+   * Clean invalid and expired index files of carbon table.
+   *
+   * @param carbonTable CarbonTable

Review comment:
   remove the variable description, it is straightforward; just keep the method-level comment.

##
File path: integration/spark/src/main/scala/org/apache/carbondata/api/CarbonStore.scala
##
@@ -353,6 +356,79 @@ object CarbonStore {
 }
   }
 
+  /**
+   * Clean invalid and expired index files of carbon table.
+   *
+   * @param carbonTable CarbonTable
+   */
+  def cleanUpIndexFiles(carbonTable: CarbonTable): Unit = {
+    val validSegments = new SegmentStatusManager(carbonTable.getAbsoluteTableIdentifier)
+      .getValidAndInvalidSegments.getValidSegments.asScala.toList
+    validSegments.foreach( segment => {
+      val sfs: SegmentFileStore = new SegmentFileStore(carbonTable.getTablePath,
+        segment.getSegmentFileName)
+      var indexFiles = List[CarbonFile]()
+      if (carbonTable.isHivePartitionTable) {
+        val partitionSpecs = sfs.getPartitionSpecs.asScala.toList
+        val segmentName = segment.getSegmentFileName.replace(CarbonTablePath.SEGMENT_EXT, "")
+        for (partitionSpec <- partitionSpecs) {
+          var carbonIndexFiles = SegmentIndexFileStore
+            .getCarbonIndexFiles(partitionSpec.getLocation.toString, FileFactory.getConfiguration)

Review comment:
   `SegmentIndexFileStore.getCarbonIndexFiles` lists the files at a path, which will be slow; why do you need to use it here? Can't we get the index files from `sfs`?
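
   A sketch of that suggestion, reading the index file names recorded in the segment file already loaded in `sfs` instead of listing the directory; `getLocationMap`, `getFiles`, and `getMergeFileName` mirror accessors used elsewhere in this thread, but treat the exact signatures as assumptions:

   ```java
   // Sketch: collect index file names from the in-memory segment file store
   // rather than issuing a slow list-files call against the file system.
   List<String> indexFileNames = new ArrayList<>();
   for (SegmentFileStore.FolderDetails folder
       : sfs.getSegmentFile().getLocationMap().values()) {
     indexFileNames.addAll(folder.getFiles());
     if (folder.getMergeFileName() != null) {
       indexFileNames.add(folder.getMergeFileName());
     }
   }
   ```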

##
File path: core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
##
@@ -464,6 +472,38 @@ public boolean accept(CarbonFile file) {
 return null;
   }
 
+  public static List<String> getMergedIndexFiles(CarbonFile[] indexFiles) throws IOException {
+    SegmentIndexFileStore indexFileStore = new SegmentIndexFileStore();
+    List<String> mergedIndexFiles = new ArrayList<>();
+    long lastModifiedTime = 0L;
+    long length = 0L;
+    CarbonFile validMergeIndexFile = null;
+    List<CarbonFile> mergeIndexCarbonFiles = new ArrayList<>();
+    for (CarbonFile file : indexFiles) {

Review comment:
   ```suggestion
   for (CarbonFile indexFile : indexFiles) {
   ```

##
File path: