[GitHub] [carbondata] Indhumathi27 commented on a change in pull request #4107: [CARBONDATA-4149] Query with SI after add partition based on location on partition table gives incorrect results

2021-03-18 Thread GitBox


Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r597409507



##
File path: integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
##
@@ -276,7 +294,40 @@ class CarbonTableCompactor(
          segmentMetaDataAccumulator)
      } else {
        if (mergeRDD != null) {
-          mergeRDD.collect
+          val result = mergeRDD.collect
+          if (!updatePartitionSpecs.isEmpty) {
+            val tableIdentifier = new TableIdentifier(carbonTable.getTableName,
+              Some(carbonTable.getDatabaseName))
+            // To update partitionSpec in hive metastore, drop and add with latest path.
+            val oldPartitions: util.List[TablePartitionSpec] =
+              new util.ArrayList[TablePartitionSpec]()
+            val newPartitions: util.List[TablePartitionSpec] =
+              new util.ArrayList[TablePartitionSpec]()
+            updatePartitionSpecs.asScala.foreach {
+              partitionSpec =>
+                var spec = PartitioningUtils.parsePathFragment(
+                  String.join(CarbonCommonConstants.FILE_SEPARATOR, partitionSpec.getPartitions))
+                oldPartitions.add(spec)
+                val addPartition = mergeRDD.checkAndUpdatePartitionLocation(partitionSpec)
+                spec = PartitioningUtils.parsePathFragment(

Review comment:
   partitionSpec.getPartitions and addPartition.getPartitions will always be the same, so please remove the oldPartitions and newPartitions lists and keep just one.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r597114312



##
File path: integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
##
@@ -263,6 +267,16 @@ class CarbonTableCompactor(
      if (partitionSpecs != null && partitionSpecs.nonEmpty) {
        compactionCallableModel.compactedPartitions = Some(partitionSpecs)
      }
+      partitionSpecs.foreach(partitionSpec => {
+        carbonLoadModel.getLoadMetadataDetails.asScala.foreach(loadMetaDetail => {
+          if (loadMetaDetail.getPath != null &&
+              loadMetaDetail.getPath.split(",").contains(partitionSpec.getLocation.toString)) {

Review comment:
   same comment as above









Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r597112863



##
File path: integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala
##
@@ -98,6 +98,24 @@ class CarbonMergerRDD[K, V](
      broadCastSplits = sparkContext.broadcast(new CarbonInputSplitWrapper(splits))
    }

+  // Checks for added partition specs with an external path.
+  // After compaction, the location path is updated with the table path.
+  def checkAndUpdatePartitionLocation(partitionSpec: PartitionSpec): PartitionSpec = {
+    if (partitionSpec != null) {
+      carbonLoadModel.getLoadMetadataDetails.asScala.foreach(loadMetaDetail => {
+        if (loadMetaDetail.getPath != null &&
+            loadMetaDetail.getPath.split(",").contains(partitionSpec.getLocation.toString)) {
+          val updatedPartitionLocation = CarbonDataProcessorUtil

Review comment:
   You can break out of the loop once the matching partition is found.
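[Editor's sketch] In Scala, the early exit the reviewer asks for is usually expressed with `find` rather than a manual break. A minimal sketch, assuming simplified stand-in types (`LoadDetail`, `Spec`) in place of CarbonData's LoadMetadataDetails and PartitionSpec:

```scala
// Simplified stand-ins for CarbonData's LoadMetadataDetails and PartitionSpec.
case class LoadDetail(path: String)
case class Spec(location: String)

// `find` stops at the first matching load detail instead of scanning the
// whole collection, giving the early exit the reviewer suggests.
def firstMatchingDetail(details: Seq[LoadDetail], spec: Spec): Option[LoadDetail] =
  details.find(d => d.path != null && d.path.split(",").contains(spec.location))

val details = Seq(LoadDetail("/ext/p1,/ext/p2"), LoadDetail("/ext/p3"))
println(firstMatchingDetail(details, Spec("/ext/p2")))
```

Using `find` also avoids mutating state inside a `foreach`, which fits the surrounding code's style.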









Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r597099633



##
File path: integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
##
@@ -276,7 +290,25 @@ class CarbonTableCompactor(
          segmentMetaDataAccumulator)
      } else {
        if (mergeRDD != null) {
-          mergeRDD.collect
+          val result = mergeRDD.collect

Review comment:
   The current code does not handle multiple partitions properly: the Add/Drop partition commands are fired once per partition. Please change the code as below:

    if (!updatePartitionSpecs.isEmpty) {
      val tableIdentifier = new TableIdentifier(carbonTable.getTableName,
        Some(carbonTable.getDatabaseName))
      // To update partitionSpec in hive metastore, drop and add with latest path.
      val partitionSpecs = updatePartitionSpecs.asScala.map {
        partitionSpec =>
          PartitioningUtils.parsePathFragment(
            String.join(CarbonCommonConstants.FILE_SEPARATOR, partitionSpec.getPartitions))
      }
      AlterTableDropPartitionCommand(
        tableIdentifier,
        partitionSpecs,
        true, false, true).run(sqlContext.sparkSession)
      AlterTableAddPartitionCommand(tableIdentifier,
        partitionSpecs.map(p => (p, None)),
        false).run(sqlContext.sparkSession)
    }









Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r597099633



##
File path: integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
##
@@ -276,7 +290,25 @@ class CarbonTableCompactor(
          segmentMetaDataAccumulator)
      } else {
        if (mergeRDD != null) {
-          mergeRDD.collect
+          val result = mergeRDD.collect

Review comment:
   The current code does not handle multiple partitions properly: the Add/Drop partition commands are fired once per partition. Please change the code as below:

    if (!updatePartitionSpecs.isEmpty) {
      val tableIdentifier = new TableIdentifier(carbonTable.getTableName,
        Some(carbonTable.getDatabaseName))
      // To update partitionSpec in hive metastore, drop and add with latest path.
      val partitionSpecList: util.List[TablePartitionSpec] =
        new util.ArrayList[TablePartitionSpec]()
      updatePartitionSpecs.asScala.foreach {
        partitionSpec =>
          val spec = PartitioningUtils.parsePathFragment(
            String.join(CarbonCommonConstants.FILE_SEPARATOR, partitionSpec.getPartitions))
          partitionSpecList.add(spec)
      }
      AlterTableDropPartitionCommand(
        tableIdentifier,
        partitionSpecList.asScala,
        true, false, true).run(sqlContext.sparkSession)
      AlterTableAddPartitionCommand(tableIdentifier,
        partitionSpecList.asScala.map(p => (p, None)),
        false).run(sqlContext.sparkSession)
    }









Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r597100457



##
File path: index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithPartition.scala
##
@@ -380,6 +384,37 @@ class TestSIWithPartition extends QueryTest with BeforeAndAfterAll {
      sql("drop table if exists partition_table")
    }

+  test("test si with add partition based on location on partition table") {
+    sql("drop table if exists partition_table")
+    sql("create table partition_table (id int,name String) " +
+        "partitioned by(email string) stored as carbondata")
+    sql("insert into partition_table select 1,'blue','abc'")
+    sql("CREATE INDEX partitionTable_si on table partition_table (name) as 'carbondata'")

Review comment:
   add a new test case for multiple partitions as well
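[Editor's sketch] One way to set up the multi-partition scenario the reviewer asks for, following the style of the single-partition test above. The statements here only sketch the scenario; the partition locations (`/tmp/part1`, `/tmp/part2`) and the second insert are illustrative, and a real test would run these through the suite's `sql(...)` helper and assert on SI query results:

```scala
// Hypothetical SQL steps for a multi-partition variant of the test above.
// Locations and values are placeholders, not from the actual PR.
val multiPartitionSteps = Seq(
  "drop table if exists partition_table",
  "create table partition_table (id int, name string) " +
    "partitioned by (email string) stored as carbondata",
  "insert into partition_table select 1, 'blue', 'abc'",
  "insert into partition_table select 2, 'red', 'def'",
  "CREATE INDEX partitionTable_si on table partition_table (name) as 'carbondata'",
  // Two partitions added from external locations, so the SI must stay
  // correct across all of them after compaction rewrites the paths.
  "alter table partition_table add partition (email='xyz') location '/tmp/part1'",
  "alter table partition_table add partition (email='uvw') location '/tmp/part2'"
)
multiPartitionSteps.foreach(println)
```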









Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r597099633



##
File path: integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
##
@@ -276,7 +290,25 @@ class CarbonTableCompactor(
          segmentMetaDataAccumulator)
      } else {
        if (mergeRDD != null) {
-          mergeRDD.collect
+          val result = mergeRDD.collect

Review comment:
   The current code does not handle multiple partitions properly: the Add/Drop partition commands are fired once per partition. Please change the code as below:

    if (!updatePartitionSpecs.isEmpty) {
      val tableIdentifier = new TableIdentifier(carbonTable.getTableName,
        Some(carbonTable.getDatabaseName))
      // To update partitionSpec in hive metastore, drop and add with latest path.
      val oldPartition: util.List[TablePartitionSpec] =
        new util.ArrayList[TablePartitionSpec]()
      val newPartition: util.List[TablePartitionSpec] =
        new util.ArrayList[TablePartitionSpec]()
      updatePartitionSpecs.asScala.foreach {
        partitionSpec =>
          var spec = PartitioningUtils.parsePathFragment(
            String.join(CarbonCommonConstants.FILE_SEPARATOR, partitionSpec.getPartitions))
          oldPartition.add(spec)
          val addPartition = mergeRDD.checkAndUpdatePartitionLocation(partitionSpec)
          spec = PartitioningUtils.parsePathFragment(
            String.join(CarbonCommonConstants.FILE_SEPARATOR, addPartition.getPartitions))
          newPartition.add(spec)
      }
      AlterTableDropPartitionCommand(
        tableIdentifier,
        oldPartition.asScala,
        true, false, true).run(sqlContext.sparkSession)
      AlterTableAddPartitionCommand(tableIdentifier,
        newPartition.asScala.map(p => (p, None)),
        false).run(sqlContext.sparkSession)
    }









Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r596714740



##
File path: core/src/main/java/org/apache/carbondata/core/index/Segment.java
##
@@ -417,4 +417,8 @@ public void setSegmentMetaDataInfo(SegmentMetaDataInfo segmentMetaDataInfo) {
    public boolean isExternalSegment() {
      return isExternalSegment;
    }
+
+  public void setIsExternalSegment(boolean isExternalSegment) {

Review comment:
   Check whether you can add the path in LoadMetadataDetails for the external partition as well.










Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r596686311



##
File path: core/src/main/java/org/apache/carbondata/core/indexstore/ExtendedBlockletWrapper.java
##
@@ -121,8 +121,9 @@ public ExtendedBlockletWrapper(List<ExtendedBlocklet> extendedBlockletList, Stri
      DataOutputStream stream = new DataOutputStream(bos);
      try {
        for (ExtendedBlocklet extendedBlocklet : extendedBlockletList) {
+        boolean isExternalPath = !extendedBlocklet.getFilePath().contains(tablePath);

Review comment:
   please check and remove, as discussed

##
File path: core/src/main/java/org/apache/carbondata/core/readcommitter/TableStatusReadCommittedScope.java
##
@@ -86,6 +87,13 @@ public TableStatusReadCommittedScope(AbsoluteTableIdentifier identifier,
      } else {
        SegmentFileStore fileStore =
            new SegmentFileStore(identifier.getTablePath(), segment.getSegmentFileName());
+      Optional

Review comment:
   Please add a comment about this scenario.

##
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableAddHivePartitionCommand.scala
##
@@ -137,6 +153,19 @@ case class CarbonAlterTableAddHivePartitionCommand(
      CarbonUtil.checkAndCreateFolderWithPermission(segmentsLoc)
      val segmentPath = segmentsLoc + CarbonCommonConstants.FILE_SEPARATOR + segmentFileName
      SegmentFileStore.writeSegmentFile(segmentFile, segmentPath)
+    CarbonLoaderUtil
+      .recordNewLoadMetadata(newMetaEntry, loadModel, false, false)
+    operationContext.setProperty(table.getTableUniqueName + "_Segment", loadModel.getSegmentId)

Review comment:
   We don't need to fire events for SI, since we are not going to load data into SI.




