Alexey Serbin has posted comments on this change. ( http://gerrit.cloudera.org:8080/16683 )

Change subject: KUDU-1563 Use DELETE_IGNORE in KuduRestore job
......................................................................


Patch Set 5: Code-Review+1

(1 comment)

http://gerrit.cloudera.org:8080/#/c/16683/5/java/kudu-backup/src/test/scala/org/apache/kudu/backup/TestKuduBackup.scala
File java/kudu-backup/src/test/scala/org/apache/kudu/backup/TestKuduBackup.scala:

http://gerrit.cloudera.org:8080/#/c/16683/5/java/kudu-backup/src/test/scala/org/apache/kudu/backup/TestKuduBackup.scala@634
PS5, Line 634:   def testLegacyDeleteIgnore(): Unit = {
             :     insertRows(table, 100) // Insert data into the default test table.
             :
             :     // Run and validate initial backup.
             :     backupAndValidateTable(tableName, 100, false)
             :
             :     // Delete the rows and validate incremental backup.
             :     Range(0, 100).foreach(deleteRow)
             :     backupAndValidateTable(tableName, 100, true)
             :
             :     // When restoring the table, delete half the rows after each job completes.
             :     // This will force delete rows to cause NotFound errors and allow validation
             :     // that they are correctly handled.
             :     val listener = new SparkListener {
             :       override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
             :         val client = kuduContext.syncClient
             :         val table = client.openTable(s"$tableName-restore")
             :         val scanner = kuduContext.syncClient.newScannerBuilder(table).build()
             :         val session = client.newSession()
             :         scanner.asScala.foreach { rr =>
             :           if (rr.getInt("key") % 2 == 0) {
             :             val delete = table.newDelete()
             :             val row = delete.getRow
             :             row.addInt("key", rr.getInt("key"))
             :             session.apply(delete)
             :           }
             :         }
             :       }
             :     }
             :     ss.sparkContext.addSparkListener(listener)
             :
             :     restoreAndValidateTable(tableName, 0)
             :   }
Is it possible to separate this out into some sort of re-usable unit of code and use it both here and in the above test, keeping the same test body but with a custom MasterServerConfig for the KuduTestHarness?
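A rough sketch of what I have in mind, assuming the existing kuduContext / tableName fields of the test class (the helper name is illustrative only, not an existing method):

```scala
// Hypothetical helper: build the delete-half-the-rows SparkListener once,
// so both this test and the one above can register it and share the body,
// differing only in the KuduTestHarness master configuration.
private def deleteEvenKeysOnJobEnd(restoredTableName: String): SparkListener =
  new SparkListener {
    override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
      val client = kuduContext.syncClient
      val table = client.openTable(restoredTableName)
      val scanner = client.newScannerBuilder(table).build()
      val session = client.newSession()
      // Delete every even-keyed row so subsequent restore jobs hit
      // NotFound errors on those keys.
      scanner.asScala.foreach { rr =>
        if (rr.getInt("key") % 2 == 0) {
          val delete = table.newDelete()
          delete.getRow.addInt("key", rr.getInt("key"))
          session.apply(delete)
        }
      }
    }
  }
```

Then each test would just do `ss.sparkContext.addSparkListener(deleteEvenKeysOnJobEnd(s"$tableName-restore"))` before calling restoreAndValidateTable.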



--
To view, visit http://gerrit.cloudera.org:8080/16683
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: Ib6f6d5a31be77630e79ff1566e796eb5183a5d22
Gerrit-Change-Number: 16683
Gerrit-PatchSet: 5
Gerrit-Owner: Grant Henke <[email protected]>
Gerrit-Reviewer: Alexey Serbin <[email protected]>
Gerrit-Reviewer: Andrew Wong <[email protected]>
Gerrit-Reviewer: Attila Bukor <[email protected]>
Gerrit-Reviewer: Grant Henke <[email protected]>
Gerrit-Reviewer: Kudu Jenkins (120)
Gerrit-Comment-Date: Mon, 09 Nov 2020 07:49:38 +0000
Gerrit-HasComments: Yes
