Adar Dembo has posted comments on this change. ( http://gerrit.cloudera.org:8080/13864 )

Change subject: KUDU-2625: Support per-row error check in prepare stage
......................................................................


Patch Set 3:

(8 comments)

http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/client/client-test.cc
File src/kudu/client/client-test.cc:

http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/client/client-test.cc@2732
PS3, Line 2732:   ASSERT_EQ(R"((int32 key=1, int32 int_val=1, string 
string_val="One", int32 non_null_with_default=12345))", rows[0]);
              :   ASSERT_EQ(R"((int32 key=2, int32 int_val=22, string 
string_val="Two", int32 non_null_with_default=12345))", rows[1]);
These lines look too long; could you wrap them?

Below too.


http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/client/client-test.cc@2789
PS3, Line 2789:   CHECK(!overflow);
              :   CHECK_EQ(2, errors.size());
Use ASSERTs here.


http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/client/client-test.cc@2798
PS3, Line 2798:   ScanTableToStrings(client_table_.get(), &rows);
Need to wrap these calls in NO_FATALS.


http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/common/row_operations.cc
File src/kudu/common/row_operations.cc:

http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/common/row_operations.cc@449
PS3, Line 449:       continue;
Looking over DecodeUpdateOrDelete, the "nothing to consume, move on to the next 
column" case appears to be less common than the "we need to consume the value 
in order to properly validate subsequent rows" case. As such, could you add a 
comment here explaining why we're avoiding ReadColumn?

Alternatively, you could augment "First process the key columns" to explain the 
overall flow and help readers understand why, in the event of a key column 
validation failure, we can't just short circuit out.

We also need equivalent breadcrumbs in the comments below, dealing with non-key 
columns.

Finally, could you add some multi-row tests to row_operations-test, where a 
validation error in the first row doesn't mess up the subsequent rows? Or do we 
have such coverage already?


http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/common/row_operations.cc@527
PS3, Line 527:         ReadColumn(col, scratch);
Wrap in RETURN_NOT_OK? Or should this be wrapped in ignore_result?


http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/common/row_operations.cc@616
PS3, Line 616:   if (!ops->empty() && all_error) {
             :     return Status::InvalidArgument("all row operations decoded 
error");
             :   }
Why this short-circuit? Shouldn't we surface a batch of 10/10 decoding errors 
using per-row errors?


http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/tablet/transactions/write_transaction.cc
File src/kudu/tablet/transactions/write_transaction.cc:

http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/tablet/transactions/write_transaction.cc@169
PS3, Line 169:     UpdatePerRowErrors();
Is this here just to accommodate the new "if all ops failed to decode, fail the 
entire write" codepath?


http://gerrit.cloudera.org:8080/#/c/13864/3/src/kudu/tablet/transactions/write_transaction.cc@220
PS3, Line 220:     i++;
Nit: not your fault (since you're just moving code), but this would probably be 
clearer if you used a for loop iterating over i and set up a local const RowOp* 
in the body of the loop.



--
To view, visit http://gerrit.cloudera.org:8080/13864
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I497fc3d5d1c9cbb0c183997c9adb8f5efeb9c9d0
Gerrit-Change-Number: 13864
Gerrit-PatchSet: 3
Gerrit-Owner: Yingchun Lai <[email protected]>
Gerrit-Reviewer: Adar Dembo <[email protected]>
Gerrit-Reviewer: Alexey Serbin <[email protected]>
Gerrit-Reviewer: Kudu Jenkins (120)
Gerrit-Reviewer: Todd Lipcon <[email protected]>
Gerrit-Reviewer: Yingchun Lai <[email protected]>
Gerrit-Comment-Date: Mon, 22 Jul 2019 21:25:38 +0000
Gerrit-HasComments: Yes