What you're seeing is the expected behavior in both cases.
One way to get the semantics you want in both situations is to read the Kudu
table into a DataFrame, filter it in Spark SQL down to just the rows you
want to delete, and then use that filtered DataFrame to perform the
deletion.
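A minimal sketch of that approach using the kudu-spark integration, assuming a reachable Kudu master, a placeholder table name `my_table`, and placeholder columns `id` and `status` (all of these are assumptions, not from the thread):

```scala
import org.apache.kudu.spark.kudu._
import org.apache.spark.sql.SparkSession

object KuduFilteredDelete {
  def main(args: Array[String]): Unit = {
    // Placeholder connection details; replace with your own.
    val kuduMaster = "kudu-master:7051"
    val tableName  = "my_table"

    val spark = SparkSession.builder().appName("kudu-delete").getOrCreate()

    // Read the table's current contents into a DataFrame.
    val kuduDF = spark.read
      .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
      .format("kudu")
      .load()

    // Filter down to the rows you want gone. Because the DataFrame was
    // read from the table itself, every key here is guaranteed to exist,
    // so the delete cannot hit a missing-row error.
    val toDelete = kuduDF.filter("status = 'expired'").select("id")

    // Perform the deletion through the KuduContext.
    val kuduContext = new KuduContext(kuduMaster, spark.sparkContext)
    kuduContext.deleteRows(toDelete, tableName)
  }
}
```

Since the keys are taken from the table itself rather than from an external list, deleting a key that was never present can't occur; note that `format("kudu")` requires a recent kudu-spark release (older versions used the `.kudu` implicit reader instead).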
Hi all,
I am currently evaluating Spark with Kudu and am facing the following
issues:
1) If you try to DELETE a row whose key is not present in the table,
you get an exception like this:
java.lang.RuntimeException: failed to write N rows from DataFrame to Kudu;
sample errors: