Hi all,

I am currently evaluating Spark with Kudu, and I am facing the following issues:

1) If you try to DELETE a row whose key is not present in the table, you get
an exception like this (a minimal repro sketch follows the error):

java.lang.RuntimeException: failed to write N rows from DataFrame to Kudu;
sample errors: Not found: key not found (error 0)
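
This is roughly what I am doing, as a minimal sketch; the master address,
table name, and the single integer key column "id" are placeholders:

  import org.apache.kudu.spark.kudu.KuduContext
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("kudu-delete-test").getOrCreate()
  import spark.implicits._

  val kuduContext = new KuduContext("kudu-master:7051", spark.sparkContext)

  // A key value that does not exist in the table
  val missingKeys = Seq(9999).toDF("id")

  // Fails with: RuntimeException ... Not found: key not found (error 0)
  kuduContext.deleteRows(missingKeys, "my_table")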

2) If you try to DELETE a row providing only a subset of the table's key
columns, you get the following (again, a sketch follows the error):

Caused by: java.lang.RuntimeException: failed to write N rows from
DataFrame to Kudu; sample errors: Invalid argument: No value provided for
key column:
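
Again a minimal sketch with placeholder names, this time assuming "my_table"
has a composite primary key (id, ts) and only "id" is supplied:

  import org.apache.kudu.spark.kudu.KuduContext
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("kudu-delete-test").getOrCreate()
  import spark.implicits._

  val kuduContext = new KuduContext("kudu-master:7051", spark.sparkContext)

  // Only "id" is provided, while the primary key is (id, ts)
  val partialKeys = Seq(42).toDF("id")

  // Fails with: Invalid argument: No value provided for key column: ts
  kuduContext.deleteRows(partialKeys, "my_table")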

Both use cases above work correctly when interacting with Kudu through
Impala.

Any suggestions on how to overcome these limitations?

Thanks.
Best Regards

Pietro
