[ https://issues.apache.org/jira/browse/CASSANDRA-20276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17924880#comment-17924880 ]

Stefan Miklosovic edited comment on CASSANDRA-20276 at 2/7/25 1:26 PM:
-----------------------------------------------------------------------

[~bernardo.botella] I want to raise a few minor things I see before we merge this; we might just address them in this PR and be done with it.

When I look into the tests, I see there is this:
{code:java}
create table %s.tblconstraints (id int primary key, x int check x > 100 and x < 
200 and NOT_NULL(x), v text check LENGTH(v) > 10) {code}
Now, when you notice this:
{code:java}
 x int check x > 100 and x < 200 and NOT_NULL(x){code}
What this does is run the ScalarColumnConstraint.evaluate method for the > 100 and < 200 checks, where it compares the numbers, and only _then_ does it go on to check the nullity.

I consider this example unlikely to happen in the real world because it just doesn't make sense to check for nullity after the first scalar check passes. The scalar check itself is basically checking nullity in isSatisfiedBy - but that is just a byproduct of the comparison when it compares an empty ByteBuffer (representing "null") with a ByteBuffer of some value (e.g. representing "100").

However, if you try to insert "null" in CQLSH for "x", it will indeed fail that check, but it unnecessarily exercises the code that compares an empty ByteBuffer with a ByteBuffer of value "100" (relationType.isSatisfiedBy).

Same logic holds for LengthConstraint.

This is not necessary and we can fail much earlier, before calling all of that machinery, saving some time when the value being evaluated is null.
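Just to illustrate the kind of early exit I mean, here is a minimal sketch; the class, interface and field names below are only assumptions for illustration, not the actual ScalarColumnConstraint code:
{code:java}
import java.nio.ByteBuffer;

// Minimal illustrative sketch only; names are assumptions, not the real implementation.
final class ScalarCheckSketch
{
    interface Relation
    {
        boolean isSatisfiedBy(ByteBuffer value, ByteBuffer term);
    }

    private final Relation relationType;
    private final ByteBuffer term;

    ScalarCheckSketch(Relation relationType, ByteBuffer term)
    {
        this.relationType = relationType;
        this.term = term;
    }

    void evaluate(ByteBuffer columnValue)
    {
        // Fail fast: an empty ByteBuffer represents "null", so there is no point
        // in delegating to relationType.isSatisfiedBy and comparing against the term.
        if (columnValue == null || !columnValue.hasRemaining())
            throw new IllegalArgumentException("Column value does not satisfy constraint: value is null");

        // Only a non-null value reaches the actual scalar comparison.
        if (!relationType.isSatisfiedBy(columnValue, term))
            throw new IllegalArgumentException("Column value does not satisfy constraint");
    }
}
{code}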

 

I have updated the PR to improve these things along with a few cosmetic improvements as well. Please take a look and let me know. We would still need to accommodate the tests; I am not sure whether you test the exception messages in case we pass null into scalar checks.

The way I look at this is that NOT_NULL is necessary only when it is the only check on that column. All checks should test for nullity and fail as soon as possible.

That also means it is questionable what we have NOT_NULL for when there are other checks in place which are evaluated before NOT_NULL and which check nullity implicitly too.

If we do:

{code}
ALTER TABLE ks.tb ALTER x CHECK x > 0 AND NOT_NULL(x);
{code}

This should most probably fail because "x > 0" is checking nullity itself.

{code}
ALTER TABLE ks.tb ALTER x CHECK NOT_NULL(x) AND x > 0;
{code}
 
This should most probably fail too, because if the value is null then it does not make sense to also have a check specified which tests whether it is bigger than 0.
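If we decided to reject such statements, the validation could be roughly as simple as the following sketch; the ColumnConstraint interface and isNotNull method are assumptions for illustration only, not code from the patch:
{code:java}
import java.util.List;

// Purely hypothetical sketch of rejecting a redundant NOT_NULL at DDL time.
final class RedundantNotNullValidationSketch
{
    interface ColumnConstraint
    {
        boolean isNotNull();
    }

    static void validate(String column, List<ColumnConstraint> constraints)
    {
        boolean hasNotNull = constraints.stream().anyMatch(ColumnConstraint::isNotNull);

        // Every other check rejects null implicitly, so combining them with
        // NOT_NULL is redundant and could be refused when the DDL is processed.
        if (hasNotNull && constraints.size() > 1)
            throw new IllegalArgumentException(
                "NOT_NULL(" + column + ") is redundant when other checks are specified on the column");
    }
}
{code}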

We might postpone this to another PR, but I would like to see what I suggested in the updated PR included in this ticket.



> Add not null constraint
> -----------------------
>
>                 Key: CASSANDRA-20276
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-20276
>             Project: Apache Cassandra
>          Issue Type: Improvement
>          Components: Feature/Constraints
>            Reporter: Bernardo Botella Corbi
>            Assignee: Stefan Miklosovic
>            Priority: Normal
>             Fix For: 5.x
>
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add a new constraint that checks if a column is null, as defined in CEP-42: 
> https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-42%3A+Constraints+Framework


