[
https://issues.apache.org/jira/browse/DERBY-3330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mike Matrigali updated DERBY-3330:
----------------------------------
Derby Info: [Patch Available, Release Note Needed]  (was: [Release Note Needed, Patch Available])
In derby-3330_followup_1.diff I don't think the sort changes will work
correctly. In the uniqueWithDuplicateNulls case we need the sorter to sort on
the rowlocation column as well, for the case where two rows are otherwise
duplicates by the store's null-comparison standards.
Imagine a table with a single-column nullable index that has a million rows,
all with null in that column. The order does not really matter to the user, but
when a base row is deleted the system will want to find exactly the matching
(null, (page 5, row 7)) index entry, and if the entries are not properly sorted
on the row location then the btree search for that row will probably fail.
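To make that concrete, here is a minimal user-level sketch of the scenario -
plain embedded-JDBC code, assuming a build with this patch so the nullable
unique constraint is accepted; the table/column names and the row count are
just illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class NullDuplicateDeleteSketch {
        public static void main(String[] args) throws Exception {
            // Embedded Derby database; the name is arbitrary.
            Connection conn =
                DriverManager.getConnection("jdbc:derby:nulldupdb;create=true");
            Statement s = conn.createStatement();

            s.executeUpdate("CREATE TABLE t (id INT, c1 INT)");
            // Unique constraint over a nullable column -- the DERBY-3330 feature.
            s.executeUpdate("ALTER TABLE t ADD CONSTRAINT t_c1_uq UNIQUE (c1)");

            // Many rows whose key is null: all are allowed, and each one gets
            // its own (null, rowlocation) entry in the backing index.
            PreparedStatement ins =
                conn.prepareStatement("INSERT INTO t VALUES (?, NULL)");
            for (int i = 0; i < 100000; i++) {
                ins.setInt(1, i);
                ins.executeUpdate();
            }

            // Deleting one base row forces the store to locate exactly the
            // matching (null, rowlocation) index entry among all the duplicates.
            s.executeUpdate("DELETE FROM t WHERE id = 12345");

            conn.close();
        }
    }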
So what we want from the sorter, if N is the number of columns including the
row location column, is (a rough sketch follows below):
1) sort on all N columns
2) if any column has a null, don't do any duplicate checking
3) if no column has a null, do duplicate checking based on the leading N-1 columns
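To spell out what I mean, here is a standalone sketch of that comparison rule
in plain Java - not the actual sorter interfaces; rows are just Object[]
arrays whose last element stands in for the row location, and I am assuming
nulls sort low purely for illustration:

    import java.util.Comparator;

    // Sketch of the desired "unique with duplicate nulls" ordering.  A row is
    // an Object[] of length N: elements 0..N-2 are the key columns, element
    // N-1 stands in for the row location.  Purely illustrative, not Derby's
    // sorter API.
    @SuppressWarnings({"unchecked", "rawtypes"})
    public class UniqueWithDuplicateNullsOrder implements Comparator<Object[]> {

        // 1) Sort on all N columns, including the row location
        //    (nulls ordered consistently; here null sorts low).
        public int compare(Object[] a, Object[] b) {
            for (int i = 0; i < a.length; i++) {
                Object x = a[i];
                Object y = b[i];
                if (x == null && y == null) continue;
                if (x == null) return -1;
                if (y == null) return 1;
                int c = ((Comparable) x).compareTo(y);
                if (c != 0) return c;
            }
            return 0;
        }

        // 2) and 3): report a duplicate only when the leading N-1 key columns
        //    are equal AND none of them is null; rows whose key contains a
        //    null are never duplicates of each other.
        public boolean isDuplicateKey(Object[] a, Object[] b) {
            for (int i = 0; i < a.length - 1; i++) {
                if (a[i] == null || b[i] == null) {
                    return false;
                }
                if (((Comparable) a[i]).compareTo(b[i]) != 0) {
                    return false;
                }
            }
            return true;
        }
    }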
It would be nice to have a test that verifies the sorting is correct when there
are many duplicate nulls. It is a little tricky to get a good test case,
because in the normal case the rows scanned from a heap to build an index
arrive with rowlocations in ascending order - but we should not be counting on
that. I will have to think about this. I don't know for sure, but once the
sorter goes external with multiple merge runs I think it will shuffle the rows
from their input order based on the sort keys it was told to consider. We
should verify that the existing checked-in code handles this case correctly.
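Something along these lines is what I have in mind - a rough sketch only; the
names are made up, and I am only guessing that this row count is enough to push
the sorter into external merge runs, which would need to be checked:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch of a test for sorting with many duplicate nulls: load the heap
    // first, then add the constraint so the backing index is built by a
    // scan + sort over keys that are all duplicate nulls, then delete every
    // row and check that each delete found its (null, rowlocation) entry.
    public class DuplicateNullSortSketch {
        public static void main(String[] args) throws Exception {
            Connection conn =
                DriverManager.getConnection("jdbc:derby:dupnulltest;create=true");
            Statement s = conn.createStatement();
            s.executeUpdate("CREATE TABLE dn (id INT, k INT)");

            conn.setAutoCommit(false);
            PreparedStatement ins =
                conn.prepareStatement("INSERT INTO dn VALUES (?, NULL)");
            int rows = 200000;   // guess at a count large enough to spill the sort
            for (int i = 0; i < rows; i++) {
                ins.setInt(1, i);
                ins.executeUpdate();
            }
            conn.commit();
            conn.setAutoCommit(true);

            // Adding the constraint now drives the index build through the sorter.
            // (Only legal on a nullable column with the DERBY-3330 work.)
            s.executeUpdate("ALTER TABLE dn ADD CONSTRAINT dn_k_uq UNIQUE (k)");

            // If the index entries were not ordered on row location, some of
            // these base-row deletes would fail to find their index entry.
            int deleted = s.executeUpdate("DELETE FROM dn");
            if (deleted != rows) {
                throw new AssertionError("expected " + rows + " deletes, got " + deleted);
            }
            ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM dn");
            rs.next();
            if (rs.getInt(1) != 0) {
                throw new AssertionError("rows left behind after delete");
            }
            conn.close();
        }
    }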
> provide support for unique constraint over keys that include one or more
> nullable columns.
> ------------------------------------------------------------------------------------------
>
> Key: DERBY-3330
> URL: https://issues.apache.org/jira/browse/DERBY-3330
> Project: Derby
> Issue Type: New Feature
> Components: Store
> Affects Versions: 10.4.0.0
> Environment: all
> Reporter: Anurag Shekhar
> Assignee: Anurag Shekhar
> Attachments: BTreeController.diff, db2Compatibility-v2.diff,
> db2Compatibility.diff, derby-3330-testcase.diff,
> derby-3330-UpgradeTests.diff, derby-3330.diff, derby-3330_followup_1.diff,
> derby-3330v10.diff, derby-3330v11.diff, derby-3330v12.diff,
> derby-3330v13.diff, derby-3330v2.diff, derby-3330v3.diff, derby-3330v4.diff,
> derby-3330v5.diff, derby-3330v6.diff, derby-3330v7.diff, derby-3330v8.diff,
> derby-3330v9.diff, derbyall_report.txt, FunctionalSpec_DERBY-3330-V2.html,
> FunctionalSpec_DERBY-3330.html, UniqueConstraint_Implementation.html,
> UniqueConstraint_Implementation_V2.html,
> UniqueConstraint_Implementation_V3.html,
> UniqueConstraint_Implementation_V4.html
>
>
> Allow unique constraints over keys that include one or more nullable columns.
> Prior to this change Derby only supported unique constraints on keys that
> included no nullable columns. The new constraint will allow unlimited
> inserts of keys with one or more null columns, but will limit keys with no
> null columns to one occurrence per table.
> There is no change to existing or newly created unique indexes on nullable
> columns (as opposed to unique constraints on nullable columns). Also there is
> no change to existing or newly created constraints on keys with no nullable
> columns.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.