[
https://issues.apache.org/jira/browse/DERBY-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12557912#action_12557912
]
mamtas edited comment on DERBY-3302 at 1/10/08 10:10 PM:
------------------------------------------------------------------
The majority of getLocaleFinder calls are from dead national character datatype
code.
In addition, it is also called by existing code (i.e., the code that predates
the collation feature) for date, time, and timestamp. I am not sure whether
that can run into problems during recovery.
One place that I find suspicious is the SQLChar.like(dvd, dvd) method at line
number 1767. This code is executed for non-national, non-collation-sensitive
character types, i.e., UCS_BASIC character types, and the code looks as follows:
    // Make sure we fail for both varchar and nvarchar
    // for multiple collation characters.
    SQLChar escapeSQLChar = (SQLChar) escape;
    int[] escapeIntArray = escapeSQLChar.getIntArray();
    if (escapeIntArray != null && (escapeIntArray.length != 1))
    {
        throw StandardException.newException(
            SQLState.LANG_INVALID_ESCAPE_CHARACTER,
            new String(escapeSQLChar.getCharArray()));
    }
So, it appears that we check whether the number of collation elements
associated with the escape character is more than 1 and, if so, throw an
exception. It seems that a check like the one above should be performed for
collation-sensitive character types, not for UCS_BASIC character types.
Interestingly, nothing like this is checked for national character datatypes.
I entered jira entry DERBY-3315 for this.
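For background, the notion of "collation elements" in the check above can be
illustrated with the JDK's own collation API. This is only a conceptual sketch
under assumed names (the countCollationElements helper is hypothetical and does
not use Derby's getIntArray): it counts how many collation elements a collator
produces for a string, which is the quantity the Derby code compares against 1.

```java
import java.text.CollationElementIterator;
import java.text.Collator;
import java.text.RuleBasedCollator;
import java.util.Locale;

public class EscapeCheck {
    // Count the collation elements a RuleBasedCollator produces for a string.
    // A valid LIKE escape character would be expected to yield exactly one.
    static int countCollationElements(RuleBasedCollator collator, String s) {
        CollationElementIterator it = collator.getCollationElementIterator(s);
        int count = 0;
        while (it.next() != CollationElementIterator.NULLORDER) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        RuleBasedCollator c = (RuleBasedCollator) Collator.getInstance(Locale.US);
        // A single Latin letter maps to a single collation element under en_US.
        System.out.println(countCollationElements(c, "a"));
    }
}
```

Under a territory-based collation, some character sequences can expand or
contract to a different number of elements, which is presumably why the length
of the int array is checked rather than the character count.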
> NullPointerException during recovery of database with territory-based
> collation
> -------------------------------------------------------------------------------
>
> Key: DERBY-3302
> URL: https://issues.apache.org/jira/browse/DERBY-3302
> Project: Derby
> Issue Type: Bug
> Components: Store
> Affects Versions: 10.3.1.4, 10.3.2.1, 10.4.0.0
> Reporter: Knut Anders Hatlen
> Assignee: Mamta A. Satoor
> Priority: Critical
> Fix For: 10.4.0.0
>
> Attachments: npe.sql
>
>
> When logical undo is performed on a database with territory-based collation,
> you may get a NullPointerException in SQLChar.getCollationKey() because
> SQLChar.getLocaleFinder() returns null.
> This bug was reported on derby-user:
> http://thread.gmane.org/gmane.comp.apache.db.derby.user/8253