On Jan 11, 2010, at 10:57 AM, Michael Dick wrote:
On Mon, Jan 11, 2010 at 12:38 PM, Craig L Russell <[email protected]> wrote:
Hi Mike,
On Jan 11, 2010, at 7:24 AM, Michael Dick wrote:
Hi Craig,
That sounds reasonable for this specific use case. I'm a little leery of doing too much validation of the columnDefinition attribute, though. It just seems pretty easy for us to get it wrong (i.e. converting VARCHAR to LVARCHAR based on the column length, or some other optimization).
I'm really not suggesting that we do extensive analysis of the columnDefinition. Just transforming NVARCHAR(n), which is ANSI standard SQL, into the dialect needed by non-ANSI databases, instead of simply passing the columnDefinition as-is to the DDL.
Where would we draw the line, though? Just column types that we know won't work? I could go along with that as long as we're clear on exactly what we will change.
I'm suggesting that we look at the columnDefinition and, if it contains NVARCHAR or NCHAR, do a string substitution that is mediated by DBDictionary and implemented by a subclass.
Craig
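A minimal sketch of the kind of substitution being proposed here (the dialect names, the helper class, and the vendor type mappings are illustrative assumptions, not actual OpenJPA API; a real implementation would hang off a DBDictionary subclass):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: rewrite ANSI NCHAR(n)/NVARCHAR(n) in a columnDefinition
// string into a dialect-specific spelling, leaving everything else untouched.
public class NationalCharSubstitution {

    // Matches NCHAR(n) or NVARCHAR(n), capturing the optional VAR and the length.
    private static final Pattern NATIONAL_TYPE =
        Pattern.compile("\\bN(VAR)?CHAR\\s*\\((\\d+)\\)", Pattern.CASE_INSENSITIVE);

    /** Rewrites ANSI national character types for the given (hypothetical) dialect key. */
    public static String translate(String columnDefinition, String dialect) {
        Matcher m = NATIONAL_TYPE.matcher(columnDefinition);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            boolean varying = m.group(1) != null;
            String length = m.group(2);
            String replacement;
            if ("oracle".equals(dialect)) {
                // Oracle spells the varying national type NVARCHAR2.
                replacement = varying ? "NVARCHAR2(" + length + ")" : "NCHAR(" + length + ")";
            } else if ("db2".equals(dialect)) {
                // Assumed mapping to DB2's double-byte GRAPHIC/VARGRAPHIC types.
                replacement = varying ? "VARGRAPHIC(" + length + ")" : "GRAPHIC(" + length + ")";
            } else {
                // SQL Server and Sybase understand NVARCHAR/NCHAR as-is.
                replacement = m.group(0);
            }
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

A subclass-mediated version would simply move the per-dialect branch into an overridable method, so each DBDictionary subclass supplies its own spelling.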
What about adding a variable to DBDictionary along the lines of "preferNLSVarChar", and then we'll try to use the database's specific NVARCHAR equivalent?
That's not the issue at all. As I understand it, the application has some columns that hold national use characters, and those specific columns need to be defined to use NVARCHAR or its non-ANSI dialect. Not all columns should be NVARCHAR.
Marc, presumably you have different persistence unit definitions for each database. If that's the case, then you could use a different xml-mapping-file and set the columnDefinitions to the database-specific type in the xml-mapping-file.
Or, even more hacky, you could just override the VARCHAR type in the DBDictionary, i.e. add this property to persistence.xml:
<property name="openjpa.jdbc.DBDictionary" value="varCharTypeName=NVARCHAR"/>
Either way the application will have to know the proper type, but at least you can make some progress.
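For the mapping-file route, each persistence unit could reference its own mapping file via <mapping-file> in persistence.xml. A sketch using standard JPA orm.xml syntax (the file name and entity are made up for illustration):

```xml
<!-- orm-sqlserver.xml (hypothetical), referenced only from the
     SQL Server persistence unit -->
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 version="1.0">
  <entity class="com.example.Document">
    <attributes>
      <basic name="title">
        <!-- the database-specific type goes in column-definition -->
        <column name="TITLE" column-definition="NVARCHAR(256)"/>
      </basic>
    </attributes>
  </entity>
</entity-mappings>
```

A sibling orm-oracle.xml would carry the Oracle spelling for the same column, so the entity classes themselves stay database neutral.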
I think that either you or I misunderstand the issue. As I understand it, the application knows the column type (national use or not), and the problem is how to get OpenJPA to generate the proper DDL for the database variant.
It was me; thanks for setting me straight. The multiple mapping-file approach would work for only a few columns, but DBDictionary hacks wouldn't be optimal for most apps.
-mike
On Fri, Jan 8, 2010 at 1:34 PM, Craig L Russell <[email protected]> wrote:
On Jan 8, 2010, at 11:00 AM, Marc.Boudreau wrote:
No, the problem is that the code can be run on a variety of database platforms like DB2, SQL Server, Oracle, etc. So if I use @Column(columnDefinition="NVARCHAR(256)"), it will only work on SQL Server and Sybase, because the other database platforms don't recognize the NVARCHAR type.
I see. How about having the DBDictionary process the columnDefinition in a database-specific way? IIRC, all of the databases support national use character set columns, but each in its own way.
The columnDefinition is not further standardized in the specification, so we can do anything we want with it.
We could analyze the columnDefinition, look for the ANSI standard strings NCHAR(n) and NVARCHAR(n), and translate these into the database-specific type.
Craig
Craig L Russell wrote:
Hi,
On Jan 8, 2010, at 7:53 AM, Marc Boudreau wrote:
Currently, OpenJPA maps String fields to VARCHAR on SQL Server and Sybase. There doesn't appear to be a way to cause a String field to be mapped to NVARCHAR other than by using the @Column annotation and setting its columnDefinition to "NVARCHAR".
What is the objection to using this technique on the columns that you want to hold national use characters? It seems this use case is exactly suited to this feature.
At the same time, blindly using NVARCHAR for all String fields is too costly in terms of storage space on the database. It ends up limiting the maximum size of the column (fewer characters can fit because more bytes are used to store them). Unfortunately, the applications we write are required to be database neutral because we support multiple vendors.
I'd like to start a discussion on this matter. Here are a couple of points to lead us off...
What's the severity of this missing functionality?
Could an OpenJPA-specific annotation be introduced to allow the mapping tool to use NVARCHAR instead of VARCHAR?
Is the problem that the OpenJPA mapping tool doesn't support the standard columnDefinition annotation in the way you want it to?
Craig
Marc Boudreau
Software Developer
IBM Cognos Content Manager
[email protected]
Phone: 613-356-6412
Craig L Russell
Architect, Sun Java Enterprise System http://db.apache.org/jdo
408 276-5638 mailto:[email protected]
P.S. A good JDO? O, Gasp!
--
Sent from the OpenJPA Users mailing list archive at Nabble.com.