At least for MySQL, the driver just keeps a map of column name -> index, so that
lookup is probably not too expensive.  Mostly I used the arrays because when the
result contains multiple tables' worth of columns there can be duplicate column
names (id, name).  Also, you need to load table 1 with offset 0, table 2 with
offset n, etc., which isn't possible with the names.  With the MySQL driver it
is also possible to get the values out by using the alias (like A0.id, A1.id),
but I don't know if that's universally available.
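The offset arithmetic described above can be sketched roughly like this (the names here are hypothetical, not OJB's): each table in the join occupies a contiguous block of result columns, so a field is addressed positionally rather than by its (possibly duplicated) name.

```java
// Sketch, assuming each joined table's columns appear contiguously in the
// result set, in table order. Names are illustrative, not OJB's API.
public class JoinColumnOffsets {
    /**
     * Returns the 1-based JDBC column index for field fieldIdx (0-based)
     * of the table at position tableIdx in the join, given the number of
     * columns each table contributes to the result.
     */
    static int absoluteIndex(int[] columnCounts, int tableIdx, int fieldIdx) {
        int offset = 0;
        // Sum the widths of all tables that precede this one in the SELECT list.
        for (int t = 0; t < tableIdx; t++) {
            offset += columnCounts[t];
        }
        return offset + fieldIdx + 1; // JDBC column indexes are 1-based
    }

    public static void main(String[] args) {
        // Two joined tables, each with columns (id, name): table 0 occupies
        // result columns 1-2, table 1 occupies columns 3-4. Both have an
        // "id" column, so name-based lookup would be ambiguous.
        int[] counts = {2, 2};
        System.out.println(absoluteIndex(counts, 0, 0)); // table 0's "id" -> 1
        System.out.println(absoluteIndex(counts, 1, 0)); // table 1's "id" -> 3
    }
}
```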

One thing I don't fully understand is getFieldDescriptorsForMultiMappedTable 
and how it is used in SqlSelectStatement.appendListOfColumnsForSelect.  I 
didn't have time to dig into it deeply enough to know everything.  However, I 
didn't want to use its results, because the columns are first added to a map 
for uniqueness and then put back into an array, which affects the order of the 
column list.  Within one execution of the JVM, with the same map 
implementation, the columns will most likely always come back out in the same 
order, but I don't like relying on things like that.  I wonder if an ordering 
issue like that could ever affect the change you suggest below?  In any case, 
for my change it was easier to just count on the field descriptor array coming 
straight from the class descriptor for each class in the join.
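The ordering concern above can be addressed without giving up uniqueness: a LinkedHashSet de-duplicates while preserving insertion order, so the SELECT column list stays aligned with the descriptor array. A minimal sketch (the helper name is hypothetical):

```java
import java.util.*;

// Sketch: de-duplicating a column list without losing the declared order.
// A plain HashMap/HashSet makes iteration order an implementation detail
// of the hash table; LinkedHashSet keeps insertion order deterministic.
public class ColumnOrder {
    static List<String> uniqueInDeclaredOrder(String[] declared) {
        // LinkedHashSet removes duplicates but iterates in insertion order,
        // so the resulting list matches the order columns were declared in.
        return new ArrayList<>(new LinkedHashSet<>(Arrays.asList(declared)));
    }

    public static void main(String[] args) {
        String[] declared = {"id", "name", "ref_id", "value"};
        System.out.println(uniqueInDeclaredOrder(declared)); // [id, name, ref_id, value]
    }
}
```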

To answer Jakob's question: I don't think this handles extents correctly, 
probably because I avoid getFieldDescriptorsForMultiMappedTable as mentioned 
above, but I'm not sure, because we don't make much use of extents and I just 
needed to get this working for the moment.

I may have time to make this functionality more robust, but I would probably 
need some guidance on the issues to look out for.  As I said, this just gets 
what I need working at the moment, and I tried to keep it from intermingling 
too much with the existing code to make things easy on me.

I have attached the patches and the new query type as Jakob requested.  This 
includes the PersistenceBrokerImpl patch that I posted to the dev list, because 
when loading all the objects at once it is fairly important not to create 
proxies for them all only to materialize them later.

John Marshall
Connectria

PS I also submitted a patch to the commons-sql project that allows generating 
ALTER SQL statements to bring a database up to date with a new schema.  We 
transform the OJB repository into the SQL schema XML via XSL and automatically 
check the DB on application startup.  If you don't like that approach, you can 
just generate the SQL via the Ant task.  Just mentioning it because I see a 
lot of questions about the existing tools.


>fwiw, I did the following in SQLHelper
>
>    public static Object getObjectFromColumn(ResultSet rs, FieldDescriptor fld)
>        throws SQLException
>    {
>        if (fld.getColumnIndex() == -1)
>        {
>            fld.setColumnIndex(rs.findColumn(fld.getColumnName()));
>        }
>        return getObjectFromColumn(rs, fld.getColumnJdbcType(), fld.getColumnIndex());
>    }
>
>as a very easy way of using the name to resolve the index only once, and then
>using the index from then on.  The performance tests showed almost 0%
>improvement.  However, this could be a per-driver performance issue.
>
>I have not yet removed the HashMap row that is used during buildWithReflection
>and replaced it with a directly indexed array, indexed by the columnIndex in
>the FieldDescriptor.  I will try that and see if there is a performance gain.
>
>cheers,
>Matthew
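The lazy index caching in Matthew's snippet above can be illustrated outside JDBC with a small standalone sketch (all names here are illustrative stand-ins, not OJB's classes); a counter shows that the name lookup happens only on the first row, after which the cached index is used:

```java
// Sketch of the resolve-once pattern: the column index is looked up by name
// the first time a field is read, then cached on the field descriptor stand-in.
public class ColumnIndexCache {
    static final class Field {
        final String name;
        int columnIndex = -1; // -1 means "not yet resolved", as in the snippet above
        Field(String name) { this.name = name; }
    }

    static int lookups = 0; // counts name-based lookups, to show caching works

    // Simulated ResultSet.findColumn: linear name search, 1-based result.
    static int findColumn(String[] columns, String name) {
        lookups++;
        for (int i = 0; i < columns.length; i++) {
            if (columns[i].equals(name)) return i + 1;
        }
        throw new IllegalArgumentException("no such column: " + name);
    }

    // Simulated value fetch: resolve the index once, then read positionally.
    static Object getValue(Object[] row, String[] columns, Field f) {
        if (f.columnIndex == -1) {
            f.columnIndex = findColumn(columns, f.name);
        }
        return row[f.columnIndex - 1];
    }

    public static void main(String[] args) {
        String[] cols = {"id", "name"};
        Field id = new Field("id");
        getValue(new Object[]{1, "a"}, cols, id); // resolves and caches the index
        getValue(new Object[]{2, "b"}, cols, id); // uses the cached index
        System.out.println(lookups); // 1: the name was resolved only once
    }
}
```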

Attachment: join-load.zip
Description: Zip compressed data
