As mentioned earlier, the reader/writer approach is not very useful when
features are processed through JDBC.
My goal was to implement a datastore for Oracle Spatial, based on the
GeoTools architecture and interfaces, for direct access to an Oracle
database from UDIG. GeoTools' standard implementation is very narrow and
not suited to heavy customization - by which I mean customization for the
data model used in a particular project.

What I have done is not a new Hibernate that binds Feature instances to
table rows in the database, but the idea of "feature binding" is used.

There are several major parts in the "new" JDBC datastore framework design.

The part where "feature binding" takes place is the package
org.geotools.data.jdbc.operation. For the SELECT/DELETE/INSERT/UPDATE SQL
statements there are interfaces such as SelectJDBCOperation, etc., and a
JDBCOperationFactory that creates instances of the operations.
Implementors of the operation interfaces encapsulate all the SQL needed to
process Feature objects and bind them to their relational representation
in the database.

Feature objects come in and out of the operations; this part of the
framework is responsible for SQL generation and the actual JDBC
processing.
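
To give an idea of the shape of these contracts, here is a minimal sketch
in Java. Only the names SelectJDBCOperation and JDBCOperationFactory above
are real; the other interface names, the method names and the (omitted)
parameters are my illustration, not the actual API:

    import java.sql.Connection;
    import java.sql.SQLException;

    // Illustrative sketch only: execute() and the create* factory
    // methods are shorthand for the real signatures.
    interface JDBCOperation {
        // encapsulates one SQL statement kind and binds Feature data
        // to its relational representation
        void execute(Connection connection) throws SQLException;
    }

    interface SelectJDBCOperation extends JDBCOperation {}
    interface InsertJDBCOperation extends JDBCOperation {}
    interface UpdateJDBCOperation extends JDBCOperation {}
    interface DeleteJDBCOperation extends JDBCOperation {}

    interface JDBCOperationFactory {
        SelectJDBCOperation createSelect();
        InsertJDBCOperation createInsert();
        UpdateJDBCOperation createUpdate();
        DeleteJDBCOperation createDelete();
    }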

Another part of the framework is the AbstractJDBCDataStore class and the
classes around it. AbstractJDBCDataStore mostly contains get/set methods
to configure various factories, builders and providers:

- JDBCOperationFactory
- JDBCFeatureTypeBuilder
- FIDMapperFactory
- FilterToSQLFactory
- ConnectionProvider
- ...

So AbstractJDBCDataStore is general enough to let a developer assemble an
implementation for a particular database from different factories and
builders that customize the framework for that database.
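
For illustration, a concrete Oracle datastore could wire these together
roughly like this (a hypothetical sketch: the setter names just mirror the
get/set description above, and the Oracle* implementation classes are
assumed, so this is not compilable as-is):

    // Hypothetical wiring of an Oracle-specific datastore.
    public class OracleDataStore extends AbstractJDBCDataStore {
        public OracleDataStore() {
            setOperationFactory(new OracleOperationFactory());     // SQL operations
            setFeatureTypeBuilder(new OracleFeatureTypeBuilder()); // schema handling
            setFIDMapperFactory(new OracleFIDMapperFactory());     // FID <-> primary key
            setFilterToSQLFactory(new OracleFilterToSQLFactory()); // Filter -> WHERE clause
            setConnectionProvider(new OracleConnectionProvider()); // transaction-aware connections
        }
    }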

The SQL handling is kept well away from the
FeatureStore/FeatureSource/DataStore implementations; it lives in the
"operations" part of the framework.

org.geotools.data.jdbc.prepared - a general implementation of the JDBC
operations based on JDBC prepared statements.
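
For example, an INSERT over a prepared statement boils down to something
like this (a simplified, self-contained sketch; the class name and the way
the values arrive are illustrative):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Simplified sketch of a prepared-statement INSERT operation.
    class PreparedInsertOperation {
        private final String sql;      // e.g. "INSERT INTO roads (name, geom) VALUES (?, ?)"
        private final Object[] values; // attribute values already converted to JDBC types

        PreparedInsertOperation(String sql, Object[] values) {
            this.sql = sql;
            this.values = values;
        }

        void execute(Connection connection) throws SQLException {
            PreparedStatement stmt = connection.prepareStatement(sql);
            try {
                for (int i = 0; i < values.length; i++) {
                    stmt.setObject(i + 1, values[i]); // JDBC parameters are 1-based
                }
                stmt.executeUpdate();
            } finally {
                stmt.close();
            }
        }
    }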

org.geotools.data.jdbc.type - a package with classes that handle data
type conversion (AttributeType of the feature <-> JDBC type).
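
The core contract of such a handler can be pictured like this (the names
are illustrative, not the actual classes in the package):

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Illustrative contract: convert between a feature attribute's
    // Java type and its JDBC representation, in both directions.
    interface JDBCTypeHandler<T> {
        // write an attribute value into a statement parameter
        void setValue(PreparedStatement stmt, int index, T value) throws SQLException;
        // read a column value back as the attribute's Java type
        T getValue(ResultSet rs, int index) throws SQLException;
    }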

JDBCConnectionProvider extracts a physical Connection with respect to the
passed transaction object. It does not replace the work I see in the H2
module of GeoTools 2.5: what has been done in the H2 module on obtaining
physical connections from pools should be integrated into this framework
to improve it. JDBCConnectionProvider itself works with already created
and cached Connections, keyed by transaction.
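
The contract is essentially this (the method name is my assumption):

    import java.sql.Connection;
    import java.sql.SQLException;
    import org.geotools.data.Transaction;

    // Sketch: hand out the Connection cached for the given transaction
    // (or an auto-commit Connection for Transaction.AUTO_COMMIT);
    // the provider does not manage pools itself.
    interface JDBCConnectionProvider {
        Connection getConnection(Transaction transaction) throws SQLException;
    }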


For convenience I merged the UDIG UI and the GeoTools Oracle plugin, with
all my changes, into one UDIG plugin for testing. To run it, Oracle's JDBC
driver (ojdbc14.jar) must be placed manually into the lib folder.


Some classes are almost full copies from GeoTools trunk, with some changes
required by the framework. The idea is to get this working first, and
then, if this framework proves useful for the next generation of the JDBC
architecture, port the improvements back to GeoTools trunk.


Originally this framework and the Oracle implementation were developed for
UDIG 1.1 + GeoTools 2.2.x, but what is available in SVN is a port to
GeoTools trunk with the latest interface changes (such as FilterToSQL
instead of SQLEncoder). That is why some things were ported in a dirty way
- just forced to work - and some work is still needed to bring everything
up to date. I am able to continue this work for the community in my free
time.

What else... maybe more later. Jody is perhaps the person who could look
into this, comment, or propose the next steps.

I tested UDIG trunk + GeoTools trunk + Oracle Spatial 10g, which is used
in production right now in projects for our customers. Of course, getting
the full functionality of INSERT/UPDATE queries requires some steps on the
client side, outside the GeoTools framework, mostly related to tuning the
feature type model so that valid Feature instances are created with
attribute values acceptable under the various column restrictions in the
database data model.

The operations part of the framework is optimized for performance, not
validation: our project was implemented in such a way that we guarantee
the Feature instances coming into JDBCFeatureStore are valid and conform
to the database data model, so there is no complex validation - only
binding of the Feature object to a SQL query and execution. This is a
blank spot in the operations framework right now: it assumes that features
are valid, that their feature types match the database data model, that
attribute values are valid, and so on.


SVN:
http://svn.geotools.org/udig/community/vitalus/org.geotools.data.oracle



Vitali.


