Last one about the serialization subsystem.

The idea of storing all the elements in text/binary blobs is not new
(DeeGree actually does that to handle complex features, only making
explicit those attributes that the admin thinks are going to be used
as filters), and I fully agree it maps well to key/value NoSQL
databases, as well as any database that has first class support for
XML and JSON and allows direct searches on it.

Doing that in a relational DBMS is clunky though, for a variety of
reasons, but I agree it's a quick way to get a RDBMS-backed catalog
working (though... say goodbye to spatial filtering, which is not good
at all).

Ah, about the ability to filter and me not seeing PredicateToSQL:
I actually saw it and went over it a few times; what I did not
see was the exporting of attributes into the second table and their
usage while building the SQL.

In my defense, how could one guess that the "relations" table actually
holds attributes exported from a bean? The naming makes no sense to me:

CREATE TABLE CATALOG (id VARCHAR(100) PRIMARY KEY, blob bytea);

CREATE TABLE DEFAULTS (id VARCHAR(256) PRIMARY KEY, type VARCHAR(100));

CREATE TABLE RELATIONS (id VARCHAR(100), type VARCHAR(50),
relation VARCHAR(256), value VARCHAR(4096));

The code of PredicateToSQL is rather cryptic and does a lot of string
manipulation; a standard OGC filter encoder actually seems simpler.
Now that I know it's exporting attributes and encoding searches on them,
I still have a hard time imagining what the actual query to the database
looks like, though it seems to be something with one subquery for each
queried property, something like
"select blob from catalog where id in (subquery1) or id in (subquery2) or
..."
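
Just to check my understanding, here is a minimal sketch of what I
imagine such a query could expand to, say for a search on two exported
properties (the property names and values are made up for the sake of
the example, they are not necessarily what the module actually emits):

select blob from catalog
where id in (select id from relations
             where relation = 'name' and value = 'topp:states')
   or id in (select id from relations
             where relation = 'workspace.name' and value = 'topp');

If that is roughly it, I guess an index on relations(relation, value)
becomes pretty much mandatory to avoid one full scan per filtered
property.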

However, if it works, it works; the problematic point will be selling it
to the DB admin that manages the database cluster sitting behind
GeoServer in a HA setup, and to the people wanting to interact with the
database.

Now, you might rightfully say that you don't care and that they are free
to implement their own.
On the other hand, I believe the current in-memory catalog will keep on
fitting the needs of most simple installations (100-1000 layers).
Installations that have a lot more layers and are set up for
multitenancy will likely be fully HA and have the dreaded DB admin
looking at the schema and indexes.
People that want to publish new layers and hope to use the familiar SQL
commands will also be disappointed, as we will point them to the REST
config instead.

Which leaves the proposed implementations for those green field
installations, or for places where the DB admin actually does not mind
seeing the database used as a key/value store... that is, what looks
like a relatively narrow use case.

Justin makes a good point about transaction handling in a
servlet/dispatcher filter; I agree it's a good idea, something that we
should add and that would also reduce to zero the need for the existing
config locks. How hard would it be to wire the catalog transaction
handling with the typical Spring filters for transaction management?
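
To make the question a bit more concrete, this is roughly the kind of
filter I have in mind; mind you, CatalogTransactionManager and its
begin/commit/rollback are made up names for the sake of the sketch, not
existing GeoServer API, and the filter would presumably be registered
through Spring's DelegatingFilterProxy so that the manager can be
injected:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Hypothetical interface, only here to make the sketch self contained
interface CatalogTransactionManager {
    void begin();
    void commit();
    void rollback();
}

// Sketch of a dispatcher level filter wrapping each request in a
// catalog "transaction"
public class CatalogTransactionFilter implements Filter {

    private final CatalogTransactionManager txManager;

    public CatalogTransactionFilter(CatalogTransactionManager txManager) {
        // injected by Spring, e.g. when registered via DelegatingFilterProxy
        this.txManager = txManager;
    }

    public void init(FilterConfig config) throws ServletException {
        // nothing to do, dependencies are injected
    }

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        txManager.begin();
        boolean success = false;
        try {
            chain.doFilter(request, response);
            success = true;
        } finally {
            // commit the catalog changes on success, roll them back otherwise
            if (success) {
                txManager.commit();
            } else {
                txManager.rollback();
            }
        }
    }

    public void destroy() {
        // nothing to clean up
    }
}

If begin/commit/rollback could simply delegate to a Spring
PlatformTransactionManager for the jdbc backed catalog, we would get the
usual transactional semantics more or less for free.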

Long story short, the persistent implementations are good community
modules meant to demonstrate the feasibility of doing secondary storage
catalog implementations.
I'm not happy about how the jdbc one looks, but they serve the need of
showing multiple implementations just fine.

Cheers
Andrea