On Mon, Sep 1, 2014 at 7:00 PM, Craig Ringer <cr...@2ndquadrant.com> wrote:
> On 09/02/2014 12:50 AM, Dobes Vandermeer wrote:
> > Hmm yes I am learning that the BG worker system isn't as helpful as I
> > had hoped due to the single database restriction.
> > As for a writing a frontend this might be the best solution.
> > A java frontend would be easy but pointless because the whole point here
> > is to provide a lightweight access method to the database for
> > environments that don't have the ability to use the jdbc or libpq
> > libraries. Deploying a java setup would be too much trouble.
> If you can't run libpq, you can't run *anything* really, it's very
> lightweight. I think you misunderstood what I was saying; I'm talking
> about it acting as a proxy for HTTP-based requests, running on or in
> front of the PostgreSQL server like a server-side connection pool would.
I was just referring to environments that don't have a binding to libpq
or JDBC; for example, node.js had no PostgreSQL client for a long time, so I
didn't use PostgreSQL when I worked in node.js.
> Same idea as PgBouncer or PgPool. The advantage over hacking
> PgBouncer/PgPool for the job is that Tomcat can already do a lot of what
> you want using built-in, pre-existing functionality. Connection pool
> management, low level REST-style HTTP processing, JSON handling etc are
> all done for you.
Yeah, those are nice conveniences, but I still think installing Java and
getting something to run on startup is a bit more of a hurdle. Better to make
life easier up front with a simple standalone proxy you can compile
and run with just whatever is already available on a typical AWS Ubuntu
instance.
> > A C frontend using libevent would be easy enough to make and deploy for
> > this I guess.
> > But... Maybe nobody really wants this thing anyway, there seem to be
> > some other options out there already.
> It's something I think would be interesting to have, but IMO to be
> really useful it'd need to support composing object graphs as json, a
> json query format, etc. So you can say "get me this customer with all
> their addresses and contact records" without having to issue a bunch of
> queries (round trips) or use ORM-style left-join-and-deduplicate hacks
> that waste bandwidth and are messy and annoying.
If the SQL query outputs rows with ARRAY- and JSON-typed columns, that may
be sufficient to construct whatever JSON structure you want for the query
result. I'm not sure why ORMs don't take better advantage of this; maybe
they're too focused on cross-database portability, or maybe the feature
isn't as powerful as I think it is?
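As a sketch of what I mean (the customer/address schema here is made up),
a single query can already return each customer with their addresses nested
as a JSON array, using aggregates that have been in PostgreSQL since 9.2/9.3:

```sql
-- Hypothetical schema: customer(id, name), address(customer_id, street, city).
-- json_agg collapses each customer's addresses into one JSON array column,
-- so the client gets a ready-made object graph in one round trip, with no
-- left-join-and-deduplicate step needed on the client side.
SELECT c.id,
       c.name,
       json_agg(row_to_json(a)) AS addresses
FROM customer c
LEFT JOIN address a ON a.customer_id = c.id
GROUP BY c.id, c.name;
```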
PostgreSQL also allows you to query and index fields inside a json
value, so at least initially you can get all this power without inventing
any new query language.
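For instance (table and field names here are invented for illustration),
you can filter on a field inside a json column and back it with an
expression index:

```sql
-- Hypothetical table: events(id serial, body json).
-- The ->> operator extracts a field as text, so it can be used in WHERE:
SELECT id FROM events WHERE body->>'type' = 'signup';

-- An expression index on the same extraction makes the lookup cheap:
CREATE INDEX events_body_type_idx ON events ((body->>'type'));
```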
A translator could be added later, something ORM-ish, that might be less
cluttered than raw SQL because it could use some shorthand for peeking
inside the JSON structures.
> Close care to security and auth would also need to be taken. You don't
> want to be sending a username/password with each request; you need a
> reasonable authentication token system, request signing to prevent
> replay attacks, idempotent requests, etc.
Well, these would be needed for use cases where the DB is exposed to
untrusted parties, which has never been the case on projects I've worked
on. I wouldn't be against these sorts of improvements if people want to
make them, but they wouldn't matter much to me. I was hoping to re-use
PostgreSQL's built-in password/ident authentication.
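Concretely, I imagine the proxy would just forward whatever credentials it
receives and let pg_hba.conf decide, something like (host address invented
for illustration):

# Hypothetical pg_hba.conf entries: the proxy host authenticates with md5
# passwords; local tools keep using ident. The proxy adds no auth layer of
# its own, it relies entirely on what PostgreSQL already provides.
host    all    all    10.0.0.5/32    md5
local   all    all                   ident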