On Mon, Nov 24, 2008 at 11:58 AM, matt <[EMAIL PROTECTED]> wrote:

>
>  That's exactly what constraints, rules in SQL etc are for. Maybe some
>> similar ruling system for filesystems would be fine :)
>> (any suggestions ?)
>>
>>
> That's what I was driving at. To map SQL <> FS you end up replicating lots
> of SQL logic in your client FS. Reads are *always* out of date. Writes can
> happen across tables but need to be atomic and able to roll back.
>

Probably I was not clear about what I have in mind.

I think that rebuilding a relational database on top of a filesystem is
(quite) pointless.


What I'm proposing is to design/develop an interface for interacting with
(any?) RDBMS through a filesystem.

A kind of "proxy" to the db with a filesystem interface.

A "draft" could be (even if I've already found some problems in it):


>    - a "ctrl" file which accepts only COMMIT, ROLLBACK and ABORT
>    - an "error" file
>    - a "notice" file (postgresql has RAISE NOTICE... maybe others have it
>    too)
>    - a "data" file (append-only inside the transaction, but not outside)
>    where the INSERT, UPDATE, DELETE and all the DDL and DCL commands will
>    be written
>    - each SELECT has to be written to a different file (named
>    sequentially): after writing the query, the same file can be read to get
>    the results (in XML...)
>    - on errors, subsequent writes fail and the "error" file becomes readable
>
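To make the draft concrete, here is a rough simulation of the proposed
per-transaction directory in Python over SQLite (the class name, its methods,
and the XML row format are my own assumptions; a real implementation would be
a fileserver speaking 9P, not a library):

```python
import sqlite3

class TxDir:
    """Simulates the proposed transaction directory: writes to 'data'
    run statements, writes to 'ctrl' end the transaction, and the last
    error message is kept in 'error'."""

    def __init__(self, dsn=":memory:"):
        # one backend connection per transaction directory
        self.conn = sqlite3.connect(dsn)
        self.error = ""
        self.failed = False

    def write_data(self, stmt):
        # append-only inside the transaction; after an error,
        # subsequent writes fail until a ROLLBACK on 'ctrl'
        if self.failed:
            raise IOError("transaction aborted; read 'error' and ROLLBACK")
        try:
            self.conn.execute(stmt)
        except sqlite3.Error as e:
            self.failed = True
            self.error = str(e)
            raise

    def write_ctrl(self, cmd):
        # 'ctrl' accepts only COMMIT, ROLLBACK and ABORT
        if cmd == "COMMIT":
            self.conn.commit()
        elif cmd in ("ROLLBACK", "ABORT"):
            self.conn.rollback()
            self.failed = False
            self.error = ""
        else:
            raise ValueError("ctrl accepts only COMMIT, ROLLBACK, ABORT")

    def select(self, query):
        # each SELECT gets its own file; reading it back yields XML rows
        cur = self.conn.execute(query)
        cols = [d[0] for d in cur.description]
        rows = ("<row>" + "".join(
            "<%s>%s</%s>" % (c, v, c) for c, v in zip(cols, r)
        ) + "</row>" for r in cur)
        return "<result>%s</result>" % "".join(rows)
```

Usage would mirror the file protocol: write statements to the data file,
COMMIT via ctrl, then read a SELECT file back as XML.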
> The problems:

   - the transaction -> directory mapping requires creating a new connection
   to the backend (if I'm right in thinking that transactions are
   connection-wide)
   - the XML output of fetchable results (SELECT, FETCH, SHOW...) requires a
   tool to easily query such output. It seems Plan 9 lacks such a tool;
   xmlfs is currently unsuitable.
   (I'm thinking of an xpath command accepting XML on stdin and the XPath
   query as an argument, returning the results on stdout)
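The xpath command I have in mind could be sketched like this in Python
(ElementTree only supports a limited XPath subset, and the script name and
behavior are my assumptions, not an existing tool):

```python
import sys
import xml.etree.ElementTree as ET

def xpath(xml_text, expr):
    """Return the text of every element matching expr
    (limited XPath subset supported by ElementTree)."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall(expr)]

if __name__ == "__main__":
    # usage: xpath 'QUERY' < results.xml
    for text in xpath(sys.stdin.read(), sys.argv[1]):
        print(text)
```

So reading a SELECT file and piping it through `xpath './/name'` would print
one matching value per line.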


Giacomo
