I am new to this, but are these issues the result of trying to get SQLite to
do something it is not designed for? I quote the book:

The Definitive Guide to SQLite - Chapter 1 --- Networking
" .... Again, most of these limitations are intentional—they are a result of 
SQLite’s
design. Supporting high write concurrency, for example, brings with it great
deal of complexity and this runs counter to SQLite’s simplicity in design.
Similarly, being an embedded database, SQLite intentionally does
__not__support__networking__ [my emphasis].  This should come as no surprise.
In short, what SQLite can’t do is a direct result of what it can. It was
designed to operate as a modular, simple, compact, and easy-to-use embedded
relational database whose code base is within the reach of the programmers
using it. And in many respects it can do what many other databases cannot, such
as run in embedded environments where actual power consumption is a limiting
factor. "
------
Is it really a good idea to network a database that relies on the OS file
system like this? Is it ever going to be safe enough?
--------------------
David M X Green

|||"Alex Roston" (2007-02-02 20:05) wrote: |||>>>
Scott Hess wrote:
On 2/2/07, Dennis Cote <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] wrote:
> The problem is, not many network filesystems work correctly.

I'm sure someone knows which versions of NFS have working file locking,
at least under Linux.

I doubt it is that easy.  You need to line up a bunch of things in the
right order: the right versions of NFS, the locking services, perhaps
the right kernel versions, the right config, etc, etc.

IMO the _real_ solution would be a package you could use to verify
whether the system you have is actually delivering working file
locking.  Something like diskchecker (see
http://brad.livejournal.com/2116715.html).  The basic idea would be to
have a set of networked processes exercising the locking APIs and looking for
discrepancies.  Admittedly, passing such a test only gets you a
statistical assurance (maybe if you'd run the test for ten more
minutes, or with another gig of data, it would have failed!), but
failing such a test is a sure sign of a problem.

-scott
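
A minimal sketch of the kind of checker Scott describes, assuming POSIX
fcntl-style locking (the file layout, arguments, and timing here are
illustrative, not part of any existing tool): run one copy per client
against the same file on the shared mount; each client repeatedly locks a
counter file, increments it, and unlocks. If locking works, the final
counter must equal the total iterations across all clients; anything less
means increments were lost and locking is broken.

#!/usr/bin/env python3
# lockcheck.py -- hypothetical lock checker; run on each client as:
#   python3 lockcheck.py /mnt/nfs/lockcheck.dat 10000
import fcntl
import os
import struct
import sys
import time

path = sys.argv[1]              # shared file on the network mount
iterations = int(sys.argv[2])   # increments this client will attempt

fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)

for _ in range(iterations):
    fcntl.lockf(fd, fcntl.LOCK_EX)   # should block all other clients
    os.lseek(fd, 0, os.SEEK_SET)
    raw = os.read(fd, 8)
    count = struct.unpack("<Q", raw)[0] if len(raw) == 8 else 0
    time.sleep(0.001)                # widen the race window if the lock lies
    os.lseek(fd, 0, os.SEEK_SET)
    os.write(fd, struct.pack("<Q", count + 1))
    os.fsync(fd)                     # force the write to the server
    fcntl.lockf(fd, fcntl.LOCK_UN)

os.close(fd)
print("done; compare the final counter to the sum of all clients' iterations")

As Scott notes, a clean run is only statistical assurance, but a lost
increment is hard proof that the filesystem's locking is unsafe for SQLite.
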
That's a really useful idea, not only for itself, but also because it might lead to debugging some of the network issues, and allowing the developers to build a database of stuff that works: "Use Samba version foo, with patch bar, and avoid the Fooberry 6 network cards." Or whatever.

My suspicion, in the earlier case with Windows and Linux clients, is that Windows didn't handle the locking correctly, and that would be worth proving or disproving too.

An alternate approach is to use something a little more like a standard client-server model, where a "server" program sits between (possibly multiple) workstations and the database itself. The "server" would queue requests to the database, make sure that no more than one write request at a time reaches the database, and certify that writes have been properly made.

The problem with this approach is that it eats quite heavily into SQLite's speed advantage, but if you've already put thousands of hours into developing your system, it might be a worthwhile hack.

Alex
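
A minimal sketch of the serializing "server" Alex describes, assuming a
deliberately naive wire protocol (one SQL statement per connection, no
parameters, no authentication; the port and database path are illustrative):

#!/usr/bin/env python3
# writeserver.py -- the one process that owns the SQLite file.
import socketserver
import sqlite3

DB_PATH = "app.db"   # illustrative path; the server holds the sole connection

class WriteHandler(socketserver.StreamRequestHandler):
    def handle(self):
        sql = self.rfile.readline().decode("utf-8").strip()
        try:
            # The default TCPServer is single-threaded, so requests are
            # handled one at a time and writes are serialized for free.
            with self.server.db:          # commits on success, rolls back on error
                self.server.db.execute(sql)
            self.wfile.write(b"ok\n")
        except sqlite3.Error as exc:
            self.wfile.write(("error: %s\n" % exc).encode("utf-8"))

if __name__ == "__main__":
    server = socketserver.TCPServer(("0.0.0.0", 7777), WriteHandler)
    server.db = sqlite3.connect(DB_PATH)  # only connection to the file
    server.serve_forever()

Because the server handles one connection at a time, no explicit queue is
needed, and the "ok" reply is only sent after the transaction commits, which
gives clients the certification of completed writes that Alex mentions. The
cost is the extra round trip per write, which is exactly the speed trade-off
he points out.
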
