>This seems inconsequential

>Huh?  If you are making it available as a service then you have to care
>about authentication.  And identity - how do you tell users apart and keep
>their databases separate?  How will you deal with attacks from malicious
>users?  How will you add a security model to stop people from attaching
>random files, using too much memory or CPU?  What about audit trails so
>that if an account is hacked you can tell what the bad guys did?  And what
>about SQLite's model where there are no per-table permissions, so a
>connection with access can do anything to any data in the database,
>and to anything else it can attach?
--

I was trying to say that I don't believe this has anything to do with
SQLite specifically; you would have to deal with these issues
regardless of which solution is selected. (Although users "sharing" a
single database is out of the question, so there are some caveats to
SQLite working in this manner.)
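
For what it's worth, Python's sqlite3 module exposes SQLite's
authorizer hook, so a per-connection policy could at least deny ATTACH
and keep each user in their own file. A rough sketch of what I have in
mind (the path scheme and names are just illustrative):

    import sqlite3

    # Deny-by-default list: refuse ATTACH/DETACH and PRAGMA so a request
    # can only touch the one per-user database it was handed.
    DENIED_ACTIONS = {sqlite3.SQLITE_ATTACH, sqlite3.SQLITE_DETACH,
                      sqlite3.SQLITE_PRAGMA}

    def authorizer(action, arg1, arg2, db_name, trigger):
        if action in DENIED_ACTIONS:
            return sqlite3.SQLITE_DENY
        return sqlite3.SQLITE_OK

    def open_user_db(user_id):
        # One database file per user keeps identities and data separate
        # (assume user_id has already been validated upstream).
        conn = sqlite3.connect("/srv/dbs/%s.db" % user_id)
        conn.set_authorizer(authorizer)
        return conn

It doesn't answer auditing or resource limits, but it shows the kind
of per-connection fencing that is possible.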

--
>Everything has latency. I'm not sure where you're going with this.

>In the SQLite API you ask for result rows one at a time.  If it takes 25ms
>round trip time to ask for a row then there is no way to get more than 40
>rows per second.

>If you are using a longer distance network as clarified then you'll have
>to batch up result rows but this will present difficulty with transactions
>and locking (do you keep the db open between requests, how do you ensure
>continuing requests hit the same process).

>When using SQLite in process there is essentially no latency as there
>isn't even a context switch.
--


That is a good point; I will need to take it into consideration.
Multiple queries, and queries that depend on one another, are the
parts I have been struggling to design an implementation for. It
presents an interesting problem.
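
The only shape I can see working is one HTTP request per statement,
executed and fully drained on the server so that no transaction or
open cursor ever spans two requests. A minimal sketch (MAX_ROWS and
the JSON shape are placeholders I made up):

    import json
    import sqlite3

    MAX_ROWS = 1000  # arbitrary cap so one request can't stream forever

    def run_statement(db_path, sql, params=()):
        # Open, execute, and fetch within a single request; the per-row
        # latency stays in-process and nothing is held open afterwards.
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute(sql, params)
            rows = cur.fetchmany(MAX_ROWS)
            conn.commit()
            return json.dumps({"rows": rows,
                               "maybe_truncated": len(rows) == MAX_ROWS})
        finally:
            conn.close()

That sidesteps the per-row round trips, but it still leaves
multi-statement transactions unsolved, which is exactly the part I'm
stuck on.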

--
>You came off as a prick, but maybe I didn't offer enough information...
>Or my question is not intelligent enough for you…

>I suggest reading the whole thing, but this is the pertinent part:

>http://catb.org/esr/faqs/smart-questions.html#keepcool

My apologies, I just got put off by the bluntness of your response.

Ryan Macy

On 1/25/12 11:11 PM, "Roger Binns" <rog...@rogerbinns.com> wrote:

>On 25/01/12 19:52, Ryan Macy wrote:
>> I used API generically (not necessarily SQLites API)
>
>You can't use SQLite's API although it wasn't clear you realised that!
>
>> I am creating an application in python that uses an RESTful API to
>> allow the user connect to my service and submit statements. It will
>> return the result in JSON or XML. I would think I could spawn a new
>> SQLite "instance" for each database the user creates. More or less I
>> wanted to see if anyone else has successfully accomplished this or has
>> pondered the plausibility of this scenario.
>
>If you do that then the database being SQLite is irrelevant.  Any database
>will work.  Unless there is almost no load, though, you'll want to use a
>server whose operation more clearly matches the usage model.  You should
>note that existing database APIs in web servers have to provide a lot of
>extra functionality and details like connection pooling, timeouts, caching
>etc.
>
>SQLite doesn't work with networked file systems (see the FAQ) so for
>practical purposes you'll only be able to run this on one server.  Not
>much of a service then!
>
>> This seems inconsequential
>
>Huh?  If you are making it available as a service then you have to care
>about authentication.  And identity - how do you tell users apart and keep
>their databases separate?  How will you deal with attacks from malicious
>users?  How will you add a security model to stop people from attaching
>random files, using too much memory or CPU?  What about audit trails so
>that if an account is hacked you can tell what the bad guys did?  And what
>about SQLite's model where there are no per-table permissions, so a
>connection with access can do anything to any data in the database,
>and to anything else it can attach?
>
>> Everything has latency. I'm not sure where you're going with this.
>
>In the SQLite API you ask for result rows one at a time.  If it takes 25ms
>round trip time to ask for a row then there is no way to get more than 40
>rows per second.
>
>If you are using a longer distance network as clarified then you'll have
>to batch up result rows but this will present difficulty with transactions
>and locking (do you keep the db open between requests, how do you ensure
>continuing requests hit the same process).
>
>When using SQLite in process there is essentially no latency as there
>isn't even a context switch.
>
>> You came off as a prick, but maybe I didn't offer enough information...
>> Or my question is not intelligent enough for you…
>
>I suggest reading the whole thing, but this is the pertinent part:
>
>  http://catb.org/esr/faqs/smart-questions.html#keepcool
>
>Roger

