"stephen liu" <stephen....@gmail.com> schrieb
im Newsbeitrag
news:e5a33d590906140609h4f3f35fah9083be3560f5d...@mail.gmail.com...
http://code.google.com/p/sphivedb/

> SPHiveDB is a server for SQLite databases. It uses JSON-RPC over
> HTTP to expose a network interface to the SQLite databases.
> It supports combining multiple SQLite databases into one file.
> It also supports the use of multiple files (through Tokyo Cabinet).
> It is designed for the extreme sharding schema -- one SQLite
> database per user. It supports automatic synchronization
> of the database structure based on a create table statement.

Interesting approach - such a mem-vfs offers some nice
options on the server side, once it is done.

Have you already tested how the whole approach behaves
in concurrency scenarios, e.g. with only 4-8 concurrent clients?
With your schema, the concurrent writes especially should get
a nice boost.

I imagine a simple script, executed by each client
process, which fills a fresh DB up to a certain number of
records, maybe in a single table (to keep the test scenario simple).
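A minimal driver for that idea could look like the sketch below - here in Python against a plain SQLite file as a stand-in for whatever backend (SPHiveDB over JSON-RPC, PostgreSQL, ...) is actually under test; the table name and the trivial per-client job are illustrative assumptions only:

```python
import os
import sqlite3
import tempfile
import threading
import time

def client_job(db_path, client_id):
    # Stand-in for the per-client fill-up script. Each client opens its
    # own connection; a generous busy timeout lets writers wait for the
    # file lock instead of failing with SQLITE_BUSY.
    conn = sqlite3.connect(db_path, timeout=30)
    conn.execute("INSERT INTO t (client) VALUES (?)", (client_id,))
    conn.commit()
    conn.close()

def run_stress(n_clients=4):
    # Fresh DB file, shared by all concurrent clients.
    db_path = os.path.join(tempfile.mkdtemp(), "stress.db")
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, client INTEGER)")
    conn.commit()
    conn.close()

    t0 = time.time()
    workers = [threading.Thread(target=client_job, args=(db_path, i))
               for i in range(n_clients)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Timing is taken when the last client returns.
    elapsed = time.time() - t0

    conn = sqlite3.connect(db_path)
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return count, elapsed
```

Threads here stand in for the separate client processes of the real test; the point is only that every client gets its own connection and the clock stops with the last one returning.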

Each client could do (with a 1:10 ratio of writes to reads):

Loop 1000 times
    With LoopCounter Modulo 7 (case-switch)
        case 0: Insert one single record
        case 1: Insert 10 records
        case 2: Insert 100 records
        case 3: Update one single record
        case 4: Update 10 affected records
        case 5: Update 100 affected records
        case 6: Delete one single record
    End of WritePart

    Followed by 10 different read jobs,
    each requesting a completely delivered
    result set, based on different where-clauses, etc.,
    reporting only the record count to the console or
    into a file, together with the current timing

    (maybe fill up two different tables - and also include
     some Joins for the read-direction-jobs)
End loop

With one case per iteration of the 7-way switch, this ends
up with roughly 15,700 new records per client in the DB
(or just redefine the loop-counter for more).
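The write/read loop above can be sketched as follows - again with plain sqlite3 as a stand-in backend, and with illustrative where-clauses and update targets, since the original only specifies the shape of the workload. Over 1000 iterations each of cases 0-2 runs 143 times (143 * 111 = 15,873 inserts) and case 6 runs 142 times, leaving 15,731 rows per client:

```python
import sqlite3

def run_client(conn, loops=1000):
    # Per-client mixed workload: one write case per iteration (7-way
    # switch on the loop counter), followed by 10 reads that report
    # only a record count.
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS t "
                "(id INTEGER PRIMARY KEY, val INTEGER)")
    for i in range(loops):
        case = i % 7
        if case == 0:    # insert one single record
            cur.execute("INSERT INTO t (val) VALUES (?)", (i,))
        elif case == 1:  # insert 10 records
            cur.executemany("INSERT INTO t (val) VALUES (?)", [(i,)] * 10)
        elif case == 2:  # insert 100 records
            cur.executemany("INSERT INTO t (val) VALUES (?)", [(i,)] * 100)
        elif case == 3:  # update one single record
            cur.execute("UPDATE t SET val = val + 1 "
                        "WHERE id = (SELECT MIN(id) FROM t)")
        elif case == 4:  # update 10 affected records
            cur.execute("UPDATE t SET val = val + 1 "
                        "WHERE id IN (SELECT id FROM t LIMIT 10)")
        elif case == 5:  # update 100 affected records
            cur.execute("UPDATE t SET val = val + 1 "
                        "WHERE id IN (SELECT id FROM t LIMIT 100)")
        else:            # case 6: delete one single record
            cur.execute("DELETE FROM t WHERE id = (SELECT MIN(id) FROM t)")
        # 10 reads per write step, with varying where-clauses,
        # reporting only the record count.
        for n in range(10):
            cur.execute("SELECT COUNT(*) FROM t WHERE val % ? = 0", (n + 1,))
            cur.fetchone()
    conn.commit()
    cur.execute("SELECT COUNT(*) FROM t")
    return cur.fetchone()[0]
```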

A final job for each client would then be a well-matching
single-record select with appropriate aggregates over the
data currently contained in the DB(-tables).

The last returning client should then always deliver the
same (correct) results - the timing is then taken from this
last returning client.
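That verification step could be as small as the following sketch - every client runs the same aggregate query over the shared table and the tuples are compared afterwards (table and column names are illustrative assumptions):

```python
import sqlite3

def final_check(conn):
    # Final per-client job: a single-row aggregate select over the
    # data currently in the table; every client must return the
    # identical tuple for the run to count as correct.
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*), MIN(val), MAX(val), SUM(val) FROM t")
    return cur.fetchone()
```

Afterwards the driver would compare all results, e.g. `assert all(r == results[0] for r in results)`, and take the timing from the last client to return.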

Something like that is of course a very simplified
concurrency test scenario - a result comparison would
be interesting nonetheless - and such a small schema
shouldn't be that difficult to implement across different
languages (working against different backends).

Maybe there's already something simple available
that defines easy-to-follow stress-test requirements
which one could implement without too much effort
(maybe something like that is already online for
 PostgreSQL).

I would be interested in such a simple(r) concurrency-
test definition, in case there already is one ... does
anybody have a link for me?

Olaf



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
