On Tue, Apr 01, 2014 at 01:08:59PM +0000, Harmen de Jong - CoachR Group B.V. 
wrote:
> We have built our own SQLite database server application and are considering 
> making this open source. Since there will be some time involved in making 
> it an open source project and maintaining it, we would first like to make an 
> inventory to find out if there is any interest in this server application. 
> 
> 
> ==> Advantages and features of this server application:
> 
> * Semi-automatically takes care of concurrent writes (1*).
> * Very stable server application (eliminates long startup times after a 
> client application crash with large DBs).
> * Database connections are cached.
> * Multiple parameter sets to prepare a query and execute it multiple times 
> with a different parameter set.
> * The result of a previous select query can be used in the next query.
> * Auto-checkpointing is disabled and manual checkpointing is done in a 
> separate thread, preventing transaction commits (read and write) from 
> occasionally being slowed down by a checkpoint event.
> * Built-in LZMA2 compressed and SHA256 encrypted backup and restore.
> * Built-in asynchronous 'continuous data protection' (real-time backup) to 
> a secondary location (4*).
> * Built-in slow query log and email notifications (2* 3*).
> * Built-in query error log and email notifications (3*).
> 
> 1.) A programmer can mark a transaction to be splittable and set splitting 
> points. If set, the server application will automatically split write queries 
> that take longer than the time set in milliseconds, thus allowing other write 
> queries to be executed in between.


Is this level of complexity worth it? If you really have such long transactions 
that you need this level of detail, perhaps a more concurrent database server 
would suit better?
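For comparison, the usual manual workaround is to chunk a large write into 
short transactions yourself, so other writers can interleave between batches. 
A Python sketch against plain sqlite3 (the session table and lastclicktime 
column just mirror the delete example from the original post):

```python
import sqlite3

# Sketch: delete expired rows in small batches so each write
# transaction stays short and other writers can run in between.
def chunked_delete(conn, cutoff, batch=500):
    total = 0
    while True:
        with conn:  # each batch is its own short transaction
            cur = conn.execute(
                "DELETE FROM session WHERE rowid IN ("
                "  SELECT rowid FROM session"
                "  WHERE lastclicktime < ? LIMIT ?)",
                (cutoff, batch))
        total += cur.rowcount
        if cur.rowcount < batch:
            return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE session (id INTEGER, lastclicktime INTEGER)")
conn.executemany("INSERT INTO session VALUES (?, ?)",
                 [(i, i) for i in range(1000)])
deleted = chunked_delete(conn, 800)
print(deleted)  # -> 800
```

The server-side splitting described above presumably automates something like 
this, with the split points chosen by the programmer.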


> 2.) The application can automatically log slow queries (that exceed the set 
> maximum query execution time) and send notifications by email. Maximum query 
> execution time can be set separately for read and write queries.

That's a good plan. I like this idea.


> 3.) All logging is done into separate databases, to prevent eating up costly 
> writing time from the main database.
> 4.) When a full backup has been made successfully, the real-time backup is 
> cleared. 

How much overhead does this add? Is this a high availability feature?


> 
> 
> ==> Cons:
> 
> * It has been built as a Windows service and therefore only runs on Microsoft 
> platforms

Boo. I'm sure we can "fix" that.


> * Transactions that cannot be split will block all writers.


For a typical SQLite use case, this is probably not a problem, else you'd be 
using a more scalable database.


> 
> 
> ==> How it works:
> 
> Clients can communicate with this server over TCP/IP sockets. Queries are 
> submitted and returned in XML format (BLOB results are returned in binary 
> format to prevent CPU intensive encoding and decoding). The server 
> application is written in native Visual C++ (without using MFC). If we were 
> to make this project open source, we would also include a client example.


I dare say that parsing and encoding XML would be more processor intensive than 
BLOB encoding.
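A quick back-of-the-envelope check supports this. The payload size below is 
made up, but parsing a value out of an XML envelope costs far more CPU than 
handing over the same bytes raw:

```python
import timeit
import xml.etree.ElementTree as ET

payload = b"\x00" * 65536  # a 64 KiB "BLOB"
xml_doc = (b"<result><row><col><![CDATA["
           + b"x" * 65536
           + b"]]></col></row></result>")

# Time 100 XML parses of the wrapped payload vs. 100 raw copies.
xml_time = timeit.timeit(lambda: ET.fromstring(xml_doc), number=100)
raw_time = timeit.timeit(lambda: bytes(payload), number=100)

print(xml_time > raw_time)  # parsing dominates by a wide margin
```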


> 
> A few examples of the XML the client sends to perform a query:
> 
> <query>
>       <sql databaseid="test">
>               <![CDATA[delete from session where lastclicktime = 
> @MinClickTime]]>
>       </sql>
>       <paramset>
>               <param type="1"  
> name="@MinClickTime"><![CDATA[1395318850]]></param>
>       </paramset>
>       <paramset>
>               <param type="1"  
> name="@MinClickTime"><![CDATA[1395318900]]></param>
>       </paramset>
> </query>


What form does a result set take? Is it all enclosed in a single XML element, 
or do you get a cursor "object" returned and step through the result set one 
or more rows at a time? I ask because the former might not be scalable for 
large result sets. Does it handle multiple result sets per query?


> 
> <query>
>       <sql databaseid="test">
>               <![CDATA[select ifnull(max(id),0)+1 as newid from field ]]>
>       </sql>
>       <sql databaseid="test">
>               <![CDATA[insert into field(id, name) values(@ID, @Name) ]]>
>       </sql>
>       <paramset>
>               <param type="0"  name="@ID"><![CDATA[newid]]></param>
>               <param type="1" name="@Name"><![CDATA[Test]]></param>
>       </paramset>
>       <transactionbreak></transactionbreak>
>       <sql databaseid="test">
>               <![CDATA[insert into closedfield(fieldid) values(@ID) ]]>
>       </sql>
>       <paramset>
>               <param type="0" name="@ID"><![CDATA[newid]]></param>
>       </paramset>
> </query>
> 
> Because the result of the last select query is returned to the client, the 
> client can read the new ID from the result.
> 
> 
> Please let us know if you would be interested in this project!
> 

Would certainly be curious.

IMO, you'd also increase your potential target audience if you could provide 
reasonably functional JDBC, ADO.NET, PHP and/or Python database drivers. 
Providing any of these would allow existing users to plug your new database 
into existing applications with a minimum of fuss. Personally, I don't like 
the idea of XML as the protocol, largely because of the parsing overhead, and 
have been looking at a similar server based on RPC; for debugging purposes, 
though, XML would be great.

Regards,
Christian
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
