> In short, using a SQLite-backed queue solution gives me a lot of options
> that a simple IPC-based one (and, for that matter, even a professional
> Messaging Product) does not give.
A SQLite-backed solution also gives you a big restriction that IPC doesn't:
you have to poll the queue instead of being pushed to. I.e. the process
reading the queue will have to execute some query periodically to see if
there's anything in the queue. You don't want to execute this query without
delay, because it will eat 100% of your CPU even when there's nothing in
the queue, and besides it can introduce writer starvation. But when you
execute the query with any delay, you lose the immediate reaction of the
queue. It's your choice of course.

BTW, look closely at your requirements - you have some contradiction in
them. You don't want to mess with the file system because *you think* it
will have a performance penalty (as was already said, that's not always
true, because the OS caches your file in memory anyway). You don't want to
use IPC because it's "bad". You want SQLite to work completely in memory,
and you want it to work inside several processes with the same memory. But
how do you think SQLite should interact with itself to avoid a reader in
one process reading corrupted data while a writer in another process is
writing something new? The only way to do it is to use IPC. And SQLite does
use one (probably the easiest) method of IPC - file-system locks. No other
IPC mechanism is implemented in SQLite. So you have to allow SQLite to do
its job - you need to have your database in the file system, even if you
won't ever read it once your application is closed.

Pavel

On Mon, May 10, 2010 at 3:59 PM, Manuj Bhatia <manujbha...@gmail.com> wrote:
> Pavel,
>
> I do not have a requirement of persistence in my current design, but I
> expect that we might extend this shared-queue solution to more areas of
> the server and will require some sort of persistence then.
> That is one of the main reasons I do not want to use IPC queues (there
> are other reasons, like fixed message sizes and minimal support for
> queue/message-level metadata).
>
> One of the main attractions of a SQLite-based solution is being able to
> perform all kinds of queries on the queue itself (from the point of view
> of maintenance scripts/production support).
> In my experience, if there are lots of services sharing different types
> of messages over an IPC shared queue, sometimes you run into a situation
> where the queue starts backing up and there is no way for production
> support folks to determine which particular service is causing the backup
> (by sending messages too fast, or consuming them really slowly). And in
> the end the only solution is to bounce all the services (instead of just
> bouncing the culprit), and we never discover the root cause of the backup.
>
> If I use a SQLite-backed queue, I can simply use the command-line shell
> and run queries like:
>
> select sender, receiver, count(*)
> from queue
> group by sender, receiver;
>
> Or any combination of message metadata to analyze the current state of
> the queue.
>
> Also, I can easily modify my queue APIs to just update a used flag,
> instead of deleting the message from the db. This way, I can analyze all
> the messages at the end of the day and determine all kinds of statistics
> (like how long a particular type of message sits in the queue).
>
> In short, using a SQLite-backed queue solution gives me a lot of options
> that a simple IPC-based one (and, for that matter, even a professional
> Messaging Product) does not give.
>
> Jay,
> I did think of implementing a VFS for the shared memory, but as you
> mentioned, a file-based DB with all syncs off might be a simpler
> trade-off.
>
> Alexey,
> As Simon said, having a socket-based daemon solution is something I want
> to avoid because it adds another layer to the architecture.
>
> Thanks,
> Manuj
>
>
> On Mon, May 10, 2010 at 10:56 AM, Simon Slavin <slav...@bigfraud.org> wrote:
>>
>> On 10 May 2010, at 4:47pm, Alexey Pechnikov wrote:
>>
>> > TCP-socket listening daemon + SQLite in-memory database may be helpful.
>>
>> Yes. You can make one process which handles all your SQLite
>> transactions and receives its orders from other processes via
>> inter-process calls or TCP/IP. I've seen a few solutions which do this,
>> and they work fine. But that process will itself become some sort of
>> bottleneck if you have many processes calling it. And I think that the
>> original post in this thread described a situation where that was not a
>> good solution.
>>
>> Simon.

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
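[Editor's illustration] Pavel's polling trade-off and Manuj's used-flag idea can be sketched together as follows. This is a minimal sketch, not code from the thread: only the `sender`/`receiver` column names come from Manuj's example query; the rest of the schema, the function names, and the delay value are assumptions made for illustration.

```python
import sqlite3
import time

def open_queue(path):
    # Hypothetical schema for the shared queue discussed above.
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS queue (
            id       INTEGER PRIMARY KEY,
            sender   TEXT,
            receiver TEXT,
            payload  TEXT,
            used     INTEGER DEFAULT 0
        )""")
    conn.commit()
    return conn

def consume_one(conn, receiver):
    # Mark the message used instead of deleting it, so end-of-day
    # analysis can still see every message that passed through.
    with conn:  # one transaction; SQLite serializes writers via file locks
        row = conn.execute(
            "SELECT id, payload FROM queue"
            " WHERE used = 0 AND receiver = ? ORDER BY id LIMIT 1",
            (receiver,)).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE queue SET used = 1 WHERE id = ?", (row[0],))
        return row[1]

def backlog(conn):
    # Manuj's diagnostic query: which service pair is backing things up?
    return conn.execute(
        "SELECT sender, receiver, count(*) FROM queue"
        " WHERE used = 0 GROUP BY sender, receiver").fetchall()

def consume_loop(conn, receiver, delay=0.1):
    # Pavel's trade-off in one line: the sleep keeps CPU usage low,
    # but adds up to `delay` seconds of latency on every empty poll.
    while True:
        msg = consume_one(conn, receiver)
        if msg is None:
            time.sleep(delay)
        else:
            yield msg
```

Each consumer process would open the same database file (not `:memory:`, for the reasons Pavel gives) and run `consume_loop`, while support staff can run `backlog` or ad-hoc queries from the command-line shell at any time.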
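[Editor's illustration] Simon's single-process suggestion can also be sketched. One process owns the SQLite connection and serializes every transaction; other processes send it statements over TCP. The line-based wire protocol, handler names, and port are invented for this sketch, and the single-threaded server is exactly the potential bottleneck Simon warns about.

```python
import socketserver
import sqlite3

class SQLiteHandler(socketserver.StreamRequestHandler):
    # One SQL statement per line in; one line (repr of rows, or an
    # error message) back out. A toy protocol, for illustration only.
    def handle(self):
        for line in self.rfile:
            sql = line.decode().strip()
            if not sql:
                continue
            try:
                with self.server.db:  # one transaction per statement
                    rows = self.server.db.execute(sql).fetchall()
                reply = repr(rows)
            except sqlite3.Error as e:
                reply = "error: %s" % e
            self.wfile.write((reply + "\n").encode())

def serve(path, host="127.0.0.1", port=7070):
    # TCPServer is single-threaded, so all clients' transactions are
    # serialized through this one connection -- Simon's bottleneck.
    server = socketserver.TCPServer((host, port), SQLiteHandler)
    server.db = sqlite3.connect(path)
    server.serve_forever()
```

Here the database could even be `:memory:`, since only this one process ever touches it; the price is that every other process now depends on the daemon being up.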