Raghu,

Can you either use the backup API or a simple ATTACH to copy the data
from the memory database to a new file-based db? A second process could
then much more slowly poll to see if a new "temporary" file was
available, attach it, and insert its data (albeit slowly) into the
consolidated file db, then delete the "temporary" file.
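
A minimal sketch of that first step with the online backup API could
look like the following (the function name and file name are just
placeholders):

#include <sqlite3.h>

/* Copy the whole in-memory db into a fresh file-based db using the
 * online backup API.  "dest_path" names the new "temporary" file. */
static int dump_memory_db(sqlite3 *mem_db, const char *dest_path)
{
    sqlite3 *file_db = 0;
    int rc = sqlite3_open(dest_path, &file_db);
    if (rc == SQLITE_OK) {
        sqlite3_backup *bak =
            sqlite3_backup_init(file_db, "main", mem_db, "main");
        if (bak) {
            sqlite3_backup_step(bak, -1);   /* -1: copy all pages */
            sqlite3_backup_finish(bak);
        }
        rc = sqlite3_errcode(file_db);
    }
    sqlite3_close(file_db);
    return rc;
}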

Inserts would be fast into the memory db, and moving the data to disk
would be fast. Creating a master db would be slow, but unless someone
can figure out why that step is slowing down so much, I don't think you
can ever avoid the time cost of consolidating the data.
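
The consolidation pass in the second process could then be roughly as
below; the table name "log" and the paths are hypothetical, and
"consol_db" is an open handle on the master db:

#include <stdio.h>
#include <sqlite3.h>

/* Attach one "temporary" file, copy its rows into the consolidated
 * db, detach, and delete the file. */
static int consolidate(sqlite3 *consol_db, const char *tmp_path)
{
    char *sql = sqlite3_mprintf(
        "ATTACH %Q AS tmp;"
        "BEGIN;"
        "INSERT INTO main.log SELECT * FROM tmp.log;"
        "COMMIT;"
        "DETACH tmp;", tmp_path);
    int rc = sql ? sqlite3_exec(consol_db, sql, 0, 0, 0) : SQLITE_NOMEM;
    sqlite3_free(sql);
    if (rc == SQLITE_OK)
        remove(tmp_path);              /* done with the temporary file */
    return rc;
}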

David

On Wed, 2009-12-16 at 08:20 -0600, Raghavendra Thodime wrote:
> Max,
>   I agree with what you are saying. To rule out such overhead, this is what 
> I have done: I created a schema with multiple tables and no indexing, no 
> joins, and no foreign key constraints. Basically I wanted to keep it as 
> simple as possible so inserts would be faster. Then I simulated the inserts 
> with as little C code as possible. I had around 50 inserts within each 
> transaction (between BEGIN and END). I also had synchronous=OFF and 
> journaling turned off (I even tried MEMORY-based journaling). So basically 
> I eliminated every external dependency I could think of to make it faster.
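> 
> In outline, the simulation loop was something like the following (the 
> table, columns and values here are simplified placeholders):
> 
> #include <sqlite3.h>
> 
> sqlite3 *db;
> sqlite3_open("test.db", &db);
> sqlite3_exec(db, "PRAGMA synchronous=OFF;", 0, 0, 0);
> sqlite3_exec(db, "PRAGMA journal_mode=OFF;", 0, 0, 0);
> sqlite3_exec(db, "CREATE TABLE t1(a INTEGER, b TEXT);", 0, 0, 0);
> 
> sqlite3_stmt *ins;
> sqlite3_prepare_v2(db, "INSERT INTO t1 VALUES(?,?)", -1, &ins, 0);
> for (int i = 0; i < 200000; i++) {
>     if (i % 50 == 0) sqlite3_exec(db, "BEGIN;", 0, 0, 0); /* batch of 50 */
>     sqlite3_bind_int(ins, 1, i);
>     sqlite3_bind_text(ins, 2, "payload", -1, SQLITE_STATIC);
>     sqlite3_step(ins);
>     sqlite3_reset(ins);
>     if (i % 50 == 49) sqlite3_exec(db, "END;", 0, 0, 0);
> }
> sqlite3_finalize(ins);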
> 
>   I simulated thousands of inserts to see how the system performs under 
> heavy load. I printed the time-consumption stats periodically. In such a 
> test case, I see a considerable improvement. But what I observed is that 
> as I kept the simulations running for longer periods, the inserts took a 
> lot more time in the later stages than they did initially. For example, if 
> I ran 200,000 inserts, the first 20,000 inserts were done in 9 secs, but 
> the last 20,000 inserts (from the 180,000th to the 200,000th) took almost 
> 110 secs. That is more than 10 times what it was initially. These results 
> were consistent across every iteration of the simulation.
> 
>   So I am afraid this won't scale as I keep inserting into the same db file. 
> 
>   So my question is: is this behavior expected? Am I doing something 
> wrong? Is my assumption that the time to insert is proportional to the 
> size of the db file correct? Since I am simulating with the same 
> application code and with no schema complications, I am assuming there 
> should not be many external dependencies that would affect the system in 
> the long run. 
> 
>   Your help is appreciated...
> 
> Thanks
> Raghu
> 
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Max Vlasov
> Sent: Wednesday, December 16, 2009 2:38 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] BACKUP API Questions
> 
> On Wed, Dec 16, 2009 at 9:30 AM, Raghavendra Thodime 
> <rthod...@sipera.com> wrote:
> 
> >  I did try using batches of transactions with the synchronous=OFF PRAGMA
> > set. The performance improved slightly. But as the db file started to grow
> > larger and larger, the performance degraded considerably. Is that expected?
> > Or is there a workaround for this?
> >
> 
> If you did everything according to this topic from the FAQ:
> 
> (19) INSERT is really slow - I can only do few dozen INSERTs per second (
> http://www.sqlite.org/faq.html#q19 )
> 
> then you probably have extra overhead from your development
> language/library. Also be aware that you will probably never get the same
> speed as simply copying the same amount of data with the file-system
> routines, since the db engine has to deal with indexes and other extra
> work. So whether the performance is high or low may be a subjective
> estimate.

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
