At 03:01 PM 10/15/2008, Jay A. Kreibich wrote:
>On Wed, Oct 15, 2008 at 01:38:16PM -0500, David Clark scratched on the wall:
> > Ok, I want to convert a large application which uses ad-hoc data
> > structures and file formats to a SQLite database. But before I do that
> > I am working on a proof-of-concept test to show that, yes, it will
> > work once the conversion is done.
> >
> > But alas, my first test is failing miserably. Well, it works, it just
> > takes way too long. Like, at least all night.
> > I create table tbl2
> > f1 varchar(30)
> > f2 varchar(30)
> > f3 varchar(30)
> > identity integer primary key autoincrement
> >
> > That works; I have the table. I go to insert with
> > insert into tbl2 values ("00000001", "00000001", "00000001", NULL);
> >
> > insert into tbl2 values ("00000002", "00000002", "00000002", NULL);
> >
> > Ok, I did each insert as a separate query using the 5-minute example
> > and it took all night and never did complete.
>
> You need to wrap them up in a transaction.
Ok, I will look into that.
> Also, if you want to store integers, store integers. SQLite allows
> the same column to store records of different types.
No, the real application uses mostly string data. I am using the
numbers just to give me something different to put in.
> Also, string-literals in SQL use single quotes.
Yes, that was just a typo in the post...sorry about that...the app
uses ' quotes.
> > So then I decided...ok the fsync() for serialization is the problem.
> >
> > So I looked at limits.html and made a single query out of as many
> > INSERTs as would fit in a 1000000-byte buffer. That turned out to be
> > 15625 inserts. That should be one fsync per query, so it should be
> > much faster...but alas, I don't think it is.
>
> That won't process them as one statement. Even if you pass them into
> SQLite as a big command buffer, they're still processed one at a
> time.
>
> Issue the command "BEGIN", do 100 to 10000 INSERTs, issue a "COMMIT".
> You should see a very noticeable difference in speed.
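A minimal sketch of that BEGIN/COMMIT batching, using Python's sqlite3
module (the :memory: database and the 15625 generated rows are just
illustrative test data matching the buffer experiment above; the real
app would use its own file and data):

```python
import sqlite3
import time

# In-memory database just for this sketch; a real test would use a file.
# isolation_level=None puts the connection in autocommit mode, so the
# explicit BEGIN/COMMIT below control the transaction boundaries.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("""
    CREATE TABLE tbl2 (
        f1 varchar(30),
        f2 varchar(30),
        f3 varchar(30),
        identity integer PRIMARY KEY AUTOINCREMENT
    )
""")

# 15625 rows, matching the command-buffer test above.
rows = [("%08d" % i,) * 3 for i in range(1, 15626)]

start = time.time()
conn.execute("BEGIN")              # one transaction for the whole batch
conn.executemany(
    "INSERT INTO tbl2 (f1, f2, f3) VALUES (?, ?, ?)", rows)
conn.execute("COMMIT")             # a single fsync() here, not one per row
elapsed = time.time() - start

count = conn.execute("SELECT count(*) FROM tbl2").fetchone()[0]
print(count, "rows in %.3f s" % elapsed)
```

On typical hardware this batch finishes in well under a second, versus
one fsync-bound disk round trip per row without the transaction.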
>
> > My second test would be a random query into this large table to see
> > how long that took. Then random queries from a large number of
> > threads. If I can get a query time under 1 second for pulling up a
> > record...then this product is a go for my application.
>
> Depending on your query needs, you might need to look into indexes.
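For the random-lookup test, an index on the column being searched is
what keeps a point query fast on a large table. A sketch (again
Python's sqlite3; the index name and row count are made up for
illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE tbl2 (f1 varchar(30), f2 varchar(30), "
             "f3 varchar(30), identity integer PRIMARY KEY AUTOINCREMENT)")
conn.execute("BEGIN")
conn.executemany("INSERT INTO tbl2 (f1, f2, f3) VALUES (?, ?, ?)",
                 [("%08d" % i,) * 3 for i in range(1, 100001)])
conn.execute("COMMIT")

# Without this, WHERE f1 = ... is a full table scan; with it, a B-tree
# search that touches only a handful of pages.
conn.execute("CREATE INDEX idx_tbl2_f1 ON tbl2(f1)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tbl2 WHERE f1 = '00050000'"
).fetchall()
row = conn.execute(
    "SELECT identity FROM tbl2 WHERE f1 = '00050000'").fetchone()
print(plan[-1][-1])   # plan detail should mention idx_tbl2_f1
```

EXPLAIN QUERY PLAN is a quick way to confirm the index is actually
being used before running the threaded timing test.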
>
> -j
>
>--
>Jay A. Kreibich < J A Y @ K R E I B I.C H >
>
>"Our opponent is an alien starship packed with atomic bombs. We have
> a protractor." "I'll go home and see if I can scrounge up a ruler
> and a piece of string." --from Anathem by Neal Stephenson
>_______________________________________________
>sqlite-users mailing list
>[email protected]
>http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users