Hey! I've actually been doing some work recently on speeding up bulk inserts into SQLite, and my findings may be of interest to you.
First, there's this entry in the SQLite FAQ: http://www.sqlite.org/faq.html#q19

The key is grouping as many INSERTs as possible into one transaction. What I've been doing is building up a transaction string: starting with "begin transaction; ", appending multiple INSERT statements (each followed by a semicolon), and ending with "commit transaction". That whole string is then executed by the SQLite command object. You may need to tweak this approach slightly to get the best speed - I'm not sure whether there's a speed penalty for heavy string concatenation in NSB, and there may be memory considerations on a Windows Mobile device when working with 30,000 records - but I think the overall concept will help you out.

Committing the transaction is the expensive part: the core SQLite engine will not return from a transaction commit before confirming that the data has been safely stored on the physical file media. It's a bit interesting that the SQLite FAQ talks about rotations of the disk platter and storage on the disk surface, presuming that a SQLite database will always be stored on a traditional hard drive rather than, say, flash memory, even though the project also boasts that it's a good choice for "memory constrained gadgets such as cellphones, PDAs, and MP3 players."

I haven't tried this transaction-string approach on an NSB/CE project before, but I've seen results with a desktop SQLite app that uses the same NewObjects component as NS Basic, and in a Gambas project running on my EeePC - which does use flash memory, not a rotating hard drive.

Good luck!

--- In [email protected], "bigp...@..." <bigp...@...> wrote:
>
> Yes, with some testing it is the INSERT that is slowing it down... A LOT.
> Without the insert the file with 30,000 records gets read within 5 minutes.
>
> Yes the quotes are always present.
>
> Is there no faster way to INSERT records from a text file to the database?
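I don't have an NSB/CE environment handy, so here's the same idea sketched with Python's sqlite3 module, just to illustrate the technique. The "NameDB" table name and the sample record come from the thread; the column names (code, name) and the two-line input are my own invention for the sketch.

```python
import csv
import io
import sqlite3

# Open in autocommit mode so the BEGIN/COMMIT below are fully explicit,
# mirroring the "transaction string" approach from the post.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute('CREATE TABLE "NameDB" (code TEXT PRIMARY KEY, name TEXT)')

# Stand-in for the 30,000-record text file discussed in the thread.
data = io.StringIO(
    '1,C0AAA006,"AAA MOBILE STORAGE(ON)",,,,,,,,,,,,0001,,,,,,\n'
    '2,C0AAA007,"SOME OTHER STORE",,,,,,,,,,,,0001,,,,,,\n'
)

# csv.reader handles the quoted third field, so no Split/Mid gymnastics
# are needed to strip the quote characters.
rows = [(rec[1], rec[2]) for rec in csv.reader(data)]

# One transaction around every INSERT: one expensive commit-to-disk
# instead of one per row. Parameter placeholders (?) also avoid building
# a giant SQL string by concatenation.
conn.execute("BEGIN TRANSACTION")
conn.executemany('INSERT OR REPLACE INTO "NameDB" VALUES (?, ?)', rows)
conn.execute("COMMIT TRANSACTION")

print(conn.execute('SELECT COUNT(*) FROM "NameDB"').fetchone()[0])
```

The speedup comes entirely from reducing the number of commits; whether you batch via one big statement string (as in NSB) or via executemany is secondary.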
> > --- In [email protected], "joespan123" <joes@> wrote:
> >
> > Hi,
> >
> > Are you sure that reading the file is the slowest part?
> >
> > I would think that performing the INSERT to the database would be the
> > slowest part.
> >
> > The use of "Split" may be a slow function call.
> >
> > Can you guarantee the quotes are always present? If so, you may be able
> > to use the "Mid" function to strip off the quotes.
> >
> > Also, I use "file.LineInputString()" to read a line from a file; maybe
> > try using that to see if it is faster than "File.ReadText(-3)".
> >
> > Cheers
> > Joe
> >
> > --- In [email protected], "bigpete@" <bigpete@> wrote:
> > >
> > > I have a file with this type of record:
> > >
> > > 1,C0AAA006,"AAA MOBILE STORAGE(ON)",,,,,,,,,,,,0001,,,,,,
> > >
> > > Now I have the following code:
> > >
> > > Do
> > >   recs = File.ReadText(-3)
> > >   SplitRecs = Split(recs, ",")
> > >   NoQuotes = Split(SplitRecs(2), """")
> > >
> > >   i = i + 1
> > >   txtrecords.Text = i
> > >   cmd = "INSERT OR REPLACE INTO ""NameDB"" VALUES( """ & SplitRecs(1) & """,""" & NoQuotes(1) & """)"
> > >   showstatus SplitRecs(1) & " - " & NoQuotes(1)
> > > Loop
> > >
> > > It takes a while for the file to get read in (about 5 minutes to read
> > > 1000 records :s).
> > >
> > > Is there a way that I can get File.ReadText, or some other approach, to
> > > read only the first 3 comma-separated fields - 1,C0AAA006,"AAA MOBILE
> > > STORAGE(ON)" - instead of the entire line? I think that may speed
> > > things up. This file has about 30,000 records.
> > >
> > > OR is there another way I can get that data from a file read into the
> > > SQLite database?

--
You received this message because you are subscribed to the Google Groups "nsb-ce" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/nsb-ce?hl=en.
