[sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Jim Wilcoxson
Simon - instead of using vacuum, it's much faster to create a new database from the old one, then rename it. It's easy to do this in Python using iterdump(), or you can connect to the new (empty) database, do your create table statements, attach the old database as olddb, then do: insert into
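
A minimal sketch of that rebuild-and-rename approach using Python's iterdump(); the file names old.db and new.db are placeholders, and the final rename assumes a POSIX filesystem where renaming over an existing file is atomic.

    import os
    import sqlite3

    src = sqlite3.connect("old.db")
    dst = sqlite3.connect("new.db")

    # Replay a full dump of the old database into the fresh file; the dump
    # already contains its own BEGIN TRANSACTION / COMMIT statements.
    dst.executescript("\n".join(src.iterdump()))

    src.close()
    dst.close()

    # Swap the compacted copy into place (atomic overwrite on POSIX).
    os.rename("new.db", "old.db")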

Re: [sqlite] sqlite3 performance on select count very slow for 16 GB file

2015-01-23 Thread Jim Wilcoxson
If you have a table where rows are inserted but never deleted, and you have a rowid column, you can use this: select seq from sqlite_sequence where name = 'tablename' This will return instantly, without scanning any rows or indexes, and is much faster than max(rowid) for huge tables. If no rows
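
A small sketch of that lookup, assuming a placeholder table declared with INTEGER PRIMARY KEY AUTOINCREMENT (sqlite_sequence only carries an entry for tables that use AUTOINCREMENT):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")
    con.executemany("INSERT INTO items (payload) VALUES (?)", [("a",), ("b",), ("c",)])
    con.commit()

    # Read the counter directly instead of scanning for max(rowid).
    row = con.execute("SELECT seq FROM sqlite_sequence WHERE name = 'items'").fetchone()
    print(row[0] if row else 0)   # -> 3

    con.close()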

[sqlite] HashBackup sqlite 3.6.18 -> 3.8.8 performance

2015-02-04 Thread Jim Wilcoxson
To add to the real-world performance info, I recently upgraded the backup program I'm developing, HashBackup, from SQLite 3.6.18 to 3.8.8, mostly to see if it fixed some occasional "Database is locked" errors (multithreaded). For these tests, nothing was changed except the SQLite library.

Re: [sqlite] Optimizing list retrieval with a FTS3 table

2011-03-16 Thread Jim Wilcoxson
On Wed, Mar 16, 2011 at 3:15 PM, Travis Orr wrote: > I am currently working on a project that requires retrieving a list of > all the rows from a FTS3 table. The ordering of the results varies by > search criteria. Since this is for an embedded project the list results > are passed

Re: [sqlite] Optimizing list retrieval with a FTS3 table

2011-03-16 Thread Jim Wilcoxson
> > -Original Message- > From: sqlite-users-boun...@sqlite.org > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Jim Wilcoxson > Sent: March-16-11 12:51 PM > To: General Discussion of SQLite Database > Subject: Re: [sqlite] Optimizing list retrieval with a FTS3 table

Re: [sqlite] unexpected large journal file

2010-06-22 Thread Jim Wilcoxson
Eric Smith writes: > > Jay A. Kreibich wrote: > > > Try getting rid of the PK definition and see how much that buys you. > > It might be worth it, or it might not. > > and Simon Slavin wrote: > > > We know you are doing a huge amount of writing to this database. Are > >

Re: [sqlite] linear behavior on INSERT (was Re: unexpected large journal file)

2010-06-22 Thread Jim Wilcoxson
On 6/22/10, Eric Smith wrote: > I have confirmed that INSERT times are roughly logarithmic in > the number of existing records after creating my specific user > indices. > > But INSERT times appeared to be *linear* in the number of existing > records before I had created any

Re: [sqlite] linear behavior on INSERT (was Re: unexpected large journal file)

2010-06-22 Thread Jim Wilcoxson
On 6/22/10, Eric Smith <eas@gmail.com> wrote: > Jim Wilcoxson wrote: ... >> Did you see my earlier note about combining your two integers into the >> primary key? This will also give you constant insert times, if you >> insert items in the order: ... > Thanks

Re: [sqlite] Corrupted sqlite journal

2010-07-09 Thread Jim Wilcoxson
On Fri, Jul 9, 2010 at 10:38 AM, Ke Tao wrote: > > HI All , > > I have sqlite db name "wdb" and "wdb-journal" file was created by power > failure something , when I do any db operation sqlite always prompt "disk > I/O error" , but when I delete the "wdb-journal" ,there is no

Re: [sqlite] Corrupted sqlite journal

2010-07-09 Thread Jim Wilcoxson
On Fri, Jul 9, 2010 at 3:21 PM, Jim Wilcoxson <pri...@gmail.com> wrote: > On Fri, Jul 9, 2010 at 10:38 AM, Ke Tao <kota...@gmail.com> wrote: > >> >> HI All , >> >> I have sqlite db name "wdb" and "wdb-journal" file was created by

Re: [sqlite] Database corruption on Linux ext3

2010-07-13 Thread Jim Wilcoxson
On Tue, Jul 13, 2010 at 8:06 PM, Roger Binns wrote: > > On 07/13/2010 04:57 PM, Simon Slavin wrote: > > One on each page and one for the entire file that checksums the page > checksums ? > > One for each page plus one of the header would make the most sense, but the > I

Re: [sqlite] Database corruption on Linux ext3

2010-07-14 Thread Jim Wilcoxson
On Wed, Jul 14, 2010 at 1:35 AM, Roger Binns <rog...@rogerbinns.com> wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > On 07/13/2010 05:30 PM, Jim Wilcoxson wrote: > > I don't think this would work, because the problem described is that the > > w

Re: [sqlite] How much a table takes (will this query always work)

2010-07-16 Thread Jim Wilcoxson
On Fri, Jul 16, 2010 at 6:01 AM, Max Vlasov wrote: > Hi, > > always wanted to have a possibility to calculate how much a table occupies. > As long as I see from the archive, there's no out-of-the-box solution > (CMIIW) > > You mean, like this? pragma page_count pragma
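
A sketch of the whole-file calculation those pragmas give (database size = page_count * page_size); example.db is a placeholder, and this measures the entire file rather than a single table:

    import sqlite3

    con = sqlite3.connect("example.db")
    page_count = con.execute("PRAGMA page_count").fetchone()[0]
    page_size = con.execute("PRAGMA page_size").fetchone()[0]
    print("database file size in bytes:", page_count * page_size)
    con.close()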

Re: [sqlite] How much a table takes (will this query always work)

2010-07-16 Thread Jim Wilcoxson
On Fri, Jul 16, 2010 at 9:13 AM, Jim Wilcoxson <pri...@gmail.com> wrote: > On Fri, Jul 16, 2010 at 6:01 AM, Max Vlasov <max.vla...@gmail.com> wrote: > >> Hi, >> >> always wanted to have a possibility to calculate how much a table >> occupies. >> A

Re: [sqlite] [PATCH] cache preloading

2010-08-10 Thread Jim Wilcoxson
On 8/10/10, Paweł Hajdan, Jr. wrote: > So this is another chromium patch I'd like to submit: > http://src.chromium.org/viewvc/chrome/trunk/src/third_party/sqlite/preload-cache.patch?revision=26596=markup > > I'm not the author of that one, but the main idea seems to be

Re: [sqlite] Forthcoming release of SQLite 3.7.1

2010-08-18 Thread Jim Wilcoxson
SQLite is a great product; I use SEE, the encrypted edition. For those of us who are not using the C interface, I'd like to request that when possible, new C features like hinting about file sizes and extents also be added as pragmas. Pragmas make it very easy for non-C interfaces to make use of

Re: [sqlite] Can we get a pragma for SQLITE_FCNTL_CHUNKS_SIZE?

2010-08-19 Thread Jim Wilcoxson
On 8/19/10, Simon Slavin wrote: > > On 19 Aug 2010, at 9:27pm, Taras Glek wrote: > >> I really appreciate that sqlite got this feature to reduce >> fragmentation, but why not expose this as a pragma? > > Do you have figures which suggest that reducing fragmentation leads to

Re: [sqlite] partial index?

2010-08-20 Thread Jim Wilcoxson
On 8/20/10, Cory Nelson wrote: > +1 for this feature request. They've got a very specific and fairly > rare use case, but when opportunity strikes partial indexes are much > more convenient, straightforward, and efficient than the alternative. > > - If a table has

Re: [sqlite] partial index?

2010-08-20 Thread Jim Wilcoxson
On 8/20/10, Cory Nelson <phro...@gmail.com> wrote: > On Fri, Aug 20, 2010 at 7:39 AM, Jim Wilcoxson <pri...@gmail.com> wrote: >> ... >> The best I could come up with is a separate table. The problem is, >> indexing the SHA1 normally means there is a copy in t

Re: [sqlite] SQLite version 3.7.2

2010-08-27 Thread Jim Wilcoxson
Using the SQLite Encryption Edition rev 3.7.0.1, the latest released version, on OSX Tiger (10.4), results in an undefined reference to gethostuuid. Is OSX 10.4 no longer supported? Thanks, Jim -- HashBackup: easy onsite and offsite Unix backup http://sites.google.com/site/hashbackup On Mon,

Re: [sqlite] Feature request: copying vacuum

2010-09-10 Thread Jim Wilcoxson
I'd also be interested in a VACUUM TO feature, more for performance aspect than the fragmentation, although that's a plus too. The backup program I'm working on packs many files into archives, which are SQLite databases. I have run some vacuum tests here; the filesystem cache was purged before

[sqlite] Detach command in 3.7.1 gives "database is locked" error

2010-10-09 Thread Jim Wilcoxson
This may be a bug in 3.7.1, or I may have been taking advantage of a flaw in 3.6.18: HashBackup uses SQLite to store backup data in a meta data db and file data archive db. First I open the main database, then attach an archive database. When the archive gets full, I detach it, create a new

Re: [sqlite] Detach says database is locked

2010-10-21 Thread Jim Wilcoxson
This type of thing works with SQLite 3.6.18. I have a similar issue with 3.7. I believe the difference is that in 3.6.18, if you do a select on a main database that doesn't involve an attached database, the attached database isn't locked. In 3.7, if you do a select on the main database that

Re: [sqlite] programmatic way of determining fragmentation?

2010-10-21 Thread Jim Wilcoxson
On Thu, Oct 21, 2010 at 1:27 PM, Dustin Sallings wrote: > > On Oct 21, 2010, at 10:05, Pavel Ivanov wrote: > > > I think it's not related to fragmentation, but to fill percentage of > > b-tree pages. I guess your reconstructed table is much less in total > > size than your

Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jim Wilcoxson
Jonathan - 500 queries per second is 2ms per query. You'll have a hard time getting that kind of speed for random queries with any rotating media. Your database needs to be in memory - all of it, not just indexes - or on a flash drive. If your queries are not random but are somehow related, eg,

Re: [sqlite] sqlite3_open on non-DB files / corrupt DBs

2010-11-25 Thread Jim Wilcoxson
On Thu, Nov 25, 2010 at 11:06 AM, Tito Ciuro wrote: > On 25 Nov 2010, at 12:51, Igor Tandetnik wrote: > > > Run "PRAGMA integrity_check" right after opening. > > That could be a potentially slow operation if the database is valid and > contains lots of

Re: [sqlite] database size (again)

2010-12-07 Thread Jim Wilcoxson
A lot of the SQLite overhead is going to be in the stuff surrounding your actual data; I'd be surprised if you saved much space by using fixed-size ints vs the varints used by SQLite. You didn't mention about indexes; if you have any, they will take a lot of space because your row size is so

Re: [sqlite] Order of UNION query results

2011-01-22 Thread Jim Wilcoxson
On Fri, Jan 21, 2011 at 6:53 PM, Josh Gibbs wrote: > Could someone please clarify for me if the resulting order of a UNION > query will come back with the left data first, then the right data in > the case > that no ordering has been defined for the query. > > My need is to

Re: [sqlite] stored procedures implementation for SQLite

2011-01-26 Thread Jim Wilcoxson
It looks interesting. Should your except stmt reference apsw? -Jim -- HashBackup: easy onsite and offsite Unix backup http://www.hashbackup.com On Wed, Jan 26, 2011 at 11:38 AM, Chris Wolf wrote: > > > I know this is an old thread, but shortly after I read it, I attempted

Re: [sqlite] Trigger for incrementing a column is slow

2011-02-03 Thread Jim Wilcoxson
On Thu, Feb 3, 2011 at 5:07 PM, Kevin Wojniak wrote: > > On Feb 3, 2011, at 11:41 AM, Petite Abeille wrote: > > > On Feb 3, 2011, at 6:53 PM, Kevin Wojniak wrote: > > > >> The trigger is ran once via sqlite3_exec(); > > > > Hmm... you mean the trigger is run every single

Re: [sqlite] Bi-directional unique

2011-02-09 Thread Jim Wilcoxson
If you don't care about the order and the integers are smallish, like 31 bits or less, I'd do this: create table t(k int primary key); i = whatever j = whatever if i < j: k = i<<32 | j else: k = j<<32 | i insert k into table To see if a pair is in the table, do the same steps and lookup
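
A runnable sketch of that packing scheme, assuming both integers fit in 31 bits so the combined key fits in SQLite's 64-bit signed integer:

    import sqlite3

    def pack(i, j):
        # Order-independent key: the smaller value always goes in the high 32 bits.
        return (i << 32) | j if i < j else (j << 32) | i

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")
    con.execute("INSERT OR IGNORE INTO t VALUES (?)", (pack(7, 3),))
    con.commit()

    # Look up the pair (3, 7) regardless of the order it was stored in.
    print(con.execute("SELECT 1 FROM t WHERE k = ?", (pack(3, 7),)).fetchone() is not None)
    con.close()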

Re: [sqlite] Bi-directional unique

2011-02-09 Thread Jim Wilcoxson
On Wed, Feb 9, 2011 at 5:25 PM, Simon Slavin wrote: > > On 9 Feb 2011, at 7:32pm, Black, Michael (IS) wrote: > > > Didn't we just determine a couple of days ago that triggers were > performance killers? > > > > That's one reason I want to avoid those. > > Okay, then since

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-11 Thread Jim Wilcoxson
On Fri, Feb 11, 2011 at 10:50 PM, Thomas Fjellstrom < tfjellst...@strangesoft.net> wrote: > I've seen numbers on the internet stating that sqlite is super fast, should > be > able to handle tens of thousands of inserts or updates in a second or two, > even in sync mode. So I'm a bit stumped as to

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Jim Wilcoxson
On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS) wrote: > Here's a little benchmark program I wrote to test a super-simple > one-integer insert to test sql speed and commit interval behavior. > > Running it on my machine (Windows XP64 8-core 3Ghz gcc (tdm64-1) 4.5.1)

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-13 Thread Jim Wilcoxson
On Sun, Feb 13, 2011 at 1:15 AM, Max Vlasov <max.vla...@gmail.com> wrote: > On Sat, Feb 12, 2011 at 7:11 PM, Jim Wilcoxson <pri...@gmail.com> wrote: > > > > > > > Unless I'm missing something, SQLite has to update the first page of the > > database on

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-13 Thread Jim Wilcoxson
On Sun, Feb 13, 2011 at 7:07 AM, Jean-Christophe Deschamps wrote: > > >So if the drive is 5400 rpm, 227 is much more than 5400/60=90 and even if > >it's 7200 (manufacturers sometimes upgrade drives inside portable hd > >without > >prior notice), it's still twice as much as

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-13 Thread Jim Wilcoxson
On Sun, Feb 13, 2011 at 11:55 AM, Max Vlasov <max.vla...@gmail.com> wrote: > On Sun, Feb 13, 2011 at 5:31 PM, Jim Wilcoxson <pri...@gmail.com> wrote: > > > On Sun, Feb 13, 2011 at 1:15 AM, Max Vlasov <max.vla...@gmail.com> > wrote: > > > So the f

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-14 Thread Jim Wilcoxson
On Mon, Feb 14, 2011 at 2:33 AM, Max Vlasov <max.vla...@gmail.com> wrote: > > On Sun, Feb 13, 2011 at 8:53 PM, Jim Wilcoxson <pri...@gmail.com> wrote: > > > > But I thought about how it would be possible to test this explanation . > > I'm > > > going to

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-14 Thread Jim Wilcoxson
sing. Jim -- HashBackup: easy onsite and offsite Unix backup http://www.hashbackup.com > > On Mon, Feb 14, 2011 at 8:49 AM, Jim Wilcoxson <pri...@gmail.com> wrote: >> On Mon, Feb 14, 2011 at 2:33 AM, Max Vlasov <max.vla...@gmail.com> wrote: >>> >>> On Sun, Feb 13, 2011

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-14 Thread Jim Wilcoxson
341s user0m0.193s sys 0m0.535s Jim -- HashBackup: easy onsite and offsite Unix backup http://www.hashbackup.com > > > Pavel > > On Mon, Feb 14, 2011 at 8:49 AM, Jim Wilcoxson <pri...@gmail.com> wrote: >> On Mon, Feb 14, 2011 at 2:33 AM, Max Vlasov <max.vla...@gma

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-14 Thread Jim Wilcoxson
On Mon, Feb 14, 2011 at 3:02 PM, Max Vlasov <max.vla...@gmail.com> wrote: > On Mon, Feb 14, 2011 at 8:42 PM, Jim Wilcoxson <pri...@gmail.com> wrote: > >> On Mon, Feb 14, 2011 at 10:56 AM, Pavel Ivanov <paiva...@gmail.com> wrote: >> >> So my question is,

Re: [sqlite] COUNT() extremely slow first time!

2011-02-21 Thread Jim Wilcoxson
This is a common issue on the mailing list. The first time you do count(*), SQLite (actually your OS) has to load data into memory. Most OS's will keep the file in a buffer cache, so the 2nd count(*) doesn't have to read from disk. Here's a timing from my own system, after a purge command to

Re: [sqlite] COUNT() extremely slow first time!

2011-02-21 Thread Jim Wilcoxson
On Mon, Feb 21, 2011 at 10:38 AM, Puneet Kishor <punk.k...@gmail.com> wrote: > On Mon, Feb 21, 2011 at 10:17:03AM -0500, Jim Wilcoxson wrote: >> This is a common issue on the mailing list.  The first time you do >> count(*), SQLite (actually your OS) has to load data into

Re: [sqlite] COUNT() extremely slow first time!

2011-02-21 Thread Jim Wilcoxson
On Mon, Feb 21, 2011 at 11:05 AM, Sven L wrote: > > Thank you for your detailed explanation! > First, can you please tell me how to purge the cache in Windows 7? This could > be very useful for my tests! Sorry, dunno for Windows. On Mac OSX it is the purge command, in the

Re: [sqlite] COUNT() extremely slow first time!

2011-02-23 Thread Jim Wilcoxson
The SQLite cache size is in pages (2000 by default), so by increasing the page size 8x, you're also increasing the cache size and memory requirements by 8x. Not saying it's a bad thing, just something to be aware of. If you want to compare 1K and 8K page size and only compare the effect page
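
A sketch of the comparison suggested above, holding the cache's memory budget constant while varying the page size; the 16 MB budget is arbitrary:

    import sqlite3

    CACHE_BYTES = 16 * 1024 * 1024   # arbitrary memory budget for the comparison

    for page_size in (1024, 8192):
        con = sqlite3.connect(":memory:")
        con.execute("PRAGMA page_size = %d" % page_size)
        # cache_size is counted in pages, so scale it to keep the byte total constant.
        con.execute("PRAGMA cache_size = %d" % (CACHE_BYTES // page_size))
        pages = con.execute("PRAGMA cache_size").fetchone()[0]
        print("page_size=%d cache_size=%d pages (%d bytes)" % (page_size, pages, pages * page_size))
        con.close()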

Re: [sqlite] COUNT() extremely slow first time!

2011-02-23 Thread Jim Wilcoxson
On Wed, Feb 23, 2011 at 11:12 AM, Sven L wrote: > > Thanks for pointing this out! > > In my case I have spent much time on normalizing my tables, so the row size > should be constant in most cases. I do wonder though, what if the row size is > 32 bytes? Or is there a

[sqlite] Commit frequency and performance

2009-02-04 Thread Jim Wilcoxson
I am creating an SQLite database via Python, and trying to understand some performance issues. This application does 3.8M inserts. Most inserts are to a main database that ends up being around 293MB. Around 740K of the inserts with larger data records are to 25 related databases of around

Re: [sqlite] Hundreds of thousands of INSERTs

2009-02-17 Thread Jim Wilcoxson
I had a similar experience: changing my page size decreased the run time of my SQLite application from 4 hours to 80 minutes (Linux). I think it would be very good if the default page size was changed to 4096, at least for LInux builds. It makes a huge difference in performance. Jim On

Re: [sqlite] Slow performance with Sum function

2009-03-04 Thread Jim Wilcoxson
Have you tried changing the page size to 4096 or 8192? Doing this with my SQLite application and increasing the transaction size decreased runtime from over 4 hours to 75 minutes. The runtime for my app writing the same amount of data to flat files was 55 minutes, so the time penalty for

Re: [sqlite] SQLite Transaction Rate and speed...

2009-03-07 Thread Jim Wilcoxson
If your goal is to handle 1 million inserts/updates per second, and a hard transaction to disk takes 1/60th of a second, you need at least 1,000,000 / 60, or roughly 16,667, inserts/updates per transaction. Do your testing with a transaction size of 20,000 and see what kind of performance you get. I'd probably set it higher,
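
A sketch of grouping inserts into large transactions along those lines; the table, the generated row source, and the 20,000-row batch size are illustrative:

    import sqlite3

    con = sqlite3.connect("example.db")
    con.execute("CREATE TABLE IF NOT EXISTS t (v INTEGER)")

    BATCH = 20000
    batch = []
    for i in range(1000000):          # stand-in for the real row source
        batch.append((i,))
        if len(batch) == BATCH:
            con.executemany("INSERT INTO t VALUES (?)", batch)
            con.commit()              # one sync per 20,000 rows instead of one per row
            batch = []
    if batch:
        con.executemany("INSERT INTO t VALUES (?)", batch)
        con.commit()
    con.close()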

Re: [sqlite] I need help with very complex queries

2009-03-10 Thread Jim Wilcoxson
You have specified how the movies table relates to the other tables, but you haven't specified any independent selection criteria for any tables. For example, in your query you need to add something like: and genres.genre = 'drama'. For this query, only the movies and genres tables are needed

Re: [sqlite] designing a db to hold repeating data

2009-03-15 Thread Jim Wilcoxson
You could eliminate met_grid_id from the cells table and replace it with an expression cell_id/2500. This expression will automatically truncate, giving you met_grid_id whenever you need it. This will save around 5 MB for a 1M row cell table. Also, queries in the cells table by met_grid_id, if
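
A sketch of deriving met_grid_id from cell_id on the fly; SQLite's / truncates for integer operands, and the schema here is illustrative:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE cells (cell_id INTEGER PRIMARY KEY, value REAL)")
    con.executemany("INSERT INTO cells VALUES (?, ?)",
                    [(1, 0.5), (2499, 1.0), (2500, 1.5), (5001, 2.5)])

    # cell_id / 2500 recovers met_grid_id without storing a separate column.
    for row in con.execute("SELECT cell_id, cell_id / 2500 AS met_grid_id FROM cells"):
        print(row)
    con.close()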

Re: [sqlite] Speed of DROP INDEX

2009-03-17 Thread Jim Wilcoxson
Drop is executed within a transaction, which means that every record you touch has to be backed up to the journal first, then modified in the database. I'm guessing that if you use pragma synchronous=off, it would speed up the drop index, but you'd take a chance on corrupting the database if the

Re: [sqlite] Speed of DROP INDEX

2009-03-18 Thread Jim Wilcoxson
Stevenson-Molnar <steve...@evergreen.edu> wrote: > I've actually been running it with synchronous=off. Unfortunately, it > doesn't seem to run any faster and still creates a journal file. > > _Nik > > On Mar 17, 2009, at 6:05 PM, Jim Wilcoxson wrote: > >> Drop

Re: [sqlite] Newbie: round function

2009-03-18 Thread Jim Wilcoxson
Well, it doesn't exactly say that an integer is returned. round() always returns a float: sqlite> select typeof(3); typeof(3) -- integer sqlite> select typeof(round(3)); typeof(round(3)) real sqlite> You can do this: sqlite> select typeof(cast(round(3.14) as

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Jim Wilcoxson
I'm not sure I completely understand your data structure, but here are some ideas: First, conduct experiments with different page and cache sizes. I don't know if you jumped to a 32K page size or experimented with each size, but experimentation is the way to go. I'm guessing that in your

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Jim Wilcoxson
The reason you're getting the same results is because you are CPU-bound. I/O has nothing to do with this problem. From your timings of your app, 31.76/33 = 96% CPU. If you were I/O bound, your real time would be 33 seconds and your sys+user time would be 3 seconds, or something low. My guess

Re: [sqlite] How to get the previous row before rows matching a where clause...

2009-03-22 Thread Jim Wilcoxson
What about: select blah from blah where rowid < windowstartrowid order by rowid desc limit 1 to get the row before, and: select blah from blah where rowid > windowlastrowid limit 1 to get the row after. Jim On 3/22/09, sorka wrote: > > I have a table of events
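
A sketch of those two neighbor queries; the events table, the window rowids, and the explicit ORDER BY added to the second query are illustrative:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (title TEXT)")
    con.executemany("INSERT INTO events (title) VALUES (?)",
                    [("event %d" % i,) for i in range(1, 11)])

    window_start, window_end = 4, 7   # placeholder window boundaries

    before = con.execute("SELECT rowid, title FROM events WHERE rowid < ? "
                         "ORDER BY rowid DESC LIMIT 1", (window_start,)).fetchone()
    after = con.execute("SELECT rowid, title FROM events WHERE rowid > ? "
                        "ORDER BY rowid LIMIT 1", (window_end,)).fetchone()
    print(before, after)              # the row just before and just after the window
    con.close()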

Re: [sqlite] problems with shared cache?

2009-03-24 Thread Jim Wilcoxson
Not sure if it will make a difference, but in your trigger stuff you explicitly coded null for the primary key value. Have you tried changing that so that you don't specify the primary key field at all? I can't remember from the previous post, but I think it was (or should be) set up as

Re: [sqlite] creating unique data takes many hours, help

2009-03-29 Thread Jim Wilcoxson
Yes, you're right, but if the data is already in index order, you'll do less I/O when creating the index. Whether the sort + create DB time is less than "create DB from random input" time is another question. Jim On 3/29/09, mrobi...@cs.fiu.edu wrote: > question: > When

Re: [sqlite] delete with an "exists" clause

2009-04-09 Thread Jim Wilcoxson
You want: delete from dummy where var=2; In years of DB work, I've never used exists. If you're mentioning this as a bug, I guess it could be: I'd have to lookup exists to see exactly how it's supposed to work. Jim On 4/9/09, Dave Dyer wrote: > > This little

Re: [sqlite] disk I/O error...?

2009-04-11 Thread Jim Wilcoxson
You'll get this if you have a database with an active journal (incomplete transactions) and you don't have write access to the database. In other words, the database needs a rollback from some prior operations done under a different userid, but now you don't have write access to do the rollback.

Re: [sqlite] Fixing a database

2009-04-22 Thread Jim Wilcoxson
I think it would be a good idea for sqlite3 to display a message like "Database opened for read-only" if you don't have permission to write. I saw this problem myself where a rollback was necessary from a previous root process, and as a new SQLite user, it was confusing and made me think "great,

Re: [sqlite] corrupt database recovery

2009-04-26 Thread Jim Wilcoxson
You could do a binary search to find the highest accessible rowid: select rowid where rowid = 2147483647 (fails) select rowid where rowid = 1073741824 (fails) ... select rowid where rowid = 65536 (fails) select rowid where rowid = 32768 (works!) select rowid where rowid =
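
A sketch of that binary search, assuming the table is named t, its rowids are dense (1..N, insert-only with no deletes), and a corruption or I/O error from the probe means the rowid is not accessible:

    import sqlite3

    def readable(con, rowid):
        try:
            return con.execute("SELECT rowid FROM t WHERE rowid = ?",
                               (rowid,)).fetchone() is not None
        except sqlite3.DatabaseError:
            return False              # corruption / disk I/O error: treat as inaccessible

    def highest_accessible_rowid(con, upper=2147483647):
        lo, hi = 0, upper             # invariant: rowid lo is known accessible (or 0)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if readable(con, mid):
                lo = mid
            else:
                hi = mid - 1
        return lo

    con = sqlite3.connect("damaged.db")
    print(highest_accessible_rowid(con))
    con.close()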

Re: [sqlite] Optimizing concurrency with sql query - locks?

2009-05-06 Thread Jim Wilcoxson
I'm not sure what you are considering a massive slowdown, but let's assume that the entire database fits into memory and disk I/O isn't the bottleneck. You said you're running 300 instances of the query on several processors. If several means 3 CPUs, then in a perfect world, running 300

Re: [sqlite] Optimizing concurrency with sql query - locks?

2009-05-07 Thread Jim Wilcoxson
disk, so > a database in memory still shares a single disk resource. > > Jim Wilcoxson wrote: >> I'm not sure what you are considering a massive slowdown, but let's >> assume that the entire database fits into memory and disk I/O isn't >> the bottleneck.

Re: [sqlite] select performance with join

2009-05-08 Thread Jim Wilcoxson
I don't know if it makes any difference, but is that where clause the same as: WHERE name < 'zUmM' OR (name = 'zUmM' AND S.id < 8122) The original way it was coded, all 3 conditions would have to be evaluated most of the time. The new way might get most rows with 1 condition. Depends on the data

Re: [sqlite] SQLite version 3.6.14

2009-05-08 Thread Jim Wilcoxson
May I suggest an extension PRAGMA SYNCHRONOUS = 3 | ASYNC so that non-C bindings can use the async functionality? Thanks, this is a great enhancement! Jim >> On Wed, May 06, 2009 at 10:36:50PM -0400, D. Richard Hipp wrote: >>> >>> A new optional extension is included that implements an >>>

Re: [sqlite] select performance with join

2009-05-09 Thread Jim Wilcoxson
base. --- It sounds like it might use indexes for an OR after all. Jim On 5/8/09, Igor Tandetnik <itandet...@mvps.org> wrote: > "Jim Wilcoxson" <pri...@gmail.com> wrote > in message > news:c5830b750905080813p6bf901bn526c81ca8ce7a...@mail.gmail.com >> I don't

Re: [sqlite] Corrupt Database Problems

2009-05-15 Thread Jim Wilcoxson
I think you have answered your own question. If you use synchronous=off, you are saying "I don't care much about this database." When you "save" documents, you are merely putting them in a computer's cache (memory) and then confirming to the user that they are on the hard drive, when they aren't

Re: [sqlite] Corrupt Database Problems

2009-05-18 Thread Jim Wilcoxson
On 5/18/09, Kevin Gale wrote: > According to the PRAGMA command documentation the database might become > corrupted if the o/s crashes or the computer loses power before the data has > been written to the disk surface. From the information we have from the > customer

Re: [sqlite] Feature request

2009-05-23 Thread Jim Wilcoxson
I'd like to suggest that features such as this be implemented with PRAGMA commands whenever possible, the reason being that the new functionality will then be immediately available to folks using non-C bindings, without waiting for an update to a binding package that may or may not occur. Jim On

Re: [sqlite] Minor errors in CREATE TABLE syntax diagrams

2009-05-26 Thread Jim Wilcoxson
For my money, I'd prefer to have a smaller, faster parser that worked correctly on correct input at the expense of not catching all possible syntax errors on silly input. There is a definite trade-off here, and I could see where a totally complete parser that caught every possible error in SQL

Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Jim Wilcoxson
I'm running on Linux with ext3 and just wrote a Python test program to insert rows into a table with a single column, no indexing, and doing a commit after each insert. When I first ran it, I got around 440 inserts per second, which is clearly impossible. This is a 7200rpm drive, so even if I
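
A sketch of that kind of commit-per-insert timing test (not the original program); the row count and file name are arbitrary:

    import sqlite3
    import time

    N = 500
    con = sqlite3.connect("tps_test.db")
    con.execute("PRAGMA synchronous = FULL")
    con.execute("CREATE TABLE IF NOT EXISTS t (v INTEGER)")

    start = time.time()
    for i in range(N):
        con.execute("INSERT INTO t VALUES (?)", (i,))
        con.commit()                  # one transaction, and one sync, per row
    print("commits per second: %.1f" % (N / (time.time() - start)))
    con.close()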

Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Jim Wilcoxson
is doing the syncs correctly, but ext3 is getting cached. Jim On 5/27/09, Jim Wilcoxson <pri...@gmail.com> wrote: > I'm running on Linux with ext3 and just wrote a Python test program to > insert rows into a table with a single column, no indexing, and doing > a commit after each

Re: [sqlite] corrupt database recovery

2009-05-27 Thread Jim Wilcoxson
What platform is this? I just posted a note today that my Linux box running 2.6-25 and ext3 isn't doing fsyncs like it should, so I would be susceptible to DB corruption if my machine crashed during DB I/O. I posted a C program you could run on the specific machine with a corrupt DB to see if

[sqlite] Feature request: reset file mtime on rollback

2009-05-27 Thread Jim Wilcoxson
It would be handy for file synchronization if SQLite stored the database file's mtime in the journal and reset it if a rollback occurs. It's difficult to do this in an application: 1. If a DB is opened, a transaction started, some stuff happens, and a rollback is executed, the file mtime only

Re: [sqlite] Slow Transaction Speed?

2009-05-29 Thread Jim Wilcoxson
I agree that adding this to the library, and making it accessible via a pragma command would be very useful. For example, pragma commitrate 1000 would test the commit rate of 1000 commits and return the results in transactions per second as a row. If I install my app on a client's machine, I

Re: [sqlite] Join performance in SQLite

2009-05-30 Thread Jim Wilcoxson
SQLite has surprised me with its quick performance, not the other way around. In fact, I've implemented all kinds of lookup queries that I knew could be optimized by caching results so I didn't have to keep repeating the SQL query, but the performance was so good even repeating the queries that I

Re: [sqlite] Feature request

2009-05-30 Thread Jim Wilcoxson
I read through the header after Simon's request, and saw the data change counter he mentions. What I wasn't sure about is whether this number gets reset during a rollback. For example, you start a transaction, make some changes, some cache pages get flushed, and SQLite changes this number so

Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Jim Wilcoxson
.78447008133 TPS: 627.028688445 Obviously, there's no way a 7200RPM drive can actually achieve these results. I'm building my own version of Python and sqlite here, and after some research, I think I didn't build sqlite with the -DHAVE_FULLFSYNC flag, so it isn't actually doing the fcntl() call. J

Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Jim Wilcoxson
Simon, the real point here is that Python and SQLite aren't doing real transactions. What we should be seeing here is AT MOST 30 TPS with synchronous=normal, assuming SQLite does 3 hard disk syncs per commit as was described in an earlier post. On my old AMD box, the one where fsync() is broken,

Re: [sqlite] Slow Transaction Speed?

2009-06-01 Thread Jim Wilcoxson
Microsoft has an interesting article on hard drive caches re: SQL Server: http://support.microsoft.com/kb/234656 "Many disk drives (SATA, ATA, SCSI and IDE based) contain onboard caches of 512 KB, 1 MB, and larger. Drive caches usually rely on a capacitor and not a battery-backed solution. These

Re: [sqlite] Slow select from a single table when using a view

2009-06-11 Thread Jim Wilcoxson
If you only have a handful of values for C and are already going to the trouble of creating separate views for each C, you could partition your data into separate tables for each value of C and maybe create another table containing the list of values of C and maybe the number of items in each C

Re: [sqlite] Advice on which to use (SQLite or SQL Server) for the following app.

2009-06-11 Thread Jim Wilcoxson
Here's what I'd try: 1. Write a small server that accepts connections and writes to the SQLite database using prepared statements. If you require 500 transactions per second, it's simply not possible with rotating media. So the solution is to either turn off synchronous, which is dangerous,

Re: [sqlite] Advice on which to use (SQLite or SQL Server) for the following app.

2009-06-11 Thread Jim Wilcoxson
of queries, and have only 1 process (or thread) writing, while a pool of processes handles queries. Not sure how well SQLite handles this situation, but since you are doing group commits, it will greatly decrease your write load and potential concurrency issues. Jim On 6/11/09, Jim Wilcoxson <

Re: [sqlite] Advice on which to use (SQLite or SQL Server) for the following app.

2009-06-11 Thread Jim Wilcoxson
the effects of the read, erase, write cycle, usually by some form of caching, but then you are also playing with losing the transaction guarantees of a commit. Can't really have it both ways. Jim On 6/11/09, Sam Carleton <scarle...@gmail.com> wrote: > Jim Wilcoxson wrote: >> Here's what I

Re: [sqlite] Advice on which to use (SQLite or SQL Server) for the following app.

2009-06-11 Thread Jim Wilcoxson
survives intact. Jim On 6/11/09, Jim Wilcoxson <pri...@gmail.com> wrote: > SSD's usually have poor write performance, because to do a write, they > have to use read, erase, write sequences across large blocks like 64K. -- Software first. So

Re: [sqlite] Advice on which to use (SQLite or SQL Server) for the following app.

2009-06-11 Thread Jim Wilcoxson
Yes, good point. If you partition the database into multiple databases, you will have to place each on its own physical disk drive to increase transaction rates. If your base transaction rate with one drive is T, with N drives it should be N*T; 4 drives gives you 4x the transaction rate, etc.

Re: [sqlite] sqlite3_step performance degredation

2009-06-15 Thread Jim Wilcoxson
There was a recent SQLite bug that caused the size of the SQLite cache to shrink in some circumstances, and the longer a program ran, the smaller the cache became. Maybe you are running into this bug. IIRC, you had to do an update in the select loop to trigger the bug, so if you're not doing

Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Jim Wilcoxson
You are doing transactions here, which is a very different thing from normal disk I/O. Your CPU is idle because it is waiting for the disk. Your disk is idle because it is waiting for the platters to rotate around again. The best case you can achieve on a 7200RPM disk is 120 transactions

Re: [sqlite] async io and locks

2009-06-20 Thread Jim Wilcoxson
Async I/O fits perfectly with my app: I don't need the durable guarantee. But I do need the ability to activate it with a pragma since it isn't a C app. Are there plans to make async I/O available via pragma? Jim On 6/20/09, Dan wrote: > > I think we have quite

[sqlite] Python sqlite binding: commit inside select loop

2009-06-30 Thread Jim Wilcoxson
I'm using the Python sqlite3 (pysqlite) bindings. I'd like to be able to do a select, and while fetching those rows and inserting new ones, periodically do a commit. With the Python bindings, an error occurs because a commit resets all pending select statements. Here is an example showing the
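
One common workaround (not the example from the original post): snapshot the keys you plan to visit before you start committing, so no SELECT cursor is live when a commit happens:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE src (id INTEGER PRIMARY KEY, v INTEGER)")
    con.execute("CREATE TABLE dst (v INTEGER)")
    con.executemany("INSERT INTO src (v) VALUES (?)", [(i,) for i in range(10)])
    con.commit()

    ids = [r[0] for r in con.execute("SELECT id FROM src")]   # materialize the keys first
    for n, rowid in enumerate(ids, 1):
        (v,) = con.execute("SELECT v FROM src WHERE id = ?", (rowid,)).fetchone()
        con.execute("INSERT INTO dst VALUES (?)", (v + 1,))
        if n % 3 == 0:
            con.commit()              # safe: no pending SELECT to be reset
    con.commit()
    con.close()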

Re: [sqlite] Python sqlite binding: commit inside select loop

2009-06-30 Thread Jim Wilcoxson
er Binns <rog...@rogerbinns.com> wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Jim Wilcoxson wrote: >> With the Python bindings, an error occurs >> because a commit resets all pending select statements. > > Note that there are some constrain

Re: [sqlite] Python sqlite binding: commit inside select loop

2009-06-30 Thread Jim Wilcoxson
Thanks for the comments and explanations everyone - much appreciated. It seems there are a few alternatives I can check out. Jim On 6/30/09, Roger Binns <rog...@rogerbinns.com> wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Jim Wilcoxson wrote: >> I guess I

Re: [sqlite] Python sqlite binding: commit inside select loop

2009-06-30 Thread Jim Wilcoxson
old or new data would be one thing, but in this example, totally wrong data is returned. There's no explanation of why it was happening. On 6/30/09, Jim Wilcoxson <pri...@gmail.com> wrote: > Thanks for the comments and explanations everyone - much appreciated. > It seems the

[sqlite] shell .schema command doesn't display attached table schemas

2017-02-21 Thread Jim Wilcoxson
Example: [jim@mb ~]$ sqlite3 dummy -- Loading resources from /Users/jim/.sqliterc SQLite version 3.15.1 2016-11-04 12:08:49 with the Encryption Extension Copyright 2016 Hipp, Wyrick & Company, Inc. Enter ".help" for usage hints. sqlite> create table t1(text); sqlite> ^D [jim@mb ~]$ ls -l dummy