Re: [sqlite] Do I need to migrate to MySQL?

2011-06-03 Thread Matthew L. Creech
On Fri, Jun 3, 2011 at 2:22 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 3 Jun 2011, at 7:19pm, Darren Duncan wrote:
>
>> MySQL should be avoided like the plague.
>
> Why ?
>

Coincidentally, I happened to be reading over this page just earlier today:

http://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL_2009

Obviously a bit biased toward PostgreSQL (since it's hosted there),
but it points out some fairly specific differences in features,
performance, etc.  (I have no personal experience either way, but was
just curious myself).

-- 
Matthew L. Creech
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL: no longer able to use read-only databases?

2010-07-15 Thread Matthew L. Creech
On Thu, Jul 15, 2010 at 12:23 PM, Pavel Ivanov <paiva...@gmail.com> wrote:
>> I don't think so.  Just like the older SQLite journal system, it's important 
>> that the WAL files survive through a crash.
>
> I believe WAL file is not a problem here (despite some confusing macro
> name that Matthew proposed). The problem is SHM file which don't have
> to survive - SQLite rebuilds it in case if it's missing.
>

Right, sorry for the confusing terminology.  The "-shm" file is what I
was referring to, since that's the part that needs to be writable even
for a read-only app, if I understand correctly.  I believe that it's
okay to put that in a ramdisk, issues with chroot() aside.

-- 
Matthew L. Creech


Re: [sqlite] WAL: no longer able to use read-only databases?

2010-07-15 Thread Matthew L. Creech
So if I'm reading the documentation correctly:


The wal-index is in shared memory, and so technically it does not have
to have a name in the host computer filesystem. Custom VFS
implementations are free to implement shared memory in any way they
see fit, but the default unix and windows drivers that come built-in
with SQLite implement shared memory using mmapped files named using
the suffix "-shm" and located in the same directory as the database
file.

...

Early (pre-release) implementations of WAL mode stored the wal-index
in volatile shared-memory, such as files created in /dev/shm on Linux
or /tmp on other unix systems. The problem with that approach is that
processes with a different root directory (changed via chroot) will
see different files and hence use different shared memory areas,
leading to database corruption.


It seems like the only thing preventing WAL from working with
read-only databases is this lack of a global namespace for shared
memory.

This exists in many Linux systems as "/dev/shm", or even "/tmp" would
work fine for a lot of users.  I totally understand that you can't
make this the default, because it could potentially lead to strange
behavior with chroot()s and the like.  But for those of us with
controlled environments who know that all applications using SQLite
share the same view of the filesystem, it would be great if we could
#define an option which turns this on.  For my application, chroot()ed
apps are a complete non-issue, whereas lack of read-only DB access is
a dealbreaker for WAL (which would really be a shame, the performance
benefit is substantial!)

I know I could always write my own VFS to do this, but that seems like
overkill.  :)  More importantly, it requires maintenance - I'd
probably create my VFS by copying os_unix.c, but then I wouldn't
automatically be getting any fixes/updates that you guys make to that
file going forward.

Best would be if SQLite had a #define like SQLITE_CUSTOM_WAL_LOCATION,
which defaults to undefined (and hence you use the same directory as
the DB file), but which users could define to "/dev/shm", "/tmp", or
some other location to place all the shm files there, globally.  The
changes in

http://www.sqlite.org/src/fdiff?v1=ae173c9f6afaa58b2833a1c95c6cd32021755c42&v2=a76d1952ac7984574701c48b665220b871c5c9a5

are pretty straightforward, so I could probably take a stab at this if you want.

What do you guys think?

-- 
Matthew L. Creech


[sqlite] WAL: no longer able to use read-only databases?

2010-07-09 Thread Matthew L. Creech
In testing the latest SQLite snapshot with WAL enabled, it seems that
there's no way to use a database in a read-only location.  For
example, let's say I've created a database as root, then closed it
(cleanly):

$ ls -l /flash/alarms.db*
-rw-r--r--    1 root     root         36864 Jan  1 00:14 /flash/alarms.db

If I try as another user to use that database, I get an error:

$ sqlite3 /flash/alarms.db
SQLite version 3.7.0
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> PRAGMA journal_mode;
Error: unable to open database file
sqlite>

However, if I create a symlink to that database in a location that I
have write access to, then everything works fine:

$ ln -s /flash/alarms.db ./alarms.db
$ ls -l alarms.db*
lrwxrwxrwx    1 admin    admin           16 Jan  1 00:15 alarms.db -> /flash/alarms.db
$ sqlite3 alarms.db
SQLite version 3.7.0
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> PRAGMA journal_mode;
wal
sqlite> .quit
$ ls -l alarms.db*
lrwxrwxrwx    1 admin    admin           16 Jan  1 00:15 alarms.db -> /flash/alarms.db
-rw-r--r--    1 admin    admin        32768 Jan  1 00:15 alarms.db-shm
-rw-r--r--    1 admin    admin            0 Jan  1 00:15 alarms.db-wal

So clearly this is a side-effect of WAL creation, which happens in the
same directory as the database file.

This doesn't seem like it should fundamentally be any different than
normal journaling mode, in that opening a database in read-only mode
makes the creation of a journal / WAL unnecessary.  But I'm not
familiar with the WAL internals, so maybe there's more to it.  FYI,
this works fine with normal journaling mode (we bumped into this with
existing code after changing the journal_mode).

Any additional flags or ways of doing this that I'm missing?  Or is it a bug?

Thanks!

-- 
Matthew L. Creech


Re: [sqlite] How to determine when to VACUUM?

2010-07-06 Thread Matthew L. Creech
On Tue, Jul 6, 2010 at 5:58 PM, Kristoffer Danielsson
<kristoffer.daniels...@live.se> wrote:
>
> However, I do believe there are times when vacuuming would be beneficial. For 
> instance, if a database for software X is detected to have 90% unused space 
> for a couple of weeks, then why bloat the harddrive? (I don't know how to do 
> that though :P)
>
> In my opinion, the user should always have the option to vacuum the database. 
> My goal is to let software X have some logic to give the user a hint when 
> this action would be appropriate.
>

It can also be beneficial for performance reasons, although you'd
obviously have to do testing to see whether the very-CPU-intensive
VACUUM operation is worth the gain in subsequent performance.  There
are some cases where it clearly is.

As for your original question, I would think that you'd want to use
some ratio of 'PRAGMA freelist_count;' to 'PRAGMA page_count;' to make
your determination:

http://sqlite.org/pragma.html

Once the number of unused pages gets large enough compared to the
total database size, it might be an appropriate time to VACUUM.
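For illustration, that heuristic sketched with Python's sqlite3 module (the
25% threshold is just an example value, not a recommendation):

```python
import sqlite3

def vacuum_if_fragmented(path, threshold=0.25):
    """VACUUM the database when freed pages exceed `threshold` of
    total pages.  The 25% cutoff is an arbitrary example."""
    conn = sqlite3.connect(path)
    try:
        free = conn.execute("PRAGMA freelist_count").fetchone()[0]
        total = conn.execute("PRAGMA page_count").fetchone()[0]
        if total and free / total >= threshold:
            conn.execute("VACUUM")
            return True
        return False
    finally:
        conn.close()
```

Whatever ratio you pick, you'd want to tune it against your own workload,
since VACUUM itself is expensive.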

-- 
Matthew L. Creech


Re: [sqlite] [sqlite-dev] SQLite 3.7.0 coming soon....

2010-06-30 Thread Matthew L. Creech
On Wed, Jun 30, 2010 at 1:21 PM, D. Richard Hipp <d...@hwaci.com> wrote:
> We are in the final phases of development for SQLite version 3.7.0.
> The key enhancement over version 3.6.23.1 is support for the use of
> write-ahead logs for transaction control.  See 
> http://www.sqlite.org/draft/wal.html
>  for additional information on how this might enhance your  use of
> SQLite.
>

For reference, I've got an embedded application (Linux/UBIFS on NAND)
that's write-heavy, and which has been pretty well optimized for
SQlite performance.  I replaced the library with the latest snapshot
and set journal_mode=WAL, no further tweaking of checkpointing or
anything, and log performance improved by ~30%.
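For anyone wanting to try the same thing, enabling WAL is a single pragma
per database (a minimal sketch using Python's sqlite3 module; the file name
is arbitrary):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
# The pragma returns the journal mode actually in effect, so you can
# check that the build/filesystem supports WAL:
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
```

The mode is persistent, so later connections to the same file will also use
WAL without re-issuing the pragma.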

Looking forward to 3.7.0!

-- 
Matthew L. Creech


Re: [sqlite] Proposed: drop support for LinuxThreads

2010-05-12 Thread Matthew L. Creech
On Wed, May 12, 2010 at 11:43 AM, D. Richard Hipp <d...@hwaci.com> wrote:
> If we drop support for (the non-standard, non-compliant) LinuxThreads
> threading library and instead support only standard Posix threads
> implemented using NPTL, beginning with SQLite release 3.7.0, what
> disruptions might this cause?
>
> Is anybody still using LinuxThreads?
>

FWIW, our only platform still using LinuxThreads is [/was]
coldfire+MMU (v4e), and a couple of months ago CodeSourcery pushed
complete NPTL support for it upstream.  It's not a very popular or
well-supported system, so I'd take that as a sign that LinuxThreads is
on its last legs.

Obviously there will always be some exceptions, but I'd imagine that
few of them are concerned with keeping those systems up to date with
the very latest SQLite.

-- 
Matthew L. Creech


Re: [sqlite] SQlite C API screwing port access?

2010-03-23 Thread Matthew L. Creech
On Tue, Mar 23, 2010 at 12:55 PM, Martin Sigwald <msigw...@gmail.com> wrote:
> I have a program which builds an ICMP package over IP and sends it. Before
> that, I get IP number and other information from a SQlite DB. I was having
> problems, so I began to comment different parts of the code, until I got to
> this code (pseudocode):
>
> sqlite3_open(DB_NAME);
> sqlite3_close(DB_NAME);
>
> ping_server("10.0.0.4"); //my ping function, which pings a "hardcoded" IP,
> doesnt interact with DB
>

Could you post the actual code snippet?  For example, sqlite3_close()
doesn't take a string - you did say it's pseudocode, I'm just
wondering if there's a problem with the arguments along those lines.

-- 
Matthew L. Creech


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Matthew L. Creech
On Mon, Mar 15, 2010 at 2:11 AM, Matthew L. Creech <mlcre...@gmail.com> wrote:
>
> I'll give this a try tomorrow on a real device with journaling off,
> and see how much space it uses in /tmp with journaling turned off.
>

I ran some tests on a real device with a real database, and got the
following results:



Test 1: 250,000 entries, normal VACUUM

Before the test:
- 27.9 MB database
- No journal
- 37.5 MB free disk space

By the time the journal file starts growing:
- 27.9 MB database
- Small journal
- ~15 MB free disk space

Toward the end of the operation (before failure):
- 27.9 MB database
- 14.6 MB journal
- < 2 MB free disk space

The VACUUM operation fails with a disk I/O error, presumably due to
running out of space.


Test 2: 250,000 entries, journal_mode = OFF, VACUUM

Before the test:
- 27.9 MB database
- 37.5 MB free disk space

Toward the end of the operation:
- 28.1 MB database
- 10.6 MB free disk space

The VACUUM operation succeeds.


Test 3: 250,000 entries trimmed to 100,000 entries, normal VACUUM

Before the test:
- 27.9 MB database
- No journal
- 37.5 MB free disk space

By the time the journal file starts growing:
- 27.9 MB database
- Small journal
- ~33 MB free disk space

Toward the end of the operation:
- 27.9 MB database
- ~28 MB journal
- ~5 MB free disk space

Afterward:
- 11.2 MB database
- 54.3 MB free disk space

The VACUUM operation succeeds.


Test 4: 250,000 entries trimmed to 100,000 entries, journal_mode = OFF, VACUUM

Before the test:
- 27.9 MB database
- 37.5 MB free disk space

Toward the end of the operation:
- 28.1 MB database
- 33.3 MB free disk space

The VACUUM operation succeeds.




I never did see any temporary files, but space was obviously being
taken up for a temp database, so I assume that SQLite opens a file
then unlink()s it or something like that.  It looks like in the normal
(journaled) case, the journal file consistently grows about as large
as the original database, but the extra disk space used up by the
temporary table is dependent on the new database size.

So Jay's estimate of disk usage:

 [old database size] + [new database size] + [journal file]

is correct, but for the normal VACUUM case, [journal file] is
basically equivalent to [old database size].  So it's really just:

 (2 * [old database size]) + [new database size]

This means that to VACUUM a SQLite database of size X, you need at
least 2X of _additional_ free disk space available.  That seems rather
wasteful, just looking at it as a SQLite user.  Although
programmatically there may be reasons for it that I'm not aware of.
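In code form, the rule of thumb above looks like this (an estimate only -
actual usage will vary with page size and transaction overhead):

```python
def vacuum_disk_needed(old_db_bytes, new_db_bytes):
    """Estimate the peak *additional* free space VACUUM needs beyond
    the existing database file: the rebuilt copy plus a journal that
    grows to roughly the old database's size."""
    journal_bytes = old_db_bytes  # observed: journal ~= old DB size
    return new_db_bytes + journal_bytes

# Worst case (nothing was deleted, so new == old): 2x the DB size.
needed = vacuum_disk_needed(100, 100)
```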

At any rate, I guess my immediate problem will have to be solved with
a one-off patch that disables journaling and does a VACUUM, and for
long-term usage I'll need to shrink the database capacity even more to
account for the 3x disk space usage if I need to VACUUM again in the
future.

Thanks

-- 
Matthew L. Creech


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Matthew L. Creech
On Sun, Mar 14, 2010 at 9:18 PM, Jay A. Kreibich <j...@kreibi.ch> wrote:
>
>  Are you sure it is the journal file that is growing too large?
>
...
>
>  Now, if I'm following you correctly, the numbers you gave seem to
>  indicate that this should work... If the old database is 100MB and
>  the new database is 50MB and I'm saying the journal file is small,
>  then 80MB free before you start should be enough.
>

I'm sure that it's the journal file.  This is happening on an embedded
device in a flash-based filesystem which is rather slow, so I was
logged in via SSH and could see the journal file growing unusually
large as free space shrunk to zero.

To double check, I just tried a similar test on my desktop:

$ ls -l deadband.db
-rw-r--r-- 1 mlcreech mlcreech 85209088 Mar 15 01:35 deadband.db
$ ./sqlite3 deadband.db
SQLite version 3.6.22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> SELECT count() FROM val_table;
663552
sqlite> DELETE FROM val_table WHERE idx < 60;
sqlite> SELECT count() FROM val_table;
51840
sqlite> VACUUM;

In another shell, I've got a script monitoring the size of
deadband.db-journal every 100ms.  When I do the VACUUM, it shows:

Journal size: 4096 bytes
Journal size: 4096 bytes
Journal size: 7389184 bytes
Journal size: 14774272 bytes
Journal size: 22159360 bytes
Journal size: 29544448 bytes
Journal size: 36929536 bytes
Journal size: 44314624 bytes
Journal size: 51699712 bytes
Journal size: 59080704 bytes
Journal size: 66658304 bytes
Journal size: 81235968 bytes
Journal size: 85393408 bytes

That last size is actually larger than the original database
(additional transaction metadata and what not, I guess).  After it's
done, though, the file size is appropriately smaller:

$ ls -l deadband.db
-rw-r--r-- 1 mlcreech mlcreech 6709248 Mar 15 01:43 deadband.db

I notice that when I've mostly emptied the database (as in this
example), the VACUUM completes quickly, roughly in proportion to how
many real entries are left.  But the journal size still follows a
near-linear growth throughout the operation, regardless.  So
presumably it's not really doing anything with the old data, but still
copying it over to the journal as it goes through the old DB
page-by-page or something.

You mentioned that it's creating a temp database, and looking at
sqlite3RunVacuum() I see 'vacuum_db' which seems to be just that.
However, I'm not sure where that data is actually going.  Setting
temp_store_directory to my current directory didn't actually generate
any files while the VACUUM was happening, that I could see.

>
>    PRAGMA journal_mode = OFF;
>
>  I would do this on a test system.  If the problem really is the
>  journal file, this should allow things to work.  If you still get a
>  space error, we're dealing with something else.
>

This could work as a last resort, although I'd hoped to find a better
way - these are embedded devices, so power failure or reboot during
the middle of this operation is a possibility.  For this one-time
case, though, I may be able to swing it if there's no other choice.

By the way, there's one single read/write filesystem in flash on these
systems, so in my case "/tmp" is no different than the directory that
the database & journal are stored in.  There's also not nearly enough
free memory to hold the small database, so I can't use temp_store =
MEMORY, unfortunately.

I'll give this a try tomorrow on a real device with journaling off,
and see how much space it uses in /tmp with journaling turned off.

Thanks for the response!

-- 
Matthew L. Creech


[sqlite] VACUUM & journal size

2010-03-14 Thread Matthew L. Creech
Hi,

I have a SQLite database with one large table, and I'd like to shrink
the size of that table to free up space in the filesystem.  My problem
is that the database is (for example) 100 MB, and I have 80 MB of free
filesystem space.  I figured that I could DELETE, say, half of the
records from the table, then VACUUM, and the VACUUM would
[temporarily] need ~50 MB of free space for the journal (since that's
how much real data there is).

Instead, I'm finding that it needs a full 100 MB for the journal, even
though once the VACUUM succeeds the resulting DB is only 50 MB.  As a
result, I'm stuck unable to shrink the database, since VACUUM fails
with a disk I/O error (out of space), seemingly no matter how many
entries I remove ahead of time.  I know the space is being freed, since
"PRAGMA freelist_count" shows the expected numbers.  So presumably
this is just an artifact of the way VACUUM is implemented internally.

Is there anything that I can do to shrink the database in-place?  It
looks like auto-vacuum is out of the question, since it has to have
been enabled before the table was initially created.  FYI, this
situation exists on a few dozen devices in various locations, so I'd
need a programmatic solution - "move the DB somewhere else, VACUUM,
then move it back" won't work, unfortunately.  :-)

Any tips are appreciated.  Thanks!

-- 
Matthew L. Creech


Re: [sqlite] Index usage

2009-09-21 Thread Matthew L. Creech
On Mon, Sep 21, 2009 at 8:27 AM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
> There's no way to optimize your query to be fast in both situations.
> LIMIT clause is pretty hard to optimize. Maybe just to have a closer
> look at the application structure - maybe it's not so necessary to do
> ORDER BY or maybe LIMIT can be moved to inner query...
> But for this particular case I think it's pretty reasonable to use
> INDEXED BY clause despite what documentation says (it discourages
> usage for common cases).
>

Yeah, that's what I was afraid of.  :)  I guess I'll end up just
tracking the number of val_table entries which match each path, then
totaling up the # of matching entries first to get a count of how many
rows my real query is going to match.  Using that, and the LIMIT BY
items, I can maybe heuristically guess which indexing method will be
faster for when I do the real query.  Seems like a pain for this
relatively simple scenario, but I can see how it'd be deceptively
easy-looking to optimize.

Thanks for the response

-- 
Matthew L. Creech


[sqlite] Index usage

2009-09-18 Thread Matthew L. Creech
Hi,

I'm trying to optimize a query for 2 different scenarios, and I'm
having trouble getting something that works good in general.  I want
to be sure I'm not missing something.  Here are the tables and indexes
used in my database:

sqlite> CREATE TABLE path_table (idx INTEGER PRIMARY KEY, path TEXT UNIQUE);
sqlite> CREATE TABLE val_table (idx INTEGER, val REAL, time INTEGER);

sqlite> CREATE INDEX time_idx ON val_table (time ASC);
sqlite> CREATE INDEX path_idx ON val_table (idx ASC);

'path_table' contains unique string path names, while 'val_table'
records any number of values associated with each path, and the time
at which the value occurred.

My query looks something like:

sqlite> SELECT val FROM val_table WHERE idx IN (SELECT idx FROM
path_table WHERE path GLOB '[a]') ORDER BY time ASC LIMIT [b];

where [a] and [b] are provided by my code's caller.  My problem occurs
when the idx value(s) selected from 'path_table' match a large number
of records in the database - say, 100,000 out of 1,000,000 records.
In that case, the query takes several minutes to complete even when
[b] is small.  Presumably it's first looking up all 100,000 rows where
'idx' matches, then applying the ORDER BY clause to those results
without indexing.  EXPLAIN QUERY PLAN confirms:

0|0|TABLE val_table WITH INDEX path_idx

I tried adding "INDEXED BY time_idx", which greatly improved this
particular case, because statistically 1/10 rows will match 'idx' and
therefore we find [b] of them very quickly when [b] is small.  But
this hurts performance in other cases, since if there are only a few
rows with a matching 'idx', the query ends up manually walking through
most of the table.
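For reference, the forced-index variant against the schema above looks like
this (sketched via Python's sqlite3 module, with [a] and [b] filled in with
made-up example values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE path_table (idx INTEGER PRIMARY KEY, path TEXT UNIQUE);
    CREATE TABLE val_table (idx INTEGER, val REAL, time INTEGER);
    CREATE INDEX time_idx ON val_table (time ASC);
    CREATE INDEX path_idx ON val_table (idx ASC);
    INSERT INTO path_table VALUES (1, 'a/b'), (2, 'c/d');
    INSERT INTO val_table VALUES (1, 1.0, 30), (1, 2.0, 10), (2, 3.0, 20);
""")
# Force the time-ordered index: rows come out already sorted by time,
# so the LIMIT can stop the scan early when matches are plentiful.
rows = conn.execute("""
    SELECT val FROM val_table INDEXED BY time_idx
    WHERE idx IN (SELECT idx FROM path_table WHERE path GLOB '*')
    ORDER BY time ASC LIMIT 10
""").fetchall()
```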

My question: how can I optimize this kind of query so that it utilizes
both indexes, to grab the first [b] rows (ordered by time) which also
match [a]?  Or am I just going to have to guess at which way will be
faster, and use "INDEXED BY" to force it?  (The documentation says I
shouldn't have to do this)

Thanks for the help!

-- 
Matthew L. Creech


Re: [sqlite] Cross-Compile and Installation of Sqlite

2009-06-25 Thread Matthew L. Creech
On Thu, Jun 25, 2009 at 4:23 PM, Ben Atkinson <bwa4...@yahoo.com> wrote:
>
> I have an embedded Linux ARM target and wish to run sqlite on it.  I 
> successfully cross-compiled sqlite-3.6.15 on my Ubuntu x86 host, and now I'm 
> ready to install sqlite3, its libraries, and headers on my target system.
>
> I originally tried compiling sqlite on my embedded target system.  Because it 
> has only a flash file system, and there is no swap area, gcc fails because it 
> runs out of memory.
>
> I tried zipping up the cross-compiled sqlite-3.6.15 directory from my x86 
> host into a tar.gz file, downloading it to my target, unzipping it, then 
> running "make install".  Because the config files and the Makefile have all 
> of the arm-unknown-linux-gnueabi cross-compiler references to gcc, this 
> doesn't match the actual configuration on my embedded target, and the make 
> fails.
>
> Before I start hacking into the sqlite config and Makefiles on my embedded 
> target, has someone already been through this and perhaps has a "howto"?  Is 
> there already a recipe in the Makefile for this?
>

SQLite isn't much different than any other autotools-based package in
this regard.  If you're wanting to do a firmware build, and include
the SQLite pieces in their correct location in your target root
filesystem, you can just do something like:

cd /path/to/sqlite-3.6.15
./configure --prefix=/my/target/rootfs --host=arm-unknown-linux-gnueabi
make install

If all you want is to run the sqlite3 executable, though, you can just
take the cross-compiled binary + shared-lib and throw them onto the
target.  Note that you'll need to set LD_LIBRARY_PATH when running
"sqlite3" to prevent it from complaining about missing libraries,
unless /lib is writable on your target.

-- 
Matthew L. Creech


Re: [sqlite] the speed of embedded database engines, sqlite3 vs berkeley db, I'm confused

2009-04-28 Thread Matthew L. Creech
On Tue, Apr 28, 2009 at 3:25 AM, liubin liu <7101...@sina.com> wrote:
>
> Thanks!
>
> It sounds pretty good. But I don't understand it exactly. Could you write
> down the sample codes?
>

Please see the documentation here:

http://sqlite.org/c3ref/funclist.html

You'll have something along the lines of (just a sketch, obviously):

typedef struct {
    sqlite3 *db;
    sqlite3_stmt *stmt;
} my_handle_t;

my_handle_t my_init(void) {
    my_handle_t handle;
    sqlite3_open("my.db", &handle.db);  /* file/table names illustrative */
    sqlite3_prepare_v2(handle.db, "SELECT val FROM t WHERE id = ?",
                       -1, &handle.stmt, NULL);
    return handle;
}

void my_exec(my_handle_t handle, int id) {
    sqlite3_bind_int(handle.stmt, 1, id);
    /* Step once per result row: */
    while (sqlite3_step(handle.stmt) == SQLITE_ROW) {
        /* read columns with sqlite3_column_*() */
    }
    /* After you've gotten all the result rows: */
    sqlite3_reset(handle.stmt);
}


Your caller would then call my_init() once to get a handle, then call
my_exec() a bunch of times using that handle.  The way your example is
doing it, _every_ time through the loop it does an exec(), which
re-compiles the same SQL code (which is not a fast operation).

There are plenty of other examples floating around on this mailing
list, I'm sure - just do some digging.

-- 
Matthew L. Creech


Re: [sqlite] the speed of embedded database engines, sqlite3 vs berkeley db, I'm confused

2009-04-27 Thread Matthew L. Creech
On Mon, Apr 27, 2009 at 10:25 PM, liubin liu <7101...@sina.com> wrote:
>
> thanks
>
> I'm not sure of the real reason.
> It's true that the speed of inserting with transaction is very fast. But my
> project is using SQLite mainly in selecting something. I don't know how to
> use transaction in the situation. May some friends give me some codes on
> that?
>

As someone else mentioned, you probably don't want to open/close the
database every time you do a single SELECT.  In the example you're
doing 100,000 iterations of open/exec/close, which will give pretty
terrible performance.

Instead, try creating an initialization call, which opens the
database, prepare()s your SELECT stmt (with a '?' placeholder for the
ID in this example), and returns a handle to the caller.  The caller
can then loop 100,000 times calling getdata() with this handle.
getdata() can just do sqlite3_bind_XXX() followed by sqlite3_step()
(and sqlite3_reset()), which will be _much_ faster than
sqlite3_exec().
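The same open-once/bind-per-call shape, sketched in Python (table, column,
and function names are invented for the example; Python's sqlite3 module
caches the compiled statement internally, playing the role of prepare()):

```python
import sqlite3

# Open once, up front -- not once per query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO data VALUES (?, ?)",
                 [(i, "v%d" % i) for i in range(1000)])

def getdata(conn, id_):
    # The '?' placeholder plays the role of sqlite3_bind_int(); the
    # SQL text is identical on every call, so it isn't re-parsed.
    return conn.execute("SELECT val FROM data WHERE id = ?",
                        (id_,)).fetchone()

for i in range(1000):
    getdata(conn, i)
```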

-- 
Matthew L. Creech


[sqlite] UNION with results distinct on a particular column?

2009-03-24 Thread Matthew L. Creech
Hi,

I'm hoping someone here can help me out with a query.  I have multiple
tables, each with the same schema.  For example:

=
Table A:
=
1|"xxx"
2|"yyy"
3|"zzz"
=

=
Table B:
=
1|"xxx222"
3|"zzz222"
5|"www"
=

I'd like a SELECT statement that yields:

=
Result:
=
1|"xxx"
2|"yyy"
3|"zzz"
5|"www"
=

In other words, I want the UNION of all the input tables, but if there
are multiple results that have the same value in the first column, the
first table's value should take precedence.

This seems like a common scenario, so I'm probably missing something
trivial.  :)  But so far, the only way I've figured out to do this is
with something like:

SELECT * FROM
(SELECT 1 AS precedence, col1, col2 FROM A UNION
 SELECT 2 AS precedence, col1, col2 FROM B
 ORDER BY col1 ASC, precedence DESC)
GROUP BY col1
ORDER BY col1 ASC;

(Just an example, I've got several other columns that have to be
sorted on, and there can be any number of tables).  This seems to do
what I want, but it takes an order of magnitude longer than the inner
SELECTs do on their own (i.e. without the GROUP BY which eliminates
rows with duplicate 'col1' values).  Any ideas on how I could do this
more efficiently?
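One deterministic alternative I can think of, which avoids the GROUP BY
entirely, is to take everything from A and then only the keys B adds
(sketched here with the two example tables; whether it's actually faster
would need measuring against the real data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (col1 INTEGER, col2 TEXT);
    CREATE TABLE B (col1 INTEGER, col2 TEXT);
    INSERT INTO A VALUES (1,'xxx'), (2,'yyy'), (3,'zzz');
    INSERT INTO B VALUES (1,'xxx222'), (3,'zzz222'), (5,'www');
""")
# All of A, plus only those B rows whose key A doesn't already have.
rows = conn.execute("""
    SELECT col1, col2 FROM A
    UNION ALL
    SELECT col1, col2 FROM B WHERE col1 NOT IN (SELECT col1 FROM A)
    ORDER BY col1
""").fetchall()
```

With an index on col1 in each table, the NOT IN subquery should stay cheap.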

Thanks!

-- 
Matthew L. Creech


Re: [sqlite] Windows XP: How Can I Get Higher CPU usage from SQLite

2009-02-22 Thread Matthew L. Creech
On Sun, Feb 22, 2009 at 11:44 PM, jose isaias cabrera
<cabr...@wrc.xerox.com> wrote:
>>
>> Try this:
>> pragma cache_size=1;
>> pragma page_size=16384;
>> vacuum;
>
> Wow, thanks.  That did help. Cool, so there was something that I could do.
> Here is a question: this DB is shared by other folks, do they each need to
> set this when they open or if I set it, it would apply to everyone that
> connects to it?
>

"page_size" will persist once you've done the VACUUM.  "cache_size"
doesn't persist, but you can use "default_cache_size" if you want it
to.

http://sqlite.org/pragma.html
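A quick way to see the difference (a sketch; the 16 KB page size mirrors
the suggestion above, and the file name is arbitrary):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "shared.db")

conn = sqlite3.connect(path)
conn.execute("PRAGMA cache_size=10000")  # affects this connection only
conn.execute("PRAGMA page_size=16384")
conn.execute("VACUUM")                   # rebuilds the file with the new page size
conn.close()

# A brand-new connection sees the persisted page_size...
conn2 = sqlite3.connect(path)
page = conn2.execute("PRAGMA page_size").fetchone()[0]
# ...but its cache_size is back to the default.
```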

-- 
Matthew L. Creech


Re: [sqlite] sqlite on flash devices

2008-10-02 Thread Matthew L. Creech
On Thu, Oct 2, 2008 at 4:20 AM, Paul McMahon <[EMAIL PROTECTED]> wrote:
> What's the consensus on using sqlite with flash based storage such as sd
> cards on embedded devices?
>
> Has anyone done this successfully yet with a real product (that needs to
> last a few years in the field)?
>

We've got several (Linux-based) embedded products using SQLite
databases.  One has an M-Systems SoC with Ext3 on top of TFFS, another
uses CompactFlash with Ext3, and yet another uses a directly-addressed
NOR chip with JFFS2.  None have had any problems related to SQLite.

How long it lasts depends (obviously) on the amount of data you're
pushing through on average.  For the NOR device, I implemented a test
system with SQLite, set up a simulator to model worst-case conditions
for the device so that it was writing to the database about as much as
it ever would, and hooked the MTD layer to count the number of erased
blocks over a pre-defined period of time.  You can then use that rate
and the size of the partition to figure out how long it'll take to hit
e.g. 100k erase cycles over the whole device.  For SD you don't know
how well it's doing wear-leveling behind the scenes, so throw in a
multiplier to account for that, and you'll at least get an
order-of-magnitude accurate idea of how long it'll last worst-case.
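The arithmetic at the end can be sketched like this (all numbers below are
illustrative, not measurements from the test I described):

```python
def worst_case_lifetime_years(erases_per_day, partition_blocks,
                              cycles_per_block=100_000, sd_fudge=1.0):
    """Estimate years until every block reaches its rated erase-cycle
    count, assuming ideal wear-leveling spreads erases evenly.
    Set sd_fudge < 1 to derate for unknown SD wear-leveling."""
    total_erases = partition_blocks * cycles_per_block * sd_fudge
    return total_erases / erases_per_day / 365.0

# e.g. a 512-block partition seeing 2000 erases/day under worst-case load:
years = worst_case_lifetime_years(2000, 512)
```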

-- 
Matthew L. Creech


Re: [sqlite] A question about the use of localtime

2008-06-19 Thread Matthew L. Creech
On Thu, Jun 19, 2008 at 4:35 PM, Matthew L. Creech <[EMAIL PROTECTED]> wrote:
>
> I think this behavior is probably due to the way localtime() works in
> glibc.  From what I've seen (at least on my embedded ARM-Linux board),
> localtime() only invokes tzset() the first time it's run by an
> application.  So if your app starts and calls localtime() before the
> timezone is changed, you'll get times formatted according to the old
> timezone.
>

Correction - that's what happens when localtime_r() is called;
localtime() is guaranteed to call tzset() on each invocation.

So one option here is to just disable use of localtime_r(), since
presumably the configure script detects it and defines
HAVE_LOCALTIME_R in config.h.
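The tzset() effect is easy to demonstrate from Python, whose time module
wraps the same C library calls (Unix-only; the zone names below are plain
POSIX TZ strings, chosen just for the example):

```python
import os, time

os.environ["TZ"] = "UTC"
time.tzset()                  # re-read TZ, as glibc's localtime() would
utc_hour = time.localtime(0).tm_hour   # the epoch in UTC -> hour 0

os.environ["TZ"] = "EST5"     # a fixed UTC-5 zone
time.tzset()                  # without this, the old zone would persist
est_hour = time.localtime(0).tm_hour   # same instant -> hour 19
```

Skipping the second tzset() call reproduces the stale-timezone behavior
described above.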

-- 
Matthew L. Creech


Re: [sqlite] A question about the use of localtime

2008-06-19 Thread Matthew L. Creech
On Thu, Jun 19, 2008 at 3:28 PM, Rich Rattanni <[EMAIL PROTECTED]> wrote:
>
> So this email isn't blaming SQLite for my improper timestamps even
> after I set the correct zoneinfo file.  I am just trying to
> understand what is going on.  Should I assume that my observation of
> 'Set zoneinfo first, then use datetime functions' is
> a valid fix?  Or am I fooling myself?
> I just wanted to know the mechanism at work here.  Of course I do
> fully acknowledge this is could be an artifact in the
> GLIBC library.  If someone knows the particular reason this is
> happening, even if it is GLIBC fault, I would love to know.
>

I think this behavior is probably due to the way localtime() works in
glibc.  From what I've seen (at least on my embedded ARM-Linux board),
localtime() only invokes tzset() the first time it's run by an
application.  So if your app starts and calls localtime() before the
timezone is changed, you'll get times formatted according to the old
timezone.

Our solution was to simply call tzset() before calling localtime(),
since it wasn't in any sort of critical path.  That's probably not a
good solution in SQLite, but as a start, you may want to try calling
tzset() yourself before each time you modify the flags, and see if
that fixes the problem.

-- 
Matthew L. Creech


Re: [sqlite] Cannot get amalgamation built from CVS to compile

2008-05-13 Thread Matthew L. Creech
On Tue, May 13, 2008 at 12:28 AM, Samuel Neff <[EMAIL PROTECTED]> wrote:
>
>  One other issue we're having and are not sure about is we get a compiler
>  error on sqlite3_profile and sqlite3_trace.  We need to remove these two
>  lines from the def file included with the sqlite source in order to get
>  everything to compile ok on VS 2008.  are sqlite3_profile and sqlite3_trace
>  included by default in both the source and def or is there a mismatch?  Or
>  is there something else we should be doing besides editing the def file
>  manually?
>

Presumably you're either defining SQLITE_OMIT_TRACE or
SQLITE_OMIT_FLOATING_POINT, which omits both of those functions from
the build, and since sqlite3.def is static it has to be updated.  I'd
hazard a guess that updating it yourself is currently required, but I
don't build on Windows, so I'm really not sure - DRH would know more
about this.

The makefile does have a target to re-generate sqlite3.def on the fly,
but that's meant for Cygwin-based builds.

-- 
Matthew L. Creech


Re: [sqlite] Cannot get amalgamation built from CVS to compile

2008-05-06 Thread Matthew L. Creech
In the latest CVS, you should now also be able to do what you intended
in the first place.  Namely:

./configure
make sqlite3.c

I thought about it, and there's no good reason to inline the
auto-generated config.h file into the amalgamation like we were
doing, so now it keeps it as an #include that's only performed if
building in-tree.  Let me know if you see any more problems.

-- 
Matthew L. Creech


Re: [sqlite] Cannot get amalgamation built from CVS to compile

2008-05-06 Thread Matthew L. Creech
On Tue, May 6, 2008 at 4:38 PM, Samuel Neff <[EMAIL PROTECTED]> wrote:
>
>  we ran
>
>  configure
>  make sqlite3.c
>
>  and got an amalgamation with those types defined using unmodified sources
>  from CVS.  Are you saying the types should not be defined?

No - you got what I'd expect: #defines added for all the features of
your local system that the configure script detected.  Since it's a
Linux box, that will include HAVE_GMTIME_R as well as HAVE_STDINT_H.

>
>  We modified the sqlite3.c file by hand as Brad suggested and it compiles and
>  runs fine, but I agree with Brad that it seems odd to need to edit the file
>  manually.
>

If you want to create a generic amalgamation (without pre-defined
features like HAVE_STDINT_H), at least currently, you shouldn't use
the configure script to generate your makefile, since the point of
using autoconf is to detect resources (header files, etc.) that aren't
portable.  You can instead start with the pristine source, then:

cp Makefile.linux-gcc Makefile
make sqlite3.c

bypassing the configure script altogether.  That should do what you want.

-- 
Matthew L. Creech


Re: [sqlite] Cannot get amalgamation built from CVS to compile

2008-05-06 Thread Matthew L. Creech
On Tue, May 6, 2008 at 11:22 AM, Brad House
<[EMAIL PROTECTED]> wrote:
> We ran into the same problem here.  It seems as though maybe the
>  amalgamation is hand-edited for distribution to remove the contents
>  of the config.h to be system agnostic.  When we built ours from CVS,
>  we just did the same hand-edit and packaged it and it compiled fine on the
>  dozen or so OS's we distribute binaries for (Windows (32 & 64), MacOSX,
>  Linux, FreeBSD, Solaris, SCO, AIX, ...).
>
>  I'd actually like to know the consequences of this though, especially
>  in relation to the reentrant functions (HAVE_GMTIME_R, HAVE_LOCALTIME_R),
>  also I'd be interested to know what it does without UINT64_T or UINTPTR_T...
>

By default things like HAVE_GMTIME_R aren't defined, so you'd have to
add those to your CPPFLAGS or something if you wanted to build a
generic amalgamation with those features included.  The datatypes that
aren't defined will use less accurate types that are "good enough", so
that e.g. UINT32_T might be "unsigned long" rather than "uint32_t",
which might be 64 bits.  This is under discussion right now: whether
we even need the specifically-sized types at all.  If not, the
inclusion of <stdint.h> and the definition of those types may disappear
in the interest of portability.

-- 
Matthew L. Creech


Re: [sqlite] Cannot get amalgamation built from CVS to compile

2008-05-06 Thread Matthew L. Creech
On Tue, May 6, 2008 at 10:49 AM, Samuel Neff <[EMAIL PROTECTED]> wrote:
>
>  Is this related to a change in the CVS source or is there something we're
>  doing wrong in building the amalgamation?
>
>  We're building the amalgmation on Fedora Core release 4 (Stentz),
>  2.6.17-1.2142_FC4smp #1 SMP i686 i686 i386 GNU/Linux
>
>  We're compiling sqlite in Microsoft Visual Studio 2008 as part of
>  System.Data.SQLite (.NET) which uses sqlite3.c and compiles fine with
>  sqlite3.c from the 3.5.8 distribution on the sqlite.org website.
>

The configure script is picking up standard headers like <stdint.h> on
Linux which aren't supported in Visual Studio.

If you want to build the amalgamation on Linux without detecting
Linux-supported headers, you don't need to use autoconf - try:

1. Unpack the source tarball
2. cp Makefile.linux-gcc Makefile
3. make sqlite3.c

-- 
Matthew L. Creech


Re: [sqlite] Proper use of sqlite-amalgamation.

2008-04-04 Thread Matthew L. Creech
On Thu, Apr 3, 2008 at 7:11 PM, Amit <[EMAIL PROTECTED]> wrote:
>
>  Ok that is good to know. I will play around with the source
>  distribution and try to figure out how to get it to work with python
>  2.5. According to the python 2.5 documentation, to build Python with
>  sqlite3, I need the libraries plus the header files. Installing the
>  source distribution installs the header files as well? Or do I need to
>  copy them to some standard location like /usr/local/include?
>

Correct - from the source tarball, doing configure/make/make install
will build the amalgamation into a library and install it with the
headers (there are only 2).  No idea about python integration, though.

-- 
Matthew L. Creech


Re: [sqlite] 3.5.7 & TCL: "undefined symbol: sqlite3StrICmp"

2008-04-04 Thread Matthew L. Creech
On Thu, Apr 3, 2008 at 10:46 PM, Zbigniew Baniewski <[EMAIL PROTECTED]> wrote:
> I'm sorry to confirm the problem described at http://tinyurl.com/29wc8x
>
>  #v+
>   $ tclsh8.5
>   % package require sqlite3
>   couldn't load file "/usr/lib/sqlite3/libtclsqlite3.so.0":
>   /usr/lib/sqlite3/libtclsqlite3.so.0: undefined symbol: sqlite3StrICmp
>  #v-
>
>  Does there exist any cure?

Nothing simple, unfortunately.  It looks like that function is built
with static linkage as part of the amalgamation, so it's inaccessible
to modules outside of libsqlite3.so.  We need to either rename it so
that it's part of the library's exported API, or do something
different in tclsqlite.c.

It's the only internal function this is a problem with, by the way -
you can check "nm -D libtclsqlite3.so" for all 'U' (undefined)
symbols.

-- 
Matthew L. Creech


Re: [sqlite] 3.5.7 Compile Fails On sqlite3.c

2008-04-04 Thread Matthew L. Creech
On Thu, Apr 3, 2008 at 7:05 PM, Robert L Cochran <[EMAIL PROTECTED]> wrote:
> Here is what I did:
>
>  tar -xvzf sqlite-3.5.7.tar.gz
>  cd sqlite-3.5.7
>  mkdir bld
>  cd !$
>  ../configure --prefix=/usr/local/sqlite-3.5.7 --disable-tcl
>  --enable-threadsafe
>  make
>

Yeah, this was reported & fixed in CVS shortly after the 3.5.7 release:

http://www.sqlite.org/cvstrac/chngview?cn=4890

-- 
Matthew L. Creech


Re: [sqlite] Proper use of sqlite-amalgamation.

2008-04-03 Thread Matthew L. Creech
On Thu, Apr 3, 2008 at 5:39 PM, Amit <[EMAIL PROTECTED]> wrote:
>
>   1. Is there any documentation that I may have missed that addresses
>  the above issue? If not, there probably should be a wiki page
>  regarding this. I could create a wiki page with my notes regarding
>  this.
>

There's:

http://www.sqlite.org/cvstrac/wiki?p=TheAmalgamation

But it doesn't address specifics related to the build like this.

>   2. Compiling above without the "-lpthread -ldl" results in errors. I
>  simply looked at a mailing list post and stumbled upon the above
>  commands. Is this the right way to do it?

Yes.  FYI, as of 3.5.7 the autoconf-based build in the full
distribution uses the amalgamation by default when producing the
shared library and executables, and the non-autoconf build (if you
start by using Makefile.linux-gcc as a template) has targets to do the
same.  It links the library itself with "-lpthread -ldl", which is why
you don't have to add those lines if you're just linking a program
against the shared library - gcc is instructed automatically by
libsqlite3.so to link them in.

-- 
Matthew L. Creech


Re: [sqlite] 3.5.7 compile failure, with SQLITE_OMIT_VIEW

2008-03-19 Thread Matthew L. Creech
On Wed, Mar 19, 2008 at 12:49 PM, Ken <[EMAIL PROTECTED]> wrote:
>
>  Also Attempting to configure/compile in a different directory than the
>  makefile.in resulted in a cp failure while creating the amalgamated source.
>

This should be fixed in the latest CVS.  I also cleaned up the header
generation, since it was using the default [empty] config.h rather
than the one output by the configure script.  Let me know if you find
any other issues.  Thanks!

-- 
Matthew L. Creech


Re: [sqlite] can't find libsqlite3.so.0 on RHEL

2008-03-13 Thread Matthew L. Creech
On Thu, Mar 13, 2008 at 1:09 PM, P Kishor <[EMAIL PROTECTED]> wrote:
> I am not at all conversant with RH Linux (RHEL-es3). I have just
>  compiled sqlite3.5.6 from amalgamation, and I get the error that
>  libsqlite3.so.0 can't be found. I did the following
>
>  ./configure
>  make && make install
>
>  sqlite3 is place in /usr/local/bin
>  and libsqlite3.so.0 is placed in /usr/local/lib
>

The -rpath option was being used when linking libsqlite3.la, but not
when linking the sqlite3 binary.  Can you try the latest version from
CVS and see if that works?  (Or apply the diff at
http://www.sqlite.org/cvstrac/chngview?cn=4857 )

Thanks!

-- 
Matthew L. Creech


Re: [sqlite] SQL Blobs - what am i doing wrong here?(code inside)

2008-03-12 Thread Matthew L. Creech
On Wed, Mar 12, 2008 at 9:37 AM, C S <[EMAIL PROTECTED]> wrote:
>
>  myString = "insert into Images(imageID, imageData)
>  values(?, ?)";
>
>
>  status = sqlite3_prepare_v2(db, myString.c_str(), -1,
>  &statement, NULL);
>
>  void *blob = reinterpret_cast<void *>(imageArray);
>
>
>  status = sqlite3_bind_blob(statement, 2, blob, 10 *
>  sizeof(unsigned short), SQLITE_STATIC);
>
>  status = sqlite3_finalize(statement);
>
>  return sqlite3_last_insert_rowid(db);
>
>  
>
>  however when i do:
>
>  select * from Images;
>
>  i get no results returned to me. i just get returned
>  to the prompt. is there anything that i missed? thanks
>  again!!!
>

You're missing a bind for the first column (the imageID), and more
importantly, a call to sqlite3_step() - you do need one, otherwise the
insert never executes.  :)

-- 
Matthew L. Creech