Re: [sqlite] Pre-compiled products

2009-03-22 Thread Jay A. Kreibich
On Fri, Mar 20, 2009 at 11:07:01AM -0500, William Kyngesburye scratched on the 
wall:
> Indeed, configuring sqlite to compile as a dylib on OSX is easy (it's  
> the default).

  Duh.  I really hate the way autoconf hides things in ".libs".  I was
  doing a huge series of test builds and wasn't actually installing
  stuff, so I missed the fact that it was, indeed, building the dynamic
  library.

> I think what Richard may mean is packaging an installer so that the  
> dylib is installed where it needs to be.  As you point out Jay, dylibs  
> have additional metadata, part of that is the full path to the dylib,  
> and this is also in the sqlite program.  So the dylib needs to be  
> installed where it was compiled to be - /usr/local/lib.  It is possible 
> to set DYLD_LIBRARY_PATH, like LD_LIBRARY_PATH on Linux, but Mac users 
> don't want to be bothered by this.

  Not exactly.  A dylib doesn't know where it is, but it does record a
  full path to each of the things it links to (e.g. libgcc, libSystem).  You
  can dig through the MachO header with otool(1).

  The big sticky point with dylib files vs ELF .so files is that Apple
  provides no simple tool to re-write the paths inside the dylib file
  (IIRC, libtool under Linux will do this).  So, if you have a package
  that builds and links a series of inter-related dylib files, you can
  get into a messy situation if you try to move the dylib files from
  their original install location.  In the case of SQLite, it doesn't
  matter, as the only libs SQLite is linking against are the system
  libs.

  I also disagree with your "Mac users don't want to be bothered by
  this" idea.  SQLite doesn't provide an XCode project file, after all.
  You're going to be digging around on the command line to build,
  install, and use these kinds of tools.  By the time you've reached
  this level, OS X is just another UNIX system.  Mucking around with
  libtool or some such low-level tool to re-path dynamic libraries
  under Linux is not exactly something your run-of-the-mill admin is
  very familiar with.  A software engineer, perhaps, but not most users
  or most admins.

> Using a static library means the sqlite program can be installed  
> whereever the user likes, as long as they know how to run it (ie it's in 
> their PATH).  No admin permissions needed.

  Outside of the minor detail that Mach-O has no concept of a "static
  library", I'm not really interested in having a static vs dynamic
  argument.  I understand the advantages and disadvantages of both,
  especially in light of the push to use amalgamation distributions.

  But the current set of Mac OS X distributions essentially forces you
  to go to source code since there are no libraries available.  If all
  three platforms are supported, I'd kind of expect the same four
  downloads for all three platforms.

> With a dylib, the program can still be anywhere, but the dylib needs to 
> be where the program expects it, normally in /usr/local/lib, which needs 
> admin permissions.  Either a script to copy it there could work (again, 
> there's the issue of us being Mac users), or a Mac installer package is 
> needed.

  For stuff like this, I don't think anyone is expecting an installer.
  If you need one, you should be using some kind of package management
  system for installing SQLite.

  After all, neither the Windows SQLite3.DLL nor the Linux libsqlite3.so
  download distributions have any kind of install scripts-- the
  downloads are just the raw library files.  No reason for a Mac OS X 
  .dylib distribution to be any different than the Linux .so distribution.
  People either know what the files are and what to do with them, or
  they're better off being confused.



  BTW, part of the reason I'm asking these questions is that I'm trying
  to write some docs having to do with the build/install process of
  SQLite on different platforms.  I've got a lot of UNIX experience, so
  the Linux/Mac stuff is pretty easy, but I'm a bit of a newbie on
  Windows.  If there are any folks out there that have some cross
  platform experience (i.e. speak UNIX) but spend a fair amount of time
  under Windows, I'd love to hear from you.

-j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to get the previous row before rows matching a where clause...

2009-03-22 Thread Jim Wilcoxson
What about:

select blah from blah where rowid < windowstartrowid order by
rowid desc limit 1

to get the row before, and:

select blah from blah where rowid > windowlastrowid order by
rowid limit 1

to get the row after.
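These can be sanity-checked quickly; a sketch in Python's sqlite3, using a toy event table in the spirit of the thread (names and data are made up, and startTime doubles as the rowid via INTEGER PRIMARY KEY):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE event (startTime INTEGER PRIMARY KEY, title TEXT)")
con.executemany("INSERT INTO event VALUES (?, ?)",
                [(10, "a"), (20, "b"), (30, "c"), (40, "d"), (50, "e")])

window_start, window_end = 15, 45

# Rows inside the window (startTime is the rowid here).
inside = [t for (t,) in con.execute(
    "SELECT title FROM event WHERE startTime > ? AND startTime < ?"
    " ORDER BY startTime", (window_start, window_end))]

# The row just before the window...
before = con.execute(
    "SELECT title FROM event WHERE startTime < ?"
    " ORDER BY startTime DESC LIMIT 1", (window_start,)).fetchone()

# ...and the row just after it.
after = con.execute(
    "SELECT title FROM event WHERE startTime > ?"
    " ORDER BY startTime LIMIT 1", (window_end,)).fetchone()

print(inside, before[0], after[0])   # → ['b', 'c', 'd'] a e
```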

Jim

On 3/22/09, sorka  wrote:
>
> I have a table of events that have a title, start time, and end time.
>
> The start time is guaranteed unique, so I've made it my primary integer key.
>
> I need all events that overlap a window of time between, say, windowstart
> and windowend.  Currently, the statement
>
> SELECT title FROM event WHERE startTime < windowEnd AND endTime >
> windowStart.
>
> I've indexed the end time and the query is pretty fast, but it could be a
> lot faster if I only had to use the integer primary key.
>
> If instead I do
>
> SELECT title from event WHERE startTime > windowStart AND startTime <
> windowEnd
>
> this will get me almost the same thing, except that it will be missing the
> first event that overlaps windowStart, because its startTime is at or
> before windowStart.
>
> In this case, if I can get exactly the previous row added to what is
> returned in the results above, I'll have exactly what I need.
>
> So the question is, when a WHERE clause returns a set of rows, is there a
> way to also get the row at the ROWID that comes just before the row that is
> returned from above with the lowest ROWID?
>
> Another way of putting it: if I take the lowest ROWID that is returned in my
> second example and get the next lowest row, the one that is less than the
> lowest ROWID I got but closest to it, i.e., the one right before it, then it
> would be complete.
>
>
> --
> View this message in context:
> http://www.nabble.com/How-to-get-the-previous-row-before-rows-matching-a-where-clause...-tp22650799p22650799.html
> Sent from the SQLite mailing list archive at Nabble.com.
>


-- 
Software first.  Software lasts!


Re: [sqlite] How to get the previous row before rows matching a where clause...

2009-03-22 Thread Dennis Cote
sorka wrote:
> I have a table of events that have a title, start time, and end time.
>
> The start time is guaranteed unique, so I've made it my primary integer key.
>
> I need all events that overlap a window of time between, say, windowstart
> and windowend.  Currently, the statement 
>
> SELECT title FROM event WHERE startTime < windowEnd AND endTime >
> windowStart. 
>
> I've indexed the end time and the query is pretty fast, but it could be a
> lot faster if I only had to use the integer primary key.
>
> If instead I do
>
> SELECT title from event WHERE startTime > windowStart AND startTime <
> windowEnd
>
> this will get me almost the same thing, except that it will be missing the
> first event that overlaps windowStart, because its startTime is at or
> before windowStart.
>
> In this case, if I can get exactly the previous row added to what is
> returned in the results above, I'll have exactly what I need.
>
> So the question is, when a WHERE clause returns a set of rows, is there a
> way to also get the row at the ROWID that comes just before the row that is
> returned from above with the lowest ROWID?
>
> Another way of putting it: if I take the lowest ROWID that is returned in my
> second example and get the next lowest row, the one that is less than the
> lowest ROWID I got but closest to it, i.e., the one right before it, then it
> would be complete.
>
>
>   
This should give you the result you want, all your current results and 
the row with the immediately prior starttime.

select title from event
where starttime > windowstart
and starttime < windowend
union
select title from event
where starttime =
(select max(starttime) from event
where starttime <= windowstart
);
   
This is an alternate version that should perform better in case the 
optimizer doesn't optimize the max() function.

select title from event
where starttime > windowstart
and starttime < windowend
union
select title from event
where starttime =
(select starttime from event
where starttime <= windowstart
order by starttime desc
limit 1);
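Both forms can be exercised against a toy table to confirm they return the same rows; a sketch in Python's sqlite3 with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE event (starttime INTEGER PRIMARY KEY, title TEXT)")
con.executemany("INSERT INTO event VALUES (?, ?)",
                [(10, "a"), (20, "b"), (30, "c"), (40, "d")])

# First variant: union with max().
q1 = """SELECT title FROM event
        WHERE starttime > :ws AND starttime < :we
        UNION
        SELECT title FROM event
        WHERE starttime = (SELECT max(starttime) FROM event
                           WHERE starttime <= :ws)"""

# Second variant: union with ORDER BY ... DESC LIMIT 1.
q2 = """SELECT title FROM event
        WHERE starttime > :ws AND starttime < :we
        UNION
        SELECT title FROM event
        WHERE starttime = (SELECT starttime FROM event
                           WHERE starttime <= :ws
                           ORDER BY starttime DESC LIMIT 1)"""

args = {"ws": 25, "we": 45}
r1 = sorted(t for (t,) in con.execute(q1, args))
r2 = sorted(t for (t,) in con.execute(q2, args))
print(r1, r1 == r2)   # → ['b', 'c', 'd'] True
```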

HTH
Dennis Cote



Re: [sqlite] How to get the previous row before rows matching a where clause...

2009-03-22 Thread P Kishor
On Sun, Mar 22, 2009 at 4:23 PM, sorka  wrote:
>
> I have a table of events that have a title, start time, and end time.
>
> The start time is guaranteed unique, so I've made it my primary integer key.
>
> I need all events that overlap a window of time between, say, windowstart
> and windowend.  Currently, the statement
>
> SELECT title FROM event WHERE startTime < windowEnd AND endTime >
> windowStart.
>
> I've indexed the end time and the query is pretty fast, but it could be a
> lot faster if I only had to use the integer primary key.
>
> If instead I do
>
> SELECT title from event WHERE startTime > windowStart AND startTime <
> windowEnd
>
> this will get me almost the same thing, except that it will be missing the
> first event that overlaps windowStart, because its startTime is at or
> before windowStart.
>
> In this case, if I can get exactly the previous row added to what is
> returned in the results above, I'll have exactly what I need.
>
> So the question is, when a WHERE clause returns a set of rows, is there a
> way to also get the row at the ROWID that comes just before the row that is
> returned from above with the lowest ROWID?
>
> Another way of putting it: if I take the lowest ROWID that is returned in my
> second example and get the next lowest row, the one that is less than the
> lowest ROWID I got but closest to it, i.e., the one right before it, then it
> would be complete.
>

if you know the query that produced the "lowest rowid that is
returned", the next lowest rowid (the one just before it) would be

SELECT rowid FROM table WHERE rowid =
    (SELECT max(rowid) FROM table WHERE rowid <
        (SELECT min(rowid) FROM table WHERE query));
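Fetching the row whose rowid immediately precedes the lowest matching rowid can be sketched in Python's sqlite3 (the table, column, and filter here are made up, and the table uses SQLite's implicit rowid):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v INTEGER)")
con.executemany("INSERT INTO t (v) VALUES (?)", [(10,), (20,), (30,), (40,)])

# "v > 25" plays the role of the query; rowids 3 and 4 match,
# so the row just before the lowest match is rowid 2.
prev = con.execute("""
    SELECT v FROM t WHERE rowid =
        (SELECT max(rowid) FROM t WHERE rowid <
            (SELECT min(rowid) FROM t WHERE v > 25))
""").fetchone()
print(prev)   # → (20,)
```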



-- 
Puneet Kishor http://www.punkish.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Carbon Model http://carbonmodel.org/
Open Source Geospatial Foundation http://www.osgeo.org/
Sent from: Madison WI United States.


[sqlite] SQLite3 & Code::Blocks

2009-03-22 Thread Gregory A Failing

Hello everyone.

While not new to Code::Blocks and SQLite3, I have never until today tried
to use them together.  I am having a problem getting my proof-of-concept
code to compile.  Any assistance would be appreciated.

First, my environment:
- WinXP
- Code::Blocks 8.02 w/ MinGW compiler
- wxWidgets 2.8.9
- SQLite3 3.6.11

What I have done:
- Set up a wxWidget project
- Designed a frame
- Added a few menu bar items
- added a procedure to a menu pick that will test for a SQLite3 db
- Compile time '#include' references: 'sqlite3.h' & 'sqlite3ext.h'
- Execution library references include 'sqlite3.dll'

In the module that is accessed by the menu pick event, I have the
following:
...
#include "wx_pch.h"
#include "CallSQLiteMain.h"
#include <sqlite3.h>

#include "sqlite3ext.h"
...

The event procedure I am using for testing is:

void CallSQLiteFrame::OnFileCallDLLSelected(wxCommandEvent& event)
{
  sqlite3 *db;
  char *zErrMsg = 0;
  int rc;

  rc = sqlite3_open("menuscrn.db", &db);
  if( rc ){
    zErrMsg = sqlite3_malloc(255);
    sprintf(zErrMsg, "Can't open database: %s; status=%d\n",
            sqlite3_errmsg(db), rc);
    SetStatusText(wxString::Format(_T("SQL error: %s\n"), zErrMsg), 1);
sqlite3_close(db);
sqlite3_free(zErrMsg);
return;
  }

  sqlite3_close(db);
  return;
}


Comments and questions ...

If I '#ifdef' out the code in the procedure, the program compiles cleanly,
which is expected.  This proves the C::B project is set up properly.

If I include the code, I get the following single error:

"error: 'sqlite3_api' was not declared in this scope"

This means that the header information is being located correctly and
processed, but there may be a namespace issue.  Or something else ...

This raises the question: I know SQLite3 is a 'c' safe app because I use it
for other projects under *nix.  But can it be used directly in a 'c++'
situation under Windoze?

Or do I need a 'wrapper' of some kind?

Or a compile time option to be set/reset?

Lastly, while doing some internet research I have seen some issues with
the 'DLL' being set at a fixed address.  I know those articles were a year
or two old, but is that still an issue?  Do I need to somehow manipulate
the 'sqlite3.dll' before I can use it under Windoze?

Again thanks for any assistance ...

GAFling
2009-03-22



[sqlite] How to get the previous row before rows matching a where clause...

2009-03-22 Thread sorka

I have a table of events that have a title, start time, and end time.

The start time is guaranteed unique, so I've made it my primary integer key.

I need all events that overlap a window of time between, say, windowstart
and windowend.  Currently, the statement 

SELECT title FROM event WHERE startTime < windowEnd AND endTime >
windowStart. 

I've indexed the end time and the query is pretty fast, but it could be a
lot faster if I only had to use the integer primary key.

If instead I do

SELECT title from event WHERE startTime > windowStart AND startTime <
windowEnd

this will get me almost the same thing, except that it will be missing the
first event that overlaps windowStart, because its startTime is at or
before windowStart.

In this case, if I can get exactly the previous row added to what is
returned in the results above, I'll have exactly what I need.

So the question is, when a WHERE clause returns a set of rows, is there a
way to also get the row at the ROWID that comes just before the row that is
returned from above with the lowest ROWID?

Another way of putting it: if I take the lowest ROWID that is returned in my
second example and get the next lowest row, the one that is less than the
lowest ROWID I got but closest to it, i.e., the one right before it, then it
would be complete.


-- 
View this message in context: 
http://www.nabble.com/How-to-get-the-previous-row-before-rows-matching-a-where-clause...-tp22650799p22650799.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Schema syntax error

2009-03-22 Thread D. Richard Hipp

On Mar 18, 2009, at 9:39 PM, Tristan Seligmann wrote:

> Divmod Axiom[1] is a Python ORM built on SQLite; one of the book
> keeping tables it creates in the database has a column named
> "indexed", which became a reserved word around SQLite 3.6.4 (?). The
> "obvious" fix for this problem is to simply quote the column name
> using "", but the problem is that it is now impossible to load older
> databases which didn't have the column created with the name quoted:
>
> Error: malformed database schema (axiom_attributes) - near "indexed":
> syntax error
>
> What sort of migration path exists for converting / fixing these old
> databases? Ideally there would be some mechanism that does not require
> reinstalling an older version of SQLite.

See http://www.sqlite.org/cvstrac/chngview?cn=6370

Beginning with the next release, the parser has been modified to allow  
INDEXED to be used as the name of a table or index or column.


>
> -- 
> mithrandi, i Ainil en-Balandor, a faer Ambar

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Replacement for sqlite_exec_printf

2009-03-22 Thread Igor Tandetnik
"BOB_GOREN"  wrote
in message
news:ba7de4bcd2b86346ad08e3483a9a721e10a9a...@inetsrv.hv.amtote.com
> I am converting from sqlite 2.8.13 to 3.6.11 and am wondering what is
> the replacement for sqlite_exec_printf.

The most direct replacement is sqlite3_mprintf followed by sqlite3_exec. 
But consider switching to statement API instead - see 
sqlite3_prepare[_v2] et al.
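For illustration, the two styles side by side, sketched in Python's sqlite3 (where `?` binding plays the role of sqlite3_prepare_v2 plus sqlite3_bind_*; the table is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")

name = "O'Brien"

# mprintf/exec style: quoting is the caller's problem.  (sqlite3_mprintf's
# %q doubles single quotes; here we do the same by hand, which is exactly
# the kind of work the statement API makes unnecessary.)
con.execute("INSERT INTO users (name) VALUES ('%s')"
            % name.replace("'", "''"))

# Statement-API style: the value is bound, never spliced into the SQL text.
con.execute("INSERT INTO users (name) VALUES (?)", (name,))

rows = [r[0] for r in con.execute("SELECT name FROM users")]
print(rows)   # → ["O'Brien", "O'Brien"]
```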

Igor Tandetnik





[sqlite] Table problem

2009-03-22 Thread Mayura S.
Hi,
I've chosen SQLite 3.6.11 for a small purpose in my organisation.
The software runs on a proprietary OS.
SQLite is used by a single process with no other threads using it,
hence SQLITE_THREADSAFE = 0.
I have written the memory and OS functions.

My table schema is very simple, as below.
static char NVDataTable[] =
  "CREATE TABLE  NVDataTable( "
  "iDataId   INTEGER PRIMARY KEY, "
  "iVoidData INTEGER, "
  "iDefaultData  INTEGER,"
  "cStringId VARCHAR(32) DEFAULT '' NOT NULL,"
  "iSize INTEGER, "
  "iReadOnly INTEGER, "
  "iCiphered INTEGER )";

I was able to successfully Open database - using sqlite3_open().
I was able to successfully create table  - using sqlite3_exec().
Of course, I struggled a LOT to achieve the above two ... (even though I had
worked on 3.2.2 long ago).

I'm NOT able to INSERT a record into the database using - sqlite3_mprintf() and 
sqlite3_exec()
char *z = sqlite3_mprintf("INSERT INTO NVDataTable VALUES(NULL, '%d', '%d', "
                          "\"%s\", '%d', '%d', '%d')",
                          iVoidData, iDefaultData, stTmp, 1u, iReadOnly,
                          iCiphered);
I get a return code of 1, and the error message
[Error: no such table: NVDataTable]

I'm not getting any clue.  How do I verify the table after I've created it?
I just need to achieve INSERT and SELECT.
The SELECT is just a simple fetch of record based on id, there are no complex 
or conditional queries.
Please help me.
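For reference, the intended flow can be checked end to end, including a query against sqlite_master to verify the table exists. A sketch in Python's sqlite3 for brevity, using the schema above (a common cause of "no such table" after a successful CREATE TABLE is that the INSERT runs against a different connection or database file, e.g. via a relative path):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # one connection for all steps
con.execute("""CREATE TABLE NVDataTable (
    iDataId      INTEGER PRIMARY KEY,
    iVoidData    INTEGER,
    iDefaultData INTEGER,
    cStringId    VARCHAR(32) DEFAULT '' NOT NULL,
    iSize        INTEGER,
    iReadOnly    INTEGER,
    iCiphered    INTEGER)""")

# Verify the table really exists in the database that was opened.
tables = [n for (n,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# The simple INSERT and keyed SELECT the post asks for.
con.execute("INSERT INTO NVDataTable VALUES (NULL, ?, ?, ?, ?, ?, ?)",
            (0, 1, "str-1", 1, 0, 0))
row = con.execute(
    "SELECT cStringId FROM NVDataTable WHERE iDataId = 1").fetchone()
print(tables, row)   # → ['NVDataTable'] ('str-1',)
```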

Thanks in advance.
Mayura S.



  


Re: [sqlite] Use sqlite3_update_hook() to watch for changes

2009-03-22 Thread Ian Frosst
One method you can use to see changes is to hook up triggers on
update/insert.  Since you can add functions to be called from SQL, you just
have to add whatever IPC system you desire as an SQLite function (see
sqlite3_create_function).  This function can then be invoked through your
triggers, and away you go!
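A sketch of that pattern in Python's sqlite3 (create_function standing in for sqlite3_create_function; a plain list stands in for the IPC mechanism, which in a real reader/writer pair would be a pipe, socket, or similar):

```python
import sqlite3

changes = []  # stand-in for a real IPC mechanism

con = sqlite3.connect(":memory:")
# Register a custom SQL function that triggers can call.
con.create_function("notify_change", 1,
                    lambda rowid: changes.append(rowid))

con.execute("CREATE TABLE t (v TEXT)")
con.execute("""CREATE TRIGGER t_ins AFTER INSERT ON t
               BEGIN SELECT notify_change(new.rowid); END""")

con.execute("INSERT INTO t (v) VALUES ('hello')")
con.execute("INSERT INTO t (v) VALUES ('world')")
print(changes)   # → [1, 2]
```

Note this only fires inside the process doing the writes; the function body is where you would push the notification out to the other process.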

Frosstoise.

On Fri, Mar 20, 2009 at 10:22 PM, Ricky Huang  wrote:

> Here's my scenario, I have two applications, reader and a writer, and
> a shared database.  Writer will be writing changes to the database and
> reader will be reading.  Originally my idea was for the reader to use
> sqlite3_update_hook() to watch for database changes.  But after I got
> the code written, the hooked function was never called.  A little
> inter-tubes research revealed this: http://tinyurl.com/dc279r.  It
> basically says sqlite3_update_hook() is not designed for that.
>
> My question is, does SQLite offer ways of hooking up a second
> application to changes in a shared DB without polling?
>


[sqlite] Replacement for sqlite_exec_printf

2009-03-22 Thread BOB_GOREN
I am converting from sqlite 2.8.13 to 3.6.11 and am wondering what is
the replacement for sqlite_exec_printf.  Also, are there any general
instructions for converting an application from 2.8 to 3.6.11?

Thanks, Bob Goren
 
 


Re: [sqlite] speeding up row by row lookup in a large db

2009-03-22 Thread P Kishor
On Sun, Mar 22, 2009 at 6:14 AM, Stefan Evert  wrote:
..
>
> However, Perl does introduce a lot of overhead, so it may not be the best
> choice for high-performance applications.  In my simplistic benchmarks, Perl
> did not achieve more than approx. 400,000 subroutine calls per second
> (MacBook Pro, 2.5 GHz).  If you're reading your 7,300 result rows in a loop
> with fetchrow_* method calls (or if similar subroutines are called
> internally when using fetchall_* or selectall_*), then you're already
> limited to less than 60 queries/s by the function call overhead.
>
> If you don't need to access the rows separately, you could try to pack the
> complete data for each met_id with Storable.pm and store it in a single
> database blob.  I haven't tried that myself, though, so I don't know whether
> that would be any faster.

Aha! So, if you read my original post that started this thread, that
is exactly the approach I have been thinking of... see my perlmonks
post on the same topic -- http://www.perlmonks.org/?node_id=752247

The problem I am facing is the size of the BLOB as well as the speed
of the SELECT query + INSERT BLOB query.

Each BLOB, compressed (with Compress::Zlib -- I am using
Data::Serializer to serialize and compress using Storable) is approx.
430 KB. That translates into a bit over 400 GB for the entire db,
which ain't gonna fit on my laptop. So, can't test it here. Ok, so I
have to chop up the db into a smaller problem first so I can do my
portable testing. The production machines won't have the space
problem, but still, moving a 400 GB file around, which has ballooned
from an original 430 MB file (my current SQLite db file is 430 MB with
all 1 million rows) is going to be less than convenient.

The second problem is the conversion from the current normalized db
structure to a de-normalized BLOB. With each SELECT query taking about
33 ms, just the SELECTs for all 1 million rows will take about 9
hours. Add to that the time to INSERT the BLOBs in the new table...
and, if the INSERT rate (or the SELECT) rate is not linear as the
SQLite file grows, I could be in for a long time doing the conversion.
As I noted in my original post, after an overnight run of conversion,
only a fraction of the rows had been converted to BLOBs before I
killed the process.
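The back-of-the-envelope figures above hold up:

```python
rows = 1_000_000          # rows to convert
select_ms = 33            # per-row SELECT time observed above
blob_kb = 430             # compressed BLOB size per row

hours = rows * select_ms / 1000 / 3600      # total SELECT time
total_gb = rows * blob_kb / 1024 / 1024     # total BLOB storage

print(round(hours, 1), "hours,", round(total_gb, 1), "GB")
# → 9.2 hours, 410.1 GB
```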

Of course, I could test with a 100 or a 1000 rows and do my
benchmarking, but those numbers are going to be misleading if the
SELECT and/or INSERTs are non-linear with file size.


>
>>> Another thing to keep in mind is that the SQLite version included in the
>>> DBD::SQLite distribution is fairly old (3.4.0 on my Mac -- I doubt
>>> there's a
>>> more recent version of DBD::SQLite around), and AFAIK there have been
>>> some
>>> speed improvements in SQLite recently.
>>
>> I am using DBD::SQLite::Amalgamation with SQLite 3.6.11. I am not sure
>> there is anything wrong with DBD::SQLite 1.1.4 at all (other than its
>> use of the older SQLite code, which is easily corrected). One might
>> get better bang by ensuring that the latest version of DBI is
>> installed, which Tim Bunce and company are probably always
>> fine-tuning.
>
> DBD::SQLite is fine and works very well for me, but hasn't seen any updates
> to a newer SQLite version in quite a long time.  My understanding is that
> it's not entirely trivial to replace the SQLite source code (without
> understanding all the XS interfacing code), and that it would be better and
> easier to use the amalgamation anyway.
>
> I'd love to switch to DBD::SQLite::Amalgamation, but I need to register my
> own callback functions, and this package fails the callback tests with a bus
> error.
>


Ok... first, it *is* trivial to replace the SQLite source, and if I
can do it, anyone can do it... certainly you who seems to know the
first C about programming (I don't).

Step 1. Download DBD::SQLite::Amalgamation
Step 2. Replace *.h in #1 with all the headers from new version.
Replace sqlite-amalgamation.c with the new sqlite3.c
Step 3. perl Makefile.PL, make && make install

Second, I am using a few callbacks, and I have no problem at all. At
one point, I do remember getting bus errors, but that was many, many
versions ago (with DBD::SQLite::Amalgamation's included version of
sqlite3). I replaced the db code in the package as shown in the steps
1-3 above, and the bus error went away.





-- 
Puneet Kishor http://www.punkish.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Carbon Model http://carbonmodel.org/
Open Source Geospatial Foundation http://www.osgeo.org/
Sent from: Madison WI United States.


Re: [sqlite] speeding up row by row lookup in a large db

2009-03-22 Thread Stefan Evert
On 22 Mar 2009, at 00:27, P Kishor wrote:

> Stefan, you were right about "Depends on how much data" part, but it
> applied to the weather data, not the lookup tables for lc (or
> landcover) or dist (or disturbance).
>
> Also, "Even for Perl/DBI" makes it seem there is something wrong with
> Perl/DBI... no, no... Perl/DBI is an industry-tested, extremely
> capable interface. I wouldn't trade it for anything. The problem was
> with my problem, with my data size, with my approach. I have to
> correct that. DBD::SQLite/DBD::SQLite::Amalgamation/Perl DBI are the
> greatest things since SQLite.

I didn't mean to criticise DBI specifically.  I'm doing most of my  
database work through DBI (with either SQLite or MySQL as a backend)  
and I'm very happy with its ease of use and stability.

However, Perl does introduce a lot of overhead, so it may not be the  
best choice for high-performance applications.  In my simplistic  
benchmarks, Perl did not achieve more than approx. 400,000 subroutine  
calls per second (MacBook Pro, 2.5 GHz).  If you're reading your 7,300  
result rows in a loop with fetchrow_* method calls (or if similar  
subroutines are called internally when using fetchall_* or  
selectall_*), then you're already limited to less than 60 queries/s by  
the function call overhead.

If you don't need to access the rows separately, you could try to pack  
the complete data for each met_id with Storable.pm and store it in a  
single database blob.  I haven't tried that myself, though, so I don't  
know whether that would be any faster.
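An analogous sketch in Python, with pickle and zlib standing in for Storable and Compress::Zlib (table and column names are invented):

```python
import pickle
import sqlite3
import zlib

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE met_blob (met_id INTEGER PRIMARY KEY, data BLOB)")

# Fake the ~7,300 result rows for one met_id.
rows = [(i, i * 0.5, "obs") for i in range(7300)]

# Pack all rows for that met_id into a single compressed blob...
blob = zlib.compress(pickle.dumps(rows))
con.execute("INSERT INTO met_blob VALUES (?, ?)", (1, blob))

# ...and get them back with one fetch instead of 7,300 fetchrow calls.
(raw,) = con.execute(
    "SELECT data FROM met_blob WHERE met_id = 1").fetchone()
unpacked = pickle.loads(zlib.decompress(raw))
print(len(unpacked), unpacked[0])   # → 7300 (0, 0.0, 'obs')
```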

>> Another thing to keep in mind is that the SQLite version included  
>> in the
>> DBD::SQLite distribution is fairly old (3.4.0 on my Mac -- I doubt  
>> there's a
>> more recent version of DBD::SQLite around), and AFAIK there have  
>> been some
>> speed improvements in SQLite recently.
>
> I am using DBD::SQLite::Amalgamation with SQLite 3.6.11. I am not sure
> there is anything wrong with DBD::SQLite 1.1.4 at all (other than its
> use of the older SQLite code, which is easily corrected). One might
> get better bang by ensuring that the latest version of DBI is
> installed, which Tim Bunce and company are probably always
> fine-tuning.

DBD::SQLite is fine and works very well for me, but hasn't seen any  
updates to a newer SQLite version in quite a long time.  My  
understanding is that it's not entirely trivial to replace the SQLite  
source code (without understanding all the XS interfacing code), and  
that it would be better and easier to use the amalgamation anyway.

I'd love to switch to DBD::SQLite::Amalgamation, but I need to  
register my own callback functions, and this package fails the  
callback tests with a bus error.

Cheers,
Stefan