Re: [sqlite] sqlite - AIX memory fault coredump when using .output command

2019-10-29 Thread Warren Young
On Oct 29, 2019, at 2:56 PM, Dawson, Jeff G  wrote:
> 
> SQLite version 3.7.14.1 2012-10-04 19:37:12

I infer that you’re migrating a legacy system.  There are two good alternatives 
to your current method that should avoid the symptom entirely:

1. Build a current version of SQLite for the old AIX system and try the dump 
with that instead.  It should read that old file just fine.

2. Copy the SQLite DB to the new system and work with it there.  If you’re 
moving away from both AIX and SQLite, you can do the dump on the modern system 
with a current version of SQLite, which should work much better.  If you’re 
sticking with SQLite on the new system, you don’t need the dump at all; just 
use the copied file on the new system.

If it happens that either path also crashes, you’re far more likely to get a 
fix for it than for this greatly outdated version of SQLite.

If you get another core dump with either alternative conversion path, please 
include the backtrace.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite v3.27.2 memory usage

2019-04-15 Thread Warren Young
On Apr 14, 2019, at 10:18 PM, David Ashman - Zone 7 Engineering, LLC wrote:
> 
> It appears that there is a leak somewhere.

It is certainly in your code.  My bet’s on a missing sqlite3_finalize() call, 
but there are many other possibilities.

> Does anyone know why this error occurs?

I suggest you cross-compile your application, or at least its SQLite-using 
core, to a real OS and use its memory tracking tools to debug your use of 
SQLite, then apply that to the embedded app.

I’m talking about things like Valgrind, the Google AddressSanitizer now built 
into GCC and Clang, etc.

> Do I have to periodically call the release memory functions

No, you have to pair each function that makes allocations (e.g. 
sqlite3_prepare()) with a call to free that memory (e.g. sqlite3_finalize()).

> or is that inherent to SQLite?

This is C.  There is no inherent memory cleanup.  It’s all on you, the 
programmer.


Re: [sqlite] SQLite in memory

2017-05-19 Thread Wout Mertens
Note that, as I understand it, if you use only a single connection for the
CherryPi server, all the threads on the server will be running the queries
sequentially. Try using a database connection per thread?

On Thu, May 18, 2017, 8:47 PM Gabriele Lanaro 
wrote:

> Thanks everyone for all the tips! This is all very useful.
>
> We are using SQLite’s FTS5 feature to search a large number of text files.
> There are 50M records in total but they are split across 1000 smaller
> databases of 50K records each. Each DB is 250MB in size.
>
> I am trying to test query performance and for that I am issuing queries for
> same term over and over on a random subset of the databases.
>
> Each query will execute on its own thread (I am using Python’s CherryPy
> server) and the connection to each DB will be kept alive (multiple requests
> will certainly hit the same connection). I am using PRAGMA
> cache_size=256MB, so it should cache the entire DB in memory.
>
> If I only use a single database then I would expect that the query could be
> served entirely from the cached copy of the DB. Since we have 100 of those, I
> could reasonably expect that there will be some reading from disk.
>
> But for a smaller number of DBs, say 10, I would expect the performance
> similar to in-memory.
>
> The schema is roughly constituted of two tables, one is a full text index,
> fts5_table, while the other is called datatable.
>
> The query in question is a full text query on the full text index joined
> with another table.
>
> SELECT fts5_table.column, datatable.column2
> FROM fts5_table, datatable
> WHERE fts5_table MATCH 'term' AND datatable.id = fts5_table.rowid
> ORDER BY rank ASC LIMIT 10;
>
> The total number of documents matching 'term' is about 3
> (given the ORDER BY clause, all of those documents will need
> to be ranked).
>
> As I increase the number of threads, performance peaks at 4
> threads, which is less than the number of available cores (beyond that,
> performance degrades). If I make my application use more threads (i.e. more
> requests get processed in parallel), performance degrades linearly.
>
> All I am getting is ~40 requests per seconds (when we have 100 users making
> queries in parallel). But I believe SQLite is capable of doing much more. I
> am running on a Red Hat Linux on an Intel machine with 8-cores, 16-threads
> and 64GB of system memory. Disks are SSD.
>
> Thanks,
>
> Gabriele
>
>


Re: [sqlite] SQLite in memory

2017-05-18 Thread Gabriele Lanaro
Thanks everyone for all the tips! This is all very useful.

We are using SQLite’s FTS5 feature to search a large number of text files.
There are 50M records in total but they are split across 1000 smaller
databases of 50K records each. Each DB is 250MB in size.

I am trying to test query performance and for that I am issuing queries for
same term over and over on a random subset of the databases.

Each query will execute on its own thread (I am using Python’s CherryPy
server) and the connection to each DB will be kept alive (multiple requests
will certainly hit the same connection). I am using PRAGMA
cache_size=256MB, so it should cache the entire DB in memory.

If I only use a single database then I would expect that the query could be
served entirely from the cached copy of the DB. Since we have 100 of those, I
could reasonably expect that there will be some reading from disk.

But for a smaller number of DBs, say 10, I would expect the performance
similar to in-memory.

The schema is roughly constituted of two tables, one is a full text index,
fts5_table, while the other is called datatable.

The query in question is a full text query on the full text index joined
with another table.

SELECT fts5_table.column, datatable.column2
FROM fts5_table, datatable
WHERE fts5_table MATCH 'term' AND datatable.id = fts5_table.rowid
ORDER BY rank ASC LIMIT 10;

The total number of documents matching 'term' is about 3
(given the ORDER BY clause, all of those documents will need
to be ranked).

As I increase the number of threads, performance peaks at 4
threads, which is less than the number of available cores (beyond that,
performance degrades). If I make my application use more threads (i.e. more
requests get processed in parallel), performance degrades linearly.

All I am getting is ~40 requests per seconds (when we have 100 users making
queries in parallel). But I believe SQLite is capable of doing much more. I
am running on a Red Hat Linux on an Intel machine with 8-cores, 16-threads
and 64GB of system memory. Disks are SSD.

Thanks,

Gabriele


On Thu, May 18, 2017 at 7:51 AM, Eduardo Morras  wrote:

> On Wed, 17 May 2017 22:18:19 -0700
> Gabriele Lanaro  wrote:
>
> > Hi, I'm trying to assess if the performance of my application is
> > dependent on disk access from sqlite.
> >
> > To rule this out I wanted to make sure that the SQLite DB is
> > completely accessed from memory and there are no disk accesses.
> >
> > Is it possible to obtain this effect by using pragmas such as
> > cache_size?
> >
> > Another solution is to copy the existing db to a :memory: db but I'd
> > like to achieve the same effect without doing so (because it will
> > require substantial modification of the application). For the sake of
> > argument, let's imagine that using :memory: db is not an option.
> >
> > Also using a ramdisk is not an option because I don't have root
> > access to the machine.
>
> What OS are you using?
>
> You can use the following tips to make the app less dependent on disk I/O:
>
> a) store temporary tables (materialized views, subqueries, temp
> tables) and transient indices in RAM always (PRAGMA temp_store = 2),
>
> b) increase the cache size: the more, the better (the OS disk cache is
> shared with other processes and is slower); if the cache is equal to or
> bigger than your DB, the whole DB will fit in RAM,
>
> c) set SQLITE_CONFIG_STMTJRNL_SPILL = -1 if you use statement
> journal files (see http://www.sqlite.org/tempfiles.html#stmtjrnl),
>
> d) use WAL journal mode (it avoids rollback-journal files) and configure
> wal_checkpoint,
>
> e) use the mmap_size pragma to minimize I/O (see
> http://www.sqlite.org/mmap.html; it has disadvantages too)
>
>
> > Thanks,
> >
> > Gabriele
>
> HTH
>
> ---   ---
> Eduardo Morras 
>


Re: [sqlite] SQLite in memory

2017-05-18 Thread Eduardo Morras
On Wed, 17 May 2017 22:18:19 -0700
Gabriele Lanaro  wrote:

> Hi, I'm trying to assess if the performance of my application is
> dependent on disk access from sqlite.
> 
> To rule this out I wanted to make sure that the SQLite DB is
> completely accessed from memory and there are no disk accesses.
> 
> Is it possible to obtain this effect by using pragmas such as
> cache_size?
> 
> Another solution is to copy the existing db to a :memory: db but I'd
> like to achieve the same effect without doing so (because it will
> require substantial modification of the application). For the sake of
> argument, let's imagine that using :memory: db is not an option.
> 
> Also using a ramdisk is not an option because I don't have root
> access to the machine.

What OS are you using?

You can use the following tips to make the app less dependent on disk I/O:

a) store temporary tables (materialized views, subqueries, temp
tables) and transient indices in RAM always (PRAGMA temp_store = 2),

b) increase the cache size: the more, the better (the OS disk cache is
shared with other processes and is slower); if the cache is equal to or
bigger than your DB, the whole DB will fit in RAM,

c) set SQLITE_CONFIG_STMTJRNL_SPILL = -1 if you use statement
journal files (see http://www.sqlite.org/tempfiles.html#stmtjrnl),

d) use WAL journal mode (it avoids rollback-journal files) and configure
wal_checkpoint,

e) use the mmap_size pragma to minimize I/O (see
http://www.sqlite.org/mmap.html; it has disadvantages too)


> Thanks,
> 
> Gabriele

HTH

---   ---
Eduardo Morras 


Re: [sqlite] SQLite in memory

2017-05-18 Thread Keith Medcalf
On Wednesday, 17 May, 2017 23:18, Gabriele Lanaro wrote:

> Hi, I'm trying to assess if the performance of my application is dependent
> on disk access from sqlite.

Of course it is.  Depending on what your application is doing.
 
> To rule this out I wanted to make sure that the SQLite DB is completely
> accessed from memory and there are no disk accesses.

You still haven't stated why you think disk access may be the bottleneck and 
what sort of operations that you are doing that you think are problematic.
 
> Is it possible to obtain this effect by using pragmas such as cache_size?

Yes and No.  It depends what kind of operations you are doing.  For example, 
assuming you have enough RAM then once pre-charged even a modern crappy 
filesystem cache will short-circuit I/O reads so that they do not hit the disk. 
 Nothing can be done for writes though (except that the old timer adage that 
the fastest way to do I/O is not to do it, applies in spades).
 
> Another solution is to copy the existing db to a :memory: db but I'd like
> to achieve the same effect without doing so (because it will require
> substantial modification of the application). For the sake of argument,
> let's imagine that using :memory: db is not an option.

Ok.
 
> Also using a ramdisk is not an option because I don't have root access to
> the machine.

That would imply that the machine has a concept of "root", which means that it 
is not Windows, MVS, or another operating system that does not call that level 
of access "root" access.

Perhaps you can provide some actually useful information such as:  the OS you 
are using, what type of operations you think are impacted (made slow) by I/O, 
whether or not the I/O channel "gets full" (ie, is the bottleneck), whether the 
CPU is fully consumed.  Etc.

And yes, Disk I/O makes a huge difference.  I have things that run on a "real 
computer (my laptop)" with an SSD hard drive that does I/O at 2.5 GB/s (yes, 
those are big-B Bytes and big G american billions) and has a significant sized 
"block" cache (as opposed to the modern preponderance of mostly useless 
filesystem caches) which can run a workload in 1/250th of the time taken 
compared to running the exact same workload on a typical "server class" machine 
configured with a high-speed hardware raid attached spinning rust.  The "server 
class" hardware has significantly better CPU, but the bottleneck (in this 
particular case) is apparently I/O.

Have you identified the "parts you think are slow" and instrumented them to see 
what is happening?  Although the SQLite query optimizer is very good, it does 
sometimes make incorrect or unnecessary optimizations that can severely impact 
performance.  Not to mention that there are a lot of other factors in play such 
as OS, filesystem, CPU, RAM, Virtualization, etc.








Re: [sqlite] SQLite in memory

2017-05-18 Thread Paul
If by any chance you have access to Linux or the like, you can mount a ramfs
and move the database file there. ramfs is an ordinary file system that lives
entirely in RAM, so this guarantees that SQLite will make no disk accesses.


18 May 2017, 08:18:47, by "Gabriele Lanaro" :
 
>   Hi, I'm trying to assess if the performance of my application is dependent
> on disk access from sqlite.
> 
> To rule this out I wanted to make sure that the SQLite DB is completely
> accessed from memory and there are no disk accesses.
> 
> Is it possible to obtain this effect by using pragmas such as cache_size?
> 
> Another solution is to copy the existing db to a :memory: db but I'd like
> to achieve the same effect without doing so (because it will require
> substantial modification of the application). For the sake of argument,
> let's imagine that using :memory: db is not an option.
> 
> Also using a ramdisk is not an option because I don't have root access to
> the machine.
> 
> Thanks,
> 
> Gabriele


Re: [sqlite] SQLite in memory

2017-05-17 Thread petern
From the SQLite shell (CLI), have you tried dot commands ".backup" to file
and ".restore" to a new :memory: DB?  That assumes a few things like access
to the filesystem and sufficient user memory quota to hold the disk version
of the DB.  Does that work?

The shell dot commands and their syntax are explained here:
https://sqlite.org/cli.html



On Wed, May 17, 2017 at 10:18 PM, Gabriele Lanaro  wrote:

> Hi, I'm trying to assess if the performance of my application is dependent
> on disk access from sqlite.
>
> To rule this out I wanted to make sure that the SQLite DB is completely
> accessed from memory and there are no disk accesses.
>
> Is it possible to obtain this effect by using pragmas such as cache_size?
>
> Another solution is to copy the existing db to a :memory: db but I'd like
> to achieve the same effect without doing so (because it will require
> substantial modification of the application). For the sake of argument,
> let's imagine that using :memory: db is not an option.
>
> Also using a ramdisk is not an option because I don't have root access to
> the machine.
>
> Thanks,
>
> Gabriele


Re: [sqlite] SQLite leaks memory

2012-11-04 Thread Igor Korot
Hi, Clemens,

On Sun, Nov 4, 2012 at 3:11 AM, Clemens Ladisch  wrote:
> Igor Korot wrote:
>> When the user asks to edit the data it starts the transaction, then
>> set the "SAVEPOINT"
>
> Why do you need a savepoint in addition to the transaction?

Because it is a transaction inside transaction for the scenario #3.

>
>> Now the problem: only during scenario #3 do I have a lot of memory
>> leaks. They are reported in the Visual Studio debug
>> window. Looking at the data I did a global search, and it turns out
>> that those leaks are coming from SQLite.
>
> How are those leaks shown?  With a call stack?  In that case, please
> don't keep it a secret.
>
> It would be helpful to show the code for scenario #3.
> (It might be possible that your code doesn't correctly free some
> SQLite object.)

Well, I spent a good 30+ minutes going through the VLD report (all those
'false positives'; I reported them to the VLD project, actually). At the very
end of it there was a call stack that pointed to a leak, which is now fixed.
Nevertheless, it would be nice to know what people use to identify leaks...

Thank you.
>
>
> Regards,
> Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite leaks memory

2012-11-04 Thread Clemens Ladisch
Igor Korot wrote:
> When the user asks to edit the data it starts the transaction, then
> set the "SAVEPOINT"

Why do you need a savepoint in addition to the transaction?

> Now the problem: only during scenario #3 do I have a lot of memory
> leaks. They are reported in the Visual Studio debug
> window. Looking at the data I did a global search, and it turns out
> that those leaks are coming from SQLite.

How are those leaks shown?  With a call stack?  In that case, please
don't keep it a secret.

It would be helpful to show the code for scenario #3.
(It might be possible that your code doesn't correctly free some
SQLite object.)


Regards,
Clemens


Re: [sqlite] sqlite in-memory mode - Is there any limit about the data size using in-memory mode?

2011-09-12 Thread Jay A. Kreibich
On Mon, Sep 12, 2011 at 06:56:50PM +0200, Stephan Beal scratched on the wall:
> On Mon, Sep 12, 2011 at 6:47 PM, Jay A. Kreibich  wrote:
> 
> > On Mon, Sep 12, 2011 at 12:29:56PM +0800, ?? scratched on the wall:
> > > is there any limit about the data size?
> >
> >   PRAGMA max_page_count should work on in-memory databases.
> 
> Isn't there also the limitation that the maximum db size == the max memory
> the system allocator can dole out, plus limitations imposed by the "bitness"
> of the platform? 

  Well, yes... you need some place to put the data, and 32-bit systems
  do have inherent limits on the amount of memory they can address...
  real, virtual, or otherwise.

> You can't (per my understanding) have a 10GB in-memory db
> on a 32-bit machine (regardless of how much RAM it has),

  Correct.

  Although you could have a 10GB anonymous database, which is kind of
  like an in-memory/on-disk hybrid.

> but you can have a 10GB on-storage db on that same machine.

  Assuming you have a storage device with at least 10 GB free, yes.

   -j


-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson


Re: [sqlite] sqlite in-memory mode - Is there any limit about the data size using in-memory mode?

2011-09-12 Thread Stephan Beal
On Mon, Sep 12, 2011 at 6:47 PM, Jay A. Kreibich  wrote:

> On Mon, Sep 12, 2011 at 12:29:56PM +0800, ?? scratched on the wall:
> > is there any limit about the data size?
>
>   PRAGMA max_page_count should work on in-memory databases.
>

Isn't there also the limitation that the maximum db size == the max memory
the system allocator can dole out, plus limitations imposed by the "bitness"
of the platform? You can't (per my understanding) have a 10GB in-memory db
on a 32-bit machine (regardless of how much RAM it has), but you can have a
10GB on-storage db on that same machine.

:-?

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/


Re: [sqlite] sqlite in-memory mode - Is there any limit about the data size using in-memory mode?

2011-09-12 Thread Jay A. Kreibich
On Mon, Sep 12, 2011 at 12:29:56PM +0800, ?? scratched on the wall:
> Hi there,
> 
> I just have a question. If I am using JDBC driver to connect to sqlite using
> in-memory mode,
> is there any limit about the data size?

  PRAGMA max_page_count should work on in-memory databases.

  http://sqlite.org/pragma.html#pragma_max_page_count

  If you've ATTACHed the in-memory database, you'll need to issue the
  command like this:

PRAGMA <database-name>.max_page_count = <pages>

  Also note that this limits the page count, meaning the total size is
  dependent on the page size.  In-memory DBs live in the cache, which
  also has a small per-page overhead, so the total memory usage will be
  slightly more than (page_size * page_count).
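
The cap Jay describes can be seen in action with a deliberately tiny limit; the page size, limit, and blob size below are chosen only to trigger the limit quickly, not as realistic settings:

```c
#include <assert.h>
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    assert(sqlite3_open(":memory:", &db) == SQLITE_OK);

    /* Limits must be set before the database grows. The DB is now
       capped at roughly page_size * max_page_count = 32 KiB. */
    sqlite3_exec(db, "PRAGMA page_size = 4096;", NULL, NULL, NULL);
    sqlite3_exec(db, "PRAGMA max_page_count = 8;", NULL, NULL, NULL);
    assert(sqlite3_exec(db, "CREATE TABLE t(b BLOB);",
                        NULL, NULL, NULL) == SQLITE_OK);

    /* Fill until the in-memory DB hits its page cap. */
    int rc = SQLITE_OK, inserts = 0;
    while (rc == SQLITE_OK && inserts < 1000) {
        rc = sqlite3_exec(db, "INSERT INTO t VALUES (zeroblob(4000));",
                          NULL, NULL, NULL);
        if (rc == SQLITE_OK) inserts++;
    }
    printf("inserted %d rows before rc=%d\n", inserts, rc);
    assert(rc == SQLITE_FULL);   /* "database or disk is full" */
    sqlite3_close(db);
    return 0;
}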

-j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson


Re: [sqlite] SQLite and Memory Databases

2011-02-09 Thread Igor Tandetnik
Bastian Clarenbach  wrote:
> My environment does not have direct file access, instead I can only request
> files and get a memblock returned that contains the entire file.

You should be able to write a virtual file system that reads and writes to a 
block of memory rather than a physical file. See

http://www.sqlite.org/c3ref/vfs_find.html

-- 
Igor Tandetnik



Re: [sqlite] SQLite and Memory Databases

2011-02-09 Thread Bastian Clarenbach
Yes, I expect the database to be small enough.

It is loading a :memory: database from, and storing it to, memory blocks that
still eludes me.
I will take a look at the backup link.

Thanks!

On 9 February 2011 13:48, Simon Slavin  wrote:

>
> On 9 Feb 2011, at 10:14am, Bastian Clarenbach wrote:
>
> > My environment does not have direct file access, instead I can only
> request
> > files and get a memblock returned that contains the entire file. I am
> trying
> > to figure out how to do one, preferably both, of the following scenarios.
> >
> > 1. I want to create a database 'offline' and then load and use that db as
> a
> > static resource (no inserts or other changes)
> > 2. I want to create a database in memory, store that into a memory block
> and
> > then be able to restore it like in 1
>
> The biggest question with this is whether you expect your entire database
> file to be small enough that you would want to hold it all in memory at the
> same time.  SQLite handles databases in memory just fine: address the
> filename as ':memory:'.  See
>
> http://www.sqlite.org/inmemorydb.html
>
> You can delve into the depths of SQLite and mess with the file system
> routines.  But a more appropriate way to do this might be to use the backup
> API:
>
> http://www.sqlite.org/backup.html
>
> It might be possible to use this to copy your database between the memblock
> and memory or a local file.
>
> Simon.


Re: [sqlite] SQLite and Memory Databases

2011-02-09 Thread Simon Slavin

On 9 Feb 2011, at 10:14am, Bastian Clarenbach wrote:

> My environment does not have direct file access, instead I can only request
> files and get a memblock returned that contains the entire file. I am trying
> to figure out how to do one, preferably both, of the following scenarios.
> 
> 1. I want to create a database 'offline' and then load and use that db as a
> static resource (no inserts or other changes)
> 2. I want to create a database in memory, store that into a memory block and
> then be able to restore it like in 1

The biggest question with this is whether you expect your entire database file
to be small enough that you would want to hold it all in memory at the same
time.  SQLite handles databases in memory just fine: address the filename as
':memory:'.  See

http://www.sqlite.org/inmemorydb.html

You can delve into the depths of SQLite and mess with the file system routines. 
 But a more appropriate way to do this might be to use the backup API:

http://www.sqlite.org/backup.html

It might be possible to use this to copy your database between the memblock and 
memory or a local file.

Simon.


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-21 Thread Ron Arts
Pavel Ivanov schreef:
>> Currently this means adding
>> the new columns to my C-structures, writing access functions, and
>> recompiling. I don't want to do that, because this means my appl *must*
>> be replaced on every database change, and I'd like to be able to
>> run different versions of it in the wild. I was hoping to make it
>> more dynamic and adaptive to database layout changes.
> 
> Sorry, Ron, but I don't get it. You're saying that adding more columns
> means adding more fields into your structure and making your
> application to know about those fields, and that means you need to
> recompile and it's a bad thing. But if you're working with SQLite's
> in-memory database then you have to create all tables in that database
> at startup, so you need an SQL statement in your application
> containing all columns you use, so again adding a column you need to
> recompile, right? Or if you generate your CREATE TABLE statement
> dynamically from postgresql then how does your code know about new
> fields added? How does it work with it? Are your newly added columns
> used only from sql coming directly from clients?
> Maybe you just need to re-think your database schema so that you need
> only one select statement per network request; then your requirements
> on SQL engine speed would be lowered by an order of magnitude?
> 

Pavel,

You are right, and I'll look into rewriting a bigger part of our application.

Thanks,
Ron


> Pavel
> 
> On Sun, Oct 18, 2009 at 2:23 PM, Ron Arts  wrote:
>> P Kishor schreef:
>>> On Sun, Oct 18, 2009 at 10:37 AM, Ron Arts  wrote:
 Very true Simon,

 this has been the fastest way so far and I can do around
 35 selects/second this way, using prepared statements
 (on my machine at least), but I need more speed.

 That's why I want to skip the SQL processing entirely
 and write a C function that reaches directly into the
 internal memory structures to get my record from there.
>>> I might have missed the discussion, but... why don't you ditch SQLite
>>> and use something like Berkeley DB? Sounds to me you need a hash db
>>> instead of an rdbms, especially since you have no need for SQL.
>>>
>> Well, that's what I use at the moment (I use glib2 g_hash_table()),
>> but now the requirement to accept SQL queries for that database
>> has come up. And I don't want to write my own SQL parser..
>>
>> Next I tried to add a virtual table driver to SQLite for my database.
>> That worked, but because the application is evolving, columns
>> get added/changed on a regular basis. Currently this means adding
>> the new columns to my C-structures, writing access functions, and
>> recompiling. I don't want to do that, because this means my appl *must*
>> be replaced on every database change, and I'd like to be able to
>> run different versions of it in the wild. I was hoping to make it
>> more dynamic and adaptive to database layout changes.
>>
>> Thanks,
>> Ron
>>
>>
 thanks,
 Ron

 Simon Slavin schreef:
> On 18 Oct 2009, at 8:37am, Ron Arts wrote:
>
>> Is there a way to bypass the virtual machine altogether and reach
>> directly
>> into the btree and just retrieve one record by its oid (primary
>> integer key),
>> and return it in a form that would allow taking out the column
>> values by name?
> The primary integer key column can always be referred to as the
> special name ROWID, even if you have assigned it a column name of your
> own.  So you can do
>
> SELECT ROWID,myCol1,myCol2 FROM myTable
>
> as long as you don't explicitly declare a primary integer key column
> and then change the values in it.
>
> Simon.
>>>
>>>
>>>
>>>


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-19 Thread Pavel Ivanov
> Currently this means adding
> the new columns to my C-structures, writing access functions, and
> recompiling. I don't want to do that, because this means my appl *must*
> be replaced on every database change, and I'd like to be able to
> run different versions of it in the wild. I was hoping to make it
> more dynamic and adaptive to database layout changes.

Sorry, Ron, but I don't get it. You're saying that adding more columns
means adding more fields to your structure and making your
application know about those fields, which means you need to
recompile, and that's a bad thing. But if you're working with SQLite's
in-memory database then you have to create all tables in that database
at startup, so you need an SQL statement in your application
containing all columns you use, so again when adding a column you need
to recompile, right? Or if you generate your CREATE TABLE statement
dynamically from postgresql, then how does your code know about newly
added fields? How does it work with them? Are your newly added columns
used only from SQL coming directly from clients?
Maybe you just need to re-think your database schema so that you need
only one select statement per network request, so that your requirements
on SQL engine speed would be lowered by an order of magnitude?

Pavel

On Sun, Oct 18, 2009 at 2:23 PM, Ron Arts  wrote:
> P Kishor wrote:
>> On Sun, Oct 18, 2009 at 10:37 AM, Ron Arts  wrote:
>>> Very true Simon,
>>>
>>> this has been the fastest way so far and I can do around
>>> 35 selects/second this way, using prepared statements
>>> (on my machine at least), but I need more speed.
>>>
>>> That's why I want to skip the SQL processing entirely
>>> and write a C function that reaches directly into the
>>> internal memory structures to get my record from there.
>>
>> I might have missed the discussion, but... why don't you ditch SQLite
>> and use something like Berkeley DB? Sounds to me you need a hash db
>> instead of an rdbms, especially since you have no need for SQL.
>>
>
> Well, that's what I use at the moment (I use glib2 g_hash_table()),
> but now the requirement to accept SQL queries for that database
> has come up. And I don't want to write my own SQL parser..
>
> Next I tried to add a virtual table driver to SQLite for my database.
> That worked, but because the application is evolving, columns
> get added/changed on a regular basis. Currently this means adding
> the new columns to my C-structures, writing access functions, and
> recompiling. I don't want to do that, because this means my appl *must*
> be replaced on every database change, and I'd like to be able to
> run different versions of it in the wild. I was hoping to make it
> more dynamic and adaptive to database layout changes.
>
> Thanks,
> Ron
>
>
>>
>>> thanks,
>>> Ron
>>>
>>> Simon Slavin wrote:
 On 18 Oct 2009, at 8:37am, Ron Arts wrote:

> Is there a way to bypass the virtual machine altogether and reach
> directly
> into the btree and just retrieve one record by its oid (primary
> integer key),
> and return it in a form that would allow taking out the column
> values by name?
 The primary integer key column can always be referred to as the
 special name ROWID, even if you have assigned it a column name of your
 own.  So you can do

 SELECT ROWID,myCol1,myCol2 FROM myTable

 as long as you don't explicitly declare a primary integer key column
 and then change the values in it.

 Simon.
>>
>>
>>
>>
>>
>


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread Olaf Schmidt

"Ron Arts" wrote in
message news:4adac5c1.5010...@arts-betel.org...

> Then my program opens a socket, and starts accepting connections,
> those connections are long lasting, and send messages that need
> a fast reply. Many of the messages result in messages being sent
> to all other clients. The messages require on average 10 lookups
> in the memory db, each by oid.

Is the "socket-listener-thread" already decoupled from the
thread which hosts your sqlite-connection-handle?

If not done already, you should try it (that will not speed up
the sqlite-performance, but the overall-performance of your
"broadcasting-dispatcher-app").
Additionally you should decouple the "sqlite-thread" also from
the "reply-sender-threads" (placing the sqlite-query-results
in some structures, where the sender-threads are able to find
them).

That would ensure, that the sqlite-engine can always run
fullspeed, not waiting for potentially "slow, or blocking
socket-transfers".

In such a design you could also try another thing, which
maybe speeds up your selects - meaning, maybe "oID-
aggregation" can help.

If you receive in your socket-listener-thread approx.
50,000 requests per second (and nothing will interrupt this
receiver-thread now, since sqlite-queries run elsewhere) ...
then we talk about 50 incoming messages per millisecond.
Now, since the sqlite-thread is running elsewhere already
... why not aggregate the incoming oIDs in a comma-
separated list (in a simple charbuffer, shared with the
sqlite-thread - and flagged with a "next-job-descriptor").

Each 1 msec (to keep the latency low), you should end
gathering oIDs in such a "next-job" charbuffer and set
the finalized-flag in the job-descriptor-structure (after
that you could start gathering oIDs in your listener-thread
on a different charbuf-allocation immediately).

The sqlite-thread should look for new, flagged as "ready to
proceed" charbuffers on its own, and start its work in a more
"aggregated fashion" then - and maybe the engine-overhead
gets a bit reduced, if sqlite now performs *one* (larger)
select (only each 1 msec), but returning more than only
one single record in its step-loop then.
i.e. per:
SELECT * FROM Table WHERE oID IN (<your gathered oID list>)

Just an idea - I've not yet tested here whether the throughput
would be better this way instead of performing only
single-record-selects ... you probably lose the advantage of the
precompiled "single-record-statement", but could gain overall,
as soon as you reach the step-loop, which then does more than
just one record with probably less overhead.

Maybe that's worth a try.
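
Olaf's aggregation idea is easy to sketch. The thread is about the C API, but
the shape of the batched lookup shows just as well with Python's stdlib sqlite3
bindings; the table, the column names, and the gathered batch below are all
invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE company(id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO company(id, name) VALUES (?, ?)",
                [(i, "Company %d" % i) for i in range(1, 101)])

# oIDs gathered from many incoming requests during one ~1 ms window
batch = [3, 17, 42, 99]

# One aggregated SELECT instead of one single-record SELECT per request.
# The IN-list placeholders are generated to match the batch size.
placeholders = ",".join("?" * len(batch))
rows = con.execute(
    "SELECT id, name FROM company WHERE id IN (%s)" % placeholders,
    batch).fetchall()
results = dict(rows)  # oid -> name, for the reply-sender threads
```

The trade-off is exactly the one Olaf describes: the IN-list statement cannot
be precompiled once (its placeholder count varies), but one step-loop returns
the whole batch.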

Olaf





Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread Simon Slavin

On 18 Oct 2009, at 7:23pm, Ron Arts wrote:

> because the application is evolving, columns
> get added/changed on a regular basis. Currently this means adding
> the new columns to my C-structures, writing access functions, and
> recompiling. I don't want to do that, because this means my appl  
> *must*
> be replaced on every database change, and I'd like to be able to
> run different versions of it in the wild. I was hoping to make it
> more dynamic and adaptive to database layout changes.

Then what you need is a flexible system that can cope with schema and  
index changes.  And that's SQLite.  Whatever you write yourself, if it  
has to have the same flexibility as SQLite does, won't be much faster  
than SQLite.

Simon.


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread Kees Nuyt
On Sun, 18 Oct 2009 17:37:57 +0200, 
Ron Arts  wrote:

>Very true Simon,
>
>this has been the fastest way so far and I can do around
>35 selects/second this way, using prepared statements
>(on my machine at least), but I need more speed.
>
>That's why I want to skip the SQL processing entirely
>and write a C function that reaches directly into the
>internal memory structures to get my record from there.

You will have to implement BTree code to walk the index
pages. You'll have a hard time making your code more
efficient than the SQLite BTree code.

Luckily the BTree code and the Virtual machine interpreter
are in the SQLite source repository, which is public domain.
EXPLAIN SELECT colX FROM tableY WHERE tableY.id = your key;
yields the VM instructions.

So, yes, you can do it.

Note: the SQLite BTree code may change without notice,
because it isn't a public API.

In your case I'd either go for a hash table, without SQL, or
solve the speed problem with hardware.

>thanks,
>Ron
-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread Ron Arts
P Kishor wrote:
> On Sun, Oct 18, 2009 at 10:37 AM, Ron Arts  wrote:
>> Very true Simon,
>>
>> this has been the fastest way so far and I can do around
>> 35 selects/second this way, using prepared statements
>> (on my machine at least), but I need more speed.
>>
>> That's why I want to skip the SQL processing entirely
>> and write a C function that reaches directly into the
>> internal memory structures to get my record from there.
> 
> I might have missed the discussion, but... why don't you ditch SQLite
> and use something like Berkeley DB? Sounds to me you need a hash db
> instead of an rdbms, especially since you have no need for SQL.
> 

Well, that's what I use at the moment (I use glib2 g_hash_table()),
but now the requirement to accept SQL queries for that database
has come up. And I don't want to write my own SQL parser..

Next I tried to add a virtual table driver to SQLite for my database.
That worked, but because the application is evolving, columns
get added/changed on a regular basis. Currently this means adding
the new columns to my C-structures, writing access functions, and
recompiling. I don't want to do that, because this means my appl *must*
be replaced on every database change, and I'd like to be able to
run different versions of it in the wild. I was hoping to make it
more dynamic and adaptive to database layout changes.

Thanks,
Ron


> 
>> thanks,
>> Ron
>>
>> Simon Slavin wrote:
>>> On 18 Oct 2009, at 8:37am, Ron Arts wrote:
>>>
 Is there a way to bypass the virtual machine altogether and reach
 directly
 into the btree and just retrieve one record by its oid (primary
 integer key),
 and return it in a form that would allow taking out the column
 values by name?
>>> The primary integer key column can always be referred to as the
>>> special name ROWID, even if you have assigned it a column name of your
>>> own.  So you can do
>>>
>>> SELECT ROWID,myCol1,myCol2 FROM myTable
>>>
>>> as long as you don't explicitly declare a primary integer key column
>>> and then change the values in it.
>>>
>>> Simon.
> 
> 
> 
> 
> 



Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread P Kishor
On Sun, Oct 18, 2009 at 10:37 AM, Ron Arts  wrote:
> Very true Simon,
>
> this has been the fastest way so far and I can do around
> 35 selects/second this way, using prepared statements
> (on my machine at least), but I need more speed.
>
> That's why I want to skip the SQL processing entirely
> and write a C function that reaches directly into the
> internal memory structures to get my record from there.

I might have missed the discussion, but... why don't you ditch SQLite
and use something like Berkeley DB? Sounds to me you need a hash db
instead of an rdbms, especially since you have no need for SQL.


>
> thanks,
> Ron
>
> Simon Slavin wrote:
>> On 18 Oct 2009, at 8:37am, Ron Arts wrote:
>>
>>> Is there a way to bypass the virtual machine altogether and reach
>>> directly
>>> into the btree and just retrieve one record by its oid (primary
>>> integer key),
>>> and return it in a form that would allow taking out the column
>>> values by name?
>>
>> The primary integer key column can always be referred to as the
>> special name ROWID, even if you have assigned it a column name of your
>> own.  So you can do
>>
>> SELECT ROWID,myCol1,myCol2 FROM myTable
>>
>> as long as you don't explicitly declare a primary integer key column
>> and then change the values in it.
>>
>> Simon.





-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===
Sent from Madison, WI, United States


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread Simon Slavin

On 18 Oct 2009, at 4:37pm, Ron Arts wrote:

> I want to skip the SQL processing entirely
> and write a C function that reaches directly into the
> internal memory structures to get my record from there.

I assume that you've already tested the fastest way of doing this that  
the standard library allows: prebinding using _prepare, etc.

You could download the source code for SQLite, take a look at how the  
SELECT command is implemented, and write your own customised code  
which knows exactly which fields and index you want and how your table  
is structured.  I have no idea what the results would look like or  
whether the developer forum would be a better place to discuss it.   
Start by using EXPLAIN to look at the code generated from your SELECT  
command.




Simon.


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread Ron Arts
Very true Simon,

this has been the fastest way so far and I can do around
35 selects/second this way, using prepared statements
(on my machine at least), but I need more speed.

That's why I want to skip the SQL processing entirely
and write a C function that reaches directly into the
internal memory structures to get my record from there.

thanks,
Ron

Simon Slavin wrote:
> On 18 Oct 2009, at 8:37am, Ron Arts wrote:
> 
>> Is there a way to bypass the virtual machine altogether and reach  
>> directly
>> into the btree and just retrieve one record by its oid (primary
>> integer key),
>> and return it in a form that would allow taking out the column  
>> values by name?
> 
> The primary integer key column can always be referred to as the  
> special name ROWID, even if you have assigned it a column name of your  
> own.  So you can do
> 
> SELECT ROWID,myCol1,myCol2 FROM myTable
> 
> as long as you don't explicitly declare a primary integer key column  
> and then change the values in it.
> 
> Simon.



Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread Simon Slavin

On 18 Oct 2009, at 8:37am, Ron Arts wrote:

> Is there a way to bypass the virtual machine altogether and reach  
> directly
> into the btree and just retrieve one record by its oid (primary
> integer key),
> and return it in a form that would allow taking out the column  
> values by name?

The primary integer key column can always be referred to as the  
special name ROWID, even if you have assigned it a column name of your  
own.  So you can do

SELECT ROWID,myCol1,myCol2 FROM myTable

as long as you don't explicitly declare a primary integer key column  
and then change the values in it.
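
Simon's point is easy to verify. A minimal sketch using Python's stdlib
sqlite3 bindings (the table and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Because id is declared INTEGER PRIMARY KEY, it is an alias for the
# built-in ROWID: both names address the same b-tree key.
con.execute("CREATE TABLE company(id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO company(id, name) VALUES (7, 'Acme')")

by_rowid = con.execute("SELECT name FROM company WHERE ROWID = 7").fetchone()
by_id = con.execute("SELECT name FROM company WHERE id = 7").fetchone()
# Both queries hit the same row: ('Acme',)
```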

Simon.


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-18 Thread Ron Arts
Pavel Ivanov wrote:
>> I use the following queries:
>>
>>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
> 
> I'm not sure how SQLite treats this table definition but probably
> because of your ASC it could decide that id shouldn't be a synonym for
> rowid which will make at least inserts slower.
> 
>> But I'm still looking to speed up selects.
> 
> Other conditions that can speed up selects:
> - Depending on relation between size of your database and size of
> database cache selects could work faster if all database is cached. So
> for example in just started application first set of selects will work
> slower than second. But if full database cannot fit into cache then
> different sets of selects will have the same performance on average.
> - If you don't care about changing your database concurrently from
> other processes you can place all your inserts in one transaction or
> in case of your real application just start transaction at the
> beginning and commit/rollback it at the end.
> 
> Tell us if it still doesn't satisfy you.
> 
> 

Pavel,

I posted a follow-up question in a new thread, which was silly, so I'll
repeat it here.

Is there a way to bypass the virtual machine altogether and reach directly
into the btree and just retrieve one record by its oid (primary integer key),
and return it in a form that would allow taking out the column values by name?

I don't care if it's a bit dirty, but I really need to speed up record retrieval,
and I know beforehand that 99% of my queries involve just retrieving one record
by primary integer key. I also know that the entire database will fit in memory.

I'll tell you the general flow of processing in my program: on startup, it copies
a postgresql database into a sqlite memory database, so I need to do a lot of
inserts, but not very fast, as postgresql cannot supply the records very fast.
Then my program opens a socket and starts accepting connections; those connections
are long lasting, and send messages that need a fast reply. Many of the messages
result in messages being sent to all other clients. The messages require on
average 10 lookups in the memory db, each by oid. Very occasionally I get a
message that needs more, and for that I need to use an SQL statement internally.
I also receive SQL queries from clients occasionally.

My application needs to scale beyond tens of thousands of clients, and should
also communicate with similar apps running on other machines for high
availability and geographical separation.

Thanks,
Ron


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-12 Thread Jay A. Kreibich
On Mon, Oct 12, 2009 at 07:23:30PM -0400, Pavel Ivanov scratched on the wall:
> > Is there a way to prepare the query and save it (compiled form) so that
> > we can share it between multiple connections?
> 
> Yes, there is: http://sqlite-consortium.com/products/sse.

  I realize this may be a general question, but given that the topic of
  the thread was in-memory databases, I'd just point out that there is
  no such thing as "multiple connections" to an in-memory DB.  Even
  within the same process, you cannot open another database handle to
  an in-memory DB.
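
Jay's point can be demonstrated directly. The sketch below uses Python's
stdlib sqlite3 bindings rather than the C API, but the behaviour is the same:
each open of ":memory:" creates its own private, empty database.

```python
import sqlite3

# Two opens of ":memory:" create two independent databases,
# not two connections to one shared in-memory database.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")

a.execute("CREATE TABLE t(x INTEGER)")
a.execute("INSERT INTO t VALUES (1)")

# Connection b cannot see the table created through connection a.
tables_seen_by_b = b.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
```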

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-12 Thread Pavel Ivanov
> Pavel,
>
> does the cache work for memory datsbases too?

Doh, missed the fact that it's a memory database. I believe an in-memory
database is in fact just a database cache that never deletes its pages
from memory and never spills them to disk, so anything about the size of
the database cache will not be applicable here...
But wrapping all selects into one transaction will work with in-memory
databases too (somewhere in another thread there were even benchmarks
showing the effect of this). And maybe in fact you never even need to
commit or rollback transaction in in-memory database because there's
no other connection which will need to get access to the database and
to the committed data...

BTW, another option for speeding up selects is to recompile SQLite
without multi-threading support and maybe even without some other
stuff that you never use. At least lack of multi-threading support can
make a significant difference in performance.

Pavel

On Sun, Oct 11, 2009 at 12:39 PM, Ron Arts  wrote:
> Pavel Ivanov wrote:
>>> I use the following queries:
>>>
>>>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>>
>> I'm not sure how SQLite treats this table definition but probably
>> because of your ASC it could decide that id shouldn't be a synonym for
>> rowid which will make at least inserts slower.
>>
>>> But I'm still looking to speed up selects.
>>
>> Other conditions that can speed up selects:
>> - Depending on relation between size of your database and size of
>> database cache selects could work faster if all database is cached. So
>> for example in just started application first set of selects will work
>> slower than second. But if full database cannot fit into cache then
>> different sets of selects will have the same performance on average.
>> - If you don't care about changing your database concurrently from
>> other processes you can place all your inserts in one transaction or
>> in case of your real application just start transaction at the
>> beginning and commit/rollback it at the end.
>>
>> Tell us if it still doesn't satisfy you.
>>
>
> Pavel,
>
> does the cache work for memory datsbases too?
>
> Thanks,
> Ron
>
>>
>> Pavel
>>
>> On Sun, Oct 11, 2009 at 4:04 AM, Ron Arts  wrote:
>>> Olaf Schmidt wrote:
 "Ron Arts" wrote in
 message news:4ad10a9e.3040...@arts-betel.org...

> Here's my new benchmark output:
>
> sqlite3 insert 500000 records time: 17.19 secs
> sqlite3 select 500000 records time: 18.57 secs
> sqlite3 prepared select 500000 records time: 3.27 secs
> glib2 hash tables insert 500000 records time: 0.38 secs
> glib2 hash tables lookup 500000 records time: 0.24 secs
>
> The prepared select indeed speeds up things tremendously,
> a 5-fold increase.
 Now do the same thing (prepared commands) for
 the sqlite3 - inserts too ... wrapped in a transaction.

 Against an InMemory-Table I reach here about
 12 Inserts per second (filling up and inserting
 "mixed values" against a 8-column-table).
 That's on a 1.8GHz  Intel-(mobile)CPU.

 As long as your benchmarked-tabledef (your insert) has not
 much more columns than my above mentioned '8', then
 you should see something like "factor 5" there too.

 With a small two-column-table (a prepared two-column-insert-Cmd)
 I see about 36 inserts per second, somewhat depending
 on the used datatypes (Integers and Doubles work a bit faster of
 course than inserting the same "amount of Columns" as Text-Values).

 Another reason for your bad insert-performance could of
 course be, that you already defined an index on the table
 in question (or in case your ID-Field is defined as
 INTEGER PRIMARY KEY *and* you're filling up
 new IDs in non-consecutive order).
 In that case your current results seem a bit more reasonable.

 If you don't have an index created yet (on your "HashKey-
 ID-Column" ... or if you don't have mapped your ID-Field
 to SQLites RowID yet (per INTEGER PRIMARY KEY)
 then you maybe should try to create one -  if possible, after
 your "main-amount" of fillups was done - that adds some
 additional time to your overall-data-preparation efforts of your
 "table-list" - but will be of benefit for your single-record-lookups,
 based on your "... Where ID = ? ".


>>> Olaf,
>>>
>>> I tried it, and indeed, this speeds up inserts tremendously as well,
>>> but in fact I'm not at all concerned about insert speed, but much more 
>>> about
>>> select speed. I use the following queries:
>>>
>>>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>>>
>>> Then I insert 50 records like this:
>>>
>>>   INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>>>
>>> (with consecutive values for the id value.)
>>>
>>> do you expect the id column now to be mapped to 

Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-12 Thread Pavel Ivanov
> Is there a way to prepare the query and save it (compiled form) so that we can
> share it between multiple connections?

Yes, there is: http://sqlite-consortium.com/products/sse.

Pavel

On Mon, Oct 12, 2009 at 6:28 AM, Naveen Grover -TP <naveen.gro...@igt.in> wrote:
> Is there a way to prepare the query and save it (compiled form) so that we can
> share it between multiple connections?
>
> Regds,
> Naveen
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Jay A. Kreibich
> Sent: Sunday, October 11, 2009 1:54 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] sqlite in-memory database far too slow in my use case
>
> On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
>
>> I'm afraid the process of
>> constructing SQL queries / parsing them by sqlite, and
>> interpreting the results in my app, multiple times per
>> event will be too slow.
>
>  There should be no need to construct and parse queries with each
>  interaction.  Assuming the queries are fairly well known, you should
>  be able to prepare them once and then keep using them over and over.
>  This should save a noticeable amount of time.
>
>  Make sure you're using the prepare/bind/step/reset/finalize
>  interfaces, rather than exec or get_table.
>
>   -j
>
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>
> "Our opponent is an alien starship packed with atomic bombs.  We have
>  a protractor."   "I'll go home and see if I can scrounge up a ruler
>  and a piece of string."  --from Anathem by Neal Stephenson
>
> +
> CONFIDENTIALITY NOTICE & DISCLAIMER
>
> The contents of this e-mail are confidential to the ordinary user of the 
> e-mail address to which it was addressed and may also be privileged. If you 
> are not the addressee of this e-mail you may not copy, forward, disclose or 
> otherwise use it or any part of it in any form whatsoever. If you have 
> received this e-mail in error please e-mail the sender by replying to this 
> message. The recipient should check this email and any attachments for the 
> presence of viruses. InterGlobe accepts no liability for any damage caused by 
> any virus transmitted by this email.
> +
>


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-12 Thread Naveen Grover -TP
Is there a way to prepare the query and save it (compiled form) so that we can
share it between multiple connections?

Regds,
Naveen

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Jay A. Kreibich
Sent: Sunday, October 11, 2009 1:54 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite in-memory database far too slow in my use case

On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:

> I'm afraid the process of
> constructing SQL queries / parsing them by sqlite, and
> interpreting the results in my app, multiple times per
> event will be too slow.

  There should be no need to construct and parse queries with each
  interaction.  Assuming the queries are fairly well known, you should
  be able to prepare them once and then keep using them over and over.
  This should save a noticeable amount of time.

  Make sure you're using the prepare/bind/step/reset/finalize
  interfaces, rather than exec or get_table.
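
The same advice carries over to other bindings. Here is a minimal sketch with
Python's stdlib sqlite3 module, whose per-connection statement cache reuses
the compiled statement when the SQL text is identical: the moral equivalent of
prepare once, then bind/step/reset per lookup (the table and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE company(id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO company VALUES (?, ?)",
                [(i, "name %d" % i) for i in range(1, 1001)])

# One parameterized statement reused for every lookup; only the bound
# id changes between calls, so the SQL text is compiled just once and
# fetched from the statement cache thereafter.
LOOKUP = "SELECT name FROM company WHERE id = ?"
names = [con.execute(LOOKUP, (i,)).fetchone()[0] for i in (1, 2, 3)]
```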

   -j

--
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson



Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-11 Thread Alexey Pechnikov
Hello!

On Sunday 11 October 2009 22:52:29 Jay A. Kreibich wrote:
>   A bit to my surprise, the difference is even more significant using 
>   prepared statements in a C program.  For a half-million selects over a
>   similar table in a :memory: database, there is a 20% speed-up by
>   wrapping all the selects in a transaction (vs the 10% you're seeing).
>   It's averaging about 4.3 seconds in auto-commit mode, and 3.4 seconds
>   in an explicit transaction.

The tcl program copies the selected rows. Maybe your C program
doesn't copy the selected data?

The modified tcl test script below shows how a transaction increases the
speed of data extraction. In this case the test program copies data too,
but no data is extracted from the database table when we use the
"select NULL" construction.

Performance increased by ~10% with a transaction when extracting table
data, and by ~3% when not extracting.

$ ./test.tcl
insert transaction 500000 rows
21233766 microseconds per iteration

select 500000 rows
28164019 microseconds per iteration

select without extract 500000 rows
26379441 microseconds per iteration

select transaction 500000 rows
25749923 microseconds per iteration

select transaction without extract 500000 rows
25644248 microseconds per iteration


$ cat ./test.tcl   
#!/usr/bin/tclsh8.5   
package require sqlite3   
sqlite3 db :memory:   
set limit 500000

db eval {create table test(id integer primary key, value text)}

puts "insert transaction $limit rows"
puts [time {
db transaction {
for {set i 0} {$i<$limit} {incr i} {
set value "value $i"
db eval {insert into test (value) values ($value)}
}
}
}]

puts "\nselect $limit rows"
puts [time {
for {set i 0} {$i<$limit} {incr i} {
set rowid [expr round(rand()*$limit)]
db onecolumn {select value from test where id=$rowid}
}
}]

puts "\nselect without extract $limit rows"
puts [time {
for {set i 0} {$i<$limit} {incr i} {
set rowid [expr round(rand()*$limit)]
db onecolumn {select NULL from test where id=$rowid}
}
}]

puts "\nselect transaction $limit rows"
puts [time {
db transaction {
for {set i 0} {$i<$limit} {incr i} {
set rowid [expr round(rand()*$limit)]
db onecolumn {select value from test where id=$rowid}
}
}
}]

puts "\nselect transaction without extract $limit rows"
puts [time {
db transaction {
for {set i 0} {$i<$limit} {incr i} {
set rowid [expr round(rand()*$limit)]
db onecolumn {select NULL from test where id=$rowid}
}
}
}]


Best regards, Alexey Pechnikov.
http://pechnikov.tel/
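For readers without Tcl handy, the same experiment can be sketched with Python's stdlib sqlite3 module (an illustrative translation, not the script that produced the numbers above; the `bench` helper and its row count are made up here):

```python
import sqlite3
import time

def bench(limit):
    """Time `limit` single-row selects on :memory:, with and without a transaction."""
    db = sqlite3.connect(":memory:")
    db.execute("create table test(id integer primary key, value text)")
    with db:  # one transaction around all the inserts
        db.executemany("insert into test(value) values (?)",
                       ((f"value {i}",) for i in range(limit)))

    t0 = time.perf_counter()          # selects in auto-commit mode
    for i in range(1, limit + 1):
        db.execute("select value from test where id=?", (i,)).fetchone()
    auto = time.perf_counter() - t0

    t0 = time.perf_counter()          # the same selects inside one transaction
    db.execute("begin")
    for i in range(1, limit + 1):
        db.execute("select value from test where id=?", (i,)).fetchone()
    db.commit()
    tx = time.perf_counter() - t0
    return auto, tx

auto, tx = bench(10_000)
print(f"auto-commit: {auto:.3f}s  transaction: {tx:.3f}s")
```

The Python wrapper overhead dominates here, so the relative gain from the transaction is smaller and noisier than in the Tcl or C measurements.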
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-11 Thread Jay A. Kreibich
On Sun, Oct 11, 2009 at 11:49:57AM +0400, Alexey Pechnikov scratched on the 
wall:
> Hello!
> 
> On Sunday 11 October 2009 00:54:04 Simon Slavin wrote:
> > >  Using transactions speeds up a long series of SELECTs because it
> > >  eliminates the need to re-acquire a read-only file-lock for each
> > >  individual SELECT.
> > >
> > >  Since in-memory databases have no file locks, I'm not sure that is
> > >  relevant to this specific case.
> > 
> > I wasn't sure about that.  It could still be slower
> 
> You can check it very easy.
> 
> In transactions:
> ve...@veter-laptop:/tmp$ ./test.tcl
> 19968119 microseconds per iteration
> 25649514 microseconds per iteration
> 
> Without transactions:
> ve...@veter-laptop:/tmp$ ./test.tcl
> 35586024 microseconds per iteration
> 28630785 microseconds per iteration

  A bit to my surprise, the difference is even more significant using 
  prepared statements in a C program.  For a half-million selects over a
  similar table in a :memory: database, there is a 20% speed-up by
  wrapping all the selects in a transaction (vs the 10% you're seeing).
  It's averaging about 4.3 seconds in auto-commit mode, and 3.4 seconds
  in an explicit transaction.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-11 Thread Ron Arts
Are there compile-time switches I can use to speed up
selects in memory databases? Will the amalgamation be faster
than linking the library at runtime?

Thanks,
Ron

Pavel Ivanov schreef:
>> I use the following queries:
>>
>>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
> 
> I'm not sure how SQLite treats this table definition but probably
> because of your ASC it could decide that id shouldn't be a synonym for
> rowid which will make at least inserts slower.
> 
>> But I'm still looking to speed up selects.
> 
> Other conditions that can speed up selects:
> - Depending on relation between size of your database and size of
> database cache selects could work faster if all database is cached. So
> for example in just started application first set of selects will work
> slower than second. But if full database cannot fit into cache then
> different sets of selects will have the same performance on average.
> - If you don't care about changing your database concurrently from
> other processes you can place all your inserts in one transaction or
> in case of your real application just start transaction at the
> beginning and commit/rollback it at the end.
> 
> Tell us if it still doesn't satisfy you.
> 
> 
> Pavel
> 
> On Sun, Oct 11, 2009 at 4:04 AM, Ron Arts  wrote:
>> Olaf Schmidt schreef:
>>> "Ron Arts"  schrieb im
>>> Newsbeitrag news:4ad10a9e.3040...@arts-betel.org...
>>>
 Here's my new benchmark output:

 sqlite3 insert 50 records time: 17.19 secs
 sqlite3 select 50 records time: 18.57 secs
 sqlite3 prepared select 50 records time: 3.27 secs
 glib2 hash tables insert 50 records time: 0.38 secs
 glib2 hash tables lookup 50 records time: 0.24 secs

 The prepared select indeed speeds up things tremendously,
 a 5-fold increase.
>>> Now do the same thing (prepared commands) for
>>> the sqlite3 - inserts too ... wrapped in a transaction.
>>>
>>> Against an InMemory-Table I reach here about
>>> 12 Inserts per second (filling up and inserting
>>> "mixed values" against a 8-column-table).
>>> That's on a 1.8GHz  Intel-(mobile)CPU.
>>>
>>> As long as your benchmarked-tabledef (your insert) has not
>>> much more columns than my above mentioned '8', then
>>> you should see something like "factor 5" there too.
>>>
>>> With a small two-column-table (a prepared two-column-insert-Cmd)
>>> I see about 36 inserts per second, somewhat depending
>>> on the used datatypes (Integers and Doubles work a bit faster of
>>> course than inserting the same "amount of Columns" as Text-Values).
>>>
>>> Another reason for your bad insert-performance could of
>>> course be, that you already defined an index on the table
>>> in question (or in case your ID-Field is defined as
>>> INTEGER PRIMARY KEY *and* you're filling up
>>> new IDs in non-consecutive order).
>>> In that case your current results seem a bit more reasonable.
>>>
>>> If you don't have an index created yet (on your "HashKey-
>>> ID-Column" ... or if you don't have mapped your ID-Field
>>> to SQLites RowID yet (per INTEGER PRIMARY KEY)
>>> then you maybe should try to create one -  if possible, after
>>> your "main-amount" of fillups was done - that adds some
>>> additional time to your overall-data-preparation efforts of your
>>> "table-list" - but will be of benefit for your single-record-lookups,
>>> based on your "... Where ID = ? ".
>>>
>>>
>> Olaf,
>>
>> I tried it, and indeed, this speeds up inserts tremendously as well,
>> but in fact I'm not at all concerned about insert speed, but much more about
>> select speed. I use the following queries:
>>
>>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>>
>> Then I insert 50 records like this:
>>
>>   INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>>
>> (with consecutive values for the id value.)
>>
>> do you expect the id column now to be mapped to the Row ID, so
>> this results in the fastest possible way of selecting by id?
>>
>> I now get this:
>>
>> sqlite3 prepared insert in trx 50 records time: 5.08 secs
>> sqlite3 select 50 records time: 19.28 secs
>> sqlite3 prepared select 50 records time: 3.47 secs
>> glib2 hash tables insert 50 records time: 0.37 secs
>> glib2 hash tables lookup 50 records time: 0.25 secs
>>
>> But I'm still looking to speed up selects.
>>
>> Thanks,
>> Ron
>>
>>> Olaf Schmidt
>>>
>>>
>>>


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-11 Thread Ron Arts
Pavel Ivanov schreef:
>> I use the following queries:
>>
>>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
> 
> I'm not sure how SQLite treats this table definition but probably
> because of your ASC it could decide that id shouldn't be a synonym for
> rowid which will make at least inserts slower.
> 
>> But I'm still looking to speed up selects.
> 
> Other conditions that can speed up selects:
> - Depending on relation between size of your database and size of
> database cache selects could work faster if all database is cached. So
> for example in just started application first set of selects will work
> slower than second. But if full database cannot fit into cache then
> different sets of selects will have the same performance on average.
> - If you don't care about changing your database concurrently from
> other processes you can place all your inserts in one transaction or
> in case of your real application just start transaction at the
> beginning and commit/rollback it at the end.
> 
> Tell us if it still doesn't satisfy you.
> 

Pavel,

does the cache work for memory databases too?

Thanks,
Ron

> 
> Pavel
> 
> On Sun, Oct 11, 2009 at 4:04 AM, Ron Arts  wrote:
>> Olaf Schmidt schreef:
>>> "Ron Arts"  schrieb im
>>> Newsbeitrag news:4ad10a9e.3040...@arts-betel.org...
>>>
 Here's my new benchmark output:

 sqlite3 insert 50 records time: 17.19 secs
 sqlite3 select 50 records time: 18.57 secs
 sqlite3 prepared select 50 records time: 3.27 secs
 glib2 hash tables insert 50 records time: 0.38 secs
 glib2 hash tables lookup 50 records time: 0.24 secs

 The prepared select indeed speeds up things tremendously,
 a 5-fold increase.
>>> Now do the same thing (prepared commands) for
>>> the sqlite3 - inserts too ... wrapped in a transaction.
>>>
>>> Against an InMemory-Table I reach here about
>>> 12 Inserts per second (filling up and inserting
>>> "mixed values" against a 8-column-table).
>>> That's on a 1.8GHz  Intel-(mobile)CPU.
>>>
>>> As long as your benchmarked-tabledef (your insert) has not
>>> much more columns than my above mentioned '8', then
>>> you should see something like "factor 5" there too.
>>>
>>> With a small two-column-table (a prepared two-column-insert-Cmd)
>>> I see about 36 inserts per second, somewhat depending
>>> on the used datatypes (Integers and Doubles work a bit faster of
>>> course than inserting the same "amount of Columns" as Text-Values).
>>>
>>> Another reason for your bad insert-performance could of
>>> course be, that you already defined an index on the table
>>> in question (or in case your ID-Field is defined as
>>> INTEGER PRIMARY KEY *and* you're filling up
>>> new IDs in non-consecutive order).
>>> In that case your current results seem a bit more reasonable.
>>>
>>> If you don't have an index created yet (on your "HashKey-
>>> ID-Column" ... or if you don't have mapped your ID-Field
>>> to SQLites RowID yet (per INTEGER PRIMARY KEY)
>>> then you maybe should try to create one -  if possible, after
>>> your "main-amount" of fillups was done - that adds some
>>> additional time to your overall-data-preparation efforts of your
>>> "table-list" - but will be of benefit for your single-record-lookups,
>>> based on your "... Where ID = ? ".
>>>
>>>
>> Olaf,
>>
>> I tried it, and indeed, this speeds up inserts tremendously as well,
>> but in fact I'm not at all concerned about insert speed, but much more about
>> select speed. I use the following queries:
>>
>>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>>
>> Then I insert 50 records like this:
>>
>>   INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>>
>> (with consecutive values for the id value.)
>>
>> do you expect the id column now to be mapped to the Row ID, so
>> this results in the fastest possible way of selecting by id?
>>
>> I now get this:
>>
>> sqlite3 prepared insert in trx 50 records time: 5.08 secs
>> sqlite3 select 50 records time: 19.28 secs
>> sqlite3 prepared select 50 records time: 3.47 secs
>> glib2 hash tables insert 50 records time: 0.37 secs
>> glib2 hash tables lookup 50 records time: 0.25 secs
>>
>> But I'm still looking to speed up selects.
>>
>> Thanks,
>> Ron
>>
>>> Olaf Schmidt
>>>
>>>
>>>


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-11 Thread Pavel Ivanov
> I use the following queries:
>
>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)

I'm not sure how SQLite treats this table definition but probably
because of your ASC it could decide that id shouldn't be a synonym for
rowid which will make at least inserts slower.

> But I'm still looking to speed up selects.

Other conditions that can speed up selects:
- Depending on relation between size of your database and size of
database cache selects could work faster if all database is cached. So
for example in just started application first set of selects will work
slower than second. But if full database cannot fit into cache then
different sets of selects will have the same performance on average.
- If you don't care about changing your database concurrently from
other processes you can place all your inserts in one transaction or
in case of your real application just start transaction at the
beginning and commit/rollback it at the end.

Tell us if it still doesn't satisfy you.


Pavel

On Sun, Oct 11, 2009 at 4:04 AM, Ron Arts  wrote:
> Olaf Schmidt schreef:
>> "Ron Arts"  schrieb im
>> Newsbeitrag news:4ad10a9e.3040...@arts-betel.org...
>>
>>> Here's my new benchmark output:
>>>
>>> sqlite3 insert 50 records time: 17.19 secs
>>> sqlite3 select 50 records time: 18.57 secs
>>> sqlite3 prepared select 50 records time: 3.27 secs
>>> glib2 hash tables insert 50 records time: 0.38 secs
>>> glib2 hash tables lookup 50 records time: 0.24 secs
>>>
>>> The prepared select indeed speeds up things tremendously,
>>> a 5-fold increase.
>>
>> Now do the same thing (prepared commands) for
>> the sqlite3 - inserts too ... wrapped in a transaction.
>>
>> Against an InMemory-Table I reach here about
>> 12 Inserts per second (filling up and inserting
>> "mixed values" against a 8-column-table).
>> That's on a 1.8GHz  Intel-(mobile)CPU.
>>
>> As long as your benchmarked-tabledef (your insert) has not
>> much more columns than my above mentioned '8', then
>> you should see something like "factor 5" there too.
>>
>> With a small two-column-table (a prepared two-column-insert-Cmd)
>> I see about 36 inserts per second, somewhat depending
>> on the used datatypes (Integers and Doubles work a bit faster of
>> course than inserting the same "amount of Columns" as Text-Values).
>>
>> Another reason for your bad insert-performance could of
>> course be, that you already defined an index on the table
>> in question (or in case your ID-Field is defined as
>> INTEGER PRIMARY KEY *and* you're filling up
>> new IDs in non-consecutive order).
>> In that case your current results seem a bit more reasonable.
>>
>> If you don't have an index created yet (on your "HashKey-
>> ID-Column" ... or if you don't have mapped your ID-Field
>> to SQLites RowID yet (per INTEGER PRIMARY KEY)
>> then you maybe should try to create one -  if possible, after
>> your "main-amount" of fillups was done - that adds some
>> additional time to your overall-data-preparation efforts of your
>> "table-list" - but will be of benefit for your single-record-lookups,
>> based on your "... Where ID = ? ".
>>
>>
>
> Olaf,
>
> I tried it, and indeed, this speeds up inserts tremendously as well,
> but in fact I'm not at all concerned about insert speed, but much more about
> select speed. I use the following queries:
>
>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>
> Then I insert 50 records like this:
>
>   INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>
> (with consecutive values for the id value.)
>
> do you expect the id column now to be mapped to the Row ID, so
> this results in the fastest possible way of selecting by id?
>
> I now get this:
>
> sqlite3 prepared insert in trx 50 records time: 5.08 secs
> sqlite3 select 50 records time: 19.28 secs
> sqlite3 prepared select 50 records time: 3.47 secs
> glib2 hash tables insert 50 records time: 0.37 secs
> glib2 hash tables lookup 50 records time: 0.25 secs
>
> But I'm still looking to speed up selects.
>
> Thanks,
> Ron
>
>> Olaf Schmidt
>>
>>
>>


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-11 Thread Olaf Schmidt

"Ron Arts"  schrieb im
Newsbeitrag news:4ad19195.2060...@arts-betel.org...

> I tried it, and indeed, this speeds up inserts tremendously as well,
> but in fact I'm not at all concernced about insert speed, but much more
about
> select speed. I use the following queries:
>
>CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)

Not sure if an explicit text-storage-class hint for your name
column will help speed things up a bit more, but try:
CREATE TABLE company(id INTEGER PRIMARY KEY, name TEXT)

> Then I insert 50 records like this:
>
>INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')

You should prepare the statement like this:
INSERT INTO company (id, name) VALUES (?, ?)

And then treat the (now RowID-mapped) Integer ID as an
Integer, not as a String.
Just use the correctly "typed" binding-calls.

And then wrap the Insert-Loop within a transaction.

> But I'm still looking to speed up selects.
With regards to selects (searching for random single IDs), you have
probably already reached the maximum (in case you've not made
any mistakes with the type binding).

Olaf





Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-11 Thread Simon Slavin

On 11 Oct 2009, at 9:04am, Ron Arts wrote:

>   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>
> Then I insert 50 records like this:
>
>   INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>
> (with consecutive values for the id value.)

I think you can remove the single quotes from around your value for  
the id column, because both the column definition and the values  
you're supplying for it are integers rather than strings.  This might  
speed up your INSERT even more.

> more about
> select speed

When testing SELECT speeds to work out how best to use the library,  
include code to take the values that are returned and put them into  
some variables in whatever language you're using.  In some languages,  
using some compilers, and with some optimisations turned on, the  
SELECT command itself executes quickly but extracting the values to  
variables takes longer.

Also, note that if you don't use the values from the variables some  
compilers (e.g. recent versions of gcc) will optimise out the routines  
which are meant to set the values.  So if you're running comparative  
speed tests and getting weird results try using the variables, for  
example printing them to /dev/null.

Simon.


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-11 Thread Ron Arts
Alexey Pechnikov schreef:
> Hello!
> 
> On Sunday 11 October 2009 12:04:37 Ron Arts wrote:
>>CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>>
>> Then I insert 50 records like this:
>>
>>INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>>
>> (with consecutive values for the id value.)
>>
>> do you expect the id column now to be mapped to the Row ID, so
>> this results in the fastest possible way of selecting by id?
> 
> Yes, the id is an alias for rowid in your table. Check it from the sqlite3 shell:
> sqlite> explain query plan select name from company where id=1;
>  
>> I now get this:
>>
>> sqlite3 prepared insert in trx 50 records time: 5.08 secs
>> sqlite3 select 50 records time: 19.28 secs
>> sqlite3 prepared select 50 records time: 3.47 secs
>> glib2 hash tables insert 50 records time: 0.37 secs
>> glib2 hash tables lookup 50 records time: 0.25 secs
>>
>> But I'm still looking to speed up selects.
> 
> Hm... I think you cannot open more than about 8 000 tcp/ip sockets per second
> in the common case, so SQLite's speed is good enough. Why do you write about
> "to handle tens of thousands requests per second"?
> 

Well, in my case there can be tens of thousands of connections open
at the same time, where each connection can last days.
Each connection can spit out multiple messages per second, and each
message needs around 5 SQL queries.

Ron


> Best regards, Alexey Pechnikov.
> http://pechnikov.tel/


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-11 Thread Alexey Pechnikov
Hello!

On Sunday 11 October 2009 12:04:37 Ron Arts wrote:
>CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
> 
> Then I insert 50 records like this:
> 
>INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
> 
> (with consecutive values for the id value.)
> 
> do you expect the id column now to be mapped to the Row ID, so
> this results in the fastest possible way of selecting by id?

Yes, the id is an alias for rowid in your table. Check it from the sqlite3 shell:
sqlite> explain query plan select name from company where id=1;
 
> I now get this:
> 
> sqlite3 prepared insert in trx 50 records time: 5.08 secs
> sqlite3 select 50 records time: 19.28 secs
> sqlite3 prepared select 50 records time: 3.47 secs
> glib2 hash tables insert 50 records time: 0.37 secs
> glib2 hash tables lookup 50 records time: 0.25 secs
> 
> But I'm still looking to speed up selects.

Hm... I think you cannot open more than about 8 000 tcp/ip sockets per second
in the common case, so SQLite's speed is good enough. Why do you write about
"to handle tens of thousands requests per second"?

Best regards, Alexey Pechnikov.
http://pechnikov.tel/
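The aliasing is also easy to verify directly with Python's stdlib sqlite3 module (an illustrative sketch; the table and value names follow the thread):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)")
db.execute("INSERT INTO company (id, name) VALUES (?, ?)",
           (1, "Company name number 1"))

# id is an alias for rowid, so both columns hold the very same value
row = db.execute("SELECT rowid, id FROM company WHERE id = 1").fetchone()
print(row)  # (1, 1)

# and the plan shows a primary-key lookup rather than a full table scan
for entry in db.execute("EXPLAIN QUERY PLAN "
                        "SELECT name FROM company WHERE id = 1"):
    print(entry[-1])
```

The exact wording of the plan line varies between SQLite versions, but it should mention a search using the integer primary key (rowid).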


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-11 Thread Ron Arts
Olaf Schmidt schreef:
> "Ron Arts"  schrieb im
> Newsbeitrag news:4ad10a9e.3040...@arts-betel.org...
> 
>> Here's my new benchmark output:
>>
>> sqlite3 insert 50 records time: 17.19 secs
>> sqlite3 select 50 records time: 18.57 secs
>> sqlite3 prepared select 50 records time: 3.27 secs
>> glib2 hash tables insert 50 records time: 0.38 secs
>> glib2 hash tables lookup 50 records time: 0.24 secs
>>
>> The prepared select indeed speeds up things tremendously,
>> a 5-fold increase.
> 
> Now do the same thing (prepared commands) for
> the sqlite3 - inserts too ... wrapped in a transaction.
> 
> Against an InMemory-Table I reach here about
> 12 Inserts per second (filling up and inserting
> "mixed values" against a 8-column-table).
> That's on a 1.8GHz  Intel-(mobile)CPU.
> 
> As long as your benchmarked-tabledef (your insert) has not
> much more columns than my above mentioned '8', then
> you should see something like "factor 5" there too.
> 
> With a small two-column-table (a prepared two-column-insert-Cmd)
> I see about 36 inserts per second, somewhat depending
> on the used datatypes (Integers and Doubles work a bit faster of
> course than inserting the same "amount of Columns" as Text-Values).
> 
> Another reason for your bad insert-performance could of
> course be, that you already defined an index on the table
> in question (or in case your ID-Field is defined as
> INTEGER PRIMARY KEY *and* you're filling up
> new IDs in non-consecutive order).
> In that case your current results seem a bit more reasonable.
> 
> If you don't have an index created yet (on your "HashKey-
> ID-Column" ... or if you don't have mapped your ID-Field
> to SQLites RowID yet (per INTEGER PRIMARY KEY)
> then you maybe should try to create one -  if possible, after
> your "main-amount" of fillups was done - that adds some
> additional time to your overall-data-preparation efforts of your
> "table-list" - but will be of benefit for your single-record-lookups,
> based on your "... Where ID = ? ".
> 
> 

Olaf,

I tried it, and indeed, this speeds up inserts tremendously as well,
but in fact I'm not at all concerned about insert speed, but much more about
select speed. I use the following queries:

   CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)

Then I insert 50 records like this:

   INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')

(with consecutive values for the id value.)

do you expect the id column now to be mapped to the Row ID, so
this results in the fastest possible way of selecting by id?

I now get this:

sqlite3 prepared insert in trx 50 records time: 5.08 secs
sqlite3 select 50 records time: 19.28 secs
sqlite3 prepared select 50 records time: 3.47 secs
glib2 hash tables insert 50 records time: 0.37 secs
glib2 hash tables lookup 50 records time: 0.25 secs

But I'm still looking to speed up selects.

Thanks,
Ron

> Olaf Schmidt
> 
> 
> 


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-11 Thread Alexey Pechnikov
Hello!

On Sunday 11 October 2009 00:54:04 Simon Slavin wrote:
> >  Using transactions speeds up a long series of SELECTs because it
> >  eliminates the need to re-acquire a read-only file-lock for each
> >  individual SELECT.
> >
> >  Since in-memory databases have no file locks, I'm not sure that is
> >  relevant to this specific case.
> 
> I wasn't sure about that.  It could still be slower

You can check it very easy.

In transactions:
ve...@veter-laptop:/tmp$ ./test.tcl
19968119 microseconds per iteration
25649514 microseconds per iteration

Without transactions:
ve...@veter-laptop:/tmp$ ./test.tcl
35586024 microseconds per iteration
28630785 microseconds per iteration


$ cat ./test.tcl
#!/usr/bin/tclsh8.5

package require sqlite3
sqlite3 db :memory:

set limit 50
db eval {create table test(id int primary key, value text)}

puts [time {
db transaction {
for {set i 0} {$i<$limit} {incr i} {
set value "value $i"
db eval {insert into test (value) values ($value)}
}
}
}]

puts [time {
db transaction {
for {set i 0} {$i<$limit} {incr i} {
set rowid [expr round(rand()*$limit)]
db onecolumn {select value from test where rowid=$rowid}
}
}
}]


Best regards, Alexey Pechnikov.
http://pechnikov.tel/


Re: [sqlite] sqlite in-memory database far too slow in my use case (new benchmark inside)

2009-10-10 Thread Olaf Schmidt

"Ron Arts"  schrieb im
Newsbeitrag news:4ad10a9e.3040...@arts-betel.org...

> Here's my new benchmark output:
>
> sqlite3 insert 50 records time: 17.19 secs
> sqlite3 select 50 records time: 18.57 secs
> sqlite3 prepared select 50 records time: 3.27 secs
> glib2 hash tables insert 50 records time: 0.38 secs
> glib2 hash tables lookup 50 records time: 0.24 secs
>
> The prepared select indeed speeds up things tremendously,
> a 5-fold increase.

Now do the same thing (prepared commands) for
the sqlite3 - inserts too ... wrapped in a transaction.

Against an InMemory-Table I reach here about
12 Inserts per second (filling up and inserting
"mixed values" against a 8-column-table).
That's on a 1.8GHz  Intel-(mobile)CPU.

As long as your benchmarked table definition (your insert) does not
have many more columns than the '8' mentioned above, then
you should see something like "factor 5" there too.

With a small two-column-table (a prepared two-column-insert-Cmd)
I see about 36 inserts per second, somewhat depending
on the used datatypes (Integers and Doubles work a bit faster of
course than inserting the same "amount of Columns" as Text-Values).

Another reason for your bad insert-performance could of
course be, that you already defined an index on the table
in question (or in case your ID-Field is defined as
INTEGER PRIMARY KEY *and* you're filling up
new IDs in non-consecutive order).
In that case your current results seem a bit more reasonable.

If you don't have an index created yet (on your "HashKey-
ID-Column" ... or if you don't have mapped your ID-Field
to SQLites RowID yet (per INTEGER PRIMARY KEY)
then you maybe should try to create one -  if possible, after
your "main-amount" of fillups was done - that adds some
additional time to your overall-data-preparation efforts of your
"table-list" - but will be of benefit for your single-record-lookups,
based on your "... Where ID = ? ".


Olaf Schmidt
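Olaf's two suggestions — a reusable prepared insert plus one enclosing transaction — look roughly like this with Python's stdlib sqlite3 module (an illustrative sketch; his throughput numbers come from a different binding on different hardware, and the row count here is arbitrary):

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE company(id INTEGER PRIMARY KEY, name TEXT)")

n = 100_000
rows = ((i, f"Company name number {i}") for i in range(1, n + 1))

t0 = time.perf_counter()
with db:  # a single transaction around every insert
    # executemany reuses one compiled statement with typed bindings
    db.executemany("INSERT INTO company (id, name) VALUES (?, ?)", rows)
elapsed = time.perf_counter() - t0

count, = db.execute("SELECT count(*) FROM company").fetchone()
print(count, "rows inserted in", round(elapsed, 2), "s")
```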





Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Simon Slavin

On 10 Oct 2009, at 10:57pm, Ron Arts wrote:

> The sqlite3_bind_int immediately gives me an RANGE_ERROR (25).
> Is there some obvious thing I'm doing wrong?

I notice that your _prepare call puts single quotes around the  
variable, whereas you are binding an integer to it.  But that's  
probably not what's causing your problem.

> rc = sqlite3_bind_int(stmt, 1, rand()%count);

rand()%count ?

Just for debugging purposes, split this into two: assign rand()%count  
to an integer variable in one line, then use it in the next line.  Try  
spitting out the variable value to a debugger or console.

Simon.


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Jay A. Kreibich
On Sat, Oct 10, 2009 at 11:57:30PM +0200, Ron Arts scratched on the wall:

> I'm expanding my benchmark to test just that, but I'm running into a problem.
> Here's my code (well part of it):
> 
>sqlite3_stmt *stmt;
>rc = sqlite3_prepare(db, "select name from company where id = '?'", -1,
>   &stmt, NULL);


> The sqlite3_bind_int immediately gives me an RANGE_ERROR (25).
> Is there some obvious thing I'm doing wrong?

  Remove the single quotes.  They aren't required for parameters, even
  if you're binding a text value.

  As written, that statement has a single character string literal and
  no parameters.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Ron Arts
Jay A. Kreibich schreef:
> On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
> 
>> I'm afraid the process of
>> constructing SQL queries / parsing them by sqlite, and
>> interpreting the results in my app, multiple times per
>> event will be too slow.
> 
>   There should be no need to construct and parse queries with each
>   interaction.  Assuming the queries are fairly well known, you should
>   be able to prepare them once and then keep using them over and over.
>   This should save a noticeable amount of time.
> 
>   Make sure you're using the prepare/bind/step/reset/finalize
>   interfaces, rather than exec or get_table.
> 
>-j
> 

Thanks Jay,

I'm expanding my benchmark to test just that, but I'm running into a problem.
Here's my code (well part of it):

   sqlite3_stmt *stmt;
   rc = sqlite3_prepare(db, "select name from company where id = '?'", -1, 
&stmt, NULL);
   if (rc != SQLITE_OK) {
 fprintf(stderr, "sqlite3_prepare SQL error: %d\n", rc);
 exit(1);
   }

   for (i=1; i < count; i++) {
 rc = sqlite3_bind_int(stmt, 1, rand()%count);
 if (rc != SQLITE_OK ){
   fprintf(stderr, "sqlite3_bind_int SQL error: %d\n", rc);
   exit(1);
 }
 while (1) {
   rc = sqlite3_step(stmt);
   if (rc == SQLITE_DONE) {
 sqlite3_reset(stmt);
 break;
   }
   if( rc != SQLITE_ROW){
 fprintf(stderr, "sqlite3_step SQL error: %d\n", rc);
 exit(1);
   }
 }
   }
   sqlite3_finalize(stmt);

The sqlite3_bind_int immediately gives me a RANGE_ERROR (25).
Is there some obvious thing I'm doing wrong?

Thanks,
Ron



Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Simon Slavin

On 10 Oct 2009, at 9:27pm, Jay A. Kreibich wrote:

> On Sat, Oct 10, 2009 at 07:38:08PM +0100, Simon Slavin scratched on  
> the wall:
>>
>
>> Don't forget to use transactions, even for when you are just doing
>> SELECTs without changing any data.
>
>  Using transactions speeds up a long series of SELECTs because it
>  eliminates the need to re-acquire a read-only file-lock for each
>  individual SELECT.
>
>  Since in-memory databases have no file locks, I'm not sure that is
>  relevant to this specific case.

I wasn't sure about that.  It could still be slower, even if the  
individual transactions don't eventually result in any calls to file- 
handling routines.  At what point does the code decide whether a lock  
is needed ?  Does it get dropped almost immediately, or does it go  
through multiple levels of calls before some low-level code says "iff  
disk-database then lock" ?  Sorry, I've never looked at the source.

Simon.


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Jay A. Kreibich
On Sat, Oct 10, 2009 at 07:38:08PM +0100, Simon Slavin scratched on the wall:
> 
> On 10 Oct 2009, at 7:04pm, Roger Binns wrote:
> 
> > Ron Arts wrote:
> >> So I am wondering if I can drop the glib Hash Tables, and
> >> go sqlite all the way. But I'm afraid the process of
> >> constructing SQL queries / parsing them by sqlite, and
> >> interpreting the results in my app, multiple times per
> >> event will be too slow.
> 
> Don't forget to use transactions, even for when you are just doing  
> SELECTs without changing any data.

  Using transactions speeds up a long series of SELECTs because it
  eliminates the need to re-acquire a read-only file-lock for each
  individual SELECT.

  Since in-memory databases have no file locks, I'm not sure that is
  relevant to this specific case.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Roger Binns

Ron Arts wrote:
> Using hash tables I can do 100,000 requests in .24 seconds
> meaning around 400,000 req/sec.

If you are just doing simple lookups (eg doing equality on a single column)
then a hash table will always beat going through SQLite.  But if you are
doing more complex queries SQLite will beat the pants off any custom
expression evaluator you have unless you spend a lot of time optimising it.

> Virtual tables are a partial solution and I tried that, but
> as I said, hash tables are pretty static, and I can't recompile
> (and worse: restart) the application every time some programmer
> adds a database column.

I am confused by this.  Are columns added while running or is this a
development time issue?  Why do virtual tables require a recompile or
restart?  (Hint: you can drop and recreate a table.)

> So I assume there is no direct access method for the in-memory
> table structures?

It uses btrees and there is no stable API to them nor is it exposed in any
way.  See http://www.sqlite.org/arch.html

> Or did you mean that when you said using virtual tables?
> Can I add a hashed index on an existing sqlite memory table this way?

The point of virtual tables is to let you provide a SQL API to your
underlying data without exposing in any way how that data is structured or
optimised.  In the architecture diagram above your code provides the blue
"backend" box.

If all you have at the end of the day is a single simple flat list then this
won't provide much benefit.  If you have multiple lists and need to evaluate
expressions over them then virtual tables are great.  If you have an
underlying index then the virtual table BestIndex method works with them, or
you don't return anything and SQLite will iterate over all relevant rows.

Roger



Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Jay A. Kreibich
On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:

> I'm afraid the process of
> constructing SQL queries / parsing them by sqlite, and
> interpreting the results in my app, multiple times per
> event will be too slow.

  There should be no need to construct and parse queries with each
  interaction.  Assuming the queries are fairly well known, you should
  be able to prepare them once and then keep using them over and over.
  This should save a noticeable amount of time.

  Make sure you're using the prepare/bind/step/reset/finalize
  interfaces, rather than exec or get_table.
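The prepare-once pattern can be sketched with Python's stdlib sqlite3 module (schema and values invented here): Python keeps a per-connection statement cache keyed on the SQL text, which gives much the same effect as the C-level prepare/bind/step/reset loop as long as the SQL string stays identical.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO company VALUES (?, ?)",
               ((i, "name%d" % i) for i in range(1000)))

# One SQL string reused for every lookup: it is parsed and prepared once,
# and only the bound key changes per call.
LOOKUP = "SELECT name FROM company WHERE id = ?"
names = [db.execute(LOOKUP, (key,)).fetchone()[0] for key in (1, 42, 999)]
# names == ["name1", "name42", "name999"]
```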

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Ron Arts
Ok,

I just finished writing a test program. It creates an SQLite memory table
and inserts 500,000 records, then it selects 500,000 times on a random key.

After that it uses hash memory tables to do the same thing. Here is the
test output:

sqlite3 insert 500,000 records time: 17.21 secs
sqlite3 select 500,000 records time: 18.59 secs

glib2 hash tables insert 500,000 records time: 0.68 secs
glib2 hash tables lookup 500,000 records time: 0.24 secs

Considering the fact that I need to do on average 5 lookups
per network request, I need 18.59 seconds for 100,000 requests,
i.e. 5379 requests/sec.

Using hash tables I can do 100,000 requests in .24 seconds,
meaning around 400,000 req/sec.

This is a problem in my case. By the way, the database will
always fit in memory, so swapping is no problem.

Virtual tables are a partial solution and I tried that, but
as I said, hash tables are pretty static, and I can't recompile
(and worse: restart) the application every time some programmer
adds a database column.

So I assume there is no direct access method for the in-memory
table structures? Or a way I can add my own hashing? Or did you
mean that when you said using virtual tables?

Can I add a hashed index on an existing sqlite memory table this way?

Thanks,
Ron

PS: I *am* impressed by sqlite's speed.

Simon Slavin wrote:
> On 10 Oct 2009, at 7:04pm, Roger Binns wrote:
> 
>> Ron Arts wrote:
>>> So I am wondering if I can drop the glib Hash Tables, and
>>> go sqlite all the way. But I'm afraid the process of
>>> constructing SQL queries / parsing them by sqlite, and
>>> interpreting the results in my app, multiple times per
>>> event will be too slow.
> 
> Don't forget to use transactions, even for when you are just doing  
> SELECTs without changing any data.
> 
>> That is pure guesswork.  You will only know by building a  
>> representative
>> benchmark using data substantially similar to what you expect on  
>> machines
>> similar to what you expect in real world usage of the code.  (ie  
>> anyone
>> else's benchmark data isn't too useful to you)
>>
>> Secondly have a look at virtual tables.  They will let you store  
>> data in any
>> way you deem fit while still being able to offer a SQL interface to  
>> them.
> 
> Also, don't assume that holding your table in memory is faster than  
> just using a normal disk file and letting your operating system do  
> caching and swapping as appropriate.  If you have enough data in your  
> entire database (data /including/ indexes) to force paging/swapping  
> then this may actually be faster.
> 
> As Roger wrote, we can't guess much more without having an idea of the  
> size of your database and what kind of SELECT commands you're doing.
> 
> Simon.



Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Simon Slavin

On 10 Oct 2009, at 7:04pm, Roger Binns wrote:

> Ron Arts wrote:
>> So I am wondering if I can drop the glib Hash Tables, and
>> go sqlite all the way. But I'm afraid the process of
>> constructing SQL queries / parsing them by sqlite, and
>> interpreting the results in my app, multiple times per
>> event will be too slow.

Don't forget to use transactions, even for when you are just doing  
SELECTs without changing any data.
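For a file database, that tip looks roughly like this (a Python sketch with an invented schema; isolation_level=None simply hands transaction control to us rather than the driver):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "bench.db")
# Autocommit mode, so the BEGIN/COMMIT below are ours, not the driver's.
db = sqlite3.connect(path, isolation_level=None)
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
db.executemany("INSERT INTO t VALUES (?, ?)", ((i, str(i)) for i in range(100)))

# Wrap the whole burst of reads in one transaction: the shared read lock
# on the file is taken once, instead of once per individual SELECT.
db.execute("BEGIN")
values = [db.execute("SELECT v FROM t WHERE id = ?", (i,)).fetchone()[0]
          for i in range(100)]
db.execute("COMMIT")
```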

> That is pure guesswork.  You will only know by building a  
> representative
> benchmark using data substantially similar to what you expect on  
> machines
> similar to what you expect in real world usage of the code.  (ie  
> anyone
> else's benchmark data isn't too useful to you)
>
> Secondly have a look at virtual tables.  They will let you store  
> data in any
> way you deem fit while still being able to offer a SQL interface to  
> them.

Also, don't assume that holding your table in memory is faster than  
just using a normal disk file and letting your operating system do  
caching and swapping as appropriate.  If you have enough data in your  
entire database (data /including/ indexes) to force paging/swapping  
then this may actually be faster.

As Roger wrote, we can't guess much more without having an idea of the  
size of your database and what kind of SELECT commands you're doing.

Simon.


Re: [sqlite] sqlite in-memory database far too slow in my use case

2009-10-10 Thread Roger Binns

Ron Arts wrote:
> So I am wondering if I can drop the glib Hash Tables, and
> go sqlite all the way. But I'm afraid the process of
> constructing SQL queries / parsing them by sqlite, and
> interpreting the results in my app, multiple times per
> event will be too slow.

That is pure guesswork.  You will only know by building a representative
benchmark using data substantially similar to what you expect on machines
similar to what you expect in real world usage of the code.  (ie anyone
else's benchmark data isn't too useful to you)

Secondly have a look at virtual tables.  They will let you store data in any
way you deem fit while still being able to offer a SQL interface to them.

Roger


Re: [sqlite] SQLite DB memory fault error

2008-09-04 Thread Ken
Are you sure that the nickname field is 100 bytes?
Why do you use strncpy(d, s, 100) rather than
strncpy(d, s, sizeof(d))?

Other than that, hard to tell without seeing the data types and declarations.

Might want to post on a C programming board.


--- On Thu, 9/4/08, kogure <[EMAIL PROTECTED]> wrote:
From: kogure <[EMAIL PROTECTED]>
Subject: [sqlite]  SQLite DB memory fault error
To: sqlite-users@sqlite.org
Date: Thursday, September 4, 2008, 7:16 AM

Hello everyone. I have a database with fields not required to be filled in
(the other fields are declared NOT NULL). When I have a record with the
non-required fields empty, and copied it to my structure, there is a memory
fault (core dumped) error. Here is the snippet of my code:

if (pdata) {
    for (index = 0; index < p_out.r_count; index++) {
        // p_ptr will contain the record (row) from the table
        p_ptr = p_out.ppp_data[index];

        // accessing the elements of the pointer to the record (row)
        pdata->info_id = atoi((p_ptr[0]));
        strncpy(pdata->surname, (p_ptr[1]), 100);
        strncpy(pdata->firstname, (p_ptr[2]), 100);
//#if 0
        memcpy(pdata->nickname, (p_ptr[3]), 100); // <- probable error source (not required fields)
        pdata->age = atoi((p_ptr[4]));            // <- probable error source (not required fields)
//#endif
        strncpy(pdata->position, (p_ptr[5]), 100);
        strncpy(pdata->teamname, (p_ptr[6]), 100);
        strncpy(pdata->teamclient, (p_ptr[7]), 100);

        // increment pointer to move to next record
        pdata++;
    }

    // output the contents of pdata
    *all_records = pdata;
}
else {
    result = MEM_ERROR;
}

Thanks for the help in advance.



Re: [sqlite] SQLite DB memory fault error

2008-09-04 Thread Igor Tandetnik
"kogure" <[EMAIL PROTECTED]> wrote
in message news:[EMAIL PROTECTED]
> Hello everyone. I have a database with fields not required to be
> filled in (the other fields are declared NOT NULL). When I have a
> record with the non-required fields empty, and copied it to my
> structure, there is a memory fault (core dumped) error. Here is the
> snippet of my code:

I don't see a single SQLite call in your code. Which part do you feel is 
SQLite's fault?

Igor Tandetnik





Re: [sqlite] SQLite 3.6.1 memory leak?

2008-09-02 Thread Ulric Auger
Thanks Eric for your help.
You are right sqlite3_shutdown() fixed the memory leak under 3.6.1.

I tested SQLite 3.6.2 and it does not leak even if I don't call
sqlite3_shutdown.

I will proceed with SQLite 3.6.2

Ulric

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Eric Minbiole
Sent: August 30, 2008 1:21 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite 3.6.1 memory leak?

Ulric Auger wrote:
> Hi,
> 
> Since I updated to SQLite 3.6.1 I have a memory leak when my application
> exits.
> 
> If I compile using SQLite 3.5.8 I don't have the memory leak.

Be sure to call sqlite3_shutdown() just before the application exits-- 
this should free any outstanding resources held by SQLite.  (As a test, 
I commented out this shutdown call, and was able to reproduce the same 
leak warning message you reported.)


Re: [sqlite] SQLite 3.6.1 memory leak?

2008-08-30 Thread Eric Minbiole
Ulric Auger wrote:
> Hi,
> 
> Since I updated to SQLite 3.6.1 I have a memory leak when my application
> exits.
> 
> If I compile using SQLite 3.5.8 I don't have the memory leak.

Be sure to call sqlite3_shutdown() just before the application exits-- 
this should free any outstanding resources held by SQLite.  (As a test, 
I commented out this shutdown call, and was able to reproduce the same 
leak warning message you reported.)


Re: [sqlite] SQLite 3.6.1 memory leak?

2008-08-30 Thread Robert Simpson
There's not enough information in your post for us to comment -- which is
probably why nobody responded earlier.  The unit tests for SQLite create
thousands of connections and run hundreds of thousands of commands without
leaking.  So there's a probability that you may be doing something wrong,
yes.


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ulric Auger
Sent: Friday, August 29, 2008 11:13 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] SQLite 3.6.1 memory leak?

Hi,

Since I updated to SQLite 3.6.1 I have a memory leak when my application
exits.

If I compile using SQLite 3.5.8 I don't have the memory leak.

 

VS 2005 dump:

Detected memory leaks!

Dumping objects ->

c:\dev\mescruiser\lib\sqlite\sqlite3.c(11938) : {4754} normal block at
0x01BFC460, 48 bytes long.

 Data: <(   > 28 00 00 00 00 00 00 00 B0 B8 15 00 FF FF FF FF 

Object dump complete.

 

Using LeakFinder/StackWalker it seems that the memory leak originates from
sqlite3_open.

 

Is this a real memory leak, or am I doing something wrong?

 

Thanks

 

Ulric

 



Re: [sqlite] sqlite in memory

2007-09-12 Thread Kees Nuyt

Hi Alex,

On Wed, 12 Sep 2007 12:19:44 +0200, you wrote:

> I have 3 questions regarding sqlite database loaded/used whilst in memory:
>
> 1. How can an sqlite database file (example file1.db) be 
>loaded in memory?
>  (Is this the answer?: > sqlite3.exe file1.db)

sqlite3 file1.db .dump | sqlite3 :memory:
The default database (no name specified) is in memory, so:
sqlite3 file1.db .dump | sqlite3
would do the same.
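The same load-into-memory step can also be done in-process, for example with Python's stdlib sqlite3 module (file name and schema invented here), so the :memory: copy survives for the life of the connection instead of vanishing when the shell exits:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "file1.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE t (x INTEGER)")
disk.execute("INSERT INTO t VALUES (42)")
disk.commit()

# In-process equivalent of `sqlite3 file1.db .dump | sqlite3`: replay the
# SQL dump into a :memory: connection, which lives as long as `mem` does.
mem = sqlite3.connect(":memory:")
mem.executescript("\n".join(disk.iterdump()))
value = mem.execute("SELECT x FROM t").fetchone()[0]
# value == 42
```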

> 2. How can the in-memory sqlite database be accessed
>by multiple applications?

It can't be, directly. The only way to do that would be to construct a
'server wrapper' around sqlite and have applications (clients) connect
to that server. It has been done before; for example, there is
http://www.sqlite.org/cvstrac/wiki

>3. Can multiple threads or applications access simultaneously
>   the same in-memory sqlite database?

Not without the server mechanism.

The performance gain of a :memory: database is limited, one of
the reasons is that most operating systems will cache the
database file in memory anyway.

In general: when you really need a DB server, don't use SQLite.
http://www.sqlite.org/whentouse.html

>Thank you.

Regards,
-- 
  (  Kees Nuyt
  )
c[_]

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLite and memory usage

2007-04-18 Thread Nuno Lucas

On 4/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

I performed a simple experiment where I placed printf statements in the 
routines sqlite3FreeX and sqlite3MallocRaw. They seem to be the two lowest 
level routines in SQLite that allocate and deallocate memory. I redirected the 
output to a text file and imported it into Excel (this may have been the hard 
way). I then summed all of the numbers expecting the result to be zero.

What I found was that at the end there was over a MB of memory still allocated.

I can't believe this answer so are there other routines called that allocate 
and deallocate memory?


SQLite is extensively tested for memory leaks, so I doubt that very much.
I don't see why you think the functions you mentioned are the
lowest-level functions. You are forgetting the sqlite3Realloc one, and
by looking at util.c:

[...]
** The above APIs are implemented in terms of the functions provided in the
** operating-system interface. The OS interface is never accessed directly
** by code outside of this file.
**
** sqlite3OsMalloc()
** sqlite3OsRealloc()
** sqlite3OsFree()
** sqlite3OsAllocationSize()
**
[...]

So, it seems you are not using the lowest-level routines, nor
counting the right high-level ones.

Also, SQLite has its own memory leak detector, which you can enable
to make sure there are no memory leaks (but of course slowing down
your program a lot).

Look at the file util.c. It should give you an idea.

Regards,
~Nuno Lucas



Should I have done this another way?
Ray





Re: [sqlite] SQLite and memory usage

2007-04-18 Thread Andrew Finkenstadt

Is it conceivable that the buffer cache is what occupies this undeallocated
memory?
--andy

On 4/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


I performed a simple experiment where I placed printf statements in the
routines sqlite3FreeX and sqlite3MallocRaw. They seem to be the two lowest
level routines in SQLite that allocate and deallocate memory. I redirected
the output to a text file and imported it into Excel (this may have been the
hard way). I then summed all of the numbers expecting the result to be zero.

What I found was that at the end there was over a MB of memory still
allocated.

I can't believe this answer so are there other routines called that
allocate and deallocate memory?

Should I have done this another way?
Ray






RE: [sqlite] SQLite in memory database from SQLite (3.x) file database?

2005-07-06 Thread Damian Slee
You could create a new os_ram.c file with new versions of these functions, but 
instead of reading/writing to disk you read/write from a linked list of 
filenames, each of which could contain a ptr to the file data.  You would then 
be responsible for copying it back to disk at some point, and for loading it 
from disk at startup into your pseudo RAM file system.

I guess this would also allow multithreaded access to an in-memory database (if 
the locking is also implemented), whereas opening the ":memory:" one is only 
single threaded (in the cache), I think?

But you lose sharing the db across processes, which I don't think matters to 
you if you want to load it all into RAM anyway.

sqlite3OsOpenReadWrite
sqlite3OsOpenExclusive
sqlite3OsOpenReadOnly
sqlite3OsClose
sqlite3OsRead
sqlite3OsWrite
...
 

-Original Message-
From: Dirk Theisen [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 07, 2005 7:09 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQLite in memory database from SQLite (3.x) file database?

Hello, John!

> It would be nice to have an option that just loads the db file into 
> memory or otherwise caches the contents wholly in memory.  Are there 
> any caching options in sqlite that would mirror this behavior?

You could set the cache size as big as your database file (via pragma). 
This should load all (used) data into memory. This is probably not a good 
solution if your application does not run for a long time.

Greetings,
   Dirk




Re: [sqlite] SQLite in memory database from SQLite (3.x) file database?

2005-07-06 Thread Dirk Theisen

Hello, John!


It would be nice to have an option that just loads the db file into
memory or otherwise caches the contents wholly in memory.  Are there
any caching options in sqlite that would mirror this behavior?


You could set the cache size as big as your database file (via pragma). 
This should load all (used) data into memory. This is probably not a 
good solution if your application does not run for a long time.


Greetings,
  Dirk
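The pragma-based approach can be sketched as follows (Python's stdlib sqlite3, invented file name; note that interpreting a negative cache_size as a size in KiB requires a reasonably modern SQLite, and that this only helps if the machine actually has the RAM to spare):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "big.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t (v TEXT)")
db.commit()

# A negative cache_size is a size in KiB; sizing the cache to (at least)
# the file keeps every touched page resident after its first read.
size_kib = max(os.path.getsize(path) // 1024, 2048)
db.execute("PRAGMA cache_size = -%d" % size_kib)
setting = db.execute("PRAGMA cache_size").fetchone()[0]
# setting == -size_kib
```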



Re: [sqlite] SQLite in memory database from SQLite (3.x) file database?

2005-07-06 Thread John Duprey
Interesting... can multiple threads share the same in-memory database
through multiple sqlite_open()s? From what I can scrape together from
the wiki page (http://www.sqlite.org/cvstrac/wiki?p=InMemoryDatabase),
it sounds like the best one could do is create the in-memory db handle
once in the main thread, then provide serial access to the db handle
via a get method for each thread.

It would be nice to have an option that just loads the db file into
memory or otherwise caches the contents wholly in memory.  Are there
any caching options in sqlite that would mirror this behavior?

Thanks,
-John

On 7/6/05, Cory Nelson <[EMAIL PROTECTED]> wrote:
> assuming you open up a :memory: database:
> 
> attach 'foo.db' as bar;
> create table baz as select * from bar.baz;
> detach bar;
> 
> doesn't copy indexes, so you'll have to remake them.  I don't think it
> copies triggers either.
> 
> On 7/6/05, John Duprey <[EMAIL PROTECTED]> wrote:
> > Is it possible to load an SQLite file database into an SQLite "in
> > memory" database?  If so what is the most efficient method to do this?
> >  I'm looking for the fastest possible performance.  Taking out the
> > disk I/O seems like the way to go.
> >
> > Thanks,
> > -John
> >
> 
> 
> --
> Cory Nelson
> http://www.int64.org
>


Re: [sqlite] SQLite in memory database from SQLite (3.x) file database?

2005-07-06 Thread Cory Nelson
assuming you open up a :memory: database:

attach 'foo.db' as bar;
create table baz as select * from bar.baz;
detach bar;

doesn't copy indexes, so you'll have to remake them.  I don't think it
copies triggers either.
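The recipe above, sketched with Python's stdlib sqlite3 module (file name and table invented for illustration):

```python
import sqlite3, os, tempfile

src = os.path.join(tempfile.mkdtemp(), "foo.db")
disk = sqlite3.connect(src)
disk.execute("CREATE TABLE baz (id INTEGER PRIMARY KEY, v TEXT)")
disk.execute("INSERT INTO baz VALUES (1, 'hello')")
disk.commit()
disk.close()

mem = sqlite3.connect(":memory:")
mem.execute("ATTACH DATABASE ? AS bar", (src,))
# CREATE TABLE ... AS SELECT copies rows only: indexes, triggers, and
# column constraints must be recreated on the copy by hand.
mem.execute("CREATE TABLE baz AS SELECT * FROM bar.baz")
mem.execute("DETACH DATABASE bar")
copied = mem.execute("SELECT v FROM baz WHERE id = 1").fetchone()[0]
# copied == 'hello'
```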

On 7/6/05, John Duprey <[EMAIL PROTECTED]> wrote:
> Is it possible to load an SQLite file database into an SQLite "in
> memory" database?  If so what is the most efficient method to do this?
>  I'm looking for the fastest possible performance.  Taking out the
> disk I/O seems like the way to go.
> 
> Thanks,
> -John
> 


-- 
Cory Nelson
http://www.int64.org


Re: [sqlite] SQLite in memory database from SQLite (3.x) file database?

2005-07-06 Thread Jay Sprenkle
> Is it possible to load an SQLite file database into an SQLite "in
> memory" database?  If so what is the most efficient method to do this?
>  I'm looking for the fastest possible performance.  Taking out the
> disk I/O seems like the way to go.

Create a memory database, attach the file-based database, and
insert using a SELECT as the data source; it should work pretty well.