On Oct 29, 2019, at 2:56 PM, Dawson, Jeff G wrote:
>
> SQLite version 3.7.14.1 2012-10-04 19:37:12
I infer that you’re migrating a legacy system. There are two good alternatives
to your current method that should avoid the symptom entirely:
1. Build a current version of SQLite for the old
On Apr 14, 2019, at 10:18 PM, David Ashman - Zone 7 Engineering, LLC
wrote:
>
> It appears that there is a leak somewhere.
It is certainly in your code. My bet’s on a missing sqlite3_finalize() call,
but there are many other possibilities.
> Does anyone know why this error occurs?
I
Note that, as I understand it, if you use only a single connection for the
CherryPy server, all the threads on the server will run their queries
sequentially. Try using a database connection per thread?
On Thu, May 18, 2017, 8:47 PM Gabriele Lanaro
wrote:
>
Thanks everyone for all the tips! This is all very useful.
We are using SQLite’s FTS5 feature to search a large number of text files.
There are 50M records in total but they are split across 1000 smaller
databases of 50K records each. Each DB is 250MB in size.
I am trying to test query
On Wed, 17 May 2017 22:18:19 -0700
Gabriele Lanaro wrote:
> Hi, I'm trying to assess if the performance of my application is
> dependent on disk access from sqlite.
>
> To rule this out I wanted to make sure that the SQLite DB is
> completely accessed from memory and
On Wednesday, 17 May, 2017 23:18, Gabriele Lanaro
wrote:
> Hi, I'm trying to assess if the performance of my application is dependent
> on disk access from sqlite.
Of course it is, depending on what your application is doing.
> To rule this out I wanted to make
If by any chance you have access to Linux or the like, you can just mount a
ramfs and move the database file there. It is an ordinary file system that
lives in RAM. This will 100% guarantee that no disk access is made by SQLite.
18 May 2017, 08:18:47, by "Gabriele Lanaro"
From the SQLite shell (CLI), have you tried dot commands ".backup" to file
and ".restore" to a new :memory: DB? That assumes a few things like access
to the filesystem and sufficient user memory quota to hold the disk version
of the DB. Does that work?
The shell dot commands and their syntax is
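For reference, the same disk-to-:memory: copy can be done programmatically
with the online backup API that the shell's ".backup"/".restore" commands sit
on top of; a sketch with Python's sqlite3 module (file and table names here
are made up):

```python
import sqlite3
import tempfile, os

# A small disk database standing in for the real file.
path = os.path.join(tempfile.mkdtemp(), "disk.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE docs(id INTEGER PRIMARY KEY, body TEXT)")
disk.execute("INSERT INTO docs(body) VALUES ('hello'), ('world')")
disk.commit()

# Copy the whole database into a :memory: connection via the backup API --
# the programmatic equivalent of ".backup" followed by ".restore".
mem = sqlite3.connect(":memory:")
disk.backup(mem)

count = mem.execute("SELECT count(*) FROM docs").fetchone()[0]
print(count)  # 2
disk.close()
```

As noted above, this assumes enough memory to hold the disk version of the DB.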
Hi, Clemens,
On Sun, Nov 4, 2012 at 3:11 AM, Clemens Ladisch wrote:
> Igor Korot wrote:
>> When the user asks to edit the data it starts the transaction, then
>> set the "SAVEPOINT"
>
> Why do you need a savepoint in addition to the transaction?
Because it is a transaction
Igor Korot wrote:
> When the user asks to edit the data it starts the transaction, then
> set the "SAVEPOINT"
Why do you need a savepoint in addition to the transaction?
> Now the problem: only during the scenario #3 I have a lot of memory
> leaks. They are reported in the Visual Studio debug
>
On Mon, Sep 12, 2011 at 06:56:50PM +0200, Stephan Beal scratched on the wall:
> On Mon, Sep 12, 2011 at 6:47 PM, Jay A. Kreibich wrote:
>
> > On Mon, Sep 12, 2011 at 12:29:56PM +0800, ?? scratched on the wall:
> > > is there any limit about the data size?
> >
> > PRAGMA
On Mon, Sep 12, 2011 at 6:47 PM, Jay A. Kreibich wrote:
> On Mon, Sep 12, 2011 at 12:29:56PM +0800, ?? scratched on the wall:
> > is there any limit about the data size?
>
> PRAGMA max_page_count should work on in-memory databases.
>
Isn't there also the limitation that
On Mon, Sep 12, 2011 at 12:29:56PM +0800, ?? scratched on the wall:
> Hi there,
>
> I just have a question. If I am using JDBC driver to connect to sqlite using
> in-memory mode,
> is there any limit about the data size?
PRAGMA max_page_count should work on in-memory databases.
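A quick demonstration of that pragma on an in-memory database, sketched with
Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Cap the in-memory database at 10 pages (with the default 4096-byte
# page size, roughly 40 KiB).
con.execute("PRAGMA max_page_count = 10")
(limit,) = con.execute("PRAGMA max_page_count").fetchone()

# Growing past the cap fails with "database or disk is full".
con.execute("CREATE TABLE t(b BLOB)")
full = False
try:
    con.executemany("INSERT INTO t VALUES (?)",
                    [(b"x" * 4096,) for _ in range(100)])
except sqlite3.OperationalError as e:
    full = "full" in str(e)

print(limit, full)  # 10 True
```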
Bastian Clarenbach wrote:
> My environment does not have direct file access, instead I can only request
> files and get a memblock returned that contains the entire file.
You should be able to write a virtual file system that reads and writes to a
block of
Yes, I expect the database to be small enough.
It is loading a :memory: database from, and storing it back to, memory blocks
that still eludes me.
I will take a look at the backup link.
Thanks!
On 9 February 2011 13:48, Simon Slavin wrote:
>
> On 9 Feb 2011, at 10:14am,
On 9 Feb 2011, at 10:14am, Bastian Clarenbach wrote:
> My environment does not have direct file access, instead I can only request
> files and get a memblock returned that contains the entire file. I am trying
> to figure out how to do one, preferably both, of the following scenarios.
>
> 1. I
Pavel Ivanov wrote:
>> Currently this means adding
>> the new columns to my C-structures, writing access functions, and
>> recompiling. I don't want to do that, because this means my appl *must*
>> be replaced on every database change, and I'd like to be able to
>> run different versions of it
> Currently this means adding
> the new columns to my C-structures, writing access functions, and
> recompiling. I don't want to do that, because this means my appl *must*
> be replaced on every database change, and I'd like to be able to
> run different versions of it in the wild. I was hoping to
"Ron Arts" wrote in message news:4adac5c1.5010...@arts-betel.org...
> Then my program opens a socket, and starts accepting connections,
> those connections are long lasting, and send messages that need
> a fast reply. Many of the messages result in messages being send
On 18 Oct 2009, at 7:23pm, Ron Arts wrote:
> because the application is evolving, columns
> get added/changed on a regular basis. Currently this means adding
> the new columns to my C-structures, writing access functions, and
> recompiling. I don't want to do that, because this means my appl
>
On Sun, 18 Oct 2009 17:37:57 +0200,
Ron Arts wrote:
>Very true Simon,
>
>this has been the fastest way so far and I can do around
>35 selects/second this way, using prepared statements
>(on my machine at least), but I need more speed.
>
>That's why I want to skip the
On Sun, Oct 18, 2009 at 10:37 AM, Ron Arts wrote:
> Very true Simon,
>
> this has been the fastest way so far and I can do around
> 35 selects/second this way, using prepared statements
> (on my machine at least), but I need more speed.
>
> That's why I want to skip the
On 18 Oct 2009, at 4:37pm, Ron Arts wrote:
> I want to skip the SQL processing entirely
> and write a C function that reaches directly into the
> internal memory structures to gets my record from there.
I assume that you've already tested the fastest way of doing this that
the standard
Very true Simon,
this has been the fastest way so far and I can do around
35 selects/second this way, using prepared statements
(on my machine at least), but I need more speed.
That's why I want to skip the SQL processing entirely
and write a C function that reaches directly into the
On 18 Oct 2009, at 8:37am, Ron Arts wrote:
> Is there a way to bypass the virtual machine altogether and reach
> directly
> into the btree and just retrieve one record by it's oid (primary
> integer key),
> and return it in a form that would allow taking out the column
> values by name?
Pavel Ivanov wrote:
>> I use the following queries:
>>
>> CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>
> I'm not sure how SQLite treats this table definition but probably
> because of your ASC it could decide that id shouldn't be a synonym for
> rowid which will make at least
On Mon, Oct 12, 2009 at 07:23:30PM -0400, Pavel Ivanov scratched on the wall:
> > Is their a way to prepare the query and save (compiled form) so that
> > we can share them between multiple connection?
>
> Yes, there is: http://sqlite-consortium.com/products/sse.
I realize this may be a
> Pavel,
>
> does the cache work for memory datsbases too?
Doh, missed the fact that it's a memory database. I believe an in-memory
database is in fact just a database cache that never deletes its pages
from memory and never spills them to disk. Although anything about
size of database cache will
> Sent: Sunday, October 11, 2009 1:54 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] sqlite in-memory database far too slow in my use case
>
> On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
>
>> I'm afraid the
On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
> I'm afraid the process of
> constructing SQL queries / parsing them by
Hello!
On Sunday 11 October 2009 22:52:29 Jay A. Kreibich wrote:
> A bit to my surprise, the difference is even more significant using
> prepared statements in a C program. For a half-million selects over a
> similar table in a :memory: database, there is a 20% speed-up by
> wrapping
On Sun, Oct 11, 2009 at 11:49:57AM +0400, Alexey Pechnikov scratched on the
wall:
> Hello!
>
> On Sunday 11 October 2009 00:54:04 Simon Slavin wrote:
> > > Using transactions speeds up a long series of SELECTs because it
> > > eliminates the need to re-acquire a read-only file-lock for each
>
Are there compile time switches which I can use to speed up
selects in memory databases? Will the amalgamated version be faster
than linking the lib at runtime?
Thanks,
Ron
> I use the following queries:
>
> CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
I'm not sure how SQLite treats this table definition but probably
because of your ASC it could decide that id shouldn't be a synonym for
rowid which will make at least inserts slower.
> But I'm still
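The speculation about ASC is easy to check empirically; as I read the current
SQLite documentation, INTEGER PRIMARY KEY ASC still makes the column a rowid
alias (only the DESC form does not). A sketch with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)")
con.execute("INSERT INTO company(name) VALUES ('Acme')")

# If id aliases the rowid, rowid and id hold the same value for every row.
row = con.execute("SELECT rowid, id FROM company").fetchone()
print(row)  # (1, 1)
```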
"Ron Arts" wrote in message news:4ad19195.2060...@arts-betel.org...
> I tried it, and indeed, this speeds up inserts tremendously as well,
> but in fact I'm not at all concernced about insert speed, but much more
about
> select speed. I use the following queries:
>
>
On 11 Oct 2009, at 9:04am, Ron Arts wrote:
> CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>
> Then I insert 50 records like this:
>
> INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>
> (with consecutive values for the id value.)
I think you can remove the
Alexey Pechnikov wrote:
> Hello!
>
> On Sunday 11 October 2009 12:04:37 Ron Arts wrote:
>>CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>>
>> Then I insert 50 records like this:
>>
>>INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>>
>> (with
Hello!
On Sunday 11 October 2009 12:04:37 Ron Arts wrote:
>CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>
> Then I insert 50 records like this:
>
>INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>
> (with consecutive values for the id value.)
>
> do
Olaf Schmidt wrote:
> "Ron Arts" wrote in message news:4ad10a9e.3040...@arts-betel.org...
>
>> Here's my new benchmark output:
>>
>> sqlite3 insert 50 records time: 17.19 secs
>> sqlite3 select 50 records time: 18.57 secs
>> sqlite3 prepared select 50
Hello!
On Sunday 11 October 2009 00:54:04 Simon Slavin wrote:
> > Using transactions speeds up a long series of SELECTs because it
> > eliminates the need to re-acquire a read-only file-lock for each
> > individual SELECT.
> >
> > Since in-memory databases have no file locks, I'm not sure
"Ron Arts" wrote in message news:4ad10a9e.3040...@arts-betel.org...
> Here's my new benchmark output:
>
> sqlite3 insert 50 records time: 17.19 secs
> sqlite3 select 50 records time: 18.57 secs
> sqlite3 prepared select 50 records time: 3.27 secs
> glib2
On 10 Oct 2009, at 10:57pm, Ron Arts wrote:
> The sqlite3_bind_int immediately gives me an RANGE_ERROR (25).
> Is there some obvious thing I'm doing wrong?
I notice that your _prepare call puts single quotes around the
variable, whereas you are binding an integer to it. But that's
probably
On Sat, Oct 10, 2009 at 11:57:30PM +0200, Ron Arts scratched on the wall:
> I'm expanding my benchmark to test just thaty, but I'm running into a problem.
> Here's my code (well part of it):
>
>sqlite3_stmt *stmt;
>rc = sqlite3_prepare(db, "select name from company where id = '?'", -1,
Jay A. Kreibich wrote:
> On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
>
>> I'm afraid the process of
>> constructing SQL queries / parsing them by sqlite, and
>> interpreting the results in my app, multiple times per
>> event will be too slow.
>
> There should be
On 10 Oct 2009, at 9:27pm, Jay A. Kreibich wrote:
> On Sat, Oct 10, 2009 at 07:38:08PM +0100, Simon Slavin scratched on
> the wall:
>>
>
>> Don't forget to use transactions, even for when you are just doing
>> SELECTs without changing any data.
>
> Using transactions speeds up a long series
On Sat, Oct 10, 2009 at 07:38:08PM +0100, Simon Slavin scratched on the wall:
>
> On 10 Oct 2009, at 7:04pm, Roger Binns wrote:
>
> > Ron Arts wrote:
> >> So I am wondering if I can drop the glib Hash Tables, and
> >> go sqlite all the way. But I'm afraid the process of
> >> constructing SQL
Ron Arts wrote:
> Using hash tables I can do 10 requests in .24 seconds
> meaning around 40 req/sec.
If you are just doing simple lookups (eg doing equality on a single column)
then a hash table will always beat going through SQLite. But if
On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
> I'm afraid the process of
> constructing SQL queries / parsing them by sqlite, and
> interpreting the results in my app, multiple times per
> event will be too slow.
There should be no need to construct and parse
Ok,
I just finished writing a test program. It creates an SQLite memory table
and inserts 50 records, then it selects 50 times on a random key.
After that it uses hash memory tables to do the same thing. Here is the
test output:
sqlite3 insert 50 records time: 17.21 secs
sqlite3
On 10 Oct 2009, at 7:04pm, Roger Binns wrote:
> Ron Arts wrote:
>> So I am wondering if I can drop the glib Hash Tables, and
>> go sqlite all the way. But I'm afraid the process of
>> constructing SQL queries / parsing them by sqlite, and
>> interpreting the results in my app, multiple times per
Ron Arts wrote:
> So I am wondering if I can drop the glib Hash Tables, and
> go sqlite all the way. But I'm afraid the process of
> constructing SQL queries / parsing them by sqlite, and
> interpreting the results in my app, multiple times per
>
Are you sure that the nickname field is 100 bytes?
Why do you use strncpy(d, s, 100) and not
strncpy(d, s, sizeof(d))?
Other than that, it's hard to tell without seeing the data types and
declarations. You might want to post on a C programming board.
"kogure" <[EMAIL PROTECTED]> wrote
in message news:[EMAIL PROTECTED]
> Hello everyone. I have a database with fields not required to be
> filled in (the other fields are declared NOT NULL). When I have a
> record with the non-required fields empty, and copied it to my
> structure, there is a
] On Behalf Of Eric Minbiole
Sent: August 30, 2008 1:21 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite 3.6.1 memory leak?
Ulric Auger wrote:
> Hi,
>
> Since I updated to SQLite 3.6.1 I have a memory leak when my application
> exits.
>
> If I compile us
Ulric Auger wrote:
> Hi,
>
> Since I updated to SQLite 3.6.1 I have a memory leak when my application
> exits.
>
> If I compile using SQLite 3.5.8 I don't have the memory leak.
Be sure to call sqlite3_shutdown() just before the application exits--
this should free any outstanding resources
There's not enough information in your post for us to comment -- which is
probably why nobody responded earlier. The unit tests for SQLite create
thousands of connections and run hundreds of thousands of commands without
leaking. So the odds are that you are doing something wrong,
Hi Alex,
On Wed, 12 Sep 2007 12:19:44 +0200, you wrote:
> I have 3 questions regarding sqlite database loaded/used whilst in memory:
>
> 1. How can an sqlite database file (example file1.db) be
>loaded in memory?
> (Is this the answer?: > sqlite3.exe file1.db)
sqlite3 file1.db .dump |
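A programmatic variant of that dump-and-reload approach, sketched with
Python's sqlite3 module (the file name and schema are invented): iterdump()
yields the same SQL text as the shell's ".dump", and replaying it into a
:memory: connection loads the whole database into RAM.

```python
import sqlite3
import tempfile, os

# A small disk database standing in for file1.db.
path = os.path.join(tempfile.mkdtemp(), "file1.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE t(a INTEGER, b TEXT)")
disk.executemany("INSERT INTO t VALUES (?, ?)", [(1, "one"), (2, "two")])
disk.commit()

# Replay the ".dump"-style SQL into a fresh :memory: connection.
mem = sqlite3.connect(":memory:")
mem.executescript("\n".join(disk.iterdump()))

rows = mem.execute("SELECT a, b FROM t ORDER BY a").fetchall()
print(rows)  # [(1, 'one'), (2, 'two')]
disk.close()
```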
On 4/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
I performed a simple experiment where I placed printf statements in the
routines sqlite3FreeX and sqlite3MallocRaw. They seem to be the two lowest
level routines in SQLite that allocate and deallocate memory. I redirected the
output to
Is it conceivable that the buffer cache is what occupies this undeallocated
memory?
--andy
Sent: Thursday, July 07, 2005 7:09 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQLite in memory database from SQLite (3.x) file database?
Hello, John!
> It would be nice to have option that just loads the db file into
> memory or otherwise caches the contents
Hello, John!
It would be nice to have option that just loads the db file into
memory or otherwise caches the contents wholly in memory. Are there
any caching options in sqlite that would mirror this behavior?
You could set the cache size as big as your database file (via pragma).
This
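Setting the cache size via pragma can be sketched with Python's sqlite3 module
(the file name is invented); a negative value gives the size in KiB rather
than pages:

```python
import sqlite3
import tempfile, os

path = os.path.join(tempfile.mkdtemp(), "file1.db")  # stand-in db file
con = sqlite3.connect(path)

# Negative values set the cache size in KiB: -65536 asks for a 64 MiB
# page cache, large enough to hold a database file of roughly that size.
con.execute("PRAGMA cache_size = -65536")
(size,) = con.execute("PRAGMA cache_size").fetchone()
print(size)  # -65536
```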
Interesting... can multiple threads share the same in-memory database
through multiple sqlite_open()s? From what I can scrape together from
the wiki page (http://www.sqlite.org/cvstrac/wiki?p=InMemoryDatabase),
it sounds like the best one could do is create the in memory db handle
once in the main
assuming you open up a :memory: database:
attach 'foo.db' as bar;
create table baz as select * from bar.baz;
detach bar;
This doesn't copy indexes, so you'll have to remake them. I don't think it
copies triggers either.
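The same recipe, sketched with Python's sqlite3 module (file and table names
invented), including remaking the index by hand afterwards:

```python
import sqlite3
import tempfile, os

# A disk database standing in for foo.db.
path = os.path.join(tempfile.mkdtemp(), "foo.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE baz(n INTEGER)")
disk.execute("CREATE INDEX baz_n ON baz(n)")
disk.executemany("INSERT INTO baz VALUES (?)", [(1,), (2,), (3,)])
disk.commit()
disk.close()

mem = sqlite3.connect(":memory:")
mem.execute(f"ATTACH '{path}' AS bar")
mem.execute("CREATE TABLE baz AS SELECT * FROM bar.baz")
mem.execute("DETACH bar")

# The rows come across...
(n,) = mem.execute("SELECT count(*) FROM baz").fetchone()
print(n)  # 3
# ...but CREATE TABLE AS does not copy indexes, so remake them by hand.
mem.execute("CREATE INDEX baz_n ON baz(n)")
```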
On 7/6/05, John Duprey <[EMAIL PROTECTED]> wrote:
> Is it possible to load an
> Is it possible to load an SQLite file database into an SQLite "in
> memory" database? If so what is the most efficient method to do this?
> I'm looking for the fastest possible performance. Taking out the
> disk I/O seems like the way to go.
create a memory database, attach the file based