Note that, as I understand it, if you use only a single connection for the
CherryPy server, all the threads on the server will be running their queries
sequentially. Try using a database connection per thread?
On Thu, May 18, 2017, 8:47 PM Gabriele Lanaro wrote:
> Thanks everyone for all the tips!
Thanks everyone for all the tips! This is all very useful.
We are using SQLite’s FTS5 feature to search a large number of text files.
There are 50M records in total but they are split across 1000 smaller
databases of 50K records each. Each DB is 250MB in size.
I am trying to test query performance.
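For readers following along, a minimal FTS5 sketch of this kind of setup; the
table name and contents are invented for the example, and it requires SQLite
built with SQLITE_ENABLE_FTS5:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        sqlite3_open(":memory:", &db);

        /* FTS5 virtual table holding the searchable text. */
        sqlite3_exec(db, "CREATE VIRTUAL TABLE docs USING fts5(body)", 0, 0, 0);
        sqlite3_exec(db, "INSERT INTO docs(body) VALUES "
                         "('the quick brown fox'), ('lazy dogs sleep')", 0, 0, 0);

        /* Full-text query via MATCH with a bound search term. */
        sqlite3_prepare_v2(db, "SELECT rowid, body FROM docs WHERE docs MATCH ?",
                           -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, "quick", -1, SQLITE_STATIC);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%lld: %s\n",
                   (long long)sqlite3_column_int64(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }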
On Wed, 17 May 2017 22:18:19 -0700
Gabriele Lanaro wrote:
> Hi, I'm trying to assess if the performance of my application is
> dependent on disk access from sqlite.
>
> To rule this out I wanted to make sure that the SQLite DB is
> completely accessed from memory and there are no disk accesses.
On Wednesday, 17 May, 2017 23:18, Gabriele Lanaro wrote:
> Hi, I'm trying to assess if the performance of my application is dependent
> on disk access from sqlite.
Of course it is, depending on what your application is doing.
> To rule this out I wanted to make sure that the SQLite DB is com
If by any chance you have access to Linux or the like, you can just mount a
ramfs and move the database file over there. It is an ordinary file system
that lives in RAM, so this guarantees that SQLite will make no disk accesses.
18 May 2017, 08:18:47, by "Gabriele Lanaro" :
> Hi, I'm try
From the SQLite shell (CLI), have you tried the dot commands ".backup" to a
file and ".restore" into a new :memory: DB? That assumes a few things, like
access to the filesystem and a sufficient user memory quota to hold the disk
version of the DB. Does that work?
The shell dot commands and their syntax is
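The C-level analogue of those dot commands is the online backup API; a
minimal sketch, with error handling trimmed:

    #include <sqlite3.h>

    /* Copy an on-disk database into a fresh :memory: database using the
       online backup API -- the programmatic analogue of .backup/.restore. */
    int load_into_memory(const char *filename, sqlite3 **out) {
        sqlite3 *file_db = NULL, *mem_db = NULL;
        int rc = sqlite3_open(filename, &file_db);
        if (rc == SQLITE_OK) rc = sqlite3_open(":memory:", &mem_db);
        if (rc == SQLITE_OK) {
            sqlite3_backup *b =
                sqlite3_backup_init(mem_db, "main", file_db, "main");
            if (b) {
                sqlite3_backup_step(b, -1);   /* copy every page at once */
                sqlite3_backup_finish(b);
            }
            rc = sqlite3_errcode(mem_db);
        }
        sqlite3_close(file_db);
        *out = mem_db;                        /* caller closes this handle */
        return rc;
    }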
Hi, I'm trying to assess if the performance of my application is dependent
on disk access from sqlite.
To rule this out I wanted to make sure that the SQLite DB is completely
accessed from memory and there are no disk accesses.
Is it possible to obtain this effect by using pragmas such as cache_size?
On Mon, Sep 12, 2011 at 06:56:50PM +0200, Stephan Beal scratched on the wall:
> On Mon, Sep 12, 2011 at 6:47 PM, Jay A. Kreibich wrote:
>
> > On Mon, Sep 12, 2011 at 12:29:56PM +0800, Clark scratched on the wall:
> > > is there any limit about the data size?
> >
> > PRAGMA max_page_count should work on in-memory databases.
On Mon, Sep 12, 2011 at 6:47 PM, Jay A. Kreibich wrote:
> On Mon, Sep 12, 2011 at 12:29:56PM +0800, Clark scratched on the wall:
> > is there any limit about the data size?
>
> PRAGMA max_page_count should work on in-memory databases.
>
Isn't there also the limitation that the maximum db size
On Mon, Sep 12, 2011 at 12:29:56PM +0800, Clark scratched on the wall:
> Hi there,
>
> I just have a question. If I am using the JDBC driver to connect to sqlite
> using in-memory mode, is there any limit about the data size?
PRAGMA max_page_count should work on in-memory databases.
http://sq
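The resulting ceiling is max_page_count pages of page_size bytes each; a
small C sketch to inspect both values on a given build:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Run a single-value PRAGMA and return its integer result. */
    static long long pragma_int(sqlite3 *db, const char *sql) {
        sqlite3_stmt *stmt;
        long long v = -1;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
            if (sqlite3_step(stmt) == SQLITE_ROW)
                v = sqlite3_column_int64(stmt, 0);
            sqlite3_finalize(stmt);
        }
        return v;
    }

    int main(void) {
        sqlite3 *db;
        sqlite3_open(":memory:", &db);
        long long pages = pragma_int(db, "PRAGMA max_page_count");
        long long bytes = pragma_int(db, "PRAGMA page_size");
        printf("max database size: %lld bytes\n", pages * bytes);
        sqlite3_close(db);
        return 0;
    }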
Hi there,
I just have a question. If I am using the JDBC driver to connect to sqlite
using in-memory mode, is there any limit about the data size?
Thanks.
Clark
Pavel Ivanov wrote:
>> Currently this means adding
>> the new columns to my C-structures, writing access functions, and
>> recompiling. I don't want to do that, because this means my app *must*
>> be replaced on every database change, and I'd like to be able to
>> run different versions of it in the wild. I was hoping to
"Ron Arts" schrieb im
Newsbeitrag news:4adac5c1.5010...@arts-betel.org...
> Then my program opens a socket, and starts accepting connections,
> those connections are long-lasting, and send messages that need
> a fast reply. Many of the messages result in messages being sent
> to all other client
On 18 Oct 2009, at 7:23pm, Ron Arts wrote:
> because the application is evolving, columns
> get added/changed on a regular basis. Currently this means adding
> the new columns to my C-structures, writing access functions, and
> recompiling. I don't want to do that, because this means my app
>
On Sun, 18 Oct 2009 17:37:57 +0200,
Ron Arts wrote:
>Very true Simon,
>
>this has been the fastest way so far and I can do around
>35 selects/second this way, using prepared statements
>(on my machine at least), but I need more speed.
>
>That's why I want to skip the SQL processing entirely
P Kishor wrote:
> On Sun, Oct 18, 2009 at 10:37 AM, Ron Arts wrote:
>> Very true Simon,
>>
>> this has been the fastest way so far and I can do around
>> 35 selects/second this way, using prepared statements
>> (on my machine at least), but I need more speed.
>>
>> That's why I want to skip
On Sun, Oct 18, 2009 at 10:37 AM, Ron Arts wrote:
> Very true Simon,
>
> this has been the fastest way so far and I can do around
> 35 selects/second this way, using prepared statements
> (on my machine at least), but I need more speed.
>
> That's why I want to skip the SQL processing entirely
On 18 Oct 2009, at 4:37pm, Ron Arts wrote:
> I want to skip the SQL processing entirely
> and write a C function that reaches directly into the
> internal memory structures to get my record from there.
I assume that you've already tested the fastest way of doing this that
the standard library
Very true Simon,
this has been the fastest way so far and I can do around
35 selects/second this way, using prepared statements
(on my machine at least), but I need more speed.
That's why I want to skip the SQL processing entirely
and write a C function that reaches directly into the
internal memory structures to get my record from there.
On 18 Oct 2009, at 8:37am, Ron Arts wrote:
> Is there a way to bypass the virtual machine altogether and reach
> directly
> into the btree and just retrieve one record by its oid (primary
> integer key),
> and return it in a form that would allow taking out the column
> values by name?
Th
Pavel Ivanov wrote:
>> I use the following queries:
>>
>> CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>
> I'm not sure how SQLite treats this table definition but probably
> because of your ASC it could decide that id shouldn't be a synonym for
> rowid which will make at least inserts slower.
On Mon, Oct 12, 2009 at 07:23:30PM -0400, Pavel Ivanov scratched on the wall:
> > Is there a way to prepare the query and save (compiled form) so that
> > we can share them between multiple connection?
>
> Yes, there is: http://sqlite-consortium.com/products/sse.
I realize this may be a genera
> Pavel,
>
> does the cache work for memory databases too?
Doh, missed the fact that it's a memory database. I believe an in-memory
database is in fact just a database cache that never deletes its pages
from memory and never spills them to disk. Although anything about the
size of the database cache will not
2009 1:54 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] sqlite in-memory database far too slow in my use case
>
> On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
>
>> I'm afraid the process of
>> constructing SQL que
2009 1:54 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite in-memory database far too slow in my use case
On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
> I'm afraid the process of
> constructing SQL queries / parsing them by
Hi!
Optimizing by hand is one way to go, but it can get tedious with
multiple SQL statements requiring carefully sequenced prepares, binds,
transactions, pragmas, commits, exception-handling, compiler options
etc.
For automated optimization, you can try StepSqlite
(https://www.metatranz.com/step
Hello!
On Sunday 11 October 2009 22:52:29 Jay A. Kreibich wrote:
> A bit to my surprise, the difference is even more significant using
> prepared statements in a C program. For a half-million selects over a
> similar table in a :memory: database, there is a 20% speed-up by
> wrapping all
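For reference, a minimal C sketch of that kind of wrapping; the statement
and the loop bound are placeholders:

    #include <sqlite3.h>

    /* Wrap a long run of SELECTs in one transaction so the read lock
       is taken once rather than once per statement. */
    void bulk_select(sqlite3 *db, sqlite3_stmt *stmt, int n) {
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for (int i = 1; i <= n; i++) {
            sqlite3_bind_int(stmt, 1, i);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                ;                             /* consume the row */
            sqlite3_reset(stmt);
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }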
On Sun, Oct 11, 2009 at 11:49:57AM +0400, Alexey Pechnikov scratched on the wall:
> Hello!
>
> On Sunday 11 October 2009 00:54:04 Simon Slavin wrote:
> > > Using transactions speeds up a long series of SELECTs because it
> > > eliminates the need to re-acquire a read-only file-lock for each
> >
Ron Arts wrote:
> Will the amalgamated version be faster
> than linking the lib at runtime?
The SQLite website quotes a 10% performance improvement for the
amalgamation. The reason for the improvement is that the compiler gets to
see all the SQLite code
Are there compile time switches which I can use to speed up
selects in memory databases? Will the amalgamated version be faster
than linking the lib at runtime?
Thanks,
Ron
> I use the following queries:
>
> CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
I'm not sure how SQLite treats this table definition but probably
because of your ASC it could decide that id shouldn't be a synonym for
rowid which will make at least inserts slower.
> But I'm still looki
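Rather than guessing, one way to check how SQLite treats the id column is to
ask for the query plan; a minimal C sketch:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        sqlite3_open(":memory:", &db);
        sqlite3_exec(db,
            "CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)",
            0, 0, 0);

        /* If id aliases the rowid, the plan reports a search on the
           integer primary key; otherwise it shows a table scan. */
        sqlite3_prepare_v2(db,
            "EXPLAIN QUERY PLAN SELECT name FROM company WHERE id = 1",
            -1, &stmt, NULL);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 3));
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }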
"Ron Arts" schrieb im
Newsbeitrag news:4ad19195.2060...@arts-betel.org...
> I tried it, and indeed, this speeds up inserts tremendously as well,
> but in fact I'm not at all concernced about insert speed, but much more
about
> select speed. I use the following queries:
>
>CREATE TABLE compan
On 11 Oct 2009, at 9:04am, Ron Arts wrote:
> CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>
> Then I insert 50 records like this:
>
> INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>
> (with consecutive values for the id value.)
I think you can remove the
Alexey Pechnikov wrote:
> Hello!
>
> On Sunday 11 October 2009 12:04:37 Ron Arts wrote:
>>CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>>
>> Then I insert 50 records like this:
>>
>>INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>>
>> (with consecutive
Hello!
On Sunday 11 October 2009 12:04:37 Ron Arts wrote:
>CREATE TABLE company(id INTEGER PRIMARY KEY ASC, name)
>
> Then I insert 50 records like this:
>
>INSERT INTO company (id, name) VALUES ('1', 'Company name number 1')
>
> (with consecutive values for the id value.)
>
> do y
Olaf Schmidt wrote:
> "Ron Arts" wrote in message
> news:4ad10a9e.3040...@arts-betel.org...
>
>> Here's my new benchmark output:
>>
>> sqlite3 insert 50 records time: 17.19 secs
>> sqlite3 select 50 records time: 18.57 secs
>> sqlite3 prepared select 50 records time: 3.27 secs
Hello!
On Sunday 11 October 2009 00:54:04 Simon Slavin wrote:
> > Using transactions speeds up a long series of SELECTs because it
> > eliminates the need to re-acquire a read-only file-lock for each
> > individual SELECT.
> >
> > Since in-memory databases have no file locks, I'm not sure that
"Ron Arts" schrieb im
Newsbeitrag news:4ad10a9e.3040...@arts-betel.org...
> Here's my new benchmark output:
>
> sqlite3 insert 50 records time: 17.19 secs
> sqlite3 select 50 records time: 18.57 secs
> sqlite3 prepared select 50 records time: 3.27 secs
> glib2 hash tables insert 5000
Jay A. Kreibich wrote:
> On Sat, Oct 10, 2009 at 11:57:30PM +0200, Ron Arts scratched on the wall:
>
>> I'm expanding my benchmark to test just that, but I'm running into a
>> problem.
>> Here's my code (well part of it):
>>
>>sqlite3_stmt *stmt;
>>rc = sqlite3_prepare(db, "select name
On 10 Oct 2009, at 10:57pm, Ron Arts wrote:
> The sqlite3_bind_int immediately gives me an RANGE_ERROR (25).
> Is there some obvious thing I'm doing wrong?
I notice that your _prepare call puts single quotes around the
variable, whereas you are binding an integer to it. But that's
probably
On Sat, Oct 10, 2009 at 11:57:30PM +0200, Ron Arts scratched on the wall:
> I'm expanding my benchmark to test just that, but I'm running into a problem.
> Here's my code (well part of it):
>
>sqlite3_stmt *stmt;
>rc = sqlite3_prepare(db, "select name from company where id = '?'", -1,
>
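For reference, a corrected sketch of that lookup with the quotes removed, so
the ? is a true parameter (the surrounding function is invented for
illustration):

    #include <stdio.h>
    #include <sqlite3.h>

    void lookup(sqlite3 *db, int id) {
        sqlite3_stmt *stmt;
        /* No quotes around ?: quoted, it becomes the literal string '?',
           and sqlite3_bind_int() fails with SQLITE_RANGE (25). */
        if (sqlite3_prepare_v2(db, "select name from company where id = ?",
                               -1, &stmt, NULL) != SQLITE_OK)
            return;
        sqlite3_bind_int(stmt, 1, id);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
    }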
Jay A. Kreibich wrote:
> On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
>
>> I'm afraid the process of
>> constructing SQL queries / parsing them by sqlite, and
>> interpreting the results in my app, multiple times per
>> event will be too slow.
>
> There should be
On 10 Oct 2009, at 9:27pm, Jay A. Kreibich wrote:
> On Sat, Oct 10, 2009 at 07:38:08PM +0100, Simon Slavin scratched on
> the wall:
>>
>
>> Don't forget to use transactions, even for when you are just doing
>> SELECTs without changing any data.
>
> Using transactions speeds up a long series of
On Sat, Oct 10, 2009 at 07:38:08PM +0100, Simon Slavin scratched on the wall:
>
> On 10 Oct 2009, at 7:04pm, Roger Binns wrote:
>
> > Ron Arts wrote:
> >> So I am wondering if I can drop the glib Hash Tables, and
> >> go sqlite all the way. But I'm afraid the process of
> >> constructing SQL quer
Ron Arts wrote:
> Using hash tables I can do 10 requests in .24 seconds
> meaning around 40 req/sec.
If you are just doing simple lookups (e.g. equality on a single column)
then a hash table will always beat going through SQLite. But if y
On Sat, Oct 10, 2009 at 07:24:33PM +0200, Ron Arts scratched on the wall:
> I'm afraid the process of
> constructing SQL queries / parsing them by sqlite, and
> interpreting the results in my app, multiple times per
> event will be too slow.
There should be no need to construct and parse queries
Ok,
I just finished writing a test program. It creates an SQLite memory table
and inserts 50 records, then it selects 50 times on a random key.
After that it uses hash memory tables to do the same thing. Here is the
test output:
sqlite3 insert 50 records time: 17.21 secs
sqlite3 sele
On 10 Oct 2009, at 7:04pm, Roger Binns wrote:
> Ron Arts wrote:
>> So I am wondering if I can drop the glib Hash Tables, and
>> go sqlite all the way. But I'm afraid the process of
>> constructing SQL queries / parsing them by sqlite, and
>> interpreting the results in my app, multiple times per
Ron Arts wrote:
> So I am wondering if I can drop the glib Hash Tables, and
> go sqlite all the way. But I'm afraid the process of
> constructing SQL queries / parsing them by sqlite, and
> interpreting the results in my app, multiple times per
> event
Hi,
I am building a libevent-based application that must be
able to handle tens of thousands of requests per second.
Each request needs multiple database lookups. Almost
all requests do the lookups on the primary key of the tables
only. So far I have been using Hash Tables from the glib2
library. Bu
There is a lot of information about the disk-file concurrency model.
I have not found much information regarding the in-memory database
concurrency model.
Thanks,
-Alex
I'm curious if the authors of sqlite have given any consideration to the
merits of using a hash index to retrieve data for in-memory databases?
Thanks,
Ken
Hi Alex,
On Wed, 12 Sep 2007 12:19:44 +0200, you wrote:
> I have 3 questions regarding an sqlite database loaded/used whilst in memory:
>
> 1. How can an sqlite database file (example file1.db) be
>    loaded in memory?
> (Is this the answer?: > sqlite3.exe file1.db)
sqlite3 file1.db .dump | sql
Hi,
I have 3 questions regarding an sqlite database loaded/used whilst in memory:
1. How can an sqlite database file (example file1.db) be loaded in memory?
(Is this the answer?: > sqlite3.exe file1.db)
2. How can the in-memory sqlite database be accessed by multiple
applications?
3. Can multiple
Hi,
I've read a lot of mails on this group regarding use of Sqlite in-memory
mode, for better performance.
Currently we're using a file-based Sqlite DB and are facing a lot of
performance issues, since the DB file is accessed a zillion times in a
single program run.
I've decided to try out the in-memory
From: [EMAIL PROTECTED]
Sent: Thursday, July 07, 2005 7:09 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQLite in memory database from SQLite (3.x) file database?
Hello, John!
> It would be nice to have an option that just loads the db file into
> memory or otherwise caches the contents w
Hello, John!
It would be nice to have an option that just loads the db file into
memory or otherwise caches the contents wholly in memory. Are there
any caching options in sqlite that would mirror this behavior?
You could set the cache size as big as your database file (via pragma).
This should keep the whole database in memory once it has been read.
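A minimal sketch of that suggestion; the 200 MB figure is an assumption for
the example, not from the original mail (a negative cache_size is
interpreted as KiB rather than pages):

    #include <sqlite3.h>

    /* Make the page cache roughly as large as the database file so,
       once warm, reads rarely touch the disk. */
    void cache_whole_db(sqlite3 *db) {
        sqlite3_exec(db, "PRAGMA cache_size = -200000", 0, 0, 0); /* ~200 MB */
    }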
Interesting.. can multiple threads share the same in-memory database
through multiple sqlite_open()s? From what I can scrape together from
the wiki page (http://www.sqlite.org/cvstrac/wiki?p=InMemoryDatabase),
it sounds like the best one could do is create the in-memory db handle
once in the main thread
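Newer SQLite versions (3.7.13+) than that wiki page describes can share a
named in-memory database across connections via a URI filename; a minimal
sketch:

    #include <sqlite3.h>

    /* Two handles onto the same in-memory database through a
       shared-cache URI name (requires the SQLITE_OPEN_URI flag). */
    int open_shared(sqlite3 **a, sqlite3 **b) {
        const char *uri = "file:memdb1?mode=memory&cache=shared";
        int flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                    SQLITE_OPEN_URI;
        int rc = sqlite3_open_v2(uri, a, flags, NULL);
        if (rc == SQLITE_OK) rc = sqlite3_open_v2(uri, b, flags, NULL);
        return rc;    /* tables created via *a are visible via *b */
    }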
assuming you open up a :memory: database:
attach 'foo.db' as bar;
create table baz as select * from bar.baz;
detach bar;
This doesn't copy indexes, so you'll have to recreate them. I don't think
it copies triggers either.
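A sketch of the same copy done through the C API, with an index recreated
afterwards; the names come from the example above, and the index definition
is invented:

    #include <sqlite3.h>

    /* Copy table baz from an attached file DB into a :memory: DB,
       then recreate an index the copy does not carry over. */
    int copy_in(sqlite3 *mem) {
        const char *sql =
            "ATTACH 'foo.db' AS bar;"
            "CREATE TABLE baz AS SELECT * FROM bar.baz;"
            "DETACH bar;"
            "CREATE INDEX baz_id ON baz(id);";  /* invented index */
        return sqlite3_exec(mem, sql, 0, 0, 0);
    }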
On 7/6/05, John Duprey <[EMAIL PROTECTED]> wrote:
> Is it possible to load an SQLit
> Is it possible to load an SQLite file database into an SQLite "in
> memory" database? If so what is the most efficient method to do this?
> I'm looking for the fastest possible performance. Taking out the
> disk I/O seems like the way to go.
create a memory database, attach the file-based database
Is it possible to load an SQLite file database into an SQLite "in
memory" database? If so what is the most efficient method to do this?
I'm looking for the fastest possible performance. Taking out the
disk I/O seems like the way to go.
Thanks,
-John