That's very helpful, thank you very much, Daniel. Also to Jens.
I've re-read the SQLite documentation for shared cache now. I guess I had
enabled it in the past to increase the performance (many threads in my
application).
Apparently, this had the opposite effect :-/
After disabling the
I have an issue with concurrency. This may be the intended behavior of
SQLite, or I'm doing something wrong.
If somebody could shed some light, I would be thankful.
I compile and use SQLite on Windows with VStudio.
I compile it with SQLITE_THREADSAFE=1
At runtime, I use sqlite3_open_v2 () and
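As a hedged illustration of the serialized-threading setup (Python's bundled SQLite is likewise compiled with SQLITE_THREADSAFE=1, so one connection can be shared across threads; this is a sketch, not the application's actual code):

```python
import sqlite3
import threading

# Sketch only: with SQLITE_THREADSAFE=1 (serialized mode) a single
# connection may be shared across threads; the library serializes access
# internally. isolation_level=None puts the connection in autocommit so
# no implicit BEGIN is raced between threads.
con = sqlite3.connect(":memory:", check_same_thread=False,
                      isolation_level=None)
con.execute("CREATE TABLE t(x INTEGER)")

def worker(n):
    # Each thread inserts through the same shared connection.
    con.execute("INSERT INTO t VALUES (?)", (n,))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 8
```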
Thanks to all the friendly people who commented on my question. Much
appreciated :-)
I was able to solve this with a small trick:
I created a small 'state' struct with a rowid and the result (float) for
that row.
Using the "user data" parameter when creating EXPENSIVE_FUNCTION, I supply a
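The caching trick can be sketched as follows. Python's sqlite3 module has no "user data" pointer on create_function, so a closure over a small state dict stands in for the state struct; the names and the placeholder computation are invented for illustration:

```python
import sqlite3

# Sketch of the caching trick: remember the last (rowid, result) pair so
# the expensive scalar function is not recomputed when the same row is
# visited again within a query.
def make_expensive_function():
    state = {"rowid": None, "result": None}   # stands in for the state struct

    def expensive(rowid, value):
        if state["rowid"] == rowid:           # cache hit: reuse stored result
            return state["result"]
        result = float(value) * 2.0           # placeholder for the real work
        state["rowid"], state["result"] = rowid, result
        return result

    return expensive

con = sqlite3.connect(":memory:")
con.create_function("EXPENSIVE_FUNCTION", 2, make_expensive_function())
con.execute("CREATE TABLE m(v REAL)")
con.execute("INSERT INTO m VALUES (3.0)")
row = con.execute("SELECT EXPENSIVE_FUNCTION(rowid, v) FROM m").fetchone()
print(row[0])  # 6.0
```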
Hi all,
I have a table with matrices stored as blobs. Each matrix is about 800
bytes.
This table has between 20,000 and 500,000 rows.
I use a custom function "EXPENSIVE_FUNCTION" which performs a calculation
using a matrix supplied via sqlite3_bind_pointer() as ?99 and the matrix in
the
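For illustration, a hedged sketch of a custom scalar function over blob matrices. Python's sqlite3 module has no sqlite3_bind_pointer(), so the query matrix is bound as an ordinary blob parameter instead, and the "expensive" calculation is a stand-in sum of squared differences:

```python
import sqlite3
import struct

# Matrices stored as blobs of doubles, compared by a custom scalar
# function. The distance metric here is only illustrative.
def pack(values):
    return struct.pack("%dd" % len(values), *values)

def distance(query_blob, row_blob):
    q = struct.unpack("%dd" % (len(query_blob) // 8), query_blob)
    r = struct.unpack("%dd" % (len(row_blob) // 8), row_blob)
    return sum((a - b) ** 2 for a, b in zip(q, r))

con = sqlite3.connect(":memory:")
con.create_function("EXPENSIVE_FUNCTION", 2, distance)
con.execute("CREATE TABLE matrices(m BLOB)")
con.execute("INSERT INTO matrices VALUES (?)", (pack([1.0, 2.0, 3.0]),))
query = pack([1.0, 2.0, 4.0])
d = con.execute("SELECT EXPENSIVE_FUNCTION(?, m) FROM matrices",
                (query,)).fetchone()[0]
print(d)  # 1.0
```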
Hello,
I recently looked into FTS 5.
The documentation is clear and I was able to get it running with a small
test database quickly. And the response times are awesome :-)
My question:
At least as I understand it at this point, FTS can only do prefix queries.
If my database contains
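A minimal prefix-query sketch (assuming the linked SQLite library was built with FTS5, as most current builds are):

```python
import sqlite3

# Minimal FTS5 prefix query. A trailing '*' matches any token beginning
# with the given prefix; there is no built-in suffix query like '*mance'.
con = sqlite3.connect(":memory:")
try:
    con.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
except sqlite3.OperationalError:
    rows = None   # this build has no FTS5; nothing to demonstrate
else:
    con.execute("INSERT INTO docs VALUES ('performance tuning notes')")
    con.execute("INSERT INTO docs VALUES ('portable code')")
    rows = con.execute(
        "SELECT body FROM docs WHERE docs MATCH 'perf*'").fetchall()
print(rows)
```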
The temporary table is created as
CREATE TEMPORARY TABLE _tempNNN (oid INTEGER PRIMARY KEY)
So the optimizer must know that the values are unique.
In a recent question
(http://mailinglists.sqlite.org/cgi-bin/mailman/private/sqlite-users/2015-March/058668.html)
I found out that joining with a single-column temporary table with 500 rows is
sometimes several hundred times (!) slower than using an IN clause.
So far my code switched to
I don't think I can run an ANALYZE on the TEMP table for each query. That may
hurt performance more than taking my chances with the temp table.
I think this boils down to why a JOIN with a 500-row table (one column) is so
much (several hundred times) slower than using an IN clause with 500
I'm using 3.8.8.1 on Windows via the 'C' interface.
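The two query shapes being compared can be reproduced in a few lines, with an invented schema and row counts. Both return exactly the same rows, so only the plan the optimizer picks (and hence the speed) differs:

```python
import sqlite3

# A JOIN against a one-column temporary table versus an IN (...) list.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data(oid INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO data VALUES (?, ?)",
                [(i, "row %d" % i) for i in range(1000)])
wanted = list(range(0, 1000, 2))          # 500 oids to consider

con.execute("CREATE TEMPORARY TABLE _temp1 (oid INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO _temp1 VALUES (?)", [(o,) for o in wanted])

join_rows = con.execute(
    "SELECT d.oid FROM data d JOIN _temp1 t ON d.oid = t.oid").fetchall()
in_rows = con.execute(
    "SELECT oid FROM data WHERE oid IN (%s)" % ",".join("?" * len(wanted)),
    wanted).fetchall()
print(len(join_rows), sorted(join_rows) == sorted(in_rows))  # 500 True
```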
I have worked with SQLite for several years with good success, and I know that
no optimizer will ever be perfect.
But I've just stumbled upon a case where a very similar query requires between
0.2 seconds and a whopping 30 seconds.
I've simplified
>So would it be possible to run that command each time you open the config
>database and after any change to it ? That would give us a perfect way to
>find out which commands were causing your problems.<
Not really possible. The average update rate is low, but there are times when
hundreds
I dump the output to the log file so when a user sends me a log after the
diagnosis reported a 'repaired' damage, I see one or more entries like:
'row 2481 missing from index idx_settings_sndidmnun'
I will see if I can collect more log files in the coming months.
One thing to add:
I was sometimes able to remote-repair a corrupted database by telling the
user how to use sqlite3.exe and run REINDEX.
After learning that, I added this to my diagnosis routine, so if
integrity_check() returns something that's not 'ok', my application runs a
I have enabled the error callback and it logs everything except SQLITE_SCHEMA
and SQLITE_OK.
In the log files I have received so far, none of these were reported (I use
special headers to identify SQLite errors because log files are often several
hundred MB in size).
I'm waiting for a
> Why are you using shared cache, especially with WAL? Are your devices memory
> constrained?
I was under the impression that shared cache has performance benefits when used
in a MT environment.
> If those databases are small then running "PRAGMA integrity_check" on them
> should be very fast.
My application does that. This is how users find out that their settings
database has become corrupted.
So far it is just reported and then the config database is restored from last
working
I don't run pragma quick_check in my application, and I don't have any
corrupted database here. I only ever see the log files when my application
reports a damaged database, and there I see the 'disk image malformed' error
message.
The diagnosis routine included in my application and
Sorry, typing on a handheld with too thick thumbs :)
The database files can be on remote storage (e.g. Windows server or NAS) (I
know that this may be a factor and we discussed this already and I know the
texts on your web site about false/incomplete locking etc). I just mentioned
that here
One process only.
The problem is that the databases usually are several gigabytes in size, which
makes them too large for the users to send to me.
A pragma quick_check takes one to two minutes; if the database is on remote
storage, even more.
I currently only have a small 30 MB database which reports 'ok'
PRAGMA quick_check
Takes ~120 s on an average-size (4 GB) database. Database is on an SSD. Cold
database, right after open, nothing in the Windows file cache yet. Running it
again takes about 100 seconds, not much faster. Nothing you can run very often
or in the background because it keeps
I use 3.8.8.1, source id is 2015-01-20 16:51:25
f73337e3e289915a76ca96e7a05a1a8d4e890d55
I compile the amalgamation using Visual Studio 2012.
The options I use are:
encoding='UTF-16le';
journal_mode=WAL;
wal_autocheckpoint=2; // better bulk speed inserts
locking_mode=EXCLUSIVE;
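A sketch of applying these options from Python. WAL needs a real database file (in-memory databases ignore it), and the encoding pragma only takes effect before the first table is created:

```python
import os
import sqlite3
import tempfile

# Apply the connection options listed above.
path = os.path.join(tempfile.mkdtemp(), "example.db")
con = sqlite3.connect(path)
con.execute("PRAGMA encoding='UTF-16le'")    # must precede any tables
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
con.execute("PRAGMA wal_autocheckpoint=2")   # checkpoint very frequently
con.execute("PRAGMA locking_mode=EXCLUSIVE")
print(mode)  # wal
```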
I'm also concerned about this.
I have used SQLite since around 2008 with great success.
For the latest version of my application I decided to switch to using WAL mode
and shared cache, to gain better performance.
I use pragma synchronous=NORMAL and for a highly-critical (yet small, 2 MB)
Had another damaged database report.
This time it is a configuration database which holds application settings.
The file is stored on a local disk, not shared, and only ever accessed by my
application and only by one thread.
The database is run in FULL sync mode for maximum security.
I’m
As I wrote above, damaged databases are replaced. No user continues working
with a damaged database once it has been identified. The issue here is to
detect this early and avoid it altogether.
> One column of one row of one table may get corrupted.
> If that's the case then the database
1. No client-server, I use MySQL, SQL-Server or ORACLE for that.
2. No access to the SQLite database ever by more than one process concurrently
in writable mode. In readable mode, yes. But the reported damage cases were
always single user, one PC.
3. I cannot prevent or disallow users to
I estimate that over 90% of the users keep the database on local disks. I can
tell from the log files.
Keeping the SQLite database on a network server really hurts performance.
That’s not what SQLite is designed for, besides all other aspects of network
locking mentioned in various SQLite
> Okay. First, stop doing VACUUM after this. You're not improving things and
> you may be making things worse
Not important. If this error is encountered the database is marked and the user
reminded on every open/close to replace it with a backup. The database is not
supposed to be used
- The databases in question are stored on a local hard disk or SSD.
- If a user stores his database on a NAS box or Windows server, it is accessed
directly, via standard Windows file system routines.
- From what I can tell, network-based databases are not more likely to corrupt
than
The core code has been in place since about 2008.
I took advantage of changes in SQLite over time, from using the shared cache to
switching to WAL mode for databases which are not opened in read-only mode.
These changes were made between 12 and six months ago, and tested during beta
tests and
The diagnosis log of my application reports the output of integrity_check()
already.
I retrieved the log from the most recent error report. This is what my
application has logged:
'*** IN DATABASE MAIN ***
ON TREE PAGE 385120 CELL 24: INVALID PAGE NUMBER 151192068
CORRUPTION DETECTED IN
My application does not phone home :-/ but I can add output of these functions
to the log file my application maintains. My users know how to collect these
log files and send them to me.
I will also add the error logging callback to my wrapper class and route it to
the log file.
This
1. I don’t have the damaged databases here so I cannot run the diagnosis
myself. The databases are usually too large to upload or transfer.
2. The SQLite version I currently use is 3.8.8.1 (compiled using the
Amalgamation and Visual Studio 2012).
But since not every user always keeps up to date,
Hello,
I have been using SQLite in one of my applications for several years with great success.
The databases managed with SQLite are between 1 and maybe 10 GB, with about 50
tables or so.
The platform is Windows 7 or higher.
Recently I get an increasing number of error reports about “database
Thanks, Richard
After switching back to the latest SQLite version and running ANALYZE on the
sample databases, performance is up to the same level as before (maybe even a
bit faster). Very good.
I will send out a recommendation to my users to run the weekly diagnostics
routine
The sample database was produced by a version of my software which ran the
previous (or an even older) version of SQLite.
My software runs ANALYZE as part of a weekly database maintenance
procedure, but the users can turn this off or delay it for weeks. Various
versions of my software
Hi, Richard
I have prepared a sample database, sample statements and some additional
details and sent it to your email address.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Hi,
Information provided as requested.
_temptable is a temporary table which contains a list of oids (integer, ~ 10
rows) to consider.
Stat3:
tbl         idx                      neq  nlt  ndlt  sample
stack_elem  idx_rel_stack_elem_soid  4
Hello,
After re-compiling my Windows application (compiled with Visual C++ 2012,
32-bit) with the latest SQLite version (3.8.6), I noticed severely degraded
performance with certain queries. The new version is 10 or more times
slower than the previous build I used (3.8.4.3).
1.
> Unrelated to your question, but, take a look at "external content" FTS4
table
> they dramatically cut down the amount of duplicated data [1])
Thanks for the tip. I'll definitely check that.
Currently I build the contents for FTS from several other tables, combining,
splitting, merging data via
If you want to try running with synchronous=NORMAL, you might try setting
PRAGMA wal_autocheckpoint=10; (from the default of 1000) which will
make for dramatically larger WAL files, but also dramatically fewer syncs.
Then the syncs will take just 5 or 6 minutes instead of 4.5 hours.
I have a performance effect which I don't quite understand.
Maybe I'm using the wrong settings or something. Sorry for the long post,
but I wanted to include all the info that may be important.
My software is written in C++, runs on Windows 7/8, the SQLite database file
is either on a local SATA
> If you have a contrary example, please send me a
> copy of the database file via private email.
I take it from your reply that the integrity_check indeed should reveal
problems in the database file which cause the "disk image malformed" return
code so my logic is OK as it is.
Unfortunately,
I have implemented diagnostic routines which allow my users to check the
database for problems.
If something bad happens to a database (which may be not the fault of SQLite
at all, e.g. a network problem or disk problem) I want to detect this as
early as possible in order to inform the user. This
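Such a detection routine might look like this sketch (names invented); anything other than a single 'ok' row from integrity_check is treated as damage worth reporting:

```python
import sqlite3

# Run PRAGMA integrity_check and report anything other than a single
# 'ok' row. Each returned row describes one detected problem.
def diagnose(path):
    con = sqlite3.connect(path)
    try:
        rows = [r[0] for r in con.execute("PRAGMA integrity_check")]
    finally:
        con.close()
    if rows == ["ok"]:
        return True, []
    return False, rows

ok, problems = diagnose(":memory:")
print(ok)  # True
```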
Unfortunately this does not help :-(
It may still have something to do with WAL.
Before I run the VACUUM the WAL file is quite small.
After the VACUUM has completed, it is about 20 MB - about the same size as
the properly compacted database would be.
But when I run a pragma wal_checkpoint; the
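The VACUUM-then-checkpoint sequence can be reproduced in miniature: in WAL mode, VACUUM writes the rebuilt database into the WAL first, and a TRUNCATE checkpoint then moves those pages into the main file and resets the WAL to zero bytes:

```python
import os
import sqlite3
import tempfile

# VACUUM in WAL mode, then checkpoint-and-truncate the WAL.
path = os.path.join(tempfile.mkdtemp(), "vacuum.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])
con.commit()
con.execute("VACUUM")                      # rebuilt pages land in the WAL
busy, _, _ = con.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
wal_size = os.path.getsize(path + "-wal")  # 0 after a TRUNCATE checkpoint
con.close()
print(busy, wal_size)  # 0 0
```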
I run the VACUUM command at the end of a diagnosis and cleanup operation on
my database.
I use the SQLite 3 API on Windows 7. Latest version of SQLite.
My database uses the WAL mode.
The database size is 120 MB when I run the sqlite3_exec(db, "VACUUM", ...)
command.
After about 20 seconds of heavy
Great :-)
I guess that PRAGMA temp_store=MEMORY then does not add extra
performance on Windows and I can safely leave it at DEFAULT or FILE.
This will avoid the excessive memory usage during VACUUM for my use case.
Thanks.
-- Mario
Hello List
the SQLite databases I use on Windows can become fairly large (several GB).
I just noticed that running a VACUUM on such a large database (where several
of the tables are also really large) can cause excessive memory usage (more
than 2.5 GB RAM at peak).
I tracked this down to using
Thank you for providing this pre-release amalgamation ;-)
I downloaded it immediately and compiled it into my software.
The problem has been ++resolved++ and the performance is at least as good as
with previous versions of SQLite.
It even feels a bit faster, although I only could try it with a
Hi,
I just tried this (sorry for the delay) but apparently I don't have the
right toolset installed.
The makefile requires gawk.exe (which I downloaded from SourceForge), but
now it's complaining about a missing tclsh85...
Since the previous SQLite version works fine I think I'll skip this and
So far I only used the Amalgamation. Looks like the files on your source
control server require me to build SQLite from source or at least run a
tool/script to build the amalgamation.
I'm using Windows and Visual Studio so the standard Linux build tools and
scripting languages are not available.
Hi, thanks.
But I'm not sure that I understand you correctly.
Is this behavior considered something that needs to be looked at by the
SQLite team to restore the original performance, or is this how "it is" now
and I have to find a work-around for good (e.g. applying your suggestion
with
This is a SQL Script
/*
Application linking to SQLite using the Amalgamation.
Build Tool: C++, Visual Studio 2012, Windows 7 64-Bit
The table schema and the query enclosed below are in use
for over one year and various SQLite versions.
After downloading and compiling in the SQLite 3.7.16.2, we