> I agree with Jay - while it is tempting to have SQLite bite off
> optimizing this kind of thing, it's pretty far out of scope. Next
> we'll be talking about running SQLite on raw partitions!
>
Scott, I thought about it; ironically, the SQLite VFS is flexible enough to
implement raw-partition SQLite in
> This yields two benefits:
>
>A less fragmented db
>
>~50% vacuum speedup since the data is only copied once
>
> Currently we can copy the data to a new file, but it is a pretty
> invasive change to swap all of the current sqlite connections to the new
> file. Things like prepared statemen
On Tue, Aug 31, 2010 at 9:07 AM, Eric Smith wrote:
>
> You probably don't want 'SELECT max(foo_id)+1' because I think that does
> a full table scan.
>
>
You're right, but the slightly modified version SELECT (SELECT max(foo_id) FROM
MxVft_FTIndex)+1 does not
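The difference between the two forms is easy to sanity-check with a small sketch (Python's sqlite3 bindings; the foo table is a stand-in, not the original MxVft_FTIndex schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (foo_id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO foo (foo_id) VALUES (?)",
                [(i,) for i in range(1, 6)])

# Wrapping the bare aggregate in a scalar subquery lets SQLite apply its
# min/max optimization (a single b-tree seek on the primary key) before
# the +1 is computed; both forms return the same value.
naive = con.execute("SELECT max(foo_id)+1 FROM foo").fetchone()[0]
fast = con.execute("SELECT (SELECT max(foo_id) FROM foo)+1").fetchone()[0]
assert naive == fast == 6
```

Whether the first form is optimized away varies by SQLite version; EXPLAIN QUERY PLAN on both statements shows the difference in table access.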
Max
_
On Mon, Aug 30, 2010 at 5:34 PM, Pankaj Chawla wrote:
> I think keeping temp files in memory is a good idea, but I was concerned
> because I read in a few places that if rollback journals are kept in memory,
> then on a power outage
> you not only lose the 2 minutes of data but can have a corrupt DB in ha
On Mon, Aug 30, 2010 at 4:21 PM, Pankaj Chawla wrote:
>
> Thanks for the reply. Won't keeping things in memory lead to chances of
> the db getting corrupt, especially in cases of power failure or device reboots.
>
You have to decide what you want. The problem is if you don't want anything
to be writte
y fraction of time (for example
when the page is already in the cache), this might affect the performance in
general.
Also, you can try to compare the EXPLAIN QUERY results from a version prior to
3.5.5 with those of the current one for the same SQL query. I wanted to do this
myself, but it seems I can not acc
Simon,
I don't know whether my posts are delayed in delivery (I replied several
hours ago to the discussion you mentioned), but I actually tested 100 rows
before and after with a similar query (ORDER BY ... LIMIT) and it definitely
shows that sqlite saves time and doesn't go further. Do you have reaso
E Word < 'TAKE' ORDER BY Word
DESC LIMIT 100)
UNION
SELECT * FROM (SELECT * FROM MxVft_FTWords WHERE Word >= 'TAKE' ORDER BY
Word LIMIT 100)
ORDER BY Word
I tried it on a real table containing about 1.3M records; EXPLAIN QUERY
PLAN successfully reported index usage, al
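The two-sided pattern from the query above can be sketched on a toy table (Python; the word list is invented, not the real MxVft_FTWords data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE words (Word TEXT PRIMARY KEY)")
con.executemany("INSERT INTO words VALUES (?)",
                [(w,) for w in ("ABLE", "CAKE", "MAKE", "TAKE", "WAKE", "ZONE")])

# Two LIMITed index scans, one descending below 'TAKE' and one ascending
# from 'TAKE', merged and re-sorted: a window of words around the target.
rows = [r[0] for r in con.execute("""
    SELECT * FROM (SELECT Word FROM words WHERE Word < 'TAKE'
                   ORDER BY Word DESC LIMIT 2)
    UNION
    SELECT * FROM (SELECT Word FROM words WHERE Word >= 'TAKE'
                   ORDER BY Word LIMIT 2)
    ORDER BY Word
""")]
assert rows == ["CAKE", "MAKE", "TAKE", "WAKE"]
```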
> Maybe it'll be clearer if I describe my (quite simple) use case. Our
> app is caching what are basically csv files. Hundreds of files, about 2m
> records per file. Sometimes we want to delete all the cache rows for one
> of the files. We know ahead of time which file it will be -- let's say
>
> In my case (which is certainly not typical), a (several GB) large
> database is built up in several batches, one table at a time, while in
> parallel many intermediate files on the disk are created. This resulted
> in a very fragmented database file. After that, also several times, the
> data is
Whether SQLITE_FCNTL_CHUNKS_SIZE is useful is a different discussion.
> I myself would love to see features exposed via pragmas whenever
> possible, for the simple reason that I don't use the C API and can't
> make use of the features otherwise. I would assume that since the
> SQLite developers ad
On Fri, Aug 20, 2010 at 12:27 AM, Taras Glek wrote:
> Hi,
> I really appreciate that sqlite got this feature to reduce
> fragmentation, but why not expose this as a pragma?
>
Taras, I think that you're overestimating the feature. At the OS level it
won't matter how far the file pointer goes,
On Tue, Aug 17, 2010 at 4:28 PM, Lukas Haase wrote:
> Hi,
>
> My sqlite database is about 65 MB. The data is split into several tables.
>
> Is there a way to enumerate the space requirements for each table so
> that I can see which tables are the memory consumers?
>
>
Look at this discussion:
http
> And let us know if the problem persists. Perhaps this has been fixed by
>
>http://www.sqlite.org/src/info/e4b8a2ba6e
>
>
Richard,
I tried to investigate the problem L L posted in parallel. I think L L will
report his results. From my tests it seems the snapshot has fixed the
problem, but it s
On Fri, Aug 13, 2010 at 1:38 PM, Max Vlasov wrote:
>
> I can approximately calculate how big the new database will grow. Is
>> there a way to tell SQLite to reserve an initial space or number of pages
>> instead of letting the database file grow again and again?
>>
From what I see, it seems that SQLite internally doesn't rely on the
file size for its core functionality, and xFileSize looks more like a service
function, but I may be wrong
Max Vlasov,
www.maxerist.net
qlite expects. So before any file-system defragmentation,
internal SQLite defragmentation (VACUUM) has to be applied.
Max Vlasov,
maxerist.net
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
On Mon, Jul 26, 2010 at 11:28 PM, Taras Glek wrote:
> Hi,
> I noticed an interesting disk-space behavior with VACUUM. If I vacuum my
> places.sqlite(Firefox database), it's 49mb. ...
> ...
> Then vacuum, the db becomes 24mb.
>
Taras,
49 is almost equal to 24*2. Can it be related to some encoding c
> In the case of SQLite, it is also very unlikely to save space. In
> fact, because of the way integer values are stored, it is very
> likely to use more space.
>
Jay, in most cases yes, but there are cases with several integers that should
be used together in ordering; the space is wasted sin
On Tue, Jul 20, 2010 at 7:38 PM, Ed Hawke <
edward.ha...@hawkeyeinnovations.co.uk> wrote:
> I was running a system using an old sqlite3.dll (version 3.0.8, I believe).
>
> Updating to the latest version (3.6.23) causes my program to run
> incredibly slowly.
>
>
Can you provide more details about t
On Tue, Jul 20, 2010 at 5:14 PM, Pavel Ivanov wrote:
> > Are there reasons not to implement optimization in the first case? Except
> > for this is not most requested one :)
>
> I guess because this case is highly specific and its behavior should
> depend on the particular constants used. Put there f
SQLite Expert (which I use extensively in the Pro version) enforces
> displaying types as declared. So if you declare your column as INTEGER
> (or INT, ...) then it will display integers regardless of the actual
> individual data type using common conversions. This is a side effect
> of the Delph
when several small fields exist and they should be
indexed, one can pack them into the id (rowid) to save space, and the first
syntax will allow more natural querying
Thanks,
Max Vlasov
maxerist.net
On Fri, Jul 16, 2010 at 5:14 PM, Jim Wilcoxson wrote:
> >>
> > You mean, like this?
> >
> > pragma page_count
> > pragma page_size
> > size = page_count*page_size
> >
> >
> Doh. Nevermind. I see you said table, not database. :)
>
>
If you ask me, the syntax PRAGMA page_count(table_name/index_nam
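The whole-database arithmetic from the quoted exchange can be checked directly (a sketch with Python's sqlite3 module):

```python
import os
import sqlite3
import tempfile

fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)

con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.commit()

# The database file is exactly page_count * page_size bytes.
page_count = con.execute("PRAGMA page_count").fetchone()[0]
page_size = con.execute("PRAGMA page_size").fetchone()[0]
con.close()

assert page_count * page_size == os.path.getsize(path)
os.remove(path)
```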
On Fri, Jul 16, 2010 at 4:52 PM, Jay A. Kreibich wrote:
>
> Check out sqlite3_analyzer. This is available in binary format only
> on the SQLite website. http://sqlite.org/download.html
>
>
Thanks, the tool is really great, but I'm just wondering whether the code used
there can be part of another project
Hi,
I always wanted a way to calculate how much space a table occupies.
As far as I can see from the archive, there's no out-of-the-box solution
(CMIIW).
Recently I made a query that probably solves this, but it has some
limitations and there are still unsolved issues.
So, let's TableToCheck
Riccardo,
please run the following query:
select typeof(basket) as tp FROM Data WHERE tp <> 'integer'
As you may know, sqlite accepts any value regardless of your declared type, so
you could possibly have inserted a real value without knowing about it. This
query will probably show the rows with such valu
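A sketch of that check (Python; the Data table and basket column mirror the query above, the sample values are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Data (basket INTEGER)")
# INTEGER is only an affinity: a non-lossless REAL and a non-numeric TEXT
# value are stored as-is despite the declared column type.
con.executemany("INSERT INTO Data VALUES (?)", [(1,), (2.5,), ("3x",)])

bad = con.execute("""
    SELECT basket, typeof(basket) FROM Data
    WHERE typeof(basket) <> 'integer' ORDER BY rowid
""").fetchall()
assert bad == [(2.5, 'real'), ('3x', 'text')]
```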
On Wed, Jul 14, 2010 at 2:12 PM, D. Richard Hipp wrote:
> The signature of the VFS has changed slightly - the xShmOpen() method
> has been removed. Implementations are now expected to automatically
> create the shared memory on the first call to xShmMap(). And the
> xShmClose() method has been
>
> > In testing the latest SQLite snapshot with WAL enabled, it seems that
> > there's no way to use a database in a read-only location.
> >
>
> Documentation on the developers' current view of this issue can be found
> here:
>
>
>
In my opinion it's ok, WAL already has special conditions on which
On Mon, Jul 12, 2010 at 5:00 PM, Alexey Pechnikov wrote:
> Maxim, please show example here and the link to your implementation.
>
>
Alexey,
to illustrate, I prepared a screenshot:
http://www.maxerist.net/downloads/temp/bind_iterate.png
Excel and the cells to be copied are at the top; at the center t
ignoring extra fields with padding.
I don't know whether clipboard content can ideologically be used in sqlite,
but in my case it saved additional time by allowing imports, for example from
Excel, without an intermediate file (CSV).
Thanks
Max Vlasov
www.maxeris
>>
> >
> > Simon, you gave an interesting explanation, but does this rule work in
> > general? I mean there are many models, many sizes and so on.
>
> Don't know. You could test it. Write a program that creates a file half
> the size of the drive, then writes to random parts of it timing each
>
Second Proposed Change:
>
> Remove the WAL mode from "PRAGMA journal_mode". The journal_mode pragma
> only specifies the various rollback journal modes. Enable the WAL using a
> separate pragma such as "PRAGMA wal=ON; PRAGMA wal=OFF;"
>
> Thoughts? Comments? Other suggestions?
>
>
Maybe it's
> You want "PRAGMA main.journal_mode"
>
> A "PRAGMA journal_mode;" (without the "main.") shows you the default
> journal
> mode used by newly created databases, which is always "DELETE" unless you
> have changed it with a prior "PRAGMA journal_mode=MODE" command.
>
> Though, I will admit, this is c
> >> (I guess it well might not on an SSD disk, but on a conventional
> >> rotational disk, pager could read several pages ahead with one seek -
> >> but does it?)
> >
> > No, the pager does not. Among other things, my feeling is that the
> > locality of pages is not very strong, unless the data
Alexey,
I read this sentence, but it didn't help.
So I suppose there's a bug in PRAGMA journal_mode logic
Steps to reproduce.
1. Create an empty database with some table.
Look at offsets 18 and 19; they are both 1, which is OK: the database is
compatible with the full range of sqlite3 versions.
2. Do PRAGMA journ
Just started testing the WAL feature and have some questions
regarding PRAGMA journal_mode.
Should =WAL affect only the current session, or all following sessions until
=DELETE is performed? As far as I can see it keeps the state; I concluded this
from the following facts:
- prior versions of sqlite refuse to acc
On Sat, Jun 26, 2010 at 7:27 AM, zhangzhenggui wrote:
> tbl_test maybe like this:
> create table tbl_test(f1, f2, f3);
>
> Now, I want to get the num of records which (f1, f2) are distinct.
>
> I try "select count(distinct f1, f2) from tbl_test", but an error occurs: SQL
> error: wrong number of argum
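A common workaround is counting over a DISTINCT subquery — a sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_test (f1, f2, f3)")
con.executemany("INSERT INTO tbl_test VALUES (?, ?, ?)",
                [(1, 1, 'a'), (1, 1, 'b'), (1, 2, 'c'), (2, 1, 'd')])

# count(DISTINCT f1, f2) is a syntax error in SQLite; deduplicate the
# pair in a subquery and count the result instead.
n = con.execute(
    "SELECT count(*) FROM (SELECT DISTINCT f1, f2 FROM tbl_test)"
).fetchone()[0]
assert n == 3  # distinct pairs: (1,1), (1,2), (2,1)
```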
domly and writing randomly until specially
interrupted. While the program reads and writes, you check PRAGMA
integrity_check from time to time and stop if the result shows corruption.
If you can't reproduce the problem on your machine, you can send a specially
prepared version to the user an
On Sat, Jun 5, 2010 at 11:01 AM, Frank Church wrote:
>
> On checking the sqlite3.dll docs it states the only additional requirement
> for it is msvcrt.dll.
>
Frank, as far as I know it's just a variation of sqlite3.dll that links the MS C
run-time dynamically instead of statically. You can compile sq
Hi,
I tried to figure out the simplest query returning an empty result set without
binding to any existing table.
The query
SELECT 1 WHERE 1=2
worked, but it looked a little strange ) and besides, it didn't work in
mysql. Will it work in future versions of sqlite, or would I be better off
using a query worki
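Both the FROM-less form and a derived-table variant (my guess at a more portable spelling; not from the thread) behave the same in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# SQLite accepts a FROM-less SELECT with a WHERE clause...
assert con.execute("SELECT 1 WHERE 1=2").fetchall() == []

# ...and a derived-table variant for engines that insist on a FROM clause
# (MySQL would additionally require an alias on the subquery).
assert con.execute("SELECT 1 FROM (SELECT 1) WHERE 1=2").fetchall() == []
```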
e packages. If I had the power, I'd be
glad to calculate and see how many sqlite-related reads and writes are
taking place across the whole earth. I'm sure it's a big number :)
Thank you for sqlite!
Max Vlasov
On Sat, May 29, 2010 at 5:57 PM, D. Richard Hipp wrote:
>
> Thanks, everyb
On Thu, May 27, 2010 at 3:07 PM, Michael Ash wrote:
> ...These are large tables (52,355 records in facility and 4,085,137 in
> release_cl).
>
> ...
> sqlite> explain query plan
> ...> SELECT name,score
> ...> FROM facility f, (SELECT facilitynumber,SUM(score_rev) AS score
> ...> FRO
On Wed, May 26, 2010 at 6:19 PM, Pavel Ivanov wrote:
> > But just
> > wondering, was allowing to create such field intentional? As I suppose
> such
> > field is complete phantom since most sql queries will interpret rowid
> > internally and won't allow access this user field at all.
>
> Situation
While implementing a table intended for arbitrary table storage, I
unthinkingly named a field rowid, not realizing at the moment that it would
have a name conflict with sqlite's internal rowid name. Later I discovered
this and (not a big deal) recreated the table with a different name. But just
wo
> where exists (
> select 1 from master_table where
>master_table.id=detail_table.masterid and
>masterfieldtocheck = okvalue and
>masterfield2tocheck = okvalue2);
>
> -- or
>
> where detail_table.masterid in (
> select id from master_table where
>masterfieldtocheck = okvalue and
>
We all know UPDATE is sometimes limited, so in order to do some complex
conditional updating we can only rely on the complexity of the WHERE clause.
I would like to update my detail table based on master properties (so, to set
some value only if the corresponding master record fits some conditions).
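The EXISTS pattern from the quoted reply, run end-to-end (master/detail contents invented for the sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE master_table (id INTEGER PRIMARY KEY, flag INTEGER);
    CREATE TABLE detail_table (id INTEGER PRIMARY KEY, masterid INTEGER,
                               status TEXT);
    INSERT INTO master_table VALUES (1, 1), (2, 0);
    INSERT INTO detail_table VALUES (10, 1, 'old'), (11, 2, 'old');
""")

# Update a detail row only when its master row passes the condition.
con.execute("""
    UPDATE detail_table SET status = 'new'
    WHERE EXISTS (SELECT 1 FROM master_table
                  WHERE master_table.id = detail_table.masterid
                    AND master_table.flag = 1)
""")
rows = con.execute("SELECT id, status FROM detail_table ORDER BY id").fetchall()
assert rows == [(10, 'new'), (11, 'old')]
```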
Gilles,
For Delphi I successfully used files from http://www.aducom.com to
statically link sqlite files compiled with bcc (Borland command-line c
compiler freely available now) with Delphi. Also the components of
aducom.com will allow you to use all the power of Delphi database components
with the
> I have a table with two fields defining range of ids of another table,
> > IdFrom and IdTo. I would like to use this information (multiply rows as
> a
> > SELECT result of this IdFrom and IdTo) in joins.
>
> Something like this?
>
> select * from MyTable join Ranges
> on (MyTable.id between IdF
a way to use them in joins directly, but if that's not possible,
at least to find an effective way to populate these ranges into a temporary
table. I know there's always a way to do this algorithmically, but I'm just
wondering if it's possible with sql.
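The BETWEEN join suggested in the reply does work directly — a sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Ranges (IdFrom INTEGER, IdTo INTEGER);
    CREATE TABLE MyTable (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO Ranges VALUES (2, 3), (5, 5);
    INSERT INTO MyTable VALUES (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e');
""")

# Each range row joins to every id it covers.
ids = [r[0] for r in con.execute("""
    SELECT MyTable.id FROM MyTable JOIN Ranges
      ON MyTable.id BETWEEN Ranges.IdFrom AND Ranges.IdTo
    ORDER BY MyTable.id
""")]
assert ids == [2, 3, 5]
```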
Thanks,
Max Vlas
writes
from several computers you will at least have probability arguments on your
side )
Max Vlasov,
maxerist.net
On Wed, Apr 28, 2010 at 9:43 AM, Jean-Christophe Deschamps
wrote:
> Hi gurus,
>
> I'm aware of the limitations that generally preclude using SQLite over
> a network
> ...As I add more restrictions on the where-clause it
> tends to slow down. I realize that this is due to my indexes, but can't
> add a lot of indexes because it slows down the insert speed which is
> more important than the query speed.
>
Nathan, maybe you already knew but just in case...
if
o exclude windows system cache as a player.
Max Vlasov,
maxerist.net
** A PENDING_LOCK is obtained by locking a designated byte different from
** the RESERVED_LOCK byte.
It was a note about windows, but there's also a phrase "The same locking
strategy and byte ranges are used for Unix."
Max Vlasov,
maxerist.net
_
One process, two threads.
> I have found in documentation, that is not recommended to pass one
> connection
> from one thread to second one.
>
Yes, you're right, I forgot about this; the only exception is sqlite3_interrupt,
which can be called from another thread, otherwise its existence makes no sense.
But
> I need an application consisting two threads.
> In one of them i need to store incomming "messages" (one message is 1 to 8
> bytes of data) to temporary table existing only in memory.
> It needs to be fast, storing hundreds of messages per second.
> There I have a trigger deleting old rows and cr
>
> Would you care to repeat those two SELECTs, but after making indices on the
> X and Y columns ?
>
Simon, Tim, I forgot to mention, there were also two indexes in the test db,
on X and on Y. Without them, 1.8 seconds and 1/10 of the data flow would not
be possible ))
On Mon, Mar 22, 2010 at 2:52 PM, Tim
>
> Assuming a table where Latitude column and Longitude column each have
> their own index:
>
> perform select #1 which returns the rowids of rows whose latitude meets
> criteria
> INTERSECT
> perform select #2 which returns the rowids of rows whose longitude meets
> criteria
>
>
Ok, just test.
On Sun, Mar 21, 2010 at 3:50 PM, Tim Romano wrote:
> For someone who doesn't read C, could someone who knows please describe
> the SQLite INTERSECT algorithm? What optimizations are available to it?
> Does INTERSECT have to assume that neither vector is pre-sorted? Here's
> the background of my
> First, I'm sure it is nearly impossible to do this as a
> guaranteed, atomic operation on most OSes and filesystems. ...
>
> Second, if this is meant to look like a cleanup operation on the
> original file, the original file (including any filesystem meta-data)
> should be kept in-tact. ..
> This means that to VACUUM a SQLite database of size X, you need at
> least 2X of _additional_ free disk space available. That seems rather
> wasteful, just looking at it as a SQLite user. Although
> programmatically there may be reasons for it that I'm not aware of.
>
>
Hmm, did some research,
> I wonder if the operator "AND" (in capitals letters) is yet available and
> different from the basic term "and" (in lower letters).
>
>
Make sure you compiled the sources with SQLITE_ENABLE_FTS3_PARENTHESIS,
since according to docs
SQLITE_ENABLE_FTS3_PARENTHESIS
This option modifies the quer
> When I enter a new book, I want the author to default to the last author in
> the database. (For the sake of this example you can ignore tricks involving
> deleting authors and reusing ids.)
>
I'm not sure I understood you correctly, but the query
INSERT INTO Books
(Author, Title)
VALUES
((Se
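The quoted INSERT is cut off; one plausible completion defaults the author via a scalar subquery (the Books schema and data here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Books (Author TEXT, Title TEXT)")
con.execute("INSERT INTO Books VALUES ('Asimov', 'Foundation')")

# Default the Author to the most recently inserted one; the subquery is
# evaluated against the table contents before the new row is added.
con.execute("""
    INSERT INTO Books (Author, Title)
    VALUES ((SELECT Author FROM Books ORDER BY rowid DESC LIMIT 1),
            'Second Foundation')
""")
authors = con.execute("SELECT Author FROM Books ORDER BY rowid").fetchall()
assert authors == [('Asimov',), ('Asimov',)]
```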
> Also it's quite known that
> creating index after inserting all rows is much faster than creating
> index before that. So it can be even beneficial in inserting huge
> amount of rows somewhere in the middle of the work: first delete all
> indexes, then insert rows, then create indexes once more.
nci...@aquarelo.com> wrote:
> Thanks Dannis,
>
> The problem is a little bigger. I must have 2 instances of same table:
> original and latest. Then my problem is what is the best way to:
> - transform 'original' with same data as 'latest'. This is 'save'.
> - transform 'latest' with same data as
As I recall, sqlite probably doesn't care about string types included in the
CREATE TABLE field list, so TEXT, VARCHAR, and VARCHAR(200) are identical from
its point of view. Today I needed to increase the number of chars in a VARCHAR
declaration since the library that uses sqlite actually does care about
> The writer application must be failsafe, as much as possible (acoustic
> emission recording devices); I simply can not afford that a reader makes a
> select and because of a programming error the acquisition be blocked. I had
> this just by opening sqliteman.
>
> The recording rate is variable; u
> Then I tried in a loop with 2 programs to write / read in parallel and it
> seems to work without problems. Can anyone advise if this has any chance to
> work (or say it would definitely NOT work)?
>
> As a short summary: would it be interesting for anyone to enable read-only
> open with a specia
On Wed, Mar 3, 2010 at 8:57 AM, Collin Capano wrote:
> The databases in question don't seem to be
> corrupt; I can open them on the command line and in python programs
> (using pysqlite) and can read triggers from them just fine. It's just
> when I try to vacuum and create temp tables that I run i
On Tue, Mar 2, 2010 at 6:36 PM, Pavel Ivanov wrote:
> OK, now I see the problem, but sqlite3_total_changes() will not help
> here too - it behaves the same way as sqlite3_changes(), it doesn't
> accumulate changes over several statements.
>
Hmm... are you sure about this?
A quote from the docs:
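The accumulating behavior is easy to confirm: Python's Connection.total_changes wraps sqlite3_total_changes():

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x)")
con.execute("INSERT INTO t VALUES (1)")
con.execute("INSERT INTO t VALUES (2)")
con.execute("UPDATE t SET x = x + 1")

# total_changes accumulates the rows touched by every statement on this
# connection: two INSERTs plus two UPDATEd rows.
assert con.total_changes == 4
```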
On Mon, Mar 1, 2010 at 6:31 PM, Pavel Ivanov wrote:
> sqlite3_changes() is exactly what you should use in this case. And I
> didn't understand why did you find it unsuitable for you?
>
> Pavel
>
I think I understand his confusion. Imagine that for some reason you don't
know whether the last query is
On Tue, Mar 2, 2010 at 2:41 AM, Jason Lee wrote:
> Hi all,
>
> I've been playing around with the FTS3 (via the amalgamation src) on a
> mobile device and it's working well. But my db file size is getting
> pretty big and I was looking for a way to compress it.
>
Jason, can you calculate the rat
On Tue, Mar 2, 2010 at 10:54 AM, Paul Vercellotti wrote:
> Now I'm guessing that storing all those blobs will slow down access to the
> main tables (assuming records are added gradually - most without associated
> blobs, some with), because records would be spread out over many more pages
> (more
I did some searching and, as far as I can see, the file change counter (the
dbFileVers field of the header) is not directly or indirectly accessible for
reading. But sometimes one would like to have such a feature. For example,
every layer above sqlite likely has its own cache (the Delphi db engine, for
example). And having abi
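There is no API call for it, but the counter can be read straight off the database file — a sketch (per the file-format docs, the 4-byte big-endian file change counter sits at header offset 24):

```python
import os
import sqlite3
import struct
import tempfile

fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)

def change_counter(p):
    # 4-byte big-endian counter at offset 24 of the database header.
    with open(p, "rb") as f:
        f.seek(24)
        return struct.unpack(">I", f.read(4))[0]

con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.commit()
before = change_counter(path)

con.execute("INSERT INTO t VALUES (1)")
con.commit()
after = change_counter(path)

con.close()
os.remove(path)
assert after > before
```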
On Thu, Feb 25, 2010 at 6:53 PM, Pavel Ivanov wrote:
> > So is such behavior documented and can be used or should I still avoid
> > keeping such "live" statements in db-shared enviroment?
>
> It's documented that sqlite3_finalize AND sqlite3_reset release all
> locks acquired by the statement. So
Hi,
I thought that prepared statements in a non-finalized state weren't compatible
with the locking mechanism in a db-sharing environment, i.e. if I want none of
several processes to hang waiting for an EXCLUSIVE lock, none of the others
should use non-finalized prepared statements. But today I found that the
> The fts implementation does work in response to data gotten from
> > SQLite calls, and feeds data back out via SQLite calls, which should
> > all start throwing errors and causing things to unwind. Most
> > expensive fts operations involve lots of subsidiary queries into the
> > SQLite core
>
Simon,
you mentioned full-text search.
I just tried to search for mentions of sqlite3_interrupt in the sqlite
sources.
The main one is the implementation of the function itself, which just sets
the isInterrupted variable:
void sqlite3_interrupt(sqlite3 *db){
db->u1.isInterrupted = 1;
}
but all of
Hello, Jérôme
Nice to hear you finally joined us with this really interesting discussion )
>
> To Max Vlasov:
>
> > in sorted order to sqlite base other 5 minutes, so about 10 minutes it
> > total. First 5 minutes was possible since we exchange only offsets,
> > not d
> I'm really surprised you're not seeing something from an increased
> cache size. It has always made a very noticeable difference in my own
> manipulations, but I think the largest table I've ever worked with
> "only" had 6M rows.
>
Jay, increasing the cache size really helps, but only up to some si
> This is appx. 500MB cache, why not trying with 2,000,000 cache size ? :-)
>
>
Hmm, I managed to increase it to only a 1,000,000 (x1024) size; larger values
eventually lead to "Out of memory", and this value (1G) allows up to
6,000,000 fast records for a 100-byte-field-per-record index. Still good,
such
Marcus,
although increasing cache size is a good method, it may sometimes give
unpredictable results (in terms of performance).
I looked at the vdbe code (EXPLAIN CREATE INDEX ... ) of the index creation
and it seems like there is no special sorting algorithm (CMIIW please).
Excluding all "make on
Jerome,
It's an interesting challenge, thanks for the post.
I tried to research it more and did some tests.
My test database contains a table with 10,000,000 records of text 100
chars in length:
CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[Text] TEXT
)
I suppose your dat
>
>
> The sqlite3_stmt_status() interface was designed for this purpose.
> http://www.sqlite.org/c3ref/stmt_status.html
>
Thanks for the info, I hope it will also be extended with other counters.
I did some tests with this Rowid/Id trick. A larger database, 22 MB, 100,000
records (220 bytes per record a
On Fri, Jan 29, 2010 at 6:02 AM, Doyel5 wrote:
> I deleted all indexes, kept only the indexes on the temporary tables,
> namely
> - tempSimArgs and tempPartArgs, and seemingly my original query's running
> time decreased to around 1 sec (when I ran the query the second time). The
> execution time
> I expect process B to be able to
> read the old data and display it appropriately. But process B seems to be
> blocked. Why is this so, as I thought SQLite handles concurrency as in this
> case, where there are multiple reads and one single write?
>
After Pavel's explanation just reread
http://www.s
>
> > SQLite allows multiple readers OR a single writer to access the database
> simultaneously.
> From the SQLite doc, as long as no transaction is pending, other process
> can
> read or obtain reserved lock for write.
>
the docs say: "Locks are not acquired until the first read or write
operatio
> Maybe I'll try a binary search, or something else along those lines.
>
You can mix binary search with sqlite as a storage. The idea is to place your
words ordered by text and use the rowid as the index.
So after creating a new table
CREATE TABLE Table2 AS SELECT * FROM Table ORDER BY Text
you now ca
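A sketch of the idea (Python; the word list is invented), with rowid playing the role of the array index:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE words (Text TEXT)")
con.executemany("INSERT INTO words VALUES (?)",
                [(w,) for w in ("pear", "apple", "mango", "fig")])
# Rows copied in sorted order get consecutive rowids 1..n.
con.execute("CREATE TABLE Table2 AS SELECT * FROM words ORDER BY Text")

def lookup(word):
    # Classic binary search, seeking by rowid instead of an array index.
    lo, hi = 1, con.execute("SELECT max(rowid) FROM Table2").fetchone()[0]
    while lo <= hi:
        mid = (lo + hi) // 2
        cur = con.execute("SELECT Text FROM Table2 WHERE rowid = ?",
                          (mid,)).fetchone()[0]
        if cur == word:
            return mid
        if cur < word:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

assert lookup("mango") == 3   # sorted order: apple, fig, mango, pear
assert lookup("kiwi") is None
```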
>
>
>
> One of many ways would be to precompute the min/max into a separate
> table and then query that table whenever you need the min/max.
>
Only if one has full control over how the data changes, which, with a format
as widespread as sqlite, is almost never the case. I mean, without triggers
you w
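With triggers, though, the precomputed maximum can be kept current even without controlling the writers — a sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE data (v INTEGER);
    CREATE TABLE stats (maxv INTEGER);
    INSERT INTO stats VALUES (NULL);
    -- Maintain the running maximum on every insert.
    CREATE TRIGGER data_ins AFTER INSERT ON data BEGIN
        UPDATE stats SET maxv = max(coalesce(maxv, NEW.v), NEW.v);
    END;
    INSERT INTO data VALUES (3), (7), (5);
""")
assert con.execute("SELECT maxv FROM stats").fetchone()[0] == 7
```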
Mike,
as the docs on the page http://www.sqlite.org/c3ref/stmt.html say:
1. sqlite3_prepare_v2()
2. sqlite3_bind_*()
3. sqlite3_step() one or more times
4. sqlite3_reset()
5. goto 2 or sqlite3_finalize().
As you may know sqlite3_prepare_* compiles your SQL query into a VDBE
micro-program with in
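Python's sqlite3 module drives the same cycle internally; executemany, for instance, prepares once and then repeats bind/step/reset for every parameter row — a sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a, b)")

# One compiled statement, five bind/step/reset iterations under the hood.
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, i * i) for i in range(5)])
assert con.execute("SELECT count(*), sum(b) FROM t").fetchone() == (5, 30)
```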
I think the code you were surprised to see was a fix for this issue:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg35864.html
as Jeremy Spiegel in that post wrote
> when I try calling DeleteFileW on a file with a handle open on
> that file, I see DeleteFileW returning 1 (success) an
>
> All you really need to do is splice your code between the VFS that
> the SQLite engine sees and the native VFS layer that comes with the
> distribution, adding a bit of extra code to xRead() and xWrite() to
> munge the data.
>
I implemented this approach once and it worked; the VFS is also so flexib
>
> After much playing about, I have determined that it is necessary to violate
> the guidelines on not using INDEXED BY and NOT INDEXED in an attempt to
> optimise performance.
Although you're very determined about your conclusions, I saw a
misunderstanding about INDEXED BY in your statements.
T
On Tue, Jan 12, 2010 at 10:22 AM, zhangzhenggui wrote:
> Dear friend,
>When I execute a SQL with C/C++ API, I want to know which table is
> operated by this SQL. Is there any way to do this except parse the SQL?
>
>
No approach is perfect; here is another one:
add EXPLAIN to the query
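Yet another option (not mentioned in the truncated reply) is the authorizer callback, which reports every table a statement reads at prepare time — a sketch with Python's bindings:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (x)")
con.execute("CREATE TABLE detail (y)")

touched = set()

def authorizer(action, arg1, arg2, dbname, source):
    # For SQLITE_READ events, arg1 is the table name being read.
    if action == sqlite3.SQLITE_READ:
        touched.add(arg1)
    return sqlite3.SQLITE_OK

con.set_authorizer(authorizer)
con.execute("SELECT * FROM orders JOIN detail").fetchall()
assert touched == {"orders", "detail"}
```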
On Tue, Jan 12, 2010 at 5:28 AM, ve3meo wrote:
> Max Vlasov writes:
>
> For 3.5.4 and 3.6.17 which executed in ~240s:
> "order", "from", "detail"
>
> And for 3.6.20 which executed the same query in ~2500s:
> "order","from",
On Mon, Jan 11, 2010 at 2:33 AM, Tom Holden wrote:
> I have been trying out a number of SQLite managers, one test being the
> execution time for the same query on the same database on the same computer.
> The scattering of results was very surprising with a spread on the order of
> 10:1.
>
Is th
On Mon, Jan 11, 2010 at 4:17 AM, Hub Dog wrote:
> I think I found a performance regression bug in sqlite 3.6.18. A sql will
> cost 1800 seconds to return the query result with sqlite 3.6.18 and with
> the
> previous version it only cost about 170 seconds.
>
I have a suggestion for you. If you'r
On Mon, Jan 11, 2010 at 12:56 AM, D. Richard Hipp wrote:
>
> On Jan 10, 2010, at 4:50 AM, Max Vlasov wrote:
>
> > Documentation says that INTERSECT implemented with temporary tables
> > either
> > in memory or on disk. Is it always the case?
>
> No.
>
> If
Documentation says that INTERSECT is implemented with temporary tables, either
in memory or on disk. Is that always the case? The problem is that if I have
several selects (six, for example), where each produces thousands of results
and the intersection is only hundreds, the query takes about a minute to
execu
Thanks for the answers. In the first place I wanted to use the rowid to save
space (since rowids always exist). After reading the replies I changed the
declaration of ID to one without AUTOINCREMENT and manually filled in
consecutive values starting from the current max(rowid)+1. So rowids are
still used, but now the