On Tue, Aug 17, 2010 at 4:28 PM, Lukas Haase wrote:
> Hi,
>
> My sqlite database is about 65 MB. The data is split into several tables.
>
> Is there a way to enumerate the space requirements for each table so
> that I can see which tables are the memory consumers?
>
>
Look at
> And let us know if the problem persists. Perhaps this has been fixed by
>
>http://www.sqlite.org/src/info/e4b8a2ba6e
>
>
Richard,
I tried to investigate the problem L L posted in parallel. I think L L will
report his results. From my tests it seems the snapshot has fixed the
problem, but it
On Fri, Aug 13, 2010 at 1:38 PM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> I can approximately calculate how big the new database will grow. Is
>> there a way to tell SQLite to reserve an initial space or number of pages
>> instead of letting the database f
From what I see, it seems that SQLite internally doesn't rely on the file
size for its core functionality, and xFileSize looks more like a service
function, but I may be wrong.
Max Vlasov,
www.maxerist.net
...expects. So before any file system defragmentation,
internal SQLite defragmentation (VACUUM) has to be applied.
Max Vlasov,
maxerist.net
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
On Mon, Jul 26, 2010 at 11:28 PM, Taras Glek wrote:
> Hi,
> I noticed an interesting disk-space behavior with VACUUM. If I vacuum my
> places.sqlite(Firefox database), it's 49mb. ...
> ...
> Then vacuum, the db becomes 24mb.
>
Taras,
49 is almost equal to 24*2. Can it be
> In the case of SQLite, it is also very unlikely to save space. In
> fact, because of the way integer values are stored, it is very
> likely to use more space.
>
Jay, in most cases yes, but there are cases with several integers that should
be used together in ordering, where the space is wasted
On Tue, Jul 20, 2010 at 7:38 PM, Ed Hawke <
edward.ha...@hawkeyeinnovations.co.uk> wrote:
> I was running a system using an old sqlite3.dll (version 3.0.8, I believe).
>
> Updating to the latest version (3.6.23) causes my program to run
> incredibly slowly.
>
>
Can you provide more details about
On Tue, Jul 20, 2010 at 5:14 PM, Pavel Ivanov wrote:
> > Are there reasons not to implement optimization in the first case? Except
> > for this is not most requested one :)
>
> I guess because this case is highly specific and its behavior should
> depend on particular
SQLite Expert (which I use extensively in the Pro version) enforces
> displaying types as declared. So if you declare your column as INTEGER
> (or INT, ...) then it will display integers regardless of the actual
> individual data type using common conversions. This is a side effect
> of the
when several small fields exist and they should be
indexed, one can pack them into the id (rowid) to save space, and the first syntax
will allow querying more naturally
Thanks,
Max Vlasov
maxerist.net
On Fri, Jul 16, 2010 at 5:14 PM, Jim Wilcoxson wrote:
> >>
> > You mean, like this?
> >
> > pragma page_count
> > pragma page_size
> > size = page_count*page_size
> >
> >
> Doh. Nevermind. I see you said table, not database. :)
>
>
If I'm asked, the syntax PRAGMA
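The page_count * page_size arithmetic quoted above is easy to verify from any SQLite binding; here is a minimal sketch in Python (table and file names invented for the test):

```python
import os
import sqlite3
import tempfile

# Create a throwaway database file and put some data into it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t(x)")
db.executemany("INSERT INTO t VALUES (?)", [("a" * 100,)] * 1000)
db.commit()

# Database size = page_count * page_size, as the quoted post suggests.
page_count = db.execute("PRAGMA page_count").fetchone()[0]
page_size = db.execute("PRAGMA page_size").fetchone()[0]
assert page_count * page_size == os.path.getsize(path)
```

Note this gives the whole file size; per-table sizes still need something like sqlite3_analyzer.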
On Fri, Jul 16, 2010 at 4:52 PM, Jay A. Kreibich wrote:
>
> Check out sqlite3_analyzer. This is available in binary format only
> on the SQLite website. http://sqlite.org/download.html
>
>
Thanks, the tool is really great, but I'm just wondering whether the code used there
could be part
Hi,
I always wanted a way to calculate how much space a table occupies.
As far as I can see from the archive, there's no out-of-the-box solution
(CMIIW).
Recently I made a query that probably solves this, but it has some
limitations and there are still issues unsolved.
So, let's
Riccardo,
please do the following query
SELECT typeof(basket) AS tp FROM Data WHERE tp <> 'integer'
As you may know, SQLite accepts any value regardless of the declared type, so
you possibly could have inserted a real value without knowing about it. This
query will probably show the rows with such
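The effect is easy to demonstrate; a sketch with a hypothetical table standing in for Riccardo's Data:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Data(basket INTEGER)")
# INTEGER affinity does not reject mismatched values:
# 2.5 stays real, '3x' stays text.
db.executemany("INSERT INTO Data VALUES (?)", [(1,), (2.5,), ("3x",)])

rows = db.execute(
    "SELECT basket, typeof(basket) FROM Data WHERE typeof(basket) <> 'integer'"
).fetchall()
# The real and text values show up; the genuine integer does not.
assert rows == [(2.5, "real"), ("3x", "text")]
```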
On Wed, Jul 14, 2010 at 2:12 PM, D. Richard Hipp wrote:
> The signature of the VFS has changed slightly - the xShmOpen() method
> has been removed. Implementations are now expected to automatically
> create the shared memory on the first call to xShmMap(). And the
> xShmClose()
>
> > In testing the latest SQLite snapshot with WAL enabled, it seems that
> > there's no way to use a database in a read-only location.
> >
>
> Documentation on the developers' current view of this issue can be found
> here:
>
>
>
In my opinion it's ok, WAL already has special conditions on
On Mon, Jul 12, 2010 at 5:00 PM, Alexey Pechnikov wrote:
> Maxim, please show example here and the link to your implementation.
>
>
Alexey,
to illustrate I prepared a screen shot
http://www.maxerist.net/downloads/temp/bind_iterate.png
excel and cells to be copied are at
ignoring extra fields with padding.
I don't know whether the clipboard content can be used in sqlite
ideologically, but in my case it saved additional time allowing importing
for example from excel without the intermediate file (csv).
Thanks
Max Vlasov
www.maxerist.net
>>
> >
> > Simon, you gave an interesting explanation, but does this rule work in
> > general? I mean there are many models, many sizes and so on.
>
> Don't know. You could test it. Write a program that creates a file half
> the size of the drive, then writes to random parts of it timing each
>
Second Proposed Change:
>
> Remove the WAL mode from "PRAGMA journal_mode". The journal_mode pragma
> only specifies the various rollback journal modes. Enable the WAL using a
> separate pragma such as "PRAGMA wal=ON; PRAGMA wal=OFF;"
>
> Thoughts? Comments? Other suggestions?
>
>
Maybe it's
> You want "PRAGMA main.journal_mode"
>
> A "PRAGMA journal_mode;" (without the "main.") shows you the default
> journal
> mode used by newly created databases, which is always "DELETE" unless you
> have changed it with a prior "PRAGMA journal_mode=MODE" command.
>
> Though, I will admit, this is
> >> (I guess it well might not on an SSD disk, but on a conventional
> >> rotational disk, pager could read several pages ahead with one seek -
> >> but does it?)
> >
> > No, the pager does not. Among other things, my feeling is that the
> > locality of pages is not very strong, unless the
Alexey,
I read this sentence, but it didn't help.
So I suppose there's a bug in PRAGMA journal_mode logic
Steps to reproduce.
1. Create an empty base with some table.
Look at offsets 18 and 19; they are both 1, which is OK: the database is compatible
with the full range of sqlite3 versions.
2. Do PRAGMA
On Sat, Jun 26, 2010 at 7:27 AM, zhangzhenggui wrote:
> tbl_test maybe like this:
> create table tbl_test(f1, f2, f3);
>
> Now, I want to get the num of records which (f1, f2) are distinct.
>
> I try "select count(distinct f1, f2) from tbl_test", but an error occurs: SQL
>
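In SQLite, count(DISTINCT ...) takes a single argument, but the distinct-pair count can be obtained from a subquery; a sketch:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tbl_test(f1, f2, f3)")
db.executemany("INSERT INTO tbl_test VALUES (?, ?, ?)",
               [(1, 1, "a"), (1, 1, "b"), (1, 2, "c"), (2, 1, "d")])

# COUNT(DISTINCT f1, f2) is a syntax error, but counting the rows of
# a DISTINCT subquery gives the number of distinct (f1, f2) pairs.
n = db.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT f1, f2 FROM tbl_test)"
).fetchone()[0]
assert n == 3  # the pairs (1,1), (1,2), (2,1)
```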
ng randomly until specially
interrupted. While the program reads and writes, you check PRAGMA
integrity_check; from time to time and stop if the result shows corruption.
If you can't reproduce the problem on your machine you can send a specially
prepared version to the user and ask him to perform
On Sat, Jun 5, 2010 at 11:01 AM, Frank Church wrote:
>
> On checking the sqlite3.dll docs it states the only additional requirement
> for it is msvcrt.dll.
>
Frank, as far as I know it's just a variation of sqlite3.dll that uses the MS C
run-time dynamically linked vs
Hi,
I tried to figure out the simplest query returning an empty result set without
binding to any existing table.
The query
SELECT 1 WHERE 1=2
worked, but it looked a little strange ) and besides it didn't work in
mysql. Will it work in future versions of sqlite, or would I be better off using a
query
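For what it's worth, the FROM-less form does work in SQLite, and wrapping a dummy derived table is one portable alternative; a sketch:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# SQLite accepts a FROM-less SELECT with a WHERE clause:
assert db.execute("SELECT 1 WHERE 1=2").fetchall() == []

# A derived table makes the same query acceptable to engines that
# insist on a FROM clause:
assert db.execute("SELECT 1 FROM (SELECT 1) AS t WHERE 1=2").fetchall() == []
```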
. If I had power, I'd be
glad to calculate and see how many sqlite-related reads and writes are
taking place on the whole earth. I'm sure it's a big number :)
Thank you for sqlite!
Max Vlasov
On Sat, May 29, 2010 at 5:57 PM, D. Richard Hipp <d...@hwaci.com> wrote:
>
> Thanks, everybody,
On Thu, May 27, 2010 at 3:07 PM, Michael Ash wrote:
> ...These are large tables (52,355 records in facility and 4,085,137 in
> release_cl).
>
> ...
> sqlite> explain query plan
> ...> SELECT name,score
> ...> FROM facility f, (SELECT facilitynumber,SUM(score_rev) AS
On Wed, May 26, 2010 at 6:19 PM, Pavel Ivanov wrote:
> > But just
> > wondering, was allowing to create such field intentional? As I suppose
> such
> > field is complete phantom since most sql queries will interpret rowid
> > internally and won't allow access this user field
While implementing a table intended for arbitrary table storage, I
automatically named some field rowid, not realizing at the moment that it would
have a name conflict with SQLite's internal rowid name. Later I discovered
this, and (not a big deal) recreated the table with a different name. But just
> where exists (
> select 1 from master_table where
>master_table.id=detail_table.masterid and
>masterfieldtocheck = okvalue and
>masterfield2tocheck = okvalue2);
>
> -- or
>
> where detail_table.masterid in (
> select id from master_table where
>masterfieldtocheck = okvalue and
We all know UPDATE is sometimes limited, so in order to do some complex
conditional updating we can only rely on the complexity of the WHERE clause.
I would like to update my detail table based on master properties (so to set
some value only if the corresponding master record fits some conditions).
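A runnable version of the correlated-EXISTS update, with master/detail tables and values invented for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE master_table(id INTEGER PRIMARY KEY, flag);
CREATE TABLE detail_table(masterid INTEGER, value);
INSERT INTO master_table VALUES (1, 'ok'), (2, 'bad');
INSERT INTO detail_table VALUES (1, 0), (2, 0);
""")

# Update details only where the corresponding master row qualifies.
db.execute("""
UPDATE detail_table SET value = 42
WHERE EXISTS (
    SELECT 1 FROM master_table
    WHERE master_table.id = detail_table.masterid
      AND master_table.flag = 'ok')
""")
assert db.execute(
    "SELECT masterid, value FROM detail_table ORDER BY masterid"
).fetchall() == [(1, 42), (2, 0)]
```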
Gilles,
For Delphi I successfully used files from http://www.aducom.com to
statically link sqlite files compiled with bcc (Borland command-line c
compiler freely available now) with Delphi. Also the components of
aducom.com will allow you to use all the power of Delphi database components
with
> I have a table with two fields defining range of ids of another table,
> > IdFrom and IdTo. I would like to use this information (multiply rows as
> a
> > SELECT result of this IdFrom and IdTo) in joins.
>
> Something like this?
>
> select * from MyTable join Ranges
> on (MyTable.id between
a way to use them in joins directly, but if it's not possible,
at least to find an effective way to populate these ranges to a temporary
table. I know there's always a way to do this algorithmically, but just
wondering if it's possible with sql.
Thanks,
Max Vlasov
maxerist.net
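The BETWEEN join suggested in the quote above can be completed along these lines (table and column names taken from the thread, data invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE MyTable(id INTEGER PRIMARY KEY, name);
CREATE TABLE Ranges(IdFrom, IdTo);
INSERT INTO MyTable VALUES (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');
INSERT INTO Ranges VALUES (2, 3);
""")

# Each range row multiplies into the MyTable rows it covers.
rows = db.execute("""
SELECT MyTable.id FROM MyTable JOIN Ranges
  ON MyTable.id BETWEEN Ranges.IdFrom AND Ranges.IdTo
ORDER BY MyTable.id
""").fetchall()
assert rows == [(2,), (3,)]
```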
writes
from several computers you will at least have probability arguments on your
side )
Max Vlasov,
maxerist.net
On Wed, Apr 28, 2010 at 9:43 AM, Jean-Christophe Deschamps
<j...@q-e-d.org>wrote:
> Hi gurus,
>
> I'm aware of the limitations that generally preclude using SQLite o
> ...As I add more restrictions on the where-clause it
> tends to slow down. I realize that this is due to my indexes, but can't
> add a lot of indexes because it slows down the insert speed which is
> more important than the query speed.
>
Nathan, maybe you already knew but just in case...
if
o exclude windows system cache as a player.
Max Vlasov,
maxerist.net
he reserved lock byte.
** A PENDING_LOCK is obtained by locking a designated byte different from
** the RESERVED_LOCK byte.
It was a note about windows, but there's also a phrase "The same locking
strategy and byte ranges are used for Unix."
Max Vlasov,
maxerist.net
One process, two threads.
> I have found in documentation, that is not recommended to pass one
> connection
> from one thread to second one.
>
Yes, you're right, I forgot about this; the only exception is sqlite3_interrupt,
which can be called from another thread, otherwise its existence would make no sense.
But
> I need an application consisting two threads.
> In one of them I need to store incoming "messages" (one message is 1 to 8
> bytes of data) to temporary table existing only in memory.
> It needs to be fast, storing hundreds of messages per second.
> There i have a trigger deleting old rows and
>
> Would you care to repeat those two SELECTs, but after making indices on the
> X and Y columns ?
>
Simon, Tim, I forgot to mention, there were also two indexes in the test db,
on X and on Y. Without them 1.8 seconds and 1/10 data flow would not be
possible ))
On Mon, Mar 22, 2010 at 2:52 PM,
>
> Assuming a table where Latitude column and Longitude column each have
> their own index:
>
> perform select #1 which returns the rowids of rows whose latitude meets
> criteria
> INTERSECT
> perform select #2 which returns the rowids of rows whose longitude meets
> criteria
>
>
Ok, just test.
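The suggested rowid INTERSECT is straightforward to try; a sketch with a schema invented for the test:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE points(lat REAL, lon REAL)")
db.execute("CREATE INDEX idx_lat ON points(lat)")
db.execute("CREATE INDEX idx_lon ON points(lon)")
db.executemany("INSERT INTO points VALUES (?, ?)",
               [(10, 10), (10, 50), (50, 10), (50, 50)])

# Rowids satisfying each single-column criterion, intersected.
rows = db.execute("""
SELECT rowid FROM points WHERE lat BETWEEN 0 AND 20
INTERSECT
SELECT rowid FROM points WHERE lon BETWEEN 0 AND 20
""").fetchall()
assert rows == [(1,)]  # only (10, 10) satisfies both criteria
```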
> First, I'm sure it is nearly impossible to do this as a
> guaranteed, atomic operation on most OSes and filesystems. ...
>
> Second, if this is meant to look like a cleanup operation on the
> original file, the original file (including any filesystem meta-data)
> should be kept intact.
> This means that to VACUUM a SQLite database of size X, you need at
> least 2X of _additional_ free disk space available. That seems rather
> wasteful, just looking at it as a SQLite user. Although
> programmatically there may be reasons for it that I'm not aware of.
>
>
Hmm, did some
> I wonder if the operator "AND" (in capitals letters) is yet available and
> different from the basic term "and" (in lower letters).
>
>
Make sure you compiled the sources with SQLITE_ENABLE_FTS3_PARENTHESIS,
since according to docs
SQLITE_ENABLE_FTS3_PARENTHESIS
This option modifies the
> When I enter a new book, I want the author to default to the last author in
> the database. (For the sake of this example you can ignore tricks involving
> deleting authors and reusing ids.)
>
I'm not sure I understood you correctly, but the query
INSERT INTO Books
(Author, Title)
VALUES
> Also it's quite known that
> creating index after inserting all rows is much faster than creating
> index before that. So it can be even beneficial in inserting huge
> amount of rows somewhere in the middle of the work: first delete all
> indexes, then insert rows, then create indexes once more.
nci...@aquarelo.com> wrote:
> Thanks Dannis,
>
> The problem is a little bigger. I must have 2 instances of same table:
> original and latest. Then my problem is what is the best way to:
> - transform 'original' with same data as 'latest'. This is 'save'.
> - transform 'latest' with same data
As I recall, sqlite probably doesn't care about string types included in the
CREATE TABLE field list, so TEXT, VARCHAR, VARCHAR(200) are identical from
its point of view. Today I needed to increase the number of chars in a VARCHAR
declaration since the library that uses sqlite actually does care about
> The writer application must be failsafe, as much as possible (acoustic
> emission recording devices); I simply can not afford that a reader makes a
> select and because of a programming error the acquisition be blocked. I had
> this just by opening sqliteman.
>
> The recording rate is variable;
> Then I tried in a loop with 2 programs to write / read in parallel and it
> seems to work without problems. Can anyone advise if this has any chance to
> work (or say it would definitely NOT work)?
>
> As a short summary: would it be interesting for anyone to enable read-only
> open with a
On Wed, Mar 3, 2010 at 8:57 AM, Collin Capano wrote:
> The databases in question don't seem to be
> corrupt; I can open them on the command line and in python programs
> (using pysqlite) and can read triggers from them just fine. It's just
> when I try to vacuum and
On Tue, Mar 2, 2010 at 6:36 PM, Pavel Ivanov wrote:
> OK, now I see the problem, but sqlite3_total_changes() will not help
> here too - it behaves the same way as sqlite3_changes(), it doesn't
> accumulate changes over several statements.
>
Hmm... are you sure about this?
A
On Mon, Mar 1, 2010 at 6:31 PM, Pavel Ivanov wrote:
> sqlite3_changes() is exactly what you should use in this case. And I
> didn't understand why you found it unsuitable for you?
>
> Pavel
>
I think I understand his confusion. Imagine if for some reason you don't
know
On Tue, Mar 2, 2010 at 2:41 AM, Jason Lee wrote:
> Hi all,
>
> I've been playing around with the FTS3 (via the amalgamation src) on a
> mobile device and it's working well. But my db file size is getting
> pretty big and I was looking for a way to compress it.
>
Jason, can
On Tue, Mar 2, 2010 at 10:54 AM, Paul Vercellotti wrote:
> Now I'm guessing that storing all those blobs will slow down access to the
> main tables (assuming records are added gradually - most without associated
> blobs, some with), because records would be spread out over
I did some search and as I see, file change counter (dbFileVers field of the
header) is not directly or indirectly accessible for reading. But sometimes
one would like to have such feature. For example, every layer above sqlite
has likely its own cache (Delphi db engine for example). And having
On Thu, Feb 25, 2010 at 6:53 PM, Pavel Ivanov wrote:
> > So is such behavior documented and can be used, or should I still avoid
> > keeping such "live" statements in a db-shared environment?
>
> It's documented that sqlite3_finalize AND sqlite3_reset release all
> locks acquired
Hi,
I thought that prepared statements in a non-finalized state aren't compatible
with the locking mechanism in a db sharing environment, i.e. if I want any of
several processes not to hang waiting for an EXCLUSIVE lock, none of the others
should use non-finalized prepared statements. But today I found that
> The fts implementation does work in response to data gotten from
> > SQLite calls, and feeds data back out via SQLite calls, which should
> > all start throwing errors and causing things to unwind. Most
> > expensive fts operations involve lots of subsidiary queries into the
> > SQLite core
>
Simon,
you mentioned full-text search.
I just tried to search for mentioning of sqlite3_interrupt in the sqlite
sources
The main one is the implementation of the function itself, which just sets the
isInterrupted variable:
void sqlite3_interrupt(sqlite3 *db){
db->u1.isInterrupted = 1;
}
but all of
Hello, Jérôme
Nice to hear you finally joined us with this really interesting discussion )
>
> To Max Vlasov:
>
> > in sorted order to sqlite base other 5 minutes, so about 10 minutes it
> > total. First 5 minutes was possible since we exchange only offsets,
> > not d
> I'm really surprised you're not seeing something from an increased
> cache size. It has always made a very noticeable difference in my own
> manipulations, but I think the largest table I've ever worked with
> "only" had 6M rows.
>
Jay, increasing cache size really helps, but till some
> This is appx. 500MB cache, why not trying with 2,000,000 cache size ? :-)
>
>
Hmm, I managed to increase it to only a 1,000,000 (x1024) size; larger values
eventually bring an "Out of memory", and this value (1G) allows up to
6,000,000 fast records for a 100-byte field per record index. Still good,
Marcus,
although increasing cache size is a good method, it may sometimes give
unpredictable results (in terms of performance).
I looked at the vdbe code (EXPLAIN CREATE INDEX ... ) of the index creation
and it seems like there is no special sorting algorithm (CMIIW please).
Excluding all "make
Jerome,
It's an interesting challenge, thanks for the post.
I tried to research more and did some tests.
My test database contains a table with 10,000,000 records of text 100
chars in length
CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[Text] TEXT
)
I suppose your
>
>
> The sqlite3_stmt_status() interface was designed for this purpose.
> http://www.sqlite.org/c3ref/stmt_status.html
>
Thanks for the info, I hope it will be extended also with other counters
I did some tests with this Rowid/Id trick. A larger base, 22 MB, 100,000
records (220 bytes per record
On Fri, Jan 29, 2010 at 6:02 AM, Doyel5 wrote:
> I deleted all indexes, kept only the indexes on the temporary tables,
> namely
> - tempSimArgs and tempPartArgs, and seemingly my original query's running
> time decreased to around 1 sec (when I ran the query the second time).
> I expect process B able to
> read the old data and displays appropriately. But, process B seems to be
> blocked. Why is this so as I thought SQLite handle concurrency as in this
> case where there are multiple reads and one single write ?
>
After Pavel's explanation just reread
>
> > SQLite allows multiple readers OR a single writer to access the database
> simultaneously.
> From the SQLite doc, as long as no transaction is pending, other process
> can
> read or obtain reserved lock for write.
>
the docs say: "Locks are not acquired until the first read or write
> Maybe I'll try a binary search, or something else along those lines.
>
You can mix binary search with sqlite as storage. The idea is to place your
words ordered by text and use rowid as the index.
So after creating a new table
CREATE TABLE Table2 AS SELECT * FROM Table ORDER BY Text
you now
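The idea can be sketched like this (names assumed from the thread; Table2 holds the words sorted by text, so rowid order equals text order):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Table2(Text)")
words = sorted(["apple", "banana", "cherry", "date", "fig"])
db.executemany("INSERT INTO Table2(Text) VALUES (?)", [(w,) for w in words])

def word_at(i):
    # rowid is 1-based and consecutive in this freshly built table
    return db.execute("SELECT Text FROM Table2 WHERE rowid = ?",
                      (i + 1,)).fetchone()[0]

# Classic binary search, fetching each probe word from SQLite.
lo, hi, target = 0, len(words) - 1, "cherry"
found = False
while lo <= hi:
    mid = (lo + hi) // 2
    w = word_at(mid)
    if w == target:
        found = True
        break
    elif w < target:
        lo = mid + 1
    else:
        hi = mid - 1
assert found
```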
>
>
>
> One of many ways would be to precompute the min/max into a separate
> table and then query that table whenever you need the min/max.
>
Only if one has full control over how the data changes, which with such a
widespread format as sqlite is almost never the case. I mean, without
triggers you
Mike,
as the docs on the page http://www.sqlite.org/c3ref/stmt.html says:
1. sqlite3_prepare_v2()
2. sqlite3_bind_*()
3. sqlite3_step() one or more times
4. sqlite3_reset()
5. goto 2 or sqlite3_finalize().
As you may know sqlite3_prepare_* compiles your SQL query into a VDBE
micro-program with
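In Python's sqlite3 module the same lifecycle is hidden behind execute(); re-running a parameterized query reuses the compiled statement via the module's statement cache, which is the point of repeating steps 2-4 instead of re-preparing (a sketch):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(x)")

# One SQL string, compiled once, then bound and stepped many times.
sql = "INSERT INTO t(x) VALUES (?)"
for i in range(100):
    db.execute(sql, (i,))  # bind -> step -> reset under the hood
db.commit()

assert db.execute("SELECT COUNT(*), SUM(x) FROM t").fetchone() == (100, 4950)
```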
As I think, the code you were surprised to see was a fix for this issue:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg35864.html
as Jeremy Spiegel wrote in that post:
> when I try calling DeleteFileW on a file with a handle open on
> that file, I see DeleteFileW returning 1 (success)
>
> All you really need to do is splice your code between the VFS that
> the SQLite engine sees and the native VFS layer that comes with the
> distribution, adding a bit of extra code to xRead() and xWrite() to
> munge the data.
>
I implemented this approach once, it worked, VFS also so
>
> After much playing about, I have determined that it is necessary to violate
> the guidelines on not using INDEXED BY and NOT INDEXED in an attempt to
> optimise performance.
Although you're very determined about your conclusions, I saw a
misunderstanding about INDEXED BY in your statements.
On Tue, Jan 12, 2010 at 10:22 AM, zhangzhenggui wrote:
> Dear friend,
>When I execute a SQL with C/C++ API, I want to know which table is
> operated by this SQL. Is there any way to do this except parse the SQL?
>
>
No approach is perfect; here is another one.
On Tue, Jan 12, 2010 at 5:28 AM, ve3meo <holden_fam...@sympatico.ca> wrote:
> Max Vlasov <max.vla...@...> writes:
>
> For 3.5.4 and 3.6.17 which executed in ~240s:
> "order", "from", "detail"
>
> And for 3.6.20 which executed the same
On Mon, Jan 11, 2010 at 2:33 AM, Tom Holden wrote:
> I have been trying out a number of SQLite managers, one test being the
> execution time for the same query on the same database on the same computer.
> The scattering of results was very surprising with a spread on
On Mon, Jan 11, 2010 at 4:17 AM, Hub Dog wrote:
> I think I found a performance regression bug of sqlite 3.6.18. A sql will
> cost 1800 seconds to return the query result with sqlite 3.6.18 and with
> the
> previous version it only cost about 170 seconds.
>
I have a
On Mon, Jan 11, 2010 at 12:56 AM, D. Richard Hipp <d...@hwaci.com> wrote:
>
> On Jan 10, 2010, at 4:50 AM, Max Vlasov wrote:
>
> > Documentation says that INTERSECT is implemented with temporary tables
> > either
> > in memory or on disk. Is it always the case?
Documentation says that INTERSECT is implemented with temporary tables either
in memory or on disk. Is it always the case? The problem is that if I have
several selects (six for example), each producing thousands of results while
the intersection is only hundreds, the query takes about a minute to
Thanks for the answers. In the first place I wanted to use rowid to save
space (since rowids always exist). After reading the replies I changed the
declaration of ID to one without AUTOINCREMENT and manually fill in the
consecutive values starting at the current max(rowid)+1. So rowids are still
used, but now
On Thu, Jan 7, 2010 at 3:56 PM, Igor Tandetnik <itandet...@mvps.org> wrote:
> Max Vlasov wrote:
> > I have a query "INSERT ... SELECT" and after it is performed I have to
>
> If by autoincrement you mean a column actually declared with the
> AUTOINCREMENT k
I have a query "INSERT ... SELECT" and after it is performed I have to
store the range of rowids (autoincrement) of the inserted rows. While max(rowid)
for the right bound seems ok, assuming max(rowid)+1 for the left bound (before
the query) depends on whether there were deletes from the table
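One way to make the left bound independent of prior deletes is to record max(rowid) inside the same transaction just before the INSERT ... SELECT; a sketch with tables invented for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE src(v);
CREATE TABLE dst(id INTEGER PRIMARY KEY, v);
INSERT INTO src VALUES ('a'), ('b'), ('c');
INSERT INTO dst(v) VALUES ('old'), ('older');
DELETE FROM dst WHERE v = 'older';  -- a prior delete; max(rowid) still works
""")

with db:  # single transaction: no other writer can interleave
    before = db.execute("SELECT IFNULL(MAX(rowid), 0) FROM dst").fetchone()[0]
    db.execute("INSERT INTO dst(v) SELECT v FROM src")
    after = db.execute("SELECT MAX(rowid) FROM dst").fetchone()[0]

# The inserted rows are exactly the rowids in (before, after].
inserted = db.execute(
    "SELECT COUNT(*) FROM dst WHERE rowid > ? AND rowid <= ?",
    (before, after)).fetchone()[0]
assert (before, after, inserted) == (1, 4, 3)
```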
I noticed that setting temp_store to DISK surprisingly improved the
performance in some cases comparing to the same operation when it was set to
MEMORY. Looking at the numbers I noticed that in case of MEMORY one
operation that actually does select spread over a large table always led to
big data
On Fri, Jan 1, 2010 at 8:11 PM, Bert Nelsen wrote:
> So SQLite looks at both the database on the disk and in memory?
> Wouldn't that be difficult???
>
I don't think only memory can be used. Imagine you have a very
big transaction, 1,000,000 inserts. As
> Another question would be: When I use transactions, and I said
> ".BeginTrans"
> and insert new records and then, before saying ".CommitTrans", I query the
> records, they seem to be already saved. Can you tell me why this is so?
> Does
> a select command automatically trigger a ".CommitTrans"?
The code I use can calculate data flow for sql queries (summing xRead iAmt
in VFS) and I noticed that many variations of INSERT SELECT led to very big
data flow (multiplication of the db size). I thought that such queries can
be optimized if both tables are indexed accordingly and finally the
It's an interesting topic. I tried to search the web, and at least two
well-known programs, Picasa and iTunes, probably expect sqlite3.dll to be
located somewhere in a shared place (probably the system32 folder), and some
other software silently replaced this dll with its own copy having this
entry
SQLite-style table. Maybe, maybe not, fts1 did something like
> that and it got terribly slow once the fts table had a few tens of
> thousands of documents. The problem was that the tokens were
> distributed across a large portion of the index, so data locality went
> down the drain
Even then, you
> wouldn't have information about hit counts. To do it reasonably would
> require redesigning the data format to take this use-case into
> consideration.
>
> -scott
>
> On Wed, Dec 23, 2009 at 3:56 AM, Max Vlasov <max.vla...@gmail.com> wrote:
> >
Is it theoretically possible to allow some kind of sql SELECT query
operations with tokens?
Particularly, one could implement suggestions similar to Google suggestions in
Web search, when for example "some" is typed in a search box and a listbox
pops up with this database's tokens starting with these
On Tue, Dec 22, 2009 at 1:22 PM, Evilsmile wrote:
> Hello,
>
> My sqlite version is 3.5.1 and there are a lot of db corruption in my
> system.
>
>
Please, let us know more about your language/platform
On Fri, Dec 18, 2009 at 6:27 PM, Gianandrea Gobbo wrote:
> I'm using sqlite (2.8) on an embedded product, running a Linux kernel.
> I'm experiencing sometimes a database corruption, and listing some
> tables contents gets me a "SQL error: database disk image is malformed".
> Ok,
>
> For ex, If I ran for 200,000 inserts, first 20,000 inserts were done in 9
> secs, but last 20,000 inserts (from 180,000th to 200,000) took almost 110
> secs. It is more than 10 times than what it was initially. These results
> were consistent across all iterations of simulation I did.
>
>
I
On Wed, Dec 16, 2009 at 9:30 AM, Raghavendra Thodime wrote:
> I did try using batch of transactions with synchronous=OFF PRAGMA set. The
> performance improved slightly. But as db file started to grow larger and
> larger in size, the performance degraded considerably. Is it