Thank you, Igor. Processing time: 5 seconds. :-)
I solved my "inner/outer" problem by compiling SQLite with
SQLITE_ENABLE_STAT2=1.
That flag makes the query planner better at choosing the inner table!
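For anyone landing here later: SQLITE_ENABLE_STAT2 is a compile-time option, and the statistics only get collected when you run ANALYZE. A minimal Python sketch of the general mechanism (standard builds expose sqlite_stat1; the stat2 histogram table additionally appears only in builds compiled with that flag; table and column names here are illustrative):

```python
import sqlite3

# ANALYZE populates the statistics tables (sqlite_stat1, plus
# sqlite_stat2/stat4 in builds with the matching compile-time flag)
# that the planner consults when choosing the inner table of a join.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE big(id INTEGER PRIMARY KEY, v INTEGER);
    CREATE INDEX big_v ON big(v);
    CREATE TABLE small(id INTEGER PRIMARY KEY, v INTEGER);
""")
con.executemany("INSERT INTO big(v) VALUES (?)",
                [(i % 100,) for i in range(1000)])
con.executemany("INSERT INTO small(v) VALUES (?)",
                [(i,) for i in range(10)])
con.commit()

con.execute("ANALYZE")  # gather table/index statistics
for tbl, idx, stat in con.execute(
        "SELECT tbl, idx, stat FROM sqlite_stat1"):
    print(tbl, idx, stat)  # e.g. ('big', 'big_v', '1000 10')

# The plan now reflects the measured row counts:
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM big JOIN small ON big.v = small.v"
).fetchall()
print(plan)
```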
From: Igor Tandetnik <itandet...@mvps.org>
To: sqlite-users@sqlite.org
Date: Wed, 11 Nov 2009 12:03:06 -0500
Subject: Re: [sqlite] optimiza
I've read http://www.sqlite.org/optoverview.html but don't find my
answer there.

In the following query, WOIDS has 4 million rows and CORNFIX has 25,000
rows.

UPDATE WOIDS
SET corn = 1
WHERE EXISTS
  (
  SELECT *
  FROM CORNFIX
  WHERE (cornfix.col_1 = woids.t
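The archived query is cut off, but the correlated-EXISTS pattern can be sketched on toy tables (the column name `town` is hypothetical; an index on the CORNFIX lookup column is what lets the subquery probe rather than scan):

```python
import sqlite3

# Toy reconstruction of the correlated-EXISTS update. Column names
# beyond WOIDS/CORNFIX/corn/col_1 are made up, since the original
# query is truncated in the archive.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE WOIDS  (id INTEGER PRIMARY KEY, town TEXT,
                         corn INTEGER DEFAULT 0);
    CREATE TABLE CORNFIX(id INTEGER PRIMARY KEY, col_1 TEXT);
    -- lets the EXISTS subquery probe an index instead of scanning
    CREATE INDEX cornfix_col_1 ON CORNFIX(col_1);
""")
con.executemany("INSERT INTO WOIDS(town) VALUES (?)",
                [("a",), ("b",), ("c",)])
con.execute("INSERT INTO CORNFIX(col_1) VALUES ('b')")

con.execute("""
    UPDATE WOIDS
    SET corn = 1
    WHERE EXISTS
      (SELECT * FROM CORNFIX WHERE cornfix.col_1 = woids.town)
""")
print(con.execute("SELECT town, corn FROM WOIDS ORDER BY town").fetchall())
# -> [('a', 0), ('b', 1), ('c', 0)]
```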
Mark Gilbert wrote:
>
> In fact we stumbled across the solution, and I am amazed we didn't
> think of it earlier, and no one suggested it. Basically our LEAVES
> table doesn't have an index!
>
> As soon as we added an index, the process sped up by 17000% :-)
>
> However, I have some questi
Shouldn't leafID be the primary key of your LEAVES table, and thus already
indexed? What does your CREATE TABLE statement look like? I'd expect:

CREATE TABLE Leaves (LeafID INTEGER PRIMARY KEY AUTOINCREMENT, ... other
columns ... )

As far as the CREATE INDEX failing, no idea there, sorry.

Sam
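Sam's point can be checked directly: an INTEGER PRIMARY KEY *is* the rowid, so lookups on it need no extra index, while a filter on any other column (TwigID here is a hypothetical name) stays a full scan until you index it. A sketch using EXPLAIN QUERY PLAN:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Leaves(
    LeafID INTEGER PRIMARY KEY AUTOINCREMENT,
    TwigID INTEGER, name TEXT)""")

def plan(sql):
    # Column 3 of an EXPLAIN QUERY PLAN row is the human-readable detail.
    return " ".join(str(r[3]) for r in con.execute("EXPLAIN QUERY PLAN " + sql))

# Primary-key lookup: already a SEARCH via the rowid, no index needed.
print(plan("SELECT * FROM Leaves WHERE LeafID = 7"))
# Non-key lookup: a SCAN of the whole table until indexed.
print(plan("SELECT * FROM Leaves WHERE TwigID = 7"))

con.execute("CREATE INDEX leaves_twig ON Leaves(TwigID)")
# Same query now probes the new index instead of scanning.
print(plan("SELECT * FROM Leaves WHERE TwigID = 7"))
```

This is the mechanism behind the 17000% speedup above: the per-twig lookup drops from O(table size) to an index probe.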
Date: 28 Feb 2008 21:25:35 -0500
From: "Samuel Neff" <[EMAIL PROTECTED]>
Subject: Re: [sqlite] Optimization Question for SQLite Experts
To: "General Discussion of SQLite Database"
You obviously have the set of UIDs at the time of the loop, so how about
building one big SELECT ... FROM ... WHERE uid IN (list_of_uids_comma_separated)?
It'll be one single query (or you can break it down into blocks of 50,
100, etc.).

That will save the overhead of generating the queries over and over again.
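The batching suggestion above, sketched with parameter placeholders instead of string-concatenated ids (table and column names are illustrative):

```python
import sqlite3

# One IN (...) query instead of one SELECT per UID. Generating "?"
# placeholders avoids quoting problems that come with concatenating
# ids into the SQL string.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Leaves(LeafID INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO Leaves VALUES (?, ?)",
                [(i, f"leaf{i}") for i in range(1, 8)])

uids = [2, 3, 5, 7]
placeholders = ",".join("?" for _ in uids)  # "?,?,?,?"
rows = con.execute(
    f"SELECT LeafID, name FROM Leaves WHERE LeafID IN ({placeholders})",
    uids).fetchall()
print(rows)  # -> [(2, 'leaf2'), (3, 'leaf3'), (5, 'leaf5'), (7, 'leaf7')]

# For very long uid lists, chunk into blocks (e.g. of 100) to stay
# under SQLite's bound-parameter limit.
def chunks(seq, n):
    for i in range(0, len(seq), n):
        yield seq[i:i + n]
```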
Here's two suggestions. First, the simple suggestion is instead of this..

for (z=0;z

> Folks.
>
> Looking for some advice from hardened SQLiters...
>
> ...
> For each twig we have to find all the leaves. The Leaves table has
> maybe 15000 records and we have a query where we search the Le
Folks.

Looking for some advice from hardened SQLiters...

- Our application uses an SQLite 3.4.1 database with 8 tables. 1 of
the tables may contain tens or maybe hundreds of thousands of records
with about 30 fields.
- The tables form a linked tree type hierarchy where one table is the
trunk
Mark Gilbert wrote:
> - We don't currently prepare SQL statements in advance; would this
> technique benefit from somehow preparing statements *before* that
> thread gets the lock on the database? Can we have multiple threads
> using the SAME database connection preparing SQL Queries at the same
> time
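On the first half of that question: in the C API you prepare once with sqlite3_prepare_v2() and then bind/step/reset per row, and a prepared statement belongs to the connection that compiled it, so it cannot be prepared ahead of time on a different connection. A sketch of the equivalent reuse pattern in Python's sqlite3 module, which keeps a per-connection statement cache (table names are illustrative):

```python
import sqlite3

# Statement reuse: executemany compiles the INSERT once and rebinds
# parameters per row, the same effect as prepare-once/bind-many in C.
# cached_statements sizes the per-connection prepared-statement cache.
con = sqlite3.connect(":memory:", cached_statements=128)
con.execute("CREATE TABLE Leaves(LeafID INTEGER PRIMARY KEY, TwigID INTEGER)")

rows = [(i, i % 10) for i in range(1, 1001)]
with con:  # one transaction for the whole batch, not one per row
    con.executemany("INSERT INTO Leaves VALUES (?, ?)", rows)

print(con.execute("SELECT COUNT(*) FROM Leaves").fetchone()[0])  # -> 1000
```

Wrapping the batch in a single transaction usually matters more than statement preparation: without it, SQLite commits (and syncs) once per INSERT.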
On 20.02.2008 at 14:03, Mark Gilbert wrote:
Folks.

Our application uses SQLite on Mac OS X. It is a central data hub for
a larger infrastructure and manages data coming in from some clients,
and requests for data from other clients, using our own XML-based
protocols via TCP/IP.

It's somewhat like a web server with a backend system supplyi