Hi,
On 07.02.20 09:25, Clemens Ladisch wrote:
Jürgen Baier wrote:
> CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1,ATT2) );
> CREATE TABLE staging ( ATT1 INT, ATT2 INT );
>
> Then I execute
>
> DELETE FROM main WHERE EXISTS (SELECT 1 FROM staging WHERE main.att1 =
> staging.att1 AND main.att2 = staging.att2)
>
> which takes
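The DELETE-with-EXISTS pattern from the quoted message can be reproduced in a few lines with Python's built-in sqlite3 module. The schema is copied from the thread; the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1, ATT2) );
    CREATE TABLE staging ( ATT1 INT, ATT2 INT );
""")
con.executemany("INSERT INTO main VALUES (?, ?)", [(1, 1), (1, 2), (2, 1)])
con.executemany("INSERT INTO staging VALUES (?, ?)", [(1, 2), (2, 1)])

# The DELETE from the thread: remove every main row whose
# (ATT1, ATT2) pair also appears in staging.
con.execute("""
    DELETE FROM main WHERE EXISTS (
        SELECT 1 FROM staging
        WHERE main.ATT1 = staging.ATT1 AND main.ATT2 = staging.ATT2
    )
""")
remaining = con.execute("SELECT ATT1, ATT2 FROM main").fetchall()
```

Only the (1, 1) row, which has no staging match, survives the delete.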
The difference is that #2 mentions only one field from ItemsME, namely IDR. The
value of that field comes from the index, the table itself doesn't need to be
read at all. It's not even clear why #2 bothers to join with ItemsME at all -
it's a no-op.
#1 uses more fields from ItemsME, so it
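The covering-index behaviour described above can be observed directly with EXPLAIN QUERY PLAN. A minimal sketch, assuming a small ItemsME table (the real schema is not shown in the thread):

```python
import sqlite3

# When a query touches only columns stored in an index, SQLite reads the
# index b-tree and never visits the table itself; the query plan says
# "COVERING INDEX" in that case. Column Points is an invented extra column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ItemsME (IDR INTEGER, Points INTEGER)")
con.execute("CREATE INDEX idx_idr ON ItemsME(IDR)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT IDR FROM ItemsME WHERE IDR = 5"
).fetchall()
detail = plan[0][-1]  # e.g. "SEARCH ItemsME USING COVERING INDEX idx_idr (IDR=?)"
```

The exact wording of the plan row varies between SQLite versions, but the COVERING INDEX marker is the signal that the table is skipped.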
Mira Suk wrote:
> test 1.
>
> query
> SELECT [IndexME].[IDI], [IndexME].[Status], [IndexME].[Icon], [IndexME].[Text]
> FROM [IndexME] LEFT OUTER JOIN [ItemsME]
> ON [ItemsME].[IDR] = [IndexME].[IDI] WHERE
> [IndexME].[Parent] = ?1 AND
>
> Ok then, show the result of prepending EXPLAIN QUERY PLAN to your statement.
> --
> Igor Tandetnik
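Mechanically, Igor's suggestion looks like this. The CREATE TABLE statements are a guess reconstructed from the column names in the quoted query, with only the referenced columns declared:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE IndexME (IDI INTEGER, Status INT, Icon INT, Text TEXT, Parent INT);
    CREATE TABLE ItemsME (IDR INTEGER);
""")
sql = """
    SELECT IndexME.IDI, IndexME.Status, IndexME.Icon, IndexME.Text
    FROM IndexME LEFT OUTER JOIN ItemsME ON ItemsME.IDR = IndexME.IDI
    WHERE IndexME.Parent = ?
"""
# Prepend EXPLAIN QUERY PLAN and read back one row per plan step.
plan = [row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql, (1,))]
for step in plan:
    print(step)
```

With no indexes defined, both tables show up as full scans; the point of the exercise is to see which steps turn into SEARCH once indexes exist.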
First of all, thanks for bearing with me :)
Functions:
TZB_MATCHRECURSIVE(int, int)
- disabled for this test - always returns 1; applies the filter recursively
TZB_ISCHILD(int)
- bitmask check
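For readers unfamiliar with application-defined functions, this is roughly how such functions get registered from Python. The bodies below are stand-ins, since the thread only describes TZB_MATCHRECURSIVE as "always return 1" and TZB_ISCHILD as a bitmask check (the mask value is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Name, argument count, callable. Once registered, the functions are
# usable anywhere an SQL expression is allowed.
con.create_function("TZB_MATCHRECURSIVE", 2, lambda a, b: 1)  # disabled: always 1
con.create_function("TZB_ISCHILD", 1, lambda flags: int(bool(flags & 0x4)))

row = con.execute("SELECT TZB_MATCHRECURSIVE(3, 7), TZB_ISCHILD(6)").fetchone()
```

Note that SQLite treats such functions as opaque: it cannot use an index to satisfy a WHERE clause built around them, which matters when profiling queries like the one above.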
Mira Suk wrote:
> query written here is a lot simplified (for example "Points" column is
> filtered using custom function) however main culprit seems
> to be LEFT OUTER JOIN as accessing that same column in query which only has B
> table in it is lightning fast.
>
> result
On the MC55 and MC70 we use with Sqlite 3.5.9:
PRAGMA temp_store = MEMORY
PRAGMA journal_mode = PERSIST
PRAGMA journal_size_limit = 50
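The same pragmas can be applied and read back from Python to confirm they took effect. A throwaway file database is used because journal_mode behaves differently on in-memory databases (they only support MEMORY or OFF):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
con = sqlite3.connect(path)

# Each PRAGMA can be queried back; journal_mode and journal_size_limit
# even return the effective value directly from the SET statement.
mode = con.execute("PRAGMA journal_mode = PERSIST").fetchone()[0]
limit = con.execute("PRAGMA journal_size_limit = 50").fetchone()[0]
con.execute("PRAGMA temp_store = MEMORY")
temp_store = con.execute("PRAGMA temp_store").fetchone()[0]  # 2 == MEMORY
```

journal_size_limit is in bytes, so 50 effectively truncates the persistent journal to (nearly) nothing after each transaction.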
On 2/16/2011 5:24 AM, Black, Michael (IS) wrote:
Try this benchmark program and see what numbers you get. You need to compare
to other machines with the same benchmark to see if it's the machine or your
programming/architecture.
The MC55 is a 520 MHz PXA270, so I would expect to see more than a 6X difference
from my 3 GHz box (memory speed is
On Wed, Feb 16, 2011 at 6:13 AM, wrote:
> Hi,
>
> I'm using Motorola MC55 device, with 2GB external memory card.
>
> For the SQlite Db I have used the following Pragma values
>
> PRAGMA cache_size = 16000
> PRAGMA temp_store = 2
> PRAGMA synchronous = OFF
> PRAGMA
Hello!
On Thursday 01 April 2010 18:04:10 Adam DeVita wrote:
> How does
> $ time sqlite3 test32k.db "select count(1) from role_exist"
> perform?
Equal to count(*).
Best regards, Alexey Pechnikov.
http://pechnikov.tel/
On Thu, Apr 01, 2010 at 10:44:51AM -0400, Pavel Ivanov scratched on the wall:
> So 58s for a count of all records! The count(*) for all records may use
> the counter from the primary key b-tree, isn't it?
What does this mean? I don't believe there are counters of any kind in a
b-tree. If you meant the counter from an auto-increment key, then what about
gaps in the middle?
Pavel
On Thu, Apr 1,
How does
$ time sqlite3 test32k.db "select count(1) from role_exist"
perform?
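A quick check confirms Alexey's answer below: count(1) and count(*) return the same number (whether they share the same internal fast path depends on the SQLite version). The table name and row count echo the thread; the rows themselves are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE role_exist (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO role_exist (id) VALUES (?)",
                [(i,) for i in range(1250)])

star = con.execute("SELECT count(*) FROM role_exist").fetchone()[0]
one = con.execute("SELECT count(1) FROM role_exist").fetchone()[0]
```

Both queries must visit (or at least count) every row; neither reads a stored row counter, which is why the 58s timing in the thread scales with table size.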
On Thu, Apr 1, 2010 at 5:52 AM, Alexey Pechnikov wrote:
> Hello!
>
> $ time sqlite3 test32k.db "select count(*) from role_exist"
> 1250
>
> real    0m58.908s
> user    0m0.056s
> sys
Igor Tandetnik wrote:
>
> Try searching for a value that doesn't fall into any block - you'll
> likely find that the query takes a noticeable time to produce zero
> records. Pick a large value that's greater than all startIpNum's.
>
Yes, you are right. That's why I'm going with the original query that stops at
that first matching row.
> -Original Message-
> From: Dani Va [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, October 31, 2007 8:30 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Performance problem for a simple select with range
>
>
First, thanks, your suggestion worked.
To my surprise, it was enough to add "limit 1" to the original query.
So:
select * from blocks,locations where locations.locid = blocks.locid AND ? >=
blocks.startIpNum AND ? <= blocks.endIpNum limit 1
takes about 1.398e-005 seconds
and
select * from
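The LIMIT 1 trick is easy to reproduce. A sketch with a one-row blocks table, schema reconstructed from the quoted CREATE TABLEs; the index on startIpNum is an assumption about the poster's setup:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE locations (locid INTEGER PRIMARY KEY);
    CREATE TABLE blocks (startIpNum INTEGER, endIpNum INTEGER, locid INTEGER);
    CREATE INDEX idx_start ON blocks(startIpNum);
""")
con.execute("INSERT INTO locations VALUES (1)")
con.execute("INSERT INTO blocks VALUES (100, 199, 1)")

ip = 150
# Because IP blocks don't overlap, at most one row can match, so LIMIT 1
# lets SQLite stop scanning as soon as it finds it.
row = con.execute("""
    SELECT * FROM blocks, locations
    WHERE locations.locid = blocks.locid
      AND ? >= blocks.startIpNum AND ? <= blocks.endIpNum
    LIMIT 1
""", (ip, ip)).fetchone()
```

Without LIMIT 1 the engine keeps scanning the rest of the startIpNum range to prove there is no second match, which is the slow case Igor described for values that fall in no block.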
"Dani Valevski" <[EMAIL PROTECTED]> wrote:
> I think I have a performance problem for a simple select with range.
>
> My Tables:
> CREATE TABLE locations(locid INTEGER PRIMARY KEY, ...);
>
> CREATE TABLE blocks(
> startIpNum INTEGER,
> endIpNum INTEGER,
>
[Default] On Mon, 29 Oct 2007 15:25:18 +0200, "Dani Valevski"
<[EMAIL PROTECTED]> wrote:
>I think I have a performance problem for a simple select with range.
>
>My Tables:
>CREATE TABLE locations(
>locid INTEGER PRIMARY KEY,
>country TEXT,
>
Richard,
Thanks for the additional info. I'll look into the multi-column index
idea. Sounds as if it might be the solution.
Stephen
On Thu, 2007-03-01 at 14:42 +, [EMAIL PROTECTED] wrote:
> Stephen Toney <[EMAIL PROTECTED]> wrote:
Stephen Toney <[EMAIL PROTECTED]> wrote:
>
> 4. We do not preserve case in the index, so it can ignore incorrect
> capitalization in the search terms. Maybe FTS does this too?
That's a function of your stemmer. The default stemmers in FTS2
both ignore capitalization.
>
> 5. For historical
Stephen Toney <[EMAIL PROTECTED]> wrote:
> Thanks, Igor, Richard, and Tom,
>
> Why doesn't SQLite use the index on key? I can see from the plan that it
> doesn't, but why not? Can only one index be used per query?
>
> This seems strange. I have used SQL Server and Visual Foxpro for this
> same
Regarding:
"Can only one index be used per query?"
Yes, I believe that *is* the defined behaviour of sqlite (though it does
support compound indices). Larger DBMSs often have very involved code
to determine query plans.
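A compound index makes the point concrete: one two-column index can serve both the WHERE and the ORDER BY, which two single-column indexes cannot do when only one index per table is consulted. Table, column, and index names below are borrowed from the other thread quoted later (ipnode, author, t1); the setup is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (ipnode TEXT, author TEXT, title TEXT)")
# One index covering the filter column followed by the sort column.
con.execute("CREATE INDEX idx_node_author ON t1(ipnode, author)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT title FROM t1 WHERE ipnode = 'VZ' ORDER BY author"
).fetchall()
detail = plan[0][-1]  # SEARCH using idx_node_author, no sort step
```

Because rows come out of idx_node_author already ordered by author within each ipnode value, the plan needs no "USE TEMP B-TREE FOR ORDER BY" step.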
On Thu, 2007-03-01 at 12:46 +, [EMAIL PROTECTED] wrote:
> Or maybe better yet: Have you looked into using FTS2 for whatever
> it is you are trying to do? Full-text search is hard to get right
> and you appear to be trying to create your own. Why not use a FTS
> subsystem that is already
Thanks, Igor, Richard, and Tom,
Why doesn't SQLite use the index on key? I can see from the plan that it
doesn't, but why not? Can only one index be used per query?
This seems strange. I have used SQL Server and Visual Foxpro for this
same problem, and they both handle this query in a second if
[EMAIL PROTECTED] wrote:
> Stephen Toney <[EMAIL PROTECTED]> wrote:
> >
> > Here's the problem query with the plan:
> >
> > select count(*) from keyword a, keyword b where a.key=b.key and
> > a.value='music' and b.value='history';
> >
>
> A faster approach would be:
>
>SELECT (SELECT
You are almost certainly encountering disk caching effects.
Makavy, Erez (Erez) wrote:
Problem summary:
---
Simple queries sometimes take ~400 ms
Analysis:
---
- A php script runs the same SQL query several times in different places
(in different transactions).
Thank you very much. I am happy to hear that the performance I am seeing
is in line with what others have observed. I am running this on Windows
XP.
On Tue, 22 Nov 2005, Akira Higuchi wrote:
> Hi,
>
> On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
> Shane Baker <[EMAIL PROTECTED]> wrote:
>
> > I
Hi,
On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
Shane Baker <[EMAIL PROTECTED]> wrote:
> I just need to figure out why my performance is about 30x slower than what
> others are reporting when using the library in similar ways.
Are you using sqlite on windows or MacOS X?
As I tested, sqlite
Thank you very much for the feedback. I understand your point, hardware
takes a deterministic amount of time.
I have been basing my assumptions on these sources:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations (See
"Transactions and performance")
No, as I mentioned in my original message, I am not wrapping them. I
don't want to test an unrealistic scenario for my application. In my
application, there are multiple sources that will be inserting into the
database and pooling the information for a bulk insert won't work.
I understand that
On Mon, 21 Nov 2005, Shane Baker wrote:
>I'm sure I must be doing something wrong. This is my first attempt at
>working with SQLite.
We'll see...
>
>I have a simple table, with 7 columns. There are 6 integers and a BLOB,
>with the primary key being on an integer. When I try to run inserts
Are you wrapping the inserts between BEGIN/END TRANSACTION statements?
BEGIN TRANSACTION;
INSERT INTO table (foo) VALUES (bar);
INSERT INTO table (foo) VALUES (par);
INSERT INTO table (foo) VALUES (tar);
INSERT INTO table (foo) VALUES (far);
..
INSERT INTO table (foo) VALUES (car);
COMMIT;
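The same advice from Python, with the sqlite3 module's implicit transaction handling switched off so the BEGIN/COMMIT pair is explicit. The table is a placeholder; without the wrapping transaction, each INSERT would be its own commit (one journal sync each), which is where the 30x slowdowns in this thread usually come from:

```python
import sqlite3

# isolation_level=None = autocommit mode: we manage transactions by hand.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (foo)")

con.execute("BEGIN TRANSACTION")
con.executemany("INSERT INTO t (foo) VALUES (?)", [(i,) for i in range(1000)])
con.execute("COMMIT")

n = con.execute("SELECT count(*) FROM t").fetchone()[0]
```

All 1000 rows become durable with a single commit at the end.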
> SQLite only uses a single index per table on any give query.
> This is unlikely to change.
Would it be able to use a multi-column index on ipnode + author?
Hugh
> Shi Elektronische Medien GmbH, Peter Spiske wrote:
Shi Elektronische Medien GmbH, Peter Spiske wrote:
the following simple query is very slow:
SELECT title FROM t1 WHERE ipnode='VZ' ORDER BY author;
The database is about 250 MB in size and the table the query is run against
has 12 cols and 120,000 rows.
Every col has an index.
The above query
At 1:33 PM +0100 3/20/04, Shi Elektronische Medien GmbH, Peter Spiske wrote:
On Thu, 2003-11-06 at 19:00, [EMAIL PROTECTED] wrote:
> How would you handle the lack of ordering associated with hash tables?
> Sqlite can currently use indices for three main tests: equals, less than,
> and greater than. While hash-tables are good at finding equal-to in
> constant time it
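The trade-off is easy to see in practice: the single b-tree index below answers the equality test and both range tests, whereas a hash index could only help with the first. Names and data are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nums (v INTEGER)")
con.execute("CREATE INDEX idx_v ON nums(v)")
con.executemany("INSERT INTO nums VALUES (?)", [(i,) for i in range(10)])

# One b-tree index, three kinds of probe: =, <, and >.
eq = con.execute("SELECT count(*) FROM nums WHERE v = 5").fetchone()[0]
lt = con.execute("SELECT count(*) FROM nums WHERE v < 5").fetchone()[0]
gt = con.execute("SELECT count(*) FROM nums WHERE v > 5").fetchone()[0]
```

The ordered structure also gives ORDER BY for free when the sort matches the index, which a hash table never can.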
- Forwarded by Ben Carlyle/AU/IRSA/Rail on 07/11/2003 10:00 AM -
Ben Carlyle
07/11/2003 10:00 AM
To: "Mrs. Brisby" <[EMAIL PROTECTED]>@CORP
cc:
Subject: Re: [sqlite] Performance problem
"Mrs. Brisby" <[EMAIL PR
<[EMAIL PROTECTED]>
Cc: "D. Richard Hipp" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, November 06, 2003 4:14 AM
Subject: RE: [sqlite] Performance problem
> On Wed, 2003-11-05 at 13:44, Clark, Chris wrote:
> > > -Original Message-
>
[EMAIL PROTECTED] wrote:
DRH: Will the changes to indices allow us to define arbitrary collation
functions? If so, will those indices be used when a query is done that
could use the arbitrary collation function?
Likely so. But no promises yet.
--
D. Richard Hipp -- [EMAIL PROTECTED] --
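For what this looks like today: SQLite did gain application-defined collations (and indexes on collated columns). A sketch using Python's create_collation; the collation name and comparator are invented, and the comparator here simply sorts case-insensitively:

```python
import sqlite3

def nocase_cmp(a, b):
    # Return negative/zero/positive like strcmp, comparing lower-cased text.
    a, b = a.lower(), b.lower()
    return (a > b) - (a < b)

con = sqlite3.connect(":memory:")
con.create_collation("mycoll", nocase_cmp)
con.execute("CREATE TABLE words (w TEXT)")
con.executemany("INSERT INTO words VALUES (?)", [("b",), ("A",), ("C",)])

ordered = [r[0] for r in
           con.execute("SELECT w FROM words ORDER BY w COLLATE mycoll")]
```

An index built with the same collation (CREATE INDEX i ON words(w COLLATE mycoll)) can then serve queries that name that collation, which is the usage the question above anticipated.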
Allan Edwards wrote:
>
> I have YET to see a database, small to massively scalable that could handle
> BLOBS worth anything. ... I prefer the simplicity talk given early. If
> someone wants blobs, do it the old fashioned way!
>
Your concerns are understood, for BLOBs that are truly large. But
se
of code to start from.
Just some thoughts.
Allan
-Original Message-
From: Avner Levy [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 04, 2003 3:56 AM
To: D. Richard Hipp
Cc: [EMAIL PROTECTED]
Subject: Re: [sqlite] Performance problem
Hi,
We have just finished testing the same scenario
Avner Levy wrote:
We have just finished testing the same scenario with MySql and, amazingly,
they continued to insert 1500-3000 rows per second even when the
database had 60,000,000 records. I don't know how this magic is done...
Nor do I. If anybody can clue me in, I would appreciate it. I