I have a table with about 800,000 records.
DB version: 3.2.7
The SQL looks like:
select x, y, sum(z)/1000 as bw from aa where
a=1 and b=1 and
c=1 and d=6 group by x, y having count(*) > 1 order by bw desc
limit 10
Column "d" is all set to 6
1/3 of
x and y are both columns of table "aa"; they may have random values.
Do you think that will affect the performance?
What I think is that SQLite will first use the WHERE clause to get a limited set of rows, then
apply GROUP BY to that set.
So the performance depends on the WHERE clause, correct?
Igor
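One way to check that guess is EXPLAIN QUERY PLAN: with a composite index covering the WHERE columns, SQLite can narrow the scan before it groups. A minimal sketch, run through Python's sqlite3 module; the index name and the untyped schema are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE aa (a, b, c, d, x, y, z)")
# A composite index on the four WHERE columns lets SQLite search
# instead of scanning the whole table before the GROUP BY.
con.execute("CREATE INDEX aa_abcd ON aa(a, b, c, d)")
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT x, y, SUM(z)/1000 AS bw FROM aa "
    "WHERE a=1 AND b=1 AND c=1 AND d=6 "
    "GROUP BY x, y HAVING COUNT(*) > 1 ORDER BY bw DESC LIMIT 10"
).fetchall()
for row in plan:
    print(row)
```

If the plan mentions the index, the WHERE clause is indeed applied first and only the surviving rows are grouped.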
Hi Drh
I just found a strange case; can you give me some explanation?
I have a table with about 800,000 records.
DB version: 3.2.7
The SQL looks like:
select x, y, sum(z)/1000 as bw from aa where
a=1 and b=1 and
c=1 and d=6 group by x, y having
Hi Nathan
I just found a strange case; can you give me some explanation?
I have a table with about 800,000 records.
DB version: 3.2.7
The SQL looks like:
select x, y, sum(z)/1000 as bw from aa where
a=1 and b=1 and
c=1 and d=6 group by x, y having
Hi Igor
I just found a strange case; can you give me some explanation?
I have a table with about 800,000 records.
DB version: 3.2.7
The SQL looks like:
select x, y, sum(z)/1000 as bw from aa where
a=1 and b=1 and
c=1 and d=6 group by x, y having count(*) >
Since you are keying on a time, you could try using a trigger which deletes
the row with the oldest (lowest) time. You would need to have an index
on the timestamp. I guess something like timestamp > 0 LIMIT 1 might work.
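A sketch of that trigger idea, using Python's sqlite3 module. The readings(ts, value) table is hypothetical, and the cap is set to 5 rows so the effect is visible in a few inserts; a real queue would use 10,000:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (ts INTEGER, value REAL)")
con.execute("CREATE INDEX readings_ts ON readings(ts)")  # index on the timestamp
# After each insert, if the table has grown past the cap, drop the
# row with the oldest (lowest) timestamp.
con.execute("""
    CREATE TRIGGER fifo_cap AFTER INSERT ON readings
    WHEN (SELECT COUNT(*) FROM readings) > 5
    BEGIN
        DELETE FROM readings
        WHERE ts = (SELECT ts FROM readings ORDER BY ts LIMIT 1);
    END
""")
for t in range(8):
    con.execute("INSERT INTO readings VALUES (?, ?)", (t, t * 0.1))
summary = con.execute("SELECT MIN(ts), MAX(ts), COUNT(*) FROM readings").fetchone()
print(summary)  # (3, 7, 5): the three oldest rows were dropped
```

Note the DELETE keys on the minimum timestamp, so duplicate timestamps would be removed together; with a monotonically increasing clock that is not an issue.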
Sean Machin wrote:
Hi All,
I'm considering using SQLite for an embedded
Will Leshner wrote:
On Dec 20, 2005, at 7:46 AM, John Stanton wrote:
I haven't looked closely at the problem, so these are just first
ideas extending CM's approach. Basically there should be no reason
to perform any analysis of the SQL since that has already been done
and the
Hi All,
I'm considering using SQLite for an embedded project. Some of the data
I'd like to store is
timestamped sensor readings. I'd like to know if there is a way to
configure a table
so that it acts like a fixed length FIFO queue, e.g. stores 10,000
records then once full
drops off the
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 20, 2005 21:41
To: sqlite-users@sqlite.org
Subject: re: [sqlite] multiple Db's and journal file time hit?
Thank you for the idea!
Tom
From: Eduardo
Thank you for the idea!
Tom
From: Eduardo <[EMAIL PROTECTED]>
Sent: Tuesday, December 20, 2005 11:21 AM
To: sqlite-users@sqlite.org
Subject: re: [sqlite] multiple Db's and journal file time hit?
At 17:17 19/12/2005, you wrote:
>I think I've confused
All the source code is on sourceforge ...
http://sourceforge.net/projects/sqlite-dotnet2
- Original Message -
From: "Joel Lucsy" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, December 20, 2005 11:53 AM
Subject: Re: [sqlite] implementing editable result sets
Can
At 17:17 19/12/2005, you wrote:
I think I've confused the issue a bit and the aforementioned time
hit might be normal given:
http://www.sqlite.org/lockingv3.html
We are using a separately developed COM DLL interface to Sqlite v3:
http://www.sqliteplus.com/
And we can live with the journal
On Dec 20, 2005, at 10:46 AM, Fanda Vacek wrote:
If you parse such a simple query like
SELECT [*,] bla [, bla] FROM table-name WHERE blabla ,
you'll get into less trouble than if you play with the rowid
trick. All you need is to check if returned colnames can be found
in table-name. And
On Dec 20, 2005, at 10:19 AM, Robert Simpson wrote:
What I'm left with are the alias names and original database table
and column names for all the fields in the query, complete with
information on whether or not any field is a primary key, how many
different tables are involved in the
Can you share what you changed?
On 12/20/05, Robert Simpson <[EMAIL PROTECTED]> wrote:
>
> My approach in the ADO.NET 2.0 provider was to modify the core engine
> slightly. I added a database pragma, which when enabled, directs the
> internal generateColumnNames() function in select.c to emit
If you parse such a simple query like
SELECT [*,] bla [, bla] FROM table-name WHERE blabla ,
you'll get into less trouble than if you play with the rowid trick. All
you need is to check if returned colnames can be found in table-name. And
it is a simple question using PRAGMA
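The check described above, that every returned column name resolves to a real column of the table, can be done with PRAGMA table_info. A minimal sketch with a made-up table t, using Python's sqlite3 module to stand in for the real provider:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
# Column names actually defined on the table, via PRAGMA table_info;
# each pragma row is (cid, name, type, notnull, dflt_value, pk).
table_cols = {row[1] for row in con.execute("PRAGMA table_info(t)")}
cur = con.execute("SELECT id, name FROM t")
returned = [d[0] for d in cur.description]
# Every result column maps back to a real table column, so a result
# set like this could be treated as editable.
editable = all(col in table_cols for col in returned)
print(editable)  # True
```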
- Original Message -
From: "Will Leshner" <[EMAIL PROTECTED]>
On Dec 20, 2005, at 7:46 AM, John Stanton wrote:
I haven't looked closely at the problem, so these are just first ideas
extending CM's approach. Basically there should be no reason to perform
any analysis of the SQL
On Dec 20, 2005, at 7:46 AM, John Stanton wrote:
I haven't looked closely at the problem, so these are just first
ideas extending CM's approach. Basically there should be no reason
to perform any analysis of the SQL since that has already been done
and the metalanguage generated. My
Nathan Kurz <[EMAIL PROTECTED]> writes:
> On Thu, Dec 15, 2005 at 09:17:48PM +, Andrew McDermott wrote:
>> For example, I'm currently computing a histogram in application code for
>> a moderately sized table (7+ million rows), but I'm wondering whether it
>> would be quicker to get the
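For the histogram case, the aggregation can usually be pushed into SQL with a GROUP BY over a bucket expression, letting SQLite do one pass instead of the application code. A sketch with an invented samples table and a bucket width of 2.0:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (v REAL)")
con.executemany("INSERT INTO samples VALUES (?)", [(x / 10,) for x in range(100)])
# CAST(v / 2 AS INTEGER) truncates v/2 to its integer part, which
# names the bucket; COUNT(*) per bucket is the histogram.
hist = con.execute(
    "SELECT CAST(v / 2 AS INTEGER) AS bucket, COUNT(*) "
    "FROM samples GROUP BY bucket ORDER BY bucket"
).fetchall()
print(hist)  # [(0, 20), (1, 20), (2, 20), (3, 20), (4, 20)]
```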
I haven't looked closely at the problem, so these are just first ideas
extending CM's approach. Basically there should be no reason to perform
any analysis of the SQL since that has already been done and the
metalanguage generated. My approach would be to prepare the SQL
statement and then
Here is a table with 2 indexes created on it.
==
CREATE TABLE test (
a tinyint, /* range 0-3 */
b tinyint, /* range 0-3 */
c tinyint, /* range 0-3 */
d tinyint, /* range 0-3 */
e tinyint,
It looks as if you need to download the GNU make and compile it for
Irix. Try installing it as "gmake" and then execute gmake instead of make.
JS
Prettina Louis wrote:
Hello sir,
I am trying to install SQLite 3.2.7 on the Irix 6.5 OS.
As mentioned in the README file, I gave the
[EMAIL PROTECTED] wrote:
>
> The source file 2.8.17.tar.gz available on the download page is only 45 bytes
> long. I think that's a bit incomplete. :-)
>
This is not fixed (I think).
--
D. Richard Hipp <[EMAIL PROTECTED]>
Dear all,
Why is SQLite 3.2.7 slower than SQLite 3.2.2?
The following query costs 6.47 seconds on SQLite 3.2.2 and 20.3 seconds on SQLite 3.2.7.
Table with 732,000 records
select strftime('%Y',fecha) as Ano, strftime('%m',fecha) as Mes, sum(Duracion)
as Duracion,sum(Costo) as Costo,sum(1) as Cantidad
One idea: run an EXPLAIN first, and then analyze the query plan. It will
tell you if there is more than one table,
and maybe you can get info about aggregate functions and such. Of
course, there is a cost to this...
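For instance, EXPLAIN QUERY PLAN on a query of the shape posted above shows how SQLite will execute it. The table name llamadas and the GROUP BY clause are assumed here, since the original query was truncated:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE llamadas (fecha TEXT, Duracion REAL, Costo REAL)")
# With no usable index, the plan reports a scan of the table; grouping
# on strftime() expressions typically also needs a temporary b-tree.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT strftime('%Y', fecha) AS Ano, strftime('%m', fecha) AS Mes, "
    "SUM(Duracion) AS Duracion, SUM(Costo) AS Costo, SUM(1) AS Cantidad "
    "FROM llamadas GROUP BY Ano, Mes"
).fetchall()
for row in plan:
    print(row)
```

Comparing this plan between the two SQLite versions would show whether the planner, rather than the data, is responsible for the slowdown.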
> -Original Message-
> From: Will Leshner [mailto:[EMAIL PROTECTED]
>