On Thu, Dec 1, 2011 at 7:42 AM, Peter wrote:
> Hi,
>
> I have a problem with one of my queries, which takes two orders of
> magnitude longer on SQLite (3.7.9) than the identical query on PostgreSQL
> (8.4): around 2270 ms on SQLite versus 17 ms on PostgreSQL.
>
> The difference seems to be that SQLite is not optimising a subquery by
> using an …
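Since Peter's actual query is truncated above, the following is only a minimal sketch of how to inspect what SQLite decides to do with a subquery: `EXPLAIN QUERY PLAN` reports whether each step is a full scan or an index search. The table `t`, index `t_a`, and the `IN (SELECT …)` shape here are stand-ins, not his schema.

```python
import sqlite3

# Stand-in table and index; Peter's real query and schema are not shown.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t(a INTEGER, b INTEGER);
    CREATE INDEX t_a ON t(a);
""")

# EXPLAIN QUERY PLAN shows one row per step of the chosen plan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM t WHERE a IN (SELECT a FROM t WHERE b > 5)"
).fetchall()
for step in plan:
    # A "SEARCH ... USING INDEX" line means the index is used;
    # a bare "SCAN" on a large table is the usual slow-query culprit.
    print(step)
```

Comparing this output against PostgreSQL's `EXPLAIN` for the same query is usually the quickest way to see where the two planners diverge.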
Hi,
I tried a concatenated index (after dropping the index that was already
present on timestamp_id). The query still took 12 seconds to complete, so
there was no improvement in speed.
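One possible reason a concatenated index gives no speedup is column order, so the sketch below (assuming a minimal two-column version of the table, since the full schema is not shown) compares the two orderings with `EXPLAIN QUERY PLAN`: with timestamp_id leading, SQLite still needs a temporary b-tree for the GROUP BY; with metric_id leading, the index scan delivers rows already grouped.

```python
import sqlite3

# Minimal two-column stand-in for snapshot_master.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE snapshot_master (metric_id INTEGER, timestamp_id INTEGER)"
)

QUERY = ("SELECT metric_id, MAX(timestamp_id) FROM snapshot_master "
         "GROUP BY metric_id")

def plan(sql):
    # Flatten the EXPLAIN QUERY PLAN rows into one readable string.
    return " ".join(str(row) for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Index led by timestamp_id: rows are not in metric_id order, so the
# GROUP BY still needs a temporary b-tree.
conn.execute(
    "CREATE INDEX sm_ts_metric ON snapshot_master(timestamp_id, metric_id)"
)
wrong_order = plan(QUERY)
print(wrong_order)

# Index led by metric_id: the scan delivers rows already grouped, and the
# index also covers timestamp_id, so no temp b-tree is needed.
conn.execute("DROP INDEX sm_ts_metric")
conn.execute(
    "CREATE INDEX sm_metric_ts ON snapshot_master(metric_id, timestamp_id)"
)
right_order = plan(QUERY)
print(right_order)
```

If the 12-second run used an index led by some other column, that would explain why adding it changed nothing.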
> If we assume your table is 1.5 GB in size, then that query has to do a
> full table scan (unless …
On Jan 23, 2009, at 11:50 AM, manohar s wrote:
> I have a SQLite database which is 1.5 GB in size. The problem is that it
> is taking a lot of time (12 seconds, even after running VACUUM) to
> execute a *SELECT* query.
>
> Query:
> SELECT metric_id, MAX(timestamp_id) AS timestamp_id_max
> FROM snapshot_master
> GROUP BY metric_id
>
> Here is the CREATE TABLE statement:
> CREATE TABLE IF NOT EXISTS [snapshot_master] ( …
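The full CREATE TABLE statement is cut off above, so the sketch below assumes only the two columns the slow query actually touches. It shows the standard fix for this shape of query: a covering index ordered (metric_id, timestamp_id), which lets SQLite answer the GROUP BY / MAX from the index alone instead of scanning the 1.5 GB table.

```python
import sqlite3

# Assumed minimal schema; the real snapshot_master has more columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE snapshot_master (metric_id INTEGER, timestamp_id INTEGER)"
)
conn.executemany(
    "INSERT INTO snapshot_master VALUES (?, ?)",
    [(m, t) for m in range(3) for t in range(100)],
)

# Covering index in (metric_id, timestamp_id) order: the query can be
# answered entirely from the index, in group order, with no table access.
conn.execute(
    "CREATE INDEX sm_metric_ts ON snapshot_master(metric_id, timestamp_id)"
)

rows = conn.execute(
    "SELECT metric_id, MAX(timestamp_id) AS timestamp_id_max "
    "FROM snapshot_master GROUP BY metric_id"
).fetchall()
print(rows)  # → [(0, 99), (1, 99), (2, 99)]
```

On a 1.5 GB table the index is far smaller than the data, so this kind of change is typically what closes a 12-seconds-to-milliseconds gap.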