Jean-Christophe,
Thanks for this idea. In fact, the purpose of my original query is
exactly to reduce the database. The 800 mln rows were exported from
another source, and I was hoping to be able to use sqlite to manage
this massive amount of data (e.g., removing redundant information)
before I
Rense,
>As for the ranges of n1 and n2: they are both roughly between 6
>and 1200.
On Thu, Jun 23, 2011 at 5:59 PM, Simon Slavin wrote:
>
> On 23 Jun 2011, at 10:56pm, Rense Corten wrote:
>
> > Simon: no, AFS=Andrew File System :). Both my servers run Ubuntu
> > (64-bit). In case of the 12 Gb RAM machine, everything is done
> > locally. On the 32 Gb machine, I'm afraid I can't do that. And in fact
> > I've been using the command-line tool all along.
On Thu, Jun 23, 2011 at 2:45 PM, Simon Slavin wrote:
On 23 Jun 2011, at 7:20pm, Rense Corten wrote:
> As to RAM: I'm trying this on two different machines, one with 12 Gb
> and one with 32 Gb RAM. I won't be able to get more in the near
> future. Something that might be relevant is that in the case of the 32Gb
> machine, the database is on an AFS.
On 23 Jun 2011, at 9:31pm, Jan Hudec wrote:
> You also need to make sure you are using
> 64-bit build of SQLite (32-bit build can only use between 2 and 3 GB and
> that's not enough for such huge database).
If you try to use a 32-bit compilation of SQLite to open a bigger database,
does
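The 32-bit question can be sanity-checked from whatever process hosts SQLite. A minimal sketch using Python's standard library (the thread itself used the sqlite3 command-line tool, for which `file $(which sqlite3)` answers the same question):

```python
import struct
import sqlite3

# Pointer width of this process: a 32-bit build can address at most
# 4 GB (2-3 GB in practice), no matter how large the database file is.
pointer_bits = struct.calcsize("P") * 8
print(f"{pointer_bits}-bit process, SQLite {sqlite3.sqlite_version}")
```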
[sqlite-users-boun...@sqlite.org] on
behalf of Black, Michael (IS) [michael.bla...@ngc.com]
Sent: Thursday, June 23, 2011 2:03 PM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] Query with UNION on large table
Any reason you can't add another field to your database?
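Black's message is truncated here, so his exact idea is lost; one common variant of "add another field" (an assumption, not necessarily what he had in mind) is to materialize each pair in canonical order in extra columns, then deduplicate via an index rather than a self-UNION:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1(n1 INTEGER, n2 INTEGER);
INSERT INTO table1 VALUES (6, 1200), (1200, 6), (7, 8);

-- Hypothetical 'extra field' approach: store the canonical
-- (low, high) ordering once, and let an index do the deduplication.
ALTER TABLE table1 ADD COLUMN lo INTEGER;
ALTER TABLE table1 ADD COLUMN hi INTEGER;
UPDATE table1 SET lo = min(n1, n2), hi = max(n1, n2);
CREATE INDEX idx_lohi ON table1(lo, hi);
""")
pairs = con.execute(
    "SELECT DISTINCT lo, hi FROM table1 ORDER BY lo, hi").fetchall()
print(pairs)  # [(6, 1200), (7, 8)] -- the reciprocal pair collapses
```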
Yes, I have run ANALYZE.
On 23 Jun 2011, at 7:20pm, Rense Corten wrote:
> Here are the results of EXPLAIN QUERY PLAN SELECT n1, n2 FROM table1
> Where n1 < n2 INTERSECT SELECT n2, n1 FROM table1 Where n2 < n1;
>
> 1|0|0|SCAN TABLE table1 (~437976176 rows)
> 2|0|0|SCAN TABLE table1 (~437976176 rows)
> 0|0|0|COMPOUND
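The plan above shows both arms doing a full table scan. A sketch of how an index changes that, using Python's built-in sqlite3 (note that recent SQLite versions print EXPLAIN QUERY PLAN in a different format from the 2011 output quoted above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1(n1 INTEGER, n2 INTEGER)")

query = ("SELECT n1, n2 FROM table1 WHERE n1 < n2 "
         "INTERSECT SELECT n2, n1 FROM table1 WHERE n2 < n1")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the readable detail.
    return [row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

plan_before = plan(query)  # both arms: SCAN table1
con.execute("CREATE INDEX idx_n1n2 ON table1(n1, n2)")
plan_after = plan(query)   # the covering index may now replace the scans
print(plan_before)
print(plan_after)
```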
Wow, I'm almost embarrassed by how helpful you are, gentlemen. Thanks a lot.
As to RAM: I'm trying this on two different machines, one with 12 Gb
and one with 32 Gb RAM. I won't be able to get more in the near
future. Something that might be relevant is that in the case of the 32Gb
machine, the database is on an AFS.
On Wed, Jun 22, 2011 at 10:25:52 -0700, Rense Corten wrote:
> CREATE TABLE table2 AS SELECT n1,n2 FROM (SELECT n1, n2 FROM table1
> UNION SELECT n2 AS n1, n1 AS n2 FROM table1) WHERE (n1 < n2) ORDER BY n1,n2;
Have you tried explaining it (prefix the whole query with "explain query
plan" and run it)?
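The quoted statement can be exercised end-to-end on a toy table first, via Python's built-in sqlite3 (the sample rows are invented; the WHERE clause is the `n1 < n2` filter used elsewhere in the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1(n1 INTEGER, n2 INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, 2), (2, 1), (3, 4), (3, 4)])

# Emit every pair in both orientations, let UNION deduplicate,
# then keep only the ascending orientation.
con.execute("""
    CREATE TABLE table2 AS
    SELECT n1, n2 FROM (SELECT n1, n2 FROM table1
                        UNION
                        SELECT n2 AS n1, n1 AS n2 FROM table1)
    WHERE n1 < n2 ORDER BY n1, n2
""")
pairs = con.execute("SELECT * FROM table2").fetchall()
print(pairs)  # [(1, 2), (3, 4)]
```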
Is the Hard Drive thrashing? Could be that most everything is being done in
swap files. Given the size of the table, a lot more RAM would help.
Tom
___
sqlite-users mailing list
sqlite-users@sqlite.org
On 23 Jun 2011, at 1:00am, Rense Corten wrote:
Thanks Simon, Jim and Tom for your replies!
@Simon: I don't have a problem per se with things running overnight,
as long as I can be sure that things will complete at some point...the
result of "PRAGMA integrity_check;", which by the way took less than
an hour, is "ok". Any suggestions?
@Jim:
This is even faster, on a tiny table:
SELECT n1, n2 FROM table1
INTERSECT
SELECT n2, n1 FROM table1
WHERE n2 < n1;
Did you try to time a simpler select:
SELECT min(n1, n2) as new1, max(n1,n2) as new2 FROM table1 group by new1, new2
having count(*)> 1
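Jim's diagnostic can be timed on a small table before attempting 800 million rows. A sketch with Python's built-in sqlite3 (sample values invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1(n1 INTEGER, n2 INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, 2), (2, 1), (3, 4)])

# Canonicalize each pair with min()/max(), then count how many source
# rows collapse onto the same unordered pair.
dupes = con.execute("""
    SELECT min(n1, n2) AS new1, max(n1, n2) AS new2
    FROM table1 GROUP BY new1, new2 HAVING count(*) > 1
""").fetchall()
print(dupes)  # [(1, 2)] -- the reciprocal pair occurs twice
```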
On 22 Jun 2011, at 6:36pm, Simon Slavin wrote:
> With a billion rows,
I forgot to say that there shouldn't really be a problem with a database of
this size. If you have a single-user single-process use for a database of this
size there's no reason SQLite shouldn't be a good solution for you.
On 22 Jun 2011, at 6:25pm, Rense Corten wrote:
> This has the desired result on a small example, but when I try this on
> my actual table which has about 800 million rows, the query never
> seems to complete. It has been running for a couple of days now, and
> it doesn't seem sqlite is still