Re: [SQL] Reference integrity question

2003-01-02 Thread Stephan Szabo

On Fri, 27 Dec 2002, Evgen Potemkin wrote:

> Good day!
>
> I have two tables:
>   news(newsid int4, newscltid int4 references clt(cltid) match full, newstext text)
> and
>   clt(cltid int4, somedata text).
> After clt is renamed, for example to clt_old, newscltid starts referencing
> clt_old.cltid. I then create a new table clt(cltid int4, anotherdata text);
>
> Is there any way to make newscltid reference the newly created clt, other
> than recreating news?

If you mean that all newscltids should reference the new table (presumably
already loaded)...

Since you haven't given a version, let's guess 7.3, in which case the
constraint should probably be named "$1", and you can do:

ALTER TABLE news DROP CONSTRAINT "$1";
ALTER TABLE news ADD FOREIGN KEY (newscltid) REFERENCES clt(cltid) MATCH FULL;

but it's tougher in older versions.
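If the constraint didn't end up with the name "$1" (for example, because
several constraints were created), the actual name can be looked up in the
system catalogs. A 7.3-only sketch (older versions store the foreign key in
triggers instead, so this query won't help there):

```sql
-- 7.3 sketch: list the names of foreign key constraints on news.
SELECT conname
  FROM pg_constraint
 WHERE conrelid = 'news'::regclass   -- the referencing table
   AND contype = 'f';                -- 'f' = foreign key
```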


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[SQL] PostgreSQL X Resin EE

2003-01-02 Thread Pedro Igor



Has anyone used Resin EE with PostgreSQL, or does anyone actually use both
for building applications? I know PostgreSQL is one of the best back-end
options, but what about Resin EE?

Thanks ...

Pedro Igor 


Re: [SQL] Slow self-join on a 100 million record table

2003-01-02 Thread Paul Thornett
I've played with this quite a lot on a fairly similar machine to
yours - i.e. dual Pentium III 700s, 5 x 4.5 GB 10k SCSI disks hardware
RAID-0, 1 GB RAM, Windows 2000 Professional (SP3), SQL Server 2000
Desktop (SP2). As expected, my elapsed times are almost exactly twice as
long as yours.

I created my 100 million rows using an 8k file of words which I then
copied and appended until it contained 100 million rows. Then I DTSed
it into my table:
CREATE TABLE test
  (wid  int IDENTITY (1, 1) NOT NULL,
  word varchar(20) NOT NULL) ON [PRIMARY]

I went through numerous sequences, finally ending up with the data and
clustered index in the Primary Filegroup on my Raid0 disks, and the
nonclustered primary key (wid) in a secondary Filegroup on a separate
Firewire disk.

I also simplified your SQL (I don't see any need for subqueries), as
follows:

SELECT Refs=COUNT(*), t1.word, t2.word, t3.word
FROM test t1
JOIN test t2
ON t2.wid=t1.wid+1
AND t1.word in ('any','of','a')
AND t2.word in ('corel','woman','person')
JOIN test t3
ON t3.wid=t1.wid+2
AND t3.word in ('ventura','that','which')
GROUP BY t1.word, t2.word, t3.word

This gives me:

 Refs   t1.word  t2.word  t3.word
------  -------  -------  -------
23086   a        Corel    VENTURA
57715   of       Corel    VENTURA

All the permutations I tried gave an identical result - it always
takes 30 seconds to run the above query on my machine, or your version
of the same query, or any number of other ideas I tried. Just to
select the first word with no joins takes 5 seconds. I was surprised
the secondary Filegroup didn't improve speed at all.
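One further thing I did not try, so treat it as an untested sketch rather
than a measured result: a composite index on both columns, which might let
each join probe (wid = x AND word IN (...)) be answered from the index
pages alone:

```sql
-- Untested sketch: covering composite index for the three-way self-join.
-- The optimizer may then resolve each probe without reading the base rows.
CREATE INDEX ix_test_wid_word ON test (wid, word);
```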

I feel like I've been on a long journey, and ended up exactly where I
started!!

-- 
Paul Thornett


"Mark Davies" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I have a database containing 100 million records, in which each record
> contains (in sequence) all of the words in a 100 million word
> collection of texts.  There are two columns: TheID (offset value) and
> TheWord (sequential words), e.g.:
>
> TheID  TheWord
> -----  -------
> 1      I
> 2      saw
> 3      the
> 4      man
> 5      that
> 6      came
>  . . .
> 1      xxx
>
> To extract strings, I then use self-joins on this one table, in which
> [ID], [ID-1], [ID+1] etc are used to find preceding and following
> words, e.g.:
>
> select count(*), w1.w1, w2.w1, w3.w1 from
> ((select w1, ID+1 as ID from seq where w1 in ('the','that','this')) w1
> inner join
> (select w1, ID as ID from seq where w1 in ('man','woman','person')) w2
> on w2.ID = w1.ID)
> inner join
> (select w1, ID-1 as ID from seq where w1 in ('who','that','which')) w3
> on w3.ID = w1.ID
> group by w1.w1, w2.w1, w3.w1
>
> This would yield results like "the man that" (words 3-5 above), "that
> woman who", "this man which", etc.
>
> The problem is, the self-join solution is extremely slow.  I have a
> SQL Server 7.0 database with a clustered index on TheWord (sequential
> words) and a normal index on TheID.  Even with all of this, however, a
> self-join query like the one just listed takes about 15 seconds on my
> machine (dual CPU 1.2GHz, 4GB RAM, three 10K rpm SCSI HD w/ RAID-0).
>
> Any suggestions?  Have I messed up in terms of the SQL statement?
> Thanks in advance for any help that you can give.






Re: [SQL] empty arrays

2003-01-02 Thread Steve Crawford
Caution!

In 7.2.x your statement is interpreted by the parser as a single element
containing an empty string, which is converted to a zero. If you do this instead:

create table test_table ( test_column integer[], another_column integer );
CREATE
steve=# insert into test_table (another_column) values (1);
INSERT 7140977 1
steve=# select * from test_table;
 test_column | another_column 
-+
 |  1
(1 row)

You will see that there is no item in the array, which you can confirm with:
 select array_dims(test_column) from test_table;
 array_dims 

 
(1 row)
(Which seemingly oddly shows nothing instead of 0, since there is not yet
really an array to have dimensions. Per the docs, an array can be null but
it cannot contain SQL null elements. What the docs don't say is that
requesting an element beyond either end of an array does return a SQL null.)
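That last point is easy to see with the table created above (a quick
sketch; test_column is still a null array, so any subscript is out of
bounds):

```sql
-- Requesting an element beyond the array's bounds yields SQL NULL
-- rather than raising an error.
SELECT test_column[1] FROM test_table;
```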

Here's the gotcha: when you upgrade to 7.3.x your statement will generate an
error, since an empty string is no longer arbitrarily assumed to be a zero
(either in integer fields or arrays), so if you attempt the same you will get:

create table test_table ( test_column integer[], another_column integer );
CREATE TABLE
tati=> insert into test_table (test_column) values ( '{ }' );
ERROR:  pg_atoi: zero-length string

If you want to insert a null array you are best off saying so:
insert into test_table (test_column) values (null);
INSERT 17053 1
tati=> select * from test_table;
 test_column | another_column 
-+
 |   
(1 row)

Cheers,
Steve


On Monday 30 December 2002 9:40 am, [EMAIL PROTECTED] wrote:
> using: psql (PostgreSQL) 7.2.1
>
> why does an empty array return an array of length 1 rather than array of
> length 0? one would think that the results below would have returned { }
> instead of {0}.
>
> simple test using psql:
>
> # create table test_table ( test_column integer[] );
> CREATE
> # insert into test_table (test_column) values ( '{ }' );
> INSERT 43475 1
> # select * from test_table;
>
> test_column
> 
>  {0}
> (1 row)
>
>
> I want to be able to store a zero-length array field in the database. How
> do I specify this with SQL?
>
> Regards,
>
> Floyd Shackelford
> 4 Peaks Technology Group, Inc.
> VOICE: 334.735.9428
> FAX:   916.404.7125
> EMAIL: [EMAIL PROTECTED]
> ICQ #: 161371538
>
> Shackelford Motto: ACTA NON VERBA - Actions, not words
>
> Alabama StateMotto: AUDEMUS JURA NOSTRA DEFENDERE - We Dare Defend Our
> Rights
>



[SQL] Adding a Primary Key to an existing table

2003-01-02 Thread Maly Kaing
Hi

I just exported my MS Access databases into Postgres using an ODBC driver,
but during the export Postgres didn't recognize any of my primary keys from
Access.
Is there any way to alter the tables so that I can make one of my current
fields the primary key? I don't want to drop the column since I already
have data in the table, and I don't want to create/drop tables as some of
the documentation suggests since most of my tables have 10+ fields. Any
suggestions?

mtk





Re: [SQL] Adding a Primary Key to an existing table

2003-01-02 Thread Bhuvan A

> Hi
> 
> I just exported my MS Access databases into Postgres using an ODBC driver,
> but during the export Postgres didn't recognize any of my primary keys from
> Access.
> Is there any way to alter the tables so that I can make one of my current
> fields the primary key? I don't want to drop the column since I already
> have data in the table, and I don't want to create/drop tables as some of
> the documentation suggests since most of my tables have 10+ fields. Any
> suggestions?
> 

AFAIK, it can be done in either of these two ways:

1. Create a unique index:
   CREATE UNIQUE INDEX index_name ON table_name USING btree (field1, ..., fieldn);
2. Add a primary key constraint:
   ALTER TABLE table_name ADD PRIMARY KEY (field1, ..., fieldn);

In either case, the fields must already contain unique values.
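A sketch of the second option with a hypothetical table customers and key
column custid (illustrative names, not from the question). Note that a
primary key column must also be NOT NULL; on 7.3 this can be set in place
first:

```sql
-- "customers" and "custid" are illustrative names only.
-- 7.3's ALTER COLUMN ... SET NOT NULL lets you add the constraint
-- without recreating the table.
ALTER TABLE customers ALTER COLUMN custid SET NOT NULL;
ALTER TABLE customers ADD PRIMARY KEY (custid);
```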

regards,
bhuvaneswaran


