Re: [GENERAL] Tracking row updates - race condition

2005-03-20 Thread Vincent Hikida
To fetch all updates since the last synchronization, the client would 
calculate a value for $lastrevision by running this query on its local 
database:
SELECT max(revision) AS lastrevision FROM codes;

It would then fetch all updated rows by running this query against the 
server:
SELECT * FROM codes WHERE revision > $lastrevision;

How about:
SELECT * FROM codes WHERE revision > $lastrevision - 100
You could use a number other than 100. As you said, the client can 
handle duplicates.
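
A minimal sketch of how a client might apply such an overlapping fetch while 
tolerating the duplicates it produces (codes_staging, codes_local and code_id 
are illustrative names, not from the thread):

-- on the server: deliberately re-fetch a little further back than needed
SELECT * FROM codes WHERE revision > $lastrevision - 100;

-- on the client: replace anything already present, then insert the batch
-- (codes_staging holds the rows just fetched from the server)
DELETE FROM codes_local l
 USING codes_staging s
 WHERE l.code_id = s.code_id;
INSERT INTO codes_local SELECT * FROM codes_staging;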

Vincent 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] sql question

2005-03-13 Thread Vincent Hikida



SELECT t1.id
     , t1.fref
  FROM t1
UNION ALL
SELECT t2.id
     , t2.mref
  FROM t2


  - Original Message - 
  From: Steven Verhoeven
  To: pgsql-general@postgresql.org ; [EMAIL PROTECTED]
  Sent: Friday, March 11, 2005 4:36 AM
  Subject: [GENERAL] sql question

  Hi all

  My table definition:

   id | fref | mref
  ----+------+------
    1 |  23  |  25
    2 |  24  |  28
    3 |  25  |  31
    4 |  26  |  34

  My problem: i need a query that results in this:

   id | ref
  ----+-----
    1 |  23
    1 |  25
    2 |  24
    2 |  28
    3 |  25
    3 |  31
    4 |  26
    4 |  34

  Do I need a crosstab-query? Who can help me?

  -- 
  A computer is like an airconditioner. When windows open, it stops working!

  Steven Verhoeven, ICT Support Engineer
  Department for Molecular Biomedical Research (DMBR)
  VIB - Ghent University
  'Fiers-Schell-Van Montagu' building
  Technologiepark 927, B-9052 Ghent (Zwijnaarde), Belgium
  Tel : +32-(0)9-33-13.606   Fax : +32-(0)9-33-13.609
  E-mail : [EMAIL PROTECTED]   URL : http://www.dmbr.UGent.be

  
  


Re: [NOVICE] [GENERAL] sql question

2005-03-13 Thread Vincent Hikida



OOPs.

I mean


SELECT t1.id
     , t1.fref
  FROM t1
UNION ALL
SELECT t1.id
     , t1.mref
  FROM t1
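
To see the corrected query against the sample data from the original post 
(the temp table and inserts are only for illustration):

CREATE TEMP TABLE t1 (id int, fref int, mref int);
INSERT INTO t1 VALUES (1, 23, 25);
INSERT INTO t1 VALUES (2, 24, 28);
INSERT INTO t1 VALUES (3, 25, 31);
INSERT INTO t1 VALUES (4, 26, 34);

SELECT t1.id, t1.fref AS ref FROM t1
UNION ALL
SELECT t1.id, t1.mref AS ref FROM t1
ORDER BY id, ref;

-- id | ref
-- ----+-----
--  1 |  23
--  1 |  25
--  2 |  24
--  2 |  28
--  3 |  25
--  3 |  31
--  4 |  26
--  4 |  34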

  - Original Message - 
  From: Vincent Hikida
  To: Steven Verhoeven ; pgsql-general@postgresql.org ; [EMAIL PROTECTED]
  Sent: Sunday, March 13, 2005 6:34 PM
  Subject: Re: [NOVICE] [GENERAL] sql question

  SELECT t1.id
       , t1.fref
    FROM t1
  UNION ALL
  SELECT t2.id
       , t2.mref
    FROM t2
  
  

Re: [GENERAL] Unique Index

2005-01-19 Thread Vincent Hikida

I actually just wanted to know if there is a way around this problem. 
Obviously it is implemented that way for whatever reason.

I still think, though, that some of the arguments given in the replies, while 
probably correct, are beside the point.
Sorry. I was hoping someone else would answer.
I use a unique index that may contain null values. On an insert or update 
I now cannot rely on the exception thrown, but actually have to write a 
select statement to check whether the same row exists, which I believe defeats 
ONE purpose of having unique indices. Whether NULL is associated with an 
unknown value, divide by zero... or however one wants to interpret it, 
is not the issue here; in my view NULLs in the same column have the same 
value or at least should be treated the same. (If I wanted to differentiate 
the state, I would use a code instead of NULL, as a NULL does not give any 
indication of its meaning, thus we could safely assume they are treated as 
equal.)

Maybe there could be an option in the creation of the index to indicate 
how NULL values should be treated.
I can think of two options.
One was mentioned already: if only one row can have a null value, then it 
seems to me that you should make the column NOT NULL and give the "null" 
state a special code.

If it really needs to be null, then a rather messy solution would be to have 
a second column (I'll call it a null indicator) which can only be 1 or null 
and have a unique index on it.

colA   ind
----   ---
   1   null
   2   null
   3   null
null     1
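
A rough sketch of that workaround; the table and index names are made up, and 
the CHECK constraint (not spelled out above) is one way to force ind to be 1 
exactly when colA is null:

CREATE TABLE demo (
    colA integer,
    ind  integer,
    CHECK ((colA IS NULL AND ind = 1) OR (colA IS NOT NULL AND ind IS NULL))
);
CREATE UNIQUE INDEX demo_cola_uq ON demo (colA);  -- uniqueness of real values
CREATE UNIQUE INDEX demo_ind_uq  ON demo (ind);   -- at most one row with ind = 1,
                                                  -- i.e. at most one NULL colA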
How do other DBMS handle this?
Oracle is the same.


Tom Lane wrote:
Dann Corbit [EMAIL PROTECTED] writes:
Or (perhaps better yet, violating trichotomy) ...
If Some_column has a null numeric value, then ALL of the following are
FALSE for that case:

Some_column > 0
Some_column < 0
Some_column = 0
Some_column <> 0 // This is the one that many find surprising
Some_column >= 0
Some_column <= 0

It's worse than that: the above do *not* yield FALSE, they yield NULL.
Which does act like FALSE in a simple WHERE clause, but there are other
cases (like CHECK clauses) where it doesn't.  x NOT IN (SELECT ...)
is a case that newbies routinely get bitten by.
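
A small illustration of that trap, with made-up values (the same thing 
happens when the list comes from a subquery that can return NULLs):

SELECT 3 NOT IN (1, 2)       AS without_null,  -- true
       3 NOT IN (1, 2, NULL) AS with_null;     -- NULL, not true: 3 <> NULL is
                                               -- unknown, so in a WHERE clause
                                               -- the row is silently dropped

This is one reason NOT EXISTS is often the safer spelling.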

Even at that, I think that being able to insert more than one null value
into a unique index should be considered as a bug (or diagnosed as an
error).
Direct your complaints to the ISO SQL standards committee.
regards, tom lane
---(end of broadcast)---
TIP 8: explain analyze is your friend




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Question on a select

2005-01-02 Thread Vincent Hikida
  They are all 'not null' and I am trying to do exactly the kind of task 
you described. I tried the first example on my DB and got a syntax error:

tle-bu= SELECT a.file_name, a.file_parent_dir, a.file_type FROM 
file_info_1 a WHERE NOT EXIST (SELECT NULL FROM file_set_1 b WHERE 
b.fs_name=a.file_name, b.fs_parent_dir=a.file_parent_dir, 
b.fs_type=a.file_type);
ERROR:  syntax error at or near SELECT at character 88

I've quickly read the thread and I don't think you got an answer as to why 
you are getting a syntax error here. Your query has something like

WHERE b.fs_name=a.file_name, b.fs_parent_dir=a.file_parent_dir
You need to put an AND instead of a comma:
WHERE b.fs_name=a.file_name AND b.fs_parent_dir=a.file_parent_dir
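
Putting that together (and noting the keyword is NOT EXISTS, with an S), the 
quoted query would read roughly:

SELECT a.file_name, a.file_parent_dir, a.file_type
  FROM file_info_1 a
 WHERE NOT EXISTS
       (SELECT NULL
          FROM file_set_1 b
         WHERE b.fs_name       = a.file_name
           AND b.fs_parent_dir = a.file_parent_dir
           AND b.fs_type       = a.file_type);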

As for which of the queries is best, I don't know. My background is as an 
Oracle developer. I think that Bruno already suggested testing the three 
queries. There is a trace utility which shows some of what happens under the 
covers of a query. I've used it extensively in Oracle but have never used it 
in PostgreSQL.

If I understand what you said, the NOT IN was significantly slower. That was 
my experience in Oracle a long time ago, so I've tended to shy away from 
that syntax. I'm sure optimizers are much better now than when I 
experimented with NOT IN, but my coworker who tried it in Oracle was getting 
a slower response than with a subselect about a year ago. Theoretically, if 
three queries are logically equivalent, as the three you've been given are, 
an optimizer should find the same best query plan to execute them. I don't 
think that optimizers are that smart yet.

The outer join is probably doing either a sort merge or a hash join. In your 
application this should be the best option. (A sort merge sorts both tables 
first or at least the key columns and then merges the tables together.)

Bruno said that the subselect would be slower. It may be that he thinks it 
will do a nested loop: that is, it will read each row in table A and try 
to find that concatenated key in table B's index. I don't think that a 
nested loop would be very good in your particular application.

As for the indexes you set up, I think they are correct indexes.
Vincent


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Question on a select

2005-01-02 Thread Vincent Hikida

  The indexes are:
CREATE INDEX file_info_#_display_idx ON file_info_# (file_type, 
file_parent_dir, file_name);
CREATE INDEX file_set_#_sync_idx ON file_set_# (fs_name, fs_parent_dir, 
fs_type)

  Are these not effective for the second query? If not, what should I 
change or add? If so, would you have any insight into why there is such an 
incredible difference in performance?

I didn't look at your indexes closely enough. When you have a concatenated 
index, you want to have the most selective column first. I guess that 
file_type is not very selective; file_name is probably the most selective. 
Given that, the index on file_set_# is optimal and the index on file_info_# 
is suboptimal.
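
As a sketch of that column order (keeping the # placeholder from the original 
DDL), the display index could be rebuilt as:

DROP INDEX file_info_#_display_idx;
CREATE INDEX file_info_#_display_idx
    ON file_info_# (file_name, file_parent_dir, file_type);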

However, if the query is doing a hash join or sort merge, an index is not 
used, so the index doesn't matter there. That said, you probably run other 
queries that do use the index, so it is worth fixing.

Vincent 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Question on a select

2005-01-01 Thread Vincent Hikida
There are several ways. I am making the simplifying assumption that name, 
type and dir cannot be NULL in either table. If they are, the query is a 
little more complicated.

The following are a couple of many techniques.
SELECT a.a_name
     , a.a_type
     , a.a_dir
  FROM a_table a
 WHERE NOT EXISTS
       ( SELECT NULL
           FROM b_table b
          WHERE b.b_name = a.a_name
            AND b.b_type = a.a_type
            AND b.b_dir  = a.a_dir
       )

SELECT a.a_name
     , a.a_type
     , a.a_dir
  FROM a_table a
  LEFT JOIN b_table b
         ON a.a_name = b.b_name
        AND a.a_type = b.b_type
        AND a.a_dir  = b.b_dir
 WHERE b.b_name IS NULL   -- assumes that b.b_name is a not null column

Let's say that dir could be null and dir is a string; then (assuming that 
dir can never be 'xyz') you could say something like

COALESCE(a.a_dir,'xyz') = COALESCE(b.b_dir,'xyz')

Since NULL never equals NULL, if you want a NULL in one table to match a NULL 
in another table, you need to change it to something not NULL. However, this 
depends on what you want in your application.
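
Dropped into the NOT EXISTS version above, and assuming only the dir columns 
can be null and that 'xyz' never occurs as a real value, that would look 
something like:

SELECT a.a_name, a.a_type, a.a_dir
  FROM a_table a
 WHERE NOT EXISTS
       (SELECT NULL
          FROM b_table b
         WHERE b.b_name = a.a_name
           AND b.b_type = a.a_type
           AND COALESCE(b.b_dir, 'xyz') = COALESCE(a.a_dir, 'xyz'));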

Queries like this are used often to check the integrity of your data. 
Examples of this are 1) What orders don't have order items?  2) What books 
have no authors? etc.

- Original Message - 
From: Madison Kelly [EMAIL PROTECTED]
To: PgSQL General List pgsql-general@postgresql.org
Sent: Saturday, January 01, 2005 7:32 PM
Subject: [GENERAL] Question on a select


Hi all,
  This is my first post here so please let me know if I miss any list 
guidelines. :)

  I was hoping to get some help, advice or pointers to an answer for a 
somewhat odd (to me at least) SELECT. What I am trying to do is select 
that values from one table where matching values do not exist in another 
table.

For example:
  Let's say 'table_a' has the columns 'a_name, a_type, a_dir, a_others' 
and 'table_b' has the columns 'b_name, b_type, b_dir, b_others' where 
'others' are columns unique to each table. What I need to do is select all 
the values in 'a_name, a_type, a_dir' from 'table_a' where there is no 
matching entries in table_b's 'b_name, b_type, b_dir'.

  I know I could do something like:
SELECT a_name, a_type, a_dir FROM table_a;
  and then loop through all the returned values and for each do a matching 
select from 'table_b' and use my program to catch the ones not in 
'table_b'. This is not very efficient though and I will be searching 
through tables that could have several hundred thousand entries so the 
inefficiency would be amplified. Is there some way to use a join or 
something similar to do this?

  Thank you all!
Madison
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] UNION with more restrictive DISTINCT

2004-12-19 Thread Vincent Hikida
Oops. That statement will prefer t1.name instead of t2.name. It should be 
COALESCE(t2.name, t1.name).

Another option is:

SELECT t2.id
     , t2.name
  FROM t2
UNION
SELECT t1.id
     , t1.name
  FROM t1
 WHERE NOT EXISTS
       (SELECT NULL
          FROM t2
         WHERE t2.id = t1.id
       )
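
A quick check against the sample rows from the question (temp tables only for 
illustration); both the corrected COALESCE form and the UNION form above 
return the t2 name when an id appears in both tables:

CREATE TEMP TABLE t1 (id int, name text);
CREATE TEMP TABLE t2 (id int, name text);
INSERT INTO t1 VALUES (1, 'bob');
INSERT INTO t1 VALUES (2, 'mike');
INSERT INTO t2 VALUES (1, 'bob');
INSERT INTO t2 VALUES (2, 'mike j.');

SELECT COALESCE(t2.id, t1.id)     AS id
     , COALESCE(t2.name, t1.name) AS name
  FROM t1 FULL JOIN t2 ON t1.id = t2.id
 ORDER BY id;
-- id |  name
-- ----+----------
--  1 | bob
--  2 | mike j.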
- Original Message - 
From: Vincent Hikida [EMAIL PROTECTED]
To: peter pilsl [EMAIL PROTECTED]; PostgreSQL List 
[EMAIL PROTECTED]
Sent: Saturday, December 18, 2004 12:40 AM
Subject: Re: [GENERAL] UNION with more restrictive DISTINCT


One solution is
SELECT COALESCE(t1.id,t2.id)
     , COALESCE(t1.name,t2.name)
  FROM t1 FULL JOIN t2 ON t1.id = t2.id
- Original Message - 
From: peter pilsl [EMAIL PROTECTED]
To: PostgreSQL List [EMAIL PROTECTED]
Sent: Wednesday, December 15, 2004 1:03 PM
Subject: [GENERAL] UNION with more restrictive DISTINCT


I'd like to UNION two queries but the distinct-criteria for UNION should 
not be all columns in the queries, but only one.

example. two tables:
test=# select id,name from t1;
 id | name
+--
  1 | bob
  2 | mike
(2 rows)
test=# select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike j.
(2 rows)
# select id,name from t1 union select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike
  2 | mike j.
(3 rows)
now I want a construct that returns me only one row for each id. If there 
are different names for that id in the different tables, the name from t2 
should be chosen.

like:
# select id,name from t1 UNION ON (id) AND SUPERMAGICPREFER (t2.name) 
select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike j.
(2 rows)

What is an appropriate approach to this? If I use my UNION-query as 
subquery for a SELECT DISTINCT ID, I lose the name, which is important.

thnx.
peter

--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] UNION with more restrictive DISTINCT

2004-12-18 Thread Vincent Hikida
One solution is
SELECT COALESCE(t1.id,t2.id)
     , COALESCE(t1.name,t2.name)
  FROM t1 FULL JOIN t2 ON t1.id = t2.id
- Original Message - 
From: peter pilsl [EMAIL PROTECTED]
To: PostgreSQL List [EMAIL PROTECTED]
Sent: Wednesday, December 15, 2004 1:03 PM
Subject: [GENERAL] UNION with more restrictive DISTINCT


I'd like to UNION two queries but the distinct-criteria for UNION should 
not be all columns in the queries, but only one.

example. two tables:
test=# select id,name from t1;
 id | name
+--
  1 | bob
  2 | mike
(2 rows)
test=# select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike j.
(2 rows)
# select id,name from t1 union select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike
  2 | mike j.
(3 rows)
now I want a construct that returns me only one row for each id. If 
there are different names for that id in the different tables, the 
name from t2 should be chosen.

like:
# select id,name from t1 UNION ON (id) AND SUPERMAGICPREFER (t2.name) 
select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike j.
(2 rows)

What is an appropriate approach to this? If I use my UNION-query as 
subquery for a SELECT DISTINCT ID, I lose the name, which is important.

thnx.
peter

--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Indexes?

2004-12-02 Thread Vincent Hikida
I believe that it is better to have a concatenated key of (toDate, fromDate). 
The reason toDate should come first is that, for more recent records, 
finding curDates less than toDate is much more selective than finding 
curDates greater than fromDate. Actually I'm not sure fromDate is that 
helpful as part of the concatenated key (it probably depends), but it is 
definitely not helpful by itself.

If your usual query is someEarlyHistoricalDate between toDate and fromDate, 
then the concatenated key should be (fromDate,toDate) instead.

If toDate is sometimes not known, I would use some fixed date far in the 
future rather than a null.
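
A sketch of that suggestion, with made-up table and index names; the query is 
the one from the question, with curdate written as CURRENT_DATE:

CREATE INDEX some_table_to_from_idx ON some_table (toDate, fromDate);

SELECT *
  FROM some_table
 WHERE CURRENT_DATE BETWEEN fromDate AND toDate;
-- i.e. fromDate <= CURRENT_DATE AND CURRENT_DATE <= toDate; the leading
-- toDate column lets the scan start at the first toDate >= CURRENT_DATE,
-- which for a current date skips most of the (mostly historical) rows.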

Vincent
- Original Message - 
From: Bjørn T Johansen [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, December 01, 2004 10:11 PM
Subject: [GENERAL] Indexes?


I have a table where I need to use ..where curdate between fromDate and 
toDate.
Is it best to have two indexes, one for FromDate and one for toDate or 
just one index for both the fields?

Regards,
BTJ
--
---
Bjørn T Johansen
[EMAIL PROTECTED]
---
Someone wrote:
I understand that if you play a Windows CD backwards you hear strange 
Satanic messages
To which someone replied:
It's even worse than that; play it forwards and it installs Windows
---

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Indexes?

2004-12-02 Thread Vincent Hikida
Perhaps I'm missing something but let's say that the index has the 
following:

toDate   fromDate
1992-03-02  1991-01-23
1992-04-03  1990-06-13
1993-05-03  1991-01-22
...
...
...
2004-12-01  2003-02-22
2005-03-04  2003-02-22  (a)
2005-03-05  2004-12-15  (b)
2005-03-05  2004-06-18  (c)
2007-04-12  2005-06-18  (d)
Let's say that there are a million entries where the toDate is less than 
today, 2004-12-02; they all sort before (a) in the index. From the index, 
then, only a, b, c, and d need to be scanned further. a and c would be picked 
based on the index values because 2004-12-02 is between their fromDate and 
toDate. However, b and d would be excluded immediately because their fromDate 
is greater than 2004-12-02, which saves the optimizer from even reading the 
table for those index entries, since the fromDate is in the index.

This may be a somewhat extreme example, but my experience is that in most 
systems old historical data makes up the bulk of the data and newer data is a 
much smaller amount. In addition, most people are interested in data from the 
most recent month.

Of course I may be mistaken about the data distribution.
Vincent
- Original Message - 
From: Bruno Wolff III [EMAIL PROTECTED]
To: Vincent Hikida [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 12:10 PM
Subject: Re: [GENERAL] Indexes?


On Wed, Dec 01, 2004 at 23:16:48 -0800,
 Vincent Hikida [EMAIL PROTECTED] wrote:
I believe that it is better to have a concatenated key of
(toDate,FromDate). The reason the toDate should come first is that for more
recent records, finding curDates less than toDate is much more selective
than finding curDates greater than fromDate. Actually I'm not sure if
fromDate is that helpful either as part of the concatenated key (it
probably depends) but definitely not by itself.
A combined index won't be very useful for the kind of search he is doing.
And not having an index on FromDate could hurt in some cases depending
on the distribution of values.
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] simple query question: return latest

2004-11-11 Thread Vincent Hikida
I interpreted the question slightly differently. I understood it to mean the
most recent instance of red which doesn't make much sense in this case but
let's say the table was

color   |  date |   entered_by
+-+---
red  |  2004-01-19| John
red  |  2004-04-12| Jane

and you wanted to pick up the row which Jane entered, then the statement
would be

SELECT g.color, g.date, g.entered_by
  FROM giventable g
 WHERE g.color = 'red'
   AND g.date =
       (SELECT MAX(g2.date)
          FROM giventable g2
         WHERE g2.color = g.color
       )

or perhaps

SELECT g.color, g.date, g.entered_by
  FROM giventable g
 WHERE (g.color, g.date) =
       (SELECT g2.color, MAX(g2.date)
          FROM giventable g2
         WHERE g2.color = 'red'
         GROUP BY g2.color
       )

etc. etc.

- Original Message -
From: Michael Glaesemann [EMAIL PROTECTED]
To: Scott Frankel [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, November 11, 2004 5:09 PM
Subject: Re: [GENERAL] simple query question: return latest


 Scott,

 On Nov 12, 2004, at 10:00 AM, Scott Frankel wrote:

   color |  date
  +
   red| 2004-01-19
   blue  | 2004-05-24
   red| 2004-04-12
   blue  | 2004-05-24
 
 
  How do I select the most recent entry for 'red'?
 

 SELECT color, MAX(date)
 FROM giventable
 WHERE color = 'red' -- omit this line if you'd like to see the latest
 date for each color
 GROUP BY color;

 OT hint: You might want to take a look at the list of PostgreSQL
 Keywords in the documentation and avoid using them (such as date) to
 help you avoid naming issues in the future.

 Hope this helps.

 Michael Glaesemann
 grzm myrealbox com


 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])





---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] SQL question

2004-11-06 Thread Vincent Hikida
I'm afraid I'm not used to SQL92 join syntax and almost all my experience
is in Oracle, but how about:

SELECT t1.uid
     , t1.uname
     , t2.uid
     , t3.uid
  FROM table1 t1
 INNER JOIN table2 t2 ON t1.uid = t2.uid
 INNER JOIN table3 t3 ON t2.uid = t3.uid
UNION
SELECT t1.uid
     , t1.uname
     , t2.uid
     , NULL
  FROM table1 t1
 INNER JOIN table2 t2 ON t1.uid = t2.uid
 WHERE NOT EXISTS
       (SELECT NULL
          FROM table3 t3
         WHERE t3.uid = t1.uid
       )
UNION
SELECT t1.uid
     , t1.uname
     , NULL
     , t3.uid
  FROM table1 t1
 INNER JOIN table3 t3 ON t1.uid = t3.uid
 WHERE NOT EXISTS
       (SELECT NULL
          FROM table2 t2
         WHERE t2.uid = t3.uid
       )

Perhaps there was a solution using outer joins and case statements within
the SELECT clause. Perhaps there is also a solution using subselects in the
SELECT clause. However, this is all I can do for tonight.

Vincent

- Original Message -
From: Uwe C. Schroeder [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, November 06, 2004 3:13 PM
Subject: [GENERAL] SQL question




Here's a question for the SQL guru's out there, which I've been trying to
solve for the last couple of hours. There's got to be a solution to this,
but
somehow I can't find it.

Tables:

table1 (
uid int PK,
uname varchar(64)
)

table2 (
uid int FK to table1,
xuid int FK to table1
)

table3 (
  uid int FK to table1,
yuid int FK to table1
)

There might be more tables of the type like table2 and table3, but I'd
already
be happy to solve the puzzle with the 3 tables above.
Ok, assume table1 is the master table - in my case a table used for login
authentication (some columns removed above)
table2 and table3 are tables where the uid always references to the uid in
table1. The second uid (xuid and yuid in this example) references to
another uid record in table1. The problem is that there may or may not be
entries in table2 (or table3) referencing a specific uid in their second uid
field.
Maybe some data:

table1:
1 test1
2 test2
3 test3

table2:
1 2
1 3
3 1

table3:
1 2
2 3
3 2

What I want to do in a view is the following resultset:

uid  uname  xuid  yuid
  1  test1     2     2
  1  test1     3
  2  test2           3
  3  test3     1
  3  test3           2


So basically I want to know which uid is connected to which uid, one
relationship per row. So xuid and yuid shall be identical if records exist
in
both table2 and table3 or the value shall be NULL if a corresponding record
can't be found in either table2 or table3.

Can anyone here help me out?

Thanks a lot

UC

- --
Open Source Solutions 4U, LLC 2570 Fleetwood Drive
Phone:  +1 650 872 2425 San Bruno, CA 94066
Cell:   +1 650 302 2405 United States
Fax:+1 650 872 2417


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly





---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: Fw: [GENERAL] Is SQL silly as an RDBMS-app interface?

2003-07-24 Thread Vincent Hikida

- Original Message -
From: Bruce Momjian [EMAIL PROTECTED]
To: elein [EMAIL PROTECTED]
Cc: Jan Wieck [EMAIL PROTECTED]; Vincent Hikida
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, July 24, 2003 5:35 PM
Subject: Re: Fw: [GENERAL] Is SQL silly as an RDBMS-app interface?



 Yes, it was more powerful because you could do aggregates in the query
 independent of the results returned by the query.

 The 'by' feature of aggregates always confused me because it would
 modify the aggregate WHERE clause (that was independent of the outer
 query) and restrict the aggregate to only process rows where the outer
 query's column value matched the same column's value in the aggregate.


Actually, I used a hierarchical/relational DBMS called Nomad in 1981. If I
understand Bruce, Nomad could do the same thing. I could aggregate at
different levels in the same query. Each aggregate created a break and I
could add whatever code I wanted at that level. I could also refer to any
level of aggregate in the rest of the query, and to any level of aggregate
in the rest of the code. This meant that I could, for example, calculate
what percentage of the total an individual row was. The only problem was
that I could only join two tables at a time, so if I wanted to join several
tables I had to have several statements. Each statement created an
intermediate table which was easy to refer to in subsequent statements.
Vincent


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] comparing database schema's

2003-07-22 Thread Vincent Hikida
I wrote a utility once at a company where I worked previously that did this.
It compared the metadata tables of the two schemas. This was in Oracle
though. I think it took me less than a week to do. It might be a nice
utility to build for Postgres.

It started with code such as:

SELECT TABLE_NAME
 FROM USER_TABLES
MINUS
SELECT TABLE_NAME
FROM [EMAIL PROTECTED]
/
And vice versa.

It then compared columns of common tables, the characteristics of the
columns, the indexes etc and used PL/SQL.
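
A minimal sketch of the same idea in Postgres, assuming the two schemas to 
compare (hypothetically named schema_a and schema_b) live in the same 
database; EXCEPT plays the role of Oracle's MINUS:

SELECT table_name
  FROM information_schema.tables
 WHERE table_schema = 'schema_a'
EXCEPT
SELECT table_name
  FROM information_schema.tables
 WHERE table_schema = 'schema_b';
-- and vice versa; column-level differences can be compared the same way
-- against information_schema.columns.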

Vincent Hikida,
Member of Technical Staff - Urbana Software, Inc.
A Personalized Learning Experience

www.UrbanaSoft.com

- Original Message -
From: John Harrold [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, July 22, 2003 11:46 AM
Subject: [GENERAL] comparing database schema's




---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Fw: [GENERAL] select null + 0 question

2003-07-14 Thread Vincent Hikida
Oops forgot to cc the list.

 Unfortunately, intra-row functions using nulls return nulls. Inter-row
 functions usually ignore the nulls. I think there may be a few exceptions.
 Though there is a relational theory which is rigorously consistent,
 nulls are not part of the theory. Nulls are basically what someone/somewhere
 thought of as a convenient tool (which it is) but they have no theoretical
 underpinning and are dangerous. I use them because I feel that I have enough
 experience, but perhaps I'll be stung one day.

 It has been discussed on the list before that in Oracle an empty string
 and null are the same. However, Postgres treats an empty string '' as an
 actual value and not as null.

 I just happened to notice another difference recently between Oracle and
 Postgresql for the clause

 WHERE 1 IN (1,2,NULL)

 In Oracle, this clause is false because 1 compared to a NULL is false.
 However, Postgresql will return a true. I actually don't know what the ANSI
 standard is for this case. Perhaps someone else on this list will know.
 Perhaps the standards body never even thought of this. Yes, I was actually
 stung by this particular case while using it in Oracle.

 Vincent Hikida,
 Member of Technical Staff - Urbana Software, Inc.
 A Personalized Learning Experience

 www.UrbanaSoft.com
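
 For anyone who wants to see these behaviours side by side, a quick check
 (using the little table a from the question quoted below, which holds two
 NULLs and a 1):

 SELECT NULL + 0 AS plus_null;    -- NULL: row-level arithmetic propagates NULL
 SELECT sum(a) FROM a;            -- 1: the aggregate skips the NULL rows
 SELECT 1 IN (1, 2, NULL) AS found,      -- true: a match is still a match
        3 IN (1, 2, NULL) AS not_found;  -- NULL, not false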

 - Original Message -
 From: Jean-Christian Imbeault [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Sunday, July 13, 2003 10:42 PM
 Subject: [GENERAL] select null + 0 question


  Why is it that select null + 1 gives null but select sum(a) from
  table where there are null entries returns an integer?
 
  Shouldn't the sum() and + operators behave the same?
 
  TAL=# select null + 0;
?column?
  --
 
  (1 row)
 
  TAL=# select * from a;
a
  ---
 
 
1
  (3 rows)
 
  TAL=# select sum(a) from a;
sum
  -
  1
  (1 row)
 
 
  Thanks,
 
  Jean-Christian Imbeault
 
 
  ---(end of broadcast)---
  TIP 8: explain analyze is your friend
 



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] FYI: geometric means in one step without custom functions

2003-07-14 Thread Vincent Hikida
This is a great technique.  It is especially useful in finance for
compounded interest for problems like the following

total return = ((1+janReturn)(1+febReturn)(1+marReturn))-1

I first learned it from an MBA in finance when I was looking over a
spreadsheet that she wrote.
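
As a hedged sketch of that finance use (monthly_returns, fund_id and
monthly_return are invented names), the same ln/sum/exp trick compounds
per-period returns in one aggregate:

-- total return = exp(sum(ln(1 + r))) - 1, assuming every r > -1
-- so that ln(1 + r) is defined
SELECT fund_id
     , exp(sum(ln(1 + monthly_return))) - 1 AS total_return
  FROM monthly_returns
 GROUP BY fund_id;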

Vincent Hikida,
Member of Technical Staff - Urbana Software, Inc.
A Personalized Learning Experience

www.UrbanaSoft.com

- Original Message -
From: Andrew Gould [EMAIL PROTECTED]
To: Postgres Mailing List [EMAIL PROTECTED]
Sent: Sunday, July 06, 2003 8:38 AM
Subject: [GENERAL] FYI: geometric means in one step without custom functions


 A long time ago, I emailed this list about calculating
 a geometric mean in PostgreSQL.  Creating a custom
 function didn't work because the process of
 multiplying the values from each record resulted in
 numbers that exceeded the limits for the size of a
 number very quickly when dealing with large
 populations.

 I have learned, since, that you can achieve the same
 end by replacing certain steps with log functions.
 (Someone who is very good at math showed me this -- I
 just tested the results and wrote the sql.)  This
 method has 2 great benefits:

 1. The method pushes the limits of deriving geometric
 mean calculations considerably.
 2. The default installation of PostgreSQL has
 everything needed to perform the calculation.

 The sql statement below calculates the geometric mean
 of the lengths of stay (gm_los) for patients, grouped
 by diagnostic related group and fiscal year.

 The population (cases) and average length of stay
 (avg_los) are also reported.

 Note 1. Make sure you are calculating geometric mean
 on a data type that has values to the right of the
 decimal point.

 Note 2. You cannot use a log function on a value <= 0.
 Thus, I filtered for los > 0.

 select drg_no, fy, count(pt_id) as cases,
 avg(los) as avg_los,
 exp(sum(ln(los::real)) * (1.0/count(pt_id))) as gm_los

 from case_current where los > 0
 group by drg_no, fy;

 Have fun!

 Andrew Gould

 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings