Re: [SQL] update from multiple rows

2005-01-24 Thread franco
I understand data_sys is the average value for the 3 days, from the
day before to the day after.
This should do what you want, in one pass. Check the average function in
the subselect. If what you want is to divide by 3 no matter how many
records were found, enable the commented line.

UPDATE mytable SET data_sys=TMP.average
FROM (
--get the averages by date
SELECT
MT1.date AS date,
avg(MT2.data_raw) AS average
--sum(MT2.data_raw)/3 AS average
FROM
mytable MT1
INNER JOIN mytable MT2 ON (MT2.date BETWEEN MT1.date-1 AND MT1.date+1)
GROUP BY
MT1.date
) AS TMP
WHERE
mytable.date=TMP.date

Hope this is what you were looking for.
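The same three-day centered average can be sanity-checked outside PostgreSQL. Here is a minimal sketch using Python's sqlite3 module, with hypothetical data; integer dates stand in for real dates so the date-1/date+1 arithmetic works the same way:

```python
import sqlite3

# Hypothetical toy data: integer "dates" stand in for real dates.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mytable (date INTEGER PRIMARY KEY, data_raw REAL, data_sys REAL)")
conn.executemany("INSERT INTO mytable (date, data_raw) VALUES (?, ?)",
                 [(1, 5.0), (2, 6.0), (3, 7.0)])

# Same idea as the UPDATE ... FROM above: each date is matched with its
# neighbours, and the average of data_raw feeds back into data_sys.
conn.execute("""
    UPDATE mytable SET data_sys = (
        SELECT avg(m2.data_raw) FROM mytable m2
        WHERE m2.date BETWEEN mytable.date - 1 AND mytable.date + 1)
""")
rows = conn.execute(
    "SELECT date, data_sys FROM mytable ORDER BY date").fetchall()
```

Here `rows` comes back as `[(1, 5.5), (2, 6.0), (3, 6.5)]`: the edge days average over only two neighbours, which is exactly where the commented-out sum(...)/3 variant would give different results.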
(B
adam etienne wrote:

> hi
> I have some trouble updating a table like this one :
> date | data_raw | data_sys
> 12-01 | 5 | 4.5
> 13-01 | 6 | 6
> 14-01 | 7 | 8
>
> I would like to update the 'data_sys' column by computing values of
> multiple 'data_raw' values. I mean for example :
> data_sys(13-01) = (data_raw['12-01'] + data_raw['13-01'] +
> data_raw['14-01'] )/3;
>
> I thought of a function that fetches the 3 data_raw rows for each
> row but it was obviously too slow...
>
> Is there a more efficient way to achieve this ?
> Thanks in advance.. This could help me very much..
>
> Etienne Adam
>
> _
> For free email, it's got to be "MSN Hotmail"
> http://www.hotmail.com/
>
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
>


Re: [HACKERS] [SQL] What's wrong with this group by clause?

2003-03-14 Thread Franco Bruno Borghesi
Thanks Tom, I applied the patch and it works perfect now.
Thanks to you all.

On Thursday 13 March 2003 14:02, Tom Lane wrote:
> Manfred Koizar <[EMAIL PROTECTED]> writes:
> > ISTM you have found a Postgres 7.3 bug.
>
> Yeah.  Actually, the planner bug has been there a long time, but it was
> only latent until the parser stopped suppressing duplicate GROUP BY
> items:
>
> 2002-08-18 14:46  tgl
>
>   * src/backend/parser/parse_clause.c: Remove optimization whereby
>   parser would make only one sort-list entry when two equal()
>   targetlist items were to be added to an ORDER BY or DISTINCT list.
>   Although indeed this would make sorting fractionally faster by
>   sometimes saving a comparison, it confuses the heck out of later
>   stages of processing, because it makes it look like the user wrote
>   DISTINCT ON rather than DISTINCT.  Bug reported by
>   [EMAIL PROTECTED]
>
> 7.3 patch is attached if you need it.
>
>   regards, tom lane
>
>
> *** src/backend/optimizer/plan/planner.c.orig Wed Mar  5 13:38:26 2003
> --- src/backend/optimizer/plan/planner.c  Thu Mar 13 11:21:16 2003
> ***
> *** 1498,1510 
>* are just dummies with no extra execution cost.)
>*/
>   List   *sort_tlist = new_unsorted_tlist(subplan->targetlist);
>   int keyno = 0;
>   List   *gl;
>
>   foreach(gl, groupClause)
>   {
>   GroupClause *grpcl = (GroupClause *) lfirst(gl);
> ! TargetEntry *te = nth(grpColIdx[keyno] - 1, sort_tlist);
>   Resdom *resdom = te->resdom;
>
>   /*
> --- 1498,1511 
>* are just dummies with no extra execution cost.)
>*/
>   List   *sort_tlist = new_unsorted_tlist(subplan->targetlist);
> + int grpno = 0;
>   int keyno = 0;
>   List   *gl;
>
>   foreach(gl, groupClause)
>   {
>   GroupClause *grpcl = (GroupClause *) lfirst(gl);
> ! TargetEntry *te = nth(grpColIdx[grpno] - 1, sort_tlist);
>   Resdom *resdom = te->resdom;
>
>   /*
> ***
> *** 1518,1523 
> --- 1519,1525 
>   resdom->reskey = ++keyno;
>   resdom->reskeyop = grpcl->sortop;
>   }
> + grpno++;
>   }
>
>   Assert(keyno > 0);




Re: [SQL] Stored procedures

2003-03-28 Thread Franco Bruno Borghesi
Here is a full example of a java program showing the data from a set returning 
function:

-
--IN YOUR DATABASE
CREATE TABLE people (name TEXT);
INSERT INTO people VALUES ('john');
INSERT INTO people VALUES ('peter');
INSERT INTO people VALUES ('joe');

CREATE FUNCTION getPeople() RETURNS SETOF people AS '
DECLARE
   rec RECORD;
BEGIN
   FOR rec IN
  SELECT name FROM people
   LOOP
  RETURN NEXT rec;
   END LOOP;   
   RETURN;
END;' LANGUAGE 'plpgsql';

---
--ListPeople.java
import java.sql.*;

public class ListPeople {
   public static void main(String[] args) {
      try {
         Class.forName("org.postgresql.Driver");
         Connection con =
            DriverManager.getConnection("jdbc:postgresql:franco?user=admin");
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT * FROM getPeople()");
         while (rs.next()) {
            System.out.println(rs.getString("name"));
         }
      }
      catch (Exception e) {
         System.out.println("Exception: " + e.getMessage());
      }
   }
}

On Friday 28 March 2003 19:31, Zodiac wrote:
> Hello!
> Can anybody tell me one thing.
> How can i call stored procedures in my java-programm?
>
> Thanks for any help.




Re: [SQL] returning composite types.

2003-03-29 Thread Franco Bruno Borghesi
yes, I was trying to do something like that, but it always breaks in the same 
place. First I thought it was because of the way I was assigning values to 
the fields of my row, but now I'm beginning to think that the reason is 
the way I pass the row to f2. 

Here is the error:
franco=# SELECT f1();
WARNING:  Error occurred while executing PL/pgSQL function f1
WARNING:  line 5 at select into variables
ERROR:  Attribute "result" not found

CREATE TYPE mytype AS (val1 INTEGER, val2 INTEGER, val3 INTEGER);

CREATE OR REPLACE FUNCTION f1() RETURNS mytype AS '
DECLARE   
   result mytype%ROWTYPE;
BEGIN
   result.val1:=1;   
   SELECT val2, val3 INTO result.val2, result.val3 FROM f2(result);
   RETURN result;  
END;
' LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION f2(mytype) RETURNS mytype AS '
DECLARE   
   arg ALIAS FOR $1;
   result mytype%ROWTYPE;
BEGIN
   arg.val2:=2;
   arg.val3:=3;
   SELECT arg.val1, arg.val2, arg.val3 INTO result.val1, result.val2, 
result.val3;
   RETURN result;   
END;
' LANGUAGE 'plpgsql';

what do you guys think?

On Saturday 29 March 2003 13:49, Joe Conway wrote:
> Franco Bruno Borghesi wrote:
> > ok, soy you're telling me that the only way to return a composite type is
> > using a set of them, even if I know my function will allways return 1
> > record.
>
> Try this:
>
> create type foo as (f1 int, f2 text);
> create or replace function retfoo(int, text) returns foo as '
>   declare
>result foo%ROWTYPE;
>   begin
>select into result $1, $2;
>return result;
>   end;
> ' language 'plpgsql';
>
> regression=# select * from retfoo(2,'b');
>   f1 | f2
> +
>2 | b
> (1 row)
>
> Joe




Re: [SQL] Stored procedures

2003-03-29 Thread Franco Bruno Borghesi
As far as I know, you always work with a ResultSet. 
If you know your stored procedures will always return an Integer and you don't 
want to deal with executeQuery and friends every time, you could create a 
class with methods explicitly for accessing your stored procedures, for 
example:

assuming you have a pg function returning an INT, called countPeople(), you 
could do

public class MyStoredProcs {
   private static int executeAnyProc(Connection conn, String procName) throws 
SQLException {
      Statement stmt = conn.createStatement();
      ResultSet rs = stmt.executeQuery("SELECT * FROM " + procName + "()");
      rs.next();
      return rs.getInt(1);
   }

   public static int countPeople(Connection conn) throws SQLException {
      return executeAnyProc(conn, "countPeople");
   }
}

You could add methods to access every stored procedure in your database (even 
returning other data types), and you would use it like this in your code, 
given an open Connection conn:

...
   int count = MyStoredProcs.countPeople(conn);
   // do something with the value
   if (count > 100) {
...


hope this is what you were looking for.  
 

On Saturday 29 March 2003 17:35, Zodiac wrote:
> Thank you for help.
> Just one more question. Have i direct access to stored procedure?
> For example, i have procedure which returns Integer and i wanna to have
> ability to write such code " int var = ANY_CALL". Where ANY_CALL is a my
> procedure call.
> I meant must i do "executeQuery" only and after then parse Statement
> variable?
>
> Thank you.




Re: [SQL] can i make this sql query more efficiant?

2003-04-03 Thread Franco Bruno Borghesi
if you're allowed to change the resultset structure, you could do:
SELECT
   event,
   level, 
   count(*)
FROM
   baz
GROUP BY
   event,
   level;   

 event | level | count
---+---+---
 x | 1 | 1
 x | 2 | 1
 x | 3 | 1
 y | 2 | 1
 y | 3 | 2
(5 rows)

of course it doesn't show you the rows where the count is zero.
if you need the zeros, do this

SELECT   
   EL.event,
   EL.level, 
   count(baz.*)
FROM
   (
  SELECT DISTINCT
 B1.event, B2.level 
  FROM 
 baz B1 
 CROSS JOIN baz B2
   ) EL
   LEFT JOIN baz ON (baz.event=EL.event AND baz.level=EL.level)  
GROUP BY
   EL.event,
   EL.level;   

 event | level | count
---+---+---
 x | 1 | 1
 x | 2 | 1
 x | 3 | 1
 y | 1 | 0
 y | 2 | 1
 y | 3 | 2
(6 rows)

hope it helps.
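A quick way to convince yourself that the CROSS JOIN / LEFT JOIN version really fills in the zero rows is to replay it on SQLite from Python, with the same toy data as in the question (count(baz.*) becomes count(baz.event), since SQLite has no table.* inside count):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE baz (event TEXT, level INTEGER)")
conn.executemany("INSERT INTO baz VALUES (?, ?)",
                 [('x', 1), ('x', 2), ('x', 3), ('y', 2), ('y', 3), ('y', 3)])

# The cross join builds every (event, level) pair; the LEFT JOIN then
# counts matches per pair, yielding 0 where no row exists (e.g. y/1).
rows = conn.execute("""
    SELECT EL.event, EL.level, count(baz.event)
    FROM (SELECT DISTINCT B1.event, B2.level
          FROM baz B1 CROSS JOIN baz B2) EL
    LEFT JOIN baz ON baz.event = EL.event AND baz.level = EL.level
    GROUP BY EL.event, EL.level
    ORDER BY EL.event, EL.level
""").fetchall()
```

This returns the six rows shown above, including ('y', 1, 0) for the missing combination.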

On Thursday 03 April 2003 18:02, Robert Treat wrote:
> create table  baz (event text, level int);
>
> insert into baz values ('x',1);
> insert into baz values ('x',2);
> insert into baz values ('x',3);
> insert into baz values ('y',2);
> insert into baz values ('y',3);
> insert into baz values ('y',3);
>
> select * from baz;
>
>  event | level
> ---+---
>  x | 1
>  x | 2
>  x | 3
>  y | 2
>  y | 3
>  y | 3
> (6 rows)
>
>
> I want to know how many ones, twos, and threes there are for each event:
>
> select
>   event,
>   (select count(*) from baz a
>   where level = 1 and a.event=baz.event) as ones,
>   (select count(*) from baz a
>   where level = 2 and a.event=baz.event) as twos,
>   (select count(*) from baz a
>   where level = 3 and a.event=baz.event) as threes
> from
>baz
> group by
>   event;
>
> which gives me:
>
>  event | ones | twos | threes
> ---+--+--+
>  x |1 |1 |  1
>  y |0 |1 |  2
> (2 rows)
>
>
> which is fine, but I am wondering if there is a better way to do this?
> I'd mainly like to reduce the number of subqueries involved. Another
> improvement would be to not have to explicitly query for each level,
> though this isn't as big since I know the range of levels in advance
> (famous last words for a dba :-)
>
> Thanks in advance,
>
> Robert Treat
>
>




Re: [SQL] SQL Help

2003-05-31 Thread Franco Bruno Borghesi
If your concern is speed, the thing here is that you will get as many records
as there are in "mytable", most of them (I think) with NULLs for alias1, alias2,
alias3 and alias4. This way, there is no condition to filter out any record, so
postgreSQL will do a sequential scan over the whole table.

If you are OK fetching only the records that match, and assuming that all the
others don't match, then the following will work:

--column1=column2
SELECT
   column3 AS alias1, column4 AS alias2, column5 AS alias3, NULL AS alias4
FROM
   myTable
WHERE
   column1=column2
UNION
--column6=column7
SELECT
   NULL AS alias1, NULL AS alias2, NULL AS alias3, column8 AS alias4
FROM
   myTable
WHERE
   column6=column7

Of course, you will need the necessary indexes.

If this didn't give you a hint, please post a message with a link to your original
message, so I can get a better idea of what you need.

On Friday 30 May 2003 12:47, C F wrote:
> Hello,
> I already tried this same basic question with no response  maybe I was
> too wordy.  So here it is simplified what's the best way to write this
> query?  I'm open to using stored procedures, but even then I don't know how
> I would conditionally populate a resultset (refcursor).  Notice that in the
> first three cases, the expression is the exact same, only the return value
> is different.  This seems inefficient
>
>
> select
>   (case when column1 = column2 then column3 end) as alias1,
>   (case when column1 = column2 then column4 end) as alias2,
>   (case when column1 = column2 then column5 end) as alias3,
>   (case when column6 = column7 then column8 end) as alias4
> from
>   mytable
> ;
>
> Any ideas?
> Thanks!
>
>
> -
> Do you Yahoo!?
> Free online calendar with sync to Outlook(TM).




Re: [SQL] Delete duplicates

2003-06-22 Thread Franco Bruno Borghesi
try this

DELETE FROM aap WHERE  id NOT IN (
   SELECT max(id)
   FROM aap
   GROUP BY keyword
);
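To see which rows survive, here is a small self-check of the same DELETE, replayed on SQLite from Python with the toy table from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aap (id INTEGER PRIMARY KEY, keyword TEXT)")
conn.executemany("INSERT INTO aap VALUES (?, ?)", [
    (1, 'LEAGUE PANTHERS'), (2, 'LEAGUE PANTHERS'), (3, 'LEAGUE PANTHERS'),
    (4, 'LEAGUE PANTHERS'), (5, 'LEAGUE BRONCOS'), (6, 'LEAGUE BRONCOS')])

# Keep only the newest (highest id) row per keyword; delete the rest.
conn.execute("""
    DELETE FROM aap WHERE id NOT IN (
        SELECT max(id) FROM aap GROUP BY keyword)
""")
survivors = [r[0] for r in conn.execute("SELECT id FROM aap ORDER BY id")]
```

As wanted, only records 4 and 6 remain.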

>
>
> Hi,
>
> I have a table with duplicates and trouble with my SQL.
> I'd like to keep a single record and remove older duplicates.
> For example below of the 6 recods I'd like to keep records
> 4 and 6.
>
> TABLE: aap
> id | keyword
> +-
>  1 | LEAGUE PANTHERS
>  2 | LEAGUE PANTHERS
>  3 | LEAGUE PANTHERS
>  4 | LEAGUE PANTHERS
>  5 | LEAGUE BRONCOS
>  6 | LEAGUE BRONCOS
>
> Here is my SQL so far, it will select records 1 to 5 instead
> of 1,2,3 and 5 only.
>
> Any help greatly appreciated. I think I need a Group By somewhere in
> there.
>
> select a1.id
> from aap a1
> where id < ( SELECT max(id) FROM aap AS a2 )
> AND EXISTS
> (
> SELECT *
> FROM aap AS a2
> WHERE a1.keyword = a2.keyword
> )
>
> Regards
> Rudi.
>






Re: [SQL] possible?

2003-08-14 Thread Franco Bruno Borghesi
If I don't get it wrong, you want to know how many threads you have for each forum, and how many posts you have for each thread... don't you?

maybe something like 

SELECT
    F.id AS forumId,
    ( SELECT count(id) FROM thread WHERE forumId=F.id ) AS threadCount,
    T.id AS threadId,
    ( SELECT count(id) FROM posts WHERE threadId=T.id ) AS postCount
FROM
    Forum F
    INNER JOIN thread T ON (T.forumId=F.id)

or perhaps

SELECT
    F.id AS forumId,
    T.id AS threadId,
    coalesce(TC.threadCount, 0) AS threadCount,
    coalesce(PC.postCount, 0) AS postCount
FROM
    forum F
    INNER JOIN thread T ON (T.forumId=F.id)
    LEFT JOIN (
    SELECT T.forumId, count(T.id) AS threadCount FROM thread T GROUP BY T.forumId
    ) TC ON (TC.forumId=F.id)
    LEFT JOIN (
    SELECT P.threadID, count(P.id) AS postCount FROM posts P GROUP BY P.threadId
    ) PC ON (PC.threadId=T.id)

should work.
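Here is a sketch of the second query's post-counting part, replayed on SQLite from Python with a hypothetical forum/thread/posts layout. Note the empty thread still shows up, with a count of 0, which is exactly what a plain three-way inner join loses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE forum  (id INTEGER PRIMARY KEY);
    CREATE TABLE thread (id INTEGER PRIMARY KEY, forumId INTEGER);
    CREATE TABLE posts  (id INTEGER PRIMARY KEY, threadId INTEGER);
    INSERT INTO forum  VALUES (1);
    INSERT INTO thread VALUES (10, 1), (11, 1);   -- thread 11 is empty
    INSERT INTO posts  VALUES (100, 10), (101, 10);
""")

# The LEFT JOIN against the per-thread counts keeps empty threads
# visible, with coalesce turning the missing count into 0.
rows = conn.execute("""
    SELECT F.id, T.id, coalesce(PC.postCount, 0)
    FROM forum F
    INNER JOIN thread T ON T.forumId = F.id
    LEFT JOIN (SELECT threadId, count(id) AS postCount
               FROM posts GROUP BY threadId) PC ON PC.threadId = T.id
    ORDER BY T.id
""").fetchall()
```

Thread 10 reports 2 posts and the empty thread 11 reports 0, so new threads stay listed.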




On Mon, 2003-08-11 at 16:41, Fejes Jozsef wrote:

Hi!

I'd like to make a bulletin board, and came upon a strange question: can the
forum listing be done only with one single SELECT?

My data structure is like this. Table "forum" holds the main categories,
each with an "id" field. Table "thread" holds the threads, each has it's own
"id" too, and a "forumid" field that specifies it's parent caregory. Table
"post" holds the posts, with "id", and with a "threadid" field, that
specifies which thread this post belongs to, and also each post has a
timestamp.

First the user should see a listing of the "forum" table. Then he selects a
forum, and he should see the a listing of the "thread" table with the
appropriate "forumid". It's not too serious so far.

Here are my problems:
- order the listings by the time the last post was made
- display how many posts that "forum" or "thread" contains
- for forums, display how many threads are in it

I checked out some forum implementations so far, and found nothing helpful.
They use too many queries, or don't order by time (like PHPBB). On the main
page, I can list the forums, and then make a query for each of them to tell
how many threads and posts are in it, but I just don't like it. By creating
views, I was even able to order by time, but couldn't tell the number of
threads. Also, after creating a new thread, it's empty, so "WHERE forum.id =
thread.forumid AND thread.id = post.threadid" doesn't return empty threads,
so noone can post to it.

What I'd really love to see is a single SELECT statement for each of the two
listings that display everything I want. With data views, embedded selects,
or anything. So, is it possible?











Re: [SQL] How to optimize this query ?

2003-08-14 Thread Franco Bruno Borghesi




Maybe it's better now. Anyway, what I think is that joining will perform better than using IN. Am I wrong?

SELECT
   L.*
FROM
  lead L
  LEFT JOIN purchase P ON (L.id=P.lead_id)
  LEFT JOIN member_exclusion M ON (P.member_id=M.member_id_to_exclude)
  LEFT JOIN (
       SELECT DISTINCT affiliate_locked_id FROM affiliate_lockout WHERE member_id=21101
    ) A ON (L.affiliate_id=A.affiliate_locked_id)
WHERE
    (
    L.exclusive IS NULL OR
    (
    L.exclusive=0 AND
    L.nb_purchases<3
    )
    ) AND
    (P.member_id IS NULL OR P.member_id<>21101) AND
    (M.member_id IS NULL) AND
    (A.affiliate_locked_id IS NULL)


On Tue, 2003-08-12 at 22:29, Stephan Szabo wrote: 

On Tue, 12 Aug 2003, Franco Bruno Borghesi wrote:

> I *guess* this query does the same as yours (please verify).

It does not in general unfortunately. :( I see two possible
problems.

The first is that by using an inner join you're going to lose
any rows where there is no match on the right hand table which
the original doesn't.

The second is that if there were two rows in affiliate_lockout
with the different member_ids but the same affiliate_locked_id
say, (21101, 10) and (21201, 10) and you were matching a lead row
with a affiliate_id of 10, the second row would get past the
condition since it has a member_id!=21101, but the original
would drop the row because there existed some matching
affiliate_lockout row where the member_id was 21101.

> SELECT
> 	L.*
> FROM
> 	lead L
> 	INNER JOIN purchase P ON (L.id=P.lead_id)
> 	INNER JOIN affiliate_lockout A ON
> (L.affiliate_id=A.affiliate_locked_id)
> 	INNER JOIN member_exclusion M ON
> (P.member_id=M.member_id_to_exclude)
> WHERE
> 	L.exclusive IS NULL OR
> 	(
> 		L.exclusive=0 AND
> 		L.nb_purchases<3
> 	) AND
> 	P.lead_id<>21101 AND
> 	A.member_id<>21011
>
> > Hi all
> >
> > I have to optmize this query, because it takes a while to run (about
> > 30s)
> >
> > Here are the tables (with the keys):
> > affiliate_lockout  (6 rows)  (member_id, affiliate_id)
> > lead   (4490 rows)   (id, ...)
> > member  (6 rows) (id, ...)
> > member_exclusion (3 rows)(member_id, member_id_to_exclude)
> > purchase (1 rows)(lead_id, member_id, ...)
> >
> >
> > Here is the query:
> > SELECT *
> > FROM lead
> > WHERE
> > (exclusive IS NULL OR (exclusive = 0 AND nb_purchases < 3)) AND
> >
> > id NOT IN (
> >
> > SELECT lead_id
> > FROM purchase
> > WHERE member_id = 21101
> > ) AND affiliate_id NOT
> > IN (
> >
> > SELECT affiliate_locked_id
> > FROM affiliate_lockout
> > WHERE member_id = 21101
> > ) AND id NOT
> > IN (
> >
> > SELECT lead_id
> > FROM purchase
> > INNER JOIN member_exclusion
> > WHERE purchase.member_id = member_exclusion.member_id_to_exclude AND
> > purchase.member_id = 21101
> > )
> >
> >
> > I wonder the problem is with the subqueries (which are apparently very
> > slow to run, according to what I read), but I can't figure how to
> > rewrite this query without any subquery ...
> >
> > Maybe the problem comes from the index ... How would you create your
> > indexes to optimize this query ?
> >
> > Could somebody help me ?
> > Thanks
> > krysto
> >
>
>
>
>
>













Re: [SQL] How to optimize this query ?

2003-08-14 Thread Franco Bruno Borghesi
I *guess* this query does the same as yours (please verify).

SELECT
L.*
FROM
lead L
INNER JOIN purchase P ON (L.id=P.lead_id)
INNER JOIN affiliate_lockout A ON
(L.affiliate_id=A.affiliate_locked_id)
INNER JOIN member_exclusion M ON
(P.member_id=M.member_id_to_exclude)
WHERE
L.exclusive IS NULL OR
(
L.exclusive=0 AND
L.nb_purchases<3
) AND
P.lead_id<>21101 AND
A.member_id<>21011

Hope it performs better.

> Hi all
>
> I have to optmize this query, because it takes a while to run (about
> 30s)
>
> Here are the tables (with the keys):
> affiliate_lockout  (6 rows)  (member_id, affiliate_id)
> lead   (4490 rows)   (id, ...)
> member  (6 rows) (id, ...)
> member_exclusion (3 rows)(member_id, member_id_to_exclude)
> purchase (1 rows)(lead_id, member_id, ...)
>
>
> Here is the query:
> SELECT *
> FROM lead
> WHERE
> (exclusive IS NULL OR (exclusive = 0 AND nb_purchases < 3)) AND
>
> id NOT IN (
>
> SELECT lead_id
> FROM purchase
> WHERE member_id = 21101
> ) AND affiliate_id NOT
> IN (
>
> SELECT affiliate_locked_id
> FROM affiliate_lockout
> WHERE member_id = 21101
> ) AND id NOT
> IN (
>
> SELECT lead_id
> FROM purchase
> INNER JOIN member_exclusion
> WHERE purchase.member_id = member_exclusion.member_id_to_exclude AND
> purchase.member_id = 21101
> )
>
>
> I wonder the problem is with the subqueries (which are apparently very
> slow to run, according to what I read), but I can't figure how to
> rewrite this query without any subquery ...
>
> Maybe the problem comes from the index ... How would you create your
> indexes to optimize this query ?
>
> Could somebody help me ?
> Thanks
> krysto
>






Re: [SQL] SQL Syntax problem

2003-09-29 Thread Franco Bruno Borghesi




Conditions like these are Oracle outer joins: the (+) marks the side that may have no matching row. Here b.bet_id(+) = a.bet_idemp is a left join from a to b, and a.ask_id = f.ask_id(+) is a left join from a to f.

Something like this

select    ...
from
	auswahlkatalog k
	INNER JOIN anspruchkorrektur a ON (k.awk_id = a.awk_id)
	LEFT JOIN beteiligter b ON (b.bet_id = a.bet_idemp)
	LEFT JOIN v_betkorr f ON (a.ask_id = f.ask_id)

should give you the same results.

On Sun, 2003-09-28 at 09:43, Doris Bernloehr wrote:

Hello.

I've got a problem in porting the following select statement from Oracle to 
Postgres, because of the characters after "b.bet_id" and "f.ask_id" in the 
where clause: (+)
I don't know what these characters mean and how I can transform these into 
PostgreSql Syntax.


select...
from 		auswahlkatalog k, anspruchkorrektur a, beteiligter b, v_betkorr f
where   	k.awk_id = a.awk_id and b.bet_id(+) = a.bet_idemp
		and a.ask_id = f.ask_id(+);


Hoping for help.
Doris










Re: [SQL] Help with pivoting tables

2003-09-29 Thread Franco Bruno Borghesi




would 

SELECT
	groupid, activity_date,
	sum(TMP.Attended) AS Attended,
	sum(TMP.Unexcused) AS Unexcused,
	sum(TMP.Absent) AS Absent,
	sum(TMP.Called) AS Called
FROM (
	SELECT groupid, activity_date,
		count(CASE activity_subcode WHEN 100 THEN 1 ELSE NULL END) AS Attended,
		count(CASE activity_subcode WHEN 101 THEN 1 ELSE NULL END) AS Unexcused,
		count(CASE activity_subcode WHEN 102 THEN 1 ELSE NULL END) AS Absent,
		count(CASE activity_subcode WHEN 103 THEN 1 ELSE NULL END) AS Called,
		count(*) AS total
	FROM activity
	WHERE activity_date BETWEEN '06/02/2003' AND '06/06/2003'
	GROUP BY groupid, activity_date, activity_subcode
	ORDER BY groupid, activity_date
	) TMP
GROUP BY groupid, activity_date
ORDER BY groupid, activity_date

do what you want?
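Replaying the wrapped query on SQLite from Python, with data shaped like the sample output, shows the per-subcode rows collapsing into one row per group and date. Only the attended/unexcused columns are included here to keep the sketch short:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activity "
             "(groupid TEXT, activity_date TEXT, activity_subcode INTEGER)")
# 7 attended / 2 unexcused on 06-04, 4 attended / 5 unexcused on 06-05.
data = ([('BNEIO', '2003-06-04', 100)] * 7 + [('BNEIO', '2003-06-04', 101)] * 2
        + [('BNEIO', '2003-06-05', 100)] * 4 + [('BNEIO', '2003-06-05', 101)] * 5)
conn.executemany("INSERT INTO activity VALUES (?, ?, ?)", data)

# The outer query re-groups without activity_subcode, merging the
# per-subcode rows into one row per (groupid, activity_date).
rows = conn.execute("""
    SELECT groupid, activity_date, sum(Attended), sum(Unexcused)
    FROM (SELECT groupid, activity_date,
                 count(CASE activity_subcode WHEN 100 THEN 1 ELSE NULL END) AS Attended,
                 count(CASE activity_subcode WHEN 101 THEN 1 ELSE NULL END) AS Unexcused
          FROM activity
          GROUP BY groupid, activity_date, activity_subcode) TMP
    GROUP BY groupid, activity_date
    ORDER BY activity_date
""").fetchall()
```

The result is one combined row per date: ('BNEIO', '2003-06-04', 7, 2) and ('BNEIO', '2003-06-05', 4, 5).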

On Mon, 2003-09-29 at 16:50, Ben Schneider wrote:

Hi,

I am having some diffuculty with pivoting the results of a query. I am using
the following SQL in an attempt to aggreate the data from a table.

--Begin Query--

SELECT groupid, activity_date, 
count(CASE activity_subcode WHEN 100 THEN 1 ELSE NULL END) AS Attended, 
count(CASE activity_subcode WHEN 101 THEN 1 ELSE NULL END) AS Unexcused,
count(CASE activity_subcode WHEN 102 THEN 1 ELSE NULL END) AS Absent,
count(CASE activity_subcode WHEN 103 THEN 1 ELSE NULL END) AS Called,
count(*) AS total
FROM activity
WHERE activity_date BETWEEN '06/02/2003' AND '06/06/2003'
GROUP BY groupid, activity_date, activity_subcode
ORDER BY groupid, activity_date

--End Query--

The output is coming back like:

Groupid	activity_date  attended  unexcused  absent  called  total
-
BNEIO   	2003-06-04  7	0  0   0  7
BNEIO   	2003-06-04  0 2  0   0  2
BNEIO   	2003-06-05  4 0  0   0  4
BNEIO   	2003-06-05  0 5  0   0  5

I need the output to come back with the groups_id and activity_date combined
to look like this:

Groupid	activity_date  attended  unexcused  absent  called  total
-
BNEIO   	2003-06-04  7	2  0   0  9
BNEIO   	2003-06-05  4 5  0   0  9

Any ideas?

Thanks,
Ben

---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.521 / Virus Database: 319 - Release Date: 9/23/2003
 










Re: [SQL] monitor sessions

2003-10-10 Thread Franco Bruno Borghesi




SELECT * FROM pg_stat_activity;

On Fri, 2003-10-10 at 09:48, Chris Faulkner wrote:

Hello

Can anyone tell me - is there a system table or view that I can query to
show all current sessions ?

Thanks


Chris











Re: [SQL] URGENT!!! changing Column size

2003-10-28 Thread Franco Bruno Borghesi




Dropping the whole database just for a column change? 

On Tue, 2003-10-28 at 10:00, Theodore Petrosky wrote:

why not just

pg_dump dbname > olddb.out
pico olddb.out

edit the section that defines the table
save and exit

dropdb dbname
createdb dbname

psql dbname < olddb.out

no fuss no muss...

Ted

--- [EMAIL PROTECTED] wrote:
> Hi can we change the size of a column in postgres. I
> have a table named
> institution and column name is name varchar2(25), i
> want to change it to
> varchar2(50). Please let me know.
> 
> --Mohan
> 
> 
> 
> 
> 
> 

__
Do you Yahoo!?
Exclusive Video Premiere - Britney Spears
http://launch.yahoo.com/promos/britneyspears/









Re: [SQL] SQL substring

2003-11-12 Thread Franco Bruno Borghesi




... WHERE field1 LIKE '%' || field2 || '%'
or
... WHERE position(field2 in field1)>0

On Wed, 2003-11-12 at 11:07, Guy Korland wrote:

Hi,
How can I compare two fields if one is a substring of the other?
Something like: 
	... WHERE StringA like '%' + StringB + '%';

Thanks,
Guy Korland










Re: [SQL] query optimization

2004-06-10 Thread Franco Bruno Borghesi




I see that attribute project is defined as integer in library, and as varchar(8) in clone. I suspect that's what's causing the problem and forcing a seq scan on library.

On Thu, 2004-03-04 at 14:56, Charles Hauser wrote:

All,

I have the following query which is running quite slow on our server and
was hoping someone would have suggestions how I might improve it.


est3=>EXPLAIN SELECT clone.uniquename,clone.name,library.type,clone.clone_id
est3-> FROM library,clone_aceg
est3-> JOIN clone USING (clone_id)
est3-> WHERE clone_aceg.aceg_id = 8 AND
est3-> clone.project=library.project;
 QUERY PLAN
-
 Nested Loop  (cost=0.00..27.92 rows=1 width=57)
   Join Filter: (("outer".project)::text = ("inner".project)::text)
   ->  Nested Loop  (cost=0.00..18.55 rows=4 width=43)
 ->  Index Scan using aceg_id_clone_aceg_key on clone_aceg  (cost=0.00..3.05 rows=4 width=4)
   Index Cond: (aceg_id = 8)
 ->  Index Scan using clone_pkey on clone  (cost=0.00..3.91 rows=1 width=39)
   Index Cond: ("outer".clone_id = clone.clone_id)
   ->  Seq Scan on library  (cost=0.00..2.15 rows=15 width=14)
(8 rows)




relevant tables below.

regards,

Charles
 

Tables:
Table "public.clone"
   Column   | Type  |   Modifiers
+---+
 clone_id   | integer   | not null default nextval('"clone_clone_id_seq"'::text)
 name   | character varying(10) | not null
 uniquename | text  | not null
 project| character varying(8)  |
 p_end  | character varying(2)  |
 lib_id | integer   |
 accn   | character varying(10) |
 seq| text  | not null
 seqlen | integer   |
 hq_start   | integer   |
 hq_end | integer   |
 scaffold   | character varying(50) |
Indexes: clone_pkey primary key btree (clone_id),
 clone_uniquename_idx unique btree (uniquename),
 clone_accn_idx btree (accn),
 clone_name_idx btree (name),
 clone_project_idx btree (project),
 clone_scaf_idx btree (scaffold)

 Table "public.library"
   Column|  Type   |   Modifiers
-+-+
 lib_id  | integer | not null default nextval('"library_lib_id_seq"'::text)
 source  | text|
 type| text|
 project | integer |
 name| text|
 organism| text|
 strain  | text|
 vector  | text|
 rs1 | text|
 rs2 | text|
 preparation | text|
Indexes: library_pkey primary key btree (lib_id),
 library_project_idx btree (project),
 library_type_idx btree ("type")


   Table "public.clone_aceg"
  Column  |  Type   | Modifiers
--+-+---
 clone_id | integer |
 aceg_id  | integer |
Indexes: clone_aceg_clone_id_key unique btree (clone_id, aceg_id),
 aceg_id_clone_aceg_key btree (aceg_id),
 clone_id_clone_aceg_key btree (clone_id)
Foreign Key constraints: cloneid FOREIGN KEY (clone_id) REFERENCES clone(clone_id) ON UPDATE NO ACTION ON DELETE CASCADE,
 acegid FOREIGN KEY (aceg_id) REFERENCES aceg(aceg_id) ON UPDATE NO ACTION ON DELETE CASCADE
 
  List of relations
 Schema |Name| Type  |  Owner  |Table
++---+-+--
 public | aceg_aceg_idx  | index | chauser | aceg
 public | aceg_assembly_key  | index | chauser | aceg
 public | aceg_blast_aceg_id_key | index | chauser | aceg_blast
 public | aceg_contig_idx| index | chauser | aceg
 public | aceg_g_scaffold_idx| index | chauser | aceg
 public | aceg_has_blast_idx | index | chauser | aceg
 public | aceg_id_aceg_blast_key | index | chauser | aceg_blast
 public | aceg_id_clone_aceg_key | index | chauser | clone_aceg
 public | aceg_pkey  | index | chauser | aceg
 public | aceg_uniquename_idx| index | chauser | aceg
 public | blast_id_aceg_blast_key| index | chauser | aceg_blast
 public | blast_id_contig_blast_key  | index | chauser | contig_blast
 public | blast_ortho_idx| index | chauser | blast
 public | blast_pkey | index | chauser | blast
 public | clone_accn_idx | index | chauser | clone
 public | clone_aceg_clone_id_key| index | chauser | clone_aceg
 public | clone_contig_clone_id_key  | index | chauser | clone_contig
 public | clone_id_clone_aceg_key| index | chauser | clone_aceg
 public | clone_id_clone_contig_key  | index | chauser | 

Re: [SQL] (No Subject)

2004-06-11 Thread Franco Bruno Borghesi




I remember reading somewhere that it was possible to execute system commands from plperl...  don't know if it's the best way, but you could do an exec of pg_dump from inside a plperl procedure.

On Fri, 2004-06-11 at 01:36, William Anthony Lim wrote:

is it possible to dump within procedural language/SQL syntax? Using pg_dump from console is very confusing for some end user who don't have Linux skills. so I decide to create a function to do that, and they may call it from my application.

Thanks

William



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match






signature.asc
Description: This is a digitally signed message part


Re: [SQL] Need indexes on inherited tables?

2004-06-26 Thread Franco Bruno Borghesi




Table T is not inheriting any indexes, nor the primary key constraint. That means the search is going to use an index scan on table B and a sequential scan on table T (unless, of course, you add a primary key constraint or an index on table T).

You can check these things by doing:
->SET enable_seqscan TO off;
->EXPLAIN SELECT * FROM B WHERE id=5;
you'll see an index scan on table B and sequential scans on the other tables.

Doing:
->SELECT C.relname AS table_name, C2.relname AS index_name FROM pg_index I LEFT JOIN pg_class C ON (I.indrelid=C.oid) LEFT JOIN pg_class C2 ON (C2.oid=I.indexrelid) WHERE C.relname ILIKE '<table_name>'

you can find out which indexes are available for <table_name> (or \d <table_name> in psql).
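To make this concrete, a minimal sketch following Phil's example (names are illustrative; note that a unique index on the child enforces uniqueness only within that child table, not across the whole hierarchy):

```sql
CREATE TABLE b (id integer, xx integer, PRIMARY KEY (id, xx));
CREATE TABLE t () INHERITS (b);
-- t inherits b's columns, but not the index behind b's primary key,
-- so create one explicitly if t will be searched on (id, xx):
CREATE UNIQUE INDEX t_id_xx_idx ON t (id, xx);
```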


On Sat, 2004-06-26 at 16:29, Phil Endecott wrote:

Dear Postgresql experts,

I have a base table that declares a primary key spanning a couple of columns:

create table B (
  id integer,
  xx someothertype,
  .
  primary key (id, xx)
);

and a number of derived tables that inherit from B:

create table T (
 
) inherits (B);

An index is automatically created for B because of the primary key.

If I search for something in T using the key columns, e.g. I do

  select * from T where id=1 and xx=something;

will the index be used?  Or must I explicity create an index on id and xx for T and each of the other derived tables?

Is it any different if I search in B and find rows that are actually in T?

(Slightly unrelated: does the index on (id,xx) help when I am searching only on id?)

Thanks for any insight anyone can offer.

--Phil.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings








Re: [SQL] Need indexes on inherited tables?

2004-06-28 Thread Franco Bruno Borghesi




you can find more information here: http://www.postgresql.org/docs/7.4/static/sql-createtable.html (under the parameters section). For unique constraints, the only thing I can think of is a table check constraint (SELECT count(pk)=0).

I agree with you; right now there are many inconveniences with postgresql inheritance, which is why I never use inheritance this way.
I usually stick to the "standard" way. For example, I create a people table, and for every person I want to insert in my database there is one row in this table, plus one row with the same id/pk (which is also a fk) in a specialized table (say students, teachers, whatever).
This simplifies things, because attribute definitions/defaults/constraints are always in one table. Maybe you get into some other troubles, but I find them less painful.
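An untested sketch of that layout (table and column names are just illustrative):

```sql
CREATE TABLE people (
    person_id serial PRIMARY KEY,
    name      text NOT NULL
);

CREATE TABLE students (
    -- same pk, which is also a fk to the common table
    person_id       integer PRIMARY KEY REFERENCES people (person_id),
    enrollment_date date
);
```

Each student is one row in people plus one row in students sharing the same key, so the primary key and any shared constraints live in exactly one place.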

On Sat, 2004-06-26 at 19:30, Phil Endecott wrote:

I asked if derived tables use primary key indexes generated in the base tables that they inherit from.

Franco Bruno Borghesi replied:
> [the derived table] is not inheriting any indexes, neither the
> primary key constraint.

OK, thanks!  I had come to the conclusion that it was not using the index, but I'm really surprised to hear that the primary key constraint that I define in a base table is not inherited.  Are any constraints inherited?  What happens if I declare a single-column primary key?  What if I declare a "not null" constraint or a "check" constraint in a base table?

Having to replicate the constraints and indexes for each derived table is a pain - lots of error-prone typing - but there is a more serious problem: how can I ensure that these keys are unique across all of the derived tables?  (i.e. if T1 and T2 inherit from B, and B's primary key is (id,xx), then I want there to be at most one row in (T1 union T2) that has any value of (id,xx).)

Is this a possible area for future enhancements?

Regards,

--Phil.

 






Re: [SQL] Group by and aggregates

2004-11-04 Thread Franco Bruno Borghesi




If I understand well, you want the highest cmup for each partno, that is max(cmup) grouped by partno (only). 

You can achieve this with a subselect, and then join the results with the query you already have:

SELECT T.partno, T.status, TMP.max_cmup_for_partno, max(T.cmup) AS max_cmup, sum(T.qty) AS sum_qty
FROM my_table T, (SELECT partno,	max(cmup) AS max_cmup_for_partno FROM my_table GROUP BY partno) AS TMP
WHERE tmp.partno=T.partno
GROUP BY T.partno, TMP.max_cmup_for_partno, T.status

Hope it helped.
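For reference, loading the sample rows from the question below and running just the subselect shows the per-partno maximum that gets joined back in (a sketch, untested against the real schema):

```sql
CREATE TABLE my_table (
    some_id integer,
    partno  varchar(100),
    status  varchar(100),
    cmup    numeric(14,2),
    qty     integer
);

INSERT INTO my_table VALUES (1, 'test1', 'stock',    10.00, 15);
INSERT INTO my_table VALUES (2, 'test2', 'incoming', 12.00, 10);
INSERT INTO my_table VALUES (3, 'test1', 'incoming', 15.00, 60);
INSERT INTO my_table VALUES (4, 'test1', 'incoming', 14.00, 11);

SELECT partno, max(cmup) AS max_cmup_for_partno
FROM my_table
GROUP BY partno;
-- test1 -> 15.00, test2 -> 12.00
```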


On Thu, 2004-11-04 at 13:54, Michael L. Hostbaek wrote:

List, 

I've got a table looking something like this:

my_table
	some_id int bla bla,
	partno varchar(100),
	status varchar(100),
	cmup numeric(14,2),
	qty int

Here a small sample of contents in my table:

some_id	partno	status		cmup	qty
1	test1	stock		10.00	15
2	test2	incoming	12.00	10
3	test1	incoming	15.00	60
4	test1	incoming	14.00	11


My SQL select statement will then group together partno, status and
aggregate sum(qty) and max(cmup). This is all good and nice.

My result will look something like this:

partno	status		cmup	qty
test1	stock		10.00	15
test1	incoming	15.00	71
test2	incoming	12.00	10

Now, I need the first line to say "15.00" in the cmup field. That is,
stock and incoming are obviously not being grouped, but since it's the
same partno I'd like somehow to show the highest cmup. Is there some
black SQL voodoo that'll achieve this ?

TIA, 







Re: [SQL] Simple SQL Question

2004-11-05 Thread Franco Bruno Borghesi




wouldn't it be easier using offset & limit? You always select from the table with an ORDER BY itemkey, location clause. You save the current offset between requests, and for every request you re-execute the query with a different offset.

If you still want to go with what you already have, you should keep the lastItemKey and lastLocation values between requests, and your where clause should be something like:
WHERE (itemKey=:lastItemKey AND location>:lastLocation) OR (itemKey>:lastItemKey)
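As a side note, the same paging condition can be written as a single row-value comparison, which later PostgreSQL versions (8.2 onward, where row comparison follows the SQL standard) can match against a multi-column index; a sketch with placeholder parameter names:

```sql
SELECT itemkey, location
FROM table1
WHERE (itemkey, location) > (:lastItemKey, :lastLocation)
ORDER BY itemkey, location
LIMIT 10;
```

Saving the last (itemkey, location) pair from each page and feeding it back in gives incremental fetching without a cursor and without OFFSET.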

hope it helps.

On Fri, 2004-11-05 at 13:54, Andras Kutrovics wrote:

Hi All!

In one of my application, I would like to implement incremental
fetching. The problem is that I would like to run it
from a stateless application server, so I have to tell where
to begin fetching data. It is simple for a table which has single column
primary key, but causes difficulties (at least for me) in
a more-column primary key..

Let say I have a table wich has the primary key: itemkey,location

table1
--
itemkey
location
...



select * from table1 LIMIT x

gives me the first x row of the result.
After that, I save the last value, and next time, I adjust
the query as

select * from table1 where itemkey>:lastvalue LIMIT x

that should be enough for an 'Item' table, but I have another
column in the primary key.

let say, I have the following in Table1:

itemkey		location

1		1
1		2
...
5		1
5		2
5		3 <--- lets say this is the last value

next time i want to run a query, which starts from

5		4
6		5

and so on..

How can I specify that in sql?
I dont want to use cursor:), I would like to do it in plain sql.
(It it is possible).

Thank you in advance

Andras Kutrovics

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]








Re: [SQL] Simple SQL Question

2004-11-09 Thread Franco Bruno Borghesi
Didn't know about the seqscan problem when using ORs. But you can still
split the query in two, and then use UNION to join the results:

SELECT ... WHERE itemKey=:lastItemKey AND location>:lastLocation
UNION
SELECT ... WHERE itemKey>:lastItemKey

You could solve the OFFSET/LIMIT modification problem if you could keep
the transaction open, but I don't know if it's applicable in your case.
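Fleshed out slightly, the two-branch form might look like this (a sketch; UNION ALL avoids the duplicate-elimination sort that plain UNION performs, the two branches cannot overlap anyway, and each branch can use an index on (itemkey, location) on its own):

```sql
(SELECT itemkey, location
 FROM table1
 WHERE itemkey = :lastItemKey AND location > :lastLocation
 ORDER BY itemkey, location
 LIMIT 10)
UNION ALL
(SELECT itemkey, location
 FROM table1
 WHERE itemkey > :lastItemKey
 ORDER BY itemkey, location
 LIMIT 10)
ORDER BY itemkey, location
LIMIT 10;
```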

Andras Kutrovics wrote:
Franco Bruno Borghesi wrote:
Hi!
Sorry for being late with the answer, I was busy at one of our customer;)
wouldn't it be easier using offset & limit?: you always select from 
the table with an itemkey,location order by clause. You save the 
current offset between requests, and for every request you re-execute 
the query with a different offset.
  Sure, but sometimes I have to query by name, and dont want to create
  another query component.
If you still want to go with what you already have, you should keep 
the lastItemKey and lastLocaltion values between requests, and your 
where clause should be something like:
WHERE (itemKey=:lastItemKey AND location>:lastLocation) OR 
(itemKey>:lastItemKey)

  It works perfectly, but I have to consider the performance issue:
  if I use an 'or' statement, postgres doesn't use an index scan, and
  I also have tables with 3 or more keys and 500,000 records, where
  the performance of this method is poor.
  Maybe I will end up using limit and offset for incremental fetching,
  but if the table is modified between requests, it can behave weird.
  Is there a perfect solution to this?
Sorry for the english
Thank you again,
Andras Kutrovics
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org



Re: [SQL] update from multiple rows

2005-01-25 Thread Franco Bruno Borghesi
Updating 40,000 records should take no longer than a couple of minutes.
I think you should optimise your query before going any further.

You have an inner SELECT statement that executes before anything else. It
joins EVERY row in your table (1,000,000+) with at most 3 other rows in
the same table, so you will end up with about 3,000,000+ rows... but you
are interested in only 40,000 rows!

To make it simple, add a WHERE condition to fetch only the 40,000 rows
you are interested in and discard the others. Make sure also that you
have indexed the attributes you are filtering on, and the date attribute too.
You should use EXPLAIN ANALYZE on the inner query to check how it improves.

Once your SELECT query runs fast enough, the UPDATE should go much 
faster too.
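As a sketch of that advice applied to the query from earlier in the thread (the date range below is only a placeholder; substitute whatever condition identifies the 40,000 rows you actually want to update):

```sql
UPDATE mytable SET data_sys = TMP.average
FROM (
    SELECT
        MT1.date AS date,
        avg(MT2.data_raw) AS average
    FROM
        mytable MT1
        INNER JOIN mytable MT2
            ON (MT2.date BETWEEN MT1.date-1 AND MT1.date+1)
    WHERE
        -- filter early, before the self-join multiplies the rows
        MT1.date BETWEEN '2005-01-01' AND '2005-01-31'
    GROUP BY
        MT1.date
) AS TMP
WHERE mytable.date = TMP.date;
```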

The number of columns matters, but as I said, I don't think it's an 
UPDATE problem.

If you don't find the way to speed your query up, try posting to the 
performance list.

[EMAIL PROTECTED] wrote:
Thanks a lot. That is what I was searching for. In fact your query is very good for 
small changes, but I will have to use another method when updating all my rows 
because the performance is not very good, alas.
My data set contains something like 40,000 rows to update in 1+ million records, and 
data_raw, data_sys are of type "real". The complete update took 40 minutes on 
a 256 MB, Athlon 2400, kernel 2.6 machine with no other load during the execution of the query.
Is this normal? Does the number of columns of the table matter a lot (the 
table contains 12 reals and 4 integers)?
I found that using an intermediate table which stores for every row the value 
before and the value after helps to gain speed... But it is not a very nice way, 
I think.
Thanks again :)
Etienne 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq
 


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: ***SPAM*** Re: [SQL] same question little different test MSSQL

2005-01-26 Thread Franco Bruno Borghesi




I've tested in a relation of mine, with about 20 attributes, and here
are the results:

test=# select count(*) from gestionestareas;
 count
--------
 447681
(1 row)

test=# explain analyze select * from gestionestareas where agrupable;
                              QUERY PLAN
--------------------------------------------------------------------------
 Seq Scan on gestionestareas  (cost=0.00..12334.81 rows=155495
width=372) (actual time=0.603..1176.177 rows=153530 loops=1)
   Filter: agrupable
 Total runtime: 1380.113 ms
(3 rows)

So, doing a seq scan on a 450,000 rows table and fetching 150,000 rows
takes only 1.3 secs. This a 900Mhz PIII, 1GB mem (133 Mhz), 7200RPM ide
disk, running freebsd. This machine is also a webserver (apache &
tomcat), mail server, file server (smb & nfs), ldap server, etc.

I don't use pgadmin, I use psql (the postgresql console client). It took 2
minutes to display the results of the above query (without the EXPLAIN
ANALYZE). This makes me think: couldn't the problem be that pgadmin,
psql, etc. take too much time to display all the rows? It seems like a
client software problem, not a server problem.

My advice is: use EXPLAIN ANALYZE to test both servers' performance. If
you want to know what the final results will be, test both databases
from a self-programmed application (java, php, C++, etc.).

Hope this helped.

Joel Fradkin wrote:

  Well last evening (did not try it this morning) it was taking the extra
time.

I have made some adjustments to the config file per a few web sites that you
all recommended my looking at.

It is now using 137 of 756 meg avail.
it is now taking 8 secs to return 22,000 rows (using pgadminIII in a sql
edit window).

The EXPLAIN ANALYSE still shows the same as below, but the table has 344,000
recs of which only 22636 are clientnum = 'SAKS'

I am still doing a seq search (this applies to the view question where if it
is a small result set it used a index search but on a larger return set it
did a seq search) in my view, but with the adjustments to the kernel I get a
result in 140 secs (MSSQL was 135 secs).

This is not production, I am still very worried that I have to do all this
tweeking to use this, MSSQL worked out of the box as it does (not saying its
great, but I never had to adjust a kernel setting etc). Since we cannot
afford the 70,000 dollars they want to license it I am not implying I can
use MSSQL, but I could look at other DB's like MYSQL, or Firebird, etc.

I have a lot of time now (two weeks) in this conversion and do not wish to
give up, I will see if I can learn what is needed to get the maximum
performance. I have seen much information available and this list has been a
huge resource. I really appreciate all the help.


Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
  
  
QUERY PLAN
"Seq Scan on tblcase  (cost=0.00..30066.21 rows=37401 width=996) (actual
time=0.344..962.260 rows=22636 loops=1)"
"  Filter: ((clientnum)::text = 'SAKS'::text)"
"Total runtime: 1034.434 ms"

  
  
That's only 1 second - to return 22,636 rows. Not 27 seconds, as in the 
original post. You'll never persuade PG to use the index when some 75% 
of your rows match the filter - it just doesn't make sense.

--
   Richard Huxton
   Archonet Ltd


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

  






Re: ***SPAM*** Re: [SQL] same question little different test MSSQL

2005-01-26 Thread Franco Bruno Borghesi




Maybe you should tweak the cpu_index_tuple_cost parameter instead of
disabling sequential scans. The default value is 0.001; you could
change it to a lower value (0.0005 or something).

Joel Fradkin wrote:

  I tried the SET ENABLE_SEQSCAN=FALSE;
And the result took 29 secs instead of 117.

After playing around with the cache and buffers etc I see I am no longer
doing any swapping (not sure how I got the 100 sec response might have been
shared buffers set higher, been goofing around with it all morning).

My worry here is it should obviously use an index scan so something is not
setup correctly yet. I don't want to second guess the analyzer (or is this a
normal thing?)

Least it is blowing the doors off MSSQL (which is what I touted to my boss
and was pretty upset when I got no result last night).

The 117 was before I forced the seq off so even doing a seq I am getting
results now that are better then MSSQL.

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
© 2004 WAZAGUA, Inc. All rights reserved.
 

 

-Original Message-
From: Richard Huxton [mailto:[email protected]] 
Sent: Wednesday, January 26, 2005 10:21 AM
To: Joel Fradkin
Cc: [EMAIL PROTECTED]; [email protected]
Subject: Re: [SQL] same question little different test MSSQL vrs Postgres

Joel Fradkin wrote:
  
  
Well last evening (did not try it this morning) it was taking the extra
time.

I have made some adjustments to the config file per a few web sites that
you all recommended my looking at.

  
  
The crucial one I'd say is the performance guide at:
   http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
The first half-dozen settings are the crucial ones.

  
  
It is now using 137 of 756 meg avail.
it is now taking 8 secs to return 22,000 rows (using pgadminIII in a sql
edit window).

  
  
That might be too much RAM. Don't forget PG likes to work with your 
operating-system (unlike many other DBs). Make sure Windows is using 
enough RAM to cache diskspace.
I'm curious as to how this takes 8secs whereas you had 1 second earlier. 
Are you sure some of this isn't pgadmin's overhead to display the rows?

  
  
The EXPLAIN ANALYSE still shows the same as below, but the table has
344,000 recs of which only 22636 are clientnum = 'SAKS'

  
  
That sounds like it's about the borderline between using an index and 
not (depending on cache-size, disk speeds etc).

  
  
I am still doing a seq search (this applies to the view question where if
it is a small result set it used a index search but on a larger return
set it did a seq search) in my view, but with the adjustments to the
kernel I get a result in 140 secs (MSSQL was 135 secs).

  
  
If you want to check whether the index would help, try issuing the 
following before running your query:
   SET ENABLE_SEQSCAN=FALSE;
This will force PG to use any index it can regardless of whether it 
thinks it will help.

  
  
This is not production, I am still very worried that I have to do all this
tweeking to use this, MSSQL worked out of the box as it does (not saying
its great, but I never had to adjust a kernel setting etc). Since we cannot
afford the 70,000 dollars they want to license it I am not implying I can
use MSSQL, but I could look at other DB's like MYSQL, or Firebird, etc.

  
  
I'm a little curious what kernel settings you are changing on Windows. I 
wasn't aware there was much to be done there.

I'm afraid you do have to change half a dozen settings in 
postgresql.conf to match your workload, but PG runs on a much wider 
range of machines than MSSQL so it's difficult to come up with a 
"reasonable" default. Takes me about 5 minutes when I setup an 
installation to make sure the figures are reasonable (rather than the 
best they can be).

  
  
I have a lot of time now (two weeks) in this conversion and do not wish to
give up, I will see if I can learn what is needed to get the maximum
performance. I have seen much information available and this list has been
a huge resource. I really appreciate all the help.

  
  
--
   Richard Huxton
   Archonet Ltd


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

  






Re: [SQL] Calendar Function

2005-01-28 Thread Franco Bruno Borghesi




maybe something like this:

CREATE OR REPLACE FUNCTION calendar (DATE, DATE) RETURNS SETOF DATE
LANGUAGE 'plpgsql' AS '
DECLARE
    v_from ALIAS FOR $1;
    v_to ALIAS FOR $2;
    v_current DATE DEFAULT v_from;
BEGIN
    WHILE (v_current<=v_to) LOOP 
        RETURN NEXT v_current;
        v_current:=v_current+1;
    END LOOP;
    
    RETURN;
END;
';

test it:
SELECT * FROM calendar('2005-01-01', '2005-01-31');
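On PostgreSQL 8.0 and later, the built-in generate_series() function can do the same thing without a custom function; a sketch (check that the function exists on your server version):

```sql
SELECT ('2005-01-01'::date + s.i) AS date
FROM generate_series(0, 30) AS s(i);
```

Adding an integer to a date yields a date, so this produces the 31 days of January 2005, one row per day.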


Muhyiddin A.M Hayat wrote:

  
  
  
  Dear All,
   
  How to create a Calendar Function or
Query? I would like to display dates from yyyy-mm-dd to yyyy-mm-dd, or
display the dates in one month.
   
  e.g:
   
      date

 2005-01-01
 2005-01-02
 2005-01-03
 2005-01-04
 2005-01-05
 2005-01-06
 2005-01-07
 2005-01-08
 2005-01-09
 2005-01-10
 2005-01-11
 2005-01-12
 2005-01-13
 2005-01-14
 2005-01-15
 2005-01-16
 2005-01-17
 2005-01-18
 2005-01-19
 2005-01-20
 2005-01-21
 2005-01-22
 2005-01-23
 2005-01-24
 2005-01-25
 2005-01-26
 2005-01-27
 2005-01-28
 2005-01-29
 2005-01-30
 2005-01-31
   
   






Re: [SQL] Query question

2005-04-20 Thread Franco Bruno Borghesi




If you have a row every 15 seconds, the answer is quite easy:




SELECT
	A1.date
FROM
	activity A1
	LEFT JOIN activity A2 ON (A2.date=A1.date-'15 secs'::interval)
WHERE
	A1.state<>A2.state OR A2.state IS NULL
ORDER BY 1



Now if you don't have a row every 15 seconds, the answer is a bit more
complex (at least I couldn't think of an easier solution): 



SELECT
	min(TMP2.new_date)
FROM
	(
		SELECT
			DISTINCT
			TMP.new_date,
			max(TMP.old_date) AS max_old_date
		FROM
			(
				SELECT
					A1.id AS new_id, A1.date AS new_date, A1.state AS new_state,
					A2.id AS old_id, A2.date AS old_date, A2.state AS old_state
				FROM
					activity A1
					LEFT JOIN activity A2 ON (A2.date<A1.date)
			) TMP
		WHERE
			TMP.old_state<>TMP.new_state OR TMP.old_state IS NULL
		GROUP BY
			TMP.new_date
	) TMP2
GROUP BY
	TMP2.max_old_date
ORDER BY 1


I've tested both queries on postgreSQL 8 with the data you provided,
and they both work. Anyway try them with larger datasets before using
them in real life  ;-) 

Hope it helps.


Stéphane RIFF wrote:
Hi,

I have a table that represents a switch activity like this:

| date                | state |
| 2005-04-20 17:00:00 |     0 |
| 2005-04-20 17:00:15 |     0 |
| 2005-04-20 17:00:30 |     1 |
| 2005-04-20 17:00:45 |     1 |
| 2005-04-20 17:01:00 |     1 |
| 2005-04-20 17:01:15 |     0 |
| 2005-04-20 17:01:30 |     0 |
| 2005-04-20 17:01:45 |     0 |

I want to get the date of each state change, but I am not a sql expert.
Can someone advise me?

Thanks
  
  
  
---(end of
broadcast)---
  
TIP 7: don't forget to increase your free space map settings
  
  






Re: [SQL] PLEASE help ME , HOW TO GENERATE PRIMARY Keys on the fly

2006-05-24 Thread Franco Bruno Borghesi
Well, you could add a serial column. I'll tell you how, but I haven't tested the code, so be sure to check it! And using BEGIN and COMMIT/ROLLBACK to delimit the transaction would not be a bad idea at all ;-)

To add a serial column, just write:

--create new serial field
ALTER TABLE md_customer ADD id SERIAL;

If you check your table now, you will see that your new 'id' column contains correlative values. If this is what you wanted, you could update every table referencing md_customer, like this:

--drop foreign key on remote table
ALTER TABLE xxx DROP CONSTRAINT xxx_fk;

--set old pk values to the value in the 'id' field just created
UPDATE xxx SET fk_field=CU.id FROM md_customer CU WHERE CU.md_customeridpk=xxx.fk_field;

Check if everything is ok now. If it is, then recreate your foreign key, drop your old pk and rename the new one:

--restore fk on remote table
ALTER TABLE xxx ADD CONSTRAINT xxx_fk FOREIGN KEY (fk_field) REFERENCES md_customer(id) ON DELETE ... ON UPDATE ...;

--drop old pk
ALTER TABLE md_customer DROP md_customeridpk CASCADE;

--rename id to md_customeridpk
ALTER TABLE md_customer RENAME id TO md_customeridpk;

--create pk
ALTER TABLE md_customer ADD CONSTRAINT md_customer_pk PRIMARY KEY(md_customeridpk);

That should be it.
Hope it helps.
On 5/24/06, andi <[EMAIL PROTECTED]> wrote:













Dear friends,

 

I have a table:

MD_CUSTOMER
  MD_CUSTOMERIDPK  integer  primary key
  NAME             varchar

But my primary key is not in correct order, like:

MD_CUSTOMER
  MD_CUSTOMERIDPK  NAME
  10               ANDI
  33               TESTER
  100              KKK

So I want to make another primary key that generates the sequence 1, 2, 3, … In MS SQL SERVER 2005 I can do this with the Rank() function, but how in Postgres?

PLEASE, can anyone help me? I would really appreciate it.

Best regards

Andi kusnadi










[SQL] Using Random Sequence as Key

2001-04-16 Thread Bernardo de Barros Franco

Hello, I was wondering: if no one can help me directly, maybe someone could at least
give me some directions on where to look for info or where to ask.
I want to index a table by a random key. Exemplifying: when an insert is
made, the id value is automatically filled with a random number between
1 and 99999. I'm pretty new to pgsql, so I could use a howto or
something; and I repeat, if you don't know how, directions to where I can
find info on that or where I can ask someone that might know would be
enough.

Thank you




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] Using Random Sequence as Key

2001-04-16 Thread Bernardo de Barros Franco

Thank you, it was pretty much something like that. I need to make a sale 
table, but the sale code is a random number. Since it is going to be used in 
all queries and it is unique, it might as well be my table key (index). I 
could make an index and another field to be the sale code, but then I would have 
2 different indexes: the table index and the sale code used 
for all queries.
But my only question is: in the example quoted, would id really be the 
table index, and is it unique?

Thank you

Quoting:
>Bernardo de Barros Franco writes:
>
> > I wanted to index a table by a random key. Exemplifying, when a insert 
>is
> > made, the id value is automatically filled with a random number between
> > 1 and 9.
>
>=> create table test (id int default random() * 99998 + 1, content 
>text);
>CREATE
>=> insert into test (content) values ('hi');
>INSERT 36163 1
>=> insert into test (content) values ('there');
>INSERT 36164 1
>=> insert into test (content) values ('blah');
>INSERT 36165 1
>=> select * from test;
>  id   | content
>-------+---------
> 61616 | hi
> 72605 | there
> 83469 | blah
>(3 rows)


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[SQL] Re: Using Random Sequence as Key

2001-04-16 Thread Bernardo de Barros Franco

Since I can't really redesign my application, I could index the table using
a serial but would still need a field with the random and unique number. I
tried including a UNIQUE constraint, as in: create table test (id int UNIQUE default
random() * 99998 + 1, content text); and it didn't return any errors,
I'm just not sure if it actually worked.
I need the random field because if I use a serial and the user gets a 34203,
he's sure that 34202 exists, and that there were (probably) around 34202 inserts
before him (or at least an offset + some). Using a random number just makes the
user totally blind.
As I said, I could use a serial for indexing the table, but I NEED the random
field, and I need it to be unique, since all the queries will be using it as a
search parameter.
If inserting this way is slow it's not such a big deal, since it's a small db
and inserts are seldom made.
Thanks in advance for any help.
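For what it's worth, the UNIQUE constraint does work: a colliding default simply makes the INSERT fail with a unique-violation error, so the application must catch that error and retry. A sketch (table name and range are just illustrative):

```sql
CREATE TABLE orders (
    ordercode integer UNIQUE DEFAULT (random() * 99998 + 1)::integer,
    content   text
);

INSERT INTO orders (content) VALUES ('first sale');
-- if the generated ordercode collides with an existing row, the
-- INSERT raises a unique-violation error; re-issue it until it succeeds
```

With a small table and seldom inserts, collisions will be rare, but the retry loop is still needed for correctness.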

Quoting:
> On Sun, Apr 15, 2001 at 10:58:40PM -0300, Bernardo de Barros Franco wrote:
>
> > I wanted to index a table by a random key. Exemplifying, when a insert
is
> > made, the id value is automatically filled with a random number between
> > 1 and 99999. I'm pretty new in pgsql so I could use a howto or
>
> Bernardo,
>
> Do you really need the keys to be random numbers? Can't the keys be
> sequential numbers, like 1,2,3,4...? That'd be a lot easier to make
> unique, and as far as the user is concerned, it'll be random for him/her.
>



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[SQL] Re: Using Random Sequence as Key

2001-04-16 Thread Bernardo de Barros Franco

It is a good way to solve it, but since I don't want/can't use cookies in my
application, I could use only the random key. It would still have to be unique,
though, or the where key=$key could refer to multiple lines; otherwise I would
have to ask the user to input both the random and the serial number, which
wouldn't be very good either. That's the reason I've been trying to make a
random key which would be unique, so I could ask the user to type his order
code, then select info from orders where ordercode=$ordercode, and not risk
getting multiple answers (more than one line with that ordercode) or someone
typing his ordercode-1 and accessing someone else's form.
Ok, if I don't use a password or other protection, even with random codes users
can keep trying to guess. But I don't have much of a problem with someone
accessing someone else's form; I just don't want a user to type his ordercode,
mistype the last char as 1 less than his own, land on someone else's form, and
be completely lost. With random codes that can still happen, but it is so much
less likely that it will do.

Thank you

Quoting:
> Bernardo de Barros Franco writes:
> > Hello, I was wondering if noone can help me maybe someone could at least
> > give me some directions where to look for info or where to ask:
> > I wanted to index a table by a random key.
>
> As others have pointed out, making a unique random primary key is
> tough. What I do for my cookie on my web based login system is have
> two fields in my database, one's the "id SERIAL", the other is a
> "magiccookie CHAR(16)" which I populate with 16 random characters on
> the initial insert.
>
> My cookie is then of the format "id/magiccookie". In my login
> verification code I split on the "/" character and query on "WHERE
> id=$id AND magiccookie=$magiccooke". Even though the "id" field is
> encoded in the cookie in plain text a cracker can't just guess at the
> user id number because that 16 character magiccookie needs to match as
> well.
>
> This also lets me be pretty loose about the id information, I can use
> it in other public places, because only the magiccookie needs to be
> restricted to being known by the logged in user.



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly