[SQL] DISTINCT ON () with UNION

2001-05-13 Thread pgsql

How can you use a DISTINCT ON () that covers the whole union?

eg

select distinct on (valutaid) valutaid, short from valuta UNION select 
landid, land from land order by valutaid;

table: valuta
valutaid   valuta
1   USD
2   SEK

table: land
landid  land
1   Sweden
2   USA
3   Norway

The above would give
1   USD
1   Sweden
2   SEK
2   USA
3   Norway

What I would like to get is
1   USD
2   SEK
3   Norway

I did specify distinct on (valutaid) anyways =)

Any suggestions?
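
(For reference, one workaround is to apply DISTINCT ON outside the union by
wrapping it in a subquery; a minimal sketch, assuming valuta rows should win
when both tables share an id:)

select distinct on (id) id, name
from (select valutaid as id, short as name, 0 as pref from valuta
      union all
      select landid, land, 1 from land) as u
order by id, pref;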

Best regards
Per-Olof Pettersson




[SQL] Syntax for "IF" clause in SELECT

2006-02-08 Thread pgsql
Greetings,

the following is a MySQL statement that I would like to
translate to PostgreSQL.

Could someone point me to documentation of the corresponding
syntax for an "IF" clause in a SELECT,
or is there some other way to do this?

select
 if(spektrum is null,' ','J'),
 if(s19 is null,' ','J'),
 if(OhneGrenze is null,' ','J')
 from namen;


Do I need to create my own function to allow this behaviour?
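
(For reference, the standard SQL equivalent of MySQL's IF() is a CASE
expression; a direct, untested translation of the statement above:)

select
 case when spektrum   is null then ' ' else 'J' end,
 case when s19        is null then ' ' else 'J' end,
 case when OhneGrenze is null then ' ' else 'J' end
 from namen;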


my best regards,

Stefan
-- 
email: [EMAIL PROTECTED]
tel  : +49 (0)6232-497631
http://www.yukonho.de



Re: [SQL] group by function, make SQL cleaner?

2006-03-16 Thread pgsql
this should work, 
SELECT date_trunc('day',endtime),count(*)
 FROM eg_event where endtime >= '2006-02-01' and endtime < '2006-03-01'
 GROUP BY  1
  ORDER BY 1;
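
(An equivalent sketch using a column alias, which PostgreSQL also accepts
in GROUP BY and ORDER BY:)

SELECT date_trunc('day',endtime) AS day, count(*)
 FROM eg_event where endtime >= '2006-02-01' and endtime < '2006-03-01'
 GROUP BY day
  ORDER BY day;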

hope this helps 
best regards,


Stefan

On Thursday, 16 March 2006 06:18, Bryce Nesbitt wrote:
> I've got a working query:
>
> stage=# SELECT date_trunc('day',endtime),count(*)
> FROM eg_event where endtime >= '2006-02-01' and endtime < '2006-03-01'
> GROUP BY  date_trunc('day',endtime)
> ORDER BY date_trunc('day',endtime);
>
>  date_trunc  | count
> -+---
>  2006-02-01 00:00:00 |   253
>  2006-02-02 00:00:00 |   245
>  2006-02-03 00:00:00 |   231
>  2006-02-04 00:00:00 |   313
>  2006-02-05 00:00:00 |   285
>  2006-02-06 00:00:00 |   194
>  2006-02-07 00:00:00 |   229
>  2006-02-08 00:00:00 |   239
>  2006-02-09 00:00:00 |   250
>  2006-02-10 00:00:00 |   245
>  2006-02-11 00:00:00 |   275
>
> Is there a way to eliminate the ugly repeated use of
> date_trunc('day',endtime)?
>
>

-- 
email: [EMAIL PROTECTED]
tel  : +49 (0)6232-497631
http://www.yukonho.de



[SQL] returning an array as a list of single-column rows... (different approach)

2007-12-23 Thread pgsql
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] returning an array as a list of single-column rows?

The following will return the elements of an array, each in its
own row.  Using both array_lower() and array_upper(), the number of array
elements and their internal indexes may vary from record to record, or may
even be absent.
Within a record the arrays nstat[], nwert[], nwho[] must correspond.
Joining the table with

generate_series(array_lower(nWert,1),array_upper(nWert,1)) as indx

returns the contained array elements.


Consider the following table with arrays:

create table werte
(id     integer,
 -- ...
 nstat  character(1)[],
 nwert  double precision[],
 nwho   character varying(9)[]
);


select
w.id,ii.indx,
w.nStat[ii.indx],w.nWert[ii.indx],w.nWho[ii.indx]
from werte w
join
(
select id,
 generate_series(array_lower(nWert,1),array_upper(nWert,1)) as indx
 from werte
) ii on ii.id=w.id 
;
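
(For what it's worth, on PostgreSQL 8.4 and later the built-in unnest()
covers the single-array case directly; a minimal sketch, not part of the
approach above:)

select id, unnest(nwert) as wert from werte;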


Let me know what you think about this approach.


My best regards,


Stefan Becker





Re: [SQL] Continuous inserts...

2000-08-17 Thread brianb-pgsql


Poul L. Christiansen writes:

> Isn't it easier to reduce the table every day and run a daily vacuum which only
> lasts a few seconds?

I doubt that it would last just a few seconds. From my experience, VACUUM
on large tables can sap your I/O subsystem, slowing down overall
performance for everyone else.

Joerg, if this is a logging-type application, you may want to consider
creating new tables periodically, e.g. rawdata_YYYY_MM or rawdata_WEEKNO,
and put a little more logic into your app to correctly name the table to
perform the INSERT on. The rawdata_YYYY_MM tables should be created in
advance, of course.

You can then safely post-process last month's data, insert the results into a
much smaller postprocess_YYYY_MM table, then archive or drop
rawdata_YYYY_MM altogether.
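
A minimal sketch of that scheme (all table and column definitions here are
illustrative only):

-- created in advance, one table per month
CREATE TABLE rawdata_2000_09 (ts timestamp NOT NULL, value float8);
CREATE TABLE rawdata_2000_10 (ts timestamp NOT NULL, value float8);

-- the application routes each INSERT to the current month's table
INSERT INTO rawdata_2000_09 VALUES (now(), 42.0);

-- at month end: post-process into a smaller table, then drop the raw one
INSERT INTO postprocess_2000_09
  SELECT date_trunc('day', ts), avg(value)
  FROM rawdata_2000_09
  GROUP BY date_trunc('day', ts);
DROP TABLE rawdata_2000_09;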

Perhaps my suggestions are coloured by my experiences w/ 6.5, but this
seems to be the safest way to do it without losing data.

Alternately, you could log data to flat files, post-process and then INSERT
into Postgres.

Brian

> Joerg Hessdoerfer wrote:
> 
> > Hi!
> >
> > I have an application, where I have to insert data into a table at several
> > rows per second, 24 hours a day, 365 days a year.
> >
> > After some period (a week, maybe a month) the data will be reduced to some
> > degree and deleted from the table.
> >
> > As far as I understood, I would have to use VACUUM to really free the table
> > from deleted rows - but VACUUM (esp. on a table with several million rows)
> > takes some time and prevents me from inserting new data.
> >
> > Now, I thought I could just rename the table, inserting into a temp table, and
> > switch the tables back after VACUUMing. Ideally, this should work unnoticed
> > (and thus without prog. effort) on the client (inserter) side.
> >
> > Question: would it work to use a transaction to perform the rename?
> >
> > i.e.: continuous insert into table 'main' from client.
> >
> >  From somewhere else, execute:
> >
> > begin;
> > alter table main rename to vac_main;
> > create table main (...);
> > end;
> >
> > would the inserter notice this? Read: would ALL inserts AT ANY TIME succeed?
> >

--
Brian Baquiran <[EMAIL PROTECTED]>
http://www.baquiran.com/  AIM: bbaquiran 
Work: +63(2)718   Home: +63(2) 9227123

I'm smarter than average. Therefore, average, to me, seems kind of stupid. 
People weren't purposely being stupid. It just came naturally.
  -- Bruce "Tog" Tognazzini



[SQL] help on creating table

2000-10-23 Thread pgsql-sql

Hi All,

I'm planning to have data in a tree structure when fetched.
e.g.

NODE1
 +--- NODE1_1
 +--- NODE1_2
 |      +--- NODE1_2_1
 +--- NODE1_3

Is this possible? How?
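
(One common approach is an adjacency list, where each row points to its
parent; a minimal sketch with illustrative names. On modern releases, 8.4
and later, the whole tree can then be fetched in one query with WITH
RECURSIVE:)

CREATE TABLE node (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES node (id),  -- NULL for root nodes
    name      text
);

WITH RECURSIVE subtree AS (
    SELECT id, parent_id, name FROM node WHERE parent_id IS NULL
    UNION ALL
    SELECT n.id, n.parent_id, n.name
    FROM node n JOIN subtree s ON n.parent_id = s.id
)
SELECT * FROM subtree;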

I would really appreciate any help.

Sherwin




Re: [SQL] Large Objects

2000-10-23 Thread pgsql-sql


FROM test.pl of DBD-Pg-0.93.tar ...

# test large objects

# create large object from binary file

my ($ascii, $pgin);
foreach $ascii (0..255) {
$pgin .= chr($ascii);
};

my $PGIN = '/tmp/pgin';
open(PGIN, ">$PGIN") or die "can not open $PGIN";
print PGIN $pgin;
close PGIN;

# begin transaction
$dbh->{AutoCommit} = 0;

my $lobjId;
( $lobjId = $dbh->func($PGIN, 'lo_import') )
and print "\$dbh->func(lo_import) .. ok\n"
or  print "\$dbh->func(lo_import) .. not ok\n";

# end transaction
$dbh->{AutoCommit} = 1;

unlink $PGIN;


# blob_read

# begin transaction
$dbh->{AutoCommit} = 0;

$sth = $dbh->prepare( "" ) or die $DBI::errstr;

my $blob;
( $blob = $sth->blob_read($lobjId, 0, 0) )
and print "\$sth->blob_read  ok\n"
or  print "\$sth->blob_read  not ok\n";

$sth->finish or die $DBI::errstr;

# end transaction
$dbh->{AutoCommit} = 1;


# read large object using lo-functions

# begin transaction
$dbh->{AutoCommit} = 0;

my $lobj_fd; # may be 0
( defined($lobj_fd = $dbh->func($lobjId, $dbh->{pg_INV_READ}, 'lo_open')) )
and print "\$dbh->func(lo_open)  ok\n"
or  print "\$dbh->func(lo_open)  not ok\n";

( 0 == $dbh->func($lobj_fd, 0, 0, 'lo_lseek') )
and print "\$dbh->func(lo_lseek) ... ok\n"
or  print "\$dbh->func(lo_lseek) ... not ok\n";

my $buf = '';
( 256 == $dbh->func($lobj_fd, $buf, 256, 'lo_read') )
and print "\$dbh->func(lo_read)  ok\n"
or  print "\$dbh->func(lo_read)  not ok\n";

( 256 == $dbh->func($lobj_fd, 'lo_tell') )
and print "\$dbh->func(lo_tell)  ok\n"
or  print "\$dbh->func(lo_tell)  not ok\n";

( $dbh->func($lobj_fd, 'lo_close') )
and print "\$dbh->func(lo_close) ... ok\n"
or  print "\$dbh->func(lo_close) ... not ok\n";

( $dbh->func($lobjId, 'lo_unlink') )
and print "\$dbh->func(lo_unlink) .. ok\n"
or  print "\$dbh->func(lo_unlink) .. not ok\n";

# end transaction
$dbh->{AutoCommit} = 1;


# compare large objects

( $pgin cmp $buf and $pgin cmp $blob )
and print "compare blobs .. not ok\n"
or  print "compare blobs .. ok\n";

#


[EMAIL PROTECTED] writes:
>Hi,
>
>Could someone please provide a demo of creating the type "Lo".
>
>Regards,
>Craig May
>
>Enth Dimension
>http://www.enthdimension.com.au





Re(2): [SQL] Large Object dump ?

2000-11-01 Thread pgsql-sql

You can try the script I made for exporting all my Pg databases.
Ideas were borrowed from pg_dumplo-0.0.5.
Make sure that you have "Perl5 extension for PostgreSQL - Pg.pm"
installed.

sherwin

#!/usr/bin/perl -w
#
# Export large objects of all Pg database 
#  - Sherwin T. Daganato ([EMAIL PROTECTED])
#  - October 8, 2000  
#

use strict;
use Pg;

my $space = shift || die "Usage: $0 outputdir";

# make sure the directory doesn't end in '/'
$space =~ s/\/$//;

my $conn_all = Pg::connectdb("dbname=template1");
die $conn_all->errorMessage unless PGRES_CONNECTION_OK eq
$conn_all->status;

# find all database
my $sql = "SELECT datname FROM pg_database ORDER BY datname";
my $res_all = $conn_all->exec($sql);
die $conn_all->errorMessage unless PGRES_TUPLES_OK eq
$res_all->resultStatus;

my $counter = 0;
while (my ($database) = $res_all->fetchrow) {
  my $conn_db = Pg::connectdb("dbname=$database");
  die $conn_db->errorMessage unless PGRES_CONNECTION_OK eq
$conn_db->status;

  # find any candidate tables with columns of type oid
  $sql = qq/SELECT c.relname, a.attname 
FROM pg_class c, pg_attribute a, pg_type t 
WHERE a.attnum > 0 
AND a.attrelid = c.oid 
AND a.atttypid = t.oid 
AND t.typname = 'oid' 
AND c.relname NOT LIKE 'pg_%';
   /;

  my $res_db = $conn_db->exec($sql);
  die $conn_db->errorMessage unless PGRES_TUPLES_OK eq
$res_db->resultStatus;

  my $path;
  local (*F);
  while (my ($table, $fld) = $res_db->fetchrow) {

# find large object id
$sql = sprintf ("SELECT x.%s FROM %s x WHERE EXISTS (SELECT c.oid FROM
pg_class c WHERE c.relkind = 'l' AND x.%s = c.oid)",
$fld, $table, $fld);

# find large object id
#$sql = sprintf ("SELECT x.%s FROM %s x, pg_class c WHERE x.%s = c.oid
and c.relkind = 'l'", 
#$fld, $table, $fld);

my $res_tbl = $conn_db->exec($sql);
die $conn_db->errorMessage unless PGRES_TUPLES_OK eq
$res_tbl->resultStatus;

my $tuples;
next unless (($tuples = $res_tbl->ntuples) > 0);
$counter += $tuples;

$path = sprintf ("%s/%s", $space, $database);

if ( -d $path) {

  # creates the file if it doesn't exist and appends to it
  open(F,">>$path/lo_dump.index") || die "\n $0 Cannot open $! \n"; 
} else {

  # create dir for database
  mkdir($path, 0755) || die "\n Can't create $path: $! \n";

  # opens file for writing. overwrite existing file
  open(F, ">$path/lo_dump.index") || die "\n $0 Cannot open $! \n";

  # temporarily turn off warnings
  # there might be undef
  local $^W = 0;

  print F "#\n# This is the PostgreSQL large object dump index\n#\n";
  printf F "#\tDate: %s\n", scalar(localtime);
  printf F "#\tHost: %s\n", $conn_db->host;
  printf F "#\tDatabase: %s\n", $conn_db->db;
  printf F "#\tUser: %s\n", $conn_db->user;
  printf F "#\n# oid\ttable\tattribut\tinfile\n#\n";
}

$path = sprintf ("%s/%s", $path, $table);

# create dir for table
mkdir($path, 0755) || die "\n Can't create $path: $! \n";
   
$path = sprintf ("%s/%s", $path, $fld); 
  
# create dir for field
mkdir($path, 0755) || die "\n Can't create $path: $! \n";

printf "dump %s.%s (%d large obj)\n", $table, $fld, $tuples;  

while (my ($lobjOid) = $res_tbl->fetchrow) {

  $path = sprintf ("%s/%s/%s/%s/%s", 
$space, $database, $table, $fld, $lobjOid);
   
  my $res_lobj = $conn_db->exec("BEGIN");
  die $conn_db->errorMessage unless PGRES_COMMAND_OK eq
$res_lobj->resultStatus;

  # export large object
  if ( 1 == $conn_db->lo_export($lobjOid, $path) ) {
printf F "%s\t%s\t%s\t%s/%s/%s/%s\n", 
  $lobjOid, $table, $fld, $database, $table, $fld, $lobjOid;
  } else {
printf STDERR "%s: %s\n", $conn_db->errorMessage, $0;
  }

  $res_lobj = $conn_db->exec("END");
  die $conn_db->errorMessage unless PGRES_COMMAND_OK eq
$res_lobj->resultStatus;
 
}
close(F);
  }
  undef $conn_db;
}

printf "\nExported %d large objects.\n\n", $counter;
undef $conn_all;


[EMAIL PROTECTED] writes:
>
>
>On Mon, 30 Oct 2000, [EMAIL PROTECTED] wrote:
>
>>Large objects are not dumped. It should be 
>>in the documentation for large objects.
>>
>>You need to write a script which writes them to
>>disk and then imports them back in after you have
>>installed your dbs.
>>
>>
>>Troy
>
>CREATE TABLE news  -- { key: news id, newarchivio, newsnuove }
>(
>   "idnews"     SERIAL primary key,
>   "oidnotizia" OID,   -- news as large object
>   "autore"     TEXT   -- author
>);
>
>How should be done the script for this table ?
>
>I found something about large objects only in the programmer's guide; are they
>present in other docs?
>
>Thanks in advance 
>
>Alex
>
>
>





Re(2): Re(2): [SQL] Large Object dump ?

2000-11-01 Thread pgsql-sql

Because of some problems (see below). I was also thinking that it was
just a compatibility problem of pg_dumplo-0.0.5 with PostgreSQL 6.5.3,
and because I needed it badly, I wrote a "quick & dirty" Perl script.


1. When I tried showing all LO of my_db, I got this message:

pg_dumplo -w -d my_db
NOTICE:  (transaction aborted): queries ignored until END
NOTICE:  (transaction aborted): queries ignored until END
NOTICE:  (transaction aborted): queries ignored until END

Database 'my_db' contains 0 large objects.

My observation: my_db actually contains large objects. 


2. When I tried this
   pg_dumplo -a -d my_db -s /my_dump/dir
I got this.
   
<< CUT >>
dump for_payment.application (218 large obj)
lo_export: can't open inv object 4654657:
/fs/linux/work/tools/pg_dumplo-0.0.5/pg_dumplo
lo_export: can't open inv object 4654689:
/fs/linux/work/tools/pg_dumplo-0.0.5/pg_dumplo
lo_export: can't open inv object 4654881:
/fs/linux/work/tools/pg_dumplo-0.0.5/pg_dumplo
lo_export: can't open inv object 4654081:
/fs/linux/work/tools/pg_dumplo-0.0.5/pg_
<< CUT >>
Exported 1131 large objects.

NOTICE:  LockReleaseAll: xid loop detected, giving up

My observation: The tree (directories) was created, but the 1131 large objects
were not there. The lo_dump.index file is not readable (contains garbage).


peace,

sherwin


[EMAIL PROTECTED] writes:
>
>On Wed, 1 Nov 2000, pgsql-sql wrote:
>
>> You can try the script I made for exporting all my Pg database.
>> Ideas were borrowed from pg_dumplo-0.0.5.
>> Make it sure that you have "Perl5 extension for PostgreSQL - Pg.pm"
>> installed.
>
> Why did you re-write pg_dumplo in Perl instead of using it directly? Perl
>love? :-)
>
>   Karel
>





[ADMIN] alter pg_shadow

2000-11-08 Thread pgsql-sql

Is it safe to rename 'postgres' to any__name?

like

update pg_shadow set usename = 'any__name' where usename = 'postgres';


Sherwin





[SQL] 7.0.3 BUG

2000-11-24 Thread pgsql-sql

SELECT is returning bogus data.

migrate=# select version();
version
---
 PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc 2.95.3
(1 row)

migrate=# select userid from users where userid = '[EMAIL PROTECTED]';
   userid
-
 [EMAIL PROTECTED]
 admin
(2 rows)
 
migrate=# \d users
  Table "users"
 Attribute | Type | Modifier
-----------+------+----------
 userrefkey   | integer  | not null default
nextval('users_userrefkey_seq'::text)
 userid   | varchar(128) | not null
 password1| char(20) | not null
 password2| char(50) |
 type | char(10) | not null
 partneremail | varchar(128) |
 adminlastname| char(40) | not null
 adminfirstname   | char(40) | not null
 adminaddress1| char(80) | not null
 adminaddress2| char(80) |
 admincity| char(80) | not null
 adminstateprov   | char(40) |
 admincountrycode | char(2)  | not null
 adminpostalcode  | char(10) |
 adminphone   | char(20) |
 adminfax | char(20) |
 checkpayableto   | char(80) | not null
 createdate   | timestamp| not null default now()
 lastaccessdate   | timestamp|
 lastmodifieddate | timestamp|
Indices: users_pkey,
 users_userid_key   







Re(2): [SQL] 7.0.3 BUG

2000-11-24 Thread pgsql-sql

[EMAIL PROTECTED] writes:
>"pgsql-sql" <[EMAIL PROTECTED]> writes:
>> migrate=# select userid from users where userid = '[EMAIL PROTECTED]';
>>userid
>> -
>>  [EMAIL PROTECTED]
>>  admin
>> (2 rows)
>
>That's a tad, um, startling :-(
>
>However, you haven't given us nearly enough information to have a shot
>at figuring out what's going on.
>
>   regards, tom lane


The query used an indexscan. I tried using trim on userid and I got something
like ...
migrate=# select userid from users where trim(userid) = '[EMAIL PROTECTED]';
 userid
-------------------
 [EMAIL PROTECTED]
(1 row)

I thought it was a varchar problem ('coz userid is varchar) in 7.0.3, so I
changed to 7.0.2.
Is it really a varchar bug?
Anyway, I dumped the whole database. When I was restoring it in 7.0.2 I got
these ...

You are now connected as new user root.
CREATE
You are now connected as new user pgsql.
ERROR:  copy: line 3910, Bad timestamp external representation '2000-01-05
00:00:60.00+08'
PQendcopy: resetting connection
ERROR:  copy: line 3910, Bad timestamp external representation '2000-01-05
00:00:60.00+08'
PQendcopy: resetting connection
ERROR:  copy: line 302, Bad timestamp external representation '2000-02-29
00:00:60.00+08'
PQendcopy: resetting connection
ERROR:  copy: line 13, Bad timestamp external representation '1970-01-01
08:04:60.00+08'
PQendcopy: resetting connection
ERROR:  copy: line 24, Bad timestamp external representation '1970-01-01
08:04:60.00+08'
PQendcopy: resetting connection
You are now connected as new user root.
ERROR:  copy: line 2, Bad timestamp external representation '1999-12-24
00:00:60.00+08'
PQendcopy: resetting connection
You are now connected as new user pgsql.
CREATE

I took a look at the database named 'migrate' (this is where the error
occurred);
the tables were all empty. Most of the tables have a field like this...

createdate   | timestamp | not null default now()

Weird, because those timestamps were generated by default now(). Is this
another bug (a timestamp bug)?
Take a look at this ...

migrate=# select version();
version
---
 PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.3
(1 row)
 
migrate=# select '2000-01-05 00:00:60.00+08'::timestamp;
ERROR:  Bad timestamp external representation '2000-01-05 00:00:60.00+08'
migrate=# select '2000-11-25 14:05:23.00+08'::timestamp;
 ?column?
---
 2000-11-25 14:05:23.00+08
(1 row)
 
migrate=# create table testing (datetime timestamp);
CREATE
migrate=# insert into testing values('2000-01-05 00:00:60.00+08');
ERROR:  Bad timestamp external representation '2000-01-05 00:00:60.00+08'
migrate=# insert into testing values('2000-11-25 14:05:23.00+08');
INSERT 6834235 1
migrate=#


Thanks,
sherwin



Re(2): Re(2): [SQL] 7.0.3 BUG

2000-11-26 Thread pgsql-sql

[EMAIL PROTECTED] writes:
>"pgsql-sql" <[EMAIL PROTECTED]> writes:
>> ERROR:  copy: line 3910, Bad timestamp external representation
>'2000-01-05
>> 00:00:60.00+08'
>
>> Weird because those timestamps were generated by default now().
>
>Weird, all right.  I can get 7.0.2 to emit an incorrect representation
>like that by constructing a fractional-second value that needs to be
>rounded off:
>
>play=> set TimeZone TO 'GMT-8';
>SET VARIABLE
>play=> select  '2000-01-05 00:00:59.00+08'::timestamp + '0.999
>sec'::interval;
> ?column?
>---
> 2000-01-05 00:00:60.00+08
>(1 row)
>
>That's clearly a bug.  Evidently the rounding to 2 fractional digits
>needs to be done before we start conversion, not at the end, since in
>the worst case the effects could need to propagate all the way to the
>displayed year.
>
>However, that shouldn't matter for now() results, because AFAIK now()
>results should always be exact integer seconds.  Are you sure you
>weren't doing arithmetic on these values somewhere along the line?

Yes, I'm very sure.
About the dump file: I made manual changes to all timestamps
containing "00:00:60.00+08" (changed them to "00:00:00.00+08").
I did this because
migrate=# select version();
version
---
 PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.3
(1 row)

migrate=# select '2000-01-05 00:00:60.00+08'::timestamp;
ERROR:  Bad timestamp external representation '2000-01-05 00:00:60.00+08'
migrate=# select '2000-01-05 00:00:00.00+08'::timestamp;
 ?column?
---
 2000-01-05 00:00:00.00+08
(1 row)


Is there a work-around to this aside from manually changing the dump file?
>
>In any case, I'm pretty sure nothing's changed in the timestamp code
>between 7.0.2 and 7.0.3, and I know of no changes that could explain
>your original report either.  I'm not sure why your update went so
>disastrously --- I'm wondering about possibilities like a corrupted
>download of Postgres.  What platform are you on (I think you said Linux,
>but which release of which distro)?  

Distribution Version:  Linux Mandrake release 7.2 (Odyssey) for
i586
Operating System Version:  #1 Thu Oct 5 13:16:08 CEST 2000
Operating System Release:  2.2.17-21mdk
Processor Type:i686

>Where and how did you obtain
>your Postgres files; if you compiled them yourself, how did you
>configure and compile?

It was shipped with Mandrake-Linux 7.2
>
>> migrate=# select version();
>> version
>> ---
>>  PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.3
>
>That's a rather interesting version report, seeing as how there is
>no such gcc release as 2.95.3 according to the GCC project's homepage.
>What compiler are you using exactly, and what confidence do you have
>that it's bug-free?  You wouldn't be using that known-unstable gcc
>that RedHat shipped in their 7.0, would you?

We can be sure that the compiler is relatively bug free because it was
used to recompile the entire Linux distribution...
>

sherwin


Re(2): [SQL] 7.0.3 BUG

2000-11-28 Thread pgsql-sql

Thanks to you Tom and Thomas.
Now I know.

- sherwin

[EMAIL PROTECTED] writes:
>Ah ha (or rather, ha ha ha)! I'd suggest using the RPMs posted on the
>postgresql.org ftp site, which include a sample .rpmrc file which fixes
>disastrous bugs in Mandrake's default compiler settings for building
>RPMs. Specifically, Mandrake sets the -ffast-math flag, which the gcc
>folks warn is not compatible with -On optimizations. When I build RPMs I
>kill the fast-math option, and the rounding troubles go away.
>
>The rounding trouble does not show up on other platforms or Linux
>distros because no one else ignores the gcc recommendations to this
>extent :(
>
>  - Thomas




Re: [SQL] lo_import for storing Blobs

2001-03-03 Thread pgsql-sql

You can use 'DBI'

from test.pl of DBD::Pg

# create large object from binary file

my ($ascii, $pgin);
foreach $ascii (0..255) {
$pgin .= chr($ascii);
};

my $PGIN = '/tmp/pgin';
open(PGIN, ">$PGIN") or die "can not open $PGIN";
print PGIN $pgin;
close PGIN;

# begin transaction
$dbh->{AutoCommit} = 0;

my $lobjId;
( $lobjId = $dbh->func($PGIN, 'lo_import') )
and print "\$dbh->func(lo_import) .. ok\n"
or  print "\$dbh->func(lo_import) .. not ok\n";

# end transaction
$dbh->{AutoCommit} = 1;

unlink $PGIN;
  

or you can use the 'Perl5 extension for PostgreSQL' ...
note: I didn't test the script

use strict;
use Pg;   

my $dbname = 'your dbname';
my $lo_path = 'path/to/you/binaryfile';
my ($tbl, $fld) = ('your table', 'oid field');

my $conn = Pg::connectdb("dbname=$dbname");
die $conn->errorMessage unless PGRES_CONNECTION_OK eq $conn->status;   

  my $result = $conn->exec("BEGIN");
  die $conn->errorMessage unless PGRES_COMMAND_OK eq $result->resultStatus;

  # import  large object and get its oid
  my $new_oid = $conn->lo_import($lo_path) or die $conn->errorMessage;

  $result = $conn->exec("END");
  die $conn->errorMessage unless PGRES_COMMAND_OK eq
$result->resultStatus; 

# insert the oid of the lobj
  my $sql = sprintf("INSERT INTO %s (%s) VALUES (%ld)",
$tbl, $fld, $new_oid);
 
  $result = $conn->exec($sql);
  die $conn->errorMessage unless PGRES_COMMAND_OK eq
$result->resultStatus;  

undef $conn; 


Sherwin

[EMAIL PROTECTED] writes:
>I need to store a binary file in a database. I use a cgi writed in shell
>to do it. So I can use Postgres user to execute the cgi.
>
>How can I store a binary file in a database with a cgi ?
>
>Thanks a lot.
>
>Laurent.
>
>
>
>





Re(2): [SQL] Permissions on database

2001-03-14 Thread pgsql-sql

you can also do it by a shell script

grantall.sh:
## start of grantall.sh ###
#!/bin/ash
SQL="SELECT relname FROM pg_class WHERE (relkind = 'r' OR relkind = 'S')
AND relname !~ '^pg_' ORDER BY relname"
OBJ=`psql -t -c "${SQL}" $1`
# OBJ=`echo ${OBJ} | sed 's/EOF//g'`
OBJ=`echo ${OBJ} | sed 's/ /, /g'`
# SQL="REVOKE ALL ON ${OBJ} FROM PUBLIC"
SQL="GRANT ALL ON ${OBJ} TO PUBLIC"
echo ${SQL}
psql -c "${SQL}" $1   
## end of grantall.sh ###

syntax: grantall.sh name_of_database

sherwin

[EMAIL PROTECTED] writes:
>> "bk" == Boulat Khakimov <[EMAIL PROTECTED]> writes:
>
>bk> How do I grant permissions on everything in the selected
>bk> databes?
>
>bk> GRANT doesnt take as on object database name nor does it
>bk> accept wild chars
>
>Attached is some Perl code I wrote long ago to do this.  This
>particular code was done for Keystone, a problem tracking database and
>it would do a "GRANT ALL".  Modify it as needed.  Last I checked it
>worked with both PostgreSQL 6.5.x and 7.0.x
>
>
>roland
>-- 
>






[SQL] trigger output to a file

2001-03-22 Thread pgsql-sql

Hello Everyone,

Here's my simple question.

I just want to know/get the recent changes made to a table.
Deeper: I want the PostgreSQL server to *publish* every
change made to a table (similar to replication, incremental transfer,
etc.).
What is the best way to go about it?

My idea is to create a trigger for every insert/update that will
select the new/updated record and output it to a file. Or better
yet, I would like to output the actual SQL query to a file.
Is it possible to output the result of an SQL query from a trigger
to a file? How?
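
(A heavily hedged sketch, not from this thread: plpgsql itself cannot write
files, but an untrusted language such as PL/PerlU can append from a trigger.
Note the caveat raised in the follow-up below -- the file write survives even
if the surrounding transaction later rolls back:)

CREATE FUNCTION publish_row() RETURNS trigger AS $$
    # append the new row's values to a flat file (runs as the server user)
    open(my $fh, '>>', '/tmp/changes.log') or elog(ERROR, "open: $!");
    print $fh join("\t", values %{ $_TD->{new} }), "\n";
    close($fh);
    return;    # pass the row through unmodified
$$ LANGUAGE plperlu;

CREATE TRIGGER publish_row AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE publish_row();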

I would appreciate any answer. Thank you very much.
Sherwin





Re(2): [SQL] trigger output to a file

2001-03-25 Thread pgsql-sql

Thanks Cedar, Jan, and Andy.

Actually the setup is something like this...
There are two remote servers, remoteA and remoteB.
The table on remoteA needs to be synchronized with the
table on remoteB all the time (well, there could be an interval).
remoteB will *publish* every change and remoteA is *subscribed* to it.

These were my previous solutions:
1. Have a program (using Perl & DBI) on remoteA connect to
remoteB and do the synchronization.
>> I can't buy this 'coz remoteB has too many *hits*.
I just can't afford the cost.
2. Have a trigger on remoteB that will output to a file the result of
every SQL statement, or the actual SQL.
>> My understanding now is that this will not do it because
of a possible transaction rollback -- thanks again.

As much as possible I want to do the synchronization
*incrementally* (just deal with the difference between remoteA & remoteB).
But I guess I have to do it the hard way.

Here's my third solution. Please comment on this.
KNOWN FACTORS:
  ^ poor connection
      >>> the solution should be intelligent enough to handle such a
situation.
3RD SOLUTION:
  ^ Have a script on remoteB use pg_dump or SQL COPY, and place it in
the crontab (say, every 5 seconds).
  ^ Have a script on remoteA that will copy the dump.file from remoteB.
Place it in the crontab and use *scp* (secure copy) for the copying.
After the dump.file is acquired, have another script take care of it.

What do you think? Any better idea?
Thank you.

Sherwin





[SQL] Making dirty reads possible?

2004-12-06 Thread pgsql-sql
Hi co-fanatics.
I am working on a small proof of concept, and am running into a small
obstacle. (The proof of concept shows, among other things, that doing
calculations in the database works well, and that it is possible to let it
run 'jobs'.)
Consider the following stored procedure:

  For reasons of size the complete code is located on my site:
  http://www.vankoperen.nl/concepts/postgresql/primer/index.html
It generates prime numbers for (parameter) odd numbers starting from the
biggest known prime in the primes table.

The "controller" table makes it possible to abort execution, something
which can be handy if you did a 'select primer(1000);'.
I am just getting to grips with the read caching and the way to
circumvent it (using the EXECUTE function) so as to read data each time
and thus react to the newest data, especially the data in the
"controller" table in this case.

Now what does not seem to work is the opposite thing: I cannot, from
the console etc., read the new data as the function is generating it.
If I 'SELECT count(*);' at the start, or near the end of the running
function, it always returns the same. Only when the function is finished
does it commit, and the external select returns the new and correct value.
To monitor the function's progress (and for other reasons too, which are
not important in this concept yet) I really want to read the
UNCOMMITTED data, or have some way to COMMIT it during the function's
execution; but currently only the whole function can be regarded as a
transaction, and nested transactions are not (yet) supported.

Some digging in the mailing list archives pointed to isolation levels.
Apparently 'dirty reads' allow this, in theory:
[quote http://archives.postgresql.org/pgsql-hackers/2004-08/msg01417.php ]
It's only allowed when the transaction is in READ UNCOMMITTED isolation 
level.
Something Postgres doesn't currently support. In fact I'm not aware of 
any SQL
database that supports it, though I'm sure there's one somewhere.

You wouldn't normally want to use such a thing, but it could be useful for,
for example, seeing what progress a transaction has made for a UI progress
meter.
[/quote]
But not possible for real at the moment?
So, summarising:
- nested transactions are not (yet) supported
- the READ UNCOMMITTED isolation level is not (yet) supported
- the EXECUTE plpgsql construct does not circumvent the transaction
Is there a way around this?
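
(One workaround for the monitoring case specifically, assuming the contrib
dblink module is available: write progress over a second connection, whose
transaction commits independently of the running function. The column here
is hypothetical:)

select dblink_exec('dbname=mydb',
                   'update controller set progress_count = progress_count + 1');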
Regards,
Ellert.


[SQL] selecting records X minutes apart

2011-06-03 Thread lists-pgsql
I have a table that, at a minimum, has ID and timestamp columns.  Records
are inserted into it with random IDs and timestamps.  Duplicate IDs are allowed.

I want to select records grouped by ID, ordered by timestamp, that are X minutes
apart. In this case X is 5.

Note, the intervals are not X-minute wall clock intervals; they are X-minute
intervals from the last accepted record, per ID.

For instance here is some sample input data:

ID  TS (HH:MM)
---
0   20:00
1   20:03
1   20:04
0   20:05
1   20:05
0   20:08
1   20:09
0   20:10

I'd want the select to return:

ID  TS (HH:MM)
---
0   20:00
0   20:05
0   20:10
1   20:03
1   20:09


Does my question make sense?

Thanks in advance,
Wayne



Re: [SQL] selecting records X minutes apart

2011-06-04 Thread lists-pgsql
The TS column type is actually timestamp without time zone, and yes, I want to
take seconds into account, so both of your entries would be included in the
result.

On Fri, Jun 03, 2011 at 06:01:53PM -0700, Kevin Crain wrote:
> Will you be using a full timestamp with that or are you only concerned
> about hours and minutes? If you want a full timestamp do you care
> about the seconds? For example, do you want to be able to do this for
> '2011-06-01 23:59:04' and '2011-06-02 00:04:04'?
> 
> On Fri, Jun 3, 2011 at 12:52 PM,   wrote:
> > I have a table that, at a minimum, has ID and timestamp columns.  Records
> > are inserted into with random IDs and timestamps.  Duplicate IDs are 
> > allowed.
> >
> > I want to select records grouped by ID, ordered by timestamp that are X 
> > minutes
> > apart. In this case X is 5.
> >
> > Note, the intervals are not X minute wall clock intervals, they are X minute
> > intervals from the last accepted record, per-id.
> >
> > For instance here is some sample input data:
> >
> > ID      TS (HH:MM)
> > ---
> > 0       20:00
> > 1       20:03
> > 1       20:04
> > 0       20:05
> > 1       20:05
> > 0       20:08
> > 1       20:09
> > 0       20:10
> >
> > I'd want the select to return:
> >
> > ID      TS (HH:MM)
> > ---
> > 0       20:00
> > 0       20:05
> > 0       20:10
> > 1       20:03
> > 1       20:09
> >
> >
> > Does my question make sense?
> >
> > Thanks in advance,
> > Wayne
> >
> >



Re: [SQL] selecting records X minutes apart

2011-06-04 Thread lists-pgsql
On Sat, Jun 04, 2011 at 11:45:08AM +, Jasen Betts wrote:
> On 2011-06-03, lists-pg...@useunix.net  wrote:
> >
> > ID  TS (HH:MM)
> > ---
> > 0   20:00
> > 0   20:05
> > 0   20:10
> > 1   20:03
> > 1   20:09
> >
> >
> > Does my question make sense?
> 
> no, why is (1,20:04) excluded, but (0,20:05) included?
> both records are 5 minutes from the newest.

Jasen,

(1,20:04) is excluded because its timestamp is less than 5 minutes from the
previous record with the same ID (1,20:03); (0,20:05) is included for the
opposite reason.

Let me restate my requirement again with a little more detail.  I want to
select records grouped by ID, ordered by timestamp, in ascending order so I'm
starting with the oldest, that are at least X minutes apart.

I hope that helps.

Thanks again,
Wayne



Re: [SQL] selecting records X minutes apart

2011-06-04 Thread lists-pgsql
Did you mean WHERE in place of your first AND?  If so, I already had something
like this, but it only returns one set: the oldest group of entries for each
ID.


On Sat, Jun 04, 2011 at 01:09:39PM -0700, Richard Broersma wrote:
> On Sat, Jun 4, 2011 at 12:15 PM,   wrote:
> >  I want to
> > select records grouped by ID, ordered by timestamp, in ascending order so 
> > I'm
> > starting with the oldest, that are at least X minutes apart.
> 
> 
> Here my guess:
> 
> SELECT id, ts
>   FROM Yourtable AS A
>AND NOT EXISTS ( SELECT *
>   FROM Yourtable AS B
>  WHERE B.id = A.id
>AND B.ts > A.ts - INTERVAL '5 MINUTES'
>AND B.tx < A.ts )
> 
> ORDER BY id, ts;
> 
> -- 
> Regards,
> Richard Broersma Jr.



Re: [SQL] selecting records X minutes apart

2011-06-04 Thread lists-pgsql
Let's a take a look at just the input set for ID 0.

0       20:00
0       20:05
0       20:08
0       20:10

I want records, starting from the oldest record (20:00), that are at least 5
minutes apart.  So 20:00, 20:05, 20:10; but 20:08 - 20:05 is only 3 minutes, so
it is to be ignored.

I was hoping to do this with a single SQL query that renders good runtime
performance, but that may not be possible. I'm by no means proficient in SQL.

On Sat, Jun 04, 2011 at 05:51:18PM -0700, Kevin Crain wrote:
> Why is (0,20:10) listed in your expected results when there is a (0,20:08)?
> 
> 
> On Fri, Jun 3, 2011 at 12:52 PM,   wrote:
> > I have a table that, at a minimum, has ID and timestamp columns.  Records
> > are inserted into with random IDs and timestamps.  Duplicate IDs are 
> > allowed.
> >
> > I want to select records grouped by ID, ordered by timestamp that are X 
> > minutes
> > apart. In this case X is 5.
> >
> > Note, the intervals are not X minute wall clock intervals, they are X minute
> > intervals from the last accepted record, per-id.
> >
> > For instance here is some sample input data:
> >
> > ID      TS (HH:MM)
> > ---
> > 0       20:00
> > 1       20:03
> > 1       20:04
> > 0       20:05
> > 1       20:05
> > 0       20:08
> > 1       20:09
> > 0       20:10
> >
> > I'd want the select to return:
> >
> > ID      TS (HH:MM)
> > ---
> > 0       20:00
> > 0       20:05
> > 0       20:10
> > 1       20:03
> > 1       20:09
> >
> >
> > Does my question make sense?
> >
> > Thanks in advance,
> > Wayne
> >
> >
> 



Re: [SQL] selecting records X minutes apart

2011-06-05 Thread lists-pgsql
Thank you all who replied!!  It looks like Sugawara's recursive solution does
the trick.  Unfortunately performance is quite poor for the sample dataset I'm
working with, which is a table of about 5 records.  Indeed, there are
indexes applied to the table.  I believe the recursive select is being
executed a great number of times, causing the first part of the query to take a
long time.

The fastest solution I've come up with is a plpgsql procedure that loops over a
select whose result is ordered by (id, tstamp), examines the tstamp
values, and only returns rows that meet the interval criteria. This technique
takes roughly 2 seconds to filter the records over my 5 record sample
set, which is acceptable but not nearly as elegant as a single SQL
statement.
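
For reference, a minimal sketch of that kind of plpgsql filter, written
against the tbl(id, ts) layout from Sugawara's example quoted below
(untested):

CREATE OR REPLACE FUNCTION spaced_rows(x interval)
RETURNS SETOF tbl AS $$
DECLARE
    r       tbl%ROWTYPE;
    last_id integer;
    last_ts time;
BEGIN
    FOR r IN SELECT * FROM tbl ORDER BY id, ts LOOP
        -- keep the first row of each id, then any row at least x after
        -- the last row kept for that id
        IF last_id IS DISTINCT FROM r.id OR r.ts >= last_ts + x THEN
            last_id := r.id;
            last_ts := r.ts;
            RETURN NEXT r;
        END IF;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT * FROM spaced_rows(interval '5 minutes');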

Again, thank you for all the replies.

Wayne

On Sun, Jun 05, 2011 at 08:52:30PM +0900, Masaru Sugawara wrote:
> On Fri, 3 Jun 2011 15:52:53 -0400
> lists-pg...@useunix.net wrote:
> 
> 
> I also think you might want to use a WITH RECURSIVE clause.
> This SQL searches for intervals of 5 minutes or more,
> and sets a relationship between a parent and its child.
> 
> 
> CREATE TABLE tbl(id integer, ts time) ;
> INSERT INTO tbl VALUES
> (0, '20:00'),
> (0, '20:05'),
> (0, '20:08'),
> (0, '20:10'),
> (0, '20:11'),
> (1, '20:03'),
> (1, '20:04'),
> (1, '20:05'),
> (1, '20:09'),
> (1, '20:16');
> 
> SELECT * FROM tbl;
> 
> 
> --
> WITH RECURSIVE rec(id , ts_p, ts_c) AS (
> SELECT a1.id, min(a1.ts), min(b1.ts)
>FROM tbl AS a1, tbl AS b1 
>WHERE a1.id=b1.id AND a1.ts + interval'5 minute' <= b1.ts 
>GROUP BY a1.id
> UNION ALL
> SELECT t2.id, t2.ts_p, t2.ts_c
>FROM rec AS t1 INNER JOIN 
>(SELECT a2.id, a2.ts as ts_p, min(b2.ts) AS ts_c
>FROM tbl AS a2, tbl AS b2 
>WHERE a2.id = b2.id AND a2.ts + interval'5 minute' <= b2.ts 
>GROUP BY a2.id, a2.ts
> UNION ALL
> SELECT a3.id, a3.ts, null
>FROM tbl AS a3  
> ) AS t2 ON t1.id = t2.id AND t1.ts_c=t2.ts_p
> )
> SELECT DISTINCT id, ts_p AS ts FROM rec
> ORDER BY 1,2;
> 
> 
> 
> 
> 
> 
> > I have a table that, at a minimum, has ID and timestamp columns.  Records
> > are inserted into with random IDs and timestamps.  Duplicate IDs are 
> > allowed. 
> > 
> > I want to select records grouped by ID, ordered by timestamp that are X 
> > minutes
> > apart. In this case X is 5.
> > 
> > Note, the intervals are not X minute wall clock intervals, they are X minute
> > intervals from the last accepted record, per-id.
> > 
> > For instance here is some sample input data:
> > 
> > ID  TS (HH:MM)
> > ---
> > 0   20:00
> > 1   20:03
> > 1   20:04
> > 0   20:05
> > 1   20:05
> > 0   20:08
> > 1   20:09
> > 0   20:10
> > 
> > I'd want the select to return:
> > 
> > ID  TS (HH:MM)
> > ---
> > 0   20:00
> > 0   20:05
> > 0   20:10
> > 1   20:03
> > 1   20:09
> > 
> > 
> > Does my question make sense?
> > 
> > Thanks in advance,
> > Wayne
> > 
> 
> 
> 
> 



Re: [SQL] Usage of function returning record in query

2011-07-04 Thread lists-pgsql
Sorry, I don't have a useful answer, but I have a similar question.

Along these same lines, how does one access the discrete x,y components
of type 'point'?
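
(As the follow-up below notes, the components turn out to be accessible with
the array subscript operator, p[0] for x and p[1] for y; a minimal sketch:)

select p[0] as x, p[1] as y
from (select point '(1,2)' as p) s;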


On Mon, Jul 04, 2011 at 06:59:49AM -0700, gmb wrote:
> 
> Harald Fuchs-10 wrote:
> > In article <1309762075448-4549140.p...@n5.nabble.com>,gmb
> > <gmbou...@gmail.com> writes:
> > SELECT itemid, (calcvalues(itemid)).* FROM itemlist
> > 
> 
> Thanks for the feedback, Harald.
> 
> How about specifying different aliases to the resulting values?
> This will be handy when I use the same function multiple times in the same
> query. 
> (the function will take another input parameters used in the calculations)
> 
> E.g.:
> SELECT itemid, (calcvalues(itemid, '2011-06-06')).*, (calcvalues(itemid,
> '2011-06-07')).* FROM itemlist;
> 
>  itemid | calcval1 | calcval2 | calcval1 | calcval2
> --------+----------+----------+----------+----------
>       4 |     0.67 |    10.00 |     0.64 |    65.23
>       5 |     1.55 |    45.00 |     1.23 |    23.25
>       6 |     3.60 |    69.00 |     2.98 |    62.66
> How will I manage unique column names for this output?
> 
> --
> View this message in context: 
> http://postgresql.1045698.n5.nabble.com/Usage-of-function-retruning-record-in-query-tp4549140p4550092.html
> Sent from the PostgreSQL - sql mailing list archive at Nabble.com.
> 



Re: [SQL] Usage of function returning record in query

2011-07-04 Thread lists-pgsql
Never mind; it turns out you can do it with the array subscript operator. I
stumbled onto this by chance. I don't know if this is in the
documentation somewhere and I perhaps missed it?

On Mon, Jul 04, 2011 at 02:31:52PM -0400, lists-pg...@useunix.net wrote:
> Sorry, I don't have a useful answer but I have a similar question.
> 
> Along these same lines how does one access the discreet x,y components
> of type 'point'?
> 
> 
> On Mon, Jul 04, 2011 at 06:59:49AM -0700, gmb wrote:
> > 
> > Harald Fuchs-10 wrote:
> > > In article <1309762075448-4549140.p...@n5.nabble.com>,gmb
> > > <gmbou...@gmail.com> writes:
> > > SELECT itemid, (calcvalues(itemid)).* FROM itemlist
> > > 
> > 
> > Thanks for the feedback, Harald.
> > 
> > How about specifying different aliases to the resulting values?
> > This will be handy when I use the same function multiple times in the same
> > query. 
> > (the function will take another input parameters used in the calculations)
> > 
> > E.g.:
> > SELECT itemid, (calcvalues(itemid, '2011-06-06')).*, (calcvalues(itemid,
> > '2011-06-07')).* FROM itemlist;
> > 
> >  itemid | calcval1 | calcval2 | calcval1 | calcval2
> > +--+--+--+-- 
> >   4 | 0.67 | 10.00| 0.64 | 65.23 
> >   5 | 1.55 | 45.00| 1.23 | 23.25 
> >   6 | 3.60 | 69.00| 2.98 | 62.66 
> > How will I manage unique column names for this output?
> > 
> > --
> > View this message in context: 
> > http://postgresql.1045698.n5.nabble.com/Usage-of-function-retruning-record-in-query-tp4549140p4550092.html
> > Sent from the PostgreSQL - sql mailing list archive at Nabble.com.
> > 
> 



Re: [SQL] overload

2011-07-08 Thread lists-pgsql
I have the same situation with large tables.  Take a look at using a
cursor to fetch several thousand rows at a time.  I presume what's
happening is that Perl is attempting to create a massive list/array in
memory.  If you use a cursor, the list should only contain X rows,
where X is the number specified at each fetch execution.  You'll
need to define the cursor inside a transaction block (a plain-SQL sketch
follows the steps below).

- begin transaction
- define the cursor
- fetch rows from cursor
- while row count from previous step > 0, execute previous step
- terminate transaction
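
(A minimal sketch of those steps in plain SQL; cursor name and batch size
are arbitrary:)

BEGIN;
DECLARE seq_cur CURSOR FOR SELECT sequence FROM entry;
FETCH 5000 FROM seq_cur;   -- repeat this step until it returns no rows
CLOSE seq_cur;
COMMIT;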

Or you could use plpgsql instead of plperl, FOR loops over result sets in
plpgsql implicitly use cursors... it's just a little less code.

Hope that helps,
Wayne

On Tue, Jul 05, 2011 at 10:29:03PM +0200, Viktor Bojović wrote:
> Hi,
> while reading 20GB table through PL/PERL function , it constantly grows in
> RAM.
> I wanted to ask you which is the best way to read table inside that
> function without such memory consumption.
> Thanks in advance
> 
> Code is here:
> 
> CREATE  FUNCTION pattern_counter("patLength" integer)
>   RETURNS varchar AS
> $BODY$
> my $rv = spi_exec_query("select sequence from entry");
> my $rowCount = $rv->{processed};
> my $patLen = $_[0];
> my $patt = '';
> my %patterns=();
> foreach my $rn (0 .. $rowCount -1){
> my $row = $rv->{rows}[$rn];
> my $seq = $row->{sequence};
> for (my $x = 1;$x<=length($seq) - $patLen;$x++){
> $patt=substr($seq,$x,$patLen);
> if (! defined $patterns{$patt}) {
> $patterns{$patt}=1;
> }else{
> $patterns{$patt}++;
> }
> }
> }
> foreach $patt (keys %patterns){
> my $sql="insert into patterns values('".$patt."',".$patterns{$patt}.")";
> spi_exec_query($sql);
> }
> return '';
> $BODY$
>   LANGUAGE plperl VOLATILE
>   COST 100;
> 
> 
> 
> -- 
> ---
> Viktor Bojović
> ---
> Wherever I go, Murphy goes with me



Re: [SQL] overload

2011-07-08 Thread lists-pgsql
Hi Viktor,

I'm not sure what your requirements are in terms of performance and
stability of your result set. See Pavel's response. A cursor issues
a single query and renders a single result set. The result set is
static; the cursor just gives you finer control/performance when
retrieving rows from the set. Using a transaction will also render better
performance when %patterns contains a large number of keys/values:
insert all of them in one transaction, the same one you opened for the
cursor.

Your method issues many queries, and each successive query takes
longer. The number of queries will also increase as the table size
increases. It could also return duplicate rows and/or miss rows due to
other transactions completing between your SELECT queries.

If you can tolerate the above issues then so be it; if not, you really
should look at cursors.

Also, there might be a bug in your code if you delete entries from
'entry'. You're depending on $rowCountAll remaining static, which is not
the case if you ever delete entries. You can fix this by skipping the
"select count(1)" step and just breaking your loop when fewer than
$windowSize entries are returned from the "select sequence..." query.

Wayne


On Fri, Jul 08, 2011 at 08:55:36PM +0200, Viktor Bojović wrote:
> Thanx Wayne,
> at the end i did it that way and it works.
> The code is below.
> 
> CREATE FUNCTION pattern_counter1("patLength" integer) RETURNS character
> varying
> LANGUAGE plperl
> AS $_X$
> my $rvCnt = spi_exec_query("select count(1) as cnt from entry");
> #my $rowCountAll = $rvCnt->{processed};
> my $row = $rvCnt->{rows}[0];
> my $rowCountAll = $row->{cnt};
> my $windowSize = 50;
> my %patterns=();
> for (my $p=0;$p<$rowCountAll;$p+=$windowSize){
> my $sql="select sequence from entry limit $windowSize offset $p";
> 
> my $rv = spi_exec_query($sql);
> my $rowCount = $rv->{processed};
> my $patLen = $_[0];
> my $patt = '';
> 
> foreach my $rn (0 .. $rowCount -1){
> my $row = $rv->{rows}[$rn];
> my $seq = $row->{sequence};
> for (my $x = 1;$x<=length($seq) - $patLen;$x++){
> $patt=substr($seq,$x,$patLen);
> if (! defined $patterns{$patt}) {
> $patterns{$patt}=1;
> }else{
> $patterns{$patt}++;
> }
> }
> }
> }
> 
> foreach $patt (keys %patterns){
> my $sql="insert into patterns values('".$patt."',".$patterns{$patt}.")";
> spi_exec_query($sql);
> }
> return '';
> $_X$;
> 
> 
> On Fri, Jul 8, 2011 at 8:50 PM,  wrote:
> 
> > I'm have the same situation with large tables.  Take a look at using a
> > cursor to fetch several thousand rows at a time.  I presume what's
> > happening is that perl is attempting to create a massive list/array in
> > memory.  If you use a cursor the list should only contain X number of
> > rows where X in the number specified at each fetch execution.  You'll
> > need to define the cursor inside a transaction block.
> >
> > - begin transaction
> > - define the cursor
> > - fetch rows from cursor
> > - while row count from previous step > 0, execute previous step
> > - terminate transaction
> >
> > Or you could use plpgsql instead of plperl, FOR loops over result sets in
> > plpgsql implicitly use cursors... it's just a little less code.
> >
> > Hope that helps,
> > Wayne
> >
> > On Tue, Jul 05, 2011 at 10:29:03PM +0200, Viktor Bojović wrote:
> > > Hi,
> > > while reading 20GB table through PL/PERL function , it constantly grows
> > in
> > > RAM.
> > > I wanted to ask you which is the best way to read table inside that
> > > function without such memory consumption.
> > > Thanks in advance
> > >
> > > Code is here:
> > >
> > > CREATE  FUNCTION pattern_counter("patLength" integer)
> > >   RETURNS varchar AS
> > > $BODY$
> > > my $rv = spi_exec_query("select sequence from entry");
> > > my $rowCount = $rv->{processed};
> > > my $patLen = $_[0];
> > > my $patt = '';
> > > my %patterns=();
> > > foreach my $rn (0 .. $rowCount -1){
> > > my $row = $rv->{rows}[$rn];
> > > my $seq = $row->{sequence};
> > > for (my $x = 1;$x<=length($seq) - $patLen;$x++){
> > > $patt=substr($seq,$x,$patLen);
> > > if (! defined $patterns{$patt}) {
> > > $patterns{$patt}=1;
> > > }else{
> > > $patterns{$patt}++;
> > > }
> > > }
> > >   

[SQL] Search for underscore w/ LIKE

2000-07-07 Thread brianb-pgsql


How do I use LIKE to search for strings with an underscore? The
documentation (well, Bruce's book) says to use 2 underscores (__) but it
doesn't work. For example:

create table liketest (
somestr varchar(50)
);

insert into liketest values ('foo_bar');
insert into liketest values ('foobar');
insert into liketest values ('snackbar');
insert into liketest values ('crow_bar');

-- I want to select strings with "_bar"
select * from liketest where somestr like '%_bar';
 somestr  
--
 foo_bar
 foobar
 snackbar
 crow_bar
(4 rows)

-- Using double underscore doesn't work either
select * from liketest where somestr like '%__bar';
 somestr  
--
 foo_bar
 foobar
 snackbar
 crow_bar
(4 rows)

-- Escaping w/ backslash doesn't work 
select * from liketest where somestr like '%\_bar';
 somestr  
--
 foo_bar
 foobar
 snackbar
 crow_bar
(4 rows)
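
(For reference, two forms that should work: with backslash-interpreted
string literals the pattern needs a doubled backslash, since the literal
itself consumes one; alternatively an explicit ESCAPE character sidesteps
literal escaping entirely.)

select * from liketest where somestr like '%\\_bar';
select * from liketest where somestr like '%!_bar' escape '!';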

Brian
--
Brian Baquiran <[EMAIL PROTECTED]>
http://www.baquiran.com/ AIM: bbaquiran 
Work: (632)718   Home: (632)9227123



[SQL] optimize sql

2000-07-26 Thread pgsql-sql

HI!

The SQL below is too slow.

SELECT name FROM office, office_application 
WHERE code = office_code 
AND name NOT IN
(SELECT DISTINCT name FROM office, office_application
WHERE active = 't' AND code = office_code);

Can anyone tell me how to optimize it?
Thanks.
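
(One standard rewrite, hedged: NOT IN over a subquery planned very poorly on
old releases; NOT EXISTS usually does much better. A sketch intended to keep
the same semantics:)

SELECT name FROM office, office_application
WHERE code = office_code
AND NOT EXISTS
    (SELECT 1 FROM office o2, office_application oa2
     WHERE oa2.active = 't' AND o2.code = oa2.office_code
       AND o2.name = office.name);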





Re(2): [SQL] optimize sql

2000-07-26 Thread pgsql-sql

[EMAIL PROTECTED] writes:
>How does the output of the above differ from:
>
>SELECT name FROM office, office_application 
>WHERE code = office_code 
>AND active != 't';
>
>Without knowing the table structures (which tables to active, code, 
>and office_code belong to?) it's hard to suggest much else.
>
>Ross


The name and code fields belong to the office table, while the
office_code and active fields belong to the office_application table.
The name field has duplicates, and among the duplicates
only one active field is TRUE. I just wanted to get the name values
that have no TRUE active field. Any other idea? Thanks.

sherwin




[SQL] INSERT waiting under heavy load

2006-01-06 Thread alex-lists-pgsql

After digging through all the discussions of "INSERT waiting" problems, I am
still not clear about the consensus on solving it.



I am running a 6:1 SELECT:INSERT ratio (each INSERT fires an UPDATE trigger
that hits a column in a table holding keys used by SELECT). I am looking at
doing about 2,000 INSERT/UPDATE per second, with possible peaks at 10,000
INSERT/UPDATE per second (i.e. 60,000 SELECTs).


(table 1)

The table holding primary keys is expected to grow to around 10,000 rows.
This is the table that gets 50% of SELECTs and 100% of UPDATES. This is the
owner status table. It is optimized so with a single SELECT against this
table all information needed for real-time clients would be accessible.

(table 2)

The number of rows in the second table is expected to be around 100
times the number of rows in the 1st table. Each entry in this table uses the
first table's column as a foreign key to avoid unlinked entries. It also has
foreign key dependencies on some other tables that, for the purposes of the
application, are never updated. This table gets the other 50% of SELECTs.

(table 3)

Finally, the 3rd table (audit log) is expected to have an arbitrary number of
entries (measured in millions). It gets virtually no SELECT activity in
normal operations. If the data from this table is needed, a snapshot of this
table gets pulled into a different table (most likely on a different
database) and gets processed there. The table gets cleaned up at specific
intervals using a DROP TABLE/CREATE TABLE sequence; it is guaranteed that
only the management application (non-real time) performs the DROP
TABLE/CREATE TABLE combination. The only thing that I do not particularly
like is that every INSERT into this table has to adjust a counter column in
a corresponding row of (table1) via the (table3->table2->table1) path.



The server is configured to accept about 200 connections from clients. The
problem is that after the first couple of hours of working normally, as
table (3) grows, the backend indicates that more and more INSERTs into table
3 are held up in the "INSERT waiting" state.

It happens even when table 1 contains only one row and table 2 contains 4 rows.

Is there anything that can be done to diagnose why the "INSERT waiting" state
becomes so prevalent?

Would pulling the counter out of table 1 into a table
(4) that contains only a reference to the appropriate table (1) row and the
counter value make it better?


Thanks,
Alex






Re: [SQL] INSERT waiting under heavy load

2006-01-06 Thread alex-lists-pgsql
> > After digging through all the discussions of "INSERT waiting" problems I am
> > still not clear about the concensus about solving it.
> > ...
> > The only thing that I do not particulary like is that every INSERT
> > into this table has to adjust a counter column in a corresponding row of the
> > (table1) via (table3->table2->table1) path.
> 
> Well, if there are only a few rows in table1, then this design is
> inherently going to lose big.  Any two transactions trying to update the
> same table1 row are going to conflict and one will have to wait for the
> other to complete.  Rethink the need for those counters.

I appreciate that it is most likely not the best design, though I expect a
reasonable distribution of UPDATE hits against the first table as the
number of rows increases.

What I do not understand is this:

if the problem is caused by the acquire lock->modify column->release
lock cycle on table 1, then why does it increase significantly as the
number of entries in table 3 grows? The simulation maintains a pretty much
constant rate of new requests coming into table 3.

Alex





[SQL] stored procedures for complex SELECTs

2006-01-18 Thread alex-lists-pgsql

Are there performance advantages that can be achieved by wrapping a 
complex SELECT into a stored procedure? 

Alex



[SQL] executing external command

2006-03-16 Thread alex-lists-pgsql


Is there a way to execute an external (i.e. system) command from inside a
pl/pgsql function?
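
(Not from plpgsql directly, as far as I know; but an untrusted language such
as PL/PerlU can shell out. A minimal sketch, assuming plperlu is installed:)

CREATE FUNCTION run_cmd(text) RETURNS integer AS $$
    # returns the shell's exit status
    return system($_[0]);
$$ LANGUAGE plperlu;

-- usage: SELECT run_cmd('touch /tmp/marker');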

Alex



[SQL] How to find entries missing in 2nd table?

2006-07-11 Thread alex-lists-pgsql
Hi,
I realize I probably lost my marbles, but I've been having a god
awful time with a single query:

control:
controller_id   pk;

datapack:
controller_id   fk;

I need to get all entries from the table control that are not listed in
datapack.
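
(Two equivalent sketches for this anti-join:)

SELECT c.*
FROM control c
LEFT JOIN datapack d ON d.controller_id = c.controller_id
WHERE d.controller_id IS NULL;

-- or:
SELECT *
FROM control c
WHERE NOT EXISTS
    (SELECT 1 FROM datapack d WHERE d.controller_id = c.controller_id);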


Thanks,
Alex
