Re: [SQL] SQL to Check whether "AN HOUR PERIOD" is between start and end timestamps
Dear Srikanth,

You can solve your problem by doing this. The SQL is as follows.
Assume the time interval 2008-12-07 07:59:59 to 2008-12-07 08:58:59
and the table name time_interval:

SELECT COUNT(*) FROM
  (SELECT customer_id, log_session_id, start_ts, end_ts,
          end_ts - start_ts AS "Interval"
   FROM time_interval
   WHERE end_ts - start_ts >= '1 hour'
     AND '2008-12-07 07:59:59' BETWEEN start_ts AND end_ts) AS count;

--ORIGINAL MESSAGE--
From: Richard Huxton
To: Srikanth
Cc: [email protected]
Sent: Tuesday, 17 March, 2009 18:06:09
Subject: Re: [SQL] SQL to Check whether "AN HOUR PERIOD" is between start and end timestamps

Dear all,

I have a table that records User Login Sessions with two timestamp fields, basically the start and end of a session (start_ts and end_ts). Each row in the table identifies a session which a customer has used.

Data from the table (session):

 customer_id | log_session_id  | start_ts                   | end_ts
-------------+-----------------+----------------------------+----------------------------
 1006100716  | 209571229340224 | 15/12/2008 16:53:52.665327 | 15/12/2008 16:58:52.665327
 1006100789  | 112061228488202 | 05/12/2008 20:13:32.773065 | 09/12/2008 22:59:02.770218
 1006100888  | 214221233045949 | 27/01/2009 14:15:16.289626 | 27/01/2009 14:58:59.989182
 100608      | 205221236839534 | 12/03/2009 12:02:15.947509 | 12/03/2009 12:07:15.947509
 1006100825  | 112331229068049 | 12/12/2008 13:17:37.229249 | 12/12/2008 13:56:58.394577

The requirement is as follows: I have to find out how many User Sessions were present in any given "1 HOUR TIME PERIOD". A single User Session can span across many days. Example:

 start_ts                   | end_ts
 05/12/2008 20:13:32.773065 | 09/12/2008 22:59:02.770218

Let me explain a scenario. I have to find out the number of sessions during the interval '07/01/2009 11:00:00' to '07/01/2009 11:59:59'. If I have to find the number of sessions present at any instant of time, say '07/01/2009 11:25:25', I can easily find out using the following query:

select count(log_session_id) from session
where '07/01/2009 11:25:25' between start_ts and end_ts;

But I have to find the number of sessions present during the "HOUR INTERVAL" '07/01/2009 11:00:00' to '07/01/2009 11:59:59'. I tried using wildcards in the timestamp ('07/01/2009 11:%:%') but in vain. I tried googling / searching the archives without any success either. I feel this is a general requirement and this topic should have already been discussed. Could someone help me solve this, please? Any lead would do, like some special postgres function or any other means.

Many Thanks,
Re: [SQL] SQL to Check whether "AN HOUR PERIOD" is between start and end timestamps
James Kitambara wrote:
> Dear Srikanth,
> You can solve your problem by doing this. The SQL is as follows.
> Assume the time interval 2008-12-07 07:59:59 to 2008-12-07 08:58:59
> and the table name time_interval:
>
> SELECT COUNT(*) FROM
>   (SELECT customer_id, log_session_id, start_ts, end_ts,
>           end_ts - start_ts AS "Interval"
>    FROM time_interval
>    WHERE end_ts - start_ts >= '1 hour'
>      AND '2008-12-07 07:59:59' BETWEEN start_ts AND end_ts) AS count;
Another way to phrase the WHERE clause is with the OVERLAPS operator,
something like this:
WHERE (start_ts, end_ts) OVERLAPS ('2008-12-07 07:59:59', '2008-12-07 08:59:59')
What I'm not so sure about is how optimizable this construct is.
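Applied to the original question, a full statement might look like this (a sketch, using the session table and the example hour from the first message; note that OVERLAPS treats each pair as the half-open interval start <= t < end, so a session that merely touches an endpoint is not counted):

SELECT count(log_session_id)
FROM session
WHERE (start_ts, end_ts) OVERLAPS
      (timestamp '2009-01-07 11:00:00', timestamp '2009-01-07 12:00:00');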
--
Alvaro Herrera    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
[SQL] ibatis with overlaps query
Good morning,

With ibatis, do overlap checking:

(1) select (DATE #begin_date#, DATE #end_date#) overlaps
           (DATE '2008-01-01', DATE '2009-01-01')

    . #begin_date# is varchar
    . #end_date# is varchar

Always get:

Cause: java.sql.SQLException: ERROR: syntax error at or near "$4"

However, when I updated the query to

(2) select (#begin_date#::DATE, #end_date#::DATE) overlaps
           (DATE '2008-01-01', DATE '2009-01-01')

it works. I am a bit confused why (1) does not work, but (2) does?

--
Lu Ying

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
[SQL] Performance problem with row count trigger
I was looking to speed up a count(*) query, as per the recommendations on the postgres wiki:

http://wiki.postgresql.org/wiki/Slow_Counting

I decided to use the trigger approach to get an accurate count that doesn't depend on VACUUM being run recently. I've got it working, but the addition of the trigger slows things down so badly that it's not a viable option. I was hoping for advice on how to speed things up, or at least an explanation of why it gets so slow.

The relevant tables are as follows:

---
CREATE TABLE dataset (
    dataset_id    SERIAL PRIMARY KEY,
    catalog_id    INTEGER REFERENCES catalog (catalog_id) ON DELETE CASCADE,
    t_begin       TIMESTAMP WITHOUT TIME ZONE NULL,
    t_end         TIMESTAMP WITHOUT TIME ZONE NULL,
    "ctime"       TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT now(),
    "mtime"       TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT now(),
    "assoc_count" BIGINT NOT NULL DEFAULT 0
);

CREATE TABLE assoc (
    dataset_id INTEGER REFERENCES dataset (dataset_id) ON DELETE CASCADE,
    range      ip4r NOT NULL,
    label_id   INTEGER NULL,
    value      BIGINT NULL,
    PRIMARY KEY (dataset_id, range),
    UNIQUE (dataset_id, range, label_id)
);
---

What I want to do is update the assoc_count field in the dataset table to reflect the count of related records in the assoc table. To do so, I added the following trigger:

---
CREATE OR REPLACE FUNCTION update_assoc_count_insert()
RETURNS TRIGGER AS '
    BEGIN
        UPDATE dataset
        SET assoc_count = assoc_count + 1
        WHERE dataset_id = NEW.dataset_id;
        RETURN NEW;
    END
' LANGUAGE plpgsql;

CREATE TRIGGER assoc_update_assoc_count_insert AFTER INSERT ON assoc
    FOR EACH ROW EXECUTE PROCEDURE update_assoc_count_insert();
---

(I also have triggers for UPDATE/DELETE, left out for brevity.)

The slowness I'm talking about doesn't show up for a single insert, but arises when doing thousands of them in a transaction. Here are some test runs of 10,000 inserts without the trigger in place:

---
 1000 (2231.540142/s)
 2000 (2341.849077/s)
 3000 (2234.332303/s)
 4000 (2311.247629/s)
 5000 (2366.171695/s)
 6000 (2400.028800/s)
 7000 (2407.147716/s)
 8000 (2416.419084/s)
 9000 (2401.476107/s)
10000 (2406.870943/s)
---

The number in parens is the number of inserts per second for each batch of 1,000 inserts. As you can see, performance isn't too terrible, and is pretty constant from start to finish. Now I add the trigger, and here's what happens:

---
 1000 (1723.216901/s)
 2000 (1613.529119/s)
 3000 (1526.081496/s)
 4000 (1431.907261/s)
 5000 (1340.159570/s)
 6000 (1269.746140/s)
 7000 (1191.374990/s)
 8000 (1117.332012/s)
 9000 (1056.309389/s)
10000 (1001.051003/s)
---

The throughput of the first batch of 1,000 is diminished, but still tolerable; after 10,000 inserts, though, it's gotten much worse. This pattern continues, to the point where performance is unacceptable after 20k or 30k inserts.

To rule out the performance of the trigger mechanism itself, I swapped the trigger out for one that does nothing. The results were the same as without the trigger (the first set of numbers), which leads me to believe there's something about the UPDATE statement in the trigger that is causing this behavior. I then tried setting assoc_count to a constant number instead of trying to increment it with assoc_count = assoc_count + 1, but performance was just as bad as with the proper UPDATE statement.

I'm not really sure where to go next. I can update the assoc_count once at the end of the transaction (without a trigger), but this could lead to inconsistencies if another client does inserts without updating the count.
I would really prefer to use the trigger solution recommended on the PGsql wiki, but can't do so until I solve this performance problem. I greatly appreciate any and all help. Thanks.

-Tony

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] SQL to Check whether "AN HOUR PERIOD" is between start and end timestamps
Alvaro Herrera writes:
> Another way to phrase the WHERE clause is with the OVERLAPS operator,
> something like this:
> WHERE (start_ts, end_ts) OVERLAPS ('2008-12-07 07:59:59', '2008-12-07
> 08:59:59')
> What I'm not so sure about is how optimizable this construct is.
Not at all :-( --- or at least, our code doesn't do anything with
it currently; I shouldn't claim that it's impossible to optimize.
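(A common workaround, sketched here rather than taken from the thread: spell the overlap test out as two explicit comparisons, which the planner can match against plain btree indexes on the timestamp columns.)

-- Equivalent to the OVERLAPS form above (given start_ts < end_ts),
-- but amenable to index scans:
SELECT count(*)
FROM session
WHERE start_ts < timestamp '2008-12-07 08:59:59'
  AND end_ts   > timestamp '2008-12-07 07:59:59';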
regards, tom lane
--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] ibatis with overlaps query
Emi Lu writes:
> With ibatis, do overlap checking:
> (1) select (DATE #begin_date#, DATE #end_date#) overlaps
>     (DATE '2008-01-01', DATE '2009-01-01')
> . #begin_date# is varchar
> . #end_date# is varchar
> Cause: java.sql.SQLException: ERROR: syntax error at or near "$4"
> However, when I updated the query to
> (2) select (#begin_date#::DATE, #end_date#::DATE) overlaps
>     (DATE '2008-01-01', DATE '2009-01-01')
> It works. I am a bit confused why (1) does not work, but (2) does?

The syntax DATE 'foo' (or more generally, typename 'foo') works only for string-literal constants 'foo'. It looks like what your frontend is actually sending is a parameter symbol, like DATE $1, which is a syntax error. There is more about this in the manual, sections 4.1.2.5 and 4.2.8:

http://www.postgresql.org/docs/8.3/static/sql-syntax.html

regards, tom lane

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] FUNCTION problem
Adrian Klaver wrote:
> On Wednesday 01 April 2009 4:31:20 pm Peter Willis wrote:
>> Hello,
>>
>> I am having a problem with a FUNCTION. The function creates just fine
>> with no errors. However, when I call the function postgres produces
>> an error. Perhaps someone can enlighten me.
>>
>> --I can reproduce the error by making a test function
>> --that is much easier to follow than the original:
>>
>> CREATE OR REPLACE FUNCTION test_function(integer)
>>     RETURNS SETOF RECORD AS
>> $BODY$
>>     DECLARE croid integer;
>>     BEGIN
>>         --PERFORM A SMALL CALCULATION
>>         --DOESN'T SEEM TO MATTER WHAT IT IS
>>         SELECT INTO croid 2;
>>
>>         --A SELECT STATEMENT OUTPUTS RECORDS (one in this case)
>>         SELECT croid, $1;
>>     END;
>> $BODY$
>>     LANGUAGE 'plpgsql' VOLATILE
>>
>> --The call looks like the following:
>> SELECT test_function(1);
>>
>> --The resulting error reads as follows:
>> ERROR: query has no destination for result data
>> SQL state: 42601
>> HINT: If you want to discard the results of a SELECT, use PERFORM
>> instead.
>> CONTEXT: PL/pgSQL function "test_function" line 5 at SQL statement
>
> You have declared the function to RETURN SETOF. In order for that to
> work you need to do RETURN NEXT. See below for the difference between
> RETURN and RETURN NEXT:
> http://www.postgresql.org/docs/8.3/interactive/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS

Thank you for the pointer. I tried using FOR/RETURN NEXT as suggested but now get a different error:

CREATE OR REPLACE FUNCTION test_function(integer)
    RETURNS SETOF record AS
$BODY$
    DECLARE croid integer;
    DECLARE R RECORD;
    BEGIN
        SELECT INTO croid 2;

        FOR R IN SELECT croid, $1 LOOP
            RETURN NEXT R;
        END LOOP;
        RETURN;
    END;
$BODY$
    LANGUAGE 'plpgsql' VOLATILE

There is now an error:

ERROR: set-valued function called in context that cannot accept a set
SQL state: 0A000
CONTEXT: PL/pgSQL function "test_function" line 7 at RETURN NEXT

PostgreSQL doesn't seem to see 'R' as being a SET OF RECORD.

Peter

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Performance problem with row count trigger
Tony Cebzanov wrote:
> The throughput of the first batch of 1,000 is diminished, but still
> tolerable; after 10,000 inserts, though, it's gotten much worse. This
> pattern continues, to the point where performance is unacceptable
> after 20k or 30k inserts.
>
> To rule out the performance of the trigger mechanism itself, I swapped
> the trigger out for one that does nothing. The results were the same
> as without the trigger (the first set of numbers), which leads me to
> believe there's something about the UPDATE statement in the trigger
> that is causing this behavior.

MVCC bloat from the constant updates to the assoc_count table, maybe?

If you're using 8.3, I'd expect HOT to save you here. Are you using an older version of PostgreSQL? If not, have you by any chance defined an index on assoc_count?

Also, try to keep records in your `dataset' table as narrow as possible. If the catalog_id, t_begin, t_end, ctime and mtime fields do not change almost as often as the assoc_count field, split them into a separate table with a foreign key referencing dataset_id, rather than storing them directly in the dataset table.

--
Craig Ringer

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
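One way to test the bloat theory directly (a sketch using the standard statistics views, assuming stats collection is enabled):

SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'dataset';

If n_dead_tup grows roughly in step with the number of rows inserted into assoc, each per-row UPDATE is leaving a dead version of the counted dataset row behind.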
Re: [SQL] Performance problem with row count trigger
Hi Craig, thanks for your help.

Craig Ringer wrote:
> MVCC bloat from the constant updates to the assoc_count table, maybe?

That's what a coworker suggested might be happening. The fact that a no-op trigger performs fine but the UPDATE trigger doesn't would seem to confirm that it's something in the trigger SQL and not in the trigger mechanism itself.

> If you're using 8.3, I'd expect HOT to save you here. Are you using an
> older version of PostgreSQL? If not, have you by any chance defined an
> index on assoc_count?

I'm running 8.3.7, which is the most recent version from Macports. There's no index of any kind on dataset.assoc_count. Having read up on HOT, it sounds like it would be helpful. Is there anything I need to do to enable HOT in 8.3.7, or is it always used?

> Also, try to keep records in your `dataset' table as narrow as
> possible. If the catalog_id, t_begin, t_end, ctime and mtime fields do
> not change almost as often as the assoc_count field, split them into a
> separate table with a foreign key referencing dataset_id, rather than
> storing them directly in the dataset table.

ctime is the creation time of the dataset, so it's never supposed to be updated. mtime is the last time the dataset was changed, and there's another trigger to update that timestamp whenever the dataset table changes. So, at best, I'd be able to remove the ctime column from the dataset table, but I'd have to add the foreign key, so I don't think that would be helpful.

Your mention of the ctime and mtime columns made me think the update timestamp trigger may be contributing to the problem as well (since the assoc INSERT trigger would presumably cause the dataset UPDATE trigger to fire), but dropping that trigger yielded no improvement, so I think it's something else.

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
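On the HOT question above: HOT needs no configuration in 8.3; it applies automatically whenever the UPDATE changes no indexed column and the row's page has free space for the new version. One setting that can help it along (a sketch, with an arbitrary value) is lowering the table's fillfactor so each heap page keeps some slack for updated row versions:

-- Reserve roughly 30% free space in each page of dataset;
-- affects pages written from this point on.
ALTER TABLE dataset SET (fillfactor = 70);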
Re: [SQL] Performance problem with row count trigger
Tony Cebzanov writes:
> What I want to do is update the assoc_count field in the dataset table
> to reflect the count of related records in the assoc table. To do so,
> I added the following trigger:
>
> CREATE OR REPLACE FUNCTION update_assoc_count_insert()
> RETURNS TRIGGER AS '
>     BEGIN
>         UPDATE dataset
>         SET assoc_count = assoc_count + 1
>         WHERE dataset_id = NEW.dataset_id;
>         RETURN NEW;
>     END
> ' LANGUAGE plpgsql;
>
> CREATE TRIGGER assoc_update_assoc_count_insert AFTER INSERT ON assoc
>     FOR EACH ROW EXECUTE PROCEDURE update_assoc_count_insert();

There is basically no way that this is going to not suck :-(. In the first place, using an AFTER trigger means that each update queues an AFTER trigger update event that has to be fired at statement or transaction end. In the second place (as Craig correctly noted) this results in a separate update to the count-table row for each inserted row, which tremendously bloats the count table with dead tuples. In the third place, if you have any concurrency of insertions, it disappears because all the inserters need to update the same count row.

If you dig in the pgsql-hackers archives, you will find that the original scheme for this was to have each transaction accumulate its total number of insertions minus deletions for a table in local memory, and then insert *one* delta row into the count table just before transaction commit. I don't think it's possible to do that with just user-level triggers (not least because we haven't got ON COMMIT triggers); it would have to be a C-code addition. The various blog entries you cite are non-peer-reviewed oversimplifications of that design.

Digging around, the oldest description I can find of this idea is

http://archives.postgresql.org/pgsql-hackers/2003-09/msg00387.php

although there are more recent threads rehashing the topic.

One point I don't recall anyone mentioning is that the stats subsystem now implements a fairly large subset of this work already, namely the initial data accumulation. So you could imagine plugging something into that to send the deltas to a table in addition to the stats collector.

regards, tom lane

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
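A user-level approximation of the delta scheme described above (a sketch with hypothetical names; without ON COMMIT triggers it logs one row per change rather than one per transaction, but an insert-only log still avoids both the contention on the single counter row and the dead-tuple churn of updating it):

CREATE TABLE assoc_count_delta (
    dataset_id integer NOT NULL,
    delta      bigint  NOT NULL
);

CREATE OR REPLACE FUNCTION log_assoc_delta() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO assoc_count_delta VALUES (NEW.dataset_id, 1);
    ELSIF TG_OP = 'DELETE' THEN
        INSERT INTO assoc_count_delta VALUES (OLD.dataset_id, -1);
    END IF;
    RETURN NULL;  -- AFTER trigger; the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER assoc_log_delta AFTER INSERT OR DELETE ON assoc
    FOR EACH ROW EXECUTE PROCEDURE log_assoc_delta();

-- Periodically fold the log into dataset.assoc_count. Run it all in
-- one transaction; the lock keeps concurrent deltas from slipping in
-- between the UPDATE and the DELETE:
BEGIN;
LOCK TABLE assoc_count_delta IN EXCLUSIVE MODE;
UPDATE dataset d
   SET assoc_count = assoc_count + s.total
  FROM (SELECT dataset_id, sum(delta) AS total
          FROM assoc_count_delta
         GROUP BY dataset_id) s
 WHERE d.dataset_id = s.dataset_id;
DELETE FROM assoc_count_delta;
COMMIT;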
Re: [SQL] Performance problem with row count trigger
On 04/02/2009 03:32 PM, Tom Lane wrote:
> One point I don't recall anyone mentioning is that the stats subsystem
> now implements a fairly large subset of this work already, namely the
> initial data accumulation. So you could imagine plugging something
> into that to send the deltas to a table in addition to the stats
> collector.

So, basically, other than reading the tuple count from the pg_class table, there isn't a good way to optimize COUNT(*)?

Thanks
Wei
Re: [SQL] Performance problem with row count trigger
> I was looking to speed up a count(*) query
A few things spring to mind:
1) Use a separate table, rather than storing things inside of
dataset itself. This will reduce the activity on the dataset table.
2) Do you really need bigint for the counts?
3) If you do want to do this, you'll need a different approach as
Tom mentioned. One way to do this is to have a special method for
bulk loading, that gets around the normal updates and requires that
the user take responsibility for knowing when and how to call the
alternate path. The basic scheme is this:
1. Disable the normal triggers
2. Enable special (perl) triggers that keep the count in memory
3. Do the bulk changes
4. Enable normal triggers, disable special perl one
5. Do other things as needed
6. Commit the changes to the assoc_count field.
Number 6 can be done anytime, as long as you are in the same session. The danger
is in leaving the session without calling the final function. This can be
solved with some deferred FK trickery, or by careful scripting of the events.
All this doesn't completely remove the pain, but it may shift it around enough
in useful ways for your app.
Here is some code to play with:
-- Stores changes into memory, no disk access:
CREATE OR REPLACE FUNCTION update_assoc_count_perl()
RETURNS TRIGGER
LANGUAGE plperlu
AS $_$
use strict;
my $event = $_TD->{event};
my ($oldid,$newid) = ($_TD->{old}{dataset_id},$_TD->{new}{dataset_id});
if ($event eq 'INSERT') {
$_SHARED{foobar}{$newid}++;
}
elsif ($event eq 'DELETE') {
  ## '--' autovivifies a missing entry straight to -1
  $_SHARED{foobar}{$oldid}--;
}
elsif ($oldid ne $newid) {
  ## an UPDATE that moved the row to a different dataset
  $_SHARED{foobar}{$oldid}--;
  $_SHARED{foobar}{$newid}++;
}
return;
$_$;
-- Quick little debug function to view counts:
CREATE OR REPLACE FUNCTION get_assoc_count(int)
RETURNS INTEGER
LANGUAGE plperlu
AS $_$
my $id = shift;
return $_SHARED{foobar}{$id} || 0;
$_$;
-- Create, then disable, the perl trigger
CREATE TRIGGER update_assoc_count_perl
AFTER INSERT OR UPDATE OR DELETE
ON assoc
FOR EACH ROW
EXECUTE PROCEDURE update_assoc_count_perl();
ALTER TABLE assoc DISABLE TRIGGER update_assoc_count_perl;
-- Switches the main triggers off, and the memory triggers on
-- Use deferred constraints to ensure that end_bulkload_assoc_count is called
CREATE OR REPLACE FUNCTION start_bulkload_assoc_count()
RETURNS TEXT
LANGUAGE plperlu
AS $_$
spi_exec_query("ALTER TABLE assoc DISABLE TRIGGER
update_assoc_count_insert"); ## x 3 as needed
spi_exec_query("ALTER TABLE assoc ENABLE TRIGGER update_assoc_count_perl");
-- Put foreign key magic here
return 'Ready to bulkload';
$_$;
-- Switches the triggers back, and allows a commit to proceed
CREATE OR REPLACE FUNCTION end_bulkload_assoc_count()
RETURNS TEXT
LANGUAGE plperlu
AS $_$
my $sth = spi_prepare(
'UPDATE dataset SET assoc_count = assoc_count + $1 WHERE dataset_id = $2',
'INTEGER', 'INTEGER');
for my $id (keys %{$_SHARED{foobar}}) {
my $val = $_SHARED{foobar}{$id};
spi_exec_prepared($sth,$val,$id);
}
spi_exec_query("ALTER TABLE assoc ENABLE TRIGGER update_assoc_count_insert");
## x3 etc.
spi_exec_query("ALTER TABLE assoc DISABLE TRIGGER update_assoc_count_perl");
-- Put FK magic here
return 'Bulk load complete';
$_$;
-- Usage:
SELECT start_bulkload_assoc_count();
-- Lots of inserts and updates
SELECT end_bulkload_assoc_count();
--
Greg Sabino Mullane [email protected]
End Point Corporation
PGP Key: 0x14964AC8 200904021644
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
[SQL] How would I get rid of trailing blank line?
Hi Everybody,

I am using postgres 8.3.4 on linux. I often use a line like:

psql -tf query.sql mydatabase > query.out

The -t option gets rid of the heading and the count report at the bottom. There is a blank line at the bottom, however. Is there any way to have psql not give me that blank line?

Thank you for your help.

Regards,

Tena Sakai
[email protected]
Re: [SQL] FUNCTION problem
- "Peter Willis" wrote: > Adrian Klaver wrote: > > On Wednesday 01 April 2009 4:31:20 pm Peter Willis wrote: > >> Hello, > >> > >> I am having a problem with a FUNCTION. > >> The function creates just fine with no errors. > >> > >> However, when I call the function postgres produces an error. > >> > >> Perhaps someone can enlighten me. > >> > >> > >> --I can reproduce the error by making a test function > >> --that is much easier to follow that the original: > >> > >> CREATE OR REPLACE FUNCTION test_function(integer) > >>RETURNS SETOF RECORD AS > >> $BODY$ > >>DECLARE croid integer; > >>BEGIN > >> > >>--PERFORM A SMALL CALCULATION > >>--DOESNT SEEM TO MATTER WHAT IT IS > >> > >>SELECT INTO croid 2; > >> > >>--A SELECT STATEMENT OUTPUTS RECORDS (one in this case) > >>SELECT croid,$1; > >>END; > >> > >> $BODY$ > >>LANGUAGE 'plpgsql' VOLATILE > >> > >> > >> > >> > >> --The call looks like the following: > >> > >> SELECT test_function(1); > >> > >> > >> > >> > >> > >> --The resulting error reads as follows: > >> > >> ERROR: query has no destination for result data > >> HINT: If you want to discard the results of a SELECT, use PERFORM > instead. > >> CONTEXT: PL/pgSQL function "test_function" line 5 at SQL > statement > >> > >> ** Error ** > >> > >> ERROR: query has no destination for result data > >> SQL state: 42601 > >> Hint: If you want to discard the results of a SELECT, use PERFORM > instead. > >> Context: PL/pgSQL function "test_function" line 5 at SQL statement > > > > You have declared function to RETURN SETOF. In order for that to > work you need > > to do RETURN NEXT. See below for difference between RETURN and > RETURN NEXT: > > > http://www.postgresql.org/docs/8.3/interactive/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS > > > > > > > Thank you for the pointer. > > I tried using FOR/RETURN NEXT as suggested but now get a > different error: > > > CREATE OR REPLACE FUNCTION test_function(integer) >RETURNS SETOF record AS > $BODY$ >DECLARE croid integer; >DECLARE R RECORD; >BEGIN > SELECT INTO croid 2; > > FOR R IN SELECT croid,$1 LOOP >RETURN NEXT R; > END LOOP; > RETURN; >END; > > $BODY$ >LANGUAGE 'plpgsql' VOLATILE > > > There is now an error : > > ERROR: set-valued function called in context that cannot accept a > set > CONTEXT: PL/pgSQL function "test_function" line 7 at RETURN NEXT > > ** Error ** > > ERROR: set-valued function called in context that cannot accept a set > SQL state: 0A000 > Context: PL/pgSQL function "test_function" line 7 at RETURN NEXT > > > > PostgreSQL doesn't seem to see 'R' as being a > SET OF RECORD > > > Peter Did you happen to catch this: Note that functions using RETURN NEXT or RETURN QUERY must be called as a table source in a FROM clause Try: select * from test_function(1) Adrian Klaver [email protected] -- Sent via pgsql-sql mailing list ([email protected]) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] How would I get rid of trailing blank line?
"Tena Sakai" writes:
> I often use a line like:
> psql -tf query.sql mydatabase > query.out
> -t option gets rid of the heading and count
> report at the bottom. There is a blank line
> at the bottom, however. Is there any way to
> have psql not give me that blank line?
Doesn't look like it --- the final fputc('\n', fout); seems to be
done unconditionally in all the output formats. I wonder if we should
change that? I'm afraid it might break programs that are used to it :-(
regards, tom lane
--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] FUNCTION problem
Adrian Klaver wrote:
> Did you happen to catch this:
> Note that functions using RETURN NEXT or RETURN QUERY must be called
> as a table source in a FROM clause
>
> Try:
> select * from test_function(1)

I did miss that, but using that method to query the function didn't work either. Postgres doesn't see the result as a tabular set of records.

Even if I replace the FOR loop with:

FOR R IN SELECT * FROM pg_database LOOP
    RETURN NEXT R;
END LOOP;

I get the same error(s). I don't think postgres likes the unrelated 'SELECT INTO [column] FROM [QUERY] LIMIT 1' lines before the FOR loop...

I think I need to go back and approach the function from a different direction. Thanks for all the pointers.

Peter

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] FUNCTION problem
On Thursday 02 April 2009 4:22:06 pm Peter Willis wrote:
> I did miss that, but using that method to query the function didn't
> work either. Postgres doesn't see the result as a tabular set of
> records.
>
> Even if I replace the FOR loop with:
>
> FOR R IN SELECT * FROM pg_database LOOP
>     RETURN NEXT R;
> END LOOP;
>
> I get the same error(s). I don't think postgres likes the unrelated
> 'SELECT INTO [column] FROM [QUERY] LIMIT 1' lines before the FOR
> loop...
>
> I think I need to go back and approach the function from a different
> direction.
>
> Thanks for all the pointers.
>
> Peter

Now I remember. It's something that trips me up: the RECORD in RETURN SETOF RECORD is not the same thing as the RECORD in a DECLARE. See below for a better explanation:

http://www.postgresql.org/docs/8.3/interactive/plpgsql-declarations.html#PLPGSQL-DECLARATION-RECORDS

Note that RECORD is not a true data type, only a placeholder. One should also realize that when a PL/pgSQL function is declared to return type record, this is not quite the same concept as a record variable, even though such a function might use a record variable to hold its result. In both cases the actual row structure is unknown when the function is written, but for a function returning record the actual structure is determined when the calling query is parsed, whereas a record variable can change its row structure on-the-fly.

--
Adrian Klaver
[email protected]

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
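For the record, the error Peter kept hitting has a direct fix (a sketch, not from the thread): a function declared RETURNS SETOF record must be called with a column definition list, so the parser knows the row shape at parse time:

SELECT * FROM test_function(1) AS t(croid integer, arg integer);

Declaring OUT parameters instead pins the row type down in the function itself, so callers can skip the definition list:

CREATE OR REPLACE FUNCTION test_function2(IN arg integer,
                                          OUT croid integer,
                                          OUT val integer)
    RETURNS SETOF record AS
$BODY$
    BEGIN
        croid := 2;
        val := arg;
        RETURN NEXT;
    END;
$BODY$
    LANGUAGE plpgsql;

SELECT * FROM test_function2(1);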
Re: [SQL] How would I get rid of trailing blank line?
Hi Tom,

I am a bit surprised to hear that that '\n' is there unconditionally. But I am sure there are more pressing things for you to work on. It's something I can live with.

Regards,

Tena Sakai
[email protected]
Re: [SQL] How would I get rid of trailing blank line?
On Thu, Apr 2, 2009 at 3:33 PM, Tena Sakai wrote:
> -t option gets rid of the heading and count
> report at the bottom. There is a blank line
> at the bottom, however. Is there any way to
> have psql not give me that blank line?

Tired of those blank lines in your text files? Grep them away:

psql -tf query.sql mydatabase | grep -v "^$" > query.out

--
When fascism comes to America, it will be the intolerant selling it as diversity.

--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [HACKERS] [SQL] How would I get rid of trailing blank line?
Hi Andrew,

> Right. There's a simple pipeline way to get rid of it:
> psql -t -f query.sql | sed -e '$d' > query.out

Hi Scott,

> Tired of those blank lines in your text files? Grep them away:
> psql -tf query.sql mydatabase | grep -v "^$" > query.out

Thank you both.

Regards,

Tena Sakai
[email protected]
