Joshua D. Drake wrote:
Johannes Konert wrote:
But that is not my point. The question is where I can change the
enforced chmod 700 that PostgreSQL always wants me to set.
You can't.
You can however change postgresql.conf to look for files
somewhere besides $PGDATA, and thus you would be able
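For example (a sketch; the paths are hypothetical, the settings are the configuration-file location options available in 8.x — note the data directory itself must still be mode 0700, or the server refuses to start):

```
# postgresql.conf -- hypothetical paths
data_directory    = '/var/lib/pgsql/data'        # must remain chmod 700
hba_file          = '/etc/postgresql/pg_hba.conf'
ident_file        = '/etc/postgresql/pg_ident.conf'
external_pid_file = '/var/run/postgresql.pid'
```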
Alexander Staubo wrote:
On 6/8/07, Billings, John [EMAIL PROTECTED] wrote:
If so which part of the database, and what kind of parallel
algorithms would be used?
GPUs are parallel vector processing pipelines, which as far as I can
tell do not lend themselves right away to the data
On Mon, 2007-06-11 at 09:03 +0200, Johannes Konert wrote:
Joshua D. Drake wrote:
Anything else in there you should be grabbing via pg_dump anyway.
So you suggest not backing up the filesystem files, but doing a dump
instead? Does this work together with PITR as described in 23.3?
MC Moisei wrote:
I'm not sure I understand the question. What else runs on it? I have an
Apache that fronts a Tomcat (Java Enterprise App Server). In Tomcat I only
run this application, which has a connection pool of 30 connections (if I
remember correctly). Once the application starts to
Rodrigo De León wrote:
On Jun 8, 7:59 pm, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:
I have 4 tables:
date_table (date_id,.)
A_table(A_table_id, something1,something2.)
A1_table(A1_table_id references A_Table(A_Table_id),A11,A12)
A2_table(A2_table_id references
Sergey Karin wrote:
Hi all.
I use PG 8.1.8 and PostGIS 1.1.1
vka1=# select version();
I develop an application for loading geodata into a database. In a separate
transaction the application inserts the data into a separate table that is
created dynamically when the transaction starts. All tables have equal
Hi!
I'm looking for recommendation for tracking DDL changes on
single database instance.
Currently I'm using pg_log to extract DDL changes, but those changes
are cluster wide.
Ideally I would like to enable an option in pg_log to tell me
in which database the changes were made.
Something
you can modify log_line_prefix to contain database name.
depesz
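A sketch of the relevant postgresql.conf line (%d is the database-name escape; %u and %p, user name and process ID, are shown as an assumption of what a useful prefix might include):

```
log_line_prefix = '%d %u %p '   # database name, user name, process ID
```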
On 6/11/07, Rikard Pavelic [EMAIL PROTECTED] wrote:
Hi!
I'm looking for recommendation for tracking DDL changes on
single database instance.
Currently I'm using pg_log to extract DDL changes, but those changes
are cluster wide.
We use PG 8.2.4 with cursors over libpq and get an error:
ERROR: current transaction is aborted, commands ignored until end of
transaction block
How can we avoid this error, or keep the cursor working afterwards?
Assume following contents of table CUSTOMERS:
ID | ZIP
Is this possible? I'm attempting to create a function like this and I'm
getting the following error:
ERROR: RETURN NEXT cannot have a parameter in function with OUT
parameters at or near myRecord.
--
__
Jeremy Nix
Senior Application Developer
Southwest
Hello
I think the problem is here:
AND Cast(CUSTOMERS.ZIP as integer) 2
You cannot cast 'A1234' to an integer
Regards
Pavel Stehule
2007/6/11, Pit M. [EMAIL PROTECTED]:
We use PG 8.2.4 with cursors over libpq and get an error:
ERROR: current transaction is aborted,
Hello
it's possible, but it's probably somewhat different from what you expect
CREATE OR REPLACE FUNCTION foo(OUT a integer, OUT b integer)
RETURNS SETOF RECORD AS $$
BEGIN
a := 10; b := 10;
RETURN NEXT;
a := 11; b := 20;
RETURN NEXT;
RETURN;
END;
$$ LANGUAGE plpgsql;
postgres=# select * from
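Calling such a function needs no column definition list, because the OUT parameters already define the result row type:

```sql
SELECT * FROM foo();
--  a  | b
-- ----+----
--  10 | 10
--  11 | 20
-- (2 rows)
```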
I see what you're doing, but I'm not quite sure how to adapt it to what
I'm doing. Here's a simplified snippet of my code. Can you elaborate on how
I can return a recordset and the output parameters?
CREATE OR REPLACE FUNCTION Search (OUT TotalRecords int, OUT TotalPages int)
RETURNS SETOF record
All,
I am new to PostgreSQL 8 (using the 8.2.4 Windows version). We have for several
years been using iAnywhere's SQL Anywhere product with our commercial
transportation software. With ASA there are 2 files that must be distributed
for the database, a filename.db and a filename.log. When we do a
On Mon, Jun 11, 2007 at 12:55:08PM +0200, hubert depesz lubaczewski wrote:
you can modify log_line_prefix to contain database name.
%d = database name
restart required
a trigger might be cleaner.
depesz
On 6/11/07, Rikard Pavelic [EMAIL PROTECTED] wrote:
Hi!
I'm looking for
Hi,
On Monday 11 June 2007 13:45, Pit M. wrote:
| Assume a user doing a query which shall select all customers having a
| ZIP in [1;2[ by using a CAST. If the cursor now fetches the row
| with ID 4 we get an error (invalid input syntax) as A1234 cannot be
| cast to an integer. This is ok,
On 6/11/07, Mike Gould [EMAIL PROTECTED] wrote:
How can we do this with PostgreSQL? Other than backup and restore or creating
SQL scripts I haven't been able to find another method. Some of these
tables may have over a million rows in them initially if we convert old
data.
The most portable
I didn't try myself, but wrapping the whole into a PL/pgSQL function and
using exceptions might do the work;
That's not good advice. I tested it, and the problem is in the WHERE clause. I
don't understand the problem fully, but one possibility is to change from CAST
to the to_number function, like:
postgres=#
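An alternative, not from the thread but a sketch using its table and column names: guard the cast with a regular-expression test inside a CASE expression, since CASE guarantees evaluation order while a plain AND does not:

```sql
SELECT * FROM customers
WHERE CASE WHEN zip ~ '^[0-9]+$'           -- only attempt the cast on digits
           THEN CAST(zip AS integer) < 2
           ELSE false
      END;
```

Rows like 'A1234' fail the regex and fall through to false, so the cast is never attempted on them.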
Pavel Stehule schrieb:
Hello
I think the problem is here:
AND Cast(CUSTOMERS.ZIP as integer) 2
You cannot cast 'A1234' to an integer
Regards
Pavel Stehule
Yes, I know that I can't, but why does the transaction fail?
Pit
Hello
it's not possible. PostgreSQL doesn't support multiple recordsets. You
have to have two functions.
Regards
Pavel
2007/6/11, Jeremy Nix [EMAIL PROTECTED]:
I see what you're doing, but I'm not quite sure how to adapt it to what
I'm doing. Here's a simplified snippet of my code. Can
Hello,
I forgot: you can do it via a recordset of cursors.
http://www.postgresql.org/docs/8.1/static/plpgsql-cursors.html
37.8.3.3. Returning Cursors
one cursor returns the TotalRecords and TotalPages columns and a second
cursor returns the searchResult.
Regards
Pavel
2007/6/11, Jeremy Nix [EMAIL
I didn't try myself, but wrapping the whole into a PL/pgSQL function and
using exceptions might do the work;
http://www.postgresql.org/docs/8.2/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
Ciao,
Thomas
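A minimal sketch of that approach (the function name safe_int is made up): trap the cast error inside a PL/pgSQL function and return NULL instead, so a bad row yields no match rather than aborting the whole transaction:

```sql
CREATE OR REPLACE FUNCTION safe_int(p text) RETURNS integer AS $$
BEGIN
  RETURN CAST(p AS integer);
EXCEPTION
  WHEN invalid_text_representation THEN
    RETURN NULL;  -- 'A1234' lands here instead of aborting the transaction
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM customers WHERE safe_int(zip) < 2;  -- non-numeric ZIPs drop out
```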
Unfortunately this is not possible, because this should happen on the
Pavel Stehule escribió:
Hello
it's not possible. PostgreSQL doesn't support multiple recordsets. You
have to have two functions.
The other idea is to declare the function to return SETOF refcursor (or
have an OUT refcursor param), and return two refcursors open with the
different recordsets.
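A sketch of that idea for the Search function from earlier in the thread (the inner queries and the customers table are placeholders):

```sql
CREATE OR REPLACE FUNCTION search(OUT totals refcursor, OUT results refcursor)
AS $$
BEGIN
  OPEN totals  FOR SELECT count(*) AS "TotalRecords" FROM customers;
  OPEN results FOR SELECT * FROM customers;   -- the actual search query
END;
$$ LANGUAGE plpgsql;

-- Must be called inside a transaction so the cursors stay open:
--   BEGIN;
--   SELECT * FROM search();        -- returns the two generated cursor names
--   FETCH ALL IN "<portal name>";  -- fetch from each returned cursor
--   COMMIT;
```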
Yes, I know that I can't, but why does the transaction fail?
Because the cast fails. The first FETCH was OK, and the evaluation of the next
row (which needs a second FETCH) was broken. When any statement in a
transaction fails, you have to roll back the current transaction.
Pavel
On Mon, Jun 11, 2007 at 03:20:15PM +0200, Pavel Stehule wrote:
it's not possible. PostgreSQL doesn't support multiple recordsets. You
have to have two functions.
If you don't mind handling cursors then you could return multiple
cursors from one function. See the PL/pgSQL documentation for an
Unfortunately this is not possible, because this should happen on the
client. The client calls FETCH for every record available in that cursor
when the user (application) wants to display the data (scrollable
list of records)
So do you think I should wrap each FETCH statement?
We handle
Thank you Pavel!
The problem is that the CAST function may be used in any SQL query,
since our software allows free-form SQL queries. The query itself
doesn't fail, because the syntax is correct.
The problem is that there may be other functions in a query that can
lead to a runtime error
On Mon, Jun 11, 2007 at 03:28:22PM +0200, Pit M. wrote:
We handle exception errors through libpq, and if a FETCH leads to such
a runtime error, we try to FETCH the first record again.
The problem is that we can't use this cursor any more - it seems to be
corrupt after that error.
An
Mike Gould wrote:
All,
I am new to PostgreSQL 8 (using the 8.2.4 Windows version). We have for
several years been using iAnywhere's SQL Anywhere product with our
commercial transportation software. With ASA there are 2 files that
must be distributed for the database, a filename.db and a
Rikard Pavelic [EMAIL PROTECTED] writes:
I'm looking for recommendation for tracking DDL changes on
single database instance.
Perhaps ALTER DATABASE mydb SET log_statement = ddl would do what
you want.
regards, tom lane
Jeremy Nix [EMAIL PROTECTED] writes:
I see what you're doing, but I'm not quite sure how to adapt it to what
I'm doing. Here's a simplified snippet of my code. Can you elaborate on how
I can return a recordset and the output parameters?
I suppose what you need is something like
CREATE OR
Hi!
You may try https://projects.commandprompt.com/public/odbcng/. This
PostgreSQL ODBC driver's connection string can contain parameters you need:
SSL_CERTIFICATE=[string] - path to SSL certificate file
SSL_PRIVATE_KEY=[string] - your SSL private key
Tom Lane wrote:
Rikard Pavelic [EMAIL PROTECTED] writes:
I'm looking for recommendation for tracking DDL changes on
single database instance.
Perhaps ALTER DATABASE mydb SET log_statement = ddl would do what
you want.
regards, tom lane
;(
I'm having
Hi,
What is the meaning of 'materializing' a relation after a sequential scan?
explain select * from tb1, tb2 where tb1.c1 = tb2.c2;
QUERY PLAN
Nested Loop (cost=1.03..2.34 rows=3 width=24)
Join
From /src/include/utils/tuplestore.h
The materialize shields the sort from the need to do mark/restore and
thereby allows it to perform its final merge pass on-the-fly; while the
materialize itself is normally cheap since it won't spill to disk unless the
number of tuples with equal key values
2007/6/10, Alexander Staubo [EMAIL PROTECTED]:
On 6/10/07, Tom Allison [EMAIL PROTECTED] wrote:
The table itself is small (two columns: bigint, int) but I'm wondering when
I'll
start to hit a knee in performance and how I can monitor that.
You don't say anything about what the data is in the
Filip Rembiałkowski wrote:
For the monitoring, however, you can log your queries along with
timings and timestamps, and copy them into a tool like R to
statistically analyze your performance over time. You will be able to
predict the point at which your system will be too slow to use, if
Pit M. [EMAIL PROTECTED] writes:
The goal is to maintain a valid cursor that can be used to FETCH other
records.
It would be ideal to skip this record and continue fetching the next record -
but how can I achieve this?
Well ideally the answer would be to wrap each FETCH in a savepoint and
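Spelled out, the savepoint idea looks roughly like this (a sketch only; whether the cursor that raised the error itself remains usable after the rollback is exactly the open question in this thread):

```sql
BEGIN;
DECLARE c CURSOR FOR
  SELECT * FROM customers WHERE CAST(zip AS integer) < 2;
SAVEPOINT before_fetch;
FETCH NEXT FROM c;                   -- may fail on a row like 'A1234'
-- on error:
ROLLBACK TO SAVEPOINT before_fetch;  -- the transaction is usable again
COMMIT;
```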
Alexander Staubo wrote:
For the monitoring, however, you can log your queries along with
timings and timestamps, and copy them into a tool like R to
statistically analyze your performance over time. You will be able to
predict the point at which your system will be too slow to use, if
QUERY PLAN
Nested Loop (cost=1.03..2.34 rows=3 width=24)
Join Filter: (tb1.c1 = tb2.c1)
- Seq Scan on tb2 (cost=0.00..1.04 rows=4 width=12)
- Materialize (cost=1.03..1.06 rows=3
Gregory Stark [EMAIL PROTECTED] writes:
Pit M. [EMAIL PROTECTED] writes:
The goal is to maintain a valid cursor that can be used to FETCH other
records.
It would be ideal to skip this record and continue fetching the next record -
but how can I achieve this?
Well ideally the answer would be
On Mon, 11 Jun 2007, Steve Crawford wrote:
In my experience the more common situation is to go off a cliff.
Yeah, I think the idea that you'll notice performance degrading and be
able to extrapolate future trends using statistical techniques is a
bit...optimistic.
Anyway, back to the
We have a problem with the postgres backend process terminating and
dropping core. We believe it may be related to another problem we have
where the postgres backend process cannot be properly terminated.
Question is, when I see:
#17 0x08151bc5 in ClosePostmasterPorts ()
in the stack trace
Thank you Jon -- that's the exact sort of trick I was hoping for.
Cheers!
On Jun 7, 2007, at 6:36 PM, Jon Sime wrote:
Jonathan Vanasco wrote:
Does anyone have a trick to list all columns in a db ?
No trickery, just exploit the availability of the SQL standard
information_schema views:
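For example (a sketch of the information_schema approach; the schema filter is adjustable):

```sql
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name, ordinal_position;
```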
On Jun 11, 2:01 pm, Andrus [EMAIL PROTECTED] wrote:
(...)
This index makes no sense at all:
CREATE INDEX dok_dokumnr_idx ON firma1.dok USING btree (dokumnr);
I listed the table structure and constraints partially.
There is also a primary key constraint on the dok table:
CONSTRAINT dok_pkey
On Jun 11, 2:23 pm, Andrus [EMAIL PROTECTED] wrote:
I tried
CREATE TEMP TABLE mydel AS
SELECT r.dokumnr
FROM rid r
LEFT JOIN dok d USING (dokumnr)
WHERE d.dokumnr IS NULL;
DELETE FROM rid USING mydel WHERE rid.dokumnr = mydel.dokumnr;
drop table mydel;
and this runs in 1 second instead
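The same delete can also be written as one statement without the temp table (a sketch using the thread's table names; whether it is as fast depends on the plan chosen):

```sql
DELETE FROM rid
WHERE NOT EXISTS (SELECT 1 FROM dok d WHERE d.dokumnr = rid.dokumnr);
```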
Geoffrey [EMAIL PROTECTED] writes:
Question is, when I see:
#17 0x08151bc5 in ClosePostmasterPorts ()
in the stack trace of the process, is this an indicator that the backend
process was trying to shutdown?
No; that's a function that's executed immediately after fork() to close
postmaster
Tom Lane wrote:
Geoffrey [EMAIL PROTECTED] writes:
Question is, when I see:
#17 0x08151bc5 in ClosePostmasterPorts ()
in the stack trace of the process, is this an indicator that the backend
process was trying to shutdown?
No; that's a function that's executed immediately after fork() to
What about creating, now, empty schemas through 2038?
Your application will move automatically to the new empty schema in the new
year without any changes to the db structure.
On 4/26/07, Owen Hartnett [EMAIL PROTECTED] wrote:
At 9:23 AM +0100 4/26/07, Richard Huxton wrote:
Jonathan Vanasco wrote:
Greg Smith wrote:
On Mon, 11 Jun 2007, Steve Crawford wrote:
In my experience the more common situation is to go off a cliff.
Yeah, I think the idea that you'll notice performance degrading and be
able to extrapolate future trends using statistical techniques is a
bit...optimistic.
On Mon, 11 Jun 2007, Tom Allison wrote:
All of this was run on a Pentium II 450 MHz with 412MB RAM and a software
linear 0 pair of UDMA 66 7200RPM 8MB cache drives (really old) on separate
IDE channels with the ReiserFS disk format.
Sometimes it's not clear if someone can speed up what they're