Dear All,
Greetings; I am a newcomer to the PostgreSQL world and have only just started
digging into it. I would appreciate the help of my fellow mailing list members.
Thanks,
Rizal
Technical Support
PT. Garansindo Inter Global
Jl. Taman Margasatwa No. 25, Ragunan - Ps. Minggu
Jakarta Selatan 1
Greg Stark <[EMAIL PROTECTED]> writes:
> So vacuum doesn't really know what tuples are actually visible to the
> snapshots actually taken by a transaction? It's making the conservative
> estimate that a snapshot could have been taken as early as the start of the
> transaction even if no snapshot wa
Hi,
I am new to Postgres.
Can anyone tell me the best way of EXCEPTION handling in PL/pgSQL?
1. How do I get raised exception messages in the "EXCEPTION" block (I mean
exception messages thrown by us OR by a Postgres SQL exception)?
2. Is an EXCEPTION datatype available in Postgres as i
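Even though the question is cut off, a minimal sketch of PL/pgSQL exception trapping may help (table, function and message texts are made up for illustration; the EXCEPTION block exists from 8.0 on, and later releases add SQLSTATE/SQLERRM variables inside the handler for the error details):
CREATE TABLE accounts (id integer PRIMARY KEY, balance numeric);

CREATE OR REPLACE FUNCTION insert_account(p_id integer, p_balance numeric)
RETURNS text AS $$
BEGIN
    INSERT INTO accounts VALUES (p_id, p_balance);
    RETURN 'inserted';
EXCEPTION
    WHEN unique_violation THEN
        -- an error raised by PostgreSQL itself (duplicate primary key)
        RAISE NOTICE 'account % already exists', p_id;
        RETURN 'duplicate';
    WHEN OTHERS THEN
        -- catch-all, including errors we raise ourselves with RAISE EXCEPTION
        RAISE NOTICE 'insert_account(%) failed', p_id;
        RETURN 'error';
END;
$$ LANGUAGE plpgsql;
Calling insert_account(1, 100) twice would return 'inserted' and then 'duplicate'; the second error is caught in the handler instead of aborting the caller.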
Tom Lane <[EMAIL PROTECTED]> writes:
> We have to start the transaction no later than event #2 since there has
> to be something to hold the lock. But it'd be easy enough to decouple
> this from BEGIN, and it'd be good enough to solve the "COMMIT;BEGIN"
> problem.
Oh I think I finally figured ou
Greg Stark <[EMAIL PROTECTED]> writes:
> This is presumably because of the long-standing issue that Postgres takes the
> snapshot as soon as the BEGIN is issued.
No, we don't set the snapshot until the first DML query is issued. This
is critical for serializable transactions: you have to be able
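A sketch of the practical consequence, with a hypothetical table t and two concurrent sessions (this is only my reading of the explanation above, not a verified transcript):
-- session A:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- no query has run yet, so no snapshot has been taken

-- session B (autocommit), meanwhile:
INSERT INTO t VALUES (42);

-- session A again: this first query is where the snapshot is taken,
-- so the row session B committed after A's BEGIN is visible here
SELECT * FROM t;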
Andreas Joseph Krogh <[EMAIL PROTECTED]> writes:
> On Thursday 14 April 2005 21:08, Andrew Sullivan wrote:
>> On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
>>> It is a Postgres limitation as well. We _could_ make the server "really
>>> start the transaction" at the point the firs
On Thursday 14 April 2005 22:44, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 09:57:03PM +0200, Andreas Joseph Krogh wrote:
> > Is it safe to issue a "kill -2 <pid>", will the
> > connection-pool reconnect?
>
> The kill -2 will definitely work to shut the connection down
> cleanly. Actually, I sho
On Thu, Apr 14, 2005 at 09:57:03PM +0200, Andreas Joseph Krogh wrote:
> Is it safe to issue a "kill -2 <pid>", will the
> connection-pool reconnect?
The kill -2 will definitely work to shut the connection down
cleanly. Actually, I should put that more precisely: you should send
SIGINT. I don't know
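For what it's worth, a sketch of how one might find the backend to signal before doing the kill (column names are the 8.0-era ones, procpid/current_query, and current_query is only populated when stats_command_string is enabled; both differ in later releases):
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
WHERE current_query LIKE '<IDLE> in transaction%';

-- then, from the shell, send SIGINT to that backend as discussed above:
--   kill -INT <procpid>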
On Thursday 14 April 2005 21:08, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
> > It is a Postgres limitation as well. We _could_ make the server "really
> > start the transaction" at the point the first query is issued instead of
> > when the BEGIN is i
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
> >
> > So, what you're suggesting is that a restart of the webapp should make
> > vacuum able to delete those dead rows?
>
> Yes, but that'll only solve your problem for now
On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
>
> It is a Postgres limitation as well. We _could_ make the server "really
> start the transaction" at the point the first query is issued instead of
> when the BEGIN is issued. In fact this problem would go away if we did
> that.
On Thu, Apr 14, 2005 at 02:56:23PM -0400, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 07:54:19PM +0200, Andreas Joseph Krogh wrote:
> > Any hints on how I can find out what's keeping the connection idle in a
> > transaction? I realize now that I should probably ask that question on the
> > p
On Thu, Apr 14, 2005 at 07:54:19PM +0200, Andreas Joseph Krogh wrote:
> Any hints on how I can find out what's keeping the connection idle in a
> transaction? I realize now that I should probably ask that question on the
> pgsql-jdbc-list:-)
Nope. That's a problem with your pool software. It's
Andreas Joseph Krogh wrote:
On Thursday 14 April 2005 19:12, Richard Huxton wrote:
So - do you have an application/client that has had a single transaction
open for a long time? A very long time, unless you update this table a lot.
Hm, it's a web-app, and I don't hold a transaction open for a long
On Thursday 14 April 2005 19:33, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
> > So, what you're suggesting is that a restart of the webapp should make
> > vacuum able to delete those dead rows?
>
> Yes, but that'll only solve your problem for now.
Thank you, Joe.
Your solution provided below works great. Much better/simpler than my
original approach. You rock!
-Michael
From: Joe Conway [mailto:[EMAIL PROTECTED]
Sent: Mon 4/11/2005 3:26 PM
To: Moran.Michael
Cc: PostgreSQL
Subject: Re: [SQL] PGCrypto: Realworld scenario and a
On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
>
> So, what you're suggesting is that a restart of the webapp should make vacuum
> able to delete those dead rows?
Yes, but that'll only solve your problem for now. You'll have the
problem again soon. What's keeping open th
On Thursday 14 April 2005 19:12, Richard Huxton wrote:
> Andreas Joseph Krogh wrote:
> > Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN
> > ANALYZE shows that PG thinks it has as much as 160057 rows AFAICT from
> > the EXPLAIN-output.
> >
> >
> > I've just run a "VACUUM FULL
Andreas Joseph Krogh wrote:
Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE
shows that PG thinks it has as much as 160057 rows AFAICT from the
EXPLAIN-output.
I've just run a "VACUUM FULL VERBOSE" on the whole database, which gave the
following output on the rele
nbeweb=> EXPLAIN ANALYZE select count(*) from onp_web_index;
Total runtime: 179748.993 ms
WOW.
It's possible your table is bloated.
What version of PG are you using?
VACUUM FULL ANALYZE VERBOSE onp_web_index
if it solves your problem, good, else post the resul
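A sketch of how to check what the planner thinks the table's size is before and after the vacuum (relpages/reltuples are estimates that VACUUM and ANALYZE keep up to date):
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'onp_web_index';

VACUUM FULL ANALYZE VERBOSE onp_web_index;

-- re-run the pg_class query: relpages should drop back to a handful of
-- pages for a 298-row table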
On Thu, Apr 14, 2005 at 06:38:14PM +0200, Andreas Joseph Krogh wrote:
> Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE
> shows that PG thinks it has as much as 160057 rows AFAICT from the
> EXPLAIN-output.
It does have 160057 rows:
> INFO: vacuuming "public.onp_w
Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE
shows that PG thinks it has as much as 160057 rows AFAICT from the
EXPLAIN-output.
I'm using PG-8.0.0
Here is the schema:
nbeweb=> \d onp_web_index;
Table "public.onp_web_index"
C
Yes, you got my problem correctly. I want to change the table name and values at
run time.
Is there any other way instead of using "||" (append), like a prepared
statement?
PREPARE fooplan (int, text, bool, numeric) AS
INSERT INTO foo VALUES($1, $2, $3, $4);
EXECUTE fooplan(1, 'Hunter Valley', 't', 200
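For reference, the full SQL-level round trip that the (truncated) documentation example above comes from looks roughly like this; the foo table definition is assumed here for illustration:
CREATE TABLE foo (a int, b text, c bool, d numeric);

PREPARE fooplan (int, text, bool, numeric) AS
    INSERT INTO foo VALUES ($1, $2, $3, $4);

EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
EXECUTE fooplan(2, 'Barossa Valley', 'f', 42.50);

-- prepared statements last until the session ends, or until:
DEALLOCATE fooplan;
Note that SQL-level PREPARE only parameterizes values, not identifiers, so it does not help when the table name itself changes at run time.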
At 10:28 AM 4/14/05, Dinesh Pandey wrote:
How can I
1. Write a prepared query and
2. Set values at run time and
3. Execute that query for different values.
Depends on what language you are using - try the interfaces mailing list.
I am using PL/pgSQL.
Thanks
Dinesh Pandey
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Frank Bax
Sent: Thursday, April 14, 2005 8:24 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Prepared query ?
At 10:28 AM 4/14/05, Dinesh Pandey wrote:
How
Yes, here the TABLE NAME is also coming at runtime.
In Oracle I have done it this way:
---
LOOP
v_sql := 'INSERT INTO ' || mytable || ' VALUES(:in_sentryid_id,
:in_node_id)';
EXECUTE IMMEDIATE v_sql USING v_sentryid, v_nodeid;
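For comparison, a rough PL/pgSQL counterpart (a sketch: the PL/pgSQL of that era has no EXECUTE ... USING clause, so the string is built with quote_ident()/quote_literal(); the function and variable names are made up, and the values are assumed to be text):
CREATE OR REPLACE FUNCTION insert_pair(mytable text, v_sentryid text, v_nodeid text)
RETURNS void AS $$
DECLARE
    v_sql text;
BEGIN
    -- quote_ident() protects the runtime table name, quote_literal() the values
    v_sql := 'INSERT INTO ' || quote_ident(mytable)
          || ' VALUES (' || quote_literal(v_sentryid)
          || ', ' || quote_literal(v_nodeid) || ')';
    EXECUTE v_sql;
END;
$$ LANGUAGE plpgsql;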
Dinesh Pandey wrote:
How can I
1. Write a prepared query and
2. Set values at run time and
3. Execute that query for different values.
LOOP
…..
…..
sql := INSERT INTO MYTABLE VALUES(?, ?);
--EXECUTE (sql);
You don't say what language you want to use, but assuming it's plpgsql,
"Dinesh Pandey" <[EMAIL PROTECTED]> writes:
> FOR _record IN SELECT A1, A2 FROM A
> LOOP
> _sql := 'INSERT INTO B VALUES (:A1, :A2)';
> EXECUTE (_sql);
Why do you need a dynamic query here at all? You could just do
FOR _record IN SELECT A1, A2 FROM A
LOOP
INSERT INTO B VALUES (_record.A1
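Written out as a complete function (a sketch; the function name is made up, and tables A(A1, A2) and B are assumed to have compatible columns), Tom's non-dynamic loop would be:
CREATE OR REPLACE FUNCTION copy_a_to_b() RETURNS void AS $$
DECLARE
    _record RECORD;
BEGIN
    -- PL/pgSQL substitutes the record fields directly into the INSERT,
    -- so no string building, quoting or EXECUTE is needed
    FOR _record IN SELECT A1, A2 FROM A LOOP
        INSERT INTO B VALUES (_record.A1, _record.A2);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
If there is no per-row logic at all, a single INSERT INTO B SELECT A1, A2 FROM A does the same job without a loop.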
How can I
1. Write a prepared query and
2. Set values at run time and
3. Execute that query for different values.
LOOP
…..
…..
sql := INSERT INTO MYTABLE VALUES(?, ?);
……
--EXECUTE (sql);
END LOOP;
Regards
Dinesh Pandey
On Apr 14, 2005, at 7:37 AM, Dinesh Pandey wrote:
How can we set A1, A2 values in dynamic 'INSERT' query?
DECLARE
_record RECORD;
_sql VARCHAR(2000);
FOR _record IN SELECT A1, A2 FROM A
LOOP
_sql := 'INSERT INTO B VALUES (:A1, :A2)';
EXECUTE (_sql);
END LOOP;
=
FOR _record IN SELECT A1, A2 FROM A
LOOP
--[Dinesh Pandey] -- Sorry I didn't get this part.
_sql := "INSERT INTO B VALUES ( '*" || _record.A1 || "'** , '" || _record.A2
|| "' ***)";
EXECUTE (_sql);
END LOOP;
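If the values can ever contain single quotes, that concatenation breaks; a hedged variant of the same loop using quote_literal() (tables A and B are the ones from the question above, and the columns are assumed to be text, otherwise cast them first):
FOR _record IN SELECT A1, A2 FROM A LOOP
    -- quote_literal() adds the surrounding quotes and doubles any embedded
    -- ones, so a value such as O'Brien does not break the generated INSERT
    _sql := 'INSERT INTO B VALUES (' || quote_literal(_record.A1) || ', '
         || quote_literal(_record.A2) || ')';
    EXECUTE _sql;
END LOOP;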
Dinesh Pandey wrote:
> How can we set A1, A2 values in dynamic 'INSERT' query?
>
>
On Thursday 14 April 2005 13:28, Richard Huxton wrote:
> Andreas Joseph Krogh wrote:
> > nbeweb=> EXPLAIN select count(*) from onp_web_index;
> > QUERY PLAN
> > -
> > Hi all.
> > I have a pr
How can we set A1, A2 values in dynamic 'INSERT' query?
DECLARE
_record RECORD;
_sql VARCHAR(2000);
FOR _record IN SELECT A1, A2 FROM A
LOOP
_sql := 'INSERT INTO B VALUES (:A1, :A2)';
EXECUTE (_sql);
END LOOP;
Andreas Joseph Krogh wrote:
nbeweb=> EXPLAIN select count(*) from onp_web_index;
QUERY PLAN
-
Hi all.
I have a problem with a table which only has 298 rows in it. A select count(*)
takes fo
nbeweb=> EXPLAIN select count(*) from onp_web_index;
QUERY PLAN
-
Hi all.
I have a problem with a table which only has 298 rows in it. A select count(*)
takes forever and CTRL-C also takes