> It seems that the root cause was that pgpool acquired the locks in the
> wrong order. If the resource is called A, it seems that pgpool allows child
> X to acquire A on node1 while, at the same time, child Y acquires A on node2.
> This leaves X wanting A on node2 and Y wanting A on node1. This leads to a
> deadlock.
On 1/15/2013 9:02 PM, Robert James wrote:
I would do this as actual databases, but Postgres doesn't allow JOINs
and FKs between different databases. Can I use schemas for the above?
How? How do I back up and restore schemas independently?
pg_dump --schema= ...
and drop schema, create schema prior to restore.
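Roughly, with made-up names (database "mydb", schema "accounting"):

    pg_dump -n accounting mydb > accounting.sql
    psql mydb -c "DROP SCHEMA accounting CASCADE;"
    psql mydb -f accounting.sql

(a plain -n dump already contains the CREATE SCHEMA, so dropping it first is
enough). The per-group permissions can hang off the schema too: GRANT USAGE ON
SCHEMA accounting TO some_user, plus the usual grants on the tables inside it.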
I'd like to organize tables into different groups. This is for 3 reasons:
1. Each group needs to be backed up and restored independently from the others
2. Each group has different permissions for particular users
3. As an aid to human understanding and organization
I would do this as actual databases, but Postgres doesn't allow JOINs and FKs
between different databases.
On Wed, Jan 16, 2013 at 11:45 AM, Rob Sargent wrote:
> What environment are you in? In JDBC you can address the resultset by
> column name.
C++, so it's quite inefficient to always use names.
Chris Angelico
On 01/15/2013 04:41 PM, Ken Tanzer wrote:
8.4 supports FDW so I will assume you do not have the permissions to
create one.
Thanks but I'm confused--the doc page you mentioned says the mysql FDW
isn't supported until 9.1.
My apologies. FDW appeared in 8.4, the extension mechanism which
On 01/15/2013 05:22 PM, Meta Seller Dev/Admin wrote:
Hi! (I'm Chris Angelico posting from a work account - usually I'm here
under the name ros...@gmail.com.)
I've run into a bit of an awkwardness in PostgreSQL setup and am
hoping for some expert advice.
Several of the tables I work with have two groups of fields: standard
fields and "free fields".
>
> 8.4 supports FDW so I will assume you do not have the permissions to
> create one.
>
Thanks but I'm confused--the doc page you mentioned says the mysql FDW
isn't supported until 9.1.
>
>
>
>> In this case it looks like 24 tables, with CSV-like import files
>> totaling 7G.
>>
>> Since there didn't seem to be a clean, simple and automated path from mysql
>> to postgres, I'm back to skipping
On 01/15/2013 04:09 PM, Ken Tanzer wrote:
Thanks for all the responses. I would totally go with the FDW, were I
not stuck using (someone else's) 8.4.
8.4 supports FDW so I will assume you do not have the permissions to
create one.
In this case it looks like 24 tables, with CSV-like import files totaling 7G.
On Tue, Jan 15, 2013 at 2:02 PM, Bruce Momjian wrote:
> On Sun, Jan 13, 2013 at 08:46:58PM -0700, Scott Marlowe wrote:
>> The reasons to NOT use ubuntu under PostgreSQL are primarily that 1:
>> they often choose a pretty meh grade kernel with performance
>> regressions for their initial LTS release.
Hi! (I'm Chris Angelico posting from a work account - usually I'm here
under the name ros...@gmail.com.)
I've run into a bit of an awkwardness in PostgreSQL setup and am
hoping for some expert advice.
Several of the tables I work with have two groups of fields: standard
fields and "free fields".
Thanks for all the responses. I would totally go with the FDW, were I not
stuck using (someone else's) 8.4.
In this case it looks like 24 tables, with CSV-like import files totaling
7G.
Since there didn't seem to be a clean, simple and automated path from mysql
to postgres, I'm back to skipping
On 15/01/2013 21:36, Adrian Klaver wrote:
On 01/15/2013 03:30 PM, Edson Richter wrote:
I was wondering, it would be a nice addition to have the ability to read the
chain of triggers (maybe another trigger variable like TG_OP, called
"TG_CHAIN" or something else, that would be an array with the names of the
triggers called before the current trigger).
On 01/15/2013 03:30 PM, Edson Richter wrote:
I was wondering, it would be a nice addition to have the ability to read the
chain of triggers (maybe another trigger variable like TG_OP, called
"TG_CHAIN" or something else, that would be an array with the names of the
triggers called before the current trigger).
Would help debug database triggers that have
I was wondering, it would be a nice addition to have the ability to read the
chain of triggers (maybe another trigger variable like TG_OP, called
"TG_CHAIN" or something else, that would be an array with the names of the
triggers called before the current trigger).
Would help debug database triggers that have
Thanks for the quick and detailed response, Tom.
Yes, I did add a redundant where clause with a restriction on b.date_id on the
range queries. This appears to speed things up since it does an index scan on
the b table before the merge join.
We will get more intelligent on query generation (ou
Venky Kandaswamy writes:
>On 9.1, I am running into a curious issue.
It's not very curious at all, or at least people on pgsql-performance
(the right list for this sort of question) would have figured it out
quickly. You're getting a crummy plan because of a crummy row estimate.
When you do
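A quick way to see the bad estimate, and the usual first knob to try, using the
pseudo names from the original post:

    EXPLAIN ANALYZE SELECT a, b, c FROM tab WHERE a = value1;
    -- compare the planner's estimated row count against the actual one
    ALTER TABLE tab ALTER COLUMN a SET STATISTICS 1000;
    ANALYZE tab;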
Bruce Momjian writes:
> On Sun, Jan 13, 2013 at 04:51:55PM -0500, Tom Lane wrote:
>> pg_dump does not dump/restore database properties, only database
>> contents. Properties are the responsibility of pg_dumpall, which
>> you bypassed (for databases anyway).
> Isn't this a bug? Seems there is no
On Tue, Jan 15, 2013 at 4:55 AM, Brian Sutherland wrote:
> I'm guessing that it's some kind of race condition, but I wouldn't know
> where to start looking.
Look for a recursive import (A imports B, B imports A) or multiple
threads trying to import simultaneously - Python sometimes has issues
wit
On Sun, Jan 13, 2013 at 04:51:55PM -0500, Tom Lane wrote:
> Scott Ribe writes:
> > Built & installed 9.2.3. Dumped 9.1 db (using 9.2 pg_dump IIRC). Restored.
> > Database search path was not restored. Had to execute alter database ...
> > set search_path to...
>
> That's a hole in the particular
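For anyone hitting the same thing: per the above, database-level properties
come from pg_dumpall, not pg_dump, so after a per-database restore they have to
be reapplied by hand, along the lines of (database and schema names here are
hypothetical):

    ALTER DATABASE appdb SET search_path TO app, public;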
On Sun, Jan 13, 2013 at 08:46:58PM -0700, Scott Marlowe wrote:
> The reasons to NOT use ubuntu under PostgreSQL are primarily that 1:
> they often choose a pretty meh grade kernel with performance
> regressions for their initial LTS release. I.e. they'll choose a
> 3.4.0 kernel over a very stable
On Tuesday, 15 January 2013 at 07:52 -0700, Scott Marlowe wrote:
> On Tue, Jan 15, 2013 at 4:54 AM, Vincent Veyron wrote:
> > On Monday, 14 January 2013 at 16:35 -0600, Shaun Thomas wrote:
> >
> >> My personal server is on Debian too, with a similar uptime. But we
> >> recently ran into this guy on
All,
On 9.1, I am running into a curious issue. I will explain the issue in
high-level terms using pseudo SQL statements. Consider a SQL statement:
SELECT a, b, c FROM tab WHERE a = value1;
- This does an index scan followed by a merge join and takes about 37 secs to
execute
If I change the
Hi Chris
> From: Chris Travers
>To: Postgres General
>Sent: Tuesday, 15 January 2013, 7:59
>Subject: [GENERAL] Recommendations on plpgsql debugger?
>
>
>Hi all;
>
>
>I have a client who needs a way to step through a PL/PGSQL function and
>ideally see what one is doing at present. I noticed
On Wed, Jan 16, 2013 at 1:22 AM, ning chan wrote:
> Hi Albe,
> Thanks for your kind response.
> The wording is confusing; it gives me the impression that recovery is
> going on.
Yes. Walreceiver receives the WAL records from the master server, and writes
them to the pg_xlog directory. Then the startup process replays them.
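If it helps, on 9.1 or later you can watch what the standby is doing rather
than guessing from the process titles:

    -- on the standby
    SELECT pg_is_in_recovery();
    SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();

    -- on the primary
    SELECT * FROM pg_stat_replication;

A hot standby always reports pg_is_in_recovery() = true; that is the normal
streaming state, not a recovery from a crash.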
One thing I've done in the past is to create a temporary table and insert
"raise notice" debug statements to it, incrementing a sequence as it went
along. Then just select the message with order by the seq. Useful in the
recursive calls I was testing at the time. But it did require that I sti
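Something like this, with made-up names:

    CREATE TEMP TABLE debug_log (seq bigserial, msg text);

    CREATE OR REPLACE FUNCTION dbg(p_msg text) RETURNS void AS $$
    BEGIN
        INSERT INTO debug_log (msg) VALUES (p_msg);
    END;
    $$ LANGUAGE plpgsql;

    -- sprinkle  PERFORM dbg('reached step X, id=' || some_id);  through the
    -- function under test, then read the trail back in order:
    SELECT seq, msg FROM debug_log ORDER BY seq;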
On Tue, Jan 15, 2013 at 7:36 AM, Tom Lane wrote:
> "T. E. Lawrence"
> writes:
>> On 15.01.2013, at 05:45, Jeff Janes wrote:
>>>> Is the autovacuum 100% reliable in relation to VACUUM ANALYZE?
>
>>> No. For example, if you constantly do things that need an access exclusive
>>> lock, then auto
Hi Albe,
Thanks for your kind response.
The wording is confusing; it gives me the impression that recovery is
going on.
On Tue, Jan 15, 2013 at 2:09 AM, Albe Laurenz wrote:
> ning chan wrote:
> > I am new to PG Streaming Replication and I have a simple question
> hopefully someone can answer th
"T. E. Lawrence"
writes:
> On 15.01.2013, at 05:45, Jeff Janes wrote:
>>> Is the autovacuum 100% reliable in relation to VACUUM ANALYZE?
>> No. For example, if you constantly do things that need an access exclusive
>> lock, then autovac will keep getting interrupted and never finish.
> I se
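A quick sanity check for that scenario is the stats view, which shows when (or
whether) autovacuum last got to each table:

    SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;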
On Tue, Jan 15, 2013 at 4:54 AM, Vincent Veyron wrote:
> On Monday, 14 January 2013 at 16:35 -0600, Shaun Thomas wrote:
>
>> My personal server is on Debian too, with a similar uptime. But we
>> recently ran into this guy on our 12.04 Ubuntu systems:
>>
>> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1055222
On Tuesday, 15 January 2013 at 12:54 +0100, Daniel Verite wrote:
> Vincent Veyron wrote:
>
> >
> > > On Debian/Ubuntu, the default behavior is to have SSL enabled out
> > > of the box, including for TCP connections to localhost.
> >
> > It is in Ubuntu, but not in Debian.
>
> No, I've seen it a number of times with Debian.
Vincent Veyron wrote:
>
> > On Debian/Ubuntu, the default behavior is to have SSL enabled out
> > of the box, including for TCP connections to localhost.
>
> It is in Ubuntu, but not in Debian.
No, I've seen it a number of times with Debian. pg_createcluster will enable
SSL in postgresql.conf
On Monday, 14 January 2013 at 16:35 -0600, Shaun Thomas wrote:
> My personal server is on Debian too, with a similar uptime. But we
> recently ran into this guy on our 12.04 Ubuntu systems:
>
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1055222
>
Ha, so you seem to need to use the X
Hi all,
I observed this strange data error:
In the pg_dump output, I found this line, which causes an error upon restore:
GRANT SELECT,INSERT,DELETE,UPDATE ON TABLE addresshistory TO "158755274";
It appears that the pg_class.relacl column has this literally typed in:
dev=# select count(*) from pg_class
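A query along these lines lists every relation carrying that ACL entry (the
aclitem[] is just cast to text for the LIKE):

    SELECT relname, relacl
    FROM pg_class
    WHERE relacl::text LIKE '%158755274%';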
Hmm, nvm :-(
PostgreSQL does not yet support MERGE...
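(The usual substitute is the retry loop from the plpgsql docs; the table and
column names below are made up:)

    CREATE OR REPLACE FUNCTION merge_kv(p_key integer, p_val text) RETURNS void AS $$
    BEGIN
        LOOP
            UPDATE kv SET val = p_val WHERE key = p_key;
            IF found THEN
                RETURN;
            END IF;
            BEGIN
                INSERT INTO kv (key, val) VALUES (p_key, p_val);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- someone else inserted the same key first; loop and retry the UPDATE
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;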
Kind regards/met vriendelijke groet,
Serge Fonville
Hi,
Based on my understanding of the problem, would this be possible to solve
with a MERGE statement?
HTH
Kind regards/met vriendelijke groet,
Serge Fonville
On 15.01.2013, at 05:45, Jeff Janes wrote:
>> Which makes me think that, as we grew the database more than 250 times in
>> size over a 2-3 month period, relying on autovacuum (some tables grew from
>> 200k to 50m records, others from 1m to 500m records), the autovacuum has
>> either let us do
Chris Travers wrote:
> I have a client who needs a way to step through a PL/PGSQL function and
> ideally see what one is doing
> at present. I noticed that there used to be an EDB Debugger module for this
> purpose but I can't seem
> to find one for 9.1 and I can't seem to pull from CVS to try.
ning chan wrote:
> I am new to PG Streaming Replication and I have a simple question hopefully
> someone can answer this
> easily.
>
> I setup a Primary Hot Standby and they are connecting each other without
> problem.
> Looking at the wal sender process, both servers pointing to the same location
Hi all;
I have a client who needs a way to step through a PL/PGSQL function and
ideally see what one is doing at present. I noticed that there used to be
an EDB Debugger module for this purpose but I can't seem to find one for
9.1 and I can't seem to pull from CVS to try.
Any alternatives? Am I