On Sun, 30 Sep 2012 20:24:47 -0700
Darren Duncan dar...@darrenduncan.net wrote:
Unless you have very unique needs, keeping an open connection for
days is just wrong anyway; if it's for the sake of some user GUI or
shell, there probably should be safeguards there to encourage users
to not keep
Hello,
here is a small test case to reproduce an issue from our production
system:
- one table with 2 columns (sid, ua) containing a lot of nulls
- two indexes with the same condition:
i_sid: on (sid) WHERE sid IS NOT NULL
i_ua:  on (ua, sid) WHERE sid IS NOT NULL
and a query with
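The setup described above can be sketched as follows (the table name is a placeholder; the query itself is truncated above, so it is not reproduced):

```sql
-- Hypothetical table matching the description: two columns, many NULLs.
CREATE TABLE t (sid integer, ua text);

-- Two partial indexes with the same predicate:
CREATE INDEX i_sid ON t (sid)     WHERE sid IS NOT NULL;
CREATE INDEX i_ua  ON t (ua, sid) WHERE sid IS NOT NULL;
```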
I've rechecked it on Postgres 9.2 and the issue seems to be fixed by now
:-)
sorry for the spam.
Marc
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
ow...@postgresql.org] On Behalf Of Marc Mamin
Sent: Monday, 1 October 2012 09:34
To:
You caused it yourself, then. Don't do that. (Or if you must,
it's your own responsibility to fix things when they break. But
preventing read access to pg_catalog seems pretty crippling.)
I don't want arbitrary programs to have access to the system catalogue
and read willy-nilly, thanks.
Tom,
I traced through the problem with 'no symbol table' being created and managed
to create a version of Postgres 9.2.1 with a symbol table. Here's the trace ...
Rebuild postgres with debugging ...
export OBJECT_MODE=64
export CFLAGS="-maix64 -g"
export LDFLAGS="-maix64 -Wl,-bbigtoc"
export
On 10/01/12 1:20 AM, Mr Dash Four wrote:
FWIW, it's probably the 'user: ' || u_name expressions that result
in this specific failure.
I found the cause of this - I had to add an explicit cast on
all text expressions, i.e. 'user: '::text as well as u_name::text.
That way the problem goes
On 2012-10-01, Levente Kovacs leventel...@gmail.com wrote:
On Sun, 30 Sep 2012 20:24:47 -0700
Darren Duncan dar...@darrenduncan.net wrote:
Unless you have very unique needs, keeping an open connection for
days is just wrong anyway; if it's for the sake of some user GUI or
shell, there
Hi,
- PG 9.0.10
- Pgbouncer version 1.4.2
Not long ago, during the last server reboot for us, we had fixed the
really painful (and largely mysterious) process of setting up
pgbouncer.
File permissions and other mysteries were solved with help from Raghavendra:
On Mon, Oct 1, 2012 at 3:56 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
Hi,
- PG 9.0.10
- Pgbouncer version 1.4.2
Not long ago, during the last server reboot for us, we had fixed the
really painful (and largely mysterious) process of setting up
pgbouncer.
File permissions and
Hi Tom/Matthew,
Just to chime in on this thread - I'm currently validating Postgres on
AIXv7.1 and confirm that I also see the same error. I can reproduce the
error with 9.2.1 and 9.2.0 but unlike Matthew I'm using a built from
source build using the IBM xcl compiler rather than gcc.
I
I am working on an audit logging trigger that gets called for every row
inserted, updated or deleted on any table.
For this, I need to store a couple of temporary session variables such as
the ID of the user performing the change, which can be set at the start of
the session.
Until now I have been
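A common way to hold such per-session values, sketched here as an assumption rather than the poster's actual solution, is a custom configuration variable set with set_config() and read back inside the trigger with current_setting(). The 'audit.user_id' name is invented for illustration; on pre-9.2 servers the 'audit' prefix must be declared in custom_variable_classes.

```sql
-- At session start: store the acting user's ID session-wide
-- (third argument false = not local to the current transaction).
SELECT set_config('audit.user_id', '42', false);

-- Inside the audit trigger function, read it back:
--   current_setting('audit.user_id')   -- '42' for the rest of the session
```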
Carrington, Matthew (Produban) matthew.carring...@produban.co.uk writes:
pg_dump: reading user-defined aggregate functions
Breakpoint 1, exit_horribly (modulename=0x0, fmt=0x10006a590 "out of memory\n")
    at dumputils.c:1314
1314    dumputils.c: A file or directory in the path name does not
Tom,
Yes, that looks right for AIX. Here's the relevant section from the malloc man
page ...
Return Values
Upon successful completion, the malloc subroutine returns a pointer to space
suitably aligned for the storage of any type of object. If the size requested is 0,
On 1 October 2012 14:47, Marco Craveiro marco.crave...@gmail.com wrote:
Hello Postgres general
We're experiencing a lot of errors when using CDash on PostgreSQL 9.1,
hosted on Mac OSX 10.6.8. The actual error message is as follows:
SQL error in Cannot insert test:
Scott Marlowe scott.marl...@gmail.com writes:
I've had problems with ssl connections dying on me. For slony
replication I had to make sure the connections were NOT ssl or they'd
die and subscriptions would just keep repeating after getting 80%
through and getting a connect error. This was
Peter,
Thanks for your prompt reply.
A call to PageAddItem(), made within _bt_pgaddtup(), is where this
failure seems to ultimately originate from. What we're missing here is
the reason for PageAddItem() returning InvalidOffsetNumber. That is
usually, though not necessarily, separately
On Mon, Oct 1, 2012 at 8:36 AM, Moshe Jacobson mo...@neadwerx.com wrote:
I am working on an audit logging trigger that gets called for every row
inserted, updated or deleted on any table.
For this, I need to store a couple of temporary session variables such as
the ID of the user performing
Hi,
I had a very strange effect on the weekend that smells like a bug, so
I'd like to share it.
Setup:
machine A: 16 CPU Cores (modern), 128GB RAM, nice 6-drive SAS Raid-10
machines B, C: 8 Cores (substantially older than A), 48GB Ram, some scsi
Raid, substantially slower than A
The
Marco Craveiro marco.crave...@gmail.com writes:
We're experiencing a lot of errors when using CDash on PostgreSQL 9.1,
hosted on Mac OSX 10.6.8. The actual error message is as follows:
SQL error in Cannot insert test:
utility/asserter/assert_file_returns_true_for_empty_files into the
On Mon, Oct 1, 2012 at 7:28 AM, pfote pf...@ypsilon.net wrote:
Hi,
I had a very strange effect on the weekend that smells like a bug, so I'd
like to share it.
Setup:
machine A: 16 CPU Cores (modern), 128GB RAM, nice 6-drive SAS Raid-10
machines B, C: 8 Cores (substantially older than A),
Tom,
Following on from that man page extract I tried a build using the suggested
compiler macro (_LINUX_SOURCE_COMPAT) ...
export CC="/opt/freeware/bin/gcc -maix64"
export OBJECT_MODE=64
export CFLAGS="-D_LINUX_SOURCE_COMPAT -maix64 -g"
export LDFLAGS="-maix64 -Wl,-bbigtoc"
export AR="ar -X64"
export
On Wed, Sep 26, 2012 at 5:39 AM, Alexander Shutyaev shuty...@gmail.com wrote:
SHMALL = 2097152
SHMALL * PAGE_SIZE = 2097152 * 4096 = 8589934592
which is smaller than your requested allocation.
SHMALL and SHMMAX need to be sized together. You likely want 4314090 as
your SHMALL.
But as Devrim
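The sizing arithmetic above as a shell sketch: SHMALL is counted in pages, so multiply by the page size to get bytes (4096 is the common Linux page size; check with getconf on the target machine).

```shell
PAGE_SIZE=4096          # getconf PAGE_SIZE on the target machine
SHMALL=2097152          # current setting, in pages

# Bytes of shared memory currently allowed:
echo $((SHMALL * PAGE_SIZE))    # 8589934592 (8 GiB)

# The suggested value, in bytes:
echo $((4314090 * PAGE_SIZE))   # 17670512640 (~16.5 GiB)
```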
Merlin,
On Mon, Oct 1, 2012 at 10:28 AM, Merlin Moncure mmonc...@gmail.com wrote:
Couple points:
*) Functions without exception blocks are faster than those with.
Clearly.
*) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
I don't think that can be assumed by your
Hey,
So, I've searched around through the archives, and it seems this has
come up a couple times in the past. But one scenario that was never
explored was when using one .pgpass file in a cluster of servers, in
which case it makes sense to save it in source control, or something
like
Hello Tom,
This definitely looks like index corruption, but blaming it on the
filesystem might be premature. I'm wondering if this could be an
artifact of the WAL-replay bug fixed in 9.1.6. I'd suggest updating
and then reindexing the index ...
We are running 9.1.2 it seems:
select
On Mon, Oct 1, 2012 at 10:21 AM, Moshe Jacobson mo...@neadwerx.com wrote:
Merlin,
On Mon, Oct 1, 2012 at 10:28 AM, Merlin Moncure mmonc...@gmail.com wrote:
Couple points:
*) Functions without exception blocks are faster than those with.
Clearly.
*) Therefore, CREATE/IF NOT EXISTS is
On Mon, Oct 1, 2012 at 11:22 AM, Merlin Moncure mmonc...@gmail.com wrote:
We currently do use permanent tables using pg_backend_pid(). It's because of
the connection pooling specifically that we are having problems with stale
data. I have been unable to find a way to automatically clear that
Could you please check the permissions on the /var/run/pgbouncer/ directory. If
the pgbouncer directory does not have postgres user permissions, please assign
them and then start pgbouncer.
The /var/run/pgbouncer/ directory has
chown -R postgres:postgres ..
The port number everywhere is already
You should never put your passwords (or private keys) in source control; it
would be better to use the puppet/bcfg option.
Generally you want to keep your sensitive and less sensitive info separate. If
you have passwords in version control and later want to delete them, you have to
mess with
On 10/01/2012 12:19 PM, Darren Duncan wrote:
You should never put your passwords (or private keys) in source control;
it would be better to use the puppet/bcfg option.
That was kind of my point. Puppet / Bcfg2 have the same problem. About a
dozen people have access to our bcfg2 repo than I
This could be just about anything. Which OS are you running? Did you
check any logs when everything went crazy?
Sorry, should have been more verbose.
OS is Debian Squeeze, pg installed is the latest 9.2.1 from
pgapt.debian.net. Logs were checked, no errors/warnings at all. Query
plans have
Tom/Matthew,
I also tried the same macro with the xlc compiler with similar results
in that pg_dump now works as expected :-)
For info here's my build setup:-
./configure CC=xlc LIBS=-lssl -lcrypto -lz -lreadline -lcurses -lld
-lmass -lm CFLAGS=-qlanglvl=extc89 -D_LINUX_SOURCE_COMPAT
-Original Message-
From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
Sent: Friday, September 28, 2012 1:07 PM
To: Yelai, Ramkumar IN BLR STS; pgsql-general@postgresql.org
Cc: scrawf...@pinpointresearch.com; and...@2ndquadrant.com
Subject: RE: [GENERAL] Re: Need help in reclaiming disk
On Oct 1, 2012, at 10:26 AM, Shaun Thomas stho...@optionshouse.com wrote:
On 10/01/2012 12:19 PM, Darren Duncan wrote:
You should never put your passwords (or private keys) in source control;
it would be better to use the puppet/bcfg option.
That was kind of my point. Puppet / Bcfg2 have
Andrew Hastie and...@ahastie.net writes:
Tom: Is this something we should get added into the AIX Platform
specific notes?
No, it's something we need to fix. See hackers thread:
http://archives.postgresql.org/pgsql-hackers/2012-10/msg00029.php
That #define isn't a bad quick-workaround if you
On Mon, Oct 1, 2012 at 10:26 AM, Shaun Thomas stho...@optionshouse.com wrote:
On 10/01/2012 12:19 PM, Darren Duncan wrote:
You should never put your passwords (or private keys) in source control;
it would be better to use the puppet/bcfg option.
That was kind of my point. Puppet / Bcfg2
Is there a link between Sybase and Postgres?
I ask because I came across a bug in Netezza, and Netezza has a
well-known Postgres lineage, but when web-searching the bug, the first
thing I found was a Sybase reference-
Hi everyone,
We have two postgresql 9.0 databases (32-bit) with more than 10,000
schemas. When we try to run ANALYZE in those databases we get errors like
this (after a few hours):
2012-09-14 01:46:24 PDT ERROR: out of memory
2012-09-14 01:46:24 PDT DETAIL: Failed on request of size 421.
On 10/01/2012 02:05 PM, Jeff Janes wrote:
Who are those people? Do they have administrative access to the 20
machines? If so, it seems to me that the game is already over. If
not, what mechanism do you use to keep them out? Perhaps that
mechanism could be extended to cover this case as
On Mon, Oct 1, 2012 at 03:49:14PM -0400, yary wrote:
Is there a link between Sybase and Postgres?
I ask because I came across a bug in Netezza, and Netezza has a
well-known Postgres lineage, but when web-searching the bug, the first
thing I found was a Sybase reference-
Hugo Nabble hugo.t...@gmail.com writes:
We have two postgresql 9.0 databases (32-bit) with more than 10,000
schemas. When we try to run ANALYZE in those databases we get errors like
this (after a few hours):
2012-09-14 01:46:24 PDT ERROR: out of memory
2012-09-14 01:46:24 PDT DETAIL:
yary not@gmail.com writes:
Is there a link between Sybase and Postgres?
Not that I've ever heard of.
I ask because I came across a bug in Netezza, and Netezza has a
well-known Postgres lineage, but when web-searching the bug, the first
thing I found was a Sybase reference-
and indeed that is the exact bug I found in Netezza! first_value()
works great, last_value() has window size 1 unless you also say ROWS
BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING.
What Tom said, but:
"...has window size 1" is not correct. The window size is larger but the
returned
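The behaviour being corrected here comes from the default window frame, which runs from the partition start only to the current row; a minimal illustration:

```sql
-- With the default frame (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
-- last_value() sees no rows beyond the current one, so it returns the current
-- row's value; an explicit frame clause extends it to the whole partition.
SELECT x,
       last_value(x) OVER (ORDER BY x) AS default_frame,   -- equals x
       last_value(x) OVER (ORDER BY x
                           ROWS BETWEEN UNBOUNDED PRECEDING
                                    AND UNBOUNDED FOLLOWING) AS full_frame  -- always 3
FROM (VALUES (1), (2), (3)) AS v(x);
```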
On Mon, Oct 1, 2012 at 12:22 PM, Merlin Moncure mmonc...@gmail.com wrote:
*) Functions without exception blocks are faster than those with.
*) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
I don't think that can be assumed by your premise above. Essentially we
are
I doubt that the number of *schemas* is a big deal here, but the number
of *tables* might well be. How many? Also, 9.0.what?
Each schema has 22 tables, so we can count at least 22 x 10,000 = 220,000
tables.
The postgresql version is 9.0.7-1.
Regards,
Hugo
On Monday, October 01, 2012 03:10:43 PM Shaun Thomas wrote:
I can't remember about Puppet since I haven't used it in so long, but
bcfg2 is basically just a giant directory structure, and we put ours in
GIT for safekeeping and to track changes. Implementing ACLs in GIT is a
bit of a PITA, so
Hello postgresql gurus,
I want to have an array of a composite type in a table. Retrieving/saving
values works fine.
I just wonder how I can search against a composite type array.
CREATE TYPE CompXYZ AS (
attr1 integer,
attr2 text,
attr3 inet
);
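Two ways to search such an array, sketched with invented table/column names (mytable, comp_array): unnest the array so ordinary predicates apply per element, or compare whole elements with = ANY.

```sql
-- Per-attribute search: expand the array and filter on one field.
SELECT s.id
FROM  (SELECT id, unnest(comp_array) AS e FROM mytable) s
WHERE (s.e).attr2 = 'some text';

-- Whole-element search: composite equality against any array element.
SELECT id
FROM   mytable
WHERE  ROW(1, 'some text', '10.0.0.1')::CompXYZ = ANY (comp_array);
```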
Thanks to all for the education! The bug was in my understanding (and
that blogger's)... and the diverse SQL implementations are doing what
they're meant to. I'll read up more.
On Mon, Oct 1, 2012 at 12:52 PM, Hugo Nabble hugo.t...@gmail.com wrote:
Hi everyone,
We have two postgresql 9.0 databases (32-bit)
Why 32 bits? Is that what your hardware is?
with more than 10,000
schemas. When we try to run ANALYZE in those databases we get errors like
this (after a few
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of ChoonSoo Park
Sent: Monday, October 01, 2012 5:50 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] How to search for composite type array
Hello postgresql gurus,
I want to have an array
Has anyone come up with a good solution for distributing a .pgpass file
that doesn't expose it to anyone who has access to the distribution
mechanism?
No, you cannot easily keep it in version control/puppet securely.
One way is to have
Hi all
While examining a reported issue with the JDBC driver I'm finding myself
wanting SQL-level functions to get the scale and precision of a numeric
result from an operation like:
select NUMERIC(8,4) '1.'
union
select INTEGER 4;
I can write:
SELECT pg_typeof(a), a FROM (
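For a stored column, the information wanted here can be recovered from pg_attribute.atttypmod, whose encoding for numeric is ((precision << 16) | scale) + 4; a decoding sketch (note that the typmod of an arbitrary expression's result, which is what the poster is after, has no SQL-level accessor in this release):

```sql
-- NUMERIC(8,4) stores typmod ((8 << 16) | 4) + 4 = 524296; decode it:
SELECT ((524296 - 4) >> 16) & 65535 AS precision,  -- 8
       (524296 - 4) & 65535         AS scale;      -- 4
```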
Hi,
Is there any way to force the query planner to do a materialisation stage?
I have a query that joins two views, and takes 28 seconds to run.
However if I create temporary tables that contain the contents of each
view, and then join them, the total time is 1.3 seconds.
Is there a way to
On Oct 1, 2012, at 22:33, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
Hi,
Is there any way to force the query planner to do a materialisation stage?
I have a query that joins two views, and takes 28 seconds to run.
However if I create temporary tables that contain the
Hi,
On 2 October 2012 12:33, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
I have a query that joins two views, and takes 28 seconds to run.
However if I create temporary tables that contain the contents of each view,
and then join them, the total time is 1.3 seconds.
try offset
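The truncated suggestion refers to the OFFSET 0 trick: a subquery carrying OFFSET 0 is not flattened into the outer query, so it acts as an optimization fence much like the temporary tables did. A sketch with placeholder view and column names:

```sql
SELECT *
FROM  (SELECT * FROM view_a OFFSET 0) a
JOIN  (SELECT * FROM view_b OFFSET 0) b ON a.id = b.id;
```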
On 02/10/12 12:44, David Johnston wrote:
On Oct 1, 2012, at 22:33, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
Hi,
Is there any way to force the query planner to do a materialisation stage?
I have a query that joins two views, and takes 28 seconds to run.
However if I create
On 02/10/12 13:02, Ondrej Ivanič wrote:
Hi,
On 2 October 2012 12:33, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
I have a query that joins two views, and takes 28 seconds to run.
However if I create temporary tables that contain the contents of each view,
and then join them,
On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
Could you please check the permissions on the /var/run/pgbouncer/ directory. If
the pgbouncer directory does not have postgres user permissions, please assign
them and then start pgbouncer.
The /var/run/pgbouncer/ directory
Hi,
On 2 October 2012 13:28, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
I have a query that joins two views, and takes 28 seconds to run.
However if I create temporary tables that contain the contents of each
view,
and then join them, the total time is 1.3 seconds.
try