On 08.01.2012 22:18, Ryan Kelly wrote:
@@ -1570,7 +1570,13 @@ do_connect(char *dbname, char *user, char *host, char
*port)
keywords[7] = NULL;
values[7] = NULL;
- n_conn = PQconnectdbParams(keywords, values, true);
+ if
On Mon, Jan 9, 2012 at 07:34, Jaime Casanova ja...@2ndquadrant.com wrote:
Hi,
I was trying pg_basebackup on HEAD; I used this command:
postgres@jaime:/usr/local/pgsql/9.2$ bin/pg_basebackup -D $PWD/data2
-x stream -P -p 54392
I got this error:
19093/19093 kB (100%), 1/1 tablespace
On Mon, Jan 9, 2012 at 11:09, Magnus Hagander mag...@hagander.net wrote:
On Mon, Jan 9, 2012 at 07:34, Jaime Casanova ja...@2ndquadrant.com wrote:
Hi,
I was trying pg_basebackup on HEAD; I used this command:
postgres@jaime:/usr/local/pgsql/9.2$ bin/pg_basebackup -D $PWD/data2
-x stream -P
On Mon, Jan 09, 2012 at 10:35:50AM +0200, Heikki Linnakangas wrote:
On 08.01.2012 22:18, Ryan Kelly wrote:
@@ -1570,7 +1570,13 @@ do_connect(char *dbname, char *user, char *host, char
*port)
keywords[7] = NULL;
values[7] = NULL;
-n_conn =
On Sat, Jan 7, 2012 at 9:31 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Anyway, here's a new version of the patch. It no longer busy-waits for
in-progress insertions to finish, and handles xlog-switches. This is now
feature-complete. It's a pretty complicated patch, so I
On 09.01.2012 15:44, Simon Riggs wrote:
On Sat, Jan 7, 2012 at 9:31 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Anyway, here's a new version of the patch. It no longer busy-waits for
in-progress insertions to finish, and handles xlog-switches. This is now
On 07/01/12 23:01, Peter Eisentraut wrote:
On Thu, 2012-01-05 at 10:04 +, Benedikt Grundmann wrote:
We have recently upgraded two of our biggest Postgres databases
to new hardware and a minor version number bump (8.4.5 to 8.4.9).
We are experiencing a big performance regression in some
Attached is a trivial patch to add what I believe to be a missing
.gitignore file. You won't run into it unless you are getting
things set up for running pgindent.
-Kevin
*** /dev/null
--- b/src/tools/entab/.gitignore
***
*** 0
--- 1
+ /entab
--
Sent via pgsql-hackers
On Mon, Jan 9, 2012 at 12:00, Magnus Hagander mag...@hagander.net wrote:
On Mon, Jan 9, 2012 at 11:09, Magnus Hagander mag...@hagander.net wrote:
On Mon, Jan 9, 2012 at 07:34, Jaime Casanova ja...@2ndquadrant.com wrote:
Hi,
I was trying pg_basebackup on HEAD; I used this command:
On Mon, Jan 9, 2012 at 17:24, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Attached is a trivial patch to add what I believe to be a missing
.gitignore file. You won't run into it unless you are getting
things set up for running pgindent.
Applied, thanks.
--
Magnus Hagander
Me:
I found that I needed to adjust the command given in the README file
for pgindent. Trivial patch attached.
The one other issue I ran into in following the latest pgindent
instructions was that I had to add #include <stdlib.h> to the
parse.c file (as included in the pg_bsd_indent-1.1.tar.gz file
On Mon, 2012-01-02 at 06:43 +0200, Peter Eisentraut wrote:
I figured the best and most flexible way to address this is to export
acldefault() as an SQL function and replace
aclexplode(proacl)
with
aclexplode(coalesce(proacl, acldefault('f', proowner)))
where 'f' here is
On Mon, Jan 9, 2012 at 12:31 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
I found that I needed to adjust the command given in the README file
for pgindent. Trivial patch attached.
Committed.
The one other issue I ran into in following the latest pgindent
instructions was that I
Shouldn't it have been closed weeks ago?
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 09.01.2012 20:37, Josh Berkus wrote:
Shouldn't it have been closed weeks ago?
There are still patches in Needs Review and Ready for Committer
states...
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
So here is a patch for that.
There are a few cases that break when hiding all variable-length fields:
- Access to indclass in relcache.c, as discussed upthread, which should be
fixed.
- Access to pg_largeobject.data. This is apparently OK, per comment in
inv_api.c.
- Access to pg_proc.proargtypes
Peter Eisentraut pete...@gmx.net writes:
So I think the relcache.c thing should be fixed and then this might be
good to go.
Cosmetic gripes: I think we could get rid of the various comments that
say things like "variable-length fields start here", since the #ifdef
CATALOG_VARLEN lines now
On 1/9/12 10:39 AM, Heikki Linnakangas wrote:
On 09.01.2012 20:37, Josh Berkus wrote:
Shouldn't it have been closed weeks ago?
There are still patches in Needs Review and Ready for Committer
states...
Well, at this point I think we should bump them to CF4. Certainly
nobody is working on
Obviously, many indexes are unique and thus won't have duplicates at
all. But if someone creates an index and doesn't make it unique, odds
are very high that it has some duplicates. Not sure how many we
typically expect to see, but more than zero...
Peter may not, but I personally admin
On Mon, Jan 9, 2012 at 2:29 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Can we also try aligning the actual insertions onto cache lines rather
than just MAXALIGNing them? The WAL header fills half a cache line as
it is, so many other records will fit nicely. I'd like to
Hi,
On Sat, 2012-01-07 at 08:47 +0100, Pavel Stehule wrote:
There is broken link on
http://yum.postgresql.org/repopackages.php page
PostgreSQL 9.1 - Fedora 14 -
http://yum.postgresql.org/9.1/fedora/fedora-14-i386/pgdg-fedora-9.1-2.noarch.rpm
- 404 - Not Found
Fixed, but please report it
2012/1/9 Devrim GÜNDÜZ dev...@gunduz.org:
Hi,
On Sat, 2012-01-07 at 08:47 +0100, Pavel Stehule wrote:
There is broken link on
http://yum.postgresql.org/repopackages.php page
PostgreSQL 9.1 - Fedora 14 -
http://yum.postgresql.org/9.1/fedora/fedora-14-i386/pgdg-fedora-9.1-2.noarch.rpm
-
Every so often one or other of my buildfarm animals running on Windows 7
(64 bit) gets a regression failure running the ECPG tests. See
http://www.pgbuildfarm.org/cgi-bin/show_failures.pl?max_days=90&member=chough&member=pitta&stage=ECPG-Check.
It's not always the same test that fails, but it
Joel Jacobson j...@gluefinance.com wrote:
The Perl script pg_callgraph.pl replaces the OIDs with actual
function names before generating the call graphs using Graphviz:
Regardless of anything else, I think you need to allow for function
overloading. You could cover that, I think, by
Hello,
There is a buffer overflow in the sample code's test_parser.c that can lead
to a segmentation fault. The next byte of the buffer is tested against ' '
before its availability is checked.
You will find attached a simple patch that fixes the bug.
Paul
--
Semiocast
On Jan 9, 2012, at 2:08 PM, Joel Jacobson wrote:
Generates call graphs of function calls within a transaction in run-time.
Related to this... we had Command Prompt write a function for us that would
spit out the complete call-graph of the current call stack whenever it was
called. Alvaro
On Jan 8, 2012, at 5:25 PM, Simon Riggs wrote:
On Mon, Dec 19, 2011 at 8:18 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Double-writes would be a useful option also to reduce the size of WAL that
needs to be shipped in replication.
Or you could just use a filesystem
On Jan 6, 2012, at 8:40 PM, Robert Haas wrote:
Somewhat depressingly,
virtually all of the interesting activity still centers around the
same three locks that were problematic back then, which means that -
although overall performance has improved quite a bit - we've not
really delivered any
Paul Guyot pgu...@kallisys.net writes:
There is a buffer overflow in the sample code's test_parser.c that can lead
to a segmentation fault. The next byte of the buffer is tested against ' '
before its availability is checked.
Hmm, yeah. The probability of a failure is very low of course, but
On Jan 6, 2012, at 4:36 AM, Andres Freund wrote:
On Friday, January 06, 2012 11:30:53 AM Simon Riggs wrote:
On Fri, Jan 6, 2012 at 1:10 AM, Stephen Frost sfr...@snowman.net wrote:
* Simon Riggs (si...@2ndquadrant.com) wrote:
I discover that non-all-zeroes holes are fairly common, just not very
On 1/9/12 1:37 PM, Josh Berkus wrote:
Shouldn't it have been closed weeks ago?
It's still In Progress mostly because I flaked out for the holidays
after pushing to get most things ready for commit or returned a few
weeks ago, but not quite nailing it shut. I'm back to mostly full-time
on
On Mon, Jan 9, 2012 at 7:24 PM, Jim Nasby j...@nasby.net wrote:
On Jan 6, 2012, at 8:40 PM, Robert Haas wrote:
Somewhat depressingly,
virtually all of the interesting activity still centers around the
same three locks that were problematic back then, which means that -
although overall
On 1/5/12 1:19 AM, David Fetter wrote:
To achieve efficiency, the checkpoint writer and bgwriter should batch
writes to multiple pages together. Currently, there is an option
batched_buffer_writes that specifies how many buffers to batch at a
time. However, we may want to remove that option
On 1/7/12 5:26 AM, Heikki Linnakangas wrote:
Perhaps there
needs to be a third setting, calculate-but-dont-verify, where CRCs are
updated but existing CRCs are not expected to be correct. And a utility
to scan through your database and fix any incorrect CRCs, so that after
that's run in
Joachim Wieland j...@mcknight.de writes:
[ send NOTIFYs to slaves by means of: ]
In the patch I added a new WAL message type, XLOG_NOTIFY that writes
out WAL records when the notifications are written into the pages of
the SLRU ring buffer. Whenever an SLRU page is found to be full, a new
WAL
On 12/30/11 9:44 AM, Aidan Van Dyk wrote:
So moving to this new double-write-area bandwagon, we move from a WAL
FPW synced at the commit, collect as many other writes, then final
sync type system to a system where *EVERY* write requires syncs of 2
separate 8K writes at buffer write-out time.
On 9 January 2012 19:45, Josh Berkus j...@agliodbs.com wrote:
Obviously, many indexes are unique and thus won't have duplicates at
all. But if someone creates an index and doesn't make it unique, odds
are very high that it has some duplicates. Not sure how many we
typically expect to see,