Re: [HACKERS] 9.2.1 index-only scans : abnormal heap fetches after VACUUM FULL

2014-02-16 Thread Andres Freund
On 2014-02-15 21:34:15 -0500, Bruce Momjian wrote:
 Thank you for the thorough review.  Unless someone else can complete
 this, I think it should be marked as returned with feedback.  I don't
 think I am going to learn enough to complete this during the
 commit-fest.

Agreed. Marked it as such.


 I guess there is always PG 9.5.

I sure hope so ;)

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] narwhal and PGDLLIMPORT

2014-02-16 Thread Andres Freund
Marco, Andrew:

On 2014-02-15 22:11:37 +0100, Marco Atzeri wrote:
 ../../src/timezone/localtime.o ../../src/timezone/strftime.o
 ../../src/timezone/pgtz.o ../../src/port/libpgport_srv.a
 ../../src/common/libpgcommon_srv.a -lintl -lssl -lcrypto -lcrypt -lldap -o
 postgres
 libpq/auth.o:auth.c:(.text+0x1940): undefined reference to `in6addr_any'
 libpq/auth.o:auth.c:(.text+0x1954): undefined reference to `in6addr_any'
 libpq/auth.o:auth.c:(.text+0x196d): undefined reference to `in6addr_any'
 libpq/auth.o:auth.c:(.text+0x1979): undefined reference to `in6addr_any'
 /usr/lib/gcc/i686-pc-cygwin/4.8.2/../../../../i686-pc-cygwin/bin/ld:
 libpq/auth.o: bad reloc address 0xce4 in section `.rdata'
 collect2: error: ld returned 1 exit status
 Makefile:66: recipe for target 'postgres' failed
 make[2]: *** [postgres] Error 1
 make[2]: Leaving directory
 '/pub/devel/postgresql/git/postgresql_build/src/backend'

Could either of you try whether compiling with the attached hack fixes
anything on cygwin?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/backend/Makefile b/src/backend/Makefile
index 870a022..a991134 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -62,6 +62,8 @@ endif
 
 ifeq ($(PORTNAME), cygwin)
 
+LIBS += -lwsock32 -lws2_32
+
 postgres: $(OBJS)
 	$(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_EX) $(export_dynamic) -Wl,--stack,$(WIN32_STACK_RLIMIT) -Wl,--export-all-symbols -Wl,--out-implib=libpostgres.a $(call expand_subsys,$^) $(LIBS) -o $@
 



Re: [HACKERS] Recovery inconsistencies, standby much larger than primary

2014-02-16 Thread Greg Stark
On Sat, Feb 15, 2014 at 11:45 AM, Andres Freund and...@2ndquadrant.com wrote:
 I guess the theoretically correct thing would be to make all WAL records
 about truncation and unlinking contain the current size of the relation,
 but especially with deletions and forks that will probably turn out to
 be annoying to do.

Here's another alternative.

In md.c, when extending a file to RELSEG_SIZE, always check whether the next
segment is already there and truncate it if it is, to avoid magically
slurping in that data. That maintains the invariant that the first
short segment marks the end of the relation: if you have a short
or missing segment, then you'll ignore all the later segments.

I think to make this work you would have to sync the newly truncated
segment before extending the current segment, though. And this
would cause every relation extension to do an extra filesystem lookup.
Perhaps only doing this in recovery (or with assertions enabled?)
would mitigate that cost.

This makes a mockery of the comment in xlogutils.c that we would
rather not lose data in the case of a lost inode. But I feel like the
data in the later segments was already lost before the earlier segment
was filled up; it hardly helps matters if it can sometimes be "unlost"
when the earlier data happens to get written in a particular pattern.
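
A minimal standalone sketch of that check, using plain POSIX calls rather
than the real md.c machinery (the function name, the fixed-size path buffer,
and the example relation path below are made up for illustration):

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

/*
 * Sketch: segment `segno` of `relpath` has just been extended to a full
 * RELSEG_SIZE.  If the next segment file already exists (stale data left
 * behind by a crash), truncate it so that the first short-or-missing
 * segment still marks the true end of the relation.
 */
static int
truncate_stale_next_segment(const char *relpath, int segno)
{
	char		next[1024];
	struct stat st;

	snprintf(next, sizeof(next), "%s.%d", relpath, segno + 1);

	if (stat(next, &st) != 0)
		return 0;				/* no next segment: nothing to do */

	if (truncate(next, 0) != 0)
		return -1;				/* caller would report the error */

	/*
	 * A real version would also fsync the truncated segment here, before
	 * the current segment is extended, per the note above.
	 */
	return 0;
}

int
main(void)
{
	/* hypothetical relation file, segment 0 just filled up */
	return truncate_stale_next_segment("base/16384/16385", 0) != 0;
}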


-- 
greg




Re: [HACKERS] narwhal and PGDLLIMPORT

2014-02-16 Thread Tom Lane
Hiroshi Inoue in...@tpf.co.jp writes:
 (2014/02/15 2:32), Tom Lane wrote:
 And what happens is this:
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=narwhal&dt=2014-02-14%2017%3A00%3A02
 namely, it gets through plperl now and then chokes with the same
 symptoms on pltcl.  So I guess we need the same hack in pltcl.
 The fun never stops ...

 Pltcl still fails.
 tclxx.dll lives in the bin directory, not in the lib directory.
 The attached patch would fix the problem.

Pushed, thanks.

regards, tom lane




Re: [HACKERS] Problem with displaying wide tables in psql

2014-02-16 Thread Sergey Muraviov
Hi.

Thanks for your review.

2014-02-15 20:08 GMT+04:00 Emre Hasegeli e...@hasegeli.com:

 Hi,

 This is my review of the 3rd version of the patch. It is a useful
 improvement in my opinion. It worked well in my environment.

 2013-12-11 17:43:06, Sergey Muraviov sergey.k.murav...@gmail.com:
  It works in expanded mode when either format option is set to wrapped
  (\pset format wrapped), or we have no pager, or pager doesn't chop long
  lines (so you can still use the trick).

 I do not like this logic in the IsWrappingNeeded function. It does not
 seem right to check the environment variables for less. It would be hard
 to explain this behavior to the users. It is better to make this only
 the behavior of the wrapped format in expanded mode, in my opinion.


You are right. This logic is too complicated.
New patch works with PRINT_WRAPPED option only.


{
	if (opt_border < 2)
		fprintf(fout, "%s\n",
				dlineptr[line_count].ptr);
	else
		fprintf(fout, "%-s%*s %s\n",
				dlineptr[line_count].ptr,
				dwidth - dlineptr[line_count].width, "",
				dformat->rightvrule);
}

 Is it necessary to keep this old print line code? It seems to me the new
 code works well on the (dlineptr[line_count].width <= dwidth) condition.


The new code doesn't work with empty strings, but I've done a minor optimization
for this case.

-- 
Best regards,
Sergey Muraviov
From 48ba7ed8c5ff82cdc1c912ff9ac966962f18ce54 Mon Sep 17 00:00:00 2001
From: Sergey Muraviov sergey.k.murav...@gmail.com
Date: Sun, 16 Feb 2014 20:00:10 +0400
Subject: [PATCH] Now patch works with PRINT_WRAPPED option only

---
 src/bin/psql/print.c | 79 ++--
 1 file changed, 71 insertions(+), 8 deletions(-)

diff --git a/src/bin/psql/print.c b/src/bin/psql/print.c
index 79fc43e..d7bf412 100644
--- a/src/bin/psql/print.c
+++ b/src/bin/psql/print.c
@@ -1234,6 +1234,45 @@ print_aligned_vertical(const printTableContent *cont, FILE *fout)
 			fprintf(fout, "%s\n", cont->title);
 	}
 
+	if (cont->opt->format == PRINT_WRAPPED)
+	{
+		int output_columns = 0;
+		/*
+		 * Choose target output width: \pset columns, or $COLUMNS, or ioctl
+		 */
+		if (cont->opt->columns > 0)
+			output_columns = cont->opt->columns;
+		else
+		{
+			if (cont->opt->env_columns > 0)
+				output_columns = cont->opt->env_columns;
+#ifdef TIOCGWINSZ
+			else
+			{
+				struct winsize screen_size;
+
+				if (ioctl(fileno(stdout), TIOCGWINSZ, &screen_size) != -1)
+					output_columns = screen_size.ws_col;
+			}
+#endif
+		}
+
+		output_columns -= hwidth;
+
+		if (opt_border == 0)
+			output_columns -= 1;
+		else
+		{
+			output_columns -= 3;	/* -+- */
+
+			if (opt_border > 1)
+				output_columns -= 4;	/* +--+ */
+		}
+
+		if ((output_columns > 0) && (dwidth > output_columns))
+			dwidth = output_columns;
+	}
+
 	/* print records */
 	for (i = 0, ptr = cont->cells; *ptr; i++, ptr++)
 	{
@@ -1292,20 +1331,45 @@ print_aligned_vertical(const printTableContent *cont, FILE *fout)
 			else
 				fputc(' ', fout);
 
-			if (!dcomplete)
+			if ((!dcomplete) && (dlineptr[line_count].width > 0))
 			{
-				if (opt_border < 2)
-					fprintf(fout, "%s\n", dlineptr[line_count].ptr);
-				else
-					fprintf(fout, "%-s%*s %s\n", dlineptr[line_count].ptr,
-							dwidth - dlineptr[line_count].width, "",
-							dformat->rightvrule);
+				int offset = 0;
+				int chars_to_output = dlineptr[line_count].width;
+				while (chars_to_output > 0)
+				{
+					int target_width, bytes_to_output;
+					if (offset > 0)
+					{
+						if (opt_border == 2)
+							fprintf(fout, "%s ", dformat->leftvrule);
+
+						fprintf(fout, "%*s", hwidth, "");
+
+						if (opt_border > 0)
+							fprintf(fout, " %s ", dformat->midvrule);
+						else
+							fputc(' ', fout);
+					}
+
+					target_width = dwidth;
+					bytes_to_output = strlen_max_width(dlineptr[line_count].ptr + offset,
+													   &target_width, encoding);
+					fputnbytes(fout, (char *) (dlineptr[line_count].ptr + offset), bytes_to_output);
+					chars_to_output -= target_width;
+					offset += bytes_to_output;
+
+					if (opt_border < 2)
+						fputc('\n', fout);
+					else
+						fprintf(fout, "%*s %s\n", dwidth - target_width, "", dformat->rightvrule);
+				}
 
 				if (!dlineptr[line_count + 1].ptr)
 					dcomplete = 1;
 			}
 			else
 			{
+				dcomplete = 1;
 				if (opt_border < 2)
 					fputc('\n', fout);
 				else
@@ -2175,7 +2239,6 @@ print_troff_ms_vertical(const printTableContent *cont, FILE *fout)
 /* Public functions		*/
 //
 
-
 /*
  * PageOutput
  *
-- 
1.8.5.3




Re: [HACKERS] Changeset Extraction v7.6.1

2014-02-16 Thread Andres Freund
On 2014-02-15 17:29:04 -0500, Robert Haas wrote:
 On Fri, Feb 14, 2014 at 4:55 AM, Andres Freund and...@2ndquadrant.com wrote:
  [ new patches ]
 
 0001 already needs minor
 
 + * copied stuff from tuptoaster.c. Perhaps there should be toast_internal.h?
 
 Yes, please.  If you can submit a separate patch creating this file
 and relocating this stuff there, I will commit it.

I started to work on that, but I am not sure we actually need it
anymore. tuptoaster.h isn't included in that many places, so perhaps we
should just add it there?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] narwhal and PGDLLIMPORT

2014-02-16 Thread Marco Atzeri



On 16/02/2014 15:43, Andres Freund wrote:

Marco, Andrew:

On 2014-02-15 22:11:37 +0100, Marco Atzeri wrote:

../../src/timezone/localtime.o ../../src/timezone/strftime.o
../../src/timezone/pgtz.o ../../src/port/libpgport_srv.a
../../src/common/libpgcommon_srv.a -lintl -lssl -lcrypto -lcrypt -lldap -o
postgres
libpq/auth.o:auth.c:(.text+0x1940): undefined reference to `in6addr_any'
libpq/auth.o:auth.c:(.text+0x1954): undefined reference to `in6addr_any'
libpq/auth.o:auth.c:(.text+0x196d): undefined reference to `in6addr_any'
libpq/auth.o:auth.c:(.text+0x1979): undefined reference to `in6addr_any'
/usr/lib/gcc/i686-pc-cygwin/4.8.2/../../../../i686-pc-cygwin/bin/ld:
libpq/auth.o: bad reloc address 0xce4 in section `.rdata'
collect2: error: ld returned 1 exit status
Makefile:66: recipe for target 'postgres' failed
make[2]: *** [postgres] Error 1
make[2]: Leaving directory
'/pub/devel/postgresql/git/postgresql_build/src/backend'


Could either of you try whether compiling with the attached hack fixes
anything on cygwin?

Greetings,

Andres Freund



on cygwin 32-bit it works, but it stops later on
---
sl -lcrypto -lz -lreadline -lcrypt -o psql.exe
tab-complete.o:tab-complete.c:(.text+0xa98): undefined reference to 
`rl_line_buffer'
tab-complete.o:tab-complete.c:(.text+0xa387): undefined reference to 
`rl_attempted_completion_function'
tab-complete.o:tab-complete.c:(.text+0xa391): undefined reference to 
`rl_basic_word_break_characters'
tab-complete.o:tab-complete.c:(.text+0xa3a4): undefined reference to 
`rl_readline_name'
/usr/lib/gcc/i686-pc-cygwin/4.8.2/../../../../i686-pc-cygwin/bin/ld: 
tab-complete.o: bad reloc address 0x30ec in section `.rdata'
/usr/lib/gcc/i686-pc-cygwin/4.8.2/../../../../i686-pc-cygwin/bin/ld: 
final link failed: Invalid operation

collect2: error: ld returned 1 exit status
Makefile:33: recipe for target 'psql' failed
---

on cygwin 64-bit, which I was not testing before,
something is strange
--
 -lintl -lssl -lcrypto -lcrypt -lldap -lwsock32 -lws2_32 -o postgres
postmaster/postmaster.o:postmaster.c:(.rdata$.refptr.environ[.refptr.environ]+0x0): 
undefined reference to `environ'

collect2: error: ld returned 1 exit status
Makefile:68: recipe for target 'postgres' failed
make[2]: *** [postgres] Error 1
---

Of course, 9.3.2 builds fine on cygwin 64-bit.

Regards
Marco








Re: [HACKERS] narwhal and PGDLLIMPORT

2014-02-16 Thread Tom Lane
Marco Atzeri marco.atz...@gmail.com writes:
 On 16/02/2014 15:43, Andres Freund wrote:
 Could either of you try whether compiling with the attached hack fixes
 anything on cygwin?

 on cygwin32 bit it works, but it stops later on
 ---
 sl -lcrypto -lz -lreadline -lcrypt -o psql.exe
 tab-complete.o:tab-complete.c:(.text+0xa98): undefined reference to 
 `rl_line_buffer'

 on cygwin 64bit, that I was not testing before,
 something is strange
 --
   -lintl -lssl -lcrypto -lcrypt -lldap -lwsock32 -lws2_32 -o postgres
 postmaster/postmaster.o:postmaster.c:(.rdata$.refptr.environ[.refptr.environ]+0x0):
  
 undefined reference to `environ'

So what we currently know is that on cygwin, some of the core system
include files have been declspec'd, but others haven't; and headers
for third-party libraries like libxml and libreadline mostly haven't.

I'm starting to get the feeling that we're going to have to admit
defeat and not try to use --disable-auto-import on cygwin builds.
That platform is evidently not capable of supporting it.

We seem to be pretty nearly there on getting the MSVC and Mingw builds
to reliably complain about missing PGDLLIMPORTs, so maybe it's good
enough if those builds do it.

regards, tom lane




Re: [HACKERS] narwhal and PGDLLIMPORT

2014-02-16 Thread Andres Freund
On 2014-02-16 12:57:46 -0500, Tom Lane wrote:
 Marco Atzeri marco.atz...@gmail.com writes:
  On 16/02/2014 15:43, Andres Freund wrote:
  Could either of you try whether compiling with the attached hack fixes
  anything on cygwin?
 
  on cygwin32 bit it works, but it stops later on
  ---
  sl -lcrypto -lz -lreadline -lcrypt -o psql.exe
  tab-complete.o:tab-complete.c:(.text+0xa98): undefined reference to 
  `rl_line_buffer'
 
  on cygwin 64bit, that I was not testing before,
  something is strange
  --
-lintl -lssl -lcrypto -lcrypt -lldap -lwsock32 -lws2_32 -o postgres
  postmaster/postmaster.o:postmaster.c:(.rdata$.refptr.environ[.refptr.environ]+0x0):
   
  undefined reference to `environ'

In this case that's because it's our own extern; that itself would
probably be fixable, but:

 So what we currently know is that on cygwin, some of the core system
 include files have been declspec'd, but others haven't; and headers
 for third-party libraries like libxml and libreadline mostly haven't.

it's not going to work for the external libraries.

 I'm starting to get the feeling that we're going to have to admit
 defeat and not try to use --disable-auto-import on cygwin builds.
 That platform is evidently not capable of supporting it.

Agreed. It's probably doable if somebody actually using cygwin
themselves would invest a day or two and work on upstreaming the
changes, but it looks painful to do indirectly.

 We seem to be pretty nearly there on getting the MSVC and Mingw builds
 to reliably complain about missing PGDLLIMPORTs, so maybe it's good
 enough if those builds do it.

Is there anything missing on that end?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] narwhal and PGDLLIMPORT

2014-02-16 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2014-02-16 12:57:46 -0500, Tom Lane wrote:
 I'm starting to get the feeling that we're going to have to admit
 defeat and not try to use --disable-auto-import on cygwin builds.
 That platform is evidently not capable of supporting it.

 Agreed. It's probably doable if somebody actually using cygwin
 themselves would invest a day or two and work on upstreaming the
 changes, but it looks painful to do indirectly.

Yeah, and I doubt that anybody in the Cygwin project is going to see
the point of maintaining such patches anyway.  As Marco said, their
idea is to provide a Unix-ish platform as best they can, not adopt all
the worst features of Windows.  I kinda wonder why they have declspec's
on the getopt variables at all.

 We seem to be pretty nearly there on getting the MSVC and Mingw builds
 to reliably complain about missing PGDLLIMPORTs, so maybe it's good
 enough if those builds do it.

 Is there anything missing on that end?

I think it's all fixed, but I want to wait for another buildfarm cycle
just to be sure all the relevant critters are failing where we expect
them to and not somewhere else (like pltcl).

regards, tom lane




Re: [HACKERS] narwhal and PGDLLIMPORT

2014-02-16 Thread Andres Freund
On 2014-02-16 13:25:58 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2014-02-16 12:57:46 -0500, Tom Lane wrote:
  I'm starting to get the feeling that we're going to have to admit
  defeat and not try to use --disable-auto-import on cygwin builds.
  That platform is evidently not capable of supporting it.
 
  Agreed. It's probably doable if somebody actually using cygwin
  themselves would invest a day or two and work on upstreaming the
  changes, but it looks painful to do indirectly.
 
 Yeah, and I doubt that anybody in the Cygwin project is going to see
 the point of maintaining such patches anyway.  As Marco said, their
 idea is to provide a Unix-ish platform as best they can, not adopt all
 the worst features of Windows.  I kinda wonder why they have declspec's
 on the getopt variables at all.

When searching for others having these issues, I discovered several other
projects also disabling auto-import, so it's not just us. And, afaics,
it's mostly upstream bugs, e.g. libxml's header just misses an extern
in one of the cygwin-specific defines.
I don't think we've actually found bugs in cygwin itself yet. We'd overridden
the provided declspec for getopt et al., which is why it failed, and we're
failing to link to several of the libraries we're using. I think the
only reason the latter isn't breaking our neck is that they are
indirectly included and the auto-import code is saving us.
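
As a generic illustration of the failure mode (all names below are invented,
not taken from libxml or readline), this is roughly how a DLL-exported
variable has to be declared on the consumer side; a platform-specific branch
that loses the extern/dllimport is what produces undefined-reference and
bad-reloc errors like the ones quoted above:

/* mylib.h -- hypothetical header, for illustration only */
#ifndef MYLIB_H
#define MYLIB_H

#if defined(_WIN32) && !defined(BUILDING_MYLIB)
/* Consumers must see the variable as a DLL import (or rely on auto-import). */
#define MYLIB_DATA __declspec(dllimport)
#else
/*
 * If a platform-specific branch drops both the dllimport and the extern
 * (as with the missing extern mentioned above), this line becomes a local
 * definition in every includer instead of a reference to the variable
 * exported by the DLL.
 */
#define MYLIB_DATA extern
#endif

MYLIB_DATA int mylib_flags;		/* analogous to in6addr_any or rl_line_buffer */

#endif							/* MYLIB_H */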

That said, even if the upstream projects were fixed, we probably
wouldn't want to enable this unconditionally for a fair while. And I am
surely not volunteering to do the necessary work, and I somehow think
neither are you...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] New hook after raw parsing, before analyze

2014-02-16 Thread David Beck
 There is a hook post_parse_analyze_hook but I think it comes too
 late as it comes after the analyze step which is when Postgres looks
 up the schema information for every relation mentioned in the query.
 What you would need is a post_parse_hook which would work on the raw
 parse tree before the analyze step. That doesn't seem terribly
 controversial to add though there may be some technical details. The
 API would of course be completely unstable from major release to major
 release -- the parse tree gets knocked around quite a bit.

Good to know that the parse tree is not stable. My original proposal was exactly
the one you describe. I called it post_raw_parse_hook and wanted to call it
from exec_simple_query(), right after pg_parse_query().
Maybe it's not the best place or way to achieve this?

It was a few-line patch; I wonder if it ever has a chance to get into Postgres.
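
A self-contained sketch of the shape of that proposal, with a stand-in
ParseTree type rather than PostgreSQL's real raw-parse-tree list (none of
this is actual backend code; only the hook placement matters):

#include <stddef.h>
#include <stdio.h>

/* Stand-in for the raw parse tree; only the shape of the hook matters. */
typedef struct ParseTree { const char *query; } ParseTree;

typedef ParseTree *(*post_raw_parse_hook_type) (ParseTree *raw_tree);

/* NULL by default; an extension's _PG_init() would install its function. */
static post_raw_parse_hook_type post_raw_parse_hook = NULL;

static ParseTree *
parse_query(const char *query_string)
{
	static ParseTree tree;

	tree.query = query_string;
	return &tree;
}

static ParseTree *
exec_simple_query(const char *query_string)
{
	ParseTree  *tree = parse_query(query_string);

	/*
	 * The proposed hook point: right after raw parsing, before analysis,
	 * so an extension could rewrite references to objects that do not
	 * exist yet in the catalogs.
	 */
	if (post_raw_parse_hook != NULL)
		tree = post_raw_parse_hook(tree);

	return tree;
}

int
main(void)
{
	ParseTree  *t = exec_simple_query("SELECT 1");

	printf("%s\n", t->query);
	return 0;
}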

 And I have to wonder if you aren't going the long way around to do
 something that can be done more simply some other way. If you have
 150k objects I wonder if your objects aren't all very similar and
 could be handled by a single Postgres schema object. Either a single
 FDW object or a simple function.

I need to do a bit more research on that side. I was hoping to spend less time 
on understanding that legacy system...

 As a side note, you should evaluate carefully what lazily creating
 objects will buy you. Perhaps just creating 150k objects would be
 cheaper than maintaining this code. In particular since the user
 *might* access all 150k you still have to worry about the worst case
 anyway and it might be cheaper to just engineer for it in the first
 place.

I have a similar problem with the legacy system: the set of 150k objects may
change between releases. My gut feeling is that it is easier to understand
and keep track of an open source project like Postgres.

Best regards and thanks for all insights and ideas, David





[HACKERS] Draft release notes up for review

2014-02-16 Thread Tom Lane
Draft release notes for 9.3.3 are committed and can be read at
http://www.postgresql.org/docs/devel/static/release-9-3-3.html
Any comments before I start transposing them into the back branches?

regards, tom lane




Re: [HACKERS] 9.2.1 index-only scans : abnormal heap fetches after VACUUM FULL

2014-02-16 Thread Jim Nasby

On 1/24/14, 3:52 PM, Jaime Casanova wrote:

On Tue, Dec 3, 2013 at 11:25 AM, Bruce Momjian br...@momjian.us wrote:


Is everyone else OK with this approach?  Updated patch attached.


Hi,

I started to look at this patch and I found that it fails an assertion
as soon as you run a VACUUM FULL after a lazy VACUUM even if those are
on unrelated relations. For example in an assert-enabled build with
the regression database run:

VACUUM customer;
[... insert here whatever commands you like or nothing at all ...]
VACUUM FULL tenk1;


Is anyone else confused/concerned that regression testing didn't pick this up? 
The vacuum.sql test does not test lazy vacuum at all, and I can't seem to find 
any other tests that test lazy vacuum either...
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] HBA files w/include support?

2014-02-16 Thread Jim Nasby

On 2/14/14, 8:36 AM, Stephen Frost wrote:

* Bruce Momjian (br...@momjian.us) wrote:

In an ideal world we would have a tool where you could plug in a
username, database, IP address, and test pg_hba.conf file and it would
report what line is matched.


That's not a bad idea, but we don't expose the logic that figures that
out today..  It would, perhaps, not be horrible to duplicate it, but
then we'd need to make sure that we update both places if it ever
changes (not that it's changed much in oh-so-many-years).  Perhaps
another candidate to be a GSoC project.


Stupid question... is there a reason we couldn't use the same code for both?

BTW, I'm not sure that SQL would be the appropriate API for this testing; but presumably
it wouldn't be hard to add functionality to the wire protocol to support the case of
"hypothetically, if I were to attempt a connection that looks like this, what would
happen?"
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] HBA files w/include support?

2014-02-16 Thread Jim Nasby

On 2/14/14, 1:06 PM, Jeff Janes wrote:

On Fri, Feb 14, 2014 at 6:33 AM, Bruce Momjian br...@momjian.us wrote:

On Fri, Feb 14, 2014 at 03:28:23AM -0500, Stephen Frost wrote:
  Bruce,

  Having @include and directory.d-style capabilities for pg_hba.conf *and*
  pg_ident.conf would make managing larger environments much better.
  There has been some talk about providing those capabilities via tables
  in the catalog, but I'm not aware of anyone working on it and it'd
  certainly be quite a bit more work than adding include/dir.d options.

Do we want a TODO for this?


If we are assembling a wish-list, I've often wanted the opposite of an include. 
 I want the ability to encapsulate the contents of pg_hba.conf directly into 
postgresql.conf.  So instead of giving a filename to hba_file, optionally give 
a multi-lined string with some kind of here-doc like mechanism, or something 
like that.

When I set up a forked dev environment and then eventually want to compare the 
diverged dev setup back to production, I often forget to compare the 
pg_hba.conf file.


So is this just to avoid having to diff 2 files instead of one?

On its face, I'm not seeing the value of putting pg_hba.conf inside 
postgresql.conf... what am I missing?
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] HBA files w/include support?

2014-02-16 Thread Jim Nasby

On 2/14/14, 10:14 AM, Andres Freund wrote:

I was asking for use-cases so we could figure out what's the right thing;-)

The argument about wanting to assemble a pg_hba file from separately
managed configuration pieces seems to have some merit, but the weak
spot there is how do you define the search order?  Or are you planning
to just cross your fingers and hope it doesn't matter too much?

The usual solution is to prepend a numeric prefix guaranteeing the
search order. 00 is sysadmin stuff, 10 replication, 20 database specific
or somesuch. I think most admins using automated tools to manage bigger
configuration files by using some .d config directory already know how
to deal with that problem.


Would the inclusion of the entire directory be done via a single #include (or 
whatever syntax) directive in pg_hba.conf?

I think that's probably OK. But if we're talking about something like "hey, if
there's a pg_hba.d directory then magically slurp that in", that's far less useful
and a much bigger foot-gun. (It also wouldn't provide any value for what Jerry (the op)
needs).


To summarize, here's what I've seen on this discussion:

- People seem to generally be in favor of the idea of includes, though it's not
completely clear if people want specific "include file X at this point in the ruleset" or
something more nebulous.
- It would be useful to have a mechanism for testing a pg_hba.conf file.
- It would also be useful for denied connections to log the actual line/file 
that denied the connection.
- This would be a good GSoC project.
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] HBA files w/include support?

2014-02-16 Thread Stephen Frost
* Jim Nasby (j...@nasby.net) wrote:
 On 2/14/14, 8:36 AM, Stephen Frost wrote:
 * Bruce Momjian (br...@momjian.us) wrote:
 In an ideal world we would have a tool where you could plug in a
 username, database, IP address, and test pg_hba.conf file and it would
 report what line is matched.
 
 That's not a bad idea, but we don't expose the logic that figures that
 out today..  It would, perhaps, not be horrible to duplicate it, but
 then we'd need to make sure that we update both places if it ever
 changes (not that it's changed much in oh-so-many-years).  Perhaps
 another candidate to be a GSoC project.
 
 Stupid question... is there a reason we couldn't use the same code for both?

It'd just be a matter of shifting things around to make that work.  I'm
not against it, but this code is hardly of general or common use.

 BTW, I'm not sure that SQL would be the appropriate API for this testing; but
 presumably it wouldn't be hard to add functionality to the wire protocol to
 support the case of "hypothetically, if I were to attempt a connection that
 looks like this, what would happen?"

Well, we have that, and it's just "do it and you'll see".  Making that
easier to determine would have to be done post-authentication anyway,
lest we make it easier for would-be attackers, and at that point I'm not
sure that there's much benefit in having something in the protocol for
this rather than just a handy SQL function, which people who care about
these things are probably going to be pretty familiar with anyway...

Thanks,

Stephen




Re: [HACKERS] HBA files w/include support?

2014-02-16 Thread Stephen Frost
* Jim Nasby (j...@nasby.net) wrote:
 Would the inclusion of the entire directory be done via a single #include (or 
 whatever syntax) directive in pg_hba.conf?

Not sure we've even figured that out yet, but probably.

 I think that's probably OK. But if we're talking about something like "hey,
 if there's a pg_hba.d directory then magically slurp that in", that's far
 less useful and a much bigger foot-gun. (It also wouldn't provide any value
 for what Jerry (the op) needs).

I agree that it's best to have it be explicit, though the packagers may
go ahead and set things up such that a pg_hba.d directory exists by
default on their distribution.

 To summarize, here's what I've seen on this discussion:
 
 - People seem to generally be in favor of the idea of includes, though it's
 not completely clear if people want specific "include file X at this point in
 the ruleset" or something more nebulous.

My thought would be to support both individual files and directories,
where files in a directory are included in C/POSIX lexical order. 

 - It would be useful to have a mechanism for testing a pg_hba.conf file.

Agreed.

 - It would also be useful for denied connections to log the actual line/file 
 that denied the connection.

Agreed- in the postmaster log, of course.  We would not change the
response to the client.

 - This would be a good GSoC project.

That's my 2c on it at least.

Thanks,

Stephen




Re: [HACKERS] HBA files w/include support?

2014-02-16 Thread Jeff Janes
On Sunday, February 16, 2014, Jim Nasby j...@nasby.net wrote:

 On 2/14/14, 1:06 PM, Jeff Janes wrote:

 On Fri, Feb 14, 2014 at 6:33 AM, Bruce Momjian br...@momjian.us wrote:

 On Fri, Feb 14, 2014 at 03:28:23AM -0500, Stephen Frost wrote:
   Bruce,

   Having @include and directory.d-style capabilities for pg_hba.conf
 *and*
   pg_ident.conf would make managing larger environments much better.
   There has been some talk about providing those capabilities via
 tables
   in the catalog, but I'm not aware of anyone working on it and it'd
   certainly be quite a bit more work than adding include/dir.d
 options.

 Do we want a TODO for this?


 If we are assembling a wish-list, I've often wanted the opposite of an
 include.  I want the ability to encapsulate the contents of pg_hba.conf
 directly into postgresql.conf.  So instead of giving a filename to
 hba_file, optionally give a multi-lined string with some kind of here-doc
 like mechanism, or something like that.

 When I set up a forked dev environment and then eventually want to
 compare the diverged dev setup back to production, I often forget to
 compare the pg_hba.conf file.


 So is this just to avoid having to diff 2 files instead of one?


It is not so much doing two diffs, as remembering to do two diffs, that I
would like to avoid.  pg_hba.conf is very easy to forget about.  I guess if
I spent more time worrying about it, then that is the one I would remember
and postgresql.conf is the one I'd forget.

Cheers,

Jeff


Re: [HACKERS] Auto-tuning work_mem and maintenance_work_mem

2014-02-16 Thread Robert Haas
On Thu, Feb 13, 2014 at 3:34 PM, Bruce Momjian br...@momjian.us wrote:
 On Fri, Oct 11, 2013 at 03:39:51PM -0700, Kevin Grittner wrote:
 Josh Berkus j...@agliodbs.com wrote:
  On 10/11/2013 01:11 PM, Bruce Momjian wrote:
  In summary, I think we need to:
 
  *  decide on new defaults for work_mem and maintenance_work_mem
  *  add an initdb flag to allow users/packagers to set shared_buffers?
  *  add an autovacuum_work_mem setting?
  *  change the default for temp_buffers?
 
  If we're changing defaults, bgwriter_lru_maxpages and vacuum_cost_limit
  could also use a bump; those thresholds were set for servers with < 1GB
  of RAM.

 +1 on those.

 Also, I have often had to bump cpu_tuple_cost into the 0.03 to 0.05
 range to get a good plan.  In general, this makes the exact
 settings of *_page_cost less fussy, and I have hit situations where
 I was completely unable to get a good plan to emerge without
 bumping cpu_tuple_cost relative to the other cpu costs.  I know that
 it's possible to engineer a workload that shows any particular cost
 adjustment to make things worse, but in real-life production
 environments I have never seen an increase in this range make plan
 choice worse.

 So, would anyone like me to create patches for any of these items before
 we hit 9.4 beta?  We have added autovacuum_work_mem, and increasing
 work_mem and maintenance_work_mem by 4x is a simple operation.  Not sure
 about the others.  Or do we just keep this all for 9.5?

I don't think anyone objected to increasing the defaults for work_mem
and maintenance_work_mem by 4x, and a number of people were in favor,
so I think we should go ahead and do that.  If you'd like to do the
honors, by all means!

The current bgwriter_lru_maxpages value limits the background writer
to a maximum of 4MB/s.  If one imagines shared_buffers = 8GB, that
starts to seem rather low, but I don't have a good feeling for what a
better value would be.

The current vacuum cost delay settings limit autovacuum to about
2.6MB/s.  I am inclined to think we need a rather large bump there,
like 10x, but maybe it would be more prudent to do a smaller bump,
like say 4x, to avoid changing the default behavior too dramatically
between releases.  IOW, I guess I'm proposing raising
vacuum_cost_limit from 200 to 800.
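
For reference, both figures follow from the stock defaults (a back-of-the-envelope
calculation, assuming 8 kB pages):

  bgwriter:   bgwriter_lru_maxpages (100) * (1000 ms / bgwriter_delay (200 ms))
              = 500 pages/s * 8 kB ~ 4 MB/s

  autovacuum: vacuum_cost_limit (200) * (1000 ms / autovacuum_vacuum_cost_delay (20 ms))
              = 10,000 cost units/s
              worst-case page = vacuum_cost_page_miss (10) + vacuum_cost_page_dirty (20) = 30
              10,000 / 30 ~ 333 pages/s * 8 kB ~ 2.6 MB/s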

I don't really know about cpu_tuple_cost.  Kevin's often advocated
raising it, but I haven't heard anyone else advocate for that.  I
think we need data points from more people to know whether or not
that's a good idea in general.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Auto-tuning work_mem and maintenance_work_mem

2014-02-16 Thread Josh Berkus
On 02/16/2014 09:26 PM, Robert Haas wrote:
 I don't really know about cpu_tuple_cost.  Kevin's often advocated
 raising it, but I haven't heard anyone else advocate for that.  I
 think we need data points from more people to know whether or not
 that's a good idea in general.

In 10 years of tuning PostgreSQL professionally, I still don't have a
mathematical model for the interaction of the various *_cost parameters
with the speeds of CPU, RAM and IO.   If someone else has one, please
post it so that we can make some intelligent decisions on defaults.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [bug fix] pg_ctl always uses the same event source

2014-02-16 Thread Amit Kapila
On Sat, Feb 1, 2014 at 12:31 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 I think it's just a very minor coding style thing, so I am marking this patch 
 as
 Ready For Committer.

I could see that this patch has been marked as Needs Review in the CF app,
suggesting that it should be rejected based on Tom's rejection in the mail below:
http://www.postgresql.org/message-id/3315.1390836...@sss.pgh.pa.us

If I understand correctly, that objection was about changing the default Event
Source name; the patch now doesn't contain that change, it's
just a bug fix for letting pg_ctl know the non-default event source
set by the user.

Please clarify if I misunderstood something, else this should be changed
to Ready For Committer.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Draft release notes up for review

2014-02-16 Thread Josh Berkus
On 02/16/2014 03:41 PM, Tom Lane wrote:
 Draft release notes for 9.3.3 are committed and can be read at
 http://www.postgresql.org/docs/devel/static/release-9-3-3.html
 Any comments before I start transposing them into the back branches?

Major:

Do we have an explanation of what a multixact is, anywhere, so that we
can link it?

Minor:

ECPG or ecpg?  Pick one or the other.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] CREATE FOREIGN TABLE ( ... LIKE ... )

2014-02-16 Thread David Fetter
On Sat, Feb 15, 2014 at 03:14:03PM +0100, Andres Freund wrote:
 On 2014-01-31 18:16:18 +0100, Vik Fearing wrote:
  On 01/25/2014 06:25 AM, David Fetter wrote:
   Please find attached the next rev :)
  
  This version looks committable to me, so I am marking it as such.
 
 This doesn't contain a single regression test, I don't see how that's
 ok. Marking as waiting on author.

It now contains regression tests.  Re-marking.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 38c6cf8..eab48fe 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -72,6 +72,20 @@ CREATE FOREIGN TABLE ft2 (
c8 user_enum
 ) SERVER loopback;
 ALTER FOREIGN TABLE ft2 DROP COLUMN cx;
+CREATE FOREIGN TABLE ft_tables (
+LIKE information_schema.tables
+) SERVER loopback
+OPTIONS (
+schema_name 'information_schema',
+table_name 'tables'
+);
+CREATE FOREIGN TABLE ft_columns (
+LIKE information_schema.columns
+) SERVER loopback
+OPTIONS (
+schema_name 'information_schema',
+table_name 'columns'
+);
 -- ===
 -- tests for validator
 -- ===
@@ -113,12 +127,14 @@ ALTER FOREIGN TABLE ft2 OPTIONS (schema_name 'S 1', 
table_name 'T 1');
 ALTER FOREIGN TABLE ft1 ALTER COLUMN c1 OPTIONS (column_name 'C 1');
 ALTER FOREIGN TABLE ft2 ALTER COLUMN c1 OPTIONS (column_name 'C 1');
 \det+
- List of foreign tables
- Schema | Table |  Server  |  FDW Options  | 
Description 
-+---+--+---+-
- public | ft1   | loopback | (schema_name 'S 1', table_name 'T 1') | 
- public | ft2   | loopback | (schema_name 'S 1', table_name 'T 1') | 
-(2 rows)
+ List of foreign tables
+ Schema |   Table|  Server  |   FDW Options
| Description 
+++--+--+-
+ public | ft1| loopback | (schema_name 'S 1', table_name 'T 1')
| 
+ public | ft2| loopback | (schema_name 'S 1', table_name 'T 1')
| 
+ public | ft_columns | loopback | (schema_name 'information_schema', 
table_name 'columns') | 
+ public | ft_tables  | loopback | (schema_name 'information_schema', 
table_name 'tables')  | 
+(4 rows)
 
 -- Now we should be able to run ANALYZE.
 -- To exercise multiple code paths, we use local stats on ft1
@@ -231,13 +247,25 @@ SELECT * FROM ft1 t1 WHERE t1.c1 = 101 AND t1.c6 = '1' 
AND t1.c7 = '1';
  101 |  1 | 00101 | Fri Jan 02 00:00:00 1970 PST | Fri Jan 02 00:00:00 1970 | 
1  | 1  | foo
 (1 row)
 
--- aggregate
+-- aggregates
 SELECT COUNT(*) FROM ft1 t1;
  count 
 ---
   1000
 (1 row)
 
+SELECT t.table_name, COUNT(*)
+FROM ft_tables t JOIN ft_columns c USING (table_catalog, table_schema, 
table_name)
+WHERE t.table_type = 'FOREIGN TABLE'
+GROUP BY t.table_name;
+ table_name | count 
++---
+ ft_columns |44
+ ft2| 8
+ ft_tables  |12
+ ft1| 8
+(4 rows)
+
 -- join two tables
 SELECT t1.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t1.c1 
OFFSET 100 LIMIT 10;
  c1  
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index ce8bb75..c5d2fde 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -81,6 +81,22 @@ CREATE FOREIGN TABLE ft2 (
 ) SERVER loopback;
 ALTER FOREIGN TABLE ft2 DROP COLUMN cx;
 
+CREATE FOREIGN TABLE ft_tables (
+LIKE information_schema.tables
+) SERVER loopback
+OPTIONS (
+schema_name 'information_schema',
+table_name 'tables'
+);
+
+CREATE FOREIGN TABLE ft_columns (
+LIKE information_schema.columns
+) SERVER loopback
+OPTIONS (
+schema_name 'information_schema',
+table_name 'columns'
+);
+
 -- ===
 -- tests for validator
 -- ===
@@ -145,8 +161,12 @@ SELECT * FROM ft1 WHERE false;
 -- with WHERE clause
 EXPLAIN (VERBOSE, COSTS false) SELECT * FROM ft1 t1 WHERE t1.c1 = 101 AND 
t1.c6 = '1' AND t1.c7 = '1';
 SELECT * FROM ft1 t1 WHERE t1.c1 = 101 AND t1.c6 = '1' AND t1.c7 = '1';
--- aggregate
+-- aggregates
 SELECT COUNT(*) FROM ft1 t1;
+SELECT t.table_name, COUNT(*)
+FROM ft_tables t JOIN 

Re: [HACKERS] Auto-tuning work_mem and maintenance_work_mem

2014-02-16 Thread Gavin Flower

On 17/02/14 15:26, Robert Haas wrote:

On Thu, Feb 13, 2014 at 3:34 PM, Bruce Momjian br...@momjian.us wrote:

On Fri, Oct 11, 2013 at 03:39:51PM -0700, Kevin Grittner wrote:

Josh Berkus j...@agliodbs.com wrote:

On 10/11/2013 01:11 PM, Bruce Momjian wrote:

In summary, I think we need to:

*  decide on new defaults for work_mem and maintenance_work_mem
*  add an initdb flag to allow users/packagers to set shared_buffers?
*  add an autovacuum_work_mem setting?
*  change the default for temp_buffers?

If we're changing defaults, bgwriter_lru_maxpages and vacuum_cost_limit
could also use a bump; those thresholds were set for servers with < 1GB
of RAM.

+1 on those.

Also, I have often had to bump cpu_tuple_cost into the 0.03 to 0.05
range to get a good plan.  In general, this makes the exact
settings of *_page_cost less fussy, and I have hit situations where
I was completely unable to get a good plan to emerge without
bumping cpu_tuple_cost relative to the other cpu costs.  I know that
it's possible to engineer a workload that shows any particular cost
adjustment to make things worse, but in real-life production
environments I have never seen an increase in this range make plan
choice worse.

So, would anyone like me to create patches for any of these items before
we hit 9.4 beta?  We have added autovacuum_work_mem, and increasing
work_mem and maintenance_work_mem by 4x is a simple operation.  Not sure
about the others.  Or do we just keep this all for 9.5?

I don't think anyone objected to increasing the defaults for work_mem
and maintenance_work_mem by 4x, and a number of people were in favor,
so I think we should go ahead and do that.  If you'd like to do the
honors, by all means!

The current bgwriter_lru_maxpages value limits the background writer
to a maximum of 4MB/s.  If one imagines shared_buffers = 8GB, that
starts to seem rather low, but I don't have a good feeling for what a
better value would be.

The current vacuum cost delay settings limit autovacuum to about
2.6MB/s.  I am inclined to think we need a rather large bump there,
like 10x, but maybe it would be more prudent to do a smaller bump,
like say 4x, to avoid changing the default behavior too dramatically
between releases.  IOW, I guess I'm proposing raising
vacuum_cost_limit from 200 to 800.

I don't really know about cpu_tuple_cost.  Kevin's often advocated
raising it, but I haven't heard anyone else advocate for that.  I
think we need data points from more people to know whether or not
that's a good idea in general.

Processors have been getting faster, relative to spinning rust, over the 
years.  So it puzzles me why anybody would want to raise the 
cpu_tuple_cost!  Possibly the various costs should change if the 
database is on SSDs?  Of course, I have the implicit assumption that 
cost factors like 'cpu_tuple_cost' have more than just a vague relation 
to the semantics implied by their naming!



It would be good if we can get some clarity on what these various cost 
factors are actually meant to do and how they relate to each other.



Cheers,
Gavin




Re: [HACKERS] Retain dynamic shared memory segments for postmaster lifetime

2014-02-16 Thread Kyotaro HORIGUCHI
Thank you for letting me know of that.

 Using MSVC.
 We have gendef.pl which can do it.

Mmm.. My eyes skipped over it. Everything became clear for
me. Thank you.

 For example, in the Postgres project properties, under
 Configuration Properties->Build Events->Pre-Link Event, there
 is a Command Line like the one below which can do the required work.
 perl src\tools\msvc\gendef.pl Debug\postgres x64

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

