Re: [HACKERS] txid failed epoch increment, again, aka 6291

2012-09-07 Thread Daniel Farina
On Thu, Sep 6, 2012 at 3:04 AM, Noah Misch  wrote:
> On Tue, Sep 04, 2012 at 09:46:58AM -0700, Daniel Farina wrote:
>> I might try to find the segments leading up to the overflow point and
>> try xlogdumping them to see what we can see.
>
> That would be helpful to see.
>
> Just to grasp at yet-flimsier straws, could you post (URL preferred, else
> private mail) the output of "objdump -dS" on your "postgres" executable?

https://dl.dropbox.com/s/444ktxbrimaguxu/txid-wrap-objdump-dS-postgres.txt.gz

Sure, it's a 9.0.6 with pg_cancel_backend by-same-role backported,
along with the standard Debian changes, so nothing all that
interesting should be going on beyond what goes on normally with
compilers on this platform.  I am also starting to grovel through this
assembly, although I don't have a ton of experience finding problems
this way.

To save you a tiny bit of time aligning the assembly with the C, this line

   c797f:   e8 7c c9 17 00  callq  244300 

seems to be the beginning of:

LWLockAcquire(XidGenLock, LW_SHARED);
checkPoint.nextXid = ShmemVariableCache->nextXid;
checkPoint.oldestXid = ShmemVariableCache->oldestXid;
checkPoint.oldestXidDB = ShmemVariableCache->oldestXidDB;
LWLockRelease(XidGenLock);


>> If there's anything to note about the workload, I'd say that it does
>> tend to make fairly pervasive use of long running transactions which
>> can span probably more than one checkpoint, and the txid reporting
>> functions, and a concurrency level of about 300 or so backends ... but
>> per my reading of the mechanism so far, it doesn't seem like any of
>> this should matter.
>
> Thanks for the details; I agree none of that sounds suspicious.
>
> After some further pondering and testing, this remains a mystery to me.  These
> symptoms imply a proper update of ControlFile->checkPointCopy.nextXid without
> having properly updated ControlFile->checkPointCopy.nextXidEpoch.  After
> recovery, only CreateCheckPoint() updates ControlFile->checkPointCopy at all.
> Its logic for doing so looks simple and correct.

Yeah.  I'm pretty flabbergasted that so much seems to be going right
while this goes wrong.

-- 
fdr


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_dump transaction's read-only mode

2012-09-07 Thread Kevin Grittner
Pavan Deolasee  wrote:
> I'm looking at the following code in pg_dump.c
> 
>   /*
>* Start transaction-snapshot mode transaction to dump
>* consistent data.
>*/
>   ExecuteSqlStatement(fout, "BEGIN");
> if (fout->remoteVersion >= 90100)
> {
>   if (serializable_deferrable)
> ExecuteSqlStatement(fout,
> "SET TRANSACTION ISOLATION LEVEL "
>  "SERIALIZABLE, READ ONLY, DEFERRABLE");
>   else
> ExecuteSqlStatement(fout,
> "SET TRANSACTION ISOLATION LEVEL "
> "REPEATABLE READ");
> }
> else
>   ExecuteSqlStatement(fout,
> "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
> 
> Is there a reason why we do not mark the RR transaction as READ ONLY
> above? I understand that, unlike in the case of a SERIALIZABLE
> transaction, it may not have any performance impact. But isn't it
> good practice anyway to guard against any unintended database
> modification while taking a dump, or a safeguard against any future
> optimizations for read-only transactions? More so because RR seems
> to be the default for pg_dump.
 
That makes sense to me.  The reason I didn't make that change when I
added the serializable special case to pg_dump was that it seemed
like a separate question; I didn't want to complicate an already big
patch with unnecessary changes to non-serializable transactions.
 
-Kevin




Re: [HACKERS] txid failed epoch increment, again, aka 6291

2012-09-07 Thread Noah Misch
On Fri, Sep 07, 2012 at 01:37:57AM -0700, Daniel Farina wrote:
> On Thu, Sep 6, 2012 at 3:04 AM, Noah Misch  wrote:
> > On Tue, Sep 04, 2012 at 09:46:58AM -0700, Daniel Farina wrote:
> >> I might try to find the segments leading up to the overflow point and
> >> try xlogdumping them to see what we can see.
> >
> > That would be helpful to see.
> >
> > Just to grasp at yet-flimsier straws, could you post (URL preferred, else
> > private mail) the output of "objdump -dS" on your "postgres" executable?
> 
> https://dl.dropbox.com/s/444ktxbrimaguxu/txid-wrap-objdump-dS-postgres.txt.gz

Thanks.  Nothing looks amiss there.

I've attached the test harness I used to try reproducing this.  It worked
through over 500 epoch increments without a hitch; clearly, it fails to
reproduce an essential aspect of your system.  Could you attempt to modify it
in the direction of better-resembling your production workload until it
reproduces the problem?

nm


burnxid.shar
Description: Unix shell archive



Re: [HACKERS] Draft release notes complete

2012-09-07 Thread Magnus Hagander
On Thu, Sep 6, 2012 at 1:06 AM, Andrew Dunstan  wrote:
>
> On 09/05/2012 06:13 PM, Peter Eisentraut wrote:
>>
>> On 8/29/12 11:52 PM, Andrew Dunstan wrote:
>>>> Why does this need to be tied into the build farm?  Someone can surely
>>>> set up a script that just runs the docs build at every check-in, like it
>>>> used to work.  What's being proposed now just sounds like a lot of
>>>> complication for little or no actual gain -- net loss in fact.
>>>
>>> It doesn't just build the docs. It makes the dist snapshots too.
>>
>> Thus making the turnaround time on a docs build even slower ... ?
>
>
>
> A complete run of this process takes less than 15 minutes. And as I have
> pointed out elsewhere that could be reduced substantially by skipping
> certain steps. It's as simple as changing the command line in the crontab
> entry.

Is it possible to run it only when the *docs* have changed, and not
when it's just a code commit? Meaning, is the detection smart enough
for that?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Issue observed in cascade standby setup and analysis for same

2012-09-07 Thread Amit Kapila
On Thursday, September 06, 2012 9:58 PM Josh Berkus wrote:
On 9/6/12 7:06 AM, Amit Kapila wrote:
>> 1. Set up postgresql-9.2beta2 on all hosts.

> Did you retest this with 9.2rc1?  Beta2 was a while ago 

  Tested on 9.2rc1: the problem occurs in case I use a database and backup from
9.2 Beta2. However, with a freshly created database and backup, it doesn't
occur.  This problem doesn't occur every time, so I will try further to
reproduce it on a 9.2 RC1 database as well.

With Regards,
Amit Kapila.





Re: [HACKERS] Draft release notes complete

2012-09-07 Thread Andrew Dunstan


On 09/07/2012 09:57 AM, Magnus Hagander wrote:

> On Thu, Sep 6, 2012 at 1:06 AM, Andrew Dunstan  wrote:
>>
>> A complete run of this process takes less than 15 minutes. And as I have
>> pointed out elsewhere that could be reduced substantially by skipping
>> certain steps. It's as simple as changing the command line in the crontab
>> entry.
>
> Is it possible to run it only when the *docs* have changed, and not
> when it's just a code commit? Meaning, is the detection smart enough
> for that?

There is a filter mechanism used in detecting if a run is needed, and in 
modern versions of the client (Release 4.7, one version later than 
guaibasaurus is currently using) it lets you have both include and 
exclude filters. For example, you could have this config setting:


trigger_include => qr(/doc/src/),

and it would then only match changed files in the docs tree.

It's a global mechanism, not per step. So it will run all the steps 
(other than those you have told it to skip) if it finds any changed 
files that match the filter conditions.


If you do that, you would probably want to have two animals: one doing 
docs builds only and running frequently, and one doing the dist stuff 
much less frequently.



cheers

andrew




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-07 Thread Tom Lane
Heikki Linnakangas  writes:
> Would socketpair(2) be simpler?

Attached is a revised version of the patch that uses socketpair(2).
This is definitely a lot less invasive --- the backend side of the
patch, in particular, is far shorter, and there are fewer portability
hazards since we're not trying to replace sockets with pipes.

I've not done anything yet about the potential security issues
associated with untrusted libpq connection strings.  I think this
is still at the proof-of-concept stage; in particular, it's probably
time to see if we can make it work on Windows before we worry more
about that.

I'm a bit tempted though to pull out and apply the portions of the
patch that replace libpq's assorted ad-hoc closesocket() calls with
a centralized pqDropConnection routine.  I think that's probably a good
idea independently of this feature.

regards, tom lane

diff --git a/src/backend/main/main.c b/src/backend/main/main.c
index 33c5a0a4e645624515016397da0011423d993c70..968959b85ef53aa2ec0ef8c5c5a9bc544291bfe5 100644
*** a/src/backend/main/main.c
--- b/src/backend/main/main.c
*** main(int argc, char *argv[])
*** 191,196 
--- 191,198 
  		AuxiliaryProcessMain(argc, argv);		/* does not return */
  	else if (argc > 1 && strcmp(argv[1], "--describe-config") == 0)
  		GucInfoMain();			/* does not return */
+ 	else if (argc > 1 && strncmp(argv[1], "--child=", 8) == 0)
+ 		ChildPostgresMain(argc, argv, get_current_username(progname)); /* does not return */
  	else if (argc > 1 && strcmp(argv[1], "--single") == 0)
  		PostgresMain(argc, argv, get_current_username(progname)); /* does not return */
  	else
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 73520a6ca2f4d3300f3d8939c0e9412064a911c3..5a51fa9cb9abb39b0ac02d21a25e5940159582ea 100644
*** a/src/backend/postmaster/postmaster.c
--- b/src/backend/postmaster/postmaster.c
*** ExitPostmaster(int status)
*** 4268,4273 
--- 4268,4350 
  	proc_exit(status);
  }
  
+ 
+ /*
+  * ChildPostgresMain - start a new-style standalone postgres process
+  *
+  * This may not belong here, but it does share a lot of code with ConnCreate
+  * and BackendInitialize.  Basically what it has to do is set up a
+  * MyProcPort structure and then hand off control to PostgresMain.
+  * Beware that not very much stuff is initialized yet.
+  *
+  * In the future it might be interesting to support a "standalone
+  * multiprocess" mode in which we have a postmaster process that doesn't
+  * listen for connections, but does supervise autovacuum, bgwriter, etc
+  * auxiliary processes.  So that's another reason why postmaster.c might be
+  * the right place for this.
+  */
+ void
+ ChildPostgresMain(int argc, char *argv[], const char *username)
+ {
+ 	Port	   *port;
+ 
+ 	/*
+ 	 * Fire up essential subsystems: error and memory management
+ 	 */
+ 	MemoryContextInit();
+ 
+ 	/*
+ 	 * Build a Port structure for the client connection
+ 	 */
+ 	if (!(port = (Port *) calloc(1, sizeof(Port))))
+ 		ereport(FATAL,
+ (errcode(ERRCODE_OUT_OF_MEMORY),
+  errmsg("out of memory")));
+ 
+ 	/*
+ 	 * GSSAPI specific state struct must exist even though we won't use it
+ 	 */
+ #if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+ 	port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+ 	if (!port->gss)
+ 		ereport(FATAL,
+ (errcode(ERRCODE_OUT_OF_MEMORY),
+  errmsg("out of memory")));
+ #endif
+ 
+ 	/* The file descriptor of the client socket is the argument of --child */
+ 	if (sscanf(argv[1], "--child=%d", &port->sock) != 1)
+ 		ereport(FATAL,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+  errmsg("invalid argument for --child: \"%s\"", argv[1])));
+ 
+ 	/* Default assumption about protocol to use */
+ 	FrontendProtocol = port->proto = PG_PROTOCOL_LATEST;
+ 
+ 	/* save process start time */
+ 	port->SessionStartTime = GetCurrentTimestamp();
+ 	MyStartTime = timestamptz_to_time_t(port->SessionStartTime);
+ 
+ 	/* set these to empty in case they are needed */
+ 	port->remote_host = "";
+ 	port->remote_port = "";
+ 
+ 	MyProcPort = port;
+ 
+ 	/*
+ 	 * We can now initialize libpq and then enable reporting of ereport errors
+ 	 * to the client.
+ 	 */
+ 	pq_init();	/* initialize libpq to talk to client */
+ 	whereToSendOutput = DestRemote;	/* now safe to ereport to client */
+ 
+ 	/* And pass off control to PostgresMain */
+ 	PostgresMain(argc, argv, username);
+ 
+ 	abort();	/* not reached */
+ }
+ 
+ 
  /*
   * sigusr1_handler - handle signal conditions from child processes
   */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f1248a851bf90188da8d3a7e8b61ac99bf78ebbd..c6a7de63c77f79c85be2170d4e5254b790f3fd5e 100644
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
*** process_postgres_switches(int argc, char
*** 3257,3264 
  	{
  		gucsource = PGC_S_ARGV; /* switches came from command line */
  
! 		/* Ignore the 

Re: [HACKERS] improving python3 regression test setup

2012-09-07 Thread Peter Eisentraut
On 9/6/12 8:56 PM, Alvaro Herrera wrote:
> Excerpts from Peter Eisentraut's message of Thu Sep 06 21:33:33 -0300 2012:
>> I have developed a patch to make the python3 regression test setup a bit
>> simpler.  Currently, we are making mangled copies of
>> plpython/{expected,sql} to plpython/python3/{expected,sql}, and run the
>> tests in plpython/python3.  This has the disadvantage that the
>> regression.diffs file, if any, ends up in plpython/python3, which is not
>> the normal location.  If we instead make the mangled copies in
>> plpython/{expected,sql}/python3/, we can run the tests from the normal
>> directory, regression.diffs ends up the normal place, and the pg_regress
>> invocation also becomes a lot simpler.  It's also more obvious at run
>> time what's going on, because the tests end up being named
>> "python3/something" in the test output.
> 
> Uhm .. wouldn't it be simpler if the sql files were in input/ and the
> expected in output/, and have pg_regress do the mangling?  Maybe there
> would need to be some tweak to pg_regress itself (such as the ability to
> pass mangling to be done), but that seems cleaner to me.

Maybe that could be made to work if pg_regress were passed in a script
to do the mangling.  (You don't want to hard-code the specific
requirements of plpython into pg_regress.)  But that seems like a lot of
extra work for no real additional benefit.




Re: [HACKERS] [BUGS] BUG #7521: Cannot disable WAL log while using pg_dump

2012-09-07 Thread Pavan Deolasee
(Adding -hackers. Did not realize it got dropped)

On Fri, Sep 7, 2012 at 11:25 PM, Gezeala M. Bacuño II wrote:

> On Fri, Sep 7, 2012 at 7:17 AM, Pavan Deolasee 
> wrote:
> >
> >
> > On Fri, Sep 7, 2012 at 7:00 PM, Marie Bacuno II 
> wrote:
> >>
> >>
> >> On Sep 7, 2012, at 2:19, Pavan Deolasee 
> wrote:
> >>
> >>
> >> > or have long running transactions ?
> >>
> >> Yes, but I don't think there were any when the snapshot was taken. Does
> >> the pg_xlog_location_diff() result from the latest and prior checkpoint
> >> upon start-up indicate the size of replayed changes?
> >>
> >
> > That's the amount of additional WAL generated after you started the
> > server.
> >
> >>
> >> >
> >> > BTW, the following query returns ~60GB. That's the amount of WAL
> >> > written after the server was started and at the end of pg_dump (I
> >> > don't think pg_xlog_location_diff() is available in the older
> >> > releases).
> >> >
> >> > postgres=# select pg_xlog_location_diff('4450/7A14F280',
> >> > '4441/5E681F38')/(2^30);
> >> >?column?
> >> > --
> >> > 60.1980484202504
> >>
> >> It'd be great to know what the WALs modified.
> >
> >
> > You would need something like xlogdump to decipher them. I quickly tried
> > this and it seems to work against the 8.4 version that you are running:
> > https://github.com/snaga/xlogdump
> >
> > Download the source code, compile it, and run it against one of the most
> > recent WAL files in the cluster against which you ran pg_dump. You would
> > need to set PATH to contain the pg_config of the server you are running.
> > Please post the output.
> >
> > Thanks,
> > Pavan
> >
> >
>
> Here you go:
>
> ## last WAL
> $ xlogdump -S /dbpool/data/pg_xlog/00014450007A
>
> /dbpool/data/pg_xlog/00014450007A:
>
> Unexpected page info flags 0003 at offset 0
> Skipping unexpected continuation record at offset 0
> ReadRecord: record with zero len at 17488/7A14F310
> Unexpected page info flags 0001 at offset 15
> Skipping unexpected continuation record at offset 15
> Unable to read continuation page?
>  ** maybe continues to next segment **
> ---
> TimeLineId: 1, LogId: 17488, LogSegment: 122
>
> Resource manager stats:
>   [0]XLOG  : 3 records, 120 bytes (avg 40.0 bytes)
>  checkpoint: 3, switch: 0, backup end: 0
>   [1]Transaction: 0 record, 0 byte (avg 0.0 byte)
>  commit: 0, abort: 0
>   [2]Storage   : 0 record, 0 byte (avg 0.0 byte)
>   [3]CLOG  : 0 record, 0 byte (avg 0.0 byte)
>   [4]Database  : 0 record, 0 byte (avg 0.0 byte)
>   [5]Tablespace: 0 record, 0 byte (avg 0.0 byte)
>   [6]MultiXact : 0 record, 0 byte (avg 0.0 byte)
>   [7]Reserved 7: 0 record, 0 byte (avg 0.0 byte)
>   [8]Reserved 8: 0 record, 0 byte (avg 0.0 byte)
>   [9]Heap2 : 2169 records, 43380 bytes (avg 20.0 bytes)
>   [10]Heap  : 0 record, 0 byte (avg 0.0 byte)
>  ins: 0, upd/hot_upd: 0/0, del: 0
>   [11]Btree : 0 record, 0 byte (avg 0.0 byte)
>   [12]Hash  : 0 record, 0 byte (avg 0.0 byte)
>   [13]Gin   : 0 record, 0 byte (avg 0.0 byte)
>   [14]Gist  : 0 record, 0 byte (avg 0.0 byte)
>   [15]Sequence  : 0 record, 0 byte (avg 0.0 byte)
>
> Backup block stats: 2169 blocks, 16551816 bytes (avg 7631.1 bytes)
>
>
I think both my theories are holding up. The Heap2 resource manager is
used only for vacuum freeze, lazy vacuum, or HOT prune. Given your access
pattern, I bet it's the third activity that is kicking in on your database.
You have many pages with dead tuples, and they are getting cleaned at the
first opportunity, which happens to be the pg_dump that is run immediately
after the server restart. This is seen by all 2169 WAL records in the file
being attributed to the Heap2 RM above.

What's additionally happening is that each of these records is on a
different heap page. The cleanup activity dirties those pages. Since each
of these pages is being dirtied for the first time after a recent
checkpoint and full_page_writes is turned on, the entire page is backed up
in the WAL record. You can see the exact number of backup blocks in the
stats above.

I don't think we have any mechanism to control or stop HOT from doing what
it wants to do, unless you are willing to run a modified server for this
purpose. But you can at least bring down the WAL volume by turning
full_page_writes off.

Thanks,
Pavan


Re: [HACKERS] [BUGS] BUG #7521: Cannot disable WAL log while using pg_dump

2012-09-07 Thread Gezeala M . Bacuño II
adding pgsql-bugs list in case OP posts back.

On Fri, Sep 7, 2012 at 11:29 AM, Pavan Deolasee
 wrote:
> (Adding -hackers. Did not realize it got dropped)
>
> I think both my theories are holding up. The Heap2 resource manager is
> used only for vacuum freeze, lazy vacuum, or HOT prune. Given your access
> pattern, I bet it's the third activity that is kicking in on your database.
> You have many pages with dead tuples, and they are getting cleaned at the
> first opportunity, which happens to be the pg_dump that is run immediately
> after the server restart. This is seen by all 2169 WAL records in the file
> being attributed to the Heap2 RM above.
>
> What's additionally happening is that each of these records is on a
> different heap page. The cleanup activity dirties those pages. Since each
> of these pages is being dirtied for the first time after a recent
> checkpoint and full_page_writes is turned on, the entire page is backed up
> in the WAL record. You can see the exact number of backup blocks in the
> stats above.
>
> I don't think we have any mechanism to control or stop HOT from doing what
> it wants to do, unless you are willing to run a modified server for this
> purpose. But you can at least bring down the WAL volume by turning
> full_page_writes off.
>
> Thanks,
> Pavan

Great. Finally got some light on this. I'll disable full_page_writes
on my next backup and will post back results tomorrow. Thanks.




Re: [HACKERS] txid failed epoch increment, again, aka 6291

2012-09-07 Thread Daniel Farina
On Fri, Sep 7, 2012 at 5:49 AM, Noah Misch  wrote:
> On Fri, Sep 07, 2012 at 01:37:57AM -0700, Daniel Farina wrote:
>> On Thu, Sep 6, 2012 at 3:04 AM, Noah Misch  wrote:
>> > On Tue, Sep 04, 2012 at 09:46:58AM -0700, Daniel Farina wrote:
>> >> I might try to find the segments leading up to the overflow point and
>> >> try xlogdumping them to see what we can see.
>> >
>> > That would be helpful to see.
>> >
>> > Just to grasp at yet-flimsier straws, could you post (URL preferred, else
>> > private mail) the output of "objdump -dS" on your "postgres" executable?
>>
>> https://dl.dropbox.com/s/444ktxbrimaguxu/txid-wrap-objdump-dS-postgres.txt.gz
>
> Thanks.  Nothing looks amiss there.
>
> I've attached the test harness I used to try reproducing this.  It worked
> through over 500 epoch increments without a hitch; clearly, it fails to
> reproduce an essential aspect of your system.  Could you attempt to modify it
> in the direction of better-resembling your production workload until it
> reproduces the problem?

Sure, I can mess around with it in our exact environment as well
(compilers, Xen, et al.).  We have not seen consistent reproduction
either -- most epochs seem to fail to increment (sample size: a few, but
more than three), but epoch incrementing has happened more than zero
times for sure.

I wonder if we can rope in this guy, who has the only other report of
this I've seen:

http://lists.pgfoundry.org/pipermail/skytools-users/2012-March/001601.html

So I'm CCing him.

He seems to have reproduced it in 9.1, but I haven't seen his
operating system information on my very brief skim of that thread.

-- 
fdr




Re: [HACKERS] Draft release notes complete

2012-09-07 Thread Alvaro Herrera
Excerpts from Andrew Dunstan's message of Fri Sep 07 13:50:44 -0300 2012:

> There is a filter mechanism used in detecting if a run is needed, and in 
> modern versions of the client (Release 4.7, one version later than 
> guaibasaurus is currently using) it lets you have both include and 
> exclude filters. For example, you could have this config setting:
> 
>  trigger_include => qr(/doc/src/),
> 
> and it would then only match changed files in the docs tree.
> 
> It's a global mechanism, not per step. So it will run all the steps 
> (other than those you have told it to skip) if it finds any files 
> changed that match the filter conditions.

Sounds good.

> If you do that you would probably want to have two animals, one doing 
> docs builds only and running frequently, one doing the dist stuff much 
> less frequently.

What seems to make the most sense to me is to have a separate work
directory for the buildfarm script to run, without setting up a whole
buildfarm animal.  That separate dir would build only the devel docs,
triggered only by changes in doc/src, and would not do anything else.
Thus we could leave guaibasaurus alone to do dist building.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-07 Thread Heikki Linnakangas

On 07.09.2012 10:49, Tom Lane wrote:

Heikki Linnakangas  writes:

Would socketpair(2) be simpler?


Attached is a revised version of the patch that uses socketpair(2).
This is definitely a lot less invasive --- the backend side of the
patch, in particular, is far shorter, and there are fewer portability
hazards since we're not trying to replace sockets with pipes.

I've not done anything yet about the potential security issues
associated with untrusted libpq connection strings.  I think this
is still at the proof-of-concept stage; in particular, it's probably
time to see if we can make it work on Windows before we worry more
about that.

I'm a bit tempted though to pull out and apply the portions of the
patch that replace libpq's assorted ad-hoc closesocket() calls with
a centralized pqDropConnection routine.  I think that's probably a good
idea independently of this feature.


Sounds good.

It's worth noting that now that libpq constructs the command line to 
execute "postgres --child= -D ", we'll be stuck with that set 
of arguments forever, because libpq needs to be able to talk to 
different versions. Or at least we'd need to teach libpq to check the 
version of the binary and act accordingly if we change that syntax. 
That's probably OK; I don't feel any pressure to change those command 
line arguments anyway.


- Heikki




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-07 Thread Tom Lane
Heikki Linnakangas  writes:
> It's worth noting that now that libpq constructs the command line to 
> execute "postgres --child= -D ", we'll be stuck with that set 
> of arguments forever, because libpq needs to be able to talk to 
> different versions. Or at least we'd need to teach libpq to check the 
> version of binary and act accordingly, if we change that syntax. That's 
> probably OK, I don't feel any pressure to change those command line 
> arguments anyway.

Yeah.  The -D syntax seems safe enough from here.  One thing that's on
my to-do list for this patch is to add a -v switch to set the protocol
version, so that we don't need to assume that libpq and backend have the
same default protocol version.

regards, tom lane




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-07 Thread Tom Lane
Heikki Linnakangas  writes:
> On 07.09.2012 10:49, Tom Lane wrote:
>> I'm a bit tempted though to pull out and apply the portions of the
>> patch that replace libpq's assorted ad-hoc closesocket() calls with
>> a centralized pqDropConnection routine.  I think that's probably a good
>> idea independently of this feature.

> Sounds good.

Done; here's the rebased patch.

regards, tom lane

diff --git a/src/backend/main/main.c b/src/backend/main/main.c
index 33c5a0a4e645624515016397da0011423d993c70..968959b85ef53aa2ec0ef8c5c5a9bc544291bfe5 100644
*** a/src/backend/main/main.c
--- b/src/backend/main/main.c
*** main(int argc, char *argv[])
*** 191,196 
--- 191,198 
  		AuxiliaryProcessMain(argc, argv);		/* does not return */
  	else if (argc > 1 && strcmp(argv[1], "--describe-config") == 0)
  		GucInfoMain();			/* does not return */
+ 	else if (argc > 1 && strncmp(argv[1], "--child=", 8) == 0)
+ 		ChildPostgresMain(argc, argv, get_current_username(progname)); /* does not return */
  	else if (argc > 1 && strcmp(argv[1], "--single") == 0)
  		PostgresMain(argc, argv, get_current_username(progname)); /* does not return */
  	else
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 73520a6ca2f4d3300f3d8939c0e9412064a911c3..5a51fa9cb9abb39b0ac02d21a25e5940159582ea 100644
*** a/src/backend/postmaster/postmaster.c
--- b/src/backend/postmaster/postmaster.c
*** ExitPostmaster(int status)
*** 4268,4273 
--- 4268,4350 
  	proc_exit(status);
  }
  
+ 
+ /*
+  * ChildPostgresMain - start a new-style standalone postgres process
+  *
+  * This may not belong here, but it does share a lot of code with ConnCreate
+  * and BackendInitialize.  Basically what it has to do is set up a
+  * MyProcPort structure and then hand off control to PostgresMain.
+  * Beware that not very much stuff is initialized yet.
+  *
+  * In the future it might be interesting to support a "standalone
+  * multiprocess" mode in which we have a postmaster process that doesn't
+  * listen for connections, but does supervise autovacuum, bgwriter, etc
+  * auxiliary processes.  So that's another reason why postmaster.c might be
+  * the right place for this.
+  */
+ void
+ ChildPostgresMain(int argc, char *argv[], const char *username)
+ {
+ 	Port	   *port;
+ 
+ 	/*
+ 	 * Fire up essential subsystems: error and memory management
+ 	 */
+ 	MemoryContextInit();
+ 
+ 	/*
+ 	 * Build a Port structure for the client connection
+ 	 */
+ 	if (!(port = (Port *) calloc(1, sizeof(Port))))
+ 		ereport(FATAL,
+ 				(errcode(ERRCODE_OUT_OF_MEMORY),
+ 				 errmsg("out of memory")));
+ 
+ 	/*
+ 	 * GSSAPI specific state struct must exist even though we won't use it
+ 	 */
+ #if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
+ 	port->gss = (pg_gssinfo *) calloc(1, sizeof(pg_gssinfo));
+ 	if (!port->gss)
+ 		ereport(FATAL,
+ 				(errcode(ERRCODE_OUT_OF_MEMORY),
+ 				 errmsg("out of memory")));
+ #endif
+ 
+ 	/* The file descriptor of the client socket is the argument of --child */
+ 	if (sscanf(argv[1], "--child=%d", &port->sock) != 1)
+ 		ereport(FATAL,
+ 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 				 errmsg("invalid argument for --child: \"%s\"", argv[1])));
+ 
+ 	/* Default assumption about protocol to use */
+ 	FrontendProtocol = port->proto = PG_PROTOCOL_LATEST;
+ 
+ 	/* save process start time */
+ 	port->SessionStartTime = GetCurrentTimestamp();
+ 	MyStartTime = timestamptz_to_time_t(port->SessionStartTime);
+ 
+ 	/* set these to empty in case they are needed */
+ 	port->remote_host = "";
+ 	port->remote_port = "";
+ 
+ 	MyProcPort = port;
+ 
+ 	/*
+ 	 * We can now initialize libpq and then enable reporting of ereport errors
+ 	 * to the client.
+ 	 */
+ 	pq_init();	/* initialize libpq to talk to client */
+ 	whereToSendOutput = DestRemote;	/* now safe to ereport to client */
+ 
+ 	/* And pass off control to PostgresMain */
+ 	PostgresMain(argc, argv, username);
+ 
+ 	abort();	/* not reached */
+ }
+ 
+ 
  /*
   * sigusr1_handler - handle signal conditions from child processes
   */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f1248a851bf90188da8d3a7e8b61ac99bf78ebbd..c6a7de63c77f79c85be2170d4e5254b790f3fd5e 100644
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
*** process_postgres_switches(int argc, char
*** 3257,3264 
  	{
  		gucsource = PGC_S_ARGV; /* switches came from command line */
  
! 		/* Ignore the initial --single argument, if present */
! 		if (argc > 1 && strcmp(argv[1], "--single") == 0)
  		{
  			argv++;
  			argc--;
--- 3257,3266 
  	{
  		gucsource = PGC_S_ARGV; /* switches came from command line */
  
! 		/* Ignore the initial --single or --child argument, if present */
! 		if (argc > 1 &&
! 			(strcmp(argv[1], "--single") == 0 ||
! 			 strncmp(argv[1], "--child=", 8) == 0))
  		{
  			argv++;
  			argc--;
*** PostgresMain(int argc, char *argv[],

Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-07 Thread Merlin Moncure
On Thu, Sep 6, 2012 at 12:56 PM, Jeff Davis  wrote:
> On Wed, 2012-09-05 at 17:03 -0400, Tom Lane wrote:
>> In general I think the selling point for such a feature would be "no
>> administrative hassles", and I believe that has to go not only for the
>> end-user experience but also for the application-developer experience.
>> If you have to manage checkpointing and vacuuming in the application,
>> you're probably soon going to look for another database.
>
> Maybe there could be some hooks (e.g., right after completing a
> statement) that see whether a vacuum or checkpoint is required? VACUUM
> can't be run in a transaction block[1], so there are some details to
> work out, but it might be a workable approach.

If it was me, I'd want finer grained control of if/when automatic
background optimization work happened.  Something like
DoBackgroundWork(int forThisManySeconds).  Of course, for that to
work, we'd need to have resumable vacuum.

I like the idea of keeping everything single threaded.

merlin





Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-07 Thread Andres Freund
On Friday, September 07, 2012 11:21:00 PM Merlin Moncure wrote:
> On Thu, Sep 6, 2012 at 12:56 PM, Jeff Davis  wrote:
> > On Wed, 2012-09-05 at 17:03 -0400, Tom Lane wrote:
> >> In general I think the selling point for such a feature would be "no
> >> administrative hassles", and I believe that has to go not only for the
> >> end-user experience but also for the application-developer experience.
> >> If you have to manage checkpointing and vacuuming in the application,
> >> you're probably soon going to look for another database.
> > 
> > Maybe there could be some hooks (e.g., right after completing a
> > statement) that see whether a vacuum or checkpoint is required? VACUUM
> > can't be run in a transaction block[1], so there are some details to
> > work out, but it might be a workable approach.
> 
> If it was me, I'd want finer grained control of if/when automatic
> background optimization work happened.  Something like
> DoBackgroundWork(int forThisManySeconds).  Of course, for that to
> work, we'd need to have resumable vacuum.
> 
> I like the idea of keeping everything single threaded.
To me this path seems to be the best way to never get the feature at all...

Andres
-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-07 Thread Jim Nasby

On 9/2/12 7:23 PM, Tom Lane wrote:
> 4. As coded, the backend assumes the incoming pipe is on its FD 0 and the
> outgoing pipe is on its FD 1.  This made the command line simple but I'm
> having second thoughts about it: if anything inside the backend tries to
> read stdin or write stdout, unpleasant things will happen.  It might be
> better to not reassign the pipe FD numbers.  In that case we'd have to
> pass them on the command line, so that the syntax would be something
> like "postgres --child 4,5 -D pgdata ...".


Would it be sufficient to just hard-code the FD's to use? I'm not sure why 
someone would need to change them, so long as we steer clear of 
STD(IN|OUT|ERR)...
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] build farm machine using mixed results

2012-09-07 Thread Andrew Dunstan


On 09/04/2012 08:51 PM, Andrew Dunstan wrote:
> On 09/04/2012 08:37 PM, Tom Lane wrote:
>> Andrew Dunstan  writes:
>>> Frankly, I have had enough failures of parallel make that I think doing
>>> this would generate a significant number of non-repeatable failures (I
>>> had one just the other day that took three invocations of make to get
>>> right). So I'm not sure doing this would advance us much, although I'm
>>> open to persuasion.
>> Really?  I routinely use -j4 for building, and it's been a long time
>> since I've seen failures.  I can believe that for instance "make check"
>> in contrib would have a problem running in parallel, but the build
>> process per se seems reliable enough from here.
>
> Both cases were vpath builds, which is what I usually use, if that's a
> useful data point.
>
> Maybe I run on lower level hardware than you do. I saw this again this
> afternoon after I posted the above. In both cases this was the machine
> that runs the buildfarm's crake. I'll try to get a handle on it.

Well, it looks like it's always failing on ecpg, with preproc.h not
being made in the right order. Here is the last bit of a make log
starting from when it starts on ecpg. This is pretty repeatable.


cheers

andrew


-

make -C ecpg all
make[3]: Entering directory 
`/home/pgl/npgl/vpath.testpar/src/interfaces/ecpg'

make -C include all
make[4]: Entering directory 
`/home/pgl/npgl/vpath.testpar/src/interfaces/ecpg/include'

make[4]: Nothing to be done for `all'.
make[4]: Leaving directory 
`/home/pgl/npgl/vpath.testpar/src/interfaces/ecpg/include'

make -C pgtypeslib all
make -C preproc all
make[4]: Entering directory 
`/home/pgl/npgl/vpath.testpar/src/interfaces/ecpg/pgtypeslib'
ccache gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute 
-Wformat-security -fno-strict-aliasing -fwrapv 
-fexcess-precision=standard -g -pthread  -D_REENTRANT -D_THREAD_SAFE 
-D_POSIX_PTHREAD_SEMANTICS -fpic -I../include 
-I/home/andrew/pgl/pg_head/src/interfaces/ecpg/include 
-I/home/andrew/pgl/pg_head/src/include/utils 
-I/home/andrew/pgl/pg_head/src/interfaces/libpq 
-I../../../../src/include -I/home/andrew/pgl/pg_head/src/include 
-D_GNU_SOURCE -I/usr/include/libxml2  -DSO_MAJOR_VERSION=3  -c -o 
numeric.o 
/home/andrew/pgl/pg_head/src/interfaces/ecpg/pgtypeslib/numeric.c -MMD 
-MP -MF .deps/numeric.Po
ccache gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute 
-Wformat-security -fno-strict-aliasing -fwrapv 
-fexcess-precision=standard -g -pthread  -D_REENTRANT -D_THREAD_SAFE 
-D_POSIX_PTHREAD_SEMANTICS -fpic -I../include 
-I/home/andrew/pgl/pg_head/src/interfaces/ecpg/include 
-I/home/andrew/pgl/pg_head/src/include/utils 
-I/home/andrew/pgl/pg_head/src/interfaces/libpq 
-I../../../../src/include -I/home/andrew/pgl/pg_head/src/include 
-D_GNU_SOURCE -I/usr/include/libxml2  -DSO_MAJOR_VERSION=3  -c -o 
common.o 
/home/andrew/pgl/pg_head/src/interfaces/ecpg/pgtypeslib/common.c -MMD 
-MP -MF .deps/common.Po
ccache gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute 
-Wformat-security -fno-strict-aliasing -fwrapv 
-fexcess-precision=standard -g -pthread  -D_REENTRANT -D_THREAD_SAFE 
-D_POSIX_PTHREAD_SEMANTICS -fpic -I../include 
-I/home/andrew/pgl/pg_head/src/interfaces/ecpg/include 
-I/home/andrew/pgl/pg_head/src/include/utils 
-I/home/andrew/pgl/pg_head/src/interfaces/libpq 
-I../../../../src/include -I/home/andrew/pgl/pg_head/src/include 
-D_GNU_SOURCE -I/usr/include/libxml2  -DSO_MAJOR_VERSION=3  -c -o 
datetime.o 
/home/andrew/pgl/pg_head/src/interfaces/ecpg/pgtypeslib/datetime.c -MMD 
-MP -MF .deps/datetime.Po
make[4]: Entering directory 
`/home/pgl/npgl/vpath.testpar/src/interfaces/ecpg/preproc'

make -C ../../../../src/port all
make[5]: Entering directory `/home/pgl/npgl/vpath.testpar/src/port'
make -C ../backend submake-errcodes
make[6]: Entering directory `/home/pgl/npgl/vpath.testpar/src/backend'
make[6]: Nothing to be done for `submake-errcodes'.
make[6]: Leaving directory `/home/pgl/npgl/vpath.testpar/src/backend'
make[5]: Leaving directory `/home/pgl/npgl/vpath.testpar/src/port'
'/usr/bin/perl' 
/home/andrew/pgl/pg_head/src/interfaces/ecpg/preproc/parse.pl 
/home/andrew/pgl/pg_head/src/interfaces/ecpg/preproc < 
/home/andrew/pgl/pg_head/src/interfaces/ecpg/preproc/../../../backend/parser/gram.y 
> preproc.y
ccache gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute 
-Wformat-security -fno-strict-aliasing -fwrapv 
-fexcess-precision=standard -g -pthread  -D_REENTRANT -D_THREAD_SAFE 
-D_POSIX_PTHREAD_SEMANTICS -fpic -I../include 
-I/home/andrew/pgl/pg_head/src/interfaces/ecpg/include 
-I/home/andrew/pgl/pg_head/src/include/utils 
-I/home/andrew/pgl/pg_head/src/interfaces/libpq 
-I../../../../src/include -I/home/andrew/pgl/pg_head/src/include 
-D_GNU_SOURCE 

Re: [HACKERS] build farm machine using mixed results

2012-09-07 Thread Tom Lane
Andrew Dunstan  writes:
> Well, it looks like it's always failing on ecpg, with preproc.h not 
> being made in the right order. Here is the last bit of a make log 
> starting from when it starts on ecpg. This is pretty repeatable.

Hmph.  I can't reproduce it at all on my Fedora 16 box.  What version
of make are you using?

regards, tom lane




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-07 Thread Gurjeet Singh
On Wed, Sep 5, 2012 at 10:29 PM, Andrew Dunstan  wrote:

> On 09/05/2012 10:14 PM, Tom Lane wrote:
>> The people who would be interested in this are currently using something
>> like SQLite within a single application program.
>
> Exactly. I think it's worth stating that this has a HUGE potential
> audience, and if we can get to this the deployment of Postgres could
> mushroom enormously. I'm really quite excited about it.

/me shares the feeling :)

-- 
Gurjeet Singh


Re: [HACKERS] build farm machine using mixed results

2012-09-07 Thread Andrew Dunstan


On 09/07/2012 08:43 PM, Tom Lane wrote:
> Andrew Dunstan  writes:
>> Well, it looks like it's always failing on ecpg, with preproc.h not
>> being made in the right order. Here is the last bit of a make log
>> starting from when it starts on ecpg. This is pretty repeatable.
> Hmph.  I can't reproduce it at all on my Fedora 16 box.  What version
> of make are you using?

$ make -v
GNU Make 3.82
Built for x86_64-redhat-linux-gnu


cheers

andrew




Re: [HACKERS] build farm machine using mixed results

2012-09-07 Thread Andrew Dunstan


On 09/07/2012 09:55 PM, Andrew Dunstan wrote:
> On 09/07/2012 08:43 PM, Tom Lane wrote:
>> Andrew Dunstan  writes:
>>> Well, it looks like it's always failing on ecpg, with preproc.h not
>>> being made in the right order. Here is the last bit of a make log
>>> starting from when it starts on ecpg. This is pretty repeatable.
>> Hmph.  I can't reproduce it at all on my Fedora 16 box.  What version
>> of make are you using?
>
> $ make -v
> GNU Make 3.82
> Built for x86_64-redhat-linux-gnu

OK, I just tried on a different F16 machine and it didn't happen. I
wonder what's different.


cheers

andrew







Re: [HACKERS] build farm machine using mixed results

2012-09-07 Thread Andrew Dunstan


On 09/07/2012 10:46 PM, Andrew Dunstan wrote:
> On 09/07/2012 09:55 PM, Andrew Dunstan wrote:
>> On 09/07/2012 08:43 PM, Tom Lane wrote:
>>> Andrew Dunstan  writes:
>>>> Well, it looks like it's always failing on ecpg, with preproc.h not
>>>> being made in the right order. Here is the last bit of a make log
>>>> starting from when it starts on ecpg. This is pretty repeatable.
>>> Hmph.  I can't reproduce it at all on my Fedora 16 box.  What version
>>> of make are you using?
>>
>> $ make -v
>> GNU Make 3.82
>> Built for x86_64-redhat-linux-gnu
>
> OK, I just tried on a different F16 machine and it didn't happen. I
> wonder what's different.

This seems totally stupid, but it happens when the path to the current
directory includes a cross-device symlink. If I cd following the link,
then this effect doesn't happen. Weird.


cheers

andrew




