ITAGAKI Takahiro [EMAIL PROTECTED] wrote
I'm interested in it, with which we could improve responsiveness during
checkpoints. Though it is a Linux-specific system call, we could use
the combination of mmap() and msync() instead of it; I mean we can use
mmap only to flush dirty pages, not
I would like to see some checking of this, though. Currently
I'm doing testing of PostgreSQL under very large numbers of
connections (2000+) and am finding that there's a huge volume
of xlog output ... far more than
comparable RDBMSes. So I think we are logging stuff we
don't really
Michael Fuhr wrote:
On Sun, Jun 18, 2006 at 07:18:07PM -0600, Michael Fuhr wrote:
Maybe I'm misreading the packet, but I think the query is for
''kaltenbrunner.cc (two single quotes followed by kaltenbrunner.cc)
Correction: ''.kaltenbrunner.cc
yes that is exactly the issue - the postmaster
Great minds think alike ;-) ... I just committed exactly that protocol.
I believe it is correct, because AFAICS there are only four possible
risk cases:
Congrats !
For general culture you might be interested in reading this :
Andrew Dunstan wrote:
Tom Lane wrote:
Anyway, the tail end of the trace
shows it repeatedly sending off a UDP packet and getting practically the
same data back:
I'm not too up on what the DNS protocol looks like on-the-wire, but I'll
bet this is it. I think it's trying to look up
Qingqing Zhou [EMAIL PROTECTED] wrote:
I'm interested in it, with which we could improve responsiveness during
checkpoints. Though it is a Linux-specific system call, we could use
the combination of mmap() and msync() instead of it; I mean we can use
mmap only to flush dirty pages, not
On Mon, 2006-06-19 at 15:32 +0800, Qingqing Zhou wrote:
ITAGAKI Takahiro [EMAIL PROTECTED] wrote
I'm interested in it, with which we could improve responsiveness during
checkpoints. Though it is a Linux-specific system call, we could use
the combination of mmap() and msync() instead
Stefan Kaltenbrunner wrote:
Andrew Dunstan wrote:
Why are we actually looking up anything? Just so we can bind to a
listening socket?
Anyway, maybe the box needs a lookup line in its /etc/resolv.conf to
direct it to use files first, something like
lookup file bind
Stefan, can you look
Giampaolo,
On Sun, Jun 18, 2006 at 01:26:21AM +0200, Giampaolo Tomassoni wrote:
Or... Can I put a custom variable in pgsql.conf?
Like that you mean?
custom_variable_classes = 'identify'  # list of custom variable classnames
identify.id = 42
template1=# show identify.id;
Andrew Dunstan [EMAIL PROTECTED] writes:
The question isn't whether it succeeds, it's how long it takes to
succeed. When I increased the pg_regress timeout it actually went
through the whole regression test happily. I suspect we have 2 things
eating up the 60s timeout here: loading the
On Mon, Jun 19, 2006 at 09:21:21AM -0400, Tom Lane wrote:
Of course the $64 question is *why* is 8.0 trying to resolve that name,
particularly seeing that the later branches apparently aren't.
The formatting of the message suggests it is a gethostbyname('')
doing it. Did any quoting rules
Might it not be a win to also store per backend global values in the
shared memory segment? Things like time of last command, number of
transactions executed in this backend, backend start time and other
values that are fixed-size?
I'm including backend start time, command
Martijn van Oosterhout wrote:
On Mon, Jun 19, 2006 at 09:21:21AM -0400, Tom Lane wrote:
Of course the $64 question is *why* is 8.0 trying to resolve that name,
particularly seeing that the later branches apparently aren't.
The formatting of the message suggests it is a gethostbyname('')
Giampaolo,
On Sun, Jun 18, 2006 at 01:26:21AM +0200, Giampaolo Tomassoni wrote:
Or... Can I put a custom variable in pgsql.conf?
Like that you mean?
custom_variable_classes = 'identify'  # list of custom variable classnames
identify.id = 42
template1=# show
Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
I tcpdump'd the dns-traffic on that box during a postmaster startup and
it's definitely trying to look up ''.kaltenbrunner.cc a lot of times.
I just strace'd postmaster start on a Fedora box and can see nothing
corresponding. Since this is a make
Oh, I think I see the problem:
8.0 pg_regress:
if [ $unix_sockets = no ]; then
postmaster_options=$postmaster_options -c listen_addresses=$hostname
else
postmaster_options=$postmaster_options -c listen_addresses=''
fi
8.1 pg_regress:
if [ $unix_sockets = no ];
Stefan Kaltenbrunner wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
The question isn't whether it succeeds, it's how long it takes to
succeed. When I increased the pg_regress timeout it actually went
through the whole regression test happily. I suspect we have 2
I wrote:
8.0 pg_regress:
postmaster_options=$postmaster_options -c listen_addresses=''
8.1 pg_regress:
postmaster_options=$postmaster_options -c listen_addresses=
and in fact here's the commit that changed that:
2005-06-19 22:26 tgl
* src/test/regress/pg_regress.sh:
Tom Lane wrote:
Oh, I think I see the problem:
8.0 pg_regress:
if [ $unix_sockets = no ]; then
postmaster_options=$postmaster_options -c listen_addresses=$hostname
else
postmaster_options=$postmaster_options -c listen_addresses=''
fi
8.1 pg_regress:
if [
* reader's read starts before and ends after writer's update: reader
will certainly note a change in update counter.
* reader's read starts before and ends within writer's update: reader
will note a change in update counter.
* reader's read starts within and ends after writer's update:
...omitted...
yes, it's for contrib modules. but you can access it via SHOW so maybe it
makes sense to include it in pg_settings as well. Not for now but for the
future maybe...
I agree: it could be a useful feature.
giampaolo
Joachim
--
Joachim Wieland
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
The question isn't whether it succeeds, it's how long it takes to
succeed. When I increased the pg_regress timeout it actually went
through the whole regression test happily. I suspect we have 2 things
eating up the 60s timeout here:
Hi all,
I'm still fighting with the pltcl test that doesn't return the error message
when elog(ERROR) is called.
I've played with pltcl.c's pltcl_error and removed the calls to PG_TRY,
PG_CATCH and PG_END_TRY to prove that elog itself had a problem...
How can I check what happens in elog?
As of CVS tip, PG does up to four separate gettimeofday() calls upon the
arrival of a new client command. This is because the statement_timestamp,
stats_command_string, log_duration, and statement_timeout features each
independently save an indication of statement start time. Given what
we've
* Simon Riggs:
Other files are fsynced at checkpoint - always all dirty blocks in the
whole file.
Optionally, sync_file_range does not block the calling process, so
it's very easy to flush all files at once, which could in theory
reduce seeking overhead.
On Mon, Jun 19, 2006 at 11:17:48AM -0400, Tom Lane wrote:
instead? The effect would be that for an idle backend,
pg_stat_activity.query_start would reflect the start time of its latest
query instead of the time at which it finished the query. I can see
some use for the current behavior but I
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2006-06-19 at 15:32 +0800, Qingqing Zhou wrote:
ITAGAKI Takahiro [EMAIL PROTECTED] wrote
I'm interested in it, with which we could improve responsiveness during
checkpoints. Though it is a Linux-specific system call, we could use
On Mon, 2006-06-19 at 15:04 -0400, Greg Stark wrote:
We fsync the xlog at transaction commit, so only the leading edge needs
to be synced - would the call help there? Presumably the OS can already
locate all blocks associated with a particular file fairly quickly
without doing a full
Motivation:
--
The main goal for this Generic Monitoring Framework is to provide a common
interface for adding instrumentation points or probes to
Postgres so its behavior can be easily observed by developers and
administrators even in production systems. This framework will allow
On Mon, 2006-06-19 at 11:17 -0400, Tom Lane wrote:
As of CVS tip, PG does up to four separate gettimeofday() calls upon the
arrival of a new client command. This is because the statement_timestamp,
stats_command_string, log_duration, and statement_timeout features each
independently save an
Simon Riggs [EMAIL PROTECTED] writes:
Presumably you don't mean *every* client message, just stmt start ones.
At the moment I've got it setting the statement_timestamp on receipt of
any message that could lead to execution of user-defined code; that
includes Query, Parse, Bind, Execute,
Robert Lor [EMAIL PROTECTED] writes:
The main goal for this Generic Monitoring Framework is to provide a
common interface for adding instrumentation points or probes to
Postgres so its behavior can be easily observed by developers and
administrators even in production systems.
What is the
Hi, a quickie:
I was offline last week due to my ADSL line going down, so I was unable
to follow the discussions closely. I'll be back at the
non-transactional catalogs and relminxid discussions later (hopefully
tomorrow or on wednesday).
--
Alvaro Herrera
On a fine day, Mon, 2006-06-19 at 11:17, Tom Lane wrote:
As of CVS tip, PG does up to four separate gettimeofday() calls upon the
arrival of a new client command. This is because the statement_timestamp,
stats_command_string, log_duration, and statement_timeout features each
On Jun 19, 2006, at 4:40 PM, Tom Lane wrote:
Robert Lor [EMAIL PROTECTED] writes:
The main goal for this Generic Monitoring Framework is to provide a
common interface for adding instrumentation points or probes to
Postgres so its behavior can be easily observed by developers and
Tom Lane wrote:
Robert Lor [EMAIL PROTECTED] writes:
The main goal for this Generic Monitoring Framework is to provide a
common interface for adding instrumentation points or probes to
Postgres so its behavior can be easily observed by developers and
administrators even in production
On Mon, Jun 19, 2006 at 05:20:31PM -0400, Theo Schlossnagle wrote:
Heh. Syscall probes and FBT probes in Dtrace have zero overhead.
User-space probes do have overhead, but it is only a few instructions
(two I think). Basically, the probe points are replaced by illegal
instructions and
[EMAIL PROTECTED] (Robert Lor) writes:
For DTrace, probes can be enabled using a D script. When the probes
are not enabled, there is absolutely no performance hit whatsoever.
That seems inconceivable.
In order to have a way of deciding whether or not the probes are
enabled, there has *got* to
Theo Schlossnagle wrote:
Heh. Syscall probes and FBT probes in Dtrace have zero overhead.
User-space probes do have overhead, but it is only a few instructions
(two I think). Basically, the probe points are replaced by illegal
instructions and the kernel infrastructure for Dtrace will
I notice buildfarm member snake is unhappy:
The program postgres is needed by initdb but was not found in the
same directory as
C:/msys/1.0/local/build-farm/HEAD/pgsql.696/src/test/regress/tmp_check/install/usr/local/build-farm/HEAD/inst/bin/initdb.exe.
Check your installation.
I'm betting
On Jun 19, 2006, at 6:41 PM, Robert Lor wrote:
Theo Schlossnagle wrote:
Heh. Syscall probes and FBT probes in Dtrace have zero overhead.
User-space probes do have overhead, but it is only a few instructions
(two I think). Basically, the probe points are replaced by illegal
Jim C. Nasby wrote:
On Mon, Jun 19, 2006 at 05:20:31PM -0400, Theo Schlossnagle wrote:
Heh. Syscall probes and FBT probes in Dtrace have zero overhead.
User-space probes do have overhead, but it is only a few instructions
(two I think). Basically, the probe points are replaced by illegal
On Jun 19, 2006, at 7:39 PM, Mark Kirkwood wrote:
We will need to benchmark on FreeBSD to see if those comments about
overhead stand up to scrutiny there too.
I've followed the development of DTrace on FreeBSD and the design
approach is mostly identical to the Solaris one. This would mean
Greg Stark [EMAIL PROTECTED] writes:
Come to think of it I wonder whether there's anything to be gained by using
smaller files for tables. Instead of 1G files maybe 256M files or something
like that to reduce the hit of fsyncing a file.
Actually probably not. The weak part of our current
As I follow Relyea Mike's recent post about a possible memory leak, I think
that we lack a good way of identifying memory usage. Maybe we should also
remember __FILE__, __LINE__ etc. for better memory usage diagnosis when
TRACE_MEMORY is on?
Regards,
Qingqing
Hi folks,
I'm trying to use PAM auth on PostgreSQL, but I still cannot
get PAM auth to succeed (with PG813 and RHEL3).
pg_hba.conf has
host    pamtest    all    0.0.0.0/0    pam
/etc/pam.d/postgresql is
#%PAM-1.0
auth required pam_stack.so service=system-auth
I'm trying to determine why thrush has been failing on PG CVS HEAD for
the past few days. Could you try running the attached program on that
machine, and see what it prints? I suspect it will dump core :-(
Note: you might need to use -D_GNU_SOURCE to get it to compile at all.
Qingqing Zhou wrote:
As I follow Relyea Mike's recent post of possible memory leak, I think that
we lack a good way of identifying memory usage. Maybe we should also
remember __FILE__, __LINE__ etc. for better memory usage diagnosis when
TRACE_MEMORY is on?
Hmm, this would have been a
Alvaro Herrera [EMAIL PROTECTED] writes:
About the exact form we'd give the feature: maybe write each
allocation/freeing to a per-backend file, say /tmp/pgmem.pid. Also
memory context creation, destruction, reset. Having the __FILE__ and
__LINE__ on each operation would be a good tracing