It would be the most practical way for a DBA to monitor an application. But
it's not going to be convenient for clients like pgadmin or psql. Even a web
server may want to, for example, stream Ajax code that updates a progress bar
until the results arrive, and then stream the Ajax that displays them.
Andrew Dunstan [EMAIL PROTECTED] writes:
Unless it also lies on the echoed command line this seems an
unconvincing explanation. The seahorse log says:
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
I wrote:
I just committed a change to extract the paths via pg_config_paths.h.
If that doesn't fix it then I guess the next thing is to put in some
debug printout to show what values are really getting compiled in :-(
Seems that *did* fix it, which opens a whole new set of questions about how
Why make it so complicated?
There could be a guc to indicate that the client is interested in
progress updates. For the execution phase, elog(INFO,...) could be
emitted for each major plan node. (The client would probably run the
explain plan beforehand or it would be embedded in the elog).
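A minimal sketch of that proposal (in Python, purely for illustration; the node names, the report format, and the throttling interval are invented, and emit_info() merely stands in for elog(INFO, ...) gated by the suggested GUC):

```python
# Toy model of per-plan-node progress notices.  In the real server this
# would live in the executor and use elog(INFO, ...); here emit_info()
# just formats the message so the sketch is self-contained.

def emit_info(node_name, done, expected):
    return f'INFO: node "{node_name}": {done} of ~{expected} rows'

def run_node(node_name, expected_rows, report_every=250):
    """Process expected_rows tuples, collecting a progress message
    every report_every rows (throttled so clients aren't flooded)."""
    messages = []
    for done in range(1, expected_rows + 1):
        if done % report_every == 0:
            messages.append(emit_info(node_name, done, expected_rows))
    return messages

for msg in run_node("Seq Scan", 1000):
    print(msg)
```

The client would match each notice against the plan it fetched beforehand (or that was embedded in the message) to drive its progress bar.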
Tom Lane wrote:
We used to pass these values in almost the same way when we first did initdb
in C, and I don't recall any such problems. We had:
override CPPFLAGS := -DPGBINDIR=\"$(bindir)\" -DPGDATADIR=\"$(datadir)\"
-DFRONTEND -I$(libpq_srcdir) $(CPPFLAGS)
That seems a bit
Jim C. Nasby wrote:
Something that would be extremely useful to add to the first pass of
this would be to have a work_mem limiter. This would allow users to set
work_mem much more aggressively without worrying about pushing the
machine to swapping. That capability alone would make this valuable
Mark Kirkwood [EMAIL PROTECTED] writes:
Right - in principle it is not that difficult to add (once I have the
machinery for the cost limiter going properly, that is). I'm thinking we
could either:
1. Add hooks to count work_mem allocations where they happen, or
2. Scan the plan tree and
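Option 1 above can be sketched roughly as follows (Python, for illustration only; the class name, the cluster-wide cap, and the spill-to-disk fallback are assumptions, not existing PostgreSQL code):

```python
# Sketch of counting work_mem reservations at a central choke point, so
# the sum across backends can be capped below what would push the
# machine into swap.

class WorkMemLimiter:
    def __init__(self, total_limit_kb):
        self.total_limit_kb = total_limit_kb  # cluster-wide cap
        self.in_use_kb = 0                    # currently reserved

    def try_reserve(self, kb):
        """Reserve kb of work_mem; return False if it would exceed the
        cap, in which case the caller falls back to a disk-based
        strategy (e.g. an external sort) instead of a hash."""
        if self.in_use_kb + kb > self.total_limit_kb:
            return False
        self.in_use_kb += kb
        return True

    def release(self, kb):
        self.in_use_kb -= kb

limiter = WorkMemLimiter(total_limit_kb=1024)
print(limiter.try_reserve(512))   # fits under the cap
print(limiter.try_reserve(768))   # would exceed it -> spill instead
```

With such accounting in place, users could set work_mem aggressively per-sort without risking the aggregate blowing past physical memory.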
Joe Conway [EMAIL PROTECTED] writes:
I did some testing today against mysql and found that it will easily
absorb insert statements with 1 million targetlists provided you set
max_allowed_packet high enough for the server. It peaked out at about
600MB, compared to my similar test last night
Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
I did some testing today against mysql and found that it will easily
absorb insert statements with 1 million targetlists provided you set
max_allowed_packet high enough for the server. It peaked out at about
600MB, compared to my test
Joe Conway [EMAIL PROTECTED] writes:
The difficulty is finding a way to avoid all that extra work without a
very ugly special case kludge just for inserts.
[ thinks a bit ... ]
It seems to me that the reason it's painful is exactly that INSERT
... VALUES is a kluge already. We've
Tom Lane wrote:
Mark Kirkwood [EMAIL PROTECTED] writes:
Right - in principle it is not that difficult to add (once I have the
machinery for the cost limiter going properly, that is). I'm thinking we
could either:
1. Add hooks to count work_mem allocations where they happen, or
2. Scan the plan
Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
The difficulty is finding a way to avoid all that extra work without a
very ugly special case kludge just for inserts.
[ thinks a bit ... ]
It seems to me that the reason it's painful is exactly that INSERT
... VALUES is a kluge already.
I wrote:
What I think happened here is that diff reported a difference and
pg_regress misinterpreted the exit status as being a hard failure.
Can someone check on whether it's possible to tell the difference
between these cases with Windows diff?
So the latest result shows that the return
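The distinction being asked about is the standard diff exit convention: 0 when the files match, 1 when they differ, and 2 (or more) on real trouble such as a missing file. A quick check of that behavior (Python sketch; assumes a POSIX-style diff on PATH, which is exactly what is in question on the Windows animals):

```python
# Demonstrate the three diff exit statuses pg_regress needs to tell
# apart: 0 = identical, 1 = regression difference, >=2 = hard failure.
import os
import subprocess
import tempfile

def diff_status(a, b):
    """Run diff quietly and return its exit status."""
    return subprocess.run(["diff", a, b],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL).returncode

d = tempfile.mkdtemp()
f1 = os.path.join(d, "expected")
f2 = os.path.join(d, "result")
open(f1, "w").write("row 1\n")
open(f2, "w").write("row 2\n")
print(diff_status(f1, f1))                       # 0: identical
print(diff_status(f1, f2))                       # 1: files differ
print(diff_status(f1, os.path.join(d, "none")))  # 2: missing file
```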
Joe Conway [EMAIL PROTECTED] writes:
Tom Lane wrote:
I think the place we'd ultimately like to get to involves changing the
executor's Result node type to have a list of targetlists and sequence
through those lists to produce its results
I was actually just looking at that and ended up
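A toy model of the executor change Tom describes (Python, for illustration only; the function name and the use of callables for expressions are invented): a Result node holding a list of targetlists and sequencing through them, emitting one row per entry, is what would let a multi-row INSERT ... VALUES avoid building a separate plan per row.

```python
# Sketch: Result sequences through a list of targetlists, producing one
# row per targetlist.  Each targetlist entry is modeled as a zero-arg
# callable standing in for an arbitrary expression the real executor
# would evaluate.

def result_node(targetlists):
    """Yield one evaluated row per targetlist."""
    for tlist in targetlists:
        yield tuple(expr() for expr in tlist)

values = [
    [lambda: 1, lambda: "one"],
    [lambda: 2, lambda: "two"],
    [lambda: 3, lambda: "three"],
]
print(list(result_node(values)))
```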