On 12/01/2010 05:32 AM, Jeff Janes wrote:
On 11/28/10, Robert Haas robertmh...@gmail.com wrote:
In a close race, I don't think we should get bogged down in
micro-optimization here, both because micro-optimizations may not gain
much and because what works well on one platform may not do much at
I am suggesting adding a function to libpq:
PGresult *PQvexec(PGconn *conn, const char *fmt, ...);
It behaves similarly to PQexec, but it allows for printf-style varargs and
does connection re-establishment if the connection fails (it can be
discussed if this is already too much magic; maybe remove
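A minimal sketch of what the formatting half of such a PQvexec() could look like (the function name and the reconnect step come from the proposal, not from any existing libpq API; the part touching the connection is only sketched in comments because it needs a live server):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper for the proposed PQvexec(): printf-style
 * formatting of the query text.  PQvexec() itself is not a real libpq
 * function; this only illustrates the varargs half of the idea. */
static char *format_query(const char *fmt, ...)
{
    va_list ap;
    int n;
    char *buf;

    va_start(ap, fmt);
    n = vsnprintf(NULL, 0, fmt, ap);   /* measure the needed length */
    va_end(ap);
    if (n < 0)
        return NULL;

    buf = malloc((size_t) n + 1);
    if (buf == NULL)
        return NULL;

    va_start(ap, fmt);
    vsnprintf(buf, (size_t) n + 1, fmt, ap);
    va_end(ap);
    return buf;
}

/* A full PQvexec(conn, fmt, ...) would then:
 *   1. build the query text with a helper like the one above,
 *   2. call PQexec(conn, query),
 *   3. if PQstatus(conn) == CONNECTION_BAD, call PQreset(conn) and
 *      retry once -- the "too much magic" part under discussion. */
```

Note that interpolating values into the SQL text this way is injection-prone, which is why replies in the thread point at PQexecParams() instead.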
2010/12/5 Marc Balmer m...@msys.ch:
I am suggesting adding a function to libpq:
PGresult *PQvexec(PGconn *conn, const char *fmt, ...);
It behaves similarly to PQexec, but it allows for printf-style varargs and
does connection re-establishment if the connection fails (it can be
discussed if
On Sun, Dec 5, 2010 at 10:22, Marc Balmer m...@msys.ch wrote:
I am suggesting adding a function to libpq:
PGresult *PQvexec(PGconn *conn, const char *fmt, ...);
It behaves similarly to PQexec, but it allows for printf-style varargs and
How is that not a horrible idea, compared to using
On 05.12.2010 12:10, Magnus Hagander wrote:
On Sun, Dec 5, 2010 at 10:22, Marc Balmer m...@msys.ch wrote:
I am suggesting adding a function to libpq:
PGresult *PQvexec(PGconn *conn, const char *fmt, ...);
It behaves similarly to PQexec, but it allows for printf-style varargs and
How is that
Am 05.12.2010 um 11:57 schrieb Heikki Linnakangas
heikki.linnakan...@enterprisedb.com:
On 05.12.2010 12:10, Magnus Hagander wrote:
On Sun, Dec 5, 2010 at 10:22, Marc Balmer m...@msys.ch wrote:
I am suggesting adding a function to libpq:
PGresult *PQvexec(PGconn *conn, const char *fmt,
On Sun, Dec 5, 2010 at 11:57, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 05.12.2010 12:10, Magnus Hagander wrote:
On Sun, Dec 5, 2010 at 10:22, Marc Balmer m...@msys.ch wrote:
I am suggesting adding a function to libpq:
PGresult *PQvexec(PGconn *conn, const char *fmt,
Hey hackers,
A varargs exec is useful only when the programmer calls it directly.
It is useless when libpq is used to create a more flexible high-level
library (e.g., for C++); PQexecParams (or PQexecPrepared) is better
suited for that.
What about auto-reconnect? There are already PQreset and
CONNECTION_OK
On 12/04/2010 11:11 PM, Itagaki Takahiro wrote:
On Sun, Dec 5, 2010 at 07:24, Andrew Dunstan and...@dunslane.net wrote:
Looking at file_parser.c, it seems to be largely taken from copy.c. Wouldn't
it be better to call those functions, or refactor them so they are callable
if necessary?
We
Thanks for your reply.
Yes, I mean the disk may lie to the OS.
On Fri, Dec 3, 2010 at 12:14 PM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
On 12/03/2010 06:43 PM, Heikki Linnakangas wrote:
On 03.12.2010 13:49, flyusa2010 fly wrote:
When writing the log, the DBMS should synchronously flush the log to
Hi, folks,
I'm a newbie to postgres. I'm confused about xlog and clog.
To my initial understanding, xlog is the periodic checkpoint log for data,
while clog is the commit log (very unclear to me ...)
Thanks!
I found that postgres uses different semaphore system calls on different
operating systems.
For example, on Linux, System V semaphores (semop etc.) are used
to implement locking, while on Darwin, POSIX semaphores (sem_wait, sem_post
etc.) are used.
Linux and Darwin support both System
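The two APIs the question contrasts can be sketched side by side; below is the POSIX flavour (the System V counterpart uses semget()/semop()). This is a standalone illustration under the assumption of a Linux build, not PostgreSQL's actual locking code:

```c
#include <assert.h>
#include <semaphore.h>

/* POSIX semaphore as a simple binary lock (sem_wait/sem_post), the API
 * the question says is used on Darwin.  Caveat: unnamed semaphores via
 * sem_init() work on Linux but are not implemented on Darwin, which is
 * one reason the two platforms end up on different mechanisms. */
static int posix_sem_demo(void)
{
    sem_t s;
    int value;

    if (sem_init(&s, 0, 1) != 0)   /* binary semaphore, initial value 1 */
        return -1;

    sem_wait(&s);                  /* "P": acquire the lock (value 1 -> 0) */
    /* ... critical section would go here ... */
    sem_post(&s);                  /* "V": release the lock (value 0 -> 1) */

    sem_getvalue(&s, &value);      /* should be back to 1 */
    sem_destroy(&s);
    return value;
}
```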
On Sat, Dec 04, 2010 at 09:27:52PM +0800, Boxuan Zhai wrote:
Dear Greg,
I have updated the MERGE patch for two main problems.
Please attach the actual patch :)
Cheers,
David.
--
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype:
flyusa2010 fly wrote:
clog is the commit log (very unclear to me ...)
It is an important part of tracking tuple visibility.
You might want to start with this page:
http://wiki.postgresql.org/wiki/Hint_Bits
-Kevin
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
On 12/04/2010 07:12 PM, Robert Haas wrote:
I wouldn't necessarily be opposed to official topic branches at some point in
the future, but I think it's premature to speculate about whether it'd be
useful here.
I'd need a lot of convincing if it imposed an extra burden on people
like Tom.
I'm reviving the discussion on the subject topic because I just had
an epiphany which makes it seem simple to implement. The concept of
this is that if you start a SERIALIZABLE READ ONLY transaction in an
SSI environment when certain conditions are true, it doesn't need to
acquire predicate locks
flyusa2010 fly flyusa2...@gmail.com writes:
I found that postgres uses different semaphore system calls on different
operating systems.
For example, on Linux, System V semaphores (semop etc.) are used
to implement locking, while on Darwin, POSIX semaphores (sem_wait, sem_post
Andrew Dunstan and...@dunslane.net writes:
On 12/04/2010 07:12 PM, Robert Haas wrote:
I wouldn't necessarily be opposed to official topic branches at some point
in the future, but I think it's premature to speculate about whether it'd be
useful here.
I'd need a lot of convincing if it
Kevin Grittner kevin.gritt...@wicourts.gov writes:
I'm reviving the discussion on the subject topic because I just had
an epiphany which makes it seem simple to implement. The concept of
this is that if you start a SERIALIZABLE READ ONLY transaction in an
SSI environment when certain
On 12/05/2010 12:10 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 12/04/2010 07:12 PM, Robert Haas wrote:
I wouldn't necessarily be opposed to official topic branches at some point in
the future, but I think it's premature to speculate about whether it'd be
useful
Attached is a patch that allows CopyReadAttributesText() and
CopyReadAttributesCSV() to read arbitrary numbers of attributes.
Underflowing attributes are recorded as null, and space is made for
overflowing attributes on a line.
This patch doesn't result in any user-visible behavior change. The
Joachim Wieland wrote:
Regarding snapshot cloning and dump consistency, I brought this up
already several months ago and asked if the feature is considered
useful even without snapshot cloning.
In addition, Joachim submitted a synchronized snapshot patch that looks
to me like it slipped
The one time this year top-posting seems appropriate... this patch seems
stalled waiting for some sort of response to the concerns Alvaro raised
here.
Alvaro Herrera wrote:
Excerpts from Fujii Masao's message of Thu Nov 25 10:47:12 -0300 2010:
The attached patch s/CopyXLog/CopyBoth/g and
Tom Lane wrote:
Kevin Grittner writes:
I'm reviving the discussion on the subject topic because I just
had an epiphany which makes it seem simple to implement. The
concept of this is that if you start a SERIALIZABLE READ ONLY
transaction in an SSI environment when certain conditions are
Boxuan Zhai wrote:
I have updated the MERGE patch for two main problems.
The patch inside the .tar.gz file you attached isn't right; that
extracts to a tiny file of junk characters.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services and
Greg Smith g...@2ndquadrant.com writes:
In addition, Joachim submitted a synchronized snapshot patch that looks
to me like it slipped through the cracks without being fully explored.
...
The way I read that thread, there were two objections:
1) This mechanism isn't general enough for all
Marko Tiikkaja wrote:
This is almost exactly the patch from 2010-02 without
CommandCounterIncrement()s. It's still a bit rough around the edges
and needs some more comments, but I'm posting it here anyway.
This patch passes all regression tests, but feel free to try to break
it, there are
* no coordination of restarts/configuration changes between the cluster
and the pooler
* you have two separate config files to configure your pooling settings
(having all that available, say, in a catalog in pg would be awesome)
* you lose all of the advanced authentication features of pg
On fre, 2010-12-03 at 15:27 -0500, Robert Haas wrote:
On Fri, Dec 3, 2010 at 2:56 PM, r t pg...@xzilla.net wrote:
What exactly was the objection to the following --
ALTER TABLE table_name ADD PRIMARY KEY (column_list) USING index_name;
Is the objection that you might have been trying to
On Sun, Dec 5, 2010 at 11:59 AM, Josh Berkus j...@agliodbs.com wrote:
* no coordination of restarts/configuration changes between the cluster
and the pooler
* you have two separate config files to configure your pooling settings
(having all that available, say, in a catalog in pg would be
On Sun, Dec 5, 2010 at 12:45 PM, Rob Wultsch wult...@gmail.com wrote:
One thing I would suggest the PG community keep in mind while
talking about built-in connection process caching is that it is a very
nice feature for memory leaks caused by a connection to not exist for
and continue
Kevin Grittner wrote:
Andres Freund wrote:
On Thursday 02 December 2010 00:48:53 Kevin Grittner wrote:
Is there any provision for one backend to cause a *different*
backend which is idle in a transaction to terminate cleanly when
it attempts to process its next statement?
You might want
Jeff Janes wrote:
I've attached a tiny patch to apply over yours, to deal with this and
with the case where no files are synced.
Thanks for that. That obvious error eluded me because in most of the
patch update testing I was doing (on ext3), the longest sync was always
about the same
Heikki Linnakangas wrote:
If you fsync() a file with one dirty page in it, it's going to return
very quickly, but a 1GB file will take a while. That could be
problematic if you have a thousand small files and a couple of big
ones, as you would want to reserve more time for the big ones. I'm
Josh Berkus wrote:
I modified test_fsync in two ways to run this; first, to make it support
O_DIRECT, and second to make it run in the *current* directory.
Patch please? I agree with the latter change; what test_fsync does is
surprising.
I suggested a while ago that we refactor test_fsync
On Sun, Dec 5, 2010 at 2:53 PM, Greg Smith g...@2ndquadrant.com wrote:
Heikki Linnakangas wrote:
If you fsync() a file with one dirty page in it, it's going to return very
quickly, but a 1GB file will take a while. That could be problematic if you
have a thousand small files and a couple of
After an unfortunate two weeks where I was lost in NyQuil-land and not
paying as much attention as I should have, I just finished a pass
checking over all 26 open CommitFest items. Just under 50% of the
patches that were open at the start of the CF are still hanging around,
with 6 in
Itagaki Takahiro itagaki.takah...@gmail.com writes:
On Fri, Dec 3, 2010 at 18:02, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
My understanding is that the variadic form shadows the other one in a
way that makes it impossible to call it from the SQL level. That's the
reason why I did the (text,
Rob Wultsch wrote:
Forgive me, but is all of this a step on the slippery slope to
direct I/O? And is this a bad thing
I don't really think so. There's an important difference in my head
between direct I/O, where the kernel is told "write this immediately!",
and what I'm trying to achieve. I
On Mon, Dec 6, 2010 at 08:01, Tom Lane t...@sss.pgh.pa.us wrote:
Why is there a variadic replace() in this patch at all? It seems just
about entirely unrelated to the stated purpose of the patch, as well
as being of dubious usefulness.
As I wrote in the previous mail, the most important part
On Sun, Dec 5, 2010 at 12:41 PM, Andrew Dunstan and...@dunslane.net wrote:
Well, ISTM that amounts to not having official topic branches :-) I agree
that this is supposed to be one of git's strengths (or more exactly a
strength of distributed SCM's generally). I don't really see any great
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if we could make that work it would be more general and
it wouldn't need any special privileges. The trick seems to be to apply
sufficient
On Sun, Dec 5, 2010 at 3:17 PM, Rob Wultsch wult...@gmail.com wrote:
On Sun, Dec 5, 2010 at 12:45 PM, Rob Wultsch wult...@gmail.com wrote:
One thing I would suggest the PG community keep in mind while
talking about built-in connection process caching is that it is a very
nice feature for
On Sun, Dec 5, 2010 at 2:45 PM, Rob Wultsch wult...@gmail.com wrote:
I think you have read a bit more into what I have said than is
correct. MySQL can deal with thousands of users and separate schemas
on commodity hardware. There are many design decisions (some
questionable) that have made
On 12/05/2010 08:55 PM, Robert Haas wrote:
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if we could make that work it would be more general and
it wouldn't need any special privileges.
On Sat, Dec 4, 2010 at 8:04 PM, Jeff Janes jeff.ja...@gmail.com wrote:
But who would be doing the passing? For the postmaster to be doing
that would probably go against the minimalist design. It would have
to keep track of which backend is available, and which db and user it
is primed for.
On Sun, Dec 5, 2010 at 6:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Itagaki Takahiro itagaki.takah...@gmail.com writes:
On Fri, Dec 3, 2010 at 18:02, Dimitri Fontaine dimi...@2ndquadrant.fr
wrote:
My understanding is that the variadic form shadows the other one in a
way that it's now
On Sun, Dec 5, 2010 at 9:04 PM, Andrew Dunstan and...@dunslane.net wrote:
Why not just say give me the snapshot currently held by process ?
And please, not temp files if possible.
As far as I'm aware, the full snapshot doesn't normally exist in
shared memory, hence the need for publication
On Sun, Dec 5, 2010 at 6:59 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 5, 2010 at 2:45 PM, Rob Wultsch wult...@gmail.com wrote:
I think you have read a bit more into what I have said than is
correct. MySQL can deal with thousands of users and separate schemas
on commodity
On Sun, Dec 5, 2010 at 9:35 PM, Rob Wultsch wult...@gmail.com wrote:
On Sun, Dec 5, 2010 at 6:59 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 5, 2010 at 2:45 PM, Rob Wultsch wult...@gmail.com wrote:
I think you have read a bit more into what I have said than is
correct. MySQL can
On 12/5/2010 4:22 AM, Marc Balmer wrote:
I am suggesting adding a function to libpq:
PGresult *PQvexec(PGconn *conn, const char *fmt, ...);
It behaves similarly to PQexec, but it allows for printf-style varargs and
does connection re-establishment if the connection fails (it can be
discussed if
A varargs version of PQexecParams() would be handy, though. Imagine being able
to do:
PQexecVParams("SELECT * FROM mytable WHERE foo = $1 AND bar = $2", foovar,
barvar);
instead of constructing an array for the variables.
http://libpqtypes.esilo.com/man3/PQexecf.html
--
Andrew Chernow
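For the PQexecVParams() idea, the missing piece is just gathering the varargs into the array PQexecParams() already takes. A hedged sketch (PQexecVParams and collect_params are hypothetical names, not real libpq functions; libpqtypes' PQexecf, linked above, is an existing implementation of the same idea):

```c
#include <assert.h>
#include <stdarg.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: gather a NULL-terminated varargs list of
 * parameter strings into the array form PQexecParams() expects. */
static int collect_params(const char **values, int max, ...)
{
    va_list ap;
    int n = 0;
    const char *v;

    va_start(ap, max);
    while (n < max && (v = va_arg(ap, const char *)) != NULL)
        values[n++] = v;
    va_end(ap);
    return n;
}

/* A PQexecVParams(conn, query, ...) wrapper would collect its varargs
 * like this and then call:
 *   PQexecParams(conn, query, n, NULL, values, NULL, NULL, 0); */
```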
On Sun, Dec 5, 2010 at 9:27 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 5, 2010 at 9:04 PM, Andrew Dunstan and...@dunslane.net wrote:
Why not just say give me the snapshot currently held by process ?
And please, not temp files if possible.
As far as I'm aware, the full
On Dec 4, 2010, at 6:14 AM, Dimitri Fontaine wrote:
Hi,
Thanks for the review, that's quite a thorough one! :)
I'm not sure I follow you all along; it seems like the reading was "try
it first, then understand and comment again", so sometimes I'm not sure
if you're saying that docs are missing the
On 22/11/2010 7:37 PM, Magnus Hagander wrote:
Finally getting to looking at this one - sorry about the very long delay.
Ditto, I'm afraid.
I agree with Heikki's earlier comment that it's better to have this
included in the backend - but that's obviously not going to happen for
On Mon, Dec 6, 2010 at 3:07 AM, Greg Smith g...@2ndquadrant.com wrote:
The one time this year top-posting seems appropriate...this patch seems
stalled waiting for some sort of response to the concerns Alvaro raised
here.
Sorry for the delay. I didn't have the time.
I gave this a look. It
On 6/12/2010 12:57 PM, Craig Ringer wrote:
On 22/11/2010 7:37 PM, Magnus Hagander wrote:
Finally getting to looking at this one - sorry about the very long delay.
Ditto, I'm afraid.
Oh, I forgot to mention in the patch email: I'm not sure I've taken the
right approach in terms of how I've
Thank you Joachim;
Yes, and the current patch requires that the original (publisher)
transaction be alive to prevent RecentXmin from being updated.
I hope this restriction is acceptable if publishing/subscribing is
provided via functions, not statements.
Cheers;
--
Koichi Suzuki
2010/12/6 Joachim
Hey Andrew,
No, thanks. :-)
And I don't think that libpq should follow it (libpqtypes).
2010/12/6 Andrew Chernow a...@esilo.com
A varargs version of PQexecParams() would be handy, though. Imagine being
able to do:
PQexecVParams("SELECT * FROM mytable WHERE foo = $1 AND bar = $2", foovar,
On Fri, Oct 15, 2010 at 9:41 PM, Fujii Masao masao.fu...@gmail.com wrote:
The timeout doesn't conflict with 'wait-forever'. Even if you choose
'wait-forever' (i.e., you set allow_standalone_master to false), the master
should detect a standby crash as soon as possible by using the
timeout. For
On 06.12.2010 02:55, Robert Haas wrote:
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if we could make that work it would be more general and
it wouldn't need any special privileges. The
On 06.12.2010 07:42, Fujii Masao wrote:
On Fri, Oct 15, 2010 at 9:41 PM, Fujii Masao masao.fu...@gmail.com wrote:
The timeout doesn't conflict with 'wait-forever'. Even if you choose
'wait-forever' (i.e., you set allow_standalone_master to false), the master
should detect a standby crash as soon
On Mon, Dec 6, 2010 at 3:42 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Fri, Oct 15, 2010 at 9:41 PM, Fujii Masao masao.fu...@gmail.com wrote:
The timeout doesn't conflict with 'wait-forever'. Even if you choose
'wait-forever' (i.e., you set allow_standalone_master to false), the master