On Tue, Aug 4, 2009 at 4:30 AM, Pavel Stehule <pavel.steh...@gmail.com> wrote:
forward patch to pg_hackers
Here is the fixed patch. Please, Jaime, can you look at it?
this one passed regression tests and my personal test script (of
course it passed the script the last time too)... i'm doing a lot
I wrote:
As I said, my inclination for improving this area, if someone wanted
to work on it, would be to find a way to do truncate-in-place on
temp tables. ISTM that in the case you're showing --- truncate that's
not within a subtransaction, on a table that's drop-on-commit anyway
--- we
On Tue, Aug 4, 2009 at 19:13, Kevin Field <kevinjamesfi...@gmail.com> wrote:
On Sat, Aug 1, 2009 at 20:30, Kevin Field <kevinjamesfi...@gmail.com>
wrote:
The event viewer says:
The description for Event ID ( 0 ) in Source ( PostgreSQL )
cannot
be
found. The local computer may not
On Thu, Aug 6, 2009 at 09:25, Magnus Hagander <mag...@hagander.net> wrote:
On Tue, Aug 4, 2009 at 19:13, Kevin Field <kevinjamesfi...@gmail.com> wrote:
On Sat, Aug 1, 2009 at 20:30, Kevin Field <kevinjamesfi...@gmail.com>
wrote:
The event viewer says:
The description for Event ID ( 0 ) in
Tom Lane t...@sss.pgh.pa.us writes:
Andrew Dunstan and...@dunslane.net writes:
preventing a clash might be fairly difficult.
Yeah, I was just thinking about that. The easiest way to avoid
collisions would be to make pg_dump (in --binary-upgrade mode)
responsible for being sure that *every*
Tom Lane wrote:
At the moment it looks to me like pg_migrator has crashed and burned
for 8.4, at least for general-purpose usage.
It means that you don't have the restraint that
you thought you had. So you can change the
RedHat/Fedora PostgreSQL 8.4 packages to use
the upstream default for
On Wed, Aug 05, 2009 at 08:57:14PM +0200, Pavel Stehule wrote:
2009/8/5 Tom Lane t...@sss.pgh.pa.us:
Peter pointed out upthread that the SQL standard already calls out some
things that should be available in this way --- has anyone studied that
yet?
yes - it's part of GET DIAGNOSTICS
With the talk about adding compression to pg_dump lately, I've been
wondering if tables and indexes could be compressed too.
So I've implemented a quick on-the-fly compression patch for postgres
Sorry for the long email, but I hope you find this interesting.
Why compress ?
1- To
2009/8/6 Sam Mason s...@samason.me.uk:
On Wed, Aug 05, 2009 at 08:57:14PM +0200, Pavel Stehule wrote:
2009/8/5 Tom Lane t...@sss.pgh.pa.us:
Peter pointed out upthread that the SQL standard already calls out some
things that should be available in this way --- has anyone studied that
yet?
On Thursday 06 August 2009 06:32:06 Bruce Momjian wrote:
I have applied the attached patch to pg_migrator to detect enum,
composites, and arrays. I tested it and the only error I got was with
the breakmigrator table that was supplied by Jeff, and once I removed
that table the migration went
On Wednesday 05 August 2009 16:13:48 Magnus Hagander wrote:
Just to verify, there are not going to be any changes in the actual
format of the generated files, right?
Correct.
On Wednesday 05 August 2009 17:45:46 Pavel Stehule wrote:
SQLCODE could carry enough information about a user or system exception.
There is reserved space for custom codes. Maybe for administration it
would be interesting whether an error is a system error or an application
error - but this should be
On Wed, Aug 5, 2009 at 16:53, Tom Lane <t...@sss.pgh.pa.us> wrote:
Magnus Hagander mag...@hagander.net writes:
But. I'll look into cleaning those up for HEAD anyway, but due to lack
of reports I think we should skip backpatch. Reasonable?
Fair enough.
Here's what I came up with. Seems ok?
On Wed, Aug 5, 2009 at 16:11, Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:
Magnus Hagander wrote:
As for the source, I think we'd just decorate the error messages
with errsource(ERRSOURCE_USER) or something like that at places where
needed, and have it default to internal - so
Dimitri Fontaine wrote:
Tom Lane t...@sss.pgh.pa.us writes:
We could stop doing that
once we have all the user tables in place --- I don't believe it's
necessary to preserve the OIDs of user indexes. But we need to
preserve toast table OIDs, and toast table index OIDs too if those
are
What use is there for fuzzy predicates? I think it would mainly be to
stop more students from coming up with new implementations of the same
thing over and over.
Well, I'm sorry if any of us who are involved in these projects have
already explained the true usefulness of sqlf and fuzzy
Andrew Dunstan and...@dunslane.net wrote:
Excluding every database that has a composite/array-of
user-defined-type/enum type would be pretty nasty. After all, these
are features we boast of.
Any idea whether domains are an issue? I was thinking of trying this
tool soon, and we don't seem
Magnus Hagander mag...@hagander.net writes:
On Wed, Aug 5, 2009 at 16:11, Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:
Would you like to propose a concrete list of sources that we would have?
The implementation effort depends a lot on the categorization.
Well, the only one I
On Thu, Aug 6, 2009 at 16:20, Tom Lane <t...@sss.pgh.pa.us> wrote:
Magnus Hagander mag...@hagander.net writes:
On Wed, Aug 5, 2009 at 16:11, Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:
Would you like to propose a concrete list of sources that we would have?
The implementation
Isn't that function leaking the res pointer? Also, I'm curious why you're
allocating a fixed 2*sizeof(TSLexeme) in unaccent_lexize ...
That is part of the dictionary's interface: lexize returns an array of TSLexeme, and
the last structure should have its lexeme field set to NULL.
filter_dictionary file is not
Alvaro Herrera alvhe...@commandprompt.com writes:
Dimitri Fontaine wrote:
It seems harder to come up with a general purpose syntax to support the
feature in case of toast tables, though.
There's already general purpose syntax for relation options which can be
used to get options that do not
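For reference, a minimal sketch of that existing reloptions syntax, including the toast.* namespace for the toast table's options (the table name is hypothetical):

    ALTER TABLE foo SET (fillfactor = 70);
    ALTER TABLE foo SET (toast.autovacuum_enabled = false);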
Kevin Grittner wrote:
Andrew Dunstan and...@dunslane.net wrote:
Excluding every database that has a composite/array-of
user-defined-type/enum type would be pretty nasty. After all, these
are features we boast of.
Any idea whether domains are an issue? I was thinking of trying
Magnus Hagander mag...@hagander.net writes:
On Thu, Aug 6, 2009 at 16:20, Tom Lane <t...@sss.pgh.pa.us> wrote:
Well, it seems like you could get 90% of the way there just by filtering
on the PID --- watching the bgwriter, walwriter, and archiver should
cover this use-case reasonably well.
Andrew Dunstan and...@dunslane.net writes:
Kevin Grittner wrote:
Any idea whether domains are an issue?
I don't believe that they are an issue. The issue arises only when a
catalog oid is used in the on-disk representation of a type. AFAIK the
on-disk representation of a domain is the same
Tom Lane wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
Dimitri Fontaine wrote:
It seems harder to come up with a general purpose syntax to support the
feature in case of toast tables, though.
There's already general purpose syntax for relation options which
On Thu, Aug 6, 2009 at 16:33, Tom Lane <t...@sss.pgh.pa.us> wrote:
Magnus Hagander mag...@hagander.net writes:
On Thu, Aug 6, 2009 at 16:20, Tom Lane <t...@sss.pgh.pa.us> wrote:
Well, it seems like you could get 90% of the way there just by filtering
on the PID --- watching the bgwriter, walwriter,
Andrew Dunstan and...@dunslane.net writes:
It's going to be fairly grotty whatever we do. I'm worried a bit that
we'll be providing some footguns, but I guess we'll just need to hold
our noses and do whatever it takes.
Yeah. One advantage of the GUC approach is we could make 'em SUSET.
I
On Thu, Aug 6, 2009 at 4:32 AM, Dimitri Fontaine <dfonta...@hi-media.com> wrote:
Tom Lane t...@sss.pgh.pa.us writes:
Andrew Dunstan and...@dunslane.net writes:
preventing a clash might be fairly difficult.
Yeah, I was just thinking about that. The easiest way to avoid
collisions would be to
Magnus Hagander mag...@hagander.net writes:
On Thu, Aug 6, 2009 at 16:33, Tom Lane <t...@sss.pgh.pa.us> wrote:
I don't think there'd be much logical difficulty in having an output
field (ie, CSV column or log_line_prefix escape) that represents a
classification of the PID, say as postmaster,
On Aug 6, 2009, at 7:28 AM, Tom Lane wrote:
That would cover the problem for OIDs needed during CREATE TABLE, but
what about types and enum values?
I haven't been following this discussion very closely, but wanted to
ask: is someone writing regression tests for these cases that
On Thursday 06 August 2009 17:54:37 David E. Wheeler wrote:
On Aug 6, 2009, at 7:28 AM, Tom Lane wrote:
That would cover the problem for OIDs needed during CREATE TABLE, but
what about types and enum values?
I haven't been following this discussion very closely, but wanted to
ask: is
Last night I needed to move a bunch of data from an OLTP database to an
archive database, and used dblink with a bunch of insert statements.
Since I was moving about 4m records this was distressingly but not
surprisingly slow. It set me wondering why we don't build more support
for libpq
* Andrew Dunstan (and...@dunslane.net) wrote:
Tom Lane wrote:
I'm not sure whether there is consensus on not using GRANT ON VIEW
(ie, having these patches treat tables and views alike). I was waiting
to see if Stephen would put forward a convincing counterargument ...
Conceptually it is
On Thu, Aug 6, 2009 at 11:11 AM, Andrew Dunstan <and...@dunslane.net> wrote:
Last night I needed to move a bunch of data from an OLTP database to an
archive database, and used dblink with a bunch of insert statements. Since I
was moving about 4m records this was distressingly but not surprisingly
On Wed, 2009-08-05 at 22:57 -0400, Bruce Momjian wrote:
Andrew Dunstan wrote:
Well, pg_migrator has gotten pretty far without supporting these
features, and I think I would have heard about it if someone had these
and migrated because vacuum analyze found it right away. I am afraid
the
On Thu, Aug 06, 2009 at 11:11:58AM -0400, Andrew Dunstan wrote:
Last night I needed to move a bunch of data from an OLTP database to an
archive database, and used dblink with a bunch of insert statements.
Since I was moving about 4m records this was distressingly but not
surprisingly
1. The docs should be clarified a little. For instance, it should have a
link back to the definition of a prefix search (12.3.2). I included my
doc suggestions as an attachment.
Thank you, merged
2. dsynonym_init() uses findwrd() in a slightly confusing (and perhaps
fragile) way. After calling
2009/8/6 Teodor Sigaev teo...@sigaev.ru:
1. The docs should be clarified a little. For instance, it should have a
link back to the definition of a prefix search (12.3.2). I included my
doc suggestions as an attachment.
Thank you, merged
2. dsynonym_init() uses findwrd() in a slightly
David Fetter wrote:
For what it's worth, DBI-Link provides a lot of this.
Indeed, but that assumes that perl+DBI+DBD::Pg is available, which is by
no means always the case. If we're going to have a dblink module ISTM it
should be capable of reasonable bulk operations.
cheers
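For context, a rough sketch of the row-at-a-time pattern being described, using dblink's existing functions (connection, table, and column names are all hypothetical):

    SELECT dblink_connect('archive', 'dbname=archive_db');
    SELECT dblink_exec('archive',
                       'INSERT INTO archived_orders VALUES ('
                       || quote_literal(o.id) || ', ' || quote_literal(o.payload) || ')')
      FROM orders o
     WHERE o.created < now() - interval '1 year';
    SELECT dblink_disconnect('archive');

Each row costs a separate INSERT and a network round trip, which is exactly what bulk support would avoid.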
On Thu, Aug 06, 2009 at 11:41:55AM +0200, Pavel Stehule wrote:
typically in SQL/PSM (stored procedures - look at the GET DIAGNOSTICS
statement in the plpgsql docs), maybe in ecpg. Other environments raise an
exception - so you can get some data from the exception or from special
structures related to
On Thu, Aug 06, 2009 at 12:28:15PM -0400, Andrew Dunstan wrote:
David Fetter wrote:
For what it's worth, DBI-Link provides a lot of this.
Indeed, but that assumes that perl+DBI+DBD::Pg is available, which
is by no means always the case. If we're going to have a dblink
module ISTM it
On Thu, Aug 6, 2009 at 11:11 AM, Andrew Dunstan <and...@dunslane.net> wrote:
Last night I needed to move a bunch of data from an OLTP database to an
archive database, and used dblink with a bunch of insert statements. Since I
was moving about 4m records this was distressingly but not surprisingly
Today we got a report on the Spanish list about the message in $subject.
The server is 8.4 running on Windows.
Any ideas? I'm wondering what kind of diagnostics we can run to debug
the problem. xlogdump perhaps?
--
Alvaro Herrera    http://www.CommandPrompt.com/
On Thu, 2009-08-06 at 12:19 -0400, Robert Haas wrote:
Based on these comments, do you want to go ahead and mark this Ready
for Committer?
Done, thanks Teodor.
However, on the commitfest page, the patches got updated in the wrong
places: prefix support and filtering dictionary support are
I'm reviewing the plpython data type handling patch from the commit fest. I
have not dealt much with the plpython code before, and I'm a bit puzzled by
its elaborately silly handling of null values. A representative example (for
the current code):
if
On Thursday 06 August 2009 17:33:40 Tom Lane wrote:
I don't think there'd be much logical difficulty in having an output
field (ie, CSV column or log_line_prefix escape) that represents a
classification of the PID, say as postmaster, backend, AV worker,
AV launcher, bgwriter, It would
Tom Lane t...@sss.pgh.pa.us wrote:
I'm not proposing that we implement GET DIAGNOSTICS as a statement.
I was just thinking that the list of values it's supposed to make
available might do as a guide to what extra error fields we need to
provide where.
From what I could find on a quick
Tom Lane wrote:
I took a look through the CVS history and verified that there were
no post-8.4 commits that looked like they'd affect performance in
this area. So I think it's got to be a platform difference not a
PG version difference. In particular I think we are probably looking
at a
Tom Lane wrote:
The attached prototype patch does this
and seems to fix the speed problem nicely. It's not tremendously
well tested, but perhaps you'd like to test? Should work in 8.4.
I'll give it a try and report back (though probably not until tomorrow).
-- todd
On Thu, Aug 6, 2009 at 12:53 PM, Jeff Davis <pg...@j-davis.com> wrote:
On Thu, 2009-08-06 at 12:19 -0400, Robert Haas wrote:
Based on these comments, do you want to go ahead and mark this Ready
for Committer?
Done, thanks Teodor.
However, on the commitfest page, the patches got updated in the
On Thu, Aug 6, 2009 at 11:32, Todd A. Cook <tc...@blackducksoftware.com> wrote:
Tom Lane wrote:
I took a look through the CVS history and verified that there were
no post-8.4 commits that looked like they'd affect performance in
this area. So I think it's got to be a platform difference not a
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
From what I could find on a quick scan:
RETURNED_SQLSTATE
CLASS_ORIGIN
SUBCLASS_ORIGIN
CONSTRAINT_CATALOG
CONSTRAINT_SCHEMA
CONSTRAINT_NAME
CATALOG_NAME
SCHEMA_NAME
TABLE_NAME
COLUMN_NAME
CURSOR_NAME
MESSAGE_TEXT
MESSAGE_LENGTH
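For illustration only (not part of the proposal above): later PL/pgSQL releases expose several of these items via GET STACKED DIAGNOSTICS, along these lines; the table and variable names are made up:

    DO $$
    DECLARE
      v_state text;
      v_msg   text;
      v_table text;
    BEGIN
      INSERT INTO t VALUES (1);   -- hypothetical statement that may fail
    EXCEPTION WHEN others THEN
      GET STACKED DIAGNOSTICS
        v_state = RETURNED_SQLSTATE,
        v_msg   = MESSAGE_TEXT,
        v_table = TABLE_NAME;
      RAISE NOTICE 'SQLSTATE=%, message=%, table=%', v_state, v_msg, v_table;
    END;
    $$;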
Peter Eisentraut pete...@gmx.net writes:
And then, what is the supposed semantics of calling a nonstrict input
function with NULL as the cstring value? InputFunctionCall() requires
that the return value is null if and only if the input cstring was
NULL, but we'll call the input function
On Aug 5, 2009, at 11:59 AM, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
... bulk-grant could be based on object type,
object name (with wildcard or regexp pattern), schema membership, or
maybe other things, and I think that would be quite useful if we can
figure out how to make
\cmd grant select on * to user
When I wrote epsql I implemented a \fetchall metastatement.
http://okbob.blogspot.com/2009/03/experimental-psql.html
It should be used for GRANT:
DECLARE x CURSOR FOR SELECT * FROM information_schema.tables
\fetchall x GRANT ALL ON :table_name TO public;
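For comparison, a minimal sketch of generating the GRANT statements from the catalogs with plain SQL, whose output can then be fed back to psql (role and schema names are hypothetical):

    SELECT 'GRANT SELECT ON ' || quote_ident(schemaname) || '.' || quote_ident(tablename)
           || ' TO reporting;'
      FROM pg_tables
     WHERE schemaname = 'public';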
On 8/6/09 2:39 AM, PFC wrote:
With the talk about adding compression to pg_dump lately, I've been
wondering if tables and indexes could be compressed too.
So I've implemented a quick on-the-fly compression patch for postgres
I find this very interesting, and would like to test it
Pierre,
On Thu, Aug 6, 2009 at 11:39 AM, PFC <li...@peufeu.com> wrote:
The best for this is lzo : very fast decompression, a good compression ratio
on a sample of postgres table and indexes, and a license that could work.
The license of lzo doesn't allow us to include it in PostgreSQL
without
I like the idea too, but I think there are some major problems to
solve. In particular I think we need a better solution to blocks
growing than sparse files.
The main problem with using sparse files is that currently postgres is
careful to allocate blocks early so it can fail if there's not
On Thu, Aug 6, 2009 at 4:03 PM, Greg Stark <gsst...@mit.edu> wrote:
I like the idea too, but I think there are some major problems to
solve. In particular I think we need a better solution to blocks
growing than sparse files.
How much benefit does this approach have over using TOAST compression
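For context, the TOAST compression referred to is controlled per column by the storage mode; a minimal sketch with hypothetical table and column names:

    ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTENDED;  -- allows compression and out-of-line storage
    ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTERNAL;  -- out-of-line storage without compression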
Hi,
I've been looking through the current state of documentation,
including comments, with respect to the executor code and I would
like to improve upon their condition. If anyone has notes,
pseudocode, thoughts on how it all really works, or anything that
Alvaro Herrera wrote:
2009-08-05 11:58:19 COT LOG: el sistema de bases de datos fue
interrumpido durante la recuperación en 2009-08-05 11:12:14 COT
2009-08-05 11:58:19 COT HINT: Esto probablemente significa que
algunos datos están corruptos y tendrá que usar el respaldo más
reciente para la
When I try to compile PostgreSQL with the --with-libedit-preferred option,
compilation fails when readline is also installed on the system. You can
see the error report at:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=dot_moth&dt=2009-08-06%2012:46:04
The main problem is in src/bin/psql/input.h where is
Robert Haas robertmh...@gmail.com wrote:
On Thu, Aug 6, 2009 at 4:03 PM, Greg Stark <gsst...@mit.edu> wrote:
I like the idea too, but I think there are some major problems to
solve. In particular I think we need a better solution to blocks
growing than sparse files.
How much benefit does this
Pavel Stehule pavel.steh...@gmail.com writes:
Here is the fixed patch. Please, Jaime, can you look at it?
Applied with significant revisions. I really wanted this code factored
out, because we'd otherwise end up duplicating it in other PLs (and it
was already duplicative of execQual.c). So I
Alvaro Herrera alvhe...@commandprompt.com writes:
This is new code in 8.4. Is no one concerned about this?
[ shrug... ] It's uninvestigatable with only this amount of detail.
How about a test case, or at least a backtrace?
regards, tom lane
2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
After adding %p to the log_line_prefix, it becomes clear that the
process calling XLogInsert here is the startup process.
This is new code in 8.4. Is no one concerned about this?
Can you get a backtrace?
--
greg
Greg Stark wrote:
2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
After adding %p to the log_line_prefix, it becomes clear that the
process calling XLogInsert here is the startup process.
This is new code in 8.4. Is no one concerned about this?
Can you get a backtrace?
I'll ask
2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
2009-08-05 11:58:19 COT LOG: la dirección de página 0/6D374000 en el
archivo de registro 0, segmento 117, posición 3620864 es inesperada
Incidentally, Google's translate gives me the impression that the
above message corresponds to:
Greg Stark gsst...@mit.edu writes:
2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
2009-08-05 11:58:19 COT LOG: la dirección de página 0/6D374000 en el
archivo de registro 0, segmento 117, posición 3620864 es inesperada
Incidentally, Google's translate gives me the impression that the
Greg Stark wrote:
2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
2009-08-05 11:58:19 COT LOG: la dirección de página 0/6D374000 en el
archivo de registro 0, segmento 117, posición 3620864 es inesperada
Incidentally, Google's translate gives me the impression that the
above message
Zdenek Kotala zdenek.kot...@sun.com writes:
It seems to me that editline never distributed a history.h file and that
HAVE_EDITLINE_HISTORY_H is nonsense. But I'm not sure.
I wouldn't count on that, in part because there are so many versions of
editline. On an OS X machine I see
$ ls -l
On 8/6/09 1:03 PM, Greg Stark wrote:
One possibility is to handle only read-only tables. That would make
things a *lot* simpler. But it sure would be inconvenient if it's only
useful on large static tables but requires you to rewrite the whole
table -- just what you don't want to do with large
Is there a reason we don't use pg_type.typcategory to detect arrays in
Postgres 8.4? Right now I see this in pg_dump.c:
if (g_fout->remoteVersion >= 80300)
{
appendPQExpBuffer(query, "SELECT tableoid, oid, typname,
typnamespace,
Bernd Helmle maili...@oopsware.de writes:
Here again a patch version with updated documentation. I will stop
reviewing this patch now and mark this ready for committer, so we have some
time left to incorporate additional feedback.
I'm starting to look at this now, and my very first reaction
I'm curious what advantages there are in building compression into
the database itself, rather than using filesystem-based compression.
I see ZFS articles[1] discuss how enabling compression
improves performance with ZFS; for Linux, Btrfs has compression
features as well[2]; and on Windows NTFS
Bruce Momjian br...@momjian.us writes:
Is there a reason we don't use pg_type.typcategory to detect arrays in
Postgres 8.4? Right now I see this in pg_dump.c:
typcategory is user-assignable and thus not too reliable; furthermore
it wouldn't prove that the type is the array type for its
Bruce Momjian wrote:
Is there a reason we don't use pg_type.typcategory to detect arrays in
Postgres 8.4? Right now I see this in pg_dump.c:
if (g_fout->remoteVersion >= 80300)
{
appendPQExpBuffer(query, "SELECT tableoid, oid, typname,
typnamespace,
Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
Is there a reason we don't use pg_type.typcategory to detect arrays in
Postgres 8.4? Right now I see this in pg_dump.c:
typcategory is user-assignable and thus not too reliable; furthermore
it wouldn't prove that the type is the
Bruce Momjian wrote:
Bruce Momjian wrote:
Is there a reason we don't use pg_type.typcategory to detect arrays in
Postgres 8.4? Right now I see this in pg_dump.c:
if (g_fout->remoteVersion >= 80300)
{
appendPQExpBuffer(query, "SELECT tableoid, oid, typname,
Robert Haas wrote:
On Wed, Aug 5, 2009 at 6:59 PM, Josh Berkus <j...@agliodbs.com> wrote:
As far as the release notes, I think we would have to have proof that
the alpha-generated release notes are as good or close to the quality of
the release notes using the current process. If they are, we
Bruce,
I would love to get out of the release-note-writing business, but I
can't imagine how such a document could be written incrementally, so it
is logical that I would want some kind of test to see that the method I
didn't think would work would actually work.
What about Robert's
Josh Berkus wrote:
Bruce,
I would love to get out of the release-note-writing business, but I
can't imagine how such a document could be written incrementally, so it
is logical that I would want some kind of test to see that the method I
didn't think would work would actually work.
Josh Berkus wrote:
Bruce,
I would love to get out of the release-note-writing business, but I
can't imagine how such a document could be written incrementally, so it
is logical that I would want some kind of test to see that the method I
didn't think would work would actually work.
Peter Eisentraut wrote:
On Thursday 06 August 2009 06:32:06 Bruce Momjian wrote:
I have applied the attached patch to pg_migrator to detect enum,
composites, and arrays. I tested it and the only error I got was with
the breakmigrator table that was supplied by Jeff, and once I removed
David E. Wheeler wrote:
On Aug 6, 2009, at 7:28 AM, Tom Lane wrote:
That would cover the problem for OIDs needed during CREATE TABLE, but
what about types and enum values?
I haven't been following this discussion very closely, but wanted to
ask: is someone writing regression tests for
Joshua D. Drake wrote:
On Wed, 2009-08-05 at 22:57 -0400, Bruce Momjian wrote:
Andrew Dunstan wrote:
Well, pg_migrator has gotten pretty far without supporting these
features, and I think I would have heard about it if someone had these
and migrated because vacuum analyze found it
On Thu, Aug 6, 2009 at 8:20 PM, Bruce Momjian <br...@momjian.us> wrote:
Josh Berkus wrote:
Bruce,
I would love to get out of the release-note-writing business, but I
can't imagine how such a document could be written incrementally, so it
is logical that I would want some kind of test to see
pg_ctl stop -m smart will wait until all connections are disconnected, and
pg_ctl stop -m fast will disconnect all connections forcibly.
But fast after smart also waits for disconnections.
Can we change the behavior so that fast overrides smart mode?
I'd like to achieve the following sequence:
$
On Thu, Aug 6, 2009 at 7:10 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
Bernd Helmle maili...@oopsware.de writes:
Here again a patch version with updated documentation. I will stop
reviewing this patch now and mark this ready for committer, so we have some
time left to incorporate additional
Tom Lane wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
Bruce asked me to look for places in the docs that mention that an
ANALYZE is recommended, to mention the possibility that autovacuum takes
care of it. This patch does that.
I think you found the right places to touch, but is
Hi,
I found that include/commands/version.h is empty and not included by any file.
What is the purpose of the file?
http://doxygen.postgresql.org/version_8h.html
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
On Aug 6, 2009, at 6:00 PM, Bruce Momjian wrote:
Yes, I have regression tests I run but they are not in CVS, partly
because they are tied to other scripts I have to manage server
settings.
Here are my scripts:
http://momjian.us/tmp/pg_migrator_test.tgz
One big problem is that
Hi all,
Here is a short patch implementing a new feature in pgbench so as to allow
shell commands to be launched in a transaction file of pgbench.
The user just has to add \shell followed by the desired command at the
beginning of a line in the transaction file.
As an example of a transaction:
Sorry, I forgot to attach the patch.
Regards,
Michael
On Fri, Aug 7, 2009 at 12:23 PM, Michael Paquier <michael.paqu...@gmail.com> wrote:
Hi all,
Here is a short patch implementing a new feature in pgbench so as to allow
shell commands to be launched in a transaction file of pgbench.
the
Hi,
On Fri, Aug 7, 2009 at 10:31 AM, Itagaki Takahiro <itagaki.takah...@oss.ntt.co.jp> wrote:
pg_ctl stop -m smart will wait until all connections are disconnected, and
pg_ctl stop -m fast will disconnect all connections forcibly.
But fast after smart also waits for disconnections.
Can we change
2009/8/6 Teodor Sigaev teo...@sigaev.ru:
Isn't that function leaking the res pointer? Also, I'm curious why you're
allocating a fixed 2*sizeof(TSLexeme) in unaccent_lexize ...
That is part of the dictionary's interface: lexize returns an array of TSLexeme,
and the last structure should have its lexeme
On Thu, Aug 6, 2009 at 11:26 PM, Michael Paquier <michael.paqu...@gmail.com> wrote:
Sorry, I forgot to attach the patch.
Please add your patches at
https://commitfest.postgresql.org/action/commitfest_view/open
...Robert
Michael Paquier michael.paqu...@gmail.com wrote:
Here is a short patch implementing a new feature in pgbench so as to allow
shell commands to be launched in a transaction file of pgbench.
\shell ls ~/pg_twophase;
+1 for the \shell command itself, but does the performance fit your purpose?
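A hypothetical pgbench transaction file using the proposed metacommand, mirroring the \shell syntax quoted above (the UPDATE is just a placeholder statement):

    \shell ls ~/pg_twophase
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 1;
    END;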
Michael Paquier wrote:
I also created a page in postgresql's wiki about this feature.
Please refer to this link:
http://wiki.postgresql.org/wiki/Pgbench:_shell_command
Please don't use colons in wiki page names. Pgbench_shell_command
should be fine.
--
Alvaro Herrera
Yes, it dramatically decreases the transaction flow.
This feature was not implemented for performance at all, but for
analysis purposes.
I used it mainly to look at the size of the state files in pg_twophase for
transactions that are prepared but not committed.
Regards
On Fri, Aug 7, 2009 at