Hello, This is a new version of the patch formerly known as
'alternative storage for libpq'.
- Changed the concept to 'Alternative Row Processor' from
'Storage handler'. Symbol names are also changed.
- The callback function is modified following the comment.
- Due to time constraints, I
On Thu, Jan 26, 2012 at 3:48 PM, David E. Wheeler da...@justatheory.com wrote:
On Jan 26, 2012, at 9:40 AM, Dimitri Fontaine wrote:
Not for 9.2, but I can't help thinking that if we could manage to host
the .so module itself in the catalogs, we could solve updating it in a
transactional way
On 23 January 2012 20:14, Pavel Stehule pavel.steh...@gmail.com wrote:
Hello
2012/1/23 Robert Haas robertmh...@gmail.com:
On Tue, Jan 3, 2012 at 2:49 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
jup, we can continue in enhancing step by step.
I change a patch and now ALTER TABLE,
So I'm going to prepare the next version of the patch with this design:
- in catalog extension scripts for inline extension
pg_extension_script(extoid, oldversion, version, script)
oldversion is null when create extension is used
unless when using the create extension from
On 27.01.2012 11:57, Dean Rasheed wrote:
I just noticed this copy-and-paste error in the ALTER FOREIGN TABLE docs:
IF EXISTS:
Do not throw an error if the sequence does not exist. A notice is issued
in this case.
That should be foreign table not sequence.
Thanks, fixed.
--
On Thu, Jan 26, 2012 at 7:01 PM, Dan Scales sca...@vmware.com wrote:
I'm not sure why you moved the checksum calculation (PageSetVerificationInfo)
to mdwrite() rather than smgrwrite(). If there were ever another storage
backend, it would have to duplicate the checksum check, right? Is
On 26.01.2012 04:10, Robert Haas wrote:
On Wed, Jan 25, 2012 at 3:11 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Attached is a patch to do that. It adds a new mode to
LWLockConditionalAcquire(), LW_EXCLUSIVE_BUT_WAIT. If the lock is free, it
is acquired and the function
On Thu, Jan 26, 2012 at 11:36 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
I'm not surprised that you weren't able to measure a performance
regression from the binary bloat. Any such regression is bound to be
very small and probably quite difficult to notice most of the time;
it's really
On Fri, Jan 27, 2012 at 8:35 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Yeah, we have to be careful with any overhead in there, it can be a hot
spot. I wouldn't expect any measurable difference from the above, though.
Could I ask you to rerun the pgbench tests you did
Uh, obviously I meant causal relationship and not correlation.
On 27 January 2012 13:37, Robert Haas robertmh...@gmail.com wrote:
I completely agree. So the point is that, when faced with a patch that
adds an atypically large number of CPU instructions, we ought to ask
ourselves whether those
On Fri, Jan 27, 2012 at 9:27 AM, Peter Geoghegan pe...@2ndquadrant.com wrote:
Well, I don't think it's all that subjective - it's more the case that
it is just difficult, or it gets that way as you consider more
specialisations.
Sure it's subjective. Two well-meaning people could have
On Sun, Jan 15, 2012 at 5:05 PM, Josh Kupershmidt schmi...@gmail.com wrote:
On Sun, Jan 15, 2012 at 3:02 PM, Gabriele Bartolini
gabriele.bartol...@2ndquadrant.it wrote:
My actual intention was to have the filename as output of the command, in
order to easily pipe it to another script. Hence
On Mon, Jan 16, 2012 at 12:06 PM, Marti Raudsepp ma...@juffo.org wrote:
Here's v6 of my expression caching patch.
The patch is not attached.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list
On Fri, Jan 27, 2012 at 2:57 AM, Kyotaro HORIGUCHI
horiguchi.kyot...@oss.ntt.co.jp wrote:
Hello, This is a new version of the patch formerly known as
'alternative storage for libpq'.
I took a quick look at the patch and the docs. Looks good and agree
with rationale and implementation. I see
On Fri, Jan 27, 2012 at 05:57:01PM +0900, Kyotaro HORIGUCHI wrote:
Hello, This is a new version of the patch formerly known as
'alternative storage for libpq'.
- Changed the concept to 'Alternative Row Processor' from
'Storage handler'. Symbol names are also changed.
- Callback function
Hello,
Last year, I asked for your opinions about how to fix the bug of unreliable
pg_ctl -w start, as in the thread:
http://archives.postgresql.org/pgsql-hackers/2011-05/msg01407.php
The phenomenon was that pg_ctl -w start did not return for 60 seconds when
postgresql.conf contained a
On Fri, Jan 27, 2012 at 09:35:04AM -0600, Merlin Moncure wrote:
On Fri, Jan 27, 2012 at 2:57 AM, Kyotaro HORIGUCHI
- The meaning of PGresAttValue is changed. The field 'value' now
contains a value withOUT terminating zero. This change seems to
have no effect on any other portion within
On Sun, Jan 15, 2012 at 1:01 PM, Joachim Wieland j...@mcknight.de wrote:
So this is the parallel pg_dump patch, generalizing the existing
parallel restore and allowing parallel dumps for the directory archive
format, the patch works on Windows and Unix.
This patch introduces a large amount of
On Fri, Jan 27, 2012 at 10:57 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 15, 2012 at 1:01 PM, Joachim Wieland j...@mcknight.de wrote:
So this is the parallel pg_dump patch, generalizing the existing
parallel restore and allowing parallel dumps for the directory archive
format,
Hello,
I have developed a variation of the cover density ranking functions that counts
only covers that are smaller than a specified limit. It is useful for finding
combinations of terms that appear near one another. Here is an example of
usage:
-- normal cover density ranking : not changed
And here is the patch, that I forgot to attach
Hello,
I have developed a variation of the cover density ranking functions that counts
only covers that are smaller than a specified limit. It is useful for finding
combinations of terms that appear near one another. Here is an example of
On Fri, Jan 27, 2012 at 10:58 AM, Robert Haas robertmh...@gmail.com wrote:
It's not clear to me why fmtQualifiedId needs to move to dumputils.c.
The way you have it, fmtQualifiedId() is now with fmtId(), but no
longer with fmtCopyColumnList(), the only other similarly named
function in that
The rank counts 1/coversize. So bigger covers will not have much impact
anyway. What is the need of the patch?
-Sushant.
On Fri, 2012-01-27 at 18:06 +0200, karave...@mail.bg wrote:
Hello,
I have developed a variation of cover density ranking functions that
counts only covers that are
On Sun, Jan 15, 2012 at 1:01 PM, Joachim Wieland j...@mcknight.de wrote:
So this is the parallel pg_dump patch, generalizing the existing
parallel restore and allowing parallel dumps for the directory archive
format, the patch works on Windows and Unix.
It seems a little unfortunate that we
- Quote from Sushant Sinha (sushant...@gmail.com), on 27.01.2012 at 18:32 -
The rank counts 1/coversize. So bigger covers will not have much impact
anyway. What is the need of the patch?
-Sushant.
If you want to find only combinations of words that are close to one another,
with the
On 27.01.2012 18:46, Robert Haas wrote:
On Sun, Jan 15, 2012 at 1:01 PM, Joachim Wielandj...@mcknight.de wrote:
In parallel restore, the master closes its own connection to the
database before forking of worker processes, just as it does now. In
parallel dump however, we need to hold the
- Quote from karave...@mail.bg, on 27.01.2012 at 18:48 -
- Quote from Sushant Sinha (sushant...@gmail.com), on 27.01.2012 at 18:32
-
The rank counts 1/coversize. So bigger covers will not have much impact
anyway. What is the need of the patch?
-Sushant.
If you want to find
On Sat, Jan 14, 2012 at 7:34 AM, Sergey Konoplev gray...@gmail.com wrote:
Hi,
http://www.postgresql.org/docs/9.1/interactive/catalog-pg-statistic.html
It specifies that entries are created by ANALYZE, but does not mention
that if a table is empty the entry for it is not created.
The actual
On Fri, Jan 27, 2012 at 11:53 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
If the master process keeps the locks it acquires in the beginning, you
could fall back to dumping those tables where the child lock fails using the
master connection.
Hmm, that's a thought.
On Wed, Jan 11, 2012 at 6:43 AM, Ashutosh Bapat
ashutosh.ba...@enterprisedb.com wrote:
Hi,
After running regression, I ran EXPLAIN on one of the queries in regression
(test create_misc) and got the following output
regression=# explain verbose select * into table ramp from road where name ~
On Thu, Dec 29, 2011 at 4:17 PM, Noah Misch n...@leadboat.com wrote:
We document that a polymorphic C-language function may identify the concrete
data type of each argument using calls to get_fn_expr_argtype(). That relies
on FmgrInfo.fn_expr, which only the executor sets. Calls of internal
MauMau maumau...@gmail.com writes:
In PostgreSQL 9.1, the wait processing was largely modified. However, the
same assumption seems to still remain, though the duration is 5 seconds. 5
seconds of wait is probably insufficient for my case. I think no fixed
duration is appropriate.
Well, feel
On Sat, Jan 7, 2012 at 12:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I feel like this is a trick question, but I'll ask anyway: Can't we
just ignore ANALYZE?
AFAICS, no. ANALYZE will run user-defined code: not only user-supplied
stats collection functions, but user-defined index expressions.
On Thu, Jan 12, 2012 at 4:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:
While working on bug #6393 I was reminded of the truth of $SUBJECT: any
rows inserted into the new toast table will have the xmin of the CLUSTER
or VACUUM FULL operation, and invalid xmax, whereas their parent heap
rows will
On Fri, Jan 27, 2012 at 9:14 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Jan 14, 2012 at 7:34 AM, Sergey Konoplev gray...@gmail.com wrote:
I've added a note to that effect to the documentation for ANALYZE,
which seems like a more appropriate place than the pg_statistic
documentation.
Not to mention palloc, another extremely fundamental and non-reentrant
subsystem.
Possibly we could work on making all that stuff re-entrant, but it would
be a huge amount of work for a distant and uncertain payoff.
Right. I think it makes more sense to try to get parallelism working
first
The advantage of putting the checksum calculation in smgrwrite() (or mdwrite())
is that it catches a bunch of page writes that don't go through the buffer pool
(see calls to smgrwrite() in nbtree.c, nbtsort.c, spginsert.c)
Also, I missed this before: don't you want to add the checksum
On Thu, Jan 12, 2012 at 4:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
The following patch adds a pgbench option -I to load data using
INSERTs, so that we can begin benchmark testing with rows that have
large numbers of distinct un-hinted transaction ids. With a database
pre-created using
On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, it was. Sorry about that. New version attached, retesting while
you read this.
In my hands I could never get this patch to do anything. The new
cache was never used.
I think that that was because RecentXminPageno
On Fri, Jan 27, 2012 at 9:47 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 15, 2012 at 5:05 PM, Josh Kupershmidt schmi...@gmail.com wrote:
On Sun, Jan 15, 2012 at 3:02 PM, Gabriele Bartolini
gabriele.bartol...@2ndquadrant.it wrote:
My actual intention was to have the filename as
Excerpts from Josh Kupershmidt's message of vie ene 27 19:43:51 -0300 2012:
On Fri, Jan 27, 2012 at 9:47 AM, Robert Haas robertmh...@gmail.com wrote:
This email thread seems to have trailed off without reaching a
conclusion. The patch is marked as Waiting on Author in the
CommitFest
On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Also, I think the general approach is wrong. The only reason to have
these pages in shared memory is that we can control access to them to
prevent write/write and read/write corruption. Since these pages are
never
Indeed, I'm a beginner in Make, but I read a few tutorials and was able to
do what I wanted outside of PG using a simple makefile.
Now, when moving to PG, I found the Make structure much more complicated
and didn't know where to add my configuration.
I'm looking only for this file to run in PG (the
It looks like initdb doesn't fsync all the files it creates, e.g. the
PG_VERSION file.
While it's unlikely that it would cause any real data loss, it can be
inconvenient in some testing scenarios involving VMs.
Thoughts? Would a patch to add a few fsync calls to initdb be accepted?
Is a
On Fri, Jan 27, 2012 at 3:16 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Also, I think the general approach is wrong. The only reason to have
these pages in shared memory is that we can control access to them to
prevent
Hi,
I'm using latest git master (latest entry
0816fad6eebddb8f1f0e21635e46625815d690b9) and I'm getting an error
when trying to create a large data set with pgbench:
thom@swift:~/Development$ createdb pgbench
thom@swift:~/Development$ pgbench -i -s 100 pgbench
NOTICE: table pgbench_branches
From: Tom Lane t...@sss.pgh.pa.us
Well, feel free to increase that duration if you want. The reason it's
there is to not wait for a long time if the postmaster falls over
instantly at startup, but in a non-interactive situation you might not
care.
Yes, just lengthening the wait duration
Robert Haas robertmh...@gmail.com writes:
It's a feature, not a bug, that we schema-qualify names when VERBOSE
is specified. That was done on purpose for the benefit of external
tools that might need this information to disambiguate which object is
being referenced.
Table *aliases*, of
MauMau maumau...@gmail.com writes:
From: Tom Lane t...@sss.pgh.pa.us
Looks like complete nonsense to me, if the goal is to behave sanely when
postmaster.pid hasn't been created yet. Where do you think get_pgpid
gets the PID from?
Yes, I understand that get_pgpid() gets the pid from
Thom Brown t...@linux.com writes:
I'm using latest git master (latest entry
0816fad6eebddb8f1f0e21635e46625815d690b9) and I'm getting an error
when trying to create a large data set with pgbench:
LOG: could not stat file base/pgsql_tmp/pgsql_tmp8056.0: Success
STATEMENT: alter table
Alvaro Herrera alvhe...@alvh.no-ip.org writes:
I expected the FETCH to return one row, with the latest data, i.e.
(1, 3), but instead it's returning empty.
This is the same thing I was complaining about in the bug #6123 thread,
On Fri, Jan 27, 2012 at 04:19:41PM -0800, Jeff Davis wrote:
It looks like initdb doesn't fsync all the files it creates, e.g. the
PG_VERSION file.
While it's unlikely that it would cause any real data loss, it can be
inconvenient in some testing scenarios involving VMs.
Thoughts? Would a
From: Tom Lane t...@sss.pgh.pa.us
I don't see any point in worrying about that case when you can't handle
the basic case that the postmaster hasn't created postmaster.pid yet.
In any case, this does nothing at all to answer the question you posed,
which was how long is it reasonable to wait for