On Fri, Nov 30, 2012 at 5:46 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Think of someone setting up a test server, by setting it up as a standby
from the master. Now, when someone holds a transaction open in the test
server, you get bloat in the master. Or if you set up a standby
On Fri, Nov 30, 2012 at 6:06 PM, Kevin Grittner kgri...@mail.com wrote:
Without hot standby feedback, reporting queries are impossible.
I've experienced it. Cancellations make it impossible to finish
any decently complex reporting query.
With what setting of max_standby_streaming_delay? I
On Fri, Nov 30, 2012 at 6:20 PM, Kevin Grittner kgri...@mail.com wrote:
Claudio Freire wrote:
With what setting of max_standby_streaming_delay? I would rather
default that to -1 than default hot_standby_feedback on. That
way what you do on the standby only affects the standby.
On Fri, Nov 30, 2012 at 6:49 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
I have most certainly managed databases where holding up vacuuming
on the source would cripple performance to the point that users
would have demanded that any other process causing it must be
immediately
On Thu, Dec 27, 2012 at 11:46 AM, Peter Bex peter@xs4all.nl wrote:
Implementing a more secure challenge-response based algorithm means
a change in the client-server protocol. Perhaps something like SCRAM
(maybe through SASL) really is the way forward for this, but that
seems like quite a
On Wed, Jan 2, 2013 at 10:03 AM, Magnus Hagander mag...@hagander.net wrote:
Finally we deny MD5 - I have no idea why we do that.
Because it's broken, same motivation as in the thread for implementing
ZK authentication.
Also, I seem to have missed something because the thread subject
mentions
On Mon, Jan 7, 2013 at 3:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
One issue that needs some thought is that the argument for this formula
is based entirely on thinking about b-trees. I think it's probably
reasonable to apply it to gist, gin, and sp-gist as well, assuming we
can get some
On Tue, Jan 8, 2013 at 10:20 AM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Jan 8, 2013 at 4:04 AM, Takeshi Yamamuro
yamamuro.take...@lab.ntt.co.jp wrote:
Apart from my patch, what I care about is that the current one might
be much slower against I/O. For example, when compressing
and writing
On Tue, Jan 8, 2013 at 11:39 AM, Merlin Moncure mmonc...@gmail.com wrote:
Reference:
http://postgresql.1045698.n5.nabble.com/Simple-join-doesn-t-use-index-td5738689.html
This is a pretty common gotcha: user sets shared_buffers but misses
the esoteric but important effective_cache_size. ISTM
On Wed, Jan 9, 2013 at 3:39 PM, Josh Berkus j...@agliodbs.com wrote:
It seems to me that pgfincore has the smarts we need to know about that,
and that Cédric has code and references for making it work on all
platforms we care about (linux, bsd, windows for starters).
Well, fincore is
On Mon, Jan 14, 2013 at 1:01 PM, Stephen Frost sfr...@snowman.net wrote:
I do like the idea of a generalized answer which just runs a
user-provided command on the server but that's always going to require
superuser privileges.
Unless it's one of a set of superuser-authorized compression tools.
On Mon, Jan 14, 2013 at 11:33 PM, Stephen Frost sfr...@snowman.net wrote:
Now, protocol-level on-the-wire compression
is another option, but there's quite a few drawbacks to that and quite a
bit of work involved. Having support for COPY-based compression could
be an answer for many cases
On Tue, Jan 15, 2013 at 1:08 PM, Stephen Frost sfr...@snowman.net wrote:
Where it does work well is when you move into a bulk-data mode (ala
COPY) and can compress relatively large amounts of data into a smaller
number of full-size packets to be sent.
Well... exactly. COPY is one case, big
On Tue, Jan 15, 2013 at 8:19 PM, Bruce Momjian br...@momjian.us wrote:
Given our row-based storage architecture, I can't imagine we'd do
anything other than take a row-based approach to this.. I would think
we'd do two things: parallelize based on partitioning, and parallelize
seqscan's
On Tue, Jan 15, 2013 at 7:46 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Compressing every small packet seems like it'd be overkill and might
surprise people by actually reducing performance in the case of lots of
small requests.
Yeah, proper selection and integration of a compression method would
On Wed, Jan 16, 2013 at 12:13 AM, Stephen Frost sfr...@snowman.net wrote:
* Claudio Freire (klaussfre...@gmail.com) wrote:
On Tue, Jan 15, 2013 at 8:19 PM, Bruce Momjian br...@momjian.us wrote:
The 1GB idea is interesting. I found in pg_upgrade that file copy would
just overwhelm the I/O
On Wed, Jan 16, 2013 at 12:55 AM, Stephen Frost sfr...@snowman.net wrote:
If memory serves me correctly (and it does, I suffered it a lot), the
performance hit is quite considerable. Enough to make it a lot worse
rather than not as good.
I feel like we must not be communicating very well.
On Wed, Jan 16, 2013 at 10:33 AM, Stephen Frost sfr...@snowman.net wrote:
* Claudio Freire (klaussfre...@gmail.com) wrote:
Well, there's the fault in your logic. It won't be as linear.
I really don't see how this has become so difficult to communicate.
It doesn't have to be linear.
We're
On Wed, Jan 16, 2013 at 10:04 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Hmm...
How about being aware of multiple spindles - so if the requested data
covers multiple spindles, then data could be extracted in parallel. This
may, or may not, involve multiple I/O channels?
On Wed, Jan 16, 2013 at 8:19 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Jan 15, 2013 at 4:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I find the argument that this supports compression-over-the-wire to be
quite weak, because COPY is only one form of bulk data transfer, and
one that a
On Wed, Jan 16, 2013 at 11:44 PM, Bruce Momjian br...@momjian.us wrote:
On Wed, Jan 16, 2013 at 05:04:05PM -0800, Jeff Janes wrote:
On Tuesday, January 15, 2013, Stephen Frost wrote:
* Gavin Flower (gavinflo...@archidevsys.co.nz) wrote:
How about being aware of multiple spindles - so
On Thu, Jan 24, 2013 at 6:36 AM, Xi Wang xi.w...@gmail.com wrote:
icc optimizes away the overflow check x + y < x (y > 0), because
signed integer overflow is undefined behavior in C. Instead, use
a safe precondition test x > INT_MAX - y.
I should mention gcc 4.7 does the same, and it emits a
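The difference between the two checks can be sketched as follows (in Python, modeling 32-bit signed ints, since Python integers themselves never overflow; in C the naive post-hoc check is the one compilers may delete):

```python
INT_MAX = 2**31 - 1
INT_MIN = -2**31

def addition_overflows(x, y):
    """Safe precondition test: decide whether x + y would overflow a
    32-bit signed int WITHOUT ever computing the overflowing sum.
    In C, the naive check `x + y < x` performs the overflow first,
    which is undefined behavior, so icc and gcc >= 4.7 may remove it."""
    if y > 0:
        return x > INT_MAX - y   # x + y would exceed INT_MAX
    else:
        return x < INT_MIN - y   # x + y would fall below INT_MIN

assert addition_overflows(INT_MAX, 1)
assert not addition_overflows(INT_MAX - 1, 1)
assert addition_overflows(INT_MIN, -1)
assert not addition_overflows(INT_MIN, 1)
```

The precondition form only ever computes `INT_MAX - y` (or `INT_MIN - y`), which cannot overflow given the sign test, so no undefined behavior is available for the optimizer to exploit.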
On Thu, Oct 18, 2012 at 2:33 PM, Josh Berkus j...@agliodbs.com wrote:
I should also add that this is a switchable sync/asynchronous
transactional queue, whereas LISTEN/NOTIFY is a synchronous
transactional queue.
Thanks for explaining.
New here, I missed half the conversation, but since
I've noticed, doing some reporting queries once, that index scans fail
to saturate server resources on compute-intensive queries.
Problem is, just after fetching a page, postgres starts computing
stuff before fetching the next. This results in I/O - compute - I/O -
compute alternation that
On Thu, Oct 18, 2012 at 5:30 PM, Claudio Freire klaussfre...@gmail.com wrote:
Backward:
QUERY PLAN
On Fri, Oct 19, 2012 at 5:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
It looks like we could support
CREATE TABLE t1 (c int[] REFERENCES BY ELEMENT t2);
but (1) this doesn't seem terribly intelligible to me, and
(2) I don't see how we modify that if we want to provide
On Thu, Oct 18, 2012 at 7:42 PM, Claudio Freire klaussfre...@gmail.com wrote:
Fun. That didn't take long.
With the attached anti-sequential scan patch, and effective_io_concurrency=8:
QUERY PLAN
On Tue, Oct 23, 2012 at 9:44 AM, John Lumby johnlu...@hotmail.com wrote:
From: Claudio Freire klaussfre...@gmail.com
I hope I'm not talking to myself.
Indeed not. I also looked into prefetching for pure index scans for
b-trees (and extension to use async io).
http
On Sat, Oct 27, 2012 at 3:41 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
I think you're just moving the atomic-write problem from the data pages
to wherever you keep these pointers.
If the pointers are stored as simple 4-byte integers, you probably could
assume that they're
On Tue, Oct 23, 2012 at 10:54 AM, Claudio Freire klaussfre...@gmail.com wrote:
Indeed not. I also looked into prefetching for pure index scans for
b-trees (and extension to use async io).
http://archives.postgresql.org/message-id/BLU0-SMTP31709961D846CCF4F5EB4C2A3930%40phx.gbl
Yes, I've
On Mon, Oct 29, 2012 at 7:07 PM, Cédric Villemain
ced...@2ndquadrant.com wrote:
But it also looks forgotten. Bringing it back to life would mean
building the latest kernel with that patch included, replicating the
benchmarks I ran here, sans pg patch, but with patched kernel, and
reporting the
On Thu, Nov 1, 2012 at 1:37 PM, John Lumby johnlu...@hotmail.com wrote:
Claudio wrote:
Oops - forgot to effectively attach the patch.
I've read through your patch and the earlier posts by you and Cédric.
This is very interesting. You chose to prefetch index btree (key-ptr)
pages
On Thu, Nov 1, 2012 at 2:00 PM, Andres Freund and...@2ndquadrant.com wrote:
I agree. I'm a bit hesitant to subscribe to yet another mailing list
FYI you can send messages to linux-kernel without subscribing (there's
no moderation either).
Subscribing to linux-kernel is like drinking from a
On Thu, Nov 1, 2012 at 10:59 PM, Greg Smith g...@2ndquadrant.com wrote:
On 11/1/12 6:13 PM, Claudio Freire wrote:
posix_fadvise what's the trouble there, but the fact that the kernel
stops doing read-ahead when a call to posix_fadvise comes. I noticed
the performance hit, and checked
On Tue, Nov 6, 2012 at 6:59 PM, Tom Lane t...@sss.pgh.pa.us wrote:
If, instead, you are keen on getting the source code for libpq in a
separate tarball, I'd seriously question why that would be expected to be
valuable. On most systems, these days, it doesn't take terribly much time
or space
On Tue, Nov 6, 2012 at 7:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Claudio Freire klaussfre...@gmail.com writes:
Maybe a libs / install-libs makefile target?
I've already faced the complicated procedure one has to go through to
build and install only libpq built from source
On Thu, May 31, 2012 at 11:17 AM, Robert Klemme
shortcut...@googlemail.com wrote:
OK, my fault was to assume you wanted to measure only your part, while
apparently you meant overall savings. But Tom had asked for separate
measurements if I understood him correctly. Also, that measurement of
On Thu, May 31, 2012 at 11:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:
The performance patches we applied to pg_dump over the past couple weeks
were meant to relieve pain in situations where the big server-side
lossage wasn't the dominant factor in runtime (ie, partial dumps).
But this one is
On Thu, May 31, 2012 at 12:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
No, Tatsuo's patch attacks a phase dominated by latency in some
setups.
No, it does not. The reason it's a win is that it avoids the O(N^2)
behavior in the server. Whether the bandwidth savings is worth worrying
about
On Sun, Feb 24, 2013 at 11:08 AM, Stephen Frost sfr...@snowman.net wrote:
* Heikki Linnakangas (hlinnakan...@vmware.com) wrote:
So if you want to be kind to readers, look at the patch and choose
the format depending on which one makes it look better. But there's
no need to make a point of it
On Tue, Apr 2, 2013 at 1:24 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
The problem is that the state is maintained only to an integer number of
milliseconds starting at 1, so it can take a number of attempts for the
random increment to jump from 1 to 2, and
On Fri, Apr 19, 2013 at 6:19 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Apr 3, 2013 at 6:40 PM, Greg Stark st...@mit.edu wrote:
On Fri, Aug 21, 2009 at 6:54 PM, decibel deci...@decibel.org wrote:
Would it? Risk seems like it would just be something along the lines of
the high-end of
On Fri, Apr 19, 2013 at 7:43 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Fri, Apr 19, 2013 at 2:24 PM, Claudio Freire klaussfre...@gmail.com
wrote:
Especially if there's some locality of occurrence, since analyze
samples pages, not rows.
But it doesn't take all rows in each sampled page
On Wed, Apr 24, 2013 at 6:47 PM, Joachim Wieland j...@mcknight.de wrote:
On Wed, Apr 24, 2013 at 4:05 PM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
What might make sense is something like pg_dump_restore which would have
no intermediate storage at all, just pump the data etc from
On Tue, May 14, 2013 at 11:50 AM, Noah Misch n...@leadboat.com wrote:
On Mon, May 13, 2013 at 09:52:43PM +0200, Kohei KaiGai wrote:
2013/5/13 Noah Misch n...@leadboat.com
The choice of whether to parallelize can probably be made a manner similar
to
the choice to do an external sort: the
On Wed, May 15, 2013 at 3:04 PM, Noah Misch n...@leadboat.com wrote:
On Tue, May 14, 2013 at 12:15:24PM -0300, Claudio Freire wrote:
You know what would be a low-hanging fruit that I've been thinking
would benefit many of my own queries?
Parallel sequential scan nodes. Even if there's no real
On Fri, May 17, 2013 at 4:25 PM, Kevin Grittner kgri...@ymail.com wrote:
(3) The count algorithm must be implemented in a way that understands
MVCC internals: Reading the base tables must be done using a technique
that reads all rows (i.e., also the ones not visible to the current
On Thu, May 23, 2013 at 8:27 PM, Greg Smith g...@2ndquadrant.com wrote:
The main unintended consequences issue I've found so far is when a cost
delayed statement holds a heavy lock. Autovacuum has some protection
against letting processes with an exclusive lock on a table go to sleep. It
On Thu, May 23, 2013 at 8:46 PM, Greg Smith g...@2ndquadrant.com wrote:
On 5/23/13 7:34 PM, Claudio Freire wrote:
Why not make the delay conditional on the amount of concurrency, kinda
like the commit_delay? Although in this case, it should only count
unwaiting connections.
The test run
On Fri, May 24, 2013 at 4:10 PM, Szymon Guz mabew...@gmail.com wrote:
I'm thinking about something else. We could convert it into Decimal
(http://docs.python.org/2/library/decimal.html) class in Python.
Unfortunately this class requires import like `from decimal import Decimal`
from a
On Fri, May 24, 2013 at 4:22 PM, Szymon Guz mabew...@gmail.com wrote:
Hm... maybe you're right. I think I don't understand fully how the
procedures are executed, and I need to read more to get it.
Well, it's easy.
Instead of PLyFloat_FromNumeric[0], you can make a
PLyDecimal_FromNumeric.
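A hypothetical PLyDecimal_FromNumeric would be written in C against the PL/Python internals, but what it does can be sketched in plain Python: build a Decimal from the text form of a numeric, which is lossless, unlike the float conversion done by PLyFloat_FromNumeric (the function name and text-based route are assumptions for illustration):

```python
from decimal import Decimal

def decimal_from_numeric(numeric_text):
    """Sketch of a PLyDecimal_FromNumeric-style conversion: construct a
    Python Decimal from the text representation of a PostgreSQL numeric.
    Going through the string keeps every digit; converting through a C
    double (as the float path does) would round to 53 bits of mantissa."""
    return Decimal(numeric_text)

d = decimal_from_numeric("123456789.000000001")
assert d == Decimal("123456789.000000001")
assert float("123456789.000000001") == 123456789.0  # float drops the tail
```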
On Mon, May 27, 2013 at 8:13 PM, Peter Eisentraut pete...@gmx.net wrote:
On Fri, 2013-05-24 at 16:46 -0300, Claudio Freire wrote:
Well, it's easy.
Instead of PLyFloat_FromNumeric[0], you can make a
PLyDecimal_FromNumeric.
Please send a patch. This would be a welcome addition.
I can write
On Wed, Jun 12, 2013 at 11:55 AM, Robert Haas robertmh...@gmail.com wrote:
I hope PostgreSQL will provide a reliable archiving facility that is ready
to use.
+1. I think we should have a way to set an archive DIRECTORY, rather
than an archive command. And if you set it, then PostgreSQL
On Wed, Jun 12, 2013 at 6:03 PM, Joshua D. Drake j...@commandprompt.com wrote:
Right now you have to be a rocket
scientist no matter what configuration you're running.
This is quite a bit overblown, assuming your needs are simple. Archiving is,
as it is now, a relatively simple process to
On Wed, Jun 19, 2013 at 7:13 AM, Tatsuo Ishii is...@postgresql.org wrote:
For now, my idea is pretty vague.
- Record info about modified blocks. We don't need to remember the
whole history of a block if the block was modified multiple times.
We just remember that the block was modified
On Wed, Jun 19, 2013 at 3:54 PM, Jim Nasby j...@nasby.net wrote:
On 6/19/13 11:02 AM, Claudio Freire wrote:
On Wed, Jun 19, 2013 at 7:13 AM, Tatsuo Ishii is...@postgresql.org
wrote:
For now, my idea is pretty vague.
- Record info about modified blocks. We don't need to remember
On Wed, Jun 19, 2013 at 6:20 PM, Stephen Frost sfr...@snowman.net wrote:
* Claudio Freire (klaussfre...@gmail.com) wrote:
I don't see how this is better than snapshotting at the filesystem
level. I have no experience with TB scale databases (I've been limited
to only hundreds of GB), but from
On Wed, Jun 19, 2013 at 7:18 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
If you have the two technologies, you could teach them to work in
conjunction: you set up WAL replication, and tell the WAL compressor to
prune updates for high-update tables (avoid useless traffic), then use
On Wed, Jun 19, 2013 at 7:39 PM, Tatsuo Ishii is...@postgresql.org wrote:
I'm thinking of implementing an incremental backup tool for
PostgreSQL. The use case for the tool would be taking a backup of huge
database. For that size of database, pg_dump is too slow, even WAL
archive is too
On Wed, Jun 19, 2013 at 8:40 PM, Tatsuo Ishii is...@postgresql.org wrote:
On Wed, Jun 19, 2013 at 6:20 PM, Stephen Frost sfr...@snowman.net wrote:
* Claudio Freire (klaussfre...@gmail.com) wrote:
I don't see how this is better than snapshotting at the filesystem
level. I have no experience
On Mon, Jun 24, 2013 at 2:22 PM, Josh Berkus j...@agliodbs.com wrote:
I have previously proposed that all of the reviewers of a given
PostgreSQL release be honored in the release notes as a positive
incentive, and was denied on this from doing so. Not coincidentally, we
don't seem to have any
On Mon, Jun 24, 2013 at 2:41 PM, Andres Freund and...@2ndquadrant.com wrote:
I don't like the idea of sending gifts. I do like the idea of public thanks. We
should put full recognition in the release notes for someone who reviews a
patch. If they didn't review the patch, the person that wrote the
On Mon, Jun 24, 2013 at 9:19 PM, Joshua D. Drake j...@commandprompt.com wrote:
I think the big question is whether you can _control_ what C++ features
are used, or whether you are perpetually instructing users what C++
features not to use.
How is that different than us having to do the same
On Tue, Jun 25, 2013 at 12:55 PM, Robert Haas robertmh...@gmail.com wrote:
Let me back up a minute. You told the OP that he could make hash
partitioning by writing his own constraint and trigger functions. I
think that won't work. But I'm happy to be proven wrong. Do you have
an example
On Tue, Jun 25, 2013 at 2:17 PM, Josh Berkus j...@agliodbs.com wrote:
How should reviewers get credited in the release notes?
c) on the patch they reviewed, for each patch
This not only makes sense, it also lets people reading release notes
know there's been a review, and how thorough it was. I
On Tue, Jun 25, 2013 at 4:15 PM, Andres Freund and...@2ndquadrant.com wrote:
However, can you tell me what exactly you are concerned about? lz4 is
under the BSD license, and released by Google.
Snappy is released/copyrighted by google. lz4 by Yann Collet.
Both are under BSD licenses (3 and 2
On Tue, Jun 25, 2013 at 4:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
However, I find it hard to think that hash partitioning as such is very
high on the to-do list. As was pointed out upthread, the main practical
advantage of partitioning is *not* performance of routine queries, but
improved
On Tue, Jun 25, 2013 at 6:52 PM, Kevin Grittner kgri...@ymail.com wrote:
I agree though, that having an index implementation that can do the
first level split faster than any partitioning mechanism can do is
better, and that the main benefits of partitioning are in
administration, *not*
On Wed, Jun 26, 2013 at 10:25 AM, Andrew Dunstan and...@dunslane.net wrote:
On 06/26/2013 09:14 AM, Bruce Momjian wrote:
On Wed, Jun 26, 2013 at 10:40:17AM +1000, Brendan Jurd wrote:
On 26 June 2013 03:17, Josh Berkus j...@agliodbs.com wrote:
How should reviewers get credited in the
On Wed, Jun 26, 2013 at 11:14 AM, Bruce Momjian br...@momjian.us wrote:
On Wed, Jun 26, 2013 at 05:10:00PM +0300, Heikki Linnakangas wrote:
In practice, there might be a lot of quirks and inefficiencies and
locking contention etc. involved in various DBMS's, that you might
be able to work
On Thu, Jun 27, 2013 at 6:20 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Jun 26, 2013 at 11:14 AM, Claudio Freire klaussfre...@gmail.com
wrote:
Now I just have two indices. One that indexes only hot tuples, it's
very heavily queried and works blazingly fast, and one that indexes
On Fri, Jun 28, 2013 at 2:18 PM, Jim Nasby j...@nasby.net wrote:
On 6/17/13 3:38 PM, Josh Berkus wrote:
Why? Why can't we just update the affected pages in the index?
The page range has to be scanned in order to find out the min/max values
for the indexed columns on the range; and then,
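The update problem described here can be sketched concretely: once a row in a summarized page range changes, the stored min/max for that range can only be refreshed by rescanning every value in the range (a simplified model, ignoring on-disk layout):

```python
def refresh_range_summary(page_range_values):
    """Model of updating a min/max (range-summary) index entry: the new
    summary for a page range is only discoverable by scanning the whole
    range, because deleting the current min or max invalidates the
    stored bounds without revealing the next-best values."""
    return (min(page_range_values), max(page_range_values))

pages = [10, 3, 57, 8]
assert refresh_range_summary(pages) == (3, 57)

# Deleting the current max: the old summary (3, 57) is now stale, and
# only a full rescan of the range finds the new max.
pages.remove(57)
assert refresh_range_summary(pages) == (3, 10)
```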
On Fri, Jun 28, 2013 at 4:16 PM, Josh Berkus j...@agliodbs.com wrote:
(2) ideas on how we can speed up/parallelize performance testing efforts
are extremely welcome.
An official perf-test script in GIT, even if it only tests general
pg-bench-like performance, that can take two builds and
On Fri, Jun 28, 2013 at 5:14 PM, Steve Singer st...@ssinger.info wrote:
On 06/27/2013 05:04 AM, Szymon Guz wrote:
On 27 June 2013 05:21, Steve Singer st...@ssinger.info
mailto:st...@ssinger.info wrote:
On 06/26/2013 04:47 PM, Szymon Guz wrote:
Hi Steve,
thanks for the changes.
On Sat, Jun 29, 2013 at 7:58 PM, Josh Berkus j...@agliodbs.com wrote:
Dividing the tests into different sections is as simple as creating one
schedule file per section.
Oh? Huh. I'd thought it would be much more complicated. Well, by all
means, let's do it then.
I think I should point
On Sun, Jun 30, 2013 at 9:45 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-06-30 14:42:24 +0200, Szymon Guz wrote:
On 30 June 2013 14:31, Martijn van Oosterhout klep...@svana.org wrote:
On Sun, Jun 30, 2013 at 02:18:07PM +0200, Szymon Guz wrote:
python does not have any sort of
On Mon, Jul 1, 2013 at 2:29 AM, james ja...@mansionfamily.plus.com wrote:
On 01/07/2013 02:43, Claudio Freire wrote:
In essence, you'd have to use another implementation. CPython guys
have left it very clear they don't intend to fix that, as they don't
consider it a bug. It's just how
On Mon, Jul 1, 2013 at 4:32 PM, Robert Haas robertmh...@gmail.com wrote:
This shouldn't be too complex, and should give us a fixed nlogn complexity
even for wild data sets, without affecting existing normal data sets that
are present in every day transactions. I even believe that those data
On Mon, Jul 1, 2013 at 5:12 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jul 1, 2013 at 3:54 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Mon, Jul 1, 2013 at 4:32 PM, Robert Haas robertmh...@gmail.com wrote:
This shouldn't be too complex, and should give us a fixed nlogn
On Tue, Jul 2, 2013 at 12:36 PM, Peter Geoghegan p...@heroku.com wrote:
On Tue, Jul 2, 2013 at 5:04 AM, Atri Sharma atri.j...@gmail.com wrote:
I think if you'll try it you'll find that we perform quite well on
data sets of this kind - and if you read the code you'll see why.
Right, let me
On Fri, Jul 5, 2013 at 2:02 AM, Tatsuo Ishii is...@postgresql.org wrote:
- Support for NATIONAL_CHARACTER_SET GUC variable that will determine
the encoding that will be used in NCHAR/NVARCHAR columns.
You said NCHAR's encoding is UTF-8. Why do you need the GUC if NCHAR's
encoding is fixed to
On Fri, Jul 5, 2013 at 11:47 PM, Peter Eisentraut pete...@gmx.net wrote:
On Fri, 2013-06-28 at 17:29 -0300, Claudio Freire wrote:
Why not forego checking of the type, and instead check the interface?
plpy.info(x.as_tuple())
Should do.
d = decimal.Decimal((0,(3,1,4),-2))
d.as_tuple
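The interface check being suggested, accepting anything that quacks like a Decimal rather than testing the concrete class, can be sketched as (the helper name is made up for illustration):

```python
from decimal import Decimal

def looks_like_decimal(x):
    """Duck-typing check: instead of testing the concrete type (which
    breaks when an alternative implementation such as cdecimal is in
    use), test for the Decimal interface via as_tuple()."""
    return callable(getattr(x, "as_tuple", None))

d = Decimal((0, (3, 1, 4), -2))          # (sign, digits, exponent) -> 3.14
assert d == Decimal("3.14")
assert d.as_tuple() == (0, (3, 1, 4), -2)
assert looks_like_decimal(d)
assert not looks_like_decimal(3.14)      # a plain float has no as_tuple()
```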
On Sat, Jul 6, 2013 at 2:39 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Peter Eisentraut pete...@gmx.net writes:
PL/Python: Convert numeric to Decimal
Assorted buildfarm members don't like this patch.
Do you have failure details?
This is probably an attempt to operate decimals vs floats.
Ie:
On Sat, Jul 6, 2013 at 9:16 AM, Andrew Dunstan and...@dunslane.net wrote:
On 07/06/2013 01:52 AM, Claudio Freire wrote:
On Sat, Jul 6, 2013 at 2:39 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Peter Eisentraut pete...@gmx.net writes:
PL/Python: Convert numeric to Decimal
Assorted buildfarm members don't like
On Sun, Jul 7, 2013 at 4:28 PM, Peter Eisentraut pete...@gmx.net wrote:
On Sun, 2013-07-07 at 02:01 -0300, Claudio Freire wrote:
You really want to test more than just the str. The patch contains
casts to int and float, which is something existing PLPython code will
be doing, so it's good
On Thu, Jul 11, 2013 at 1:13 AM, Sean Chittenden s...@chittenden.org wrote:
, I suppose two things can be done:
1. Quit the connection
With my Infosec hat on, this is the correct option - force the client
back in to compliance with whatever the stated crypto policy is through
a
On Mon, Jul 22, 2013 at 6:04 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Pavan Deolasee escribió:
Hello,
While doing some tests, I observed that expression indexes can malfunction
if the underlying expression changes.
[...]
Perhaps this is a known behaviour/limitation, but I could
On Tue, Aug 6, 2013 at 3:31 PM, Bruce Momjian br...@momjian.us wrote:
I'd like to look at use cases, and let's see how ALTER SYSTEM SET
addresses or doesn't address these use cases. I'd really like it if
some other folks also posted use cases they know of.
(1) Making it easier for GUIs to
On Wed, Aug 14, 2013 at 9:34 PM, Peter Eisentraut pete...@gmx.net wrote:
On Tue, 2013-08-13 at 14:30 -0700, Josh Berkus wrote:
Currently PL/python has 1 dimension hardcoded for returning arrays:
create or replace function nparr ()
returns float[][]
language plpythonu
as $f$
from numpy
On Wed, Sep 11, 2013 at 12:27 PM, Bruce Momjian br...@momjian.us wrote:
Another argument in favor: this is a default setting, and by default,
shared_buffers won't be 25% of RAM.
So, are you saying you like 4x now?
Here is an argument for 3x. First, using the documented 25% of RAM, 3x
On Tue, Oct 8, 2013 at 1:23 AM, Atri Sharma atri.j...@gmail.com wrote:
Consider the aspects associated with open addressing. Open addressing
can quickly lead to growth in the main table. Also, chaining is a much
cleaner way of collision resolution, IMHO.
What do you mean by growth in the main
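The contrast being debated can be sketched minimally: with separate chaining, a collision extends a per-bucket list, so the main table never grows on collision, whereas open addressing must probe for (and eventually run out of) free slots in the table itself:

```python
class ChainedHashTable:
    """Minimal separate-chaining hash table: each bucket holds a list of
    (key, value) pairs, so a collision grows a chain instead of consuming
    another slot in the main table (as open addressing would)."""
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, key, value):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(chain):
            if k == key:                 # key already present: overwrite
                chain[i] = (key, value)
                return
        chain.append((key, value))       # collision: chain grows, table doesn't

    def lookup(self, key):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for k, v in chain:
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable(nbuckets=2)         # tiny table to force collisions
for i in range(10):
    t.insert(i, i * i)
assert t.lookup(7) == 49
assert len(t.buckets) == 2               # the main table did not grow
```

The trade-off, of course, is that long chains degrade lookups to linear scans, which is why real implementations resize once the load factor gets high.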
On Thu, Oct 10, 2013 at 1:13 PM, Robert Haas robertmh...@gmail.com wrote:
(1) Define the issue as not our problem. IOW, as of now, if you
want to use PostgreSQL, you've got to either make POSIX shared memory
work on your machine, or change the GUC that selects the type of
dynamic shared
On Wed, Oct 16, 2013 at 5:30 PM, Bruce Momjian br...@momjian.us wrote:
On Wed, Oct 16, 2013 at 04:25:37PM -0400, Andrew Dunstan wrote:
On 10/09/2013 11:06 AM, Andrew Dunstan wrote:
The assumption that each connection won't use lots of work_mem is
also false, I think, especially in these
On Tue, Oct 29, 2013 at 1:10 PM, Peter Geoghegan p...@heroku.com wrote:
On Tue, Oct 29, 2013 at 7:53 AM, Leonardo Francalanci m_li...@yahoo.it
wrote:
I don't see much interest in insert-efficient indexes.
Presumably someone will get around to implementing a btree index
insertion buffer one
On Wed, Oct 30, 2013 at 10:53 AM, Simon Riggs si...@2ndquadrant.com wrote:
LSM-tree also covers the goal of maintaining just 2 sub-trees and a
concurrent process of merging sub-trees. That sounds like it would
take a lot of additional time to get right and would need some
off-line process to
On Sun, Nov 3, 2013 at 3:58 PM, Florian Weimer fwei...@redhat.com wrote:
I would like to add truly asynchronous query processing to libpq, enabling
command pipelining. The idea is to to allow applications to auto-tune to
the bandwidth-delay product and reduce the number of context switches
On Mon, Nov 4, 2013 at 1:09 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Nov 2, 2013 at 6:07 AM, Simon Riggs si...@2ndquadrant.com wrote:
On 29 October 2013 16:10, Peter Geoghegan p...@heroku.com wrote:
On Tue, Oct 29, 2013 at 7:53 AM, Leonardo Francalanci m_li...@yahoo.it
wrote:
I
On Mon, Nov 4, 2013 at 1:27 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Nov 4, 2013 at 11:24 AM, Claudio Freire klaussfre...@gmail.com
wrote:
Such a thing would help COPY, so maybe it's worth a look
I have little doubt that a deferred insertion buffer of some kind
could help
On Mon, Nov 4, 2013 at 5:01 PM, Simon Riggs si...@2ndquadrant.com wrote:
Of course, it's possible that even we do get a shared memory
allocator, a hypothetical person working on this project might prefer
to make the data block-structured anyway and steal storage from
shared_buffers. So my
On Tue, Nov 5, 2013 at 6:57 AM, Leonardo Francalanci m_li...@yahoo.it wrote:
Simon Riggs wrote
Minmax indexes seem to surprise many people, so broad generalisations
aren't likely to be useful.
I think the best thing to do is to publish some SQL requests that
demonstrate in detail what you