really hard time determining when to degrade.
+1
This is also how 2PC works, btw - the database provides the building
blocks, i.e. PREPARE and COMMIT, and leaves it to a transaction manager
to deal with issues that require a whole-cluster perspective.
best regards,
Florian Pflug
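That division of labour can be sketched in a few lines. This is a toy model, not PostgreSQL code: class and function names are invented, and real participants can of course refuse to prepare or crash.

```python
# Toy transaction manager driving 2PC on top of the PREPARE / COMMIT
# building blocks each database provides (names are hypothetical).
class Participant:
    def __init__(self, name):
        self.name = name
        self.state = "active"

    def prepare(self):
        # in PostgreSQL terms: PREPARE TRANSACTION 'gid'
        self.state = "prepared"
        return True

    def commit(self):
        # in PostgreSQL terms: COMMIT PREPARED 'gid'
        assert self.state == "prepared"
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1: every participant must prepare; any refusal aborts all.
    if not all(p.prepare() for p in participants):
        for p in participants:
            p.rollback()
        return "aborted"
    # Phase 2: all are durably prepared, so commit cannot fail.
    for p in participants:
        p.commit()
    return "committed"


nodes = [Participant("db1"), Participant("db2")]
assert two_phase_commit(nodes) == "committed"
assert all(n.state == "committed" for n in nodes)
```

The whole-cluster decision (commit everywhere or nowhere) lives entirely in `two_phase_commit`; the participants only supply the two primitives.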
"#consistent_into on" or whatever.
*) Have pg_dump add that to all plpgsql functions if the server
version is < 9.4 or whatever major release this ends up in
That's all just my opinion of course.
best regards,
Florian Pflug
consistent_into to true;
I don't think a GUC is the best way to handle this. Handling
this via a per-function setting similar to #variable_conflict would
IMHO be better. So a function containing
#into_surplus_rows error
would complain whereas
#into_surplus_rows ignore_for_select
wouldn't.
On Jan11, 2014, at 18:53 , Andres Freund wrote:
> On 2014-01-11 18:28:31 +0100, Florian Pflug wrote:
>> Hm, I was about to suggest that you can set statement_timeout before
>> doing COMMIT to limit the amount of time you want to wait for the
>> standby to respond. Interes
Paragraph (18) says
The declared type of an ENL is an
implementation-defined exact numeric type whose scale is the number of
<digit>s to the right of the <period>. There shall be an exact numeric
type capable of representing the value of ENL exactly.
best regards,
Florian Pflug
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
the amount of time you want to wait for the
standby to respond. Interestingly, however, that doesn't seem to work,
which is weird, since AFAICS statement_timeout simply generates a
query cancel request after the timeout has elapsed, and cancelling
the COMMIT with Ctrl-C in psql *does* work.
I
On Jan10, 2014, at 15:10 , Merlin Moncure wrote:
> On Fri, Jan 10, 2014 at 6:00 AM, Florian Pflug wrote:
>> On Jan10, 2014, at 11:00 , Merlin Moncure wrote:
>>> On Fri, Jan 10, 2014 at 3:52 AM, Marko Tiikkaja wrote:
>>>> On 1/10/14, 10:41 AM, Merlin Moncure wro
On Jan11, 2014, at 01:24 , Jim Nasby wrote:
> On 1/10/14, 1:07 PM, Tom Lane wrote:
>> Florian Pflug writes:
>>> >On Jan10, 2014, at 19:08 , Tom Lane wrote:
>>>> >>Although, having said that ... maybe "build your own aggregate" would
>>>&
On Jan10, 2014, at 17:46 , Tom Lane wrote:
> Florian Pflug writes:
>> On Jan10, 2014, at 15:49 , Tom Lane wrote:
>>> Also, it might be reasonable for both the regular and the inverse
>>> transition functions to be strict. If a null entering the window
>>>
maybe even a small minority
> requirement. People who have the chops to get this sort of thing right
> can probably manage a custom aggregate definition.
So we'd put a footgun into the hands of people who don't know what they're
doing, to be fired for performance's sake?
sed optimization for float is akin to a C compiler
which decided to evaluate
a + b + c + … z
as
-a + (2a - b) + (2b - c) + … + (2y - z) + 2z
Algebraically, these are the same, but it'd still be insane to
do that.
best regards,
Florian Pflug
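The analogy works because float addition is not associative, which is exactly what breaks a naive inverse transition function (subtracting a value back out as it leaves the window). A quick illustration:

```python
# Floating-point addition is not associative:
assert (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)

# ... which is why inverting a windowed sum by subtraction can drift:
state = 0.0
state += 1e16    # a huge value enters the window
state += 1.0     # absorbed: 1e16 + 1.0 rounds back to 1e16
state -= 1e16    # the huge value leaves the window again
assert state == 0.0   # yet the one element still in the window is 1.0
```

The running state ends up at 0.0 even though the remaining window content sums to 1.0, so the "optimized" aggregate silently returns a different answer than recomputing from scratch.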
On Jan10, 2014, at 15:49 , Tom Lane wrote:
> Florian Pflug writes:
>> Looking at the code it seems that for quite a few existing aggregates,
>> the state remains NULL until the first non-NULL input is processed. But
>> that doesn't hurt much - those aggregates ca
This solution isn't particularly pretty, but I don't currently see a good
alternative that allows implementing inverse transition functions in something
other than C and avoids needless overhead for those which are written in C.
best regards,
Florian Pflug
> [2,1], 3
> [2,2], 4
Now that we have WITH ORDINALITY, it'd be sufficient to have a
variant of array_dims() that returns int[][] instead of text, say
array_dimsarray(). Your unnest_dims could then be written as
unnest(array_dimsarray(array)) with ordinality
best regards,
Florian Pflug
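The behaviour of the proposed unnest(array_dimsarray(array)) WITH ORDINALITY can be mimicked in a few lines. This is a sketch only: `unnest_dims` is the hypothetical function name from the thread, restricted here to rectangular 2-D arrays with 1-based subscripts.

```python
from itertools import product

def unnest_dims(arr):
    # Yield ([i, j], value) pairs with 1-based subscripts, row-major,
    # matching the [2,1] -> 3, [2,2] -> 4 output quoted above.
    for i, j in product(range(len(arr)), range(len(arr[0]))):
        yield [i + 1, j + 1], arr[i][j]

assert list(unnest_dims([[1, 2], [3, 4]])) == [
    ([1, 1], 1), ([1, 2], 2), ([2, 1], 3), ([2, 2], 4),
]
```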
On Jan9, 2014, at 23:26 , Jim Nasby wrote:
> On 1/9/14, 11:08 AM, Marko Tiikkaja wrote:
>> On 1/9/14 5:44 PM, Florian Pflug wrote:
>>> On Jan9, 2014, at 14:57 , Dean Rasheed wrote:
>>>> On 19 December 2013 08:05, Pavel Stehule wrote:
>>>>> length
On Jan9, 2014, at 18:09 , Tom Lane wrote:
> Florian Pflug writes:
>> For float 4 and float8, wasn't the consensus that the potential
>> lossy-ness of addition makes this impossible anyway, even without the
>> NaN issue? But...
>
> Well, that was my opinion,
> cardinality('{{1,2},{3,4}}'::int[][]) = 4. That would make it
> consistent with the choices we've already made for unnest() and
> ordinality:
> - cardinality(foo) = (select count(*) from unnest(foo)).
> - unnest with ordinality would always result in ordinals in the range
> [
reporting "I can't do it here,
> fall back to the hard way".
That sounds like it might be possible to make things work for float4
and float8 after all, if we can determine whether a particular addition
was lossy or not.
best regards,
Florian Pflug
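One known way to determine whether an addition was lossy is an error-free transformation such as Knuth's TwoSum, which recovers the exact rounding error of a float addition. This is a general numerical technique, not something the thread itself specifies:

```python
def addition_error(a: float, b: float) -> float:
    # Knuth's TwoSum: returns the exact rounding error of a + b.
    # The addition was lossless iff the returned error is zero.
    s = a + b
    bv = s - a          # the part of b that made it into s
    return (a - (s - bv)) + (b - bv)

assert addition_error(1.0, 2.0) == 0.0    # 1.0 + 2.0 is exact
assert addition_error(1e16, 1.0) == 1.0   # the 1.0 was rounded away
```

An aggregate could use the error term to decide when the inverse transition is safe and when it must fall back to recomputing the window from scratch.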
best regards,
Florian Pflug
see* another
transaction's modifications but doesn't, due to begin/commit ordering
it must appear before that transaction.
best regards,
Florian Pflug
implementation detail that shouldn't be observable. If we want to
allow people to use lazy de-TOAST-ing as an optimization tool, we
should provide an explicit way to do so, e.g. by flagging variables
in pl/pgsql as REFERENCE or something like that.
best regards,
Florian Pflug
WHERE id=1;
W1: SELECT count FROM t WHERE id=2;
W2: SET enable_seqscan=off;
W2: START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
W2: UPDATE t SET count=count+1 WHERE id=2;
W2: COMMIT;
R : START TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY;
R : SELECT 1;
W1: COMMIT;
R : SELECT data FROM t WHERE i
apshot and wait again. Thus, once the START TRANSACTION
with the DEFERRABLE flag has committed, you can be sure that the transaction
won't later be aborted due to a serialization error.
BTW, since this is a question about how to use postgres rather than
how to extend it, it actually belongs on
On Dec26, 2013, at 21:30 , Florian Pflug wrote:
> On Dec23, 2013, at 18:39 , Peter Eisentraut wrote:
>> On 12/19/13, 6:40 PM, Florian Pflug wrote:
>>> The following example fails for XMLOPTION set to DOCUMENT as well as for
>>> XMLOPTION set to CONTENT.
>>>
On Dec23, 2013, at 18:39 , Peter Eisentraut wrote:
> On 12/19/13, 6:40 PM, Florian Pflug wrote:
>> The following example fails for XMLOPTION set to DOCUMENT as well as for
>> XMLOPTION set to CONTENT.
>>
>> select xmlconcat(
>>xmlparse(document ']>&
On Dec23, 2013, at 03:45 , Robert Haas wrote:
> On Fri, Dec 20, 2013 at 8:16 PM, Florian Pflug wrote:
>> On Dec20, 2013, at 18:52 , Robert Haas wrote:
>>> On Thu, Dec 19, 2013 at 6:40 PM, Florian Pflug wrote:
>>>> Solving this seems a bit messy, unfortunat
On Dec20, 2013, at 18:52 , Robert Haas wrote:
> On Thu, Dec 19, 2013 at 6:40 PM, Florian Pflug wrote:
>> Solving this seems a bit messy, unfortunately. First, I think we need to
>> have some XMLOPTION value which is a superset of all the others - otherwise,
>> dump
ly to be able to process such
a value. (1) matches how we currently handle XML declarations (), so
I'm slightly in favour of that.
Thoughts?
best regards,
Florian Pflug
d have to verify do what we would
have done had we already been in SERIALIZABLE mode when the modification
occurred. That means checking for SIREAD locks taken by other transactions,
on the tuple and all relevant index pages (plus all corresponding
coarser-grained entities like the tuple's page, the table, …).
best regards,
Florian Pflug
ent. I don't mean that
as an argument against changing the sampling method, just as something
to watch out for.
best regards,
Florian Pflug
ethod seems to exhibit the worst rather than the best properties of block-
and row-based sampling. What conclusions to draw from that, I'm not sure yet -
other than that if we move to block-based sampling, we'll certainly have to
change the way we estimate n_distinct.
best regards,
Florian Pflug
numbers, in contrast, only tells you that the AVERAGE of
n samples of a random variable will converge to the random variable's
expected value as n goes to infinity (there are different versions of the
law which guarantee different kinds of convergence, weak or strong).
best regards,
Florian Pflug
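The convergence of the sample average is easy to observe numerically. A quick illustration (the seed and sample size are arbitrary choices for a self-contained demo):

```python
import random

# Law of large numbers: the mean of n U(0,1) samples approaches the
# expected value 1/2 as n grows.
random.seed(0)
n = 100_000
mean = sum(random.random() for _ in range(n)) / n
assert abs(mean - 0.5) < 0.01   # within 1% of the expectation
```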
On Dec11, 2013, at 11:47 , Andres Freund wrote:
> On 2013-12-11 11:42:25 +0100, Florian Pflug wrote:
>> On Dec5, 2013, at 15:44 , Andres Freund wrote:
>>> There might be some ugly compiler dependent magic we could do. Depending
>>> on how we decide to declare offset
s less ugly than the
alternatives. But I'm afraid it is also unportable - typeof() is a GCC
extension, not a part of ANSI C, no?
best regards,
Florian Pflug
just like
ri_trigger.c's ri_PerformCheck() does.
best regards,
Florian Pflug
can be found here:
https://github.com/fgp/lockbench.
The different LWLock implementations live in the various pg_lwlock_* subfolders.
Here's a pointer into the relevant thread:
http://www.postgresql.org/message-id/651002c1-2ec1-4731-9b29-99217cb36...@phlo.org
best regards,
Florian Pflug
hough WITH RECURSIVE already was a *huge* improvement in this
area!). But storing each graph as a value, as a graph type would do, isn't
the way forward, IMHO.
best regards,
Florian Pflug
solved code issue that I know of is moving the compiler flags behind
> a configure check. I would greatly appreciate it if you could take a look at
> that. My config-fu is weak and it would take me some time to figure out how
> to do that.
Do we necessarily have to do that before beta?
with a GiST-based btree and extend it to support
a "distance" function. Looking at contrib/btree_gist and the built-in GiST
operator class for point types should help.
best regards,
Florian Pflug
hardware-accelerated CRC, reaching about 6 bytes/cycle
on modern Intel CPUs. This really is plenty fast - if I'm not mistaken, it
translates to well over 10 GB/s.
So overall -1 for removing the shift.
best regards,
Florian Pflug
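The back-of-the-envelope arithmetic behind "well over 10 GB/s" checks out; here it is spelled out, assuming a 2.5 GHz clock (the clock rate is my assumption, not from the thread):

```python
# Throughput of a 6 bytes/cycle checksum on an assumed 2.5 GHz core.
bytes_per_cycle = 6
clock_hz = 2.5e9
gb_per_s = bytes_per_cycle * clock_hz / 1e9
assert gb_per_s > 10   # 15 GB/s, comfortably "well over 10 GB/s"
```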
On Apr22, 2013, at 21:14 , Jeff Davis wrote:
> On Mon, 2013-04-22 at 20:04 +0200, Florian Pflug wrote:
>> The one downside of the fnv1+shift approach is that it's built around
>> the assumption that processing 64-bytes at once is the sweet spot. That
>> might be true for
bytes at once is the sweet spot. That
might be true for x86 and x86_64 today, but it won't stay that way for
long, and quite surely isn't true for other architectures. That doesn't
necessarily rule it out, but it certainly weakens the argument that
slipping it into 9.3 avoids having
On Apr19, 2013, at 14:46 , Martijn van Oosterhout wrote:
> On Wed, Apr 17, 2013 at 12:49:10PM +0200, Florian Pflug wrote:
>> Fixing this on the receive side alone seems quite messy and fragile.
>> So instead, I think we should let the master send a shutdown message
>> after i
On 18.04.2013, at 20:02, Ants Aasma wrote:
> On Thu, Apr 18, 2013 at 8:24 PM, Ants Aasma wrote:
>> On Thu, Apr 18, 2013 at 8:15 PM, Florian Pflug wrote:
>>> So either the CRC32-C polynomial isn't irreducible, or there's something
>>> fishy going on. Could there be
e that?
The third possibility is that I'm overlooking something, of course ;-)
Will think more about this tomorrow if time permits.
best regards,
Florian Pflug
s and blocks of all 1 bits".
>
> That is fairly easy to fix by using a different modulus: 251 vs 255.
At the expense of a drastic performance hit though, no? Modulus operations
aren't exactly cheap.
best regards,
Florian Pflug
s (e.g. those which swap the 64*i-th and the (64*i+1)-th
>> words), but those seem very unlikely to occur randomly. But if we're
>> worried about that, we could use your linear combination method for
>> the aggregation phase.
>
> I don't think it significantly reduc
's
a special CRC instruction, that is). And there's also a ton of stuff on
cryptographic hashing, but those are optimized for a completely different
use-case...
best regards,
Florian Pflug
On Apr17, 2013, at 18:15 , Bruce Momjian wrote:
> On Wed, Apr 17, 2013 at 05:28:06PM +0200, Florian Pflug wrote:
>> However, you're right that time's running out. It'd be a shame though
>> if we'd lock ourselves into CRC as the only available algorithm essential
> Any opinions if it would be a reasonable tradeoff to have a better
> checksum with great performance on latest x86 CPUs and good
> performance on other architectures at the expense of having only ok
> performance on older AMD CPUs?
The loss on AMD is offset by the increased performanc
ually
chance, that'd be far easier to pull off than actually supporting multiple
page layouts. If that works, then shipping 9.3 with CRC is probably
the best solution. If not, we should see to it that something like Ants
parallel version of FNV or a smallget into 9.3 if at all possible,
IMHO.
best
ld log a fat WARNING.
best regards,
Florian Pflug
On Apr16, 2013, at 23:41 , Ants Aasma wrote:
> On Tue, Apr 16, 2013 at 11:20 PM, Florian Pflug wrote:
>> On Apr13, 2013, at 17:14 , Ants Aasma wrote:
>>> Based on current analysis, it is particularly good at detecting single
>>> bit errors, as good at detecting burs
FNV hashing. It mentions a few rules for
picking suitable primes
http://www.isthe.com/chongo/tech/comp/fnv
best regards,
Florian Pflug
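For reference, the FNV-1a scheme under discussion is tiny. A sketch of the 64-bit variant, using the published offset basis and prime from the FNV page linked above:

```python
FNV64_OFFSET = 0xcbf29ce484222325
FNV64_PRIME = 0x100000001b3

def fnv1a_64(data: bytes) -> int:
    # FNV-1a: XOR the byte in first, then multiply by the prime,
    # keeping the state within 64 bits.
    h = FNV64_OFFSET
    for byte in data:
        h ^= byte
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

assert fnv1a_64(b"") == FNV64_OFFSET        # empty input yields the basis
assert fnv1a_64(b"foo") != fnv1a_64(b"bar")
```

The choice of prime matters for dispersion, which is why the reference page's rules for picking suitable primes are worth following rather than inventing constants.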
ereport(ERROR,
        (errmsg("could not send data to WAL stream: %s",
                PQerrorMessage(streamConn))));
Unless I'm missing something, that certainly seems to explain
how a standby can lag behind even after a controlled shutdown of
the master.
best regards,
Florian Pflug
--
Se
es of user tables
and indices are simply copied unchanged.
best regards,
Florian Pflug
ole of record N+1 and then
crash to cause a false positive).
best regards,
Florian Pflug
On Apr3, 2013, at 15:30 , Andrew Dunstan wrote:
> On 04/02/2013 02:46 PM, Florian Pflug wrote:
>> If we're going to break compatibility, we should IMHO get rid of
>> non-zero lower bounds all together. My guess is that the number of
>> affected users wouldn't be mu
d IMHO get rid of
non-zero lower bounds all together. My guess is that the number of
affected users wouldn't be much higher than for the proposed patch,
and it'd allow lossless mapping to most language's native array types…
best regards,
Florian Pflug
ly) a chance of 1 in 32768, i.e. the
strength of the checksum is reduced by one bit. That's still acceptable,
I'd say.
In practice, 0xDEAD may be a bad choice because of its widespread use
as an uninitialized marker for blocks of memory. A randomly picked value
would probably be a better choice.
On Jun28, 2012, at 17:29 , Tom Lane wrote:
> Kohei KaiGai writes:
>> 2012/6/27 Florian Pflug :
>>> Hm, what happens if a SECURITY DEFINER functions returns a refcursor?
>
>> My impression is, here is no matter even if SECURITY DEFINER function
>> returns refc
handle that today. If the executor is
responsible for permission checks, then wouldn't we apply the calling
function's privilege level in that case, at least if the cursor isn't
fetched from in the SECURITY DEFINER function? If I find some time,
I'll check...
best regards,
Florian Pflug
ecution time, that counts.
best regards,
Florian Pflug
nt direction. Since those types of compressions are
usually pretty easy to decompress, that reduces the amount of work
non-libpq clients have to put in to take advantage of compression.
best regards,
Florian Pflug
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make
ally that simple, at least if we want
to support compression without authentication or encryption, and don't
want to restrict ourselves to using OpenSSL forever. So unless we give
up at least one of those requirements, the arguments for using
SSL-compression are rather thin, I think.
best regards,
ons,
e.g. for every *lt() (less-than) there's a *gt() (greater-than).
best regards,
Florian Pflug
ich argument types swapped), and it should in turn
name the original operator as its COMMUTATOR.
best regards,
Florian Pflug
provided that the background
process ignores hint bits when fetching the old and new tuples.
best regards,
Florian Pflug
a bit harder, I fear. If a tuple is
updated multiple times by the same transaction, you cannot decide whether a
tuple was visible in a certain snapshot unless you have access to the updating
backend's ComboCID hash.
best regards,
Florian Pflug
On Jun21, 2012, at 11:55 , Peter Geoghegan wrote:
> On 21 June 2012 10:24, Florian Pflug wrote:
>> On Jun21, 2012, at 02:22 , Peter Geoghegan wrote:
>>> I've written a very small C++ program, which I've attached, that
>>> basically proves that this can still m
m() otherwise. But then it passes *false*
to determine the speed of strcoll(), and *true* to determine the speed
of strxfrm()...
Also, place_random_string() needs to insert a terminating zero byte,
otherwise set_test segfaults, at least on my OSX machine.
best regards,
Florian Pflug
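The invariant such a strcoll-vs-strxfrm benchmark relies on is that sorting with the strcoll() comparator and sorting by strxfrm() keys produce the same order. A small Python analogue via the locale module (the "C" locale is chosen only to keep the demo portable):

```python
import functools
import locale

# Sorting with the strcoll() comparator must match sorting by
# strxfrm() keys - that equivalence is what makes the strxfrm
# optimization valid in the first place.
locale.setlocale(locale.LC_COLLATE, "C")
words = ["pear", "Apple", "banana", "apple"]
by_strcoll = sorted(words, key=functools.cmp_to_key(locale.strcoll))
by_strxfrm = sorted(words, key=locale.strxfrm)
assert by_strcoll == by_strxfrm
assert by_strcoll == ["Apple", "apple", "banana", "pear"]  # C locale: byte order
```

The performance question in the thread is only about which of the two equivalent routes is faster for a given workload; mixing up the flags, as described above, benchmarks each function against the wrong baseline.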
agree that there's a use-case for having a textual type which
treats equivalent strings as equal (and which should then also treat
equivalent Unicode representations of the same character as equal). But
it should be a separate type, not bolted onto the existing textual types.
best regards,
Florian Pflug
On Jun20, 2012, at 22:40 , Marko Kreen wrote:
> On Wed, Jun 20, 2012 at 10:05 PM, Florian Pflug wrote:
>> I'm starting to think that relying on SSL/TLS for compression of
>> unencrypted connections might not be such a good idea after all. We'd
>> be using the pro
MD5
or SHA digest of every packet sent.
I'm starting to think that relying on SSL/TLS for compression of
unencrypted connections might not be such a good idea after all. We'd
be using the protocol in a way it quite clearly never was intended to
be used...
best regards,
Florian Pflug
On Jun20, 2012, at 17:34 , Tom Lane wrote:
> Florian Pflug writes:
>> I wonder though if we shouldn't restrict the allowed ciphers list to being
>> a simple list of supported ciphers. If our goal is to support multiple
>> SSL libraries transparently then surely having ope
see how it was done there, and most likely
> you'll find that at least one of the functions they use has no man
> page. Documentation isn't their strong point.
Yes, unfortunately.
I wonder though if we shouldn't restrict the allowed ciphers list to being
a simple list of suppo
phers" list? The DBA wouldn't necessarily be
aware that such a cipher even exists, since it could have been made
available by an openssl upgrade…
But if we restrict the negotiable ciphers to the configure list + NULL,
then we're good I think.
best regards,
Florian Pflug
On Jun15, 2012, at 12:09 , Magnus Hagander wrote:
> On Fri, Jun 15, 2012 at 5:52 PM, Florian Pflug wrote:
>> On Jun15, 2012, at 07:50 , Magnus Hagander wrote:
>>> Second, we also have things like the JDBC driver and the .Net driver
>>> that don't use libpq. the
Java Secure Socket Extension). The JSSE implementation included with
the Oracle JRE doesn't seem to support compression according to the
Wikipedia page quoted above. But chances are that there exists an
alternative implementation which does.
best regards,
Florian Pflug
higher, of course).
SSL NULL-cipher connections would be treated like unencrypted connections
when matching against pg_hba.conf.
best regards,
Florian Pflug
to use the
Debian-supplied libpq instead of your own.
Supporting sockets in multiple directories would solve that, once and for all.
best regards,
Florian Pflug
Passwords
are stored in the registry, but are only accessible to the LocalSystem
account [2]. Querying them from the postgres installer thus isn't really an
option. But what you could do, I guess, is to offer the user the ability to
change the password, and using the approach from [1] to update th
to me that you wouldn't actually
be able to do anything useful with the conserved space, since postgres
could re-claim it at any time. At which point it'd better be available,
or your whole cluster comes to a screeching halt...
best regards,
Florian Pflug
On Jun7, 2012, at 12:08 , Sandro Santilli wrote:
> On Thu, Jun 07, 2012 at 12:00:09PM +0200, Florian Pflug wrote:
>> On Jun7, 2012, at 10:20 , Sandro Santilli wrote:
>
>>> In that case I can understand Tom's advice about providing a callback,
>>> and then I wou
On Jun7, 2012, at 10:20 , Sandro Santilli wrote:
> On Sat, May 26, 2012 at 01:24:26PM +0200, Florian Pflug wrote:
>> On May26, 2012, at 12:40 , Simon Riggs wrote:
>>> On 25 May 2012 17:34, Tom Lane wrote:
>>>> I assume that the geos::util::Interrupt::request() c
rd-coded? What are
> your opinions?
If we're going to have this at all, we should go all the way and support an
arbitrary number of sockets. But then, is there any advantage in providing this
feature natively compared to simply creating symlinks?
best regards,
Florian Pflug
On Jun5, 2012, at 22:33 , Kohei KaiGai wrote:
> 2012/6/5 Florian Pflug :
>> I can live with any behaviour, as long as it doesn't depends on details
>> of the query plan. My vote would be for always using the role which was
>> active at statement creation time (i.e. at
On Jun5, 2012, at 15:07 , Kohei KaiGai wrote:
> 2012/6/5 Florian Pflug :
>> On Jun5, 2012, at 11:43 , Kohei KaiGai wrote:
>>> I think it does not require to add a mechanism to invalidate
>>> prepared-statement, because all the checks are applied on
>>> execut
On Jun5, 2012, at 11:43 , Kohei KaiGai wrote:
> 2012/6/5 Florian Pflug :
>> What's to be gained by that? Once there's *any* way to bypass a RLS
>> policy, you'll have to deal with the plan invalidation issues you
>> mentioned anyway. ISTM that complexity-wise,
users to bypass RLS policy.
What's to be gained by that? Once there's *any* way to bypass a RLS
policy, you'll have to deal with the plan invalidation issues you
mentioned anyway. ISTM that complexity-wise, you don't save much by not
having RLSBYPASS (or something similar), but
On Jun4, 2012, at 18:38 , Kohei KaiGai wrote:
> 2012/6/4 Florian Pflug :
>> Without something like RLSBYPASS, the DBA needs to have intimate
>> knowledge about the different RLS policies to e.g. guarantee that his
>> backups aren't missing crucial information, or that
t team for a specific application.
best regards,
Florian Pflug
On Jun1, 2012, at 21:07 , Robert Haas wrote:
> On Fri, Jun 1, 2012 at 2:54 PM, Florian Pflug wrote:
>> On Jun1, 2012, at 19:51 , Robert Haas wrote:
>>> On Fri, Jun 1, 2012 at 8:47 AM, Florian Pflug wrote:
>>>> We'd drain the unpin queue whenever we don
On Jun1, 2012, at 19:51 , Robert Haas wrote:
> On Fri, Jun 1, 2012 at 8:47 AM, Florian Pflug wrote:
>> A simpler idea would be to collapse UnpinBuffer() / PinBuffer() pairs
>> by queing UnpinBuffer() requests for a while before actually updating
>> shared state.
>
on't drop the pin in the first place if we know we're
> likely to touch that buffer again soon. btree root pages might be an
> exception, but I'm not even convinced of that one.
But Sergey's use-case pretty convincingly shows that, more generally,
inner sides of a nested loo
g PinBuffer() multiple times for multiple overlapping
pins of a single buffer by a single backend. The strategy above would extend
that to not-quite-overlapping pins.
best regards,
Florian Pflug
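The queueing idea described above can be sketched as follows. All names are hypothetical; the real PinBuffer()/UnpinBuffer() manipulate shared buffer headers, which is exactly the traffic a collapsed pair would avoid, and a real implementation would also need to drain the queue when the backend goes idle:

```python
class LocalPinCache:
    # Backend-local queue of deferred unpins: a re-pin of a queued
    # buffer cancels the pending unpin without touching shared state.
    def __init__(self, limit=8):
        self.limit = limit
        self.deferred = []        # buffers whose unpin is queued
        self.shared_traffic = 0   # counts real touches of shared state

    def pin(self, buf):
        if buf in self.deferred:
            self.deferred.remove(buf)   # collapsed unpin/pin pair
        else:
            self.shared_traffic += 1    # a real PinBuffer()

    def unpin(self, buf):
        self.deferred.append(buf)
        if len(self.deferred) > self.limit:
            self._drain()

    def _drain(self):
        self.shared_traffic += len(self.deferred)  # real UnpinBuffer()s
        self.deferred.clear()


cache = LocalPinCache()
cache.pin(1)     # first pin: hits shared state
cache.unpin(1)   # deferred, not yet visible to other backends
cache.pin(1)     # collapses with the queued unpin: no shared access
assert cache.shared_traffic == 1
```

The trade-off the thread wrestles with is visible even in the sketch: deferring the unpin keeps the buffer pinned longer than strictly necessary, which matters for anything (like buffer eviction) that waits on the pin count.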
On May31, 2012, at 02:26 , Sergey Koposov wrote:
> On Thu, 31 May 2012, Florian Pflug wrote:
>> Wait, so performance *increased* by spreading the backends out over as many
>> dies as possible, not by using as few as possible? That'd be exactly the
>> opposite of wh
On May31, 2012, at 01:16 , Sergey Koposov wrote:
> On Wed, 30 May 2012, Florian Pflug wrote:
>>
>> I wonder if the huge variance could be caused by non-uniform synchronization
>> costs across different cores. That's not all that unlikely, because at least
>> so
whatever indexes are defined on the
table. If one of the inserted fields is larger than the TOAST threshold, you'll
also get a separate record for the TOAST-table insertion, and the main tuple
will only contain references to the chunks in the TOAST table.
best regards,
Florian Pflug
child processes; that should then spread
your backends out over exactly the cores you specify.
best regards,
Florian Pflug