On 07.10.2010 06:39, Robert Haas wrote:
On Tue, Oct 5, 2010 at 3:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Right, *column* filtering seems easy and entirely secure. The angst
here is about row filtering. Can we have a view in which users can see
the values of a column for some rows, with
2010/10/7 Robert Haas robertmh...@gmail.com:
On Mon, Oct 4, 2010 at 2:52 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
I think you can remove scrollable cursor support from the
plpgsql ToDo. Scrollable cursors are supported, and the supported
syntax is the same as in the core SQL language.
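For reference, the scrollable-cursor support being discussed can be sketched roughly as below (a hypothetical minimal example, not taken from the thread; the function and cursor names are made up):

```sql
-- Minimal sketch of a scrollable cursor in PL/pgSQL, using the same
-- SCROLL / FETCH direction syntax as the core SQL language.
CREATE OR REPLACE FUNCTION last_then_first() RETURNS void AS $$
DECLARE
    c SCROLL CURSOR FOR SELECT i FROM generate_series(1, 10) AS t(i);
    v integer;
BEGIN
    OPEN c;
    FETCH LAST FROM c INTO v;    -- scroll forward to the end
    RAISE NOTICE 'last = %', v;
    FETCH FIRST FROM c INTO v;   -- scroll back to the start
    RAISE NOTICE 'first = %', v;
    CLOSE c;
END;
$$ LANGUAGE plpgsql;
```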
I
Robert Haas wrote:
Greg, have you run into any other evidence suggesting a problem with 2.6.32?
I haven't actually checked myself yet. Right now the only distribution
shipping 2.6.32 usefully is Ubuntu 10.04, which I can't recommend anyone
use on a server because their release schedules
Any feedback from TGL and Heikki, then?
2010/7/29 Joshua D. Drake j...@commandprompt.com:
On Thu, 2010-07-29 at 19:52 +0200, Vincenzo Romano wrote:
2010/7/29 Joshua D. Drake j...@commandprompt.com:
On Thu, 2010-07-29 at 19:34 +0200, Vincenzo Romano wrote:
I expect that a more complex
On 07.10.2010 10:09, Vincenzo Romano wrote:
Any feedback from TGL and Heikki, then?
I don't have anything to add to what others said already. Your best
advice is to test it yourself.
I would expect the plan time to be linear relative to the number of
partial indexes or child tables
On Wed, 2010-10-06 at 10:57 -0700, Josh Berkus wrote:
I also strongly believe that we should get single-standby
functionality committed and tested *first*, before working further on
multi-standby.
Yes, let's get k = 1 first.
With k = 1 the number of standbys is not limited, so we can still
On Wed, 2010-10-06 at 10:57 -0700, Josh Berkus wrote:
(2), (3) Degradation: (Jeff) these two cases make sense only if we
give
DBAs the tools they need to monitor which standbys are falling behind,
and to drop and replace those standbys. Otherwise we risk giving DBAs
false confidence that
On Thu, 2010-10-07 at 10:28 +0300, Heikki Linnakangas wrote:
constraint exclusion of CHECK constraints on the partitions is
exponential
Constraint exclusion is linear with respect to number of partitions.
Why do you say exponential?
--
Simon Riggs www.2ndQuadrant.com
PostgreSQL
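For context, the constraint exclusion being discussed operates on setups like the following (a hypothetical minimal example; table and column names are made up). The planner proves each child's CHECK constraint against the query predicate once, which is why the cost is linear in the number of partitions:

```sql
-- Hypothetical inheritance-based partitioning with CHECK constraints.
CREATE TABLE measurement (logdate date, value int);
CREATE TABLE measurement_2010_09 (
    CHECK (logdate >= '2010-09-01' AND logdate < '2010-10-01')
) INHERITS (measurement);
CREATE TABLE measurement_2010_10 (
    CHECK (logdate >= '2010-10-01' AND logdate < '2010-11-01')
) INHERITS (measurement);

SET constraint_exclusion = partition;
-- The planner checks each child's CHECK once against the WHERE clause;
-- here measurement_2010_09 is excluded and only the October child is scanned.
EXPLAIN SELECT * FROM measurement WHERE logdate = '2010-10-07';
```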
On Tue, 2010-10-05 at 22:28 -0400, Joachim Wieland wrote:
This is exactly what I am seeing here. However, contrary to your case,
the patch makes it even worse on my side. With the patch compiled in and a
pg_hba.conf entry of localhost, I cannot connect anymore to -h
localhost, I get no
On 07.10.2010 10:41, Simon Riggs wrote:
On Thu, 2010-10-07 at 10:28 +0300, Heikki Linnakangas wrote:
constraint exclusion of CHECK constraints on the partitions is
exponential
Constraint exclusion is linear with respect to number of partitions.
Why do you say exponential?
For some reason I
On 10/06/2010 10:01 PM, Simon Riggs wrote:
The code to implement your desired option is
more complex and really should come later.
I'm sorry, but I think of that exactly the opposite way. The timeout for
automatic continuation after waiting for a standby is the addition. The
wait state of the
Markus Wanner mar...@bluegap.ch writes:
I'm just saying that this should be an option, not the only choice.
I'm sorry, I just don't see the use case for a mode that drops
guarantees when they are most needed. People who don't need those
guarantees should definitely go for async replication
On Wed, Oct 6, 2010 at 4:06 PM, Simon Riggs si...@2ndquadrant.com wrote:
The problem is how much WAL is stored on (any) node. Currently that is
wal_keep_segments, which doesn't work very well, but I've seen no better
ideas that cover all important cases.
What about allowing the master to read
On 07.10.2010 12:52, Dimitri Fontaine wrote:
Markus Wanner mar...@bluegap.ch writes:
I'm just saying that this should be an option, not the only choice.
I'm sorry, I just don't see the use case for a mode that drops
guarantees when they are most needed. People who don't need those
guarantees
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
Either that, or you configure your system for asynchronous replication
first, and flip the switch to synchronous only after the standby has caught
up. Setting up the first standby happens only once when you initially set up
the
On Thu, 2010-10-07 at 11:46 +0200, Markus Wanner wrote:
On 10/06/2010 10:01 PM, Simon Riggs wrote:
The code to implement your desired option is
more complex and really should come later.
I'm sorry, but I think of that exactly the opposite way.
I see why you say that. Dimitri's suggestion
On Tue, 05 Oct 2010 14:59:27 -0400
Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I wonder if we
should focus on first efforts on really simple cases like CSV files
(as you mentioned) and perhaps something like memcached, which has
different properties than a
On Thu, Oct 7, 2010 at 2:38 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
2010/10/7 Robert Haas robertmh...@gmail.com:
On Mon, Oct 4, 2010 at 2:52 AM, Pavel Stehule pavel.steh...@gmail.com
wrote:
I think you can remove scrollable cursor support from the
plpgsql ToDo. Scrollable
2010/10/7 Robert Haas robertmh...@gmail.com:
On Thu, Oct 7, 2010 at 2:38 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
2010/10/7 Robert Haas robertmh...@gmail.com:
On Mon, Oct 4, 2010 at 2:52 AM, Pavel Stehule pavel.steh...@gmail.com
wrote:
I think you can remove a scrollable
On 10-10-07 05:52 AM, Fujii Masao wrote:
On Wed, Oct 6, 2010 at 4:06 PM, Simon Riggs si...@2ndquadrant.com wrote:
The problem is how much WAL is stored on (any) node. Currently that is
wal_keep_segments, which doesn't work very well, but I've seen no better
ideas that cover all important cases.
On 10/07/2010 01:08 PM, Simon Riggs wrote:
Adding timeout is very little code. We can take that out of the patch if
that's an objection.
Okay. If you take it out, we are at the wait-forever option, right?
If not, I definitely don't understand how you envision things to happen.
I've been asking
On Thu, Oct 7, 2010 at 7:34 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
2010/10/7 Robert Haas robertmh...@gmail.com:
On Thu, Oct 7, 2010 at 2:38 AM, Pavel Stehule pavel.steh...@gmail.com
wrote:
2010/10/7 Robert Haas robertmh...@gmail.com:
On Mon, Oct 4, 2010 at 2:52 AM, Pavel Stehule
On Thu, Oct 7, 2010 at 3:30 AM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, let's get k = 1 first.
With k = 1 the number of standbys is not limited, so we can still have
very robust and highly available architectures. So we mean
first-acknowledgement-releases-waiters.
+1. I like the design
On Thu, Oct 7, 2010 at 2:02 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 07.10.2010 06:39, Robert Haas wrote:
On Tue, Oct 5, 2010 at 3:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Right, *column* filtering seems easy and entirely secure. The angst
here is about row
2010/10/7 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
On 07.10.2010 10:41, Simon Riggs wrote:
On Thu, 2010-10-07 at 10:28 +0300, Heikki Linnakangas wrote:
constraint exclusion of CHECK constraints on the partitions is
exponential
Constraint exclusion is linear with respect to
Hi Dimitri,
On 10/07/2010 12:32 PM, Dimitri Fontaine wrote:
Another one is to say that I want sync rep when the standby is
available, but I don't have the budget for more. So I prefer a good
alerting system and low-budget-no-guarantee when the standby is down,
that's my risk evaluation.
I
On Thu, Oct 7, 2010 at 8:46 PM, Steve Singer ssin...@ca.afilias.info wrote:
Then you have to deal with telling the archive how long it needs to keep WAL
segments because the master might ask for them back.
Yeah, it's not easy to determine how long we should keep the archived WAL files.
We need
* Robert Haas (robertmh...@gmail.com) wrote:
On Thu, Oct 7, 2010 at 2:02 AM, Heikki Linnakangas
Looks good. It gives the impression that you need to be able to create a
custom function to exploit, though. It would be good to mention that
internal functions can be used too, revoking access
On 09/23/2010 01:18 PM, Aidan Van Dyk wrote:
On Thu, Sep 23, 2010 at 11:49 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
On Thu, Sep 23, 2010 at 17:32, Andrew Dunstan and...@dunslane.net wrote:
Are we sure that's going to stop the DOS issue?
As long as
On 07.10.2010 16:10, Stephen Frost wrote:
Also, even if you can't create functions (due to lack of create
privileges on any schema), you could use DO clauses now.
There's no way to shoehorn a DO clause into a SELECT, you can't do:
SELECT data FROM view WHERE (DO $$ RAISE NOTICE argument; $$)
On Thu, Oct 7, 2010 at 9:10 AM, Stephen Frost sfr...@snowman.net wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
On Thu, Oct 7, 2010 at 2:02 AM, Heikki Linnakangas
Looks good. It gives the impression that you need to be able to create a
custom function to exploit, though. It would be
Markus Wanner mar...@bluegap.ch writes:
Why does one ever want the guarantee that sync replication gives to only
hold true up to one failure, if a better guarantee doesn't cost anything
extra? (Note that a good alerting system is impossible to achieve with
only two servers. You need a third
On Thu, Oct 7, 2010 at 9:08 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Thu, Oct 7, 2010 at 8:46 PM, Steve Singer ssin...@ca.afilias.info wrote:
Then you have to deal with telling the archive how long it needs to keep WAL
segments because the master might ask for them back.
Yeah, it's not
* Heikki Linnakangas (heikki.linnakan...@enterprisedb.com) wrote:
On 07.10.2010 16:10, Stephen Frost wrote:
Also, even if you can't create functions (due to lack of create
privileges on any schema), you could use DO clauses now.
There's no way to shoehorn a DO clause into a SELECT, you can't
* Robert Haas (robertmh...@gmail.com) wrote:
On Thu, Oct 7, 2010 at 9:10 AM, Stephen Frost sfr...@snowman.net wrote:
This might be overly pedantic, but I don't think 'tampering' gives the
right impression.
I'm open to suggestions.
Yeah, wasn't coming up with a better word myself. :/
On Thu, Oct 7, 2010 at 8:10 AM, Vincenzo Romano
vincenzo.rom...@notorand.it wrote:
Making these things sub-linear (whether O(log n) or even O(1)),
provided that there's a way to, would make this RDBMS more appealing
to enterprises.
I mean also partial indexes (as an alternative to table
On Thu, Oct 7, 2010 at 6:32 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
Or if the standby is lagging and the master wal_keep_segments is not
sized big enough. Is that a catastrophic loss of the standby too?
Sure, but that lagged standby is already asynchronous, not
synchronous. If it
On Thu, Oct 7, 2010 at 15:16, Andrew Dunstan and...@dunslane.net wrote:
On 09/23/2010 01:18 PM, Aidan Van Dyk wrote:
On Thu, Sep 23, 2010 at 11:49 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
On Thu, Sep 23, 2010 at 17:32, Andrew
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
On 07.10.2010 10:41, Simon Riggs wrote:
Constraint exclusion is linear with respect to number of partitions.
Why do you say exponential?
For some reason I thought the planner needs to check the constraints of
the partitions
2010/10/7 Robert Haas robertmh...@gmail.com:
Well, you can't just arbitrarily turn an O(n) algorithm into an O(lg n)
That's trivially true. I was not asking for the recipe to do it.
algorithm. I think the most promising approach to scaling to large
numbers of partitions is the patch that
On 10/07/2010 09:44 AM, Magnus Hagander wrote:
On Thu, Oct 7, 2010 at 15:16, Andrew Dunstan and...@dunslane.net wrote:
On 09/23/2010 01:18 PM, Aidan Van Dyk wrote:
On Thu, Sep 23, 2010 at 11:49 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
On Thu,
Aidan Van Dyk ai...@highrise.ca writes:
Sure, but that lagged standby is already asynchronous, not
synchronous. If it was synchronous, it would have slowed the master
down enough it would not be lagged.
Agreed, except in the case of a joining standby. But you're saying it
better than I do:
On Thu, Oct 7, 2010 at 16:07, Andrew Dunstan and...@dunslane.net wrote:
On 10/07/2010 09:44 AM, Magnus Hagander wrote:
On Thu, Oct 7, 2010 at 15:16, Andrew Dunstan and...@dunslane.net wrote:
On 09/23/2010 01:18 PM, Aidan Van Dyk wrote:
On Thu, Sep 23, 2010 at 11:49 AM, Tom
2010/10/7 Tom Lane t...@sss.pgh.pa.us:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
On 07.10.2010 10:41, Simon Riggs wrote:
Constraint exclusion is linear with respect to number of partitions.
Why do you say exponential?
For some reason I thought the planner needs to check
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
I see the main problem in the way the planner understands which partition
is useful and which one is not.
Having the DDL supporting the feature could just be syntactic sugar
if the underlying mechanism is inadequate.
I'm pretty sure the
On Thu, Oct 7, 2010 at 10:08 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
Aidan Van Dyk ai...@highrise.ca writes:
Sure, but that lagged standby is already asynchronous, not
synchronous. If it was synchronous, it would have slowed the master
down enough it would not be lagged.
Agreed,
Vincenzo Romano wrote:
I see the main problem in the way the planner understands which partition
is useful and which one is not.
Having the DDL supporting the feature could just be syntactic sugar
if the underlying mechanism is inadequate.
You have the order of this backwards. In order to
2010/10/7 Stephen Frost sfr...@snowman.net:
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
I see the main problem in the way the planner understands which partition
is useful and which one is not.
Having the DDL supporting the feature could just be syntactic sugar
if the underlying
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
Which kind of information are you thinking about?
I think that the stuff you put into the CHECK condition for the table
will say it all.
The problem is that CHECK conditions can contain just about anything,
hence the planner needs to deal
2010/10/7 Greg Smith g...@2ndquadrant.com:
Vincenzo Romano wrote:
I see the main problem in the way the planner understands which
partition
is useful and which one is not.
Having the DDL supporting the feature could just be syntactic sugar
if the underlying mechanism is inadequate.
You
Excerpts from Vincenzo Romano's message of Thu Oct 07 10:44:34 -0400 2010:
Do you mean the check constraint is used as plain text to be (somehow)
executed?
If this is the case, then you (all) are perfectly and obviously right
and I'm just fishing
for bicycles in the sea.
Yeah, hence this
2010/10/7 Alvaro Herrera alvhe...@commandprompt.com:
Excerpts from Vincenzo Romano's message of Thu Oct 07 10:44:34 -0400 2010:
Do you mean the check constraint is used as plain text to be (somehow)
executed?
If this is the case, then you (all) are perfectly and obviously right
and I'm just
2010/10/7 Stephen Frost sfr...@snowman.net:
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
Which kind of information are you thinking about?
I think that the stuff you put into the CHECK condition for the table
will say it all.
The problem is that CHECK conditions can contain just
On Tue, Oct 5, 2010 at 8:11 AM, Peter Eisentraut pete...@gmx.net wrote:
On Mon, 2010-10-04 at 23:41 -0400, Robert Haas wrote:
Well, it's not really useful, but that's how it works everywhere. On
Linux, fsync carries the stuff from the kernel's RAM to the disk
controller's RAM, and then it
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
I would expect a parser to ... ehm ... parse the CHECK constraint
expression at CREATE TABLE time and
extract all the needed high quality metadata, like the list of
columns involved and the type of
checks (range, value list, etc.).
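The two shapes of check mentioned here (range, value list) look roughly like the following (a hypothetical example; table and column names are made up). Today the planner keeps such conditions as general expressions and must prove exclusion from them at plan time, rather than reading pre-digested metadata:

```sql
-- Hypothetical CHECK constraints of the two kinds discussed:
-- a range check and a value-list check. The planner sees these as
-- arbitrary boolean expressions, not as structured partition bounds.
CREATE TABLE events_q4 (
    ts     timestamptz CHECK (ts >= '2010-10-01' AND ts < '2011-01-01'), -- range
    status text        CHECK (status IN ('new', 'open', 'closed'))       -- value list
);
```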
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
2010/10/7 Stephen Frost sfr...@snowman.net:
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
The problem is that CHECK conditions can contain just about anything,
hence the planner needs to deal with that possibility.
Not
On Wed, Oct 6, 2010 at 5:21 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Robert Haas's message of Wed Oct 06 17:02:22 -0400 2010:
2010/10/5 KaiGai Kohei kai...@ak.jp.nec.com:
However, we also have a few headache cases.
DefineType() creates a new type object and its
2010/10/7 Stephen Frost sfr...@snowman.net:
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
2010/10/7 Stephen Frost sfr...@snowman.net:
* Vincenzo Romano (vincenzo.rom...@notorand
.it) wrote:
The problem is that CHECK conditions can contain just about anything,
hence the planner
2010/10/7 Stephen Frost sfr...@snowman.net:
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
I would expect a parser to ... ehm ... parse the CHECK constraint
expression at CREATE TABLE time and
extract all the needed high quality metadata, like the list of
columns involved and the
Vincenzo Romano vincenzo.rom...@notorand.it wrote:
2010/10/7 Stephen Frost sfr...@snowman.net:
Yes, that would be the problem. Proving something based on
expressions is a lot more time-consuming and complicated than
being explicitly told what goes where.
Consuming computing resources at
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
So, what'd be the right approach in your vision?
Have you read http://wiki.postgresql.org/wiki/Table_partitioning and the
various places it links to..?
I mean, if you think about partitioning a-la Oracle, then you'll have to
parse those
Aidan Van Dyk ai...@highrise.ca writes:
*shrug* The joining standby is still asynchronous at this point.
It's not synchronous replication. It's just another ^k of the N
slaves serving stale data ;-)
Agreed *here*, but if you read the threads again, you'll see that's not
at all what's been
Robert Haas wrote:
Proposed doc patch attached.
Looks accurate to me. I like the additional linking to the Reliability
page you put in there too. Heavily referencing that important page from
related areas is a good thing, particularly now that it's got a lot more
details than it used
Robert Haas robertmh...@gmail.com writes:
Proposed doc patch attached.
discusesed? Otherwise +1
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
On Thu, Oct 7, 2010 at 11:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Proposed doc patch attached.
discusesed? Otherwise +1
Woops, thanks. Committed with that change. I back-patched it back to
8.3, which is as far as it applied with only minor
On Thu, Oct 7, 2010 at 11:45 AM, Greg Smith g...@2ndquadrant.com wrote:
Robert Haas wrote:
Proposed doc patch attached.
Looks accurate to me. I like the additional linking to the Reliability page
you put in there too. Heavily referencing that important page from related
areas is a good
Markus Wanner wrote:
I think that's a pretty special case, because the good alerting system
is at least as expensive as another server that just persistently stores
and ACKs incoming WAL.
The cost of hardware capable of running a database server is a large
multiple of what you can build an
2010/10/7 Stephen Frost sfr...@snowman.net:
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
So, what'd be the right approach in your vision?
Have you read http://wiki.postgresql.org/wiki/Table_partitioning and the
various places it links to..?
I mean, if you think about partitioning
On Oct 7, 2010, at 12:26 PM, Robert Haas wrote:
On Thu, Oct 7, 2010 at 11:45 AM, Greg Smith g...@2ndquadrant.com wrote:
Robert Haas wrote:
Proposed doc patch attached.
Looks accurate to me. I like the additional linking to the Reliability page
you put in there too. Heavily referencing
On 10/7/10 6:41 AM, Aidan Van Dyk wrote:
I'm really confused with all these k < N scenarios I see bandied
about, because all it really amounts to is: I only want *one*
synchronous replication, and a bunch of asynchronous replications.
And a bit of chance thrown in the mix to hope the synchronous
On 06.10.2010 19:26, Greg Smith wrote:
Now, the more relevant question, what I actually need in order for a
Sync Rep feature in 9.1 to be useful to the people who want it most I
talk to. That would be a simple to configure setup where I list a subset
of important nodes, and the appropriate
On 10/7/10, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
On 06.10.2010 19:26, Greg Smith wrote:
Now, the more relevant question, what I actually need in order for a
Sync Rep feature in 9.1 to be useful to the people who want it most I
talk to. That would be a simple to
On Thu, Oct 7, 2010 at 1:22 PM, Josh Berkus j...@agliodbs.com wrote:
So if you have k = 3 and N = 10, then you can have 10 standbys and only
3 of them need to ack any specific commit for the master to proceed. As
long as (a) you retain at least one of the 3 which ack'd, and (b) you
have some
On 10/7/10 10:27 AM, Heikki Linnakangas wrote:
The standby name is a GUC in the standby's configuration file:
standby_name='bostonserver'
The list of important nodes is also a GUC, in the master's configuration
file:
synchronous_standbys='bostonserver, oxfordserver'
This seems to
If you want synchronous replication because you want query
availability while making sure you're not getting stale queries from
all your slaves, then using your k < N (k = 3 and N = 10) situation is
screwing yourself.
Correct. If that is your reason for synch standby, then you should be
using
On 10/07/2010 06:41 PM, Greg Smith wrote:
The cost of hardware capable of running a database server is a large
multiple of what you can build an alerting machine for.
You realize you don't need lots of disks nor RAM for a box that only
ACKs? A box with two SAS disks and a BBU isn't that
On Thu, Oct 7, 2010 at 1:39 PM, Dave Page dp...@pgadmin.org wrote:
On 10/7/10, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
On 06.10.2010 19:26, Greg Smith wrote:
Now, the more relevant question, what I actually need in order for a
Sync Rep feature in 9.1 to be useful to the
On Thu, Oct 7, 2010 at 1:45 PM, Josh Berkus j...@agliodbs.com wrote:
On 10/7/10 10:27 AM, Heikki Linnakangas wrote:
The standby name is a GUC in the standby's configuration file:
standby_name='bostonserver'
The list of important nodes is also a GUC, in the master's configuration
file:
But as a practical matter, I'm afraid the true cost of the better
guarantee you're suggesting here is additional code complexity that will
likely cause this feature to miss 9.1 altogether. As far as I'm
concerned, this whole diversion into the topic of quorum commit is only
consuming
Simon, Fujii,
What follows are what I see as the major issues with making two-server
synch replication work well. I would like to have you each answer them,
explaining how your patch and your design addresses each issue. I
believe this will go a long way towards helping the majority of the
On Thu, Oct 7, 2010 at 1:27 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Let me check that I got this right, and add some details to make it more
concrete: Each standby is given a name. It can be something like boston1
or testserver. It does *not* have to be unique across
Aidan Van Dyk ai...@highrise.ca wrote:
To get non-stale responses, you can only query those k=3
servers. But you've shot yourself in the foot because you don't
know which 3/10 those will be. The other 7 *are* stale (by
definition). They talk about picking the caught up slave when
the
On Thu, Oct 7, 2010 at 2:10 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Aidan Van Dyk ai...@highrise.ca wrote:
To get non-stale responses, you can only query those k=3
servers. But you've shot yourself in the foot because you don't
know which 3/10 those will be. The other 7 *are*
Robert Haas robertmh...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
With web applications, at least, you often don't care that the
data read is absolutely up-to-date, as long as the point in time
doesn't jump around from one request to the next. When we have
used
I think they work together fine. Greg's idea is that you list the
important standbys, and a synchronization guarantee that you'd like to
have for at least one of them. Simon's idea - at least at 10,000 feet
- is that you can take a pass on that guarantee for transactions that
don't need
On 10/07/2010 03:19 PM, Dimitri Fontaine wrote:
I think you're all into durability, and that's good. The extra cost is
service downtime
It's just *reduced* availability. That doesn't necessarily mean
downtime, if you combine cleverly with async replication.
if that's not what you're after:
On 10/07/2010 07:44 PM, Aidan Van Dyk wrote:
The only case I see a race to quorum type of k < N being useful is
if you're just trying to duplicate data everywhere, but not actually
querying any of the replicas. I can see that all queries go to the
master, but the chances are pretty high the
Hello,
today I found a few bugs:
a) the parser allows labels in invalid positions, with a strange runtime bug:
postgres=# CREATE OR REPLACE FUNCTION foo()
RETURNS void AS $$
BEGIN
  FOR i IN 1..2
  invalidLabel
  LOOP
    RAISE NOTICE '%', i;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION
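For comparison, the position where a label is actually valid in PL/pgSQL can be sketched as follows (a minimal hypothetical example): the label goes before the FOR, in doubled angle brackets, not between the FOR clause and LOOP:

```sql
-- Minimal sketch of valid label placement: <<label>> precedes the loop,
-- and may optionally be repeated after END LOOP.
CREATE OR REPLACE FUNCTION foo() RETURNS void AS $$
BEGIN
    <<valid_label>>
    FOR i IN 1..2 LOOP
        RAISE NOTICE '%', i;
    END LOOP valid_label;
END;
$$ LANGUAGE plpgsql;
```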
On Thu, Oct 7, 2010 at 2:33 PM, Josh Berkus j...@agliodbs.com wrote:
I think they work together fine. Greg's idea is that you list the
important standbys, and a synchronization guarantee that you'd like to
have for at least one of them. Simon's idea - at least at 10,000 feet
- is that you
On Thu, Oct 7, 2010 at 2:31 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas robertmh...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
With web applications, at least, you often don't care that the
data read is absolutely up-to-date, as long as the point
Markus Wanner mar...@bluegap.ch writes:
I don't buy that. The risk calculation gets a lot simpler and obvious
with strict guarantees.
Ok, I'm lost in the use cases and analysis.
I still don't understand why you want to consider the system already
synchronous when it's not, whatever is the
Robert Haas robertmh...@gmail.com wrote:
Establishing an affinity between a session and one of the database
servers will only help if the traffic is strictly read-only.
Thanks; I now see your point.
In our environment, that's pretty common. Our most heavily used web
app (the one for which
On 10/07/2010 10:11 AM, Magnus Hagander wrote:
OTOH, this patch seems pretty small and simple to maintain.
True, it is rather small.
Does anybody know if there's an automated way to maintain that on
freebsd ports, and if so, how that works? I want to be *sure* we can't
accidentally upgrade
Firstly I want to say I think this discussion is over-looking some
benefits of the current system in other use cases. I don't think we
should get rid of the current system even once we have proper
partitioning. It solves use cases such as data warehouse queries that
need to do a full table scan of
On Thu, Oct 7, 2010 at 21:31, Andrew Dunstan and...@dunslane.net wrote:
On 10/07/2010 10:11 AM, Magnus Hagander wrote:
OTOH, this patch seems pretty small and simple to maintain.
True, it is rather small.
Does anybody know if there's an automated way to maintain that on
freebsd ports,
All,
In my effort to make the discussion around the design decisions of synch
rep less opaque, I'm starting a separate thread about what has developed
to be one of the more contentious issues.
I'm going to champion timeouts because I plan to use them. In fact, I
plan to deploy synch rep with a
On Thu, Oct 7, 2010 at 10:27 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
The standby name is a GUC in the standby's configuration file:
standby_name='bostonserver'
Fwiw I was hoping it would be possible to set every machine up with an
identical postgresql.conf file. That
On Thu, 2010-10-07 at 14:10 +0200, Vincenzo Romano wrote:
Making these things sub-linear (whether not O(log n) or even O(1) ),
provided that there's way to, would make this RDBMS more appealing
to enterprises.
I mean also partial indexes (as an alternative to table partitioning).
Being able
On Thu, 2010-10-07 at 13:44 -0400, Aidan Van Dyk wrote:
To get non-stale responses, you can only query those k=3 servers.
But you've shot yourself in the foot because you don't know which
3/10 those will be. The other 7 *are* stale (by definition). They
talk about picking the caught up
On Thu, 2010-10-07 at 19:50 +0200, Markus Wanner wrote:
So far I've been under the impression that Simon already has the code
for quorum_commit k = 1.
I do, but it's not a parameter. The k = 1 behaviour is hardcoded and
considerably simplifies the design. Moving to k > 1 is additional work,