Hello Andrew,
Andrew Sullivan wrote:
Yes. And silent as ever. :-)
Are the slides of your PgCon talk available for download somewhere?
BTW: up until recently, there was yet another mailing list:
[EMAIL PROTECTED] It was less focused on hooks
and got at least some traffic. :-) Are those
Hi Jeff,
Jeff Cohen wrote:
If you don't define a default partition to handle outliers, the
insert should fail with an error.
IMO, you should always have a default partition, then, so as not to
violate the constraints (by rejecting tuples which are correct according
to the constraints).
Hi,
Hannu Krosing wrote:
I guess it would go to some default partition ?
Which doesn't have a name so far, which prevents us from addressing that
partition. Nor is it well defined; it's just the rest.
sure, but this can become really tedious for 1024 partitions,
Well, managing 1024
Hi,
Hans-Juergen Schoenig wrote:
What do you need so many partitions for?
having so many tables is not funny but it can be the only reasonable choice.
Well, what do you do with all those partitions? Most of them will end up
on the same storage subsystem. So, if you don't partition to
Hi,
Tom Lane wrote:
I don't agree with that at all. I can imagine plenty of situations
where a tuple falling outside the range of available partitions *should*
be treated as an error. For instance, consider timestamped observations
--- data in the future is certainly bogus, and data further
Hi,
Gregory Stark wrote:
In a previous life I had a database which had daily partitions. I assure you
it was unquestionably the right decision. Each day's data wasn't just
distinguished by the timestamp but actually was entirely separate from the
previous day's data. Both the archiving strategy
Hi,
Tom Lane wrote:
DBAs tend to be belt *and* suspenders guys, no?
I rather know those admins with stupid-looking faces who are wondering
why their transactions fail. Often enough, that can have a lot of
different reasons. Extending the set of possible traps doesn't seem like
a clever
Hi,
Zeugswetter Andreas ADI SD wrote:
Yes, but the problem with the timestamp-partitioned tables is that the
window is sliding. Thus you would need two alter tables for each new
period. One that changes the constraint + one that creates the new
partition. So it seems natural to join the two
Hi,
(sorry for the previous one, if delivered; it went off too early...)
Zeugswetter Andreas ADI SD wrote:
Yes, but the problem with the timestamp-partitioned tables is that the
window is sliding. Thus you would need two alter tables for each new
period. One that changes the constraint + one
Hi,
Jeff Cohen wrote:
We did look at allowing general functions for partitioning and this was
one concern. The other is that we want to enforce that a row only gets
inserted into a single partition, so we wanted a declarative syntax
where it was relatively easy to check that range and list
Hi,
Gokulakannan Somasundaram wrote:
I'm also not sure it really buys us anything over having a second
dead-space-map data structure. The WAL is much larger and serves other
purposes which would limit what we can do with it.
Ok. One obvious advantage is that it saves the contention
Hi,
Tom Lane wrote:
Well, one of the principal arguments for having VACUUM at all is that it
off-loads required maintenance effort from foreground transaction code
paths.
Off-loading doesn't mean we don't have to do the work, so it's obviously
a compromise.
AFAICT, having to write some
Hi,
Gokulakannan Somasundaram wrote:
But I am just thinking of creating the DSM
by reading through the WAL Logs, instead of asking the Inserts, updates
and deletes to do the DSM creation.
What's the advantage of that? What's wrong with collecting the
information for DSM at transaction
Hi,
Simon Riggs wrote:
On Wed, 2008-01-09 at 18:04 +0100, Markus Schiltknecht wrote:
What do you think about letting the database system know the split point
vs it having to find optimal split points automatically?
For me, managing the table's files can be separate from the chunking
Hi,
Gokulakannan Somasundaram wrote:
because of the contention. Am I missing something
here? While Vacuum is reading the DSM, operations may not be able to
update the bits. We need to put the DSM in shared memory, if all the
processes are going to update it, whereas if Vacuum is going to form
Hi,
Gokulakannan Somasundaram wrote:
If we can ask the Vacuum process to scan
the WAL log, it can get all the relevant details on where it needs to
go.
You seem to be assuming that only few tuples have changed between
vacuums, so that WAL could quickly guide the VACUUM processes to the
Hi,
I'm trying to run 'make check' on a 64bit Debian unstable. That aborts
after 60 seconds due to not being able to connect to the postmaster.
I figured that there's nothing wrong with the postmaster, rather psql
can't start up, because it gets linked against an older libpq.so.5. It
looks
Andrew Dunstan wrote:
Smells suspiciously like an rpath problem to me. What are your configure
settings?
Ah, yeah, I see. Using something else than --prefix=/usr helped.
Thanks for the hint!
Regards
Markus
---(end of broadcast)---
TIP 5:
Simon Riggs wrote:
Hmmm. I think it fits rather neatly with BitmapIndexScans. It would be
easy to apply the index condition and/or filters to see which segments
are excluded and then turn off bits in the bitmap appropriately.
Yeah, good point.
Not fully sure about IndexScans yet. I don't
Simon Riggs wrote:
I have to admit I always found it kludgy to have objects named
invoices_2000_JAN and invoices_2000_FEB and so on. It's kind of a meta-denormalization. But so is specifying WHERE clauses repeatedly.
The idea for using the WHERE clauses was to specifically avoid naming.
I
Hi,
Gregory Stark wrote:
That's an interesting thought. I think your caveats are right but with some
more work it might be possible to work it out. For example if a background
process processed the WAL and accumulated an array of possibly-dead tuples to
process in batch. It would wait whenever
Hi,
Simon Riggs wrote:
With that in mind, can I clarify what you're thinking, please?
Sure, I can try to clarify:
2) the things you've been discussing are essential requirements of
partitioning and we could never consider it complete until they are also
included and we must therefore talk
Hi,
Simon Riggs wrote:
When I delete all rows WHERE some_date &lt; 'cut-off date' on a segment
boundary value that would delete all segments that met the criteria. The
following VACUUM will then return those segments to be read-write, where
they can then be refilled with new incoming data. The only
Hi,
Andrew Chernow wrote:
It might be something with the
attachment, who knows.
Most probably that was the case, yes. The -hackers list is limited,
please use -patches to send patches. ;-)
Regards
Markus
Hi,
IMO, the lengthy discussion about Segment Exclusion and Segment
Visibility Maps has long turned into a discussion about partitioning in
general. I'm thankful for all the new insights it has brought me and I
want to continue sharing my view on things. What's following is highly
Hi Csaba,
Csaba Nagy wrote:
One additional thought: what about a kind of segment fill factor ?
Meaning: each segment has some free space reserved for future
updates/inserts of records in the same range of its partitioning
constraint. And when inserting/updating you put the new record into the
Hi,
Csaba Nagy wrote:
Sure, but it could be configurable and should only be enabled if the
table is marked as partitioned on some condition...
As I'm regarding SE as an optimization, I disagree here. Like all
optimizations, SE should conceptually be reasonably close to cost-less
when
Hi,
Andrew Sullivan wrote:
On Sat, Jan 05, 2008 at 08:02:41PM +0100, Markus Schiltknecht wrote:
Well, management of relations is easy enough, known to the DBA and most
importantly: it already exists. Having to set up something which is
*not* tied to a relation complicates things just because
Hi,
Andrew Sullivan wrote:
On Mon, Jan 07, 2008 at 07:16:35PM +0100, Markus Schiltknecht wrote:
Does anything speak against letting the DBA handle partitions as relations?
Yes: it doesn't solve the problem I have, which is that I don't want to have
to manage a whole bunch of tables. I want
Hi,
Gokulakannan Somasundaram wrote:
On Jan 5, 2008 6:15 PM, [EMAIL PROTECTED] wrote:
One thought I had back then, with partitioned tables was gee -- B-tree
index is already doing a partition; why do a manual partition on top of
that?.
Can you please
Hi,
Robert Treat wrote:
On Saturday 05 January 2008 14:02, Markus Schiltknecht wrote:
To satisfy all the different requirements of partitioning with segments
based partitioning, we'd have to allow a table to span multiple table
spaces. I'm not very keen on going that way.
Why?
Uh
Andrew Sullivan wrote:
On Fri, Jan 04, 2008 at 10:26:54PM +0100, Markus Schiltknecht wrote:
I'm still puzzled about how a DBA is expected to figure out which
segments to mark.
I think that part might be hand-wavy still. But once this facility is
there, what's to prevent the current active
Hi,
[EMAIL PROTECTED] wrote:
The main proposal deliberately has few, if any, knobs and dials. That's
a point of philosophy that I've had views on previously: my normal
stance is that we need some knobs to allow the database to be tuned to
individual circumstances.
One thought I had back
Hi,
Simon Riggs wrote:
On Fri, 2008-01-04 at 22:26 +0100, Markus Schiltknecht wrote:
I'm still puzzled about how a DBA is expected to figure out which
segments to mark. Simon, are you assuming we are going to pass on
segment numbers to the DBA one day?
No Way!
Ah, I'm glad ;-)
Simon
Hi,
Robert Treat wrote:
Personally I can't say it complicates things, because it isn't clear how it
will be managed. :-)
Well, management of relations is easy enough, known to the DBA and most
importantly: it already exists. Having to set up something which is
*not* tied to a relation
Hello Simon,
Simon Riggs wrote:
I've come
up with an alternative concept to allow us to discuss the particular
merits of each. ISTM that this new proposal has considerable potential.
Hm.. interesting idea.
If we were able to keep track of which sections of a table are now
read-only then we
Hi,
Simon Riggs wrote:
- any Fact table where measurements/observations/events are accumulated
e.g.
Web Hits (any Internet events)
Call Detail Records
Sales
Security Events
Scientific Measurements
Process Control
- any Major Entity where new entities are created from a sequence
e.g.
Orders,
Hi,
Simon Riggs wrote:
The smaller the partition size the greater the overhead of managing it.
Also I've been looking at read-only tables and compression, as you may
know. My idea was that in the future we could mark segments as either
- read-only
- compressed
- able to be shipped off to
Hi,
Simon Riggs wrote:
On Fri, 2008-01-04 at 13:06 -0500, Andrew Sullivan wrote:
On Fri, Jan 04, 2008 at 01:29:55PM +0100, Markus Schiltknecht wrote:
Agreed. Just a minor note: I find "marked read-only" too strong, as it
implies an impossibility to write. I propose speaking about mostly-read
Hi,
Alvaro Herrera wrote:
Simon Riggs wrote:
ISTM its just autovacuum launcher + Hot Standby mixed.
I don't think you need a launcher at all. Just get the postmaster to
start a configurable number of wal-replay processes (currently the
number is hardcoded to 1).
I also see similarity to
Hannu Krosing wrote:
until N fubbers used
..whatever a fubber is :-)
Nice typo!
Markus
Hello Hannu,
Hannu Krosing wrote:
(For parallelized queries, superuser privileges might appear wrong, but
I'm arguing that parallelizing the rights checking isn't worth the
trouble, so the initiating worker backend should do that and only
delegate safe jobs to helper backends. Or is that a
Hello Gregory,
Gregory Stark wrote:
Oracle is using Direct I/O so they need the reader and writer threads to avoid
blocking on i/o all the time. We count on the OS doing readahead and buffering
our writes so we don't have to. Direct I/O and needing some way to do
asynchronous writes and reads
Hi,
Josh Berkus wrote:
Here's the other VLDB features we're missing:
Parallel Query
Uh.. this only makes sense in a distributed database, no? I've thought
about parallel querying on top of Postgres-R. Does it make sense
implementing some form of parallel querying apart from the
Hi Josh,
Josh Berkus wrote:
Sure. Imagine you have a 5TB database on a machine with 8 cores and only one
concurrent user. You'd like to have 1 core doing I/O, and say 4-5 cores
dividing the scan and join processing into 4-5 chunks.
Ah, right, thanks for the enlightenment. Heck, I'm definitely
Hello Gregory,
Gregory Stark wrote:
It is kind of like a merge join but not quite. It's interleaving rows rather
than matching them up. It's more like the final merge of a sort which also
uses a heap to efficiently find the next value from the source tapes.
Well, maybe my point here is: why
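The "final merge of a sort" Gregory mentions is a k-way merge driven by a heap: one current row per source is kept on the heap, so the next output row is found in O(log k). A minimal Python sketch (invented function name, nothing to do with the executor code):

```python
import heapq

def merge_sorted_streams(*streams):
    """Interleave several already-sorted streams into one sorted stream.
    The heap holds one 'current' value per stream plus its stream index."""
    heap = []
    iters = [iter(s) for s in streams]
    for i, it in enumerate(iters):
        first = next(it, None)
        if first is not None:
            heapq.heappush(heap, (first, i))
    while heap:
        value, i = heapq.heappop(heap)   # smallest current value wins
        yield value
        nxt = next(iters[i], None)       # refill from the same stream
        if nxt is not None:
            heapq.heappush(heap, (nxt, i))

print(list(merge_sorted_streams([1, 4, 7], [2, 5], [3, 6, 8])))
# [1, 2, 3, 4, 5, 6, 7, 8]
```

This interleaves rows rather than matching them up, which is exactly the distinction drawn above between this and a merge join.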
Hi,
Florian Weimer wrote:
Florian Weimer wrote:
I think you need it because there are potentially many input types.
Eh, tapes.
Aha..
Given the partitioning case, I'd expect all rows to have an equal
tuple descriptor. Maybe this is a matter of what to optimize, then?
Could you elaborate
Hi,
Florian Weimer wrote:
I think you need it because there are potentially many input types.
Given the partitioning case, I'd expect all rows to have an equal tuple
descriptor. Maybe this is a matter of what to optimize, then?
Could you elaborate on what use case you have in mind?
Gregory Stark wrote:
Not quite the same since the Executor-based implementation would have a static
tree structure based on the partitions. Even if the partitions are all empty
except for one or two you would still have to push the result records through
all the nodes for the empty partitions.
Hi,
Heikki Linnakangas wrote:
AFAICT it's roughly the same data structure as the zipper tree you
envisioned, but not implemented with separate executor nodes for each
level.
Aha, that sounds good to me. ;-)
As I've already mentioned, I think it's even better to simply show the
user a
Hello Gregory,
Gregory Stark wrote:
I've been hacking on the idea of an Append node which maintains the ordering
of its subtables merging their records in order. This is important for
partitioned tables since otherwise a lot of plans are not available such as
merge joins.
Cool!
Some time
Hi,
Georgi Chulkov wrote:
Please allow me to ask then:
1. In your opinion, would the above scenario indeed benefit from a raw-device
interface for large objects?
No, because file systems also try to do what you outline above. They
certainly don't split sequential data up into blocks and
Hi,
Gregory Stark wrote:
Perhaps if you're doing some form of replication between different
architectures you might want to use binary representation for your transfers.
Or if you're doing something in a PL language like compressing or bundling up
multiple data in a container format or
Hello Tom,
Tom Lane wrote:
I think you'd be nuts to bet your data on the binary representations
really being cross-platform compatible.
Can you elaborate on this? AFAICT the send/recv functions use network
byte ordering. What are the other problems between different architectures?
There
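The byte-ordering half of the question is easy to demonstrate outside the server: in Python's struct module the '!' prefix selects network (big-endian) order regardless of the host architecture. This only illustrates the integer byte-order point, not the float and composite-type caveats:

```python
import struct

# Pack a 32-bit integer in network byte order, most significant byte first.
wire = struct.pack('!i', 123456)         # 123456 == 0x0001E240
assert wire == b'\x00\x01\xe2\x40'

# Any receiver unpacking with '!' recovers the same value,
# whatever its native endianness.
assert struct.unpack('!i', wire)[0] == 123456
```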
Hi,
Tom Lane wrote:
Well, you're probably fine with integers and text, but beyond that it
gets a bit more dicey. I wouldn't want to assume that floats are
compatible across any random pair of architectures, and in the more
complex datatypes (such as arrays or geometric types) there might be
Hello Jan,
thank you for your feedback.
Jan Wieck wrote:
On 9/7/2007 11:01 AM, Markus Schiltknecht wrote:
This violates the common understanding of synchrony, because you can't
commit on a node A and then query another node B and expect it be
coherent immediately.
That's right
Hi,
Chris Browne wrote:
The approach that was going to be taken, in Slony-II, to apply locks
as early as possible so as to find conflicts as soon as possible,
rather than waiting, seems eager to me.
Agreed. WRT locking, one might also call it pessimistic, but that
sounds so... negative.
I
Hello Tom,
Tom Lane wrote:
So on the strength of that, I'm going to go ahead and commit the patch,
but I'd be interested to see benchmarks from people with access to
better hardware.
I've just completed two dbt2 test runs on a mid-level system, with 4GB
RAM and a 7 disk SATA RAID 1+0 w/ BBU.
Hi,
apoc9009 wrote:
That is Replication, NOT Backup
I've now read all of your messages in this thread, but I simply fail to
understand why you are so opposed to the term 'replication'. I
think the only thing which comes any close to what you're looking for is
replication (in
Hi,
apoc9009 wrote:
Translation for you:
A Backup is a File or Set of Files that contains the Data of your
Business critical Information.
It should not be Archived in the same place, the same House or the same
Room.
I disagree; a backup does not necessarily have to be a single file or a
Hi,
I'm asking for advice and hints regarding terms in database replication,
especially WRT Postgres-R. (Sorry for crossposting, but I fear not
reaching enough people on the Postgres-R ML alone)
I'm struggling on how to classify the Postgres-R algorithm. Up until
recently, most people
Hi Chris,
Chris Browne wrote:
I'm seeing some applications where it appears that there would be
value in introducing asynchronous messaging, ala message queueing.
http://en.wikipedia.org/wiki/Message_queue
ISTM that 'message queue' is a way too general term. There are hundreds
of different
Hi,
Tom Lane wrote:
We *have* a log-writing process. The problem is in getting the data to it.
Remember the imessages approach I'm using for Postgres-R? It passes
messages around using shared memory and signals the receiver on incoming
data. It's not perfect, sure, but it's a general
Tom Lane wrote:
Markus Schiltknecht [EMAIL PROTECTED] writes:
Tom Lane wrote:
We *have* a log-writing process. The problem is in getting the data to it.
Remember the imessages approach I'm using for Postgres-R? It passes
messages around using shared memory and signals the receiver
Hi,
Alvaro Herrera wrote:
Which is not always what happens in reality. Consider for example that
we borrowed some files from NetBSD, OpenBSD, Tcl, zic and others. It
would be nice to know exactly at what point we borrowed the file, so we
can go to the upstream repo and check if there's any
Hi
Jim C. Nasby wrote:
I understand the argument about metadata and all, and largely agree with
it. But on the other hand I think a version identifier is a critical
piece of information; it's just as critical as the file name when it
comes to identifying the information contained in the file.
Simon Riggs wrote:
i.e. if we have partitions for each year (2001, 2002, 2003 2004, 2005,
2006, 2007) AND we have already proved that 2005 is excluded when we
have a WHERE clause saying year &gt;= 2006, then we should be able to use
the ordering to prove that partitions for 2004 and before are also
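Assuming the clause is an inequality such as year >= 2006, the ordering argument can be sketched as a loop over partitions in key order: once the first non-excluded partition is reached, no earlier probing is needed (illustrative only, invented function):

```python
def excluded_by_ordering(partition_years, min_year):
    """Partitions are ordered by year; predicate is year >= min_year.
    Proving one partition excluded proves all earlier ones excluded,
    so we can stop at the first partition that might match."""
    excluded = []
    for y in sorted(partition_years):
        if y < min_year:
            excluded.append(y)
        else:
            break   # ordering guarantees no later partition is excluded
    return excluded

assert excluded_by_ordering([2001, 2002, 2003, 2004, 2005, 2006, 2007], 2006) \
       == [2001, 2002, 2003, 2004, 2005]
```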
Hi,
Zeugswetter Andreas ADI SD wrote:
CREATE INDEX x ON test(a, b, c);
isn't the same as
CREATE INDEX x ON test(c, b, a);
That is only a problem if you also want to avoid a sort (e.g. for an
order by),
..or if you want to use that index for 'WHERE a = 5'. The first one is
probably
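The column-order point can be illustrated outside the database: a btree on (a, b, c) keeps entries sorted by a first, so all rows with a = 5 form one contiguous, binary-searchable range; sorted by (c, b, a), the same rows are scattered. A sketch with made-up data:

```python
from bisect import bisect_left

# All (a, b, c) combinations; nested loops yield them already
# sorted lexicographically, i.e. in "(a, b, c) index" order.
rows = [(a, b, c) for a in range(10) for b in range(10) for c in range(10)]

# WHERE a = 5 against the (a, b, c) ordering: one contiguous range.
lo = bisect_left(rows, (5,))
hi = bisect_left(rows, (6,))
assert all(r[0] == 5 for r in rows[lo:hi]) and hi - lo == 100

# The same rows in "(c, b, a) index" order: scattered, so a binary
# search on the leading column cannot find them.
by_cba = sorted(rows, key=lambda r: (r[2], r[1], r[0]))
positions = [i for i, r in enumerate(by_cba) if r[0] == 5]
assert positions[-1] - positions[0] + 1 > len(positions)  # not contiguous
```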
Hi,
Martijn van Oosterhout wrote:
The executor would have to be clever enough to not do a single index
scan, but possibly scan through multiple indexes when asking for
uniqueness, depending on the partitioning rule set.
But it's not the executor that checks uniqueness, it's built into the
Hi,
NikhilS wrote:
The following things are TODOs:
iv) Auto generate rules using the checks mentioned for the partitions, to
handle INSERTs/DELETEs/UPDATEs to navigate them to the appropriate child.
Note that checks specified directly on the master table will get
inherited
automatically.
Hi,
NikhilS wrote:
Our current partitioning solution is based on inheritance. With that in
mind, for 8.3 I thought an implementation based on auto rules creation
would be the way to go.
That's completely reasonable. And as I've said, it's probably even a
step towards what I've outlined
Hi,
Simon Riggs wrote:
I agree with much of your post, though this particular point caught my
eye. If you'll forgive me for jumping on an isolated point in your post:
No problem.
Multi-table indexes sound like a good solution until you consider how
big they would be. The reason we need a
Hi,
Gregory Stark wrote:
Put another way, multi-table indexes defeat the whole purpose of having
partitioned the table in the first place. If you could have managed a single
massive index then you wouldn't have bothered partitioning.
That depends very much on the implementation of the
Hi,
Simon Riggs wrote:
Most high volume tables are Fact tables with potentially more than 1 row
per Object/Dimension, so the unique index isn't appropriate in those
cases.
When partitioning a Major Entity its much easier to regard the PK as the
partitioning key + unique key, which is
Hi,
Gregory Stark wrote:
However there are also cases such as where you have a=0..99 in one partition
and a=100..199 in partition two, etc. It could still automatically build
indexes on (a,b,c) on each partition and somehow note that the unique
constraint is guaranteed across the whole
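Under range partitioning on the leading index column, a uniqueness probe only ever needs the one partition whose range covers the key, so per-partition indexes suffice. A toy sketch with invented ranges (a in [0,100) goes to partition 0, [100,200) to partition 1, and so on; not PostgreSQL code):

```python
partitions = [set() for _ in range(3)]   # per-partition "indexes" on (a, b, c)

def insert_unique(a, b, c):
    """Route by the partitioning column; only that partition's index
    needs to be probed to enforce the unique constraint."""
    p = partitions[a // 100]
    if (a, b, c) in p:
        raise ValueError("duplicate key")
    p.add((a, b, c))

insert_unique(5, 1, 1)
insert_unique(105, 1, 1)
try:
    insert_unique(5, 1, 1)               # same key again
except ValueError as e:
    print(e)                             # duplicate key
```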
Hi,
Joshua D. Drake wrote:
If we don't have multi-table indexes how do we enforce a primary key
against a partitioned set?
The executor would have to be clever enough to not do a single index
scan, but possibly scan through multiple indexes when asking for
uniqueness, depending on the
Simon Riggs wrote:
The planner already uses the Append node to put together multiple plans.
The great thing is it will put together IndexScans and SeqScans as
applicable. No need for multi-scans as a special node type.
Yes... only that mixing 'concurrent' index scans in the right order
would
Andrew Dunstan wrote:
David Fetter wrote:
That would be where the provably-distinct part comes in, so yes.
That assumes you can provide some provably distinct test. In the general
case I have in mind that isn't so.
Could you please give a somewhat more concrete example, I'm not
following
Hi,
Andrew Dunstan wrote:
I guess my point was really that multi-table indexes might have uses
beyond partitioning.
Aha, now I understand. Thanks for the clarification.
Say I have two tables, each with a field FKed to a field in a third
table. I'd like to create the values to be unique
Hi,
Andrew Dunstan wrote:
O.k. everyone pay attention, I am about to agree with Greg! ;)
Greg, are there tools to migrate CVS to monotone or whatever your
favorite is? The reason I ask is that I migrate the CVS to SVN every 4
hours, I think, and it isn't perfect.
monotone ships its own
Hi,
Matthew D. Fuller wrote:
I would say that a far greater contributor in practice would simply be
frequency. If you diverge on your significant feature for 6 months,
then try to merge in upstream changes from the main dev, you will be
in hell no matter what merge algorithm you use.
Do you
Hi,
Tom Lane wrote:
Yah know, the one bit of these pitches that always sounds like pure
snake oil is the claim that they offer some kind of mechanical solution
to merge conflicts. AFAICS that has nothing to do with the SCMS in use
and everything to do with whether your diff command is
Hi,
Warren Turkal wrote:
Cvs2svn seems to make as much sense of CVS data as possible. The only real
problem I have seen is with regard to the malformed files I mentioned
earlier.
cvs2svn (1.x) still heavily relies on timestamps, which is certainly
correct in most cases. But they are
Hi,
[EMAIL PROTECTED] wrote:
I'll have to try kdiff3 - but the merge command, although it often works,
I strongly dislike when it marks up the lines as there was a conflict here
and gives you three files in the directory to choose to start from. This is
far too manual, which invites mistakes.
Hi,
Peter Eisentraut wrote:
Oleg Bartunov wrote:
It's not so big addition to the gram.y, see a list of commands
http://mira.sai.msu.su/~megera/pgsql/ftsdoc/sql-commands.html.
As we are still discussing the syntax: is there a proposal for what a
function-based syntax would look like?
Hi,
Andrew Dunstan wrote:
1. The buildfarm is very heavily dependent on CVS, and any change to
anything else will be quite painful. There is no guarantee that all the
members even have SVN installed,
But you can guarantee they have CVS or even cvsup installed? That seems
dubious to me.
Hi,
[ I've CCed the monotone-devel list, as I'm sure those people are
interested, too. ]
Stefan Kaltenbrunner wrote:
Beside that - are all of the currently supported Platforms officially
supported by the proposed SCMSes ?
I can only speak for monotone. We have (had) buildbots for x86
Hi,
Andrew Dunstan wrote:
CVSup is not required, and is absent from most existing clients. I don't
use it any more since the Fedora project stopped supporting it.
..which is quite understandable, considering the PITA compiling Modula-3
gives you (or at least has given me, it still hurts).
Hi,
Andrew Dunstan wrote:
If we are worried about the size of the transition table and keeping it
in cache (see remarks from Tom upthread) then adding more keywords seems
a bad idea, as it will surely expand the table. OTOH, I'd hate to make
that a design criterion.
Yeah, me too.
Hello Richard,
you should probably have read the thread on the PostgreSQL -hackers
mailing list I've linked to... at least you didn't make Tom's point ;-)
Richard Levitte - VMS Whacker wrote:
1. Do you want to stay with CVS or do you want to move to something
else?
Most PostgreSQL
Hi,
Pavel Stehule wrote:
Functions may not seem effective, but users cannot learn new syntax.
Are you serious? That argument speaks exactly *for* extending the
grammar. From other databases, users are used to:
CREATE TABLE ... (SQL)
CREATE INDEX ... (SQL)
CREATE FULLTEXT INDEX ...
Hi,
Richard Levitte - VMS Whacker wrote:
In message [EMAIL PROTECTED] on Thu, 22 Feb 2007 17:38:26 +0100, Markus
Schiltknecht [EMAIL PROTECTED] said:
markus So far, I'm getting the sense that there are a lot of
markus opinions on what replacement system to use, a bit carelessly
markus
Hi,
Tom Lane wrote:
You mean four different object types. I'm not totally clear on bison's
scaling behavior relative to the number of productions
You really want to trade parser performance (which is *very*
implementation specific) for ease of use?
Bison generates a LALR [1] parser, which
Hi,
Florian G. Pflug wrote:
According to http://en.wikipedia.org/wiki/LR_parser processing one
token in any LR(1) parser in the worst case needs to
a) Do a lookup in the action table with the current (state, token) pair
b) Do a lookup in the goto table with a (state, rule) pair.
c) Push one
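The per-token work listed above (ACTION lookup, GOTO lookup, stack push) is visible in a toy table-driven LR parser. The tables below are hand-built SLR(1) tables for a two-rule grammar, purely for illustration and unrelated to bison's actual output:

```python
# Grammar:  E -> E '+' n   (rule 1)
#           E -> n         (rule 2)
ACTION = {
    (0, 'n'): ('s', 2),
    (1, '+'): ('s', 3), (1, '$'): ('acc',),
    (2, '+'): ('r', 2), (2, '$'): ('r', 2),
    (3, 'n'): ('s', 4),
    (4, '+'): ('r', 1), (4, '$'): ('r', 1),
}
GOTO = {(0, 'E'): 1}
RULES = {1: ('E', 3), 2: ('E', 1)}       # rule -> (lhs, rhs length)

def lr_parse(tokens):
    stack, toks, pos = [0], list(tokens) + ['$'], 0
    while True:
        act = ACTION[(stack[-1], toks[pos])]      # a) ACTION[state, token]
        if act[0] == 's':                         # shift: push state, consume
            stack.append(act[1]); pos += 1
        elif act[0] == 'r':                       # reduce: pop |rhs| states,
            lhs, length = RULES[act[1]]
            del stack[-length:]
            stack.append(GOTO[(stack[-1], lhs)])  # b) GOTO[state, rule lhs]
        else:
            return True                           # accept

assert lr_parse(['n', '+', 'n'])
```

Per token this is a constant number of dictionary lookups and stack operations, which is why the table lookups rather than the number of productions dominate parse cost.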
Hi,
since str(n?)cat got replaced with strlcat, I fail to build PostgreSQL
(current CVS HEAD). HAVING_DECL_STRLCAT is not set, so AFAIK, the
strlcat() function from src/port should be used. However, I've read the
README there, but still don't quite know what's wrong.
The linker throws:
gcc
Hi,
Jan Wieck wrote:
Are we still discussing if the Postgres backend may provide support for
a commit timestamp, that follows the rules for Lamport timestamps in a
multi-node cluster?
No. And I think you know my opinion about that by now. ;-)
It seems more like we are drifting into what
Hi,
Jan Wieck wrote:
Then let me give you a little puzzle just for the fun of it.
A database containing customer contact information (among other things)
is a two node multimaster system. One is serving the customer web
portal, the other is used by the company staff including the call
Hi,
Zeugswetter Andreas ADI SD wrote:
And time based
is surely one of the important conflict resolution methods for async MM
replication.
That's what I'm questioning. Wouldn't any other deterministic, but
seemingly random abort decision be as clever as time based conflict
resolution? It
Hi,
Jan Wieck wrote:
Whatever strategy one will use, in an async multimaster there are always
cases that can be resolved by rules (last update being one of them), and
some that I can't even imagine solving so far. I guess some of the cases
will simply boil down to the application has to make
Hi,
Theo Schlossnagle wrote:
On Feb 4, 2007, at 1:36 PM, Jan Wieck wrote:
Obviously the counters will immediately drift apart based on the
transaction load of the nodes as soon as the network goes down. And in
order to avoid this clock confusion and wrong expectation, you'd
rather have a