[snip]
Floating point numbers are accurate but not precise.
OK, now this one beats me... what's the difference between accurate
and exact? I thought both mean something like correct, but precise
refers to some action and accurate applies to a situation or
description...
I'm actually curious
Cristian,
I bet it's related to some rounding issue and the fact that floating
formats are approximate even for small integers.
Probably 12 ends up being slightly less in floating format (something
like 11.999...), and the cast to integer is truncating it.
Not 100% sure though... read up on
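A quick SQL illustration of the suspected effect (the constant here is
invented, not taken from the original report):

  -- 0.29 * 100 in float8 is 28.999999999999996, not 29, so truncation
  -- loses 1 while rounding does not:
  select trunc(0.29::float8 * 100);   -- 28
  select round(0.29::float8 * 100);   -- 29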
On Mon, 2005-11-07 at 18:43, Tom Lane wrote:
[snip]
If it doesn't have a value for the parameter, it'll assume 10% of table
rows, which is what it's done for a long time if the LIMIT isn't
reducible to a constant.
Is 10% a reasonable guess here?
Here we use limit in combination with prepared
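A sketch of the situation being described (the table name is made up):

  -- With a parameterized LIMIT the planner cannot know the value at
  -- plan time, so it falls back to the 10%-of-rows assumption above:
  PREPARE q(int) AS SELECT * FROM users LIMIT $1;
  EXPLAIN EXECUTE q(10);  -- the row estimate reflects the guess, not 10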
On Fri, 2005-11-11 at 18:15, Jaime Casanova wrote:
On 11/11/05, Josh Berkus josh@agliodbs.com wrote:
Jaime,
so I suppose we can reuse much of the code, breaking the merge into 3
pieces... for now these are just thoughts; I will think more on this
and try to implement it...
OK, I'm relatively new on this list, and I might have missed a few
discussions on this topic.
I wonder if doing it this way would not be better than using a table
lock (sketched in SQL below):
- set a save point;
- insert the row;
- on error:
- roll back to the save point;
- update the row;
-
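Spelled out in SQL, the proposed sequence would look something like
this (table and values are hypothetical):

  BEGIN;
  SAVEPOINT sp;
  INSERT INTO counters (name, value) VALUES ('jobs', 1);
  -- if the INSERT fails with a unique-key error:
  ROLLBACK TO SAVEPOINT sp;
  UPDATE counters SET value = value + 1 WHERE name = 'jobs';
  COMMIT;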
On Fri, 2005-11-11 at 20:22, Bruno Wolff III wrote:
On Fri, Nov 11, 2005 at 18:48:33 +0100,
Csaba Nagy [EMAIL PROTECTED] wrote:
OK, I'm relatively new on this list, and I might have missed a few
discussions on this topic.
I wonder if doing it this way would not be better than using
Herrera wrote:
Csaba Nagy wrote:
session_1= create table test (col smallint primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
"test_pkey" for table "test"
CREATE TABLE
session_1= begin;
BEGIN
cnagy= insert into test values (1);
INSERT 165068987 1
session_2
Well, from my point of view it is more than delete and insert. That I
can do right now with existing infrastructure. The problem I try to
solve is something along these lines: a bunch of clients try to update
a count, and ONE of them must initialize the count if it does not
exist... this can't be done with
Yes, these algorithms are clear to me, but they don't work for batch
updates in postgres without savepoints before each row insert/update,
which is not good for performance (not to mention that on older
postgres versions without savepoint support it won't work at all). If
there is a way of no race
OK, in this case I don't care about either MERGE or REPLACE, but for an
UPSERT which does the locking :-)
Cheers,
Csaba.
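For the counter-initialization case above, a PL/pgSQL sketch (the
counters table is hypothetical, and this needs savepoint support, i.e.
8.0 or later, since the exception block uses a savepoint internally):

  CREATE FUNCTION upsert_count(k text) RETURNS void AS $$
  BEGIN
      UPDATE counters SET value = value + 1 WHERE name = k;
      IF NOT FOUND THEN
          BEGIN
              INSERT INTO counters (name, value) VALUES (k, 1);
          EXCEPTION WHEN unique_violation THEN
              -- someone else initialized it concurrently;
              -- fall back to counting via an update
              UPDATE counters SET value = value + 1 WHERE name = k;
          END;
      END IF;
  END;
  $$ LANGUAGE plpgsql;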
On Thu, 2005-11-17 at 13:32, Martijn van Oosterhout wrote:
On Thu, Nov 17, 2005 at 12:52:53PM +0100, Csaba Nagy wrote:
Yes, these algorithms are clear to me, but they don't
http://acronymfinder.com/
Cheers,
Csaba.
On Fri, 2005-11-25 at 19:24, Gustavo Tonini wrote:
What is ISTM?
Sorry,
Gustavo.
Maybe mausoleum would be an even better name :-D
Cheers,
Csaba.
On Thu, 2005-12-01 at 11:35, Euler Taveira de Oliveira wrote:
--- Richard Huxton dev@archonet.com wrote:
If it's practical to keep them, I'd like to suggest doing so. If it's
not practical, could we have a
On Thu, 2005-12-01 at 22:01, Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
On the other hand the type I would prefer to see are hints that feed
directly
into filling in information the planner lacks. This only requires that the
user understand his own data and still lets the
[snip]
I want to be sure my existing queries keep using the plans they've been using
until I allow them to change.
I don't want to sit down and type select count(*) from users and have it not
work correctly (ie, use a sequential scan) because the system is so
single-mindedly tuned for the
On Wed, 2005-12-07 at 19:36, Greg Stark wrote:
[snip]
We periodically ran into problems with load spikes or other performance
problems causing things to get very slow and stay slow for a while. Letting
things settle out usually worked but occasionally we had to restart the whole
system to
On Thu, 2005-12-08 at 16:05, Tom Lane wrote:
[SNIP]
There isn't any way for it to do that (ie, be sure everyone else has
adjusted to the new state of affairs), short of acquiring some sort of
short-term exclusive lock on the table, which is a really bad idea.
The pending lock would block other
Is there any chance of psql opening a new session if it's inside a
transaction, and using that to do whatever querying is needed? Just
something like the control connection in ftp (the analogy is not very
good). That could cause other surprises though (it could fail for
example due to too many connections
Hi all,
After spending some time (~2 days to fully(?) understand the process)
setting up a standby machine using WAL shipping, and experimenting to
be fairly sure it is working reliably, I started thinking about how I
would like it to be done.
My dream-process of setting up a standby would be:
Simon,
For me the usage pattern would be: log all params, bind-time values, on
the same log line as the log_min_duration entries. That's what I need in
order to know which are the non-performant queries, and it also helps on
occasion to identify application problems.
In any case all your plans sound very
Hi all,
I decided to start implementing a streamlined WAL-shipping-based
standby building procedure. My aim is fairly simple: to be able to
build a standby in as automated a way as possible.
The ultimate simplicity would be for me:
- install postgres on the standby machine;
- create a directory for
, and you can have multiple slaves
at the same time if you like, e.g. if you want to smoothly move over the
slave to another machine.
Cheers,
Csaba.
On Tue, 2006-02-07 at 16:18, Tom Lane wrote:
Csaba Nagy [EMAIL PROTECTED] writes:
The procedure should be something similar to the one available today
On Tue, 2006-02-07 at 16:45, Tom Lane wrote:
Csaba Nagy [EMAIL PROTECTED] writes:
You obviously did not read further down :-)
I was proposing a subscription system, where the slave can specify the
oldest WAL file it is interested in, and keep that up to date as it
processes them
On Tue, 2006-02-07 at 16:58, Andrew Rawnsley wrote:
IMHO the #1 priority in the current PITR/WAL shipping system is to make the
standby able to tolerate being shut down and restarted, i.e. actually having
a true standby mode and not the current method of doing it only on startup.
This fits
Well, I've tested it a bit:
db=# select version();
                            version
------------------------------------------------------------------------
 PostgreSQL 8.1.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5
 (Debian 1:3.3.5-13)
(1 row)
What do you find difficult about the current method? That's got to be
the first discussion point.
The main problem I have is the complexity of the setup.
It involves a lot of additional scripting which you have to get right
for it to be actually reliable. The documentation gives a rough idea of
I think this was discussed many times on this list, and the main
conclusion was: if you don't trust your DB machine's admin, any security
measure against him will be only illusory. The sysadmin can in any case
access the data; you can only make this harder, you can't prevent it.
So you'd better
OK, this is news to me; I recall that when I last looked at the
configuration docs it was start-up time, but I might be wrong.
[looking up the docs]
OK, citing the 8.1 online docs:
17.5.3. Archiving
archive_command (string)
The shell command to execute to archive a completed segment
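The docs there give an example along the lines of:

  archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'

where %p expands to the path of the segment to archive and %f to its
file name.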
On Fri, 2006-02-24 at 19:12, Rod Taylor wrote:
On Fri, 2006-02-24 at 12:48 -0500, Tom Lane wrote:
Rod Taylor [EMAIL PROTECTED] writes:
I watch for table bloat but I haven't figured out a nice way of tracking
down the postgresql process with the oldest transaction running short of
What bothers me about the TODO item is that if we have to sequentially
scan indexes, are we really gaining much by not having to sequentially
scan the heap? If the heap is large enough to gain from a bitmap, the
index is going to be large too. Is disabling per-index cleanout for
expression
Are you running 8.1? If so, you can use autovacuum and set per-table
thresholds (read: vacuum aggressively) and per-table cost delay settings
so that the performance impact is minimal. If you have tried 8.1
autovacuum and found it unhelpful, I would be curious to find out why.
Yes, I'm
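For reference, in 8.1 those per-table settings live in the
pg_autovacuum system catalog; a sketch with made-up numbers (treat the
exact column list as an assumption):

  INSERT INTO pg_autovacuum
    (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
  VALUES ('my_queue_table'::regclass, true, 200, 0.01,
          200, 0.02, 20, 200);  -- vacuum aggressively, but throttled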
Wow, this is a great deal of burden that I certainly didn't have to go
through last time :-( Not sure why; maybe the laws changed or something.
It is crystal clear that I have to do it this time however.
I think you're overreacting, guys... I would first try to go to the
nearest Canadian embassy and
Eww. How expensive is scanning an index compared to the heap? Does
anyone have figures on that in terms of I/O and time?
See this post for an example:
http://archives.postgresql.org/pgsql-performance/2006-02/msg00416.php
For my 200 million row table, scanning the pk index took ~4 hours. And
then
I got the impression that Csaba is looking more for multiple
simultaneous vacuums than for partial vacuum.
So he rather needs Hannu Krosing's patch for simultaneous vacuum ...
Well, I guess that would be a good solution to the queue table
problem. The problem is that I can't deploy
Charlie,
I'm currently working on a similar solution (it's true I'm only in the
thinking phase). I don't have too much time to spend on it (~few hours
per week, on the train during commuting), so it's not progressing too
fast...
Nevertheless, I would like to do a first proof-of-concept version
[Please use reply to all so the list is CC-d]
Charlie,
I guess what you're after is to make sure the WAL buffers are shipped to
the stand-by at the same time as they are committed to disk. In any
other case your desire to have the stand-by EXACTLY in sync with the
primary server is not gonna
My own thoughts about the problems with our work_mem arrangement are
that the real problem is the rule that we can allocate work_mem per sort
or hash operation; this makes the actual total memory use per backend
pretty unpredictable for nontrivial queries. I don't know how to fix
this
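A hypothetical query where one backend can legitimately use a multiple
of work_mem (table names invented):

  SET work_mem = 65536;  -- 64 MB, specified in KB in these versions
  EXPLAIN SELECT a.x, count(*)
    FROM big_a a JOIN big_b b USING (x)
   GROUP BY a.x ORDER BY count(*) DESC;
  -- a hash join, a hash aggregate and a sort may each claim up to
  -- work_mem, so this single query can use roughly 3x work_mem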
- Postgres intrinsic log-shipping replication (we have one to contribute)
Are you saying you have a working WAL-shipping based portable (meaning
working well on all platforms) replication already done? Cause I was
looking into implementing just this one :-)
Do you have some details how it works
I wanted to make a link to the cvsweb interface but my XSLT skills
weren't up to that apparently.
The following XSL fragment will do the trick for the cvsweb link:
<tr><th align="left">File</th><td>
<xsl:element name="a">
<xsl:attribute
ALTER TABLE childN ALTER INHERITS DROP (parent);
ALTER TABLE childN ALTER INHERITS ADD (parent);
Wouldn't it be possible to allow the ADD/DROP to happen in the same
statement, like:
ALTER TABLE childN ALTER INHERITS DROP crt_parent ADD new_parent;
or:
ALTER TABLE childN ALTER INHERITS DROP
[...]
There has to be a more linear way of handling this scenario.
So vacuum the table often.
Good advice, except if the table is huge :-)
Here we have for example some tables which are frequently updated but
contain 100 million rows. Vacuuming that takes hours. And the dead row
Good advice, except if the table is huge :-)
... Then the table shouldn't be designed to be huge. That represents
a design error.
[snip]
This demonstrates that archival material and active data should be
kept separately.
They have different access patterns; kludging them into the same
I suppose you have a table memberships (user_id, group_id) or something
like it; it should have as few columns as possible; then try regularly
clustering on group_id (maybe once a week) so that all the records for a
particular group are close together. Getting the members of a
Let me ask a question: you have this hundred million row table. OK, how
much of that table is read/write? Would it be possible to divide the
table into two (or more) tables where one is basically static, with only
infrequent inserts and deletes, and the other is highly updated?
Well, all of it is
You said the activity comes in bursts per group, so the obvious
partitioning would be per group.
If you have too many groups to have one partition per group you could
try to find some modulo or other rule to spread them into separate
partitions.
This could be a solution... but then I'm not
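A sketch of the modulo idea using inheritance partitioning (all names
invented; note that constraint exclusion in this era cannot reason about
the modulo, so queries would have to repeat the predicate explicitly):

  CREATE TABLE memberships (user_id int, group_id int);
  CREATE TABLE memberships_p0
    (CHECK (group_id % 4 = 0)) INHERITS (memberships);
  CREATE TABLE memberships_p1
    (CHECK (group_id % 4 = 1)) INHERITS (memberships);
  -- ... likewise p2 and p3; a query then adds e.g.
  --   WHERE group_id = 42 AND group_id % 4 = 2
  -- so the planner can skip the other partitions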
Well, the only thing left is to cluster the database. There are a couple
ways to do this, one switch to a platform that supports clustering or
create an API to wrap multiple databases. If your queries are simple and
limited, you could create an HTTP/XML service that wraps a number of
These sorts of solutions, IMHO, don't show how good PostgreSQL is, but
show where it is very lacking.
We all know Postgres is lacking; some of us try to improve it (some with
more success than others). People who know the current limitations but
like the capabilities try to find
First of all, I base my assumptions on what I recall having read on
this list, as I didn't try partitioning using inheritance yet. It's not
trivial to set up and I didn't have the time to play with it. So I
wouldn't know for sure that it won't work fine with our application, and
that will
Usually it gets really bad if you *don't* run vacuum continuously, maybe
hoping to do it in slower times at night. For a high-update db you have
to run it continuously, maybe with some 5-15 sec pauses between runs.
And how much I/O does this take?
Surprisingly it's mostly WAL
On Sun, 2006-06-25 at 05:29, Alvaro Herrera wrote:
Mark Woodward wrote:
I originally suggested a methodology for preserving MVCC and everyone is
confusing it with update-in-place; this is not what I intended.
It doesn't make sense, but maybe vacuuming a page would. Naturally, it
would need
I think one of the issues might have been: how will you handle other
indexes on the table when you can no longer point them at an item (since
items will need to move to maintain an IOT).
I guess you shouldn't allow any other indexes. That's a perfectly
acceptable compromise I think... it would
Does our CVS setup cater for segregated rights like this? Or would that
be done on a trust basis?
No, I don't believe you can do this with CVS at all. We'd need something
like SVN/WebDAV to be able to grant write access just to specific parts
of the tree to different people.
It is
On Thu, 2006-07-13 at 15:29, Stephen Frost wrote:
It's not the PostgreSQL project's problem, that's true, but it certainly
becomes an issue for distributions. Java as a PL ends up being a pretty
odd case... If there isn't anything in the PL code itself which forces a
dependency beyond gcj
On Thu, 2006-07-13 at 17:03, Tom Lane wrote:
[...] I don't know what
other people who do core development feel about that --- but I dislike
the idea that when someone changes such an API, the buildfarm will go
all red because there's only one person with the ability to fix PL/Java.
But the
We already have EXPLAIN ANALYZE. Perhaps the right way to do this is
something that provides similar output. I could see something that
looks like EXPLAIN for the parts that have not yet executed, something
reasonable to show progress of the currently active part of the plan
(current time,
[snip] Suppose that everybody tells me that Postgres is too slow for
real-time databases; then I would be very foolish trying to resolve
these problems with Postgres, don't you think?
I think you didn't understand correctly: postgres is not slow, it is
just not suitable for real RT applications
[please use reply-to-all, otherwise you'll have what you just had: the
guy you wrote to goes home for the weekend and all the rest of the
people on the list who would answer you won't know there is something to
answer...]
On Fri, 2006-07-21 at 13:39, moises wrote:
Sorry if I can't explain myself
[snip]
OTOH, one has to be very careful not to mix terms here. In industrial
(production floor) applications, the term 'real time database' refers to
something completely different than a relational, transactional DB.
But relational and transactional are orthogonal, they don't
imply/require
Semantically, NOLOCK (or whatever word) seems most closely associated with
CREATE INDEX, so having it as third word would make sense, and it would be
quite easy to parse for psql.
Strictly speaking, however, it would have to be NOLOCKLY in that case. :-)
In this case CREATE INDEX
OK, offset added to TODO item. What would the offset give us?
The last offset could be remembered by the external program, and it only
has to transfer from the last offset to the new one. It allows
incremental streaming of the WAL files... of course the external program
will be a lot more
The problem I see with this: wouldn't in this case the normal postgres
WAL archiving conflict with this streaming?
You are not forced to use it if your shell scripts do conflict.
What I envisioned was that the current WAL archiving shell script would
just do some CRC check over
[snip] (In fact, it's
trivial to see how user-defined functions that are mislabeled immutable
could make this fail.) So retail vacuum without any cross-check that
you got all the index tuples is a scary proposition IMHO.
Wouldn't it work to restrict that kind of vacuum to only tables which have
[snip]
Forcing people to use a specific casing scheme is just going to lead
to confusion and user frustration. If there's not a very solid
I guess nobody will force people to use the units at all.
*functional* argument for it, we shouldn't do it. Wanting to enforce
a convention that
On Thu, 2006-07-27 at 17:57, Peter Eisentraut wrote:
Florian G. Pflug wrote:
Rounding up would have the advantage that you could just specify 0
in the config file, and have postgres use the smallest value
possible.
In most algebras, dividing zero by something is still zero, so there'd
On Fri, 2006-07-28 at 17:37, Tom Lane wrote:
Marc G. Fournier [EMAIL PROTECTED] writes:
The attacking IP, from the logs, appears to be 87.230.6.96 ...
Perhaps a complaint to their ISP is in order --- RIPE suggests
[EMAIL PROTECTED]
That looks one level too high; the immediate source seems to
On Mon, 2006-07-31 at 15:00, Martijn van Oosterhout wrote:
On Mon, Jul 31, 2006 at 08:47:38AM -0400, Rod Taylor wrote:
It appears that the superuser does not have connection limit
enforcement. I think this should be changed.
So if some admin process goes awry and uses up all the connection
Never mind, I realize now that you're talking about a different setting.
I thought there was a limit for super-users too... citation from:
http://www.postgresql.org/docs/8.1/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS
Cheers,
Csaba.
What I'm looking for is some concentrated testing. The fact that some
people once in a while SIGTERM a backend doesn't give me any confidence
in it.
Now wait a minute, is there some risk of lockup if I kill a backend?
Cause I do that relatively often (say 20 times a day, when some web
users
You didn't answer the original question: is killing a backend with
SIGTERM known/suspected to be dangerous? And if yes, what's the risk
(pointers to discussions would be nice too).
statement_timeout is your friend.
I know, but unfortunately I can't use it. I did try to use
statement_timeout and it
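For reference, the suggested setting is per-session and, in these
versions, measured in milliseconds:

  SET statement_timeout = 30000;  -- cancel any statement over 30 s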
On Thu, 2006-08-03 at 18:10, Csaba Nagy wrote:
You didn't answer the original question: is killing a backend with
SIGTERM
^^^^^^^
Never mind, I don't do that. I do 'kill backend_pid' without specifying
the signal, and I'm sufficiently unfamiliar
Stuck? You have not shown us a case where SIGTERM rather than SIGINT
is necessary or appropriate. It seems to me the above is assuming the
existence of unknown backend bugs, exactly the same thing you think
I shouldn't be assuming ...
I do know a case where a plain kill will seem to be
man kill says the default is SIGTERM.
OK, so that means I do use it... is it known to be dangerous? I thought
till now that it was safe to use. What about select pg_cancel_backend()?
Thanks,
Csaba.
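For what it's worth, pg_cancel_backend() sends SIGINT rather than
SIGTERM, i.e. it cancels the current query but leaves the backend
alive (the PID below is made up):

  SELECT pg_cancel_backend(12345);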
Constantin,
What binding are you using? We here use Java+JDBC, and we were able to
get stable query plans by forcing server-side prepared statements (using
PGStatement#setPrepareThreshold with 1 as the threshold), where the
query is prepared without knowing the parameter values. This can
On Tue, 2006-08-08 at 12:36, Constantin Teodorescu wrote:
We have tried PGStatement#setPrepareThreshold with 1 as the threshold
but it's not a good solution.
Actually it's worse. Considering that you have 5 different query plans,
you are selecting approximately a random one of them, not taking into
On Tue, 2006-08-08 at 22:14, Tom Lane wrote:
So some kind of override for statistical guesses doesn't seem completely
silly to me. But it needs to be declarative information that's stored
somewhere out of view of the actual SQL queries. IMHO anyway.
The real problem is that sometimes there's
1. Define it as a feature not a bug. People do occasionally ask for
UPDATE foo ... LIMIT 1 after all. But this is a pretty klugy way of
getting that, and the arguments that say allowing LIMIT on updating
queries would be a bad idea haven't lost their force.
Being one of those who was asking
- Redefine prepared to mean parsed rather than parsed and planned.
How about prepared meaning really prepared... in the sense of parsed,
all sensible plans analyzed, and a meta-plan saved which, based on
current statistics and parameter values, chooses one of the considered
(and cached) plans?
That
On Thu, 2006-08-31 at 14:32, Peter Eisentraut wrote:
On Thursday, 31 August 2006 14:11, Csaba Nagy wrote:
How about prepared meaning really prepared... in the sense of parsed,
all sensible plans analyzed, and a meta-plan saved which, based on
current statistics and parameter values, chooses
On Thu, 2006-08-31 at 15:19, Peter Eisentraut wrote:
OK, why don't you work out an example. Let's look at this query:
SELECT * FROM t1 WHERE a LIKE $1;
What two plans would you prepare?
if substring($1 from 1 for 1) != '%' then
    use plan 1 (see below);
else
    use plan 2 (see below);
end
On Thu, 2006-08-31 at 15:49, Peter Eisentraut wrote:
Note that plan 1 can only be created if you know the actual value for $1.
Why would that be so? The plan can contain functions of $1 (both
constants in plan 1 are a function of $1).
Cheers,
Csaba
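As a sketch, the two plans under discussion would be roughly:

  -- Plan 1 (no leading wildcard in $1): an index range scan whose
  -- bounds are computed from $1 at execution time, conceptually
  --   a >= prefix_of($1) AND a < next_string_after(prefix_of($1))
  -- (both functions are hypothetical names for deriving the constant
  -- prefix of the pattern and its successor string).
  -- Plan 2 (leading wildcard): a sequential scan filtering a LIKE $1.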
Neither of these problems has anything to do with statistics getting
stale.
... and the second one would benefit from a meta-plan facility which
puts some meta-plan nodes on top of specific plans to dispatch based
on parameter values at runtime.
Incidentally, the dispatch could check the
Although I don't have a clear opinion myself, I sometimes read on this list
that people are using prepared statements to get safe, stable plans, i.e.
plans that don't depend on the specific parameter input.
I definitely want the possibility of getting stable plans. That's only
possible if
select format_uuid(mypk,'format2') from tbluuid;
and then get: 6b13c5a1-afb4-dcf5-ce8f-8b4656b6c93c
How about instead of fixed formats, you allow a format string using the
diverse parts of the GUID, a la time formatting functions? Then
everybody can format it as they want.
Just an idea.
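Hypothetically something like this (format_uuid and the pattern syntax
are invented, by analogy with to_char() for dates):

  SELECT format_uuid(mypk, 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX')
  FROM tbluuid;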
True, but running several dozen instances on a single machine will
require a lot more memory (or, conversely, each individual database gets
a lot less memory to use).
Of course, this is all hand-waving right now... it'd be interesting to
see which approach was actually better.
I'm running
Not being an expert, but to me it looks like client_encoding is set to
UTF8 while the data being sent is something other than UTF8. I've seen
this happen on Linux when connecting with PuTTY from Windows (and then
psql from the Linux machine) with the wrong encoding set in PuTTY.
And we're back to routine REINDEX I guess :-(. This doesn't seem like a
satisfactory answer.
If the reindex works online, it could be a satisfactory solution.
Cheers,
Csaba.
I think you build a whole new index named something like .temp-reindex
and then, as the last step of the second transaction, delete the old
index and rename the new index.
That would require getting exclusive lock on the table.
Just out of curiosity, creating a new index concurrently (or
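For instance, with 8.2's CREATE INDEX CONCURRENTLY one could
approximate an online REINDEX by hand, at the price of a brief lock for
the swap (names invented; this doesn't work for constraint-backing
indexes like primary keys):

  CREATE INDEX CONCURRENTLY idx_col_new ON test (col);
  DROP INDEX idx_col;  -- takes a short exclusive lock
  ALTER INDEX idx_col_new RENAME TO idx_col;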
Backspace deletes character-wise, as long as you have LANG set
correctly. Check LANG and the LC_* environment variables.
OK, you're right:
$ echo $LANG
[EMAIL PROTECTED]
# show client_encoding ;
 client_encoding
-----------------
 UTF8
(1 row)
But then I wonder why the client encoding is
[Snip explanations]
Comments?
Would it be reasonable to include one more switch: 'include
dependencies'?
That would work like this:
* first consider all to-be-included objects (possibly limited by the
include switches);
* if dependencies are included, add all dependent objects, plus
We are two months past feature freeze ... adding entirely new features
to pg_dump is *not* on the table for 8.2.
Ok, clear.
The scenario I care most about is to be able to make a complete
database dump (including non-schema objects) while excluding only a few
tables.
Isn't this the
On Mon, 2006-10-09 at 16:24, Tom Lane wrote:
I think we've agreed that if you use some exclusion switches, but not
any inclusion switches, then only the specific objects matching your
switches are excluded. CVS HEAD gets this wrong, but I'm going to work
on it today.
Cool, that makes it
The above process can be performed without tool support, but it's clear
that further automation will help greatly here. I foresee that the
development of both the server side and the tools will take more than
one release. Discussion of tool support can begin once we have agreed on
the server-side capability.
On Tue, 2008-09-09 at 20:59 +0200, Zeugswetter Andreas OSB sIT wrote:
All in all a useful streamer seems like a lot of work.
I mentioned some time ago an alternative idea of having the slave
connect through a normal SQL connection and call a function which
streams the WAL file from the point
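A purely hypothetical sketch of that interface (no such function
exists at this point):

  -- the slave polls over an ordinary SQL connection:
  SELECT pg_stream_wal('000000010000000000000042', 16384);
  -- returning the bytes of that WAL segment from the given offset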
On Thu, 2008-09-11 at 15:23 +0300, Heikki Linnakangas wrote:
I'd imagine that even if applying the WAL on the slave is blocked, it's
still streamed from the master to the slave, and in case of failover the
slave will fast-forward before starting up as the new master.
Which begs the question:
On Thu, 2008-09-11 at 15:42 +0300, Heikki Linnakangas wrote:
One problem with this, BTW, is that if there's a continuous stream of
medium-length transactions in the slave, each new snapshot taken will
prevent progress in the WAL replay, so the WAL replay will advance in
baby steps, and can
On Thu, 2008-09-11 at 16:19 +0300, Heikki Linnakangas wrote:
Well, yes, but you can fall behind indefinitely that way. Imagine that
each transaction on the slave lasts, say 10 minutes, with a new
transaction starting every 5 minutes. On the master, there's a table
that's being vacuumed (or
On Thu, 2008-09-11 at 15:33 +0200, Dimitri Fontaine wrote:
What would forbid the slave to choose to replay all currently lagging WALs
each time it's given the choice to advance a little?
Well now that I think I understand what Heikki meant, I also think the
problem is that there's no choice at
On Fri, 2008-09-12 at 09:38 +0100, Simon Riggs wrote:
If you request a block, we check to see whether there is a lookaside
copy of it prior to the tuple removals. We then redirect the block
request to a viewpoint relation's block. Each viewpoint gets a separate
relfilenode. We do the
On Fri, 2008-09-12 at 12:31 +0100, Richard Huxton wrote:
There was a suggestion (Simon - from you?) of a transaction voluntarily
restricting itself to a set of tables.
While thinking about how easy it would be for the DBA to specify the set
of tables a single query is accessing, first I thought
I think that enabling long-running queries this way is both low-hanging
fruit (or at least medium-height-hanging ;) ) and also consistent with
the PostgreSQL philosophy of not replicating effort. As an example, we
trust the OS's file system cache and don't try to write our own.
I again have questions
On Fri, 2008-09-12 at 15:08 +0300, Hannu Krosing wrote:
* how will the buffers keep 2 different versions of the same page ?
As the FS snapshot is mounted as a different directory, it will have
its own buffer pages.
Lack of knowledge about this shows my ignorance about the implementation
of