On Sun, Sep 28, 2008 at 09:24:48PM -0700, David E. Wheeler wrote:
On Sep 28, 2008, at 17:46, Tom Lane wrote:
BTW, I think it is (or should be) possible to create an index on
hstore->'mycol', so at least one of the reasons why you should *need* to
Hi,
Stephen Frost wrote:
* Markus Wanner ([EMAIL PROTECTED]) wrote:
What does the subobject column for pg_shdepend buy us?
Tracking column-level ACL dependencies rather than having those
dependencies only be at the table-level. This complicates
pg_shdepend some, but simplifies the
2008/9/29 Tom Lane [EMAIL PROTECTED]:
Greg Stark [EMAIL PROTECTED] writes:
On 27 Sep 2008, at 09:56 PM, Tom Lane [EMAIL PROTECTED] wrote:
ISTM that the fundamental problem is that plpgsql doesn't distinguish
properly between a null row value (eg, null::somerowtype) and a
row of null values
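The distinction Tom describes can be sketched outside plpgsql; a minimal Python model, treating a composite row as a tuple (the names and helpers here are illustrative, not plpgsql internals):

```python
# Sketch (not plpgsql): a "null row" is the row value itself being null,
# while a "row of null values" is a real composite whose fields are all
# null. plpgsql historically conflated the two.
null_row = None              # analogous to null::somerowtype
row_of_nulls = (None, None)  # analogous to ROW(NULL, NULL)

def is_null_row(r):
    """True only when the row value itself is null."""
    return r is None

def is_row_of_nulls(r):
    """True for a non-null row whose every field is null."""
    return r is not None and all(f is None for f in r)

print(is_null_row(null_row), is_null_row(row_of_nulls))          # True False
print(is_row_of_nulls(null_row), is_row_of_nulls(row_of_nulls))  # False True
```

A language that collapses the two cannot tell whether `r IS NULL` should be true for `row_of_nulls`, which is the ambiguity at issue.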
* Markus Wanner ([EMAIL PROTECTED]) wrote:
Stephen Frost wrote:
* Markus Wanner ([EMAIL PROTECTED]) wrote:
What does the subobject column for pg_shdepend buy us?
Tracking column-level ACL dependencies rather than having those
dependencies only be at the table-level. This complicates
Hi,
Stephen Frost wrote:
As part of pg_attribute. Having a separate table would be an
alternative to adding a column to pg_shdepend.
Aha. Hm...
I thought tracking dependencies between tables and attributes
complicates DROP TABLE? Why doesn't that concern apply here?
And why do we keep the
What you're talking about is a document-based database like
StrokeDB or CouchDB. With hstore you don't need to parse the content of
the 'aggregate' column; it provides the necessary methods. Also, we tried
to speed up selects using indexes. Probably we need to renew our
interest in hstore. Do you have
Andrew Dunstan wrote:
Andrew Dunstan wrote:
This works better, but there is something fishy still: using the same
dump file I get a proper restore using pg_restore normally. If I
however use -m for a parallel one, I only get parts (in this case only
243 of the 709 tables) of the
Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
pg_restore: [archiver (db)] could not execute query: ERROR: deadlock
detected
DETAIL: Process 18100 waits for AccessExclusiveLock on relation
1460818342 of database 1460815284; blocked by process 18103.
Process 18103 waits for
Markus Wanner [EMAIL PROTECTED] writes:
And why do we keep the attribute defaults in their own table with their
own OID, instead of merging them into pg_attribute?
That has already been explained multiple times in this thread, but: the
default expression is a separate entity from the
On Sun, 2008-09-28 at 21:16 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
It does nothing AFAICS for the
problem that when restarting archive recovery from a restartpoint,
it's not clear when it is safe to start letting in backends. You need
to get past the highest LSN
Tom Lane wrote:
Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
pg_restore: [archiver (db)] could not execute query: ERROR: deadlock
detected
DETAIL: Process 18100 waits for AccessExclusiveLock on relation
1460818342 of database 1460815284; blocked by process 18103.
Process 18103 waits
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
Hmm, I'll bet the restore code doesn't realize that this can't run in
parallel with index creation on either table ...
Yeah. Of course, it's never been necessary to bother with stuff like that till now.
The very simple fix is probably to run
Abbas napsal(a):
Hi,
I have gone through the following stuff
1) previous emails on the patch
2) http://wiki.postgresql.org/wiki/In-place_upgrade
3) http://www.pgcon.org/2008/schedule/attachments/57_pg_upgrade_2008.pdf
4) http://wiki.postgresql.org/wiki/In-place_upgrade:Storage
Here is what I
Simon Riggs [EMAIL PROTECTED] writes:
... That kinda works, but the problem is that restartpoints are time based,
not log based. We need them to be deterministic for us to rely upon them
in the above way.
Right, but the performance disadvantages of making them strictly
log-distance-based are
Hi,
thank you for your patience in explaining. Rest assured that I've read
the relevant messages multiple times.
Tom Lane wrote:
the
default expression is a separate entity from the attribute itself,
That was the point I didn't understand...
..
Otherwise we couldn't handle the concept that
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
Hmm, I'll bet the restore code doesn't realize that this can't run in
parallel with index creation on either table ...
Yeah. Of course, it's never been necessary to bother with stuff like that till now.
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
Um, FKs could conflict with each other too, so that by itself isn't
gonna fix anything.
Good point. Looks like we'll need to make a list of can't run in
parallel with items as well as strict dependencies.
Yeah, I was just thinking
Tom Lane wrote:
FreeSpaceMapTruncateRel seems to have a bug in its early-exit test: in the
case where the number of FSM blocks stays the same, it fails to zero out slots
in the last block. I also think it's got an off-by-one problem in figuring
the number of FSM blocks: for the normal case
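The two pitfalls Tom points at can be sketched in plain Python (this is not PostgreSQL's actual FSM code; SLOTS_PER_BLOCK and the function names are made up for illustration):

```python
# Sketch, not PostgreSQL source: computing how many fixed-size map blocks
# are needed, and zeroing the tail of the last block on truncation.
SLOTS_PER_BLOCK = 4096  # illustrative capacity, not the real constant

def fsm_blocks_needed(nslots):
    # Ceiling division; plain truncating division (nslots // SLOTS_PER_BLOCK)
    # undercounts whenever nslots is not an exact multiple of the capacity,
    # which is the shape of off-by-one bug described above.
    return (nslots + SLOTS_PER_BLOCK - 1) // SLOTS_PER_BLOCK

def truncate_map(blocks, new_nslots):
    # Drop whole blocks past the new end, then zero the tail slots of the
    # block that now holds the last slot. An early-exit test that returns
    # as soon as the block count is unchanged skips this zeroing step.
    nblocks = fsm_blocks_needed(new_nslots)
    blocks = blocks[:nblocks]
    if blocks:
        tail_start = new_nslots - (nblocks - 1) * SLOTS_PER_BLOCK
        for i in range(tail_start, SLOTS_PER_BLOCK):
            blocks[-1][i] = 0
    return blocks
```

Truncating from 8192 slots to 5000 keeps both blocks, so a same-block-count early exit would leave slots 5000..8191 stale.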
Heikki Linnakangas [EMAIL PROTECTED] writes:
Tom Lane wrote:
In fsm_rebuild_page, surely we needn't check if (lchild < NodesPerPage).
Yes, we do.
But the loop starting point is such that you must be visiting a parent
with at least one child, no?
reveals a rather fundamental problem: it is
On Mon, 2008-09-29 at 08:46 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
... That kinda works, but the problem is that restartpoints are time based,
not log based. We need them to be deterministic for us to rely upon them
in the above way.
Right, but the performance
Le lundi 29 septembre 2008, Tom Lane a écrit :
* Extend the archive format to provide some indication that restoring
this object requires exclusive access to these dependencies.
* Hardwire knowledge into pg_restore that certain types of objects
require exclusive access to their dependencies.
Simon Riggs [EMAIL PROTECTED] writes:
I think we can get away with writing the LSN value to disk, as you
suggested, but only every so often. No need to do it after every WAL
record, just consistently every so often, so it gives us a point at
which we know we are safe.
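Simon's every-so-often idea can be sketched outside PostgreSQL (plain Python, with integers standing in for LSNs; the interval and names are illustrative, not the real implementation):

```python
# Sketch: persist the highest replayed position only once per `interval`
# records. On restart, recovery must replay past the last persisted value
# before letting backends in; it may overshoot true progress, but never by
# more than one interval's worth of records.
def replay(records, interval=100):
    persisted = 0  # stands in for the value last flushed to disk
    for i, lsn in enumerate(records, 1):
        # ... apply WAL record `lsn` here ...
        if i % interval == 0:
            persisted = lsn  # would be an fsync'd write in a real system
    return persisted
```

The trade-off is write frequency versus how far past the persisted point a restart must re-run before it is known safe.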
Huh? How does that make
Is it possible to have a FATAL error that crashes a backend and for it
to *not* have written an abort WAL record for any previously active
transaction?
I think yes, but haven't managed to create this situation while testing
for it. If we either *always* write a WAL record, or PANIC then that
Dimitri Fontaine wrote:
Le lundi 29 septembre 2008, Tom Lane a écrit :
* Extend the archive format to provide some indication that restoring
this object requires exclusive access to these dependencies.
* Hardwire knowledge into pg_restore that certain types of objects
require exclusive
Simon Riggs [EMAIL PROTECTED] writes:
Is it possible to have a FATAL error that crashes a backend and for it
to *not* have written an abort WAL record for any previously active
transaction?
Well, a FATAL error will still go through transaction abort before
exiting, IIRC. The problem case is
On Mon, 2008-09-29 at 10:30 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Is it possible to have a FATAL error that crashes a backend and for it
to *not* have written an abort WAL record for any previously active
transaction?
Well, a FATAL error will still go through
On Mon, 2008-09-29 at 10:13 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I think we can get away with writing the LSN value to disk, as you
suggested, but only every so often. No need to do it after every WAL
record, just consistently every so often, so it gives us a point
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-09-29 at 10:30 -0400, Tom Lane wrote:
Like what?
For constructing snapshots during standby. I need a data structure where
emulated-as-running transactions can live. If backend birth/death is
intimately tied to WAL visible events then I can
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-09-29 at 10:13 -0400, Tom Lane wrote:
... If we crash and restart, we'll have to get to the end
of this file before we start letting backends in; which might be further
than we actually got before the crash, but not too much further because
On Mon, 2008-09-29 at 11:18 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-09-29 at 10:30 -0400, Tom Lane wrote:
Like what?
For constructing snapshots during standby. I need a data structure where
emulated-as-running transactions can live. If backend
On Mon, 2008-09-29 at 11:24 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-09-29 at 10:13 -0400, Tom Lane wrote:
... If we crash and restart, we'll have to get to the end
of this file before we start letting backends in; which might be further
than we actually
Currently, the CTE patch assumes (without checking) that the output
rowtype of a recursive WITH item is the same as the output rowtype
determined by inspecting its non-recursive term. Unfortunately
this is not always the case. Consider
WITH RECURSIVE q AS (
SELECT
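The quoted SQL example is cut off above, but the hazard can be sketched in plain Python, treating rows as tuples: the "rowtype" is inferred from the seed (the non-recursive term), and the recursive step must keep producing rows of that shape. The names and the explicit shape check below are illustrative, not the actual CTE patch:

```python
# Sketch of recursive-CTE-style fixpoint evaluation. Seed rows come from
# the non-recursive term; `step` plays the recursive term and is applied
# until no new rows appear. A step that yields differently-shaped rows
# violates the rowtype assumed from the seed, so we check for it here.
def recursive_union(seed, step):
    result = list(seed)
    if not result:
        return result
    seen = set(result)
    frontier = list(seed)
    while frontier:
        new = []
        for row in frontier:
            for r in step(row):
                if type(r) is not type(result[0]) or len(r) != len(result[0]):
                    raise TypeError(
                        "recursive term rowtype differs from non-recursive term")
                if r not in seen:
                    seen.add(r)
                    new.append(r)
        result.extend(new)
        frontier = new
    return result

# counts 1..5, the classic recursive-CTE shape
rows = recursive_union([(1,)], lambda row: [(row[0] + 1,)] if row[0] < 5 else [])
```

Assuming the rowtype from the seed without re-checking it against the recursive term is exactly the unchecked assumption the message describes.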
On Sep 28, 2008, at 23:46, [EMAIL PROTECTED] wrote:
I'm not sure what that means. Can you create normal btree or hash
indexes
on hstore columns? And is the index useful for both `@` and `?`?
That means that those operations are supported by a GiST (or GIN)
index,
that is:
find the
Simon Riggs [EMAIL PROTECTED] writes:
* Might we make AbortTransaction critical just as far as the
END_CRIT_SECTION after XLogInsert in RecordTransactionAbort(), but no
further? Don't expect yes, but seems worth recording thoughts.
The problem is that pretty much everything that proc_exit runs
On Mon, 2008-09-29 at 12:14 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
* Might we make AbortTransaction critical just as far as the
END_CRIT_SECTION after XLogInsert in RecordTransactionAbort(), but no
further? Don't expect yes, but seems worth recording thoughts.
The
I ran a performance test on the in-place upgrade patch prototype which I
sent for review, and I got a nice result:
Original:
---------
MQThL (Maximum Qualified Throughput LIGHT): 2202.12 tpm
MQThM (Maximum Qualified Throughput MEDIUM): 4706.60 tpm
MQThH (Maximum Qualified Throughput HEAVY): 3956.64 tpm
KaiGai Kohei wrote:
As I repeated several times, SE-PostgreSQL applies the security policy
of SELinux to achieve consistency in access controls. This feature
makes it possible to restrict a client's privileges on accesses to database
objects, just as it does on accesses to filesystem objects. Its background is our
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
Um, FKs could conflict with each other too, so that by itself isn't
gonna fix anything.
Good point. Looks like we'll need to make a list of can't run in
parallel with items as well as strict dependencies.
Yeah, I was
Peter,
How important is this consistency goal in reality?
It's actually the primary point of SE-Linux. Its audience wants a
centralized policy manager which applies access policies to everything
on the network, regardless of whether it's a file, a port, or a database.
Oracle has not
Peter Eisentraut wrote:
The major purpose of this feature is to provide the most important
component for running enterprise-class web applications with a
least-privilege set that is consistent across the whole system.
How important is this consistency goal in reality? We typically
recommend
In the past, I had considered implementing polyinstantiated tables
as part of SE-PostgreSQL, but it would have required an unacceptable
scale of changes, so I dropped the idea.
The TrustedSolaris folks would like polyinstantiation, but I don't know
if they actually have anyone working on Postgres
Stefan Kaltenbrunner wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
Um, FKs could conflict with each other too, so that by itself isn't
gonna fix anything.
Good point. Looks like we'll need to make a list of can't run in
parallel with items as well as
Josh Berkus wrote:
In the past, I had considered implementing polyinstantiated tables
as part of SE-PostgreSQL, but it would have required an unacceptable
scale of changes, so I dropped the idea.
The TrustedSolaris folks would like polyinstantiation, but I don't know
if they actually have
Peter Eisentraut wrote:
KaiGai Kohei wrote:
As I repeated several times, SE-PostgreSQL applies the security policy
of SELinux to achieve consistency in access controls. This feature
makes it possible to restrict a client's privileges on accesses to database
objects, just as it does on accesses to filesystem
Bruce Momjian wrote:
I think there are two goals here. At the SQL-level, we will have
per-role row and column permissions (which seem valuable on their own),
and SE-PostgreSQL allows those permissions to be controlled at the
operating system level rather than at the database level.
Yes, it is
Bruce Momjian wrote:
Josh Berkus wrote:
In the past, I had considered implementing polyinstantiated tables
as part of SE-PostgreSQL, but it would have required an unacceptable
scale of changes, so I dropped the idea.
The TrustedSolaris folks would like polyinstantiation, but I don't know
if they
Have you seen the example?
http://kaigai.myhome.cx/index.php (id: foo/var/baz pass: sepgsql)
It means we can select one of foo, var, or baz, and they all have the same
password. I'm sorry if that was a confusing representation.
It
+ if (strcmp(te->desc, "CONSTRAINT") == 0 ||
+     strcmp(te->desc, "FK CONSTRAINT") == 0 ||
+     strcmp(te->desc, "CHECK CONSTRAINT") == 0 ||
+
Philip Warner wrote:
+ if (strcmp(te->desc, "CONSTRAINT") == 0 ||
+     strcmp(te->desc, "FK CONSTRAINT") == 0 ||
+     strcmp(te->desc, "CHECK CONSTRAINT") == 0 ||
+
Andrew Dunstan wrote:
Unfortunately, it quite possibly would. You would not be able to build
two indexes on the same table in parallel, even though they wouldn't
have conflicting locks.
I suppose so, but:
1. By the same logic it might speed things up; it might build two
completely separate
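The "can't run in parallel with" list the thread converges on can be sketched as a scheduling predicate (the item names and dictionary shapes below are hypothetical, not pg_restore's actual structures):

```python
# Sketch: a restore item is runnable only when (a) its strict dependencies
# are done, and (b) nothing currently running is on its mutual-exclusion
# list, e.g. an FK constraint excluding index builds on either table.
def schedulable(item, running, deps, excludes, done):
    dep_ok = all(d in done for d in deps.get(item, ()))
    excl_ok = all(r not in excludes.get(item, ()) and
                  item not in excludes.get(r, ())
                  for r in running)
    return dep_ok and excl_ok

# Hypothetical archive entries for illustration.
deps = {"fk_orders_customers": {"table_orders", "table_customers"}}
excludes = {"fk_orders_customers": {"idx_orders", "idx_customers"}}
```

With both tables restored, the FK is still held back while `idx_orders` runs, even though no lock-level dependency orders the two.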