Hello all,
Some time back I posted a query to build a site with 150GB of database. In last
couple of weeks, lots of things were tested at my place and there are some
results and again some concerns.
This is a long post. Please be patient and read through. If we win this, I guess we
have a good
On 26 Sep 2002 at 14:05, Shridhar Daithankar wrote:
Some time back I posted a query to build a site with 150GB of database. In last
couple of weeks, lots of things were tested at my place and there are some
results and again some concerns.
2) Creating index takes huge amount of time.
I'll preface this by saying that while I have a large database, it doesn't
require quite the performance you're talking about here.
On Thu, Sep 26, 2002 at 02:05:44PM +0530, Shridhar Daithankar wrote:
1) Database load time from flat file using copy is very high
2) Creating index takes huge
On 26 Sep 2002 at 10:51, [EMAIL PROTECTED] wrote:
Hi,
it seems you have to cluster it; I don't think you have another choice.
Hmm.. That didn't occur to me... I guess some real-time clustering like usogres
would do. Unless it turns out to be a performance hog..
But this is just insert
Hello,
I've mailed this to the bugs list but that seems to have stopped working on
the 24th so I thought I'd better email it through on here.
I have found it is possible for a user with create table permission to crash
the 7.3b1 backend. The crash occurs because it is possible to have a table
Tom Lane wrote:
Michael Paesold [EMAIL PROTECTED] writes:
Only vacuum will reset the insert times to the lowest possible!
What does the vacuum code do?? :-]
It removes dead tuples. Dead tuples can only arise from update or
delete operations ... so you have not been telling us the whole
On 26 Sep 2002 at 12:28, Michael Paesold wrote:
But why is performance degrading so much? After 1 updates
on a row, the row seems to be unusable without vacuum! I hope
the currently discussed autovacuum daemon will help in such a
situation.
Let me know if it works. Use CVS BTW.. I am
Hi Shridhar,
Shridhar Daithankar wrote:
snip
3) Any suggestions for runtime as data load and query will be going in
parallel.
That sounds unusual. From reading this, it *sounds* like you'll be
running queries against an incomplete dataset, or maybe just running the
queries that affect the
On 26 Sep 2002 at 14:05, Shridhar Daithankar wrote:
Some time back I posted a query to build a site with 150GB of database. In
last
couple of weeks, lots of things were tested at my place and there are some
results and again some concerns.
2) Creating index takes huge amount of time.
On 26 Sep 2002 at 11:17, Mario Weilguni wrote:
On 26 Sep 2002 at 14:05, Shridhar Daithankar wrote:
Some time back I posted a query to build a site with 150GB of database. In
last
couple of weeks, lots of things were tested at my place and there are some
results and again some concerns.
On 26 Sep 2002 at 19:17, Justin Clift wrote:
Shridhar Daithankar wrote:
snip
3) Any suggestions for runtime as data load and query will be going in
parallel.
That sounds unusual. From reading this, it *sounds* like you'll be
running queries against an incomplete dataset, or maybe just
On 26 Sep 2002 at 19:05, Martijn van Oosterhout wrote:
On Thu, Sep 26, 2002 at 02:05:44PM +0530, Shridhar Daithankar wrote:
1) Database load time from flat file using copy is very high
2) Creating index takes huge amount of time.
3) Any suggestions for runtime as data load and query will
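For anyone reproducing this, the load-then-index pattern under discussion looks roughly like the following sketch; the table name and file path are hypothetical, and the column list is taken from the index mentioned later in the thread:

```sql
-- Bulk load from a flat file; COPY skips per-row parse/plan overhead,
-- so it is much faster than a stream of INSERTs.
COPY tablename FROM '/data/flatfile.txt' WITH DELIMITER '|';

-- Build indexes after the load finishes; index maintenance during
-- the load is a large part of what makes an indexed bulk load slow.
CREATE INDEX index1 ON tablename (esn, min, datetime);
```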
Shridhar Daithankar wrote:
snip
fsync=true (Sad but true. Left untouched.. Will that make a difference on
SCSI?)
Definitely. Have directly measured a ~ 2x tps throughput increase on
FreeBSD when leaving fsync off whilst performance measuring stuff
recently (PG 7.2.2). Like
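For completeness, the knob being discussed is a one-line postgresql.conf setting. With fsync off, a crash can leave the database unrecoverable, so this is only defensible for disposable benchmark runs:

```
# postgresql.conf -- benchmark runs only; a crash with fsync
# disabled can corrupt the database beyond recovery
fsync = false
```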
Well the test runs were for 10GB of data. Schema is attached. Read in fixed
fonts.. Last nullable fields are dummies but may be used in future, and varchars
are not acceptable (not my requirement). Tuple size is around 100 bytes..
The index creation query was
CREATE INDEX index1 ON tablename
On Thu, Sep 26, 2002 at 03:01:35PM +0530, Shridhar Daithankar wrote:
The index creation query was
CREATE INDEX index1 ON tablename (esn,min,datetime);
What if I put datetime ahead? It's likely that the datetime field will have a high
degree of locality
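A multi-column btree index is usable by queries that constrain a leading prefix of its columns, which is why the column order question matters. A sketch using the index from above:

```sql
-- As declared: serves WHERE esn = ?, WHERE esn = ? AND min = ?,
-- and all three columns together.
CREATE INDEX index1 ON tablename (esn, min, datetime);

-- With datetime first, the index would favour datetime range scans,
-- but lookups on esn/min alone could no longer use it:
-- CREATE INDEX index1 ON tablename (datetime, esn, min);
```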
Shridhar Daithankar wrote:
On 26 Sep 2002 at 19:05, Martijn van Oosterhout wrote:
snip
fsync IIRC only affects the WAL buffers now but it may be quite expensive,
especially considering it's running on every transaction commit. Oh, your
WAL files are on a separate disk from the data?
What could you recommend? Locking the table and selecting
max(invoice_id) wouldn't really be much faster, with max(invoice_id)
not using an index...
select invoice_id from table order by invoice_id desc limit 1;
should get you the maximum fast if you have a unique index on invoice_id.
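Spelled out (table and index names are hypothetical), the workaround relies on the planner turning ORDER BY ... DESC LIMIT 1 into a single backwards index scan, which max() at the time could not do:

```sql
CREATE UNIQUE INDEX invoice_pk ON invoice (invoice_id);

-- The planner reads one entry from the end of the index instead of
-- scanning the whole table as SELECT max(invoice_id) would.
SELECT invoice_id FROM invoice ORDER BY invoice_id DESC LIMIT 1;
```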
Curt Sampson wrote:
On Wed, 25 Sep 2002, Jan Wieck wrote:
With the number of screws our product has, there are so many
possible combinations that don't work, why worry about one more
or less?
That's just silly, so I won't even bother replying.
Curt,
it might sound silly on first
Michael Paesold [EMAIL PROTECTED] writes:
So inserting an invoice would also do an update on a single row
of the cs_sequence table, which cause the problems.
Now, with a normal sequence, it works like a charm.
17 sec. for 1 rows and 2-3 sec. for commit.
But why is performance so much
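The contrast being described, sketched with hypothetical names: the counter table turns every insert into an UPDATE that leaves a dead tuple behind until VACUUM, while a native sequence does not:

```sql
-- Counter-table approach: one dead tuple per invoice inserted,
-- and the row degrades until the next VACUUM.
UPDATE cs_sequence SET last_value = last_value + 1 WHERE name = 'invoice';

-- Native sequence: no dead tuples, no vacuum pressure.
CREATE SEQUENCE invoice_seq;
INSERT INTO invoice (invoice_id, amount)
VALUES (nextval('invoice_seq'), 100);
```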
Zeugswetter Andreas SB SD [EMAIL PROTECTED] wrote:
What could you recommend? Locking the table and selecting
max(invoice_id) wouldn't really be much faster, with max(invoice_id)
not using an index...
select invoice_id from table order by invoice_id desc limit 1;
should get you the
On Thu, 26 Sep 2002, Jan Wieck wrote:
But still, why set up a situation where your database might not
start? Why not set it up so that if you get just *one* environment
or command-line variable right, you can't set another inconsistently
and screw up your start anyway? Why store
Shridhar Daithankar [EMAIL PROTECTED] writes:
On 26 Sep 2002 at 11:50, Mario Weilguni wrote:
Just an idea, I noticed you use char(10) for esn and min, and use this as
index. Are these really fixed len fields all having 10 bytes?
10 bytes. Those are id numbers.. like phone numbers always have
On Sat, 07 Sep 2002 10:21:21 -0700
Joe Conway [EMAIL PROTECTED] wrote:
I just sent in a patch using the ancestor check method. It turned out
that the performance hit was pretty small on a moderate sized tree.
My test case was a 22 record bill-of-material table. The tree built
was 9
On Fri, 27 Sep 2002 02:02:49 +0900
I [EMAIL PROTECTED] wrote:
On Sat, 07 Sep 2002 10:21:21 -0700
Joe Conway [EMAIL PROTECTED] wrote:
I just sent in a patch using the ancestor check method. It turned out
that the performance hit was pretty small on a moderate sized tree.
My
fixing a problem with the aliases, just want to make sure it goes through
properly ...
I think I've identified a primary cause for the no such pg_clog file
problem that we've seen reported several times.
A look at htup.h shows that the WAL only stores the low 8 bits of a
tuple's t_infomask (see xl_heap_header struct). There is some fooling
around in heapam.c's WAL redo routines
I have seen various benchmarks where XFS seems to perform best when it
comes to huge amounts of data and many files (due to balanced internal
b+ trees).
also, XFS seems to be VERY mature and very stable.
ext2/3 don't seem to be that fast in most of the benchmarks.
i did some testing with
Patrick Welche's recent problems (see pgsql-general) point out that the
old CREATE CONSTRAINT TRIGGER syntax that optionally omits a FROM
table clause doesn't work anymore --- the system *needs* tgconstrrelid
to be set in an RI constraint trigger record, because the RI triggers
now use that OID
I tend to agree with this though I have nothing to back it up with. My
impression is that XFS does very well for large files. Accepting that
as fact, my impression is that XFS historically does well for
databases. Again, I have nothing to back that up other than hearsay
and conjecture.
On Thu, 2002-09-26 at 16:57, Tom Lane wrote:
This means that RI trigger definitions dating back to 7.0 (or whenever
it was that we fixed the pg_dump bug about not dumping tgconstrrelid)
don't work anymore.
I thought 7.0 introduced foreign keys in the first place, so perhaps
7.1?
However, if
On Thu, 2002-09-26 at 16:57, Tom Lane wrote:
snip
There are a couple things I think we should do. One: modify the CREATE
CONSTRAINT TRIGGER code to try to extract a foreign relation name from
the tgargs if FROM is missing. Without this, we have no hope of loading
working FK trigger
Rod Taylor [EMAIL PROTECTED] writes:
However, if they're coming from 7.0 or earlier would it be appropriate
to have them bounce through 7.2 / 7.1 first?
Won't help. 7.2 will dump 'em out without a FROM clause, just like they
were loaded.
Is this the only problem that 7.0 people are going to
Robert Treat [EMAIL PROTECTED] writes:
I'm trying to think of the cases where this extraction might fail, but
maybe more important is what happens if it does fail?
Then you have broken RI triggers ... which is the problem now.
regards, tom lane
Is this the only problem that 7.0 people are going to experience (server
side, SQL changes are abundant)?
You're missing the point. Welche was upgrading *from 7.2*. But his
trigger definitions had a dump/reload history going back to 7.0.
Oh.. I certainly did.
--
Rod Taylor
Tom Lane wrote:
It might work to measure time since the start of the whole process, or
until the timeout target, rather than accumulating adjustments to the
remains count each time through. In other words something like
at start: targettime = time() + specified-timeout
each
On Thu, 2002-09-26 at 17:22, Tom Lane wrote:
Robert Treat [EMAIL PROTECTED] writes:
I'm trying to think of the cases where this extraction might fail, but
maybe more important is what happens if it does fail?
Then you have broken RI triggers ... which is the problem now.
Uh...yeah, I
On Thu, 26 Sep 2002, Tom Lane wrote:
Patrick Welche's recent problems (see pgsql-general) point out that the
old CREATE CONSTRAINT TRIGGER syntax that optionally omits a FROM
table clause doesn't work anymore --- the system *needs* tgconstrrelid
to be set in an RI constraint trigger record,
Robert Treat [EMAIL PROTECTED] writes:
On Thu, 2002-09-26 at 17:22, Tom Lane wrote:
Robert Treat [EMAIL PROTECTED] writes:
I'm trying to think of the cases where this extraction might fail, but
maybe more important is what happens if it does fail?
Then you have broken RI triggers ...
Stephan Szabo [EMAIL PROTECTED] writes:
However, if we are going to put that kind of knowledge into pg_dump,
it would only be a small further step to have it dump these triggers
as ALTER TABLE ADD CONSTRAINT commands instead. Which would be a lot
better for forward compatibility than dumping
On Thu, 26 Sep 2002, Tom Lane wrote:
Stephan Szabo [EMAIL PROTECTED] writes:
However, if we are going to put that kind of knowledge into pg_dump,
it would only be a small further step to have it dump these triggers
as ALTER TABLE ADD CONSTRAINT commands instead. Which would be a lot
Masaru Sugawara wrote:
The previous patch fixed an infinite recursion bug in
contrib/tablefunc/tablefunc.c:connectby. But, other unmanageable error
seems to occur even if a table has commonplace tree data(see below).
I would think the patch, ancestor check, should be
if
Tatsuo Ishii kirjutas N, 26.09.2002 kell 03:37:
The actual checking is done in INSERT/UPDATE/COPY. However, the
checking is currently very limited: every byte of a multibyte character
must be greater than 0x7f.
Where can I read about basic tech details of Unicode / Charset
Conversion / ...
configure somehow thinks it needs to #define _LARGE_FILES though, which
then clashes with pg_config.h's _LARGE_FILES. I think the test needs to
#include <unistd.h>.
_LARGE_FILES is defined because it's necessary to make off_t 64 bits. If
you disagree, please post compiler output.
Ah,
Bruce Momjian [EMAIL PROTECTED] writes:
diff -c -c -r1.294 postgres.c
*** src/backend/tcop/postgres.c    25 Sep 2002 20:31:40 -0000    1.294
--- src/backend/tcop/postgres.c    26 Sep 2002 05:15:41 -0000
***************
*** 1281,1288 ****
* -d 0 allows user to
shared libs on AIX need to be able to resolve all symbols at linkage time.
Those two symbols are in backend/utils/SUBSYS.o but not in the postgres
executable.
My guess is that they are eliminated by the linker? Do they need an extern
declaration ?
Further research proved that the
Shridhar Daithankar [EMAIL PROTECTED] writes:
RedHat7.2/PostgreSQL7.1.3
I'd suggest a newer release of Postgres ... 7.1.3 is pretty old ...
Create unique composite index on 2 char and a timestamp field: 25226 sec.
What do you mean by char exactly? If it's really char(N), how much
are you
On 26 Sep 2002 at 10:33, Tom Lane wrote:
Shridhar Daithankar [EMAIL PROTECTED] writes:
RedHat7.2/PostgreSQL7.1.3
I'd suggest a newer release of Postgres ... 7.1.3 is pretty old ...
I agree.. downloading 7.2.2 right away..
Create unique composite index on 2 char and a timestamp field:
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
Further research proved that the AIX linker eliminates functions on a per-C-file
basis if none of them is referenced elsewhere (declared extern or not).
Thus it eliminates the whole conv.c file from the postgres executable since
those
On 26 Sep 2002 at 10:42, Tom Lane wrote:
Justin Clift [EMAIL PROTECTED] writes:
If it's any help, when I was testing recently with WAL on a separate
drive, the WAL logs were doing more read/writes per second than the main
data drive.
... but way fewer seeks. For anything involving lots
On Thursday 26 September 2002 21:52, Shridhar Daithankar wrote:
I might have found the bottleneck, although by accident. Mysql was running
out of space while creating index. So my friend shut down mysql and tried
to move things by hand to create links. He noticed that even things like cp
Shridhar Daithankar wrote:
snip
My friend argues for ext2 to eliminate journalling overhead but I favour
reiserfs personally, having used it in pgbench with 10M rows on a paltry 20GB IDE
disk for 25 tps..
If it's any help, the setup I mentioned before with different disks for
the data and the WAL
On 27 Sep 2002 at 1:12, Justin Clift wrote:
Shridhar Daithankar wrote:
As a curiosity point, how predictable are the queries you're going to be
running on your database? They sound very simple and very predictable.
Mostly predictable selects. Not a domain expert on telecom so not very
On Thu, 2002-09-26 at 09:52, Shridhar Daithankar wrote:
My friend argues for ext2 to eliminate journalling overhead but I favour
reiserfs personally, having used it in pgbench with 10M rows on a paltry 20GB IDE
disk for 25 tps..
We will be attempting reiserfs and/or XFS if required. I know
Manfred Koizar wrote:
On Wed, 25 Sep 2002 21:40:03 -0400 (EDT), Bruce Momjian
[EMAIL PROTECTED] wrote:
Item 3 is the most controversial. Some say sum all tuple counts, i.e.
sum INSERT/UPDATE/DELETE. That just seems too messy to me. I think
summing only the matching tags has the highest
Shridhar Daithankar wrote:
I might have found the bottleneck, although by accident. Mysql was running out
of space while creating index. So my friend shut down mysql and tried to move
things by hand to create links. He noticed that even things like cp were
terribly slow and it hit us.. May
Greg Copeland [EMAIL PROTECTED] writes:
I'm not sure about reiserfs or ext3 but with XFS, you can create your
log on another disk. Also worth noting is that you can also configure
the size and number of log buffers. There are also some other
performance type enhancements you can fiddle
On Thu, 2002-09-26 at 11:41, Bruce Momjian wrote:
Shridhar Daithankar wrote:
I might have found the bottleneck, although by accident. Mysql was running out
of space while creating index. So my friend shut down mysql and tried to move
things by hand to create links. He noticed that even
If you are seeing very slow performance on a drive set, check dmesg to see
if you're getting SCSI bus errors or something similar. If your drives
aren't properly terminated then the performance will suffer a great deal.
Further research proved that the AIX linker eliminates functions on a per-C-file
basis if none of them is referenced elsewhere (declared extern or not).
Thus it eliminates the whole conv.c file from the postgres executable since
those functions are only used by the conversion shared
Greg Copeland wrote:
The paper does recommend ext3, but the differences between file systems
are very small. If you are seeing 'cp' as slow, I wonder if it may be
something more general, like poorly tuned hardware or something. You can
use 'dd' to throw some data around the file system
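A minimal version of that dd sanity check (file path and size are arbitrary; run it against the filesystem that seems slow):

```shell
# Write 16MB and read it back; if even this is slow, the problem is
# below the database (disk, controller, termination), not PostgreSQL.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=16
dd if=/tmp/ddtest.bin of=/dev/null bs=1M
rm -f /tmp/ddtest.bin
```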
[ Subject changed.]
Marc, please hold on announcing beta2 until we get this resolved. Thanks.
---
Tom Lane wrote:
I think I've identified a primary cause for the no such pg_clog file
problem that we've seen reported
Oops, I see beta2's on the web site with yesterday's date. Have they
been announced?
-rw-r--r-- 1 70 70 1072573 Sep 25 10:15 postgresql-test-7.3b2.tar.gz
---
Tom Lane wrote:
I think I've identified a primary cause
Bruce Momjian [EMAIL PROTECTED] writes:
Oops, I see beta2's on the web site with yesterday's date. Have they
been announced?
-rw-r--r-- 1 70 70 1072573 Sep 25 10:15 postgresql-test-7.3b2.tar.gz
No, but they've been out there for more than a day. I think it's too
late to retract beta2.
Neil Conway wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
The paper does recommend ext3, but the differences between file systems
are very small.
Well, I only did a very rough benchmark (a few runs of pgbench), but
the results I found were drastically different: ext2 was significantly
I am so glad that postgres now keeps track of relationships between rule,
views, functions, tables, etc. I've had to re-do all my creation and drop
scripts but this is definitely for the better.
During my testing of my scripts, I have come across this message:
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Oops, I see beta2's on the web site with yesterday's date. Have they
been announced?
-rw-r--r-- 1 70 70 1072573 Sep 25 10:15 postgresql-test-7.3b2.tar.gz
No, but they've been out there for more than a day. I think it's too
On Thu, 2002-09-26 at 16:46, Laurette Cisneros wrote:
I am so glad that postgres now keeps track of relationships between rule,
views, functions, tables, etc. I've had to re-do all my creation and drop
scripts but this is definitely for the better.
During my testing of my scripts, I have
I'll see if I can pare down my scripts (they are long) to reproduce this
more easily.
L.
On 26 Sep 2002, Rod Taylor wrote:
On Thu, 2002-09-26 at 16:46, Laurette Cisneros wrote:
I am so glad that postgres now keeps track of relationships between rule,
views, functions, tables, etc. I've had
Neil Conway wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
The paper does recommend ext3, but the differences between file systems
are very small.
Well, I only did a very rough benchmark (a few runs of pgbench), but
the results I found were drastically different: ext2 was significantly
Bruce Momjian [EMAIL PROTECTED] writes:
Wow. That leaves no good Linux file system alternatives.
PostgreSQL just wants an ordinary file system that has reliable
recovery from a crash.
I'm not really familiar with the reasoning behind ext2's reputation as
recovering poorly from crashes; if we
Neil Conway wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Wow. That leaves no good Linux file system alternatives.
PostgreSQL just wants an ordinary file system that has reliable
recovery from a crash.
I'm not really familiar with the reasoning behind ext2's reputation as
recovering
On Thu, 2002-09-26 at 16:03, Neil Conway wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Wow. That leaves no good Linux file system alternatives.
PostgreSQL just wants an ordinary file system that has reliable
recovery from a crash.
I'm not really familiar with the reasoning behind
Greg Copeland [EMAIL PROTECTED] writes:
On Thu, 2002-09-26 at 16:03, Neil Conway wrote:
I'm not really familiar with the reasoning behind ext2's
reputation as recovering poorly from crashes; if we fsync a WAL
record to disk before we lose power, can't we recover reliably,
even with ext2?
Bruce Momjian [EMAIL PROTECTED] writes:
Can anyone clarify if data=writeback is safe for PostgreSQL.
Specifically, are the data files recovered properly or is this option
only for a filesystem containing WAL?
data=writeback means that no data is journaled, just metadata (which
is like XFS
Neil Conway [EMAIL PROTECTED] writes:
I'm not really familiar with the reasoning behind ext2's reputation as
recovering poorly from crashes; if we fsync a WAL record to disk
before we lose power, can't we recover reliably, even with ext2?
Up to a point. We do assume that the filesystem won't
Tom Lane [EMAIL PROTECTED] writes:
We'd be happiest with a filesystem that journals its own metadata and
not the user data in the file(s). I dunno if there are any.
ext3 with data=writeback? (See my previous message to Bruce).
-Doug
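For concreteness, data=writeback is a mount option; an /etc/fstab entry might look like this (device and mount point are examples only):

```
# journal metadata only; file data integrity is left to the
# application -- which is what PostgreSQL's own WAL provides
/dev/sda3  /var/lib/pgsql  ext3  defaults,data=writeback  1 2
```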
Neil Conway wrote:
Greg Copeland [EMAIL PROTECTED] writes:
On Thu, 2002-09-26 at 16:03, Neil Conway wrote:
I'm not really familiar with the reasoning behind ext2's
reputation as recovering poorly from crashes; if we fsync a WAL
record to disk before we lose power, can't we recover
Doug McNaught wrote:
Tom Lane [EMAIL PROTECTED] writes:
We'd be happiest with a filesystem that journals its own metadata and
not the user data in the file(s). I dunno if there are any.
ext3 with data=writeback? (See my previous message to Bruce).
OK, so that makes ext3 crash safe
On Thu, 2002-09-26 at 17:39, Bruce Momjian wrote:
Neil Conway wrote:
Greg Copeland [EMAIL PROTECTED] writes:
On Thu, 2002-09-26 at 16:03, Neil Conway wrote:
I'm not really familiar with the reasoning behind ext2's
reputation as recovering poorly from crashes; if we fsync a WAL
Rod Taylor wrote:
Yes, before UFS had soft updates, the synchronous nature of UFS made it
slower than ext2, but now with soft updates, that performance difference
is gone so you have two file systems, ext2 and ufs, similar performance,
but one is crash-safe and the other is not.
Note
On Thu, 2002-09-26 at 17:47, Bruce Momjian wrote:
Rod Taylor wrote:
Yes, before UFS had soft updates, the synchronous nature of UFS made it
slower than ext2, but now with soft updates, that performance difference
is gone so you have two file systems, ext2 and ufs, similar performance,
Rod Taylor wrote:
On Thu, 2002-09-26 at 17:47, Bruce Momjian wrote:
Rod Taylor wrote:
Yes, before UFS had soft updates, the synchronous nature of UFS made it
slower than ext2, but now with soft updates, that performance difference
is gone so you have two file systems, ext2 and
Bruce Momjian writes:
To summarize, with non-INSTEAD, we get the tag, oid, and tuple count of
the original query. Everyone agrees on that.
For non-INSTEAD, we have:
[I think this is the INSTEAD part.]
1) return original tag
2) return oid if all inserts in the rule insert
Zeugswetter Andreas SB SD writes:
The problem is, that scan.c includes unistd.h before postgres.h
and thus unistd.h defines _LARGE_FILE_API which is not allowed
together with _LARGE_FILES. Do you know an answer ?
Offhand I can only think of using -D_LARGE_FILES as a compiler flag :-(
That
Peter Eisentraut wrote:
Bruce Momjian writes:
To summarize, with non-INSTEAD, we get the tag, oid, and tuple count of
the original query. Everyone agrees on that.
For non-INSTEAD, we have:
[I think this is the INSTEAD part.]
Sorry, yes.
1) return original tag
2)
Bruce Momjian [EMAIL PROTECTED] writes:
Doug McNaught wrote:
Tom Lane [EMAIL PROTECTED] writes:
We'd be happiest with a filesystem that journals its own metadata and
not the user data in the file(s). I dunno if there are any.
ext3 with data=writeback? (See my previous message
the following was sent to the php developer's list, and they came back with:
Isn't it generally better (where better means more secure,
efficient, and easily maintained) to handle database access
control using PostgreSQL's native access mappings?
I would think so, and IMHO, that's where
... And maybe I should attach the code as well :)
I'm not subscribed to pgsql-jdbc or pgsql-announce, so please CC me if your
responding...
Magnus
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Programmer/Networker [|] Magnus Naeslund
On Thu, 26 Sep 2002, Jim Mercer wrote:
the following was sent to the php developer's list, and they came back with:
Isn't it generally better (where better means more secure,
efficient, and easily maintained) to handle database access
control using PostgreSQL's native access
On Fri, Sep 27, 2002 at 11:15:35AM +1000, Gavin Sherry wrote:
On Thu, 26 Sep 2002, Jim Mercer wrote:
I would think so, and IMHO, that's where pgsql access control
belongs, with pgsql.
I totally disagree. It is a language level restriction, not a database
level one, so why bake it into
On Thu, 26 Sep 2002, Jim Mercer wrote:
On Fri, Sep 27, 2002 at 11:15:35AM +1000, Gavin Sherry wrote:
On Thu, 26 Sep 2002, Jim Mercer wrote:
I would think so, and IMHO, that's where pgsql access control
belongs, with pgsql.
I totally disagree. It is a language level restriction,
On Fri, Sep 27, 2002 at 12:06:43PM +1000, Gavin Sherry wrote:
On Thu, 26 Sep 2002, Jim Mercer wrote:
maybe not _totally_ secure, but much moreso than nothing.
I was basically just suggesting that its effect needs to be
documented. This needs to be used in conjunction with other forms of
Doug McNaught [EMAIL PROTECTED] writes:
data=writeback means that no data is journaled, just metadata (which
is like XFS or Reiser). An fsync() call should still do what it
normally does, commit the writes to disk before returning.
data=journal journals all data and is the slowest and
Jim Mercer [EMAIL PROTECTED] writes:
as best I can understand, there is no way to get apache/php/pgsql configured
(using PostgreSQL's native access mappings) that would disallow php code
in one virtual host from connecting to any database on the system.
Betraying my ignorance of PHP here: what
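On the database side, per-database access is controlled through pg_hba.conf. A sketch in later (7.3-era and newer) syntax, with hypothetical database and user names, giving each virtual host a role that can reach only its own database:

```
# pg_hba.conf -- each vhost's user may connect only to its own database
# TYPE  DATABASE   USER         ADDRESS        METHOD
host    vhost1db   vhost1user   127.0.0.1/32   md5
host    vhost2db   vhost2user   127.0.0.1/32   md5
```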
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
diff -c -c -r1.294 postgres.c
*** src/backend/tcop/postgres.c    25 Sep 2002 20:31:40 -0000    1.294
--- src/backend/tcop/postgres.c    26 Sep 2002 05:15:41 -0000
***************
*** 1281,1288 ****
*
Where can I read about basic tech details of Unicode / Charset
Conversion / ...
I't like to find answers to the following (for database created using
UNICODE)
1. Where exactly are conversions between national charsets done
There is no national charset in PostgreSQL. I assume you want to know
On Thu, Sep 26, 2002 at 11:42:44PM -0400, Tom Lane wrote:
Jim Mercer [EMAIL PROTECTED] writes:
as best I can understand, there is no way to get apache/php/pgsql configured
(using PostgreSQL's native access mappings) that would disallow php code
in one virtual host from connecting to any
Hello!
On Thu, 26 Sep 2002, Bruce Momjian wrote:
I'm not really familiar with the reasoning behind ext2's reputation as
recovering poorly from crashes; if we fsync a WAL record to disk
On relatively big volumes ext2 recovery can end up in formatting the fs
under certain circumstances. ;-)