This may be an already-reported bug; I am running 7.1beta4.
I mostly use numbers as user names, e.g. 1050, 1060, 1092.
Sometimes I get strange results when I try to reconnect:
tir=> \c - 1022
You are now connected as new user 1022.
tir=> select user;
 current_user
--------------
 1022
(1 row)
(It's OK.)
tir=> \c -
On Sun, Apr 29, 2001 at 08:17:28PM -0700, Alfred Perlstein wrote:
Sort of, if that flat file is in the form of:
123456;tablename
33;another_table
Or better yet, since the flat file is unlikely to be large, you could
just do this dance:
1) open file for
On Mon, Apr 30, 2001 at 07:12:16PM -0400, Tom Lane wrote:
There will surely be a 7.1.2. I vote against waiting for it.
Possibly, but one hopes 7.1.2 will be a few months away ...
Is there a chance for the %TYPE patch for PL/pgSQL to make it into
7.1.2?
-Roberto
--
I am reading an interesting discussion about fsync() and disk flush on
Slashdot. The discussion starts about log-based file systems, then
moves to disk fsync about 20% into the discussion.
Look for:
Real issue is HARD DRIVE CACHEs
All this discussion relates to WAL and our use of
I was talking to a Linux user yesterday, and he said that performance
using the xfs file system is pretty bad. He believes it has to do with
the fact that fsync() on log-based file systems requires more writes.
With a standard BSD/ext2 file system, WAL writes can stay on the same
cylinder to
Yes, I like that idea, but the problem is that it is hard to update just
one table in the file.
Why not have just one ever-growing file that is only appended to and
that has
lines of the form
OID, type (DB/TABLE/INDEX/...), name, time
so when you need the actual info you grep for
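The append-and-grep scheme proposed above can be sketched in a few lines of Python. This is a minimal illustration, not anything PostgreSQL actually does: the file name and field layout are my assumptions, with the `;` separator borrowed from the flat-file example earlier in the thread. Writes only ever append, so there is no need to update a single table's entry in place; a lookup scans for the OID and keeps the last match, so newer entries shadow older ones.

```python
# Sketch of an append-only OID map (illustrative, not PostgreSQL's
# actual on-disk format). Each change appends one "OID;type;name;time"
# line; existing lines are never rewritten.

def record(path, oid, kind, name, when):
    with open(path, "a") as f:          # append-only: no in-place updates
        f.write(f"{oid};{kind};{name};{when}\n")

def lookup(path, oid):
    # Equivalent of grepping for the OID in column 1, keeping the
    # newest (last) matching line.
    last = None
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split(";")
            if fields[0] == str(oid):
                last = fields
    return last
```

The trade-off is the one discussed above: appends are trivially safe and cheap, at the cost of the file growing without bound and lookups scanning the whole file.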
Bruce Momjian wrote:
I can even think of a situation, as unlikely as it can be, where this
could happen ... run out of inodes on the file system ... last inode used
by the table, no inode to stick the symlink onto ...
If you run out of inodes, you are going to have much bigger problems
The problem with log based filesystems is that they most likely
do not know the consequences of a write so an fsync on a file may
require double writing to both the log and the real portion of
the disk. They can also exhibit the problem that an fsync may
cause all pending writes to require
To avoid getting into states where a btree index is corrupt (or appears
that way), it is absolutely critical that the datatype provide a unique,
consistent sort order. In particular, the operators <, <=, =, >=, > had
better all agree with each other and with the 3-way-comparison support
function about
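The consistency requirement can be illustrated outside PostgreSQL. Below is a Python sketch of my own, not the actual float8 support function: a 3-way comparator that imposes an arbitrary but total order on floats by sorting NaN after every ordinary value and making two NaNs compare equal. Deriving the boolean operators from the one comparator guarantees they can never disagree with each other or with it, which is exactly the property a btree needs. (Raw IEEE comparison lacks it: `nan == nan` is false, and `nan < x` and `nan > x` are both false.)

```python
import math

def float8_cmp(a, b):
    # Total-order 3-way comparison: NaN sorts after every ordinary
    # value, and NaN equals NaN. The placement is arbitrary, but it is
    # consistent -- the point Tom is making above. (Illustrative
    # sketch only, not PostgreSQL's actual support function.)
    a_nan, b_nan = math.isnan(a), math.isnan(b)
    if a_nan and b_nan:
        return 0
    if a_nan:
        return 1       # NaN > everything else
    if b_nan:
        return -1
    return (a > b) - (a < b)

# Operators derived from the support function automatically agree
# with it, and hence with each other.
def lt(a, b): return float8_cmp(a, b) < 0
def eq(a, b): return float8_cmp(a, b) == 0
def gt(a, b): return float8_cmp(a, b) > 0
```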
The Hermit Hacker wrote:
On Mon, 30 Apr 2001, Tom Lane wrote:
The Hermit Hacker [EMAIL PROTECTED] writes:
Okay, maybe this query isn't quite as simple as I think it is, but does
this raise any flags for anyone? How did I get into a COPY? It appears
re-creatable, as I've done it
* Bruce Momjian [EMAIL PROTECTED] [010502 14:01] wrote:
I was talking to a Linux user yesterday, and he said that performance
using the xfs file system is pretty bad. He believes it has to do with
the fact that fsync() on log-based file systems requires more writes.
With a standard
I am unable to connect to the cvsup server. Are there problems?
.. Otto
Otto Hirr
OLAB Inc.
[EMAIL PROTECTED]
503 / 617-6595
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
I am planning to fix this by ensuring that all these operations agree
on an (arbitrarily chosen) sort order for the weird values of these
types. What I'm wondering about is whether to insert the fixes into
7.1.1 or wait for 7.2. In theory changing the sort order might break
existing user
I have been using PostgreSQL and XFS file systems on SGI's for many
years, and PostgreSQL is fast. Dumping and loading 100GB of table
files takes less than one day elapsed (provided there is no other
activity on that database -- large amounts of transactional activity
will slow things down). I
Kovacs Zoltan [EMAIL PROTECTED] writes:
tir=> \c - 1060
You are now connected as new user 1060.
tir=> select user;
 current_user
--------------
 1092
(1 row)
Is it possible that 1060 and 1092 have the same usesysid in pg_shadow?
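Tom's hypothesis is mechanically checkable: `\c - 1060` authenticates by name, but `current_user` is resolved back from the numeric usesysid, so if two pg_shadow rows share an id, the resolved name can differ from the one used to connect. A small Python sketch of the check, with invented rows (against a live server you would query pg_shadow itself):

```python
# Find usesysid values shared by more than one user name -- the
# situation Tom suspects, where connecting as "1060" reports
# current_user "1092". The rows below are hypothetical examples.
from collections import defaultdict

def duplicate_sysids(rows):
    by_id = defaultdict(list)
    for usename, usesysid in rows:
        by_id[usesysid].append(usename)
    return {sysid: names for sysid, names in by_id.items()
            if len(names) > 1}

pg_shadow = [("1050", 201), ("1060", 202), ("1092", 202)]  # made up
```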
regards, tom lane
On Wed, 2 May 2001, Tom Lane wrote:
Stephan Szabo [EMAIL PROTECTED] writes:
What parts of the changes would require an initdb, would new
functions need to be added or the index ops need to change or would
it be fixes to the existing functions (if the latter, wouldn't a recompile
and
[EMAIL PROTECTED] (Robert E. Bruccoleri) writes:
I have been using PostgreSQL and XFS file systems on SGI's for many
years, and PostgreSQL is fast. Dumping and loading 100GB of table
files takes less than one day elapsed (provided there is no other
activity on that database -- large amounts
* Bruce Momjian [EMAIL PROTECTED] [010502 15:20] wrote:
The problem with log based filesystems is that they most likely
do not know the consequences of a write so an fsync on a file may
require double writing to both the log and the real portion of
the disk. They can also exhibit the
Stephan Szabo [EMAIL PROTECTED] writes:
What parts of the changes would require an initdb, would new
functions need to be added or the index ops need to change or would
it be fixes to the existing functions (if the latter, wouldn't a recompile
and dropping/recreating the indexes be enough?)
[EMAIL PROTECTED] (Robert E. Bruccoleri) writes:
I have been using PostgreSQL and XFS file systems on SGI's for many
years, and PostgreSQL is fast. Dumping and loading 100GB of table
files takes less than one day elapsed (provided there is no other
activity on that database -- large
Bruce Momjian [EMAIL PROTECTED] writes:
Yes, the irony is that a journaling file system is being used to have
fast, reliable restore after crash bootup, but with no fsync, the db is
probably hosed.
It just struck me--is it necessarily true that we get the big
performance hit?
On a
Dear Bruce,
Yes, the irony is that a journaling file system is being used to have
fast, reliable restore after crash bootup, but with no fsync, the db is
probably hosed.
There is no irony in these cases. In my systems, which are used for
bioinformatics, the updating process is generally
Bruce Momjian [EMAIL PROTECTED] writes:
Yes, the irony is that a journaling file system is being used to have
fast, reliable restore after crash bootup, but with no fsync, the db is
probably hosed.
It just struck me--is it necessarily true that we get the big
performance hit?
On
Bruce Momjian [EMAIL PROTECTED] writes:
Comparing NaN/Invalid seems so off the beaten path that we would just
wait for 7.2. That and no one has reported a problem with it so far.
Do you consider vacuum analyze on the regression database to be
off the beaten path? How about creating an index
Hi,
There seems to be a minor bug related to permissions. If you create a
table and grant permissions on that table to someone else, you lose your
own permissions (note: do this as a non-dbadmin account):
testdb=> create table tester ( test int4 );
CREATE
testdb=> insert into tester
Chris Dunlop [EMAIL PROTECTED] writes:
There seems to be a minor bug related to permissions. If you create a
table and grant permissions on that table to someone else, you lose your
own permissions (note: do this as a non-dbadmin account):
This is fixed in 7.1.