[GENERAL] Re: [INTERFACES] threads and libpq

1999-03-11 Thread Tom Lane

"David O'Farrell" [EMAIL PROTECTED] writes:
 "The libpq library allows a single frontend to make multiple
 connections to backend processes.  However, the frontend application
 is still a single-threaded process."

 could someone clarify whether this means that a frontend CANNOT open
 several connections from different threads with the current
 implementation of libpq?

It just means that libpq doesn't somehow magically convert a
single-threaded application into a multi-threaded one just because you
opened more than one connection.

PQconnectdb() is not currently multithreadable, because it relies on a
global variable to hold connection options.  (Which is part of the API
for the routine, so I can't easily fix it without risking breaking
applications :-(.)  But as far as I know, all the rest of libpq is
multithreadable, by which I mean that different threads can be working
on different connection objects concurrently.

libpq does not know anything about threads, so it does not contain any
interlocking that would make it safe for more than one thread to touch a
given connection object or result object.  If you need that, you'll have
to provide a lock associated with each object at the application level.
(Realistically, you'd have to do that anyway.  Even if libpq contained
code to serialize access to a connection object, it could only interlock
on a function-call-by-function-call basis.  But almost certainly, each
thread will want to lock down the state of the connection object for a
series of operations.  For instance, you'd not want another thread to
issue a new PQexec till you'd got done examining the result of the
previous one.  libpq couldn't enforce that for you.)

If you need to make connections concurrently, just use PQsetdbLogin()
rather than PQconnectdb().

BTW this all assumes that you have a thread-safe libc; if malloc is
not thread-safe then all bets are off...

 Is there anyone working on multi threading for libpq ?

I don't really see a need for libpq to be thread-aware.  

        regards, tom lane



[GENERAL] Re: [INTERFACES] Problem using Having in a sub-query with the Count function.

1999-03-24 Thread Tom Lane

Matthew [EMAIL PROTECTED] writes:
 "Select ordnum from ordinace where dept='Finance' and ordnum in
 (Select ordnum from squareview where square='A') and ordnum in (select
 ordnum from keywordview where keyword='Parade' or keyword='Public
 Gathering' group by ordnum having count(ordnum) =2)"

I wonder whether the parser could be getting confused by the multiple
distinct uses of the same name "ordnum" in this query?  In other words,
maybe you'd have better luck if the inner queries read something like

select k.ordnum from keywordview k
where k.keyword='Parade' or k.keyword='Public Gathering'
group by k.ordnum having count(k.ordnum) = 2

Without that, it might be thinking that count(ordnum) refers to the
ordnum in the outer select.

If that is it, it's probably a bug, but I'm not sure what the standard
says about how to interpret ambiguous names in this context...

        regards, tom lane



[GENERAL] Re: [HACKERS] backend dies suddenly after a lot of error messages

1999-05-12 Thread Tom Lane

Mirko Kaffka [EMAIL PROTECTED] writes:
 We have problems with backend processes that close the channel because of
 palloc() failures. When an INSERT statement fails, the backend reports an
 error (e.g. `Cannot insert a duplicate key into a unique index') and
 allocates a few bytes more memory. The next SQL statement that fails
 causes the backend to allocate more memory again, etc. until we have no
 more virtual memory left. Is this a bug?

Yeah, I'd say so --- all the memory used should get freed at transaction
end, but evidently it isn't happening.

 We are using postgres 6.4.2 on FreeBSD 2.2.8.

I still see it with 6.5-current sources.  Will take a look.

regards, tom lane



[GENERAL] Re: [INTERFACES] large objects

1999-06-30 Thread Tom Lane

Lauri Posti [EMAIL PROTECTED] writes:
 I've been trying to get postgres LO interface to work with python.

I think you need begin/end transaction around lo_open ... lo_close
sequence.
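
That is, roughly (a sketch only; the actual large-object calls go in
between):

begin;
-- lo_open ... lo_read/lo_write ... lo_close
end;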

regards, tom lane



[GENERAL] Re: [SQL] Bad update performance?

1999-07-21 Thread Tom Lane

"Gunnar Ingvi Thorisson" [EMAIL PROTECTED] writes:
 I've a table with about 142000 rows as shown below, and I want to
 set field "divis" to "unknown" by executing the following update command:
 update ipacct set divis = 'unknown';
 However this seems to take hours,

Well, updating 142000 rows is going to take a little while...

Are you starting the postmaster with -o -F ?  That makes for a pretty
considerable speedup in most cases (at the cost of trouble if you
have a system crash during an update).

 Does indexing the field "divis" speed up the update performance?

No, it would not help a query like that --- though if you added a clause
like "where divis = 'oldvalue'" then an index would help to find the
rows that need to be updated.  Actually, every index you add *slows down*
updates, since all the indexes must be updated along with the table.
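
For illustration, a minimal sketch (the index name and the 'oldvalue'
literal are hypothetical):

create index ipacct_divis_idx on ipacct (divis);
update ipacct set divis = 'unknown' where divis = 'oldvalue';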

regards, tom lane



[GENERAL] Re: [HACKERS] inheritance

1999-07-21 Thread Tom Lane

Chris Bitmead [EMAIL PROTECTED] writes:
 To me this is a much better idea. In any proper OO application you would
 be using the "*" in postgres 99% of the time - that being the whole
 point of OO. Could some consideration be given to making the same
 change while there are not too many people using the inheritance feature?

What makes you think there's "not too many people" using inheritance?
Furthermore, if we did that it would break the code of people who
*didn't* think they were using inheritance, except as a means of
copying table definitions (which I do a lot, btw).

I don't think we can reverse the default on that at this late date.

 The other thing Informix does is automatically propagate all attributes
 including indexes, constraints, pretty much everything to sub-classes.
 Again.. I think this is the right thing. Any thoughts?

I'd be inclined to agree on that, or at least say that we ought to
provide a simple way of making it happen.  But the right semantics
are not always obvious.  For example, if the ancestor has a SERIAL
column, do the derived tables get their own sequence objects or
share the ancestor's?  Does your answer change if the serial column
was created "by hand" with a "DEFAULT nextval('some_sequence')" clause?
I suspect that any way we jump on this sort of question will be wrong
for some apps, so it should be possible to suppress system copying of
attributes...

    regards, tom lane



[GENERAL] Re: [SQL] bad select performance fixed by forbidding hash joins

1999-07-21 Thread Tom Lane

George Young [EMAIL PROTECTED] writes:
 Yes!  PGOPTIONS="-fh" made the query time go from 16 seconds to 2 seconds!
 Is this a safe thing to leave on permanently, or is there some way to set
 PGOPTIONS for just this query?

I wouldn't recommend leaving it on as a long-term solution, because
you're hobbling the system for cases where hashjoin *is* the best
method.  AFAIK there is not a SET VARIABLE method for enabling/disabling
plan types on-the-fly, though perhaps one should be added.

The right long-term solution is to figure out why the system is
misestimating the relative costs of the two plans, and fix the cost
estimates.  (The system is estimating that the mergejoin is about 4x
slower than hash; if it's really 8x faster, there is something pretty
broken about the estimate...)

I am interested in looking into this.  If your data is not proprietary,
perhaps you would be willing to send me a database dump so that I can
reproduce the problem exactly?  (If the dump is no more than a few
megabytes, emailing it should be OK.)  No big hurry, since I probably
won't be able to get to it for a week or so anyway.

    regards, tom lane



[GENERAL] Re: [HACKERS] getting at the actual int4 value of an abstime

1999-08-18 Thread Tom Lane

[EMAIL PROTECTED] (Jim Mercer) writes:
 [ concern about speed of converting datetime values to/from text for
   Postgres ]

FWIW, I used to be really concerned about that too, because my
applications do lots of storage and retrieval of datetimes.
Then one day I did some profiling, and found that the datetime
conversion code was down in the noise.  Now I don't worry so much.

It *would* be nice though if there were some reasonably cheap documented
conversions between datetime and a standard Unix time_t displayed as a
number.  Not so much because of speed, as because there are all kinds
of ways to get the conversion wrong on the client side --- messing up
the timezone and not coping with all the Postgres datestyles are two
easy ways to muff it.

BTW, I believe Thomas is threatening to replace all the datetime-like
types with what is currently called datetime (ie, a float8 measuring
seconds with epoch 1/1/2000), so relying on the internal representation
of abstime would be a bad idea...
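
For what it's worth, date_part() can already produce an epoch-style
number on the server side --- a sketch only, with hypothetical table
and column names:

select date_part('epoch', stamp) as unix_seconds from log;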

regards, tom lane



[GENERAL] Re: [HACKERS] BUG with UNIQUE clause

1999-09-15 Thread Tom Lane

"=?iso-8859-1?Q?St=E9phane_FILLON?=" [EMAIL PROTECTED] writes:
 The UNIQUE constraint doesn't work on a field if I use a DEFAULT clause
 on a table.

This sounds closely related to a fix that Thomas Lockhart just made.
IIRC the complained-of symptom was that PRIMARY KEY on one column plus
UNIQUE on another didn't work, but the real problem was that PRIMARY
KEY implies UNIQUE and the table declaration code was getting confused
by two different UNIQUE columns in one table.  It could be that his fix
addresses your problem too.  Check the pghackers archives for the
last couple weeks to find the patch.

    regards, tom lane





[GENERAL] Re: [HACKERS] Permission problem with COPY FROM

1999-09-15 Thread Tom Lane

Nuchanach Klinjun [EMAIL PROTECTED] writes:
   I've faced that problem too, so I use '\copy' instead of 'copy',
 because the 'copy' command asks for superuser privilege.
 For example:
 \copy '/your location/your filename' to tablename;

It's not that; the error message Stephane quotes is after the
Postgres superuser-privilege check:

 "ERROR: COPY command, running in backend with effective uid 501
 (that's Postgres), could not open file '/usr/local/.../cltclr001' for
 reading. Error: Permission not allowed (13)."

This is a result of the Unix kernel denying read access to the file.
It's got to be a matter of not having read rights on the file or not
having lookup (x) rights on one of the directories above it.

psql's \copy is often a better choice than the regular SQL COPY command,
though.  It reads or writes the file with the privileges of the user
running psql, rather than those of the Postgres server, which is usually
a Good Thing.  Also, if you are contacting a server on a different
machine, \copy works with files in the local filesystem, not the
server's filesystem.
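
A sketch of the difference (table and file names hypothetical):

-- server-side: the backend opens the file, so its permissions apply
copy mytable from '/path/readable/by/postgres/data.txt';
-- client-side: psql opens the file as the invoking user
\copy mytable from 'data.txt'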

    regards, tom lane





[GENERAL] Re: [HACKERS] TRANSACTIONS

2000-02-22 Thread Tom Lane

Jose Soares [EMAIL PROTECTED] writes:
 ---
 Interbase, Oracle, Informix, Solid, Ms-Access, DB2:
 ---
 connect hygea.gdb;
 create table temp(a int);
 insert into temp values (1);
 insert into temp values (10);
 commit;
 select * from temp;

 arithmetic exception, numeric overflow, or string truncation

   A
 ===
   1

 I would like to know what the Standard says and who is on the right path,
 PostgreSQL or the others, considering the two examples reported below.

I think those other guys are unquestionably failing to conform to SQL92.
6.10 general rule 3.a says

a) If SD is exact numeric or approximate numeric, then

  Case:

  i) If there is a representation of SV in the data type TD
 that does not lose any leading significant digits after
 rounding or truncating if necessary, then TV is that
 representation. The choice of whether to round or truncate is
 implementation-defined.

 ii) Otherwise, an exception condition is raised: data exception -
 numeric value out of range.

and 3.3.4.1 says

 The phrase "an exception condition is raised:", followed by the
 name of a condition, is used in General Rules and elsewhere to
 indicate that the execution of a statement is unsuccessful, ap-
 plication of General Rules, other than those of Subclause 12.3,
 "procedure", and Subclause 20.1, "direct SQL statement", may
 be terminated, diagnostic information is to be made available,
 and execution of the statement is to have no effect on SQL-data or
 schemas. The effect on target specifications and SQL descriptor
 areas of an SQL-statement that terminates with an exception condi-
 tion, unless explicitly defined by this International Standard, is
 implementation-dependent.

I see no way that allowing the transaction to commit after an overflow
can be called consistent with the spec.

        regards, tom lane





Re: [GENERAL] 7.0RC1: possible query and backend problem

2000-04-28 Thread Tom Lane

Michael Blakeley [EMAIL PROTECTED] writes:
 I've just upgraded to 7.0RC1 on a Solaris 2.6+patches system (U5 with 
 256MB). I'm having some trouble with a brand-new query, so I don't 
 know if this is a new bug or an old one. It has two parts: query 
 execution and backend robustness.

   SELECT id,date_part('epoch',sum(stop-start)),count(*),S1.url,S2.url
   FROM U WHERE 'now'::datetime - start < '1 month'::interval
   AND (id=S1.id OR id=S2.id) GROUP BY id,S1.url,S2.url;

   pqReadData() -- backend closed the channel unexpectedly.
  This probably means the backend terminated abnormally
  before or while processing the request.

I've fixed this problem, I think --- the three-way join clause was
confusing the planner :-(.

 I suspect that the problem is that, in my data set, either S1.url or 
 S2.url will be null for any given row, and this is causing problems 
 for GROUP BY

That shouldn't be a problem, although you do need to be careful
when dealing with NULLs --- it's easy to write the query so that
the WHERE condition will produce NULL, which is interpreted as FALSE.
But the above should work OK, because (NULL OR TRUE) will produce
TRUE.
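
A quick sanity check you can run yourself (any recent release should do):

select (null or true) as a, (null or false) as b, (null and false) as c;
-- a comes back true, b is null, c is false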

 IpcMemoryCreate: shmget failed (Invalid argument) key=5432110, 
 size=144, permission=700

Hmm, that is odd.  The thing that looks peculiar to me is that
it seems to be calculating a different size for the segment than
it did the first time through:

 # ipcs -a
 IPC status from running system as of Wed Apr 19 16:45:42 2000
 T  ID   KEY        MODE     OWNER    GROUP    CREATOR  CGROUP   NATTCH SEGSZ CPID  LPID  ATIME    DTIME    CTIME
 Shared Memory:
 m  800  0x0052e32e --rw---  postgres postgres postgres postgres 0      120   12737 12737 13:01:36 13:01:36 13:01:36

See the difference?  120 vs 144?  What's causing that I wonder...
and would it explain the failure to reattach?

regards, tom lane



Re: [HACKERS] Re: [GENERAL] Re: [SQL] textsubstr() ...? for postgres 7 beta5

2000-04-30 Thread Tom Lane

[EMAIL PROTECTED] writes:
 it is not textsubstr() but this (found in 6.5.2 redhat):

 result  |function |arguments  |description
 +-+---+--
 text|text_substr  |text int4 int4 |return portion of string

 Is this in 7.0 ?

Looks like it's called just plain "substr" now.  See
http://www.postgresql.org/docs/postgres/functions.htm

    regards, tom lane



Re: [GENERAL] Problems compiling version 7

2000-05-09 Thread Tom Lane

Travis Bauer [EMAIL PROTECTED] writes:
 I'm getting an odd error in the configure scripts:
 . . .
 checking for gzcat... (cached) /usr/local/gnu/bin/gzcat
 checking for perl... (cached) perl
 configure: error: Can't find method to convert from upper to lower case
 with tr

 I'm compiling this in Red Hat 6.0

Weird.  Do you not have 'tr' in your PATH?  You wouldn't be running with
some bizarre LOCALE setting, by any chance?

regards, tom lane



Re: [GENERAL] Query bombed: why?

2000-05-09 Thread Tom Lane

Jeff Eckermann [EMAIL PROTECTED] writes:
 I was expecting not much more than 50 rows to be returned, with an absolute
 maximum of 70.
 I was trying to simulate an outer join by using the "where not exists"
 clause, so that I would get back my full list of 70 and be able to see the
 unmatched entries...

Certainly 70 rows are not going to strain memory ;-).  My guess is that
the query didn't do what you thought, but instead produced some sort
of cross-product result...

    regards, tom lane



Re: [GENERAL] Problems compiling version 7

2000-05-09 Thread Tom Lane

Travis Bauer [EMAIL PROTECTED] writes:
 I have tr version 1.22 (GNU textutils).  It is located in /usr/bin, and is
 found by my login shell (cshrc).

That was a weak theory, but worth checking ...

 How could I check the locale setting?  

echo $LOCALE I think --- someone who actually uses non-ASCII locales
would be a better reference than I.  But the critical bit here is the
part of configure.in that's trying to find the right platform-specific
tr invocation:

dnl Check tr flags to convert from lower to upper case
TRSTRINGS="`echo ABCdef | $TR '[[a-z]]' '[[A-Z]]' 2>/dev/null | grep ABCDEF`"
TRCLASS="`echo ABCdef | $TR '[[:lower:]]' '[[:upper:]]' 2>/dev/null | grep ABCDEF`"

if test "$TRSTRINGS" = "ABCDEF"; then
TRARGS="'[[a-z]]' '[[A-Z]]'"
elif test "$TRCLASS" = "ABCDEF"; then
TRARGS="'[[:lower:]]' '[[:upper:]]'"
else
AC_MSG_ERROR("Can\'t find method to convert from upper to lower case with tr")
fi

(hmm ... the error message is exactly backwards from what's actually
being tested, isn't it?  Minor point but...)  Anyway, try these out
and see what's happening with your 'tr'.  Note that the apparently
doubled square brackets are a quoting artifact of autoconf --- you
should actually test [a-z] and so on, not [[a-z]].

The really silly bit is that configure.in has several other invocations
of tr that pay no attention at all to the results so painfully extracted
(or mis-extracted) here.  So it kinda looks to me like we could rip out
this test, hardwire the translation as
tr '[a-z]' '[A-Z]'
and be no worse off.  Does anyone recall why this test is in there to
begin with?

regards, tom lane



Re: [GENERAL] textpos() function

2000-05-09 Thread Tom Lane

Hitesh Patel [EMAIL PROTECTED] writes:
 I just upgraded to postgresql 7.0 and was restoring my data when I
 noticed I had a function defined that used the builtin textpos()
 function.  This function was available in 6.5.3 but seems to have
 disappeared in 7.0.  Is there any way to restore this function, or has
 it been renamed to something else (I found strpos() but I'm not sure if
 that does exactly the same thing)?

Looks like it's called position() now --- that function has the same OID
(849) as 6.5's textpos(), and even more damningly points to a C function
that's still called textpos():

regression=# select oid,* from pg_proc where oid=849;
 oid            | 849
 proname        | position
 proowner       | 256
 prolang        | 11
 proisinh       | f
 proistrusted   | t
 proiscachable  | t
 pronargs       | 2
 proretset      | f
 prorettype     | 23
 proargtypes    | 25 25
 probyte_pct    | 100
 properbyte_cpu | 0
 propercall_cpu | 1
 prooutin_ratio | 0
 prosrc         | textpos
 probin         | -
(1 row)

We've made a number of changes in 7.0 to bring function and type names
into alignment with the SQL92 standard.  The incompatibilities I knew
about were in date/time types and functions, but I guess this is another
one ...

If you really don't want to update your app's code just yet, you can
install a pg_proc entry that defines textpos() with a CREATE FUNCTION
command.  But the long-term answer is to fix your code to conform with
the standard.
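
A hedged sketch of such a wrapper, matching the signature in the
pg_proc row above (untested):

create function textpos(text, text) returns int4
    as 'textpos' language 'internal';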

regards, tom lane



Re: [GENERAL] 7.0 RPM?

2000-05-09 Thread Tom Lane

Karl DeBisschop [EMAIL PROTECTED] writes:
 If you do use these, plan on upgrading once Lamar releases his final.  But
 you should be able to do that without a dump/restore cycle, which was my
 prime concern.

7.0RC5 is database-compatible with the final, earlier betas are *not*.
You can use pg_upgrade to update from any 6.5-or-later version if you
are feeling adventurous, but I'd definitely suggest making a backup
first in case things go wrong and you have to initdb and restore.

regards, tom lane



Re: [HACKERS] Re: [GENERAL] Problems compiling version 7

2000-05-10 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 On Tue, 9 May 2000, Tom Lane wrote:
 dnl Check tr flags to convert from lower to upper case

 Does anyone recall why this test is in there to begin with?

 I don't see the results of this test being used anywhere at all, so I'd
 say yank it.  If your system doesn't support tr '[A-Z]' '[a-z]' the
 configure script will fail to run anyway, as it uses this construct
 indiscriminately.

The results *are* used, in backend/utils/Gen_fmgrtab.sh.in (and
apparently nowhere else).  But the data being processed there is just
builtin function names, so I'm at a loss why someone thought that it'd
be worth testing for a locale-specific variant of 'tr'.  I agree, I'm
pretty strongly tempted to yank it.

But we haven't yet figured out Travis' problem: why is the configure
test failing?  Useless or not, I don't see why it's falling over...

regards, tom lane



Re: [GENERAL] backend running out of memory in v7.0

2000-05-11 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 stack size (kbytes) 8192

 I bet this is the problem.

Nah, 8 meg stack should be fine --- that's the same configuration I run.

 On Thu, 11 May 2000, Gregory Krasnow wrote:
 I know that there were some issues with many AND/OR joins in
 PostgreSQL 6.5 which caused the backend process to run out of memory.  
 I am still having some similar issues in PostgreSQL 7.0.

AFAIK the AND/OR issues are largely licked, but we do still have
problems with memory leakages during execution for expressions involving
pass-by-reference datatypes.  There are plans on the table to fix that
for 7.1.

But, that might or might not be your problem.  It's difficult to give
any advice without some details about the queries that are giving you
trouble.

regards, tom lane



Re: [GENERAL] int8 and hash index

2000-05-12 Thread Tom Lane

"Philip Poles" [EMAIL PROTECTED] writes:
 I'm having some trouble using a hash index on an INT8 field in
 postgres7.0(release):

Looks like we don't have a hash function defined for int8 :-(.
This is going to have to stay broken until 7.1, given our rule
against changing system catalogs for subreleases.  Sorry.

Just counting entries, that may not be the only type that has
entries in pg_amop but not pg_amproc.  I feel another oprsanity
regression-test check coming on...

    regards, tom lane



Re: [GENERAL] Performance

2000-05-15 Thread Tom Lane

"Diego Schvartzman" [EMAIL PROTECTED] writes:
 I have an application via PHP. For example, a SELECT query that must return
 one and only one row, with a where clause with an index (I dropped it and
 created it again), that took about 3 seconds (v6.5.3) and now (v7.0.0)
 takes about 15 seconds.

Could be that 7.0 is less willing to use the index than 6.5 was.  See
thread "indexes ingnored on simple query in 7.0" over in pgsql-sql for
ways to investigate the problem and one possible solution.

        regards, tom lane



Re: [GENERAL] Best way to add columns

2000-05-15 Thread Tom Lane

Martijn van Oosterhout [EMAIL PROTECTED] writes:
 Marten Feldtmann wrote:
 The variable-length columns should be at the end of the row; therefore
 it does not seem to be good to add an integer column after a varchar
 column.

 1. Is this true? Should variable-length columns really be at the end
 of a row?

There is some microscopic performance advantage if you put fixed-width
columns before variable-width columns: columns at a fixed offset in the
table records can be accessed without scanning through the earlier
columns.  But I'd be surprised if you could actually measure any
difference, unless perhaps on tables with hundreds of columns.
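
For illustration, a hypothetical layout along those lines:

create table example (
    id     int4,          -- fixed width, found at a constant offset
    active bool,          -- fixed width
    name   varchar(80),   -- variable width, placed last
    note   text           -- variable width
);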

 2. If so, surely postgres can reorder them internally so that on disk
 they are in the optimal format.

There are notes in the source code indicating that the original
Berkeley Postgres crew thought about this and decided it wasn't
worth the trouble.

regards, tom lane



Re: [GENERAL] Performance

2000-05-16 Thread Tom Lane

"Ross J. Reedstrom" [EMAIL PROTECTED] writes:
 On Tue, May 16, 2000 at 01:41:48AM -0700, Dustin Sallings wrote:
 I ran into this exact problem, and it was *very* significant on a
 15M row table I have.  :)  It didn't seem to want to use the index, even
 freshly created, without a vacuum analyze.

 Hmm, if you drop the index, do a VACUUM ANALYZE, then create the index,
 it doesn't want to use it? That'd be odd, since the statistics are
 only kept about the table relations, not the indices themselves.

Right, it doesn't matter whether the index existed at the time of the
VACUUM.  But it does matter whether any VACUUM ANALYZE stats are
available or not...

 If you mean it won't use a fresh index on a fresh table, that's the
 expected behavior.

Just to clarify: it depends on the query, and 7.0's behavior is
different from prior versions.  For an equality-type probe, like
"WHERE x = 33", I'd expect 7.0 to select an indexscan even without
stats.  For an inequality like "WHERE x > 33", it will not select
an indexscan unless it has stats indicating that the inequality is
reasonably selective (less than about 10% of the table, I think).
For a range bound like "WHERE x > 22 AND x < 33", you will get an
indexscan without stats.  Beyond that I'm not going to guess...
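
In EXPLAIN terms, with a hypothetical indexed column x and no stats:

explain select * from t where x = 33;             -- indexscan anyway
explain select * from t where x > 33;             -- seqscan until stats say otherwise
explain select * from t where x > 22 and x < 33;  -- indexscan anyway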

Prior versions had a bogus cost formula for indexscans that would
*drastically* underestimate the cost of an indexscan, so they tended
to pick an indexscan even where it wasn't justified.  As it happened
they would pick an indexscan for the one-sided-inequality case even
with no stats available.  In some cases that was good, in others
it'd lose big.

    regards, tom lane



Re: [GENERAL] initdb and exit_nicely...

2000-05-17 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 It seems like it would be a whole lot "nicer" if initdb only deleted
 the files that it attempted to create OR if the default was not to
 delete anything.

 Okay, I could go for the former. What do others think?

It'd be a bit of a pain but I can see the problem.  You certainly
shouldn't delete the $PGDATA directory itself unless you created it,
IMHO.  Doing more than that would imply that initdb's cleanup
function would have to know the list of files/subdirectories that
normally get created during initdb, so as to remove only those and
not anything else that might be lying around in the directory.
That'd be a considerable maintenance headache since most of said files
are not created directly by the script...

BTW, what about collisions?  Suppose the reason the initdb fails
is that there's already a (bogus) pg_log, or some such --- is
the script supposed to know not to delete it?  That strikes me
as way too difficult, since now the script has to know not only
which files get created but exactly when.

A slightly more reasonable example is where the admin has already
inserted his own pg_hba.conf in the directory; would be nice if initdb
didn't overwrite it (nor delete it on failure), but I'm not sure it's
worth the trouble.

Something that would be a lot simpler is to refuse to run at all
if the $PGDATA dir exists and is nonempty ;-)

    regards, tom lane



Re: [GENERAL] getting libperl.so

2000-05-17 Thread Tom Lane

Travis Bauer [EMAIL PROTECTED] writes:
 compile plperl.so.  But I can't because apparently libperl.so is not
 configured properly on my machine.  I'm using Mandrake 7.0, and the only
 copy of libperl.so that I can find is in /usr/lib/apache (thanks to the
 mod-perl rpm).

Probably your main perl installation is not using a shared libperl?

You may have to pull down the perl source distribution and
configure/compile/install it yourself.  Should be pretty painless,
really (certainly nothing to fear if you can build Postgres from
source ;-)).  Don't forget to say "yes" when the configure script asks
you if you want a shared libperl.  You can probably default all the
other answers, except maybe for the location you want the perl directory
tree placed ...

    regards, tom lane

PS: be careful not to lose any Perl modules you may have installed
that don't come with the source distribution.



Re: [GENERAL] Passing arguments to and INDEX function.

2000-05-18 Thread Tom Lane

Jeffery Collins [EMAIL PROTECTED] writes:
 So, with that prefix, is there a way to pass an argument to a
 CREATE INDEX function?  If not, is this something that would be possible
 to add?  If it is possible, is it desirable?  If it is possible and
 desirable, where would I start if I were to add it?

It'd be a nontrivial bit of work (possibly not the best thing to do as
your first backend project ;-)).  Currently the keys for an index have
a hard-wired representation in pg_index: there's a list of attribute
numbers, which are either N separate keys of a multi-column index or
the N arguments of a function, depending on whether indproc has a value
or not.  There's noplace to put a constant value, unless you can squeeze
it into the int2 slot where the attribute number would go.

Rather than kluging this up still further, the right approach would be
to blow it all away in favor of a list of N arbitrary expressions to be
evaluated to produce the index key values.  The expressions would be
stored as nodetrees, same as we do for column default values (for
example).  Doable, I think, but not too easy.  You'd have to touch a
bunch of code, not only in the backend but in programs like pg_dump.

Whether or not Jeff wants to tackle it, I think the existing TODO item
for this is badly phrased:

* Allow CREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops)
  fails: index can't store constant parameters

It ought to read

* Allow arbitrary expressions as index keys
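
To make the distinction concrete, a hedged sketch (table and column
names hypothetical): a functional index whose arguments are all plain
columns works today, while one involving a constant is exactly the
failing case:

create index t_lower_name_idx on t (lower(name));           -- works
create index zman_index on test (date_trunc('day', zman));  -- fails: constant argument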

regards, tom lane



Re: [GENERAL] Error in manual

2000-05-18 Thread Tom Lane

Travis Bauer [EMAIL PROTECTED] writes:
 On another note, I noticed that when you load a dynamically loaded C
 library using the create function statement in psql, the library is never
 reloaded unless you quit psql and restart it, even if you "drop function",
 change the library, and recreate the function.

I believe there is a "LOAD" command that will reload the library without
needing to restart the backend.  Not sure if that's any easier than
just restarting though...
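
Its shape is simply (path hypothetical):

load '/usr/local/pgsql/lib/mylib.so';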

 but this should probably be noted somewhere in
 the programmers manual.

Agreed.

        regards, tom lane



Re: [GENERAL] Columns in pg_shadow?

2000-05-21 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 usecatupd:
 Ability to change system tables?

 Since there's no really convenient interface for twiddling this flag,
 I doubt anyone actually bothers to change it.  It starts out the same
 as one's usesuper flag, and probably stays that way...

 Also note that once you unset this flag you can't set it again because you
 no longer have write access to pg_shadow.

Yeek.  Gun ... foot ... shoot ...

I suppose you'd still have the rights to create new users, so you could
create a new superuser and then log in as him to fix the problem.
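
A hedged sketch of that escape route (names hypothetical; the new
superuser starts out with usecatupd set, so it can write pg_shadow):

create user rescue_admin createuser;
-- then, connected as rescue_admin:
update pg_shadow set usecatupd = 't' where usename = 'locked_out_user';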

regards, tom lane



Re: [GENERAL] crash on \copy

2000-05-21 Thread Tom Lane

Louis-David Mitterrand [EMAIL PROTECTED] writes:
 Is this a known problem?
 template1=> \copy
 psql: xstrdup: cannot duplicate null pointer (internal error)
 styx:~% 

 using PG 7.0-beta5 on Linux


Seems to be fixed in 7.0 release:

regression=# \copy
\copy: arguments required

regards, tom lane



Re: [GENERAL] fmgr_info error

2000-05-21 Thread Tom Lane

Louis-David Mitterrand [EMAIL PROTECTED] writes:
 After creating a trigger on an insert I get this error:
 auction=> insert into bid values('mito',3,354);
 NOTICE:  you bid the exact increment of 5
 ERROR:  fmgr_info: function 38667: cache lookup failed
 And the insert is not performed as it should. What does this error mean?

Offhand I'd guess that you deleted and recreated the function, but
didn't delete and recreate the trigger definition, so it's still
pointing at the now-gone version of the function.

This could stand to be cleaned up :-( ... but right now you have
to remake triggers referencing a function whenever you remake
the function.
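
That is, something along these lines (trigger and function names
hypothetical):

drop trigger check_bid_trig on bid;
create trigger check_bid_trig before insert on bid
    for each row execute procedure check_bid();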

regards, tom lane



Logging (was Re: [GENERAL] PostgreSQL 7.0-2 RPMset released.)

2000-05-21 Thread Tom Lane

Lamar Owen [EMAIL PROTECTED] writes:
 The real problem with redirecting the postmaster output is the issue
 of log rolling, which is impossible to do in the 'classic'
 stderr/stdout redirect UNLESS you throw down postmaster when rolling
 the log (unless you know a trick I don't).

Yes.  I think ultimately we will have to do some logging support code of
our own to make this work the way we want.  My thought at the moment is
there's nothing wrong with logging to stderr, as long as there's some
code somewhere that periodically closes stderr and reopens it to a new
log file.  There needn't be a lot of code involved, we just need a
well-thought-out spec for how it should work.  Comments anyone?

regards, tom lane



Re: [GENERAL] Explain auth/access/priv system??

2000-05-23 Thread Tom Lane

[EMAIL PROTECTED] (Philip Hallstrom) writes:
   I'm new to postgres and have some questions regarding the
 authentication and access systems.  I've got postgres installed and can
 connect from remote machines, but have some questions:
 - I cannot connect as the postgres user 'postgres' from remote machines.
 Why?

That's weird --- you can connect as other users but not as postgres?
The only way I know to do that is to set up a specific 'reject' entry
in pg_hba.conf, which doesn't seem like something you'd have done by
accident.  What do you have in pg_hba.conf, anyway?

 - How is pg_shadow managed?  Is it built from the pg_user table?  If so,
 how do I clean it up (doing a "strings pg_shadow" shows users that no
 longer exist -- is that a problem?)

No, actually pg_shadow is the master and pg_user is just a view of it.
Don't worry about what 'strings' tells you --- that will find deleted
tuples and all sorts of junk.  As long as you use CREATE USER and DROP
USER (or the shellscripts that invoke them) to manage users you should
be fine.  (Actually, in 7.0 it should work to use plain INSERT and
DELETE commands on pg_shadow ... but I don't really recommend it ...)
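
The recommended route, sketched with hypothetical names:

create user newuser with password 'secret';
drop user newuser;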

 - In the docs under "Database/Table Privileges" it says "TBD".  Can
  someone fill me in a bit.  For example, as 'postgres' I did "CREATE
 DATABSE foo".  Then I created the user "foo".  I would have thought that
 I would have had to grant some sort of access to user "foo" to database
 "foo", but as user "foo" I was able to create tables in database "foo".

The database-level protection is pretty lame at the moment: any user who
can connect to a database can create tables in it.  pg_hba.conf can be
used to deny particular users any access to particular databases, but
that's about the extent of your flexibility.  This is being worked on...

 - What do I need to do in order to allow multiple users the abililty to
   create tables in a single database?

Nada, see above.

regards, tom lane



Re: [GENERAL] Can't delete Null value from a not null field

2000-05-23 Thread Tom Lane

"Bryan White" [EMAIL PROTECTED] writes:
 I get the output:
 ERROR:  ExecutePlan: (junk) `ctid' is NULL!
 I get the same error if I try to update the null row.
 At this point I figured I would outsmart it and use the oid.  However it
 appears the oid is null as well.

Wow, that's bizarre.  It shouldn't be *possible* for oid or ctid to be
null --- AFAIK they don't have a null-value bit.  There must be
something really hosed up about that tuple's header.

 Any suggestions?  At this point the only thing I can see to do is dump the
 table and run the dump through a filter for the bad record and then reload
 it.

Slightly faster than a dump and reload (placeholder names here; keep
your table's real column list):

ALTER TABLE broken_table RENAME TO broken_table_old;

CREATE TABLE broken_table ( ... same schema as before ... );

INSERT INTO broken_table
    SELECT * FROM broken_table_old WHERE custid IS NOT NULL;

then recreate the indexes wanted on broken_table...

    regards, tom lane



Re: [GENERAL] Postgres Instability

2000-05-24 Thread Tom Lane

"planx plnetx" [EMAIL PROTECTED] writes:
 FATAL 1: cannot create init file
 mydatabasedirectory//base/mydb/pg_internal.init

If you're getting that, there is something *seriously* broken ---
the only way that can come out is if Postgres is unable to create
that file when it wants to.  I wonder if you are running the postmaster
as the wrong user (eg, one without write permission on the database
directories)?  Another possibility is that you're running with an
incorrect database path (postmaster -D switch or PGDATA environment
setting).  If that's an accurate transcription of the error message
then it looks like your path may be messed up...

    regards, tom lane



Re: [GENERAL] 7.0 installation problem, help please :-(

2000-05-24 Thread Tom Lane

Travis Bauer [EMAIL PROTECTED] writes:
 That's odd.  This is the error I got compiling pgsql 6.5 on Solaris.  I
 never resolved the problem.  However, the 7.0 source did not give this
 error.  Maybe this is a stupid question, but are you sure you have the
 most recent source code?

 On Tue, 23 May 2000, Chris Chan wrote:
 stringinfo.c: In function `appendStringInfo':
 stringinfo.c:104: `va_list' undeclared (first use in this function)
 stringinfo.c:104: (Each undeclared identifier is reported only once
 stringinfo.c:104: for each function it appears in.)


This would seem to indicate that stdarg.h isn't getting included,
which in turn suggests that the configure script didn't define
STDC_HEADERS (look in include/config.h to confirm or deny that).
The autoconf manual lists a number of reasons for not defining
STDC_HEADERS:

 - Macro: AC_HEADER_STDC
 Define `STDC_HEADERS' if the system has ANSI C header files.
 Specifically, this macro checks for `stdlib.h', `stdarg.h',
 `string.h', and `float.h'; if the system has those, it probably
 has the rest of the ANSI C header files.  This macro also checks
 whether `string.h' declares `memchr' (and thus presumably the
 other `mem' functions), whether `stdlib.h' declares `free' (and
 thus presumably `malloc' and other related functions), and whether
 the `ctype.h' macros work on characters with the high bit set, as
 ANSI C requires.

Any reasonably recent Unix system ought to pass those checks AFAIK,
but maybe there's a screw loose somewhere...

regards, tom lane



Re: [GENERAL] PG 7.0 is 2.5 times slower running a big report

2000-05-24 Thread Tom Lane

"Bryan White" [EMAIL PROTECTED] writes:
 Top tells me the front end process is using 5 to 10 percent of the CPU and
 the back end is using 10 to 20 percent.  The load average is about 1.0 and
 the CPU is about 80% idle.

It's probably waiting for disk I/O ...

What does EXPLAIN tell you about how the queries are being executed?
Do you by any chance have the 6.5.3 system still available to compare
its EXPLAIN output?

    regards, tom lane



Re: [GENERAL] table rename oddity

2000-05-25 Thread Tom Lane

Ari Jolma [EMAIL PROTECTED] writes:
 Is this kind of behavior intended?  When I try to rename a table to
 a reserved word - I have tried to use 'lseg', which is a type - postgres
 gives me an error, but the change has been partly done, and the
 only way back seems to be to go to the database directory and
 rename the file.  I have tested this with version 7.0.

Boo hiss --- the physical file rename must be the *last* step in
table rename, after all other possible errors have been checked.
Evidently it's not :-(.  Will fix.

regards, tom lane



Re: [GENERAL] problem with NOTICE: _outNode: don't know how to print type

2000-05-25 Thread Tom Lane

Ari Jolma [EMAIL PROTECTED] writes:
 template1=> create table a (a int);
 CREATE
 template1=> select a from a union select a from a where a in (select a from
 a);
 NOTICE:  _outNode: don't know how to print type 1044119613 
 NOTICE:  _outNode: don't know how to print type 1044119613 

Interesting.  I take it you have a fairly high -d level enabled (high
enough to make parse/plan trees be dumped to the postmaster log).  These
notices are pretty harmless, since they just indicate that the dumper
routine found something it couldn't deal with, but perhaps there is a
more serious problem lurking underneath.  Will look.

regards, tom lane



Re: [GENERAL] Public Domain SQL grammars?

2000-05-25 Thread Tom Lane

"Terry Van Belle" [EMAIL PROTECTED] writes:
 !DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"
 HTMLHEAD
 META content=3D"text/html; charset=3Diso-8859-1" http-equiv=3DContent-Type
 META content=3D"MSHTML 5.00.2919.3800" name=3DGENERATOR
 STYLE/STYLE
 /HEAD
 BODY bgColor=3D#ff
 DIVFONT face=3DArial size=3D2Hi All,/FONT/DIV
 DIVnbsp;/DIV
 DIVFONT face=3DArial size=3D2I've been trying to automatically parse th=
 e SQL'92=20
 grammar, based on the BNF found in the ISO spec.nbsp; So far no luck throw=
 ing=20
 yacc, yay, or JavaCC at it.nbsp; Does anyone know what type of grammar it =
 is,=20
 and whether there are any public domain grammars out there for it?/FONT/=
DIV 
 DIVnbsp;/DIV
 DIVFONT face=3DArial size=3D2Terry/FONT/DIV
 DIVnbsp;/DIV/BODY/HTML

(Please people, don't post HTML-ified email on mailing lists.  You think
the rest of us enjoy looking at this stuff?)

My guess is that the grammar shown in the spec is not LR(1), and that
you'll have to transform it into LR(1) form before you will get any
available parsing tool to deal with it.  This generally involves
rearranging productions so that no more than one token lookahead is
needed to tell which production to use next.  You might care to read
the 'bison' (GNU yacc) manual on the subject of removing parse
conflicts.

    regards, tom lane



Re: [GENERAL] Re: [ANNOUNCE] PostgreSQL 7.0 a success

2000-05-25 Thread Tom Lane

[EMAIL PROTECTED] (Jan Wieck) writes:
 Do you mean the TOAST snapshot I provided?

No, he's just griping about the fact that manipulating many thousand
Large Objects in one transaction stresses the system unpleasantly.
(Too many table locks grabbed is the usual killer, I think.)  TOAST
should eliminate those problems.

regards, tom lane



Re: [GENERAL] PG 7.0 is 2.5 times slower running a big report

2000-05-25 Thread Tom Lane

"Bryan White" [EMAIL PROTECTED] writes:
 I have recovered the performance lost when I moved to Postgres 7.0 by
 executing SET enable_indexscan = OFF before creating my cursors and
 turning it back on for the inner loop query.  It may even be faster
 than before, so I am happy.

OK, so it was the indexscans that were hurting.  (7.0 has new sorting
code too, so I was a little afraid that the problem might be with the
sorts.  Evidently not.)

This suggests that at least on your setup, the default value of 4.0 for
random_page_cost might still be too low.  I have not tried to measure
that number on a Linux machine, just on machines with BSD-derived
filesystems.  Maybe Linux does a lot worse with random accesses than
BSD?  Needs looking into.

 It seems that with index scans the cursors start producing data right away
 (but the overall rate is slower).  With sequential scan and sort the report
 gets no data for the first 30 minutes and then runs at about 4 times the
 rate of the index scan.

Right, that's what you'd expect: the sort has to be completed before it
knows which row to deliver first, but an indexscan has no such startup
cost.

    regards, tom lane



Re: [GENERAL] problem with NOTICE: _outNode: don't know how to print type

2000-05-25 Thread Tom Lane

Ari Jolma [EMAIL PROTECTED] writes:
 It seems that there needs to be a union and a subselect.

Yes.

 And the really curious thing is that this problem seems
 to depend on which redhat version there is!

It was trying to interpret a pointer-to-character-string as
a pointer to a Node, so the results would be quite machine-
dependent.  This didn't have any effect on execution of the
query, only on display of the parsetree in the postmaster log;
but in the worst-case scenario you could see a coredump from
the printout routine following a nonaligned pointer or some such.

Fixed for 7.0.1.

regards, tom lane



Re: [GENERAL] Limits on PostgreSQL

2000-05-25 Thread Tom Lane

Marcos Barreto de Castro [EMAIL PROTECTED] writes:
 1 - How many tuples can be returned as a result of a
 query using a CURSOR? (Is it possible to do a SELECT
 * on a table that has 2 million records, OPEN a
 CURSOR for that SELECT, and show all records' contents
 using FETCH FORWARD, for example?)

You probably wouldn't want to fetch all 2 million rows in one FETCH,
because you'd risk running out of memory on the client side.  But as
long as you grab a reasonable number of rows per FETCH command, it works
fine.  This is in fact the recommended method for dealing with extremely
large SELECT results.
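
A sketch of that pattern (cursor and table names hypothetical; pick a
batch size to taste):

begin;
declare big_cur cursor for select * from big_table;
fetch forward 1000 in big_cur;   -- repeat until it returns no rows
close big_cur;
commit;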

  2 - When one uses a CURSOR for a SELECT, is there a
 big memory consumption, or is there a memory buffer
 limit beyond which the result is written to a file
 (virtual memory) and read from there when needed?

Cursors work just the same as plain evaluation of the query would,
except that the query execution is suspended after having fetched the
requested number of tuples.  There isn't any special memory requirement
on the backend side.

Some types of queries need temporary files (an explicit sort of a large
volume of data is an example).  But that'll be just the same whether you
run them via a cursor or a plain SELECT.

regards, tom lane



Re: [GENERAL] PG 7.0 vacuum problem

2000-05-25 Thread Tom Lane

Marcin Inkielman [EMAIL PROTECTED] writes:
 i recently upgraded my system from PG 6.5.3 to PG 7.0. after a few days of
 work i am unable to do a vacuum on one of the tables:

 nat=# VACUUM verbose analyze osoby;
 NOTICE:  FlushRelationBuffers(osoby, 182): block 186 is referenced
 (private 0, global 3)
 FATAL 1:  VACUUM (vc_repair_frag): FlushRelationBuffers returned -2

Hmm.  Have you had any backend crashes?  What seems to be happening here
is that there are some leftover reference counts on one of the shared
disk buffers for that relation.  That should never be true while VACUUM
is running, because no other backend is supposed to be referencing that
table.

 do i risk anything if i do:

 pg_dump nat > tmp
 dropdb nat
 createdb nat
 psql nat < tmp

Probably won't work either.  Instead, try stopping and restarting the
postmaster --- if my theory is right, that should get rid of the
leftover reference counts.  But the real question is how did it get
into this state in the first place...

regards, tom lane



Re: [GENERAL] initdb and exit_nicely...

2000-05-25 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 How do you find out if a directory is empty?

Good question.

 The best way I could think of is this:

 test x"`ls -A "$PGDATA"`" = x

The embedded quotes might confuse some shells, no?  Perhaps better
CONTENTS=`ls -A "$PGDATA"`
if test "x$CONTENTS" = x

 Are we talking 7.0.1 material, btw?

Well, we would be if we were sure of the patch.  I'm a little worried
about portability though.  Given that this isn't a very critical issue
(IMHO) I'd recommend saving it for the 7.1 cycle.

    regards, tom lane



Re: [GENERAL] Speed of locating tables?

2000-05-26 Thread Tom Lane

Barry Lind [EMAIL PROTECTED] writes:
 I am curious, how does PostgreSQL support tables larger than 2Gig, given
 the file per table architecture?

Multiple files per table ...

regards, tom lane



Re: [GENERAL] Failed regression tests

2000-05-26 Thread Tom Lane

[EMAIL PROTECTED] writes:
 I recently compiled and installed PostgreSQL 7.0 on a P-III workstation
 running SuSE Linux 6.2 (2.2.10 kernel, gcc ver. 2.7.2.3).  The compilation
 completed with some non-fatal warnings, but when I ran the sequential
 regression tests, five failures occurred.
 I am using an international code page (LANG=en_US.iso88591), and perhaps the
 $ appears because I used ./configure --enable-locale?

Offhand all of those look like they might be due to the LANG setting.
Try restarting the postmaster with LANG unset and then rerun the
regression tests to see what you get.

regards, tom lane



Re: [GENERAL] SPI file locations

2000-05-26 Thread Tom Lane

Lamar Owen [EMAIL PROTECTED] writes:
 /lib/cpp -M -I. -I../backend executor/spi.h | xargs -n 1 | grep \\W | grep -v ^/ | grep -v spi.h | sort | cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql

 This could easily enough be included in the make install, couldn't it?
 (Tom?  Anyone?)  I realize that GNU grepisms (or is \W common?) are
 used above,

That's just the tip of the iceberg of the portability problems in the
above.  If you want to propose that we actually do something like that
during 'make install', you're gonna have to work a lot harder.
(However, as a non-portable trick for getting a list of files that need
to be included in a hand-generated makefile, it's not bad.)

The more serious problem is "what else might an SPI module need besides
spi.h".  Also, it disturbs me a lot that spi.h pulls in 80+ include
files in the first place --- there's got to be stuff in there that
needn't/shouldn't be exported.  I know that an SPI developer who's just
trying to get some work done couldn't care less, but I'd like to see us
make some effort to actually clean up the list of files to be exported,
rather than fall back on automation that will let the list bloat even
more without anyone much noticing...

    regards, tom lane



Re: [GENERAL] Questions about CURSORS

2000-05-28 Thread Tom Lane

Marcos Barreto de Castro [EMAIL PROTECTED] writes:
 computers at the same time.
   Suppose I've created a CURSOR for a SELECT * FROM
 that table GROUP BY column1.
   Suppose I have fetched the 3rd record and am going
 to fetch the 4th, and in the meantime someone at
 another computer just DELETED that 4th record from
 that table.  Will the fetch succeed?  Will the record
 be shown although it no longer exists in the table?

Yes --- that's what transaction semantics are all about.
You don't see the effects of a different transaction unless
it committed before yours started.  (The actual implementation
is that a deleted or modified tuple is still in the table,
but it's in a "zombie" state.  Old transactions can still see it,
new transactions ignore it.)

 And if someone at another computer had inserted a new
 record which, according to my GROUP BY clause, would
 be the 4th, would it be shown for the next fetch?

No.  See above.

   My big questions are: Do CURSORS perform their
 SELECT operations in the TABLE directly or in a file?

A cursor is no different from a select; it's just expressed in a form
that lets you suspend execution part way through.

 Any changes to rows selected through a CURSOR will be
 shown right away or they will only appear as I perform
 another SELECT?

Other backends won't be able to see your changes until you commit.
I'm not sure about the behavior if you modify the table in your own
transaction and then resume scanning with a pre-existing cursor.
It might be that you will be able to see the updates in that case.
(If so, is that a bug?  Possibly, not sure...)

   Is there a book that I could read in order to get a
 better knowledge on SQL implementation or even a
 website where I could read about this?

There are several books recommended in our FAQ, I believe.

(Hey Bruce, does your new book go into this stuff?)

    regards, tom lane



Re: [GENERAL] Arguments not being passed to a function

2000-05-28 Thread Tom Lane

Barry Lind [EMAIL PROTECTED] writes:
 I am trying to call PL/pgSQL functions from JDBC via the Fastpath
 interface.  The function is executing but none of the arguments to the
 function are getting set.

Looks like fastpath.c is passing a garbage isNull flag to the function
it calls :-(.  None of the functions "usually" called this way pay
attention to isNull, but plpgsql functions sure do.

Turns out I had already fixed this for 7.1 as a side-effect of some
other work, but I will stick a patch into 7.0.1.  If you're in a hurry,
the bug is in src/backend/tcop/fastpath.c, and the appropriate patch is

  #ifndef NO_FASTPATH
+ isNull = false;
  retval = fmgr_array_args(fid, nargs, arg, &isNull);
  #else
  retval = NULL;
  #endif /* NO_FASTPATH */

    regards, tom lane



Re: [GENERAL] Vacuum analyze vs just Vacuum

2000-05-28 Thread Tom Lane

"Bryan White" [EMAIL PROTECTED] writes:
 I would just like to check an assumption.  I "vacuum analyze" regularly.  I
 have always assumed that this did a plain vacuum in addition to gathering
 statistics.  Is this true?

Yes.  There are some poorly-worded places in the docs that make it sound
like they might be completely separate operations, but they're not.

I would like to split out ANALYZE as a separately-callable command at
some point, but we'll no doubt continue to offer the combined "vacuum
analyze" command, if only for backward-compatibility reasons...

        regards, tom lane



Re: [GENERAL] initdb and exit_nicely...

2000-05-18 Thread Tom Lane

"Len Morgan" [EMAIL PROTECTED] writes:
 The reason that IMHO this deserves a little consideration (i.e., doing it at
 all rather than just saying "Don't store any files in PG_DATA") is that
 RedHat based rpm installations create the postgres superuser account as part
 of the process and set the home directory for this user to PG_DATA.

To be blunt, that's *incredibly* brain dead.  The contents of the PGDATA
directory are way too critical to be the account's home directory.  The
correct fix for this is to change the way the RPM sets up the account.

I don't think we are doing anyone a service if we tweak initdb in a way
that will make it slightly safer to keep random files in PGDATA.  You
shouldn't do it anyway, and modifying initdb to make it look like you
can will only increase the risk of people accidentally screwing up their
installation.

        regards, tom lane



Re: [GENERAL] lots of large objects and toast

2000-05-29 Thread Tom Lane

Lincoln Yeoh [EMAIL PROTECTED] writes:
 [ quoting an earlier reply: ]
  There's never been much enthusiasm among the core developers for large
  objects at all --- we see them as a poor substitute for allowing large
  values directly.  (The "TOAST" work scheduled for 7.1 will finally
  resolve that issue, I hope.)  So no one's felt like working on improving
  the large-object implementation.

 On the practical side, say I want to insert/read a large amount of
 information into/from a TOAST field.  How should I do it?
 Is there a pipe method where I can continuously print to/read from?

Not at the moment, but that's obviously going to be a necessary feature
if we want to make the existing flavor of large objects obsolete.  There
have been some preliminary discussions about it --- AFAIR no one's laid
out a complete proposal yet.

    regards, tom lane



Re: [GENERAL] Postgresql usage clip.

2000-05-29 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 We support WIn95/98 clients, not servers.

I thought we did have a cygwin-based server port?  If not, there's
a heckuva lot of useless "PORTNAME=win" conditional compilation in
the backend.

Mind you, I don't think any sane dbadmin would use Windoze as a
platform for a mission-critical application, regardless of database
engine choice.  So the cygwin port is pretty much a toy IMHO.
If MySQL wants to have the toy-application market segment, they're
welcome to it.

    regards, tom lane



Re: [GENERAL] Perl interfaces?

2000-05-31 Thread Tom Lane

Philip Hallstrom [EMAIL PROTECTED] writes:
   I took a look around and was unable to find a Perl DBI driver for
 PostgreSQL... does one exist that I'm missing?

DBD-Pg, at rev 0.93 last I looked at the CPAN archives.  For some
bizarre reason it's not listed on the index page about DBI drivers,
but you can find it if you search by module name.

regards, tom lane



Re: [GENERAL] CREATE USER

2000-06-01 Thread Tom Lane

"Hiroshi Inoue" [EMAIL PROTECTED] writes:
 How about starting new transaction automatically after committing
 "create user ..." at backend side if "create user" is the first command
 of the transaction ?

So then
begin;
create user ...;
rollback;

would do the wrong thing --- silently?

I don't think that's an improvement :-(

The only reason CREATE USER isn't rollbackable is that the flat password
file is updated immediately by a trigger, rather than at transaction
commit.  The right fix would be to defer the update until commit (which
is certainly doable, though it might mean hardwired support for the
update instead of doing it in a generic trigger function).

If that seems like too much work, how about downgrading the "create
user not allowed in transaction" error to a "please don't abort now"
notice?  It's pretty silly that CREATE USER is stiffnecked about this
when DROP TABLE is not --- the bad consequences of rolling back DROP
TABLE are a lot worse.

regards, tom lane



Re: [GENERAL] btree index and max()

2000-06-01 Thread Tom Lane

[EMAIL PROTECTED] writes:
 I understand that the query planner cannot be so clever
 to grasp that this particular function (max or min)
 might be evaluated by just travelling the BTREE index.
 Am I correct?

You are correct --- the system has no idea that there is any
connection between the MIN and MAX aggregates and the sort order
of any particular index.  (In fact, the system knows nothing
about the specific semantics of any aggregate function; they're
all black boxes, which is a very good thing for most purposes.)

However, if you think of your problem as "how can I use the sort order
of this index to get the min/max?", a semi-obvious answer pops out:

SELECT foo FROM table ORDER BY foo LIMIT 1;      -- get the min
SELECT foo FROM table ORDER BY foo DESC LIMIT 1; -- get the max

and the 7.0 optimizer does indeed know how to use an index to handle
these queries.

Perhaps someday we will try to convert simple uses of MIN/MAX into
queries like these, but for now, you gotta do it by hand.

    regards, tom lane



Re: [GENERAL] Postmaster won't -HUP

2000-06-01 Thread Tom Lane

Jerry Lynde [EMAIL PROTECTED] writes:
 They are all indexed, the DOB index is actually  DOBYear DOBDay and
 DOBMonth and all 5 fields are indexed
 
 Do you have 5 indexes or do you have an index that spans more than one
 field?

 Sorry for being less than explicit. There are 5 separate indices, one per 
 field.

So your query is really something more like

... WHERE firstname = 'joe' AND lastname = 'blow' AND
  DOByear = 1999 AND DOBmonth = 1 AND DOBday = 1

?

The problem here is that only one index can be used in any individual
scan.  If I were the optimizer I'd probably figure that lastname is
going to be the most selective of the five available choices, too.

I'd suggest storing the DOB as *one* field of type 'date'.  You can
pull out the subparts for display with date_part() when you need to,
but for searches you'll be a lot better off with

WHERE DOB = '1999-01-01'
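
For instance, a rough sketch (the table and index names here are
invented for illustration):

    CREATE TABLE people (
        firstname text,
        lastname  text,
        dob       date
    );
    CREATE INDEX people_dob_idx ON people (dob);

    -- pull the pieces back out only for display:
    SELECT date_part('year', dob) AS dobyear
    FROM people
    WHERE dob = '1999-01-01';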

regards, tom lane



Re: [GENERAL] index problem

2000-06-01 Thread Tom Lane

Marcin Inkielman [EMAIL PROTECTED] writes:
 I created an index using pgaccess rescently. the name of the index was
 long:
 "oceny_stud_numer_albumu_protokoloceny_stud"
 now i am unable to vacuum my database.

Oh dear :-( ... it seems that when you quote an identifier, the system
forgets to make sure that it's truncated to no more than 31 characters.
You've got a corrupted pg_class entry now for that index.

 my question is:
 ~~~
 how may i delete this index in my original database???

Dropping the table that the index is on should work.  Hopefully
restoring just the one table is better than restoring your whole DB.

In the meantime, this is a high-priority bug fix...

    regards, tom lane



Re: [GENERAL] Postmaster won't -HUP

2000-06-01 Thread Tom Lane

Jerry Lynde [EMAIL PROTECTED] writes:
   Thanks for the tip. I might indeed take that approach in the future, 
 however that's not really the problem I'm trying to tackle right now. 
 Indexing by Last Name is fine with me, currently. What's not working for me 
 is the part where the dual pentium 500 machine with 256MB RAM goes into 
 deep thought indefinitely for one simple hard-coded query.

Ah, sorry ... I've been seeing so many optimizer questions lately that
I tend to zero right in on anything that looks like a misoptimization
issue.

I'm not aware of any reason that a query such as you describe would
tend to hang up the machine.  It would be useful to know what you see
in "top" or some other monitoring program when the problem happens.
Is there just one backend process sucking all the CPU time?  More than
one?  Is the process(es) memory usage stable, or climbing?

An even more useful bit of info is a stack trace from a backend that's
suffering the problem: if you do a "kill -ABORT" on it you should get
a coredump and be able to backtrace with gdb.  (Note this will cause
a database system restart, ie all the other backends will commit
harakiri too, so I wouldn't advise doing it during normal usage of the
system.)

        regards, tom lane



Re: [GENERAL] query optimiser changes 6.5-7.0

2000-06-01 Thread Tom Lane

"Simon Hardingham" [EMAIL PROTECTED] writes:
 I have run explain on the query and it shows that it just
 performs a sequential scan on version 7.0

 Seq Scan on gazet  (cost.)

 On the old version (6.5.1) it reports

 Index Scan using gazet_index on gazet  (cost=

 Any suggestions as to how I can improve performance on this databases new
 server?

Unfortunately you have been careful to suppress any information that
might actually let someone give you useful advice ;-).  There are
several threads in the archives about undesirable index-vs-sequential-
scan choices in 7.0; check pgsql-sql as well as pgsql-general for the
last month or so.

Also, 7.0.1, propagating now to an archive near you, contains some
fudge-factor twiddling to make it more willing to choose an indexscan.
We shall soon find out whether that made things better or worse for
typical uses...

    regards, tom lane



Re: [GENERAL] interval questions

2000-06-01 Thread Tom Lane

Michael Blakeley [EMAIL PROTECTED] writes:
 I'm trying to find the average age of the records. I've gotten as far as:
   SELECT DISTINCT ON(id) age(stamp) FROM EVENTS;

 Now, I need the DISTINCT ON(id), but that means I can't simply avg() the age:
   ERROR:  Attribute events.id must be GROUPed or used in an 
 aggregate function

You don't say *why* you need DISTINCT ON, or exactly what output you
are hoping to get (presumably not a straight average over all the table
entries) ... but perhaps something like
SELECT id, avg(age(stamp)) FROM events GROUP BY id;
is what you need?

regards, tom lane



Re: [GENERAL] Compiling Error

2000-06-04 Thread Tom Lane

Jesus Aneiros [EMAIL PROTECTED] writes:
 Could somebody help me with this error. It appears when I try to compile
 an ecpg program. It seems that it is something with the linking phase and
 the ecpg library. Any ideas?

 /usr/lib/libecpg.so: undefined reference to `crypt'
 collect2: ld returned 1 exit status

You probably need an explicit "-lcrypt" in your link command.  Some
platforms need that, some don't...

    regards, tom lane



Re: [GENERAL] Vacuum Question

2000-06-06 Thread Tom Lane

Ed Loehr [EMAIL PROTECTED] writes:
 Then, start this one in another bash window/terminal/whatever...
 
 % while test 1; do echo -n "`date`: "; psql -d pdb -c "vacuum analyze;
 select count(*) from foo;"; sleep 3; done
 
 This seems to consistently crash after the first vacuum with the
 following message:

This is a known gotcha that's got nothing to do with any sort of
concurrency.  You can't safely send a VACUUM followed by anything
else in a single query string.  The problem is that VACUUM forces a
transaction commit, which releases all transiently allocated memory
in its backend ... including the already-parsed querytrees for the
rest of the query string.  Oops.  (cf. comment near line 560 in
src/backend/tcop/postgres.c)

You won't see the problem if you enter "vacuum analyze; select ..."
interactively or as a script in psql, because it chops up the
commands into separate query submittals.  But apparently psql
doesn't chop up a -c string.  Non-psql frontends can expose the bug
as well.

It's possible that this will get cleaned up as a byproduct of the
proposed rework of transaction-local memory contexts.  But it's
not a real high-priority problem, at least not IMHO.  For now,
the answer is "if it hurts, don't do it ;-)"

    regards, tom lane



Re: [GENERAL] Precision of calculated numeric fields

2000-06-06 Thread Tom Lane

Travis Bauer [EMAIL PROTECTED] writes:
 Consider the following:
 trbauer=# create table t1 (x numeric(3,2));
 trbauer=# \d t1
  Attribute |     Type     | Modifier
 -----------+--------------+----------
  x         | numeric(3,2) |

 trbauer=# create view v1 as select x*2 from t1;
 trbauer=# \d v1
  Attribute |         Type         | Modifier
 -----------+----------------------+----------
  ?column?  | numeric(65535,65531) |

 How do I get the precision on the calculated numeric field to be something
 sane, like 3,2? 

You don't --- there isn't any way to specify the types of view columns.
The view is being created with typmod -1 for the numeric column, which
is correct behavior IMHO.

The bug here is in psql: it should be showing the column type as plain
"numeric", no decoration.

 This is important for three reasons: 1.MSAccess chokes on views
 containing these fields (citing the precision size).  2. The jdbc driver
 takes _forever_ to retrieve these fields into big decimal.

Sounds like the jdbc driver also gets confused when it sees typmod -1
for a numeric field.

As a short-term workaround you could manually set pg_attribute's
atttypmod column for the view's field.  Use the same value you
find in atttypmod for the underlying table's field.
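
Something along these lines, using the names from your example (this
is direct catalog surgery, so test it on a scratch database first):

    UPDATE pg_attribute
    SET atttypmod = (SELECT a.atttypmod
                     FROM pg_attribute a, pg_class c
                     WHERE c.relname = 't1'
                       AND a.attrelid = c.oid
                       AND a.attname = 'x')
    WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'v1')
      AND attname = '?column?';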

    regards, tom lane



Re: [GENERAL] Vacuum Question

2000-06-06 Thread Tom Lane

Ed Loehr [EMAIL PROTECTED] writes:
 % while test 1; do echo -n "`date`: "; psql -d pdb -c "vacuum analyze;
 select count(*) from foo;"; sleep 3; done
 
 This seems to consistently crash after the first vacuum with the
 following message:
 
 This is a known gotcha that's got nothing to do with any sort of
 concurrency.  You can't safely send a VACUUM followed by anything
 else in a single query string.  

 Well, I thought that select count(*) might've been causing a problem, so
 I experimented without it and found the same problem.  Doesn't seem to
 happen with 'vacuum'or 'vacuum analyze foo', only 'vacuum analyze'...

I can't reproduce any problem with just a "vacuum" (with or without
analyze) and no following command.

I did, however, notice that very occasionally the inserting process
would spit out weird error messages like "Function '(int4)' does not
exist" and "init_fcache: null probin for procedure 481".  This seems
to be due to VACUUM (on system tables) causing syscache entries to be
flushed at unexpected times.  I've committed patches for the two cases
I observed, but there may be more lurking...

regards, tom lane



Re: [GENERAL] Vacuum Question

2000-06-06 Thread Tom Lane

Ed Loehr [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 I can't reproduce any problem with just a "vacuum" (with or without
 analyze) and no following command.
 
 I did, however, notice that very occasionally the inserting process
 would spit out weird error messages like "Function '(int4)' does not
 exist" and "init_fcache: null probin for procedure 481".  This seems
 to be due to VACUUM (on system tables) causing syscache entries to be
 flushed at unexpected times.  I've committed patches for the two cases
 I observed, but there may be more lurking...

 Yes, I was getting a similar Function error message, though I never saw
 the init_fcache message.  And it appeared the backend would crash
 coincidentally with the function error message.  If the patch can be
 applied to 7.0 (beta3), and you don't mind posting it, I could test it
 here...

Hmm, I only saw error messages, no crashes --- but I suppose a crash is
possible, since the root of the problem here is a dangling pointer.

Patches for 7.0.2 are attached.  Not sure if they will apply perfectly
cleanly to beta3, but you should be able to make the right mods by hand
if patch doesn't cope...

    regards, tom lane

*** src/backend/parser/parse_type.c.orig	Tue May 30 00:24:49 2000
--- src/backend/parser/parse_type.c	Tue Jun  6 11:41:08 2000
***************
*** 48,54 ****
  		return NULL;
  	}
  	typetuple = (Form_pg_type) GETSTRUCT(tup);
! 	return NameStr(typetuple->typname);
  }
  
  /* return a Type structure, given a type id */
--- 48,55 ----
  		return NULL;
  	}
  	typetuple = (Form_pg_type) GETSTRUCT(tup);
! 	/* pstrdup here because result may need to outlive the syscache entry */
! 	return pstrdup(NameStr(typetuple->typname));
  }
  
  /* return a Type structure, given a type id */
***************
*** 119,125 ****
  	Form_pg_type typ;
  
  	typ = (Form_pg_type) GETSTRUCT(t);
! 	return NameStr(typ->typname);
  }
  
  /* given a type, return its typetype ('c' for 'c'atalog types) */
--- 120,127 ----
  	Form_pg_type typ;
  
  	typ = (Form_pg_type) GETSTRUCT(t);
! 	/* pstrdup here because result may need to outlive the syscache entry */
! 	return pstrdup(NameStr(typ->typname));
  }
  
  /* given a type, return its typetype ('c' for 'c'atalog types) */

*** src/backend/utils/cache/fcache.c~	Wed Apr 12 13:15:53 2000
--- src/backend/utils/cache/fcache.c	Tue Jun  6 13:39:03 2000
***************
*** 14,19 ****
--- 14,20 ----
   */
  #include "postgres.h"
  
+ #include "access/heapam.h"
  #include "catalog/pg_language.h"
  #include "catalog/pg_proc.h"
  #include "catalog/pg_type.h"
***************
*** 89,97 ****
  	if (!use_syscache)
  		elog(ERROR, "what the , init the fcache without the catalogs?");
  
! 	procedureTuple = SearchSysCacheTuple(PROCOID,
! 	                                     ObjectIdGetDatum(foid),
! 	                                     0, 0, 0);
  
  	if (!HeapTupleIsValid(procedureTuple))
  		elog(ERROR, "init_fcache: Cache lookup failed for procedure %u",
--- 90,98 ----
  	if (!use_syscache)
  		elog(ERROR, "what the , init the fcache without the catalogs?");
  
! 	procedureTuple = SearchSysCacheTupleCopy(PROCOID,
! 	                                         ObjectIdGetDatum(foid),
! 	                                         0, 0, 0);
  
  	if (!HeapTupleIsValid(procedureTuple))
  		elog(ERROR, "init_fcache: Cache lookup failed for procedure %u",
***************
*** 258,263 ****
--- 259,266 ----
  	}
  	else
  		retval->func.fn_addr = (func_ptr) NULL;
+ 
+ 	heap_freetuple(procedureTuple);
  
  	return retval;
  }



Re: [GENERAL] Composite Types

2000-06-06 Thread Tom Lane

[EMAIL PROTECTED] writes:
 I appreciate any help I can get on this...Recently, I've been experimenting
 with the user defined base-types (using CREATE TYPE) with successful results.
 But, when it comes to handling classes as composite types, things are not
 as straight forward.

The function-returning-composite-type feature is something we inherited
from Berkeley Postgres; it doesn't fit into SQL92 at all, and I
don't think any of the current crop of developers even understand it
very well.  It's certainly suffering from bit-rot.  The "hobbies"
examples in the regression tests seem to be meant to illustrate how
it was supposed to work, but I don't find them either intelligible or
persuasive.

 If I type:
   select content from stuff;
 I get:

 content
 ---
 136585664
 (1 row)

I believe you are looking at a numeric equivalent of a pointer-to-
TupleTableSlot there.  Somewhere in the mists of Berkelian prehistory,
there must have been some code that did something useful with that kind
of function result, but I sure as heck can't find much trace of it now.

I have been thinking lately that functions returning tuples might
fit into SQL92 better as table sources.  That is, instead of
select foo(bar).whatdoyouwritehere ...
we could write something like
select elementa,elementc+1 from foo(bar)

That doesn't work at the moment, of course, but it's something we
could think about causing to work as part of the querytree redesign
planned for 7.2.

Thoughts anyone?

    regards, tom lane



Re: [GENERAL] getting inet out of char?

2000-06-06 Thread Tom Lane

Andrew Sullivan [EMAIL PROTECTED] writes:
 I have a table mapping names to ips; the access to users is through PHP3. 
 Now, PHP's module to PostgreSQL does not know about the data type inet.  I
 could just forget about it, but it seems to me the inet data type offers a
 number of advantages for easy data extraction.

Seems like the cleanest answer would be to teach PHP about inet data
type (not to mention other extension types).  I've got no idea what
that would take, but ...

 create rule name_ip_update as on update to name_and_ip do insert into
 name_and_ip_v4(name,ip) values (new.name, new.ip::inet);
 ERROR:  Cannot cast type 'bpchar' to 'inet'
 Is there something else I can do?  (Having followed the recent discussion on
 rules and triggers, I thought a rule was what I wanted.)

You could create a C-coded function to do the transformation, or you
could replace the rule with a trigger coded in pltcl or plperl.  I
believe both the pltcl and plperl languages are basically datatype-
ignorant --- as long as the textual output from one data value looks
like the format the input routine for another type is expecting,
it'll work.  SQL rules and plpgsql functions are far more anal-
retentive about type checking.  Sometimes that's good, sometimes not
so good.

regards, tom lane



Re: [GENERAL] index problem

2000-06-07 Thread Tom Lane

Lincoln Yeoh [EMAIL PROTECTED] writes:
 At 10:53 AM 05-06-2000 +0200, Marcin Inkielman wrote:
 drop index oceny_stud_numer_albumu_protokoloceny_stud;
 failed
 so I used:
 drop index "oceny_stud_numer_albumu_protokoloceny_stud";
 and it worked for me 8-)))

 I wonder why it worked tho. How does Postgresql treat stuff between double
 quotes, especially regard to string length limits? 

Stuff between double quotes *should* be subject to the same
NAMEDATALEN-1 restriction as unquoted names.  Embarrassingly, 7.0's
lexer didn't enforce such a limit (it's fixed in 7.0.1 and later)
which meant that you could overrun the space allocated for names
in pg_class and other system tables, if you quoted the name.

Marcin's original create index command evidently managed to create
a pg_class entry with 32 non-null characters in the name field,
where it should have been only 31 and a null.  He couldn't delete
that entry with a drop index command using an unquoted name, because
the lexer would (correctly) truncate such a name to 31 chars.  But
evidently it worked to match against a quoted-and-not-truncated
name.  I'm pretty surprised that he didn't see coredumps instead.

If you want to trace through the code to discover exactly how it
managed to avoid crashing, go for it --- but it doesn't seem like
an especially pressing question from here.  To my mind the bug is
just that the lexer created an invalid internal name string to
begin with.  Internally, no name should ever exceed NAMEDATALEN-1.

    regards, tom lane



Re: [GENERAL] Dump

2000-06-07 Thread Tom Lane

Trurl McByte [EMAIL PROTECTED] writes:
 Error in dumping defaults on serial type!
 If the table has a non-standard name (example: "Order"),
 the sequence is auto-created with the name "Order_id_seq".
 In the dump, the field definition is:
 ...
 "id" int4 DEFAULT nextval ( 'Order_id_seq' ) NOT NULL,
 ...
 , but need:
 ...
 "id" int4 DEFAULT nextval ( '"Order_id_seq"' ) NOT NULL,
 ...

Hmm.  This is not pg_dump's fault: the default expression is actually
being stored that way in the database.  Someone seems to have thought
it was a good idea to strip the double quotes at parse time instead
of run time :-(.

Will fix for 7.1 ... in the meantime, don't name your sequences that way
...

regards, tom lane



Re: [GENERAL] anoncvs access

2000-06-07 Thread Tom Lane

Travis Bauer [EMAIL PROTECTED] writes:
 I'm trying to get the tree from the cvs:
 cvs -d :pserver:[EMAIL PROTECTED]:/usr/local/cvsroot login

It's /home/projects/pgsql/cvsroot now.

I notice http://www.postgresql.org/docs/postgres/cvs28436.htm
still has the old location :-( ... that needs to be updated.
Hey Vince, isn't that stuff supposed to be rebuilt from sources
nightly?

regards, tom lane



Re: [GENERAL] Dump

2000-06-07 Thread Tom Lane

Mihai Gheorghiu [EMAIL PROTECTED] writes:
 I wanted to back up a database prior to upgrading to 7.0.2
 pg_dump dbname > dbname.bak
 FATAL 1: Memory exhausted in AllocSetAlloc()
 PQendcopy: resetting connection
 SQL query to dump the contents of Table 'tblname' did not execute
 correctly.
 
 Hmm, what version are you using now?  COPY used to have a memory leak
 problem according to the CVS logs, but that was long ago (pre-6.4).

 7.0 on RH6.1

OK, so much for the old-version theory.  What is the full declaration of
table 'tblname'?  (Easiest way to get it is pg_dump -s -t tblname dbname.)
Also, how many rows in the table?

regards, tom lane



Re: [GENERAL] Dump

2000-06-07 Thread Tom Lane

Mihai Gheorghiu [EMAIL PROTECTED] writes:
 pg_dump... outputs nothing!? I did it with -f filename too, and the file is
 empty.
 Size of the file in question: 5MB, 7062 rows. Name of table:
 tblReservations.

Mixed case huh?  It's a little tricky to get pg_dump's -t switch to work
with that; I think you have to write
pg_dump -s -t '"tblReservations"' dbname ...
Without the quoting, pg_dump lowercases the given name.

 I wanted to re-create the error today, but I got something different:
 pg_dump tantest > tantest.bak
 pqWait() -- connection not open
 PQendcopy: resetting connection

Hmm, that looks like a backend coredump.  Did you find a core file in
the database directory?  If so, can you get a backtrace from it?

    regards, tom lane



Re: [GENERAL] plpgsql question...

2000-06-07 Thread Tom Lane

Steve Wampler [EMAIL PROTECTED] writes:
 PostgreSQL 6.5.3

 
 appdb=> create function insert_or_update() returns opaque as '
 appdb'> begin
 appdb'>     insert into attributes_table values(new.id,new.name,
 appdb'>                                         new.units,new.value);
 appdb'>     return NULL;
 appdb'> end;'
 appdb-> language 'plpgsql';
 CREATE
 appdb=> create trigger t before insert on attributes for each row
 appdb-> execute procedure insert_or_update();
 CREATE
 appdb=> insert into attributes values('site','prefix','none','kp');
 NOTICE:  plpgsql: ERROR during compile of insert_or_update near line 2
 ERROR:  syntax error at or near "in"
 appdb=>

 Does anyone see what I've done wrong?

Nothing that I can see.  I copied and pasted this trigger into current
sources and it worked fine.  Ditto for your other example.

There must be something pretty broken about your copy of plpgsql;
dunno what exactly.  I'd recommend updating to 7.0.2 and then seeing
if the problem persists.  If it does we can dig deeper.

    regards, tom lane



Re: [GENERAL] make us of old database.

2000-06-07 Thread Tom Lane

Marcos Lloret [EMAIL PROTECTED] writes:
 i want to make us of an old recovered database from a dead hard
 disk. i just copied the directory
 /usr/local/pgsl/data/base/[database-name]
 i install postgres 6..4 (as before) and i created a new database
 directory and copied all the database.

That's not going to work, at least not that way.  You need to have a
pg_log file that has the same set of transaction commit and abort
records that were in your old database.  Basically, you can transfer an
*entire* pgsql/data tree by copying it, but you don't get to pick and
choose parts.

regards, tom lane



Re: [GENERAL] Column types via ODBC interface

2000-06-07 Thread Tom Lane

Matt Goodall [EMAIL PROTECTED] writes:
 f10   decimal(9,2)  SQL_VARCHAR, 254   SQL_DECIMAL, 9
 f11   numeric(9,2)  SQL_VARCHAR, 254   SQL_DECIMAL, 9

The 6.5 ODBC driver doesn't know about type numeric, so it returns its
default assumption, which is varchar as above.  The 7.0 version knows
about numeric, however.

The SQL type returned for "bool" appears to be a user-settable option.

The choices made for floating types look reasonably plausible.
If you think they're wrong, you'll need to argue why, not just
assert that you think they are.  It's easy enough to change the
ODBC driver's SQL type = Postgres type mapping if there's a
better definition than what we're using...

    regards, tom lane



Re: [GENERAL] Cannot INDEX an Access97 ODBC

2000-06-09 Thread Tom Lane

[EMAIL PROTECTED] writes:
  The psql CREATE INDEX statement can't see the imported column
 (name/attribute)... weird?

I still think that Bryan's got the right idea --- Access probably
created the column name with some embedded blanks and/or upper-case
characters, which'd mean that you have to quote the column name in
order to refer to it.

It would be useful to see the output of
 pg_dump -s -t tablename databasename
for this table.

regards, tom lane



Re: [GENERAL] Ah, yet another cause for not binding the right port....

2000-06-09 Thread Tom Lane

"Steve Wolfe" [EMAIL PROTECTED] writes:
When postgres binds to any port, it likes to write a file in /tmp, along
 the lines of ".s.PGSQL.5432".  If /tmp is not writeable by the postgres
 process, it will not be able to bind to any port.  However, it doesn't give
 you an intelligent error message like "Can't create file", it simply says
 that it can't bind the port,

I tried this and got what I thought was a fairly reasonable error
message:

FATAL: StreamServerPort: bind() failed: Permission denied
Is another postmaster already running on that port?
If not, remove socket node (/tmp/.s.PGSQL.5432) and retry.
postmaster: cannot create UNIX stream port

"Permission denied" is what the kernel told us, and I don't think it's
likely to be a good idea to second-guess why the kernel told us that.
But the reference to /tmp/.s.PGSQL.5432 together with that error ought
to be sufficient clue I would think...

    regards, tom lane



Re: [GENERAL] Problem with upper() in select statement

2000-06-12 Thread Tom Lane

John Cochran [EMAIL PROTECTED] writes:
 John Cochran [EMAIL PROTECTED] writes:
 Why isn't the upper() function working in the first query?
 
 Odd.  You aren't by some chance working in a non-ASCII locale where
 "upper('Boulevard')" yields something besides 'BOULEVARD', are you?

 Nope, using the standard locale. Here is a short session to
 prove that upper() is working like it should.

Oh, never mind.  I was busy looking for complicated answers, but
actually the answer is simple: char(n) and text are not the same
thing because they have different ideas about the significance of
trailing blanks.

create table abbreviation(word char(15) not null);

insert into abbreviation values('BOULEVARD');

select * from abbreviation where word = 'BOULEVARD';
  word
---------------
 BOULEVARD
(1 row)

The above works because the unknown-type literal 'BOULEVARD' is promoted
to char(n) type, and then char(n) vs. char(n) does what you want because
it regards trailing spaces as insignificant: 'BOULEVARD  ' is equal
to 'BOULEVARD' under char(n) rules.

But 'BOULEVARD  ' != 'BOULEVARD' under varchar(n) or text rules.
Thus this doesn't match:

select * from abbreviation where word = 'BOULEVARD'::text;
 word
------
(0 rows)

because the type ordering is that char(n) promotes to text not vice
versa, so you get a text equality comparison here.  Same result with

select * from abbreviation where word::text = 'BOULEVARD';
 word
------
(0 rows)

and more to the point, upper() is a function that yields type text,
so:

select * from abbreviation where word = upper('Boulevard');
 word
------
(0 rows)

You could make it work by coercing upper()'s result back to char(n),
so that char(n) equality is used:

select * from abbreviation where word = upper('Boulevard')::char;
  word
---------------
 BOULEVARD
(1 row)

but on the whole my advice is that you are using the wrong datatype for
this table.  Variable-length strings should be represented by varchar(n)
or text.  Fixed-width char(n) is appropriate for fixed-length strings
like state abbreviations.


 BTW, why doesn't PostgreSQL have a SYSDUMMY table or something like it
 (the way Oracle or IBM's DB2 have).

Don't need it, since we don't require a FROM clause.

regression=# select upper('Boulevard');
   upper
-----------
 BOULEVARD
(1 row)

    regards, tom lane



Re: [GENERAL] does vacuum rebuild index?

2000-06-12 Thread Tom Lane

mikeo [EMAIL PROTECTED] writes:
  we have a 34 million row table.  after many inserts, updates, deletes 
 the performance degraded so we vacuumed the table.  the output indicated 
 that the index was also vacuumed but the size didn't change. it was still 
 80m so we dropped and recreated it reducing it to 20m.

This is in the FAQ isn't it?  VACUUM removes unused index entries but it
doesn't reduce the physical size of the index file.  There's an item on
the TODO list to fix that, but for now a lot of people are in the habit
of doing
drop indexes;
vacuum;
recreate indexes;
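
Spelled out for a single table, that habit looks roughly like this
(the index definition is just for illustration):

    DROP INDEX bigtable_keycol_idx;
    VACUUM ANALYZE bigtable;
    CREATE INDEX bigtable_keycol_idx ON bigtable (keycol);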

This is often faster than what vacuum does, so there's been talk of
changing vacuum to work that way, but there's concern about what happens
if vacuum crashes before it's rebuilt the indexes... at least with the
delete-unused-entries approach you aren't left with an incomplete index.

regards, tom lane



Re: [GENERAL] rules on INSERT can't UPDATE new instance?

2000-06-13 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Is the INSERT rule re-ordering mentioned a TODO item?

Darn if I know.  I threw the thought out for discussion, but didn't
see any comments.  I'm not in a hurry to change it, unless there's
consensus that we should.

regards, tom lane


 Bruce Momjian [EMAIL PROTECTED] writes:
 I thought an INSERT rule with an UPDATE action would work on the same
 table, but that fails.  Seems the rule is firing before the INSERT
 happens.
 
 Yes, a trigger is the right way to do surgery on a tuple before it is
 stored.  Rules are good for generating additional SQL queries that will
 insert/update/delete other tuples (usually, but not necessarily, in
 other tables).  Even if it worked, a rule would be a horribly
 inefficient way to handle modification of the about-to-be-inserted
 tuple, because (being an independent query) it'd have to scan the table
 to find the tuple you are talking about!
 
 The reason the additional queries are done before the original command
 is explained thus in the source code:
 
 * The original query is appended last if not instead
 * because update and delete rule actions might not do
 * anything if they are invoked after the update or
 * delete is performed. The command counter increment
 * between the query execution makes the deleted (and
 * maybe the updated) tuples disappear so the scans
 * for them in the rule actions cannot find them.
 
 This seems to make sense for UPDATE/DELETE, but I wonder whether
 the ordering should be different for the INSERT case: perhaps it
 should be original-query-first in that case.



Re: [GENERAL] Backend died while dropping index

2000-06-17 Thread Tom Lane

Mike Mascari [EMAIL PROTECTED] writes:
 [EMAIL PROTECTED] wrote:
 How do I fix this? This is on 6.5.

 1. Change to the location of the datafiles, which, for a
 database, called "mydatabase" will look something like:
 2. Create an empty file of the name of the index:
 3. Delete the file using psql:
 4. Recreate the index as appropriate

And

5. Update to 7.0.2 at your earliest convenience, if not sooner.

No smileys here.  I've noticed several people lately saying they're
still on 6.5 (or earlier :-() because they're "waiting for 7.0 to
settle out".  Wake up and smell the roses folks.  7.0 certainly has
bugs, and maybe a few teething pains yet to go, but it has far fewer
bugs than any prior release --- and most of the bugs it does have
are undoubtedly present in prior releases as well.  Holding off on
an update isn't buying you anything except continued exposure to the
hundreds of bugs we've swatted since 6.5.

        regards, tom lane



Re: [GENERAL] libpq error codes

2000-06-22 Thread Tom Lane

Denis Perchine [EMAIL PROTECTED] writes:
 And you use PIPE, but I use sockets. If I just do psql -d db, all is
 as you've said, but if I do psql -d db -h localhost the picture is as
 follows:

Works the same for me with either pipe or socket connection.  I think
something must be broken on your platform --- what platform are you
using, anyway?

 db=> select count(*) from pg_class;
 pqReadData() --  read() failed: errno=32
 Оборванный канал ("Broken pipe")

The two obvious questions about this are (a) what is errno 32 on
your system and (b) why is your strerror() yielding garbage instead
of an appropriate error message?

On my system errno 32 is EPIPE, but surely read() should never
return EPIPE.

regards, tom lane



Re: [GENERAL] Conversion from MS Access to Postgresql

2000-06-27 Thread Tom Lane

"Len Morgan" [EMAIL PROTECTED] writes:
 ...  I cannot seem to make Postgres join two
 tables when the type of one is char(9) and the other is character
 varying(9).  The machine seems to go into an endless loop.

What?  Specific example, please.

 A similar problem I have is with fix precision fields.  I want to
 export a numeric(10,2) number from Access but they always end up
 numeric(30,6).

I don't think our 6.5.* ODBC driver knows anything about numeric,
so you're probably going to get default numeric precision if you
go through it.  You might have better luck with 7.0.

    regards, tom lane



Re: [GENERAL] puzzled by the docs

2000-06-27 Thread Tom Lane

Hernan Gonzalez [EMAIL PROTECTED] writes:
 The PostgreSQL Administrator's Guide which appears on the web
 http://www.postgresql.org/docs/admin/index.html
 is fairly different from the one which is packed with
 the 7.0.2 distribution.

 Which is the good one?

The files appearing under http://www.postgresql.org/docs/ are a
snapshot of current development, not the docs that go with the
most recent release.  Any changes you see are work that will be
in the next major release (ie, 7.1).

We have been planning to rearrange the website so that the main
docs page shows the most recent release, and the development
snapshot appears someplace else, but I guess Vince hasn't got
round to it yet ...

regards, tom lane



Re: [GENERAL] trigger question

2000-06-27 Thread Tom Lane

mikeo [EMAIL PROTECTED] writes:
 in oracle, the triggers were smart enough to know not to reference
 an old value on insert in an "insert or update" trigger procedure,
 apparently.

 this is the original oracle trigger that works fine 
 with the same insert statement:

 CREATE OR REPLACE TRIGGER rates_hist_trigger
 before insert or update on rates
 for each row
  WHEN (old.rt_valid <> 'P' or new.rt_valid not in ('Y','N'))

Hmm.  It sounds to me like Oracle treats the OLD fields as being NULL
if the context is INSERT, which is something we could certainly do at
the price of losing some error detection capability --- ie, if that
really had been a typo as I first thought, the system wouldn't flag it
for you.

Not sure which way is better.  Comments anyone?
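
In the meantime, the way to get a similar effect here is to make the
test inside the trigger function, since our CREATE TRIGGER has no WHEN
clause.  A rough, untested plpgsql sketch using the names above:

    create function rates_hist_check() returns opaque as '
    begin
        if TG_OP = ''UPDATE'' and old.rt_valid = ''P''
           and new.rt_valid in (''Y'',''N'') then
            return new;    -- WHEN condition not met: do nothing extra
        end if;
        -- the history-keeping actions of the trigger body go here
        return new;
    end;' language 'plpgsql';

    create trigger rates_hist_trigger before insert or update on rates
        for each row execute procedure rates_hist_check();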

    regards, tom lane



Re: [GENERAL] Lingering Backend Processes

2000-06-28 Thread Tom Lane

"Arthur M. Kang" [EMAIL PROTECTED] writes:
 If a postgres connection dies without explicitly disconnecting from the
 database, it seems that the backend process hangs around
 indefinitely.  Does anyone know of a way to get around this??

This sounds like a bug in either your webserver or the underlying
kernel: the postgres backend isn't getting notified that the connection
has been closed.  Do you still see the connections as "established"
in netstat?

We did recently change the backend to set TCP "keepalive" mode, so
that the kernel will actively probe for a still-alive far end after a
sufficiently long idle period.  But I think the keepalive timeout is
of the order of half an hour, so it might be too long for your tastes.
Not sure it's relevant here anyway; keepalive should only matter if
the far end has suffered either a kernel crash or loss of network
connectivity.

Anyway, the short answer is that the backend shuts down when the kernel
tells it the connection to the client is no longer open.  You need to be
looking at TCP connection management issues if the client seems to be
able to quit without provoking a connection closure.

 I'm using Postgres 7.0.2 and ApacheDBI to connected.   If I start the
 web server with 10 servers, there is immediately 10 postgres processes.
 If I HUP the web server, there become 20 postgres processes.

Offhand I would guess that Apache forgets to close its outgoing TCP
connections when you HUP it... check with netstat or lsof or something
like that.

        regards, tom lane



Re: Re[2]: [GENERAL] Restricting queries by the presence of a WHERE clause

2000-06-28 Thread Tom Lane

John Morton [EMAIL PROTECTED] writes:
 webclient  ---  proxy  ---  Postgres

 Minuses:

 - Another machine to buy
 - Root on the proxy is as bad as root was on the secure server, so..
 - The simple hack will need thorough security auditing
 - It's yet another machine to secure.
 - It's yet another point of failure.

What??  Who said anything about another machine?  I was just
thinking another daemon process on the database server machine.

 What I'm really asking is is there any way of doing this with just
 triggers or rules, and if not, can a function be written to examine the
 where clause (or whatever it's called in the parse tree) and select
 triggers be hacked into the database?

No.  If there were, what makes you think that it'd be easier to
security-audit it than a standalone proxy?

regards, tom lane



Re: [GENERAL] pg_atoi()

2000-06-28 Thread Tom Lane

Richard Harvey Chapman [EMAIL PROTECTED] writes:
 Is there a reason why pg_atoi() was programmed to fail if the entire
 input string is not valid?
 i.e. "109" yields 109, but "109 apples" yields an error.

Because that's what it ought to do, if you ask me ;-).
"109 foo" is not a valid integer value.  If you want your app to
accept such things, do your own input parsing and filtering.
A database server should not be lax about what it considers valid
data.
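
For instance, with an int4 column, something like

    INSERT INTO items (qty) VALUES ('109 apples');

ought to be (and is) rejected outright.  (Table and column names
invented for illustration, of course.)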

        regards, tom lane



Re: [GENERAL] Comments with embedded single quotes

2000-06-28 Thread Tom Lane

Richard Harvey Chapman [EMAIL PROTECTED] writes:
 Are single quotation marks not allowed in comments?

 test2=# /* John's cat is fat. */
 test2'#
 test2'# '*/
 test2-# ;
 ERROR:  Unterminated quoted string
 test2=#

They are, but it looks like psql's primitive parser is confused here.
What the backend sees when this is sent is
/* comment */

'*/

and it quite properly complains that the string starting '*/ is not
terminated.  But it looks like psql mistakenly thinks that ' nests
inside /* ... */:

regression=# /*aaa
regression*# 'sss
regression'# ddd
regression'# */
regression'# 'sss
regression*# */
regression-#

Notice the pattern of the 'state' markers in the prompts.  It seems
to get the reverse case correct though:

regression-# 'foo
regression'# /*bar
regression'# '
regression-#

Over to you, Peter...

regards, tom lane



Re: [GENERAL] Importing data w/ Unix timestamp

2000-06-29 Thread Tom Lane

[EMAIL PROTECTED] writes:
 I want to copy in some data from a tab-delimited flat file, but one of the
 columns that should translate into a datetime is in Unix timestamp format
 (seconds since epoch). 

COPY isn't going to apply any datatype conversions for you.  What you
could do, though, is import into a temp table that has an int4 column
in that position, and then transfer the data to the real table with
something like

insert into RealTable select uname, timestamp(abstime(timeinAsInt4)),
 duration, etc from TempTable;
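
Fleshed out a bit --- the flat-file column names here are invented,
since I don't know what your file actually contains:

    CREATE TABLE TempTable (uname text, timein int4, duration int4);
    COPY TempTable FROM '/path/to/flatfile';   -- tab-delimited is the default
    INSERT INTO RealTable
        SELECT uname, timestamp(abstime(timein)), duration
        FROM TempTable;
    DROP TABLE TempTable;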


Side comment: a lot of people seem to think that COPY is a data
translation utility.  It's not; it was only really intended as a simple
dump/reload method, for which purpose it should be as picky as possible
about the reloaded data.  At some point it'd be nice to have a program
that *is* designed as a data importer and is willing to do data format
conversions for you.  I envision this as a separate client program,
so it wouldn't take any deep dark backend-programming ability to write
it, just some knowledge about typical file formats and conversions.
Anyone want to take on the project?

regards, tom lane



Re: [GENERAL] disk backups

2000-06-29 Thread Tom Lane

Martijn van Oosterhout [EMAIL PROTECTED] writes:
 Is there a better way? Here pg_dumping the DB takes over half an hour
 (mainly because pg_dump chews all available memory).

pg_dump shouldn't be a performance hog if you are using the default
COPY-based style of data export.  I'd only expect memory problems
if you are using INSERT-based export (-d or -D switch to pg_dump).
For now, the answer is "don't do that" ... at least not on big tables.

This could be fixed in either of two ways:

1. recode pg_dump to use DECLARE CURSOR and FETCH to grab table contents
in reasonable-size chunks, instead of with an all-at-once SELECT (see
the sketch below);

2. add an API to libpq that allows a select result to be retrieved
on-the-fly rather than accumulating it in libpq's memory.

The second is more work but would be more widely useful.
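
The cursor approach would look roughly like this inside pg_dump
(sketch only; the cursor name and chunk size are arbitrary):

    BEGIN;
    DECLARE t_cur CURSOR FOR SELECT * FROM onebigtable;
    FETCH FORWARD 1000 FROM t_cur;    -- repeat until no rows come back
    CLOSE t_cur;
    END;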

However, it's not been much of a priority, since insert-based data
export is so slow to reload that no sensible person uses it for
big tables anyway ;-)

    regards, tom lane



Re: [GENERAL] NOTICE messages during table drop

2000-06-29 Thread Tom Lane

"kurt miller" [EMAIL PROTECTED] writes:
 Found these messages in the log this morning.
 Can anyone explain why?

 NOTICE:  RegisterSharedInvalid: SI buffer overflow
 NOTICE:  InvalidateSharedInvalid: cache state reset

Probably this indicates that you had another backend somewhere that
had been sitting in an open transaction for a long time and therefore
was not reading its SI "mail" about system table changes.  Eventually
the SI message buffer overflows and the above notices result.

7.0 recovers cleanly from an SI overflow, so the notices should be
pretty harmless (and in a release or two they'll probably be removed,
or at least downgraded to DEBUG level so they don't appear by default).
But in prior releases this notice was often a harbinger of impending
doom :-(, because the cache-reset code didn't really work reliably.

If you see a *lot* of these during normal operations, you might have
reason to be concerned about the performance lost due to all the cache
flushes --- everybody pays for one backend's slowness when this happens.
In that case it'd be worth figuring out why your clients are leaving
backends sitting idle for long periods within open transaction blocks,
and trying to avoid that.  But an occasional SI overrun is normal and
nothing to worry about ... at least not in 7.x.

        regards, tom lane



Re: [GENERAL] Trigger programming..

2000-06-30 Thread Tom Lane

"Mitch Vincent" [EMAIL PROTECTED] writes:
 I have this code...

  tupdesc = rel->rd_att;  /* what the tuple looks like (?) */

  app_id_colnum = SPI_fnumber(tupdesc, app_id_fieldname);

   if (app_id_colnum == SPI_ERROR_NOATTRIBUTE)
elog(ERROR, "app_id_colnum - SPI_ERROR_NOATTRIBUTE error ");

  char_app_id = SPI_getvalue(rettuple, tupdesc, app_id_colnum);

Looks OK to me, as far as it goes.  I'd wonder whether the tuple
actually matches the tupdesc you're using.

Looking at SPI_getvalue itself (in src/backend/executor/spi.c),
I can see that SPI_result might offer a clue about why it's failing.

        regards, tom lane



Re: [GENERAL] Performance of Postgres via network connections

2000-06-30 Thread Tom Lane

"Steve Wolfe" [EMAIL PROTECTED] writes:
 After this is all set up, if anyone would like, I may type up an
 explanation of how things were done as well as costs, for those going
 through the same sort of growing pains.  It's certainly been a lot of work
 for us to hammer out all of the details, hopefully that would help someone
 else avoid the work.

Please do --- I think a lot of people would find it interesting.

    regards, tom lane



Re: [GENERAL] vacuumdb problem

2000-07-02 Thread Tom Lane

Marcin Inkielman [EMAIL PROTECTED] writes:
 NOTICE:  FlushRelationBuffers(osoby, 228): block 223 is referenced
 (private 0, global 1)
 FATAL 1:  VACUUM (vc_repair_frag): FlushRelationBuffers returned -2

 this table is referenced in my db by a tree of FOREIGN KEYs. 

Hmm, I wonder whether there is a buffer-refcount leak in the foreign
key stuff.

 however my db seems to work and I am able to do pg_dump
 Rescently, I dumped and restored it and for a few days I was able to
 do vacuumdb. Today, the problem is here again.

You will probably find that stopping and restarting the postmaster
will make the problem go away (until it comes back again).  Might
be an acceptable workaround to let you vacuum, until we can find
the bug.

Do you think you can extract a reproducible example?  Short of that,
it'd be helpful to at least see the declarations of "osoby" and all
associated tables.

    regards, tom lane



Re: [GENERAL] psql dumps core

2000-07-04 Thread Tom Lane

"K. Ari Krupnikov" [EMAIL PROTECTED] writes:
 psql on the client machine aborts with this message:

 psql:recreate-dbdom-db.pgsql:4: \connect: pqReadData() -- backend closed
 the channel unexpectedly.
 This probably means the backend terminated abnormally
 before or while processing the request.
 Segmentation fault (core dumped)

 I get a core dump in the current directory on the client.

No advice possible with so little information.  What is the query
that triggers the crash?  What are the definitions of the tables
used in the query?  Can you get a backtrace from the psql coredump
(and also from the backend coredump, if there is one ... which seems
likely)?

    regards, tom lane



[ANNOUNCE] Re: [GENERAL] Re: [HACKERS] proposed improvements to PostgreSQL license

2000-07-04 Thread Tom Lane

Thomas Lockhart [EMAIL PROTECTED] writes:
 Postgres is starting to become a visible thing, and is going to be used
 by people who don't know much about the free software movement. And
 *I'm* within reach of the American court system, and *you* can
 contribute code which could make me a target for a lawsuit.

A further comment here: BSD and similar licenses have indeed been used
successfully for a couple of decades --- within a community of like-
minded hackers who wouldn't dream of suing each other in the first
place.  Postgres is starting to get out into a colder and harder world.
To name just one unpleasant scenario: if PG continues to be as
successful as it has been, sooner or later Oracle will decide that we
are a threat to their continued world domination.  Oracle have a
longstanding reputation for playing dirty pool when they feel it
necessary.  It'd be awfully convenient for them if they could eliminate
the threat of Postgres with a couple of well-placed lawsuits hinging on
the weaknesses of the existing PG license.  It'd hardly even cost them
anything, if they can sue individual developers who have no funds for
a major court case.

Chris and Peter may not feel that they need to worry about the
sillinesses of the American legal system, but those of us who are
within its reach do need to worry about it.

I'm not opining here about the merits or weaknesses of Great Bridge's
proposal.  (What I'd really like is to see some review from other
legal experts --- surely there are some people on these mailing lists
who can bring in their corporate legal departments to comment?)  But
what we have here is a well-qualified lawyer telling us that we've got
some problems in the existing license.  IMHO we'd be damned fools to
ignore his advice completely.  Sticking your head in the sand is not
a good defense mechanism.

regards, tom lane



  1   2   3   4   5   6   7   8   9   10   >