[ADMIN] Disk space consumed by pk not returned after vacuum or reindex

2006-09-13 Thread Kevin Johnson




Hello,

I am running into a problem on a RHEL3 system running PostgreSQL 7.4.

We have a database which consistently consumes more and more of the
disk space in its lvol until it reaches 100%.  So far, we have tried
running a full vacuum on the database, with limited success.
Eventually, we had to drop and reload the database with the same data
inside.  That brought the disk usage down to 73%; it then began to creep
once more toward 100%.  After some research, I was able to use the
pg_class catalog to find that the items expanding the quickest are
primary key (btree) indexes.  I attempted to run a REINDEX on one of
the tables whose pk takes up the largest amount of space.  The usage
according to pg_class dropped dramatically; however, the disk space was
not returned to the system.  So I attempted another full vacuum
afterwards, and still nothing was returned to the system.  These tables
are updated extremely frequently (although their total number of rows
is close to constant), which is my guess as to why the pk indexes grow
so rapidly in terms of their disk usage.  Unfortunately, our PostgreSQL
knowledge is limited, and I was wondering if anyone has experienced
something similar / knows what else we can do to return this disk space
to the system?
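[For reference, a query along these lines against pg_class will list the largest indexes on 7.4; the LIMIT is illustrative, and relpages is only as fresh as the last VACUUM/ANALYZE:]

```sql
-- List indexes by on-disk size (default page size is 8 kB).
SELECT relname,
       relpages,
       relpages * 8 / 1024 AS approx_mb
FROM pg_class
WHERE relkind = 'i'      -- 'i' = index; use 'r' for plain tables
ORDER BY relpages DESC
LIMIT 10;
```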

Thank you in advance for any/all help!

Kevin





---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] COPY FROM command v8.1.4

2006-09-13 Thread Mr. Dan





> This is pretty confusing.  You mean that groups of 25 adjacent rows were
> missing in the output?

Yes, isn't that interesting?  It's always in a group of 25 rows.  This is
always random.  Say 800,000 to 800,025 out of 12 million represents one
random group of 25 rows.

> What's the (24+1) supposed to convey?

One time we thought we counted 24 rows (in a row - yes, adjacent).

> groups aligned on 25-row boundaries in the whole output (ie, are there
> an exact multiple of 25 rows before each dropped group)?

Yes

> How are you doing the copies, exactly?  SQL COPY command, psql \copy,
> something else?

We've tried SQL COPY and psql \copy and always get random results - 0, 1, or 2
blocks of 25 rows missing.

Is there a pre-compiled version of postgres we can try?  Where could I
download that?  We built postgres with GNU make 3.80; the documentation says
3.76 and higher is OK.






[ADMIN] After how many updates should a vacuum be performed?

2006-09-13 Thread Ellen Cyran

We have a database on which we periodically perform large updates,
around a million records; after that, a vacuum will run for 12 hours
without completing.  When that happens, I typically remove the 2 indexes
and 1 constraint on the largest table (7 million records), and then the
vacuum completes in a couple of hours and the indexes can be recreated
in half an hour.


After how many updates should a vacuum be performed?  Do indexes need to 
be recreated periodically?  Would auto vacuuming help in this case?  Any 
suggestions on tuning?


The database is in PostgreSQL 8.0.7.

Thanks.

Ellen





Re: [ADMIN] After how many updates should a vacuum be performed?

2006-09-13 Thread Tom Lane
Ellen Cyran <[EMAIL PROTECTED]> writes:
> We have a database that periodically we perform large updates, around
> a million records, after that the vacuum will run for 12 hours without
> completing.  After that, I typically remove the 2 indexes and 1 
> constraint on the largest table, 7 million records, and the vacuum will 
> complete in a couple of hours and the indexes can be recreated in a half 
> hour.

By "vacuum" do you mean VACUUM FULL?  My advice is not to use that,
just plain VACUUM.  Don't forget to make sure your FSM is large enough,
too.
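[The FSM settings in question live in postgresql.conf; the values below are illustrative placeholders, not recommendations -- the tail of a database-wide VACUUM VERBOSE reports how many pages and relations the FSM actually needs to cover:]

```
# postgresql.conf -- free space map (illustrative values)
max_fsm_pages = 500000      # must cover all pages with reclaimable space
max_fsm_relations = 1000    # tables + indexes with free space tracked
```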

regards, tom lane



Re: [ADMIN] After how many updates should a vacuum be performed?

2006-09-13 Thread Ellen Cyran
No, I mean VACUUM ANALYZE.  I'll run VACUUM VERBOSE and see about
adjusting the FSM.  Thanks.


Tom Lane wrote:
> Ellen Cyran <[EMAIL PROTECTED]> writes:
>> We have a database that periodically we perform large updates, around
>> a million records, after that the vacuum will run for 12 hours without
>> completing.  After that, I typically remove the 2 indexes and 1
>> constraint on the largest table, 7 million records, and the vacuum will
>> complete in a couple of hours and the indexes can be recreated in a half
>> hour.
>
> By "vacuum" do you mean VACUUM FULL?  My advice is not to use that,
> just plain VACUUM.  Don't forget to make sure your FSM is large enough,
> too.
>
> regards, tom lane






Re: [ADMIN] COPY FROM command v8.1.4

2006-09-13 Thread Tom Lane
"Mr. Dan" <[EMAIL PROTECTED]> writes:
>> How are you doing the copies, exactly?  SQL COPY command, psql \copy,
>> something else?

> We've tried SQL COPY and psql \copy and always get random results - 0, 1, or 2 
> blocks of 25 rows missing.

Hmph.  If it happens with a SQL COPY command then psql seems to be off
the hook, and that also eliminates some theories about dropped TCP
packets and such.

Would you check back in the source table for the COPY and see what the
ctid values are for the missing rows?  I'm wondering about a pattern
like "the dropped rows of a group are all on the same disk page", ie,
what's being missed is one whole page at a time.
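[A sketch of that check, with the table name and key range invented for illustration:]

```sql
-- ctid is (block, tuple); rows sharing the first number live on the
-- same disk page.  'big_table' and the id range are placeholders for
-- the real source table and the keys of the dropped rows.
SELECT ctid, id
FROM big_table
WHERE id BETWEEN 800000 AND 800025
ORDER BY id;
```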

If that's what's happening, the only very plausible theory I can think
of is that your disk drive is sometimes glitching and returning a page
of all-zeroes instead of what it should return.  Postgres will not
complain about this in normal operation (because there are legitimate
error-recovery scenarios where a zero page can be in a table); it'll
just treat the page as empty.  VACUUM will complain though, so the next
step would be to set up a test table by copying your large table and
then repeatedly run plain VACUUM on the test table.  If you get sporadic
warnings "relation foo page N is uninitialized --- fixing" then we have
the smoking gun.  Don't run this test directly on a valuable table, as
each such message would mean you just lost another page of data :-(
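[That test could be scripted roughly like this; the database and table names are placeholders:]

```shell
# Copy the big table, then vacuum the copy repeatedly, watching for
# "page N is uninitialized" warnings.  Do NOT run this on the real table.
psql -d mydb -c "CREATE TABLE vacuum_test AS SELECT * FROM big_table;"
for i in 1 2 3 4 5 6 7 8 9 10; do
    psql -d mydb -c "VACUUM VERBOSE vacuum_test;" 2>&1 \
        | grep -i uninitialized && echo "glitch on pass $i"
done
```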

FWIW, I spent several hours yesterday evening copying 6GB tables around
to see if I could reproduce any such problem, and I couldn't...

regards, tom lane



Re: [ADMIN] After how many updates should a vacuum be performed?

2006-09-13 Thread Tom Lane
Ellen Cyran <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> By "vacuum" do you mean VACUUM FULL?

> No, I mean vacuum analyze.

Hm, that should be OK.  What do you have maintenance_work_mem set to?

regards, tom lane



Re: [ADMIN] After how many updates should a vacuum be performed?

2006-09-13 Thread Ellen Cyran

It's set at the default 16384.

Tom Lane wrote:
> Ellen Cyran <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> By "vacuum" do you mean VACUUM FULL?
>>
>> No, I mean vacuum analyze.
>
> Hm, that should be OK.  What do you have maintenance_work_mem set to?
>
> regards, tom lane






Re: [ADMIN] After how many updates should a vacuum be performed?

2006-09-13 Thread Tom Lane
Ellen Cyran <[EMAIL PROTECTED]> writes:
>> Hm, that should be OK.  What do you have maintenance_work_mem set to?

> It's set at the default 16384.

That should be plenty for getting rid of a million or so tuples.  I'm
wondering if you are seeing some weird locking effect.  Is the VACUUM
constantly busy with I/O or does it sit and wait at points?  Do you have
other queries actively accessing the table during the VACUUM?

regards, tom lane



Re: [ADMIN] After how many updates should a vacuum be performed?

2006-09-13 Thread Ellen Cyran
Someone else was doing the vacuum that didn't complete this last time,
and they started it at night, so no other queries were running.  I
wasn't monitoring I/O usage at the time; in the past I just always
removed the indexes and vacuumed when this happened.


This is on a Solaris server; would you suggest any additional commands
besides iostat to monitor the I/O?
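[On Solaris, something like the following is a reasonable starting point; flags are the common ones, so check the local man pages:]

```shell
# Per-device utilization every 5 seconds; %b near 100 on the data disk
# while the VACUUM runs suggests it is I/O-bound rather than lock-bound.
iostat -xn 5

# System-wide paging/CPU picture to run alongside it:
vmstat 5
```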


Ellen


Tom Lane wrote:
> Ellen Cyran <[EMAIL PROTECTED]> writes:
>>> Hm, that should be OK.  What do you have maintenance_work_mem set to?
>>
>> It's set at the default 16384.
>
> That should be plenty for getting rid of a million or so tuples.  I'm
> wondering if you are seeing some weird locking effect.  Is the VACUUM
> constantly busy with I/O or does it sit and wait at points?  Do you have
> other queries actively accessing the table during the VACUUM?
>
> regards, tom lane






Re: [ADMIN] COPY FROM command v8.1.4

2006-09-13 Thread Ray Stell



You said "local Reiser FS."  Maybe repeat on one of the others, Ext3/JFS?

Tom asked about hardware issues; is there nothing in syslog
that relates to the timing of the event?  I don't recall you
responding in public to this.  Maybe I missed it.

Just musing...





On Wed, Sep 13, 2006 at 12:50:26PM -0400, Tom Lane wrote:
> "Mr. Dan" <[EMAIL PROTECTED]> writes:
> >> How are you doing the copies, exactly?  SQL COPY command, psql \copy,
> >> something else?
> 
> > We've tried SQL COPY and psql \copy and always get random results - 0, 1, or 2
> > blocks of 25 rows missing.
> 
> Hmph.  If it happens with a SQL COPY command then psql seems to be off
> the hook, and that also eliminates some theories about dropped TCP
> packets and such.
> 
> Would you check back in the source table for the COPY and see what the
> ctid values are for the missing rows?  I'm wondering about a pattern
> like "the dropped rows of a group are all on the same disk page", ie,
> what's being missed is one whole page at a time.
> 
> If that's what's happening, the only very plausible theory I can think
> of is that your disk drive is sometimes glitching and returning a page
> of all-zeroes instead of what it should return.  Postgres will not
> complain about this in normal operation (because there are legitimate
> error-recovery scenarios where a zero page can be in a table); it'll
> just treat the page as empty.  VACUUM will complain though, so the next
> step would be to set up a test table by copying your large table and
> then repeatedly run plain VACUUM on the test table.  If you get sporadic
> warnings "relation foo page N is uninitialized --- fixing" then we have
> the smoking gun.  Don't run this test directly on a valuable table, as
> each such message would mean you just lost another page of data :-(
> 
> FWIW, I spent several hours yesterday evening copying 6GB tables around
> to see if I could reproduce any such problem, and I couldn't...
> 
>   regards, tom lane
> 

-- 



Re: [ADMIN] psql command

2006-09-13 Thread Iuri Sampaio








I forgot to contextualize,

env | grep PATH:

[EMAIL PROTECTED]:~$ env | grep PATH
PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games

Funny thing, speaking about full paths... there is no psql command
in "/usr/bin/":

[EMAIL PROTECTED]:~$ ls -l /usr/bin/psql
ls: /usr/bin/psql: No such file or directory

I installed postgresql 8.1 from the source tarball and followed the
steps at http://openacs.org/doc/current/postgres.html -- I didn't use
apt-get install.

From: Iuri Sampaio [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 07, 2006 2:57 AM
To: 'pgsql-admin@postgresql.org'
Subject: psql command

The postgres commands work fine for the postgres user but don't work
for a regular user account.

I already created the user as postgres:

desktop:~# su - postgres
[EMAIL PROTECTED]:~$ createuser -a -d oacsbr

and set in .bashrc:

LD_LIBRARY_PATH=:/usr/local/pgsql/lib:/usr/local/pgsql/lib
PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/usr/local/pgsql/bin:/usr/local/pgsql/bin

But somehow I can't run any commands such as:

[EMAIL PROTECTED]:~$ createdb -E UNICODE mytestdb
-su: createdb: command not found
[EMAIL PROTECTED]:~$ psql -l
-su: psql: command not found
[EMAIL PROTECTED]:~$ psql -l mytestdb.dmp mytestdb
-su: psql: command not found

Does anyone know what I've missed?

iuri

 








[ADMIN] psql command

2006-09-13 Thread Iuri Sampaio








The postgres commands work fine for the postgres user but don't work
for a regular user account.

I already created the user as postgres:

desktop:~# su - postgres
[EMAIL PROTECTED]:~$ createuser -a -d oacsbr

and set in .bashrc:

LD_LIBRARY_PATH=:/usr/local/pgsql/lib:/usr/local/pgsql/lib
PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/usr/local/pgsql/bin:/usr/local/pgsql/bin

But somehow I can't run any commands such as:

[EMAIL PROTECTED]:~$ createdb -E UNICODE mytestdb
-su: createdb: command not found
[EMAIL PROTECTED]:~$ psql -l
-su: psql: command not found
[EMAIL PROTECTED]:~$ psql -l mytestdb.dmp mytestdb
-su: psql: command not found

Does anyone know what I've missed?

iuri

 








[ADMIN] real and effective user ids must match

2006-09-13 Thread Liustech



My postgres cannot start up; I get this error message:

postmaster successfully started
/usr/bin/postmaster: real and effective user ids must match

and

Sep 12 14:18:26 TWeb su(pam_unix)[6683]: session opened for user postgres by (uid=0)
Sep 12 14:18:26 TWeb su(pam_unix)[6683]: session closed for user postgres
Sep 12 14:18:29 TWeb su(pam_unix)[6702]: session opened for user postgres by (uid=0)
Sep 12 14:18:29 TWeb su(pam_unix)[6702]: session closed for user postgres
Sep 12 14:18:30 TWeb postgresql: Starting postgresql service:  failed

What is happening and how do I fix it?

regards
David


[ADMIN] create user access only to specific tables

2006-09-13 Thread Thilina Ranaweera
I am new to PostgreSQL.  Please tell me how to define user access only
to selected tables.

Thanks


[ADMIN]

2006-09-13 Thread Ronald Cortbauwi








Hello Sir,

I am using PostgreSQL, and I want an OLE DB provider for Postgres in
order to connect through .NET to a Postgres database.  I installed
pgoledb.dll and registered it, and in the connection string I am using
"Provider=PostgreSQL.1 . . .", but the following error is occurring:

Error: System.Data.OleDb.OleDbException: FATAL:  invalid command-line
arguments for server process HINT:  Try "postgres --help" for more
information.

I need help with how I can fix this problem.  Thank you.

Ronald Cortbawi
Software Engineer
EVER ME
Office: +961-1-513531
Fax: +961-1-513534
Email: [EMAIL PROTECTED]
Web Site: www.ever-me.com


 








Re: [ADMIN] Vacuum error on database postgres

2006-09-13 Thread andy

Tom Lane wrote:
> "Paul B. Anderson" <[EMAIL PROTECTED]> writes:
>> I did delete exactly one of each of these using ctid and the query then
>> shows no duplicates.  But, the problem comes right back in the next
>> database-wide vacuum.
>
> That's pretty odd --- I'm inclined to suspect index corruption.
>
>> I also tried reindexing the table.
>
> Get rid of the duplicates (actually, I'd just blow away all the
> pg_statistic entries for each of these tables) and *then* reindex.
> Then re-analyze and see what happens.
>
> Worst case you could just delete everything in pg_statistic, reindex it,
> do a database-wide ANALYZE to repopulate it.  By definition there's not
> any original data in that table...
>
> regards, tom lane



Hello, newbie here...

I seem to have run across this same error, and I believe I can
re-create it too.

I'm running 8.1.4 on slackware-(almost 11).

I have two scripts: one creates tables and indexes, the other has lots
of "copy from stdin" statements.  The very last line is

VACUUM VERBOSE ANALYZE;

which eventually gives me the same error.  This is a test box, with test
data, and this seemed to work:

delete from pg_statistic;
reindex table pg_statistic;
vacuum analyze;

So I'm OK, but I tried it again, by dropping the database and re-running
both scripts, and got the same error again.  So I thought I'd offer a
test case if there was interest.

The data is 46 meg compressed and 500'ish meg uncompressed.  It's a
little bit sensitive, so I'd like to be a little discreet with it, but I
can put it on a website for a developer to download.

Please contact me privately for a link:

[EMAIL PROTECTED]

...Oh, also, this box has locked up on me a few times, so it's not the
most stable thing in the world.  It did not lock up during import of
this data, but there is flaky hardware in there someplace, so it could
very well be a hardware issue.

-Andy



Re: [ADMIN] real and effective user ids must match

2006-09-13 Thread Michael Fuhr
On Tue, Sep 12, 2006 at 02:26:11PM +0800, Liustech wrote:
> My postgres can not start up,  I get this error message:
> 
> postmaster successfully started
> /usr/bin/postmaster: real and effective user ids must match

How are you starting the postmaster?  The error suggests that the
postmaster or the program that runs the postmaster is setuid.  For
security reasons PostgreSQL refuses to run that way.
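[A quick way to check, using the path from the error message; an 's' in the owner-execute position of the mode marks a setuid binary:]

```shell
ls -l /usr/bin/postmaster
# Also check any wrapper or init script that launches it:
ls -l /etc/init.d/postgresql
# If the bit is set, clearing it would look like:
# chmod u-s /usr/bin/postmaster
```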

-- 
Michael Fuhr



Re: [ADMIN] create user access only to specific tables

2006-09-13 Thread Michael Fuhr
On Tue, Sep 12, 2006 at 06:11:19PM +0600, Thilina Ranaweera wrote:
> I am new to PostgreSql .Please tell  me  how  to define user access only to
> selected tables

See the GRANT and REVOKE commands.

http://www.postgresql.org/docs/8.1/interactive/ddl-priv.html
http://www.postgresql.org/docs/8.1/interactive/sql-grant.html
http://www.postgresql.org/docs/8.1/interactive/sql-revoke.html
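[A minimal sketch; the role and table names are invented for illustration:]

```sql
-- Give app_user access to just two tables and nothing else.
-- (8.1 has no "GRANT ... ON ALL TABLES"; privileges are per-table.)
REVOKE ALL ON accounts, orders, audit_log FROM app_user;
GRANT SELECT, INSERT, UPDATE ON accounts, orders TO app_user;
```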

-- 
Michael Fuhr
