Re: [GENERAL] 10/13 SFPUG meeting, "The Mighty GUCS," video now available

2009-10-19 Thread Viktor Rosenfeld
Hi,

I second the request for the files referred to in the video --
particularly postgresql.conf.simple and dependencies.

Cheers,
Viktor

Christophe Pettus wrote:

> The video archive for the 10/13 SFPUG meeting, "The Mighty GUCS: A
> guide to the essential PostgreSQL settings you need to know," is now
> available:
> 
>   http://thebuild.com/blog/2009/10/16/the-mighty-gucs/
> 
> It's also available on Vimeo:
> 
>   http://vimeo.com/7109722
> --
> -- Christophe Pettus
>x...@thebuild.com
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] About could not connect to server: Connection timed out

2009-10-19 Thread Johan Nel

Craig Ringer wrote:
> On Tue, 2009-10-20 at 12:12 +0800, 黄永卫 wrote:
>> Oct 18 10:46:11 SUC02 postgres[10997]: [2-1] LOG:  unexpected EOF on
>> client connection
>> Oct 18 10:46:11 SUC02 postgres[15899]: [2-1] LOG:  could not receive
>> data from client: Connection reset by peer
>
> That looks a lot like lower-level networking issues. Is there any NAT
> between client and server? What sort of network connects the two? Is
> there any packet loss on the network? Is there any sort of firewall on
> or between the client and server?


Agreed -- what is the timeout value set to in the connection string?
Increase it to Timeout=60 or higher.
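
For example, in an Npgsql-style .NET connection string (a sketch only;
the parameter name depends on the client library, and libpq clients use
connect_timeout instead):

Server=dbhost;Port=5432;Database=mydb;User Id=me;Password=secret;Timeout=60;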


Johan Nel
Pretoria, South Africa.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-19 Thread David Fetter
On Mon, Oct 19, 2009 at 10:48:47PM -0400, Guy Rouillier wrote:
> Andre Lopes wrote:
>> Hi,
>>  I'm searching for a free tool to design a PostgreSQL database. Is there
>> something similar to MySQL Workbench?
>>  Best Regards,
>> Andre.
>
> From 9/11/2009 "Open source database design tool, alternative to
> MicroOLDAP"

> [extensive list]

Considering that PostgreSQL is a shared resource, you'll probably want
to do something more collaborative, at least at many stages.  Development
looks a little more like:

1.  Paper, pencil, trash can.  Lots of that last.

2.  Lather, rinse, repeat.

3.  Marker, dry-erase board, eraser, closed room.

4.  Lather, rinse, repeat.

5.  psql + $EDITOR

6.  Lather, rinse, repeat.

7.  $SCM

8.  Lather, rinse, repeat.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] About could not connect to server: Connection timed out

2009-10-19 Thread Craig Ringer
On Tue, 2009-10-20 at 12:12 +0800, 黄永卫 wrote:
> 
> Oct 18 10:46:11 SUC02 postgres[10997]: [2-1] LOG:  unexpected EOF on
> client connection
> Oct 18 10:46:11 SUC02 postgres[15899]: [2-1] LOG:  could not receive
> data from client: Connection reset by peer

That looks a lot like lower-level networking issues. Is there any NAT
between client and server? What sort of network connects the two? Is
there any packet loss on the network? Is there any sort of firewall on
or between the client and server?

--
Craig Ringer


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] About could not connect to server: Connection timed out

2009-10-19 Thread 黄永卫
Hi,

 

 When I use ecpg code to download a file from the database, this error
occasionally occurs:

"could not connect to server: Connection timed out
        Is the server running on host "DB" and accepting
        TCP/IP connections on port 5432?"

 

This is my code:

 

    conn = PQsetdbLogin(host, NULL, NULL, NULL, database, username, "");

    if (PQstatus(conn) != CONNECTION_BAD) {

        PQexec(conn, "BEGIN");

        if (lo_export(conn, oidImage_Data, chImageName) == -1) {
            printf("PQSTATUS=%s \n", PQerrorMessage(conn));
            PQfinish(conn);
            printf("EXPORT %s ERROR! SQL STATE = %s\n", chImageName, sqlca.sqlstate);
        }

        printf("PQSTATUS=%d \n", PQstatus(conn));  /* PQstatus() returns an enum, so %d rather than %s */

        PQexec(conn, "COMMIT");
    }

 

This log was found in postgres.log:

Oct 18 10:44:35 SUC02 postgres[10159]: [7-1] LOG:  database system is ready
Oct 18 10:44:35 SUC02 postgres[10159]: [8-1] LOG:  transaction ID wrap limit is 3343152371, limited by database "template1"
Oct 18 10:46:11 SUC02 postgres[10997]: [2-1] LOG:  unexpected EOF on client connection
Oct 18 10:46:11 SUC02 postgres[15899]: [2-1] LOG:  could not receive data from client: Connection reset by peer
Oct 18 10:46:11 SUC02 postgres[15899]: [3-1] LOG:  unexpected EOF on client connection

 

What is the problem? What should I investigate?

Please help! Thank you very much!

 

Ray Huang

2009-10-20

 

 



Re: [GENERAL] Un successful Restoration of DATA using WAL files

2009-10-19 Thread Craig Ringer
On Mon, 2009-10-19 at 07:18 -0700, Mitesh51 wrote:
> I am unable to restore data with the use of WAL files by following this procedure.
> 
> I have done following changes in postgres.conf to enable WAL archiving...
> 
> archive_mode = on # allows archiving to be done
> archive_command = 'copy "%p" "C:\\archivedir\\%f"'
> 
> I have one database(built in) postgres. I create one table student in it.
> and take full backup(only of a single database & I am not copying complete
> data dir..) @ 12:40 pm with the 
> 
> pg_dump.exe -h localhost -p 5432 -U postgres -f C:\test.sql postgres

Any comments from readers on the following suggestion of changes to the
PITR docs at:

http://www.postgresql.org/docs/current/interactive/continuous-archiving.html


User misunderstandings on the pgsql-general mailing list suggest
that a clear and prominent warning needs to be added to this
page to prevent people from trying to combine a pg_dump base
backup with WAL archiving. People are failing to understand that
the base backup must be a file-system-level copy taken after
calling pg_start_backup() .

Suggested changes:

"maintains a write ahead log" -> "maintains a block-level write
ahead log" in the first paragraph.

"we can combine a file-system-level backup" -> 
"we can combine a file-system-level backup of the data directory
(not a pg_dump backup)"

Also, somewhere in the introductory section, something like this
would be good:

"IMPORTANT: WAL archiving and PITR cannot be used with an
SQL-level base backup taken with pg_dump. See "Making a Base
Backup" below for the correct method of backing up your database
for WAL archiving and PITR. See "Caveats" for details."

In "Caveats":

"You can't use pg_dump to make a base backup, restore that to a
different cluster or a new database in the original cluster,
then apply archived WAL files to it. WAL archiving records a
history of changes to the database cluster at a very low level,
rather than recording anything like SQL commands. The WAL files
essentially record sequences of writes of byte sequences to
offsets within files in the cluster. A new cluster (or a new
database created in the original cluster by CREATE DATABASE)
will have a different layout in its files, so the WAL archives
don't make any sense to apply to the new cluster."


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Un successful Restoration of DATA using WAL files

2009-10-19 Thread Craig Ringer
On Mon, 2009-10-19 at 07:18 -0700, Mitesh51 wrote:
> I am unable to restore data with the use of WAL files by following this procedure.
> 
> I have done following changes in postgres.conf to enable WAL archiving...
> 
> archive_mode = on # allows archiving to be done
> archive_command = 'copy "%p" "C:\\archivedir\\%f"'
> 
> I have one database(built in) postgres. I create one table student in it.
> and take full backup(only of a single database & I am not copying complete
> data dir..) @ 12:40 pm with the 
> 
> pg_dump.exe -h localhost -p 5432 -U postgres -f C:\test.sql postgres

Whoops. You can't combine a pg_dump-style base backup with WAL
archiving. You *MUST* use the pg_start_backup() and pg_stop_backup()
functions combined with a file-system level copy of the database
directory.

The reason the WAL files can't just be applied to a database restored
from pg_dump is that they're block-level logs of write activity. A newly
restored database from a pg_dump backup will have a different block
layout, so the WAL files make no sense with the newly restored database.
Additionally, you're probably restoring the database to a different
cluster, which the WAL files won't make sense for.

It's a really, REALLY good thing you're smart enough to test your
backups before relying on them :-)

I strongly suggest re-reading the PITR backup documentation.

Personally, I recommend taking a periodic dump backup with pg_dump to
protect yourself against undetected corruption of the catalog or table
files, which PITR-based backups won't help you with. There's nothing
like discovering that your backup copies of your cluster are corrupt
too :-(

In *addition* to the pg_dump backups, enable WAL archiving and take a
PITR base backup (using pg_start_backup() and pg_stop_backup() as per
the PITR docs). That way if you have a failure such as an accidental
"DROP DATABASE" you can recover your cluster up to a few minutes before
the mistake. You'll want to periodically take a new base backup so that
you're not faced with storing terabytes of WAL archives ... and the days
or weeks it could take to replay those WAL archives when you need to
restore the backup. How often depends on your write load - how fast your
WAL archives build up.
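
A minimal sketch of that base-backup sequence (label and copy method are
placeholders; see the PITR docs for the full procedure):

SELECT pg_start_backup('weekly base backup');
-- copy the entire data directory at the file-system level here,
-- e.g. with tar or rsync -- NOT with pg_dump
SELECT pg_stop_backup();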

> After taking full backup...
> 
> I create 1 table named "person" @ 12:41 pm  in the postgres database(for
> testing purpose only).
> (Now it should be recorded in WAL files...Am I right here?)

Sort-of. The block writes to the catalog, the table layout, etc are
indeed recorded in the WAL, but it's not recording SQL like "CREATE
TABLE person();", it's recording "bytes 0x0902ff12 written to offset
0x12312" or the like.

> but I cant see the table "person" created again with the help of WAL file
> restoration :(

Personally, I'm surprised you got anything but an error when trying
this. Perhaps it's not even seeing the WAL files from the old unrelated
cluster at all? If it did see them and try to use them I'd expect an
error reporting that the WAL archives don't make any sense for your
cluster.

--
Craig Ringer


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-19 Thread Guy Rouillier

Andre Lopes wrote:
> Hi,
>
> I'm searching for a free tool to design a PostgreSQL database. Is there
> something similar to MySQL Workbench?
>
> Best Regards,
> Andre.

From 9/11/2009 "Open source database design tool, alternative to
MicroOLDAP":


Open ModelSphere: http://www.modelsphere.org/


From 5/24/2008 "best er modeling tool for postgreSQL":

/** GNU-GPL **/
OpenSystemArchitect
http://www.codebydesign.com/
Power*Architect
http://code.google.com/p/power-architect/
Druid
http://druid.sourceforge.net/
ERW
http://erw.dsi.unimi.it/
Dia
http://live.gnome.org/Dia
XML to DDL (python scripts)
http://xml2ddl.berlios.de/
Graphviz - I believe I've read of people using it to reverse engineer
from DDL

http://graphviz.org/

/** Multiple versions/licenses (each has a "free" and a "not-free" 
version) **/

DBVisualizer
http://www.minq.se/products/dbvis/
Toad
http://www.toadsoft.com/

/** Not-free **/
Microsoft Visio - (If you already have it, it does work quite well)
DataStudio - 30 day trial
http://www.aquafold.com
DBWrench - 30 day trial
http://www.dbwrench.com/

From 3/13/2008 "ER diagram tool"

http://druid.sf.net/
http://schemaspy.sourceforge.net/
http://uml.sourceforge.net/index.php

And on it goes.  Keep searching and you'll find more.

--
Guy Rouillier

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] When much of a DB is large objects - PG 8.4

2009-10-19 Thread Merlin Moncure
On Mon, Oct 19, 2009 at 9:11 PM, David Wall  wrote:
> We have a system in which a large amount of the data is stored as large
> objects now in PG 8.4.  The relational aspects of these objects are stored
> using traditional tables, but we store a lot of binary data (files, images)
> and XML objects that are compressed and then encrypted.  The ACID properties
> of PG are great for this since we can ensure multiple updates are
> committed/rolled-back together.
>
> But, even if I have 30 tables that themselves all have OID types for the
> encrypted or binary data, most ends up in the pg_largeobject table, which
> makes partitioning and disk access complicated, and there's always some
> concern we'll end up maxing out the OIDs in that system table and what the
> performance of banging against one table invites.
>
> So my question: is there a way to load balance/partition pg_largeobject
> across disks or otherwise ensure I'm getting the best performance from PG?
>  I know I can use BYTEA (forget its max size off hand, but note that it's
> not exactly JDBC friendly because of all the escaping to be done moving a
> large byte array).  Would I do well, perhaps to clone the idea of
> pg_largeobject for those objects that are say 100KB or less (many will be I
> suspect) and store them in a sequence of BYTEA rows in my own tables as way
> to allow the storage of these blobs across many tables instead of just
> pg_largeobject?  It probably wouldn't be as efficient as the large object
> code, but I'm sure it could be made to work.

the bytea limit is 1GB (as it is for all datums in postgres).
pg_largeobject can go up to 2GB, but in either case you'd likely run
into severe performance/scalability issues long before objects began
approaching those sizes, because of memory usage and other issues.  With
100KB objects though, you should be all right.

libpq supports a binary protocol mode which allows you to execute
queries sending bytea without escaping.  (I'm not familiar with the
jdbc driver, but I'd imagine it should support it in some fashion.)  I
would start researching there: find out if the jdbc driver supports
binary queries and use them if possible.  If you can't or won't use
jdbc in this way, your options are to stick with large objects or try
to figure out another way to get data into the database.
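
As an illustration only (not from the original mail), here is a minimal
libpq sketch of sending a bytea parameter in binary format; the table and
column names are hypothetical:

#include <libpq-fe.h>

/* Insert len raw bytes into a bytea column with no escaping. */
static int insert_blob(PGconn *conn, const char *buf, int len)
{
    const Oid types[1] = { 17 };            /* 17 = BYTEAOID */
    const char *const values[1] = { buf };
    const int lengths[1] = { len };
    const int formats[1] = { 1 };           /* 1 = binary parameter */
    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO blobs(data) VALUES ($1)",
                                 1, types, values, lengths, formats, 0);
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    PQclear(res);
    return ok;
}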

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How ad an increasing index to a query result?

2009-10-19 Thread Merlin Moncure
On Sun, Oct 18, 2009 at 12:00 PM, Raymond O'Donnell  wrote:
> On 18/10/2009 11:30, Alban Hertroys wrote:
>
>> Short of enumerating those results in your application, the easiest
>> approach is probably to wrap your query in a join with generate_series
>> like so:
>>
>> SELECT a, s.b
>> FROM (
>>     SELECT a
>>     FROM table1
>>     ORDER BY a DESC LIMIT 5
>> ) AS t1, generate_series(5, 1, -1) AS s(b)
>>
>
> Won't that just give you the cartesian product of the two sets? I tried
> something similar yesterday out of curiosity, and that's what I got.
>
> The only things I can think of are (i) as you say, enumerate the results
> in the application or (ii) use a temporary sequence as someone else
> suggested.

yeah, the above gives a cartesian product.  Row number is really the
way to go here.  Using pre-window tactics, it looks like we need
something like this ('c' being a temporary sequence created beforehand):

select a, b from
(
  select a, nextval('c')  as b from
  (
    SELECT a
    FROM table1
    ORDER BY a DESC LIMIT 5
  ) q order by a
) q order by a desc;
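
On 8.4 you can skip the sequence entirely with a window function -- a
minimal sketch (numbering runs 1..5 starting at the largest a; adjust the
OVER ordering to taste):

select a, row_number() over (order by a desc) as b
from
(
  SELECT a
  FROM table1
  ORDER BY a DESC LIMIT 5
) q;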

aside: it's never a good idea to write queries like this:
select func(), foo from bar order by foo limit x;
if you are concerned about how many times func() executes.  This is a
huge gotcha that constantly comes up on the lists.  see the warning
here:

http://www.postgresql.org/docs/8.3/interactive/explicit-locking.html#ADVISORY-LOCKS

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] When much of a DB is large objects - PG 8.4

2009-10-19 Thread David Wall
We have a system in which a large amount of the data is stored as large 
objects now in PG 8.4.  The relational aspects of these objects are 
stored using traditional tables, but we store a lot of binary data 
(files, images) and XML objects that are compressed and then encrypted.  
The ACID properties of PG are great for this since we can ensure 
multiple updates are committed/rolled-back together.


But, even if I have 30 tables that themselves all have OID types for the 
encrypted or binary data, most ends up in the pg_largeobject table, 
which makes partitioning and disk access complicated, and there's always 
some concern we'll end up maxing out the OIDs in that system table and 
what the performance of banging against one table invites.


So my question: is there a way to load balance/partition pg_largeobject 
across disks or otherwise ensure I'm getting the best performance from 
PG?  I know I can use BYTEA (forget its max size off hand, but note that 
it's not exactly JDBC friendly because of all the escaping to be done 
moving a large byte array).  Would I do well, perhaps to clone the idea 
of pg_largeobject for those objects that are say 100KB or less (many 
will be I suspect) and store them in a sequence of BYTEA rows in my own 
tables as way to allow the storage of these blobs across many tables 
instead of just pg_largeobject?  It probably wouldn't be as efficient as 
the large object code, but I'm sure it could be made to work.


Thanks for any hints or ideas

David

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mac OS Roman import?

2009-10-19 Thread Christophe Pettus


On Oct 19, 2009, at 4:39 PM, Scott Ribe wrote:
> On OS X it definitely does; on other platforms it may not since
> supported encodings are platform-dependent.

The CentOS version knows about it as well; thanks, that's the perfect
solution.


--
-- Christophe Pettus
   x...@thebuild.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mac OS Roman import?

2009-10-19 Thread Scott Ribe
> Maybe iconv knows about it?

On OS X it definitely does; on other platforms it may not since supported
encodings are platform-dependent.

-- 
Scott Ribe
scott_r...@killerbytes.com
http://www.killerbytes.com/
(303) 722-0567 voice



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mac OS Roman import?

2009-10-19 Thread Tom Lane
Christophe Pettus  writes:
> I find myself needing to regularly import data encoded as Mac OS Roman  
> (yes, it is coming from an Macintosh OS 9 system) into a UTF-8  
> PostgreSQL database.  Any advice on how to do so, since Mac OS Roman  
> is (reasonably enough) not one of PG's standard character encodings?

Maybe iconv knows about it?  Or it might be that it's close enough to
ISO 8859-something that you could get away with pretending it's that.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mac OS Roman import?

2009-10-19 Thread Scott Ribe
> I find myself needing to regularly import data encoded as Mac OS Roman
> (yes, it is coming from an Macintosh OS 9 system) into a UTF-8
> PostgreSQL database.  Any advice on how to do so, since Mac OS Roman
> is (reasonably enough) not one of PG's standard character encodings?

Use iconv; it's a command shipped with OS X, man iconv for more info.
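
A one-line sketch (the encoding name varies by iconv build; GNU libiconv
and glibc spell it MACINTOSH, while some builds use MACROMAN):

iconv -f MACINTOSH -t UTF-8 legacy.txt > legacy-utf8.txt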

-- 
Scott Ribe
scott_r...@killerbytes.com
http://www.killerbytes.com/
(303) 722-0567 voice



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mac OS Roman import?

2009-10-19 Thread Sam Mason
On Mon, Oct 19, 2009 at 02:54:38PM -0700, Christophe Pettus wrote:
> I find myself needing to regularly import data encoded as Mac OS Roman  
> (yes, it is coming from an Macintosh OS 9 system) into a UTF-8  
> PostgreSQL database.

How tightly integrated into PG do you want it?  You can define custom
character conversions if you want (although I've never tried this).
Python appears to know how to convert to/from "MacRoman"[1] by default,
so pl/python should make this reasonably easy.

Otherwise, iconv[2] is a useful tool.  The project's homepage suggests
that it knows how to do the conversions, but I can't figure out the
command line needed.

-- 
  Sam  http://samason.me.uk/

 [1] http://docs.python.org/library/codecs.html
 [2] http://www.gnu.org/software/libiconv/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Mac OS Roman import?

2009-10-19 Thread Christophe Pettus

Greetings,

I find myself needing to regularly import data encoded as Mac OS Roman  
(yes, it is coming from an Macintosh OS 9 system) into a UTF-8  
PostgreSQL database.  Any advice on how to do so, since Mac OS Roman  
is (reasonably enough) not one of PG's standard character encodings?


Thanks!
--
-- Christophe Pettus
   x...@thebuild.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to find the row corresponding to a given toast value?

2009-10-19 Thread David Boreham


I have a (large) corrupted 8.3.7 database that I'd like to fix.
It has this problem :

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  missing chunk number 2 for toast 
value 10114 in pg_toast_16426

I've seen this particular syndrome before and fixed it by deleting the
table row that refers to the missing toast value.  That row was
discovered by chance because the user to whom the data belonged
complained that his service wasn't working.  In the current case that
hasn't happened, so I'm clueless as to which row I need to delete.

I've tried dumping the table to see if the records happen to be in
primary key order (hence the N+1'th record would be the bad one).
Unfortunately this didn't help because the records appear to be out of
order in the dump.

Hence my question: is there an efficient way to determine which table row
references that missing toast value?


My best option right now is to issue SELECT ... LIMIT .. OFFSET ... queries
to identify the row. This is likely to take a while though because there's
tens of GBytes in the table, and the database is quite heavily loaded.
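
A sketch of one such probe (hypothetical table and column names; reading
the toasted column forces detoasting, so the failing range can be bisected
by halving LIMIT/OFFSET until one row reproduces the error):

SELECT sum(length(big_col))
FROM (SELECT big_col FROM big_table ORDER BY id LIMIT 10000 OFFSET 0) s;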

Any better ideas are most welcome, thanks.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-19 Thread Peter Hunsberger
On Mon, Oct 19, 2009 at 2:18 PM, Andre Lopes  wrote:
> Hi,
>
> I'm searching for a free tool to design a PostgreSQL database. Is there
> something similar to MySQL Workbench?

Search the archives; this came up within the last couple of months.  I
currently use Power Architect; it's a beta product and still fairly
buggy, but it works.

-- 
Peter Hunsberger

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best practices for effective_io_concurrency

2009-10-19 Thread Sergey Konoplev
On Mon, Oct 19, 2009 at 7:12 PM, Greg Smith  wrote:
> On Mon, 19 Oct 2009, Sergey Konoplev wrote:
>
>> I feel it rather significant for PG performance and would like to ask gurus
>> to provide some more description here.
>
> It's probably not as significant as you are hoping.  Currently the code only
> kicks in when you're doing a Bitmap Heap Scan, which is really helpful for
> them, but of no help for any other type of query.  And I've only seen it
> actually work at all on Linux.  It might work on BSD and Mac OS X systems,
> certainly doesn't do anything on Solaris and Windows.

Thanx for the details.

>
>> Also I've found some info in EnterpriseDB documentation
>> (http://www.enterprisedb.com/docs/en/8.3R2/perf/Performance_Guide-16.htm).
>> Is it all actual for PG8.4?
>
> That is almost all correct for 8.4.  The settings mentioned there that
> start with the name "edb" are not available to tweak
> (edb_prefetch_indexscans and edb_enable_icache) in standard PostgreSQL; the
> rest of it is good background and advice.  The basic idea is that you start
> with setting the value to the number of working drives in the disk array the
> database is on and see if I/O performance goes up and/or query times drop
> afterwards.  If they do you might try further increases beyond that even.
>
> As for why there isn't a better tuning guide than just those simple
> guidelines, it's not that easy to show a situation where the type of bitmap
> scan this parameter impacts is used on a generated data set, even though
> it's not that uncommon in real-world data.  It's hard both to make generic
> suggestions here and to even demonstrate the feature at work.

Well, wouldn't using lossy indexes be a reason to increase this parameter?
And would it somehow negatively affect other queries that don't lead to
Bitmap Heap Scans?

-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in english & russian
http://gray-hemp.blogspot.com/search/label/postgresql/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Tom Lane
Brad Nicholson  writes:
> What about the question that Scott and I both asked - what about query
> plans.  Will they be affected?

No, they should be the same as what you were getting just before the
crash.  The planner only looks at pg_statistic which is a whole
different thing.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Free Tool to design Postgres Databases

2009-10-19 Thread Andre Lopes
Hi,

I'm searching for a free tool to design a PostgreSQL database. Is there
something similar to MySQL Workbench?

Best Regards,
Andre.


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Brad Nicholson
On Mon, 2009-10-19 at 15:09 -0400, Brad Nicholson wrote:
> On Mon, 2009-10-19 at 15:01 -0400, Tom Lane wrote:
> > Brad Nicholson  writes:
> > > autoanalyze will automatically analyze new tables when they don't have
> > > stats.  It seems logical that it should handle this case where the table
> > > also does not have stats.
> > 
> > It will autoanalyze once a sufficient number of inserts have occurred.
> > The effect of a crash is just to reset the inserts-since-last-analyze
> > counters.  You really haven't made the case why that's so awful that
> > we need to do things that are unpleasant along other dimensions in order
> > to avoid it.  (The only ways of avoiding it that I can see would result
> > in very significant overhead added to the stats collection mechanism.)
> 
> What about the question that Scott and I both asked - what about query
> plans.  Will they be affected?
> 
> If so, you could have a table that does not get updated, that no longer
> generates relevant query plans, and won't until manual intervention.
> 

Actually, isn't there also a condition (although unlikely to occur) here
that could result in transaction wraparound? 
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Brad Nicholson
On Mon, 2009-10-19 at 15:01 -0400, Tom Lane wrote:
> Brad Nicholson  writes:
> > autoanalyze will automatically analyze new tables when they don't have
> > stats.  It seems logical that it should handle this case where the table
> > also does not have stats.
> 
> It will autoanalyze once a sufficient number of inserts have occurred.
> The effect of a crash is just to reset the inserts-since-last-analyze
> counters.  You really haven't made the case why that's so awful that
> we need to do things that are unpleasant along other dimensions in order
> to avoid it.  (The only ways of avoiding it that I can see would result
> in very significant overhead added to the stats collection mechanism.)

What about the question that Scott and I both asked - what about query
plans.  Will they be affected?

If so, you could have a table that does not get updated, that no longer
generates relevant query plans, and won't until manual intervention.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Tom Lane
Brad Nicholson  writes:
> autoanalyze will automatically analyze new tables when they don't have
> stats.  It seems logical that it should handle this case where the table
> also does not have stats.

It will autoanalyze once a sufficient number of inserts have occurred.
The effect of a crash is just to reset the inserts-since-last-analyze
counters.  You really haven't made the case why that's so awful that
we need to do things that are unpleasant along other dimensions in order
to avoid it.  (The only ways of avoiding it that I can see would result
in very significant overhead added to the stats collection mechanism.)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Brad Nicholson
On Mon, 2009-10-19 at 11:35 -0700, Christophe Pettus wrote:
> On Oct 19, 2009, at 11:26 AM, Scott Marlowe wrote:
> > That could be a pretty bad worst case scenario for certain types of
> > tables / usage patterns.
> 
> Given that (presumably) the database server is not failing repeatedly  
> without some kind of operator notification, isn't it at least somewhat  
> reasonable to just make "start a vacuum upon recovery from an  
> immediate shutdown" an operational procedure, rather than something PG  
> does automatically?
> 


It's not a vacuum you want, it's an analyze.  Once the stats are back,
autovacuum will vacuum accordingly.

autoanalyze will automatically analyze new tables when they don't have
stats.  It seems logical that it should handle this case where the table
also does not have stats.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Christophe Pettus


On Oct 19, 2009, at 11:26 AM, Scott Marlowe wrote:
> That could be a pretty bad worst case scenario for certain types of
> tables / usage patterns.

Given that (presumably) the database server is not failing repeatedly
without some kind of operator notification, isn't it at least somewhat
reasonable to just make "start a vacuum upon recovery from an
immediate shutdown" an operational procedure, rather than something PG
does automatically?


--
-- Christophe Pettus
   x...@thebuild.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Scott Marlowe
On Mon, Oct 19, 2009 at 11:27 AM, Tom Lane  wrote:
> Scott Marlowe  writes:
>> All of this is completely besides the point that a database that's
>> been shutdown immediately / had the power cord yanked comes back up
>> and doesn't start autovacuuming automatically, which seems a
>> non-optimal behaviour.
>
> It'll start as soon as you've modified enough rows.  The absolute worst
> case behavior is that table bloat reaches twice the level it would have
> otherwise, or pg_statistic data becomes twice as out of date as it would
> have otherwise.

That could be a pretty bad worst case scenario for certain types of
tables / usage patterns.

How bad can the affect of out of date pg_statistic data be?  Is it
likely to turn a hash agg into a nested loop and take a query from
seconds to minutes?  If so, then that's pretty bad.

> Now, if your server's MTBF is less than the autovac interval, you could
> indeed have an accumulating problem ... but I suggest that in that
> situation you've got other issues to fix.

True.  Still very much beside the point.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function returning 2 columns evaluated twice when both columns are needed

2009-10-19 Thread Tom Lane
Gerhard Wiesinger  writes:
> On Mon, 19 Oct 2009, Tom Lane wrote:
>> Probably because you have the function declared VOLATILE.

> None of the functions is declared VOLATILE.  Any other ideas?

[ shrug... ]  There are other possible reasons why the planner would
fail to flatten a subquery, but none of them apply to the example you
showed.  And your example function *was* VOLATILE, by default.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function returning 2 columns evaluated twice when both columns are needed

2009-10-19 Thread Christophe Pettus


On Oct 19, 2009, at 10:49 AM, Gerhard Wiesinger wrote:
> None of the functions is declared VOLATILE.  Any other ideas?

If they are not declared with a volatility category at all, the  
default is VOLATILE.  Is that a possibility?


--
-- Christophe Pettus
   x...@thebuild.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function returning 2 columns evaluated twice when both columns are needed

2009-10-19 Thread Gerhard Wiesinger

On Mon, 19 Oct 2009, Tom Lane wrote:
> Gerhard Wiesinger  writes:
>> On Sun, 18 Oct 2009, Tom Lane wrote:
>>> The OFFSET bit is a kluge, but is needed to keep the planner from
>>> flattening the subquery and undoing your work.
>
>> Thnx Tom. It also works without the OFFSET kludge. Any ideas why?
>
> Probably because you have the function declared VOLATILE.

None of the functions is declared VOLATILE.  Any other ideas?

Thnx.

Ciao,
Gerhard

--
http://www.wiesinger.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Tom Lane
Scott Marlowe  writes:
> All of this is completely besides the point that a database that's
> been shutdown immediately / had the power cord yanked comes back up
> and doesn't start autovacuuming automatically, which seems a
> non-optimal behaviour.

It'll start as soon as you've modified enough rows.  The absolute worst
case behavior is that table bloat reaches twice the level it would have
otherwise, or pg_statistic data becomes twice as out of date as it would
have otherwise.

Now, if your server's MTBF is less than the autovac interval, you could
indeed have an accumulating problem ... but I suggest that in that
situation you've got other issues to fix.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Brad Nicholson
On Mon, 2009-10-19 at 11:16 -0600, Scott Marlowe wrote:
> On Mon, Oct 19, 2009 at 11:06 AM, Brad Nicholson
>  wrote:
> > On Mon, 2009-10-19 at 10:53 -0600, Scott Marlowe wrote:
> >> On Mon, Oct 19, 2009 at 10:44 AM, Tom Lane  wrote:
> >> > Brad Nicholson  writes:
> >> >> On Mon, 2009-10-19 at 12:07 -0400, Tom Lane wrote:
> >> >>> That seems like a fundamentally stupid idea, unless you are unconcerned
> >> >>> with the time and cost of getting the DB running again, which seemingly
> >> >>> you are.
> >> >
> >> >> I disagree that this is fundamentally stupid.  We are talking about a
> >> >> situation where the server is about to die, HA solution kicks in and
> >> >> moves it to standby.
> >> >
> >> > Moving it to standby immediately is a good idea, but it does not follow
> >> > that you need to hit the DB over the head with a hammer.  A fast-mode
> >> > shutdown seems perfectly adequate.  If it isn't, you're going to need
> >> > nontrivial recovery effort anyhow.
> >>
> >> All of this is completely besides the point that a database that's
> >> been shutdown immediately / had the power cord yanked comes back up
> >> and doesn't start autovacuuming automatically, which seems a
> >> non-optimal behaviour.
> >
> > It's also not going to endear us very much to the VLDB crowd - it will
> > amounts to a multi-hour crash recovery for those folks while analyze
> > regenerates statistics.
> 
> But this would be AOK behaviour for small transactional databases?

Definitely not.

> Again, besides the point, but important.  The real point is a database
> that doesn't run autovac after an emergency shutdown is broken by
> design, and not just for one use case.

This behaviour is also undocumented AFAIK.  I would bet that a lot of
users would have no idea that they are in this state post
crash-recovery.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Scott Marlowe
On Mon, Oct 19, 2009 at 11:06 AM, Brad Nicholson
 wrote:
> On Mon, 2009-10-19 at 10:53 -0600, Scott Marlowe wrote:
>> On Mon, Oct 19, 2009 at 10:44 AM, Tom Lane  wrote:
>> > Brad Nicholson  writes:
>> >> On Mon, 2009-10-19 at 12:07 -0400, Tom Lane wrote:
>> >>> That seems like a fundamentally stupid idea, unless you are unconcerned
>> >>> with the time and cost of getting the DB running again, which seemingly
>> >>> you are.
>> >
>> >> I disagree that this is fundamentally stupid.  We are talking about a
>> >> situation where the server is about to die, HA solution kicks in and
>> >> moves it to standby.
>> >
>> > Moving it to standby immediately is a good idea, but it does not follow
>> > that you need to hit the DB over the head with a hammer.  A fast-mode
>> > shutdown seems perfectly adequate.  If it isn't, you're going to need
>> > nontrivial recovery effort anyhow.
>>
>> All of this is completely besides the point that a database that's
>> been shutdown immediately / had the power cord yanked comes back up
>> and doesn't start autovacuuming automatically, which seems a
>> non-optimal behaviour.
>
> It's also not going to endear us very much to the VLDB crowd - it will
> amount to a multi-hour crash recovery for those folks while analyze
> regenerates statistics.

But this would be AOK behaviour for small transactional databases?

Again, besides the point, but important.  The real point is a database
that doesn't run autovac after an emergency shutdown is broken by
design, and not just for one use case.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Brad Nicholson
On Mon, 2009-10-19 at 10:53 -0600, Scott Marlowe wrote:
> On Mon, Oct 19, 2009 at 10:44 AM, Tom Lane  wrote:
> > Brad Nicholson  writes:
> >> On Mon, 2009-10-19 at 12:07 -0400, Tom Lane wrote:
> >>> That seems like a fundamentally stupid idea, unless you are unconcerned
> >>> with the time and cost of getting the DB running again, which seemingly
> >>> you are.
> >
> >> I disagree that this is fundamentally stupid.  We are talking about a
> >> situation where the server is about to die, HA solution kicks in and
> >> moves it to standby.
> >
> > Moving it to standby immediately is a good idea, but it does not follow
> > that you need to hit the DB over the head with a hammer.  A fast-mode
> > shutdown seems perfectly adequate.  If it isn't, you're going to need
> > nontrivial recovery effort anyhow.
> 
> All of this is completely besides the point that a database that's
> been shutdown immediately / had the power cord yanked comes back up
> and doesn't start autovacuuming automatically, which seems a
> non-optimal behaviour.

It's also not going to endear us very much to the VLDB crowd - it will
amount to a multi-hour crash recovery for those folks while analyze
regenerates statistics.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Scott Marlowe
On Mon, Oct 19, 2009 at 10:44 AM, Tom Lane  wrote:
> Brad Nicholson  writes:
>> On Mon, 2009-10-19 at 12:07 -0400, Tom Lane wrote:
>>> That seems like a fundamentally stupid idea, unless you are unconcerned
>>> with the time and cost of getting the DB running again, which seemingly
>>> you are.
>
>> I disagree that this is fundamentally stupid.  We are talking about a
>> situation where the server is about to die, HA solution kicks in and
>> moves it to standby.
>
> Moving it to standby immediately is a good idea, but it does not follow
> that you need to hit the DB over the head with a hammer.  A fast-mode
> shutdown seems perfectly adequate.  If it isn't, you're going to need
> nontrivial recovery effort anyhow.

All of this is completely besides the point that a database that's
been shutdown immediately / had the power cord yanked comes back up
and doesn't start autovacuuming automatically, which seems a
non-optimal behaviour.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Tom Lane
Brad Nicholson  writes:
> On Mon, 2009-10-19 at 12:07 -0400, Tom Lane wrote:
>> That seems like a fundamentally stupid idea, unless you are unconcerned
>> with the time and cost of getting the DB running again, which seemingly
>> you are.

> I disagree that this is fundamentally stupid.  We are talking about a
> situation where the server is about to die, HA solution kicks in and
> moves it to standby.  

Moving it to standby immediately is a good idea, but it does not follow
that you need to hit the DB over the head with a hammer.  A fast-mode
shutdown seems perfectly adequate.  If it isn't, you're going to need
nontrivial recovery effort anyhow.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Brad Nicholson
On Mon, 2009-10-19 at 12:07 -0400, Tom Lane wrote:
> Brad Nicholson  writes:
> > If you issue an immediate shutdown to the database, autovacuum will not
> > process tables that should be vacuumed until manually re-analyzed.
> 
> AFAICS this is an unsurprising consequence of flushing stats on a crash.
> If you don't like it, avoid immediate shutdowns --- they are not
> especially good practice in any case.
> 
> > 3: What is the best work around for this?  When our HA solution triggers
> > a DB shutdown, we want it to be immediate.
> 
> That seems like a fundamentally stupid idea, unless you are unconcerned
> with the time and cost of getting the DB running again, which seemingly
> you are.
> 

I disagree that this is fundamentally stupid.  We are talking about a
situation where the server is about to die, HA solution kicks in and
moves it to standby.  

If we wait for a clean shutdown instead, and the server dies before it
completes (which is entirely possible), Postgres crashes and the exact
same behaviour will happen.  It also means that if any server crashes
(HA aside, shutdown method aside), the database will come up, but
functionality may be impacted until manual intervention.

At the very least, shouldn't autoanalyze correct the lack of
statistics?  To me, this looks like the database does not come up
cleanly after crashing.
 
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cast numeric with scale and precision to numeric plain

2009-10-19 Thread Tom Lane
Sim Zacks  writes:
> Numeric with scale and precision always shows trailing zeros.
> Numeric plain only shows numbers after the decimal point that are being
> used.

That statement is false:

regression=# select 1234.000::numeric;
 numeric  
--
 1234.000
(1 row)

I'm not sure offhand what is the easiest way to suppress trailing
zeroes, but casting to plain numeric is not the route to a solution.

Really this is a textual formatting problem.  You might find that the
best bet is something with trim() or a regexp.  The trick would be
not removing zeroes that are significant ...
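
On the test table from the original message, a nested rtrim is a workable
sketch -- safe here only because numeric(16,4) always prints a decimal
point; applied to integer-valued plain numerics it would strip
significant zeroes:

SELECT rtrim(rtrim(f1::text, '0'), '.') FROM test;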

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Tom Lane
Brad Nicholson  writes:
> If you issue an immediate shutdown to the database, autovacuum will not
> process tables that should be vacuumed until manually re-analyzed.

AFAICS this is an unsurprising consequence of flushing stats on a crash.
If you don't like it, avoid immediate shutdowns --- they are not
especially good practice in any case.

> 3: What is the best work around for this?  When our HA solution triggers
> a DB shutdown, we want it to be immediate.

That seems like a fundamentally stupid idea, unless you are unconcerned
with the time and cost of getting the DB running again, which seemingly
you are.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] cast numeric with scale and precision to numeric plain

2009-10-19 Thread Sim Zacks
I'm using 8.2.4


Numeric with scale and precision always shows trailing zeros.

Numeric plain only shows numbers after the decimal point that are being
used.


I would like to have the data in my table with scale and precision, but
my views to be cast to numeric without any scale or precision.

However, when I cast a numeric(16,4) to a ::numeric it doesn't cast it.


In order to get the results I want, I have to cast it twice ::double
precision::numeric and then it removes the trailing zeros.


Is there any way to cast directly from numeric with scale and precision
to plain numeric?


example:


create table test(f1 numeric(16,4));

insert into test(f1)values(15),(200.004),(12.4123);
select f1 from test;
15.0000
200.0040
12.4123


select f1::numeric from test;
15.0000
200.0040
12.4123


select f1::float::numeric from test;
15
200.004
12.4123



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Un successful Restoration of DATA using WAL files

2009-10-19 Thread Mitesh51

I am unable to restore data with the use of WAL files by following this procedure.

I have done following changes in postgres.conf to enable WAL archiving...

archive_mode = on   # allows archiving to be done
archive_command = 'copy "%p" "C:\\archivedir\\%f"'  

I have one database(built in) postgres. I create one table student in it.
and take full backup(only of a single database & I am not copying complete
data dir..) @ 12:40 pm with the 

pg_dump.exe -h localhost -p 5432 -U postgres -f C:\test.sql postgres

After taking full backup...

I create 1 table named "person" @ 12:41 pm  in the postgres database(for
testing purpose only).
(Now it should be recorded in WAL files...Am I right here?)

and then I remove both tables... (to check the validity of my backups) @
12:43

I restore full backup with...

psql -U postgres postgres < test.sql

@ this stage I took the full backup @ 12:40 pm so table "student" is
restored properly & I can see it.

Now...

I made recovery.conf with contents

restore_command = 'copy c:/archivedir/%f "%p"'
recovery_target_time = '2009-10-19 12:42:00'

I put it in C:\Program Files\PostgreSQL\8.4\data

I restart the server...

recovery.conf is changed to recovery.done and I get the following log...

2009-10-19 18:41:04.744 IST,,,4060,,4adc6568.fdc,1,,2009-10-19 18:41:04
IST,,0,LOG,0,"database system was shut down at 2009-10-19 18:40:43
IST"
2009-10-19 18:41:04.744 IST,,,4060,,4adc6568.fdc,2,,2009-10-19 18:41:04
IST,,0,LOG,0,"starting archive recovery"
2009-10-19 18:41:04.744 IST,,,4060,,4adc6568.fdc,3,,2009-10-19 18:41:04
IST,,0,LOG,0,"restore_command = 'copy c:/mit/%f ""%p""'"
2009-10-19 18:41:04.744 IST,,,4060,,4adc6568.fdc,4,,2009-10-19 18:41:04
IST,,0,LOG,0,"recovery_target_time = '2009-10-19
18:38:46+05:30'"
2009-10-19 18:41:04.931 IST,,,4060,,4adc6568.fdc,5,,2009-10-19 18:41:04
IST,,0,LOG,0,"automatic recovery in progress"
2009-10-19 18:41:04.963 IST,,,4060,,4adc6568.fdc,6,,2009-10-19 18:41:04
IST,,0,LOG,0,"record with zero length at 0/C868"
2009-10-19 18:41:04.963 IST,,,4060,,4adc6568.fdc,7,,2009-10-19 18:41:04
IST,,0,LOG,0,"redo is not required"
2009-10-19 18:41:05.135 IST,,,4060,,4adc6568.fdc,8,,2009-10-19 18:41:04
IST,,0,LOG,0,"selected new timeline ID: 10"
2009-10-19 18:41:05.478 IST,,,6032,"",4adc6569.1790,1,"",2009-10-19 18:41:05
IST,,0,LOG,0,"connection received: host=127.0.0.1 port=4383"
2009-10-19 18:41:05.494
IST,"postgres","postgres",6032,"127.0.0.1:4383",4adc6569.1790,2,"",2009-10-19
18:41:05 IST,,0,FATAL,57P03,"the database system is starting up"
2009-10-19 18:41:06.760 IST,,,5080,"",4adc656a.13d8,1,"",2009-10-19 18:41:06
IST,,0,LOG,0,"connection received: host=127.0.0.1 port=4384"
2009-10-19 18:41:06.775
IST,"postgres","postgres",5080,"127.0.0.1:4384",4adc656a.13d8,2,"",2009-10-19
18:41:06 IST,,0,FATAL,57P03,"the database system is starting up"
2009-10-19 18:41:08.838 IST,,,4060,,4adc6568.fdc,9,,2009-10-19 18:41:04
IST,,0,LOG,0,"archive recovery complete"

but I cant see the table "person" created again with the help of WAL file
restoration :(

Where am I wrong?

-- 
View this message in context: 
http://www.nabble.com/Un-successful-Restoration-of-DATA-using-WAL-files-tp25959142p25959142.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] autovacuum and immediate shutdown issues

2009-10-19 Thread Brad Nicholson
If you issue an immediate shutdown to the database, autovacuum will not
process tables that should be vacuumed until manually re-analyzed.

PG 8.3.8

Relevant settings:

autovacuum = on
log_autovacuum_min_duration = 0   
autovacuum_max_workers = 3 
autovacuum_naptime = 1min   
autovacuum_vacuum_threshold = 10
autovacuum_analyze_threshold = 10
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.05


test=# CREATE TABLE foo (id int);
test=# INSERT INTO foo SELECT generate_series(1,1000);

auto-analyze runs as expected

test=# SELECT last_autovacuum, last_autoanalyze from pg_stat_user_tables
where relname ='foo';
-[ RECORD 1 ]+--
last_autovacuum  | 
last_autoanalyze | 2009-10-19 14:14:47.791658+00


test=# DELETE FROM foo WHERE id IN (SELECT id FROM foo LIMIT 100);

test=# SELECT * from pgstattuple('foo');
-[ RECORD 1 ]--+--
table_len  | 32768
tuple_count| 900
tuple_len  | 25200
tuple_percent  | 76.9
dead_tuple_count   | 100
dead_tuple_len | 2800
dead_tuple_percent | 8.54
free_space | 656
free_percent   | 2

dead_tuple_percent > 5% - autovacuum runs as expected on next pass:

test=# SELECT last_autovacuum, last_autoanalyze from pg_stat_user_tables
where relname ='foo';
-[ RECORD 1 ]+--
last_autovacuum  | 2009-10-19 14:16:47.910177+00
last_autoanalyze | 2009-10-19 14:14:47.791658+00


--repopulate table
test=# TRUNCATE foo;
test=# INSERT INTO foo SELECT generate_series(1,1000);

Wait for autoanalyze, then before next run of autovacuum

test=# DELETE FROM foo WHERE id IN (SELECT id FROM foo LIMIT 100);
pg_ctl -D data -m immediate stop

restart the postmaster
stats are gone due to the immediate shutdown.

test=# SELECT last_autovacuum, last_autoanalyze from pg_stat_user_tables
where relname ='foo';
-[ RECORD 1 ]+-
last_autovacuum  | 
last_autoanalyze | 


test=# SELECT * from pgstattuple('foo');
-[ RECORD 1 ]--+--
table_len  | 32768
tuple_count| 900
tuple_len  | 25200
tuple_percent  | 76.9
dead_tuple_count   | 100
dead_tuple_len | 2800
dead_tuple_percent | 8.54
free_space | 656
free_percent   | 2

dead_tuple_percent > 5% of the table is dead, so autovacuum should pick it
up, but it doesn't (yes, I have waited long enough).
autoanalyze does not process the table.

Autovacuum will not process this table again until one of the following
two conditions is met:

1: Manually analyze the table
2: an additional 5% (autovacuum_vacuum_scale_factor) of the tuples in
the table are dirtied - which amounts to 10% (autovacuum_vacuum_scale_factor *
2) of the actual table being dirtied before it gets vacuumed.

In case 2, the reports from pgstattuple are odd.  After deleting one row:

test=# DELETE FROM foo WHERE id IN (SELECT id FROM foo LIMIT 1);
DELETE 1

test=# SELECT * from pgstattuple('foo');
-[ RECORD 1 ]--+--
table_len  | 32768
tuple_count| 899
tuple_len  | 25172
tuple_percent  | 76.82
dead_tuple_count   | 1
dead_tuple_len | 28
dead_tuple_percent | 0.09
free_space | 3456
free_percent   | 10.55

Stats start over.

The DB should be able to recover cleanly from an immediate shutdown
IMHO.  If the stats are no longer there, I would expect autoanalyze to
run and regenerate them.  This is the same behaviour as when a new table
is created and populated.


A few questions

1: Is this expected behaviour, or a bug?
2: If not a bug, why does autoanalyze not process these tables.  It will
process newly loaded tables when they do not have any stats.
3: What is the best work around for this?  When our HA solution triggers
a DB shutdown, we want it to be immediate.  Currently we run a manual
analyze post recovery, which is really bad for large databases.
4: after restart, why does pgstattuple show dead_tuple_percent = 8.54,
but after deleting one row, it shows dead_tuple_percent = 0.09?
5: on the missing stats - does this mean my query plans are potentially
bad until the stats are regenerated?


-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best practices for effective_io_concurrency

2009-10-19 Thread Greg Smith

On Mon, 19 Oct 2009, Sergey Konoplev wrote:

> I feel it rather significant for PG performance and would like to ask
> gurus to provide some more description here.

It's probably not as significant as you are hoping.  Currently the code
only kicks in when you're doing a Bitmap Heap Scan, which is really
helpful for them, but of no help for any other type of query.  And I've
only seen it actually work at all on Linux.  It might work on BSD and Mac
OS X systems; it certainly doesn't do anything on Solaris and Windows.

> Also I've found some info in EnterpriseDB documentation
> (http://www.enterprisedb.com/docs/en/8.3R2/perf/Performance_Guide-16.htm).
> Is it all actual for PG8.4?

That is almost all correct for 8.4.  The settings mentioned there that
start with the name "edb" are not available to tweak
(edb_prefetch_indexscans and edb_enable_icache) in standard PostgreSQL;
the rest of it is good background and advice.  The basic idea is that you
start with setting the value to the number of working drives in the disk
array the database is on and see if I/O performance goes up and/or query
times drop afterwards.  If they do you might try further increases beyond
that even.

As for why there isn't a better tuning guide than just those simple
guidelines, it's not that easy to show a situation where the type of
bitmap scan this parameter impacts is used on a generated data set, even
though it's not that uncommon in real-world data.  It's hard both to make
generic suggestions here and to even demonstrate the feature at work.
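
For example, in postgresql.conf, that starting point for a database on a
four-drive array would be (a value to tune from, not a recommendation):

effective_io_concurrency = 4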


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index Question

2009-10-19 Thread Tom Lane
"Bierbryer, Andrew"  writes:
> I have created a table where a unique row is defined by 5 columns and
> have created an index on these 5 columns. If I then do a select with a
> where clause that only consists of 3 of these columns, will I get the
> speed benefits from the index that I have created, or will I need to
> create another index with only these 3 columns?

It should work fine if those are the three leading columns of the index.
If they're not, you might want another index.  This is not specific at
all to unique keys, it's a general property of multicolumn indexes.
Read
http://www.postgresql.org/docs/8.4/static/indexes.html
particularly 11.3 and 11.5
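
A minimal illustration (hypothetical names):

CREATE INDEX t_key_idx ON t (c1, c2, c3, c4, c5);

-- can use t_key_idx: c1, c2, c3 are its leading columns
SELECT * FROM t WHERE c1 = 1 AND c2 = 2 AND c3 = 3;

-- far less useful: the leading columns are unconstrained
SELECT * FROM t WHERE c3 = 3 AND c4 = 4 AND c5 = 5;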

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Index Question

2009-10-19 Thread Bierbryer, Andrew
I have created a table where a unique row is defined by 5 columns and
have created an index on these 5 columns. If I then do a select with a
where clause that only consists of 3 of these columns, will I get the
speed benefits from the index that I have created, or will I need to
create another index with only these 3 columns?

 

Thanks,

 

Andrew

 

 



Re: [GENERAL] Function returning 2 columns evaluated twice when both columns are needed

2009-10-19 Thread Tom Lane
Gerhard Wiesinger  writes:
> On Sun, 18 Oct 2009, Tom Lane wrote:
>> The OFFSET bit is a kluge, but is needed to keep the planner from
>> flattening the subquery and undoing your work.

> Thnx Tom. It also works without the OFFSET kludge. Any ideas why?

Probably because you have the function declared VOLATILE.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Best practices for effective_io_concurrency

2009-10-19 Thread Sergey Konoplev
Hi, All

I read documentation
(http://www.postgresql.org/docs/8.4/interactive/runtime-config-resource.html)
and googled effective_io_concurrency but have not found any expanded
explanation of what it actually is. I feel it rather significant for PG
performance and would like to ask gurus to provide some more
description here. It would be great if someone could share his experience.

Also I've found some info in EnterpriseDB documentation
(http://www.enterprisedb.com/docs/en/8.3R2/perf/Performance_Guide-16.htm).
Is it all actual for PG8.4?

Thank you.

-- 
Regards,
Sergey Konoplev

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgadmin3 hangs during dump

2009-10-19 Thread Albe Laurenz
Dennis Gearon wrote:
> I set up the same characteristics on the console, and it runs 
> fine, (COPY commands will import back, right? That's what it output.)
> 
> On the console, it was:
> pg_dump -vaF p -f dbase.sql -U user-name dbase-name
> 
> More details:
>  about 11 tables, practically empty.
>  no other fancy stuff like functions/views/installed languages/blah,blah
>  running on Ubuntu laptop
> 
> I don't know if I have log files set up or where they'd be. 

Well, what is needed so that anyone can help are still more details,
like the software versions involved, whether the problem is repeatable
or not, whether pgAdmin III hangs in the database or not (pg_locks ?),
if yes, in which query it hangs, what exactly is getting dumped when
it hangs, and best of all a self-contained example that would enable
somebody else to reproduce the problem.
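
For the pg_locks check, something along these lines (a sketch against the
8.x catalogs, where pg_stat_activity still has procpid and current_query)
shows ungranted locks and what the blocked sessions are running:

SELECT l.locktype, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;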

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general