[GENERAL] failed sanity check, table answers was not found

2001-04-17 Thread Marek Pętlicki

I receive this error when trying to pg_dump -s:

failed sanity check, table answers was not found

The failing table name varies. All the problematic tables
seem to work fine. vacuumdb (-z) also completes without complaint.

This is a production database on 7.0.3.
I'd like to migrate to 7.1, but I am afraid that the dumps are corrupt.
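
(A side note: one way to at least confirm that the table's catalog entry
is visible is to query pg_class directly, here for the 'answers' table
named in the error above:

SELECT relname, relkind FROM pg_class WHERE relname = 'answers';

If the row is present, the complaint presumably comes from pg_dump's own
bookkeeping rather than from a missing catalog entry.)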

regards

-- 
Marek Pętlicki [EMAIL PROTECTED]





Re: [GENERAL] updates way slower than selects?

2001-04-13 Thread Marek Pętlicki

On Friday, April, 2001-04-13 at 18:34:06, Tom Lane wrote:
> > I've got a question: has anybody noticed in your production
> > tables, that updates on existing rows take longer than inserts
> > into those same tables?
> 
> Updates naturally take longer than inserts.  Especially if you haven't
> provided an index that allows the row(s) to be updated to be found
> easily.  Have you checked the EXPLAIN results for your problem queries?

OK, I take that for granted, Tom, but this database has 16 indexes (most
of them on 2-3 columns) and the updated column is just an int4 with no
index defined and no constraints attached. The update is on a single row
selected by a serial primary key field ('where field=value'). To be exact,
I change the field from -1 to 1 (it is a kind of flag field indicating
a state of the record, which is reverted on the update).
EXPLAIN of course shows an index scan, which is why I am amazed by the
lack of speed compared to an insert (which is faster, despite also having
to update all 16 indexes and go through a couple of triggers).
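
To make the shape of the statement concrete, it is a minimal sketch like
this ('items' and 'id' are hypothetical names; auto_restore is the flag
from my original post below):

UPDATE items SET auto_restore = 1 WHERE id = 12345;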

(One side note: I don't claim that the table is well designed, witness
the number of indexes. I am just puzzled by the inconsistency in what
I observed, and I would like to learn why it happened or, more likely,
to learn that I've messed something up badly and this is not a normal
situation.)

I have reorganized the app so it doesn't use the 'flag field' anymore
(instead it uses a one-column table of ints to store the keys of the
'marked' records). A 'delete from tb1 where id=value' doesn't even
compare to an 'update tb2 set field1=1 where id=value' in terms of speed
(or slowness). The whole operation (a few inserts/deletes in a single
transaction) now takes 20% of the previous time, which is much more
satisfactory to me (and my employers ;-)
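
In other words, the rewrite replaces the slow flag update with an
insert/delete pair on the helper table, roughly like this (hypothetical
names, not the real schema):

-- mark a record: remember its key instead of updating a flag
INSERT INTO marked_records (id) VALUES (12345);
-- unmark it: a plain delete instead of an update
DELETE FROM marked_records WHERE id = 12345;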

Any further ideas would be more than appreciated, for the sake of my
future attempts.

thanks and best regards.

-- 
Marek Pętlicki [EMAIL PROTECTED]





Re: [GENERAL] updates way slower than selects?

2001-04-08 Thread Marek Pętlicki

On Saturday, April, 2001-04-07 at 23:04:23, Doug McNaught wrote:
> Marek Pętlicki [EMAIL PROTECTED] writes:
> 
> > Hi!
> > 
> > I've got a question: has anybody noticed in your production
> > tables, that updates on existing rows take longer than inserts
> > into those same tables?
> 
> Standard first question: are you vacuuming?

This is a production system that has been running for over half a year,
and yes, it is vacuumed every night. An additional question: is update
speed on a single int4 value really dependent on vacuuming, or are you
just asking a 'standard question'?

regards and thanks

-- 
Marek Pętlicki [EMAIL PROTECTED]





[GENERAL] updates way slower than selects?

2001-04-07 Thread Marek Pętlicki

Hi!

I've got a question: has anybody noticed in your production
tables, that updates on existing rows take longer than inserts
into those same tables?

I work on an Internet auction system. We have 'items' put up for
auction with an 'autorestore' option. When such an auction ends
unresolved (I mean: nobody bought the item, either because the minimum
price was not reached or because there were no offers at all),
it can be re-entered automatically. This is done by a special
script running on the server.

I mark the auctions to be autorestored with a flag column, auto_restore int4:
auto_restore = 1 means 'autorestore on' (option active)
auto_restore = -1 means 'autorestore pending'

In the autorestore loop I simply select all the auctions with
auto_restore = -1, reinsert those items into the database, and update
the column to auto_restore = 1.
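
One autorestore pass therefore looks roughly like this (a sketch with a
hypothetical column list; the real script copies all the item attributes):

BEGIN;
-- find the auctions waiting for autorestore
SELECT id FROM items WHERE auto_restore = -1;
-- reinsert each such item as a fresh auction (columns abbreviated)
INSERT INTO items (title, min_price, auto_restore)
SELECT title, min_price, 1 FROM items WHERE id = 12345;
-- flip the flag on the old row back to 'autorestore on'
UPDATE items SET auto_restore = 1 WHERE id = 12345;
COMMIT;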

The timings are very discouraging though: insert vs. update comes out
at about 1:5. That is, I insert a new auction in 1ms while the update
of the flag takes 5ms, so the flag update costs five times as much as
the insert itself!

This is not a big issue in itself, because I can use an additional table
autorestore_pending (consisting of only the IDs of auctions to be
restored) and skip the 'insert / update' routine (changing it into an
'insert / delete' one), but it is not very good news to me
(I have other procedures where I have to massively update the auctions
table, and some of them are quite time-critical for the system).
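
For the record, that workaround looks roughly like this (the helper table
name is the one above; the rest of the names are hypothetical):

-- one-column helper table holding only the pending auction IDs
CREATE TABLE autorestore_pending (item_id int4);
-- marking for autorestore becomes an insert instead of an update
INSERT INTO autorestore_pending VALUES (12345);
-- once the item is reinserted, a delete replaces the flag update
DELETE FROM autorestore_pending WHERE item_id = 12345;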



Can someone give me a hint? Do you experience the same effect?
Is it 'normal'? If the answer is 'yes' I will have to look for
solutions that avoid updates on the database.


I use PostgreSQL 7.0.3 with Python 2.0 and PoPy,
on RH 7.0 (Debian on the production machine) on a 2.4.x kernel.
The table in question has a few triggers attached and 16 indices
(it is a 30-column table of items in auction, to be sorted in multiple
ways and selected on multiple conditions). 3 triggers attached to
'INSERT' and 'UPDATE' are used to provide statistical information
(for the application as well as for the users). But I think
that the triggers are not the problem here, because the same ones
are used for 'insert' and for 'update'.


regards and best wishes

-- 
Marek Pętlicki
[EMAIL PROTECTED]





Re: [GENERAL] PostGreSql 7.1

2001-03-30 Thread Marek Pętlicki

On Friday, March, 2001-03-30 at 13:13:56, Jack wrote:
> Hi, All
> 
> Where can I get the latest PostgreSQL 7.1 beta version as an RPM installation?

maybe ftp://ftp.postgresql.org/pub/dev/test-rpms/

I don't know, I've never tried them (using RH, I prefer to compile it myself).

regards

-- 
Marek Pętlicki [EMAIL PROTECTED]





Re: [GENERAL] how to load a sql-file????

2001-03-29 Thread Marek Pętlicki

On Wednesday, March, 2001-03-28 at 16:55:55, will trillich wrote:
> On Thu, Mar 29, 2001 at 12:17:31AM +0200, Marek Pętlicki wrote:
> > On Tuesday, March, 2001-03-27 at 21:42:39, markus jais wrote:
> > > 
> > > in MySQL I can type something like in Bash on my linux box:
> > > 
> > > $ mysql -u root -p < file.sql
> > > 
> > > then the file file.sql is read.
> > > 
> > > can you tell me how to do this with postgresql???
> > > thanks a lot.
> > 
> > very similar:
> > 
> > psql dbname username < file.sql
> > 
> > should work
> > 
> > (you can well omit username if it is the same as your login name)
> 
> if your sql is flawless, go right ahead.
> 
> since psql reads "file" from its STDIN, it won't include any
> line numbers to help you track down troubles. (as far as it's
> concerned, you're just typing real fast, so the feedback would
> occur right as you type, so line numbers would be superfluous.)
> 
> on the other hand, both
> 
>   psql -f file-containing-sql mydb
> 
> and
> 
>   psql mydb
>   mydb=> \i file-containing-sql
> 
> will spit out error messages including line numbers to help you
> debug your sql.

Of course, but note that the question was whether one could use '< file.sql'.

It is the same mechanism as issuing:

my_program_creating_sql | psql dbname username

(for example, to restore from a compressed dump without decompressing it
to disk first), which can't be done with the -f option ;-)
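
To make the compressed-dump case concrete (assuming a gzip-compressed
plain-text dump called dump.sql.gz):

gunzip -c dump.sql.gz | psql dbname username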

Nevertheless your remark about debugging errors in sql files is very
important...

regards

-- 
Marek Pętlicki [EMAIL PROTECTED]





Re: [GENERAL] Vacuum VS Vacuum Analyze

2001-03-25 Thread Marek Pętlicki

On Friday, March, 2001-03-23 at 17:42:37, Tom Lane wrote:
 "Matt Friedman" [EMAIL PROTECTED] writes:
  I currently running vacuum nighly using cron and once in a while I run
  vacuum analyze (as postgres).
  Any reason why I wouldn't just simply run vacuum analyze each night?
 
 If you can spare the cycles, you might as well make every vacuum a
 vacuum analyze.

I have found that vacuum, and especially vacuum analyze, on a heavily
used database sometimes seems to last forever. A very quick-and-dirty
hack is to run it in two passes: first I run a plain vacuum, but before
that I drop all the indices. After recreating the indices I run vacuum
analyze.

The whole process runs lightning fast (the longest step is recreating
the indices). The only problem is that users must not be allowed to add
anything to the database meanwhile, because that may end up breaking the
unique-key indices. My solution is... a temporary shutdown of the
services using the database (they are helper services for my WWW
application), which simply makes my application refuse to work. The
whole process is scheduled for the middle of the night (about 4:00 AM),
so hardly anybody notices ;-) (it takes approx. 5 minutes)
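
Spelled out, the routine is roughly this (a sketch with hypothetical
table and index names, not my real schema):

-- pass 1: drop the indices, then a plain vacuum runs fast
DROP INDEX items_price_idx;
VACUUM items;
-- pass 2: recreate the indices, then gather fresh statistics
CREATE INDEX items_price_idx ON items (price);
CREATE UNIQUE INDEX items_id_key ON items (id);
VACUUM ANALYZE items;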

The other solution would be not to drop the unique indices (but I don't
know the speed penalty in this case).

The question is: have I overlooked something? Is there some danger in
this routine that I have failed to notice?

regards

-- 
Marek Pętlicki [EMAIL PROTECTED]





[GENERAL] Re: [ADMIN] New PostgreSQL session monitor

2001-03-17 Thread Marek Pętlicki

On Saturday, March, 2001-03-17 at 00:13:37, Bruce Momjian wrote:
 
> If you are running PostgreSQL 7.1 or earlier, the 'query' button will
> not work unless you compile PostgreSQL with debug symbols (-g), or apply
> the supplied patch 'query_display.diff' and recompile PostgreSQL.  The
> latter method is recommended.

will the patch be included in the final 7.1 release?

regards

-- 
Marek Pętlicki [EMAIL PROTECTED]





Re: [GENERAL] Re: pgsql for Python

2001-03-05 Thread Marek Pętlicki

On Friday, March, 2001-03-02 at 14:16:02, Joel Burton wrote:
> On Wed, 28 Feb 2001, [iso-8859-2] Marek Pętlicki wrote:
> 
> > Has anybody used http://sourceforge.net/projects/pgsql ?
> > I maintain a production system based on Python and PostgreSQL.
> > 
> > I currently use PoPy, but for a few reasons it doesn't satisfy me fully.
> > pgsql seems OK at first sight (especially the libpq-to-Python API) but
> > for the production system I need a Python DB API 2.0 compliant library.
> > Has anybody tested it yet? What is the status of its PostgreSQL 7.1
> > compatibility as of today?
> > 
> > Any Python geeks out here? ;-)
> 
> Partially a Python geek here.
> 
> I use PoPy and think it's dandy. I also thought it *was* DB API 2.0
> compliant. Am I mistaken here?

Yes, it _is_ compliant. When complaining about compatibility I meant
pgsql's libpq-to-Python API, which in my opinion is a very cool thing
(but as I said, I need DB API 2.0).

What I don't like about PoPy is its oversimplistic approach to the more
complex PostgreSQL types. Returning Numerics as strings, for example,
is a little bit too much for me...

> PyGreSQL is more commonly used, and has (IMHO) a simpler, more dict-like
> interface, but isn't (AFAIK) thread-safe, nor DB API compliant.

Yes, it is commonly used, partly because it is distributed together with
PostgreSQL :-)

Anyway I must say, after a few looks at http://sourceforge.net/projects/pgsql,
that it is a very interesting project, especially because of its better
support for PostgreSQL types (like Numeric, Money or arrays), and
especially better than PoPy, which I have the most experience with. A few
improvements would have to be added, though, to make it really the best
PostgreSQL API for Python, IMO.

regards

-- 
Marek Pętlicki [EMAIL PROTECTED]

