[GENERAL] how to read all physical rows (visible or not) from a heap

2015-04-30 Thread Qingqing Zhou
I recall there was once a GUC allowing this - basically forcing a
SnapshotAny - what's the current trick now? I need this only for debugging,
reading heap pages.

Thanks,
Qingqing




Re: [GENERAL] What constitutes reproducible numbers from pgbench?

2015-04-21 Thread Qingqing Zhou
On Tue, Apr 21, 2015 at 7:21 AM,
holger.friedrich-fa-triva...@it.nrw.de wrote:
 Hello list,

 Exactly what constitutes „reproducible“ values from pgbench?  I keep getting
 a range between 340 tps and 440 tps or something like that using the same
 command line on the same machine.  Is that reproducible enough?

Nope, it is not. Is PostgreSQL the only resource-consuming (IO,
memory, CPU, etc.) program running there?

By reproducible, I mean the tps numbers you get shall be close,
within a few percent, if nothing changed between your runs. You can try
a select-only (-S) pgbench run first.

Regards,
Qingqing




Re: [GENERAL] Waiting on ExclusiveLock on extension

2015-04-17 Thread Qingqing Zhou
On Fri, Apr 17, 2015 at 1:14 AM, Andres Freund and...@anarazel.de wrote:
 Hm. I'm not aware of related changes in 9.4? 9.5 should be a bit better,
 but I don't think 9.4 will make much of a difference.

You are right. I mis-read the check-in log.

 I don't really agree that that's the most important bit. See
 http://archives.postgresql.org/message-id/20150329185619.GA29062%40alap3.anarazel.de

Thanks for the link. I am actually thinking of changing this in a more
fundamental way - will follow up on that thread.

Regards,
Qingqing




Re: [GENERAL] fillfactor and cluster table vs ZFS copy-on-write

2015-04-16 Thread Qingqing Zhou
On Thu, Apr 16, 2015 at 5:09 AM, Geoff Speicher gspeic...@umtechx.com wrote:
 Therefore one might posit that PostgreSQL should be configured to use 100%
 fillfactor and avoid clustering on ZFS. Can anyone comment on this?


Even with COW, I can see that fillfactor < 100% still has its virtues. For
example, a HOT update can avoid adding an extra index entry on the index
page if it finds the new tuple can be inserted into the same heap page.

When you run the CLUSTER command, the engine rewrites the table into new
files anyway, so COW does not matter there.

Regards,
Qingqing




Re: [GENERAL] Waiting on ExclusiveLock on extension

2015-04-16 Thread Qingqing Zhou
On Thu, Apr 16, 2015 at 1:24 PM, Andomar ando...@aule.net wrote:
 After upgrading our database from 9.3.5 to 9.4.1 last night, the server
 suffers from high CPU spikes. During these spikes, there are a lot of these
 messages in the logs:

 process X still waiting for ExclusiveLock on extension of relation Y of
 database Z after 1036.234 ms

This issue has been complained about several times; here is the most recent one:
http://www.postgresql.org/message-id/0ddfb621-7282-4a2b-8879-a47f7cecb...@simply.name

PG 9.4.1 should have alleviated it considerably by relaxing
buffer-pool-related locks, and should actually be better at relation
extension handling. A possible explanation is that your 9.3.5 database had
been used for a while, so there were holes in its pages and not many
extensions were required.


 a) I'm assuming extension means growing the physical diskspace used by a
 table-- is that correct?
Yes.

 b) How can you find the name of the relation being extended? based on the
 relation number.
select <relation number>::regclass;
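For example, assuming the log line mentioned relation 16405 of database
16384 (made-up numbers), you could map the relation number back to a name
like this:

select 16405::regclass;    -- prints the table name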

 c) Why would Postgres grow a table twice in the same millisecond? Could it
 be an index with a high fill factor?
PG extends the relation one page at a time when new space is needed.

 Any suggestions on how to approach this issue are welcome.

There is some diagnosis in the above link; see if it rings any bell. From
the PG kernel side, I think fundamentally we may want to extend by many
pages at a time instead of one.

Regards,
Qingqing




Re: [GENERAL] Waiting on ExclusiveLock on extension

2015-04-16 Thread Qingqing Zhou
On Thu, Apr 16, 2015 at 2:39 PM, Andomar ando...@aule.net wrote:
 That post is about a server with huge shared_buffers, but ours is just 8GB.
 Total memory 48GB memory on a dedicated server. Checkpoints write around 2%
 of the buffers.
Are you able to take some 'perf top' samples during a high CPU spike and see
what's burning CPU there? Though the issue is related to blocking,
high CPU spikes may hint at some spin-to-acquire behavior.

 With holes in pages, I suppose you mean the fill factor? Is there a way to
 see the current fillfactor of a table and its indices?

Yes, holes meaning free space within a page. It can come from a fillfactor
below 100% or from dead tuples collected by vacuum. You can see the
fillfactor value in pg_class.reloptions. If you see nothing there, the
defaults apply: 100% for heaps and 90% for btrees respectively.
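For example (the table and index names below are made up), a quick way to
check for a non-default setting:

select relname, relkind, reloptions
from pg_class
where relname in ('mytable', 'mytable_pkey');   -- reloptions is NULL when the default applies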

If your relation size before the upgrade is smaller than it is after the
upgrade, that's a signal that you do have holes in the relation, so
extension can sometimes be avoided for new tuples.
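If you want to track this, a small sketch (the table name is just a
placeholder) to watch the relation size over time:

select pg_size_pretty(pg_relation_size('mytable'));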

Regards,
Qingqing




Re: [GENERAL] Strange Behavior with Serializable Transcations

2006-06-29 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] wrote

 Right, the snapshot does not become set until you do a non-utility
 command (normally, SELECT/INSERT/UPDATE/DELETE).  This is a feature, not
 a bug, because it lets the transaction take table locks before its
 snapshot becomes set.


Hm, mostly I understand it as an optimization. What do you mean by "it lets
the transaction take table locks before its snapshot becomes set"? If we
took the snapshot at BEGIN, the transaction couldn't take table locks?

test=# begin;select count(*) from k;
BEGIN
 count
---
  
(1 row)

test=# lock table k;
LOCK TABLE

Regards,
Qingqing





Re: [GENERAL] Strange Behavior with Serializable Transcations

2006-06-29 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] wrote

 It can, but there are cases where you want the lock to be taken before
 the snapshot is set.  Otherwise, there could be committed changes in the
 database that you can't see in your snapshot.  I think there are some
 examples in the manual, or check the archives.


Ok, here are the links:

http://archives.postgresql.org/pgsql-bugs/2002-12/msg00169.php  - a bug
analysis of taking the snapshot at the wrong point;

http://archives.postgresql.org/pgsql-hackers/2004-09/msg00133.php - recent
redesign notes.

Regards,
Qingqing





Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-21 Thread Qingqing Zhou


On Wed, 21 Jun 2006, Relyea, Mike wrote:

 ExecutorState: 550339936 total in 123 blocks; 195003544 free (740135
 chunks); 355336392 used
 HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80
 chunks); 290485792 used
 TIDBitmap: 2088960 total in 8 blocks; 924720 free (27 chunks); 1164240
 used
 HashBatchContext: 41877664 total in 11 blocks; 4845648 free (15 chunks);
 37032016 used
 TIDBitmap: 2088960 total in 8 blocks; 1012120 free (27 chunks); 1076840
 used

The same problem: ExecutorState uses much more memory than we expect --
but I am not sure where it comes from :-(

Regards,
Qingqing



Re: [GENERAL] problem connecting to server

2006-06-20 Thread Qingqing Zhou

Trigve Siver [EMAIL PROTECTED] wrote

 Thank you for reply. No there isn't any panic message. Log contain only
 messages I have posted.


Check out this thread (or search for "background writer process exited" in
the list archives):

http://archives.postgresql.org/pgsql-general/2006-05/msg01568.php

Regards,
Qingqing





Re: [GENERAL] simple query terminated by signal 11

2006-06-20 Thread Qingqing Zhou

Thomas Chille [EMAIL PROTECTED] wrote

 I don't know how to post it, because the size is 1,5 MB?! I try to
 attch it as gzip.


No ... I mean the bt result of the core dump:

$ gdb postgres_exe_path -c core_file_name
(gdb) bt

 .
 Program terminated with signal 11, Segmentation fault.
 #0  0x080753c2 in DataFill ()
 (gdb) where
 #0  0x080753c2 in DataFill ()
 #1  0xb74253d4 in ?? ()
 #2  0x001d in ?? ()
 #3  0x08356fa8 in ?? ()
 #4  0x08379420 in ?? ()
 #5  0x in ?? ()
 (gdb)


Since it is repeatable on your machine, you can compile a new postgres
build with the --enable-cassert (enable assertions in the code) and
--enable-debug (enable gcc debug support) configure options. Then run it
against your data and bt the core dump.

Regards,
Qingqing






Re: [GENERAL] simple query terminated by signal 11

2006-06-19 Thread Qingqing Zhou

Thomas Chille [EMAIL PROTECTED] wrote
 Hi List,

 i run in to an error while dumping a db.

 after investigating it, i found a possible corrupted table. but i am not
sure.
 and i dont know how i can repair it? could it be a harddrive error?


 # now the error: SELECT * FROM hst_sales_report WHERE id = 5078867

 [6216 / 2006-06-19 18:46:23 CEST]LOG:  0: connection received:
 host=[local] port=
 [6216 / 2006-06-19 18:46:23 CEST]LOCATION:  BackendRun, postmaster.c:2679
 [6216 / 2006-06-19 18:46:23 CEST]LOG:  0: connection authorized:
 user=postgres database=backoffice_db
 [6216 / 2006-06-19 18:46:23 CEST]LOCATION:  BackendRun, postmaster.c:2751
 [6216 / 2006-06-19 18:46:23 CEST]LOG:  0: statement: SELECT * FROM
 hst_sales_report WHERE id = 5078867
 [6216 / 2006-06-19 18:46:23 CEST]LOCATION:  pg_parse_query, postgres.c:526
 [3762 / 2006-06-19 18:46:23 CEST]LOG:  0: server process (PID
 6216) was terminated by signal 11
 [3762 / 2006-06-19 18:46:23 CEST]LOCATION:  LogChildExit,
postmaster.c:2358
 [3762 / 2006-06-19 18:46:23 CEST]LOG:  0: terminating any other
 active server processes
 [3762 / 2006-06-19 18:46:23 CEST]LOCATION:  HandleChildCrash,
postmaster.c:2251
 [3985 / 2006-06-19 18:46:23 CEST]WARNING:  57P02: terminating
 connection because of crash of another server process
 [3985 / 2006-06-19 18:46:23 CEST]DETAIL:  The postmaster has commanded
 this server process to roll back the current transaction and exit,
 because another server process exited abnormally and possibly
 corrupted shared memory.

Which version are you using? In any case, barring a random hardware error, we
expect Postgres to be able to detect and report the problem rather than
silently dumping core. So can you gather the core dump and post it here?

Regards,
Qingqing





Re: [GENERAL] problem connecting to server

2006-06-19 Thread Qingqing Zhou

Trigve Siver [EMAIL PROTECTED] wrote

   I have installed PostgreSQL 8.1 on windows XP with SP2 ...

   2006-06-19 12:00:38 LOG:  terminating any other active server processes
 2006-06-19 12:00:38 LOG:  all server processes terminated; reinitializing
 2006-06-19 12:00:41 LOG:  background writer process (PID 132500) exited
with exit code 0
 2006-06-19 12:00:41 LOG:  terminating any other active server processes
 2006-06-19 12:00:41 LOG:  all server processes terminated; reinitializing
 2006-06-19 12:00:43 LOG:  background writer process (PID 133080) exited
with exit
   ...

Looks like the server is trying hard to restart. Did you find any PANIC
message in the log?

Regards,
Qingqing





Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-18 Thread Qingqing Zhou

Qingqing Zhou [EMAIL PROTECTED] wrote



 ExecutorState: 550339936 total in 123 blocks; 195005920 free (740144
chunks); 355334016 used
 ...
 HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80 chunks);
290485792 used
 TIDBitmap: 2088960 total in 8 blocks; 1012120 free (27 chunks); 1076840
used
 ...
 HashBatchContext: 2080768 total in 7 blocks; 346888 free (11 chunks);
1733880 used
 HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
 HashBatchContext: 2080768 total in 7 blocks; 325264 free (11 chunks);
1755504 used
 HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
 HashBatchContext: 41877664 total in 11 blocks; 4845648 free (15 chunks);
37032016 used
 TIDBitmap: 2088960 total in 8 blocks; 1012120 free (27 chunks); 1076840
used

 [repeated sequenced like this ...]

 ERROR:  out of memory
 DETAIL:  Failed on request of size 344.
 CONTEXT:  SQL statement INSERT INTO tblSummary SELECT * FROM
qrySummary
 PL/pgSQL function refresh_matview line 13 at execute statement
 LOG:  autovacuum: processing database iqdb
 LOG:  autovacuum: processing database template1LOG:  autovacuum:
processing database postgres

Er, looks like a huge hash join, but I am not sure whether it is a memory leak -- Tom?

Regards,
Qingqing






Re: [GENERAL] How to optimize PostgreSQL database size

2006-06-18 Thread Qingqing Zhou

Linux Portal [EMAIL PROTECTED] wrote
 The article on the subject can be read here:

   http://linux.inet.hr/optimize_postgresql_database_size.html


After dump/restore the database size is 1685 MB, and after
vacuum-full/reindex it is 1990 MB. Where did we save the 305 MB?

Regards,
Qingqing





Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-15 Thread Qingqing Zhou

Relyea, Mike [EMAIL PROTECTED] wrote
 Is this what you're looking for?

No. I mean per-context memory usage output like this in your log file:

2006-06-08 16:33:09 LOG:  autovacuum: processing database ibox
TopMemoryContext: 84400 total in 7 blocks; 12696 free (22 chunks); 71704
used
Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256
used
TopTransactionContext: 8192 total in 1 blocks; 7552 free (10 chunks); 640
used
MessageContext: 8192 total in 1 blocks; 7368 free (1 chunks); 824 used
smgr relation table: 8192 total in 1 blocks; 3904 free (0 chunks); 4288 used
...

Regards,
Qingqing







Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-14 Thread Qingqing Zhou

Relyea, Mike [EMAIL PROTECTED] wrote
 I've just started receiving an out of memory error with my most complex
 query.

Can you post the memory usage log after the error the server reports?

Regards,
Qingqing





Re: [GENERAL] Insert into partition table hangs

2006-06-11 Thread Qingqing Zhou

Nikola Ivanov [EMAIL PROTECTED] wrote
 Database is running on Windows 2003 Advanced Server and it is PostgreSQL
 8.1.3.

 On 6/9/06, Tom Lane [EMAIL PROTECTED] wrote
 
  Buggy SysV semaphore support in the kernel, maybe?  What platform is
  this exactly?
 

Yes, there is a bug in the 8.1.3 Win32 SysV semaphore emulation, but I can't
figure out how that bug would lead to this behavior ...

There is a patch applied to the CVS tip -- you can find it in the patches
list (something like "Win32 semaphore patch"). If you can, try to apply it to
your server (8.0.*, 8.1.* are all ok with it). There was one successful case
of solving the semaphore problem with the patch.

Regards,
Qingqing





Re: [GENERAL] Disk corruption detection

2006-06-11 Thread Qingqing Zhou

Florian Weimer [EMAIL PROTECTED] wrote

 Anyway, how would be the chances for PostgreSQL to detect such a
 corruption on a heap or index data file?  It's typically hard to
 detect this at the application level, so I don't expect wonders.  I'm
 just curious if using PostgreSQL would have helped to catch this
 sooner.


PostgreSQL will only detect such corruption when it actually uses the
affected heap or index page. So a safe way is to dump/restore your database
if you suspect some inconsistency has happened.
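As a lighter check than a full dump (the table name is just a placeholder),
a sequential scan forces every heap page of one table to be read, so a
broken page header on that table is reported right away:

select count(*) from mytable;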

Regards,
Qingqing





Re: [GENERAL] Ever increasing OIDs - gonna run out soon?

2006-06-11 Thread Qingqing Zhou

John Sidney-Woollett [EMAIL PROTECTED] wrote

 It looks like the db is using them at the rate of 1.5 million per day.
 At what value will I hit a wraparound, and what options do I have to
 identify/fix the (impending) problem.


The OID counter will wrap around when it reaches the 32-bit unsigned integer
limit. If you don't use OIDs explicitly in your application, then you don't
need to worry about it.
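If you are unsure whether any of your tables even carry OIDs, a quick sketch
(assuming your tables live in the public schema):

select relname
from pg_class
where relhasoids
  and relnamespace = (select oid from pg_namespace where nspname = 'public');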

Regards,
Qingqing





Re: [GENERAL] Insert into partition table hangs

2006-06-09 Thread Qingqing Zhou

Nik [EMAIL PROTECTED] wrote

 I also noticed that I am having the similar issue with SELECT
 statements. Same SELECT statement will run fine 3-4 times in a row and
 then get stuck and run forever with no results. But if I reload the
 configuration like above, the query immediatelly returns with correct
 result.

 Why does this happen, and could this give any insight in my problem?


Interesting enough :-(

I can see that for some reason the query may be blocked on a semaphore while
also listening for signals. So when the SIGHUP comes, it will wake up and set
the variable got_SIGHUP to true (that's the only immediate thing done for
SIGHUP). After that, we either continue to sleep on the semaphore or exit
with a FATAL message -- neither situation matches your scenario.

A wild guess is that when we continue to sleep on the semaphore, we magically
acquire it, but how come?

Regards,
Qingqing





Re: [GENERAL] Out of Memory Error on Postgresql 8.1.3. on win32

2006-06-08 Thread Qingqing Zhou

Harald Armin Massa [EMAIL PROTECTED] wrote
 today postgresql 8.1.3 on win32 died:
 2006-06-08 16:33:12 ERROR:  out of memory
 2006-06-08 16:33:12 DETAIL:  Failed on request of size 16777212.


I didn't see any strange numbers in your log message, and 16777212 is 16MB,
which is not a scary number ... is there any other memory-exhausting program
on the same machine?

Regards,
Qingqing





Re: [GENERAL] error 57014

2006-06-08 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] wrote

 Well, *something* sent that backend a SIGINT.


Could that "something" be the backend statement_timeout? I remember [but am
not sure] that we don't differentiate between a timeout and an explicit
SIGINT in 7.4.x.

Regards,
Qingqing





Re: [GENERAL] Restoring databases from a different installment on Windows

2006-05-29 Thread Qingqing Zhou

Berislav Lopac [EMAIL PROTECTED] wrote
I have recently reinstalled my Windows machine, including the
 PostgreSQL server, but (due to a system crash, unrelated to Postgres) 
 I
 wasn't able to dump my databases to import them now. However, I have a
 full copy of the original system, including all the files of the
 original Postgres installation.

 Is there a way to restore the databases from the original installation
 into the new one? For example, in MySQL I would be able to just copy
 the data files; is there something similar in Postgres?


If you don't have a tablespace built (i.e. all data are in your initial 
data directory), then just like MySQL, you can replace your current data 
directory with that folder.

Regards,
Qingqing 





Re: [GENERAL] delete is extremely slow

2006-05-26 Thread Qingqing Zhou

Emi Lu [EMAIL PROTECTED] wrote
 Hello,

 Tried to run:

 delete from tableA where id 60;

 It is extremely slow.

 id is integer and also the primary key. tableA has around 9 records.

 (I do not want to run truncate since I have some foreign key constraint
 setup based on tableA. Besides not all data I want to delete; only part
 of them).


Since you have some FK constraints referencing A, deleting each row of A must
confirm that it is not referenced, which involves a lookup on the referencing
table. Do you have indexes built on those referencing columns?
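For example (the child table and column names here are invented), each
referencing column wants something like:

create index orders_tablea_id_idx on orders (tablea_id);

Without such an index, every row deleted from tableA triggers a sequential
scan of the referencing table.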

Regards,
Qingqing





Re: [GENERAL] Insert into partition table hangs

2006-05-19 Thread Qingqing Zhou

Nik [EMAIL PROTECTED] wrote
 I have an application that executes an INSERT statement every second or
 two. In my old design this data was inserted into one table. However, I
 changed the design to have one table with multiple partitions. Now when
 I run the application it runs for some random amount of time (random
 number of INSERTs) and then it gets stuck at the INSERT query and it
 never finishes (at least not in some reasonable time - aka few hours).
 If I restart the application it continues inserting the data normally
 for a while before getting stuck again.


Do you do INSERT concurrently? Can you post a self-contained example to
reproduce it?

Regards,
Qingqing





Re: [GENERAL] Postmaster cannot start

2006-05-15 Thread Qingqing Zhou

Chun Yit(Chronos) [EMAIL PROTECTED] wrote
 saw from the log file, it's possible that server crash during
 vacuum process...

 Question :
 1) what happen to my database server? what the error meaning?

It looks like the index pg_class_relname_nsp_index (which is an index on
pg_class) is corrupted. But I am not sure why it reports the following error
message (which looks like an error caused by post-commit cleanup):

DEBUG:  AbortCurrentTransaction
PANIC:  cannot abort transaction 14135438, it was already committed

 2) it is my daily maintenance routine causing that?
No.

 3) it is ok to run vacuum full every day ?
Yes. There is no apparent bad side effect of vacuum full except that it
blocks other access to the target table. And since you do it once per day
(I guess on a nightly schedule), this should not be a problem.

 4) how can i solve this problem?

The base table pg_class should be ok (pg_class_oid_index indicates both
have the same cardinality). Try to reindex pg_class as the superuser.
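A minimal sketch of that, run while connected to the affected database as a
superuser:

REINDEX TABLE pg_class;
-- or only the suspect index:
REINDEX INDEX pg_class_relname_nsp_index;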

Regards,
Qingqing














Re: [GENERAL] Postmaster cannot start

2006-05-15 Thread Qingqing Zhou

Chun Yit(Chronos) [EMAIL PROTECTED] wrote
 postmaster give me error every time i try to start it
 LOG:  redo starts at A/46315F50
 PANIC:  btree_delete_page_redo: uninitialized right sibling


So the last resort I can think of is to use pg_resetxlog to get past the
startup failure -- but there is no guarantee that your database is still
consistent after that :-(. So before you do that, make a copy of your
database as a backup.

Maybe a deeper look into the problem is not worth it, since 7.4.5 is quite
old -- try to use some newer version.

Regards,
Qingqing





[GENERAL] How to have a blind-superuser

2006-04-24 Thread Qingqing Zhou
Is it possible to have a superuser who could do CHECKPOINT, BACKUP and
whatever but could not see any user data?

Regards,
Qingqing





Re: [GENERAL] How to have a blind-superuser

2006-04-24 Thread Qingqing Zhou

Jim C. Nasby [EMAIL PROTECTED] wrote
 On Mon, Apr 24, 2006 at 06:16:30PM +0800, Qingqing Zhou wrote:
  Is it possible to have a superuser who could do CHECKPOINT, BACKUP and
  whatever but could not see any user data?

 Not for backup. It'd be rather tricky to allow backing up data without
 being able to read it, afterall.

 I believe CHECKPOINT is protected since repeatedly calling it could
 result in performance problems, but you can probably get around that if
 needed by using a security-definer function.

 Why do you want non-superusers to be able to checkpoint, anyway?


Basically I wonder if I can have a superuser who has every privilege he has
now (create language, rotate log files, create checkpoints and everything a
superuser can do), but one thing I want to make sure of is that he cannot see
any user data, for security reasons (just imagine my database is filled with
very important UFO data ;-)). In other words, I need a superuser who is able
to maintain the database but knows nothing about what is in it. Is there a
solution for this in PG?

Thanks,
Qingqing






Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-31 Thread Qingqing Zhou

Alex bahdushka [EMAIL PROTECTED] wrote

 (@)2006-03-18 23:30:33.035 MST[3791]PANIC:  heap_update_redo: no block


According to the discussion in pgsql-hackers, to close out this case: did you
turn off the full_page_writes parameter? I hope the answer is yes ...

Regards,
Qingqing






Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-27 Thread Qingqing Zhou

Alex bahdushka [EMAIL PROTECTED] wrote

 LOG:  REDO @ D/19176610; LSN D/19176644: prev D/191765E8; xid 81148979:
 Heap - clean: rel 1663/16386/16559898; blk 0
 LOG:  REDO @ D/19176644; LSN D/191766A4: prev D/19176610; xid 81148979:
 Heap - move: rel 1663/16386/16559898; tid 1/1; new 0/10
 PANIC:  heap_update_redo: no block: target blcknum: 1,
 relation(1663/16386/16559898) length: 1

What happened that time is like this: heap 16559898 has two pages {0, 1}. A
vacuum full first examines page 0 and does some cleaning; then it comes to
the second page and moves item 1/1 to page 0 -- as a byproduct, the heap is
truncated to only 1 page, since 1/1 was the only item on the second page. At
this point the system crashed, and heap 16559898 had only 1 page left. At
xlog startup time, for some unknown reason (or I am totally wrong),
PostgreSQL didn't extend the heap back to 2 blocks, thus
heap_update_redo (more exactly, it should be heap_move_redo) failed. But as
for what that unknown reason is, I really don't have a clue :-(.

Actually I tried to simulate your situation, but every time I got a clean
recovery -- the only caveat is that, depending on whether the
XLOG_SMGR_TRUNCATE record was written out or not, there may be one extra
useless page at the end of the heap, which is not a problem at all.

Regards,
Qingqing






Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-26 Thread Qingqing Zhou

Alex bahdushka [EMAIL PROTECTED] wrote

 (@)2006-03-25 20:54:17.528 MST[26571]LOG:  REDO @ D/19176610; LSN
 D/19176644: prev D/191765E8; xid 81148979: Heap - clean: rel
 1663/16386/16559898; b
 (@)2006-03-25 20:54:17.528 MST[26571]LOG:  REDO @ D/19176644; LSN
 D/191766A4: prev D/19176610; xid 81148979: Heap - move: rel
 1663/16386/16559898; ti
 (@)2006-03-25 20:54:17.528 MST[26571]PANIC:  heap_update_redo: no
 block: target blcknum: 1, relation(1663/16386/16559898) length: 1
 (@)2006-03-25 20:54:17.529 MST[26570]LOG:  startup process (PID
 26571) was terminated by signal 6


It looks like a problem due to a conflict between vacuum and update: the
update is on a page that vacuum has just removed.

Before we try to understand/attack the bug exactly, first I'd like to see
the complete xlog output. Your xlog output is incomplete -- for example, the
first line should continue after "b" with "lk %u", and the second line should
continue after "ti" with "d %u%u". Can you check why the output looks like
that? If you want to examine the source code, look at heapam.c/heap_desc().

Regards,
Qingqing






Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-23 Thread Qingqing Zhou

Merlin Moncure [EMAIL PROTECTED] wrote

 In postgresql, queries executed over the parameterized/prepared C api
 are particularly fast...as much as a 70% speed reduction over vanilla
 PQexec.

Does that mean 70% of the time is spent on planning? I am a bit interested in
this number. Can you specify what kind of queries, or give some test numbers?
By the way, if so, we can get the same effect with the PREPARE statement in SQL.
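For reference, a minimal SQL-level sketch (the table and values are made up):

PREPARE get_item (int) AS
    SELECT * FROM items WHERE id = $1;   -- planned once
EXECUTE get_item(42);                    -- reuses the prepared plan
EXECUTE get_item(43);
DEALLOCATE get_item;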

Regards,
Qingqing





Re: [GENERAL] How to release locks

2006-03-22 Thread Qingqing Zhou

Andrus [EMAIL PROTECTED] wrote

 insert command takes a long time. I terminated the client application from
 Windows XP client manager during insert command.
 At next re-run application and pgadmin both hang in DELETE FROM line

 I think I can reproduce this. Postgres server is in FreeBSD.
 I can require server admin to do something after the problem occurs.

 What should I do after reproducing the problem ?


A backtrace would be most useful. Just build a debug version
(--enable-debug) and restart the server; when you reproduce the problem,
attach gdb to the stuck backend, then bt.

Regards,
Qingqing





Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-21 Thread Qingqing Zhou

Alex bahdushka [EMAIL PROTECTED] wrote

 Hrm... well i obviously have restored the database by now (using
 pg_resetxlog; pg_dump; initdb; pg_restore).  However i did make a
 backup of the broken directory before I created the new database.  If
 anyone has any thing they would like me to try to possibly help track
 down this possible bug.  I would be more than glad to do it.


pg_resetxlog is a last resort that sidesteps the real problem. Once you reset
it, the xlog beyond a certain offset will not get replayed (so the problem
disappears), and possibly some data gets lost :-(.

Can you patch heap/heapam.c, heap_xlog_update(), like this:

- elog(PANIC, "heap_update_redo: no block");
+ elog(PANIC, "heap_update_redo: no block: target block: %u, relation length: %u",
+      ItemPointerGetBlockNumber(&(xlrec->target.tid)),
+      RelationGetNumberOfBlocks(reln));

And restart your database to see what the output is?

Regards,
Qingqing





Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-21 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] wrote

 While at it, you should extend the error message to include the relation
 ID, so you have some idea which table is affected ... this is certainly
 not a very informative message ...


Exactly. Please use the following version:

- elog(PANIC, "heap_update_redo: no block");
+ elog(PANIC, "heap_update_redo: no block: target blcknum: %u, relation(%u/%u/%u) length: %u",
+      ItemPointerGetBlockNumber(&(xlrec->target.tid)),
+      reln->rd_node.spcNode,
+      reln->rd_node.dbNode,
+      reln->rd_node.relNode,
+      RelationGetNumberOfBlocks(reln));

BTW: I just realized that another (maybe better) way to do this is to
enable WAL_DEBUG in xlog.h and SET XLOG_DEBUG=true. And that's probably why
we don't have verbose error messages in xlog redo.

Regards,
Qingqing





Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-21 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] wrote
 Qingqing Zhou [EMAIL PROTECTED] writes:
  BTW: I just realized that there is another (better) way to do so is to
  enable WAL_DEBUG in xlog.h and SET XLOG_DEBUG=true. And that's why we
don't
  have much error message in xlog redo.

 That was probably Vadim's original reasoning for not being very verbose
 in the redo routines' various PANIC messages.  But for failures in the
 field it'd be awfully nice to be able to see this info from a standard
 build, so I'm thinking we should improve the elog messages.  If you feel
 like creating a patch I'll be glad to apply it ...


So there are three ways to do it:
(1) enable WAL_DEBUG by default
It is then the user's responsibility to enable XLOG_DEBUG to print verbose
information when an error occurs. This adds no cost during normal running,
but the problem is that too much information (only the last record is
useful) may pollute the log file.

(2) print verbose information after an error
We can change StartupXLOG like this:

PG_TRY();
{
    RmgrTable[record->xl_rmid].rm_redo(EndRecPtr, record);
}
PG_CATCH();
{
    RmgrTable[record->xl_rmid].rm_desc(buf,
                                       record->xl_info, XLogRecGetData(record));
    abort();
}
PG_END_CATCH();

Also, change err_finish() so that on a PANIC, if InRecovery, it does a
PG_RE_THROW(). The problem is that this looks like a bit of a hack.

(3) Replace elog in every xlog_ABC_redo() with some xlog_elog(), so the
description information can be appended automatically.

I vote for method 2.

Regards,
Qingqing





Re: [GENERAL] How to release locks

2006-03-21 Thread Qingqing Zhou

Andrus [EMAIL PROTECTED] wrote
 After killing client from XP task manager command

 DELETE FROM mytable

 runs forever.

 This is probably because pgAdmin shows that there are some locks placed
(see
 included picture).


It is not because of the locks. There is only one running transaction (and it
got all the locks it asked for). I am not sure why the backend is stuck
there; it seems it isn't aware of the broken client. In normal situations, if
you kill a client, the server will print something like "could not receive
data from client" and then exit the backend.

If you can attach a debugger to the problematic postgres process, that would
be more helpful for identifying the problem.

Regards,
Qingqing





Re: [GENERAL] invalid page header

2006-03-21 Thread Qingqing Zhou

Jo De Haes [EMAIL PROTECTED] wrote

 CETERROR:  XX001: invalid page header in block 22182 of relation
dunn_main

 My main question is: why is this occuring?


I suppose no system error (like lost power) happened during that period. Can
you attach gdb, set a breakpoint with "b bufmgr.c:257", and print the content
of "(PageHeader) bufBlock"?

Regards,
Qingqing





Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-20 Thread Qingqing Zhou

Alex bahdushka [EMAIL PROTECTED] wrote

 After doing some more digging, it looks like that server was missing
 the appropriate Kpostgresql symlink in /etc/rc0.d/.  So upon shutdown
 (shutdown -h now)...  my guess is it got a sigterm (you know where it
 says Sending all processes a TERM signal or whatever), then it (init)
 waited 5 seconds or whatever the timeout is and sent a sigkill.

 If postgresql took longer to shutdown than that timeout and so was
 then given a sigkill and then server turned off Could that do it?


Actually I don't believe this explanation. According to the startup message,
the error "heap_update_redo: no block" most likely happens when PostgreSQL
tries to read a block that should exist but finds the file is not long
enough to contain it. How could a SIGKILL truncate a data file like that?

Regards,
Qingqing





Re: [GENERAL] Error in Postgresql after a Machine Crash

2006-03-15 Thread Qingqing Zhou

Paulo Henrique Oliveira [EMAIL PROTECTED] wrote
 I was using postgres 7.2.1 in a Debian Woody server (in prodution).
 The machine crashed and when I restart it the following error occurs a lot
 in log.
 2006-03-14 14:35:23 [11858]  ERROR:  XLogFlush: request 102/7407C864 is
not
 satisfied --- flushed only to 102/4CFEC030

This happens due to a broken LSN field in the page header. That number looks
quite insane -- much, much bigger than the actual current end of WAL. If
there are a lot of different complaints like this (compare the
7407C864 field), then there are a lot of broken pages :-(

 How do I fix this?
Are you sure you are using 7.2.1? PG no longer officially supports versions
older than 7.3, so the best way forward is to upgrade to a newer version like
7.4. You can dump your data before that.

Regards,
Qingqing





Re: [GENERAL] \copy combine with SELECT

2006-03-15 Thread Qingqing Zhou

jia ding [EMAIL PROTECTED] wrote

 I tried:
 select id, name  into table2   from table1;
  \copy table2 to filename.txt
 in order to export 2 columns from table1 to a file.

 But, I am thinking, if there is a command can combine these two
 command together?

Notice that the COPY command can be used like this:
COPY relname ['(' columnList ')'] FROM/TO [WITH options]
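So the two steps can usually be collapsed into one command, e.g. (the column
and file names are placeholders):

COPY table1 (id, name) TO '/tmp/filename.txt';
-- or, from psql, writing the file on the client side:
-- \copy table1 (id, name) to 'filename.txt'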

Regards,
Qingqing






Re: [GENERAL] How do I use the backend APIs

2006-02-16 Thread Qingqing Zhou

Chad [EMAIL PROTECTED] wrote

 In Postgres, is there a C language API which would give me access to
 BTrees like Berkeley DB does? eg to seek to a particular key/value pair
 and iterate forward from there?

AFAIK there is no such API for this purpose. The reason is that to access a
btree you have to set up a fairly complex environment first -- for example,
buffer pool support, WAL support, etc. So although exporting such an API
would be easy to do, there is no practical use for it.

Regards,
Qingqing





Re: [GENERAL] Huge number of disk writes after migration to 8.1

2006-01-17 Thread Qingqing Zhou
On Tue, 17 Jan 2006, Marcin wrote:


 Playing with postgresql.conf I changed stats_command_string to off,
 reloaded config, and restarted connection from applications (the 
 postgresql
 wasn't restarted). The write rate immediately drops down, and is now
 at 8000-9000 blocks per second (which is still 4-5 times more than in 
 8.0.3).


A similar problem was reported before:

http://archives.postgresql.org/pgsql-admin/2005-12/msg00266.php

But we concluded that it was not related to pgstats. See if it is related to
your situation.

Regards,
Qingqing





Re: [GENERAL] Huge number of disk writes after migration to 8.1

2006-01-16 Thread Qingqing Zhou

Marcin [EMAIL PROTECTED] wrote

 I suffer from huge increase in number of disk writes after migration
 from 8.0.3 to 8.1.2. The average number of sectors written increased 10
 times and transfer stays at 20MB of writes per second during work hours
 (with 20% cpu time wasted in iowait). The number of queries lasting more
 than 300ms increased thrice.


AFAICS the relevant difference between 8.0 and 8.1 may be the introduction of
autovacuum, but it seems you didn't turn it on. Did you observe only writes
increasing, or both reads and writes? If the latter, that might be an
unoptimized query plan involving a seqscan ... Can you find out what the
writes are going to?

Regards,
Qingqing 





Re: [GENERAL] Error when inserting millions of records

2006-01-13 Thread Qingqing Zhou

Rudolph [EMAIL PROTECTED] wrote
 Me and a friend are testing some operations for a search application in
 PostgreSQL. In version 8.1 my friend tried to copy the result of a join
 into another table, about 45.000.000 (small) records. This is the
 result:

 testsearch= insert into t_documentword2 (SELECT document_id,
 t_word2.id,
 frequency from t_documentword, t_word2 where t_documentword.word =
 t_word2.word);
 server closed the connection unexpectedly
   This probably means the server terminated abnormally
   before or while processing the request.
 The connection to the server was lost. Attempting reset: Failed.

 He got the same result in version 7.5.15.


Do we have a 7.5.15 version? You can get the exact version using SELECT
version();

What's the structure of t_documentword2 and t_documentword? If you got a core
dump, it would be useful to print the backtrace.

Regards,
Qingqing 





Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] wrote

 The key word there is safely.  We don't have a lot of trust in
 SIGTERM'ing individual backends (as opposed to shutting down the
 whole cluster at once, which is a well-tested code path).  See the
 archives.


Maybe related question: is the code below in XactLockTableWait() related to 
SIGQUIT?

 /*
  * Transaction was committed/aborted/crashed - we have to update pg_clog
  * if transaction is still marked as running.
  */
 if (!TransactionIdDidCommit(xid) && !TransactionIdDidAbort(xid))
  TransactionIdAbort(xid);

My interpretation is that if a quickdie or crash happens, other backends may
still run for a while, so it is important to mark the related transaction
aborted. Or is there some other, more obvious reason for it?

Regards,
Qingqing





Re: [GENERAL] extend sql?

2006-01-12 Thread Qingqing Zhou

xiapw [EMAIL PROTECTED] wrote
 Hello everyone, I want to know how to extend the SQL.
 For example,Create table temp [sec_level],sec_level means the secure level 
 of the table that you have created,how to do this?

If you mean extend the SQL grammar, modify file backend/parser/gram.y.

Regards,
Qingqing 





Re: [GENERAL] index and MVCC

2006-01-11 Thread Qingqing Zhou

[EMAIL PROTECTED] wrote
a table is created and an index is created on that table

 then one row is updated

 what happens to the index file?
 ( is the index to old row deleted?)


The old index entry is left alone: the index row is not deleted, since there
is no MVCC information associated with indexes. See the comments in
ExecDelete():

  * Note: Normally one would think that we have to delete index tuples
  * associated with the heap tuple now...
  *
  * ... but in POSTGRES, we have no need to do this because VACUUM will
  * take care of it later.  We can't delete index tuples immediately
  * anyway, since the tuple is still visible to other transactions.

Regards,
Qingqing 





Re: [GENERAL] xml export tool?

2006-01-11 Thread Qingqing Zhou

JJC [EMAIL PROTECTED] wrote

 I'm looking for an open-source tool that will allow me to extract data as
 XML.


If you want to extract data from PostgreSQL, I think most XML export tools
will do, as long as they can connect to the database via ODBC/JDBC.

Regards,
Qingqing






Re: [GENERAL] psql(18967) malloc: *** vm_allocate(size=8421376) failed (error code=3)

2006-01-11 Thread Qingqing Zhou

Ari Kahn [EMAIL PROTECTED] wrote
 I'm doing a query that really shouldn't be too taxing. But when I execute it
 I get the following error(s):

 psql(18967) malloc: *** vm_allocate(size=8421376) failed (error code=3)

 Nested Loop  (cost=89.36..6086.42 rows=273312 width=88) (actual 
 time=0.113..24456.508 rows=54952794 loops=1)

Maybe 54952794 rows is too much for your client machine? Try adding more
memory or increasing your VM file size.

Regards,
Qingqing 





Re: [GENERAL] psql(18967) malloc: *** vm_allocate(size=8421376) failed (error

2006-01-11 Thread Qingqing Zhou


On Wed, 11 Jan 2006, Ari Kahn wrote:


 I have LOTS of memory (5GB).
 What is the appropriate way to add more memory slots?
 Or increase my VM file size?


Hmm ... What OS are you using?

Regards,
Qingqing



Re: [GENERAL] psql(18967) malloc: *** vm_allocate(size=8421376) failed (error

2006-01-11 Thread Qingqing Zhou


On Wed, 11 Jan 2006, Ari Kahn wrote:

 Mac OSX 10.4


Seems this is a known problem of Mac. Try to google vm_allocate failed.

Regards,
Qingqing



Re: [GENERAL] psql(18967) malloc: *** vm_allocate(size=8421376) failed (error

2006-01-11 Thread Qingqing Zhou


On Wed, 11 Jan 2006, Ari Kahn wrote:

 Could you be more specific. It seems that lots of people get this
 error message on many platforms.

http://groups.google.ca/groups?hl=en&q=vm_allocate%20failed&sa=N&tab=wg

I checked the first several pages. You will see that most of them point to
Mac or NeXT.

Regards,
Qingqing



Re: [GENERAL] contrib library in windows

2006-01-06 Thread Qingqing Zhou

SunWuKung [EMAIL PROTECTED] wrote

 Does the contrib library exist for Windows?

If you are compiling from source tar ball, it is in pgsql\contrib.

Regards,
Qingqing 





Re: [GENERAL] More atomic online backup

2006-01-06 Thread Qingqing Zhou


On Fri, 6 Jan 2006, Bruno Almeida do Lago wrote:

 Hi,

 1) I've a pg cluster with 5 databases here. I was wondering if it's possible
 to make an online backup of 1 database only, not the entire cluster.


Check out the pg_dump tool, or the documentation for other options.

 2) Online backups at Oracle are done per tablespace. Do you see any
 advantage on this?


AFAIK, in Oracle a tablespace is a set of (logically related) files
belonging to one database, so a tablespace backup is a natural option.
PostgreSQL can have several databases sharing one tablespace, which makes
"back up a tablespace" logically hard to define, and we don't support
tablespace-level backup yet.

Regards,
Qingqing



Re: [GENERAL] pg_dump throws no buffer space available error

2006-01-06 Thread Qingqing Zhou

Vishal Dixit [EMAIL PROTECTED] wrote

 We are running postgres version 8.0 on windows server 2003. On
 doing a pg_dump of a large database the following error occurs:

 pg_dump: could not receive data from server: No buffer space
 available

 There is one table in the database containing bytea type column,
 this error comes as we add more entries to this column. Please
 share your thoughts.


It is a socket error WSAENOBUFS. According to Microsoft:

No buffer space available. -- An operation on a socket could not be 
performed because the system lacked sufficient buffer space or because a 
queue was full.

I googled a little; tweaking some registry values can reportedly alleviate
it, but I am not sure.

/*
For pg-hackers:
http://support.microsoft.com/kb/q201213/

Has anyone reported a similar thing on Unix?
*/

Regards,
Qingqing 





Re: [GENERAL] Hardware recommendation for PostgreSQL on Windows?

2006-01-05 Thread Qingqing Zhou

Bjørn T Johansen [EMAIL PROTECTED] wrote
I am planning to make a small Windows application and need a nice
database I am used to using PostgreSQL
 under Linux and I am thinking about using this under Windows but how much 
 resources does it use under Windows?
 The server will be running on the workstation along with the Windows 
 application, so it should be lite...
 And I know that maybe I should be using something else, like SQLite or 
 something but I have never used it and
 I don't want to spend too much time learning how to use other databases... 
 And I am also going to use Delphi,
 so I am not sure how well any other db is supported?



I am not sure exactly how you want to use PostgreSQL -- inside a client
application, or as a backend database server?
Do you need (1) multiple connections, (2) transactions, (3) SQL?

Regards,
Qingqing






Re: [GENERAL] build Postgresql on Solaris 64 bits

2006-01-03 Thread Qingqing Zhou

FERREIRA, William (GFI INFORMATIQUE) [EMAIL PROTECTED] wrote

 i'm trying to install Postgresql on Solaris.
 Looking at the generated code, i saw that it was in 32 bits.
 I would like to compile Postgresql in 64 bits but i didn't find any post 
 in the archives about this.


Check out this thread (the problem mentioned in the thread is fixed 
recently):
http://archives.postgresql.org/pgsql-bugs/2005-12/msg00144.php

Regards,
Qingqing 





Re: [GENERAL] pg_reload_conf() does not unset archive_command

2006-01-01 Thread Qingqing Zhou

Andrus Moor [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]

 1. Make sure that postgres.conf file contains in first line:

 archive_command = 'copy %p c:/arhiiv/%f'

 2. Edit postgres.conf file by adding # before this line

 #archive_command = 'copy %p c:/arhiiv/%f'

 3. Run SELECT pg_reload_conf() in PgAdmin

This is a known problem - pg_reload_conf() ignores lines starting with #
(comments), so the old value is kept.

Regards,
Qingqing 





[GENERAL] SET SESSION TRANSACTION command

2006-01-01 Thread Qingqing Zhou
test=# SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET
test=# show transaction_isolation;
 transaction_isolation
---
 read committed
(1 row)

Is this a bug by any chance, or do we do it intentionally?

Regards,
Qingqing 





Re: [GENERAL] SET SESSION TRANSACTION command

2006-01-01 Thread Qingqing Zhou


On Sun, 1 Jan 2006, Tom Lane wrote:

 Qingqing Zhou [EMAIL PROTECTED] writes:
  test=# SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;

 SESSION is taken as a noise word here.  Possibly we should disallow this
 syntax altogether, since it isn't SQL-spec ... but I'm not sure it's
 worth contorting the grammar enough to do that.


After a few tests, other GUC variables work fine with SET SESSION. How
about we try to support it? There are 3 levels (not counting LOCAL vs.
SESSION, which I don't understand) of setting isoLevel; listed by their
priority:

isoLevel[0]: transaction level: BEGIN TRANSACTION ISOLATION
isoLevel[1]: session level: SET SESSION TRANSACTION
isoLevel[2]: global level: default_transaction_isolation in postgres.conf
or by SET;

So the algorithm for setting XactIsoLevel at the beginning of a transaction
would be:

/* StartTransaction() */
for (i = 2; i >= 0; i--)
    if (isoLevel[i] has a value)
        XactIsoLevel = isoLevel[i];

Or is there a reason we can't do this (meaning it would break transaction
semantics or something)?

Regards,
Qingqing



Re: [GENERAL] how to add a new column to system table

2005-12-29 Thread Qingqing Zhou

xiapw [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
I wanted to add a new column to a system table (pg_database), but failed.
Who knows how to do this, and can you tell me?

A recent change to pg_database (adding a new column) happened on

--
Sun Jul 31 17:19:21 2005 UTC (4 months, 4 weeks ago) by tgl

Log Message:
---
Add per-user and per-database connection limit options.
This patch also includes preliminary update of pg_dumpall for roles.
Petr Jelinek, with review by Bruce Momjian and Tom Lane.
--

Try to find out related changes on pgsql.committers.

Regards,
Qingqing 





Re: [GENERAL] I want to know how to improve the security of postgresql

2005-12-28 Thread Qingqing Zhou

xiapw [EMAIL PROTECTED] wrote
 Hi guys,can you give me some advices about how to improve the security of 
 postgresql?
 Now I major in the security of postgresql and the destination is create a 
 database with security level of B1(TCSEC),what should I do now,what 
 program language should I use?
 Thanks!
 Sting

PostgreSQL has a very good reputation for security in real-world
applications, but frankly it is at most at the C2 level according to the
TCSEC standards (in-core support for auditing is missing). So to support B1
there is some non-trivial work ahead, I am afraid :-( I would suggest you
have a look at the TCSEC standards and list the *exact* functionality that we
don't provide so far.

Regards,
Qingqing 





Re: [GENERAL] newbie : setting access for users in a web enviroment

2005-12-23 Thread Qingqing Zhou

robert mena [EMAIL PROTECTED] wrote

 How can I specify that the user testadm can perform those actions to this
 database?

Use the GRANT command (the opposite is REVOKE). I suppose you are using 8.1:

http://www.postgresql.org/docs/8.1/static/sql-grant.html
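A minimal sketch (the schema, table and user names are made up):

GRANT USAGE ON SCHEMA public TO testadm;
GRANT SELECT, INSERT, UPDATE, DELETE ON public.orders TO testadm;
REVOKE DELETE ON public.orders FROM testadm;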

Regards,
Qingqing 





Re: [GENERAL] Funky template1 problem?

2005-12-21 Thread Qingqing Zhou

Jerry LeVan [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Hi,

 I have a bunch of small dbs (levan, pictures, and a couple of others).

 I have had these db's over many upgrades and I have just noticed that
 template1 seems to be a bit munged...

 testuser= \c template1
 You are now connected to database template1.
 template1= \d
  List of relations
 Schema |  Name  |   Type   | Owner
 ++--+---
 public | lo_shadow  | table| levan
 public | my_description | table| levan
 public | person | table| levan
 public | person_id_seq  | sequence | levan
 public | pics   | table| levan
 public | pics_ident_seq | sequence | levan
 (6 rows)


Can you show the relfilenode (the physical file name of the relation) of 
these tables like this:

select relname, relfilenode from pg_class, pg_namespace where relnamespace = 
pg_namespace.oid and nspname = 'public';

And check whether they are really in template1?

Regards,
Qingqing 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Funky template1 problem?

2005-12-21 Thread Qingqing Zhou


On Wed, 21 Dec 2005, Tom Lane wrote:

  Jerry LeVan [EMAIL PROTECTED] wrote in message
  I have had these db's over many upgrades and I have just noticed that
  template1 seems to be a bit munged...

 I'm sure they are.

Oh right -- since CREATE DATABASE just copies the whole directory.

Regards,
Qingqing

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Migration tool from MS SQL server 7.0

2005-12-16 Thread Qingqing Zhou

dfx [EMAIL PROTECTED] wrote
 Dear Sirs,
 does there exist some tool to convert a SQL script generated from MS SQL 7
 to syntax compatible with Postgresql v. 8.x.x?

In general, I bet not - since some functionality in SQL Server is different 
from ours, say TableSpace.

Regards,
Qingqing 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] keeping track of inserts (periodical) into a table

2005-12-03 Thread Qingqing Zhou

Assad Jarrahian [EMAIL PROTECTED] wrote
 Hi all,
   I have a design decision to make. I am using Lucene (Java Search
 API) and I need to update Lucene's index from time to time.
    Is there a way one can keep track of all the inserts coming
 into a table (holding onto their primary keys is sufficient) and
 then resetting that after a certain time period?


Check out the example here:

http://www.postgresql.org/docs/8.1/static/plpgsql-trigger.html
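
A minimal sketch along those lines (the table and column names are only
assumptions about your schema, and plpgsql must be installed):

-- remember the primary keys of newly inserted rows
CREATE TABLE pending_keys (id integer);

CREATE FUNCTION remember_insert() RETURNS trigger AS '
BEGIN
    INSERT INTO pending_keys VALUES (NEW.id);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER remember_insert_trig AFTER INSERT ON documents
    FOR EACH ROW EXECUTE PROCEDURE remember_insert();

-- after updating the Lucene index, reset the list
TRUNCATE pending_keys;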

Regards,
Qingqing 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] user_write_lock_oid questions

2005-12-02 Thread Qingqing Zhou

Marian Naghen [EMAIL PROTECTED] wrote

 1. if a user sets a write lock and then, for some reason (bad
 connections, app crashes, etc), disconnects from the server without releasing
 it, what happens to the lock? Will it still be held?

When the connection is lost, the server will release all of its resources, 
including locks.

 2. is there a way to set/obtain information about the user who sets a
 lock?

Check out pg_stat_get_backend_userid(integer backendid) function.
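
For example, a sketch that maps the current backends to their user ids, which
you can then join against pg_locks on the pid column (this assumes the
statistics collector is enabled):

SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
       pg_stat_get_backend_userid(s.backendid) AS userid
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;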

Regards,
Qingqing 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] invalid page header in block 597621 of relation...error

2005-11-24 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] wrote

 At this point I think there's no question that your filesystem is
 dropping blocks :-(.

It is very interesting to follow this thread. But at this point, can you 
explain a bit more why there is no question that it is the file system's fault?

Thanks,
Qingqing 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Difference in indexes

2005-11-21 Thread Qingqing Zhou

A.j. Langereis [EMAIL PROTECTED] wrote

 What is happening here? What am I overlooking? The length does not
 seem to be the problem: 'FooFooFoo' also uses the index..
 Also, whether there are results or not does not seem to
 influence the planner..


Check out this thread:

http://archives.postgresql.org/pgsql-bugs/2005-11/msg00032.php

Regards,
Qingqing 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Conditional delete

2005-11-21 Thread Qingqing Zhou



On Mon, 21 Nov 2005, Bartosz Jakubiak wrote:

 Hi.

 I'm new with PostgreSQL, but this thing doesn't allow me to sleep:

 I wonder if it is possible to execute SQL query which:
 1. checks out if table exists, and if it is:
 2. deletes it
 All of it at SQL query level, preferrably in one transaction.

 Something like (MSSQL):
 IF EXISTS (SELECT * FROM sysobjects WHERE id =
 object_id(n'properties_branches') AND objectproperty(id, n'isusertable')
 = 1)
 DROP TABLE properties_branches

 or (MySQL)
 DROP TABLE IF EXISTS properties_branches;


This looks quite like rewriting a script auto-generated from SQL Server :-)
Yes, you can do that by querying the system catalog to see if a table
exists (pg_class). However, notice there is a possible race condition
here: even if you find that the table exists, it may already have been
deleted by someone else by the time you try to delete it.
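
A sketch of the existence test (schema and table names as in your example):

SELECT 1
FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relname = 'properties_branches'
  AND c.relkind = 'r';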

Regards,
Qingqing



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Difference in indexes

2005-11-21 Thread Qingqing Zhou

A.j. Langereis [EMAIL PROTECTED] wrote

 Bitmap Heap Scan on hosts  (cost=2.07..11.34 rows=21 width=59) (actual
 time=0.175..0.287 rows=21 loops=1)
   Recheck Cond: ((hostname)::text = 'Fabian'::text)
   -  Bitmap Index Scan on hosts_hostname  (cost=0.00..2.07 rows=21
 width=0) (actual time=0.145..0.145 rows=21 loops=1)
 Index Cond: ((hostname)::text = 'Fabian'::text)
 Total runtime: 0.510 ms

 This result was achieved by setting enable_seqscan to off 
 (postgresql.conf).
 Turning off enable_bitmapscan as well resulted in an index scan which was
 even faster:

 Index Scan using hosts_hostname on hosts  (cost=0.00..37.28 rows=21
 width=59) (actual time=0.068..0.281 rows=21 loops=1)
   Index Cond: ((hostname)::text = 'Fabian'::text)
 Total runtime: 0.492 ms


If you compare the differences among the *estimated* costs (cost=0.00 ..):

seqscan: cost=0.00..10.25
Bitmap: cost=2.07..11.34
indexscan: cost=0.00..37.28

Then you will see why the optimizer prefers the sequential scan. Yes, in your 
case, the *real* cost (actual time=...) is quite different from the estimated 
cost. That's because the optimizer can't collect enough information about the 
environment at execution time. For example, the optimizer does not know whether 
a data page is already in the buffer (which determines whether an IO cost is 
incurred), and it always assumes it is not. There is a long story about why the 
optimizer does this. In short, since PG uses a small buffer pool and the 
optimizer matters mainly for big tables, this assumption is reasonable -- but 
for small tables it may not be that good.
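
If your data is mostly cached, you can also tell the optimizer so instead of
disabling scan types; a sketch (the values are only illustrative, not
recommendations for your system):

SET effective_cache_size = 10000;   -- in 8 KB pages
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM hosts WHERE hostname = 'Fabian';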

Regards,
Qingqing


 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Win32 libpq and ecpg thread safety

2005-10-26 Thread Qingqing Zhou
Hi,

Is there a paragraph in the documentation talking about this? I've tried to 
look for it, but can't find it. Can anybody point me to it?

Thanks,
Qingqing 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Win32 libpq and ecpg thread safety

2005-10-26 Thread Qingqing Zhou



On Wed, 26 Oct 2005, Bruce Momjian wrote:

 If you are asking about Win32-specific threading, there is no
 documentation about it because it works just like Unix threading.  At
 least, that is what I am told.


So both libpq and ecpg are thread-safe on Win32? I looked into the code, and
it seems at least ecpg is not, and there are some possible improvements to
libpq (esp. PTHREAD_MUTEX_INITIALIZER).

Regards,
Qingqing

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] copy data between database

2005-10-10 Thread Qingqing Zhou

Scott Marlowe [EMAIL PROTECTED] wrote

 Under no circumstances should you do this with a database that has any
 data in it that you value.  pg_dump / pg_restore / psql are the
 preferred way of doing this.


Oh yeah, sorry for the misleading information. My method is dangerous and 
cannot cover all cases (say you have an externally stored column, etc.).
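
For the record, a minimal sketch of the pg_dump/psql route for a partial
copy (table, column and database names are only examples):

d1=# CREATE TABLE a_part AS SELECT * FROM a WHERE i < 100;
$ pg_dump -t a_part d1 | psql d2
d2=# INSERT INTO a SELECT * FROM a_part;
d2=# DROP TABLE a_part;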

Sorry about it,
Qingqing 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] copy data between database

2005-10-07 Thread Qingqing Zhou

[EMAIL PROTECTED] wrote
 Hi, I have a table A in both databases d1 and d2. I would like to copy
 data from A in d1 to A in d2. How can I do it? I do not want to copy all
 the data, just some part of A.



Here is a way, if you feel like trying it. The basic idea is to create a table 
in d1 (say its OS file name is xxx) holding the part of A's data that you are 
interested in, and to create another, empty table of the same structure in d2 
with OS file name yyy. Then shut down the database normally and replace yyy 
with xxx. Restart the database and you will see the data in d2. Note that you 
must make sure the two databases d1 and d2 were initialized with the same 
parameters.
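
To find the OS file name of the part-of-A table you created, a sketch (the
table name is only an example):

d1=# SELECT relfilenode FROM pg_class WHERE relname = 'a_part';

The relfilenode value is the file name under the database's directory that
you would copy over to d2.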

Regards,
Qingqing 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] could not send data to server: Operation not supported

2005-09-26 Thread Qingqing Zhou

ruben [EMAIL PROTECTED] wrote

 A PHP script that is running every 30 minutes for three years has crashed 
 when updating a Postgres database table with this error message:

 pg_exec() query failed:  could not send data to server: Operation not 
 supported

 I just ran it again and it worked fine. I could not find a single page 
 about this error in Google. This is the output of the logfile:

 LOG:  server process (PID 7069) was terminated by signal 11


It is a segmentation fault in a Postgres process. You can compile another PG 
in exactly the same environment with --enable-debug support (if this build does 
not have it) and trace the core dump (if you got one). Then you can get a 
backtrace (bt) of the core dump frames and post it here.
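
A sketch of getting the backtrace, assuming the core file is named "core" and
the server binary lives in the usual place (both may differ on your system):

$ gdb /usr/local/pgsql/bin/postgres core
(gdb) bt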

Regards,
Qingqing



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] [SQL] add column if doesn't exist (fwd)

2005-09-26 Thread Qingqing Zhou

Brandon Metcalf [EMAIL PROTECTED] wrote

 Is there a way to check for the existence of a column in a table other
 than, say, doing a SELECT on that column name and checking the output?


Take a look at the pg_attribute system table: 
http://www.postgresql.org/docs/8.0/static/catalog-pg-attribute.html
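
A sketch of the lookup (table and column names are only examples):

SELECT 1
FROM pg_attribute a JOIN pg_class c ON c.oid = a.attrelid
WHERE c.relname = 'mytable'
  AND a.attname = 'mycol'
  AND a.attnum > 0
  AND NOT a.attisdropped;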

Regards,
Qingqing 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Can not get ODBC to log. Trace is not working.

2005-09-26 Thread Qingqing Zhou

Roy Souther [EMAIL PROTECTED] wrote
I am trying to get the ODBC client to log but I just can't. I have
included my odbc.ini and odbcinst.ini. Can anyone tell me why there is
never a log file created? I connect to the database with isql and
OpenOffice, run queries and get data but never do I ever get a log file.

Take a look here: http://support.microsoft.com/kb/q274551/

Regards,
Qingqing




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] GiST access is not concurrent

2005-08-31 Thread Qingqing Zhou
John Surnow [EMAIL PROTECTED] wrote
 Does this mean that read access is not concurrent, or write access, or 
 both?

For the old GiST, you can have several read-*only* connections at the same 
time, but write/write or read/write access is unsafe. GiST in 8.1 is already 
concurrency-safe.

Regards,
Qingqing 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How to set an expiration date for a WHOLE user account

2005-06-16 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] writes:

 He's right, you're not: that check is only applied in the
 password-based authorization path.

 This has always seemed a bit bogus to me too --- would not object to
 a well-thought-out patch to change it.


If we add a VALID UNTIL field in pg_hba.conf, would it then work for all
authorization paths?

Regards,
Qingqing



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PG 8.0.1 is getting slow in 24 hours. Only daily VACUUM FULL helps

2005-06-14 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] writes

 Evidently that's not often enough.  Or it could be that your
 FSM settings aren't large enough?


Setting the various FSM parameters properly is onerous for maintenance. Is
there any plan to make the FSM buffered and file-backed, just like clog is now?
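
For reference, the parameters in question live in postgresql.conf; the values
below are only illustrative:

max_fsm_pages = 200000        # how many pages with free space to remember
max_fsm_relations = 1000      # how many tables and indexes to track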

Regards,
Qingqing



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] (NONE)

2005-06-14 Thread Qingqing Zhou

jeremy ` [EMAIL PROTECTED] writes
 get some screenshots, and I got this message:
 Internal account lookup failure: No mapping between account names and
 security IDs was done.



This was asked recently. Search for Internal account lookup failure in the list archives.

Regards,
Qingqing



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] How to set an expiration date for a WHOLE user account

2005-06-14 Thread Qingqing Zhou

Együd Csaba [EMAIL PROTECTED] writes

 I wonder how I can disable a user (without dropping, of course). Is there a
 way to set an expiration date to prevent logins after that date?

 I know the VALID UNTIL clause of CREATE USER command, but it is about the
 password only.
 I think something similar but regarding the whole user account.


It is not about the password only. Once the current date is beyond the valid
date you set, the user can no longer be authorized.
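
For illustration (the role name and dates are only examples):

ALTER USER csaba VALID UNTIL '2006-01-01';   -- expires at the start of 2006
ALTER USER csaba VALID UNTIL 'infinity';     -- re-enable later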

Regards,
Qingqing



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Is SELECT INTO affected by default_tablespace?

2005-05-24 Thread Qingqing Zhou
According to my test, it is not:

test=# set default_tablespace = testsp;
SET
test=# create table a (i int);
CREATE TABLE
test=# select * into b from a;
SELECT
test=# \d+ b
  Table public.b
 Column |  Type   | Modifiers | Description
+-+---+-
 i  | integer |   |
Has OIDs: yes

test=# \d+ a
  Table public.a
 Column |  Type   | Modifiers | Description
+-+---+-
 i  | integer |   |
Has OIDs: yes
Tablespace: testsp


If so, shall we fix it?

Regards,
Qingqing



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] getting lock information

2005-05-24 Thread Qingqing Zhou
Himanshu Baweja [EMAIL PROTECTED] writes
I wanted to see which tables/transactions have acquired or are waiting for
which locks.
As far as I know there are two ways to do it:

1) pg_locks ::: need to write trigger... and have high overhead...


select * from pg_locks has trivial impact on the server. pg_locks is a
view

test=# \d pg_locks;
View pg_catalog.pg_locks
   Column|  Type   | Modifiers
-+-+---
 relation| oid |
 database| oid |
 transaction | xid |
 pid | integer |
 mode| text|
 granted | boolean |
View definition:
 SELECT l.relation, l.database, l.transaction, l.pid, l.mode, l.granted
   FROM pg_lock_status() l(relation oid, database oid, transaction xid,
        pid integer, mode text, granted boolean);

and it retrieves data from the server's internal lock data structures via the
pg_lock_status() function.
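
For example, to see which relations in the current database are locked and by
which backend, a sketch:

SELECT c.relname, l.pid, l.mode, l.granted
FROM pg_locks l JOIN pg_class c ON c.oid = l.relation
WHERE l.relation IS NOT NULL;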

What do you mean by write trigger?

2) trace_locks,trace_lwlocks ... etc etc ::: well for this my server
says undefined parameters... so as far as i know i need to rebuild the
server with some flag...


Yes, trace_locks is used when LOCK_DEBUG is defined, so you have to rebuild
your server.

Regards,
Qingqing





---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Database monitor (again)

2005-03-31 Thread Qingqing Zhou

Edson Vilhena de Carvalho [EMAIL PROTECTED] writes
 Sorry but perhaps it is a database monitorizer that
 makes the monitorization on the databases.
 It's my english

Ok, don't worry about your English. Try to find answers to your questions in
your language here:

http://www.postgresql.org/docs/faq/

Regards,
Qingqing



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] truncate/create slowness

2005-03-31 Thread Qingqing Zhou

Joe Maldonado [EMAIL PROTECTED] writes
 I suspect that pg_class has too many pages (49182 as below) and for
 some reason the above commands spend a lot of time updating it.
 vacuum/analyze on pg_class has not helped. Also, since the time taken
 for these commands is not consistently fast or slow, I'm puzzled.


Statistics (reltuples, relpages) in pg_class are not always fresh. If you
want to reduce the size of the table, run VACUUM with the FULL option. Also,
you may want to check that the time is not being spent on checkpoints or locks.
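
For example (vacuuming a system catalog requires superuser rights):

test=# VACUUM FULL VERBOSE pg_class;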

Regards,
Qingqing



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Postgres Processing Help !!!!

2005-03-31 Thread Qingqing Zhou

Carlos Roberto Chamorro Mostac [EMAIL PROTECTED] writes
 Hello to all, I have a problem with the use of temporary tables; maybe
 somebody has an idea. The application has to process 6,000 parent records,
 and processing each one requires processing N child records. Each parent is
 processed in its own session, creating temporary tables and dropping them
 when that session finishes before starting the next one. The first records
 are processed relatively fast, but the response time grows exponentially,
 and a process estimated at 2 hours ends up taking 14. What is more, the
 database, which started at about 600 MB, grows to 12 GB, and after a VACUUM
 and a REINDEX it returns to 600 MB. Does somebody have an idea why?


Here is why -
http://www.postgresql.org/docs/8.0/static/sql-vacuum.html

Regards,
Qingqing



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Triggers: using table's primary key value to update another field

2005-03-31 Thread Qingqing Zhou

Randall Perry [EMAIL PROTECTED] writes
 What I'm having trouble with is figuring out how to grab the primary key
 value of the current row. I tried tacking on a var with the pkey row name
to
 NEW, but that doesn't work (didn't think it would).

There is an example in PG docs.

http://www.postgresql.org/docs/8.0/static/trigger-example.html
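
A minimal sketch of the idea (the table, primary key and derived column names
are only assumptions, and plpgsql must be installed):

CREATE FUNCTION set_path() RETURNS trigger AS '
BEGIN
    NEW.path := ''/items/'' || NEW.id;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER set_path_trig BEFORE INSERT OR UPDATE ON items
    FOR EACH ROW EXECUTE PROCEDURE set_path();

In a BEFORE row trigger the NEW record already contains the primary key value
(including one filled in from a sequence default), so NEW.id is the value you
want.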

Regards,
Qingqing



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Debugging deadlocks

2005-03-27 Thread Qingqing Zhou

Michael Fuhr [EMAIL PROTECTED] writes
 To make sure the referenced key can't change until the transaction
 completes and the referencing row becomes visible to other transactions
 (or is rolled back) -- otherwise other transactions could change
 or delete the referenced key and not know they'd be breaking your
 referential integrity.  The current implementation supports only
 exclusive row-level locks (SELECT FOR UPDATE), but I think Alvaro
 might be working on shared row-level locks for a future release.


The code to process RI could be optimized a little bit. See the following
case:

user -1-
test=# begin;
BEGIN
test=# delete from a where i = 2;
DELETE 1

user -2-
test=# update a set i = 3 where i = 1;
ERROR:  update or delete on a violates foreign key
constraint b_i_fkey on b
DETAIL:  Key (i)=(1) is still referenced from table b.
test=# update a set i = 2 where i = 1;
/* User 2 will be blocked here */

Blocking user 2 is strange and not necessary? The server should first check
the where-clause (as it does in an ordinary UPDATE) and, if the check fails,
just report an error as it did for the former statement.

Regards,
Qingqing




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Debugging deadlocks

2005-03-27 Thread Qingqing Zhou

Stephan Szabo [EMAIL PROTECTED] writes

 Well, that's not the foreign key necessarily. I don't have a machine to
 test on at the moment (machine currently dead), but I think the same
 happens without a foreign key constraint due to the unique/primary key
 constraint on a.i.

I see. That's more reasonable - the executor will first wait to see whether
its own constraint is satisfied, then do the RI check.

Thanks,
Qingqing



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Parallel Query should be a top priority

2005-03-27 Thread Qingqing Zhou

Mike Mascari mascarm@mascari.com writes
 Consider parallel processing a single query should be moved out from
 under Miscellaneous on the TODO list and re-categorized as the formerly
 existent URGENT feature...


Yes, inter/intra-operation parallelism (PQO) could be an obvious winner in some
situations. For example, on a fairly idle multi-processor machine, running a
complex query could see a several-fold speedup.

However, in other cases, such as on a quite busy machine, the inter-query
parallelism we already have is good enough, especially for a system like PG
that uses MVCC concurrency control. I remember someone was working on the
parallel scan idea for PG; I believe that is more realistic and beneficial
than implementing the full set of PQO.

Regards,
Qingqing





---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Measuring Query Performance

2005-03-22 Thread Qingqing Zhou
Ed L. [EMAIL PROTECTED] writes
 How are others handling this problem?  Other ideas?


In your client, you can measure the completion time of the libpq/odbc/jdbc (or
whatever) call and send the interesting parts to your log database. Just make
sure the database under test and the log database are on separate machines, so
they won't interfere with each other.

Regards,
Qingqing



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Measuring Query Performance

2005-03-22 Thread Qingqing Zhou



Qingqing Zhou [EMAIL PROTECTED] writes
 Ed L. [EMAIL PROTECTED] writes
  How are others handling this problem?  Other ideas?
 


Sorry, I forgot to mention this project:
http://pqa.projects.postgresql.org/example.html

Regards,
Qingqing



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Tracking row updates

2005-03-21 Thread Qingqing Zhou

Alex Adriaanse [EMAIL PROTECTED] writes
 Applying this analogy to our database, wouldn't that require a
 table-level lock during a CVS-like commit (which would mean locking the
 table, getting the revision number, updating the row(s), and committing
 the transaction)?


You may have a look at how CVS works and decide. Note the difference is that
CVS manages both the files and the versions, while it seems you only manage
the versions?

Another way is to change the logic - try to submit the update as one step.
Think of the bank-transfer example we always use when describing the atomic
property of transactions, which has something in common with your case. We
fold the logic of subtracting some money and adding some money together, so
no matter how many concurrent transfers hit the target account, it is
guaranteed that there is no race condition, since the serializable property
assures that.
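
A sketch of what that folded update looks like (table and column names are
only examples):

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

Because each UPDATE computes the new balance relative to the current row
value, concurrent transfers on the same account cannot lose updates.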

Regards,
Qingqing









---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Tracking row updates

2005-03-20 Thread Qingqing Zhou

Alex Adriaanse [EMAIL PROTECTED] writes
 This seems to work, except there exists a race condition.  Consider the
 following series of events (in chronological order):

1. Initially, in the codes table there's a row with id=1, revision=1,
   and a row with id=2, revision=2
2. Client A begins a transaction, and updates the row with id=1,
   resulting in revision=3 for that row
3. Client B begins a transaction, and updates the row with id=2,
   resulting in revision=4 for that row
4. Client B commits the transaction
5. Client C (which has $lastrevision=2 in its local database)
   synchronizes with the database by doing SELECT * FROM codes WHERE
   revision  2; and retrieves client B's update to the row with
   id=2, revision=4 (it doesn't yet see the update from client A)
6. Client A commits the transaction
7. Some time later, Client C synchronizes with the database again.
   $lastrevision for its database is now 4, so doing SELECT * FROM
   codes WHERE revision  4; does not retrieve any rows.  So client C
   never sees client A's update to the row with id=1

 Essentially, the race condition occurs when the order of clients
 committing transactions (i.e. the updates becoming visible to other
 clients) differs from the order of clients generating sequence values.
 Do you guys have any suggestions on how to avoid this race condition, or
 maybe a more elegant way to synchronize the clients with the server?

In my understanding, you are doing something like what CVS does. Say you
don't check out a file and you make a revision based on the version you
currently see (say version 1); then when you want to commit, you will
probably receive a merge required notice, since in the meantime the file
may already have been updated by another user (to version 2) - he is free
to do so because nobody knows you might commit an update. To avoid this,
you have to check out the file, i.e., lock it to prevent other changes;
then you are free of any merge requirement. The cost is that you have
locked the file and nobody else can change it. So the only options are
merge or lock.

Regards,
Qingqing







---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org

