Hi,
I have a database with about 80 relations.
Deleting a user cascades through all of these tables.
This is very slow: deleting 20 users takes 4 hours, even with exclusive
access to the database and no other users connected.
OK, I know there must be a relation somewhere with a FK wi...
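The classic cause of slow cascaded deletes in PostgreSQL is a foreign key
whose referencing column has no index: the referenced (primary key) side is
indexed automatically, but the referencing side is not, so every cascaded
delete has to scan the child table. A minimal sketch, assuming a hypothetical
child table "orders" that references "users" with ON DELETE CASCADE:

    -- PostgreSQL does not create this index for you when the FK is declared:
    CREATE INDEX orders_user_id_idx ON orders (user_id);

With roughly 80 relations, even one child table missing such an index forces a
sequential scan per deleted user, which fits the runtime described above.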
"Craig James" <[EMAIL PROTECTED]> writes:
> Below is the explain/analyze output of the query from each database. Since
> both tables are indexed on the joined columns, I don't understand why the
> big table should be so much slower -- I hoped this would scale well, or at
> least O(log(N)), not O(N).
I have the same schema in two different databases. In "smalldb", the two
tables of interest have about 430,000 rows; in "bigdb", the two tables each
contain about 5.5 million rows. I'm processing the data, and for various
reasons it works out well to process it in 100,000-row chunks. However...
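One way to see where the time goes is to EXPLAIN ANALYZE a single chunk on
each database; the table and column names below are placeholders, not the
schema from the original post:

    EXPLAIN ANALYZE
    SELECT t1.id, t2.payload
    FROM   t1 JOIN t2 ON t2.id = t1.id
    WHERE  t1.id >= 100000 AND t1.id < 200000;

If the planner switches from index scans to sequential scans or a full hash
join as the tables grow, each 100,000-row chunk pays a cost proportional to
the whole table, which would produce exactly the O(N)-like behaviour
described above.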
On Fri, 29 Jun 2007, Ho Fat Tsang wrote:
I noticed that each time pgsql slows down, there is a short period during
which a process called "pdflush" eats up a lot of I/O. I've googled it and
learned that it is the Linux kernel's process for writing dirty pages back
to disk.
The pdflush documentation i
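These pdflush bursts frequently coincide with PostgreSQL checkpoints, when a
large batch of dirty shared buffers is written out at once and the kernel
then flushes the dirty pages to disk. One way to review the relevant server
settings from inside the database (pg_settings is a standard system view):

    SELECT name, setting
    FROM   pg_settings
    WHERE  name LIKE 'checkpoint%' OR name LIKE 'bgwriter%';

Spreading checkpoint and background-writer activity out, and lowering the
kernel's dirty-page thresholds, typically turns one large I/O spike into
several smaller ones.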
Vidhya Bondre wrote:
> Hi all,
>
> I need very urgent help from you all with the case below.
>
>I have a query
[snipped]
> after vacuuming the db it has become very, very slow ... 100 times slower.
>
> Please suggest?
Suggestions for getting more/better responses:
- Format your query ni
e = '1' AND
ca3.attribute = 'OPTIMIZE_TYPE') as ca ON c1.asset_id = ca.campaign_id AND
20070702 BETWEEN (c1.start_date - interval '1 day') AND
(c1.end_date + interval '1 day') AND
c1.status = 'A' AND c1.revenue_type != 'FOC' AND c1.action_type > ...
Hi,
I have found some discussions of this issue, but did not actually find an
answer.
Is there a way to make sure that certain indexes are always in memory? My
tests, moving them to a tablespace on a memory-based file system (ramfs),
showed a very significant performance gain. But a perspectiv...
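For reference, a sketch of the kind of test described above, with hypothetical
paths and index names; note that anything placed on ramfs is lost on a reboot
or crash, so such indexes would have to be rebuilt afterwards:

    -- Assumes a ramfs mount at /mnt/ramfs writable by the postgres user.
    CREATE TABLESPACE ram_ts LOCATION '/mnt/ramfs/pg';
    ALTER INDEX some_hot_index SET TABLESPACE ram_ts;

The more conventional answer is to give shared_buffers (and the OS page cache)
enough room that frequently used index pages stay cached on their own, without
a special tablespace.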