Re: [HACKERS] Autovacuum in the backend

2005-06-16 Thread Hans-Jürgen Schönig

Gavin Sherry wrote:

On Thu, 16 Jun 2005, Hans-Jürgen Schönig wrote:



2) By no fault of its own, autovacuum's level of granularity is the table
level. For people dealing with non-trivial amounts of data (and we're not
talking gigabytes or terabytes here), this is a serious drawback. Vacuum
at peak times can cause very intense IO bursts -- even with the
enhancements in 8.0. I don't think the solution to the problem is to give
users the impression that it is solved and then vacuum their tables during
peak periods. I cannot stress this enough.
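
The 8.0 enhancements referred to here are presumably the cost-based vacuum delay settings; a minimal sketch of throttling a manual VACUUM with them (the table name "orders" is made up for illustration only):

  -- Spread vacuum's page accesses out over time to smooth the IO burst,
  -- at the cost of a longer run time for the vacuum itself.
  SET vacuum_cost_delay = 10;    -- ms to sleep once the cost limit is reached
  SET vacuum_cost_limit = 200;   -- accumulated page cost that triggers a sleep
  VACUUM ANALYZE orders;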



I completely agree with Gavin - integrating this kind of thing into the
background writer or integrating it with the FSM would be the ideal solution.

I guess everybody who has already vacuumed a 2 TB relation will agree
here. VACUUM is not a problem for small "my cat Minka" databases.
However, it has been a real problem on large, heavy-load databases. I
have even seen people split large tables and glue them back together with
a view to avoid long vacuums and long CREATE INDEX operations (I am not
joking - this is serious).
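
A rough sketch of that split-table workaround, with made-up table and view names, might look like this:

  -- All names (article_2004, article_2005, article_all) are hypothetical.
  CREATE TABLE article_2004 (id integer, mydate date, body text);
  CREATE TABLE article_2005 (id integer, mydate date, body text);

  -- Queries go through a view that glues the pieces back together, while
  -- VACUUM and CREATE INDEX can be run on each smaller piece separately.
  CREATE VIEW article_all AS
      SELECT * FROM article_2004
      UNION ALL
      SELECT * FROM article_2005;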



I think this gets away from my point a little. People with 2 TB tables can
take care of themselves, as can people who've taken the time to partition
their tables to speed up vacuum. I'm more concerned about the majority of
people who fall in the middle -- between the hobbyist and the high end
data centre.

Thanks,

Gavin



I think your approach will help all of them.
If we had some sort of autovacuum (which is packaged with most distros 
anyway - having it in the core is nice as well) and a mechanism to 
improve reallocation / vacuum speed, we would have solved all the problems.


I do think that people with 2 TB tables can take care of themselves. The 
question is, however, whether the database can do what they want ...


thanks a lot,

hans

--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/664/393 39 74
www.cybertec.at, www.postgresql.at




Re: [HACKERS] Optimizer bug??

2004-05-24 Thread Hans-Jürgen Schönig
Ismail Kizir wrote:
Hi everybody,
1) EXPLAIN SELECT COUNT(*) AS c FROM articletbl
   WHERE (mydate BETWEEN '2004-04-24' AND '2004-05-24');
2) EXPLAIN SELECT COUNT(*) AS c FROM articletbl
   WHERE (mydate = '2004-04-24');
(I ran VACUUM ANALYZE before running those)
mydate is an indexed date column.
The optimizer uses the index for the second query, but not for the first
one, where it decides to do a sequential scan.
Is this a bug?
Or can someone explain the reason to me?
Thanks in advance.
Ismail Kizir

If the rows between 2004-04-24 and 2004-05-24 make up, let's say, 90% of 
your data, PostgreSQL will figure out that it is cheaper to use a seq scan 
instead of an index.

This is not a bug at all - this is normal and desired behaviour ...
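
One way to see this is to compare the planner's row estimates for the two predicates against the column statistics that ANALYZE collected. A rough sketch, reusing the table and column names from the post:

  -- When the range predicate is estimated to match most of the table,
  -- a seq scan comes out cheaper than repeated index lookups.
  EXPLAIN SELECT count(*) FROM articletbl
  WHERE mydate BETWEEN '2004-04-24' AND '2004-05-24';

  EXPLAIN SELECT count(*) FROM articletbl
  WHERE mydate = '2004-04-24';

  -- The statistics the planner bases those estimates on:
  SELECT most_common_vals, histogram_bounds
  FROM pg_stats
  WHERE tablename = 'articletbl' AND attname = 'mydate';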
Regards,
Hans
--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/720/10 1234567 or +43/664/233 90 75
www.cybertec.at, www.postgresql.at, kernel.cybertec.at


Re: [HACKERS] ObjectWeb/Clustered JDBC

2003-11-28 Thread Hans-Jürgen Schönig
Peter Eisentraut wrote:
Hans-Jürgen Schönig writes:


Especially the disaster recovery mechanism and things such as adding new
masters need some more work.


Yes, someone is working on automatic recovery (which would extend to
adding new masters by starting recovery from zero).  In fact, they're just
across town from you (together.at).


Guess who taught them PostgreSQL ;).

	Hans

--
Cybertec Geschwinde u Schoenig
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/2952/30706 or +43/660/816 40 77
www.cybertec.at, www.postgresql.at, kernel.cybertec.at

