> -- other data
> );
>
> CREATE TABLE host_events (
> ip IP4 NOT NULL REFERENCES ip_info(ip),
> name VARCHAR,
> port INTEGER,
> -- other data
> );
It would probably help to have an index on that column for all three
tables; then I would wager using joins will be the speediest approach.
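As a sketch of that suggestion, assuming the ip column is the join key
everywhere (only ip_info and host_events appear in the quoted schema, so
host_other stands in for the unnamed third table):

```sql
-- Index the join column on each table. If ip_info(ip) is the
-- PRIMARY KEY it is already indexed implicitly.
CREATE INDEX host_events_ip_idx ON host_events (ip);
CREATE INDEX host_other_ip_idx  ON host_other (ip);  -- hypothetical third table
ANALYZE host_events;  -- refresh stats so the planner sees the new index
```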
...speeds, etc.
In general, you want as much RAM as you can afford for the project,
the more the better. I'd say 2-4GB is the minimum. And RAID-5 isn't
very good for database work in general, you'll get better performance
from RAID 1+0.
--
> ...has no effect on the size of shared memory allocated
> by PostgreSQL, nor does it reserve kernel disk cache; it is used
> only for estimation purposes."
This is a hint to the optimizer about how much of the database may
be in the OS level cache.
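For reference, that hint is set in postgresql.conf; the value below is
purely illustrative, not a recommendation:

```ini
# postgresql.conf -- planner hint only; allocates no memory itself.
# Roughly: shared_buffers plus the OS disk cache you expect to be
# available to PostgreSQL.
effective_cache_size = 131072   # in 8kB pages on 8.x, ~1GB
```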
---
...GB of data
you need to sort? If not I fear you may be causing the system
to swap by setting it this high.
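Worth remembering that this setting is per sort, per backend, so a
modest value is usually safer; an illustrative sketch:

```ini
# postgresql.conf -- each concurrent sort may use up to this much
# memory, so total usage can be sort_mem * active sorts, not a
# single global cap.
sort_mem = 8192      # in kB on 7.x (work_mem in 8.x), ~8MB per sort
```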
-
Frank Wiles <[EMAIL PROTECTED]>
http://www.wiles.org
-
---(end of broadcast)
Check out this page in the docs for more information:
http://www.postgresql.org/docs/8.1/static/kernel-resources.html
-
core, has been
able to for quite some time.
-----
What are the hard disk differences? Does the old server
have fast SCSI disk and the new box SATA? Or the old server was
on a RAID volume and the new one isn't, etc... those are the sort
of hardware differences that are important in this particular
case.
-----
tem for importing data.
> >>
> >>The import is a lot of "insert into" commands - it's a converted
> >>database from another system so I can't change it to copy commands.
Are you on the same hardware specifically in your disk subsystem?
Anyth
While pgpool can let you pool together multiple backend servers,
it also functions well as just a connection pooling device with
only one backend.
---------
-
On Tue, 17 Jan 2006 16:12:59 -0500
[EMAIL PROTECTED] wrote:
> In the mean time, I successfully use LIMIT and OFFSET without such an
> optimization, and things have been fine for me.
Same here.
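For illustration, the usual LIMIT/OFFSET paging pattern being discussed
(table and column names are hypothetical):

```sql
-- Page 3 of results, 25 rows per page; an index on (created_at)
-- keeps this reasonably cheap while OFFSET stays small.
SELECT id, title
  FROM articles
 ORDER BY created_at DESC
 LIMIT 25 OFFSET 50;
```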
-----
ever to build triggers
for a more complex situation.
Hope this helps!
-
e will help you understand how to set your configuration
values.
-----
seconds).
>
> Can this be tuned somehow???
Do you have an index on report.id_order? If not, try creating one
and run VACUUM ANALYZE on the table to see if that gets rid of the
sequential scan in the plan.
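A sketch of that suggestion, using the column named in the question
(the index name and the literal in the EXPLAIN are placeholders):

```sql
CREATE INDEX report_id_order_idx ON report (id_order);
VACUUM ANALYZE report;  -- refresh stats so the planner can pick the index
-- Then re-check the plan:
EXPLAIN SELECT * FROM report WHERE id_order = 123;
```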
--
ena/GeneralBits/Tidbits/perf.html
http://www.powerpostgresql.com/Docs
http://www.powerpostgresql.com/PerfList
Hope these help!
-----
from
RAID 5. IMHO RAID 5 is only useful on near-line backup servers or
Samba file servers where space is more important than speed.
-----
-
---
urations to see
which is optimal for your particular application and setup.
-
han with the 10k ones.
Just an FYI, but I've run both 10k and 15k rpm drives in PostgreSQL
servers and haven't experienced any "horror stories". They do run
hotter, but this shouldn't be a big issue in a decent case in a
typical server room environment.
-
X15's so that would seem likely to be the faster option.
I agree, the extra spindles and lower seek times are better if all
you are concerned about is raw speed.
However, that has to be balanced, from an overall perspective, with
the nice single point of ordering/contact/support/warranty.
. Doing this will get you some extra performance,
but will probably not entirely solve your problem.
---------
o 200
> with 256 Meg of ram. You notice slow queries real fast on such a box.
I know several people who use this development method. It can
sometimes lead to premature optimizations, but overall I think it is
a great way to work.
---------
always work out best, but for the majority of systems out there
complicating your schema, maxing your hardware, and THEN tuning
is IMHO the wrong approach.
-
---
On Wed, 9 Nov 2005 21:43:33 -0300
Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Frank Wiles wrote:
>
> > Obviously there are systems/datasets/quantities where this won't
> > always work out best, but for the majority of systems out there
> > complicatin
But I certainly
think there are other areas of PostgreSQL we should be focusing our
efforts.
-
Sometimes a full vacuum will not clear up as much space as it could.
Try increasing those configuration values and doing vacuums more
often.
You should also explore upgrading to the latest 8.0, as you will
no doubt see noticeable speed improvements.
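Those free-space-map settings live in postgresql.conf; the values below
are only illustrative starting points, not recommendations:

```ini
# postgresql.conf -- raise these if VACUUM VERBOSE reports more pages
# with free space than the map can track.
max_fsm_pages     = 200000   # total pages with free space to remember
max_fsm_relations = 1000     # total tables/indexes tracked
```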
--
nment
you would send the one query to any of the 10 servers and it would
return the proper results.
But like I said, this type of application is fairly trivial to write
in most scripting or higher level languages.
-
:(
>
> I need more speed in orders of magnitude. Will more disks / more
> memory do that trick?
More disk and more memory always helps out. Since you say these
queries are mostly on not-often-used data I would lean toward more
disks in your SCSI RAID-1 setup than maxing out available memory.
shared_buffers some, but
that alone most likely won't make up your speed difference.
-----
you can find it here:
http://www.revsys.com/writings/postgresql-performance.html
-
le, but
obviously not clearly enough. I'll work on rewording that section.
-
thing but a 99% read application.
I'll work this bit of wisdom in later tonight. Thanks again for the
feedback.
-
tions.
The article can be found here:
http://www.revsys.com/writings/postgresql-performance.html
Thanks!
-
significant impact on performance, no matter what card was
used?
I'm not sure you can give all the credit to the card on this one.
-
-
---
(cost=0.00..23390.55
> rows=342764
> width=8)
> (4 rows)
>
> The query is noticably slow (2 seconds) on a database with 150,000+
> records. How can I speed it up?
It's doing a sequential scan because you're not limiting the query in
, just that PostgreSQL had more languages to choose from.
-
PostgreSQL backend back in 2000-2001. We got
roughly a 200% speed improvement at that time and PostgreSQL has only
improved since then. I would be more than happy to elaborate on this
migration off list if you would like. kusports.com gets a TON of
hits, especially during "March Madness".
les and surprisingly enough it was faster to grep the flat
files than use the database because, as was previously mentioned, of
all the joins.
-
never seen a database perform
worse after it was normalized. In fact, I can't even think of a
situation where it could!
-
-
---
With PostgreSQL, what I usually
do is write a small script that does the major resource intensive
queries on the database and time it. Tweak a PostgreSQL parameter
and re-run, wash, rinse, repeat until I get what I believe is the
best performance I can.
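A minimal version of that loop, assuming psql and a file of your
heaviest statements (queries.sql is a placeholder name):

```sql
-- In psql: turn on per-statement timing, then run the workload.
\timing on
\i queries.sql
-- Or inspect a single hot query in detail, repeating after each
-- parameter tweak:
EXPLAIN ANALYZE SELECT ...;
```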
---
On Fri, 17 Dec 2004 23:37:37 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:
> Frank Wiles <[EMAIL PROTECTED]> writes:
> > I've also seen a huge difference between select count(*) and
> > select count(1) in older versions,
>
> That must have been before
processor. Counting isn't
processor intensive, but reading through the entire table on disk
is. I've also seen a huge difference between select count(*) and
select count(1) in older versions, haven't tried it on a recent
version however.
--
a few weeks and see where you're at. It is really
a matter of trial and error.
With my databases, I can afford to do VACUUM FULLs fairly often
so I typically don't need to increase my fsm values.
-
any way to fix this problem?
Try a VACUUM FULL; this will clean up unused space. You might also
want to adjust your free space map so that you don't have to do FULL
vacuums as often (or at all). It is controlled by max_fsm_pages
and max_fsm_relations.
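For reference, the commands involved (run the VERBOSE form first; its
final lines report whether the current FSM settings are large enough):

```sql
VACUUM FULL VERBOSE;   -- reclaims space but takes exclusive locks
-- Routine maintenance that only needs the FSM to be sized adequately:
VACUUM ANALYZE;
```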
-
rds) or I can get it working, but it
performs badly (PERC cards vs. say a Mylex card).
I think it's more of a system design issue ( wrt Linux use ) rather
than a quality issue.
-----
If you have a test machine, I would suggest moving the data to a
box without a RAID controller and see if you get better results.
-
olumns could be searched.
Also, you should consider increasing your shared_buffers and probably
your sort memory a touch as well. This will help your overall
performance.
-