Re: [PERFORM] Need for speed 2

2005-09-20 Thread Alex Turner
I have found that while the OS may flush to the controller fast with
fsync=true, the controller does as it pleases (it has BBU, so I'm not
too worried), so you get great performance because your controller is
determining the read/write sequence outside of what is being demanded by
an fsync.

Alex Turner
NetEconomist

On 8/25/05, Kelly Burkhart <[EMAIL PROTECTED]> wrote:
On Thu, 2005-08-25 at 11:16 -0400, Ron wrote:
> ># - Settings -
> >
> >fsync = false   # turns forced synchronization on or off
> >#wal_sync_method = fsync# the default varies across platforms:
> > # fsync, fdatasync, open_sync, or
> 
> I hope you have a battery backed write buffer!

Battery backed write buffer will do nothing here, because the OS is
taking its sweet time flushing to the controller's battery backed write
buffer!

Isn't the reason for battery backed controller cache to make fsync()s
fast?

-K

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] Need for speed 3

2005-09-05 Thread Nicholas E. Wakefield
Ulrich,

Luke cc'd me on his reply and you definitely should have a look at
Bizgres Clickstream. Even if the whole stack doesn't match your needs
(though it sounds like it would), the clickstream-focused ETL and Bizgres
enhancements could make your life a little easier.

Basically the stack components that you might want to look at first are:

Bizgres flavor of PostgreSQL - Enhanced for business intelligence and
data warehousing - the www.bizgres.com website can speak to this in more
detail.
Clickstream Data Model - Pageview fact table surrounded by various
dimensions and 2 core staging tables for the cleansed weblog data.
ETL Platform - Contains a weblog sessionizer, cleanser and ETL
transformations, which can handle 2-3 million hits without any trouble,
with native support for the COPY command for even greater performance.
JasperReports - For pixel-perfect reporting.

Sorry for sounding like I'm in marketing or sales; I'm not.

A couple of key features might interest you, considering your email.
The weblog parsing component allows for relatively complex cleansing,
allowing less data to be written to the DB and therefore increasing
throughput. In addition, if you run every 5 minutes there would be no
need to truncate the day's data and reload; the ETL knows how to connect
the data with what came before. The COPY enhancement to PostgreSQL found
in Bizgres makes a noticeable improvement when loading data.
The schema is basically as follows (a rough SQL sketch is included below):

Dimension tables Session, Known Party (If cookies are logged), Page, IP
Address, Date, Time, Referrer, Referrer Page.
Fact tables: Pageview, Hit Subset (Not everyone wants all hits).

Staging Tables: Hits (Cleansed hits or just pageviews without surrogate
keys), Session (Session data gathered while parsing the log).
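
For concreteness, a stripped-down SQL sketch of what such a star schema
might look like. The table and column names here are illustrative guesses,
not the actual Bizgres Clickstream DDL:

-- Dimension tables (heavily simplified)
CREATE TABLE dim_page     (page_id     serial PRIMARY KEY, url        text);
CREATE TABLE dim_session  (session_id  serial PRIMARY KEY, started_at timestamp);
CREATE TABLE dim_referrer (referrer_id serial PRIMARY KEY, referrer   text);

-- Pageview fact table: one narrow row per pageview, with surrogate keys
-- pointing into the dimensions
CREATE TABLE fact_pageview (
    pageview_date date    NOT NULL,
    pageview_time time    NOT NULL,
    page_id       integer NOT NULL REFERENCES dim_page,
    session_id    integer NOT NULL REFERENCES dim_session,
    referrer_id   integer REFERENCES dim_referrer
);

-- Staging table for cleansed hits, loaded via COPY before surrogate keys
-- are assigned
CREATE TABLE stage_hits (
    ts         timestamp,
    ip         inet,
    url        text,
    referrer   text,
    user_agent text
);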

Regards

Nick


-Original Message-
From: Luke Lonergan [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 01, 2005 9:38 AM
To: Ulrich Wisser; pgsql-performance@postgresql.org
Cc: Nicholas E. Wakefield; Barry Klawans; Daria Hutchinson
Subject: Re: [PERFORM] Need for speed 3

Ulrich,

On 9/1/05 6:25 AM, "Ulrich Wisser" <[EMAIL PROTECTED]>
wrote:

> My application basically imports Apache log files into a Postgres
> database. Every row in the log file gets imported in one of three (raw
> data) tables. My columns are exactly as in the log file. The import is
> run approx. every five minutes. We import about two million rows a
> month.

Bizgres Clickstream does this job using an ETL (extract transform and
load) process to transform the weblogs into an optimized schema for
reporting.
 
> After every import the data from the current day is deleted from the 
> reporting table and recalculated from the raw data table.

This is something the optimized ETL in Bizgres Clickstream also does
well.
 
> What do you think of this approach? Are there better ways to do it? Is
> there some literature you recommend reading?

I recommend the Bizgres Clickstream docs; you can get them from Bizgres
CVS, and there will shortly be a live HTML link on the website.

Bizgres is free - it also improves COPY performance by almost 2x, among
other enhancements.

- Luke 




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Need for speed 3

2005-09-01 Thread Luke Lonergan
Ulrich,

On 9/1/05 6:25 AM, "Ulrich Wisser" <[EMAIL PROTECTED]> wrote:

> My application basically imports Apache log files into a Postgres
> database. Every row in the log file gets imported in one of three (raw
> data) tables. My columns are exactly as in the log file. The import is
> run approx. every five minutes. We import about two million rows a month.

Bizgres Clickstream does this job using an ETL (extract transform and load)
process to transform the weblogs into an optimized schema for reporting.
 
> After every import the data from the current day is deleted from the
> reporting table and recalculated from the raw data table.

This is something the optimized ETL in Bizgres Clickstream also does well.
 
> What do you think of this approach? Are there better ways to do it? Is
> there some literature you recommend reading?

I recommend the Bizgres Clickstream docs; you can get them from Bizgres CVS,
and there will shortly be a live HTML link on the website.

Bizgres is free - it also improves COPY performance by almost 2x, among
other enhancements.

- Luke 



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Need for speed 3

2005-09-01 Thread Merlin Moncure
> Hi Merlin,
> > Just a thought: have you considered having apache logs write to a
> > process that immediately makes insert query(s) to postgresql?
> 
> Yes we have considered that, but dismissed the idea very soon. We need
> Apache to be as responsive as possible. It's a two server setup with
> load balancer and failover. Serving about one thousand domains and
> counting. It needs to be as failsafe as possible and under no
> circumstances can any request be lost. (The click counting is core
> business and relates directly to our income.)
> That said it seemed quite safe to let Apache write logfiles. And import
> them later. By that a database downtime wouldn't be mission critical.

Hm.  Well, it may be possible to do this in a fast and safe way. I
understand your reservations here, but I'm going to spout off my opinion
anyways :).

If you are not doing this, the following point is moot.  But take into
consideration that you could set a very low transaction timeout (like .25
seconds) and siphon log entries off to a text file if your database
server gets in trouble.  2 million hits a month is not very high even if
your traffic is bursty (there are approx 2.5 million seconds in a
month).

With a directly linked log file you always get up-to-date stats and spare
yourself the dump/load song and dance, which is always a headache :(.
Also, however you are doing your billing, it will be easier to manage
if everything is extracted from pg and not some conglomeration of log
files, *if* you can put 100% faith in your database.  When it comes to
pg now, I'm a believer.

> > You could write a small C program which executes advanced query
> > interface calls to the server.
> 
> How would that improve performance?

The functions I'm talking about are PQexecParams and PQexecPrepared.
The query string does not need to be encoded or decoded, which is very
light on server resources and gives very low latency.  Using them you
could probably get 5000 inserts/sec on a cheap server if you have some
type of write caching in place, with low cpu load.
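
At the SQL level, the prepared-statement half of that is roughly
equivalent to PREPARE/EXECUTE; the sketch below (table and column names
are invented) shows the idea, though PQexecParams/PQexecPrepared
additionally pass the parameter values outside the query string, so
nothing has to be escaped:

-- Hypothetical raw log table for the example
CREATE TABLE raw_clicks (ts timestamp, ip inet, url text);

-- Parsed and planned once, then executed many times with new parameters
PREPARE add_click (timestamp, inet, text) AS
    INSERT INTO raw_clicks (ts, ip, url) VALUES ($1, $2, $3);

EXECUTE add_click ('2005-09-01 12:00:00', '10.0.0.1', '/index.html');
EXECUTE add_click ('2005-09-01 12:00:01', '10.0.0.2', '/ad.gif');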

Merlin



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Need for speed 3

2005-09-01 Thread Ulrich Wisser

Hi Merlin,

> schemas would be helpful.


right now I would like to know if my approach to the problem makes 
sense. Or if I should rework the whole procedure of import and aggregate.



> Just a thought: have you considered having apache logs write to a
> process that immediately makes insert query(s) to postgresql?


Yes we have considered that, but dismissed the idea very soon. We need 
Apache to be as responsive as possible. It's a two server setup with 
load balancer and failover. Serving about one thousand domains and 
counting. It needs to be as failsafe as possible and under no 
circumstances can any request be lost. (The click counting is core 
business and relates directly to our income.)
That said it seemed quite safe to let Apache write logfiles. And import 
them later. By that a database downtime wouldn't be mission critical.




> You could write a small C program which executes advanced query
> interface calls to the server.


How would that improve performance?

Ulrich

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Need for speed 3

2005-09-01 Thread Merlin Moncure

Ulrich wrote:
> Hi again,
> 
> first I want to say ***THANK YOU*** to everyone who kindly shared their
> thoughts on my hardware problems. I really appreciate it. I started to
> look for a new server and I am quite sure we'll get a serious hardware
> "update". As suggested by some people I would like now to look closer at
> possible algorithmic improvements.
> 
> My application basically imports Apache log files into a Postgres
> database. Every row in the log file gets imported in one of three (raw
> data) tables. My columns are exactly as in the log file. The import is
> run approx. every five minutes. We import about two million rows a
> month.
> 
> Between 30 and 50 users are using the reporting at the same time.
> 
> Because reporting became so slow, I did create a reporting table. In
> that table data is aggregated by dropping time (date is preserved), ip,
> referer, user-agent. And although it breaks normalization some data from
> a master table is copied, so no joins are needed anymore.
> 
> After every import the data from the current day is deleted from the
> reporting table and recalculated from the raw data table.
> 

schemas would be helpful.  You may be able to tweak the import table a
bit and how it moves over to the data tables.

Just a thought: have you considered having apache logs write to a
process that immediately makes insert query(s) to postgresql? 

You could write a small C program which executes advanced query interface
calls to the server.

Merlin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] Need for speed 2

2005-08-25 Thread Kelly Burkhart
On Thu, 2005-08-25 at 11:16 -0400, Ron wrote:
> ># - Settings -
> >
> >fsync = false   # turns forced synchronization on or off
> >#wal_sync_method = fsync# the default varies across platforms:
> > # fsync, fdatasync, open_sync, or
> 
> I hope you have a battery backed write buffer!

Battery backed write buffer will do nothing here, because the OS is
taking its sweet time flushing to the controller's battery backed write
buffer!

Isn't the reason for battery backed controller cache to make fsync()s
fast?

-K

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] Need for speed 2

2005-08-25 Thread Merlin Moncure
> Putting pg_xlog on the IDE drives gave about 10% performance
> improvement. Would faster disks give more performance?
> 
> What my application does:
> 
> Every five minutes a new logfile will be imported. Depending on the
> source of the request it will be imported in one of three "raw click"
> tables. (data from two months back, to be able to verify customer
> complaints)
> For reporting I have a set of tables. These contain data from the last
> two years. My app deletes all entries from today and reinserts updated
> data calculated from the raw data tables.
> 
> The queries contain no joins, only aggregates. I have several indexes to
> speed different kinds of queries.
> 
> My problems occur when one user does a report that contains too much old
> data. In that case all cache mechanisms will fail and disc io is the
> limiting factor.

It seems like you are pushing the limit of what your server can handle.
This means either: 1. an expensive server upgrade, or
2. making the software more efficient.

Since you sound I/O bound, you can tackle 1. by a. adding more memory or
b. increasing i/o throughput.

Unfortunately, you already have a pretty decent server (for x86), so a.
means a 64 bit platform and b. means more expensive hard drives.  The
archives are full of information about this...

Is your data well normalized?  You can do tricks like:
if a table has fields a,b,c,d,e,f with a as the primary key, and d,e,f are
not frequently queried or often missing, move d,e,f to a separate table.
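
A minimal sketch of that split (all names are hypothetical):

-- Hot table: the primary key plus the frequently queried columns
CREATE TABLE clicks_hot (
    a serial PRIMARY KEY,
    b timestamp,
    c integer
);

-- Side table: the rarely used / often missing columns, joined only when
-- they are actually needed
CREATE TABLE clicks_extra (
    a integer PRIMARY KEY REFERENCES clicks_hot (a),
    d text,
    e text,
    f text
);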

well normalized structures are always more cache efficient.  Do you have
lots of repeating and/or empty data values in your tables?

Make your indexes and data as small as possible to reduce pressure on
the cache. Here are just a few tricks (sketched below):
1. use int2/int4 instead of numeric
2. know when to use char and varchar
3. use functional indexes to reduce index expression complexity.  This
can give extreme benefits if you can, for example, reduce a double field
index to a boolean.
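
Hedged examples of tricks 1 and 3 (table and column names are made up;
the expression-index form needs 7.4 or later):

-- Trick 1: use a plain int4 for things like IDs instead of numeric;
-- it is smaller and cheaper to compare.
CREATE TABLE hits (customer_id integer, price numeric(12,4), ts timestamp);

-- Trick 3: if queries only ever ask whether a click was paid for, index
-- the boolean expression instead of the wide numeric column.
CREATE INDEX hits_paid_idx ON hits ((price > 0));

-- Queries filtering on the same expression may then be able to use the
-- much smaller index:
SELECT count(*) FROM hits WHERE price > 0;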

Merlin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Need for speed 2

2005-08-25 Thread Ron

At 03:10 AM 8/25/2005, Ulrich Wisser wrote:


I realize I need to be much more specific. Here is a more detailed
description of my hardware and system design.


Pentium 4 2.4GHz
Memory 4x DIMM DDR 1GB PC3200 400MHZ CAS3, KVR
Motherboard chipset 'I865G', two IDE channels on board


First suggestion: Get better server HW.  AMD Opteron based dual 
processor board is the current best in terms of price/performance 
ratio, _particularly_ for DB applications like the one you have 
described.  Such mainboards cost ~$400-$500.  RAM will cost about 
$75-$150/GB.  Opteron 2xx are ~$200-$700 apiece.   So a 2P AMD system 
can be had for as little as ~$850 + the cost of the RAM you need.  In 
the worst case where you need 24GB of RAM (~$3600), the total comes 
in at ~$4450.  As you can see from the numbers, buying only what RAM 
you actually need can save you a great deal of money.


Given what little you said about how much of your DB is frequently 
accessed, I'd suggest buying a server based around the 2P 16 DIMM 
slot IWill DK88 mainboard (Tyan has announced a 16 DIMM slot 
mainboard, but I do not think it is actually being sold yet.).  Then 
fill it with the minimum amount of RAM that will allow the "working 
set" of the DB to be cached in RAM.  In the worst case where DB 
access is essentially uniform and essentially random, you will need 
24GB of RAM to hold the 22GB DB + OS + etc.  That worst case is 
_rare_.  Usually DB's have a working set that is smaller than the 
entire DB.  You want to keep that working set in RAM.  If you can't 
identify the working set, buy enough RAM to hold the entire DB.


In particular, you want to make sure that any frequently accessed 
read only tables or indexes are kept in RAM.  The "read only" part is 
very important.  Tables (and their indexes) that are frequently 
written to _have_ to access HD.  Therefore you get much less out of 
having them in RAM.  Read only tables and their indexes can be loaded 
into tmpfs at boot time, thereby keeping them out of the way of the file 
system buffer cache.  tmpfs does not save data if the host goes down, 
so it is very important that you ONLY use this trick with read only 
tables.  The other half of the trick is to make sure that the file 
system buffer cache does _not_ cache whatever you have loaded into tmpfs.




2x SEAGATE BARRACUDA 7200.7 80GB 7200RPM ATA/100
(software raid 1, system, swap, pg_xlog)
ADAPTEC SCSI RAID 2100S ULTRA160 32MB 1-CHANNEL
2x SEAGATE CHEETAH 15K.3 73GB ULTRA320 68-PIN WIDE
(raid 1, /var/lib/pgsql)


Second suggestion: you need a MUCH better IO subsystem.  In fact, 
given that you have described this system as being primarily OLTP 
like, this is more important than the above server HW.  Best would be 
to upgrade everything, but if you are strapped for cash, upgrade the 
IO subsystem first.


You need many more spindles and a decent RAID card or cards.  You 
want 15Krpm (best) or 10Krpm HDs.  As long as all of the HD's are at 
least 10Krpm, more spindles is more important than faster 
spindles.  If it's a choice between more 10Krpm discs or fewer 15Krpm 
discs, buy the 10Krpm discs.  Get the spindle count as high as you 
RAID cards can handle.


Whatever RAID cards you get should have as much battery backed write 
buffer as possible.  In the commodity market, presently the highest 
performance RAID cards I know of, and the ones that support the 
largest battery backed write buffer, are made by Areca.




Database size on disc is 22GB. (without pg_xlog)


Find out what the working set, ie the most frequently accessed 
portion, of this 22GB is and you will know how much RAM is worth 
having.  4GB is definitely too little!




Please find my postgresql.conf below.


Third suggestion:  make sure you are running a 2.6 based kernel and 
at least PG 8.0.3.  Helping beta test PG 8.1 might be an option for 
you as well.



Putting pg_xlog on the IDE drives gave about 10% performance 
improvement. Would faster disks give more performance?


What my application does:

Every five minutes a new logfile will be imported. Depending on the 
source of the request it will be imported in one of three "raw click"
tables. (data from two months back, to be able to verify customer 
complaints)  For reporting I have a set of tables. These contain data 
from the last two years. My app deletes all entries from today and 
reinserts updated data calculated from the raw data tables.


The raw data tables seem to be read only?  If so, you should buy 
enough RAM to load them into tmpfs at boot time and have them be 
completely RAM resident in addition to having enough RAM for the OS 
to cache an appropriate amount of the rest of the DB.



The queries contain no joins only aggregates. I have several indexes 
to speed different kinds of queries.


My problems occur when one user does a report that contains too 
much old data. In that case all cache mechanisms will fail and disc 
io is the limiting factor.


If one query contains so much data, that a f

Re: [PERFORM] Need for speed 2

2005-08-25 Thread Frank Wiles
On Thu, 25 Aug 2005 09:10:37 +0200
Ulrich Wisser <[EMAIL PROTECTED]> wrote:

> Pentium 4 2.4GHz
> Memory 4x DIMM DDR 1GB PC3200 400MHZ CAS3, KVR
> Motherboard chipset 'I865G', two IDE channels on board
> 2x SEAGATE BARRACUDA 7200.7 80GB 7200RPM ATA/100
> (software raid 1, system, swap, pg_xlog)
> ADAPTEC SCSI RAID 2100S ULTRA160 32MB 1-CHANNEL
> 2x SEAGATE CHEETAH 15K.3 73GB ULTRA320 68-PIN WIDE
> (raid 1, /var/lib/pgsql)
> 
> Database size on disc is 22GB. (without pg_xlog)
> 
> Please find my postgresql.conf below.
> 
> Putting pg_xlog on the IDE drives gave about 10% performance
> improvement. Would faster disks give more performance?

  Faster as in RPM on your pg_xlog partition probably won't make
  much of a difference.  However, if you can get a drive with better
  overall write performance then it would be a benefit. 

  Another thing to consider on this setup is whether or not you're
  hitting swap often and/or logging to that same IDE RAID set.  For
  optimal insertion benefit you want the heads of your disks to 
  essentially be only used for pg_xlog.  If you're having to jump
  around the disk in the following manner: 

write to pg_xlog
read from swap
write syslog data
write to pg_xlog 
...
...

  You probably aren't getting anywhere near the benefit you could.  One
  thing you could easily try is to break your IDE RAID set and put 
  OS/swap on one disk and pg_xlog on the other. 

> If one query contains so much data, that a full table scan is needed,
> I  do not care if it takes two minutes to answer. But all other
> queries  with less data (at the same time) still have to be fast.
> 
> I can not stop users doing that kind of reporting. :(
> 
> I need more speed in orders of magnitude. Will more disks / more
> memory do that trick?

  More disk and more memory always helps out.  Since you say these
  queries are mostly on not-often-used data I would lean toward more
  disks in your SCSI RAID-1 setup than maxing out available RAM based
  on the size of your database. 

 -
   Frank Wiles <[EMAIL PROTECTED]>
   http://www.wiles.org
 -


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Need for speed

2005-08-22 Thread Jim C. Nasby
RRS (http://rrs.decibel.org) might be of use in this case.

On Tue, Aug 16, 2005 at 01:59:53PM -0400, Alex Turner wrote:
> Are you calculating aggregates, and if so, how are you doing it (I ask
> the question from experience of a similar application where I found
> that my aggregating PL/pgSQL triggers were bogging the system down, and
> changed them to scheduled jobs instead).
> 
> Alex Turner
> NetEconomist
> 
> On 8/16/05, Ulrich Wisser <[EMAIL PROTECTED]> wrote:
> > Hello,
> > 
> > one of our services is click counting for on line advertising. We do
> > this by importing Apache log files every five minutes. This results in a
> > lot of insert and delete statements. At the same time our customers
> > shall be able to do on line reporting.
> > 
> > We have a box with
> > Linux Fedora Core 3, Postgres 7.4.2
> > Intel(R) Pentium(R) 4 CPU 2.40GHz
> > 2 scsi 76GB disks (15.000RPM, 2ms)
> > 
> > I did put pg_xlog on another file system on other discs.
> > 
> > Still when several users are on line the reporting gets very slow.
> > Queries can take more then 2 min.
> > 
> > I need some ideas how to improve performance in some orders of
> > magnitude. I already thought of a box with the whole database on a ram
> > disc. So really any idea is welcome.
> > 
> > Ulrich
> > 
> > 
> > 
> > --
> > Ulrich Wisser  / System Developer
> > 
> > RELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden
> > Direct (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769
> > 
> > http://www.relevanttraffic.com
> > 
> > ---(end of broadcast)---
> > TIP 1: if posting/reading through Usenet, please send an appropriate
> >subscribe-nomail command to [EMAIL PROTECTED] so that your
> >message can get through to the mailing list cleanly
> >
> 
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings
> 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Need for speed

2005-08-19 Thread Christopher Browne
>> Ulrich Wisser wrote:
>> >
>> > one of our services is click counting for on line advertising. We do
>> > this by importing Apache log files every five minutes. This results in a
>> > lot of insert and delete statements. 
> ...
>> If you are doing mostly inserting, make sure you are in a transaction,
>
> Well, yes, but you may need to make sure that a single transaction
> doesn't have too many inserts in it.  I was having a performance
> problem when doing transactions with a huge number of inserts (tens
> of thousands), and I solved the problem by putting a simple counter
> in the loop (in the Java import code, that is) and doing a commit
> every 100 or so inserts.

Are you sure that was an issue with PostgreSQL?

I have certainly observed that issue with Oracle, but NOT with
PostgreSQL.

I have commonly done data loads where they loaded 50K rows at a time,
the reason for COMMITting at that point being "programming paranoia"
at the possibility that some data might fail to load and need to be
retried, and I'd rather have less fail...

It would seem more likely that the issue would be on the Java side; it
might well be that the data being loaded might bloat JVM memory usage,
and that the actions taken at COMMIT time might keep the size of the
Java-side memory footprint down.
-- 
(reverse (concatenate 'string "moc.liamg" "@" "enworbbc"))
http://cbbrowne.com/info/
If we were meant to fly, we wouldn't keep losing our luggage.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Need for speed

2005-08-18 Thread Roger Hand
> Ulrich Wisser wrote:
> >
> > one of our services is click counting for on line advertising. We do
> > this by importing Apache log files every five minutes. This results in a
> > lot of insert and delete statements. 
...
> If you are doing mostly inserting, make sure you are in a transaction,

Well, yes, but you may need to make sure that a single transaction doesn't have 
too many inserts in it.
I was having a performance problem when doing transactions with a huge number 
of inserts
(tens of thousands), and I solved the problem by putting a simple counter in 
the loop (in the Java import code, 
that is) and doing a commit every 100 or so inserts.

-Roger

> John
>
> > Ulrich

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Need for speed

2005-08-17 Thread Matthew Nuzum
On 8/17/05, Ron <[EMAIL PROTECTED]> wrote:
> At 05:15 AM 8/17/2005, Ulrich Wisser wrote:
> >Hello,
> >
> >thanks for all your suggestions.
> >
> >I can see that the Linux system is 90% waiting for disc io.
...
> 1= your primary usage is OLTP-like, but you are also expecting to do
> reports against the same schema that is supporting your OLTP-like
> usage.  Bad Idea.  Schemas that are optimized for reporting and other
> data mining like operation are pessimal for OLTP-like applications
> and vice versa.  You need two schemas: one optimized for lots of
> inserts and deletes (OLTP-like), and one optimized for reporting
> (data-mining like).

Ulrich,

If you meant that your disc/scsi system is already the fastest
available *with your current budget*, then following Ron's advice I
quoted above will be a good step.

I have some systems very similar to yours. What I do is import in
batches and then immediately pre-process the batch data into tables
optimized for quick queries. For example, if your reports frequently
need to find the total number of views per hour for each customer,
create a table whose data contains just the totals for each customer
for each hour of the day. This will make it a tiny fraction of the
size, allowing it to fit largely in RAM for the query and making the
indexes more efficient.
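
A hedged sketch of such a pre-aggregated table (table and column names
are invented for illustration):

-- Hypothetical raw data, for the example only
CREATE TABLE raw_pageviews (customer_id integer, ts timestamp, url text);

-- One row per customer per hour: a tiny fraction of the raw pageview data
CREATE TABLE pageviews_by_hour (
    customer_id integer   NOT NULL,
    hour        timestamp NOT NULL,   -- truncated to the hour
    views       integer   NOT NULL,
    PRIMARY KEY (customer_id, hour)
);

-- After each import, refresh only the current day's rows
DELETE FROM pageviews_by_hour WHERE hour >= current_date;
INSERT INTO pageviews_by_hour (customer_id, hour, views)
SELECT customer_id, date_trunc('hour', ts), count(*)
FROM   raw_pageviews
WHERE  ts >= current_date
GROUP BY customer_id, date_trunc('hour', ts);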

This is a tricky job, but if you do it right, your company will be a
big success and buy you more hardware to work with. Of course, they'll
also ask you to create dozens of new reports, but that's par for the
course.

Even if you have the budget for more hardware, I feel that creating an
effective db structure is a much more elegant solution than throwing
more hardware at it. (I admit, sometimes it's cheaper to throw more hardware.)

If you have particular queries that are too slow, posting the explain
analyze for each on the list should garner some help.

-- 
Matthew Nuzum
www.bearfruit.org

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] Need for speed

2005-08-17 Thread Ron

At 05:15 AM 8/17/2005, Ulrich Wisser wrote:

Hello,

thanks for all your suggestions.

I can see that the Linux system is 90% waiting for disc io.


A clear indication that you need to improve your HD IO subsystem if possible.



At that time all my queries are *very* slow.


To be more precise, your server performance at that point is 
essentially equal to your HD IO subsystem performance.




 My scsi raid controller and disc are already the fastest available.


Oh, REALLY?  This is the description of the system you gave us:

"We have a box with
Linux Fedora Core 3, Postgres 7.4.2
Intel(R) Pentium(R) 4 CPU 2.40GHz
2 scsi 76GB disks (15.000RPM, 2ms)"


This is far, Far, FAR from "the fastest available" in terms of SW, 
OS, CPU host, _or_ HD subsystem.


The "fastest available" means
1= you should be running PostgreSQL 8.0.3
2= you should be running the latest stable 2.6 based kernel
3= you should be running an Opteron based server
4= Fibre Channel HDs are slightly higher performance than SCSI ones.
5= (and this is the big one) YOU NEED MORE SPINDLES AND A HIGHER END 
RAID CONTROLLER.


Your description of you workload was:
"one of our services is click counting for on line advertising. We do 
this by importing Apache log files every five minutes. This results 
in a lot of insert and delete statements. At the same time our 
customers shall be able to do on line reporting."


There are two issues here:
1= your primary usage is OLTP-like, but you are also expecting to do 
reports against the same schema that is supporting your OLTP-like 
usage.  Bad Idea.  Schemas that are optimized for reporting and other 
data mining like operation are pessimal for OLTP-like applications 
and vice versa.  You need two schemas: one optimized for lots of 
inserts and deletes (OLTP-like), and one optimized for reporting 
(data-mining like).


2= 2 spindles, even 15K rpm spindles, is minuscule.  Real enterprise 
class RAID subsystems have at least 10-20x that many spindles, 
usually split into 6-12 sets dedicated to different groups of tables 
in the DB.  Putting xlog on its own dedicated spindles is just the 
first step.


The absolute "top of the line" for RAID controllers is something 
based on Fibre Channel from Xyratex (who make the RAID engines for 
EMC and NetApps), Engino (the enterprise division of LSI Logic who 
sell mostly to IBM.  Apple has a server based on an Engino card), or 
dot-hill (who bought Chaparral among others).  I suspect you can't 
afford them even if they would do business with you.  The ante for a 
FC-based RAID subsystem in this class is in the ~$32K to ~$128K 
range, even if you buy direct from the actual RAID HW manufacturer 
rather than an OEM like EMC, IBM, or NetApp who will 2x or 4x the 
price.  OTOH, these subsystems will provide OLTP or OLTP-like DB apps 
with performance that is head-and-shoulders better than anything else 
to be found.  Numbers like 50K-200K IOPS.  You get what you pay for.


In the retail commodity market where you are more realistically going 
to be buying, the current best RAID controllers are probably the 
Areca cards ( www.areca.us ).  They come darn close to saturating the 
Real World Peak Bandwidth of a 64b 133MHz PCI-X bus and have better 
IOPS numbers than their commodity brethren.  However, _none_ of the 
commodity RAID cards have IOPS numbers anywhere near as high as those 
mentioned above.



To avoid aggregating too many rows, I already made some aggregation 
tables which will be updated after the import from the Apache 
logfiles.  That did help, but only to a certain level.


I believe the biggest problem is disc io. Reports for very recent 
data are quite fast; these are used very often and are therefore already 
in the cache. But reports can contain (and regularly do) very old 
data. In that case the whole system slows down. To me this sounds 
like the recent data is flushed out of the cache and now all data 
for all queries has to be fetched from disc.


I completely agree.  Hopefully my above suggestions make sense and 
are of use to you.




My machine has 2GB memory,


...and while we are at it, OLTP like apps benefit less from RAM than 
data mining ones, but still 2GB of RAM is just not that much for a 
real DB server...



Ron Peacetree



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Need for speed

2005-08-17 Thread Ron

At 05:15 AM 8/17/2005, Ulrich Wisser wrote:

Hello,

thanks for all your suggestions.

I can see that the Linux system is 90% waiting for disc io.


A clear indication that you need to improve your HD IO subsystem.


At that time all my queries are *very* slow.


To be more precise, your server performance at that point is 
essentially equal to your HD IO subsystem performance.




 My scsi raid controller and disc are already the fastest available.


Oh, REALLY?  This is the description of the system you gave us:

"We have a box with
Linux Fedora Core 3, Postgres 7.4.2
Intel(R) Pentium(R) 4 CPU 2.40GHz
2 scsi 76GB disks (15.000RPM, 2ms)"

This is far, Far, FAR from "the fastest available" in terms of SW, 
OS, CPU host, _or_ HD subsystem.


The "fastest available" means
1= you should be running 8.0.3
2= you should be running the latest stable 2.6 based kernel
3= you should be running an Opteron based server
4= Fibre Channel HDs are higher performance than SCSI ones.
5= (and this is the big one) YOU NEED MORE SPINDLES AND A HIGHER END 
RAID CONTROLLER.


The absolute "top of the line" for RAID controllers is something 
based on Fibre Channel from Xyratex (who make the RAID engines for 
EMC and NetApps), Engino (the enterprise division of LSI Logic who 
sell mostly to IBM.  Apple has a server based on an Engino card), 
dot-hill (who bought Chaparral among others).  I suspect you can't 
afford them even if they would do business with you.  The ante for a 
FC-based RAID subsystem in this class is in the ~$32K to ~$128K 
range, even if you buy direct from the actual RAID HW manufacturer 
rather than an OEM like


In the retail commodity market, the current best RAID controllers are 
probably the 16 and 24 port versions of the Areca cards ( 
www.areca.us ).  They come darn close to saturating the Real 
World Peak Bandwidth of a 64b 133MHz PCI-X bus.


I did put pg_xlog on another file system on other discs.


 The query plan uses indexes and "vacuum analyze" is run once a day.


That


To avoid aggregating too many rows, I already made some aggregation 
tables which will be updated after the import from the Apache 
logfiles.  That did help, but only to a certain level.


I believe the biggest problem is disc io. Reports for very recent 
data are quite fast; these are used very often and are therefore already 
in the cache. But reports can contain (and regularly do) very old 
data. In that case the whole system slows down. To me this sounds 
like the recent data is flushed out of the cache and now all data 
for all queries has to be fetched from disc.


My machine has 2GB memory,





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Need for speed

2005-08-17 Thread Josh Berkus
Ulrich,

> I believe the biggest problem is disc io. Reports for very recent data
> are quite fast, these are used very often and therefor already in the
> cache. But reports can contain (and regulary do) very old data. In that
> case the whole system slows down. To me this sounds like the recent data
> is flushed out of the cache and now all data for all queries has to be
> fetched from disc.

How large is the database on disk?

> My machine has 2GB memory, please find postgresql.conf below.

Hmmm ...
effective_cache_size?
random_page_cost?
cpu_tuple_cost?
etc.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] Need for speed

2005-08-17 Thread Jeffrey W. Baker
On Wed, 2005-08-17 at 11:15 +0200, Ulrich Wisser wrote:
> Hello,
> 
> thanks for all your suggestions.
> 
> I can see that the Linux system is 90% waiting for disc io. At that time 
> all my queries are *very* slow. My scsi raid controller and disc are 
> already the fastest available.

What RAID controller?  Initially you said you have only 2 disks, and
since you have your xlog on a separate spindle, I assume you have 1 disk
for the xlog and 1 for the data.  Even so, if you have a RAID, I'm going
to further assume you are using RAID 1, since no sane person would use
RAID 0.  In those cases you are getting the performance of a single
disk, which is never going to be very impressive.  You need a RAID.

Please be more precise when describing your system to this list.

-jwb


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Need for speed

2005-08-17 Thread Tom Lane
Ulrich Wisser <[EMAIL PROTECTED]> writes:
> My machine has 2GB memory, please find postgresql.conf below.

> max_fsm_pages = 5   # min max_fsm_relations*16, 6 bytes each

FWIW, that index I've been groveling through in connection with your
other problem contains an astonishingly large amount of dead space ---
almost 50%.  I suspect that you need a much larger max_fsm_pages
setting, and possibly more-frequent vacuuming, in order to keep a lid
on the amount of wasted space.
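
If it helps, one hedged way to see both numbers (assuming the 7.4/8.0
behavior): a database-wide VACUUM VERBOSE should end with a summary of
free space map usage that you can compare against max_fsm_pages.

-- Database-wide: with VERBOSE, the final INFO lines summarize how many
-- free space map pages are stored versus needed (compare against the
-- max_fsm_pages setting).  Appending a table name narrows the run to
-- that table and shows its removable (dead) row versions.
VACUUM VERBOSE;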

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Need for speed

2005-08-17 Thread Ulrich Wisser

Hello,

thanks for all your suggestions.

I can see that the Linux system is 90% waiting for disc io. At that time 
all my queries are *very* slow. My scsi raid controller and disc are 
already the fastest available. The query plan uses indexes and "vacuum 
analyze" is run once a day.


To avoid aggregating too many rows, I already made some aggregation 
tables which will be updated after the import from the Apache logfiles.

That did help, but only to a certain level.

I believe the biggest problem is disc io. Reports for very recent data 
are quite fast; these are used very often and are therefore already in the 
cache. But reports can contain (and regularly do) very old data. In that 
case the whole system slows down. To me this sounds like the recent data 
is flushed out of the cache and now all data for all queries has to be 
fetched from disc.


My machine has 2GB memory, please find postgresql.conf below.

Ulrich


#---
# RESOURCE USAGE (except WAL)
#---

# - Memory -

shared_buffers = 2  # min 16, at least max_connections*2, 
sort_mem = 4096 # min 64, size in KB

vacuum_mem = 8192   # min 1024, size in KB

# - Free Space Map -

max_fsm_pages = 5   # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 3000# min 100, ~50 bytes each

# - Kernel Resource Usage -

#max_files_per_process = 1000   # min 25
#preload_libraries = ''


#---
# WRITE AHEAD LOG
#---

# - Settings -

fsync = false   # turns forced synchronization on or off
#wal_sync_method = fsync# the default varies across platforms:
wal_buffers = 128   # min 4, 8KB each

# - Checkpoints -

checkpoint_segments = 16# in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300   # range 30-3600, in seconds
#checkpoint_warning = 30# 0 is off, in seconds
#commit_delay = 0   # range 0-10, in microseconds
#commit_siblings = 5# range 1-1000


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] Need for speed

2005-08-16 Thread Dennis Bjorklund
On Tue, 16 Aug 2005, Ulrich Wisser wrote:

> Still when several users are on line the reporting gets very slow. 
> Queries can take more then 2 min.

Could you show an example of such a query and the output of EXPLAIN ANALYZE
on that query (preferably done when the database is slow).

It's hard to say what is wrong without more information.

-- 
/Dennis Björklund


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Need for speed

2005-08-16 Thread Alex Turner
Are you calculating aggregates, and if so, how are you doing it (I ask
the question from experience of a similar application where I found
that my aggregating PL/pgSQL triggers were bogging the system down, and
changed them to scheduled jobs instead).
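
For what it's worth, a minimal sketch of the scheduled-job variant
(table and column names are invented); the block below would be run from
cron or right after each import instead of firing per-row triggers:

-- Hypothetical tables, for illustration only
CREATE TABLE raw_clicks   (ts timestamp, customer_id integer, url text);
CREATE TABLE report_daily (day date, customer_id integer, views integer);

BEGIN;
-- Throw away today's aggregates and rebuild them in one set-based pass
DELETE FROM report_daily WHERE day = current_date;
INSERT INTO report_daily (day, customer_id, views)
SELECT ts::date, customer_id, count(*)
FROM   raw_clicks
WHERE  ts >= current_date
GROUP BY ts::date, customer_id;
COMMIT;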

Alex Turner
NetEconomist

On 8/16/05, Ulrich Wisser <[EMAIL PROTECTED]> wrote:
> Hello,
> 
> one of our services is click counting for on line advertising. We do
> this by importing Apache log files every five minutes. This results in a
> lot of insert and delete statements. At the same time our customers
> shall be able to do on line reporting.
> 
> We have a box with
> Linux Fedora Core 3, Postgres 7.4.2
> Intel(R) Pentium(R) 4 CPU 2.40GHz
> 2 scsi 76GB disks (15.000RPM, 2ms)
> 
> I did put pg_xlog on another file system on other discs.
> 
> Still when several users are on line the reporting gets very slow.
> Queries can take more then 2 min.
> 
> I need some ideas how to improve performance in some orders of
> magnitude. I already thought of a box with the whole database on a ram
> disc. So really any idea is welcome.
> 
> Ulrich
> 
> 
> 
> --
> Ulrich Wisser  / System Developer
> 
> RELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden
> Direct (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769
> 
> http://www.relevanttraffic.com
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly
>

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Need for speed

2005-08-16 Thread Jeffrey W. Baker
On Tue, 2005-08-16 at 17:39 +0200, Ulrich Wisser wrote:
> Hello,
> 
> one of our services is click counting for on line advertising. We do 
> this by importing Apache log files every five minutes. This results in a 
> lot of insert and delete statements. At the same time our customers 
> shall be able to do on line reporting.
> 
> We have a box with
> Linux Fedora Core 3, Postgres 7.4.2
> Intel(R) Pentium(R) 4 CPU 2.40GHz

This is not a good CPU for this workload.  Try an Opteron or Xeon.  Also
of major importance is the amount of memory.  If possible, you would
like to have memory larger than the size of your database.

> 2 scsi 76GB disks (15.000RPM, 2ms)

If you decide your application is I/O bound, here's an obvious place for
improvement.  More disks == faster.

> I did put pg_xlog on another file system on other discs.

Did that have a beneficial effect?

> Still when several users are on line the reporting gets very slow. 
> Queries can take more then 2 min.

Is this all the time or only during the insert?

> I need some ideas how to improve performance in some orders of 
> magnitude. I already thought of a box with the whole database on a ram 
> disc. So really any idea is welcome.

You don't need a RAM disk, just a lot of RAM.  Your operating system
will cache disk contents in memory if possible.  You have a very small
configuration, so more CPU, more memory, and especially more disks will
probably all yield improvements.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Need for speed

2005-08-16 Thread John A Meinel
Ulrich Wisser wrote:
> Hello,
>
> one of our services is click counting for on line advertising. We do
> this by importing Apache log files every five minutes. This results in a
> lot of insert and delete statements. At the same time our customers
> shall be able to do on line reporting.

What are you deleting? I can see having a lot of updates and inserts,
but I'm trying to figure out what the deletes would be.

Is it just that you completely refill the table based on the apache log,
rather than doing only appending?
Or are you deleting old rows?

>
> We have a box with
> Linux Fedora Core 3, Postgres 7.4.2
> Intel(R) Pentium(R) 4 CPU 2.40GHz
> 2 scsi 76GB disks (15.000RPM, 2ms)
>
> I did put pg_xlog on another file system on other discs.
>
> Still when several users are on line the reporting gets very slow.
> Queries can take more then 2 min.

If it only gets slow when you have multiple clients it sounds like your
select speed is the issue, more than conflicting with your insert/deletes.

>
> I need some ideas how to improve performance in some orders of
> magnitude. I already thought of a box with the whole database on a ram
> disc. So really any idea is welcome.

How much ram do you have in the system? It sounds like you only have 1
CPU, so there is a lot you can do to make the box scale.

A dual Opteron (possibly a dual motherboard with dual core (but only
fill one for now)), with 16GB of ram, and an 8-drive RAID10 system would
perform quite a bit faster.

How big is your database on disk? Obviously it isn't very large if you
are thinking to hold everything in RAM (and only have 76GB of disk
storage to put it in anyway).

If your machine only has 512M, an easy solution would be to put in a
bunch more memory.

In general, your hardware is pretty low in overall specs. So if you are
willing to throw money at the problem, there is a lot you can do.

Alternatively, turn on statement logging, and then post the queries that
are slow. This mailing list is pretty good at fixing poor queries.
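
A hedged example of turning that on (parameter names as they exist in
7.4/8.0; changing them with SET requires superuser, or set them in
postgresql.conf instead):

-- Log any statement that takes longer than one second, with its runtime
SET log_min_duration_statement = 1000;   -- milliseconds; -1 disables

-- Or log every statement while collecting samples
-- (boolean in 7.4; in 8.0 the equivalent is log_statement = 'all')
SET log_statement = true;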

One thing you are probably hitting is a lot of sequential scans on the
main table.

If you are doing mostly inserting, make sure you are in a transaction,
and think about doing a COPY.
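
A minimal sketch of both suggestions (the table, columns, and file path
are invented for the example):

CREATE TABLE raw_clicks (ts timestamp, ip inet, url text);

-- Either wrap the per-row INSERTs in one transaction ...
BEGIN;
INSERT INTO raw_clicks VALUES ('2005-08-16 17:39:00', '10.0.0.1', '/index.html');
INSERT INTO raw_clicks VALUES ('2005-08-16 17:39:01', '10.0.0.2', '/ad.gif');
COMMIT;

-- ... or, better, load each five-minute batch with a single COPY
-- (use psql's \copy if the file lives on the client rather than the server)
COPY raw_clicks FROM '/tmp/access_log.tab';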

There is a lot more that can be said, we just need to have more
information about what you want.

John
=:->

>
> Ulrich
>
>
>





Re. : [PERFORM] Need for speed

2005-08-16 Thread bsimon
Hi,

How much Ram do you have ?
Could you give us your postgresql.conf  ? (shared buffer parameter)

If you do lots of delete/insert operations you HAVE to vacuum analyze 
your table (especially if you have indexes). 

I'm not sure if vacuuming locks your table with pg 7.4.2 (it doesn't with 
8.0); you might consider upgrading your pg version. 
Anyway, your "SELECT" performance while vacuuming is going to be affected. 


I don't know your application, but I would certainly try to split your 
table. It would result in one table for inserts/vacuum and one for 
selects. You would have to switch from one to the other every five 
minutes.
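
A rough sketch of that switch (table names invented): load and vacuum one
table while reports read from the other, then swap the names inside a
transaction every five minutes:

-- Hypothetical pair of identically structured tables
CREATE TABLE clicks         (ts timestamp, customer_id integer, url text);
CREATE TABLE clicks_loading (ts timestamp, customer_id integer, url text);

BEGIN;
ALTER TABLE clicks         RENAME TO clicks_old;
ALTER TABLE clicks_loading RENAME TO clicks;
ALTER TABLE clicks_old     RENAME TO clicks_loading;
COMMIT;
-- The demoted table can now be reloaded and vacuumed without blocking
-- the SELECTs that run against "clicks".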

Benjamin.





Ulrich Wisser <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
16/08/2005 17:39

 
To:  pgsql-performance@postgresql.org
cc: 
Subject: [PERFORM] Need for speed


Hello,

one of our services is click counting for on line advertising. We do 
this by importing Apache log files every five minutes. This results in a 
lot of insert and delete statements. At the same time our customers 
shall be able to do on line reporting.

We have a box with
Linux Fedora Core 3, Postgres 7.4.2
Intel(R) Pentium(R) 4 CPU 2.40GHz
2 scsi 76GB disks (15.000RPM, 2ms)

I did put pg_xlog on another file system on other discs.

Still when several users are on line the reporting gets very slow. 
Queries can take more then 2 min.

I need some ideas how to improve performance in some orders of 
magnitude. I already thought of a box with the whole database on a ram 
disc. So really any idea is welcome.

Ulrich



-- 
Ulrich Wisser  / System Developer

RELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden
Direct (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769

http://www.relevanttraffic.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Need for speed

2005-08-16 Thread Richard Huxton

Ulrich Wisser wrote:

Hello,

one of our services is click counting for on line advertising. We do 
this by importing Apache log files every five minutes. This results in a 
lot of insert and delete statements. At the same time our customers 
shall be able to do on line reporting.


I need some ideas how to improve performance in some orders of 
magnitude. I already thought of a box with the whole database on a ram 
disc. So really any idea is welcome.


So what's the problem - poor query plans? CPU saturated? I/O saturated? 
Too much context-switching?


What makes it worse - adding another reporting user, or importing 
another logfile?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly