> Sorry for my mistake on the 15000 recs per day.

It was useful for us to pick at that a bit; it was certainly looking a
mite suspicious.

> In fact, this server is planned as an OLTP database server for a retailer.
> Our intention is to set up one or two PostgreSQL databases on the server.
> The proper sizing info for the 1st PostgreSQL db should be:
> No. of item master : 200,000
> (This item master grows at 0.5% daily).
> No. of transactions from Point-of-Sales machines: 25,000

> Plus other tables, the total sizing that I estimated is 590,000
> records daily.

So that's more like 7 TPS, with, more than likely, a peak load several
times that.
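For what it's worth, the ~7 TPS figure is just simple arithmetic on the
590,000 records/day estimate; a quick sketch (the peak factor here is a
guess, not a measured figure):

```python
# Back-of-envelope check of the transaction rate, based on the
# 590,000 records/day estimate quoted above.
records_per_day = 590_000
seconds_per_day = 24 * 60 * 60          # 86,400

avg_tps = records_per_day / seconds_per_day
print(f"average: {avg_tps:.1f} TPS")    # roughly 6.8 TPS

# Retail traffic is bursty; assuming the busy hours carry several
# times the average load (an assumption, not data from the poster):
peak_factor = 5
print(f"assumed peak: {avg_tps * peak_factor:.0f} TPS")
```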

> The 2nd PostgreSQL db will be used by end users on client machines linked
> via ODBC, doing manual data entry.
> This will house the item master, loyalty card master and other historical
> data to be kept for at least 1.5 years.
> Therefore total sizing for this db is around 165,000,000 recs at any time.

FYI, it is useful to plan for purging the old data from the very
beginning; if you don't, things can get ugly :-(.
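A minimal sketch of what a daily purge job might look like, assuming a
hypothetical `pos_history` table keyed on a `trans_date` column (both
names are made up here; adjust to your schema):

```python
# Sketch of a daily purge for the historical database.  Table and
# column names are hypothetical.
from datetime import date, timedelta

RETENTION_DAYS = int(365 * 1.5)   # "at least 1.5 years", per the above

def purge_statement(today: date) -> str:
    cutoff = today - timedelta(days=RETENTION_DAYS)
    # Deleting by a date bound keeps the job simple; follow it with a
    # VACUUM so the freed space is actually reusable.
    return (f"DELETE FROM pos_history "
            f"WHERE trans_date < '{cutoff.isoformat()}';")

print(purge_statement(date.today()))
```

Deciding this up front also lets you pick indexes (e.g. on the date
column) that make the purge cheap instead of a full-table crawl.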

> In summary, the single machine must be able to take up around 100
> users connections via both socket and ODBC. And house the above
> number of records.

Based on multiplying the load by roughly 40, we certainly move from
"pedestrian hardware where anything will do" to something requiring
more exotic hardware.

- You _definitely_ want a disk array, with a bunch of SCSI disks.

- You _definitely_ will want some form of RAID controller with
  battery-backed cache.

- You probably want multiple CPUs.

- You almost certainly will want a second (and maybe third) complete
  redundant system that you replicate data to.

- The thing that will have _wild_ effects on whether this is enough,
  or whether you need to go for something even _more_ exotic
  (e.g. - moving to big iron UNIX(tm), whether that be Solaris,
  AIX, or HP-UX) is the issue of how heavily the main database gets
  hit by queries.

  If "all" it is doing is consolidating transactions, and there is
  little query load from the POS systems, that is a very different
  level of load from what happens if it is also servicing pricing
  queries from the POS terminals.

  Performance will get _destroyed_, regardless of how heavy the iron
  is, if you hit the OLTP system with a lot of transaction reports.
  You'll want a secondary replicated system to draw that load off.

Evaluating whether it needs to be "big" hardware or "really enormous"
hardware is not realistic based on what you have said.  There are
_big_ variations possible based notably on:

 1. What kind of query load does the OLTP server have to serve up?

    If the answer is "lots," then everything gets more expensive.

 2. How were the database schema and the clients' usage of it designed?

    How well they are done will have a _large_ impact on how many TPS
    the system can cope with.

You'll surely need to do some prototyping, and be open to the
possibility that you'll need to consider alternative OSes.
On Intel/AMD hardware, it may be worth considering FreeBSD; it may
also be needful to consider "official UNIX(tm)" hardware.  It would be
unrealistic to pretend to more certainty...
(reverse (concatenate 'string "ac.notelrac.teneerf" "@" "454aa"))
"Being really good at C++ is like being really good at using rocks to
sharpen sticks."  -- Thant Tessman
