Re: [PERFORM] postgres performance tunning

2010-12-19 Thread selvi88

My requirement is that more than 15 thousand queries will run:
5000 updates, 5000 inserts, and the rest selects.

Each query will be executed from its own psql client (say, for 15000
queries, 15000 psql connections will be made).

Since there are so many connections, the performance is low; I have
tested it with the pgbench tool.
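
Roughly, the kind of run I mean looks like this; the client count,
duration, script name and the t/t_log tables are only illustrative
(the -n, -c, -j, -T and -f options all exist in pgbench 8.4/9.0):

  pgbench -n -c 100 -j 4 -T 300 -f mixed.sql mydb

  -- mixed.sql, a made-up script standing in for the real
  -- update/insert/select mix:
  \setrandom id 1 100000
  UPDATE t SET val = val + 1 WHERE id = :id;
  INSERT INTO t_log (id, noted_at) VALUES (:id, now());
  SELECT val FROM t WHERE id = :id;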

Configurations,
RAM  : 17.9GB 
CPU  : 64-bit, 2 cores, 5346 bogomips each

Postgres Configurations,
Shared Memory Required (shmmax) : 172032 bytes
Wal Buffers : 1024KB
Maintenance work mem : 1024MB
Effective Cache Size : 9216MB
Work Memory : 32MB
Shared Buffer : 1536MB
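
In postgresql.conf syntax those settings correspond to (shmmax itself
is a kernel parameter, not a postgresql.conf entry):

  shared_buffers       = 1536MB
  wal_buffers          = 1024kB
  work_mem             = 32MB
  maintenance_work_mem = 1024MB
  effective_cache_size = 9216MB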


Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows

2010-12-19 Thread Justin Pitts
> If you strictly have an OLTP workload, with lots of simultaneous
> connections issuing queries across small chunks of data, then
> PostgreSQL would be a good match for SQL server.

This matches my observations. In fact, PostgreSQL's MVCC seems to work
heavily in my favor in OLTP workloads.

> On the other hand, if some of your workload is OLAP, with a few
> connections issuing complicated queries across large chunks of data,
> then PostgreSQL will not perform as well as SQL server.  SQL server
> can divide the processing load of complicated queries across several
> processors, while PostgreSQL cannot.

While I agree with this in theory, it may or may not have a big impact
in practice. If you're not seeing multi-CPU activity spike up on your
MSSQL box during complex queries, you aren't likely to benefit much.
You can test by timing a query with and without a MAXDOP 1 query
hint:

SELECT * FROM foo OPTION (MAXDOP 1);

which limits it to one processor. If it runs just as fast on one
processor, then this feature isn't something you'll miss.

Another pair of features that could swing performance in MSSQL's
favor is covering indexes and clustered indexes. You can sort of work
around the lack of clustered indexes in PostgreSQL, especially on
low-churn tables, by scheduling CLUSTER commands. I've seen
discussions recently suggesting that one or both of these features
are being looked at pretty closely for inclusion in PostgreSQL.
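
A minimal sketch of that workaround, with a hypothetical table and
index (CLUSTER rewrites the table under an exclusive lock, and the
physical ordering decays as new rows arrive, hence the scheduling):

  CLUSTER orders USING orders_customer_id_idx;

  -- re-run periodically, e.g. from cron during a quiet window:
  -- psql -d mydb -c "CLUSTER orders USING orders_customer_id_idx"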



Re: [PERFORM] postgres performance tunning

2010-12-19 Thread selvi88


Thanks for your suggestion. I had already gone through that URL, and
with its help I was able to get my configuration up to 5K
queries/second. The parameters I changed were shared_buffers,
work_mem, maintenance_work_mem and effective_cache_size.
Still, I was not able to reach my target.

Could you kindly tell me your Postgres configuration, so that I can
get some ideas from it?




Re: [PERFORM] Strange optimization - xmin,xmax compression :)

2010-12-19 Thread Jim Nasby
On Dec 17, 2010, at 8:46 PM, Robert Haas wrote:
> 2010/12/6 pasman pasmański <pasma...@gmail.com>:
>> hello.
>>
>> i tested how the xmin,xmax values are distributed on pages
>> in my tables. typically there are no more than 80 records
>> per page.
>>
>> maybe it's possible to compress the xmin & xmax values to
>> 1 byte per record (+ a table of transactions per page)?
>> that reduces the space when more than 1 record on a page is
>> from the same transaction.
>
> Not a bad idea, but not easy to implement, I think.

Another option that would help even more for data warehousing would be storing 
the XIDs at the table level, because you'll typically have a very limited 
number of transactions per table.
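
A quick way to see how limited that number actually is for a given
table (some_table is just a placeholder; the cast to text is only
there because xid has no ordering operator):

  SELECT count(*)                   AS total_rows,
         count(DISTINCT xmin::text) AS distinct_xmins
  FROM   some_table;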

But as Robert mentioned, this is not easy to implement. The community would 
probably need to see some pretty compelling performance numbers to even 
consider it.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: [PERFORM] CPU bound

2010-12-19 Thread Mladen Gogala

On 12/19/2010 7:57 PM, James Cloos wrote:

> RA == Royce Ausburn <ro...@inomial.com> writes:
>
> RA> I notice that when restoring a DB on a laptop with an SSD,
> RA> typically postgres is maxing out a CPU - even during a COPY.
>
> The time the CPUs spend waiting on system RAM shows up as CPU
> time, not as Wait time.  It could be just that the SSD is fast
> enough that the RAM is now the bottleneck, although parsing
> and text-to-binary conversions (especially for integers, reals
> and anything stored as an integer) also can be CPU-intensive.
>
> -JimC


Good time accounting is the most compelling reason for having a wait
event interface like Oracle's. Without a wait event interface, one
cannot really tell where the time is spent, at least not without
profiling the database code, which is not an option for a production
database.
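
For what it's worth, the closest thing built in today is the waiting
flag in pg_stat_activity, which only tells you whether a backend is
blocked on a lock, nothing about where its time actually goes (column
names as of 8.4/9.0):

  SELECT procpid, waiting, current_query
  FROM   pg_stat_activity
  WHERE  current_query <> '<IDLE>';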


--
Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
www.vmsinfo.com

