[PHP-DB] Performance problem with PostgreSQL 9.0.3 on an 8GB RAM, quad-core server -- need help!

2011-10-28 Thread Mohamed Hashim
I have a quad-core server with 8GB RAM:

vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5607  @ 2.27GHz
stepping: 2
cpu MHz : 1197.000
cache size  : 8192 KB


MemTotal:        8148636 kB
MemFree:         4989116 kB
Buffers:            8464 kB
Cached:          2565456 kB
SwapCached:        81196 kB
Active:          2003796 kB
Inactive:         843896 kB
Active(anon):    1826176 kB
Inactive(anon):   405964 kB
Active(file):     177620 kB
Inactive(file):   437932 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      16779260 kB
SwapFree:       16303356 kB
Dirty:              1400 kB
Writeback:             0 kB
AnonPages:        208260 kB
Mapped:          1092008 kB
Shmem:           1958368 kB
Slab:             224964 kB
SReclaimable:      60136 kB
SUnreclaim:       164828 kB
KernelStack:        2864 kB
PageTables:        35684 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    20853576 kB
Committed_AS:    3672176 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      303292 kB
VmallocChunk:   34359429308 kB
HardwareCorrupted:     0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        6144 kB
DirectMap2M:     2082816 kB
DirectMap1G:     6291456 kB

My database size is:

 pg_size_pretty
----------------
 21 GB

I have one table with more than 160,500,460 rows, and I have partitioned it
year-wise into different schemas:

                              Table "_100410.stk_source"
        Column         |   Type    |              Modifiers               | Storage  | Description
-----------------------+-----------+--------------------------------------+----------+-------------
 source_id             | integer   | not null default                     | plain    |
                       |           |   nextval('source_id_seq'::regclass) |          |
 stock_id              | integer   |                                      | plain    |
 source_detail         | integer[] |                                      | extended |
 transaction_reference | integer   |                                      | plain    |
 is_user_set           | boolean   | default false                        | plain    |
Triggers:
    insert_stk_source_trigger BEFORE INSERT ON stk_source FOR EACH ROW
        EXECUTE PROCEDURE stk_source_insert_trigger()
Child tables: _100410_200809.stk_source,
              _100410_200910.stk_source,
              _100410_201011.stk_source,
              _100410_201112.stk_source
Has OIDs: yes

It also has these indexes:

ss_source_id_pk PRIMARY KEY, btree (source_id)
stk_source_stock_id_idx btree (stock_id)


The first two years' data is very small, so there are no issues there; the
next two years' tables are 2 GB and 10 GB respectively.

EXPLAIN SELECT * FROM stk_source;
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Result  (cost=0.00..6575755.39 rows=163132513 width=42)
   ->  Append  (cost=0.00..6575755.39 rows=163132513 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..42.40 rows=1080 width=45)
         ->  Seq Scan on stk_source  (cost=0.00..20928.37 rows=519179 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..85125.82 rows=2111794 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..6469658.80 rows=160500460 width=42)
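(Editor's note: a SELECT with no filter on the partition key must append and
scan every child table, which is exactly what the plan above shows. A minimal
sketch of how constraint exclusion lets the planner skip partitions -- the
CHECK constraint and the choice of source_id as the partitioning column are
assumptions for illustration, since the \d+ output above does not show the
actual partition constraints:)

```sql
-- Hypothetical example: give each yearly child table a CHECK constraint
-- on its key range (names and bounds are illustrative only).
ALTER TABLE _100410_201112.stk_source
    ADD CONSTRAINT stk_source_201112_chk
    CHECK (source_id >= 150000000);

-- With constraint_exclusion = partition (the default in 9.0), a query
-- whose WHERE clause contradicts the other children's constraints will
-- scan only the matching partition instead of all four:
EXPLAIN SELECT * FROM stk_source WHERE source_id >= 155000000;
```

Without such constraints, every query against the parent pays for a scan of
all children, regardless of how the data is split.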


Because of this table, my total database performance is affected. I wanted to
optimize the settings, and after reading the thread below I changed some
configuration, but the system is still slow:
http://comments.gmane.org/gmane.comp.db.postgresql.performance/29561

We are using a *PHP* application with a *PostgreSQL 9.0.3* database. The
server is accessed by 40-50 users daily, so we want more performance. My
config details are below.

Could anyone help with how to tune the settings for better performance?

Thanks in advance..

# - Memory -

shared_buffers = 2GB                    # min 128kB
                                        # (change requires restart)
#temp_buffers = 8MB                     # min 800kB
max_prepared_transactions = 0           # zero disables the feature
                                        # (change requires restart)

# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.

work_mem = 48MB                         # min 64kB
maintenance_work_mem = 256MB            # min 1MB
max_stack_depth = 6MB                   # min 100kB

# - Planner Cost Constants -

seq_page_cost = 1.0                     # measured on an arbitrary scale
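(Editor's note: one setting conspicuously absent from the excerpt is
effective_cache_size. A sketch of commonly suggested starting points for an
8 GB machine follows -- these are rules of thumb, not measured tunings for
this workload, and every value would need to be validated under real load:)

```
# Hedged starting points for ~8 GB RAM (PostgreSQL 9.0 era); verify
# each against the actual workload before relying on it.
shared_buffers = 2GB              # ~25% of RAM is a common rule of thumb
effective_cache_size = 6GB        # approximate RAM left to the OS page cache;
                                  # guides the planner toward index scans
work_mem = 16MB                   # allocated per sort/hash node per backend --
                                  # 48MB can exhaust RAM with many connections
checkpoint_segments = 16          # fewer, larger checkpoints under write load
```

Note that work_mem multiplies across concurrent sorts, so with 40-50 users a
smaller per-operation value is often safer than one large global setting.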

Re: [PHP-DB] Simple MySQL sample code runs out of memory

2011-10-28 Thread tamouse mailing lists
On Thu, Oct 27, 2011 at 10:04 PM,  p...@umpquanet.com wrote:
> Something more fundamental is going on.
>
> Here is simpler code which still reproduces the problem.
> I get:
>
> PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to
> allocate 20 bytes) in xx3.php on line 26
>
> What's up with that?


Hmm, okay, I haven't worked with such huge datasets in MySQL and PHP
before, so I'm speculating here.

If the dataset is so huge that it's eating up all your memory, perhaps
you need to cycle through the database using limits, process it in
chunks, and release each chunk as you're done with it. I know I've had
to do that with Oracle databases that were storing hundreds of millions
of records. It's rather like pagination for displaying data, only in
this case the chunks can be much, much larger than a displayed page.
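(Editor's note: a minimal sketch of that chunked approach in PHP. The mysqli
connection, the process_row() callback, and the table/column names are all
placeholders, not from the original thread:)

```php
<?php
// Sketch: fetch a large table in fixed-size chunks so only one chunk
// is held in memory at a time. Assumes $db is an open mysqli connection;
// process_row(), big_table, and its columns are hypothetical.
$chunk  = 10000;
$offset = 0;
do {
    $res = $db->query(
        "SELECT id, payload FROM big_table ORDER BY id LIMIT $chunk OFFSET $offset"
    );
    $fetched = $res->num_rows;
    while ($row = $res->fetch_assoc()) {
        process_row($row);      // handle one record, then let it go
    }
    $res->free();               // release this chunk before fetching the next
    $offset += $chunk;
} while ($fetched === $chunk);  // a short final chunk means we are done
```

For a single huge result set, an unbuffered query (mysqli's
MYSQLI_USE_RESULT mode) is another option: it streams rows instead of
buffering the whole result client-side, which is often the direct cause of
the "allowed memory size exhausted" error quoted above.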

--
PHP Database Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php