Re: [PERFORM] New server setup

2013-03-01 Thread Wales Wang
Please choose PCI-E Flash for a write-heavy app.

Wales

On 2013-03-01, at 8:43 PM, Niels Kristian Schjødt nielskrist...@autouncle.com wrote:

 Hi, I'm going to setup a new server for my postgresql database, and I am 
 considering one of these: 
 http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with four 
 SAS drives in a RAID 10 array. Do any of you have any particular 
 comments/pitfalls/etc. to mention about the setup? My application is very 
 write-heavy.
 
 
 


[PERFORM] Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-27 Thread Wales Wang
There are many approaches for running PostgreSQL in-memory.

The quick and easy way is to run a slave PostgreSQL instance on a persistent
RAM filesystem; the slave is part of a master/slave replication cluster.
 
The fstab entry and the script that make the RAM filesystem persistent are below:
Setup:
First, create a mountpoint for the disk:
mkdir /mnt/ramdisk
Second, add this line to /etc/fstab to mount the drive at boot time:
tmpfs   /mnt/ramdisk tmpfs  defaults,size=65536M 0 0
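
To mount it right away without rebooting (using the mountpoint from the fstab entry above):

  # mount the new fstab entry and verify its size
  mount /mnt/ramdisk
  df -h /mnt/ramdisk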

#!/bin/sh
# /etc/init.d/ramdisk.sh
#

case "$1" in
  start)
    echo "Copying files to ramdisk"
    rsync -av /data/ramdisk-backup/ /mnt/ramdisk/
    echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk Synched from HD" >> /var/log/ramdisk_sync.log
    ;;
  sync)
    echo "Synching files from ramdisk to Harddisk"
    echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk Synched to HD" >> /var/log/ramdisk_sync.log
    rsync -av --delete --recursive --force /mnt/ramdisk/ /data/ramdisk-backup/
    ;;
  stop)
    echo "Synching logfiles from ramdisk to Harddisk"
    echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk Synched to HD" >> /var/log/ramdisk_sync.log
    rsync -av --delete --recursive --force /mnt/ramdisk/ /data/ramdisk-backup/
    ;;
  *)
    echo "Usage: /etc/init.d/ramdisk.sh {start|sync|stop}"
    exit 1
    ;;
esac
exit 0
 
You can run it at startup and shutdown, and from cron hourly.
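
For example, on a Debian-style system (a sketch; the install path and cron schedule are assumptions):

  # make the script executable and register it to run at boot and shutdown
  chmod +x /etc/init.d/ramdisk.sh
  update-rc.d ramdisk.sh defaults
  # root crontab entry to sync the ramdisk back to disk hourly
  0 * * * * /etc/init.d/ramdisk.sh sync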
 
Wales Wang 


From: Jeff Janes jeff.ja...@gmail.com
To: Stefan Keller sfkel...@gmail.com
Cc: Wales Wang wormw...@yahoo.com; pgsql-performance@postgresql.org; Stephen Frost sfr...@snowman.net
Date: Monday, February 27, 2012, 6:34 AM
Subject: Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

On Sun, Feb 26, 2012 at 2:56 AM, Stefan Keller sfkel...@gmail.com wrote:
 Hi Jeff and Wales,

 2012/2/26 Jeff Janes jeff.ja...@gmail.com wrote:
 The problem is that the initial queries are too slow - and there is no
 second chance. I do have to trash the buffer every night. There is
 enough main memory to hold all table contents.

 Just that table, or the entire database?

 The entire database consisting of only about 5 tables which are
 similar but with different geometry types plus a relations table (as
 OpenStreetMap calls it).

And all of those combined fit in RAM?  With how much to spare?
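
One way to answer that (standard commands, not part of the original exchange):

  # total on-disk size of the current database
  psql -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
  # compare against installed RAM
  free -m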


 1. How can I warm up or re-populate shared buffers of Postgres?

 Instead, warm the OS cache.  Then data will get transferred into the
 postgres shared_buffers pool from the OS cache very quickly.

 tar -c $PGDATA/base/ |wc -c

 Ok. So by OS cache you mean the files, which to me are THE database itself?

Most operating systems will use any otherwise unused RAM to cache
recently accessed file-system data.  That is the OS cache.  The
purpose of the tar is to populate the OS cache with the database
itself.  That way, when postgres wants something that isn't already
in shared_buffers, it doesn't require a disk read to get it, just a
request to the OS.

But this trick is most useful after the OS has been restarted so the
OS cache is empty.  If the OS has been up for a long time, then why
isn't it already populated with the data you need?  Maybe the data
doesn't fit, maybe some other process has trashed the cache (in which
case, why would it not continue to trash the cache on an ongoing
basis?)

Since you just recently created the tables and indexes, they must have
passed through the OS cache on the way to disk.  So why aren't they
still there?  Is shared_buffers so large that little RAM is left over
for the OS?  Did you reboot the OS?  Are there other processes running
that drive the database-specific files out of the OS cache?
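
A quick way to see how much RAM the OS is currently using as file cache (standard Linux tools, not from the original mail):

  # the Cached figure is the OS file-system cache
  grep -E '^(MemFree|Cached):' /proc/meminfo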

 A cache to me is a second storage layer with controlled redundancy, for
 performance reasons.

Yeah.  But there are multiple caches, with different parties in
control and different opinions of what is redundant.

 2. Are there any hints on how to tell Postgres to read in all table
 contents into memory?

  I don't think so, at least not in core.  I've wondered if it would
  make sense to suppress ring-buffer strategy when there are buffers on
  the free-list.  That way a sequential scan would populate
  shared_buffers after a restart.  But it wouldn't help you get the
  indexes into cache.

 So, are there any developments going on with PostgreSQL as Stephen
 suggested in the former thread?

I don't see any active development for the upcoming release, and most
of what has been suggested wouldn't help you because they are about
re-populating the cache with previously hot data, while you are
destroying your previously hot data and wanting to specify the
future-hot data.

By the way, your explain plan would be more useful if it included
buffers.  Explain (analyze, buffers) select...

I don't know that it is ever better to run analyze without buffers,
other than for backwards compatibility.  I'm trying to get in the
habit of just automatically doing it.
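
For example (the table and hstore filter are hypothetical, shown only to illustrate the syntax):

  # buffer hits and reads are reported per plan node
  psql -c "EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>hotel'::hstore;"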

Cheers,

Jeff


[PERFORM] Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

2012-02-26 Thread Wales Wang
You can try PostgreSQL 9.x master/slave replication, then run the slave
on a persistent RAM filesystem (tmpfs), so that all your data is accessed
from the slave PostgreSQL running on tmpfs.
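
A minimal sketch of that setup with 9.x streaming replication (hostnames, the replication user, and paths are assumptions; the master must already be configured to allow streaming, and /mnt/ramdisk must already be mounted as tmpfs):

  # clone the master onto the tmpfs mount (pg_basebackup is available from 9.1)
  pg_basebackup -h master.example.com -U replicator -D /mnt/ramdisk/pgdata
  # a 9.x slave streams from the master via recovery.conf
  echo "standby_mode = 'on'" > /mnt/ramdisk/pgdata/recovery.conf
  echo "primary_conninfo = 'host=master.example.com port=5432 user=replicator'" >> /mnt/ramdisk/pgdata/recovery.conf
  pg_ctl -D /mnt/ramdisk/pgdata start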
 


From: Jeff Janes jeff.ja...@gmail.com
To: Stefan Keller sfkel...@gmail.com
Cc: pgsql-performance@postgresql.org; Stephen Frost sfr...@snowman.net
Date: Sunday, February 26, 2012, 10:13 AM
Subject: Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

On Sat, Feb 25, 2012 at 4:16 PM, Stefan Keller sfkel...@gmail.com wrote:

 I'd like to come back to the issue of the so-called in-memory key-value database.

 To recap: it contains the table definition and queries as indicated in
 the appendix [0]. There exist 4 other tables of similar structure.
 There are indexes on each column. The tables contain around 10 million
 tuples. The database is read-only; it's completely updated every
 day. I don't expect more than 5 concurrent users at any time. A
 typical query looks like [1] and varies in an unforeseeable way (that's
 why hstore is used). EXPLAIN tells me that the indexes are used [2].

 The problem is that the initial queries are too slow - and there is no
 second chance. I do have to trash the buffer every night. There is
 enough main memory to hold all table contents.

Just that table, or the entire database?


 1. How can I warm up or re-populate shared buffers of Postgres?

Instead, warm the OS cache.  Then data will get transferred into the
postgres shared_buffers pool from the OS cache very quickly.

tar -c $PGDATA/base/ |wc -c

If you need to warm just one table, because the entire base directory
won't fit in OS cache, then you need to do a bit more work to find out
which files to use.
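
For example, to warm just one table (pg_relation_filepath is a standard function; the table name is hypothetical, and each index has its own files that would need the same treatment):

  # path of the table's main file, relative to the data directory
  F=$(psql -Atc "SELECT pg_relation_filepath('osm_point');")
  # read it, plus any .1, .2, ... segment files, into the OS cache
  cat "$PGDATA/$F"* > /dev/null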

You might feel clever and try this instead:

tar -cf /dev/null $PGDATA/base/ > /dev/null

But my tar program is too clever by half.  It detects that it is
writing to /dev/null, and just does not actually read the data.
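
An alternative that defeats that optimization (a sketch, not from the original mail): cat never checks where its output goes, so redirecting it to /dev/null still forces every block to be read.

  # read every file under base/ to pull it into the OS cache
  find $PGDATA/base -type f -exec cat {} + > /dev/null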

 2. Are there any hints on how to tell Postgres to read in all table
 contents into memory?

I don't think so, at least not in core.  I've wondered if it would
make sense to suppress ring-buffer strategy when there are buffers on
the free-list.  That way a sequential scan would populate
shared_buffers after a restart.  But it wouldn't help you get the
indexes into cache.

Cheers,

Jeff


Re: [PERFORM] File system choice for Red Hat systems

2010-06-02 Thread Wales Wang

You can try Scientific Linux 5.x; it is based on CentOS plus XFS and
some other software for HPC.
It has had XFS for years.
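
For reference, putting a PostgreSQL data volume on XFS might look like this (the device name and mountpoint are assumptions; nobarrier is only safe with a battery-backed controller cache, as Alan notes below):

  mkfs.xfs /dev/sdb1
  mount -t xfs -o noatime,nobarrier /dev/sdb1 /var/lib/pgsql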


--- On Wed, 6/2/10, Alan Hodgson ahodg...@simkin.ca wrote:

 From: Alan Hodgson ahodg...@simkin.ca
 Subject: Re: [PERFORM] File system choice for Red Hat systems
 To: pgsql-performance@postgresql.org
 Date: Wednesday, June 2, 2010, 10:53 PM
 On Tuesday 01 June 2010, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:
  I'm helping set up a Red Hat 5.5 system for Postgres. I was going to
  recommend xfs for the filesystem - however it seems that xfs is
  supported as a technology preview layered product for 5.5. This
  apparently means that the xfs tools are only available via special
  channels.
  
  What are Red Hat-using people choosing for a good performing filesystem?
 
 I've run PostgreSQL on XFS on CentOS for years. It works well. Make sure
 you have a good battery-backed RAID controller under it (true for all
 filesystems).
 
 -- 
 No animals were harmed in the recording of this episode. We tried but
 that damn monkey was just too fast.
 