Re: Postgres on RAID5

2005-03-16 Thread David Dougall
In my experience, if you are concerned about filesystem performance, don't
use ext3.  It is one of the slowest filesystems I have ever used,
especially for writes.  I would suggest either reiserfs or xfs.
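
If you do switch, the conversion itself is simple; something along these
lines should work, though the device name and mount point below are only
placeholders for whatever your setup actually uses (and mkfs will of course
destroy the existing ext3 data, so dump and restore around it):

  umount /dev/md1
  mkfs.xfs /dev/md1                        # or: mkreiserfs /dev/md1
  mount -t xfs /dev/md1 /var/lib/postgres  # placeholder mount point

A cheaper first experiment is remounting the existing ext3 filesystem with
-o data=writeback, which sometimes helps write-heavy loads, but I have not
benchmarked that on a setup like yours.
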
--David Dougall


On Fri, 11 Mar 2005, Arshavir Grigorian wrote:

 Hi,

 I have a RAID5 array (mdadm) with 14 disks + 1 spare. The partition holds
 an ext3 filesystem used by Postgres. We are currently loading a 50 GB
 database onto this server from a Postgres dump (COPY, not INSERT) and are
 seeing very slow write performance (about 35 records per second).

 Top shows that the Postgres process (postmaster) is constantly being put
 into the D (uninterruptible sleep) state for extended periods of time (2-3
 seconds), which I assume means it is waiting on disk I/O. I have just
 started gathering system statistics, and here is what sar -b shows while
 the database is being loaded with pg_restore:

                   tps      rtps      wtps   bread/s   bwrtn/s
 01:35:01 PM    275.77     76.12    199.66    709.59   2315.23
 01:45:01 PM    287.25     75.56    211.69    706.52   2413.06
 01:55:01 PM    281.73     76.35    205.37    711.84   2389.86
 02:05:01 PM    282.83     76.14    206.69    720.85   2418.51
 02:15:01 PM    284.07     76.15    207.92    707.38   2443.60
 02:25:01 PM    265.46     75.91    189.55    708.87   2089.21
 02:35:01 PM    285.21     76.02    209.19    709.58   2446.46
 Average:       280.33     76.04    204.30    710.66   2359.47

 This is a Sun E450 with dual TI UltraSPARC II processors and 2 GB of RAM.
 It is currently running Debian Sarge with a custom-compiled
 2.4.27-sparc64-smp kernel. Postgres is installed from the Debian package
 and uses all the configuration defaults.
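
 I have seen suggestions to raise a few postgresql.conf settings for bulk
 loads; I have not applied any of them yet, and the values below are only
 ballpark figures I have seen mentioned elsewhere, not numbers tested on
 this box:

    shared_buffers = 10000        # ~80MB at 8kB pages; default is much lower
    checkpoint_segments = 16      # fewer checkpoint pauses during the restore
    sort_mem = 8192               # kB per sort (7.4-era parameter name)

 If anyone has better-informed values for hardware like this, I would be
 glad to hear them.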

 I am also copying the pgsql-performance list.

 Thanks in advance for any advice/pointers.


 Arshavir

 Following is some other info that might be helpful.

 /proc/scsi# mdadm -D /dev/md1
 /dev/md1:
         Version : 00.90.00
   Creation Time : Wed Feb 23 17:23:41 2005
      Raid Level : raid5
      Array Size : 123823616 (118.09 GiB 126.80 GB)
     Device Size : 8844544 (8.43 GiB 9.06 GB)
    Raid Devices : 15
   Total Devices : 17
 Preferred Minor : 1
     Persistence : Superblock is persistent

     Update Time : Thu Feb 24 10:05:38 2005
           State : active
  Active Devices : 15
 Working Devices : 16
  Failed Devices : 1
   Spare Devices : 1

          Layout : left-symmetric
      Chunk Size : 64K

            UUID : 81ae2c97:06fa4f4d:87bfc6c9:2ee516df
          Events : 0.8

     Number   Major   Minor   RaidDevice State
        0       8       64        0      active sync   /dev/sde
        1       8       80        1      active sync   /dev/sdf
        2       8       96        2      active sync   /dev/sdg
        3       8      112        3      active sync   /dev/sdh
        4       8      128        4      active sync   /dev/sdi
        5       8      144        5      active sync   /dev/sdj
        6       8      160        6      active sync   /dev/sdk
        7       8      176        7      active sync   /dev/sdl
        8       8      192        8      active sync   /dev/sdm
        9       8      208        9      active sync   /dev/sdn
       10       8      224       10      active sync   /dev/sdo
       11       8      240       11      active sync   /dev/sdp
       12      65        0       12      active sync   /dev/sdq
       13      65       16       13      active sync   /dev/sdr
       14      65       32       14      active sync   /dev/sds

       15      65       48       15      spare         /dev/sdt

 # dumpe2fs -h /dev/md1
 dumpe2fs 1.35 (28-Feb-2004)
 Filesystem volume name:   <none>
 Last mounted on:          <not available>
 Filesystem UUID:          1bb95bd6-94c7-4344-adf2-8414cadae6fc
 Filesystem magic number:  0xEF53
 Filesystem revision #:    1 (dynamic)
 Filesystem features:      has_journal dir_index needs_recovery large_file
 Default mount options:    (none)
 Filesystem state:         clean
 Errors behavior:          Continue
 Filesystem OS type:       Linux
 Inode count:              15482880
 Block count:              30955904
 Reserved block count:     1547795
 Free blocks:              28767226
 Free inodes:              15482502
 First block:              0
 Block size:               4096
 Fragment size:            4096
 Blocks per group:         32768
 Fragments per group:      32768
 Inodes per group:         16384
 Inode blocks per group:   512
 Filesystem created:       Wed Feb 23 17:27:13 2005
 Last mount time:          Wed Feb 23 17:45:25 2005
 Last write time:          Wed Feb 23 17:45:25 2005
 Mount count:              2
 Maximum mount count:      28
 Last checked:             Wed Feb 23 17:27:13 2005
 Check interval:           15552000 (6 months)
 Next check after:         Mon Aug 22 18:27:13 2005
 Reserved blocks uid:      0 (user root)
 Reserved blocks gid:      0 (group root)
 First inode:              11
 Inode size:               128

Re: [OT] best tape backup system?

2005-02-22 Thread David Dougall
Not sure if it is important to many people, but tapes take a lot less
electricity than online disks.
--David Dougall


On Tue, 22 Feb 2005, Jon Lewis wrote:

 On Tue, 22 Feb 2005, Alvin Oga wrote:

   Better depends on what you want/need/can afford.  Last time I was tape
   shopping, I thought this would be a good compromise on the need/can
   afford:
   Exabyte VXA-2 Packetloader 1x10
  
    Native tape capacity is 800GB.  The only downside is, no magazine... it
    stores the tapes in an internal carousel accessed from the front, one
   position at a time.  For a bit more $, they have magazine based tape
   library systems with VXA-2 drives.
 
  for 1TB of storage ... i'd put the data on 4 disks ( raided )
  and take the disks and put in nice bubble wrap and nice cushion

 I should clarify, that's 80GB per tape...so 800GB native assumes you have
 10 tapes in the unit.

  i keep wondering why people pay $150K for 1TB brandname tape subsystems ..

 I wouldn't pay that much...but I think the common wisdom is that tape is
 more durable/portable than disks.  Once upon a time, it was cheaper than
 disks too...but that's no longer the case.  It's part of why my plan to
 buy a bunch of Exabyte stuff got shot down and instead we bought P4's with
 1TB SATA-RAID5 arrays to use as backup servers.

 --
   Jon Lewis                   |  I route
   Senior Network Engineer     |  therefore you are
   Atlantic Net                |
 _ http://www.lewis.org/~jlewis/pgp for PGP public key_





No response?

2005-01-20 Thread David Dougall
Perhaps I was asking a stupid question, or an obvious one, but I have
received no response.
Maybe if I simplify the question...

If I am running software RAID1 and a disk device starts throwing I/O
errors, is the filesystem supposed to see any indication of this?  I
thought software RAID would mask all of this and simply fail the drive.

I have servers with xfs as the filesystem, and xfs starts throwing I/O
errors when a disk acts up, even with software RAID in between.
Please advise on how I can confirm my setup is behaving correctly, or, if
this is possibly a bug, how to diagnose it further.
If it makes a difference, I am running linux-2.4.26
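
For reference, these are the sorts of checks I have been running to see
whether md has actually noticed the failing disk (/dev/md0 below is just a
stand-in for the real array name):

  cat /proc/mdstat                       # is the array degraded, e.g. [U_] ?
  mdadm --detail /dev/md0                # per-disk state: active/faulty/removed
  dmesg | grep -i -E 'raid1|md0|error'   # are errors logged against sdX or md0?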
Thanks
--David Dougall
