Postgres 8.1.4
Slony 1.1.5
Linux manny 2.6.12-10-k7-smp #1 SMP Fri Apr 28 14:17:26 UTC 2006 i686
GNU/Linux
We're seeing an average of 30,000 context-switches a sec. This problem
was much worse w/8.0 and got bearable with 8.1 but slowly resurfaced.
Any ideas?
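One way to watch that rate is to average vmstat's cs column over a few samples; a sketch, assuming the classic vmstat column layout where cs is field 12:

```shell
# Sample vmstat once a second for six seconds and average the context-switch
# (cs) column. The first sample is a since-boot average, so NR > 3 skips the
# two header lines plus that first sample.
vmstat 1 6 | awk 'NR > 3 { sum += $12; n++ } END { if (n) printf "%d cs/sec average\n", sum / n }'
```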
Saranya Sivakumar wrote:
Hi All,
I am trying to back up a full copy of one of our databases (14G) and
restore it on another server. Both servers run version 7.3.2.
Though the restore completed successfully, it took 9 hours for the
process to complete. The destination server runs Fedora Core 3
Hi, Reimer,
carlosreimer wrote:
There are some performance problems with the server, and I discovered with
the vmstat tool that some process is writing a lot of data to
the disk subsystem.
[..]
How could I discover who is sending so much data to the disks?
It could be something
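On 2.6 kernels one option is the block_dump switch, which makes the kernel log each block write together with the name of the process that issued it; a sketch (stop syslogd first, or its own writes flood the log):

```shell
# As root, enable block-level write logging:
#   echo 1 > /proc/sys/vm/block_dump
# Each write then appears in the kernel log as e.g.
#   "postmaster(3452): WRITE block 1234 on sda3".
# Tally writes per process name:
dmesg | awk -F'(' '/WRITE block/ { writes[$1]++ } END { for (p in writes) print writes[p], p }' | sort -rn | head
# Switch it off again afterwards:
#   echo 0 > /proc/sys/vm/block_dump
```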
Hi, Charles,
Charles Sprickman wrote:
I've also got a 1U with a 9500SX-4 and 4 drives. I like how the 3Ware
card scales there - started with 2 drives and got drive speed
mirroring. Added two more and most of the bonnie numbers doubled. This
is not what I'm used to with the Adaptec SCSI
Hi, Arjen,
Arjen van der Meijden wrote:
It was the 8-core version with 16GB memory... but actually that's just
overkill; the active portion of the database easily fits in 8GB, and a
test on another machine with just 2GB didn't even show that much
improvement when going to 7GB (6x1G, 2x
Hi, Scott and Hale,
Scott Marlowe wrote:
Make sure analyze has been run and that the statistics are fairly
accurate.
It might also help to increase the statistics_target on the column in
question.
HTH,
Markus
--
Markus Schaber | Logical Tracking & Tracing International AG
Dipl. Inf. |
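For example, something along these lines raises the sample size for one column and refreshes the stats (table and column names here are placeholders; 10 was the default target):

```sql
-- Hypothetical table/column names; SET STATISTICS changes the per-column
-- target, ANALYZE rebuilds the statistics with the larger sample.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```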
Hi Markus,
As said, our environment really was a read-mostly one, so we didn't do
many inserts/updates and thus spent no time tuning those values, leaving
them at the default settings.
Best regards,
Arjen
Markus Schaber wrote:
Hi, Arjen,
Arjen van der Meijden wrote:
It was the 8core
Hi Richard,

Thank you very much for the suggestions. As I said, we are stuck with the 7.3.2 version for now. We have an Upgrade Project in place, but this backup is something we have to do immediately (we do not have enough time to test our application with 7.3.15 :( ). The checkpoint segments
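For a one-off restore on 7.3, the usual knobs are along these lines (a sketch only; revert the settings once the load finishes):

```
# postgresql.conf - temporary settings for a bulk restore
checkpoint_segments = 30    # spread checkpoints out during the load
sort_mem = 65536            # kB; speeds index rebuilds (the 7.3 name for work_mem)
fsync = false               # only while restoring replaceable data
```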
Tom Lane wrote:
We're seeing an average of 30,000 context-switches a sec. This problem
was much worse w/8.0 and got bearable with 8.1 but slowly resurfaced.
Is this from LWLock or spinlock contention? strace'ing a few backends
could tell the difference: look to see how many select(0,...)
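A rough way to tally the two from a capture (the trace file name is just an example):

```shell
# Capture ~20 seconds of syscalls from one busy backend, as root or the
# postgres user:   strace -p <backend_pid> -o backend.trace   ...then Ctrl-C.
# Spinlock contention shows up as select(0,...) delay loops, LWLock waits
# as semop() calls:
cat backend.trace 2>/dev/null | awk '/^select\(0,/ { s++ } /^semop\(/ { m++ }
    END { printf "%d select(0,...)  %d semop()\n", s + 0, m + 0 }'
```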
Hi All,

I tried to set shared_buffers = 1, turned off fsync, and reloaded the config file. But I got the following error:

IpcMemoryCreate: shmget(key=5432001, size=85450752, 03600) failed: Invalid
argument

This error usually means that PostgreSQL's request for a shared memory
segment exceeded your kernel's SHMMAX parameter. You can either
reduce the request size or reconfigure the kernel with larger
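On Linux, SHMMAX is a runtime tunable, so no kernel rebuild is needed; something like:

```
# As root, effective immediately (value in bytes; 128 MB as an example):
echo 134217728 > /proc/sys/kernel/shmmax
# Or persistently, in /etc/sysctl.conf (applied at boot or via "sysctl -p"):
kernel.shmmax = 134217728
```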
Jim C. Nasby wrote:
On Tue, Aug 01, 2006 at 08:42:23PM -0400, Alvaro Herrera wrote:
Most likely ext3 was used on the default configuration, which logs data
operations as well as metadata, which is what XFS logs. I don't think
I've seen any credible comparison between XFS and ext3 with the
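The journalling mode is selectable per mount; for instance (device and mount point are illustrative):

```
# /etc/fstab - ext3 always journals metadata; "data=" picks the data policy:
#   data=journal    data and metadata both journalled
#   data=ordered    data flushed before metadata commit (the usual default)
#   data=writeback  metadata-only journalling, closest to XFS's behaviour
/dev/sda3  /var/lib/pgsql  ext3  noatime,data=writeback  0 2
```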
Hi Richard,

Thank you very much for the information. The SHMMAX was set to 33554432, and that's why it failed to start the postmaster. Thanks for the link to the kernel resources article. I guess changing these parameters would require recompiling the kernel. Is there any work around
On Mon, 2006-08-07 at 12:26, hansell baran wrote:
Hi. I'm new at using PostgreSQL. I have found posts related to this
one but there is not a definite answer or solution. Here it goes.
Where I work, all databases were built with MS Access. The Access
files are hosted by computers with Windows
Is this from LWLock or spinlock contention?
Over a 20 second interval, I've got about 85 select()s and 6,230
semop()s. 2604 read()s vs 16 write()s.
OK, so mostly LWLocks then.
Do you have any long-running transactions,
Not long-running. We do have a badly behaving legacy app that is
Tom Lane wrote:
Sorry, I was unclear: it's the age of your oldest transaction that
counts (measured by how many xacts started since it), not how many
cycles it's consumed or not.
With the 8.1 code it's possible for performance to degrade pretty badly
once the age of your oldest transaction
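On 8.1 there is no per-backend transaction start column, but pg_stat_activity's query_start gives a first approximation of who has been busy longest:

```sql
-- Oldest active queries first; backends sitting idle inside a transaction
-- show "<IDLE> in transaction" in current_query.
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start
LIMIT 5;
```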
Although I for one have yet to see a controller that actually does this (I believe software RAID on Linux doesn't either).

Alex.

On 8/7/06, Markus Schaber [EMAIL PROTECTED] wrote:

Hi, Charles,

Charles Sprickman wrote:

I've also got a 1U with a 9500SX-4 and 4 drives. I like how the 3Ware card scales
Hi All,

Thanks Richard for the additional link. The information is very useful. The restore completed successfully in 2.5 hours on the new 2GB box, with the same configuration parameters. I think if I can tweak the parameters a little more, I should be able to get it down to the 1 hr down
On Mon, Aug 07, 2006 at 04:02:52PM -0400, Alex Turner wrote:
Although I for one have yet to see a controller that actually does this (I
believe software RAID on Linux doesn't either).
Linux' software RAID does. See earlier threads for demonstrations.
/* Steinar */
--
Homepage:
Hi,
First of all I must say that my reality in a southern Brazilian city is
way different from what we read on the list. I was looking for ways to
find the HW bottleneck and saw a configuration like:
we recently upgraded our dual Xeon Dell to a brand new Sun v40z with 4
opterons, 16GB of
I am doing some consulting for an animal hospital in the Boston, MA area.
They wanted a new server to run their database on. The client wants
everything from one vendor, they wanted Dell initially, I'd advised
against it. I recommended a dual Opteron system from either Sun or HP.
They settled on a
Steve,
On 8/5/06 4:10 PM, Steve Poe [EMAIL PROTECTED] wrote:
I am doing some consulting for an animal hospital in the Boston, MA area.
They wanted a new server to run their database on. The client wants
everything from one vendor, they wanted Dell initially, I'd advised
against it. I
Luke,
I'll do that then post the results. I ran zcav on it (default
settings) on the disc array formatted XFS, and its peak MB/s was around
85-90. I am using kernel 2.6.17.7, mounting the disc array with
noatime, nodiratime.
Thanks for your feedback.
Steve
On 8/7/06, Luke Lonergan [EMAIL
The database data is on the drive array(RAID10) and the pg_xlog is on
the internal RAID1 on the 6i controller. The results have been poor.
I have heard that the 6i was actually decent but to avoid the 5i.
Joshua D. Drake
My guess is the controllers are garbage.
Can you run bonnie++
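For instance, with a file set larger than RAM so the controller and OS caches can't flatter the numbers (paths and user here are examples):

```
$ bonnie++ -d /data/bonnie -s 16g -u postgres
```

On a 8GB box, -s 16g keeps the working set at roughly twice memory, which is the usual rule of thumb for bonnie++ runs.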
There is 64MB on the 6i and 192MB on the 642 controller. I wish the
controllers had a writeback enable option like the LSI MegaRAID
adapters have. I have tried splitting the cache accelerator 25/75, 75/25,
0/100, and 100/0, but the results really did not improve.
Steve

On 8/7/06, Joshua D. Drake [EMAIL
Luke,
Here are the results of two runs of 16GB file tests on XFS.
scsi disc array
xfs
,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1,0,16,3172,7,+,+++,2957,9,3197,10,+,+++,2484,8
(bonnie++ CSV, decoded: sequential write ~81/153 MB/s per-char/block, rewrite ~73 MB/s, sequential read ~82/243 MB/s per-char/block, 1043 seeks/sec)
scsi disc array
xfs