Title: INSERTs becoming slower and slower
Hi,
I am breaking up huge texts (between 25K and 250K words) into single words using PL/pgSQL.
For this I am using a temp table in the first step:
LOOP
vLeft := vRight;
vTmp := vLeft;
LOOP
vChr := SUBSTRING ( pText FROM
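The code above is cut off by the archive; a minimal sketch of the word-splitting loop it appears to implement might look like the following (the function name, temp table name, and exact logic are assumptions, not the original code):

```sql
-- Hypothetical reconstruction: split pText on single spaces and insert
-- each word into a temp table. Repeated spaces are not handled here.
CREATE TEMP TABLE tmp_words (word text);

CREATE OR REPLACE FUNCTION split_words(pText text) RETURNS void AS $$
DECLARE
    vLeft  integer := 1;
    vRight integer;
BEGIN
    LOOP
        -- position of the next space, relative to vLeft
        vRight := POSITION(' ' IN SUBSTRING(pText FROM vLeft));
        EXIT WHEN vRight = 0;
        INSERT INTO tmp_words (word)
            VALUES (SUBSTRING(pText FROM vLeft FOR vRight - 1));
        vLeft := vLeft + vRight;
    END LOOP;
    -- trailing word after the last space
    INSERT INTO tmp_words (word) VALUES (SUBSTRING(pText FROM vLeft));
END;
$$ LANGUAGE plpgsql;
```

If single-row INSERTs inside such a loop get slower as the temp table grows, the usual suspects are per-statement overhead and (on older releases) dead-tuple accumulation; the tsearch2 suggestion later in the thread sidesteps the loop entirely.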
You might find it faster to install contrib/tsearch2 for text indexing
sort of purposes...
Nörder-Tuitje wrote:
Hi,
I am breaking up huge texts (between 25K and 250K words) into single
words using PL/pgSQL.
For this I am using a temp table in the first step:
LOOP
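For reference, a hedged sketch of the 8.1-era contrib/tsearch2 usage being suggested; the table and column names here are made up, and the 'default' configuration name follows the tsearch2 README:

```sql
-- After installing contrib/tsearch2 into the database:
ALTER TABLE docs ADD COLUMN fti tsvector;

-- to_tsvector parses and normalizes the text in C, replacing the
-- per-word PL/pgSQL loop:
UPDATE docs SET fti = to_tsvector('default', body);
CREATE INDEX docs_fti_idx ON docs USING gist (fti);

-- Full-text query:
SELECT * FROM docs WHERE fti @@ to_tsquery('default', 'word');
```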
On 06/12/05, Michael Stone ([EMAIL PROTECTED]) wrote:
On Tue, Dec 06, 2005 at 07:52:25PM -0500, Alex Turner wrote:
I would argue that almost certainly won't help: by doing that you will
create a new place even further away for the disk head to seek to,
instead of just another file on the same FS.
We are testing disk I/O on our new server (referred to in my recent
questions about LVM and XFS on this list) and have run bonnie++ on the
xfs partition destined for postgres; results noted below.
I haven't been able to find many benchmarks showing desirable IO stats.
As far as I can tell the
Hi All,
I am working on an application that uses PostgreSQL. One of the
functions of the application is to generate reports. In order to keep
the code in the application simple we create a view of the required
data
in the database and then simply execute a SELECT * FROM
view_of_the_data;
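The pattern being described can be illustrated with a hypothetical report view (table and column names are invented for illustration):

```sql
-- All report complexity lives in the database:
CREATE VIEW view_of_the_data AS
SELECT o.order_id, o.order_date, c.customer_name, o.total
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id;

-- The application then only ever issues:
SELECT * FROM view_of_the_data;
```

Note that the planner expands the view definition into the query, so EXPLAIN on the SELECT shows the full underlying join plan, not a black box.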
Keith Worthington wrote:
Hi All,
I am working on an application that uses PostgreSQL. One of the
functions of the application is to generate reports. In order to keep
the code in the application simple we create a view of the required data
in the database and then simply execute a SELECT *
Christopher Kings-Lynne wrote:
You might find it faster to install contrib/tsearch2 for text indexing
sort of purposes...
Nörder-Tuitje wrote:
Here is my config:
shared_buffers = 2000                   # min 16, at least max_connections*2, 8KB each
work_mem = 32768                        # min 64, size in KB
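For scale: shared_buffers = 2000 is only about 16MB at 8KB per buffer, which is small for a dedicated server. A hedged sketch of more typical 8.1-era values for a machine with a few GB of RAM (these numbers are assumptions for illustration, not the poster's settings, and depend heavily on workload):

```
shared_buffers = 20000                  # ~160MB at 8KB each
work_mem = 16384                        # 16MB per sort/hash, per backend
effective_cache_size = 262144           # ~2GB, counted in 8KB pages
```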
I hope that this will demonstrate the problem and will give the needed
information (global_content_id=90 is the record that was all the time
updated):
V-Mark=# UPDATE active_content_t SET ac_counter_mm4_outbound=100 WHERE
global_content_id=90;
UPDATE 1
Time: 396.089 ms
V-Mark=# UPDATE
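In PostgreSQL of this era, every UPDATE of a row leaves a dead tuple behind, so a single row updated thousands of times between vacuums is a common reason the same UPDATE gets progressively slower. A sketch of the usual diagnosis and remedy, assuming bloat is the cause (table and column names from the post above):

```sql
-- Show the real plan and timing for the slow statement:
EXPLAIN ANALYZE
UPDATE active_content_t SET ac_counter_mm4_outbound = 100
WHERE global_content_id = 90;

-- Reclaim the accumulated dead tuples and refresh statistics:
VACUUM ANALYZE active_content_t;
```

If the timing drops back down after the VACUUM, scheduling frequent vacuums (or enabling autovacuum) on this table is the standard fix.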
I have a choice to make on a RAID enclosure:
14x 36GB 15kRPM ultra 320 SCSI drives
OR
12x 72GB 10kRPM ultra 320 SCSI drives
both would be configured into RAID 10 over two SCSI channels using a
megaraid 320-2x card.
My goal is speed. Either would provide more disk space than I would
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Vivek Khera
Sent: 08 December 2005 16:52
To: Postgresql Performance
Subject: [PERFORM] opinion on disk speed
I have a choice to make on a RAID enclosure:
14x 36GB 15kRPM ultra 320 SCSI
Hi everybody!
My system is 2xXEON 3 GHz, 4GB RAM, RAID-10 (4 SCSI HDDs), running Postgres
8.1.0, taken from CVS REL8_1_STABLE, compiled with gcc-3.4 with options
-march=nocona -O2 -mfpmath=sse -msse3. Hyperthreading is disabled.
There are about 300,000 - 500,000 transactions per day. Database
Hi all,
First of all, please pardon me if the question is dumb! Is it even feasible or
normal to do such a thing? This query is needed by a webpage so it needs to be
lightning fast. Anything beyond 2-3 seconds is unacceptable performance.
I have two tables
CREATE TABLE runresult
(
id_runresult
On Thu, 2005-12-08 at 10:52, Vivek Khera wrote:
I have a choice to make on a RAID enclosure:
14x 36GB 15kRPM ultra 320 SCSI drives
OR
12x 72GB 10kRPM ultra 320 SCSI drives
both would be configured into RAID 10 over two SCSI channels using a
megaraid 320-2x card.
My goal is
Rory,
While I don't have my specific stats with me from my tests with XFS and
bonnie for our company's db server, I do recall vividly that seq. output
did not increase dramatically until I had 8+ discs in a RAID10
configuration on an LSI card. I was not using LVM. If I had less than 8
discs, seq.
On Thu, 2005-12-08 at 11:52 -0500, Vivek Khera wrote:
I have a choice to make on a RAID enclosure:
14x 36GB 15kRPM ultra 320 SCSI drives
OR
12x 72GB 10kRPM ultra 320 SCSI drives
both would be configured into RAID 10 over two SCSI channels using a
megaraid 320-2x card.
My goal is
What's the problem? You are joining two 300 million row tables in 0.15
of a second - seems reasonable.
Dmitri
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Amit V Shah
Sent: Thursday, December 08, 2005 11:59 AM
To:
Hi,
The thing is, although it shows 0.15 seconds, when I run the actual query,
it takes around 40-45 seconds (sorry I forgot to mention that). And then
sometimes it depends on data. Some parameters have very few records, and
others have a lot more. I don't know how to read the EXPLAIN
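One likely source of the confusion: plain EXPLAIN prints planner estimates in abstract cost units (not seconds) without running the query, whereas EXPLAIN ANALYZE actually executes it and reports real row counts and timings per plan node. A sketch of the shape, with hypothetical join details since the full query is not shown in the thread:

```sql
-- Hypothetical query shape; runresult is named in the thread, the
-- second table and columns are invented for illustration.
EXPLAIN ANALYZE
SELECT r.*
FROM runresult r
JOIN runresult_values v ON v.id_runresult = r.id_runresult
WHERE v.value = 101;
```

Comparing the estimated row counts against the "actual rows" in the ANALYZE output usually shows which node the 40-45 seconds is spent in, and whether the planner's estimates are badly off.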
Hi Steve
On 08/12/05, Steve Poe ([EMAIL PROTECTED]) wrote:
Rory,
While I don't have my specific stats with me from my tests with XFS and
bonnie for our company's db server, I do recall vividly that seq. output
did not increase dramatically until I had 8+ discs in a RAID10
configuration on
On Thu, 8 Dec 2005, Vivek Khera wrote:
I have a choice to make on a RAID enclosure:
14x 36GB 15kRPM ultra 320 SCSI drives
OR
12x 72GB 10kRPM ultra 320 SCSI drives
both would be configured into RAID 10 over two SCSI channels using a megaraid
320-2x card.
My goal is speed. Either would
19 matches