Shohab Abdullah wrote:
Dear,
We are facing a performance tuning problem while using a PostgreSQL database
over the network on a Linux OS.
Our database consists of more than 500 tables with an average of 10K
records per table, and an average of 20 users accessing the database
simultaneously.
Ketema wrote:
I am trying to build a very robust DB server that will support 1000+
concurrent users (already have seen a max of 237, no pooling being
used). I have read so many articles now that I am just saturated. I
have a general idea but would like feedback from others.
Describe a bit
Magnus Hagander wrote:
Ow Mun Heng wrote:
You're likely better off (performance-wise) putting it on the same disk
as the database itself if that one has better RAID, for example.
I'm thinking along the lines of: since nothing much writes to the OS
disk, I should (keyword) be safe.
Unless
[EMAIL PROTECTED] wrote:
Hello
I have a Python script to update 60 rows in one table from a CSV file in my
Postgres database, and it takes me 5 hours to do the transaction...
Let's see if I guessed correctly.
Your Python script is stepping through a 600,000 row file and updating
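If that guess is right, the usual cure is to make the work set-based
rather than row-at-a-time: load the CSV into a staging table with COPY
and issue a single UPDATE. A minimal sketch (table and column names
here are hypothetical):

CREATE TEMP TABLE staging (id integer, price numeric);
\copy staging FROM 'data.csv' WITH CSV
UPDATE items
   SET price = s.price
  FROM staging s
 WHERE items.id = s.id;

One pass over the table in one transaction, instead of one round trip
per row.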
A B wrote:
So, it is time to improve performance; it is running too slow.
AFAIK (as a novice) there are a few general areas:
1) hardware
2) rewriting my queries and table structures
3) using more predefined queries
4) tweaking parameters in the db conf files
Of these points:
1) is nothing I can do
Josh Berkus wrote:
Folks,
Subsequent to my presentation of the new annotated.conf at pgCon last week,...
Available online yet? At?...
Cheers,
Steve
Rainer Mager wrote:
I have an interesting performance improvement need. As part of the
automatic test suite we run in our development environment, we
re-initialize our test database a number of times in order to ensure
it is clean before running a test. We currently do this by dropping
the
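A common approach to fast test-database re-initialization is to keep a
pristine template database and clone it for each run - a sketch,
assuming a prepared template named testdb_template (hypothetical name):

DROP DATABASE IF EXISTS testdb;
CREATE DATABASE testdb TEMPLATE testdb_template;

Note that the template must have no other active connections while it
is being copied, which is usually easy to guarantee in a test harness.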
Wei Yan wrote:
Hi:
Our queries are extremely slow only after a db server reboot, not after
a restart of the postgres db only. The difference is about 10 mins vs. 8 secs.
Not acceptable. I have browsed around and set the postgres db parameters
as suggested. Still the same.
Any suggestion on how to
My additional comments:
t...@fuzzy.cz wrote:
...
For a future upgrade, what are the basic steps?
0. Create test database - work out bugs and performance issues before
going live.
1. create database
...cluster. You only need to create the individual database if the
options you
Kevin Grittner wrote:
Steve Crawford scrawf...@pinpointresearch.com wrote:
benchmarks I've seen suggest that with 8 cores you may even see an
almost 8x restore speedup
I'm curious what sort of data in what environment showed that ratio.
Was going on memory from a presentation I
Peter Meszaros wrote:
Hi All,
I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously committed every 10 seconds from
8 clients. The table stores approximately 120 million
Greg Smith wrote:
Jochen Erwied wrote:
- Promise Technology Supertrak ES4650 + additional BBU
- Adaptec RAID 5405 SGL/256 SATA/SAS + additional BBU
- Adaptec RAID 5405Z SGL/512 SATA/SAS
I've never seen a Promise controller that had a Linux driver you would
want to rely on under any
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the database...
Some questions:
Is autovacuum running? This is the most likely suspect. If not, things
Madison Kelly wrote:
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the
database...
Yup, I even tried manually running 'VACUUM
Madison Kelly wrote:
I wanted to get ahead of the problem, hence my question here. :)
I've set this to run at night ('iwt' being the DB in question):
su postgres -c "psql iwt -c 'VACUUM ANALYZE VERBOSE'"
And why not the vacuumdb command?:
su postgres -c vacuumdb --analyze --verbose iwt
Madison Kelly wrote:
You are right, autovacuum is not running after all. From your comment,
I am wondering if you'd recommend I turn it on or not?...
I see you are considering an upgrade but FWIW on your 8.1 instance, my
remaining 8.1 server has been running for years with it on. Read up
...Can you
correlate the spikes with anything like that?
Not that I know of. Just regular web traffic. On the backup server
these events happen occasionally even when there is little or no web
traffic, and nobody logged in doing maintenance.
What, if anything, are you logging in the
I'm weighing options for a new server. In addition to PostgreSQL, this
machine will handle some modest Samba and Rsync load.
I will have enough RAM so that virtually all disk-read activity will be
cached. The average PostgreSQL read activity will be modest - a mix of
single-record and fairly
On 10/20/2010 09:45 PM, Scott Marlowe wrote:
On Wed, Oct 20, 2010 at 8:25 PM, Joshua D. Drake j...@commandprompt.com wrote:
On Wed, 2010-10-20 at 22:13 -0400, Bruce Momjian wrote:
Ben Chobot wrote:
On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:
I'm weighing
On 02/15/2011 10:41 AM, Mark Mikulec wrote:
Hello,
I was under the impression that pg_dumpall didn't affect database
performance when dumping while the db is live. However I have evidence to
the contrary now - queries that are run during the pg_dumpall time take 10
to 100 times longer to
On 05/25/2011 11:45 AM, Reuven M. Lerner wrote:
Hi, Alex. You wrote:
Have you tried something like:
SELECT encode(regexp_replace('141142143', '(\d{3})', '\\\1',
'g')::bytea, 'escape');
Hmm, forgot about regexp_replace. It might do the trick, but without
a full-blown eval that I can run on
On 05/26/2011 05:36 AM, Merlin Moncure wrote:
...
got it:
select decode(regexp_replace('141142143', '([0-9][0-9][0-9])',
$q$\\\1$q$ , 'g'), 'escape');
decode
abc
(1 row)
merlin
Nice. A word of warning, in 9.0 this returns a hex string:
select decode(regexp_replace('141142143',
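(For anyone hitting this: the hex display comes from the bytea_output
setting introduced in 9.0, whose default is 'hex'. A minimal sketch to
restore the pre-9.0 behavior for a session:)

SET bytea_output = 'escape';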
On 08/04/2011 11:56 AM, Jian Shi wrote:
Hey,
I'm a new user of PostgreSQL. I found one of my tables is taking
unexpectedly large space:...
I did vacuum and reindex; the size is still the same. Is there anything
else that I can do?
Did you try CLUSTER? A basic vacuum only identifies space
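(A sketch with hypothetical names - CLUSTER rewrites the whole table,
returning the reclaimed space to the operating system, but holds an
exclusive lock while it runs:)

CLUSTER mytable USING mytable_pkey;
ANALYZE mytable;   -- refresh statistics after the rewrite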
On 02/17/2012 10:34 AM, Alessandro Gagliardi wrote:
Comparing
SELECT DISTINCT(user_id) FROM blocks JOIN seen_its USING (user_id)
WHERE seen_its.created BETWEEN (now()::date - interval '8
days')::timestamp AND now()::date::timestamp
to
SELECT DISTINCT(user_id) FROM seen_its WHERE created
On 02/23/2012 12:39 AM, Reuven M. Lerner wrote:
Hi, everyone...
This is basically what I'm trying to execute:
DELETE FROM B
WHERE r_id IN (SELECT R.id
FROM R, B
WHERE r.end_date < (NOW() - (interval '1 day' * 30))
AND r.id = b.r_id
I don't recall which versions like which
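One rewrite that tends to behave well across versions is the join form
of DELETE - a sketch of the same statement:

DELETE FROM b
USING r
WHERE b.r_id = r.id
  AND r.end_date < (now() - interval '30 days');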
On 02/23/2012 09:35 AM, Alessandro Gagliardi wrote:
I should have been more clear. I virtually never delete or do updates,
but I insert /a lot/. So the table does change quite a bit, but only
in one direction.
I was unable to disable autovacuum universally (due to the
On 02/23/2012 10:38 AM, Alessandro Gagliardi wrote:
On Thu, Feb 23, 2012 at 10:01 AM, Steve Crawford
scrawf...@pinpointresearch.com
mailto:scrawf...@pinpointresearch.com wrote:
The documentation has information like "This parameter can only be
set in the postgresql.conf file"
On 02/23/2012 01:07 PM, Alessandro Gagliardi wrote:
The second one (a bunch of insert statements within a single
connection). As I mentioned above, I was going to try the temp table
thing, but that wasn't fast enough. COPY might be my next attempt.
insert into...;
insert into...;
insert
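For comparison, the two usual faster shapes (hypothetical table
log(ts, msg)) are a single multi-row statement or COPY:

INSERT INTO log (ts, msg) VALUES
  (now(), 'a'),
  (now(), 'b'),
  (now(), 'c');

COPY log (ts, msg) FROM STDIN WITH CSV;

Both cut the per-statement round-trip and parse overhead that makes a
long run of single-row inserts slow.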
On 02/23/2012 12:39 AM, Reuven M. Lerner wrote:
Hi, everyone. ...
...at one installation, we're quickly running out of disk space. The
database is already taking up about 200 GB of space, and is growing by
1 GB or so a day.
I've been following the discussion of approaches and tuning for
On 03/04/2012 03:50 AM, Michael Friedl wrote:
Hey!
On 04.03.2012 10:58, Rory Campbell-Lange wrote:
1U chassis with 8 2.5" disk bays
2x Intel Xeon E5630 Quad-Core / 4x 2.53GHz / 12MB cache
8 channel Areca ARC-1880i (PCI Express x8 card)
presumably with BBU (can't see it
On 03/30/2012 05:51 PM, Josh Berkus wrote:
So this turned out to be a Linux kernel issue. Will document it on
www.databasesoup.com.
Anytime soon? About to build two PostgreSQL servers and wondering if you
have uncovered a kernel version or similar issue to avoid.
Cheers,
Steve
On 04/13/2012 08:04 AM, Claudio Freire wrote:
...You might want to try unlogged temp tables, which more
closely resemble mssql temp tables.
If they are permanent tables used for temporary storage then making them
unlogged may be beneficial. But actual temporary tables *are* unlogged
and
On 04/13/2012 09:43 AM, Claudio Freire wrote:
On Fri, Apr 13, 2012 at 1:36 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
If they are permanent tables used for temporary storage then making them
unlogged may be beneficial. But actual temporary tables *are* unlogged and
attempting
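To make the distinction concrete - a sketch (hypothetical definitions;
UNLOGGED requires 9.1+):

CREATE UNLOGGED TABLE scratch (id int, payload text);
  -- permanent and visible to all sessions, skips WAL, truncated on crash
CREATE TEMPORARY TABLE scratch_tmp (id int, payload text);
  -- session-local, already not WAL-logged, dropped at session end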
On 04/26/2012 12:49 PM, Craig James wrote:
An update to our system means I'm going to be rewriting every row of
some large tables (20 million rows by 15 columns). In a situation
like this, can auto-vacuum take care of it, or should I plan on
vacuum-full/reindex to clean up?
If you want to
On 05/24/2012 05:58 AM, Rajesh Kumar. Mallah wrote:
Dear Andy ,
Following the discussion on load average we are now investigating on some
other parts of the stack (other than db).
Essentially we are bumping up the limits (on the appserver) so that more
requests go to the DB server.
Which leads
On 06/19/2012 09:00 AM, Strange, John W wrote:
Given a baseline postgresql.conf config and a couple of DL580s (40 cores/256GB memory),
I noticed a large overhead for pgbouncer; has anyone seen this before?
$ pgbench -h `hostname -i` -j 32 -p 4320 -U asgprod -s 500 -c 32 -S -T 60
pgbench_500
Scale
On 07/03/2012 08:22 AM, Stanislaw Pankevich wrote:
PostgreSQL db, 30 tables with number of rows < 100 (not huge) -
the fastest way to clean each non-empty table and reset unique
identifier column of empty ones
I wonder, what is the fastest way to accomplish this kind of task in
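(For the record, the usual answer - a sketch with hypothetical table
names - is a single TRUNCATE, which also resets any sequences owned by
the truncated tables:)

TRUNCATE TABLE t1, t2, t3 RESTART IDENTITY;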
On 08/03/2012 10:21 AM, Laszlo Nagy wrote:
All the above are the exact same point in time merely stated as
relevant to each location. Note that given a timestamp with time zone
and a zone, PostgreSQL returns a timestamp without time zone (you
know the zone since you specified it).
Yes, I
On 10/08/2012 08:26 AM, Navaneethan R wrote:
Hi all,
I have 10 million records in my postgres table. I am running the database
on an amazon ec2 medium instance. I need to access the last week's data
from the table.
It takes a huge time to process the simple query, so it throws a time out
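The generic first step for this kind of query is an index on the
timestamp column so the last week can be read without a full scan - a
sketch with hypothetical names:

CREATE INDEX mytable_created_idx ON mytable (created);

SELECT *
  FROM mytable
 WHERE created >= now() - interval '7 days';

The predicate must stay sargable (the bare column compared against a
computed constant) for the index to be usable.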
On 10/08/2012 02:45 PM, Craig James wrote:
This is driving me crazy. A new server, virtually identical to an old
one, has 50% of the performance with pgbench. I've checked everything
I can think of.
The setups (call the servers old and new):
old: 2 x 4-core Intel Xeon E5620
new: 4 x 4-core
On 10/23/2012 11:33 AM, Kevin Grittner wrote:
henk de wit wrote:
Well, what do you know! That did work indeed. Immediately after the
ANALYZE on that parent table (taking only a few seconds) a fast
plan was created and the query executed in ms again. Silly me, I
should have tried that earlier.
On 03/11/2013 08:55 PM, Misa Simic wrote:
Hi all,
We have one table with a list of records for processing...
We loop through that table and call one long-running function:
do_the_math_for_record(record_id)...snip...
but - if I replace do_the_math_and_save_results with pg_sleep(1) to
simulate
On 03/12/2013 08:06 AM, Misa Simic wrote:
Thanks Steve
Well, the full story is too complex - but the point was - whatever the
blackbox does - it lasts 0.5 to 2 secs per processed record (maybe I
was wrong, but I thought the reason it takes the time it does is that
it actually needs that time to do the task
On 03/13/2013 09:15 AM, John Lister wrote:
On 13/03/2013 15:50, Greg Jaskiewicz wrote:
SSDs have a much shorter life than spinning drives, so what do you do
when one inevitably fails in your system?
Define much shorter? I accept they have a limited number of writes, but
that depends on load. You
On 05/21/2013 06:18 AM, Jeison Bedoya wrote:
Hi people, I have a 400GB database running on a server with 128GB RAM
and 32 cores, with storage over a fibre-channel SAN. The problem is
that when I do a backup with pg_dumpall it takes about 5 hours;
next I do a restore and it takes about 17
On 05/23/2013 10:19 AM, Jonathan Morra wrote:
I am fairly new to squeezing performance out of Postgres, but I hope
this mailing list can help me. I have read the instructions found at
http://wiki.postgresql.org/wiki/Slow_Query_Questions and have tried to
abide by them the best that I can. I
On 05/23/2013 10:57 AM, Jonathan Morra wrote:
Ultimately I'm going to deploy this to Heroku on a Linux machine (my
tests have so far indicated that Heroku is MUCH slower than my
machine), but I wanted to get it fast on my local machine first. I
agree with your role partitioning, however, this
On 05/23/2013 05:21 PM, Jonathan Morra wrote:
Sorry for the messy query, I'm very new to writing these complex
queries. I'll try and make it easier to read by using WITH clauses.
However, just to clarify, the WITH clauses only increase readability
and not performance in any way, right?
It
On 12/18/2013 12:12 PM, prashant Pandey wrote:
Could you tell me each and every hardware parameter and OS parameter
that performance depends on?
I need the complete list of all the required parameters and how to
extract them on Linux through system calls and files.
Please, it will be highly
On 07/03/2014 06:59 PM, Huang, Suya wrote:
BTW, I'm using the pgbadger report to check for peak connections/sessions.
On 08/21/2014 03:51 PM, Josh Berkus wrote:
On 08/21/2014 02:26 PM, Scott Marlowe wrote:
I'm running almost the exact same setup in production as a spare. It
has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since
it's a spare node I might be able to do some testing on it as well.
I will soon be migrating to some recently acquired hardware and seek
input from those who have gone before.
A quick overview: the dataset size is ~100GB, (~250-million tuples) with
a workload that consists of about 2/3 writes, mostly single record
inserts into various indexed tables, and 1/3
On 06/05/2015 10:54 AM, Sheena, Prabhjot wrote:
PostgreSQL 9.3
Guys,
Here is the issue that I've been facing for a couple of weeks now.
I have a table (size 7GB).
If I run this query with this specific registration id it is using
the wrong execution plan and takes more than a
On 06/05/2015 12:28 PM, Steve Crawford wrote:
On 06/05/2015 11:38 AM, Sheena, Prabhjot wrote:
When I run vacuum analyze it fixes the problem but after 1 or 2 days
the problem comes back
Is autovacuum running and using what settings?
(select name, setting from pg_settings where name
On 06/05/2015 11:38 AM, Sheena, Prabhjot wrote:
When I run vacuum analyze it fixes the problem but after 1 or 2 days
the problem comes back
Is autovacuum running and using what settings?
(select name, setting from pg_settings where name ~ 'autovacuum'
or name ~
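If autovacuum simply isn't analyzing the table often enough, it can be
tuned per table rather than globally - a sketch with a hypothetical
table name:

ALTER TABLE mytable SET (
  autovacuum_analyze_scale_factor = 0.02,
  autovacuum_vacuum_scale_factor  = 0.05
);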
On 07/02/2015 07:01 AM, Wes Vaske (wvaske) wrote:
What about a RAID controller? Are RAID controllers even available for
PCI-Express SSD drives, or do we have to stick with SATA if we need a
battery-backed RAID controller? Or is software RAID sufficient for SSD
drives?
Quite a few of the
>
> ...(BTW, I wonder why you are moving only to 9.2 and not something more
>> recent.)
>>
>
> Well, 9.2.15 is what comes bundled with RHEL 7, so I decided to go with
> that to avoid dependency issues. But I could install a more fresh version
> from scratch if that would solve my problem.
>
The answer, as always, is "it depends."
Can you give us an overview of your setup? The appropriate setup for small
numbers of long-running analytical queries (typically faster CPUs) will be
different than a setup for handling numerous simultaneous connections
(typically more cores).
But CPU is
On Tue, May 23, 2017 at 1:49 PM, Sven R. Kunze wrote:
> On 23.05.2017 22:14, Jarek wrote:
>
>> I have a pool of clients (~30) inserting into the database about 50 records per
>> second (in total from all clients) and a small number (<10) of clients
>> querying the database for those records once