On Thu, May 4, 2017 at 8:10 AM, Junaid Malik wrote:
Hello Guys,
We are facing a problem related to Postgres performance. Indexes are not
being utilized and Postgres is giving priority to a seq scan. I read many
articles on Postgres performance and found that we need to set the
random_page_cost value the same as seq_page_cost because we are using
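For what it's worth, the effect of that setting can be sketched with a toy cost comparison (a deliberately simplified model with made-up page counts, not the real planner arithmetic):

```python
# Toy model of the seq-scan vs index-scan tradeoff. The page counts are
# hypothetical; PostgreSQL's actual cost model is far more detailed.
seq_page_cost = 1.0
pages_total = 10_000        # assumed table size in pages
pages_via_index = 4_000     # assumed pages an index scan would touch randomly

def cheaper_plan(random_page_cost):
    seq_cost = pages_total * seq_page_cost
    idx_cost = pages_via_index * random_page_cost
    return "index scan" if idx_cost < seq_cost else "seq scan"

print(cheaper_plan(4.0))    # seq scan  (the default random_page_cost)
print(cheaper_plan(1.0))    # index scan (random reads priced like sequential)
```

Lowering random_page_cost toward seq_page_cost only makes sense when random I/O really is cheap (SSDs, or a dataset that fits in cache); on spinning disks the default gap reflects reality.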
2013/12/7 chidamparam muthusamy wrote:
hi,
thank you so much for the input.
Can you please clarify the following points:
1. Output of BitmapAnd = 303660 rows
- BitmapAnd (cost=539314.51..539314.51 rows=303660 width=0) (actual
time=9083.085..9083.085 rows=0 loops=1)
- Bitmap Index Scan on groupid_index
On Friday, December 06, 2013 11:06:58 PM chidamparam muthusamy wrote:
hi,
I registered with the PostgreSQL Help Forum to identify and resolve a
Postgres DB performance issue; I received suggestions but could not
improve the speed/response time. Please help.
Details:
Postgres Version 9.3.1
On 06/12/13 17:36, chidamparam muthusamy wrote:
I rather think Alan is right - you either want a lot more RAM or faster
disks. Have a look at your first query...
Query:
EXPLAIN (analyze, buffers) SELECT text(client) as client, text(gateway)
as gateway,count(*)::bigint as total_calls,
On 6.12.2013 18:36, chidamparam muthusamy wrote:
Server configuration:
On 8/3/2011 11:37 AM, Dusan Misic wrote:
I had done some testing for my application (WIP): I executed the same SQL
script and queries on a real physical 64-bit Windows 7 machine and on a
virtualized 64-bit CentOS 6 machine.
Both database servers are tuned, the real one having 8 GB RAM and 4 cores,
the virtualized one having 2 GB RAM and 2 virtual cores.
Thank you Andy for your answer.
That is exactly what I had expected, but it is better to consult with
experts on this matter.
Again, thank you.
Dusan
On Aug 3, 2011 7:05 PM, Andy Colson a...@squeakycode.net wrote:
Dusan Misic promi...@gmail.com wrote:
My question is simple. Does PostgreSQL perform better on Linux
than on Windows and how much is it faster in your tests?
We tested this quite a while back (on 8.0 and 8.1) with identical
hardware and identical databases running in matching versions of
On Apr 5, 2011, at 9:33 AM, Adarsh Sharma wrote:
Now I have to start more queries on the Database Server and issue new
connections after some time. Why is the cached memory not freed?
It's freed on demand.
Flushing the cache memory is needed; how could it use so much if I set
Why would forced
On Tue, Apr 5, 2011 at 1:33 AM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
[root@s8-mysd-2 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         15917      15826         90          0        101      15013
-/+ buffers/cache:        711      15205
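As a side note on reading that `free -m` output: the page cache ("cached") is reclaimable, so the memory actually available is free + buffers + cached, which is what the `-/+ buffers/cache` line reports (modulo rounding). A small sketch using the numbers above:

```python
# Numbers taken from the `free -m` output quoted above (in MB).
mem_total, mem_free, buffers, cached = 15917, 90, 101, 15013

effectively_free = mem_free + buffers + cached   # page cache is reclaimable
effectively_used = mem_total - effectively_free

print(effectively_free)  # 15204 (free itself reports 15205; MB rounding)
print(effectively_used)  # 713   (free itself reports 711)
```

This is why a box that looks "out of memory" in the Mem: line can be perfectly healthy: the kernel keeps file data cached until someone needs the RAM.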
Dear all,
I have a Postgres database server with 16 GB RAM.
Our application runs by making connections to the Postgres server from
different servers, selecting data from one table and inserting it into
the remaining tables in a database.
Below is the no. of connections output:
postgres=# select
max_connections = 700
shared_buffers = 4096MB
temp_buffers = 16MB
work_mem = 64MB
maintenance_work_mem = 128MB
wal_buffers = 32MB
checkpoint_segments = 32
random_page_cost = 2.0
effective_cache_size = 4096MB
First of all, there's no reason to increase wal_buffers above 32MB. AFAIK
the
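One thing worth checking with those settings: work_mem is allocated per sort/hash operation per backend, so the theoretical worst case with 700 connections dwarfs 16 GB of RAM. A rough back-of-the-envelope sketch:

```python
# Back-of-the-envelope worst case for the configuration quoted above.
max_connections = 700
work_mem_mb = 64          # per sort/hash node, per backend
shared_buffers_mb = 4096

worst_case_mb = max_connections * work_mem_mb + shared_buffers_mb
print(worst_case_mb)                   # 48896 MB
print(round(worst_case_mb / 1024, 1))  # 47.8 GB, on a 16 GB machine
```

In practice most connections sit idle and a query uses only a few work_mem allowances at once, but this arithmetic is why a pooler (pgbouncer/pgpool) in front of a few dozen real backends is the usual fix rather than max_connections = 700.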
You can also try pgtune beforehand:
pgfoundry.org/projects/pgtune/
On Mon, Apr 4, 2011 at 4:43 AM, Scott Marlowe scott.marl...@gmail.com wrote:
[root@s8-mysd-2 ~]# free
             total       used       free     shared    buffers     cached
Mem:      16299476   16202264      97212          0      58924   15231852
-/+ buffers/cache:     911488   15387988
Adarsh,
What is the Size of Database?
Best Regards,
Raghavendra
EnterpriseDB Corporation
On Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
Mem: 16299476k total, 16198784k used, 100692k free, 73776k buffers
Swap: 16787884k total, 148176k used, 16639708k free, 15585396k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
Thanks Scott:
The iostat package is not installed, but have a look at the output below:
[root@s8-mysd-2 8.4SS]# vmstat 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 147664
You've got to have something to compare against.
I would say, try running some benchmarks (pgbench from contrib) and compare
them against a known-good instance of PostgreSQL, if you have access to such
a machine.
That said, and forgive me if I sound a little direct, but if you don't know
how to
Thank you all,
I now know some things to work on; after studying them I will continue this
discussion tomorrow.
Best Regards,
Adarsh
Adarsh,
[root@s8-mysd-2 8.4SS]# iostat
-bash: iostat: command not found
/usr/bin/iostat
Our application runs by making connections to the Postgres server from
different servers, selecting data from one table and inserting it into the
remaining tables in a database.
When you are doing bulk inserts you
On Fri, Dec 17, 2010 at 07:48, selvi88 selvi@gmail.com wrote:
My requirement is that more than 15 thousand queries will run:
5,000 updates, 5,000 inserts, and the rest will be selects.
What IO system are you running Postgres on? With that kind of write load
you should really be focusing on
Scott Marlowe wrote:
I can sustain about 5,000 transactions per second on a machine with 8
cores (2 years old) and 14 15k Seagate hard drives.
Right. You can hit 2,000 to 3,000/second with a relatively inexpensive
system, so long as you have a battery-backed RAID controller and a few
hard
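The battery-backed cache point can be made concrete with simple arithmetic: without a write cache, each COMMIT waits for the WAL fsync to reach the platter, so a single commit stream is bounded by rotation speed (a simplified model that ignores write combining and commit_delay):

```python
# Rotational bound on synchronous commits (simplified: no write-back
# cache, no group commit), for a 15k RPM drive like those in the thread.
rpm = 15_000
flushes_per_sec = rpm / 60   # at most one platter-synced flush per rotation

print(flushes_per_sec)       # 250.0 -> ~250 commits/sec per commit stream
```

A battery-backed controller acknowledges the fsync from its cache, removing this bound, which is how a modest box reaches thousands of TPS.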
My requirement is that more than 15 thousand queries will run:
5,000 updates, 5,000 inserts, and the rest will be selects.
Each query will be executed in its own psql client (say, for 15,000
queries, 15,000 psql connections will be made).
Since the connections are more for me the
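Opening 15,000 client connections is itself the problem; with a pooler in front, the arithmetic is much friendlier. A sketch (the pool size of 100 is an assumed example, not a figure from this thread):

```python
# Throughput per pooled connection needed to sustain the stated load.
target_qps = 15_000
pool_size = 100               # assumed pooler setting (e.g. pgbouncer)

qps_per_connection = target_qps / pool_size
latency_budget_ms = 1000 / qps_per_connection

print(qps_per_connection)           # 150.0 queries/sec per backend
print(round(latency_budget_ms, 2))  # 6.67 ms average per query
```

As long as the average query finishes within that latency budget, 100 pooled backends sustain the full 15,000 qps without the per-connection memory and scheduling overhead of 15,000 backends.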
Thanks for your suggestion; I had already gone through that URL, and with
its help I was able to get my configuration to 5K queries/second.
The parameters I changed were shared_buffers, work_mem, maintenance_work_mem
and effective_cache_size.
Still I was not able to reach my target.
Can you kindly
Dear Friends,
I have a requirement of running more than 15,000 queries per second.
Can you please tell me which Postgres parameters need to be changed to
achieve this?
I already have 17 GB RAM and a dual-core processor, and this machine is
dedicated to database operation.
On Thu, Dec 16, 2010 at 14:33, selvi88 selvi@gmail.com wrote:
You have not told us anything about what sort of queries
Hi,
there are several performance-related issues, so it's rather difficult to
answer your question briefly.
You have to keep in mind not only Postgres itself; hardware is also an
important factor.
Do you have performance problems which you can describe in more detail?
regards..GERD..
Hi all..
please, how can I tune Postgres performance?
Thanks.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
std pik wrote:
Hi all..
please, how can i tune postgres performance?
Thanks.
That's a very generic question. Here are some generic answers:
You can tune the hardware underneath. Faster hardware = faster pg.
You can tune the memory usage and other postgresql.conf settings to match
your
Didn't see the original message so I replied to this one.
On Mon, Sep 28, 2009 at 8:11 AM, Andy Colson a...@squeakycode.net wrote:
Start here:
http://www.westnet.com/~gsmith/content/postgresql/
On Fri, Sep 12, 2008 at 12:07 PM, H. Hall [EMAIL PROTECTED] wrote:
Hmmm ARM/XScale, 64MB. Just curious. Are you running a Postgres server on
a pocket pc or possibly a cell phone?
I would think SQLite would be a better choice on that kind of thing.
Unless you're trying to run really complex
I'm trying to optimize Postgres performance on a headless solid-state
hardware platform (no fans or disks). I have the database stored on a
USB 2.0 flash drive (hdparm benchmarks reads at 10 MB/s). Performance is
limited by the 533 MHz CPU.
Hardware:
IXP425 XScale (big endian), 533 MHz, 64 MB RAM
USB
Just a random thought/question...
Are you running anything else on the machine? When you say resource usage,
do you mean HD space, memory, processor, ...?
What are your values in top?
More info...
Cheers
Anton
On 27/08/2007, Bill Moran [EMAIL PROTECTED] wrote:
In response to Chris Mair [EMAIL PROTECTED]:
Hi,
Note: I have already run vacuum full. It does not solve the problem.
I have a Postgres 8.1 database. In the last days I have had half the
traffic of 4 weeks ago, yet resource usage has doubled. The resource
monitor graphs also show high peaks (usually there are no peaks).
The performance is getting
OS: CentOS release 4.3 (Final) (kernel: 2.6.9-34.0.1.ELsmp)
Postgres: 8.1.3
I had some problems with autovacuum before, so each day I execute from cron:
vacuumdb -f -v --analyze
reindex database vacadb
I checked the logs (the output of vacuum and reindex) and there are no
errors.
If you need more info,
In response to Chris Mair:
Hi,
Note: I have already run vacuum full. It does not solve the problem.
To jump in here in Chris' defense, regular vacuum is not at all the same
as vacuum full. Periodic vacuum is _much_ preferable to an occasional
vacuum full.
The output of
Hello, I've set up 2 identical machines: an HP server with a 1 GHz P3,
768 MB RAM, and an 18 GB SCSI3 drive. On the first one I've installed
Debian GNU/Linux 4.0, on the second FreeBSD 6.2. On both machines I've
installed PostgreSQL 8.2.3 from sources.
Now the point :)) According to my tests Postgres on the Linux
box
On Wednesday, 21 February 2007 at 10:57, Jacek Zaręba wrote:
Now the point :)) According to my tests Postgres on the Linux
box runs much faster than on FreeBSD; here are my results:
You may want to compare some specific benchmark, as in benchmarking with
your application queries. For this, you can consider
The members table contains about 500k rows. It has an index on
(group_id, member_id) and on (member_id, group_id).
Yes, bad stats are causing it to pick a poor plan, but you're giving it
too many options (which doesn't help) and using up space unnecessarily.
Keep (group_id, member_id)
Remove
I have two instances of a production application that uses Postgres 7.2,
deployed in two different data centers for about the last 6 months. The
sizes, schemas, configurations, hardware, and access patterns of the two
databases are nearly identical, but one consistently takes at least 5x
longer
to stop or change this
behavior? Apologies if this is a known problem...
mike
On Fri, 2004-06-04 at 18:07, Michael Nonemacher wrote:
Slight update:
Thanks for the replies; this is starting to make a little more sense...
I've managed to track down the root of the problem to a single query on
a single table. I have a query that looks like this:
select
Michael Nonemacher [EMAIL PROTECTED] writes:
Agreed.
We originally created the indexes this way because we sometimes do
searches where one of the columns is constrained using =, and the other
using a range search, but it's not clear to me how much Postgres
understands multi-column indexes.
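For what it's worth, a multi-column btree handles exactly this pattern: entries sort as tuples, so equality on the leading column plus a range on the second column is one contiguous slice of the index. A pure-Python sketch of the idea (a model of btree ordering, not Postgres internals; the data is made up):

```python
import bisect

# Index entries on (group_id, member_id), kept sorted like a btree.
index = sorted((g, m) for g in range(3) for m in range(0, 100, 10))

# WHERE group_id = 1 AND member_id BETWEEN 25 AND 65
g, lo_m, hi_m = 1, 25, 65
lo = bisect.bisect_left(index, (g, lo_m))
hi = bisect.bisect_right(index, (g, hi_m))

print(index[lo:hi])   # [(1, 30), (1, 40), (1, 50), (1, 60)] -- one contiguous run
```

With the columns reversed, (member_id, group_id), the same query's matches are scattered across the index, which is why the order of columns matters and why one of the two indexes is redundant here.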
Michael Nonemacher [EMAIL PROTECTED] writes:
It seems like the statistics are wildly different depending on whether
the last operation on the table was a 'vacuum analyze' or an 'analyze'.
Vacuum or vacuum-analyze puts the correct number (~500k) in
pg_class.reltuples, but analyze puts 7000 in