history of sar data will make it apparent pretty quickly if that is a
win/lose/tie.
If we have another spike in production, we'll be ready to measure it more
accurately.
Thanks,
Tony
we manage,
I'll get you the stats.
Thanks!
Tony
Tony Kay
TeamUnify, LLC
TU Corporate Website <http://www.teamunify.com/>
TU Facebook <http://www.facebook.com/teamunify> | Free OnDeck Mobile
Apps <http://www.teamunify.com/__corp__/ondeck/>
On Tue, Oct 15, 2013 at 6:00 AM
On Mon, Oct 14, 2013 at 4:42 PM, Tomas Vondra wrote:
> On 15.10.2013 01:00, Tony Kay wrote:
> > Hi,
> >
> > I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a
> > 16-core Opteron 6276 box. We limit connections to roughly 120, but
> > our web
transaction blocking other things would drive the
CPUs so hard into the ground with user time.
Tony
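For reference, a minimal sketch of how to inspect the settings debated in
this thread, assuming nothing beyond a stock PostgreSQL install:

-- 22GB of shared_buffers on a 32GB box leaves little room for the OS page
-- cache and per-backend work_mem; the common rule of thumb is ~25% of RAM.
SHOW shared_buffers;
SELECT name, setting, unit
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'work_mem', 'max_connections');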
If this sounds familiar to anyone, please advise.
Thanks in advance,
Tony Kay
Here's the explain:
pg=# explain select getMemberAdminPrevious_sp(247815829,
1,'test.em...@hotmail.com', 'Email', 'Test');
                 QUERY PLAN
--------------------------------------------
 Result  (cost=0.00..0.26 rows=1 width=0)
(1 row)

Time: 1.167 ms
There was discussion
execute the function?
On Tue, 2012-01-24 at 21:47 +0100, Pavel Stehule wrote:
> Hello
>
> 2012/1/24 Tony Capobianco :
> > We are migrating our Oracle warehouse to Postgres 9.
> >
> > This function responds well:
> >
> > pg=# select public.getMemberAdminPrevious_sp
.549 ms
However, when testing, this fetch takes upwards of 38 minutes:
BEGIN;
select public.getMemberAdminPrevious_sp2(247815829, 1,'test.em...@hotmail.com',
'email', 'test');
FETCH ALL IN "";
How can I diagnose any performance issues with the fetch in the cursor?
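One way to see what that fetch is actually doing is the auto_explain
contrib module; a sketch, assuming superuser access (the FETCH target is
whatever cursor name the function returns):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement's plan
SET auto_explain.log_analyze = on;            -- include actual rows/timings
SET auto_explain.log_nested_statements = on;  -- include SQL run inside the function

BEGIN;
select public.getMemberAdminPrevious_sp2(247815829, 1,'test.em...@hotmail.com',
'email', 'test');
-- FETCH ALL from the returned cursor, then read the plan from the server log.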
Oooo...some bad math there. Thanks.
On Wed, 2011-06-08 at 12:38 -0700, Samuel Gendler wrote:
>
>
> On Wed, Jun 8, 2011 at 12:03 PM, Tony Capobianco
> wrote:
> My current setting is 22G. According to some documentation, I
> want to
> set effective_cache_size
? Most of our other ETL processes are running fine; however, I'm curious
whether I could see a significant performance boost by reducing the
effective_cache_size.
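Worth noting: effective_cache_size allocates nothing; it only feeds the
planner's costing, so it is safe to vary per session while testing. A
sketch (the value and query here are hypothetical stand-ins):

SET effective_cache_size = '8GB';      -- hypothetical trial value
EXPLAIN SELECT count(*) FROM members;  -- hypothetical stand-in for the ETL query
RESET effective_cache_size;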
On Wed, 2011-06-08 at 13:03 -0400, Tom Lane wrote:
> Tony Capobianco writes:
> > Well, this ran much better. However, I'
wrote:
> Hello
>
> what is your settings for
>
> random_page_cost, seq_page_cost and work_mem?
>
> Regards
>
> Pavel Stehule
>
> 2011/6/8 Tony Capobianco :
> > Here's the explain analyze:
> >
> > pg_dw=# explain analyze CREATE
> Seq Scan on ecr_sents s  (cost=0.00..8.79 rows=479 width=4) (actual time=0.010..0.121 rows=479 loops=1)
Total runtime: 167279.950 ms
On Wed, 2011-06-08 at 11:51 -0400, Stephen Frost wrote:
> * Tony Capobianco (tcapobia...@prospectiv.com) wrote:
> > HashAggregate (cost=43911
Here's the explain analyze:
pg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)
as
select o.emailcampaignid, count(memberid) opencnt
from openactivity o,ecr_sents s
where s.emailcampaignid = o.emailcampaignid
group by o.emailcampaignid;
                          QUERY PLAN
--------------------------------------------------------------
 Seq Scan on ecr_sents s  (cost=0.00..8.79 rows=479 width=4)
Yikes. Two sequential scans.
On Wed, 2011-06-08 at 11:33 -0400, Tom Lane wrote:
> Tony Capobianco writes:
> > pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)
> > pg_dw-# as
> > pg_dw-# select o.emailcampaignid, count(memberid) opencnt
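A common diagnostic for a surprising sequential scan (session-local, for
testing only) is to check whether the planner has an index path available
at all, using the query from this thread:

SET enable_seqscan = off;  -- make seq scans prohibitively expensive to the planner
explain analyze
select o.emailcampaignid, count(memberid) opencnt
from openactivity o, ecr_sents s
where s.emailcampaignid = o.emailcampaignid
group by o.emailcampaignid;
RESET enable_seqscan;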
log_truncate_on_rotation | off
logging_collector        | on
maintenance_work_mem     | 1GB
max_connections          | 400
max_stack_depth          | 2MB
search_path              | x
server_encoding          | UTF8
shared_buffers           | 768
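A dump like the one above can be reproduced, limited to settings changed
from their defaults, with the stock pg_settings view:

SELECT name, setting, unit, source
  FROM pg_settings
 WHERE source <> 'default';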
Very true Igor! Free is my favorite price.
I'll figure a way around this issue.
Thanks for your help.
Tony
On Fri, 2010-10-15 at 14:54 -0400, Igor Neyman wrote:
> > -----Original Message-----
> > From: Tony Capobianco [mailto:tcapobia...@prospectiv.com]
> > Sent: Fri
Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("EMAILBOUNCED"=0 AND "EMAILOK"=1)

16 rows selected.
On Fri, 2010-10-15 at 13:43 -0400, Igor Neyman wrote:
>
> > -----Original Message-----
> > From: Tony Capobianco
and the present
design of our indexes. Over time, indexes were added/removed to satisfy
particular functionality. Considering this is our most important table,
I will research exactly how this table is queried to better
optimize/reorganize our indexes.
Thanks for your help.
Tony
On Thu, 2010-10
We are in the process of testing migration of our Oracle data warehouse
over to Postgres. A potential showstopper is full table scans on our
members table. We can't function effectively on Postgres unless index
scans are employed. I'm thinking I don't have something set correctly
in my postgres
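A first checklist for full scans where index scans are expected, using
only the stock statistics views (the table name is taken from the message
above):

SELECT relname, seq_scan, idx_scan, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
 WHERE relname = 'members';
ANALYZE members;  -- stale planner statistics are a common cause of full scans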
Thanks for all of the input everyone.
I believe I am going to put together a test case using schemas and
partitioning and then doubling the amount of data currently in the system
to give me an idea of how things will be performing a couple of years down
the road.
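For context, partitioning in PostgreSQL of this era meant table
inheritance plus CHECK constraints; a minimal sketch with hypothetical
names:

CREATE TABLE events (id bigint, created date, payload text);
CREATE TABLE events_2011
  (CHECK (created >= DATE '2011-01-01' AND created < DATE '2012-01-01'))
  INHERITS (events);
SET constraint_exclusion = on;  -- queries filtered on "created" can then skip partitions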
I was looking at a server using t
I am in the process of moving a system that has been built around FoxPro
tables for the last 18 years into a PostgreSQL based system.
Over time I came up with decent strategies for making the FoxPro tables
work well with the workload that was placed on them, but we are getting to
the point that th
especially Open Source ones) that
makes it particularly suitable for running PostgreSQL?
Best,
Tony
On 5/2/06, Dan Harris <[EMAIL PROTECTED]> wrote:
My database is used primarily in an OLAP-type environment. Sometimes my
users get a little carried away and find some way to slip past the
sanity filters in the applications and end up bogging down the server
with queries that run for hours and hours.
On 5/2/06, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
On Tue, May 02, 2006 at 12:06:30 -0700,
Tony Wasson <[EMAIL PROTECTED]> wrote:
>
> Ah thanks, it's a bug in my understanding of the thresholds.
>
> "With the standard freezing policy, the age column will st
On 5/2/06, Vivek Khera <[EMAIL PROTECTED]> wrote:
On May 2, 2006, at 2:26 PM, Tony Wasson wrote:
> The script detects a wrap at 2 billion. It starts warning once one or
> more databases show an age over 1 billion transactions. It reports
> critical at 1.5B transactions. I hope everyone out there is vacuuming
> *all* databases often.
Hope some of you can use this script!
Tony Wasson
check_pg_transactionids.pl
Description: Perl program
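For anyone who prefers plain SQL to the attached Perl, the core check such
a script performs (thresholds per the message above: warn at 1B, critical
at 1.5B):

SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY xid_age DESC;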
On 9/28/05, Matthew Nuzum <[EMAIL PROTECTED]> wrote:
> On 9/28/05, Arnau <[EMAIL PROTECTED]> wrote:
> > Hi all,
> >
> > I have been "googling" a bit, searching for info about a way to monitor
> > postgresql (CPU & Memory, num processes, ... ) and I haven't found
> > anything relevant. I'm using munin
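For what it's worth, the stock statistics views cover much of this, and
munin (or any poller) can sample them directly:

SELECT datname, numbackends, xact_commit, blks_read, blks_hit
  FROM pg_stat_database;               -- per-database backends and activity
SELECT count(*) FROM pg_stat_activity; -- number of active server processes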