econds
max_locks_per_transaction = 200 # min 10, ~260*max_connections bytes each
#---
# VERSION/PLATFORM COMPATIBILITY
#---
# - Previous Postgres Versions -
#add_missing_from = true
#regex_flavor = advanced # advanced, extended, or
ig/E MTU 9000 (99% dedicated to database)
8GB RAM
Postgres v8.1.9
The database is only about 4GB in size and the key tables total about 700MB.
Primary keys are CHAR(32) GUIDs
Thanks,
Bryan
No, but I was just informed of that trick earlier and intend to try it
soon. Sometimes, the solution is so simple it's TOO obvious... :)
Bryan
On 6/25/07, Oleg Bartunov <[EMAIL PROTECTED]> wrote:
On Mon, 25 Jun 2007, Bryan Murphy wrote:
> We have a search facility in our data
information.
This may or may not work in your scenario, but it was a reasonable trade off
for us.
Bryan
On 7/11/07, Patric de Waha <[EMAIL PROTECTED]> wrote:
Hi,
I have two questions for which I haven't really found answers on the web.
Intro:
I have a website with some traffic.
2 M
en we get up to 10's, or even 100's of
millions and let you know how it scaled.
Bryan
On 7/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
I am planning to add a tags (as in the "web 2.0" thing) feature to my web
based application. I would like some feedback from th
of a specific
query, or determine the PID of the process it is running in? I'd like
to throw together a quick shell script if at all possible, as right
now I have to monitor the process manually and we'll have fixed the
problem long before we have the chance to implement proper database
clu
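For the PID question above, pg_stat_activity can be queried from a script. A sketch, assuming an 8.x server (the columns were renamed to pid/query in 9.2) and a hypothetical filter string:

```sql
-- Find the backend PID of a long-running query (8.x column names:
-- procpid / current_query; 9.2+ renamed them to pid / query).
-- '%slow_search%' is a hypothetical filter for the query text.
SELECT procpid, query_start, current_query
FROM pg_stat_activity
WHERE current_query ILIKE '%slow_search%';
```

With the PID in hand, `renice -n 10 -p <pid>` lowers that backend's CPU priority, though that won't help much if the query is I/O-bound.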
On 8/2/07, Alan Hodgson <[EMAIL PROTECTED]> wrote:
> On Thursday 02 August 2007 09:02, "Bryan Murphy" <[EMAIL PROTECTED]>
> wrote:
> > My question: Is there a way I can decrease the priority of a specific
> > query, or determine the PID of the process it is runni
we currently have logging enabled for all queries over 100ms, and keep
the last 24 hours of logs before we rotate them. I've found this tool
very helpful in diagnosing new performance problems that crop up:
http://pgfouine.projects.postgresql.org/
Bryan
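The 100 ms threshold mentioned above is set with a single postgresql.conf line:

```
# log every statement that runs longer than 100 ms (0 logs everything,
# -1 disables); pgFouine also needs a suitable log_line_prefix to parse these
log_min_duration_statement = 100
```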
On 8/8/07, Steinar H. Gunderson <
e to try a single 8 disk RAID 10 with battery wired up directly
to our database, but given the size of our company and limited funds,
it won't be feasible any time soon.
Bryan
On 9/7/07, Matthew Schumacher <[EMAIL PROTECTED]> wrote:
> I'm getting a san together to consolida
last 24 hours. I can't even count
the # of times I've come in in the morning and some new query has
bubbled to the top.
It's very handy. I don't know if it would have helped you identify
your problem, but it's saved our butts a few times.
Bryan
On 9/25/07, Kamen Stanev
everything into account and model it correctly (not too loose,
not too tight), your solution will be reusable and will save time and
hardware expenses.
Regards -
Bryan
On Thu, May 27, 2010 at 2:43 AM, David Jarvis wrote:
> Hi, Bryan.
>
> I was just about to reply to the thread, t
Is this a bulk insert? Are you wrapping your statements within a
transaction(s)?
How many columns in the table? What do the table statistics look like?
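A minimal illustration of the transaction question above, with a hypothetical table name: one COMMIT per batch instead of one per row, or COPY for true bulk loads.

```sql
-- Hypothetical table; batching rows in one transaction avoids
-- a WAL flush per row
BEGIN;
INSERT INTO items (id, label) VALUES (1, 'a');
INSERT INTO items (id, label) VALUES (2, 'b');
COMMIT;

-- For large files, COPY is faster still
COPY items (id, label) FROM '/path/to/data.csv' WITH CSV;
```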
On Fri, Jun 4, 2010 at 9:21 AM, Michael Gould <
mgo...@intermodalsoftwaresolutions.net> wrote:
> In my opinion it depends on the application,
UFS2 w/ soft updates on FreeBSD might be an interesting addition to the list
of test cases
On Fri, Jun 4, 2010 at 9:33 AM, Andres Freund wrote:
> On Friday 04 June 2010 16:25:30 Tom Lane wrote:
> > Andres Freund writes:
> > > On Friday 04 June 2010 14:17:35 Jon Schewe wrote:
> > >> XFS (logbufs
What types of journaling on each fs?
On Fri, Jun 4, 2010 at 1:26 PM, Jon Schewe wrote:
> On 6/4/10 9:33 AM, Andres Freund wrote:
> > On Friday 04 June 2010 16:25:30 Tom Lane wrote:
> >
> >> Andres Freund writes:
> >>
> >>> On Friday 04 June 2010 14:17:35 Jon Schewe wrote:
> >>>
> XFS (log
s like a fun project!
Bryan
On Fri, Jun 25, 2010 at 7:02 PM, Greg Smith wrote:
> Kevin Grittner wrote:
>
>> A schema is a logical separation within a database. Table
>> client1.account is a different table from client2.account. While a
>> user can be limited to tables w
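Kevin's point above can be sketched in a few lines (schema and table names taken from the quote):

```sql
-- Two schemas, each with its own "account" table
CREATE SCHEMA client1;
CREATE SCHEMA client2;
CREATE TABLE client1.account (id serial PRIMARY KEY);
CREATE TABLE client2.account (id serial PRIMARY KEY);
-- search_path decides which table an unqualified "account" refers to
SET search_path TO client1;
SELECT * FROM account;   -- reads client1.account
```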
On Wed, Dec 8, 2010 at 3:03 PM, Benjamin Krajmalnik wrote:
> I need to build a new high performance server to replace our current
> production database server.
We run FreeBSD 8.1 with PG 8.4 (soon to upgrade to PG 9). Hardware is:
Supermicro 2u 6026T-NTR+
2x Intel Xeon E5520 Nehalem 2.26GHz
> All,
> My company (Chariot Solutions) is sponsoring a day of free
> PostgreSQL training by Bruce Momjian (one of the core PostgreSQL
> developers). The day is split into 2 sessions (plus a Q&A session):
>
> * Mastering PostgreSQL Administration
> * PostgreSQL Performance Tuning
>
>
Does this apply to table names as well or just columns?
Bryan
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
I've got a new server and am myself new to tuning postgres.
Server is an 8 core Xeon 2.33GHz, 8GB RAM, RAID 10 on a 3ware 9550SX-4LP w/ BBU.
It's serving as the DB for a fairly write intensive (maybe 25-30%) Web
application in PHP. We are not using persistent connections, thus the
high max conne
On Mon, Mar 3, 2008 at 4:26 PM, Bill Moran
<[EMAIL PROTECTED]> wrote:
> > > cat /boot/loader.conf
> > kern.ipc.semmni=256
> > kern.ipc.semmns=512
> > kern.ipc.semmnu=256
> >
> > > cat /etc/sysctl.conf
> > kern.ipc.shmall=393216
> > kern.ipc.shmmax=1610612736
>
> I would just set this to 2
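For reference, the two kernel settings quoted above express the same limit in different units: shmall is counted in pages (4 KB on i386/amd64), shmmax in bytes.

```
# /etc/sysctl.conf — both lines describe the same ~1.5 GB cap:
# 393216 pages * 4096 bytes/page = 1610612736 bytes
kern.ipc.shmall=393216
kern.ipc.shmmax=1610612736
```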
On Mon, Mar 3, 2008 at 5:11 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 3 Mar 2008, alan bryan wrote:
>
> >> pgbench -c 100 -t 1000 testdb
>
> > tps = 558.013714 (excluding connections establishing)
> >
> > Just for testing, I tried turning
# - Previous Postgres Versions -
#add_missing_from = off
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#standard_conforming_strings = off
#regex_flavor = ad
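Of the compatibility settings listed above, standard_conforming_strings is the one that changes query semantics. A quick illustration:

```sql
-- With standard_conforming_strings = off (the pre-9.1 default),
-- backslashes in ordinary literals are escapes:
SELECT 'a\nb';    -- 'a', newline, 'b' (warns if escape_string_warning = on)
-- With it on (the 9.1+ default), backslashes are literal;
-- E'' strings process escapes under either setting:
SELECT E'a\nb';
```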
On Tue, Apr 22, 2008 at 08:41:09AM -0700, Joshua D. Drake wrote:
> On Wed, 23 Apr 2008 00:31:01 +0900
> Bryan Buecking <[EMAIL PROTECTED]> wrote:
>
> > at any given time there is about 5-6 postgres in startup
> > (ps auxwww | grep postgres | grep startup | wc -l)
>
On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:
> On Apr 22, 2008, at 10:31 AM, Bryan Buecking wrote:
>
> >max_connections = 2400
>
> That is WAY too high. Get a real pooler, such as pgpool, and drop
> that down to 1000 and test from there.
I agree, b
for that article, very informative and persuasive enough that
I've turned off persistent connections.
--
Bryan Buecking http://www.starling-software.com
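Erik's suggestion above, sketched as configuration: a pooler such as pgpool holds the 2400 client connections, while the server itself keeps a much smaller pool.

```
# postgresql.conf — behind a pooler, far fewer real backends are needed;
# 1000 is the starting point suggested above, to be tuned from there
max_connections = 1000
```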
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes
On Tue, Apr 22, 2008 at 01:21:03PM -0300, Rodrigo Gonzalez wrote:
> Are tables vacuumed often?
How often is often? Right now the db is vacuumed once a day.
--
Bryan Buecking http://www.starling-software.com
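Assuming a release with autovacuum (enabled by default since 8.3), letting it run continuously usually beats a single nightly vacuum:

```
# postgresql.conf — vacuum tables as dead rows accumulate,
# rather than once a day
autovacuum = on
```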
I hate to nag, but could anybody help me with this? We have a few
related queries that are causing noticeable service delays in our
production system. I've tried a number of different things, but I'm
running out of ideas and don't know what to do next.
Thanks,
Bryan
On Mon, Ma
running into these problems and need to understand what is going
on so that I know we're fixing the correct things.
Thanks,
Bryan
l getting the same execution plan.
Looking through our configuration one more time, I see that at some
point I set random_page_cost to 2.0, but I don't see any other changes
to query planner settings from their default values.
Bryan
On Wed, Mar 25, 2009 at 8:40 AM, Robert Haas wrote:
> On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote:
>> Looking through our configuration one more time, I see that at some
>> point I set random_page_cost to 2.0, but I don't see any other changes
>> to query p
temexperiencelog.visitorid and
> visitors.user_id both to 500.
I tried that already, but I decided to try again in case I messed up
something last time. Here's what I ran. As you can see, it still
chooses to do a sequential scan. Am I changing the stats for those
columns correctly?
Thanks,
Bryan
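For reference, the statistics-target change described above is normally done per column, followed by ANALYZE so the planner sees the new histograms (table and column names taken from the quoted text):

```sql
ALTER TABLE itemexperiencelog ALTER COLUMN visitorid SET STATISTICS 500;
ALTER TABLE visitors ALTER COLUMN user_id SET STATISTICS 500;
-- the new target only takes effect once the tables are re-analyzed
ANALYZE itemexperiencelog;
ANALYZE visitors;
```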
y into our
architecture, so our usage patterns have transformed overnight.
Previously we were very i/o bound, now most of the actively used data
is actually in memory. Just a few weeks ago there was so much churn
almost nothing stayed cached for long.
This is great, thanks guys!
Bryan
On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane wrote:
> Bryan Murphy writes:
>> What I did was change seq_page_cost back to 1.0 and then changed
>> random_page_cost to 0.5
>
> [ squint... ] It makes no physical sense for random_page_cost to be
> less than seq_page_cost.
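Tom's constraint above translates to keeping random_page_cost at or above seq_page_cost; for a mostly-cached working set, the usual move is to lower random_page_cost toward 1.0 rather than below it (values illustrative):

```
# postgresql.conf
seq_page_cost = 1.0
random_page_cost = 1.5   # default 4.0; should stay >= seq_page_cost
```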
make huge I/O load and should run in background. When this
> query runs all other fast queries slow down dramatically.
Could you use something like slony to replicate the needed data to a
secondary database and run the query there?
Bryan