> On Feb 13, 2016, at 10:43 AM, Dan Langille wrote:
>
> Today I tackled the production server. After discussion on the Bacula devel
> mailing list (http://marc.info/?l=bacula-devel&m=145537742804482&w=2)
> On Feb 11, 2016, at 4:41 PM, Dan Langille wrote:
>
>> On Feb 10, 2016, at 5:13 AM, Dan Langille wrote:
>>
>>> On Feb 10, 2016, at 2:47 AM, Jeff Janes wrote:
>>>
>>> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille wrote:
>>>> I have
> On Feb 10, 2016, at 5:13 AM, Dan Langille wrote:
>
>> On Feb 10, 2016, at 2:47 AM, Jeff Janes wrote:
>>
>> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille wrote:
>>> I have a wee database server which regularly tries to insert 1.5 million or
>>> even
> On Feb 10, 2016, at 2:47 AM, Jeff Janes wrote:
>
> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille wrote:
>> I have a wee database server which regularly tries to insert 1.5 million or
>> even 15 million new rows into a 400 million row table. Sometimes these
>> ins
itions.
https://gist.github.com/dlangille/1a8c8cc62fa13b9f
I'm tempted to move it to faster hardware, but in case I've missed something
basic...
Thank you.
--
Dan Langille - BSDCan / PGCon
d...@langille.org
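A minimal sketch of the COPY-based bulk load that usually replaces large batched INSERTs; the table, column, and file names here are hypothetical, not taken from the thread or the gist:

-- Load one batch in a single transaction with COPY instead of many INSERTs.
-- COPY FROM a server-side file needs appropriate privileges.
BEGIN;
COPY big_table (id, payload) FROM '/tmp/new_rows.csv' CSV;
COMMIT;
-- Refresh planner statistics after the table has grown substantially.
ANALYZE big_table;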
We had a problem in the 8.X series with COPY IN - it did not respect any
configured maximums and just kept allocating memory until it could fit the
entire COPY contents (down to the terminating \.) into RAM. Could there be a similar
issue with COPY OUT?
-
Dan
On Wed, Jun 11, 2014 at 6:02 PM, Timothy
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Josh Berkus
Sent: Thursday, February 14, 2013 6:58 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu
12.04
On 02/14/2013 12:41 PM, Dan Kogan wrote:
> We u
On 02/13/2013 05:30 PM, Dan Kogan wrote:
> Just to be clear - I was describing the current situation in our production.
>
> We were running pgbench on different Ubuntu versions today. I don’t have
> 12.04 setup at the moment, but I do have 12.10, which seems to be performing
> ab
Thanks for the info.
Our application does have a lot of concurrency. We checked the zone reclaim
parameter and it is turned off (that was the default, we did not have to change
it).
Dan
-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: Thursday, February 14
with 8 jobs and 32 clients resulted in load average of about 15
and TPS was 51350.
Question - how many cores does your server have? Ours has 8 cores.
Thanks,
Dan
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf
so we had to revert
back to 3.2.
At this point we are contemplating whether it's better to go back to 11.04 or
upgrade to 12.10 (which comes with kernel version 3.5).
Any thoughts on that would be appreciated.
Dan
From: Will Ferguson [mailto:wfergu...@northplains.com]
Sent: Tuesday, Februar
(or even
tried) with the 9.0 jdbc driver against 9.2 server?
Dan
From: Eric Haertel [mailto:eric.haer...@groupon.com]
Sent: Tuesday, February 12, 2013 12:52 PM
To: Dan Kogan
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu
12.04
/run/postgresql
wal_keep_segments=128
wal_level=hot_standby
work_mem=8MB
Thanks,
Dan
Apologies for leaping in a little late, but I note the version on Github has
been updated much more recently:
https://github.com/gregs1104/pgtune
Cheers,
Dan
--
Dan Fairs | dan.fa...@gmail.com | @danfairs | secondsync.com
virtual memory
could get quite large - as in several GB. It plus the buffer pool
sometimes exceeded the amount of RAM I had available at that time (several
years ago), with bad effects on performance.
This may have been fixed since then, or maybe RAM's gotten big enough that
it's not a problem.
Dan Franklin
This is a good general discussion of the problem - looks like you could
replace "MySQL" with "PostgreSQL" everywhere without loss of generality:
http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
Dan
> hardware and software monitoring, no errors in the os logs, nothing in
> the dell drac logs. After a hard reset it's back up as if nothing
> happened, and it's an issue where I'm none the wiser about the cause. Not good
> for peace of mind.
>
> Look around and find another vendor, even if your company has to pay
> more for you to have that blame avoidance.
We're currently using Dell and have had enough problems to think about
switching.
What about HP?
Dan Franklin
efore changing production, but it looks good - thanks
very much!
Cheers,
Dan
--
Dan Fairs | dan.fa...@gmail.com | www.fezconsulting.com
"timecode_transmission"."tx" <= '2012-04-06 23:59:59'
AND "timecode_transmission"."tx" >= '2012-04-06 00:00:00'
GROUP BY "timecode_transmission"."id"
The twitter_tweet table has about 25m rows, as you'll s
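Assuming the tx column is not already indexed (the plan itself is not shown in this excerpt), a minimal sketch of the index that would let the date-range predicate above use an index scan:

-- Plain btree index on the range-filtered timestamp column from the query above.
CREATE INDEX timecode_transmission_tx_idx ON timecode_transmission (tx);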
verlooking here?
Thanks
-Dan
It seems like in this case that should be fine.
-Dan
On Sun, May 15, 2011 at 3:02 PM, Ezequiel Lovelle
wrote:
> Hi, I'm new to postgres and I have the following question.
>
> I have a php program that makes 10 inserts in my database.
> autoincrement numbers inserted into a table
And you're right fork, Record Linkage is in fact an entire academic
discipline! I had no idea, this is fascinating and helpful:
http://en.wikipedia.org/wiki/Record_linkage
Thanks so much!
Dan
13 rows=444613 width=113)"
One general question: does the width of the tables (i.e. the numbers
of columns not being joined and the size of those fields) matter? The
tables do have a lot of extra columns that I could slice out.
Thanks so much!
Dan
System:
client: pgadmin III, Mac OS
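On the width question above: yes, it matters for scan-heavy joins, because wider rows mean fewer rows per 8 kB page and therefore more I/O. A small sketch, with hypothetical names, of materializing only the columns the join needs:

-- Narrow projection of just the join columns; many more rows fit per page.
CREATE TABLE t_narrow AS
SELECT id, join_key
FROM t_wide;
ANALYZE t_narrow;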
On 3/4/11 11:03 AM, Wayne Conrad wrote:
On 03/04/11 10:34, Glyn Astill wrote:
> I'm wondering (and this may be a can of worms) what people's opinions
are on these schedulers?
When testing our new DB box just last month, we saw a big improvement
in bonnie++ random I/O rates when using the noop scheduler.
Thank you everybody for the detailed answers, the help is well appreciated.
A couple of follow-up questions:
- Is the supercap + flash memory considered superior to the BBU in practice?
Is that type of system well tested?
- Is the linux support of the LSI and Adaptec cards comparable?
-Dan
On
eplication was keeping up, which would be
monitored).
-Dan
Hi,
My name is Dan and I'm a co-worker of Nick Matheson who initially submitted this question (because the mail group had me blacklisted
for a while for some reason).
Thank you for all of the suggestions. We were able to improve our bulk read performance from 3 MB/s to 60 MB/s (assumin
tion. (We actually
tried Tokyo Cabinet and found it to perform quite well. However it does not measure up to Postgres in terms of replication, data
interrogation, community support, acceptance, etc).
Thanks
Dan Schaffer
Paul Hamer
Nick Matheson
On 10/12/10 4:33 PM, Neil Whelchel wrote:
On Tuesday 12 October 2010 08:39:19 Dan Harris wrote:
On 10/11/10 8:02 PM, Scott Carey wrote:
would give you a 1MB read-ahead. Also, consider XFS and its built-in
defragmentation. I have found that a longer lived postgres DB will get
extreme file
On 10/12/10 10:44 AM, Scott Carey wrote:
On Oct 12, 2010, at 8:39 AM, Dan Harris wrote:
On 10/11/10 8:02 PM, Scott Carey wrote:
would give you a 1MB read-ahead. Also, consider XFS and its built-in
defragmentation. I have found that a longer lived postgres DB will get extreme
file
tremendously.
We just had a corrupt table caused by an XFS online defrag. I'm scared
to use this again while the db is live. Has anyone else found this to
be safe? But, I can vouch for the fragmentation issue, it happens very
quickly in our system.
-Dan
ing a connection pooler
like pgpool to reduce your connection memory overhead.
-Dan
The other major bottleneck they ran into was a kernel one: reading from
the heap file requires a couple lseek operations, and Linux acquires a
mutex on the inode to do that. The proper place to fix this is
certainly in the kernel but it may be possible to work around in
Postgres.
Dan
--
Dan R. K
On 3/22/10 4:36 PM, Carlo Stonebanks wrote:
Here we go again!
Can anyone see any obvious faults?
Carlo
maintenance_work_mem = 256MB
I'm not sure how large your individual tables are, but you might want to
bump this value up to get faster vacuums.
max_fsm_relations = 1000
I think this will d
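As a concrete illustration of the maintenance_work_mem suggestion above (the value and table name are purely illustrative):

-- Raise maintenance_work_mem for this session only, then vacuum.
SET maintenance_work_mem = '512MB';
VACUUM VERBOSE some_large_table;
RESET maintenance_work_mem;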
lsar_ssd/
>
> I have updated our documentation to mention that even SSD drives often
> have volatile write-back caches. Patch attached and applied.
Hmmm. That got me thinking: consider ZFS and HDD with volatile cache.
Do the characteristics of ZFS avoid this issue entirely?
- --
Dan Langi
At 12:36 AM -0400 9/25/09, Tom Lane wrote:
Dan Sugalski writes:
Is there any practical limit to the number of shared buffers PG 8.3.7
can handle before more becomes counter-productive?
Probably, but I've not heard any definitive measurements showing an
upper limit. The traditional w
ad things don't happen because of buffer management.
(Unfortunately I've only got a limited window to bounce the server,
so I can't do too much in the way of experimentation with buffer
sizing)
--
Dan
> publicised list manager address, so I am addressing this complaint to
> the whole list. Is there someone here who can fix the problem?
This one seems to have made it.
Rest assured, nobody is interested enough to censor anything here.
- --
Dan Langille
BSDCan - The Technical BSD
over psql.
Fair enough. (And sorry about the mis-read) Next time this occurs I'll try
and duplicate this in psql. FWIW, a quick read of the C underlying the
DBD::Pg module shows it using PQexecPrepared, so I'm pretty sure it is
using prepared statements with placeholders, but
rchitecture) is 1, the second ? (for branchid) is 0.
They both should get passed to Postgres as $1 and $2, respectively,
assuming DBD::Pg does its substitution right. (They're both supposed to go
in as placeholders)
-Dan
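Since the question above is whether the driver's placeholders really reach the server as a prepared statement, a self-contained sketch of inspecting the plan used for a $1/$2-style prepared query (it reads only a system catalog, so it runs anywhere):

-- Prepare a statement with placeholders and examine the plan chosen when it
-- is executed, much as a driver-prepared query would be.
PREPARE q(int, int) AS
SELECT count(*) FROM pg_class WHERE relpages > $1 AND relnatts > $2;
EXPLAIN ANALYZE EXECUTE q(1, 0);
DEALLOCATE q;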
undef, $db->{arch},
$db->{basebranch});
There's no transform of the sql variable between the two statements, just
a quick loop over the returned rows from the explain analyze to print them
out. (I did try to make sure that the debugging bi
')
and libobject.objinstance = provide_symbol.objinstance
and libinstance.branchid = ?
and provide_symbol.symbolid = temp_symbol.symbolid
and objectinstance.objinstance = libobject.objinstance
and libinstance.istemp = 0
The explain analyze for the query's attached in a (poss
uery has seven
tables (one of them a temp table) and my geqo_threshold is set to 12. If
I'm reading the docs right GEQO shouldn't kick in.
-Dan
Joining a lot of tables together? Could be GEQO kicking in.
Only if I get different query plans for the query depending on whether
it's being EXPLAIN ANALYZEd or not. That seems unlikely...
-Dan
e problems, but
that isn't helping as it shows perfectly sane results. That leaves
abnormal means, and outside of trussing the back end or attaching with dbx
to get a stack trace I just don't have any of those. I'm not even sure
what I should be looking for when I do get a stack trace.
-Dan
tself. It's possible something's going wrong in that, but the code's
pretty simple.
Arguably in this case the actual query should run faster than the EXPLAIN
ANALYZE version, since the cache is hot. (Though that'd only likely shave
a few dozen ms off the runtime)
-Dan
I'm running a 64-bit build of Postgres 8.3.5 on AIX 5.3, and have a really
strange, annoying transient problem with one particular query stalling.
The symptom here is that when this query is made with X or more records in
a temp table involved in the join (where X is constant when the problem
mani
checkpoints can't have very
much work to do, so their impact on performance is smaller. Once
you've got a couple of hundred MB on there, the per-checkpoint
overhead can be considerable.
Ahh bugger, I've just trashed my test setup.
Pardon? How did you do that?
--
Da
relevant information.
As always, thank you for your insight.
-Dan
Kenneth Marshall wrote:
Dan,
Did you try this with 8.3 and its new HOT functionality?
Ken
I did not. I had to come up with the solution before we were able to
move to 8.3. But, Tom did mention that the HOT might help and I forgot
about that when writing the prior message. I'm i
counts in program memory, but it was the only way I
found to avoid the penalty of constant table churn on the triggered inserts.
-Dan
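On the HOT suggestion above, a minimal sketch (hypothetical table name) of the usual companion setting, a lower fillfactor, so that 8.3's HOT updates can stay on the same page:

-- Reserve 30% free space on each page; HOT needs room on the page to place
-- the new tuple version, which reduces churn from constant UPDATEs.
ALTER TABLE word_counts SET (fillfactor = 70);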
nd up going with 1+0 instead.
-Dan
I learned a little about pg_trgm here:
http://www.sai.msu.su/~megera/postgres/gist/pg_trgm/README.pg_trgm
But this seems like it's for finding similarities, not substrings. How can
I use it to speed up t1.col like '%t2.col%'?
Thanks,
Dan
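The README above predates it, but from PostgreSQL 9.1 a trigram index can serve unanchored LIKE directly. A minimal sketch with hypothetical table and column names:

-- Requires PostgreSQL 9.1+ for LIKE support in pg_trgm indexes.
CREATE EXTENSION pg_trgm;
CREATE INDEX t1_col_trgm_idx ON t1 USING gin (col gin_trgm_ops);
-- In a nested-loop join the index can then be probed once per t2 row
-- (whether the planner chooses that plan is, as always, cost-dependent):
SELECT t1.*
FROM t1
JOIN t2 ON t1.col LIKE '%' || t2.col || '%';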
I've got a lot of rows in one table and a lot of rows in another table. I
want to do a bunch of queries on their join column. One of these is like
this: t1.col like '%t2.col%'
I know that always sucks. I'm wondering how I can make it better. First, I
should let you know that I can likely ho
Erik Jones wrote:
On Feb 15, 2008, at 3:55 PM, Dan Langille wrote:
We're using PostgreSQL 8.1.11 on AIX 5.3 and we've been doing some
playing around
with various settings. So far, we've (I say we, but it's another guy
doing the work) found
that open_datasync seems better
Have you seen this behaviour?
FYI, 8.3.0 is not an option for us in the short term.
What have you been using on AIX and why?
thanks
--
Dan Langille -- http://www.langille.org/
[EMAIL PROTECTED]
her than endless arguments to happen,
come up with a nice key-management design for encrypted function
bodies.
I keep thinking the problem of keys is similar to that of Apache servers
which use certificates that require passphrases. When the server is
started, the passphrase is entered on
etween pg_dump and vacuum, or
2. reduce the dead tuple pile up between vacuums
Thanks for reading
-Dan
e more to that original table. What about triggers?
rules? Perhaps there are other things going on in the background.
--
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php
My PG server came to a screeching halt yesterday. Looking at top, I saw a very
large number of "startup waiting" tasks. A pg_dump was running and one of my
scripts had issued a CREATE DATABASE command. It looks like the CREATE DATABASE
was exclusive but was having to wait for the pg_dump to fin
Kari Lavikka wrote:
Hello!
Some background info.. We have a blog table that contains about eight
million blog entries. Average length of an entry is 1200 letters.
Because each 8k page can accommodate only a few entries, every query
that involves several entries causes several random seeks to
Have you confirmed via explain (or explain analyse) that the index is
being used?
> So I'm asking me if it is useful to update to the actual 8.2 version
> and if we could experience performance improvement only by updating.
There are other benefits from upgrading, but you may be able to
Tom Lane wrote:
Dan Harris <[EMAIL PROTECTED]> writes:
Here's the strace summary as run for a few second sample:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.25    0.671629           9
there is no index on word (there should be!). Would this have caused the problem?
This is 8.0.12
Linux sunrise 2.6.15-26-amd64-server #1 SMP Fri Sep 8 20:33:15 UTC 2006 x86_64
GNU/Linux
Any idea what might have set it into this loop?
-Dan
thomas
I'd say that "it depends". We run an OLAP workload on 350+ gigs of database on
a system with 64GB of RAM. I can tell you for certain that fetching non-cached
data is very sensitive to disk throughput!
Different types of workloads will find different bottlenecks in the
doesn't really make that big of a difference.
My recommendation: each database gets its own aggregate unless the
IO footprint is very low.
Let me know if you need more details.
Regards,
Dan Gorman
On Jul 11, 2007, at 6:03 AM, Dave Cramer wrote:
Assuming we have 24 73G drives is it bett
No, however, I will attach the postgresql.conf so everyone can look at
other settings just in case.
postgresql.conf
Description: Binary data
Regards,
Dan Gorman
On Jun 25, 2007, at 10:07 AM, Gregory Stark wrote:
"Simon Riggs" <[EMAIL PROTECTED]> writes:
WARNING: page 2
you guys would like me to try to 'break' it again and keep the db
around for further testing let me know.
Regards,
Dan Gorman
On Jun 25, 2007, at 9:34 AM, Tom Lane wrote:
Dan Gorman <[EMAIL PROTECTED]> writes:
Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06
Greg,
PG 8.2.4
Regards,
Dan Gorman
On Jun 25, 2007, at 9:02 AM, Gregory Stark wrote:
"Dan Gorman" <[EMAIL PROTECTED]> writes:
I took several snapshots. In all cases the FS was fine. In one
case the db
looked like on recovery it thought there were outstanding pages
I took several snapshots. In all cases the FS was fine. In one case it
looked like, on recovery, the db thought there were outstanding pages
still to be written to disk (as seen below) and it wouldn't start.
Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21
00:39:43 PDTLOG: redo
It's the latter: a snapshot of the durable state of the storage
system (i.e. it will never be corrupted)
Regards,
Dan Gorman
On Jun 22, 2007, at 11:02 AM, Tom Lane wrote:
"Simon Riggs" <[EMAIL PROTECTED]> writes:
On Fri, 2007-06-22 at 13:12 -0400, Tom Lane wrote:
If
Ah okay. I understand now. So how can I signal postgres I'm about to
take a backup? (read doc from previous email?)
Regards,
Dan Gorman
On Jun 22, 2007, at 4:38 AM, Simon Riggs wrote:
On Fri, 2007-06-22 at 04:10 -0700, Dan Gorman wrote:
This snapshot is done at the LUN (filer)
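For the question above, a minimal sketch of how a storage-level snapshot is usually bracketed on releases of that era; it assumes WAL archiving (archive_command) is already configured:

-- Tell the server a base backup is starting, take the LUN/filesystem
-- snapshot outside the database, then end the backup.
SELECT pg_start_backup('filer-snapshot');
-- ... take the snapshot at the filer here ...
SELECT pg_stop_backup();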
This snapshot is done at the LUN (filer) level, postgres is un-aware
we're creating a backup, so I'm not sure how pg_start_backup() plays
into this ...
Regards,
Dan Gorman
On Jun 22, 2007, at 3:55 AM, Simon Riggs wrote:
On Fri, 2007-06-22 at 11:30 +0900, Toru SHIMOGAKI wrote:
level) which a lot of storage vendors provide, the backup data can be
corrupted as Dan said. During recovery we can't even read it,
especially if meta-data was corrupted.
I can't see any explanation for how this could happen, other
than your hardware vendor is lying about snapsh
Some of our databases are doing about 250,000 commits/min.
Best Regards,
Dan Gorman
Andrew Sullivan wrote:
On Thu, Jun 07, 2007 at 03:26:56PM -0600, Dan Harris wrote:
They don't always have to be in a single transaction, that's a good idea to
break it up and vacuum in between, I'll consider that. Thanks
If you can do it this way, it helps _a lot_. I've
select count(*) will *always* do a sequential scan, due to the MVCC
architecture. See archives for much discussion about this.
-Dan
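When an exact figure is not required, a common workaround is to read the planner's estimate instead of scanning; a minimal sketch with a hypothetical table name:

-- Approximate row count from statistics maintained by VACUUM/ANALYZE;
-- avoids the full sequential scan that an exact count(*) needs.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';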
about. This should be a
cleaner solution for you.
-Dan
Orhan Aglagul wrote:
Hi Everybody,
I was trying to see how many inserts per second my application could
handle on various machines.
I read that postgres does have issues with MP Xeon (costly context
switching). But I still think that with fsync=on 65 seconds is ridiculous.
CPU is unlikel
Bill Moran wrote:
In response to Dan Harris <[EMAIL PROTECTED]>:
Why does the user need to manually track max_fsm_pages and max_fsm_relations? I
bet there are many users who have never taken the time to understand what this
means and wondering why performance still stinks after vac
In closing, I am not bashing PG! I love it and swear by it. These comments are
purely from an advocacy perspective. I'd love to see the PG user base continue to grow.
My .02
-Dan
3.2.3-20)
(1 row)
We used the rpm source from postgresql-7.4-0.5PGDG.
You make it sound so easy. Our database size is at 308 GB. We actually
have 8.2.3 running and would like to transfer in the future. We have to
investigate the best way to do it.
Dan.
We have a table which we want to normalize and use the same SQL to
perform selects using a view.
The old table had 3 columns in its index
(region_id,wx_element,valid_time).
The new table meteocode_elmts has a similar index but the region_id is a
reference to another table region_lookup and wx_e
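A sketch of the kind of composite index the normalized table would need for the view's lookups to stay index-driven; the column names are taken from the fragment above and the exact definition is an assumption:

-- Equivalent composite index on the normalized table.
CREATE INDEX meteocode_elmts_region_elmt_time_idx
ON meteocode_elmts (region_id, wx_element, valid_time);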
run.
I have been able to do this with tables, using a helpful view posted to this
list a few months back, but I'm not sure if I can get the same results on indexes.
Thanks
-Dan
d then do the seq scan for the LIKE condition. Instead, it seems that it's seqscanning the
whole 70 million rows first and then doing the join, which takes a lot longer than I'd like to wait for it. Or, maybe I'm
misreading the explain output?
Thanks again
-Dan
Dan Harris wrote:
I've found that it would be helpful to be able to tell how busy my
dedicated PG server is ( Linux 2.6 kernel, v8.0.3 currently ) before
pounding it with some OLAP-type queries.
..snip
Thank you all for your great ideas! I'm going to try the perl function
as
idea for obvious security reasons...
So far, that's all I can come up with, other than a dedicated socket
server daemon on the DB machine to do it.
Any creative ideas are welcomed :)
Thanks
-Dan
Thank you all for your ideas. I appreciate the quick response.
-Dan
key piece of knowledge is escaping me on this.
I don't expect someone to write this for me, I just need a nudge in the
right direction and maybe a URL or two to get me started.
Thank you for reading this far.
-Dan
2.6.18 fairly recently, I am *very* interested in
what caused the throughput to drop in 2.6.18? I haven't done any
benchmarking on my system to know if it affected my usage pattern
negatively, but I am curious if anyone knows why this happened?
-Dan
On 23 Aug 2006 at 22:30, Tom Lane wrote:
> "Dan Langille" <[EMAIL PROTECTED]> writes:
> > Without leaving "enable_hashjoin = false", can you suggest a way to
> > force the index usage?
>
> Have you tried reducing random_page_cost?
Yes. No effect
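For anyone repeating this diagnosis, a minimal session-local sketch of checking whether the planner will pick the index at all; the values are illustrative and the catalog query is only a stand-in for the real one:

-- Session-local settings only; for diagnosis, not production.
BEGIN;
SET LOCAL enable_seqscan = off;
SET LOCAL random_page_cost = 2;
EXPLAIN ANALYZE SELECT count(*) FROM pg_class;
ROLLBACK;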
On 23 Aug 2006 at 13:31, Chris wrote:
> Dan Langille wrote:
> > I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use
> > an index. With the index, I get executions times of 0.5 seconds.
> > Without, it's closer to 2.5 seconds.
> >
act_suffix,
P.homepage,
P.status,
P.broken,
P.forbidden,
P.ignore,
P.restricted,
P.deprecated,
P.no_cdrom,
P.expiration_date,
P.latest_link
FROM categories C, ports P JOIN element E on P.element_id = E.id
WHERE P.status = 'D'
A
ingle instance and
it's working quite well. There may be reasons to run multiple
instances but it seems like tuning them to cooperate for memory would
pose some problems - e.g. effective_cache_size.
-Dan
Currently I have jumbo frames enabled on the NA and the switches and
are also using the 32K R/W NFS options. Everything is gigE.
Regards,
Dan Gorman
On Jun 14, 2006, at 10:51 PM, Joe Conway wrote:
Dan Gorman wrote:
That makes sense. Speaking of NetApp, we're using the 3050C with 4
That makes sense. Speaking of NetApp, we're using the 3050C with 4 FC
shelfs. Any generic advice other than the NetApp (their NFS oracle
tuning options)
that might be useful? (e.g. turning off snapshots)
Regards,
Dan Gorman
On Jun 14, 2006, at 10:14 PM, Jonah H. Harris wrote:
On 1
rites
to via the NVRAM can I safely turn fsync off to gain additional
performance?
Best Regards,
Dan Gorman
ing the query in several ways, eg putting
the function call in a sub-select, and so on. I also tried
disabling the various query plans, but in the end I've only
managed to slow it down even further.
So, I'm hoping someone can tell me what the magical cure is.
know if the postgres team is working on this?
(btw, I pasted in the wrong oracle query lol - but it can be done in
mysql and oracle)
Best Regards,
Dan Gorman
On May 23, 2006, at 11:51 AM, Simon Riggs wrote:
On Tue, 2006-05-23 at 11:33 -0700, Dan Gorman wrote:
In any other DB (oracle