Carlo Stonebanks wrote:
So, we have the hardware, we have the O/S - but I think our config
leaves much to be desired. Typically, our planner makes bad decisions,
picking a seq scan over an index scan where the index scan has a better result.
You're not setting effective_cache_size, so I wouldn't exp
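For reference, effective_cache_size is a planner estimate, not an allocation; a common rule of thumb is roughly half to three quarters of RAM on a dedicated box. A purely illustrative postgresql.conf fragment, assuming a 48GB machine like the one described below:

```
# postgresql.conf -- illustrative value only, assuming ~48GB RAM.
# effective_cache_size allocates nothing; it tells the planner how much
# data is likely cached between shared_buffers and the OS page cache,
# which makes index scans look cheaper relative to seq scans.
effective_cache_size = 32GB
```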
On 3/22/10 11:47 AM, Scott Carey wrote:
On Mar 17, 2010, at 9:41 AM, Craig James wrote:
On 3/17/10 2:52 AM, Greg Stark wrote:
On Wed, Mar 17, 2010 at 7:32 AM, Pierre C wrote:
I was thinking in something like that, except that the factor I'd use
would be something like 50% or 100% of current size, capped at (say) 1 GB.
On 3/22/10 4:36 PM, Carlo Stonebanks wrote:
Here we go again!
Can anyone see any obvious faults?
Carlo
maintenance_work_mem = 256MB
I'm not sure how large your individual tables are, but you might want to
bump this value up to get faster vacuums.
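As a hedged illustration (the right value depends on available RAM and how many autovacuum workers run concurrently), maintenance_work_mem can also be raised per session just for a manual vacuum or index build; the table name here is hypothetical:

```sql
-- Illustrative only: raise the limit for this session before heavy maintenance.
-- VACUUM and CREATE INDEX can each use up to maintenance_work_mem.
SET maintenance_work_mem = '1GB';
VACUUM ANALYZE big_table;
RESET maintenance_work_mem;
```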
max_fsm_relations = 1000
I think this will d
Not to beat a dead horse excessively, but I think the below is a pretty
good argument for index hints? I know that improving the general optimizer
should be the highest priority (I very much agree with this), but I think
there are fully legitimate cases like the below. Asking the user to rewrite the
query in an unn
Here we go again!
Based on recommendations made here, I got my client to migrate off of our
Windows 2003 Server x64 box to a new Linux box.
# CENTOS 5.4
# Linux mdx_octo 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64
x86_64 x86_64 GNU/Linux
# pgsql 8.3.10, 8 CPUs, 48GB RAM
# RAID 1
Hi Greg,
Thanks for the insight. How much more of a server's resources will be
consumed by an ANALYZE with default_statistics_target = 100?
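If the whole-database cost of default_statistics_target = 100 is the concern, one alternative (a sketch; the table and column names are hypothetical) is to raise the statistics target only on the columns the planner misestimates:

```sql
-- Per-column override instead of a global default_statistics_target bump:
-- only this table pays the extra ANALYZE sampling cost.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```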
We have two environments hosting the same data. One is our "live" server,
which serves the web site, and this hosts our published data, not more than
2
On Mon, Mar 22, 2010 at 6:47 PM, Scott Carey wrote:
> It's fairly easy to break. Just do a parallel import with, say, 16 concurrent
> tables being written to at once. Result? Fragmented tables.
>
Fwiw I did do some investigation about this at one point and could not
demonstrate any significant
This is why pre-allocation is a good idea if you have the space.
Tom, what about a really simple command in a forthcoming release of PG that
would just preformat a 1GB file at a time? This is what I've always done
scripted with Oracle (ALTER TABLESPACE foo ADD DATAFILE ) rather than
relyin
This is one of the more-or-less solved problems in Unix/Linux. Ext*
file systems have a "reserve" usually of 10% of the disk space that
nobody except root can use. It's not for root, it's because with 10%
of the disk free, you can almost always do a decent job of allocating
contiguous b
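For what it's worth, that reserve is tunable on ext2/3/4; a sketch (the device name is hypothetical, and on a data-only volume some administrators lower the reserve rather than raise it):

```shell
# Show the current reserved-block count for the filesystem.
tune2fs -l /dev/sdb1 | grep -i 'reserved block'
# Set the reserve to 10% (the usual default) to preserve contiguity.
tune2fs -m 10 /dev/sdb1
```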
Christian Brink writes:
> -> Nested Loop (cost=0.01..2416.59 rows=22477 width=4)
> (actual time=0.595..2.150 rows=225 loops=1)
> -> Index Scan using sysstrings_pkey on sysstrings
> (cost=0.00..8.27 rows=1 width=182) (actual time=0.110..0.112 rows=1 loops=1)
>
I previously posted 'forcing index scan on query produces 16x faster'
and it seemed that the consensus was that 8.0.x series had an issue. I
have upgraded to the highest practical version for our distro. But we
seem to have the same issue.
If I force the 'enable_seqscan' off our actual time is
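For anyone reproducing this, the usual way to compare the two plans is to toggle enable_seqscan for the session only (a diagnostic sketch, not a production setting; the query is elided):

```sql
-- Diagnostic only: enable_seqscan = off does not forbid seq scans,
-- it just makes the planner cost them astronomically high.
EXPLAIN ANALYZE SELECT ...;   -- baseline plan
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT ...;   -- forced index plan; compare actual times
RESET enable_seqscan;
```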
On Mar 17, 2010, at 9:41 AM, Craig James wrote:
> On 3/17/10 2:52 AM, Greg Stark wrote:
>> On Wed, Mar 17, 2010 at 7:32 AM, Pierre C wrote:
I was thinking in something like that, except that the factor I'd use
would be something like 50% or 100% of current size, capped at (say) 1 GB.
>
Message from Corin at 03-19-2010 01:26:35 PM
--
***snip
The intention of the query is to find rows with no "partner" row. The
offset and limit are just to ignore the time needed to send the result
to the client.
---
I don't understand the point of OFFSET; LIMIT will accomplish
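The "no partner row" pattern itself is usually written as an anti-join; a sketch with hypothetical table and column names:

```sql
-- Rows in t1 with no matching row in t2 (an anti-join).
-- NOT EXISTS typically plans well as a hash anti join in 8.4+.
SELECT t1.id
FROM   t1
WHERE  NOT EXISTS (SELECT 1 FROM t2 WHERE t2.t1_id = t1.id);
```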
Yes.
On Mar 18, 2010, at 5:20 PM, Hannu Krosing wrote:
> On Thu, 2010-03-18 at 16:12 -0400, Justin Pitts wrote:
>> It seems to me that a separate partition / tablespace would be a much
>> simpler approach.
>
> Do you mean a separate partition/ tablespace for _each_ index built
> concurrently ?
On Thu, 2010-03-18 at 16:12 -0400, Justin Pitts wrote:
> It seems to me that a separate partition / tablespace would be a much simpler
> approach.
Do you mean a separate partition/ tablespace for _each_ index built
concurrently ?
> On Mar 17, 2010, at 5:18 PM, Hannu Krosing wrote:
>
> > On Wed,
It seems to me that a separate partition / tablespace would be a much simpler
approach.
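For concreteness, a per-index tablespace sketch (the path, tablespace, table, and index names are all hypothetical):

```sql
-- Put the index on its own partition/spindle so the build I/O is
-- separated from the table's tablespace.
CREATE TABLESPACE idx_space LOCATION '/mnt/disk2/pg_idx';
CREATE INDEX CONCURRENTLY orders_customer_idx
    ON orders (customer_id) TABLESPACE idx_space;
```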
On Mar 17, 2010, at 5:18 PM, Hannu Krosing wrote:
> On Wed, 2010-03-17 at 16:49 -0400, Greg Smith wrote:
>> Alvaro Herrera wrote:
>>> Andres Freund escribió:
>>>
>>>
I find it much easier to believe
Absolutely ...
- for fixed-size rows with a lot of small updates, Oracle wins. BTW, as of
Oracle 9 they're called "UNDO tablespaces"
- for lots of transactions and freely mixing transactions of all sizes, MVCC
tables (Postgres) wins
- if you just want a structured filesystem and don't have integrit
On Mon, 22 Mar 2010, Yeb Havinga wrote:
Yes, that is certainly a factor. For example, the page size for bioseg
which we use here is 130 entries, which is very excessive, and doesn't
allow very deep trees. On the other hand, it means that a single disc seek
performs quite a lot of work.
Yeah,
Matthew Wakeling wrote:
On Sat, 20 Mar 2010, Yeb Havinga wrote:
The gist virtual pages would then match more closely the original blocksizes
that were used in Guttman's R-tree paper (first google result, then figure
4.5). Since the nature/characteristics of the underlying datatypes and keys
is not cha
On Sat, 20 Mar 2010, Yeb Havinga wrote:
The gist virtual pages would then match more closely the original blocksizes that
were used in Guttman's R-tree paper (first google result, then figure 4.5).
Since the nature/characteristics of the underlying datatypes and keys is not
changed, it might be that with
Matthew Wakeling writes:
> On Fri, 19 Mar 2010, Stephen Frost wrote:
>> ...it has to go to an external on-disk sort (see later on, and how to
>> fix that).
> This was covered on this list a few months ago, in
> http://archives.postgresql.org/pgsql-performance/2009-08/msg00184.php and
> http://
On Fri, 19 Mar 2010, Stephen Frost wrote:
...it has to go to an external on-disk sort (see later on, and how to
fix that).
This was covered on this list a few months ago, in
http://archives.postgresql.org/pgsql-performance/2009-08/msg00184.php and
http://archives.postgresql.org/pgsql-performa
On Mon, 22 Mar 2010 12:14:51 +0100, Merlin Moncure wrote:
On Sun, Mar 21, 2010 at 9:14 PM, Dave Crooke wrote:
Note however that Oracle offers full transactionality and does in-place row
updates. There is more than one way to do it.
There's no free lunch.
MVCC : VACUUM
Oracle : Rollback
On Sun, Mar 21, 2010 at 9:14 PM, Dave Crooke wrote:
> Note however that Oracle offers full transactionality and does in-place row
> updates. There is more than one way to do it.
There's no free lunch. If you do mvcc you have to maintain multiple
versions of the same row.
merlin