I was thinking of something like that, except that the factor I'd use
would be something like 50% or 100% of the current size, capped at (say) 1
GB.
Using fallocate()?
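A minimal sketch of the growth policy being discussed (the 50%/100% factor and the 1 GB cap are from the message above; the helper name and the use of posix_fallocate() are illustrative assumptions, not anything PostgreSQL does today):

```python
import os
import tempfile

# Growth policy from the thread: extend by 50% or 100% of the current
# size, capped at 1 GB per extension. Helper name is mine.
CAP = 1 << 30  # 1 GB

def next_size(current_size, factor=1.0):
    """Size after one extension step: grow by `factor`, capped at 1 GB."""
    return current_size + min(int(current_size * factor), CAP)

# posix_fallocate() asks the filesystem to actually reserve the blocks,
# so later writes can't fail with ENOSPC and the new extent is more
# likely to be laid out contiguously.
with tempfile.NamedTemporaryFile() as f:
    target = next_size(1 << 20)             # pretend the file is 1 MB now
    if hasattr(os, "posix_fallocate"):      # Unix-only call
        os.posix_fallocate(f.fileno(), 0, target)
        assert os.fstat(f.fileno()).st_size == target
```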
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
Yeb Havinga wrote:
Matthew Wakeling wrote:
Matthew Wakeling wrote:
A second, quite distinct issue is the general performance of GiST
indexes, which is also mentioned in the old thread linked from Open
Items. For that, we have a test case at
On Wed, Mar 17, 2010 at 7:32 AM, Pierre C li...@peufeu.com wrote:
I was thinking of something like that, except that the factor I'd use
would be something like 50% or 100% of the current size, capped at (say) 1 GB.
This turns out to be a bad idea. One of the first things Oracle DBAs
are told to do
Rob Wultsch wrote:
Are there any particular performance optimizations that would be in
play in such a scenario?
You'd want to increase maintenance_work_mem significantly, just for the
sessions that are running these. Something like this:
SET maintenance_work_mem = '1GB';
I don't know
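A sketch of how that setting is typically scoped to just the one session doing the build, so other backends keep the default (the table and index names here are hypothetical):

```sql
-- Applies to this session only; other backends are unaffected.
SET maintenance_work_mem = '1GB';
CREATE INDEX idx_orders_created ON orders (created_at);  -- hypothetical names
RESET maintenance_work_mem;
```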
On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote:
We've enjoyed our FusionIO drives very much. They can do 100k iops
without breaking a sweat.
Yeah, performance is excellent. I bet we could get more, but CPU was the
bottleneck in our test, since it was just a demo server :(
--
Devrim GÜNDÜZ
On Wed, 2010-03-17 at 14:30 +0200, Devrim GÜNDÜZ wrote:
On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote:
We've enjoyed our FusionIO drives very much. They can do 100k iops
without breaking a sweat.
Yeah, performance is excellent. I bet we could get more, but CPU was the
bottleneck in our
On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote:
On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:
I've been hearing bad things from some folks about the quality of the
FusionIO drives from a durability standpoint.
Can you be more specific about that? Durability over what time
Greg Smith g...@2ndquadrant.com writes:
Rob Wultsch wrote:
At a minimum I assume that if both of the commands were started at
about the same time they would each scan the table in the same
direction and whichever creation was slower would benefit from most of
the table data it needed being
On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device,
which wear levels across 100GB of actual installed capacity.
http://community.fusionio.com/forums/p/34/258.aspx#258
20% of overall capacity free for levelling
On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Smith g...@2ndquadrant.com writes:
Rob Wultsch wrote:
At a minimum I assume that if both of the commands were started at
about the same time they would each scan the table in the same
direction and whichever creation was
See below:
On Wed, Mar 10, 2010 at 9:28 PM, Paul McGarry p...@paulmcgarry.com wrote:
Hi there,
I'm after a little bit of advice on the shared_buffers setting (I have
read the various docs on/linked from the performance tuning wiki page,
some very helpful stuff there so thanks to those
Greg - with Oracle, I always do fixed 2GB DBFs for portability, and
preallocate the whole file in advance. However, the situation is a bit
different in that Oracle will put blocks from multiple tables and indexes in
a DBF if you don't tell it differently.
Tom - I'm not sure what Oracle does,
On 3/17/10 2:52 AM, Greg Stark wrote:
On Wed, Mar 17, 2010 at 7:32 AM, Pierre C li...@peufeu.com wrote:
I was thinking of something like that, except that the factor I'd use
would be something like 50% or 100% of the current size, capped at (say) 1 GB.
This turns out to be a bad idea. One of the
Greg is correct, as usual. Geometric growth of files is A Bad Thing in an
Oracle DBA's world, since you can unexpectedly (automatically?) run out of file
system space when the database determines it needs x% more extents than last
time.
The concept of contiguous extents, however, has some
On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:
On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device,
which wear levels across 100GB of actual installed capacity.
On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:
I've been hearing bad things from some folks about the quality of the
FusionIO drives from a durability standpoint.
Can you be more specific about that? Durability over what time frame? How many
devices in the sample set? How did FusionIO
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which
wear levels across 100GB of actual installed capacity.
http://community.fusionio.com/forums/p/34/258.aspx#258
Max drive performance would be about 41TB/day, which coincidentally works out
very close to the 3 year
On Wednesday 17 March 2010 19:44:56 Greg Smith wrote:
Rob Wultsch wrote:
On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:
No, it's not optimistic in the least, at least not since we implemented
synchronized seqscans (in 8.3 or thereabouts).
Where can I find details
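For reference, the synchronized-scan behavior Tom mentions is controlled by a configuration parameter; a quick way to check it (assuming PostgreSQL 8.3 or later):

```sql
SHOW synchronize_seqscans;  -- 'on' by default since 8.3
```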
Greg,
Did you ever contact them and get your hands on one?
We eventually did see long SSD rebuild times after a server crash as well, but
the data came back uncorrupted, per my blog post. This is a good case for Slony
slaves. Anyone in a high-TX, low-downtime environment would have already engineered
On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote:
On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:
On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device,
which wear levels across 100GB of actual installed
On Wed, 17 Mar 2010, Brad Nicholson wrote:
On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote:
On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:
On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which
wear
Andres Freund escribió:
I find it much easier to believe such issues exist on tables, in
contrast to indexes. The likelihood of sequential accesses on an index
is small enough on a big table to make it unlikely to matter much.
Vacuum walks indexes sequentially, for one.
Alvaro Herrera wrote:
Andres Freund escribió:
I find it much easier to believe such issues exist on tables, in contrast
to indexes. The likelihood of sequential accesses on an index is small
enough on a big table to make it unlikely to matter much.
Vacuum walks indexes
I am running into a problem with a particular query. The execution plan
cost shows that the Seq Scan is a better bet (cost=54020.49..54020.55)
than the index scan forced with 'enable_seqscan = false'
(cost=1589703.87..1589703.93). But when I run the query both ways I get
a vastly different result
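A common way to compare the two plans directly, for diagnosis only (the SELECT placeholders stand in for the poster's query, which isn't shown in full):

```sql
SET enable_seqscan = off;     -- session-local, for diagnosis only
EXPLAIN ANALYZE SELECT ...;   -- the index plan, with actual runtimes
RESET enable_seqscan;
EXPLAIN ANALYZE SELECT ...;   -- the planner's preferred seq-scan plan
```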
On Wed, Mar 17, 2010 at 5:25 PM, Christian Brink cbr...@r-stream.com wrote:
- Index Scan using sales_tranzdate_index on sales s
(cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 rows=225
loops=1)
Have you tried increasing the statistics on that table (and
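A sketch of raising the statistics target for the relevant column (the column name tranzdate is an assumption inferred from the index name sales_tranzdate_index, and 500 is an illustrative value):

```sql
ALTER TABLE sales ALTER COLUMN tranzdate SET STATISTICS 500;  -- column name assumed
ANALYZE sales;
```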
Christian Brink cbr...@r-stream.com writes:
Is there a way to rewrite or hint the planner to get me the better plan
without resorting to 'enable_seqscan' manipulation (or am I missing
something)?
I think your problem is here:
PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC
On Mar 17, 2010, at 7:41 AM, Brad Nicholson wrote:
As an aside, some folks in our Systems Engineering department here did
do some testing of FusionIO, and they found that the helper daemons were
inefficient and placed a fair amount of load on the server. That might
be something to watch of
I'm running 8.4.2 and have noticed a similar heavy preference for
sequential scans and hash joins over index scans and nested loops. Our
database can basically fit in cache 100%, so this may not be
applicable to your situation, but the following params seemed to help
us:
seq_page_cost = 1.0
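The parameter list is cut off here; settings commonly adjusted alongside seq_page_cost for a fully cached database might look like the following (values are illustrative, not the poster's actual ones):

```sql
SET seq_page_cost = 1.0;
SET random_page_cost = 1.1;        -- near seq cost when everything is cached
SET effective_cache_size = '8GB';  -- illustrative value
```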