Arjen van der Meijden wrote:
AFAIK the Perc 5/i and /e are more or less rebranded LSI cards (they're
not identical in layout etc.), so it would be a bit weird if they
performed much worse than the similar LSIs, wouldn't you think?
I've recently had to replace a PERC4/DC with the exact same card
Rajesh Kumar Mallah wrote:
I've checked out the latest Areca controllers, but the manual
available on their website states there's a limitation of 32 disks
in an array...
Where exactly is the limitation of 32 drives? The datasheet of the
1680 states support for up to 128 drives using
Scott Carey wrote:
You probably don't want a single array with more than 32 drives anyway;
it's almost always better to start carving out chunks and using software
RAID 0 or 1 on top of that, for various reasons. I wouldn't put more than
16 drives in one array on any of these RAID cards, they're
Glyn Astill wrote:
Stupid question, but why do people bother with the Perc line of
cards if the LSI brand is better? It seems the headache of trying
to get the Perc cards to perform is not worth any money saved.
I think in most cases the Dell cards actually cost more; people end
up stuck
On Feb 2, 6:06 am, Edgardo Portal egportal2...@yahoo.com wrote:
On 2010-02-02, Matt White mattw...@gmail.com wrote:
I have a relatively straightforward query that by itself isn't that
slow, but we have to run it up to 40 times on one webpage load, so it
needs to run much faster than it does. Here it is:
SELECT COUNT(*) FROM users, user_groups
WHERE users.user_group_id = user_groups.id AND NOT users.deleted
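One common fix for running a per-group COUNT(*) query 40 times per page load is to collapse the 40 round trips into a single GROUP BY query and read the counts out of one result set. A minimal sketch, using the table and column names from the query above but with in-memory SQLite standing in for PostgreSQL (the data is made up):

```python
import sqlite3

# sqlite3 stands in for PostgreSQL; schema names come from the query in
# the thread, everything else here is illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_groups (id INTEGER PRIMARY KEY);
    CREATE TABLE users (id INTEGER PRIMARY KEY,
                        user_group_id INTEGER REFERENCES user_groups(id),
                        deleted INTEGER NOT NULL DEFAULT 0);
    INSERT INTO user_groups (id) VALUES (1), (2);
    INSERT INTO users (user_group_id, deleted) VALUES
        (1, 0), (1, 0), (1, 1), (2, 0);
""")

# One round trip replaces N per-group COUNT(*) queries.
counts = dict(conn.execute("""
    SELECT user_group_id, COUNT(*)
    FROM users
    WHERE NOT deleted
    GROUP BY user_group_id
""").fetchall())
```

The application then looks counts up in the returned dictionary instead of issuing one query per group.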
Hi. I've only been using PostgreSQL properly for a week or so, so I
apologise if this has been covered numerous times, however Google is
producing nothing of use.
I'm trying to import a large amount of legacy data (billions of
denormalised rows) into a pg database with a completely different
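For bulk loads like this the usual advice is COPY (or at least large multi-row batches inside one transaction) rather than row-by-row INSERTs. A sketch of the batching pattern, with sqlite3 standing in for PostgreSQL since COPY needs a live server; table and data are invented:

```python
import sqlite3

# sqlite3 stands in for PostgreSQL here; the point is the pattern:
# load in large batches inside one transaction, not one INSERT per row.
# For real PostgreSQL, COPY FROM is faster still.
rows = [(i, f"name-{i}") for i in range(10_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE legacy_import (id INTEGER PRIMARY KEY, name TEXT)")

with conn:  # one transaction wraps the whole batch
    conn.executemany("INSERT INTO legacy_import VALUES (?, ?)", rows)

loaded = conn.execute("SELECT COUNT(*) FROM legacy_import").fetchone()[0]
```

Dropping indexes before the load and recreating them afterwards is the other standard lever for imports of this size.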
Robert Haas wrote:
Old row versions have to be kept around until they're no longer of
interest to any still-running transaction.
Thanks for the explanation.
Regarding the snippet above, why would the intermediate history of
multiply-modified uncommitted rows be of interest to anything, or is
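The retention rule Robert describes can be modeled in a few lines: a superseded row version stays "of interest" while any still-running transaction began before the version was superseded. This is a toy model, not PostgreSQL internals; all names and txids are illustrative:

```python
# Toy model of MVCC version retention. Field names mirror PostgreSQL's
# xmin/xmax terminology but nothing here is the real implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Version:
    value: str
    xmin: int                   # txid that created this version
    xmax: Optional[int] = None  # txid that superseded it, if any

def needed(version, oldest_running_txid):
    # A superseded version can be reclaimed only once every transaction
    # still running started after the superseding transaction.
    return version.xmax is None or version.xmax >= oldest_running_txid

history = [Version("v1", xmin=100, xmax=105),
           Version("v2", xmin=105, xmax=110),
           Version("v3", xmin=110)]

# With an old transaction (txid 104) still running, every version must
# be kept; once the oldest running txid passes 110, only v3 is needed.
kept_old = [v.value for v in history if needed(v, oldest_running_txid=104)]
kept_new = [v.value for v in history if needed(v, oldest_running_txid=111)]
```
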
reasonable (8644.1985
MB/s with 1 core - 25017 MB/s with 12 cores). The box is running 2.6.26.6-49
and postgresql 9.0.6.
I'm stumped as to why it's so much slower; any ideas on what might explain it,
or other benchmarks I could run to try to narrow down the cause?
Thanks!
Matt
..94459.07 rows=1926207 width=0) (actual
time=0.005..1475.218 rows=1926207 loops=1)
Total runtime: 2889.360 ms
(3 rows)
Time: 2889.842 ms
On Tuesday, 21 August, 2012 at 3:57 PM, Matt Daw wrote:
Howdy. I'm curious what besides raw hardware speed determines the performance
of a Seq Scan
why this plan is being
chosen? Thanks!
Matt
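On the question of what the planner charges for a Seq Scan: the basic documented cost is one sequential page fetch per heap page plus one CPU charge per tuple. A sketch of that formula with the default planner settings (real planning adds per-operator CPU costs such as quals):

```python
# Sketch of PostgreSQL's basic sequential-scan cost formula using the
# documented default planner constants; a simplification of real planning.
SEQ_PAGE_COST = 1.0    # default seq_page_cost
CPU_TUPLE_COST = 0.01  # default cpu_tuple_cost

def seq_scan_cost(relpages, reltuples):
    # disk side: one sequential page fetch per page in the relation
    # CPU side: one tuple-processing charge per row
    return SEQ_PAGE_COST * relpages + CPU_TUPLE_COST * reltuples

cost = seq_scan_cost(relpages=1_000, reltuples=100_000)
```

At actual execution time, the big non-hardware factors are table bloat (dead pages still scanned) and whether the pages are already in shared buffers or the OS cache.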
Hi Tom, thank you very much. I'll load these tables onto a 9.2 instance and
report back.
Matt
On Fri, Sep 28, 2012 at 2:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Matt Daw m...@shotgunsoftware.com writes:
Howdy, I've been debugging a client's slow query today and I'm curious
about the query
%'::text))
Rows Removed by Filter: 1
Total runtime: 147.411 ms
(14 rows)
On Fri, Sep 28, 2012 at 2:47 PM, Matt Daw m...@shotgunsoftware.com wrote:
Hi Tom, thank you very much. I'll load these tables onto a 9.2 instance
and report back.
Matt
On Fri, Sep 28, 2012 at 2:44 PM
getting a bad estimate. Is this an expected result?
Thanks!
Matt
=
1) Filter on project_id only, row estimate for Bitmap Index Scan quite good.
explain (analyze,buffers) select count(id) from versions WHERE project_id=115;
QUERY PLAN
own.
Matt
On Tue, Feb 26, 2013 at 11:35 AM, Matt Daw m...@shotgunsoftware.com wrote:
Howdy, the query generator in my app sometimes creates redundant
filters of the form:
project_id IN ( list of projects user has permission to see ) AND
project_id = single project user is looking
/9.0/static/row-estimation-examples.html
was a big help.
Matt
On Wed, Feb 27, 2013 at 9:08 AM, Matt Daw m...@shotgunsoftware.com wrote:
Quick follow up... I've found that the row estimate in:
explain select count(id) from versions where project_id IN (80,115)
AND project_id=115
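The low estimate falls out of the planner treating the two clauses as independent and multiplying their selectivities, even though one filter implies the other. A toy reproduction with invented frequency stats (the real numbers would come from the table's most-common-values list):

```python
# Why "project_id IN (80,115) AND project_id = 115" is underestimated:
# the planner multiplies each clause's selectivity as if independent.
# Frequencies below are hypothetical, not from any real table.
total_rows = 1_000_000
freq = {80: 0.30, 115: 0.20}   # hypothetical MCV frequencies

sel_in = freq[80] + freq[115]  # selectivity of project_id IN (80, 115)
sel_eq = freq[115]             # selectivity of project_id = 115

# Independent-clause estimate vs. what the redundant filter really selects.
estimated = total_rows * sel_in * sel_eq
actual = total_rows * freq[115]
```

Here the combined estimate (100,000 rows) is half the true count (200,000), and the gap grows with the number of projects in the IN list.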
the
distribution stats as the operation progresses and if it detects that it
is changing the distribution of data beyond a certain threshold it would
update the pg stats accordingly.
--
Matt Clarkson
Catalyst.Net Limited
On Tue, 2013-05-07 at 18:32 +1200, Mark Kirkwood wrote:
On 07/05/13 18:10, Simon Riggs wrote:
On 7 May 2013 01:23, mark.kirkw...@catalyst.net.nz wrote:
I'm thinking that a variant of (2) might be simpler to implement:
(I think Matt C essentially beat me to this suggestion - he
Hi all. This might be tricky in so much as there are a few moving parts (when
aren't there?), but I've tried to test the Postgres side as much as possible.
Trying to work out a potential database bottleneck with an HTTP application
(written in Go):
Pages that render HTML templates but don’t perform
Thanks for the replies Jeff, Tom and Merlin.
Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to
top 1.3k req/s
Is that tested at the OFFSET and LIMIT of 0 and 15, as shown in the
explain plan?
Yes — 0 (OFFSET) and 16 (LIMIT), or 15 and 31 (i.e. “second page” of
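One standard way to take OFFSET out of the per-page cost is keyset pagination: remember the last id served and filter with `WHERE id > last` instead of skipping rows. A sketch with sqlite3 standing in for PostgreSQL (schema and page size are illustrative):

```python
import sqlite3

# sqlite3 stands in for PostgreSQL; the pattern is the point. OFFSET
# must scan and discard all skipped rows; keyset pagination seeks
# straight to the page via the index on id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(1, 101)])

def page_offset(page, size=16):
    return [r[0] for r in conn.execute(
        "SELECT id FROM items ORDER BY id LIMIT ? OFFSET ?",
        (size, page * size))]

def page_keyset(last_id, size=16):
    return [r[0] for r in conn.execute(
        "SELECT id FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size))]

first = page_keyset(last_id=0)
second = page_keyset(last_id=first[-1])
```

The trade-off is that keyset pagination only supports next/previous navigation over a stable ordering, not jumping to an arbitrary page number.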
| 2.30 | 0.04
WALWriteLock    | 3172725 |  457 | 24.67 | 0.08
CLogControlLock | 1012458 | 6423 | 10.59 | 1.09
The same test done with a readonly workload show virtually no SpinDelay
at all.
Any thoughts or comments on these results are welcome!
Regards,
Matt
Rodrigo Madera wrote:
I am concerned with performance issues involving the storage of DV on
a database.
I thought of some options; which would be the most advisable for speed?
1) Pack N frames inside a container and store the container to the db.
2) Store each frame in a separate record in the
If memory serves me correctly I have seen several posts about this in
the past.
I'll try to recall highlights.
1. Create an md device in Linux large enough to handle the data set
you want to store.
2. Create a HD based copy somewhere as your permanent storage mechanism.
3. Start
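Option (1) from the question above, packing N frames into one container blob, comes down to prefixing the frames with a small offset index so individual frames can be sliced back out without parsing the video. A sketch with a hypothetical container layout (count, then per-frame lengths, then the frame bytes; the frames here are dummy data):

```python
import struct

# Hypothetical container format for option (1): a frame count, a table
# of frame lengths, then the raw frame bytes concatenated. The DB then
# stores one row per container instead of one row per frame.
def pack_frames(frames):
    header = struct.pack("<I", len(frames))
    lengths = b"".join(struct.pack("<I", len(f)) for f in frames)
    return header + lengths + b"".join(frames)

def unpack_frames(blob):
    (n,), pos = struct.unpack_from("<I", blob), 4
    lengths = struct.unpack_from(f"<{n}I", blob, pos)
    pos += 4 * n
    frames = []
    for length in lengths:
        frames.append(blob[pos:pos + length])
        pos += length
    return frames

frames = [b"frame-one", b"frame-two-longer", b"f3"]
container = pack_frames(frames)
roundtrip = unpack_frames(container)
```

Larger containers mean fewer rows and less per-row overhead, at the cost of reading the whole blob to touch a single frame.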