Hi,
I am in the process of setting up a postgresql server with 12 SAS disks.
I am considering two options:
1) set up a 12-disk RAID 10 array to get maximum raw performance from
the system and put everything on it (i.e. the whole pg cluster, including
WAL, and every tablespace)
2) set up 3 RAID
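Option 2 is cut off above, but presumably splits WAL, data and indexes
across separate arrays. As a minimal sketch of the tablespace side
(mount point and names are hypothetical; WAL would be moved separately,
e.g. by symlinking pg_xlog to its own array):

CREATE TABLESPACE ixspace LOCATION '/mnt/raid_ix/pgdata';
CREATE INDEX sometable_somecol_idx ON sometable (somecol) TABLESPACE ixspace;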
Hi,
My RAID controller is an Adaptec 31205 SAS/RAID controller. The battery
was an option, but I didn't know that at purchase time. So I have no
battery, but the whole system is on a UPS.
I have done quite a few tests using bonnie++, focusing on 'random seek'
results, and found out that:
1)
Hi,
On Friday, February 29, 2008 at 23:56 -0500, Greg Smith wrote:
Wording is intentional--if you don't have a battery for it, the cache has
to be turned off (or set to write-through so it's only being used on
reads) in order for the database to be reliable. If you can't finish
writes
Hi,
On Wednesday, March 5, 2008 at 11:39 +0100, Steinar H. Gunderson wrote:
Without knowing what a lakhs record is,
I had the same question... and Wikipedia gave me the answer: it is an
Indian word meaning 10^5, often used in Indian English.
Franck
ER diagram tool: Yes / No
Query Analysis Tool: Yes / No
Probably other information we could also start sharing...
--
Franck Routier
Axège Sarl - 23, rue Saint-Simon, 63000 Clermont-Ferrand (FR)
Tel: +33 463 059 540
Email: [EMAIL PROTECTED]
Hi,
I have a problem with my database becoming huge (around 150 GB
right now, two thirds of it in only three tables, each having around 30
million tuples; space is spent mainly on indices).
I have a lot of multi-column varchar primary keys (natural keys), and a
lot of foreign keys on these tables
On Monday, June 30, 2008 at 13:24 -0700, Mark Roberts wrote:
Hi Mark,
Is there any particular reason that you're not using a surrogate key?
Well, human readability is the main reason, no standard way to handle
sequences between database vendors being the second... (and also
problems when
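A surrogate-key layout along the lines Mark suggests would be something
like this (table and column names are hypothetical):

CREATE TABLE product (
    id  bigserial PRIMARY KEY,   -- surrogate key, small and uniform
    ref varchar NOT NULL,
    vsn varchar NOT NULL,
    UNIQUE (ref, vsn)            -- natural key kept for human readability
);
CREATE TABLE price (
    product_id bigint NOT NULL REFERENCES product (id),
    amount     numeric
);

The foreign-key and index entries then carry one 8-byte integer instead
of several varchars, which is where the space savings come from.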
Hi,
I have to manage a database that is getting way too big for us.
Currently db size is 304 GB.
One table is accounting for a third of this space.
The table itself has 68,800,000 tuples, taking 28 GB.
There are 39 indices on the table, and many of them use multiple
columns. A lot of these
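To see which of the 39 indices actually pay their way, a query along
these lines (table name hypothetical) lists each index's size and how
often it has been scanned since the statistics were last reset:

SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size,
       idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'bigtable'
ORDER BY pg_relation_size(indexrelid) DESC;

Indexes with idx_scan = 0 are candidates for dropping.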
Hi,
I am trying to restore a table out of a dump, and I get an 'out of
memory' error.
The table I want to restore is 5 GB.
Here is the exact message:
[EMAIL PROTECTED]:/home/backup-sas$ pg_restore -F c -a -d axabas -t cabmnt
axabas.dmp
pg_restore: [archiver (db)] Error while PROCESSING
a sledgehammer
will impact me :-)
Franck
On Thursday, September 16, 2010 at 08:49 -0500, Kevin Grittner wrote:
Franck Routier franck.rout...@axege.com wrote:
I come into cases where the planner under-estimates the number of
rows in some relations, chooses to go for nested loops, and takes
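A common first response to that kind of under-estimate is to raise the
statistics target on the misestimated columns and re-analyze, and, as a
session-only diagnostic, to see whether the plan improves with nested
loops disabled (names are hypothetical):

ALTER TABLE some_table ALTER COLUMN some_col SET STATISTICS 500;
ANALYZE some_table;
SET enable_nestloop = off;  -- diagnostic only, not for production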
Hi,
I have pretty large tables, with columns that might never receive any
data, or always receive data, depending on the customer's needs.
The indexes on these columns are really big, even if the column is never
used, so I tend to add a where col is not null clause on those indexes.
What are the
On 11/10/2012 07:26, Craig Ringer wrote:
* The partial index will only be used for queries that use the
condition WHERE col IS NOT NULL themselves. The planner isn't
super-smart about how it matches index WHERE conditions to query WHERE
conditions, so you'll want to use exactly the same
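Concretely, that means writing the index predicate verbatim into the
query (names are hypothetical):

CREATE INDEX t_col_idx ON t (col) WHERE col IS NOT NULL;
-- repeating the exact condition is the safe way to let the
-- planner match the partial index:
SELECT * FROM t WHERE col = 'x' AND col IS NOT NULL;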
Hi,
I have a postgresql database (8.4) running in production whose
performance is degrading.
There is no single query that underperforms: all queries do.
Another interesting point is that a generic performance test
(https://launchpad.net/tpc-b) gives mediocre performance when run on the
Hi,
I don't know what tpc-b does
tpcb.jar is a Java implementation of the http://www.tpc.org/tpcb/
benchmark. It is not particularly representative of my workload, but
gives a synthetic, db-agnostic view of the system performance.
We use it to get a quick view when comparing different servers
Hi,
I have a query that should be quick, and used to be quick, but is not
anymore... Explain analyze can be seen here http://explain.depesz.com/s/cpV
and shows the query is fundamentally quick: Total runtime: 0.545 ms.
But query execution takes 3.6 seconds. Only 12 rows are returned. Adding
a limit 1 has
by running the database on nfs.
Good luck,
Nik
On Thu, Apr 4, 2013 at 10:48 AM, Franck Routier
franck.rout...@axege.com wrote:
Hi,
I have a query that should be quick, and used to be quick, but is
not anymore... Explain analyze can be seen here
http
On 04/04/2013 18:25, Tom Lane wrote:
Franck Routier franck.rout...@axege.com writes:
Right, explain alone takes 3.6 seconds, so the time really seems to go
to query planning...
Well, you've not shown us the query, so it's all going to be
speculation. But maybe you have some extremely
On 04/04/2013 21:08, Tom Lane wrote:
Franck Routier franck.rout...@axege.com writes:
The request is not using any function. It looks like this:
[ unexciting query ]
Hmph. Can't see any reason for that to take a remarkably long time to
plan. Can you put together a self-contained test case
On 05/04/2013 16:17, Tom Lane wrote:
Franck Routier franck.rout...@axege.com writes:
On 04/04/2013 21:08, Tom Lane wrote:
Hmph. Can't see any reason for that to take a remarkably long time to
plan. Can you put together a self-contained test case demonstrating
excessive planning time
On 05/04/2013 16:17, Merlin Moncure wrote:
Ok, explain (without analyze) is measuring plan time only (not
execution time). Can you confirm that's the time we are measuring
(and again, according to what)? Performance issues here are a
different ball game. Please supply precise version#,
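One way to measure it is from psql: EXPLAIN without ANALYZE plans the
query but does not run it, so the elapsed time shown by \timing is
essentially planning time (the join below is a stand-in for the real
query; the id columns are hypothetical):

\timing on
EXPLAIN SELECT * FROM sandia s JOIN saneds e ON s.id = e.id;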
On 05/04/2013 16:18, Nikolas Everett wrote:
On Fri, Apr 5, 2013 at 9:55 AM, Franck Routier
franck.rout...@axege.com wrote:
On 04/04/2013 21:08, Tom Lane wrote:
Maybe the statistics tables for sandia and saneds are in bad
shape? (don't know
On 05/04/2013 16:17, Tom Lane wrote:
What have you got the statistics targets set to in this database?
Ok, the problem definitely comes from the default_statistics_target
which is obviously too high on the database.
I have experimented with explain on queries with another set of 4 joined
On 06/04/2013 18:27, Tom Lane wrote:
Ok, the problem definitely comes from the default_statistics_target
which is obviously too high on the database.
Yeah, eqjoinsel() is O(N^2) in the lengths of the MCV lists, in the
worst case where there's little overlap in the list memberships.
The
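The practical consequence: check the target, then lower it, either
globally in postgresql.conf or per column where a big MCV list isn't
needed (column name hypothetical):

SHOW default_statistics_target;
ALTER TABLE sandia ALTER COLUMN join_col SET STATISTICS 100;
ANALYZE sandia;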
On 29/03/2013 15:20, Franck Routier wrote:
Hi,
I have a postgresql database (8.4) running in production whose
performance is degrading.
There is no single query that underperforms: all queries do.
Another interesting point is that a generic performance test
(https://launchpad.net/tpc-b
Hi,
I have a table (sanact) with 23,125,525 rows (and a hundred columns).
I am doing a select that did not finish after some 15 hours... Select
is as follows:
select * from sanact where sanact___rfovsnide = 'MYVERSION' order by
sanactcsu;
There is an index on sanact___rfovsnide and doing
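If the plan ends up sorting millions of rows, one option worth testing
is a two-column index that matches both the filter and the ORDER BY, so
rows come back already sorted:

CREATE INDEX sanact_rfovsnide_csu_idx
    ON sanact (sanact___rfovsnide, sanactcsu);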
On 20/03/2014 14:56, Tom Lane wrote:
pg_locks, probably. regards, tom lane
select * from pg_stat_activity shows false in the waiting column for
the query.
Can I rely on that, or should I be investigating further for subtle
types of locks?
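A more direct check than the waiting flag is to look for ungranted lock
requests in pg_locks (column names as on pre-9.2 servers, where
pg_stat_activity still has procpid and current_query):

SELECT l.pid, l.locktype, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;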
Hi,
we are using a single-column index on a huge table to try to make a quick
'select distinct' on the column.
This used to work fine, but... it does not anymore. We don't know what
happened.
Here are the facts:
- request:
SELECT dwhinv___rfovsnide::varchar FROM dwhinv WHERE dwhinv___rfovsnide
Hi,
That is not equivalent to a distinct. There must be more to it than that.
Indeed, this query is used in a loop:
CREATE OR REPLACE FUNCTION small_distinct(IN tablename character
varying, IN fieldname character varying, IN sample anyelement DEFAULT
''::character varying)
RETURNS SETOF
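The body of small_distinct is cut off above; the same 'loose index
scan' idea can also be written as a plain recursive CTE (8.4+), shown
here against the dwhinv column from the original message:

WITH RECURSIVE t AS (
    (SELECT dwhinv___rfovsnide AS v FROM dwhinv
      ORDER BY dwhinv___rfovsnide LIMIT 1)
    UNION ALL
    SELECT (SELECT dwhinv___rfovsnide FROM dwhinv
             WHERE dwhinv___rfovsnide > t.v
             ORDER BY dwhinv___rfovsnide LIMIT 1)
      FROM t WHERE t.v IS NOT NULL
)
SELECT v FROM t WHERE v IS NOT NULL;

Each step uses the index once to fetch the next distinct value, so the
cost is proportional to the number of distinct values, not to the table
size.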
Hi,
On 17/04/2014 20:17, Jeff Janes wrote:
If there are any open transactions (even ones that have never touched
this particular table) which started before the delete was committed,
then the vacuum was obliged to keep those deleted records around, in
case that open transaction happens
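Jeff's point can be checked directly: long-open transactions show up in
pg_stat_activity (pre-9.2 column names, as on this server):

SELECT procpid, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;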
I have found the problem, using this query (found here
http://stackoverflow.com/questions/3312929/postgresql-idle-in-transaction-diagnosis-and-reading-pg-locks)
select pg_class.relname, pg_locks.transactionid, pg_locks.mode,
pg_locks.granted as g, pg_stat_activity.current_query
from pg_locks join pg_stat_activity on pg_locks.pid = pg_stat_activity.procpid
left join pg_class on pg_class.oid = pg_locks.relation;