never reach the others. Only a manual vacuum can help in this situation; with the
wraparound issue it can be a nightmare.
--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Hello! What do you mean by
"Server is an Amazon RDS instance with default settings and Postgres
9.3.10, with one other database in the instance."
Is PG running with the default config or something else?
Is it the default config as shipped? If so, you definitely need to do some
tuning on it.
Looking at the plan, I saw a lot of disk reads. That can be linked to the small
shared memory dedicated to PG, exactly as Tom said.
Can you share your pg config, or try raising, for example, the shared_buffers
parameter?
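A quick way to inspect and experiment with the setting (the 2GB value below is
only an example; on RDS this is done through a parameter group, and on 9.3 a
change requires editing postgresql.conf and restarting, since ALTER SYSTEM only
exists from 9.4 on):

```sql
-- Check the current value
SHOW shared_buffers;

-- On 9.4+ (self-managed servers), raise it; requires a restart to take effect
ALTER SYSTEM SET shared_buffers = '2GB';  -- example value, size to ~25% of RAM
```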
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Seq Scan on public.dim_cliente (cost=0.00..618.90
rows=16890 width=86) (actual time=0.005..13.736 rows=16890 loops=1)
Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,
dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente
-> Hash (cost=18.90..18.90 rows=590 width=59) (actual
time=0.715..0.715 rows=590 loops=1)
Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor
Buckets: 1024 Batches: 1 Memory Usage: 56kB
-> Seq Scan on public.dim_vendedor (cost=0.00..18.90 rows=590
width=59) (actual time=0.024..0.405 rows=590 loops=1)
Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor
Total runtime: 37249.268 ms
(25 rows)
Is there anything I can do to solve this problem? Is this a bug or a
config problem?
Here the link with a dump of the tables
https://drive.google.com/file/d/0Bwupj61i9BtWZ1NiVXltaWc0dnM/view?usp=sharing
I appreciate your help
Hello!
What is your Postgres version?
Do you have correct statistics on these tables?
Please show your execution plans with buffers, i.e. EXPLAIN
(ANALYZE, BUFFERS) ...
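For example (the table name is taken from later in the thread; the WHERE clause
is only a placeholder for the actual problem query):

```sql
-- Refresh planner statistics first, then capture the plan with buffer counts
ANALYZE external_sync_messages;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM external_sync_messages
WHERE status = 'sent_to_proxy';  -- placeholder predicate
```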
--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
n (organization_id, status, handled_by) so
that all
the conditions can be checked in the index.
regards, tom lane
Hello Bertrand once again!
What's your status? Did the plan change after deploying the three-field
index?
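The three-field index Tom suggested might look like this (the table name is an
assumption based on later messages in the thread):

```sql
-- Covers all three conditions so they can be checked entirely in the index
CREATE INDEX external_sync_messages_org_status_handled_idx
    ON external_sync_messages (organization_id, status, handled_by);
```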
--
Alex Ignatov
Postgres Professional
as I know.
Thanks in Advance.
Hello Javier!
Our tests show that PG 9.4 scales well up to 60 Intel cores, i.e.
pgbench -S with the DB on tmpfs gave us 700,000 tps. Beyond 60 cores, s_lock
dominates CPU usage. 9.5 scales way better.
--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
On 27.10.2015 14:10, Bertrand Paquet wrote:
Yes, I have run VACUUM ANALYZE, no effect.
Bertrand
2015-10-27 12:08 GMT+01:00 Alex Ignatov <a.igna...@postgrespro.ru
<mailto:a.igna...@postgrespro.ru>>:
On 27.10.2015 12:35, Bertrand Paquet wrote:
Hi all,
We
| | sent_to_proxy
(3 rows)
# select count(*) from external_sync_messages;
count
992912
(1 row)
Hello, Bertrand!
Maybe the statistics on external_sync_messages are wrong? I.e. the planner
gives us rows=6385, but the seq scan gives Rows Removed by Filter: 600140.
Maybe you should recalculate them.
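Recalculating is a one-liner; if the estimates stay bad, raising the per-column
statistics target first can help (the column name and target value below are
examples, not taken from the thread):

```sql
-- Optional: collect a larger sample for a skewed column (example column/value)
ALTER TABLE external_sync_messages
    ALTER COLUMN status SET STATISTICS 1000;

-- Recompute the statistics the planner uses
ANALYZE external_sync_messages;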
:17 GMT+01:00 Alex Ignatov <a.igna...@postgrespro.ru
<mailto:a.igna...@postgrespro.ru>>:
On 27.10.2015 14:10, Bertrand Paquet wrote:
Yes, I have run VACUUM ANALYZE, no effect.
Bertrand
2015-10-27 12:08 GMT+01:00 Alex Ignatov <a.igna...@postgrespro.ru
On Sat, Jul 25, 2015 at 8:50 AM, Craig James cja...@emolecules.com wrote:
The canonical advice here is to avoid more connections than you have CPUs,
and to use something like pg_pooler to achieve that under heavy load.
We are considering using the Apache mod_perl fast-CGI system and perl's
PostgreSQL 9.3.5, and have work_mem set to 32MB.
Is there any way I can work around this problem, other than to experiment with
disabling enable_hashjoin for this query/database?
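The experiment can be scoped to a single transaction so it never affects other
queries (the SELECT is a placeholder for the actual problem query):

```sql
BEGIN;
SET LOCAL enable_hashjoin = off;  -- planner falls back to merge/nested-loop joins
EXPLAIN ANALYZE SELECT 1;         -- placeholder: run the problem query here
ROLLBACK;                         -- setting vanishes with the transaction
```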
Alex
Hash Right Join (cost=609908622207072.24..149222936608255392.00
rows=6928136791749514240 width=1223) (actual time
yet?
There have to be some limits to the space and/or counts taken by the
new, uncommitted, data, while the COPY operation is still in progress.
What are they?
Say, I am COPYing 100 TB of data and the bad records are close to the
end of the feed -- how will this all error out?
Thanks,
-- Alex
Thank you, Kevin -- this is helpful.
But it still leaves questions for me.
Kevin Grittner kgri...@ymail.com wrote:
Alex Goncharov alex.goncharov@gmail.com wrote:
The whole thing is aborted then, and the good 99 records are not
making it into the target table.
Right. This is one
Thank you, Kevin -- this is helpful.
Thank you David, too.
But it still leaves questions for me.
Still...
Alex Goncharov alex.goncharov@gmail.com wrote:
How do I decide, before starting a COPY data load, whether such a load
protection (complexity) makes sense (is necessary
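One common protection is to COPY into a staging table first, so a failure late
in the feed aborts only the staging load, not the target table (a sketch; all
names and the path are hypothetical):

```sql
-- Staging table with the same shape as the target
CREATE TABLE staging_load (LIKE target_table INCLUDING DEFAULTS);

-- If this fails near the end, only the staging load is rolled back
COPY staging_load FROM '/path/to/feed.csv' WITH (FORMAT csv);

-- Only after the COPY succeeds, move the rows into the real table
INSERT INTO target_table SELECT * FROM staging_load;
DROP TABLE staging_load;
```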
On Mon, Jan 28, 2013 at 6:55 PM, Filip Rembiałkowski plk.zu...@gmail.comwrote:
On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik alvinni...@gmail.com wrote:
It sure turned out that default settings are not a good fit.
do you know pgtune?
it's a good tool for starters, if you want a fast
mmonc...@gmail.com wrote:
On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik alvinni...@gmail.com wrote:
It sure turned out that default settings are not a good fit. Setting
random_page_cost to 1.0 made the query run in 2.6 seconds, and I clearly
see
that indexes are being used in the explain plan
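The setting can be tried per-session before committing to it; 1.0 tells the
planner random reads cost the same as sequential ones, which mainly holds for
SSDs or a fully cached working set (the database name below is a placeholder):

```sql
SET random_page_cost = 1.0;   -- session-only experiment
EXPLAIN ANALYZE SELECT 1;     -- placeholder: re-check the real query's plan

-- If it helps, persist it per-database (or in postgresql.conf)
ALTER DATABASE mydb SET random_page_cost = 1.0;  -- 'mydb' is a placeholder
```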
On Tue, Jan 29, 2013 at 11:39 AM, Ben Chobot be...@silentmedia.com wrote:
On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:
random_page_cost=1 might not be what you really want.
It would mean that random reads are as fast as sequential reads, which
is probably true only for SSDs.
What
On Tue, Jan 29, 2013 at 2:06 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Sort Key: visits.id, views.id
Sort Method: external sort Disk: 4248kB
What query are you running? The query you originally showed us should
not be doing this sort in the first place.
Cheers,
Jeff
Here is
index definition
CREATE INDEX views_visit_id_visit_buoy_index ON views USING btree
(visit_id, visit_buoy)
On Tue, Jan 29, 2013 at 1:35 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Jan 29, 2013 at 12:59 PM, Alex Vinnik alvinni...@gmail.com
wrote:
On Tue, Jan 29, 2013 at 11:39
: 33698.000 ms
Basically PG is going through all views again and not using the Index Scan
using views_visit_id_index on views. It looks like setting work_mem confuses
the planner somehow. Any idea what can be done to do the sorting in memory? I
suspect it would make the query even faster. Thanks -Alex
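The "external sort Disk: 4248kB" line above suggests work_mem is just below
what the sort needs; raising it for one session is a cheap test (the value is
an example, and the SELECT is a placeholder for the real query):

```sql
SET work_mem = '64MB';     -- example; must exceed the sort's memory requirement
EXPLAIN ANALYZE SELECT 1;  -- placeholder: look for "Sort Method: quicksort"
                           -- instead of "external sort" in the real plan
```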
nothing
Guys, thanks a lot for your input. It is very valuable for us. We plan to
set up a separate dev server similar to the production one, copy all the data
there, and try your suggestions, as we really don't want to do it on the
production server. I also noticed that IOPS jumps to 100% when running this
query. So it is
the rest of the settings are default ones
Thanks
-Alex
On Wed, May 25, 2011 at 12:45, Reuven M. Lerner reu...@lerner.co.il wrote:
Hi, Alex. You wrote:
I think select E'\XXX' is what you are looking for (per the fine
manual:
http://www.postgresql.org/docs/current/static/datatype-binary.html)
I didn't think that I could (easily) build a string
On Wed, May 25, 2011 at 10:59, Reuven M. Lerner reu...@lerner.co.il wrote:
Hi, everyone. I'm working on a project that's using PostgreSQL 8.3, that
requires me to translate strings of octal digits into strings of characters
-- so '141142143' should become 'abc', although the database column
.
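The E'\XXX' escape form mentioned above interprets octal escapes directly, e.g.:

```sql
SELECT E'\141\142\143';  -- octal 141/142/143 are 'a', 'b', 'c', so this yields 'abc'
```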
-- Alex -- alex-goncha...@comcast.net --
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
for you, and libpq's higher performance
may turn out to be a loss for you, in your specific situation.
-- Alex -- alex-goncha...@comcast.net --
On Thu, Nov 11, 2010 at 06:41, Marc Mamin m.ma...@intershop.de wrote:
There are a few places in our data flow where we have to wait for index
creation before being able to distribute the process on multiple threads
again.
Would CREATE INDEX CONCURRENTLY help here?
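For reference, the syntax is (index, table, and column names are hypothetical):

```sql
-- Builds the index without blocking concurrent INSERT/UPDATE/DELETE, at the
-- cost of a slower build; cannot run inside a transaction block
CREATE INDEX CONCURRENTLY my_idx ON my_table (my_column);
```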
On Wed, Oct 27, 2010 at 08:00, Divakar Singh dpsma...@yahoo.com wrote:
I am attaching my code below.
Is any optimization possible in this?
Do prepared statements help in cutting down the insert time to half for this
kind of inserts?
In half? Not for me. Optimization possible? Sure, using the
On Wed, Oct 27, 2010 at 21:08, Divakar Singh dpsma...@yahoo.com wrote:
So another question pops up: What method in PostgreSQL does the stored proc
use when I issue multiple insert (for loop for 100 thousand records) in the
stored proc?
It uses prepared statements (unless you are using
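At the SQL level, the prepared-statement pattern for repeated inserts looks
like this (table and statement names are hypothetical):

```sql
-- Parse and plan the INSERT once
PREPARE ins (int, text) AS
    INSERT INTO my_table (id, val) VALUES ($1, $2);

-- Execute it many times with only parameter binding per call
EXECUTE ins (1, 'first row');
EXECUTE ins (2, 'second row');

DEALLOCATE ins;
```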
On Wed, Oct 13, 2010 at 07:49, Tom Lane t...@sss.pgh.pa.us wrote:
Neil Whelchel neil.whelc...@gmail.com writes:
I concur with Mark's question about whether your UPDATE pushed the table
size across the limit of what would fit in RAM.
Yeah, you said you have ~2GB of ram, just counting the bytes
On Wed, Oct 13, 2010 at 02:38, Neil Whelchel neil.whelc...@gmail.com wrote:
And the cache helps...
So, we are right back to within 10ms of where we started after INSERTing the
data, but it took a VACUUM FULL to accomplish this (by making the table fit in
RAM).
This is a big problem on a
On Thu, Aug 13, 2009 at 1:29 PM, Alvaro
Herreraalvhe...@commandprompt.com wrote:
Alex wrote:
The writer process seems to be using inordinate amounts of memory:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
COMMAND
11088 postgres 13 -2 3217m 2.9g 2.9g S 0 38.7 0:10.46 postgres
How is the index sl_city_etc defined?
Index public.sl_city_etc
Column|Type
--+-
city | text
listing_type | text
post_time| timestamp without time zone
bedrooms | integer
region | text
geo_lat |
Thanks. That's very helpful. I'll take your suggestions and see if
things improve.
I am using Postgres with Rails. Each Rails application thread is
actually a separate process (mongrel) with its own connection.
Normally, the db connection processes (?) look something like this in
top:
15772 postgres 15 0 229m 13m 12m S0 0.8 0:00.09 postgres:
db db [local]
idle
data distributed and served by separate segment hosts? By how
many? Is the network connectivity not a factor? What happens with
the times if you don't sort your result set?
-- Alex -- alex-goncha...@comcast.net --
are satisfactory or at least comparable,
| like-
|
| select distinct so_no, serial_no from observation_all;
| in postgres it takes - 1404.238 ms
| in gp it takes - 1217.283 ms
No surprise here: the data is picked by multiple segment hosts and
never sorted on the master.
-- Alex -- alex-goncha
Forgot to add:
postg...@ec2-75-101-128-4:~$ psql --version
psql (PostgreSQL) 8.3.5
Below is a query that takes 16 seconds on the first run. I am having
generally poor performance for queries in uncached areas of the data,
and often mediocre (500ms-2s+) performance generally, although
sometimes it's very fast. All the queries are pretty similar and use
the indexes this way.
features; click on the Services tab on their site to find out more.
Alex
On Mon, Sep 8, 2008 at 8:19 PM, Rainer Mager [EMAIL PROTECTED] wrote:
1. Move some of the databases to the new drive. If this is a good idea, is
there a way to do this without a dump/restore? I'd prefer to move the folder
if possible since that would be much faster.
What, like tablespaces?
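Tablespaces let you move data to the new drive without a dump/restore (paths
and names below are placeholders; moving a whole database in one statement
needs a newer release, while per-table moves have long been available):

```sql
CREATE TABLESPACE newdisk LOCATION '/mnt/newdrive/pgdata';

-- Move one table (locks the table while its files are copied)
ALTER TABLE big_table SET TABLESPACE newdisk;

-- Or, on 8.4+, move a whole database
ALTER DATABASE mydb SET TABLESPACE newdisk;
```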
archiving.
Alex
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
that speed things up, and if so, where in the drive mappings
should that partition go?
Thank you for your help. I'm mostly interested in if I can speed
these things up from 5-6 days to 1 day, otherwise I need to look at
optimizing it.
Alex
simultaneous queries are small, and currently run
acceptably. It's the big imports, data-mining pulls, and system
manipulation where we routinely wait days on the query that we are
looking to speed up.
Thanks,
Alex
.
But many large queries are getting Out of Memory errors.
Alex
On Nov 29, 2007, at 2:15 PM, Richard Huxton wrote:
Alex Hochberger wrote:
Does anyone have any white papers or basic guides for a large RAM
server?
We are consolidating two databases to enable better data-mining
that currently
are getting cached. try two queries in a row with the same plan.
Alex
On 7/11/07, A. Kretschmer [EMAIL PROTECTED] wrote:
On Wed, 11.07.2007 at 14:52:01 -0400, Alex Deucher wrote the following:
Okay, I got a really different plan, but I did _NOT_ expect a
performance boost like this. I expected the opposite.
It's not really a problem, I just played
On 7/11/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 7/11/07, A. Kretschmer [EMAIL PROTECTED] wrote:
On Wed, 11.07.2007 at 14:52:01 -0400, Alex Deucher wrote the following:
Okay, I got a really different plan, but I did _NOT_ expect a
performance boost like this. I expected the opposite
On 4/6/07, Michael Fuhr [EMAIL PROTECTED] wrote:
On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:
One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded
onto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans
for the first query while the ordering
On 4/9/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 4/6/07, Michael Fuhr [EMAIL PROTECTED] wrote:
On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:
One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded
onto 8.2, both have locale set to C. 8.2 seems to prefer Seq
On 4/9/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 4/9/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 4/6/07, Michael Fuhr [EMAIL PROTECTED] wrote:
On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:
One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded
onto 8.2
and SQL_ASCII; same numbers and plans. Any ideas how to
improve this?
Thanks,
Alex
postgres 7.4
EXPLAIN ANALYZE select pnum, event_pid, code_name, code_description,
code_mcam, event_date, effective_date, ref_country,
ref_country_legal_code, corresponding_pnum, withdrawal_date,
payment_date, extension_date
On 4/6/07, Michael Fuhr [EMAIL PROTECTED] wrote:
On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:
One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded
onto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans
for the first query while the ordering
Ok, well, I dropped the DB and reloaded it and now all seems to be
fine and performing well. I'm not sure what was going on before.
Thanks for everyone's help!
Alex
On 4/3/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 4/3/07, Tom Lane [EMAIL PROTECTED] wrote:
Alex Deucher [EMAIL PROTECTED
a
great deal of heat.
I went w/ Fujitsu. Fortunately these servers are hosted in a very
well ventilated area so i am not that concerned with heat issues.
We have the 2.5 drives (seagates and fujitsus) and they have been
reliable and performed well.
Alex
that others have looked at this before, so you'll probably want to start
there if you're up to it.
Linux used to have (still does?) a RAW interface which might also be
useful. I think the original code was contributed by oracle so they
could support direct IO.
Alex
On 4/5/07, Xiaoning Ding [EMAIL PROTECTED] wrote:
Alex Deucher wrote:
On 4/5/07, Erik Jones [EMAIL PROTECTED] wrote:
On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:
Erik Jones wrote:
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool
On 4/3/07, Tom Lane [EMAIL PROTECTED] wrote:
Alex Deucher [EMAIL PROTECTED] writes:
and here are the query plans referenced in my last email (apologies if
you get these twice, they didn't seem to go through the first time,
perhaps due to size?). I cut out the longer ones.
The first case
On 4/3/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 4/3/07, Tom Lane [EMAIL PROTECTED] wrote:
Alex Deucher [EMAIL PROTECTED] writes:
and here are the query plans referenced in my last email (apologies if
you get these twice, they didn't seem to go through the first time,
perhaps due
On 4/3/07, Merlin Moncure [EMAIL PROTECTED] wrote:
On 4/3/07, Alex Deucher [EMAIL PROTECTED] wrote:
(('{AB6698130,AB7076908,AB6499382,AB643,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188
On 4/3/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 4/3/07, Merlin Moncure [EMAIL PROTECTED] wrote:
On 4/3/07, Alex Deucher [EMAIL PROTECTED] wrote:
(('{AB6698130,AB7076908,AB6499382,AB643,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101
On 4/3/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 4/3/07, Alex Deucher [EMAIL PROTECTED] wrote:
On 4/3/07, Merlin Moncure [EMAIL PROTECTED] wrote:
On 4/3/07, Alex Deucher [EMAIL PROTECTED] wrote:
(('{AB6698130,AB7076908,AB6499382,AB643,AB6385893,AB6378237,AB7146973,AB7127138
On 4/3/07, Tom Lane [EMAIL PROTECTED] wrote:
Alex Deucher [EMAIL PROTECTED] writes:
Turning off bitmapscan ends up doing a sequential scan. Turning off
both bitmapscan and seqscan results in a bitmap heap scan. It doesn't
seem to want to use the index at all. Any ideas?
The ORed
On 4/3/07, Tom Lane [EMAIL PROTECTED] wrote:
Alex Deucher [EMAIL PROTECTED] writes:
Turning off bitmapscan ends up doing a sequential scan. Turning off
both bitmapscan and seqscan results in a bitmap heap scan. It doesn't
seem to want to use the index at all. Any ideas?
The ORed
On 4/3/07, Tom Lane [EMAIL PROTECTED] wrote:
Alex Deucher [EMAIL PROTECTED] writes:
Turning off bitmapscan ends up doing a sequential scan. Turning off
both bitmapscan and seqscan results in a bitmap heap scan. It doesn't
seem to want to use the index at all. Any ideas?
The ORed
servers, but I'd really like to
stick to the 8.x series if possible.
I've included some sample query plans below.
Thanks,
Alex
and here are the query plans referenced in my last email (apologies if
you get these twice, they didn't seem to go through the first time,
perhaps due to size?). I cut out the longer ones.
Thanks,
Alex
postgres 7.4
EXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,
c11 from
On 3/5/07, Guido Neitzer [EMAIL PROTECTED] wrote:
On 05.03.2007, at 19:56, Alex Deucher wrote:
Yes, I started setting that up this afternoon. I'm going to test that
tomorrow and post the results.
Good - that may or may not give some insight in the actual
bottleneck. You never know
On 3/6/07, Ron [EMAIL PROTECTED] wrote:
At 10:25 AM 3/6/2007, Alex Deucher wrote:
On 3/5/07, Guido Neitzer [EMAIL PROTECTED] wrote:
On 05.03.2007, at 19:56, Alex Deucher wrote:
Yes, I started setting that up this afternoon. I'm going to test that
tomorrow and post the results.
Good
, but
performance remains the same. Any ideas?
Thanks,
Alex
On 3/5/07, Guido Neitzer [EMAIL PROTECTED] wrote:
On 01.03.2007, at 13:40, Alex Deucher wrote:
I read several places that the SAN might be to blame, but
testing with bonnie and dd indicates that the SAN is actually almost
twice as fast as the scsi discs in the old sun server. I've tried
On 3/2/07, Florian Weimer [EMAIL PROTECTED] wrote:
* Alex Deucher:
I have noticed a strange performance regression and I'm at a loss as
to what's happening. We have a fairly large database (~16 GB).
Sorry for asking, but is this a typo? Do you mean 16 *TB* instead of
16 *GB*?
If it's
On 3/1/07, Jeff Frost [EMAIL PROTECTED] wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
here are some examples. Analyze is still running on the new db, I'll
post results when that is done. Mostly what our apps do is prepared
row selects from different tables:
select c1,c2,c3,c4,c5 from t1
On 3/2/07, Ron [EMAIL PROTECTED] wrote:
At 10:16 AM 3/2/2007, Alex Deucher wrote:
On 3/2/07, Florian Weimer [EMAIL PROTECTED] wrote:
* Alex Deucher:
I have noticed a strange performance regression and I'm at a loss as
to what's happening. We have a fairly large database (~16 GB).
Sorry
On 3/2/07, Tom Lane [EMAIL PROTECTED] wrote:
Alex Deucher [EMAIL PROTECTED] writes:
Anyway, new numbers after the analyze.
Unfortunately, they are improved, but still not great:
Why are the index names different between the old and new servers?
Is that just cosmetic, or is 8.2 actually
On 3/2/07, Ron [EMAIL PROTECTED] wrote:
At 11:03 AM 3/2/2007, Alex Deucher wrote:
On 3/2/07, Ron [EMAIL PROTECTED] wrote:
May I suggest that it is possible that your schema, queries, etc were
all optimized for pg 7.x running on the old HW?
(explain analyze shows the old system taking ~1/10
On 3/2/07, Ron [EMAIL PROTECTED] wrote:
At 02:43 PM 3/2/2007, Alex Deucher wrote:
On 3/2/07, Ron [EMAIL PROTECTED] wrote:
...and I still think looking closely at the actual physical layout of
the tables in the SAN is likely to be worth it.
How would I go about doing that?
Alex
Hard for me
On 3/2/07, Jeff Frost [EMAIL PROTECTED] wrote:
On Fri, 2 Mar 2007, Guido Neitzer wrote:
On 02.03.2007, at 14:20, Alex Deucher wrote:
Ah OK. I see what you are saying; thank you for clarifying. Yes,
the SAN is configured for maximum capacity; it has large RAID 5
groups. As I said
, but
performance remains the same. Any ideas?
Thanks,
Alex
On 3/1/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
Alex Deucher wrote:
Hello,
I have noticed a strange performance regression and I'm at a loss as
to what's happening. We have a fairly large database (~16 GB). The
original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB
On 3/1/07, Jeff Frost [EMAIL PROTECTED] wrote:
On Thu, 1 Mar 2007, Joshua D. Drake wrote:
Alex Deucher wrote:
Hello,
I have noticed a strange performance regression and I'm at a loss as
to what's happening. We have a fairly large database (~16 GB). The
original postgres 7.4 was running
On 3/1/07, Jeff Frost [EMAIL PROTECTED] wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
Vacuum? Analyze? default_statistics_target? How many shared_buffers?
effective_cache_size? work_mem?
I'm running the autovacuum process on the 8.1 server. vacuuming on
the old server was done manually
On 3/1/07, Jeff Frost [EMAIL PROTECTED] wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
On 3/1/07, Jeff Frost [EMAIL PROTECTED] wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
Vacuum? Analyze? default_statistics_target? How many shared_buffers?
effective_cache_size? work_mem?
I'm
one.
The direct attached SCSI discs on the old database server were getting
45MBps, not the SAN. The SAN got 62/145MBps, which is not as bad.
How many spindles you got in that SAN?
105 IIRC.
Alex
On 3/1/07, Jeff Frost [EMAIL PROTECTED] wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
On 3/1/07, Jeff Frost [EMAIL PROTECTED] wrote:
On Thu, 1 Mar 2007, Alex Deucher wrote:
Postgresql might be choosing a bad plan because your
effective_cache_size
is
way off (it's the default now
On 3/1/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
Alex Deucher wrote:
On 3/1/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
Is the SAN being shared between the database servers and other
servers? Maybe
it was just random timing that gave you the poor write performance on
the old
. You
can also look towards Bizgres, which allegedly eliminates some of these
problems, and is cheaper than most BigRDBMS products.
Alex.
On 12/28/06, Guy Rouillier [EMAIL PROTECTED] wrote:
I don't want to violate any license agreement by discussing performance,
so I'll refer to a large, commercial
The problem I see with software raid is the issue of a battery backed unit:
If the computer loses power, then the 'cache' which is held in system
memory, goes away, and fubars your RAID.
Alex
On 12/5/06, Michael Stone [EMAIL PROTECTED] wrote:
On Tue, Dec 05, 2006 at 01:21:38AM -0500, Alex
.
Alex.
On 12/5/06, Alexandru Coseru [EMAIL PROTECTED] wrote:
Hello..
Thanks for the advices..
Actually , i'm waiting for the clovertown to show up on the market...
Regards
Alex
- Original Message -
From: Sven Geisler [EMAIL PROTECTED]
To: Alexandru Coseru [EMAIL PROTECTED]
Cc: pgsql
/AMCC, LSI).
Thanks,
Alex
On 12/4/06, Scott Marlowe [EMAIL PROTECTED] wrote:
On Mon, 2006-12-04 at 01:17, Alex Turner wrote:
People recommend LSI MegaRAID controllers on here regularly, but I
have found that they do not work that well. I have bonnie++ numbers
that show the controller
http://en.wikipedia.org/wiki/RAID_controller
Alex
On 12/4/06, Michael Stone [EMAIL PROTECTED] wrote:
On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:
This discussion I think is important, as I think it would be useful for
this
list to have a list of RAID cards that _do_ work well
bring us to a _good_ card).
Alex.
On 12/4/06, Greg Smith [EMAIL PROTECTED] wrote:
On Mon, 4 Dec 2006, Alex Turner wrote:
People recommend LSI MegaRAID controllers on here regularly, but I have
found that they do not work that well. I have bonnie++ numbers that
show the controller
.
for the output: http://www.infoconinc.com/test/bonnie++.html (the first line
is a six drive RAID 10 on a 3ware 9500S, the next three are all RAID 1s on
LSI MegaRAID controllers, verified by lspci).
Alex.
On 12/4/06, Greg Smith [EMAIL PROTECTED] wrote:
On Thu, 30 Nov 2006, Carlos H. Reimer wrote
Does anyone have any performance experience with the Dell Perc 5i
controllers in RAID 10/RAID 5?
Thanks,
Alex
On 23 Oct 2006, at 22:59, Jim C. Nasby wrote:
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw 100x improvement in speed, so I suspect
that
if you want performance perl isn't the
On 3 Oct 2006, at 16:04, Merlin Moncure wrote:
On 10/3/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
Some very helpful people had asked that I post the troublesome
code that was
generated by my import program.
I installed a SQL log feature in my import program. I have
posted samples of the
! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
! 0/0 [0/0] voluntary/involuntary context switches
! buffer usage stats:
! Shared blocks: 229066 read, 2 written, buffer hit rate = 55.61%
! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
! Direct blocks: 0 read, 0 written
Alex.
Ahh, good point. Thanks.
On 9/22/06, Tom Lane [EMAIL PROTECTED] wrote:
Alex Turner [EMAIL PROTECTED] writes: How come the query statistics showed
that 229066 blocks were read given that all the blocks in all the tables put
together only total 122968?
You forgot to count the indexes. Also, the use
-indexed, and no changes beyond this insert were made in that time, and
result_entry has recently been vacuumed. Any insight would be greatly
appreciated. Alex
On 9/22/06, Alex Turner [EMAIL PROTECTED] wrote:
Ahh, good point. Thanks.
On 9/22/06, Tom Lane [EMAIL PROTECTED] wrote:
Alex Turner [EMAIL
Do the basic math: If you have a table with 100 million records, each of which
is 200 bytes long, that gives you roughly 20 GB of data (assuming it was all
written neatly and hasn't been updated much). If you have to do a full table
scan, then it will take roughly 400 seconds with a single 10k RPM
Sweet - that's good - RAID 10 support seems like an odd thing to leave out. Alex
On 9/18/06, Luke Lonergan [EMAIL PROTECTED] wrote:
Alex,
On 9/18/06 4:14 PM, Alex Turner [EMAIL PROTECTED] wrote: Be warned, the tech
specs page: http://www.sun.com/servers/x64/x4500/specs.xml#anchor3
doesn't mention