Hi,
I am new to this list, so please forgive me if this does not fit the standards.
I have the following query that I run against PostgreSQL 8.2:
select distinct
m.koid,
m.name,
m.farbe,
m.aktennummer,
m.durchgefuehrt_von,
m.durchgefuehrt_bis,
rf.bezeichnung as rf_bezeichnung,
Balkrishna Sharma b...@hotmail.com writes:
I will have a web application with Postgres 8.4+ as the backend. At any given
time, there will be a maximum of 1000 parallel web users interacting with the
database (read/write).
I wish to do performance testing of 1000 simultaneous reads/writes to the
On Wed, 16 Jun 2010, Balkrishna Sharma wrote:
Hello, I will have a web application with Postgres 8.4+ as the backend. At
any given time, there will be a maximum of 1000 parallel web users interacting
with the database (read/write). I wish to do performance testing of 1000
simultaneous reads/writes to the
When you set up a server that has high throughput requirements, the last
thing you want to do is use it in a manner that cripples its throughput.
Don't try to have 1000 parallel Postgres backends - it will process
those queries more slowly than an optimal setup would. You should aim to have
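To make the comparison concrete, here is a sketch with pgbench (the client
counts and the database name "bench" are illustrative assumptions, not
figures from the thread):

    # Initialize pgbench's tables once, then compare throughput at a
    # pool-sized connection count versus one backend per web user.
    pgbench -i bench
    pgbench -c 32 -T 600 bench     # modest, pool-sized client count
    pgbench -c 1000 -T 600 bench   # 1000 backends; typically lower TPS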
Pierre C li...@peufeu.com writes:
The same is true of a web server: 1000 active PHP interpreters (each eating
several megabytes or more) are not ideal for performance!
For PHP, I like lighttpd with php-fastcgi: the webserver proxies requests
to a small pool of PHP processes, which are only
Kaufhold, Christian (LFD) christian.kaufh...@blfd.bayern.de writes:
I have the following query that I run against PostgreSQL 8.2:
...
But after I run ANALYZE, the same query runs for hours.
Seems like the core of the problem is here:
Thanks Tom,
alter table boden.massnahmeobjekt alter column aktennummer set statistics 1000;
fixed it.
Regards
Christian
-----Original Message-----
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On behalf of Tom Lane
Sent:
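For context: raising the per-column statistics target only helps once fresh
statistics are gathered, so an ANALYZE of the table should follow the ALTER.
A minimal sketch (the database name mydb is an assumption):

    psql -d mydb -c "ALTER TABLE boden.massnahmeobjekt
                     ALTER COLUMN aktennummer SET STATISTICS 1000;"
    # Re-gather statistics so the planner sees the larger sample:
    psql -d mydb -c "ANALYZE boden.massnahmeobjekt;"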
All,
So, I've been discussing this because using PostgreSQL on the caching
layer has become more common than I think most people realize. Jonathan
is one of 4 companies I know of who are doing this, and with the growth
of Hadoop and other large-scale data-processing technologies, I think
Especially as, in repeated tests, PostgreSQL with persistence turned off
is just as fast as the fastest nondurable NoSQL database. And it has a
LOT more features.
An option to completely disable WAL for such use cases would make it a lot
faster, especially in the case of heavy
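"Persistence turned off" is not spelled out in the thread; one plausible
reading, sketched with standard 8.4 settings (the data directory path is an
assumption):

    # All three parameters can be changed with a reload, no restart needed.
    PGDATA=/var/lib/pgsql/data
    CONF="$PGDATA/postgresql.conf"
    echo "fsync = off"              >> "$CONF"  # never force WAL to disk; data lost on crash
    echo "synchronous_commit = off" >> "$CONF"  # don't wait for WAL flush at commit
    echo "full_page_writes = off"   >> "$CONF"  # skip full-page images after checkpoints
    pg_ctl -D "$PGDATA" reload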
Hi,
Josh Berkus j...@agliodbs.com writes:
a) Eliminate WAL logging entirely
b) Eliminate checkpointing
c) Turn off the background writer
d) Have PostgreSQL refuse to restart after a crash and instead call an
external script (for reprovisioning)
Well I guess I'd prefer a per-transaction
Dimitri Fontaine dfonta...@hi-media.com writes:
Josh Berkus j...@agliodbs.com writes:
a) Eliminate WAL logging entirely
b) Eliminate checkpointing
c) Turn off the background writer
d) Have PostgreSQL refuse to restart after a crash and instead call an
external script (for reprovisioning)
Josh Berkus wrote:
a) Eliminate WAL logging entirely
c) Turn off the background writer
Note that if you turn off full_page_writes and set
bgwriter_lru_maxpages=0, you'd get a substantial move in both these
directions without touching any code. Would help prove those as useful
directions to
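A hedged sketch of the two settings Greg names, applied in postgresql.conf
(the data directory path is an assumption):

    PGDATA=/var/lib/pgsql/data
    CONF="$PGDATA/postgresql.conf"
    echo "full_page_writes = off"    >> "$CONF"  # no full-page images in WAL
    echo "bgwriter_lru_maxpages = 0" >> "$CONF"  # disable bgwriter LRU cleaning
    pg_ctl -D "$PGDATA" reload       # both take effect on reload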
Well I guess I'd prefer a per-transaction setting, allowing one to bypass
WAL logging and checkpointing. I'm not sure that forcing the backend to take
care of writing the data itself is a good thing, but if you say so.
Well if the transaction touches a system catalog it better be WAL-logged...
A
Tom Wilcox wrote:
Any suggestions for good monitoring software for linux?
By monitoring, do you mean for alerting purposes or for graphing
purposes? Nagios is the only reasonable choice for the former, while
doing at best a mediocre job at the latter. For the latter, I've found
that Munin
Hello there,
I've searched the web and can find very little on this issue, so I was
hoping those on this list would be able to shed some light on it.
Performance has dropped through the floor after converting my db from ASCII
to UTF8. Is this normal behavior on 8.4.x?
I'm mystified as to the
Well I guess I'd prefer a per-transaction setting, allowing one to bypass
WAL logging and checkpointing.
Not even conceivable. For this to work, we're talking about the whole
database installation. This is only a set of settings for a database
*server* which is considered disposable and
Brant Fitzsimmons brant.fitzsimm...@gmail.com writes:
I've searched the web and can find very little on this issue, so I was
hoping those on this list would be able to shed some light on it.
Performance has dropped through the floor after converting my db from ASCII
to UTF8. Is this normal
Josh Berkus j...@agliodbs.com writes:
(a) and (d) are probably simple, if by reprovisioning you mean
rm -rf $PGDATA; initdb.
Exactly. Followed by scp database_image. Or heck, just replacing the
whole VM.
Right, that would work. I don't think you really need to implement that
inside
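A minimal sketch of such an external reprovisioning script, combining the
rm -rf $PGDATA; initdb step with the scp'd image mentioned above (run only
when the server is already down; paths, host, and image name are assumptions):

    PGDATA=/var/lib/pgsql/data
    rm -rf "$PGDATA"
    initdb -D "$PGDATA"    # rebuild an empty cluster, or instead restore
    # a prebuilt image, as suggested in the thread:
    # scp backup-host:database_image.tgz /tmp/
    # tar -xzf /tmp/database_image.tgz -C "$(dirname "$PGDATA")"
    pg_ctl -D "$PGDATA" start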
On 17/06/2010 22:41, Greg Smith wrote:
Tom Wilcox wrote:
Any suggestions for good monitoring software for linux?
By monitoring, do you mean for alerting purposes or for graphing
purposes? Nagios is the only reasonable choice for the former, while
doing at best a mediocre job at the latter.
Some more on the RHEL 5.5 system I'm helping to set up. Some benchmarking
using pgbench appeared to suggest that wal_sync_method=open_sync was a
little faster than fdatasync [1]. Now I recall some discussion about
this enabling direct I/O and the general flakiness of this on Linux, so
is the
Mark Kirkwood wrote:
Now I recall some discussion about this enabling direct I/O and the
general flakiness of this on Linux, so is the option regarded as safe?
No one has ever refuted the claims in
http://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php that
it can be unsafe under a
The conclusion I read was that Linux O_SYNC behaves like O_DSYNC on
other systems. For WAL, this seems satisfactory?
Personally, I use fdatasync(). I wasn't able to measure a reliable
difference for my far smaller databases, and fdatasync() seems
reliable and fast enough that fighting
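One way to run such a comparison yourself, sketched with pgbench (the
database name and run length are assumptions; wal_sync_method can be changed
with a reload, and the last entry in postgresql.conf wins):

    PGDATA=/var/lib/pgsql/data
    for method in fdatasync open_sync; do
        echo "wal_sync_method = $method" >> "$PGDATA/postgresql.conf"
        pg_ctl -D "$PGDATA" reload
        pgbench -T 600 bench    # compare the reported TPS per method
    done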
On Thu, 2010-06-17 at 18:28 -0400, Brant Fitzsimmons wrote:
Performance has dropped through the floor after converting my db from
ASCII to UTF8.
Converting from ASCII to UTF8 is a no-op.
If you did some configuration changes, you need to tell us which.
On 18/06/10 15:29, Greg Smith wrote:
P.S. Be wary of expecting pgbench to give you useful numbers on a
single run. For the default write-heavy test, I recommend three runs
of 10 minutes each (-T 600 on recent PostgreSQL versions) before I
trust any results it gives. You can get useful data
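Greg's advice as a small loop, assuming a pgbench database named bench that
was already initialized with pgbench -i:

    # Three 10-minute runs of the default write-heavy test; keep each
    # result and compare before trusting the numbers.
    for run in 1 2 3; do
        pgbench -T 600 bench | tee "pgbench_run_${run}.log"
    done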