Ryan Hansen wrote:
>
> Hey all,
>
> This may be more of a Linux question than a PG question, but I’m
> wondering if any of you have successfully allocated more than 8 GB of
> memory to PG before.
>
> I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of
> memory, and I’ve tried to comm
Scott Marlowe wrote:
> On Mon, Oct 13, 2008 at 8:55 AM, Carlos Moreno <[EMAIL PROTECTED]> wrote:
>
>> I guess my logical next step is what was suggested by Scott --- install
>> 8.2.4 and repeat the same tests with this one; that should give me
>> interestin
Thanks Greg and others for your replies,
> This is really something to watch out for. One quick thing first
> though: what frequency does the CPU on the new server show when you
> look at /proc/cpuinfo? If you see "cpu MHz: 1000.00"
It was like that in the initial setup --- I just disabled t
Ok, I know that such an open and vague question like this one
is... well, open and vague... But still.
The short story:
Just finished an 8.3.4 installation on a new machine, to replace
an existing one; the new machine is superior (i.e., higher
performance) in virtually every way --- twice as
Alvaro Herrera wrote:
Carlos Moreno wrote:
That is: the first time I run the query, it has to go through the
disk; in the normal case it would have to read 100MB of data, but due
to bloating, it actually has to go through 2GB of data. Ok, but
then, it will load only 100MB (the ones that
Jonah H. Harris wrote:
On 9/23/07, Carlos Moreno <[EMAIL PROTECTED]> wrote:
Wait a second --- am I correct in understanding then that the bloating
you guys are referring to occurs *in memory*??
No, bloating occurs on-disk; but this does affect memory. Bloat means
that even though your
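For readers following along: a rough way to gauge this kind of on-disk
bloat is to compare the table's physical size against its live row
count. A minimal sketch, assuming a hypothetical table "mytable" and
PostgreSQL 8.1 or later (where pg_relation_size() exists):

    -- Physical size on disk, dead row versions included:
    SELECT pg_size_pretty(pg_relation_size('mytable'));

    -- Live rows; if the size above is far larger than rows times the
    -- average row width, the table is bloated:
    SELECT count(*) FROM mytable;

    -- VACUUM VERBOSE reports how many dead row versions it found:
    VACUUM VERBOSE mytable;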
I don't understand this argument --- the newer system has actually
less memory than the old one; how could it fit there and not on the
old one? Plus, how could dropping-recreating the database on the same
machine change the fact that the entire dataset entirely fit or not in
memory??
Because
Jonah H. Harris wrote:
You didn't specify the database size
Oops, sorry about that one --- the full backup is a 950MB file. The
entire database should fit in memory (and the effective_cache_size was
set to 2GB for the machine with 4GB of memory), but my guess is that
the total data size
I recently had a puzzling experience (performance related).
Had a DB running presumably smoothly, on a server with Dual-Core
Opteron and 4GB of RAM (and SATA2 drives with Hardware RAID-1).
(PostgreSQL 8.2.4 installed from source, on a FC4 system --- databases
with no encoding --- initdb -E SQL_
smiley2211 wrote:
Hello all,
Old servers that ran 7.4 performed better than this 8.1.4 version... are
there any MAJOR performance hits with this version???
Are you using the default UNICODE encoding for your databases??
This could potentially translate into a performance hit (considerable?
Ma
Thanks again, Peter, for expanding on these points.
Peter Koczan wrote:
Anyway... One detail I don't understand --- why do you claim that
"You can't take advantage of the shared file system because you can't
share tablespaces among clusters or servers" ???
I say that because you can't s
About 5 months ago, I did an experiment serving tablespaces out of
AFS, another shared file system.
You can read my full post at
http://archives.postgresql.org/pgsql-admin/2007-04/msg00188.php
Thanks for the pointer! I had done a search on the archives, but
didn't find this one (strange,
Hi,
Anyone has tried a setup combining tablespaces with NFS-mounted partitions?
I'm considering the idea as a performance-booster --- our problem is
that we are renting our dedicated server from a hoster that does not
offer much flexibility in terms of custom hardware configuration; so,
the *
Daniel Griscom wrote:
Thanks again for all the feedback. Running on a dual processor/core
machine is clearly a first step, and I'll look into the other
suggestions as well.
As per one of the last suggestions, do also consider a dual hard disk
setup (as in, independent hard disks, to allo
Joshua D. Drake wrote:
CPU is unlikely your bottleneck.. You failed to mention anything
about your I/O setup. [...]
He also fails to mention if he is doing the inserts one at a time or
as a batch.
Would this really be important? I mean, would it affect a *comparison*??
As long as he does
Daniel Griscom wrote:
Several people have mentioned having multiple processors; my current
machine is a uni-processor machine, but I believe we could spec the
actual runtime machine to have multiple processors/cores.
My estimate is that yes, you should definitely consider that.
I'm only ru
Joshua D. Drake wrote:
Am I missing something?? There is just *one* instance of this idea in,
what, four replies?? I find it so obvious, and so obviously the only
solution that has any hope to work, that it makes me think I'm missing
something ...
Is it that multiple PostgreSQL processes wil
Steinar H. Gunderson wrote:
Or use a dual-core system. :-)
Am I missing something?? There is just *one* instance of this idea in,
what, four replies?? I find it so obvious, and so obviously the only
solution that has any hope to work, that it makes me think I'm missing
something ...
Is it
That would be a valid argument if the extra precision came at a
considerable cost (well, or at whatever cost, considerable or not).
the cost I am seeing is the cost of portability (getting similarly
accurate info from all the different operating systems)
Fair enough --- as I mentioned, I w
I don't think it's that hard to get system time to a reasonable level
(if this config tuner needs to run for a min or two to generate
numbers that's acceptable, it's only run once)
but I don't think that the results are really that critical.
Still --- this does not provide a valid argument
been just being naive) --- I can't remember the exact name, but I
remember using (on some Linux flavor) an API call that fills a struct
with data on the resource usage for the process, including CPU time; I
assume measured with precision (that is, immune to issues of other
applications runni
CPUs, 32/64bit, or clock speeds. So any attempt to determine "how
fast"
a CPU is, even on a 1-5 scale, requires matching against a database of
regexes which would have to be kept updated.
And let's not even get started on Windows.
I think the only sane way to try and find the cpu speed is
large problem from a slog perspective; there is no standard way even within
Linux to describe CPUs, for example. Collecting available disk space
information is even worse. So I'd like some help on this portion.
Quite likely, naiveness follows... But aren't things like /proc/cpuinfo,
/
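For the Linux side at least, the naive version is a one-liner; the
portability across operating systems is the hard part noted above:

    grep 'cpu MHz' /proc/cpuinfo   # current per-core frequency; note that
                                   # scaling governors can make it misleading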
Harald Armin Massa wrote:
Carlos,
about your feature proposal: as I learned, nearly all performance
configuration can be done by editing the .INI file and making the
Postmaster re-read it.
So, WHY at all should those parameters be guessed at the installation
of the database? Wouldn't it be a
Tom Lane wrote:
Carlos Moreno <[EMAIL PROTECTED]> writes:
... But, wouldn't it make sense that the configure script
determines the amount of physical memory and perhaps even do a HD
speed estimate to set up defaults that are closer to a
performance-optimized
configuration
Steve Crawford wrote:
Have you changed _anything_ from the defaults? The defaults are set so
PG will run on as many installations as practical. They are not set for
performance - that is specific to your equipment, your data, and how you
need to handle the data.
Is this really the sensible thin
Tom Lane wrote:
Dan Shea <[EMAIL PROTECTED]> writes:
You make it sound so easy. Our database size is at 308 GB.
Well, if you can't update major versions that's understandable; that's
why we're still maintaining the old branches. But there is no excuse
for not running a reasonably rec
Steve wrote:
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performance.
The annotated config file talks about setting shared_buffers to a third
of the available memory --- well, it says "it should be no more than
1/3 of the total amount of memory
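As a concrete illustration only (the numbers are assumptions for a
dedicated 4GB server, not recommendations), a postgresql.conf following
that "no more than 1/3" guidance might read:

    shared_buffers = 1GB            # under 1/3 of total RAM
    effective_cache_size = 2GB      # planner hint: OS cache + shared buffers

(8.2 and later accept these unit suffixes; older versions want
shared_buffers as a number of 8kB pages.)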
Jeff Frost wrote:
You know, I should answer emails at night...
Indeed you shouldN'T ;-)
Carlos
And by the subject, I mean: please provide a "factual" answer, as opposed
to the more or less obvious answer which would be "no one in their sane
mind would even consider doing such thing" :-)
1) Would it be possible to entirely disable WAL? (something like setting a
symlink so that pg_xlog p
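For what it's worth: WAL itself can't be turned off, but the durability
guarantees it buys can be relaxed in postgresql.conf. A sketch of the
usual knobs --- all of them risk data loss on a crash, so scratch and
test databases only:

    fsync = off                 # don't force WAL to disk at commit time
    full_page_writes = off      # skip full-page images after checkpoints
    # synchronous_commit = off  # 8.3 and later: a safer middle ground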
Problem is :), you can purchase SATA Enterprise Drives.
Problem? I would have thought that was a good thing!!! ;-)
Carlos
[EMAIL PROTECTED] wrote:
Hi All,
Currently in one of the projects we want to restrict
unauthorized users to the Postgres DB. Here we are using Postgres
version 8.2.0.
Can anybody tell me how I can provide user-based privileges
to the Postgres DB so that we can restri
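The usual building blocks for this are roles and GRANT inside the
database, plus pg_hba.conf for connection-level control. A minimal
sketch, with hypothetical role/database/table names:

    CREATE ROLE app_user LOGIN PASSWORD 'secret';
    REVOKE ALL ON DATABASE mydb FROM PUBLIC;      -- no access by default
    GRANT CONNECT ON DATABASE mydb TO app_user;
    GRANT SELECT, INSERT, UPDATE ON mytable TO app_user;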
count(*). I was just discussing general performance issues on the
phone line and when my colleague asked me about the size of the
database he just wondered why this takes so long for a job his
MS-SQL server is much faster. [...].
Simple. MSSQL is optimized for this case, and uses "older"
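One common workaround, when an exact number isn't required, is to read
the planner's estimate instead of scanning the whole table; a sketch,
assuming a hypothetical table name and reasonably fresh ANALYZE
statistics:

    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'mytable';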
Joshua D. Drake wrote:
insert into foo(bar) values (bang) (bong) (bing) ...?
Nit pick (with a "correct me if I'm wrong" disclaimer :-)) :
Wouldn't that be (bang), (bong), (bing) ??
Carlos
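For the record, the 8.2 multi-row form with the commas in place:

    INSERT INTO foo (bar) VALUES ('bang'), ('bong'), ('bing');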
Ron wrote:
Speak Their Language (tm) [ ... ] Do The Right Thing (tm)
[...] Not Listening to Reason (tm),
[...]
fiscal or managerial irresponsibility.)
And *here*, of all the instances, you don't put a (TM) sign ??
Tsk-tsk-tsk
:-)
Carlos
I would just like to note here that this is an example of inefficient
strategy.
[ ... ]
Alex may have made the correct, rational choice, given the state of
accounting at most corporations. Corporate accounting practices and
the budgetary process give different weights to cash and labor.
Much better to use flash RAM for read heavy applications.
Even there you have to be careful that seek performance, not
throughput, is what is gating your day to day performance with those
tables.
Isn't that precisely where Flash disks would have *the* big advantage??
I mean, access time
Csaba Nagy wrote:
I can only answer your no. 2:
2) What about the issue with excessive locking for foreign keys when
inside a transaction? Has that issue disappeared in 8.2? And if not,
would it affect similarly in the case of multiple-row inserts?
The exclusive lock is gone alr
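For context: since 8.1 the foreign-key checks take a row-level share
lock on the referenced row rather than an exclusive one, so (roughly)
what the RI trigger does when you insert a child row is:

    -- hypothetical parent/child tables:
    SELECT 1 FROM parent WHERE id = 42 FOR SHARE;
    -- FOR SHARE blocks updates/deletes of that parent row, but two
    -- concurrent child inserts can both hold it at the same time.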
1. If you're running 8.2 you can have multiple sets of values in an
INSERT
http://www.postgresql.org/docs/8.2/static/sql-insert.html
Yeah, I'm running the 8.2.3 version! I didn't know about multiple
insert sets! Thanks for the tip ;-)
No kidding --- thanks for the tip from me as well
Tom Lane wrote:
Carlos Moreno <[EMAIL PROTECTED]> writes:
I would have expected a mind-blowing increase in responsiveness and
overall performance. However, that's not the case --- if I didn't know
better, I'd probably tend to say that it is indeed the opposite
(perf
Florian Weimer wrote:
* Alex Deucher:
I have noticed a strange performance regression and I'm at a loss as
to what's happening. We have a fairly large database (~16 GB).
Sorry for asking, but is this a typo? Do you mean 16 *TB* instead of
16 *GB*?
If it's really 16 GB, you should c
Are there any issues with client libraries version mismatching
backend version?
I'm just realizing that the client software is still running on the
same machine (not the same machine where PG is running) that
has PG 7.4 installed on it, and so it is using the 7.4 client libraries.
Any chance tha
Rodrigo Gonzalez wrote:
I've since discovered a problem that *may* be related to the
deterioration of the performance *now* --- but that still does not
explain the machine choking since last night, so any comments or tips
are still most welcome.
[...]
And the problem that *may* be related
As the subject says. A quite puzzling situation: we not only upgraded the
software, but also the hardware:
Old system:
PG 7.4.x on Red Hat 9 (yes, it's not a mistake!!!)
P4 HT 3GHz with 1GB of RAM and IDE hard disk (120GB, I believe)
New system:
PG 8.2.3 on Fedora Core 4
Athlon64 X2 4200+
Say that I have a dual-core processor (AMD64), with, say, 2GB of memory
to run PostgreSQL 8.2.3 on Fedora Core X.
I have the option to put two hard disks (SATA2, most likely); I'm
wondering what would be the optimal configuration from the point of
view of performance.
I do have the option
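In case it helps others reading the archives: the common way to use the
second disk is to dedicate it to WAL. A sketch with hypothetical paths
(server stopped; on 8.3 and later, initdb's -X option does this at
cluster creation time):

    mv /var/lib/pgsql/data/pg_xlog /disk2/pg_xlog
    ln -s /disk2/pg_xlog /var/lib/pgsql/data/pg_xlog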
Arnau wrote:
Hi Bill,
In response to Arnau <[EMAIL PROTECTED]>:
I have postgresql 7.4.2 running on debian and I have the oddest
postgresql behaviour I've ever seen.
I do the following queries:
espsm_asme=# select customer_app_config_id, customer_app_config_name
from customer_app_config
Tom Lane wrote:
One reason you might consider updating is that newer versions check the
physical table size instead of unconditionally believing
pg_class.relpages/reltuples. Thus, they're much less likely to get
fooled when a table has grown substantially since it was last vacuumed
or analyzed
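To see how far out of date those stored estimates can get, compare them
with the actual file size; a sketch (hypothetical table name,
pg_relation_size() available from 8.1 on, 8192 = default block size):

    SELECT relpages,                        -- as of the last VACUUM/ANALYZE
           pg_relation_size('mytable') / 8192 AS actual_pages
    FROM pg_class
    WHERE relname = 'mytable';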
Tomas Vondra wrote:
When I force it via "set enable_seqscan to off", the index scan
takes about 0.1 msec (as reported by explain analyze), whereas
with the default, it chooses a seq. scan, for a total execution
time around 10 msec!! (yes: 100 times slower!). The table has
20 thousand record
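For anyone wanting to reproduce that comparison, the usual session-level
test looks like this (hypothetical query):

    EXPLAIN ANALYZE SELECT * FROM t WHERE id = 123;  -- planner's own choice
    SET enable_seqscan = off;                        -- discourage seq scans
    EXPLAIN ANALYZE SELECT * FROM t WHERE id = 123;  -- now the index path
    RESET enable_seqscan;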
Hi,
I find various references in the list to this issue of queries
being too slow because the planner miscalculates things and
decides to go for a sequential scan when an index is available
and would lead to better performance.
Is this still an issue with the latest version? I'm doing some
t