On Wed, Jun 02, 2010 at 11:58:47AM +0100, Tom Wilcox wrote:
> Hi,
>
> Sorry to revive an old thread but I have had this error whilst trying to
> configure my 32-bit build of postgres to run on a 64-bit Windows Server
> 2008 machine with 96GB of RAM (that I would very much like to use with
> postgres).
Jori,
What is the PostgreSQL
version/shared_buffers/work_mem/effective_cache_size/default_statistics_target?
Are the statistics for the table up to date? (Run analyze verbose
to update them.) Table and index structure would be nice to know, too.
If all else fails you can set enable_seqscan = off.
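Something along these lines (the table name is only a placeholder):

    ANALYZE VERBOSE events;     -- refresh planner statistics for the table
    SET enable_seqscan = off;   -- per-session switch, useful to test the index plan
    RESET enable_seqscan;       -- turn it back on afterwards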
Tom,
A 32-bit build can only address at most 4 GB - certainly not 60 GB. Also,
Windows doesn't do well with large shared buffer sizes anyway. Try setting
shared_buffers to 2 GB and let the OS file system cache handle the rest.
Your other option, of course, is a nice 64-bit linux variant,
On 03/06/10 11:30, Craig James wrote:
I'm testing/tuning a new midsize server and ran into an inexplicable
problem. With an RAID10 drive, when I move the WAL to a separate
RAID1 drive, TPS drops from over 1200 to less than 90! I've checked
everything and can't find a reason.
Are the 2 n
I'm testing/tuning a new midsize server and ran into an inexplicable problem.
With an RAID10 drive, when I move the WAL to a separate RAID1 drive, TPS drops
from over 1200 to less than 90! I've checked everything and can't find a
reason.
Here are the details.
8 cores (2x4 Intel Nehalem 2 G
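Assuming the TPS figures come from a pgbench-style run (not stated above), the
test would look roughly like:

    pgbench -i -s 100 testdb          # initialize a test database (names are placeholders)
    pgbench -c 10 -j 2 -T 60 testdb   # 10 clients for 60 seconds; reports TPS at the end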
"Kevin Grittner" writes:
> Jori Jovanovich wrote:
>> what is the recommended way to solve this?
> The recommended way is to adjust your costing configuration to
> better reflect your environment.
Actually, it's probably not the costs so much as the row estimates.
For instance, that first query
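One quick way to check is to compare the planner's estimates against reality in
EXPLAIN ANALYZE output; a rough sketch with placeholder names:

    EXPLAIN ANALYZE
    SELECT count(*) FROM events WHERE created_at > now() - interval '1 day';
    -- compare the planner's "rows=" estimate on each node with the
    -- "actual ... rows=" figure; a large mismatch points at the statistics,
    -- not the cost settings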
2010/6/2 Jori Jovanovich
> hi,
>
> I have a problem space where the main goal is to search backward in time
> for events. Time can go back very far into the past, and so the
> table can get quite large. However, the vast majority of queries are all
> satisfied by relatively recent data. I have
Jori Jovanovich wrote:
> what is the recommended way to solve this?
The recommended way is to adjust your costing configuration to
better reflect your environment. What version of PostgreSQL is
this? What do you have set in your postgresql.conf file? What does
the hardware look like? How b
hi,
I have a problem space where the main goal is to search backward in time for
events. Time can go back very far into the past, and so the
table can get quite large. However, the vast majority of queries are all
satisfied by relatively recent data. I have an index on the row creation
date and
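A sketch of the usual pattern for this kind of "recent events" workload,
assuming a hypothetical events table with a creation-date column:

    CREATE INDEX events_created_idx ON events (created_at DESC);

    SELECT *
      FROM events
     WHERE created_at <= now()
     ORDER BY created_at DESC
     LIMIT 100;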
On 03/06/10 02:53, Alan Hodgson wrote:
On Tuesday 01 June 2010, Mark Kirkwood
wrote:
I'm helping set up a Red Hat 5.5 system for Postgres. I was going to
recommend xfs for the filesystem - however it seems that xfs is
supported as a technology preview "layered product" for 5.5. This
apparently means that the xfs tools are only
On Mon, May 24, 2010 at 12:50 PM, Łukasz Dejneka wrote:
> Hi group,
>
> I could really use your help with this one. I don't have all the
> details right now (I can provide more descriptions tomorrow and logs
> if needed), but maybe this will be enough:
>
> I have written a PG (8.3.8) module, which
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
> Tom Wilcox wrote:
> > Is it possible to get postgres to make use of the available 96GB
> > RAM on a Windows 32-bit build?
>
> I would try setting shared_buffers to somewhere between 200MB and 1GB
> and set effective_cache_size = 90GB or so.
Tom Wilcox wrote:
> Is it possible to get postgres to make use of the available 96GB
> RAM on a Windows 32-bit build?
I would try setting shared_buffers to somewhere between 200MB and 1GB
and set effective_cache_size = 90GB or so. The default behavior of
Windows was to use otherwise idle RAM f
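In postgresql.conf terms the suggestion is roughly (shared_buffers needs a
restart to change; the exact values are only examples):

    shared_buffers = 512MB          # anywhere in the suggested 200MB-1GB range
    effective_cache_size = 90GB     # planner hint only; no memory is actually allocated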
On Thu, May 27, 2010 at 9:01 AM, venu madhav wrote:
> Thanks for the reply..
> I am using postgres 8.0.1 and since it runs on a client box, I
> can't upgrade it. I've set the auto vacuum nap time to 3600 seconds.
You've pretty much made autovac run every 5 hours with that setting.
What
On Tue, Jun 1, 2010 at 9:03 AM, Torsten Zühlsdorff
wrote:
> Hello,
>
> I have a set of unique data with about 150,000,000 rows. Regularly I get a
> list of data which contains many times more rows than the already stored
> set, often around 2,000,000,000 rows. Within these rows are many duplica
Hi,
Sorry to revive an old thread but I have had this error whilst trying to
configure my 32-bit build of postgres to run on a 64-bit Windows Server
2008 machine with 96GB of RAM (that I would very much like to use with
postgres).
I am getting:
2010-06-02 11:34:09 BST FATAL: requested share
Hi,
> Hmm, that's nice, though I cannot but wonder whether the exclusive lock
> required by CLUSTER is going to be a problem in the long run.
>
Not an issue; the inserts are one-time (or very rare; at most: once a year).
> Hm, keep in mind that if the station clause alone is not selective
> enou
Sorry, Alvaro.
I was contemplating using a GIN or GiST index as a way of optimizing the
query.
Instead, I found that re-inserting the data in order of station ID (the
primary look-up column) and then CLUSTER'ing on the station ID, taken date,
and category index increased the speed by an order of
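For reference, the rough shape of that approach, with placeholder table and
index names:

    CREATE INDEX obs_station_taken_cat_idx
        ON observations (station_id, taken, category_id);
    CLUSTER observations USING obs_station_taken_cat_idx;
    ANALYZE observations;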
I need a tutorial to use and understand the graphical "Explain" feature
of pgAdmin III. Do you have one?
Do you have it?
Thanks,
Jeres.
Tom,
Thank you for your reply!
I am encountering a context-switch storm problem.
We captured the pg_locks data when the context-switch rate was over 200K/sec.
We found that the CS value is related to the count of
ExclusiveLocks.
And I don't know how to make the probl
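A quick way to see which lock modes dominate when the storm hits is a grouped
query against pg_locks, for example:

    SELECT mode, granted, count(*)
      FROM pg_locks
     GROUP BY mode, granted
     ORDER BY count(*) DESC;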
Hello,
I have a set of unique data with about 150,000,000 rows. Regularly I
get a list of data which contains many times more rows than the
already stored set, often around 2,000,000,000 rows. Within these rows
are many duplicates, and often the whole set of already stored data.
I want to store ju
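One common approach is to load each incoming batch into a staging table and
insert only the rows that are not already stored; a rough sketch with
placeholder names (the file path and the "id" key column are hypothetical):

    CREATE TEMPORARY TABLE staging (LIKE main_data);
    COPY staging FROM '/path/to/batch.csv' CSV;
    INSERT INTO main_data
    SELECT DISTINCT s.*
      FROM staging s
     WHERE NOT EXISTS (SELECT 1 FROM main_data m WHERE m.id = s.id);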
Hi,
I have two similar queries that calculate "group by" summaries over a huge
table (74.6 million rows).
The only difference between the two queries is the number of columns the
group by is performed on.
This difference causes two different plans which vary greatly in
performance.
Postgres
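Comparing the two plans side by side with EXPLAIN ANALYZE usually shows where
they diverge; schematically (names are placeholders):

    EXPLAIN ANALYZE SELECT col_a, col_b,        sum(val) FROM big_table GROUP BY col_a, col_b;
    EXPLAIN ANALYZE SELECT col_a, col_b, col_c, sum(val) FROM big_table GROUP BY col_a, col_b, col_c;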
I made a brute-force check and indeed, for one of the parameters the query was
switching to sequential scans (or bitmap scans with a condition on survey_pk=16
only if sequential scans were off). After a closer look at the plan cardinalities
I thought it would be worthwhile to increase the histogram size an
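Raising the per-column statistics target and re-analyzing is the usual way to
do that; for example (the table name is a placeholder):

    ALTER TABLE measurements ALTER COLUMN survey_pk SET STATISTICS 500;
    ANALYZE measurements;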
Thanks for the reply..
I am using postgres 8.0.1 and since it runs on a client box, I
can't upgrade it. I've set the auto vacuum nap time to 3600 seconds.
On Thu, May 27, 2010 at 8:03 PM, Bruce Momjian wrote:
> venu madhav wrote:
> > Hi All,
> >In my application we are using po
You can try Scientific Linux 5.x; it is based on CentOS plus XFS and some other
software for HPC.
It has had XFS for years.
--- On Wed, 6/2/10, Alan Hodgson wrote:
> From: Alan Hodgson
> Subject: Re: [PERFORM] File system choice for Red Hat systems
> To: pgsql-performance@postgresql.org
> Date: Wednesday
On Tuesday 01 June 2010, Mark Kirkwood
wrote:
> I'm helping set up a Red Hat 5.5 system for Postgres. I was going to
> recommend xfs for the filesystem - however it seems that xfs is
> supported as a technology preview "layered product" for 5.5. This
> apparently means that the xfs tools are only
On Wednesday 02 June 2010 13:37:37 Mozzi wrote:
> Hi
>
> Thanx mate, Create Index seems to be the culprit.
> Is it normal to just use 1 CPU tho?
If it is a single-threaded process, then yes.
And a "Create index" on a single table will probably be single-threaded.
If you now start a "create index"
On 02.06.2010 12:03, Pierre C wrote:
Usually WAL causes a much larger performance hit than this.
Since the following command :
CREATE TABLE tmp AS SELECT n FROM generate_series(1,1000000) AS n
which inserts 1M rows takes 1.6 seconds on my desktop, your 800k rows
INSERT taking more than 3 min
Mozzi,
* Mozzi (mozzi.g...@gmail.com) wrote:
> Thanx mate, Create Index seems to be the culprit.
> Is it normal to just use 1 CPU tho?
Yes, PG can only use 1 CPU for a given query or connection. You'll
start to see the other CPUs going when you have more than one connection
to the database. If y
In response to Mozzi:
> Hi
>
> Thanx mate, Create Index seems to be the culprit.
> Is it normal to just use 1 CPU tho?
If you have only one client, yes. If you have more than one active
connection, every connection will use one CPU. In your case: create
index can use only one CPU.
Regards, And
Hi
Thanx mate, Create Index seems to be the culprit.
Is it normal to just use 1 CPU tho?
Mozzi
On Wed, 2010-06-02 at 12:24 +0100, Matthew Wakeling wrote:
> On Wed, 2 Jun 2010, Mozzi wrote:
> > This box is basically idle at the moment as it is still in testing yet
> > top shows high usage on just 1
On Wed, 2 Jun 2010, Mozzi wrote:
This box is basically idle at the moment as it is still in testing yet
top shows high usage on just 1 of the cores.
First port of call: What process is using the CPU? Run top on a fairly
wide terminal and use the "c" button to show the full command line.
Matth
Hello all
I have a strange problem here.
I have a pgsql database running on Intel hardware; it has 8 cores
hyperthreaded, so you see 16 CPUs.
This box is basically idle at the moment as it is still in testing, yet
top shows high usage on just 1 of the cores.
mpstat gives the output below.
As you can
As promised, I did a tiny benchmark - basically, 8 empty tables are
filled with 100k rows each within 8 transactions (somewhat typical for
my application). The test machine has 4 cores, 64G RAM and RAID1 10k
drives for data.
# INSERTs into a TEMPORARY table:
[joac...@testsrv scaling]$ t
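Per table, the workload is roughly this shape (names and row counts follow the
description above):

    BEGIN;
    CREATE TEMPORARY TABLE t1 (id int, payload text);
    INSERT INTO t1 SELECT g, 'x' FROM generate_series(1, 100000) AS g;
    COMMIT;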