I've got a table with ~121 million records in it. A select count on it
currently takes ~45 minutes, and I finally killed an update that sets a value
on one of the columns after it had run for 17 hours and still not completed.
Queries into the table are butt slow, and
The update query
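For the archives, the usual workarounds look roughly like this -- a rough sketch only, with hypothetical table and column names (big_table, id, some_column) standing in for the real ones:

-- Instant approximate row count from planner statistics (only as fresh as
-- the last ANALYZE); assumes a hypothetical table named big_table:
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'big_table';

-- Update in keyed batches instead of one 17-hour transaction; "id" is a
-- hypothetical indexed primary key. Commit after each range, then repeat
-- for the next range of ids:
UPDATE big_table
   SET some_column = 'new value'
 WHERE id >= 0 AND id < 1000000;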
Auto-vacuum has made Postgres a much more "friendly" system. Is there some
reason the planner can't also auto-ANALYZE in some situations?
Here's an example I ran into:
create table my_tmp_table (...);
insert into my_tmp_table (select some stuff from here and there);
select ... from my_tm
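Until the planner does this automatically, the manual workaround is a single ANALYZE between the bulk load and the first query -- a sketch with hypothetical columns and source, since the original definitions were elided:

create table my_tmp_table (id integer, val text);           -- hypothetical columns
insert into my_tmp_table select id, val from some_source;   -- hypothetical bulk load
analyze my_tmp_table;             -- give the planner real row counts and statistics
select * from my_tmp_table where id = 42;  -- now planned with fresh statistics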
Mark Lewis wrote:
PG could scan the index looking for matches first and only load the
actual rows if it found a match, but that could only be a possible win
if there were very few matches, because the difference in cost between a
full index scan and a sequential scan would need to be greater tha
Alvaro Herrera wrote:
>> Just out of curiosity: Does Postgres store a duplicate of the data in the
index, even for long strings? I thought indexes only had to store the
string up to the point where there was no ambiguity, for example, if I have
"missing", "mississippi" and "misty", the index
We're thinking of building some new servers. We bought some a while back that
have ECC (error correcting) RAM, which is absurdly expensive compared to the
same amount of non-ECC RAM. Does anyone have any real-life data about the
error rate of non-ECC RAM, and whether it matters or not? In my
On Fri, 2007-05-25 at 20:16 +0200, Arnau wrote:
The point I'm worried about is performance. Do you think the performance
would be better executing exactly the same queries, only adding an extra
column (e.g. customer_id) to all the tables, than opening a connection to
just that one customer's DB and execut
Apologies for a somewhat off-topic question, but...
The Linux kernel doesn't properly detect my software RAID1+0 when I boot up.
It detects the two RAID1 arrays, the partitions of which are marked properly.
But it can't find the RAID0 on top of that, because there's no corresponding
device t
[EMAIL PROTECTED] wrote:
various people (not database experts) are pushing to install Oracle
cluster so that they can move all of these to one table with a
customerID column.
They're blowing smoke if they think Oracle can do this. One of my applications
had this exact same problem -- table-p
Scott Marlowe wrote:
OTOH, there are some things, like importing data, which are MUCH faster
in pgsql than in the big database.
An excellent point, I forgot about this. The COPY command is the best thing
since the invention of a shirt pocket. We have a database-per-customer design,
and one o
Jonah H. Harris wrote:
On 6/6/07, Craig James <[EMAIL PROTECTED]> wrote:
They're blowing smoke if they think Oracle can do this.
Oracle could handle this fine.
Oracle fell over dead, even with the best indexing possible,
tuned by the experts, and using partitions keyed to the
Tyrrill, Ed wrote:
I have a table, let's call it A, whose primary key, a_id, is referenced
in a second table, let's call it B. For each unique A.a_id there are
generally many rows in B with the same a_id. My problem is that I want
to delete a row in A when the last row in B that references it
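One way to do that without polling is a delete trigger on B -- a sketch only, assuming B.a_id references A.a_id as described:

CREATE OR REPLACE FUNCTION delete_a_when_last_b_gone() RETURNS trigger AS $$
BEGIN
    -- After a B row disappears, drop the A row if no B rows reference it anymore.
    DELETE FROM A
     WHERE a_id = OLD.a_id
       AND NOT EXISTS (SELECT 1 FROM B WHERE B.a_id = OLD.a_id);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_delete_cleanup
    AFTER DELETE ON B
    FOR EACH ROW EXECUTE PROCEDURE delete_a_when_last_b_gone();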
Tyrrill, Ed wrote:
QUERY PLAN
---
Merge Left Join (cost=38725295.93..42505394.70 rows=13799645 width=8)
(actual time=6503583.342..82
On 2007-06-11 Christo Du Preez wrote:
I really hope someone can shed some light on my problem. I'm not sure
if this is a postgres or postgis issue.
Anyway, we have 2 development laptops and one live server; somehow I
managed to get the same query to perform very well on my laptop, but
on both the
Looking for replication solutions, I find:
Slony-I
Seems good, single master only, master is a single point of failure,
no good failover system for electing a new master or having a failed
master rejoin the cluster. Slave databases are mostly for safety or
for parallelizing queries for performan
Thanks to all who replied and filled in the blanks. The problem with the web
is you never know if you've missed something.
Joshua D. Drake wrote:
Looking for replication solutions, I find...
Slony-II
Dead
Wow, I'm surprised. Is it dead for lack of need, lack of resources, too
complex, or
Andreas Kostyrka wrote:
Slony provides near instantaneous failovers (in the single digit seconds
range). You can script an automatic failover if the master server
becomes unreachable.
But Slony slaves are read-only, correct? So the system isn't fully functional
once the master goes down.
T
Markus Schiltknecht wrote:
Not quite... there's still Postgres-R, see www.postgres-r.org And I'm
continuously working on it, despite not having updated the website for
almost a year now...
I planned on releasing the next development snapshot together with 8.3,
as that seems to be delayed, th
Dolafi, Tom wrote:
min(fmin) | max(fmin) | avg(fmin)
        1 |  55296469 |  11423945
min(fmax) | max(fmax) | avg(fmax)
       18 |      3288 |  11424491
There are 5,704,211 rows in the table.
When you're looking for weird index problems, it's more in
I have the same schema in two different databases. In "smalldb", the two tables of
interest have about 430,000 rows, in "bigdb", the two tables each contain about 5.5
million rows. I'm processing the data, and for various reasons it works out well to process it in
100,000 row chunks. However
The two queries below produce different plans.
select r.version_id, r.row_num, m.molkeys from my_rownum r
join my_molkeys m on (r.version_id = m.version_id)
where r.version_id >= 320
and r.version_id < 330
order by r.version_id;
select r.version_id, r.row_num, m.molkeys from my_rownu
Sorry, I forgot to mention: This is 8.1.4, with a fairly ordinary configuration
on a 4 GB system.
Craig
Craig James wrote:
The two queries below produce different plans.
select r.version_id, r.row_num, m.molkeys from my_rownum r
join my_molkeys m on (r.version_id = m.version_id)
where
Here's an oddity. I have 10 databases, each with about a dozen connections to Postgres
(about 120 connections total), and at midnight they're all idle. These are mod_perl
programs (like a FastCGI -- they stay connected so they're ready for instant service).
So using "ps -ef" and grep, we fin
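An alternative to counting ps output is to ask the database itself -- a sketch assuming the 8.x-era pg_stat_activity columns (current_query and the '<IDLE>' marker; newer releases use state/query instead):

SELECT datname,
       count(*) AS connections,
       sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle
  FROM pg_stat_activity
 GROUP BY datname
 ORDER BY datname;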
Bruno Rodrigues Siqueira wrote:
Who can help me? My SELECT on a database with 1 million records,
using an expression index, takes 6 seconds…
Run your query using
EXPLAIN ANALYZE SELECT ... your query ...
and then post the results to this newsgroup. Nobody can help until they see
the res
Tilmann Singer wrote:
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [20070728 21:05]:
Let's try putting the sort/limit in each piece of the UNION to speed them up
separately.
SELECT * FROM (
(SELECT * FROM large_table lt
WHERE lt.user_id = 12345
ORDER BY created_at DESC LIMIT 10) AS q1
UNION
(S
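Spelled out, the suggestion looks roughly like this -- the second branch and the outer sort/limit are assumptions about the original query's intent, not copied from it:

SELECT * FROM (
    (SELECT * FROM large_table lt
      WHERE lt.user_id = 12345
      ORDER BY created_at DESC LIMIT 10)
    UNION
    (SELECT * FROM large_table lt
      WHERE lt.ref_id = 12345            -- hypothetical second condition
      ORDER BY created_at DESC LIMIT 10)
) AS combined
ORDER BY created_at DESC
LIMIT 10;

Each branch can then use its own index and stop after ten rows, instead of sorting the entire UNION before applying the limit.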
If I delete a whole bunch of tables (like 10,000 tables), should I vacuum
system tables, and if so, which ones? (This system is still on 8.1.4 and isn't
running autovacuum).
Thanks,
Craig
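Without autovacuum, the catalogs that churn when tables are created and dropped are the likely candidates -- a guess at the usual suspects rather than an authoritative list:

VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_attrdef;
VACUUM ANALYZE pg_catalog.pg_type;
VACUUM ANALYZE pg_catalog.pg_index;
VACUUM ANALYZE pg_catalog.pg_depend;
VACUUM ANALYZE pg_catalog.pg_shdepend;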
Carlos Moreno wrote:
Anyone has tried a setup combining tablespaces with NFS-mounted partitions?
There has been some discussion of this recently, you can find it in the
archives (http://archives.postgresql.org/). The word seems to be that NFS can
lead to data corruption.
Craig
Luiz K. Matsumura wrote:
It is connected at full 100Mb and it transfers many things quickly to clients.
It is running Apache and JBoss, the transfer rate is good, and I did scp to copy
many files and it is as quick as the old server.
I have no idea how to continue researching this problem. Now I'm going
to do s
On 3/17/10 2:52 AM, Greg Stark wrote:
On Wed, Mar 17, 2010 at 7:32 AM, Pierre C wrote:
I was thinking in something like that, except that the factor I'd use
would be something like 50% or 100% of current size, capped at (say) 1 GB.
This turns out to be a bad idea. One of the first things Oracl
On 3/22/10 11:47 AM, Scott Carey wrote:
On Mar 17, 2010, at 9:41 AM, Craig James wrote:
On 3/17/10 2:52 AM, Greg Stark wrote:
On Wed, Mar 17, 2010 at 7:32 AM, Pierre C wrote:
I was thinking in something like that, except that the factor I'd use
would be something like 50% or 10
On 3/26/10 4:57 PM, Richard Yen wrote:
Hi everyone,
We've recently encountered some swapping issues on our CentOS 64GB Nehalem
machine, running postgres 8.4.2. Unfortunately, I was foolish enough to set
shared_buffers to 40GB. I was wondering if anyone would have any insight into
why the sw
Most of the time Postgres runs nicely, but two or three times a day we get a
huge spike in the CPU load that lasts just a short time -- it jumps to 10-20
CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike
events. During these spikes, the system is completely unresponsi
On 4/7/10 2:40 PM, Joshua D. Drake wrote:
On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:
Most of the time Postgres runs nicely, but two or three times a day we get a
huge spike in the CPU load that lasts just a short time -- it jumps to 10-20
CPU loads. Today it hit 100 CPU loads
On 4/7/10 3:36 PM, Joshua D. Drake wrote:
On Wed, 2010-04-07 at 14:45 -0700, Craig James wrote:
On 4/7/10 2:40 PM, Joshua D. Drake wrote:
On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:
Most of the time Postgres runs nicely, but two or three times a day we get a
huge spike in the CPU
On 4/7/10 2:59 PM, Tom Lane wrote:
Craig James writes:
Most of the time Postgres runs nicely, but two or three times a day we get a
huge spike in the CPU load that lasts just a short time -- it jumps to 10-20
CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike
events
On 4/7/10 5:47 PM, Robert Haas wrote:
On Wed, Apr 7, 2010 at 6:56 PM, David Rees wrote:
max_fsm_pages = 1600
max_fsm_relations = 625000
synchronous_commit = off
You are playing with fire here. You should never turn this off unless
you do not care if your data becomes irrecoverably corrup
Now that it's time to buy a new computer, Dell has changed their RAID models
from the Perc6 to Perc H200 and such. Does anyone know what's inside these? I
would hope they've stuck with the Megaraid controller...
Also, I can't find any info on Dell's site about how these devices can be
config
On 5/12/10 4:55 AM, Kevin Grittner wrote:
venu madhav wrote:
we display in sets of 20/30 etc. The user also has the option to
browse through any of those records hence the limit and offset.
Have you considered alternative techniques for paging? You might
use values at the edges of the page t
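The "values at the edges of the page" idea (keyset paging) looks roughly like this -- table, column, and parameter names are hypothetical placeholders:

SELECT *
  FROM event_log
 WHERE (created_at, id) < (:last_seen_created_at, :last_seen_id)
 ORDER BY created_at DESC, id DESC
 LIMIT 20;

-- Unlike OFFSET, the cost stays roughly constant no matter how deep the user
-- pages, because an index on (created_at, id) can start at the boundary row.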
On 5/26/10 9:47 AM, Stephen Frost wrote:
* Eliot Gable (egable+pgsql-performa...@gmail.com) wrote:
Since PostgreSQL is written in C, I assume there is no
such additional overhead. I assume that the PL/PGSQL implementation at its
heart also uses SPI to perform those executions. Is that a fair sta
On 5/18/10 3:28 PM, Carlo Stonebanks wrote:
Sample code:
SELECT *
FROM MyTable
WHERE foo = 'bar' AND MySlowFunc('foo') = 'bar'
Let's say this required a SEQSCAN because there were no indexes to
support column foo. For every row where foo <> 'bar' would the filter on
the SEQSCAN short-circuit th
On 5/27/10 2:28 PM, Kevin Grittner wrote:
Craig James wrote:
It would be nice if Postgres had a way to assign a cost to every
function.
The COST clause of CREATE FUNCTION doesn't do what you want?
http://www.postgresql.org/docs/8.4/interactive/sql-createfunction.html
Cool ... I must
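For the archives, the syntax in question -- hypothetical signature, and 10000 is just an illustrative value in units of cpu_operator_cost:

-- Mark an existing function as expensive so the planner prefers to evaluate
-- it after the cheaper conditions:
ALTER FUNCTION MySlowFunc(text) COST 10000;

-- Or declare the cost at creation time:
CREATE OR REPLACE FUNCTION MySlowFunc(text) RETURNS text AS $$
    SELECT upper($1);   -- stand-in body; the real function is the expensive part
$$ LANGUAGE sql COST 10000;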
I'm testing/tuning a new midsize server and ran into an inexplicable problem.
With a RAID10 drive, when I move the WAL to a separate RAID1 drive, TPS drops
from over 1200 to less than 90! I've checked everything and can't find a
reason.
Here are the details.
8 cores (2x4 Intel Nehalem 2 G
On 6/2/10 4:40 PM, Mark Kirkwood wrote:
On 03/06/10 11:30, Craig James wrote:
I'm testing/tuning a new midsize server and ran into an inexplicable
problem. With a RAID10 drive, when I move the WAL to a separate RAID1
drive, TPS drops from over 1200 to less than 90! I've checked
ever
On 6/10/10 12:34 PM, Anne Rosset wrote:
Jochen Erwied wrote:
Thursday, June 10, 2010, 8:36:08 PM you wrote:
psrdb=# (SELECT
psrdb(# MAX(item_rank.rank) AS maxRank
psrdb(# FROM
psrdb(# item_rank item_rank
psrdb(# WHERE
psrdb(# item_rank.project_id='proj2783'
psrdb(# AND item_rank.pf_id IS NULL
[oops, didn't hit "reply to list" first time, resending...]
On 6/15/10 9:02 AM, Steve Wampler wrote:
Chris Browne wrote:
"jgard...@jonathangardner.net" writes:
My question is how can I configure the database to run as quickly as
possible if I don't care about data consistency or durability? T
On 6/16/10 12:00 PM, Josh Berkus wrote:
* fsync=off => 5,100
* fsync=off and synchronous_commit=off => 5,500
Now, this *is* interesting ... why should synch_commit make a difference
if fsync is off?
Anyone have any ideas?
I found that pgbench has "noise" of about 20% (I posted about this
Can anyone tell me what's going on here? I hope this doesn't mean my system
tables are corrupt...
Thanks,
Craig
select relname, pg_relation_size(relname) from pg_class
where pg_get_userbyid(relowner) = 'emol_warehouse_1'
and relname not like 'pg_%'
order by pg_relation
On 6/24/10 4:19 PM, Alvaro Herrera wrote:
Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:
select relname, pg_relation_size(relname) from pg_class
where pg_get_userbyid(relowner) = 'emol_warehouse_1'
and relname not like 'pg_%'
order by pg_rel
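One likely explanation, for the archives: passing relname means the text is looked up via the current search_path, so tables in other schemas can raise "relation ... does not exist". Using the oid sidesteps that -- the same query, minimally changed:

select relname, pg_relation_size(pg_class.oid) as bytes
from pg_class
where pg_get_userbyid(relowner) = 'emol_warehouse_1'
and relname not like 'pg_%'
order by pg_relation_size(pg_class.oid) desc;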
I'm reviving this question because I never figured it out. To summarize: At random
intervals anywhere from a few times per hour to once or twice a day, we see a huge spike
in CPU load that essentially brings the system to a halt for up to a minute or two.
Previous answers focused on "what is
On 6/24/10 9:04 PM, Tom Lane wrote:
Craig James writes:
So what is it that will cause every single Postgres backend to come to life at
the same moment, when there's no real load on the server? Maybe if a backend
crashes? Some other problem?
sinval queue overflow comes to
On 6/25/10 7:47 AM, Tom Lane wrote:
Craig James writes:
On 6/24/10 9:04 PM, Tom Lane wrote:
sinval queue overflow comes to mind ... although that really shouldn't
happen if there's "no real load" on the server. What PG version is
this?
8.3.10. Upgraded based on your
On 6/25/10 9:41 AM, Kevin Grittner wrote:
Craig James wrote:
I always just assumed that lots of backends that would be harmless
if each one was doing very little.
Even if each is doing very little, if a large number of them happen
to make a request at the same time, you can have problems
I've got a new server and want to make sure it's running well. Are these
pretty decent numbers?
8 cores (2x4 Intel Nehalem 2 GHz)
12 GB memory
12 x 7200 SATA 500 GB disks
3WARE 9650SE-12ML RAID controller with BBU
WAL on ext2, 2 disks: RAID1 500GB, blocksize=4096
Database on ext4, 8 disks:
On 6/25/10 3:28 PM, Kevin Grittner wrote:
wrote:
With the PostgreSQL type tables I am not so certain how the data
is arranged within the one file. Does having the data all in one
database allow PostgreSQL to better utilize indexes and caches or
does having a number of smaller databases provide
On 6/25/10 12:03 PM, Greg Smith wrote:
Craig James wrote:
I've got a new server and want to make sure it's running well.
Any changes to the postgresql.conf file? Generally you need at least a
moderate shared_buffers (1GB or so at a minimum) and checkpoint_segments
(32 or higher) in
On 6/30/10 9:42 AM, Dave Crooke wrote:
I haven't jumped in yet on this thread, but here goes
If you're really looking for query performance, then any database which
is designed with reliability and ACID consistency in mind is going to
inherently have some mis-fit features.
Some other ideas
On 7/2/10 6:59 AM, Eliot Gable wrote:
Yes, I have two pl/pgsql functions. They take a prepared set of data
(just the row id of the original results, plus the particular priority
and weight fields) and they return the same set of data with an extra
field called "order" which contains a numerical o
On 7/8/10 9:31 AM, Ryan Wexler wrote:
Thanks a lot for all the comments. The fact that both my windows box
and the old linux box both show a massive performance improvement over
the new linux box seems to point to hardware to me. I am not sure how
to test the fsync issue, but I don't see how th
On 7/8/10 12:47 PM, Ryan Wexler wrote:
On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
Ryan Wexler <r...@iridiumsuite.com> wrote:
> One thing I don't understand is why BBU will result in a huge
> performance gain. I thought
a disk that's exceptionally fast at flushing
its buffers.
Craig
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Craig James
Sent: Thursday, July 08, 2010 4:02 PM
To: pgsql-performance@postgresql.org
Subj
On 7/21/10 5:47 PM, Craig Ringer wrote:
On 21/07/10 22:59, Greg Smith wrote:
A useful trick to know is that if you replace the version number
with "current", you'll get to the latest version most of the time
(sometimes the name of the page is changed between versions, too, but
this isn't that
On 7/21/10 6:47 PM, Greg Smith wrote:
Craig James wrote:
By using "current" and encouraging people to link to that, we could
quickly change the Google pagerank so that a search for Postgres would
turn up the most-recent version of documentation.
How do you propose to encourage pe
On 7/23/10 2:22 AM, Torsten Zühlsdorff wrote:
Craig James schrieb:
A useful trick to know is that if you replace the version number
with "current", you'll get to the latest version most of the time
(sometimes the name of the page is changed between versions, too, but
this isn't that
On 7/24/10 5:57 AM, Torsten Zühlsdorff wrote:
Craig James schrieb:
The problem is that Google ranks pages based on inbound links, so
older versions of Postgres *always* come up before the latest version
in page ranking.
Since 2009 you can deal with this by defining the canonical-version
I can query either my PARENT table joined to PRICES, or my VERSION table joined
to PRICES, and get an answer in 30-40 msec. But put the two together, it jumps
to 4 seconds. What am I missing here? I figured this query would be nearly
instantaneous. The VERSION.ISOSMILES and PARENT.ISOSMILES
On 8/5/10 11:28 AM, Kenneth Cox wrote:
I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM
running CentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6 SATA
7500RPM disks in RAID 6, and for the OLAP workload it feels* slow
My current performance is 85MB/s write, 151 MB/s
On 8/18/10 12:24 PM, Samuel Gendler wrote:
With barriers off, I saw a transaction rate of about 1200. With
barriers on, it was closer to 1050. The test had a concurrency of 40
in both cases.
I discovered there is roughly 10-20% "noise" in pgbench results after running
the exact same test ove
On 8/27/10 5:21 PM, Ozer, Pam wrote:
I have a query that
Select Distinct VehicleId
From Vehicle
Where VehicleMileage between 0 and 15000.
I have an index on VehicleMileage. Is there another way to put an index on a
between? The index is not being picked up. It does get picked up when I run
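A plain btree index does support BETWEEN; whether the planner picks it depends on how selective the range is. A quick way to see what is happening (the index name here is hypothetical):

CREATE INDEX vehicle_mileage_idx ON Vehicle (VehicleMileage);
ANALYZE Vehicle;
EXPLAIN ANALYZE
SELECT DISTINCT VehicleId
  FROM Vehicle
 WHERE VehicleMileage BETWEEN 0 AND 15000;

-- If 0-15000 covers a large fraction of the table, a sequential scan really is
-- the cheaper plan; a narrower range (or a more selective extra predicate)
-- should switch the plan to the index.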
On 9/14/10 9:10 AM, mark wrote:
Hello,
I am relatively new to postgres (just a few months) so apologies if
any of you are bearing with me.
I am trying to get a rough idea of the amount of bang for the buck I
might see if I put in a connection pooling service into the environment
vs our current m
On 10/9/10 6:47 PM, Scott Marlowe wrote:
On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel wrote:
I know that there haven been many discussions on the slowness of count(*) even
when an index is involved because the visibility of the rows has to be
checked. In the past I have seen many suggestions a
On 10/15/10 6:58 PM, Navkirat Singh wrote:
I am interested in finding out the pros/cons of using UUID as a
primary key field. My requirement states that UUID would be perfect
in my case as I will be having many small databases which will link
up to a global database using the UUID. Hence, the n
On 11/11/10 9:13 AM, Mladen Gogala wrote:
Kevin Grittner wrote:
Mladen Gogala wrote:
create a definitive bias toward one type of the execution plan.
We're talking about trying to support the exact opposite.
I understand this, that is precisely the reason for my intervention into the
discu
On 12/7/10 9:34 AM, Tom Polak wrote:
We are in the process of deciding on how to proceed on a database upgrade. We
currently have MS SQL 2000 running on Windows 2003 (on my test server). I was
shocked at the cost for MS SQL 2008 R2 for a new server (2 CPU license). I
started comparing DB’s
On 12/7/10 1:29 PM, Tom Polak wrote:
What I was really after was a quick comparison between the two. I did not
create anything special, just the two tables. One table SQL generated the
records for me. I did not tweak anything after installing either system.
That's not a valid test. Postgres
On 12/17/10 9:08 AM, Tom Polak wrote:
So, I am back on this topic again.
I have a related question, but this might be the correct thread (and
please let me know that). The boss is pressing the issue because of the
cost of MSSQL.
You need to analyze the total cost of the system. For the price
On 12/29/10 6:28 AM, Julian v. Bock wrote:
I have the problem that on our servers it happens regularly under a
certain workload (several times per minute) that all backend processes
get a SIGUSR1 and spend several seconds in ProcessCatchupEvent(). At
100-200 connections (most of them idle) this c
On 12/29/10 11:58 AM, Tom Lane wrote:
Craig James writes:
On 12/29/10 6:28 AM, Julian v. Bock wrote:
I have the problem that on our servers it happens regularly under a
certain workload (several times per minute) that all backend processes
get a SIGUSR1 and spend several seconds in
On 2/3/11 1:34 PM, Shaun Thomas wrote:
I must say that this purist attitude is extremely surprising to me.
All the major DB vendors support optimizer hints, yet in the
Postgres community, they are considered bad with almost religious
fervor. Postgres community is quite unique with the fatwa again
On 2/10/11 9:21 AM, Kevin Grittner wrote:
Shaun Thomas wrote:
how difficult would it be to add that syntax to the JOIN
statement, for example?
Something like this syntax?:
JOIN WITH (correlation_factor=0.3)
Where 1.0 might mean that for each value on the left there was only
one distinct va
We have a medium-sized catalog (about 5 million rows), but some of our customers only
want to see portions of it. I've been experimenting with a customer-specific schema that
contains nothing but a "join table" -- just the primary keys of that portion of
the data that each customer wants to se
Craig James writes:
Here is the "bad" query, which is run on the view:
em=> explain analyze
select version.version_id, version.isosmiles
from hitlist_rows_reset_140
left join version on (hitlist_rows_reset_140.objectid = version.version_id)
where hitlist_rows_reset_140.sorto
On 3/17/11 9:42 AM, J Sisson wrote:
On Thu, Mar 17, 2011 at 10:13 AM, Jeff wrote:
hey folks,
Running into some odd performance issues between a few of our db boxes.
We've noticed similar results both in OLTP and data warehousing conditions here.
Opteron machines just seem to lag behind *espe
On 3/29/11 3:16 PM, Samuel Gendler wrote:
I've got some functionality that necessarily must scan a relatively large
table. Even worse, the total workload is actually 3 similar, but different
queries, each of which requires a table scan. They all have a resultset that
has the same structure,
If I do:
begin;
update some_table set foo = newvalue where a_bunch_of_rows_are_changed;
analyze some_table;
rollback;
does it roll back the statistics? (I think the answer is yes, but I need to be
sure.)
Thanks,
Craig
Alvaro Herrera wrote:
To recap:
- your app only does inserts
- there has been no rollback lately
- there are no updates
- there are no deletes
The only other source of dead rows I can think is triggers ... do you
have any? (Not necessarily on this table -- perhaps triggers on other
tables can
tmp wrote:
what exactly is that
"random_number" column
A random float that is initialized when the row is created and never
modified afterwards. The physical row ordering will clearly not match
the random_number ordering. However, other queries uses a row ordering
by the primary key so I don
This is driving me crazy. I have some Postgres C function extensions in a
shared library. They've been working fine. I upgraded to Fedora Core 6 and
gcc4, and now every time psql(1) disconnects from the server, the serverlog
gets this message:
*** glibc detected *** postgres: mydb mydb [l
Alvaro Herrera wrote:
Craig James wrote:
This is driving me crazy. I have some Postgres C function extensions
in a shared library. They've been working fine. I upgraded to Fedora
Core 6 and gcc4, and now every time psql(1) disconnects from the
server, the serverlog gets this me
Tom Lane wrote:
Craig James <[EMAIL PROTECTED]> writes:
This is driving me crazy. I have some Postgres C function extensions in a
shared library. They've been working fine. I upgraded to Fedora Core 6 and
gcc4, and now every time psql(1) disconnects from the server, the serv
Alvaro Herrera wrote:
Craig James wrote:
Here is my guess -- and this is just a guess. My functions use a
third-party library which, of necessity, uses malloc/free in the
ordinary way. I suspect that there's a bug in the Postgres palloc()
code that's walking over memory that regu
Alvaro Herrera wrote:
Craig James wrote:
Alvaro Herrera wrote:
Craig James wrote:
Here is my guess -- and this is just a guess. My functions use a
third-party library which, of necessity, uses malloc/free in the
ordinary way. I suspect that there's a bug in the Postgres palloc()
Alvaro Herrera wrote:
...Since you've now shown that OpenBabel is
multithreaded, then that's a much more likely cause.
Can you elaborate? Are multithreaded libraries not allowed to be
linked to Postgres?
Absolutely not.
Ok, thanks, I'll work on recompiling OpenBabel without thread support.
Tom Lane wrote:
Craig James <[EMAIL PROTECTED]> writes:
GNU gdb Red Hat Linux (6.5-15.fc6rh)
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under c
Tom Lane wrote:
Magnus Hagander <[EMAIL PROTECTED]> writes:
On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:
Since I'm not a Postgres developer, perhaps one of the maintainers could
update the Postgres manual. In chapter 32.9.6, it says,
"To be precise, a shared
Gregory Stark wrote:
"Tom Lane" <[EMAIL PROTECTED]> writes:
James Mansion <[EMAIL PROTECTED]> writes:
Is there any particular reason not to ensure that any low-level
threading support in libc is enabled right
from the get-go, as a build-time option?
Yes.
1) It's of no value to us
Who is "u
some stability.
I suggest you find out the cause of your problem and then we can do more
research. Talking about us changing the Postgres behavior from the
report of one user who doesn't even have the full details isn't
productive.
I think you're confusing James Mansion with me (Cr
Guy Rouillier wrote:
Scott Marlowe wrote:
I assume you're talking about solid state drives? They have their
uses, but for most use cases, having plenty of RAM in your server will
be a better way to spend your money. For certain high throughput,
relatively small databases (i.e. transactional wo
Mike Smith wrote:
I’ve seen a few performance posts on using different hardware
technologies to gain improvements. Most of those comments are on raid,
interface and rotation speed. One area that doesn’t seem to have
been mentioned is to run your disks empty.
...
On the outside of the dis
Does anyone have performance info about the new Dell Perc/6 controllers? I found a long
discussion ("Dell vs HP") about the Perc/5, but nothing about Perc/6. What's
under the covers?
Here is the (abbreviated) info from Dell on this machine:
PowerEdge 1950 III    Quad Core Intel® Xeon® E5
We're upgrading to a medium-sized server, a Dell PowerEdge 2950, dual-quad CPU's and
8 GB memory. This box can hold at most 8 disks (10K SCSI 2.5" 146 GB drives)
and has Dell's Perc 6/i RAID controller.
I'm thinking of this:
6 disks RAID 1+0 Postgres data
1 disk WAL
1 disk Linux
I'v
Joshua D. Drake wrote:
On Sat, 01 Mar 2008 10:06:54 -0800
Craig James <[EMAIL PROTECTED]> wrote:
We're upgrading to a medium-sized server, a Dell PowerEdge 2950,
dual-quad CPU's and 8 GB memory. This box can hold at most 8 disks
(10K SCSI 2.5" 146 GB drives) and has