I've got a table with ~121 million records in it. Select count on it
currently takes ~45 minutes, and an update to the table to set a value
on one of the columns I finally killed after it ran 17 hours and had
still not completed. Queries into the table are butt slow, and
The update
Auto-vacuum has made Postgres a much more friendly system. Is there some
reason the planner can't also auto-ANALYZE in some situations?
Here's an example I ran into:
create table my_tmp_table (...);
insert into my_tmp_table (select some stuff from here and there);
select ... from
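The snippet above cuts off, but the pattern is clear: populate a fresh table, then query it before the planner has any statistics. A minimal runnable sketch of the fix (hypothetical table and column names; SQLite via Python stands in for Postgres just to keep the example self-contained — the ANALYZE statement itself is the same on both):

```python
import sqlite3

# Populate a fresh table, then run ANALYZE so the planner has statistics
# before the big SELECT. Without ANALYZE, a just-created table has no
# statistics and the planner must guess.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_tmp_table (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO my_tmp_table (val) VALUES (?)",
                 [("row-%d" % i,) for i in range(1000)])
conn.execute("CREATE INDEX my_tmp_val_idx ON my_tmp_table (val)")
conn.execute("ANALYZE")  # statistics now exist for the optimizer
stats_rows = conn.execute("SELECT COUNT(*) FROM sqlite_stat1").fetchone()[0]
print(stats_rows)  # at least one statistics row
```

The assertion here is only that statistics exist after ANALYZE; the point of the thread is that nothing collects them automatically between the INSERT and the SELECT.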
Mark Lewis wrote:
PG could scan the index looking for matches first and only load the
actual rows if it found a match, but that could only be a possible win
if there were very few matches, because the difference in cost between a
full index scan and a sequential scan would need to be greater
Alvaro Herrera wrote:
Just out of curiosity: does Postgres store a duplicate of the data in the
index, even for long strings? I thought indexes only had to store the
string up to the point where there was no ambiguity; for example, if I have
"missing", "mississippi" and "misty", the index only
We're thinking of building some new servers. We bought some a while back that
have ECC (error correcting) RAM, which is absurdly expensive compared to the
same amount of non-ECC RAM. Does anyone have any real-life data about the
error rate of non-ECC RAM, and whether it matters or not? In
On Fri, 2007-05-25 at 20:16 +0200, Arnau wrote:
The point I'm worried is performance. Do you think the performance
would be better executing exactly the same queries only adding an extra
column to all the tables e.g. customer_id, than open a connection to the
only one customers DB and
Apologies for a somewhat off-topic question, but...
The Linux kernel doesn't properly detect my software RAID1+0 when I boot up.
It detects the two RAID1 arrays, the partitions of which are marked properly.
But it can't find the RAID0 on top of that, because there's no corresponding
device
[EMAIL PROTECTED] wrote:
various people (not database experts) are pushing to install Oracle
cluster so that they can move all of these to one table with a
customerID column.
They're blowing smoke if they think Oracle can do this. One of my applications
had this exact same problem --
Scott Marlowe wrote:
OTOH, there are some things, like importing data, which are MUCH faster
in pgsql than in the big database.
An excellent point, I forgot about this. The COPY command is the best thing
since the invention of a shirt pocket. We have a database-per-customer design,
and one
Jonah H. Harris wrote:
On 6/6/07, Craig James [EMAIL PROTECTED] wrote:
They're blowing smoke if they think Oracle can do this.
Oracle could handle this fine.
Oracle fell over dead, even with the best indexing possible,
tuned by the experts, and using partitions keyed to the
customerID.
I
Tyrrill, Ed wrote:
I have a table, let's call it A, whose primary key, a_id, is referenced
in a second table, let's call it B. For each unique A.a_id there are
generally many rows in B with the same a_id. My problem is that I want
to delete a row in A when the last row in B that references it
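One common way to get this behavior — delete the row in A once nothing in B references it — is to follow each delete in B with a cleanup pass over A. A sketch with the hypothetical tables A and B from the question (SQLite via Python only to make it runnable; the SQL is plain enough to work on Postgres too):

```python
import sqlite3

# B.a_id references A.a_id; after deleting from B, remove any A rows
# that are no longer referenced by anything in B.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (a_id INTEGER PRIMARY KEY);
    CREATE TABLE b (b_id INTEGER PRIMARY KEY, a_id INTEGER REFERENCES a);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (10, 1), (11, 1), (20, 2);
""")
conn.execute("DELETE FROM b WHERE b_id = 20")  # last reference to a_id = 2
conn.execute("DELETE FROM a WHERE a_id NOT IN (SELECT a_id FROM b)")
remaining = [r[0] for r in conn.execute("SELECT a_id FROM a ORDER BY a_id")]
print(remaining)  # a_id = 2 is gone; a_id = 1 is still referenced
```

On a large B, the NOT IN anti-join is the part that needs an index on b.a_id; a trigger on B is the other standard approach.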
Tyrrill, Ed wrote:
QUERY PLAN
---
Merge Left Join (cost=38725295.93..42505394.70 rows=13799645 width=8)
(actual
On 2007-06-11 Christo Du Preez wrote:
I really hope someone can shed some light on my problem. I'm not sure
if this is a postgres or postgis issue.
Anyway, we have 2 development laptops and one live server; somehow I
managed to get the same query to perform very well on my laptop, but
on both
Looking for replication solutions, I find:
Slony-I
Seems good, single master only, master is a single point of failure,
no good failover system for electing a new master or having a failed
master rejoin the cluster. Slave databases are mostly for safety or
for parallelizing queries for
Thanks to all who replied and filled in the blanks. The problem with the web
is you never know if you've missed something.
Joshua D. Drake wrote:
Looking for replication solutions, I find...
Slony-II
Dead
Wow, I'm surprised. Is it dead for lack of need, lack of resources, too
complex, or
Andreas Kostyrka wrote:
Slony provides near instantaneous failovers (in the single digit seconds
range). You can script an automatic failover if the master server
becomes unreachable.
But Slony slaves are read-only, correct? So the system isn't fully functional
once the master goes down.
Markus Schiltknecht wrote:
Not quite... there's still Postgres-R, see www.postgres-r.org. And I'm
continuously working on it, despite not having updated the website for
almost a year now...
I planned on releasing the next development snapshot together with 8.3,
as that seems to be delayed,
Dolafi, Tom wrote:
min(fmin) | max(fmin) | avg(fmin)
        1 |  55296469 |  11423945
min(fmax) | max(fmax) | avg(fmax)
       18 |      3288 |  11424491
There are 5,704,211 rows in the table.
When you're looking for weird index problems, it's more
I have the same schema in two different databases. In smalldb, the two tables of
interest have about 430,000 rows, in bigdb, the two tables each contain about 5.5
million rows. I'm processing the data, and for various reasons it works out well to process it in
100,000 row chunks. However,
The two queries below produce different plans.
select r.version_id, r.row_num, m.molkeys from my_rownum r
join my_molkeys m on (r.version_id = m.version_id)
where r.version_id >= 320
and r.version_id < 330
order by r.version_id;
select r.version_id, r.row_num, m.molkeys from my_rownum
Sorry, I forgot to mention: This is 8.1.4, with a fairly ordinary configuration
on a 4 GB system.
Craig
Craig James wrote:
The two queries below produce different plans.
select r.version_id, r.row_num, m.molkeys from my_rownum r
join my_molkeys m on (r.version_id = m.version_id)
where
Here's an oddity. I have 10 databases, each with about a dozen connections to Postgres
(about 120 connections total), and at midnight they're all idle. These are mod_perl
programs (like a FastCGI -- they stay connected so they're ready for instant service).
So using ps -ef and grep, we find
Bruno Rodrigues Siqueira wrote:
Who can help me? My SELECT on a table with 1 million records,
using an expression index, takes 6 seconds…
Run your query using
EXPLAIN ANALYZE SELECT ... your query ...
and then post the results to this newsgroup. Nobody can help until they see
the
Tilmann Singer wrote:
* [EMAIL PROTECTED] [EMAIL PROTECTED] [20070728 21:05]:
Let's try putting the sort/limit in each piece of the UNION to speed them up
separately.
SELECT * FROM (
(SELECT * FROM large_table lt
WHERE lt.user_id = 12345
ORDER BY created_at DESC LIMIT 10) AS q1
UNION
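The quoted suggestion is truncated, but the idea can be completed as a runnable sketch: push ORDER BY/LIMIT into each branch of the UNION so each branch can use its own index, then take the top N of the combined result. Table and column names are the hypothetical ones from the thread; SQLite via Python stands in for Postgres, and each limited branch is wrapped as a derived table since that form is portable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large_table (user_id INTEGER, created_at INTEGER)")
conn.executemany("INSERT INTO large_table VALUES (?, ?)",
                 [(12345, i) for i in range(50)] + [(99, i) for i in range(50)])
rows = conn.execute("""
    SELECT * FROM (SELECT * FROM large_table WHERE user_id = 12345
                   ORDER BY created_at DESC LIMIT 10)
    UNION
    SELECT * FROM (SELECT * FROM large_table WHERE user_id = 99
                   ORDER BY created_at DESC LIMIT 10)
    ORDER BY created_at DESC LIMIT 10
""").fetchall()
print(len(rows))  # 10 rows; each branch only had to produce its own top 10
```

The outer LIMIT is still needed, but each branch now scans at most 10 index entries instead of the whole table.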
If I delete a whole bunch of tables (like 10,000 tables), should I vacuum
system tables, and if so, which ones? (This system is still on 8.1.4 and isn't
running autovacuum).
Thanks,
Craig
Carlos Moreno wrote:
Anyone has tried a setup combining tablespaces with NFS-mounted partitions?
There has been some discussion of this recently, you can find it in the
archives (http://archives.postgresql.org/). The word seems to be that NFS can
lead to data corruption.
Craig
Luiz K. Matsumura wrote:
Is connected to full 100Mb, it transfers many things quick to clients.
Is running Apache and JBoss, transfer rate is good, I did scp to copy
many archives and is as quick as the old server.
I have no idea how to continue researching this problem. Now I'm going
to do
If I do:
begin;
update some_table set foo = newvalue where a_bunch_of_rows_are_changed;
analyze some_table;
rollback;
does it roll back the statistics? (I think the answer is yes, but I need to be
sure.)
Thanks,
Craig
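As far as I know the answer is yes: ANALYZE writes its results to an ordinary catalog table (pg_statistic in Postgres), so the writes are transactional and a ROLLBACK undoes them. SQLite behaves the same way with its sqlite_stat1 table, which allows a self-contained demonstration (Python/SQLite standing in for Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE some_table (foo INTEGER)")
conn.execute("CREATE INDEX some_idx ON some_table (foo)")
conn.executemany("INSERT INTO some_table VALUES (?)",
                 [(i,) for i in range(100)])
conn.execute("BEGIN")
conn.execute("ANALYZE")   # writes statistics inside the transaction
conn.execute("ROLLBACK")  # ...which are then rolled back
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE name = 'sqlite_stat1'").fetchall()
# Either the stats table never materialized, or it is empty:
rolled_back = (len(tables) == 0 or
               conn.execute("SELECT COUNT(*) FROM sqlite_stat1")
                   .fetchone()[0] == 0)
print(rolled_back)
```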
Alvaro Herrera wrote:
To recap:
- your app only does inserts
- there has been no rollback lately
- there are no updates
- there are no deletes
The only other source of dead rows I can think is triggers ... do you
have any? (Not necessarily on this table -- perhaps triggers on other
tables can
tmp wrote:
what exactly is that
random_number column
A random float that is initialized when the row is created and never
modified afterwards. The physical row ordering will clearly not match
the random_number ordering. However, other queries uses a row ordering
by the primary key so I
This is driving me crazy. I have some Postgres C function extensions in a
shared library. They've been working fine. I upgraded to Fedora Core 6 and
gcc4, and now every time psql(1) disconnects from the server, the serverlog
gets this message:
*** glibc detected *** postgres: mydb mydb
Alvaro Herrera wrote:
Craig James wrote:
This is driving me crazy. I have some Postgres C function extensions
in a shared library. They've been working fine. I upgraded to Fedora
Core 6 and gcc4, and now every time psql(1) disconnects from the
server, the serverlog gets this message
Tom Lane wrote:
Craig James [EMAIL PROTECTED] writes:
This is driving me crazy. I have some Postgres C function extensions in a
shared library. They've been working fine. I upgraded to Fedora Core 6 and
gcc4, and now every time psql(1) disconnects from the server, the serverlog
gets
Alvaro Herrera wrote:
Craig James wrote:
Alvaro Herrera wrote:
Craig James wrote:
Here is my guess -- and this is just a guess. My functions use a
third-party library which, of necessity, uses malloc/free in the
ordinary way. I suspect that there's a bug in the Postgres palloc()
code
Alvaro Herrera wrote:
...Since you've now shown that OpenBabel is
multithreaded, then that's a much more likely cause.
Can you elaborate? Are multithreaded libraries not allowed to be
linked to Postgres?
Absolutely not.
Ok, thanks, I'll work on recompiling OpenBabel without thread support.
Tom Lane wrote:
Craig James [EMAIL PROTECTED] writes:
GNU gdb Red Hat Linux (6.5-15.fc6rh)
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions
Tom Lane wrote:
Magnus Hagander [EMAIL PROTECTED] writes:
On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:
Since I'm not a Postgres developer, perhaps one of the maintainers could
update the Postgres manual. In chapter 32.9.6, it says,
To be precise, a shared library needs
Gregory Stark wrote:
Tom Lane [EMAIL PROTECTED] writes:
James Mansion [EMAIL PROTECTED] writes:
Is there any particular reason not to ensure that any low-level
threading support in libc is enabled right
from the get-go, as a build-time option?
Yes.
1) It's of no value to us
Who is "us"?
stability.
I suggest you find out the cause of your problem and then we can do more
research. Talking about us changing the Postgres behavior from the
report of one user who doesn't even have the full details isn't
productive.
I think you're confusing James Mansion with me (Craig James). I'm
Guy Rouillier wrote:
Scott Marlowe wrote:
I assume you're talking about solid state drives? They have their
uses, but for most use cases, having plenty of RAM in your server will
be a better way to spend your money. For certain high throughput,
relatively small databases (i.e. transactional
Mike Smith wrote:
I’ve seen a few performance posts on using different hardware
technologies to gain improvements. Most of those comments are on raid,
interface and rotation speed. One area that doesn’t seem to have
been mentioned is to run your disks empty.
...
On the outside of the
Does anyone have performance info about the new Dell Perc/6 controllers? I found a long
discussion (Dell vs HP) about the Perc/5, but nothing about Perc/6. What's
under the covers?
Here is the (abbreviated) info from Dell on this machine:
PowerEdge 1950 III, Quad Core Intel® Xeon®
We're upgrading to a medium-sized server, a Dell PowerEdge 2950, dual-quad CPU's and
8 GB memory. This box can hold at most 8 disks (10K SCSI 2.5" 146 GB drives)
and has Dell's Perc 6/i RAID controller.
I'm thinking of this:
6 disks RAID 1+0 Postgres data
1 disk WAL
1 disk Linux
Joshua D. Drake wrote:
On Sat, 01 Mar 2008 10:06:54 -0800
Craig James [EMAIL PROTECTED] wrote:
We're upgrading to a medium-sized server, a Dell PowerEdge 2950,
dual-quad CPU's and 8 GB memory. This box can hold at most 8 disks
(10K SCSI 2.5" 146 GB drives) and has Dell's Perc 6/i RAID
Matthew wrote:
On Sat, 1 Mar 2008, Craig James wrote:
Right, I do understand that, but reliability is not a top priority in
this system. The database will be replicated, and can be reproduced
from the raw data.
So what you're saying is:
1. Reliability is not important.
2. There's zero
In the 3 years I've been using Postgres, the problem of count() performance has come up
more times than I can recall, and each time the answer is, "It's a sequential scan
-- redesign your application."
My question is: What do the other databases do that Postgres can't do, and why
not?
Count()
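The workaround that usually comes up on this list is a summary table maintained by triggers, so the count is a single-row lookup instead of a scan. A sketch of that technique (hypothetical names; SQLite trigger syntax via Python to keep it runnable — on Postgres the triggers would call plpgsql functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE big (id INTEGER PRIMARY KEY);
    CREATE TABLE big_count (n INTEGER);
    INSERT INTO big_count VALUES (0);
    CREATE TRIGGER big_ins AFTER INSERT ON big
        BEGIN UPDATE big_count SET n = n + 1; END;
    CREATE TRIGGER big_del AFTER DELETE ON big
        BEGIN UPDATE big_count SET n = n - 1; END;
""")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(500)])
conn.execute("DELETE FROM big WHERE id < 100")
fast = conn.execute("SELECT n FROM big_count").fetchone()[0]  # O(1) lookup
slow = conn.execute("SELECT COUNT(*) FROM big").fetchone()[0]  # full scan
print(fast, slow)  # both 400
```

The catch under MVCC, also discussed in this thread, is that every writer now updates the same counter row, which serializes concurrent transactions; real deployments batch the increments.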
Tom Lane wrote:
Craig James [EMAIL PROTECTED] writes:
Count() on Oracle and MySQL is almost instantaneous, even for very large
tables. So why can't Postgres do what they do?
AFAIK the above claim is false for Oracle. They have the same
transactional issues we do.
My experience doesn't
I just received a new server and thought benchmarks would be interesting. I think this
looks pretty good, but maybe there are some suggestions about the configuration file.
This is a web app, a mix of read/write, where writes tend to be insert into ...
(select ...) where the resulting insert
Joshua D. Drake wrote:
On Wed, 12 Mar 2008 21:55:18 -0700
Craig James [EMAIL PROTECTED] wrote:
Diffs from original configuration:
max_connections = 1000
shared_buffers = 400MB
work_mem = 256MB
max_fsm_pages = 1000000
max_fsm_relations = 5000
Rich wrote:
I am going to embark on building a music library using apache,
postgresql and php. What is the best way to store the music files?
Which file type should I use?
In Postgres, it's all just binary data. It's entirely up to you which
particular format you use: mp2, mp3, mp4, wmv, avi,
Dave Cramer wrote:
On 16-Mar-08, at 2:19 AM, Justin wrote:
I decided to reformat the raid 10 into ext2 to see if there was any
real big difference in performance as some people have noted here is
the test results
please note the WAL files are still on the raid 0 set which is still
in
Craig James wrote:
Dave Cramer wrote:
On 16-Mar-08, at 2:19 AM, Justin wrote:
I decided to reformat the raid 10 into ext2 to see if there was any
real big difference in performance as some people have noted here
is the test results
please note the WAL files are still on the raid 0 set
Justin wrote:
2000 tps ??? do you have fsync turned off ?
Dave
No, it's turned on.
Unless I'm seriously confused, something is wrong with these numbers. That's
the sort of performance you expect from a good-sized RAID 10 six-disk array.
With a single 7200 rpm SATA disk and XFS, I get 640
Gavin M. Roy wrote:
On Wed, Apr 16, 2008 at 4:39 PM, Joshua D. Drake [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
On Wed, 16 Apr 2008 13:37:32 -0700
Jeffrey Baker [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:
I can second this. The MSA 70 is a great unit for the
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov [EMAIL PROTECTED] wrote:
I naively thought that if I have a 100,000,000 row table, of the form
(integer,integer,smallint,date), and add a real column to it, it will scroll
through the memory reasonably fast.
In Postgres, an update is the same as a
Campbell, Lance wrote:
We currently backup all of our database tables per schema using pg_dump
every half hour. We have been noticing that the database performance
has been very poor during the backup process. How can I improve the
performance?
It sounds like the goal is to have frequent,
Justin wrote:
This falls under the stupid question and i'm just curious what other
people think what makes a query complex?
There are two kinds:
1. Hard for Postgres to get the answer.
2. Hard for a person to comprehend.
Which do you mean?
Craig
Greg Smith wrote:
On Mon, 5 May 2008, Craig James wrote:
pgbench -i -s 20 -U test
That's way too low to expect you'll see a difference in I/O schedulers.
A scale of 20 is giving you a 320MB database, you can fit the whole
thing in RAM and almost all of it on your controller cache. What's
Greg Smith wrote:
On Tue, 6 May 2008, Craig James wrote:
I only did two runs of each, which took about 24 minutes. Like the
first round of tests, the noise in the measurements (about 10%)
exceeds the difference between scheduler-algorithm performance, except
that anticipatory seems
idc danny wrote:
Hi everybody,
I'm fairly new to PostgreSQL and I have a problem with
a query:
SELECT * FROM LockerEvents LIMIT 10000 OFFSET
10990000
The table LockerEvents has 11 million records in it
and this query takes about 60 seconds to complete.
The OFFSET clause is almost always
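The usual alternative is keyset pagination: remember the last key of the previous page and use `WHERE id > last_seen ORDER BY id LIMIT n`, so the index jumps straight to the page instead of generating and discarding every skipped row. A runnable comparison (hypothetical single-column version of the table; SQLite via Python standing in for Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE locker_events (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO locker_events VALUES (?)",
                 [(i,) for i in range(1, 20001)])

# OFFSET: the engine must walk past 10,000 rows before returning any.
offset_page = conn.execute(
    "SELECT id FROM locker_events ORDER BY id LIMIT 5 OFFSET 10000").fetchall()

# Keyset: the index seeks directly to id > 10000.
keyset_page = conn.execute(
    "SELECT id FROM locker_events WHERE id > ? ORDER BY id LIMIT 5",
    (10000,)).fetchall()
print(offset_page == keyset_page)  # same page, very different cost
```

For the paginating GUI described later in the thread, each "Next page" link just carries the last id shown on the current page.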
idc danny wrote:
Hi James,
Thank you for your response.
What I want to achieve is to give to the application
user 10k rows where the records are one after another
in the table, and the application has a paginating GUI
(First page, Previous page, Next page, Last
page - all links Jump to page
Matthew Wakeling wrote:
Probably of more use are some of the other settings:
-m reserved-blocks-percentage - this reserves a portion of the filesystem
that only root can write to. If root has no need for it, you can kill
this by setting it to zero. The default is for 5% of the disc to
This seems like a bug to me, but it shows up as a performance problem. Since
the column being queried is an integer, the second query (see below) can't
possibly match, yet Postgres uses a typecast, forcing a full table scan for a
value that can't possibly be in the table.
The application
Tom Lane wrote:
Craig James [EMAIL PROTECTED] writes:
This seems like a bug to me, but it shows up as a performance problem.
emol_warehouse_1= explain analyze select version_id, parent_id from version
where version_id = 999;
If you actually *need* so many 9's here
I've never gotten a single spam from the Postgres mailing list ... until today.
A Chinese company selling consumer products is using this list. I have my
filters set to automatically trust this list because it has been so reliable
until now. It would be really, really unfortunate if this
I think you need to do some training:
http://www2.b3ta.com/bigquiz/hacker-or-spacker/
Sending a link to a web site that plays loud rap music is not a friendly way to
make your point.
Craig
- Original Message
From: Craig James [EMAIL PROTECTED]
To: pgsql-performance
Valentin Bogdanov wrote:
I have run quite a few tests comparing how long a query takes to
execute from Perl/DBI as compared to psql/libpq. No matter how many
times I ran the test the results were always the same.
I run a SELECT all on a fairly big table and enabled the
Valentin Bogdanov wrote:
--- On Mon, 11/8/08, Gregory Stark [EMAIL PROTECTED] wrote:
From: Gregory Stark [EMAIL PROTECTED]
Subject: Re: [PERFORM] Using PK value as a String
To: Jay [EMAIL PROTECTED]
Cc: pgsql-performance@postgresql.org
Date: Monday, 11 August, 2008, 10:30 AM
Jay [EMAIL
The OOM killer is a terrible idea for any serious database server. I wrote a
detailed technical paper on this almost 15 years ago when Silicon Graphics had
this same feature, and Oracle and other critical server processes couldn't be
made reliable.
The problem with overallocating memory as
[EMAIL PROTECTED] wrote:
On Wed, 27 Aug 2008, Craig James wrote:
The OOM killer is a terrible idea for any serious database server. I
wrote a detailed technical paper on this almost 15 years ago when
Silicon Graphics had this same feature, and Oracle and other critical
server processes
Matthew Wakeling wrote:
On Thu, 28 Aug 2008, Steve Atkins wrote:
Probably the best solution is to just tell the kernel somehow to
never kill the postmaster.
Or configure adequate swap space?
Oh yes, that's very important. However, that gives the machine the
opportunity to thrash.
No,
-Original Message-
From: [EMAIL PROTECTED]
* I think we will go for hardware-based RAID 1 with a good
battery-backed-up controller. I have read that software RAID
perform surprisingly good, but for a production site where
hotplug replacement of dead disks is required, is software
RAID
James Mansion wrote:
I can't see how an OS can lie to processes about memory being allocated
to them and not be ridiculed as a toy, but there you go. I don't think
Linux is the only perpetrator - doesn't AIX do this too?
This is a leftover from the days of massive physical modeling
Florian Weimer wrote:
* Craig James:
So it never makes sense to enable overcommitted memory when
Postgres, or any server, is running.
There are some run-time environments which allocate huge chunks of
memory on startup, without marking them as not yet in use. SBCL is in
this category
Florian Weimer wrote:
* Craig James:
There are some run-time environments which allocate huge chunks of
memory on startup, without marking them as not yet in use. SBCL is in
this category, and also the Hotspot VM (at least some extent).
I stand by my assertion: It never makes sense. Do
Maybe this is an obviously dumb thing to do, but it looked reasonable to me. The problem is, the
seemingly simple sort below causes a fairly powerful computer to completely freeze for 5-10
minutes. During the sort, you can't login, you can't use any shell sessions you already have open,
the
justin wrote:
Tom Lane wrote:
Hmm ... I wonder whether this means that the current work on
parallelizing I/O (the posix_fadvise patch in particular) is a dead
end. Because what that is basically going to do is expend more CPU
to improve I/O efficiency. If you believe this thesis then that's
David Rees wrote:
On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani
ibrahim.harr...@gmail.com wrote:
Version 1.93d --Sequential Output-- --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec
Ibrahim Harrani wrote:
Hi Craig,
Here is the result. It seems that disk write is terrible!
r...@myserver /usr]# time (dd if=/dev/zero of=bigfile bs=8192
count=1000000; sync)
1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 945.343806 secs (8665630 bytes/sec)
real
Dave Cramer wrote:
So I tried writing directly to the device, gets around 250MB/s, reads at
around 500MB/s
The client is using redhat so xfs is not an option.
I'm using Red Hat and XFS, and have been for years. Why is XFS not an option
with Red Hat?
Craig
Whit Armstrong wrote:
I have the opportunity to set up a new postgres server for our
production database. I've read several times in various postgres
lists about the importance of separating logs from the actual database
data to avoid disk contention.
Can someone suggest a typical partitioning
Kenneth Marshall wrote:
Additionally are there any clear choices w/ regard to filesystem
types? Our choices would be xfs, ext3, or ext4.
Well, there's a lot of people who use xfs and ext3. XFS is generally
rated higher than ext3 both for performance and reliability. However,
we run Centos 5
David Wilson wrote:
On Tue, May 12, 2009 at 5:53 PM, Angel Alvarez cl...@uah.es wrote:
we suffer a 'more optimal' superlative missuse
there is not so 'more optimal' thing but a simple 'better' thing.
im not native english speaker but i think it still applies.
Well, this is a superlative list
Matthew Wakeling wrote:
On Thu, 21 May 2009, Linos wrote:
i have to buy a new server and in the budget i have (small) i have
to select one of these two options:
-4 sas 146gb 15k rpm raid10.
-8 sas 146gb 10k rpm raid10.
It depends what you are doing. I think in most situations, the second
Greg Smith wrote:
What I'd love to have is a way to rent a fairly serious piece of
dedicated hardware, ideally with multiple (at least 4) hard drives in a
RAID configuration and a battery-backed write cache. The cache is
negotiable. Linux would be preferred, FreeBSD or Solaris would also
Brian Herlihy wrote:
We have a problem with some of our query plans. One of our
tables is quite volatile, but postgres always uses the last
statistics snapshot from the last time it was analyzed for query
planning. Is there a way to tell postgres that it should not
trust the statistics for
Greg Smith wrote:
No amount of theoretical discussion advances that any until
you're at least staring at a very specific locking problem you've
already characterized extensively via profiling. And even then,
profiling trumps theory every time.
In theory, there is no difference between
Mathieu Nebra wrote:
Greg Stark a écrit :
All the other comments are accurate, though it does seem like
something the database ought to be able to handle.
The other thing which hasn't been mentioned is that you have a lot of
indexes. Updates require maintaining all those indexes. Are all of
Suppose I have a large table with a small-cardinality CATEGORY column (say,
categories 1..5). I need to sort by an arbitrary (i.e. user-specified) mapping
of CATEGORY, something like this:
1 = 'z'
2 = 'a'
3 = 'b'
4 = 'w'
5 = 'h'
So when I get done, the sort order should be 2,3,5,4,1.
I
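One straightforward way to apply an arbitrary user-specified mapping like the one above is a CASE expression in the ORDER BY (a join against a small mapping table is the other common approach). A runnable sketch of the CASE version, using the exact mapping from the question (SQLite via Python as a stand-in for Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (category INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(c,) for c in (1, 2, 3, 4, 5)])
rows = conn.execute("""
    SELECT category FROM t
    ORDER BY CASE category
        WHEN 1 THEN 'z' WHEN 2 THEN 'a' WHEN 3 THEN 'b'
        WHEN 4 THEN 'w' WHEN 5 THEN 'h' END
""").fetchall()
order = [r[0] for r in rows]
print(order)  # [2, 3, 5, 4, 1], matching the desired sort order
```

Since the mapping is user-specified at runtime, the application would build the CASE arms (or populate the mapping table) per request; the sort itself cannot use an index on CATEGORY, but with only 5 distinct values that is rarely the bottleneck.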
Apologies for a slightly off-topic question ... a friend is overseeing the demise
of a company and has several computers that they need to get rid of. She's an
attorney and knows little about them except that they're IBM and cost $50K
originally. Where does one go to sell equipment like
8.4 has vastly improved the warm-standby features, but it looks to me like this
is still an installation-wide backup, not a per-database backup. That is, if
you have (say) a couple hundred databases, and you only want warm-backup on one
of them, you can't do it (except using other solutions
Fabio La Farcioli wrote:
i am developing a web app for thousands of users (1,000-2,000).
Each user has 2 work tables... so I finally have 2,000 (users) x 2
tables = 4,000 tables!
Does Postgres support such a large number of tables?
Will I have performance problems?
We have run databases with over
Greg Stark wrote:
What you want is a multi-column primary key where userid is part of
the key. You don't want to have a separate table for each user unless
each user has their own unique set of columns.
Not always true.
When the user logs back in, a hidden part of the login process gets a
Pierre Frédéric Caillaud wrote:
I just compiled it with gcc and it produces the gmon.out file for every
process; by the way, I am running the script below in order to produce
readable .out files
gprof .../pgsql/bin/postgres gmon.out > createtable2.out
is postgres the right executable?
regards
Dave Dutcher wrote:
You need a COMMIT for every BEGIN. If you just run a SELECT statement
without first beginning a transaction, then you should not end up with a
connection that is Idle in Transaction. If you are beginning a transaction,
doing a select, and then not committing, then yes that
=$count\n;
$sth->finish();
# $dbh->commit;
sleep(3);
}
$dbh->disconnect;
--
http://www.wiesinger.com/
On Thu, 24 Sep 2009, Craig James wrote:
Dave Dutcher wrote:
You need a COMMIT for every BEGIN. If you just run a SELECT statement
without first beginning a transaction, then you should not end
Xia Qingran wrote:
Hi,
I have a big performance problem in my SQL select query:
select * from event where user_id in
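The query cuts off mid-IN-list, but when such a list is large the classic fix is to put the ids in a (temporary) table and join against it, so the planner can hash or merge instead of evaluating a long expression per row. A sketch with hypothetical data (SQLite via Python to keep it runnable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (user_id INTEGER, payload TEXT)")
# 1000 events spread over user_ids 0..99 (10 events per user)
conn.executemany("INSERT INTO event VALUES (?, ?)",
                 [(i % 100, "x") for i in range(1000)])
# Instead of WHERE user_id IN (7, 42, 99, ...): load ids into a table
conn.execute("CREATE TEMP TABLE wanted (user_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted VALUES (?)", [(7,), (42,), (99,)])
n = conn.execute("""
    SELECT COUNT(*) FROM event e JOIN wanted w ON e.user_id = w.user_id
""").fetchone()[0]
print(n)  # 3 users x 10 events each = 30
```

On Postgres the same idea also works without a temp table via `JOIN (VALUES ...) AS w(user_id)`.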
Scott Marlowe wrote:
Personally, I use Fedora, and my servers have been quite stable. One of our
main web servers running Fedora:
It's not that there can't be stable releases of FC, it's that it's not
the focus of that project. So, if you get lucky, great! I can't
imagine running a
Merlin Moncure wrote:
On Tue, Oct 6, 2009 at 10:59 AM, Michal Vitecek f...@mageo.cz wrote:
Merlin Moncure wrote:
On Mon, Oct 5, 2009 at 5:17 AM, Michal Vitecek f...@mageo.cz wrote:
Could the problem be the HW RAID card? There's ServerRAID 8k with 256MB
with write-back enabled. Could it be
Kevin Grittner wrote:
Which leaves the issue open -- a flexible way to flag the *reason* (or
*reasons*) for the absence of a value could be a nice enhancement, if
someone could invent a good implementation. Of course, one could
always add a column to indicate the reason for a NULL; and perhaps