"Craig A. James" <[EMAIL PROTECTED]> writes:
> I guess I misinterpreted the PostgreSQL manual, which says (in 31.9, "C
> Language Functions"),
> "When allocating memory, use the PostgreSQL functions palloc and pfree
> instead of the corresponding C library functions malloc and free."
Tom Lane wrote:
>My suspicion is that it's an incompatibility between malloc()
>libraries.
On Linux there's only supposed to be one malloc, ie, glibc's version.
On other platforms I'd be worried about threaded vs non-threaded libc
(because the backend is not threaded), but not Linux.
Hmm... I should figure out how to have OS X email me daily log updates
like FreeBSD does...
Logwatch.
--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL so
Well, Western Digital and Seagate both carry 5-year warranties. Seagate I
believe does on almost all of their products. With WD you have to pick the
right drive.
That's nice, but it seems similar to my Toshiba laptop drive experience
--- it breaks, we replace it. I would rather not have to replace
On Thu, May 11, 2006 at 08:57:48AM -0400, Ketema Harris wrote:
> Nested Loops on:
> Nested Loop (cost=3.33..11.37 rows=1 width=268) (actual time=2.166..2.982
>
> Nested Loops off:
> Hash Join (cost=8.27..11.78 rows=1 width=268) (actual time=1.701..1.765
>
> With nested loops enabled does it cho
On Wed, May 10, 2006 at 09:47:07AM -0500, Dave Dutcher wrote:
> The hash lines mean your tables are being joined by hash joins. You
> should read this page for more info:
>
> http://www.postgresql.org/docs/8.1/interactive/performance-tips.html
You might also want to read
http://www.pervasivepost
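For context on what those plan nodes mean, the two join strategies the planner chooses between can be sketched in miniature. This is illustrative Python, not PostgreSQL internals; table names and data are invented:

```python
# Toy models of the two join strategies a planner picks between.
# A nested loop rescans the inner relation for every outer row (O(N*M));
# a hash join builds a hash table on the inner relation once, then
# probes it per outer row (O(N + M)).

def nested_loop_join(outer, inner, key):
    """For each outer row, scan every inner row."""
    out = []
    for o in outer:
        for i in inner:
            if o[key] == i[key]:
                out.append({**o, **i})
    return out

def hash_join(outer, inner, key):
    """Build a hash table on the inner relation, probe once per outer row."""
    table = {}
    for i in inner:
        table.setdefault(i[key], []).append(i)
    out = []
    for o in outer:
        for match in table.get(o[key], []):
            out.append({**o, **match})
    return out

# Invented sample data: both strategies must produce the same rows.
accounts = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
orders = [{"id": 1, "total": 10}, {"id": 1, "total": 20}]
assert nested_loop_join(accounts, orders, "id") == hash_join(accounts, orders, "id")
```

The planner's choice comes down to the same trade-off this sketch shows: the hash join pays a fixed build cost that only wins when enough rows flow through the probe side.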
You want an in-depth comparison of how a server disk drive is internally
better than a desktop drive:
http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf
BTW, someone (Western Digital?) is now offering SATA drives that carry
the same MTBF
On Tue, May 09, 2006 at 12:10:32PM +0200, Jean-Yves F. Barbier wrote:
> > I myself can't see much reason to spend $500 on high end controller
> > cards for a simple Raid 1.
>
> Naa, you can find ATA &| SATA ctrlrs for about EUR30 !
And you're likely getting what you paid for: crap. Such a contro
On Tue, May 09, 2006 at 08:59:55PM -0400, Bruce Momjian wrote:
> Joshua D. Drake wrote:
> > Vivek Khera wrote:
> > >
> > > On May 9, 2006, at 11:51 AM, Joshua D. Drake wrote:
> > >
> > >> Sorry that is an extremely misleading statement. SATA RAID is
> > >> perfectly acceptable if you have a hard
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> I'd hope that wasn't what's happening... is the backend smart enough to
> know not to fsync anything involved with the temp table?
The catalog entries required for it have to be fsync'd, unless you enjoy
putting your entire database at risk (a bad block
On Mon, May 08, 2006 at 07:29:32PM -0600, Brendan Duddridge wrote:
> Do you have any suggestions on how I can optimize the query so both
> versions of the query come back fast without doing a sequential scan
> on the price table?
Well, before you do anything you should verify that an index sca
On Fri, May 05, 2006 at 03:47:53PM -0700, Adam Palmblad wrote:
> Hi,
> We've got a C function that we use here and we find that for every
> connection, the first run of the function is much slower than any
> subsequent runs. ( 50ms compared to 8ms)
>
> Besides using connection pooling, are there
If you're trying to come up with ranking then you'll be much happier
using a sequence and pulling from it using an ordered select. See lines
19-27 in http://lnk.nu/cvs.distributed.net/9bu.sql for an example.
Depending on what you're doing you might not need the temp table.
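The sequence-and-ordered-select trick can be sketched in miniature. Plain Python stands in for the SQL here; the data and column names are invented, and the linked .sql file remains the authoritative example:

```python
# Sketch of "pull from a sequence in an ordered select": assign
# consecutive rank numbers by consuming a counter while rows are read
# back in sorted order. count(1) stands in for nextval() on a sequence.
from itertools import count

rows = [
    {"team": "alpha", "score": 90},
    {"team": "bravo", "score": 75},
    {"team": "carol", "score": 82},
]

seq = count(1)  # stands in for nextval('rank_seq')
ranked = [
    {**row, "rank": next(seq)}
    for row in sorted(rows, key=lambda r: r["score"], reverse=True)
]
# ranked: alpha gets rank 1, carol rank 2, bravo rank 3
```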
On Fri, May 05, 2006 at
> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id
DESC
> LIMIT 20
> 0.443 ms ANALYZE tmp
> 0.365 ms SELECT * FROM tmp
> 0.310 ms DROP TABLE tmp
> 32.918 ms COMMIT
The 32 ms for the COMMIT can hardly be catalog related. It seems the
file is fsynced before it is dropped.
On Thu, May 11, 2006 at 09:55:15AM +0200, Zeugswetter Andreas DCP SD wrote:
>
> > Something else worth considering is not using the normal
> > catalog methods
> > for storing information about temp tables, but hacking that together
> > would probably be a rather large task.
>
> But the timings s
On Thu, May 11, 2006 at 11:35:34AM -0400, Greg Stark wrote:
> I can say that I've seen plenty of instances where the ability to create
> temporary tables very quickly with no overhead over the original query would
> be useful.
I wonder if this requires what the standard refers to as a global
temporary table.
On Thu, May 11, 2006 at 12:18:06PM -0500, Jim C. Nasby wrote:
> > Yes, because there can be more than one active snapshot within a single
> > transaction (think about volatile functions in particular).
>
> Any documentation on how snapshot's work? They're a big mystery to me.
> :(
A snapshot is a
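Roughly, a snapshot records which transactions were in progress when it was taken, and a row version is visible only if its creating transaction committed before that point and it wasn't deleted by one that did. A toy sketch of that idea (deliberately simplified; not PostgreSQL's actual visibility rules or field layout):

```python
# Toy model of MVCC snapshot visibility. A snapshot remembers the first
# unassigned transaction id (xmax) and the set of transactions still
# running when it was taken; anything outside both committed earlier
# and is therefore visible.

class Snapshot:
    def __init__(self, xmax, in_progress):
        self.xmax = xmax                      # first txid not yet assigned
        self.in_progress = set(in_progress)   # txids running at snapshot time

    def sees(self, txid):
        """True if txid's effects committed before the snapshot."""
        return txid < self.xmax and txid not in self.in_progress

def row_visible(row, snap):
    # row = (created_by, deleted_by): creating txid, deleting txid or None
    created, deleted = row
    if not snap.sees(created):
        return False
    return deleted is None or not snap.sees(deleted)

snap = Snapshot(xmax=10, in_progress={7})
assert row_visible((5, None), snap)      # committed insert: visible
assert not row_visible((7, None), snap)  # insert still in progress: invisible
assert not row_visible((5, 6), snap)     # deleted by a committed tx: gone
```

This also shows why multiple snapshots can coexist in one transaction: each is just a cheap record of "what was running at time T", and a volatile function may take a fresh one mid-statement.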
On Wed, May 10, 2006 at 08:31:54PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
> >> PFC <[EMAIL PROTECTED]> writes:
> >>> Fun thing is, the rowcount from a temp table (which is the problem here)
> >>> should be
Chris wrote:
> This is a deadly bug, because our web site goes dead when this
> happens, ...
Sounds like a deadlock issue.
...
stats_command_string = true
and restart postgresql.
Then you'll be able to:
select * from pg_stat_activity;
to see what queries postgres is running, and that might give you
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Perhaps it would be worth creating a class of temporary tables that used
> a tuplestore, although that would greatly limit what could be done with
> that temp table.
I can say that I've seen plenty of instances where the ability to create
temporary ta
"Craig A. James" <[EMAIL PROTECTED]> wrote
> I'm having a rare but deadly problem. On our web servers, a process
> occasionally gets stuck, and can't be unstuck. Once it's stuck, all
> Postgres activities cease. "kill -9" is required to kill it --
> signals 2 and 15 don't work, and "/etc/
Title: Nested Loops vs. Hash Joins or Merge Joins
I am attempting to learn more about the way Pg decides which operators to use in its query planning and execution. I have a moderately complicated table layout, but it is mostly normalized. I recently created a query:
select account.acct_name as "
On Thu, May 11, 2006 at 09:55:15AM +0200, Zeugswetter Andreas DCP SD wrote:
> > 0.101 ms BEGIN
> > 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
> > NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
>
1.4 ms is not great for CREATE TABLE, is that
"Christian Paul Cosinas" writes:
> Hi!
>
> How can I speed up my server's performance when I use OFFSET and LIMIT
> clauses?
>
> For example I have a query:
> SELECT * FROM table ORDER BY id, name OFFSET 10 LIMIT 1
>
> This query takes a long time, more than 2 minutes.
>
> If my q
Why do you want to use it this way ?
Explain what you want to do, there probably is another faster
solution...
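One common "other solution" for deep OFFSETs is keyset (seek) pagination: remember the last key of the previous page instead of counting past rows. A minimal runnable sketch, using SQLite purely to stay self-contained (table, columns, and data are invented):

```python
# OFFSET n forces the server to generate and discard n rows before
# returning any; a keyset predicate on an indexed column lets the
# index skip straight to the page instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 101)])

# OFFSET style: scans and throws away the first 50 rows.
offset_page = conn.execute(
    "SELECT id FROM t ORDER BY id LIMIT 10 OFFSET 50").fetchall()

# Keyset style: resume from the last id seen on the previous page.
last_seen = 50
keyset_page = conn.execute(
    "SELECT id FROM t WHERE id > ? ORDER BY id LIMIT 10",
    (last_seen,)).fetchall()

assert offset_page == keyset_page  # both return ids 51..60
```

The catch is the one the reply hints at: keyset pagination needs a stable, unique sort key, which is why "explain what you want to do" comes first.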
On Thu, 11 May 2006 16:45:33 +0200, Christian Paul Cosinas
<[EMAIL PROTECTED]> wrote:
Hi!
How can I speed up my server's performance when I use offset and limit
c