On Sat, 3 Dec 2005, Luke Lonergan wrote:
Tom,
On 12/3/05 12:32 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
Last I looked at the Postgres binary dump format, it was not portable or
efficient enough to suit the need. The efficiency problem with it was
On Fri, 2005-12-02 at 23:03 -0500, Luke Lonergan wrote:
> And how do we compose the binary data on the client? Do we trust that the
> client encoding conversion logic is identical to the backend's?
Well, my newbieness is undoubtedly showing already, so I might as well
continue with my line of du
Tom,
On 12/3/05 12:32 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> "Luke Lonergan" <[EMAIL PROTECTED]> writes:
>> Last I looked at the Postgres binary dump format, it was not portable or
>> efficient enough to suit the need. The efficiency problem with it was that
>> there was descriptive informa
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
> Last I looked at the Postgres binary dump format, it was not portable or
> efficient enough to suit the need. The efficiency problem with it was that
> there was descriptive information attached to each individual data item, as
> compared to the approa
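Tom's efficiency point — descriptive framing attached to each individual datum — can be quantified. A short illustrative sketch (the 2-byte field count and 4-byte per-field length words match the documented binary COPY layout; the helper name is mine):

```python
# Per-tuple framing in PostgreSQL's binary COPY format (v1):
#   int16 field count, then for each field an int32 byte length
#   followed by the field's data (length -1 marks NULL).
def tuple_overhead(num_fields: int) -> int:
    """Bytes of descriptive framing per tuple, excluding the data itself."""
    return 2 + 4 * num_fields  # field count + one length word per field

# A row of four int4 columns: 16 bytes of data, 18 bytes of framing,
# i.e. more framing than payload for narrow rows.
data_bytes = 4 * 4
framing = tuple_overhead(4)
print(framing, framing / (framing + data_bytes))
```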
Tom,
On 12/2/05 3:00 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>
> Sure it does ... at least as long as you are willing to assume everybody
> uses IEEE floats, and if they don't you have semantic problems
> translating float datums anyhow.
>
> What we lack is documentation, more than functionali
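Tom's IEEE-float caveat is exactly what the wire format relies on: a float8 travels as eight IEEE-754 bytes in network byte order, so it round-trips cleanly only between IEEE hosts. A minimal sketch (helper names are illustrative):

```python
import struct

# float8 on the wire: IEEE-754 double, network (big-endian) byte order.
def float8_to_wire(x: float) -> bytes:
    return struct.pack("!d", x)

def float8_from_wire(b: bytes) -> float:
    return struct.unpack("!d", b)[0]

wire = float8_to_wire(3.5)
assert len(wire) == 8
assert float8_from_wire(wire) == 3.5  # exact: 3.5 is representable in binary
```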
On Fri, 2005-12-02 at 15:18 -0500, Stephen Frost wrote:
> The other thought, of course, is that you could use PITR for your
> backups instead of pgdump...
Yes, it is much faster that way.
Over on -hackers a few optimizations of COPY have been discussed; one of
those is to optimize COPY when it i
On Fri, 2 Dec 2005, Luke Lonergan wrote:
And how do we compose the binary data on the client? Do we trust that
the client encoding conversion logic is identical to the backend's? If
there is a difference, what happens if the same file loaded from
different client machines has different resul
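Luke's worry is concrete: the same byte sequence can be valid text under one client encoding and invalid under another, so client-side conversion can silently diverge from the backend's. For example:

```python
# The same byte sequence means different things under different encodings,
# which is why trusting every client's conversion logic is risky.
raw = b"caf\xe9"                 # 'café' encoded in LATIN1
latin = raw.decode("latin-1")    # decodes fine
assert latin == "café"
try:
    raw.decode("utf-8")          # the same bytes are invalid UTF-8
except UnicodeDecodeError:
    pass  # a UTF-8 backend would reject what a LATIN1 client produced
```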
On Fri, 2 Dec 2005, Luke Lonergan wrote:
Michael,
On 12/2/05 1:46 PM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
Not necessarily; you may be betting that it's more *efficient* to do the
parsing on a bunch of lightly loaded clients than your server. Even if
you're using the same code this may
On Fri, 2 Dec 2005, Luke Lonergan wrote:
Stephen,
On 12/2/05 1:19 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
I've used the binary mode stuff before, sure, Postgres may have to
convert some things but I have a hard time believing it'd be more
expensive to do a network_encoding -> host_enc
On Fri, 2 Dec 2005, Michael Stone wrote:
On Fri, Dec 02, 2005 at 01:24:31PM -0800, Luke Lonergan wrote:
From a performance standpoint no argument, although you're betting that you
can do parsing / conversion faster than the COPY core in the backend can
Not necessarily; you may be betting tha
On Fri, 2 Dec 2005, Luke Lonergan wrote:
Stephen,
On 12/2/05 12:18 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
Just a thought, but couldn't psql be made to use the binary mode of
libpq and do at least some of the conversion on the client side? Or
does binary mode not work with copy (that
[EMAIL PROTECTED]; Steve Oualline <[EMAIL PROTECTED]>; pgsql-performance@postgresql.org
Sent: Fri Dec 02 22:26:06 2005
Subject: Re: [PERFORM] Database restore speed
On Fri, 2005-12-02 at 13:24 -0800, Luke Lonergan wrote:
> It's a matter of safety and generality - in general you
On Fri, 2005-12-02 at 13:24 -0800, Luke Lonergan wrote:
> It's a matter of safety and generality - in general you
> can't be sure that client machines / OS'es will render the same conversions
> that the backend does in all cases IMO.
Can't binary values safely be sent cross-platform in DataRow
Michael,
On 12/2/05 1:46 PM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
> Not necessarily; you may be betting that it's more *efficient* to do the
> parsing on a bunch of lightly loaded clients than your server. Even if
> you're using the same code this may be a big win.
If it were possible in l
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
> One more thing - this is really about the lack of a cross-platform binary
> input standard for Postgres IMO. If there were such a thing, it *would* be
> safe to do this. The current Binary spec is not cross-platform AFAICS, it
> embeds native represen
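The portability problem Tom quotes comes down to byte order: a value's native layout differs between little- and big-endian hosts, so a cross-platform format has to pin one order rather than embed the native representation. A small demonstration:

```python
import struct

# A 4-byte integer's native representation depends on host byte order;
# only an agreed network order ('!', big-endian) is safe across platforms.
native = struct.pack("=i", 1)   # host byte order
network = struct.pack("!i", 1)  # big-endian, as the wire protocol uses
little = struct.pack("<i", 1)
big = struct.pack(">i", 1)
assert big == network
assert little != big            # same value, two incompatible layouts
```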
On Fri, Dec 02, 2005 at 01:24:31PM -0800, Luke Lonergan wrote:
From a performance standpoint no argument, although you're betting that you
can do parsing / conversion faster than the COPY core in the backend can
Not necessarily; you may be betting that it's more *efficient* to do the
parsing o
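Michael's point is that the question is load distribution, not just raw speed — and either side of the tradeoff is easy to measure. A rough sketch comparing backend-style text parsing with unpacking rows a client pre-converted (which side wins depends on the machine and the datatype; names are illustrative):

```python
import struct
import time

rows_text = [b"12345", b"-67", b"0"] * 100_000
# Rows a client already converted to network-order binary:
rows_bin = [struct.pack("!i", int(r)) for r in rows_text]

t0 = time.perf_counter()
parsed_text = [int(r) for r in rows_text]          # text -> native
t1 = time.perf_counter()
parsed_bin = [struct.unpack("!i", r)[0] for r in rows_bin]  # binary -> native
t2 = time.perf_counter()

assert parsed_text == parsed_bin  # same values either way
print(f"text parse {t1 - t0:.3f}s, binary unpack {t2 - t1:.3f}s")
```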
Stephen,
On 12/2/05 1:19 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
>
>> I've used the binary mode stuff before, sure, Postgres may have to
>> convert some things but I have a hard time believing it'd be more
>> expensive to do a network_encoding -> host_encoding (or toasting, or
>> whatever)
Stephen,
On 12/2/05 1:19 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
> I've used the binary mode stuff before, sure, Postgres may have to
> convert some things but I have a hard time believing it'd be more
> expensive to do a network_encoding -> host_encoding (or toasting, or
> whatever) than
* Luke Lonergan ([EMAIL PROTECTED]) wrote:
> On 12/2/05 12:18 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
> > Just a thought, but couldn't psql be made to use the binary mode of
> > libpq and do at least some of the conversion on the client side? Or
> > does binary mode not work with copy (that
Stephen,
On 12/2/05 12:18 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
> Just a thought, but couldn't psql be made to use the binary mode of
> libpq and do at least some of the conversion on the client side? Or
> does binary mode not work with copy (that wouldn't surprise me, but
> perhaps copy
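If psql (or any libpq client) did compose binary COPY data itself, the framing it would have to emit is documented: an 11-byte signature, flags and extension words, then length-prefixed fields per tuple. A sketch of that framing (field values would still need to be in each type's binary send format, which this sketch does not produce):

```python
import struct

SIGNATURE = b"PGCOPY\n\xff\r\n\x00"  # 11-byte fixed signature

def encode_binary_copy(rows):
    """Build a PostgreSQL binary-COPY payload from rows of (bytes | None) fields."""
    out = [SIGNATURE, struct.pack("!ii", 0, 0)]  # flags, header-extension length
    for row in rows:
        out.append(struct.pack("!h", len(row)))  # field count
        for field in row:
            if field is None:
                out.append(struct.pack("!i", -1))  # NULL marker
            else:
                out.append(struct.pack("!i", len(field)) + field)
    out.append(struct.pack("!h", -1))  # file trailer
    return b"".join(out)

payload = encode_binary_copy([[struct.pack("!i", 42), None]])
assert payload.startswith(SIGNATURE)
assert payload.endswith(b"\xff\xff")
```

With psycopg2, such a payload could in principle be streamed via `copy_expert("COPY t FROM STDIN WITH BINARY", buf)`; that pairing is untested here.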
* Luke Lonergan ([EMAIL PROTECTED]) wrote:
> > Luke, would it help to have one machine read the file and
> > have it connect to postgres on a different machine when doing
> > the copy? (I'm thinking that the first machine may be able to
> > do a lot of the parsing and conversion, leaving the se
David,
> Luke, would it help to have one machine read the file and
> have it connect to postgres on a different machine when doing
> the copy? (I'm thinking that the first machine may be able to
> do a lot of the parsing and conversion, leaving the second
> machine to just worry about doing
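David's two-machine idea is a classic pipeline: one stage does the parse/convert work, the other only streams pre-converted rows. A toy single-process sketch of that division of labor (the two stages stand in for the two machines; all names are illustrative):

```python
import struct

# Stage 1 (the "reader" machine): parse text lines into native binary tuples.
def parse_stage(lines):
    for line in lines:
        a, b = line.split(b"\t")
        yield struct.pack("!iq", int(a), int(b))  # pre-converted row image

# Stage 2 (the "loader" machine): receives pre-parsed rows and only has to
# stream them onward; no text parsing on this side.
def load_stage(rows):
    return list(rows)  # stand-in for a COPY stream to the backend

loaded = load_stage(parse_stage([b"1\t100", b"2\t-5"]))
assert struct.unpack("!iq", loaded[1]) == (2, -5)
```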
On Fri, 2 Dec 2005, Luke Lonergan wrote:
Steve,
When we restore the postmaster process tries to use 100% of the CPU.
The questions we have are:
1) What is postmaster doing that it needs so much CPU?
Parsing mostly, and attribute conversion from text to DBMS native
formats.
2) How can we
Steve,
> When we restore the postmaster process tries to use 100% of the CPU.
>
> The questions we have are:
>
> 1) What is postmaster doing that it needs so much CPU?
Parsing mostly, and attribute conversion from text to DBMS native
formats.
> 2) How can we get our system to go faster?
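One answer to 2), given that the bottleneck is per-backend parsing CPU: feed several concurrent COPY streams so the conversion work spreads across multiple backend processes. A sketch of the input split (whether it helps depends on I/O headroom; names are illustrative):

```python
def split_round_robin(lines, n_streams):
    """Partition input lines across n concurrent COPY streams so the
    parse/convert load is spread over several backend processes."""
    batches = [[] for _ in range(n_streams)]
    for i, line in enumerate(lines):
        batches[i % n_streams].append(line)
    return batches

batches = split_round_robin([f"row{i}".encode() for i in range(10)], 4)
assert sum(len(b) for b in batches) == 10  # every line lands in one batch
```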
Title: Database restore speed
Our application tries to insert data into the database as fast as it can.
Currently the work is being split into a number of 1MB copy operations.
When we restore the postmaster process tries to use 100% of the CPU.
The questions we have are:
1) What is pos
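The 1MB split described above can be done with a simple byte-budget batcher; a sketch (illustrative names, 1MB default budget):

```python
def batch_by_bytes(lines, budget=1 << 20):
    """Group newline-terminated rows into COPY batches of at most `budget`
    bytes (a single oversized row still gets its own batch)."""
    batch, size = [], 0
    for line in lines:
        if batch and size + len(line) > budget:
            yield b"".join(batch)
            batch, size = [], 0
        batch.append(line)
        size += len(line)
    if batch:
        yield b"".join(batch)

# 10 rows of 301 bytes with a 1000-byte budget -> batches of 3, 3, 3, 1 rows.
chunks = list(batch_by_bytes([b"x" * 300 + b"\n"] * 10, budget=1000))
assert all(len(c) <= 1000 for c in chunks)
```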