With PostgreSQL 8.0.x and its tablespace capability, it's possible to
create a database that relies on many tablespaces.
Scenario:
I create the catalog (schemas pg_catalog and information_schema, obviously
except the tables/views that reside in the tablespace "pg_global") on the
tablespace "tbs1", and all other user objec
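The scenario above can be sketched with the 8.0 tablespace DDL; the paths, tablespace names, and database name here are illustrative, not from the thread:

```sql
-- Create tablespaces on separate volumes (paths are hypothetical).
CREATE TABLESPACE tbs1 LOCATION '/mnt/disk1/pgdata';
CREATE TABLESPACE tbs2 LOCATION '/mnt/disk2/pgdata';

-- A database whose default tablespace is tbs1; its catalog and any
-- objects created without an explicit tablespace live there.
CREATE DATABASE mydb TABLESPACE tbs1;

-- Individual user objects can still be placed elsewhere:
CREATE TABLE big_table (id int) TABLESPACE tbs2;
```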
Hi,
While I am trying to install libpqxx on Windows, it asks for the PostgreSQL
include and library paths to put in the common file (which is in the win32
folder of libpqxx). But when I downloaded PostgreSQL for Windows I got only
lib, not include. Without that I can't proceed
further.
If
Yeah, that was it, thank you.
However this doesn't solve our problem.
I've already set checkpoints to occur every 1 WAL file. This seems to be
slightly better, but the overhead is also increased.
What's usually better: checkpointing more often or more seldom, even once a day?
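The trade-off being asked about is controlled by a few 8.0-era settings; a postgresql.conf sketch (values are illustrative, not a recommendation). More frequent checkpoints shorten crash recovery but add write overhead; less frequent ones do the opposite:

```sql
-- postgresql.conf fragment (8.0-era parameters)
-- checkpoint_segments = 3    -- checkpoint every N WAL segments (16 MB each)
-- checkpoint_timeout = 300   -- or at least every 300 seconds
-- checkpoint_warning = 30    -- log a warning if checkpoints come closer than 30 s
```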
Is the returned value of a function defined as
IMMUTABLE cached globally? In other words could postgresql potentially
return a cached value obtained from one client session to a different
client session?
Thanks in advance
Donald Fraser
On Thu, Jun 23, 2005 at 02:26:23PM +0100, Donald Fraser wrote:
> Is the returned value of a function defined as IMMUTABLE cached
> globally? In other words could postgresql potentially return a cached
> value obtained from one client session to a different client session?
Return values from functi
"Donald Fraser" <[EMAIL PROTECTED]> writes:
> Is the returned value of a function defined as IMMUTABLE cached
> globally?
No, in fact it isn't cached at all. IMMUTABLE tells the planner that
it's OK to fold a function call with constant inputs to a constant
result value at plan time. Nothing m
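As a sketch of that constant-folding behavior (the function and table names here are made up for illustration):

```sql
-- An IMMUTABLE function always returns the same result for the
-- same arguments, so the planner may evaluate it at plan time.
CREATE FUNCTION add_one(int) RETURNS int AS
  'SELECT $1 + 1' LANGUAGE sql IMMUTABLE;

-- With a constant argument, the plan can contain the literal 43
-- in place of the function call:
EXPLAIN SELECT * FROM some_table WHERE col = add_one(42);
```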
On Thu, Jun 23, 2005 at 02:26:23PM +0100, Donald Fraser wrote:
>
> Is the returned value of a function defined as IMMUTABLE cached
> globally? In other words could postgresql potentially return a
> cached value obtained from one client session to a different client
> session?
You could experiment
Howdy all. I'm doing some research on 'middleware' type connection
pooling, such as pgpool. I'm having some trouble finding other options
that are actively being maintained, whether it be by the open source
community or not. Can anyone point me to some other resources or ideas
for connection
When reading the docs on recovery.conf, I noticed this:
WAL segments that cannot be found in the archive will be sought in pg_xlog/;
this allows use of recent un-archived segments. However segments that are
available from the archive will be used in preference to files in pg_xlog/. The
system
Jeff Frost <[EMAIL PROTECTED]> writes:
> If the system will use the files in the archive in preference to the ones in
> pg_xlog, how can this actually happen if it will not overwrite the contents
> of
> pg_xlog?
Segment files pulled from the archive are saved using temporary file
names (and the
On Thu, 23 Jun 2005, Tom Lane wrote:
Segment files pulled from the archive are saved using temporary file
names (and then deleted after being replayed). For obvious safety
reasons, we try never to overwrite any xlog file in either the archive
or local storage.
So, that, immediately begs the q
Jeff Frost <[EMAIL PROTECTED]> writes:
> So, that, immediately begs the question as to why the docs indicate we
> should clear out all the files in the pg_xlog directory before
> beginning the restore? Wouldn't it be better to keep them in place?
IIRC, the docs recommend getting rid of any xlog f
I'm finding that \copy is very brittle. It seems to stop for every
little reason. Is there a way to tell it to be more forgiving -- for
example, to ignore extra data fields that might exist on a line?
Or, to have it just skip the offending record but continue on to the
next.
I've got a tab de
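For reference, a minimal \copy invocation for a tab-delimited file looks like the following (the table name comes from later in the thread; the file name is hypothetical). Tab is COPY's default delimiter in text format, and a line with extra or missing fields aborts the entire command:

```sql
-- In psql: load a tab-delimited file into the table.
\copy person3 FROM 'people.tab'
```

A common workaround for bad rows is to clean the file beforehand, or load it into a permissive staging table and then INSERT ... SELECT the valid rows.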
On Thu, Jun 23, 2005 at 12:27:44PM -0700, David Bear wrote:
>
> I'm finding that \copy is very brittle. It seems to stop for every
> little reason. Is there a way to tell it to be more forgiving -- for
> example, to ignore extra data fields that might exist on a line?
>
> Or, to have it just sk
On Thu, 2005-06-23 at 10:55 -0700, Jeff Frost wrote:
> When reading the docs on recovery.conf, I noticed this:
>
> WAL segments that cannot be found in the archive will be sought in
> pg_xlog/;
> this allows use of recent un-archived segments. However segments that are
> available from the ar
On June 23, 2005 03:27 pm, David Bear wrote:
> I'm finding that \copy is very brittle. It seems to stop for every
> little reason. Is there a way to tell it to be more forgiving -- for
> example, to ignore extra data fields that might exist on a line?
>
> Or, to have it just skip that offending
On Thu, 23 Jun 2005, Simon Riggs wrote:
I also noticed that if there is not at least one wal archive available in
the archive or the pg_xlog dir, the restore errors out and exits. So the
base backup is really not complete without at least one wal archive
following it. Is this by design?
That
On Thu, 23 Jun 2005, Simon Riggs wrote:
That was the bit I thought of. The files are streamed in one by one using a
temp filename, so you never run out of space no matter how big the archive
of transaction logs. That's an important feature if a base backup goes bad
and you have to go back to n-
I guess I'm too stupid to see the error, but I don't understand why
the following fails.
insert into person3 (asuid, fname, lname, addedby, addedon,
slopbucket) values ("123455", "name", "name", "entered", "12/12/2004", NULL);
ERROR: column "123455" does not exist
is the double quote biting me?
On Thu, Jun 23, 2005 at 05:12:03PM -0700, David Bear wrote:
>
> I guess I'm too stupid to see the error, but I don't understand why
> the following fails.
>
> insert into person3 (asuid, fname, lname, addedby, addedon,
> slopbucket) values ("123455", "name", "name", "entered", "12/12/2004", NULL);
Try single quotes (') instead of double-quotes (").
-Bruno
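A minimal sketch of why that fixes it: in SQL, double quotes delimit identifiers (so "123455" is parsed as a column name), while single quotes delimit string literals. The corrected statement would be:

```sql
INSERT INTO person3 (asuid, fname, lname, addedby, addedon, slopbucket)
VALUES ('123455', 'name', 'name', 'entered', '12/12/2004', NULL);
```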
On Thu, 23 Jun 2005 17:12:03 -0700, David Bear said:
> I guess I'm too stupid to see the error, but I don't understand why
> the following fails.
>
>
> insert into person3 (asuid, fname, lname, addedby, addedon,
> slopbucket) values (
David Bear <[EMAIL PROTECTED]> writes:
> I guess I'm too stupid to see the error, but I don't understand why
> the following fails.
> insert into person3 (asuid, fname, lname, addedby, addedon,
> slopbucket) values ("123455", "name", "name", "entered", "12/12/2004", NULL);
> ERROR: column "123455
Hello, David Bear!
Yes, of course.
You should insert the record like this:
insert into person3 (asuid, fname, lname, addedby, addedon,
slopbucket) values ('123455', 'name', 'name', 'entered', '12/12/2004', '');
=== On 2005-06-24 08:12:03 you wrote: ===
>I guess I'm too stupid to
David Bear:
Yes, I agree with you.
\copy really is too brittle.
I wonder why \copy is not like Oracle's sqlldr?
I think sqlldr is more powerful. When using sqlldr, we can specify the
maximum number of error records we allow, and we can also specify the number we should
co