Christopher Kings-Lynne wrote:
Hi,
Why is there no custom format dump option for pg_dumpall? What if I
want to use pg_dumpall to dump all db's and blobs? Or if I want to have
a huge sql dump from which I can easily extract the sql to recreate just
one table?
I think it is because we
Bruce Momjian [EMAIL PROTECTED] writes:
Christopher Kings-Lynne wrote:
Why is there no custom format dump option for pg_dumpall?
I think it is because we have no mechanism to merge multiple custom
format files from different databases into a single file. Perhaps this
is a TODO.
It would
Fernando Nasser wrote:
Bruce Momjian wrote:
Manfred Spraul wrote:
Bruce Momjian wrote:
Which basically shows one fsync, no O_SYNC's, and setting of the flag
only for klog reads.
Which sysklogd do you look at? The version from RedHat 9 contains this
block:
I looked on
Andrew, Tom:
This will be a really nice feature for those of us with PG servers that
participate in VPNs. Currently I'm blocking certain interfaces using
pg_hba.conf but would prefer a listen address instead.
Of course, the drawback to this is that confused DBAs will have their
pg_hba.conf
Richard, Tom:
Call me crazy, but maybe we have to throw a message for primary
key lookups on foreign key tables without indexes. I hate to throw a
message on update/delete rather than create table, but I don't see
another way.
You're crazy. For one thing, there are sometimes
Our current WAL implementation writes copies of full pages to WAL before
modifying the page on disk. This is done to prevent partial pages from
being corrupted in case the operating system crashes during a page
write.
For example, suppose an 8k block is being written to a heap file.
First
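The full-page-write scheme described above can be sketched roughly like this (a Python stand-in for the C internals; all names and the crash simulation are illustrative, not PostgreSQL's actual code):

```python
# Illustrative sketch of full-page writes: before a data page is
# overwritten in place, its full image is appended to the sequential
# WAL, so a torn (partial) page write can always be repaired on replay.

def write_page(wal, disk, page_no, new_image):
    wal.append((page_no, new_image))   # full page image logged first
    disk[page_no] = new_image          # in-place write; may be torn

def crash_during_write(disk, page_no, new_image):
    # Simulate the OS crashing mid-write: half new bytes, half old.
    old = disk[page_no]
    half = len(new_image) // 2
    disk[page_no] = new_image[:half] + old[half:]

def recover(wal, disk):
    # Replaying the logged images overwrites any torn page.
    for page_no, image in wal:
        disk[page_no] = image
```

Without the logged image, the half-old/half-new page left behind by the crash could not be reconstructed from anywhere.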
Josh Berkus wrote:
Andrew, Tom:
This will be a really nice feature for those of us with PG servers that
participate in VPNs. Currently I'm blocking certain interfaces using
pg_hba.conf but would prefer a listen address instead.
Of course, the drawback to this is that confused DBAs
Josh Berkus [EMAIL PROTECTED] writes:
Might I suggest that this default to 127.0.0.1 in postgresql.conf.sample?
No, the default should be localhost. Your thinking is too
IPv4-centric.
regards, tom lane
Tom,
No, the default should be localhost. Your thinking is too
IPv4-centric.
Good point. My clients are all years away from implementing IPv6, so I tend
to forget about it.
--
-Josh Berkus
Aglio Database Solutions
San Francisco
If I understand WAL correctly (and I may not), it is essentially a write
cache for writes to the data files, because:
1. Data file writes are notoriously random, and writing the log is
sequential. Ironically, the sectors mapped by the OS to the disk are
likely not at all sequential, but they
I wrote:
Josh Berkus wrote:
If I had time, I would also love to see setting the password for the
postgres user become part of the initdb script. However, I can see
that this wouldn't work with packages.
Orthogonal problem.
BTW, initdb is no longer a script - some idiot rewrote it in C
Idiot is such a strong word, don't you think?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andrew Dunstan
Sent: Tuesday, March 16, 2004 7:26 AM
To: Postgresql Hackers
Subject: Re: [HACKERS] listening addresses
I wrote:
Josh Berkus wrote:
If I had
Andrew Dunstan [EMAIL PROTECTED] writes:
The intention is to make localhost the default. That should translate
to 127.0.0.1 and ::1 (if they have ipv6 on). Of course, if they have a
broken resolver things might get sticky, but that is true now anyway.
Just to be clear: right now, if
I suspect (but cannot prove) that performance would jump for systems
like ours if WAL was done away with entirely and the individual data
files were synchronized on commit.
You know... that's exactly what WAL is designed to prevent? Grab a copy of
7.0 and 7.1. Do a benchmark between the two with
On Fri, Mar 12, 2004 at 10:37:58AM -0500, Tom Lane wrote:
This also brings up the thought that if we do want to use pgfoundry.org,
we'd better register pgfoundry.net and pgfoundry.com before someone
else does.
And .info? And .biz? And whatever other unrestricted nonsense
things there are
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
The intention is to make localhost the default. That should translate
to 127.0.0.1 and ::1 (if they have ipv6 on). Of course, if they have a
broken resolver things might get sticky, but that is true now anyway.
Just to be clear:
Marty Scholes [EMAIL PROTECTED] writes:
I suspect (but cannot prove) that performance would jump for systems
like ours if WAL was done away with entirely and the individual data
files were synchronized on commit.
I rather doubt this, since we used to do things that way and we saw an
On Sat, Mar 13, 2004 at 10:45:35AM -0500, Rod Taylor wrote:
Not that I'm volunteering, but I think the biggest issue is many users
simply don't know how to approach the problem. Some docs on using
syslog, cron, etc. with PostgreSQL to accomplish maintenance jobs would
probably be enough.
There
Tom Lane wrote:
Your analysis is missing an important point, which is what happens when
multiple transactions successively modify the same page. With a
sync-the-data-files approach, we'd have to write the data page again for
each commit. With WAL, the data page will likely not get written at all
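Tom's point can be put in rough numbers. Assuming an 8k heap page and a small (say 100-byte, purely illustrative) WAL record per commit:

```python
PAGE = 8192    # PostgreSQL heap page size in bytes
RECORD = 100   # assumed average commit record size (illustrative)

def bytes_sync_data_files(n_commits):
    # sync-the-data-files: every commit rewrites the whole page
    return n_commits * PAGE

def bytes_wal(n_commits):
    # WAL: one small sequential record per commit; the page itself is
    # flushed once, at the next checkpoint (full-page image ignored here)
    return n_commits * RECORD + PAGE
```

For 100 successive commits touching the same page, that is 819,200 bytes of random page writes versus 18,192 bytes of mostly sequential log writes.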
Hi all,
I have sent a patch to implement tinyint, an unsigned one byte integer,
for MS SQL compatibility. The replies I got were under two categories.
One was that our type system is complicated enough, and the other was that it
should be signed and we should have a general unsigned keyword.
Now, I'm
Folks,
Last month, there was a discussion about deferring constraints that use
user-defined functions to the end of the pg_dump file, like we do with FK
constraints. Did this go anywhere, or is it still a TODO in search of an
owner?
--
-Josh Berkus
Aglio Database Solutions
San Francisco
Andrew Dunstan [EMAIL PROTECTED] writes:
I did wonder if we should treat localhost as a bit special and not
rely on the resolver for it.
I don't think so; we went in the other direction in 7.4 for pgstats.
(It used to try to bind to 127.0.0.1 and now tries localhost.)
So far I've not seen any
Shachar Shemesh [EMAIL PROTECTED] writes:
I have sent a patch to implement tinyint, an unsigned one byte integer,
...
Now, I'm not against either simplifying the type system or having an
unsigned keyword. The thing is that, between these two remarks, my
patch was not applied and I don't
Josh Berkus [EMAIL PROTECTED] writes:
Last month, there was a discussion about deferring constraints that use
user-defined functions to the end of the pg_dump file, like we do with FK
constraints. Did this go anywhere, or is it still a TODO in search of an
owner?
Isn't this already
Tom Lane wrote:
Josh Berkus [EMAIL PROTECTED] writes:
Might I suggest that this default to 127.0.0.1 in postgresql.conf.sample?
No, the default should be localhost. Your thinking is too
IPv4-centric.
FYI, once we default to listening on localhost, we need to warn folks
who are using
Bruce Momjian [EMAIL PROTECTED] writes:
FYI, once we default to listening on localhost, we need to warn folks
who are using socket permission to control access that they have to turn
off localhost. That needs to be mentioned in the release notes, and in
the SGML docs that talk about socket
I've been looking at an example provided by Arjen van der Meijden in
which practically all the runtime goes into evaluating very trivial
comparison expressions (it's basically a CASE statement with a huge
number of arms). Whether or not you think that a CASE with a huge
number of arms is a
At 02:00 AM 16/03/2004, Tom Lane wrote:
Why is there no custom format dump option for pg_dumpall?
I think it is because we have no mechanism to merge multiple custom
format files from different databases into a single file.
That was the main reason (and as Tom suggested, dumpall was a shell
Andrew Sullivan wrote:
On Sat, Mar 13, 2004 at 10:45:35AM -0500, Rod Taylor wrote:
Not that I'm volunteering, but I think the biggest issue is many users
simply don't know how to approach the problem. Some docs on using
syslog, cron, etc. with PostgreSQL to accomplish maintenance jobs would
Philip Warner [EMAIL PROTECTED] writes:
I'm envisioning
some super-archive format in which there are individual entries
containing the pg_dump output for each database
Not sure about this. I'd be inclined to investigate just adding another
attribute to each TOC entry (database_name).
That
At 11:12 AM 16/03/2004, Tom Lane wrote:
That seems like it would complicate both pg_dump and pg_restore unduly.
I'd rather keep both of them as simple one-database programs.
Sounds fine, if that's a design objective. But it is a different problem to
defining the archive format of pg_dumpall.
Tom == Tom Lane [EMAIL PROTECTED] writes:
Tom I'm not sure that this would let us catch up to what Arjen
Tom reports as MySQL's expression evaluation speed, but it should
Tom at least speed things up a bit with only fairly localized
Tom changes.
I like the idea of memoizing the
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
FYI, once we default to listening on localhost, we need to warn folks
who are using socket permission to control access that they have to turn
off localhost. That needs to be mentioned in the release notes, and in
the SGML docs that
Hi,
I am trying to revise my patch for counting the number of rows inserted
when using a COPY statement.
I set up a printf to see the results in the log file.
When I execute a COPY the first time it reports 0 rows inserted (but
actually inserts all rows). If I execute again it reports the
Sailesh Krishnamurthy [EMAIL PROTECTED] writes:
I like the idea of memoizing the switch with function pointers as I
don't think branch prediction helps much with varying switch arms
selected with different exprs.
Check, it's hard to see how any CPU could get much traction on the
behavior of
Sailesh Krishnamurthy [EMAIL PROTECTED] writes:
Tom == Tom Lane [EMAIL PROTECTED] writes:
Tom I'm not sure that this would let us catch up to what Arjen
Tom reports as MySQL's expression evaluation speed, but it should
Tom at least speed things up a bit with only fairly
mike g [EMAIL PROTECTED] writes:
If there is a simple explanation of why ProcessUtility is not called the
first time COPY is executed but is from then on, it could help me a
lot.
CreateCommandTag is called before the query is executed, I believe.
regards, tom lane
Greg Stark [EMAIL PROTECTED] writes:
a) I don't see why you would assume branch prediction would be ineffective
here. There are probably a few arms of the switch that are more common than
all the others, especially when a large query is evaluating the same
expression over and over again.
Marty Scholes wrote:
2. Put them on an actual (or mirrored actual) spindle
Pros:
* Keeps WAL and data file I/O separate
Cons:
* All of the non array drives are still slower than the array
Are you sure this is a problem? The dbt-2 benchmarks from osdl run on an
8-way Intel computer with several
Tom Lane [EMAIL PROTECTED] writes:
b) Instead of storing one of a small set of function pointers in every
node of every expression, wouldn't it make more sense to have a table
lookup from node type to function pointer?
That's pretty much what the ExecEvalExpr switch() does already, on
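The two dispatch styles under discussion can be sketched like so (a Python stand-in for the C in the executor; all names are illustrative, not PostgreSQL's actual symbols):

```python
# Two ways to dispatch expression evaluation by node type.

def eval_const(node, row):
    return node["value"]

def eval_var(node, row):
    return row[node["attnum"]]

# (a) table lookup: node type -> evaluator, consulted on every call
# (functionally what a switch() on the node tag does)
DISPATCH = {"Const": eval_const, "Var": eval_var}

def exec_eval_lookup(node, row):
    return DISPATCH[node["type"]](node, row)

# (b) memoized pointer: resolve the evaluator once, at expression-init
# time, and store it in the node, so later calls skip the lookup entirely
def init_node(node):
    node["evalfunc"] = DISPATCH[node["type"]]

def exec_eval_memoized(node, row):
    return node["evalfunc"](node, row)
```

The payoff of (b) is per-call: in a CASE with many arms evaluated over many rows, the per-node indirection replaces a repeated multi-way branch.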
[moving to hackers]
Matthew T. O'Connor wrote:
Good luck, I hope you can get permission. Would be nice to fix this
little crash.
I went ahead and recompiled with --enable-debug, and get this trace:
#0 0xfefb3218 in strlen () from /usr/lib/libc.so.1
#1 0xff006520 in _doprnt () from
Hi,
I was thinking the other way round. What if we write to WAL only those
portions of the pages which we need to modify, and let the kernel do the job
the way it sees fit?
What will happen if it fails?
Bruce Momjian wrote:
Our current WAL implementation writes copies of full pages to WAL before
modifying