Josh Berkus wrote:
The performance of every path to get data into the database besides COPY
is too miserable for us to use anything else, and the current
inflexibility makes it useless for anything but the cleanest input data.
One potential issue we're facing down this road is that current
Josh Berkus wrote:
The user-defined table for rejects is obviously exclusive of the system
one, either of those would be fine from my perspective.
I've been thinking about it, and can't come up with a really strong case
for wanting a user-defined table if we settle the issue of having a
On Fri, 11 Sep 2009, Josh Berkus wrote:
I've been thinking about it, and can't come up with a really strong case
for wanting a user-defined table if we settle the issue of having a
strong key for pg_copy_errors. Do you have one?
No, but I'd think that if the user table was only allowed to be
On Fri, 11 Sep 2009, Emmanuel Cecchet wrote:
I guess the problem with extra or missing columns is to make sure that
you know exactly which data belongs to which column, so that you don't
put data in the wrong columns, which is likely to happen if this is fully
automated.
Allowing the extra
On Thu, Sep 10, 2009 at 04:24:15PM -0400, Jan Wieck wrote:
The feature was originally intended to be a clean way of avoiding
interferences of triggers and/or foreign keys with replication systems
that work on the user level (like Bucardo, Londiste and Slony). The only
way to break
usual round of updates to the scan report.
Today's report available at:
http://zlew.org/postgresql_static_check/scan-build-2009-09-12-1/
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Greg Smith wrote:
After some thought, I think that Andrew's feature *is* generally
applicable, if done as IGNORE COLUMN COUNT (or, more likely,
column_count=ignore). I can think of a lot of data sets where column
count is jagged and you want to do ELT instead of ETL.
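As a sketch, a hypothetical invocation of the proposed option might look like the following. The column_count=ignore spelling comes from the discussion above; it is not an existing COPY option, and the table and file names are made up:

```sql
-- Hypothetical, not implemented: tolerate a jagged column count so the
-- raw file can be loaded first and cleaned inside the database (ELT).
COPY raw_events FROM '/tmp/events.csv'
    CSV column_count ignore;
-- Presumably short rows would load with trailing NULLs, and extra
-- trailing fields would be discarded.
```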
Exactly, the ELT
Andrew Dunstan and...@dunslane.net writes:
Right. What I proposed would not have been terribly invasive or
difficult, certainly less so than what seems to be our direction by an
order of magnitude at least. I don't for a moment accept the assertion
that we can get a general solution for the
Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
Right. What I proposed would not have been terribly invasive or
difficult, certainly less so than what seems to be our direction by an
order of magnitude at least. I don't for a moment accept the assertion
that we can get a
Jan Otto as...@me.com writes:
This patch basically frees dirdesc and rereads the tablespace location
in case a subdirectory was deleted from the tablespace. this is the
place
where snow leopard fails to read the next entry with readdir().
I've applied this patch in HEAD only for the moment.
Andrew Dunstan and...@dunslane.net writes:
At the same time, I think it's probably not a good thing that users who
deal with very large amounts of data would be forced off the COPY fast
path by a need for something like input support for non-rectangular
data.
[ shrug... ] Everybody in the
I have just noticed, somewhat to my chagrin, that while in a plperl
function that returns an array type you can return a perl arrayref, like
this:
return [qw(a b c)];
if the function returns a setof an array type you cannot do this:
return_next [qw(a b c)];
Now the plperl docs say:
Andrew Dunstan and...@dunslane.net writes:
The fix is fairly small (see attached) although I need to check with
some perlguts guru to see if I need to decrement a refcounter here or there.
The array_ret variable seems a bit unnecessary, and declared well
outside the appropriate scope if it is
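The behaviour under discussion can be sketched like this (function names are made up; the second definition is the case that failed before the fix):

```sql
-- Works: a plperl function returning an array type can return a perl
-- arrayref directly.
CREATE FUNCTION letters() RETURNS text[] LANGUAGE plperl AS $$
    return [qw(a b c)];
$$;

-- The problem case: SETOF an array type via return_next, which did not
-- accept an arrayref before Andrew's fix.
CREATE FUNCTION letter_rows() RETURNS SETOF text[] LANGUAGE plperl AS $$
    return_next [qw(a b c)];
    return undef;
$$;
```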
Not sure that this really belongs on pgsql-committers - maybe pgsql-hackers?
No matter what scheme PostgreSQL uses for storing the data, there can be
underlying file system limitations. One solution, for example, would be
to use a file system that does not have a limitation of 32k
(removed -committers)
Mark,
* Mark Mielke (m...@mark.mielke.cc) wrote:
No matter what scheme PostgreSQL uses for storing the data, there can be
underlying file system limitations.
This is true, but there's a reason we only create 1GB files too. I
wouldn't be against a scheme such as
* Tom Lane (t...@sss.pgh.pa.us) wrote:
I've applied this patch in HEAD only for the moment. I hope that
Apple will have fixed their bug before the next set of PG back-branch
updates come out --- if not, we'll probably have to back-patch.
and on the flip side, I was hoping to see a new 8.4.2
* Stephen Frost (sfr...@snowman.net) wrote:
Ehhh, it's likely to be cached. Sounds like a stretch to me that this
would actually be a performance hit. If it turns out to really be one,
we could just wait to move to subdirectories until some threshold (eg-
30k) is hit.
Thinking this through
On 09/12/2009 03:33 PM, Stephen Frost wrote:
* Mark Mielke (m...@mark.mielke.cc) wrote:
No matter what scheme PostgreSQL uses for storing the data, there can be
underlying file system limitations.
This is true, but there's a reason we only create 1GB files too. I
wouldn't be against
On 09/12/2009 03:48 PM, Stephen Frost wrote:
This would allow for 220M+ databases. I'm not sure how bad it'd be to
introduce another field to pg_database which provides the directory (as
it'd now be distinct from the oid.) or if that might require a lot of
changes. Not sure how easy it'd be to
Stephen Frost sfr...@snowman.net writes:
* Mark Mielke (m...@mark.mielke.cc) wrote:
No matter what scheme PostgreSQL uses for storing the data, there can be
underlying file system limitations.
This is true, but there's a reason we only create 1GB files too. I
wouldn't be against a scheme
Mark Mielke m...@mark.mielke.cc writes:
My God - I thought 32k databases in the same directory was insane.
220M+???
Considering that the system catalogs alone occupy about 5MB per
database, that would require an impressive amount of storage...
In practice I think users would be
* Mark Mielke (m...@mark.mielke.cc) wrote:
There is no technical requirement for PostgreSQL to separate data in
databases or tables on subdirectory or file boundaries. Nothing wrong
with having one or more large files that contain everything.
Uhh, except where you run into system
Stephen Frost sfr...@snowman.net writes:
* Mark Mielke (m...@mark.mielke.cc) wrote:
I guess I'm not seeing how using 32k tables is a sensible model.
For one thing, there's partitioning. For another, there's a large user
base. 32K tables is, to be honest, not all that many, especially for
* Tom Lane (t...@sss.pgh.pa.us) wrote:
I believe the filesystem limit the OP is hitting is on the number of
*subdirectories* per directory, not on the number of plain files.
Right, I'm not entirely sure how we got onto the question of number of
tables.
So the question I would ask goes more
On Sat, 12 Sep 2009, Tom Lane wrote:
Everybody in the world is going to want their own little problem to be
handled in the fast path. And soon it won't be so fast anymore. I
think it is perfectly reasonable to insist that the fast path is only
for clean data import.
The extra overhead is
Robert Haas robertmh...@gmail.com writes:
On Sep 6, 2009, at 10:45 AM, Tom Lane t...@sss.pgh.pa.us wrote:
... But now that we have a plan for a less obviously broken costing
approach, maybe we should open the floodgates and allow
materialization
to be considered for any inner path that
Well, 10.6.1 is out and it's still got the readdir() bug :-(.
Has someone filed a bug report about this with Apple?
Yes, I have filed a bug report and will keep this list informed when
there is something going on.
regards, jan otto
On Sep 11, 2009, at 10:19 AM, Robert Haas wrote:
On Fri, Sep 11, 2009 at 10:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
I think the main benefit of a sprintf type function for
PostgreSQL is
in the formatting (setting length, scale, alignment),
On Fri, Sep 11, 2009 at 11:43:32AM -0400, Merlin Moncure wrote:
If you are going to use printf format codes, which is good and useful
being something of a standard, I'd call routine printf (not format)
and actually wrap vsnprintf. The format codes in printf have a very
specific meaning:
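For comparison, the formatting side of this (length, scale, alignment) can already be approximated with tools that existed at the time; a sketch, with made-up values:

```sql
-- Fixed scale and rounding with to_char(); FM suppresses padding.
SELECT to_char(12345.678, 'FM99999.00');   -- '12345.68'
-- Simple width and alignment with lpad()/rpad().
SELECT lpad('abc', 10), rpad('abc', 10, '.');
```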
decibel wrote:
Speaking of concatenation...
Something I find sorely missing in plpgsql is the ability to put
variables inside of a string, ie:
DECLARE
v_table text := ...
v_sql text;
BEGIN
v_sql := SELECT * FROM $v_table;
Of course, I'm assuming that if it was easy to do that it would
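The workaround available today is explicit dynamic SQL; a minimal sketch of what the interpolation syntax above would sugar over (function and column choices are made up):

```sql
CREATE FUNCTION count_rows(v_table text) RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE
    v_sql   text;
    v_count bigint;
BEGIN
    -- Build the statement by hand, quoting the identifier safely.
    v_sql := 'SELECT count(*) FROM ' || quote_ident(v_table);
    EXECUTE v_sql INTO v_count;
    RETURN v_count;
END;
$$;
```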
Tom Lane wrote:
Nobody has complained about it over the years, so I wonder if it should
be backpatched. It wouldn't change any working behaviour, just remove
the non-working property of some documented behaviour.
AFAICT it just fails, so backpatching seems like a bug fix not a
On 09/12/2009 04:17 PM, Stephen Frost wrote:
* Mark Mielke (m...@mark.mielke.cc) wrote:
There is no technical requirement for PostgreSQL to separate data in
databases or tables on subdirectory or file boundaries. Nothing wrong
with having one or more large files that contain everything.
Tom Lane wrote:
So the question I would ask goes more like do you really need 32K
databases in one installation? Have you considered using schemas
instead? Databases are, by design, pretty heavyweight objects.
That's a fair question. OTOH, devising a scheme to
Andrew Dunstan and...@dunslane.net writes:
Tom Lane wrote:
So the question I would ask goes more like do you really need 32K
databases in one installation? Have you considered using schemas
instead? Databases are, by design, pretty heavyweight objects.
That's a fair question. OTOH,
On Fri, Sep 11, 2009 at 10:27:06AM +0200, Dimitri Fontaine wrote:
Maybe instead of opening FROM for COPY, having it accepted in WITH would
be better, the same way (from the user point of view) that DML returning
are worked on.
...
WITH csv AS (
COPY t FROM stdin CSV
)
INSERT INTO
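Completed illustratively (the syntax was only a proposal at this point, and the target table and filter are made up), the idea reads:

```sql
-- Proposed, not implemented: COPY as a row source inside WITH.
WITH csv AS (
    COPY t FROM stdin CSV
)
INSERT INTO t_clean
SELECT * FROM csv WHERE id IS NOT NULL;
```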
Folks,
There was a fairly good response to my first request for reviewers for
the upcoming CommitFest, but we still have only about half as many
reviewers as we do patches, and I expect a few more patches to come in
at the last minute, so please volunteer if you haven't already.
Patch authors,
On Fri, Sep 11, 2009 at 5:45 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Sep 11, 2009 at 5:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
The biggest problem I have with this change is that it's going to
massively break anyone who is using the