Hi,
On Wednesday 27 February 2008, Florian G. Pflug wrote:
Upon reception of a COPY INTO command, a backend would
.) Fork off a dealer and N worker processes that take over the
client connection. The dealer distributes lines received from the
client to the N workers, while the original
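The dealer/worker split described in this excerpt can be sketched outside PostgreSQL. The following is a minimal, hypothetical Python sketch (the names deal_lines and worker are illustrative only, not PostgreSQL internals): one process per worker, with the dealer handing out input lines round-robin.

```python
import multiprocessing as mp

def worker(inq, results):
    """Consume lines until a None sentinel arrives; counting them is a
    stand-in for the real per-line parse/insert work."""
    handled = 0
    while True:
        line = inq.get()
        if line is None:          # end-of-input sentinel
            break
        handled += 1
    results.put(handled)

def deal_lines(lines, n_workers=2):
    """Dealer: distribute incoming lines round-robin to N worker processes."""
    queues = [mp.Queue() for _ in range(n_workers)]
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(q, results)) for q in queues]
    for p in procs:
        p.start()
    for i, line in enumerate(lines):
        queues[i % n_workers].put(line)   # round-robin dealing
    for q in queues:
        q.put(None)                       # tell each worker to stop
    total = sum(results.get() for _ in procs)
    for p in procs:
        p.join()
    return total
```

The real proposal forks from a backend and hands over the client connection; this sketch only illustrates the line-distribution topology, not the backend plumbing.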
On Tue, Feb 26, 2008 at 02:57:12PM -0800, Joshua D. Drake wrote:
If we get volunteers set up, they will start running it daily.
Would there be a way to script the responses to flag us for things
that are important?
There was (briefly) a way for them to send emails whenever something
new
On Tue, 2008-02-26 at 20:14 +, Gregory Stark wrote:
Tom Lane [EMAIL PROTECTED] writes:
Simon Riggs [EMAIL PROTECTED] writes:
I've not been advocating improving pg_restore, which is where the -Fc
tricks come in.
...
I see you thought I meant pg_restore. I don't think extending
On Tuesday 26 February 2008, Tom Lane wrote:
Or in more practical terms in this case, we have to balance
speed against potentially-large costs in maintainability, datatype
extensibility, and suchlike issues if we are going to try to get more
than percentage points out of straight COPY.
Could
On Tue, Feb 26, 2008 at 03:48:28PM -0600, Robert Lor wrote:
Gregory Stark wrote:
I think both types of probes are useful to different people.
I think certain higher level probes can be really useful to DBAs.
Perhaps looking at the standard database SNMP MIB counters would give us a
place
On Wed, Feb 27, 2008 at 02:02:29AM -0800, craigp wrote:
I'm having trouble compiling the current cvs version on windows xp (msvc 2005
express). Compile errors below.
Did you by any chance use a tree that's been sitting around for a long
time? Like sometime earlier in the 8.3 series. We had a
I'm having trouble compiling the current cvs version on windows xp (msvc 2005
express). Compile errors below.
I have bison 1.875 (I can't find 2.2+ for windows) and flex 2.5.4. These tools
seem to generate correct outputs.
It looks like it might be including parse.h from include/parser/parse.h
On Tuesday 26 February 2008, Joshua D. Drake wrote:
Think 100GB+ of data that's in a CSV or delimited file. Right now
the best import path is with COPY, but it won't execute very fast as
a single process. Splitting the file manually will take a long time
(time that could be spent loading
On Wed, 2008-02-27 at 09:09 +0100, Dimitri Fontaine wrote:
Hi,
On Wednesday 27 February 2008, Florian G. Pflug wrote:
Upon reception of a COPY INTO command, a backend would
.) Fork off a dealer and N worker processes that take over the
client connection. The dealer distributes lines
What exactly is needed for building the required libuuid files? From what I
can tell, the author has no binaries available, correct?
It builds with MinGW only? Or with MSVC? Does the MinGW build generate all
the required libraries for the MSVC build as well? (Sorry, I'm on a Win64
box right now,
The default parser doesn't allow commas in numbers (I can see why, I think).
SELECT ts_parse('default', '123,000');
 ts_parse
----------
 (22,123)
 (12,",")
 (22,000)
One option of course is to pre-process the text, but since we can
support custom parsers I thought I'd take a look at the code to
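One way to read the "pre-process the text" option: strip thousands separators on the client before the parser sees them. A minimal sketch, assuming that is the intent (the regex and the strip_thousands name are mine, not part of tsearch):

```python
import re

def strip_thousands(text):
    """Remove commas that sit between digit groups of three, so the
    default parser sees '123,000' as the single number '123000'."""
    return re.sub(r'(?<=\d),(?=\d{3}\b)', '', text)
```

Ordinary prose commas ("hello, world") are left untouched, since the lookaround requires digits on both sides.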
I'd like to extend the libpq service file by allowing
wildcards, e.g. like this:
[%]
host=dbhost.mycompany.com
dbname=%
Such an entry would match all service parameters,
and all occurrences of the wildcard right of a = would
be replaced with the service parameter.
That implies that a [%] entry
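The proposal amounts to a lookup with substitution. A minimal, hypothetical sketch (the resolve_service helper and the dict-based representation are mine; real libpq parses pg_service.conf itself):

```python
def resolve_service(sections, name):
    """Look up a service entry by name; fall back to the proposed [%]
    wildcard section, substituting the requested service name for each
    '%' on the right-hand side of an = line."""
    entry = sections.get(name) or sections.get('%')
    if entry is None:
        return None
    return {key: value.replace('%', name) for key, value in entry.items()}

# A parsed service file containing only the proposed wildcard section:
sections = {'%': {'host': 'dbhost.mycompany.com', 'dbname': '%'}}
```

With this, any service name would resolve against dbhost.mycompany.com, with the service name itself becoming the database name.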
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
...
Neither the dealer, nor the workers would need access to either
the shared memory or the disk, thereby not messing with the "one backend
is one transaction is one session" dogma.
...
Unfortunately, this idea has far too narrow a
Hi.
- Original Message -
From: Magnus Hagander [EMAIL PROTECTED]
What exactly is needed for building the required libuuid files? From what I
can tell, the author has no binaries available, correct?
Yes, both can be built. However, the MSVC build is not official, which is regrettable.
Peter Eisentraut wrote:
Using the order-only prerequisites feature, which is what is failing with the
old make version, solves item 1).
The alternative is your suggestion
If the dependencies
need to stay as they are, maybe we could avoid the annoyance by having
make not
Dimitri Fontaine wrote:
Of course, the backends still have to parse the input given by pgloader, which
only pre-processes data. I'm not sure having the client prepare the data some
more (binary format or whatever) is a wise idea, as you mentioned and wrt
Tom's follow-up. But maybe I'm all
Alvaro Herrera wrote:
How about we use order-only prerequisite only if present, and use the
ugly or undesirable way as fallback? I see that you can find out if
your Make version supports it by checking .FEATURES.
I think this can be used with a conditional like
ifneq (,$(findstring
I have the following table:
Objeto Valor
ob1 10
ob1 20
ob2 50
ob2 10
ob3 50
With the following command:
select distinct Objeto, sum(valor) from tb
group by Objeto;
I have to return:
Objeto Valor
ob1
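For reference, the grouping the poster wants can be checked outside SQL (note that DISTINCT is redundant once GROUP BY has already collapsed each Objeto to a single row). A small Python check of the expected per-object sums:

```python
# The table from the post, as (Objeto, Valor) pairs:
rows = [("ob1", 10), ("ob1", 20), ("ob2", 50), ("ob2", 10), ("ob3", 50)]

# Equivalent of GROUP BY Objeto with sum(Valor):
totals = {}
for objeto, valor in rows:
    totals[objeto] = totals.get(objeto, 0) + valor
```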
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
...
Neither the dealer, nor the workers would need access to either
the shared memory or the disk, thereby not messing with the "one backend
is one transaction is one session" dogma.
...
Unfortunately, this idea has far too
A.M. wrote:
On Feb 27, 2008, at 9:11 AM, Florian G. Pflug wrote:
The reason that I'd love some within-one-backend solution is that it'd
allow you to utilize more than one CPU for a restore within a *single*
transaction. This is something that a client-side solution won't be
able to
[EMAIL PROTECTED] wrote:
I have the following table:
The hackers list is for development of the PostgreSQL database itself.
Please try reposting on the general or sql mailing lists.
--
Richard Huxton
Archonet Ltd
---(end of broadcast)---
Hello
On 26/02/2008, Andrew Dunstan [EMAIL PROTECTED] wrote:
Pavel Stehule wrote:
Hello,
I found an easy implementation of variadic functions. It's based on
adaptation of FuncnameGetCandidates. When I found a variadic function, then
I should create an accurate number of last arguments
Hello
I think RETURN QUERY is a successful idea. It should be completed with
support of dynamic SQL.
Syntax:
RETURN EXECUTE sqlstring [USING];
This is shortcut for
FOR r IN EXECUTE sqlstring USING LOOP
RETURN NEXT r;
END LOOP;
Regards
Pavel Stehule
On Wed, Feb 27, 2008 at 09:46:14PM +0900, Hiroshi Saito wrote:
What exactly is needed for building the required libuuid files? From what I
can tell, the author has no binaries available, correct?
Yes, both can be built. However, the MSVC build is not official, which is
regrettable. But it can be
On Feb 27, 2008, at 9:11 AM, Florian G. Pflug wrote:
Dimitri Fontaine wrote:
Of course, the backends still have to parse the input given by
pgloader, which only pre-processes data. I'm not sure having the
client prepare the data some more (binary format or whatever) is a
wise idea, as
A.M. wrote:
On Feb 27, 2008, at 9:11 AM, Florian G. Pflug wrote:
Dimitri Fontaine wrote:
Of course, the backends still have to parse the input given by
pgloader, which only pre-processes data. I'm not sure having the
client prepare the data some more (binary format or whatever) is a
wise
Albe Laurenz [EMAIL PROTECTED] writes:
I'd like to extend the libpq service file by allowing
wildcards, e.g. like this:
[%]
host=dbhost.mycompany.com
dbname=%
Such an entry would match all service parameters,
and all occurrences of the wildcard right of a = would
be replaced with the
Brian Hurt wrote:
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
...
Neither the dealer, nor the workers would need access to either
the shared memory or the disk, thereby not messing with the "one backend
is one transaction is one session" dogma.
...
Unfortunately, this idea
but putting these and other counters in context is what could be
missing. Correlating a given (set of) stats with others (possible
outside of the application domain) is one of the assets offered by
DTrace. Besides the generic transaction begin/start/end it could
also be helpful to see the
On Wed, Feb 27, 2008 at 9:26 PM, Florian G. Pflug [EMAIL PROTECTED] wrote:
I was thinking more along the line of letting a datatype specify a
function void* ioprepare(typmod) which returns some opaque object
specifying all that the input and output function needs to know.
We could then
Hi.
- Original Message -
From: Magnus Hagander [EMAIL PROTECTED]
Ok.
Do you know if there are any plans to include this in the distribution? I
would make life a whole lot easier. If not, perhaps we should include the
win32.mak file in a subdir to our uuid module?
Ahh, I don't have
Alvaro Herrera [EMAIL PROTECTED] writes:
Yeah, but it wouldn't take advantage of, say, the hack to disable WAL
when the table was created in the same transaction.
In the context of a parallelizing pg_restore this would be fairly easy
to get around. We could probably teach the thing to combine
Florian G. Pflug wrote:
Would it be possible to determine when the copy is starting that this
case holds, and not use the parallel parsing idea in those cases?
In theory, yes. In practice, I don't want to be the one who has to
answer to an angry user who just suffered a major drop in COPY
Andrew Dunstan wrote:
Florian G. Pflug wrote:
Would it be possible to determine when the copy is starting that
this case holds, and not use the parallel parsing idea in those cases?
In theory, yes. In practice, I don't want to be the one who has to
answer to an angry user who just
Hi,
I'm toying around with the idea of tracking snapshots more accurately to
be able to advance Xmin for read committed transactions.
I think it's relatively easy to do it in the straightforward way, which
is to just add code to destroy snapshots in the spots where a snapshot
variable goes out of scope.
Andrew Dunstan wrote:
Florian G. Pflug wrote:
Would it be possible to determine when the copy is starting that this
case holds, and not use the parallel parsing idea in those cases?
In theory, yes. In practice, I don't want to be the one who has to
answer to an angry user who just suffered a
Florian G. Pflug [EMAIL PROTECTED] writes:
Plus, I'd see this as a kind of testbed for gently introducing
parallelism into postgres backends (especially thinking about sorting
here).
This thinking is exactly what makes me scream loudly and run in the
other direction. I don't want threads
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Plus, I'd see this as a kind of testbed for gently introducing
parallelism into postgres backends (especially thinking about sorting
here).
This thinking is exactly what makes me scream loudly and run in the
other direction. I don't
Alvaro Herrera [EMAIL PROTECTED] writes:
We currently just copy the portal's content into a Materialize node, and
let the snapshot go away at transaction's end. This works, but ISTM we
could improve that by keeping track of the portal's snapshot separately
from the transaction -- that is to
On Wed, 2008-02-27 at 15:24 +0100, Pavel Stehule wrote:
I think RETURN QUERY is a successful idea. It should be completed with
support of dynamic SQL.
Yeah, I can see that being useful.
RETURN EXECUTE sqlstring [USING];
What is the USING clause for?
-Neil
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
We currently just copy the portal's content into a Materialize node, and
let the snapshot go away at transaction's end. This works, but ISTM we
could improve that by keeping track of the portal's snapshot separately
from the
Alvaro Herrera wrote:
I think this can be used with a conditional like
ifneq (,$(findstring order-only,$(.FEATURES)))
...
endif
Yes, that was my thought.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Hiroshi Saito wrote:
Hi.
Ok.
Do you know if there are any plans to include this in the distribution? I
would make life a whole lot easier. If not, perhaps we should include the
win32.mak file in a subdir to our uuid module?
Ahh, I don't have a good idea... build of MinGW is required before
On Wed, Feb 27, 2008 at 1:58 PM, Neil Conway [EMAIL PROTECTED] wrote:
On Wed, 2008-02-27 at 15:24 +0100, Pavel Stehule wrote:
I think RETURN QUERY is a successful idea. It should be completed with
support of dynamic SQL.
Yeah, I can see that being useful.
RETURN EXECUTE sqlstring
I notice that several of the call sites of tuplestore_puttuple() start
with arrays of datums and nulls, call heap_form_tuple(), and then switch
into the tstore's context and call tuplestore_puttuple(), which
deep-copies the HeapTuple into the tstore. ISTM it would be faster and
simpler to provide
Referring to tuplesort.c and tuplestore.c
BACKGROUND: Starting from dumptuples() [ tuplesort.c ] write functions move
the tuple from one buffer to another in order to finally write it in a logical
tape. Is there a way (even the most inefficient way) to use current
read/write functions
On 27/02/2008, Merlin Moncure [EMAIL PROTECTED] wrote:
On Wed, Feb 27, 2008 at 1:58 PM, Neil Conway [EMAIL PROTECTED] wrote:
On Wed, 2008-02-27 at 15:24 +0100, Pavel Stehule wrote:
I think RETURN QUERY is a successful idea. It should be completed with
support of dynamic SQL.
On IRC today someone brought up a problem in which users were still able
to connect to a database after a REVOKE CONNECT ... FROM theuser. The
reason theuser is still able to connect is because PUBLIC still has
privileges to connect by default (AndrewSN was the one who answered
this).
Would it be
On Tue, Feb 26, 2008 at 06:19:48PM +0100, Dimitri Fontaine wrote:
So... where do I start to create a varlena datatype which has to store the 3
following values: text prefix, char start, char end.
It's not clear for me whether this is what I need to provide:
typedef struct
I see no-one
Jeff Davis [EMAIL PROTECTED] writes:
Would it be reasonable to throw a warning if you revoke a privilege from
some role, and that role inherits the privilege from some other role (or
PUBLIC)?
This has been suggested and rejected before --- the consensus is it'd
be too noisy.
Possibly the
In Read Committed transactions we take snapshots much more frequently
than transactions begin and commit. It would help scalability if we
didn't need to re-take a snapshot. That's only helpful if the chances of
seeing the snapshot are relatively high.
Now that we have virtual transactions we
Neil Conway [EMAIL PROTECTED] writes:
I notice that several of the call sites of tuplestore_puttuple() start
with arrays of datums and nulls, call heap_form_tuple(), and then switch
into the tstore's context and call tuplestore_puttuple(), which
deep-copies the HeapTuple into the tstore. ISTM
Hello.
I am currently playing with the UUID data type and trying to use it to store
UUIDs provided by a third-party (Hewlett-Packard) application. The problem is
they format UUIDs as
-------, so I have to
replace(text,'-','')::uuid for
this kind of data.
Nooow, the case is
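The replace-then-cast workaround described above amounts to: drop the vendor's dashes and let the standard 8-4-4-4-12 grouping be re-applied. A minimal sketch using Python's uuid module (the normalize name is mine, for illustration):

```python
import uuid

def normalize(vendor_uuid):
    """Strip all dashes from a vendor-formatted UUID string and re-emit
    it in the standard RFC 4122 8-4-4-4-12 layout."""
    return str(uuid.UUID(hex=vendor_uuid.replace('-', '')))
```

Any grouping of the 32 hex digits is accepted, since only the digits themselves survive the replace.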
Dawid,
I am working on a patch to support this format (yes, it is a simple
modification).
I'd suggest writing a formatting function for UUIDs instead. Not sure what
it should be called, though. to_char is pretty overloaded right now.
--
--Josh
Josh Berkus
PostgreSQL @ Sun
San Francisco
I am working on a patch to support this format (yes, it is a simple
modification).
There was a proposal and a discussion regarding how this datatype would be
before I started developing it. We decided to go with the format proposed in
the RFC. Unless there is a strong case, I doubt any non
Josh Berkus [EMAIL PROTECTED] writes:
I am working on a patch to support this format (yes, it is a simple
modification).
I'd suggest writing a formatting function for UUIDs instead.
That seems like overkill, if not outright encouragement of people to
come up with yet other nonstandard formats
Hi.
- Original Message -
From: Magnus Hagander [EMAIL PROTECTED]
I take it you are in contact with them, since you helped them with the
port? Can you ask them if they are interested in distributing that file?
Yes. However, it is not about MSVC. It is because it needed
It looks like gypsy_moth has been failing like this:
creating directory
/export/home/tmp/pg-test/build-suncc/HEAD/pgsql.21325/src/test/regress/./tmp_check/data
... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers/max_fsm_pages ...
On Tue, 2008-02-26 at 12:29 +0100, Martijn van Oosterhout wrote:
When we're running a COPY over a high latency link then network time is
going to become dominant, so potentially, running COPY asynchronously
might help performance for loads or initial Slony configuration. This is
Simon Riggs [EMAIL PROTECTED] wrote:
The LOCK is only required because we defer the inserts into unique
indexes, yes?
No, not in the present pg_bulkload. It creates a new relfilenode like REINDEX,
therefore, an access exclusive lock is needed. When there are violations of
unique constraints, all
Hello All,
We are facing some problems while downloading the PostgreSQL 8.2.4
version for 64-bit processors, for both Windows and SUSE Linux ES-7000
partitions. All of the download URLs for the PostgreSQL 8.2.4 version
display the error message "The webpage can't be found."
Can you please let me