Tom Lane wrote:
The real point here is that omitting the per-command subtransaction
ought to be a hidden optimization, not something that intrudes to the
point of having unclean semantics when we can't do it.
Sorry to be stupid here, but I didn't understand this when it was
discussed originally
Richard Huxton wrote:
Tom Lane wrote:
The real point here is that omitting the per-command subtransaction
ought to be a hidden optimization, not something that intrudes to the
point of having unclean semantics when we can't do it.
Sorry to be stupid here, but I didn't understand this when it was
I think I recall that lseek may have a negative effect on some OSes'
readahead calculations (probably only on systems that cannot handle an
lseek to the next page either). Do you think we should cache the
last value to avoid the syscall?
We really can't, since the point of doing it is to
On Tue, 30 Nov 2004, Greg Stark wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
The advantage of having it in COPY is that it can be done serverside
direct from the file system. For massive bulk loads that might be a
plus, although I don't know what the protocol+socket overhead is.
On Mon, 29 Nov 2004, Marc G. Fournier wrote:
If there were a comp.databases.postgresql.hackers newsgroup created and
carried by all the news servers ... would you move to using it vs using
the mailing lists?
No. (yes, I'm still here :)
Vince.
Greg Stark wrote:
Personally I find the current CSV support inadequate. It seems pointless to
support CSV if it can't load data exported from Excel, which seems like the
main use case.
OK, I'm starting to get mildly annoyed now. We have identified one
failure case connected with multiline
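For reference, a minimal sketch of loading an Excel-style file with 8.0's
CSV mode (table and path are made up for illustration); the multiline
failure under discussion is exactly fields with embedded newlines inside
the quotes:

    -- QUOTE and ESCAPE default to '"', matching how Excel quotes
    -- fields that contain commas, quotes, or newlines.
    COPY invoices FROM '/tmp/export.csv' WITH CSV HEADER;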
Hi all!
I need to operate with large objects through ODBC in
C/C++ program. How can I do that?
Bojidar Mihajlov wrote:
Hi all!
I need to operate with large objects through ODBC in
C/C++ program. How can I do that?
Look at the contrib lo data type.
Sincerely,
Joshua D. Drake
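For what it's worth, a minimal sketch of the contrib lo type (table and
column names are hypothetical); the lo_manage trigger keeps the underlying
large object from being orphaned when the referencing row changes:

    CREATE TABLE image (title text, raster lo);
    -- delete the referenced large object on UPDATE or DELETE of the row
    CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image
        FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster);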
On 11/29/2004 10:49 AM, Greg Stark wrote:
I'll point out other databases end up treading the same ground. Oracle started
with a well defined rules-based system that was too inflexible to handle
complex queries. So they went to a cost-based optimizer much like Postgres's
current optimizer.
On 11/29/2004 11:03 AM, Marc G. Fournier wrote:
The USENET community seems to think that there would be a mass exodus
from the lists to usenet ... based on past discussions concerning moving
some stuff out of email to stuff like bug trackers, I don't believe this
to be the case, but am
Greg Stark wrote:
Personally I find the current CSV support inadequate. It seems
pointless to
support CSV if it can't load data exported from Excel, which seems like
the
main use case.
OK, I'm starting to get mildly annoyed now. We have identified one
failure case connected
On Mon, 29 Nov 2004, Marc G. Fournier wrote:
If there were a comp.databases.postgresql.hackers newsgroup created and
carried by all the news servers ... would you move to using it vs using
the mailing lists?
Trying this again with the right From address...
No. (and yes, I'm still here :)
Thanks Neil,
I will just have to hassle EMS to upgrade :)
Cheers
Johan Wehtje
Neil Conway wrote:
On Tue, 2004-11-30 at 17:54 +1100, Johan Wehtje wrote:
I am getting the error Column n.nsptablespace does not exist in my
application when I connect using my Administrative tool. This only
happens
Richard Huxton [EMAIL PROTECTED] writes:
Tom Lane wrote:
The real point here is that omitting the per-command subtransaction
ought to be a hidden optimization, not something that intrudes to the
point of having unclean semantics when we can't do it.
Sorry to be stupid here, but I didn't
Greg Stark [EMAIL PROTECTED] writes:
Mark Wong [EMAIL PROTECTED] writes:
I have some initial results using 8.0beta5 with our OLTP workload.
http://www.osdl.org/projects/dbt2dev/results/dev4-010/199/
throughput: 4076.97
Do people really only look at the throughput numbers? Looking at those
[EMAIL PROTECTED] wrote:
I am normally more of a lurker on these lists, but I thought you had
better know
that when we developed CSV import/export for an application at my last
company
we discovered that Excel can't always even read the CSV that _it_ has
output!
(With embedded newlines a
On Mon, 2004-11-29 at 16:01 -0800, Mark Wong wrote:
I have some initial results using 8.0beta5 with our OLTP workload.
Off the bat I see about a 23% improvement in overall throughput. The
most significant thing I've noticed was in the oprofile report where
FunctionCall2 and hash_seq_search
Andrew Dunstan [EMAIL PROTECTED] writes:
FWIW, I don't make a habit of using multiline fields in my spreadsheets - and
some users I have spoken to aren't even aware that you can have them at all.
Unfortunately I don't get a choice. I offer a field on the web site where
users can upload an
Thomas Hallgren [EMAIL PROTECTED] writes:
I don't understand this either. Why a subtransaction at all?
Don't get me wrong. I fully understand that a subtransaction would make
error recovery possible. What I try to say is that the kind of error
recovery that needs a subtransaction is fairly,
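For comparison, plpgsql in 8.0 takes the hidden-subtransaction route: an
EXCEPTION clause implicitly wraps the block in a subtransaction that the
function author never sees. A minimal sketch:

    CREATE FUNCTION safe_div(a int, b int) RETURNS int AS $$
    BEGIN
        RETURN a / b;    -- runs inside an implicit subtransaction
    EXCEPTION WHEN division_by_zero THEN
        RETURN NULL;     -- the failed work was already rolled back
    END;
    $$ LANGUAGE plpgsql;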
On Tue, Nov 30, 2004 at 07:12:10AM +, Simon Riggs wrote:
If you look at the graph of New Order response time distribution, the
higher result gives much more frequent sub-second response for 8.0beta5
and the hump at around 23secs has moved down to 14secs. Notably, the
payment transaction
On Tue, Nov 30, 2004 at 08:34:20AM +0100, Michael Paesold wrote:
Mark Wong wrote:
I have some initial results using 8.0beta5 with our OLTP workload.
Off the bat I see about a 23% improvement in overall throughput. The
most significant thing I've noticed was in the oprofile report where
On Tue, Nov 30, 2004 at 10:57:02AM -0500, Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
Mark Wong [EMAIL PROTECTED] writes:
I have some initial results using 8.0beta5 with our OLTP workload.
http://www.osdl.org/projects/dbt2dev/results/dev4-010/199/
throughput: 4076.97
Do
Tom Lane wrote:
In the case of Perl I suspect it is reasonably possible to determine
whether there is an eval surrounding the call or not, although we
might have to get more friendly with Perl's internal data structures
than a purist would like.
Not really very hard. (caller(0))[3] should
On Tue, Nov 30, 2004 at 11:03:03AM -0500, Rod Taylor wrote:
On Mon, 2004-11-29 at 16:01 -0800, Mark Wong wrote:
I have some initial results using 8.0beta5 with our OLTP workload.
Off the bat I see about a 23% improvement in overall throughput. The
most significant thing I've noticed was in
On 11/27/2004 7:40 PM, Tom Lane wrote:
Thomas F.O'Connell [EMAIL PROTECTED] writes:
So why not have VACUUM FULL FREEZE just do what you propose: VACUUM
FULL then VACUUM FREEZE.
The objective is to make it more safe, not less so. Doing that would
require rewriting a whole bunch of code, which I
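(For clarity, the two-pass form being proposed is simply the equivalent of
running, with a hypothetical table name:

    VACUUM FULL mytable;    -- compact the table first
    VACUUM FREEZE mytable;  -- then freeze tuples in a separate pass
)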
On 11/29/2004 2:03 PM, Marc G. Fournier wrote:
If there were a comp.databases.postgresql.hackers newsgroup created and
carried by all the news servers ... would you move to using it vs using
the mailing lists?
Certainly not.
Jan
The USENET community seems to think that there would be a mass
Mark Wong [EMAIL PROTECTED] writes:
I do have bgwriter_delay increased to 10, per previous
recommendation, which did smooth out the throughput graph
considerably. I can continue to adjust those settings.
Please try a variety of settings and post your results. It would give
us some hard data
Tom Lane wrote:
On what evidence do you base that claim? It's true there are no
existing Tcl or Perl functions that do error recovery from SPI
operations, because it doesn't work in existing releases. That does
not mean the demand is not there. We certainly got beat up on often
enough about the
Tom,
I do have bgwriter_delay increased to 10, per previous
recommendation, which did smooth out the throughput graph
considerably. I can continue to adjust those settings.
Please try a variety of settings and post your results. It would give
us some hard data to help in deciding what
Andrew Dunstan wrote:
Greg Stark wrote:
Personally I find the current CSV support inadequate. It seems pointless to
support CSV if it can't load data exported from Excel, which seems like the
main use case.
OK, I'm starting to get mildly annoyed now. We have identified one
Great idea. Added to TODO:
* Make log_min_duration_statement output when the duration is reached rather
than when the statement completes
This prints long queries while they are running, making troubleshooting
easier. Also, it eliminates the need for log_statement because it
would
On Tue, Nov 30, 2004 at 02:00:29AM -0500, Greg Stark wrote:
Mark Wong [EMAIL PROTECTED] writes:
I have some initial results using 8.0beta5 with our OLTP workload.
http://www.osdl.org/projects/dbt2dev/results/dev4-010/199/
throughput: 4076.97
Do people really only look at the
I've been using log_min_duration_statement = 0 to get durations on all
SQL statements for the purposes of performance tuning, because this logs
the duration on the same line as the statement. My reading of this TODO
is that now log_min_duration_statement = 0 would give me the statements
but no
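For context, the setting under discussion (values are in milliseconds):

    -- log every statement together with its duration on one line
    SET log_min_duration_statement = 0;
    -- a value of -1 disables duration-based logging entirely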
Bruce Momjian wrote:
I am wondering if one good solution would be to pre-process the input
stream in copy.c to convert newline to \n and carriage return to \r and
double data backslashes and tell copy.c to interpret those like it does
for normal text COPY files. That way, the changes to copy.c
David Parker wrote:
I've been using log_min_duration_statement = 0 to get durations on all
SQL statements for the purposes of performance tuning, because this logs
the duration on the same line as the statement. My reading of this TODO
is that now log_min_duration_statement = 0 would give me
Andrew Dunstan wrote:
Bruce Momjian wrote:
I am wondering if one good solution would be to pre-process the input
stream in copy.c to convert newline to \n and carriage return to \r and
double data backslashes and tell copy.c to interpret those like it does
for normal text COPY files.
Hi,
Recently I needed to use pg in my project. Everything was going OK until I
wanted to createdb; it produced this:
Warning: could not remove database directory "/var/postgresql/data/base/17147"
Detail: Failing system command was: rm -rf '/var/postgresql/data/base/17147'
Error: could not
Thomas Hallgren [EMAIL PROTECTED] writes:
From your statement it sounds like you want to use the subtransactions
solely in a hidden mechanism and completely remove the ability to use
them from the function developer. Is that a correct interpretation?
No; I would like to develop the ability
While your message was directed at Thomas, I think I share Thomas'
position; well, for the most part.
On Tue, 2004-11-30 at 11:21 -0500, Tom Lane wrote:
But I think it's a bogus design, because (a) it puts extra burden on the
function author who's already got enough things to worry about, and
James William Pye wrote:
I think I may hold more of a hold-your-nose stance here than Thomas. I am
not sure if I want to implement savepoint/rollback restrictions as I
can't help but feel this is something Postgres should handle; not me or
any other PL or C Function author.
I agree with this but
James William Pye [EMAIL PROTECTED] writes:
plpy being an untrusted language, I *ultimately* do not have control
over this. I can only specify things within my code. I *cannot* stop a
user from making an extension module that draws interfaces to those
routines that may rollback to a savepoint
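(The interfaces at issue are the plain SQL-level ones; a sketch of what a
function author could otherwise reach for:

    BEGIN;
    SAVEPOINT sp;
    -- some statement that might fail
    ROLLBACK TO SAVEPOINT sp;
    COMMIT;

which is exactly the control the thread is debating whether to expose
inside functions.)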
Could we come up with a compromise then? I guess maybe another setting
that says log any query when it hits more than x amount of time. (I'd
also argue you should get a NOTICE or WARNING when this exceeds the
query timeout time).
A perhaps more friendly alternative would be a way to query to get
I noticed that we have a bottleneck in aggregate performance in
advance_transition_function(): according to callgrind, doing datumCopy()
and pfree() for every tuple produced by the transition function is
pretty expensive. Some queries bear this out:
dvl=# SELECT W.element_id, count(W.element_ID)
Jim C. Nasby wrote:
Could we come up with a compromise then? I guess maybe another setting
that says log any query when it hits more than x amount of time. (I'd
also argue you should get a NOTICE or WARNING when this exceeds the
query timeout time).
A perhaps more friendly alternative
Neil Conway [EMAIL PROTECTED] writes:
I've attached a quick and dirty hack that avoids the need to palloc()
and pfree() for every tuple produced by the aggregate's transition
function.
And how badly does it leak memory? I do not believe this patch is
tenable.
Something that occurred to me
Tom Lane wrote:
libpq compiled with --enable-thread-safety thinks it can set the SIGPIPE
signal handler. It thinks once is enough.
psql thinks it can arbitrarily flip the signal handler between SIG_IGN
and SIG_DFL. Ergo, after the first use of the pager for output, libpq's
SIGPIPE
On Tue, 2004-11-30 at 23:15 -0500, Tom Lane wrote:
And how badly does it leak memory? I do not believe this patch is
tenable.
Did you read the rest of my mail?
Something that occurred to me the other morning in the shower is that we
could trivially inline MemoryContextSwitchTo() when using
Tom Lane wrote:
The fundamental point you are missing, IMHO, is that a savepoint is a
mechanism for rolling back *already executed* SPI commands when the
function author wishes that to happen.
Of course. That's why it's imperative that it is the developer that
defines the boundaries. I foresee
As long as the web page maintainers are going to the trouble of taking a
survey, might I (at the risk of being tarred and feathered :-p) suggest
a more thorough survey?
Suggested questions:
(1) If there were a USENET newsfeed, under comp.databases.postgresql.*,
of one or
On Mon, Nov 29, 2004 at 12:49:46 +,
Chris Green [EMAIL PROTECTED] wrote:
This is a perpetual problem. If people all used the same MUA and
(assuming it has the capability) all used the 'reply to list' command
to reply to the list, everything would be wonderful! :-)
I think using