I will submit a patch. As soon as I have read the developers FAQ and learned
how this is done :-)
B.T.W. I needed one additional function. Do you think I should submit it
too? This function copies some behavior found in the SPI_cursor_open. If
submitted, I'd suggest that the SPI_cursor_open calls
>> Improving on "not ideal" would be good, and would get even closer to
>> full Oracle/SQLServer migration/compatibility. However, since I've never
>> looked at that section of code, I couldn't comment on any particular
>> approach nor implement such a change, so I'll shut up and be patient.
>
> I
On Thu, Feb 12, 2004 at 09:55:36AM +0100, Zeugswetter Andreas SB SD wrote:
>
> Yeah, but in other DBs this is solved by the frontend, e.g. Informix's
> dbaccess has a mode that simply stops execution upon the first error. So I don't
> think this is a no-go argument if we added such a feature to psql
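A minimal psql sketch of that "stop on first error" idea, using psql's existing ON_ERROR_STOP variable; the table name is hypothetical:

\set ON_ERROR_STOP on
BEGIN;
INSERT INTO t1 VALUES (1);       -- hypothetical table with an integer column
INSERT INTO t1 VALUES ('oops');  -- if this statement fails, psql stops the script here
COMMIT;                          -- not reached when the previous statement errors

Run non-interactively (e.g. psql -f script.sql), psql exits at the first failed statement instead of continuing.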
Thomas Hallgren wrote:
> I will submit a patch. As soon as I have read the developers FAQ and learned
> how this is done :-)
>
> B.T.W. I needed one additional function. Do you think I should submit it
> too? This function copies some behavior found in the SPI_cursor_open. If
> submitted, I'd sugg
On Wed, 11 Feb 2004 [EMAIL PROTECTED] wrote:
> Thank you very much for your reply.
>
> Yes, that's true. But it doesn't seem like a good idea if I have many databases
> and I want them totally separated from each other.
>
> What's your opinion? Thanks.
OK, here's the issue. Postgresql uses certain res
Zeugswetter Andreas SB SD wrote:
>
> >> Improving on "not ideal" would be good, and would get even closer to
> >> full Oracle/SQLServer migration/compatibility. However, since I've never
> >> looked at that section of code, I couldn't comment on any particular
> >> approach nor implement such a ch
> But for separating out applications from each other, there's really
> nothing to be gained by putting each separate database application into
> its own cluster.
I believe the initial email requested individual logs, and presumably
the ability to grant superuser access without risking a user c
Tom Lane wrote:
I just saw the parallel regression tests hang up again. Inspection
revealed that StrategyInvalidateBuffer() was stuck in an infinite loop
because the freelist was circular.
(gdb) p StrategyControl->listFreeBuffers
$5 = 579
(gdb) p BufferDescriptors[579]
$6 = {bufNext = 106, data =
On Wed, Feb 11, 2004 at 17:48:51 -0500,
[EMAIL PROTECTED] wrote:
> Thank you very much for your reply. I'd like to discuss the why.
>
> I don't think letting them share data and logs would gain me anything.
> And if I have 2 databases that are totally unrelated, I think the most natural
> way is to m
Tom Lane <[EMAIL PROTECTED]> writes:
> > Hmmm ... maybe query_work_mem and maintenance_work_mem, or something
> > similar?
>
> I'll go with these unless someone has another proposal ...
dml_sort_mem and ddl_sort_mem ?
--
greg
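Purely as an illustrative sketch of how either naming proposal would be used, assuming the parameters end up as ordinary settable GUC variables with values in kilobytes, like the existing sort_mem:

SET query_work_mem = 16384;         -- per-operation memory for ordinary queries (proposed name)
SET maintenance_work_mem = 131072;  -- memory for maintenance commands such as CREATE INDEX or VACUUM (proposed name)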
Rod Taylor wrote:
> Last time I looked, you couldn't get the database name in the log files
> to allow for mechanical filtering
Watch this space. When my log_disconnections patch makes it through the
filter process it will be followed up with a patch that allows tagging
of log lines with a printf-sty
> Oh, okay. So when's that fix going to be committed?
Never mind, I see you just did ...
regards, tom lane
Hello,
Depending on your needs and the transaction load per database, you can easily
run 30 databases on a machine with 2 GB of RAM. You will of course have
to use initdb for each cluster and change the TCP port for each cluster,
but it works just fine.
Sincerely,
Joshua D. Drake
Jan Wieck <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> I just saw the parallel regression tests hang up again.
> Anyhow, according to our discussion in early January I have changed the
> code in StrategyInvalidateBuffer() so that it clears out the buffer tag
> and the CDB's buffer tag. Also
"Jeroen T. Vermeulen" <[EMAIL PROTECTED]> writes:
> It does require that the application be meticulous in its checking though.
> Existing client programs, for instance, may ignore any errors coming back
> from PQexec() during the transaction and just see if the COMMIT succeeds.
> Such code would b
Curt Sampson wrote:
>
> I notice that pg_dump is still dumping CHECK constraints with the table,
> rather than at the very end, as it does with all the other constraints.
> As discussed in bug report #787, at
>
> http://archives.postgresql.org/pgsql-bugs/2002-09/msg00278.php
>
> this breaks
Greg Stark wrote:
> Tom Lane <[EMAIL PROTECTED]> writes:
>
> > > Hmmm ... maybe query_work_mem and maintenance_work_mem, or something
> > > similar?
> >
> > I'll go with these unless someone has another proposal ...
>
> dml_sort_mem and ddl_sort_mem ?
I thought about that, but didn't think DML
On Thu, 12 Feb 2004, Rod Taylor wrote:
> > But for separating out applications from each other, there's really
> > nothing to be gained by putting each separate database application into
> > its own cluster.
>
> I believe the initial email requested individual logs, and presumably
> the abilit
Ok, I have EXPLAIN ANALYZE results for both the power and throughput
tests:
http://developer.osdl.org/markw/dbt3-pgsql/
It's run #60 and the links are towards the bottom of the page under the
"Run log data" heading. The results from the power test are
"power_query.result" and "thuput_qs1.r
Looks good to me but I will get some other eyes on it before I apply it.
Your patch has been added to the PostgreSQL unapplied patches list at:
http://momjian.postgresql.org/cgi-bin/pgpatches
I will try to apply it within the next 48 hours.
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> On 12 Feb 2004, Greg Stark wrote:
>> dml_sort_mem and ddl_sort_mem ?
> I like those. Are they an accurate representation of what's going on?
No, not particularly ...
regards, tom lane
On 12 Feb, Josh Berkus wrote:
> Mark,
>
>> It's run #60 and the links are towards the bottom of the page under the
>> "Run log data" heading. The results from the power test are
>> "power_query.result" and "thuput_qs1.result", etc. for each stream in
>> the throughput test.
>
> I'm confused. Wer
Mark,
> Oh sorry, I completely forgot that Q19 was the whole purpose of this. So
> #60 doesn't have the right Q19. I'll run with the one you want now.
Thanks! And the original, not the "fixed", Q19 if you please. It's the
original that wouldn't finish on Postgres 7.3.
--
-Josh Berkus
Aglio D
Greg Stark wrote:
>
> Bruce Momjian <[EMAIL PROTECTED]> writes:
>
> > Imagine this:
> >
> > BEGIN WORK;
> > LOCK oldtab;
> > CREATE_X TABLE newtab AS SELECT * FROM oldtab;
> > DELETE oldtab;
> > COMMIT
> >
> > In this case, you would want the database to abort on a syntax error, right?
Hey there everyone.
Sorry for what seems to be a rather strange
thought, but could we change the separator used to
distinguish 'cross-database' vs 'cross-schema'?
For example, I would expect the following
to work:
CREATE OR REPLACE FUNCTION test_autohist() RETURNS trigge
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Looks good to me but I will get some other eyes on it before I apply it.
It's in already ...
regards, tom lane
Michael Meskes wrote:
> Just wanted to let you know that if we were interested in adding
> that patch to our main CVS, the guy who wrote it would be more than
> willing to change his license to BSD.
I was under the impression we wanted to implement the ANSI way to do
this. Is this what the pat
> > > BEGIN WORK;
> > > LOCK oldtab;
> > > CREATE_X TABLE newtab AS SELECT * FROM oldtab;
> > > DELETE oldtab;
> > > COMMIT
> > >
> > > In this case, you would want the database to abort on a syntax error, right?
> >
> > Certainly not if I was typing this from the command line. Imagine
Already applied. Thanks.
Gavin Sherry wrote:
> The attached patch changes the existing behaviour of length(char(n)).
> Currently, this is what happens:
>
> template1=# select length('blah'::char(10));
> length
>
Gavin Sherry wrote:
> I believe Tom applied this while you were away.
Oh, sorry, I see it now:
test=> select length('blah'::char(10));
 length
--------
      4
(1 row)
I did test this before placing it in the queue, but I now realize I have
been testing regre
[EMAIL PROTECTED] writes:
> Ok, I have EXPLAIN ANALYZE results for both the power and throughput
> tests:
> http://developer.osdl.org/markw/dbt3-pgsql/
Thanks. I just looked at Q9 and Q21, since those are the slowest
queries according to your chart. (Are all the queries weighted the same
f
I believe Tom applied this while you were away.
Gavin
On Thu, 12 Feb 2004, Bruce Momjian wrote:
>
> Looks good to me but I will get some other eyes on it before I apply it.
>
> Your patch has been added to the PostgreSQL unapplied patches list at:
>
> http://momjian.postgresql.org/cgi-bin/
On Thu, 12 Feb 2004, Stef wrote:
> > U. Postgresql doesn't natively support cross database queries...
> >
>
> I know, but it does schemas, and currently, the same
> notation is used to specify schemas as 'cross-database'.
>
> So the planner often reports 'cross-database not allowed'
> in
> U. Postgresql doesn't natively support cross database queries...
>
I know, but it does schemas, and currently, the same
notation is used to specify schemas as 'cross-database'.
So the planner often reports 'cross-database not allowed'
in areas where it should at least report 'cross-schema
> > Case in point, the example trigger. I would have expected
> > a deliberate schemaname.table reference during an insert to work, but
> > instead the parser complains about cross-database.
>
> I would think just changing the error message to "no schema by the name of
> suchandsuch found" would make it pret
U. Postgresql doesn't natively support cross database queries...
On Thu, 12 Feb 2004, Stef wrote:
> Hey there everyone.
>
> Sorry for what seems to be a rather strange
> thought, but could we change the separator used to
> distinguish 'cross-database' vs 'cross-schema'?
>
> Fo
>Bruce Momjian wrote
> Zeugswetter Andreas SB SD wrote:
> >
> > >> Improving on "not ideal" would be good, and would get even closer to
> > >> full Oracle/SQLServer migration/compatibility. However, since I've never
> > >> looked at that section of code, I couldn't comment on any particular
> >
Koichi Suzuki wrote:
> Hi, this is Suzuki from NTT DATA Intellilink.
>
> I told Bruce Momjian that my colleagues and I are interested in
> implementing PITR at the BOF at NY LW2004. NTT's laboratory is very
> interested in this issue and I'm planning to work with them. I hope we
> can cooperate.
>> In this case, you would want the database to abort on a syntax error, right?
Am I completely off thread to ask why/how we allow an abort on syntax
errors? (at least in regard to stored functions)
Shouldn't PostgreSQL do something intelligent like *notice* the syntax
error in the stored function
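A minimal sketch of the behavior being questioned: with a PL/pgSQL function, a bad reference inside the body is not reported when the function is created, only when it is first executed. The function and table names below are hypothetical:

CREATE FUNCTION touch_missing() RETURNS integer AS '
BEGIN
    UPDATE no_such_table SET x = 1;  -- accepted at CREATE FUNCTION time
    RETURN 1;
END;
' LANGUAGE plpgsql;

SELECT touch_missing();  -- the missing table is only reported here, at execution time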
>Bruce Momjian
> Simon Riggs wrote:
> > >Tom Lane
> > > "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > > > Most importantly, other references I have state that: the ANSI SQL-99
> > > > specification does require that if a statement errors then only that
> > > > statement's changes are rolled bac
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Rod Taylor wrote:
>> Can this be done entirely on the client side?
>>
>> Have psql silently wrap every statement going out with a BEGIN and a
>> COMMIT or ROLLBACK depending on whether there was an error or not?
> Yep, we could do it in the client like
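A minimal sketch of what such per-statement client-side wrapping would send around a single user statement; the statement itself is hypothetical, and the client would issue COMMIT or ROLLBACK depending on whether the server reported an error:

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- the user's statement (hypothetical)
COMMIT;      -- sent by the client if the statement succeeded
-- ROLLBACK; -- sent instead if the statement failed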
Stef <[EMAIL PROTECTED]> writes:
> For example, I would expect the following
> to work:
> CREATE OR REPLACE FUNCTION test_autohist() RETURNS trigger
> AS 'BEGIN
> INSERT INTO history.test2 VALUES
> (new.field1,history.test_hist.nextval(), new.field2, new.field3, ne
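A sketch of how the quoted trigger could be written so the parser never sees a three-part database.schema.object name: the sequence is referenced through nextval() with a schema-qualified name instead of the Oracle-style history.test_hist.nextval(). Table, sequence, and column names come from the fragment above; the remainder of the column list and the trigger body are assumed:

CREATE OR REPLACE FUNCTION test_autohist() RETURNS trigger AS '
BEGIN
    INSERT INTO history.test2
        VALUES (new.field1, nextval(''history.test_hist''), new.field2, new.field3);
    RETURN new;
END;
' LANGUAGE plpgsql;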
On Thu, 12 Feb 2004, Stef wrote:
> > > Case in point, the example trigger. I would have expected
> > > a deliberate schemaname.table reference during an insert to work, but
> > > instead the parser complains about cross-database.
> >
> > I would think just changing the error message to "no schema by the name
On 12 Feb, Josh Berkus wrote:
> Mark,
>
>> Oh sorry, I completely forgot that Q19 was the whole purpose of this. So
>> #60 doesn't have the right Q19. I'll run with the one you want now.
>
> Thanks! And the original, not the "fixed", Q19 if you please. It's the
> original that wouldn't finish o
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Rod Taylor wrote:
> >> Can this be done entirely on the client side?
> >>
> >> Have psql silently wrap every statement going out with a BEGIN and a
> >> COMMIT or ROLLBACK depending on whether there was an error or not?
>
> > Yep, we
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> Hmmm. I would think the first step would be to simply change the "cross-db
> queries aren't supported" message to something like "schema either does not
> exist or is not in the search path".
AFAICT the issue is that Stef thought it was complaining about a
different
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Bruce Momjian) would write:
> I guess my question is that now that we have the new cache
> replacement policy, is the vacuum delay worthwhile. I looked at
> http://developer.postgresql.org/~wieck/vacuum_cost/ and it does seem
> useful.
They
Andrew Dunstan wrote:
> Based on Larry's idea, I had in mind to provide a third escape in the
> log_line_info string (in addition to the %U and %D that I had previously
> done) of %S for sessionid, which would look something like this:
> 402251fc.713f
>
> I will start redoing this feature when
Bruce Momjian wrote:
> Jan Wieck wrote:
>> Attached is a corrected version that solves the query cancel problem by
>> not napping any more and going full speed as soon as any signal is
>> pending. If nobody objects, I'm going to commit this tomorrow.
> Jan, three questions. First, is this useful now that we
Jan Wieck wrote:
> Attached is a corrected version that solves the query cancel problem by
> not napping any more and going full speed as soon as any signal is
> pending. If nobody objects, I'm going to commit this tomorrow.
Jan, three questions. First, is this useful now that we have the new
c
Jan Wieck wrote:
> Bruce Momjian wrote:
>
> > Jan Wieck wrote:
> >> Attached is a corrected version that solves the query cancel problem by
> >> not napping any more and going full speed as soon as any signal is
> >> pending. If nobody objects, I'm going to commit this tomorrow.
> >
> > Jan, th