On Sun, May 28, 2006 at 09:12:34PM -0400, Tom Lane wrote:
> But we're still avoiding the central issue: does it make sense to dump a
> probin clause at all for plpython functions?  If it's a compiled form of
> prosrc then it probably doesn't belong in the dump.
That's why I initially thought pg_dump or I was the dirty one.  Even if
CREATE FUNCTION would take it, the probin value would be ignored (well,
overwritten).

> On reflection I'm kind of inclined to think that plpython is abusing the
> column.  If it were really expensive to derive bytecode from source text
> then maybe it'd make sense to do what you're doing, but surely that's
> not all that expensive.  Everyone else manages to parse prosrc on the
> fly and cache the result in memory; why isn't plpython doing that?

Yeah, I don't think it's expensive.  It wasn't a feature I implemented in
response to any particular demand or testing.  Rather, I knew I could
marshal code objects, and I figured doing so would likely yield some
improvement on initial loads, so I implemented it.

> If we think that plpython is leading the wave of the future, I'd be kind
> of inclined to invent a new pg_proc column in which derived text can be
> stored, rather than trying to use probin for the purpose.  Although
> arguably probin itself was once meant to do that, there's too much
> baggage now.

I think having something like that in pg_proc could be useful.  My own
case may not really demand it, but I imagine there are languages that
could see a real benefit from avoiding recompilation.  Though such a
column seems like it would be more of a convenience for PL authors: if
initial load were truly that expensive, it would justify creating a table
of compiled code and taking the extra lookup hit on initial load.
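
For anyone following along, here is a minimal pure-Python sketch of the
two approaches under discussion.  The real PL/Python handler is C code
against the Python C API; the function names and the OID-keyed dict below
are purely illustrative.

    import marshal

    # In-memory cache of compiled code objects, keyed by something stable
    # per function (the OID here is just a stand-in).
    _code_cache = {}

    def get_code(func_oid, prosrc):
        """Compile prosrc once, then reuse the cached code object."""
        code = _code_cache.get(func_oid)
        if code is None:
            code = compile(prosrc, "<plpython function>", "exec")
            _code_cache[func_oid] = code
        return code

    # The probin trick: serialize the compiled bytecode so a later
    # session can skip compile() and just unmarshal the stored bytes.
    def to_probin(prosrc):
        return marshal.dumps(compile(prosrc, "<plpython function>", "exec"))

    def from_probin(blob):
        return marshal.loads(blob)

marshal.loads() is certainly cheaper than compile(), but as said above the
difference is small for typical function bodies, which is why the plain
in-memory cache is probably enough.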