On Tue, 16 Feb 2010 15:22:00 +0100, Greg Stark gsst...@mit.edu wrote:
There's a second problem though. We don't actually know how long any
given query is going to take to plan or execute. We could just
remember how long it took to plan and execute last time or how long it
took to plan last time
On Thu, 18 Feb 2010 16:09:42 +0100, Dimitri Fontaine
dfonta...@hi-media.com wrote:
Pierre C li...@peufeu.com writes:
Problem with prepared statements is they're a chore to use in web apps,
especially PHP, since after grabbing a connection from the pool, you don't
know if it has prepared the statements you need yet.
What about catching the error in the application and INSERT'ing into the
current preprepare.relation table? The aim would be to do that in dev or
in pre-prod environments, then copy the table content in production.
Yep, but it's a bit awkward and time-consuming, and not quite suited to
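The per-connection bookkeeping discussed above can be sketched as follows. All names here are hypothetical (the stub connection just records SQL; a real pool would wrap a driver such as psycopg2): the idea is simply to remember which statements each pooled connection has already PREPAREd, so a statement is prepared at most once per connection instead of failing or re-preparing on every request.

```python
class FakeConnection:
    """Stand-in for a pooled DB connection; records executed SQL."""
    def __init__(self):
        self.executed = []

    def execute(self, sql):
        self.executed.append(sql)


class LazyPreparer:
    """Remembers which statements each connection has prepared,
    so PREPARE is issued at most once per connection."""
    def __init__(self):
        self._prepared = {}  # id(conn) -> set of statement names

    def execute_prepared(self, conn, name, sql, args_sql):
        seen = self._prepared.setdefault(id(conn), set())
        if name not in seen:
            conn.execute(f"PREPARE {name} AS {sql}")
            seen.add(name)
        conn.execute(f"EXECUTE {name} ({args_sql})")


conn = FakeConnection()
lp = LazyPreparer()
lp.execute_prepared(conn, "get_user", "SELECT * FROM users WHERE id = $1", "42")
lp.execute_prepared(conn, "get_user", "SELECT * FROM users WHERE id = $1", "43")
# Two executions, but only one PREPARE was issued on this connection.
prepare_count = sum(s.startswith("PREPARE") for s in conn.executed)
```

The alternative mentioned in the thread (catching the "already prepared" error from the server) avoids the client-side set, at the cost of one round trip per miss.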
My opinion is that PostgreSQL should accept any MySQL syntax and return
warnings. I believe that we should accept even InnoDB syntax and turn it
immediately into PostgreSQL tables. This would allow people with no
interest in SQL to migrate from MySQL to PostgreSQL without any harm.
A solution
As far as I can tell, we already do index skip scans:
This feature is great but I was thinking about something else, like SELECT
DISTINCT, which currently does a seq scan, even if x is indexed.
Here is an example. In both cases it could use the index to skip all
non-interesting rows,
Oh, this is what I believe MySQL calls loose index scans. I'm
Exactly:
http://dev.mysql.com/doc/refman/5.0/en/loose-index-scan.html
actually looking into this as we speak,
Great! Will it support the famous top-n by category?
but there seems to be a
non-trivial amount of work to be
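Until such a skip scan exists in the planner, the loose-index-scan behaviour can be emulated by hand with a recursive CTE: each step jumps straight to the next distinct value via an index probe, instead of reading every row the way a plain SELECT DISTINCT seq scan does. A small sketch, demonstrated on SQLite only because it is runnable here (the same SQL shape works on PostgreSQL 8.4+; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("CREATE INDEX t_x ON t (x)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(v,) for v in [1, 1, 1, 2, 2, 3, 3, 3, 3]])

# Each recursive step asks the index for the smallest x strictly greater
# than the last one found, skipping all duplicate rows in between.
distinct_x = [row[0] for row in conn.execute("""
    WITH RECURSIVE skip(x) AS (
        SELECT min(x) FROM t
        UNION ALL
        SELECT (SELECT min(x) FROM t WHERE t.x > skip.x)
        FROM skip WHERE skip.x IS NOT NULL
    )
    SELECT x FROM skip WHERE x IS NOT NULL
""")]
```

On a table with few distinct values and many rows, this turns one full scan into a handful of index probes.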
So, if PHP devs don't have time to learn to do things right, then we
have to find time to learn to do things wrong? Seems like a nonsense
argument to me.
The best reply I ever got from the phpBB guys, on I don't remember which
question, was:
WE DO IT THIS WAY BECAUSE WE WANT TO SUPPORT MYSQL 3.x
On Tue, 30 Mar 2010 13:01:54 +0200, Peter Eisentraut pete...@gmx.net
wrote:
On Tue, 2010-03-30 at 08:39 +0200, Stefan Kaltenbrunner wrote:
on fast systems pg_dump is completely CPU bottlenecked
Might be useful to profile why that is. I don't think pg_dump has
historically been developed
On Sunday 30 May 2010 18:29:31 Greg Stark wrote:
On Sun, May 30, 2010 at 4:54 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I read through that thread and couldn't find much discussion of
alternative CRC implementations --- we spent all our time on arguing
about whether we needed 64-bit CRC or not.
The Linux kernel also uses it when it's available, see e.g.
http://tomoyo.sourceforge.jp/cgi-bin/lxr/source/arch/x86/crypto/crc32c-intel.c
If you guys are interested I have a Core i7 here, could run a little
benchmark.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
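For reference, the CRC-32C (Castagnoli) polynomial that the SSE4.2 `crc32` instruction and the kernel code above implement can be sketched as a slow, portable software fallback; the final assertion uses the standard check value for this polynomial (CRC of the ASCII string "123456789"):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected, poly 0x1EDC6F41
    (0x82F63B78 in reflected form). Purely illustrative: real code
    would use a table-driven loop or the hardware instruction."""
    crc ^= 0xFFFFFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF
```

The benchmark offered above would compare exactly this kind of software loop against the hardware instruction, which processes 8 bytes per cycle rather than 1 bit per iteration.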
The real problem here is that we're sending records to the slave which
might cease to exist on the master if it unexpectedly reboots. I
believe that what we need to do is make sure that the master only
sends WAL it has already fsync'd
How about this:
- pg records somewhere the xlog
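The rule proposed above ("only send WAL the master has already fsync'd") can be shown with a toy model; all names here are made up for illustration, not actual walsender internals. The send pointer is capped at the flush pointer, so a master crash can never leave the standby holding records the master has lost:

```python
class ToyWal:
    """Toy write-ahead log positions, in bytes (stand-ins for LSNs)."""
    def __init__(self):
        self.write_lsn = 0   # written to the OS, possibly not durable
        self.flush_lsn = 0   # known durable (fsync'd)
        self.sent_lsn = 0    # shipped to the standby

    def write(self, nbytes):
        self.write_lsn += nbytes

    def fsync(self):
        self.flush_lsn = self.write_lsn

    def send_available(self):
        # Never stream past what is durable on the master.
        target = min(self.write_lsn, self.flush_lsn)
        sent_now = target - self.sent_lsn
        self.sent_lsn = target
        return sent_now


wal = ToyWal()
wal.write(100)
before_fsync = wal.send_available()   # nothing durable yet
wal.fsync()
after_fsync = wal.send_available()    # now the 100 bytes may ship
```

The cost of this rule is added replication latency (records wait for the master's fsync), which is the trade-off the thread is weighing.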
Can the problem generally be stated as tuples seeing multiple updates
in the same transaction?
I think that every time PostgreSQL is used with an ORM, there is
a certain amount of multiple updates taking place. I have actually
been reworking the client side to get around multiple updates, since
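One way the client-side rework mentioned above could look (a hypothetical sketch, all names illustrative): buffer per-row changes and flush each row as a single UPDATE, so several ORM-level updates to one tuple become one statement and one new row version instead of several.

```python
class UpdateCoalescer:
    """Merges repeated updates to the same (table, pk) into one UPDATE."""
    def __init__(self):
        self.pending = {}  # (table, pk) -> merged column changes

    def update(self, table, pk, **changes):
        self.pending.setdefault((table, pk), {}).update(changes)

    def flush(self):
        statements = []
        for (table, pk), changes in self.pending.items():
            sets = ", ".join(f"{col} = %s" for col in changes)
            statements.append(f"UPDATE {table} SET {sets} WHERE id = {pk}")
        self.pending.clear()
        return statements


c = UpdateCoalescer()
c.update("users", 1, name="a")
c.update("users", 1, email="b")   # merged with the first change to row 1
c.update("users", 2, name="c")
stmts = c.flush()                  # two UPDATEs instead of three
```

Fewer updates per tuple also means fewer dead row versions for VACUUM to clean up, which is the server-side cost the thread is discussing.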
On Wed, 21 Sep 2011 18:13:07 +0200, Tom Lane t...@sss.pgh.pa.us wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
On 21.09.2011 18:46, Tom Lane wrote:
The idea that I was toying with was to allow the regular SQL-callable
comparison function to somehow return a function
Not to mention palloc, another extremely fundamental and non-reentrant
subsystem.
Possibly we could work on making all that stuff re-entrant, but it would
be a huge amount of work for a distant and uncertain payoff.
Right. I think it makes more sense to try to get parallelism working
first