Martin Foster <[EMAIL PROTECTED]> writes:
> The one not using sub-queries under EXPLAIN ANALYZE proves itself to be
> less efficient and to have a far higher cost than those with the penalty
> of a sub-query. Since this seems to run counter to what I have been told
> in the past, I thought I would
I thought this could generate some interesting discussion. Essentially,
there are three queries below: two use sub-queries to change how the
selection is randomized (first by author and then by work), and the
original simply randomizes over all available works.
The one not using sub-queries under EXPLAIN ANALYZE proves itself to be
less efficient and to have a far higher cost than those with the penalty
of a sub-query.
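For concreteness, here is a minimal sketch of the two query shapes being
compared; the table and column names (works, authors, author_id) are
hypothetical and not taken from the original post:

    -- Original form: randomize over every available work.
    EXPLAIN ANALYZE
    SELECT * FROM works
    ORDER BY random()
    LIMIT 1;

    -- Sub-query form: pick a random author first, then a random work
    -- by that author.
    EXPLAIN ANALYZE
    SELECT * FROM works
    WHERE author_id = (SELECT author_id FROM authors
                       ORDER BY random() LIMIT 1)
    ORDER BY random()
    LIMIT 1;

ORDER BY random() has to scan and sort every row it is given, so the cost
EXPLAIN ANALYZE reports grows with the number of candidate rows; narrowing
the candidate set with a sub-query first can therefore come out cheaper
despite the extra query.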
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] ("Joshua D. Drake") would
write:
> I hope you understand that I have in no way ever suggested (purposely)
> anything negative about Slony, only that I believe they serve
> different technical solutions.
Stipulating that I may have some bias
Bruce Momjian wrote:
> Pierre-Frédéric Caillaud wrote:
> > Is there also a way to tell Postgres: "I don't care if I
> > lose 30 seconds of transactions on this table if the power goes
> > out, I just want to be sure it's still ACID et al. compliant but
> > you can fsync less often and
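What is being asked for maps onto real PostgreSQL settings, though per
session rather than per table. A hedged sketch (availability depends on
the server version; synchronous_commit, for example, only appeared in
8.3, and older servers offer commit_delay/commit_siblings instead):

    -- Keep fsync = on so a crash cannot corrupt the cluster, but stop
    -- waiting for the WAL flush at COMMIT: a power failure can lose the
    -- last few commits, not the database's consistency.
    SET synchronous_commit = off;

    BEGIN;
    -- ... work whose loss after a crash is acceptable ...
    COMMIT;   -- returns before the WAL hits disk

Turning fsync itself off goes further than the question asks, since a
crash could then leave the database corrupt rather than merely missing
its most recent transactions.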
On 8/14/2004 12:22 PM, Joshua D. Drake wrote:
I hope you understand that I have in no way ever suggested (purposely)
anything negative about Slony, only that I believe they serve different
technical solutions.
You know I never took anything you said as negative. I think people here
need to know th
Once again, Joshua, would you please explain what you mean by
"batch" and "live" replication systems? Slony does group multiple
"master" transactions into one replication transaction to improve
performance (fewer commits on the slaves). The interval of these
groups is configurable and for hig
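To illustrate the grouping point with a toy example (replica_table is an
invented name; this shows only the general idea, not Slony's actual
machinery):

    CREATE TABLE replica_table (id int, val text);  -- example table only

    -- Applying each captured change with its own commit means one WAL
    -- flush per source transaction:
    INSERT INTO replica_table VALUES (1, 'a');   -- autocommit: flush
    INSERT INTO replica_table VALUES (2, 'b');   -- autocommit: flush

    -- Grouping several source transactions into one transaction on the
    -- subscriber pays for a single flush:
    BEGIN;
    INSERT INTO replica_table VALUES (3, 'c');
    INSERT INTO replica_table VALUES (4, 'd');
    COMMIT;                                      -- one flush for the group

The trade-off is latency versus throughput: larger groups mean fewer
commits on the slaves but a longer window in which they lag the master.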
On 8/13/2004 9:39 PM, Joshua D. Drake wrote:
Chris Cheston wrote:
Hi all, I'm trying to implement a highly scalable, high-performance,
real-time database replication system to back up my Postgres database
as data gets written.
So far, Mammoth Replicator is looking pretty good, but it costs $1000+.
gnari wrote:
"G u i d o B a r o s i o" <[EMAIL PROTECTED]> wrote:
[speeding up 100 inserts every 5 minutes]
Tips!
*Delete indexes and recreate them after the insert.
sounds a bit extreme, for only 100 inserts
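For a batch this small, the lighter-weight fix is usually to avoid
per-row commits rather than to touch the indexes. A sketch, with an
invented example table:

    CREATE TABLE measurements (sensor_id int, reading numeric);

    -- Either wrap the whole batch in a single transaction...
    BEGIN;
    INSERT INTO measurements (sensor_id, reading) VALUES (1, 20.5);
    -- ... the remaining inserts of the batch ...
    INSERT INTO measurements (sensor_id, reading) VALUES (2, 21.0);
    COMMIT;

    -- ...or load it with COPY (or psql's \copy), which is cheaper still:
    COPY measurements (sensor_id, reading) FROM '/tmp/batch.dat';

One hundred rows every five minutes is small enough that a single
transaction or COPY should make the index question moot.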
Which fsync method are you using?
Change it and see what happens.
Regards
Gaetano Mendola
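The "fsync method" here refers to PostgreSQL's wal_sync_method setting.
A hedged sketch of how to inspect it (which methods are available, and
which is fastest, depends on the platform and server version):

    -- Show the method the server is currently using:
    SHOW wal_sync_method;

    -- Typical values include fsync, fdatasync, open_sync and
    -- open_datasync. The setting is changed in postgresql.conf, e.g.
    --   wal_sync_method = fdatasync
    -- followed by a reload or restart of the server.

Newer releases also ship a pg_test_fsync utility that benchmarks the
available methods on the local disks.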