Hi Daniel,

> Generally, I think it's bad programming practice to retrieve such big
> datasets if it is possible to do otherwise.

I definitely agree that it is bad practice, and in that respect I'm inclined towards doing batch loading as you suggest. However, there's some data aggregation I'll have to take into account, and since it involves testing for the presence of specific tables in a merge table set, I'd have to rewrite part of that logic.
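
For what it's worth, the batched version I have in mind would be roughly shaped like this (only a sketch: the table and column names are made up, and the aggregation logic itself is left out):

    // Rough shape of the batched approach (names are hypothetical).
    $batch = 10000;
    for ($offset = 0; ; $offset += $batch) {
        $res = mysql_query("SELECT id, payload FROM source_table
                            ORDER BY id LIMIT $offset, $batch");
        if (!$res || mysql_num_rows($res) == 0) {
            break;                      // no more rows, done
        }
        while ($row = mysql_fetch_assoc($res)) {
            // ... aggregate and insert into the proper merge table ...
        }
        mysql_free_result($res);        // keeps memory usage flat per batch
    }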

All doable, of course, and no big issue either, but it would be a lot faster for me if I could simply increase the memory limit...
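
For reference, by "simply increasing the memory limit" I mean one of the usual knobs, assuming the configuration allows touching them at all:

    ; in php.ini (or a per-directory override, where the host permits it):
    memory_limit = 512M

    // or at the top of the script itself:
    ini_set('memory_limit', '512M');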

Still, I very much hear you, and I know that what you suggest _is_ the proper approach, so I may end up doing that too. ;)

Also: there is another, perhaps more elegant (read: robust) option, namely a hybrid of the PHP script and mysqldump. PHP would work out the batches, and a (set of) command-line mysqldump call(s) would retrieve them; the generated batches could then be dumped directly into the proper merge tables. The only catch is that at present I LEFT JOIN data directly into the merge tables, so I'd first have to do a blunt dump of the left-hand side of the data, then of the right-hand side(s) (both to temp tables), and only afterwards LEFT JOIN them into the eventual merge tables. That is the main reason I hadn't chosen this route, since right now I can combine all of those steps in one query... :/
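
Roughly, that hybrid route would look something like this (again only a sketch: the database, table and column names are all made up, and the batch boundaries are assumed to be worked out in PHP beforehand):

    // 1. blunt dumps of the lhs and rhs batches into temp tables
    $where = escapeshellarg("id BETWEEN $from AND $to");
    system("mysqldump --where=$where mydb lhs_table | mysql mydb_tmp");
    system("mysqldump --where=$where mydb rhs_table | mysql mydb_tmp");

    // 2. afterwards, left join the temp tables into the eventual
    //    merge table (name made up)
    mysql_query("INSERT INTO merge_part_01
                 SELECT l.*, r.extra_col
                 FROM mydb_tmp.lhs_table AS l
                 LEFT JOIN mydb_tmp.rhs_table AS r ON r.id = l.id");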

If someone knows a clean way to increase the memory limit, I'd be happy to hear about it. If not, I'll get started on the rewrite...

Cheers,
Olafo