Please don’t be patronizing or insulting. I did read what you wrote. What used to be one fairly quick query will now be thousands of decoupled queries. I know of a number of cases where the decoupled query count can get into the hundreds of thousands. So what used to take one query and under two minutes is now going to hammer the database with ~300k queries, plus additional logic on the application side.
On Mon, Jun 3, 2019 at 4:50 PM Amir Sarabadani <[email protected]> wrote:

> I guarantee you, mediawiki core is everything but "over-normalized". We
> haven't done anything yet.
>
> In WMCS it's slower for reasons mentioned above, in production it's fast.
> Also, as I mentioned about "join decomposition", please read.
>
> Best regards
>
> On Mon, Jun 3, 2019, 21:58 John <[email protected]> wrote:
>
>> This is a prime example of over-normalization. We are normalizing and
>> making things slower.
>>
>> On Mon, Jun 3, 2019 at 3:38 PM Jesse Plamondon-Willard <[email protected]> wrote:
>>
>>> Sure! I just fetch the comment ID in the original query ([1]), and fetch
>>> the comment text with a separate query ([2]). You need *n*+1 queries
>>> based on the number of records, but the performance is adequate for my
>>> use cases.
>>>
>>> [1]: main query
>>> <https://tools.wmflabs.org/sql-optimizer?use=enwiki_p&sql=SELECT%0D%0A++++user_id%2C%0D%0A++++user_registration%2C%0D%0A++++DATE_FORMAT%28user_registration%2C+%22%25Y-%25m-%25d+%25H%3A%25i%22%29+AS+registration%2C%0D%0A++++user_editcount%2C%0D%0A++++GROUP_CONCAT%28ug_group+SEPARATOR+%22%2C+%22%29+AS+user_groups%2C%0D%0A++++ipb_by_text%2C%0D%0A++++ipb_reason_id%2C%0D%0A++++DATE_FORMAT%28ipb_timestamp%2C+%22%25Y-%25m-%25d+%25H%3A%25i%22%29+AS+ipb_timestamp%2C%0D%0A++++ipb_deleted%2C%0D%0A++++COALESCE%28DATE_FORMAT%28ipb_expiry%2C+%22%25Y-%25m-%25d+%25H%3A%25i%22%29%2C+ipb_expiry%29+AS+ipb_expiry%0D%0AFROM%0D%0A++++user%0D%0A++++LEFT+JOIN+user_groups+ON+user_id+%3D+ug_user%0D%0A++++LEFT+JOIN+ipblocks_ipindex+ON+user_id+%3D+ipb_user%0D%0AWHERE+user_name+%3D+%27Pathoschild%27%0D%0ALIMIT+1>
>>> [2]: secondary query to fetch comment text
>>> <https://tools.wmflabs.org/sql-optimizer?use=enwiki_p&sql=SELECT+comment_text%0D%0AFROM+comment%0D%0AWHERE+comment_id+%3D+1000%0D%0ALIMIT+1>
>>>
>>> On Mon, 3 Jun 2019 at 14:51, Huji Lee <[email protected]> wrote:
>>>
>>>> Out of curiosity, Jesse, can you share the query after that change, so
>>>> it is all stored in this thread for future reference?
_______________________________________________ Wikimedia Cloud Services mailing list [email protected] (formerly [email protected]) https://lists.wikimedia.org/mailman/listinfo/cloud
