> On Mar 31, 2021, at 5:18 PM, Roy Smith <[email protected]> wrote:
> 
> I'm just playing around on tools-sgebastion-08.  I can dump the first 1
> million image names in about half a minute:
> 
>> tools.spi-tools-dev:xw-join$ time mysql  
>> --defaults-file=$HOME/replica.my.cnf -h 
>> commonswiki.web.db.svc.wikimedia.cloud commonswiki_p -N -e 'select img_name 
>> from image limit 10000000 ' > /dev/null
>> 
>> real    0m36.586s
>> user    0m9.678s
>> sys     0m1.324s
> 
> but if I try 10 million, it fails:
> 
>> tools.spi-tools-dev:xw-join$ time mysql  
>> --defaults-file=$HOME/replica.my.cnf -h 
>> commonswiki.web.db.svc.wikimedia.cloud commonswiki_p -N -e 'select img_name 
>> from image limit 100000000 ' > /dev/null
>> Killed
>> 
>> real    0m9.875s
>> user    0m1.417s
>> sys     0m1.561s
> 
> Is there some maximum result-set size configured by default?  The full image
> table on commons is about 70M rows, so extrapolating from the first example,
> it would take something like an hour to move all that data.
> 

That could be RAM limits on the bastion.
Actually, scratch that, I’ve confirmed your mysql process was killed by the
OOM killer on that bastion:
Mar 31 23:29:17 tools-sgebastion-08 kernel: [2860588.199138] mysql invoked 
oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, 
oom_score_adj=0
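
For what it's worth, the mysql command-line client buffers the entire result
set in client memory before printing anything, so it's the client on the
bastion that blows up, not the database server. A minimal sketch of a
workaround, using the client's --quick flag to print rows as they arrive
instead of caching them (same host and credentials as in your example;
redirect the output wherever you actually want it rather than /dev/null):

  mysql --defaults-file=$HOME/replica.my.cnf \
      -h commonswiki.web.db.svc.wikimedia.cloud commonswiki_p \
      --quick -N -e 'select img_name from image' > /dev/null

That should keep the client's memory use roughly flat no matter how many rows
come back; alternatively, pull the table in chunks keyed on img_name so no
single run has to hold much at once.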

-Brooke