hoo added a comment.

I just thought about this a bit and we might want to split the dumping process up into two steps:

  1. Generating the dump via the maintenance script (sharded) and concatenating the shards
  2. Re-compression / format conversion (ttl <-> nt) / …

This way we could run 1) serially or with limited parallelism and 2) in parallel, as those post-processing steps are single-threaded anyway.
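
A rough sketch of how that split could look (the script path, flags, worker counts and directories below are placeholders/assumptions for illustration, not the actual production setup):

  # Hypothetical sketch of the proposed two-step dump pipeline.
  import gzip
  import shutil
  import subprocess
  from concurrent.futures import ThreadPoolExecutor

  SHARDS = 8
  DUMP_SCRIPT = "extensions/Wikibase/repo/maintenance/dumpRdf.php"  # assumed path
  WORK_DIR = "/srv/dumps/wikidata"                                  # placeholder

  def dump_shard(shard: int) -> str:
      """Step 1: generate one shard via the maintenance script (assumed flags)."""
      out = f"{WORK_DIR}/shard-{shard}.ttl"
      with open(out, "wb") as fh:
          subprocess.run(
              ["php", DUMP_SCRIPT,
               "--shard", str(shard), "--sharding-factor", str(SHARDS)],
              stdout=fh, check=True)
      return out

  def concatenate(shard_files: list, combined: str) -> None:
      """Step 1 (cont.): concatenate the shard outputs into one file."""
      with open(combined, "wb") as dst:
          for path in shard_files:
              with open(path, "rb") as src:
                  shutil.copyfileobj(src, dst)

  def recompress(combined: str) -> None:
      """Step 2: one single-threaded post-processing job (here: gzip);
      format conversion etc. would be further, independent jobs."""
      with open(combined, "rb") as src, gzip.open(combined + ".gz", "wb") as dst:
          shutil.copyfileobj(src, dst)

  if __name__ == "__main__":
      # Step 1: serial / limited parallelism (small pool) for the dump itself.
      with ThreadPoolExecutor(max_workers=2) as pool:
          shard_files = list(pool.map(dump_shard, range(SHARDS)))
      combined = f"{WORK_DIR}/wikidata-all.ttl"
      concatenate(shard_files, combined)

      # Step 2: the post-processing jobs are independent and can fan out freely.
      with ThreadPoolExecutor(max_workers=4) as pool:
          pool.submit(recompress, combined)
          # further jobs, e.g. ttl -> nt conversion, would be submitted here

The point is just that once step 1 has produced the combined file, the step-2 jobs no longer touch the database and are independent of each other, so they can run with as much parallelism as the host allows.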


TASK DETAIL
https://phabricator.wikimedia.org/T206535
