| hoo added a comment. |
I just thought about this a bit and we might want to split the dumping process up into two steps:
- Generating the dump via the maintenance script (sharded) and concatenating the shards
- Re-compression / format conversion (ttl <-> nt) / …
This way we could run 1) serially or with limited parallelism and 2) in parallel, as these steps are single threaded anyway.
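A minimal sketch of the split, assuming a generic shard-writer in place of the real maintenance script (all paths, shard counts, and the shard-writing command here are hypothetical, not the actual Wikidata tooling):

```shell
#!/bin/sh
# Hypothetical sketch of the proposed two-step dump pipeline.
# Paths and the shard-writing command are illustrative stand-ins.

SHARDS=4
OUT=/tmp/dumpdir
mkdir -p "$OUT"

# Step 1: generate the dump shards serially (or with limited
# parallelism), then concatenate them into one file.
i=0
while [ "$i" -lt "$SHARDS" ]; do
    # stand-in for invoking the sharded maintenance script
    printf 'shard %d data\n' "$i" > "$OUT/shard-$i.ttl"
    i=$((i + 1))
done
cat "$OUT"/shard-*.ttl > "$OUT/wikidata-all.ttl"

# Step 2: re-compress in parallel; each output file is
# independent, so these jobs can run concurrently.
gzip -k "$OUT/wikidata-all.ttl" &
bzip2 -k "$OUT/wikidata-all.ttl" &
wait
```

The point of the split is that step 1 is bounded by the (single-threaded) maintenance script, while the step 2 jobs have no dependencies on each other and can saturate spare cores.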
To: hoo
Cc: hoo, Smalyshev, ArielGlenn, Nandana, Lahi, Gq86, GoranSMilovanovic, Lunewa, QZanden, LawExplorer, gnosygnu, Wikidata-bugs, aude, Mbch331
_______________________________________________ Wikidata-bugs mailing list [email protected] https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs
