Envlh added a comment.
Thank you for your replies. A few comments and questions:
- While I understand your point, I fear that isolating some data from the main dump is only a temporary fix for its size growth. Sooner or later it will weigh 1 TB (even compressed), and we will have to deal with that, as producers or as consumers.
- Will the lexemes dump contain the P namespace, or will consumers have to additionally download the other, complete dump to get the data about properties?
- Will there be one dump per namespace (one for P, one for Q, one for L)? I would be fine with a lexemes dump containing only the L namespace (even if it is much less practical), but I prefer to ask these questions now to help you anticipate other use cases.
TASK DETAIL
https://phabricator.wikimedia.org/T220883
