[Wikidata-bugs] [Maniphest] [Commented On] T193645: [Epic] querying for lexicographical data
Smalyshev added a comment. Right now the full lexeme dump is just 2.1 MB compressed, so adding it to the main dump would not meaningfully increase the dump's size. However, without a separate dump you would always have to download the huge one, which is why I still support the separate-dump route.

TASK DETAIL: https://phabricator.wikimedia.org/T193645

Wikidata-bugs mailing list: Wikidata-bugs@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs
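Absent a dedicated lexeme dump, a consumer would have to download the full Wikidata JSON dump and filter the lexemes out locally. A minimal sketch of that workaround, assuming the one-entity-per-line layout of the bz2-compressed JSON dumps and the "L"-prefixed lexeme IDs (the file path is illustrative):

```python
import bz2
import json

def iter_lexemes(dump_path):
    """Stream lexeme entities out of a full Wikidata JSON dump.

    The dump is a bz2-compressed JSON array with one entity per line:
    the first line is '[', the last is ']', and every entity line but
    the last ends with a trailing comma. Lexeme IDs start with 'L'
    (e.g. 'L99'), while item IDs start with 'Q'.
    """
    with bz2.open(dump_path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip().rstrip(",")
            if not line or line in "[]":
                continue  # skip the array brackets and blank lines
            entity = json.loads(line)
            if entity.get("id", "").startswith("L"):
                yield entity

# Illustrative usage (the path is an assumption, not a real dump name):
# for lexeme in iter_lexemes("wikidata-all.json.bz2"):
#     print(lexeme["id"])
```

This streams the dump line by line, so memory use stays flat even though the uncompressed full dump is far larger than the 2.1 MB lexeme subset.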
Yurik added a comment. I think lexeme data dumps should be available independently of the rest of the Wikidata data. For example, https://sklonenie-slov.ru/ shows declensions for more than 30,000 Russian nouns, and I think such sites could benefit greatly from the community's work. P.S. I have begun a discussion with the site's authors, trying to persuade them to donate their database to Wikidata.