WMDE-leszek closed this task as "Resolved".
WMDE-leszek moved this task from Review to Done on the Wikidata-Sprint-2018-01-31 board.
WMDE-leszek removed a project: Wikidata-Sprint-2018-02-14.
WMDE-leszek claimed this task.
WMDE-leszek added a comment.
I believe this concludes this investigation. The next step would be creating the actual implementation; one of the requirements would be to somehow "batch" the fetching of data for all lexemes on a page, to minimize the amount of DB querying.
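To illustrate the batching idea: instead of issuing one query per lexeme on the page, all lexeme IDs can be collected and fetched in a single `IN (...)` query. This is only a sketch; the table and column names (`lexeme_lemma`, `lexeme_id`, `language_code`, `lemma_text`) are hypothetical placeholders, not the eventual schema.

```python
import sqlite3

# Hypothetical schema for illustration only; the real table/column names
# would come from the eventual DB schema draft.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lexeme_lemma (
    lexeme_id INTEGER,
    language_code TEXT,
    lemma_text TEXT
);
INSERT INTO lexeme_lemma VALUES
    (1, 'en', 'colour'), (1, 'en-gb', 'colour'),
    (2, 'de', 'Farbe'),  (3, 'fr', 'couleur');
""")

def fetch_lemmas_batched(conn, lexeme_ids):
    """Fetch lemmas for all given lexemes in one query (one DB round trip)
    instead of one query per lexeme."""
    placeholders = ",".join("?" * len(lexeme_ids))
    rows = conn.execute(
        "SELECT lexeme_id, language_code, lemma_text "
        f"FROM lexeme_lemma WHERE lexeme_id IN ({placeholders})",
        lexeme_ids,
    ).fetchall()
    # Group the flat result rows back by lexeme ID.
    lemmas = {lid: [] for lid in lexeme_ids}
    for lexeme_id, language, text in rows:
        lemmas[lexeme_id].append((language, text))
    return lemmas

print(fetch_lemmas_batched(conn, [1, 2, 3]))
```

The point is simply that the query count stays constant regardless of how many lexemes appear on the page.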
Regarding the estimation of the table size: a very rough but safe estimate would IMO be that the lemma table would have about 10 × the number of all lexemes rows, and the item reference table (language and lexical category) would have exactly as many rows as there are existing lexemes.
More accurate and thorough estimates can be provided once we have an actual DB schema draft (I don't consider the proof-of-concept code to be that) and have discussed it with people with more DB expertise.
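For concreteness, the estimate above can be written down as a trivial calculation. The lexeme count used here is a made-up example figure, not a real statistic, and the 10-lemmas-per-lexeme factor is the rough upper bound from the comment.

```python
def estimate_table_rows(num_lexemes, lemmas_per_lexeme=10):
    """Rough upper-bound row counts, following the comment's assumptions:
    ~10 lemma rows per lexeme, exactly 1 item-reference row per lexeme."""
    return {
        "lemma_table": num_lexemes * lemmas_per_lexeme,
        "item_reference_table": num_lexemes,
    }

# Example with a hypothetical 50,000 lexemes:
print(estimate_table_rows(50_000))
# -> {'lemma_table': 500000, 'item_reference_table': 50000}
```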
Cc: thiemowmde, Lucas_Werkmeister_WMDE, gerritbot, Aklapper, WMDE-leszek, Giuliamocci, Adrian1985, Cpaulf30, Lahi, Gq86, Baloch007, Darkminds3113, Lordiis, Cinemantique, GoranSMilovanovic, Adik2382, Th3d3v1ls, Ramalepe, Liugev6, QZanden, LawExplorer, Lewizho99, Maathavan, Wikidata-bugs, aude, Darkdadaah, Mbch331
_______________________________________________
Wikidata-bugs mailing list
Wikidatafirstname.lastname@example.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs