https://bugzilla.wikimedia.org/show_bug.cgi?id=46440

--- Comment #24 from [email protected] ---

2) Databases
Allow me to make a very bold statement here: JSON/XML are ill-suited for a
project of this nature.
Justification:
1) MediaWiki already provides many helper classes, such as the Sanitizer
class (which depends on the database) and the load balancer class (in case
one wants to distribute the comments data), along with a few other classes
that implement critically needed functionality. If one goes the XML/JSON
route, all of these would need to be reimplemented as well.

2) Maintenance will become an issue.

3) When migrating between hosts, one has to manually locate the XML/JSON
files and move them around. MySQL supports simple export and import of all
the tables.

I will be making a database-backed prototype as well in a couple of days.
Here is my idea: each article is stored as a wikitext string, so, as
illustrated by my UI, the selected text's start and end indexes are captured
along with the article's page name. These three attributes will be used as a
composite key into a "Comments" table.
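To make the idea concrete, here is a minimal sketch of such a table, using
SQLite in place of MySQL purely for illustration; the column names and the
composite key are my assumptions about the proposed schema, not the actual
implementation.

```python
import sqlite3

# Hypothetical "Comments" schema: the page name plus the selection's
# start/end offsets into the wikitext identify the annotated span.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Comments (
        page_name   TEXT    NOT NULL,  -- article the selection belongs to
        start_index INTEGER NOT NULL,  -- offset where the selection begins
        end_index   INTEGER NOT NULL,  -- offset where the selection ends
        comment     TEXT    NOT NULL,  -- the comment body itself
        PRIMARY KEY (page_name, start_index, end_index, comment)
    )
""")

# Attach a comment to characters 10..25 of a hypothetical article "Example".
conn.execute(
    "INSERT INTO Comments VALUES (?, ?, ?, ?)",
    ("Example", 10, 25, "Is this claim sourced?"),
)

# Look up every comment anchored to that selection.
rows = conn.execute(
    "SELECT comment FROM Comments "
    "WHERE page_name = ? AND start_index = ? AND end_index = ?",
    ("Example", 10, 25),
).fetchall()
print(rows)  # → [('Is this claim sourced?',)]
```

Keying on (page_name, start_index, end_index) lets several comments share
one selected span while still being retrieved with a single indexed lookup.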

When parsing this wikitext, the text between the start and end indexes will
be marked, and a link will be placed next to it to access the comments, as I
have shown.
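The marking step above could look roughly like the following sketch; the
span markup and the link target are placeholders of my own invention, not
the actual output of the prototype.

```python
# Illustrative only: wrap the commented span in a marker element and append
# a link to its comments. Marker syntax and link target are assumptions.
def mark_commented_span(wikitext, start, end, page_name):
    selected = wikitext[start:end]
    link = f"[[Talk:{page_name}#comments|comments]]"  # hypothetical target
    return (
        wikitext[:start]
        + '<span class="commented">' + selected + "</span> " + link
        + wikitext[end:]
    )

text = "The quick brown fox jumps over the lazy dog."
marked = mark_commented_span(text, 4, 9, "Example")
print(marked)
```

The original string is left untouched; the marker is injected only in the
parsed output, which matches the idea of storing plain offsets in the table.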

I will be implementing this functionality by Tuesday. For the sake of the
proposal I will not be using any of the MediaWiki classes for now, so you
can test drive it; based on your feedback I will make amendments to my
proposal. Please review my work so far :)

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
_______________________________________________
Wikibugs-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikibugs-l
