My biggest issue with this solution is that it goes against the grain of
having a dedicated, disposable public instance. Once clustering reaches
back to the author, you have just one copy of the data, so if a given
repository gets corrupted on the public instance, whether for technical
reasons or through intentional human action, the data is lost for good.
To preserve that intent, perhaps backpublishing and versioning on the
author after receiving a copy of the content from the public instance
would be the safest course of action. That still doesn't solve the
problem of having to open a door so that the public instance can send
data back to the author. To address this last issue, you could perhaps
use a scheduled task on the author that periodically pulls the data from
the public instance, rather than having the public push it to the
author.
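To make the pull-based idea concrete, here is a minimal sketch (all names are hypothetical, not Magnolia API): the author instance runs a scheduled task that periodically copies UGC entries from the public instance and versions them locally, so the public never has to initiate a connection toward the author.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: the author pulls UGC from the public instance
// on a schedule; the public machine never opens an inbound "hole".
public class UgcPullTask {

    // Stand-in for the public instance's UGC store: path -> content.
    private final Map<String, String> publicStore;
    // Author-side copy; each pulled change is appended as a new "version".
    private final Map<String, List<String>> authorVersions = new LinkedHashMap<>();

    public UgcPullTask(Map<String, String> publicStore) {
        this.publicStore = publicStore;
    }

    // One pull cycle: back-publish every public entry into the author-side
    // versioned store. Returns the number of new versions created.
    public int pullOnce() {
        int pulled = 0;
        for (Map.Entry<String, String> e : publicStore.entrySet()) {
            List<String> versions =
                authorVersions.computeIfAbsent(e.getKey(), k -> new ArrayList<>());
            // Only add a version when the content actually changed.
            if (versions.isEmpty()
                    || !versions.get(versions.size() - 1).equals(e.getValue())) {
                versions.add(e.getValue());
                pulled++;
            }
        }
        return pulled;
    }

    public List<String> versionsOf(String path) {
        return authorVersions.getOrDefault(path, List.of());
    }

    // Run the pull periodically on the author instance.
    public ScheduledExecutorService start(long periodSeconds) {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::pullOnce,
            periodSeconds, periodSeconds, TimeUnit.SECONDS);
        return scheduler;
    }
}
```

Because the author keeps a versioned history of what it pulled, a corrupted public repository only costs you the changes made since the last pull, not the whole data set.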

In the end, I can't see any clean solution... One way or another, to handle UGC correctly you are bound to break the "public isolation and disposability" feature.

Anyway, I'd like to say a few more words in favour of the clustering solution. Backpublishing forces you to open a "hole" in the author machine, and since the same code also runs on the public machine, even a small flaw in the "hole" code could end up exposing a "hole" to the net. And if it becomes a Magnolia standard, that "hole" sits in the code even when you are not using any UGC feature at all. Clustering makes the UGC repositories shared between public and author, but the standard public repositories remain completely disposable, and the public machine stays isolated from the author.

About losing the data: yes, the problem is there, and if something bad happens the data is gone (at least back to the last backup, if any). But how much does this data weigh on the average Magnolia site? What I have seen is not a Magnolia-driven community/Facebook made only of UGC; it's an "institutional" website where authors and editors write most of the text and public users register and add comments, votes, tags and the like. If by any chance the public machine is lost, the institutional part is recovered in no time, and the public part loses only the content created since the last backup. For the average site (at least those I have seen) this seems a better failure mode than having both the public and author machines "killed". I must also admit that both the "hole" and the "repository death" are fortunately quite rare, but repository corruption still seems more frequent than holes that let someone hack the website.

Even if there is no perfect solution to the UGC problem, I thought we should at least agree on some kind of interface or common base that every UGC handling solution should implement, so it would be easier to compare them and choose the best one for each case. Having that as a Magnolia standard would be better than nothing anyway.
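Just to show the kind of contract I have in mind (purely a sketch; none of these names exist in Magnolia): every handling strategy, whether backpublishing, clustering, or scheduled pull, would implement the same interface, so implementations can be swapped and compared per site.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical common contract for UGC handling strategies.
// Not a Magnolia API; names are illustrative only.
interface UgcHandler {
    // Store a piece of user-generated content arriving on the public instance.
    void store(String path, String content);

    // List the paths currently held by this handler.
    List<String> listPaths();

    // Read back a stored piece of content, or null if absent.
    String read(String path);

    // Describe how this strategy protects the data, e.g.
    // "versioned on author", "shared clustered repository",
    // "public-only, backup-dependent".
    String durabilityPolicy();
}

// Trivial in-memory reference implementation, useful as a baseline
// when comparing real strategies.
class InMemoryUgcHandler implements UgcHandler {
    private final Map<String, String> store = new LinkedHashMap<>();

    public void store(String path, String content) { store.put(path, content); }
    public List<String> listPaths() { return new ArrayList<>(store.keySet()); }
    public String read(String path) { return store.get(path); }
    public String durabilityPolicy() { return "public-only, backup-dependent"; }
}
```

With a shared contract like this, the `durabilityPolicy` description alone would already make the tradeoffs of each approach explicit when choosing one for a given site.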

Regards, Danilo.

----------------------------------------------------------------
For list details see
http://www.magnolia-cms.com/home/community/mailing-lists.html
To unsubscribe, E-mail to: <[email protected]>
----------------------------------------------------------------
