----- Original message -----
> On Fri, Jan 7, 2011 at 17:59, Banlu Kemiyatorn <[email protected]> wrote:
> It does allow every page to look the same. On the other hand, messing
> with a ton of markup is painful, and requires you to know exactly which
> templates exist, what properties exist, etc.   I personally don't care
> for either; that is one of the reasons why I don't edit public wikis.

And you cannot automatically have that with a dedicated tool either; you cannot do 
that without a plan. You will need a plan for backups, corrections, and ways for 
others to retrieve your data and the source code of the web software in case 
your system goes down. For both systems, what you need is documentation.


> 
> What about removing of some properties from the template, or adding new
> ones? It can be handled, sure, but why bother with the pain?

I don't see it as a pain. For many people it is even easier than messing 
with SQL. What is so hard about adding or removing properties from a template, 
really? How is that harder than granting rights, checking the docs that define the 
tables, validating the values against possible web code injection, etc.?
 
> I will always consider using a wiki for something private (documentation,
> presentation of my work, …). On the other hand, for something like a
> software index, a relational database is ideal, isn't it? Well, if it's
> ideal, why not make use of it?
> 
> How is it ideal? Let's take a look. Each piece of software has some
> properties such as title, image and homepage, it has several releases, it
> has one or more source code repositories, it has requirements, it has
> build instructions, … I think it's a textbook example of something to use
> SQL-based databases for.

I am not against SQL in any way, but I don't believe there is an ideal 
tool for anything. A database can do the job without a wiki, and it can even 
support the wiki process in many ways. The problem is the maintenance 
plan I described. But if you have the time and energy to make it work 
better, then just do it.
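For illustration, here is a minimal sketch of the relational layout Ivan describes, using Python's built-in sqlite3. The table and column names are my own assumptions, not any agreed-upon schema, and a real index would of course also cover repositories, requirements, and build instructions:

```python
import sqlite3

# In-memory database for the sketch; a real index would use a file
# or a server RDBMS.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE software (
    id       INTEGER PRIMARY KEY,
    title    TEXT NOT NULL,
    homepage TEXT
);
-- One-to-many: each piece of software has several releases.
CREATE TABLE release (
    id          INTEGER PRIMARY KEY,
    software_id INTEGER NOT NULL REFERENCES software(id),
    version     TEXT NOT NULL
);
""")

con.execute("INSERT INTO software (title, homepage) VALUES (?, ?)",
            ("GNUstep", "http://www.gnustep.org"))
con.execute("INSERT INTO release (software_id, version) VALUES (1, '0.29.0')")

row = con.execute("""
    SELECT s.title, r.version
    FROM software s JOIN release r ON r.software_id = s.id
""").fetchone()
print(row)  # -> ('GNUstep', '0.29.0')
```

Either way, the schema itself is the easy part; the backup and review plan discussed above is needed regardless of whether the data lives in tables or in templates.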

> > And if it really is the case, Wiki bot would easily do the job.
> > 
> 
> Relying on a bot to do cleanup after humans is not nice, and neither is
> the infallibility of a human reviewer guaranteed. Especially if the original
> submitter did not fill out all the data, or provided extra data; where is
> the bot or the reviewer supposed to get that data from, or where is the
> bot or the reviewer supposed to place it if it doesn't fit into a
> template?

In comments, and then post an alert that it needs review. And where would you place 
that with SQL? Just let them refill the form, sure. It's good in its way, but I 
don't see it as better.

> Having specific, precisely defined fields where you enter data is great.
> Having a consistent method of presenting that data is also great. The ability
> to easily visually reorganize data, or analyze it, is also great.

And also a great deal less flexible.
 
> While you can do that by parsing the wiki using a bot.. ehm, why not
> simply organize the data properly in the first place?
> -- 
> Regards,
> 
> Ivan Vučica

It is less flexible, and it needs a more elaborate security plan for the community to 
access the host. And if someone accidentally breaks some records, you will need a 
complex revision-control method.
_______________________________________________
Discuss-gnustep mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/discuss-gnustep