[Foundation-l] Fwd: Copyright problems of images from India

2011-05-10 Thread Shiju Alex
Dear All,

I am forwarding the mail below on behalf of a Malayalam wikipedian who is
very active on Wikimedia Commons.

Of late it has become very difficult for many Wikimedians from India to
contribute to Wikimedia Commons, especially when uploading historical
images that are in the public domain. We are facing a lot of issues (and
often unnecessary controversies) with historic PD images, images of wall
paintings and statues, and so on. Please see the mail below, in which
Sreejith cites various examples.

It is almost impossible for uploaders from India to show proof for
century-old images of Hindu gods and goddesses. The current policies of
Commons do not permit many of the PD images from India, citing all sorts
of policies that may be relevant only in the Western world. With these
kinds of policies we are going to have serious issues when we try to
organize GLAM-type events.

But I also do not know the solution to this issue, so I am requesting a
constructive discussion.


Shiju Alex



-- Forwarded message --
From: Sreejith K. sreejithk2...@gmail.com
Date: Thu, May 5, 2011 at 12:03 PM
Subject: Copyright problems of images from India
To: Shiju Alex shijualexonl...@gmail.com


Shiju,

As you might already be aware, we are having trouble keeping historical
images about India on Wikimedia Commons. This pertains mostly to images
of Hindu gods and of people who died before 1947.

Please see the examples below:

   - File:Narayana Guru.jpg
   (http://commons.wikimedia.org/wiki/File:Narayana_Guru.jpg) - This is an
   image of Sree Narayana Guru (http://en.wikipedia.org/wiki/Narayana_Guru),
   a Hindu saint and social reformer who is even considered a god by certain
   castes in Kerala. This image has been tagged as having no source.
   Narayana Guru died in 1928, and considering the conditions India was in
   during that period and before, it is very difficult to find an image
   source online. Most active Wikipedians have no information about how old
   the image is or where a source for it can be found. Any photograph
   published before 1941 in India is in the public domain as per the Indian
   copyright act. Common sense says that this image meets this criterion,
   because the person died long before 1941, but we still need proof of the
   first publication date. Deleting this image on the grounds that no
   source could be found will only reduce the informative value of all the
   articles in which it is included.
   - File:Aravana.JPG: This image has already been deleted, but you can see
   the amount of discussion that went on before deleting it. See
   Commons:Deletion requests/File:Aravana.JPG
   (http://commons.wikimedia.org/wiki/Commons:Deletion_requests/File:Aravana.JPG).
   (An almost identical image can be found at
   http://www.flickr.com/photos/anoopp/5706721852/in/photostream/.) This
   image was put up for deletion because it contained an image of Swami
   Ayyappan (http://en.wikipedia.org/wiki/Swami_Ayyappan). Ayyappan, a
   popular god of Kerala, has his image circulated everywhere on the planet
   with no proof of copyright. It makes sense to believe that this image is
   not eligible for copyright because Hindu deities are common property,
   but again, Commons needs proof that the image is in the public domain.
   This is the same case with all Hindu gods and goddesses: the images can
   only be kept on Commons if the uploader can provide proof that they are
   in the public domain.
   - File:Kottarathil sankunni.jpg
   (http://commons.wikimedia.org/wiki/File:Kottarathil_sankunni.jpg): This
   is a picture of Kottarathil Sankunni
   (http://en.wikipedia.org/wiki/Kottarathil_Sankunni), the author of the
   famous book Aithihyamala (http://en.wikipedia.org/wiki/Aithihyamala).
   Kottarathil Sankunni died in 1937, so it makes sense to believe that
   this image was created on or before 1937 and thus falls in the public
   domain. But some people on Commons refuse to believe that and are asking
   for proof. Now it becomes the responsibility of the uploader to show
   proof that this image was published more than 60 years ago. The editor
   who nominated the image for deletion is on the safe side, because it is
   not his responsibility to prove that the image is a copyright violation.
   Long story short: anyone can nominate any image as a copyright
   violation, and it becomes the uploader's responsibility to prove that it
   is not. The deletion nomination need not even be accompanied by a reason
   for disbelief.
   - File:Anoop Menon.jpg
   (http://commons.wikimedia.org/wiki/File:Anoop_Menon.jpg): This is a
   picture of Anoop Menon (http://en.wikipedia.org/wiki/Anoop_Menon), a
   popular actor from Kerala. A discussion is going on about the uploader's
   credibility, namely whether he is the original photographer of this
   image. Please see File talk:Anoop Menon.jpg
   (http://commons.wikimedia.org/wiki/File_talk:Anoop_Menon.jpg).
   The reason for doubting the uploader is simple. This image has

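The publication-date rule Sreejith cites (photographs published in India before 1941 are in the public domain, and otherwise publication more than 60 years ago must be shown) can be sketched as a simple check. This is only an illustration of the rule as quoted in the mail above; the function name and structure are my own:

```python
from datetime import date

def is_public_domain_india(publication_year, today=None):
    """Sketch of the rule quoted above: an Indian photograph is in the
    public domain if it was published before 1941, or if more than 60
    years have passed since its first publication."""
    today = today or date.today()
    return publication_year < 1941 or (today.year - publication_year) > 60

# Narayana Guru died in 1928, so any portrait published in his lifetime
# satisfies the pre-1941 condition.
print(is_public_domain_india(1928))  # True
```

The hard part, as the mail stresses, is not the arithmetic but proving the first publication date at all.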
[Foundation-l] Fwd: [Wikimediaindia-l] PDF rendering of Indian language wiki pages

2011-02-14 Thread Shiju Alex
Dear All,

The following mail is about a PDF rendering solution for non-Latin scripts
which is under development. One of the main reasons for developing this
tool is that the current *Download as PDF* option (available in the wiki
sidebar) is useless for Indic scripts (the current solution may not be
useful for any non-Latin script).

I am sharing this here since the PDF rendering solution below is
applicable to all scripts (not only Indic/non-Latin scripts).
Please mail your feedback/comments to santhosh.thottin...@gmail.com.

The solution is not limited to wikis, but wiki users will surely be among
its major consumers.

Please see the mail thread below for details.

Thanks
Shiju


-- Forwarded message --
From: Santhosh Thottingal santhosh.thottin...@gmail.com
Date: Mon, Feb 14, 2011 at 10:36 PM
Subject: [Wikimediaindia-l] PDF rendering of Indian language wiki pages
To: Discussion list on Indian language projects of Wikimedia. 
wikimediaindi...@lists.wikimedia.org


Hi,
We are working on a complex-script PDF rendering library named PyPDFLib
(https://savannah.nongnu.org/projects/pypdflib/). One of its test cases
(or use cases) is rendering a wiki page in a complex script (any Indian
language) to PDF. Currently the PDF export feature is not available (not
working) for Indian-language wiki projects because of technical
limitations of the Python ReportLab library.

Just wanted to give an early preview of this software library through an
online interface: http://silpa.smc.org.in/Render
You can try it with a Wikipedia page in your language and verify the
generated PDF.
You can also access it using this URL:
http://silpa.smc.org.in/Render?wiki=http://ta.wikipedia.org/wiki/இலங்கை
(replace that wiki URL with other page addresses too - any language, not
limited to Indian languages)
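Since the page title in that URL is in a non-Latin script, building such a request from a program requires percent-encoding the title. A minimal sketch in Python (the endpoint and `wiki` parameter come from the mail; the helper function itself is illustrative):

```python
from urllib.parse import quote

def render_url(wiki_page_url):
    """Build a Render request URL, percent-encoding any non-ASCII
    characters (UTF-8) while leaving the URL structure intact."""
    return "http://silpa.smc.org.in/Render?wiki=" + quote(wiki_page_url, safe=":/")

url = render_url("http://ta.wikipedia.org/wiki/இலங்கை")
print(url)  # the Tamil title becomes %E0%AE%87... (plain ASCII)
```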

There are a lot of items not yet implemented, but your feedback on the
current version is requested.
The library uses Pango for text rendering and Cairo for graphics and PDF
features.

PS: Don't be surprised if you get a 500 error page for a random page you
try. Just try another wiki page ;)

Thanks
Santhosh Thottingal
http://thottingal.in

___
Wikimediaindia-l mailing list
wikimediaindi...@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikimediaindia-l
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] External links to PHP scripts

2010-10-11 Thread Alex
On 10/11/2010 12:48 PM, wjhon...@aol.com wrote:
 I stumbled upon a link on the Talk Page of Henry Fonda (which I removed)
 which directs the reader to a page that contains a PHP script.

 That idea disturbs me. I think it should be against policy, but I'm not
 sure it is.

 Do we have a policy that forbids, or at least discourages, links to pages
 with embedded scripts?
 

Such a policy would prohibit linking to most of the internet. PHP is a
very common language for building websites; it's what MediaWiki is
written in. PHP is executed server-side, so it's not inherently more
dangerous than plain HTML, at least not significantly so to the extent
that we should avoid linking to a page on the sole basis that it uses
PHP. It's files that are executed client-side, outside of the browser,
such as exe and zip files, that we need to be concerned about.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Increasing the number of new accounts who actually edit

2010-09-22 Thread Alex
On 9/22/2010 3:55 AM, Lennart Guldbrandsson wrote:
 Hello,
 
 Did you know that less than a third of the users who create an account on
 English Wikipedia make even *one* edit afterwards? Two-thirds of all new
 accounts never edit! Interestingly, this percentage varies greatly from
 language version to language version.
 
 Now, the question is not: what can we do about it? We know plenty of
 things that we *could* do. The question is this: what are the easiest
 levers to push that increase the numbers?
 
 We have a couple of ideas (they are presented on the Outreach wiki, at
 http://outreach.wikimedia.org/wiki/Account_Creation_Improvement_Project),
 but we need your help! Here are three easy things that you can do:
 
 1. Offer ideas
 2. Sign up to help with the project
 3. Spread the word. Do you know anybody who might be interested in
 helping out? Pass this message on.
 

Personally, I think you're starting too far back on the issue. We have
plenty of people who create accounts, edit, then stop almost
immediately. Among people who do make an edit, the enwiki retention rate
a few months later is 1-2%. I think we should try to improve that first.
One concern on the English Wikipedia is the rather impersonal way that
new users are handled - everything is bots and template messages. Simply
increasing the volume of new accounts will only exacerbate that problem.

Just recently, for a completely unrelated discussion, I compiled some
statistics for the English Wikipedia (though this could be run for any
project) about users' first edits and editor retention. The full results
are at [1]; the summary is:

* Users who create an article are much more likely to leave the project
if their article is deleted: 1 in ~160 will stay if their article is
deleted, while 1 in ~22 will stay otherwise.
* Our main source of regular users is not users who start by creating an
article.
* Users who start by editing an existing page outnumber article creators
by 3:1.
* Users who start by editing an existing article are far less likely to
have their first edits deleted (and therefore are far more likely to
stay).
* Of the users analyzed, we got fewer than 200 regular users from those
who created articles (1.3% retention); we got more than 900 from users
who edited existing pages (2.5% retention).
* A significant number of regular users (24%) get their start outside of
mainspace.

[1] http://en.wikipedia.org/wiki/User:Mr.Z-man/newusers
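As a rough cross-check, the percentages in the summary follow from the raw counts. A quick sketch (the totals below are back-calculated from the quoted figures, so they are approximate and purely illustrative):

```python
def retention(stayed, total):
    """Percentage of new users who went on to become regular users."""
    return 100.0 * stayed / total

# Roughly 200 regulars out of ~15,400 article creators, and roughly
# 900 out of ~36,000 users who started by editing existing pages.
print(round(retention(200, 15400), 1))  # 1.3
print(round(retention(900, 36000), 1))  # 2.5
```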

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Is Google translation is good for Wikipedias?

2010-07-28 Thread Shiju Alex
 content on wikipedia. We'd rather stay
 small and hand-craft than allow an experimental tool and unskilled
 paid translators creating a big mess.


 Thanks

 Ragib (User:Ragib on en and bn)

 --
 Ragib Hasan, Ph.D
 NSF Computing Innovation Fellow and
 Assistant Research Scientist

 Dept of Computer Science
 Johns Hopkins University
 3400 N Charles Street
 Baltimore, MD 21218

 Website:
 http://www.ragibhasan.com








Re: [Foundation-l] Push translation

2010-07-27 Thread Shiju Alex

 Google Translator Toolkit is particularly problematic because it
 messes up the existing article formatting (one example, it messes up
 internal links by putting punctuation marks before double brackets
 when they should be after) and it includes incompatible formatting
 such as redlinked templates. It also doesn't help that many editors
 don't stick around to fix their articles afterwards.



Yes, this is one of the main issues with the *Google Translator Toolkit*
(GTTK). Ravi raised many points about GTTK in his presentation at
Wikimania: http://docs.google.com/present/view?id=ddpg3qwc_279ghm7kbhs

Not all wikipedias work and create articles in the same way the English
Wikipedia or some other big wikipedias do. Many of the active wiki
communities do not like word-for-word translations of English Wikipedia
articles. But that doesn't mean they won't refer to the English Wikipedia
while developing an article; the English Wikipedia article is their first
point of reference most of the time. The problem starts when someone
starts forcing English Wikipedia articles into a language wikipedia. Here
it is Google, using the Google Translator Toolkit (GTTK).

Most of the active wiki communities (especially non-Latin wikis) are not
interested in word-for-word translations of English Wikipedia articles.
Many of them are also not willing to go through the big articles (with
lots of issues) created using GTTK and rewrite them entirely to bring
them up to wiki style. They would rather start the article from scratch.

One of the main issues is that Google and the Google translators do not
communicate with the wiki community of each language before they start
the project in a wikipedia. For example, the Tamil wikipedia community
came to know about Google's efforts only six months after the project
started on that wiki.

Wiki communities like the organic growth of the wikipedia articles in
their wiki. Why did the English Wikipedia not start building its articles
from the *Encyclopaedia Britannica 1911* edition, which was available in
the public domain?

Personally, I am not against GTTK or against Google. At least this effort
is good for the online presence of a language (even if some argue that it
is not good for wikipedia). But this effort needs to be executed in a
different way so that the wikipedia of that language benefits from it.
Some of the solutions that come to my mind:

   1. Ban Google's project, as the Bengali wiki community did (a bad
   solution, and I am personally against it).
   2. Ask Google to engage the wiki community (as happened in the case of
   Tamil) to find a working solution. But if there is no active wiki
   community, what can Google do? Does that mean Google can continue with
   the project as they want? (A very difficult solution if there is no
   active wiki community.)
   3. Find some other solution. For example, is it possible to upload the
   translated articles into a separate namespace, for example Google:, and
   let the community decide what gets taken to the main/article namespace?
   4. .

If a solution is not found soon, Google's effort is going to create
problems in many language wikipedias. The worst result of this effort
would be a rift between the wiki community and the Google translators
(speakers of the same language) :(

Shiju

On Tue, Jul 27, 2010 at 8:39 AM, Mark Williamson node...@gmail.com wrote:

 Shiju Alex,

 Stevertigo is just one en.wikipedian.

 As far as using exact copies goes, I don't know about the policy at
 your home wiki, but in many Wikipedias this sort of back-and-forth
 translation and trading and sharing of articles has been going on
 since day one, not just with English but with other languages as well.
 If I see a good article on any Wikipedia in a language I understand
 that is lacking in another, I'll happily translate it. I have never
 seen this cause problems provided I use proper spelling and grammar
 and do not use templates or images that leave red links.

 I started out at en.wp in 2001, so I don't think it's unreasonable to
 call myself an English Wikipedian (although I'd prefer to think of
 myself as an international Wikipedian, with lots of edits at wikis
 such as Serbo-Croatian, Spanish, Navajo, Haitian and Moldovan). I am
 not at all in favor of pushing any sort of articles on anybody, if a
 community discusses and reaches consensus to disallow translations
 (even ones made by humans, including professionals), that is
 absolutely their right, although I don't think it's wise to disallow
 people from using material from other Wikipedias.

 Google Translator Toolkit is particularly problematic because it
 messes up the existing article formatting (one example, it messes up
 internal links by putting punctuation marks before double brackets
 when they should be after) and it includes incompatible formatting
 such as redlinked templates. It also doesn't help that many

Re: [Foundation-l] Push translation

2010-07-27 Thread Shiju Alex
On Tue, Jul 27, 2010 at 11:42 AM, Mark Williamson node...@gmail.com wrote:


 4) Include a list of most needed articles for people to create, rather
 than random articles that will be of little use to local readers. Some
 articles, such as those on local topics, have the added benefit of
 encouraging more edits and community participation since they tend to
 generate more interest from speakers of a language in my experience.


This list will come automatically if Google engages the wiki community in
its project for a particular language. But for some wikipedias there is
no active wiki community, so how can this issue be solved?

Selection of the articles for translation is an important part of this
project. http://en.wikipedia.org/wiki/Wikipedia:Vital_articles is
definitely a good choice for this. A community might also be interested
in some other important articles (important with respect to the society,
culture, and geography of the speakers of that language). So engaging the
local wiki community is most important.

 ~Shiju


Re: [Foundation-l] Push translation

2010-07-26 Thread Shiju Alex

 really? It's a) not
 particularly well-written, mostly and b) referenced overwhelmingly to

 English-language sources, most of which are, you guessed it.. Western in

 nature.


Very much true. Now English Wikipedians want someone to translate and use
exact copies of en:wp articles in all other language wikipedias. And they
have the support of Google for that.

On Tue, Jul 27, 2010 at 5:52 AM, Oliver Keyes scire.fac...@gmail.comwrote:

 The idea is that most of en.wp's articles are well-enough written, and
 written in accord with NPOV to a sufficient degree to overcome any
 such criticism of 'imperial encyclopedism.' - really? It's a) not
 particularly well-written, mostly and b) referenced overwhelmingly to
 English-language sources, most of which are, you guessed it.. Western in
 nature.

 On Mon, Jul 26, 2010 at 3:43 AM, stevertigo stv...@gmail.com wrote:

  Mark Williamson node...@gmail.com wrote:
   I would like to add to this that I think the worst part of this idea
   is the assumption that other languages should take articles from
   en.wp.
 
  The idea is that most of en.wp's articles are well-enough written, and
  written in accord with NPOV to a sufficient degree to overcome any
  such criticism of 'imperial encyclopedism.'
 
  Mark Williamson node...@gmail.com wrote:
   Nobody's arguing here that language and culture have no relationship.
   What I'm saying is that language does not equal culture. Many people
   speak French who are not part of the culture of France, for example
   the cities of Libreville and Abidjan in Africa.
 
  Africa is an unusual case given that it was so linguistically diverse
  to begin with, and it's even more so in the post-colonial era, when
  Arabic, French, English, and Dutch remain prominent marks of
  imperialistic influence.
 
  Ray Saintonge sainto...@telus.net wrote:
   This is well suited for the dustbin of terrible ideas.  It ranks right
   up there with the notion that the European colonization of Africa was
   for the sole purpose of civilizing the savages.
 
  This is the 'encyclopedic imperialism' counterargument. I thought I'd
  throw it out there. As Bendt noted above, Google has already been
  working on it for two years and has had both success and failure. It
  bears mentioning that their tools have been improving quite steadily.
  A simple test such as /English - Arabic - English/ will show that.
 
  Note that colonialism isn't the issue. It still remains, for example, a
  high priority to teach English in Africa, for the simple reason that
  language is almost entirely a tool for communication, and English is
  quite good for that purpose. It's notable that the smaller colonial
  powers such as the French were never going to be successful at
  linguistic imperialism in Africa, for the simple reason that French has
  not actually been the lingua franca for a long time now.
 
   Key to the growth of Wikipedias in minority languages is respect for
 the
   cultures that they encompass, not flooding them with the First-World
   Point of View.  What might be a Neutral Point of View on the English
   Wikipedia is limited by the contributions of English writers.  Those
 who
   do not understand English may arrive at a different neutrality.  We
 have
   not yet arrived at a Metapedia that would synthesize a single
 neutrality
   from all projects.
 
  I strongly disagree. Neutral point of view has worked on en.wp because
  it's a universalist concept. The cases where other language wikis
  reject English content appear to stem from POV, and thus a violation of
  NPOV, not because, as you seem to suggest, the POV in such countries
  must be considered NPOV.
 
  Casey Brown li...@caseybrown.org wrote:
   I'm surprised to hear that coming from someone who I thought to be a
   student of languages.  I think you might want to read an
   article from today's Wall Street Journal, about how language
   influences culture (and, one would extrapolate, Wikipedia articles).
 
  I had just a few days ago read Boroditsky's piece in Edge, and it
  covers a lot of interesting little bits of evidence. As Mark was
  saying, linguistic relativity (or the Sapir-Whorf hypothesis) has been
  around for most of a century, and its wider conjectures were strongly
  contradicted by Chomsky et al. Yes there is compelling evidence that
  language does channel certain kinds of thought, but this should not
  be overstated. Like in other sciences, linguistics can sometimes make
  the mistake of making *qualitative judgments based on a field of
  *quantitative evidence.  This was essentially important back in the
  40s and 50s when people were still putting down certain
  quasi-scientific conjectures from the late 1800s.
 
  Still there are cultures which claim their languages to be superior in
  certain ways simply because they are more sonorous or emotive, or
  otherwise expressive, and that's the essential paradigm that some
  linguists are working in.
 
  -SC
 
  

[Foundation-l] Is Google translation is good for Wikipedias?

2010-07-25 Thread Shiju Alex
Hello All,

Recently there have been a lot of discussions (on this list as well)
regarding the translation project by Google for some of the big language
wikipedias. The Foundation also seems to have approved Google's efforts.
But I am not sure whether anyone is interested in consulting the
respective language communities to learn their views.

As far as I know, only Tamil, Bengali, and Swahili Wikipedians have
raised concerns about Google's project. But does this mean that other
communities are happy about Google's efforts? If there is no active
community in a wikipedia, how can we expect a response from that
community? And if there is no response from a community, does that mean
Google can hire native speakers and use machine translation to create
articles for that wikipedia?

Now let us go back to a basic question: does WMF require a wiki community
in order to create a wikipedia in a language? Or can it use the services
of companies like Google to create wikipedias in any number of languages?

One of the main points raised by the supporters of Google translation is
that Google's project is good *for the online version of the language*.
That might be true. But nobody has cared to verify whether it is good for
Wikipedia.

As pointed out by Ravi in his presentation at Wikimania
(http://docs.google.com/present/view?id=ddpg3qwc_279ghm7kbhs), the Google
translation of wikipedia articles:

   - will affect the organic growth of a Wikipedia article
   - will create copies of English wikipedia articles in local wikis
   - is against some of the basic philosophies of wikipedia

People outside the wiki will definitely benefit from this tool if a
Google translation tool is developed for each language. I saw a working
example of this in Poland during Wikimania, when some people who are not
good at English used Google's translator to communicate with us. :)

Apart from the points raised by Ravi in his presentation, this will
affect community growth. If there is no active wiki community, how can we
expect anyone to look after all the junk articles uploaded to the wiki
every day? And when all the important article links have already turned
blue, how can we expect any future potential editors? So, in my view,
Google's project is killing the growth of active wiki communities.

Of course, the Tamil Wikipedia is trying to use the Google project
effectively. But only Tamil is doing that, since they have an active wiki
community. *Many wiki communities are not even aware that such a project
is happening on their wiki.*

I do not want to point out specific language wikipedias to prove my
point. But visit the wikipedias (especially those *that use non-Latin
scripts*) to see the status of the Google translation project. Loads of
junk articles are uploaded every day, and most of the time the only edits
to these articles are by their creator and the interlanguage wiki bots.

This effort will definitely affect community growth. Kindly see the
points raised by a Swahili Wikipedian:
http://muddybtz.blog.com/2010/07/16/what-happened-on-the-google-challenge-the-swahili-wikipedia/.
Many Swahili users (and users of other languages) now expect a laptop or
some other monetary benefit to write in their wikipedia. That affects
community growth.

So what is the solution? Can we take lessons from the
Tamil/Bengali/Swahili wikipedias and find methods to use this service
effectively, or should we continue with the current article creation
process?

One last question: is this tool being developed by Google an open source
tool? If not, we will need to answer the many questions that may follow.

Regards

Shiju Alex
http://en.wikipedia.org/wiki/User:Shijualex


Re: [Foundation-l] Gmail - List messages flagged as spam

2010-06-17 Thread Shiju Alex
Yes, this is true. Many messages have been marked as spam over the past
few days.

I also found this message in my spam folder.


On Thu, Jun 17, 2010 at 10:51 PM, Daniel ~ Leinad danny.lei...@gmail.comwrote:

 2010/6/17 Ryan Lomonaco wiki.ral...@gmail.com:
  A housekeeping note: Gmail has been marking some list messages as spam
 for
  the past five days or so.  It sounds like this is affecting other
 Wikimedia

 I've noticed the same problem with Gmail ;/

 In last days I have to check very carefully spam folder.

 --
 Leinad




[Foundation-l] Creating articles in small wikipedias based on user requirement

2010-06-11 Thread Shiju Alex
Recently I had a discussion with one of my fellow Malayalam wikipedians (
http://ml.wikipedia.org) about the creation of new articles in small
wikipedias like ours. He is one of the few users who is keen on creating new
articles *based on the requirements of our readers*. (Of course, we have many
people who only read our wiki.)

During discussion he raised this interesting point:

A feature is needed in the MediaWiki software that enables us to see a
list of the keywords most frequently used by readers when searching for
non-existent articles. If we had such a list, users like him could
concentrate on creating articles for those keywords.

Of course, I know that this feature may not be helpful for big wikis like
English. But for small wikis (especially small non-Latin-script wikis),
it would be a great help. It is almost like *creating wiki articles based
on user requirements*.
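Such a list could be produced by aggregating the search log offline. Below is a minimal sketch in Python, assuming a hypothetical tab-separated log of queries with a matched/unmatched flag; this is not MediaWiki's actual log schema, only an illustration of the aggregation:

```python
from collections import Counter

def top_missing_searches(log_lines, limit=10):
    """Count search queries that found no existing article.

    Each log line is assumed (hypothetically) to be tab-separated:
        "<query>\t<matched: 1 or 0>"
    """
    misses = Counter()
    for line in log_lines:
        query, matched = line.rsplit("\t", 1)
        if matched.strip() == "0":       # 0 = no article matched the search
            misses[query] += 1
    return misses.most_common(limit)

log = [
    "Kumarananda\t0",
    "Kochi Metro\t1",
    "Kumarananda\t0",
    "Parayipetta Panthirukulam\t0",
]
print(top_missing_searches(log, 2))
# [('Kumarananda', 2), ('Parayipetta Panthirukulam', 1)]
```

The output is exactly the wishlist described above: the most-wanted missing articles, ranked by demand.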


I would like to know your opinion regarding the same.


Shiju


Re: [Foundation-l] Creating articles in small wikipedias based on user requirement

2010-06-11 Thread Shiju Alex
This topic came up while we were discussing Google's translation effort.
Google and Google employees are using the Google translation toolkit to
translate English Wikipedia articles into many of the Indic language
Wikipedias.

We would definitely be more interested in Google translating these
user-requested articles than in it translating the English wiki articles
about American pop stars (for example,
http://en.wikipedia.org/wiki/Lady_Gaga). The issue is that we don't have
such a list to give to Google or its employees.





On Sat, Jun 12, 2010 at 5:56 AM, Mark Williamson node...@gmail.com wrote:

 +1. This would be a SUPER useful tool for all Wikis.
 -m.

 On Fri, Jun 11, 2010 at 3:54 AM, Shiju Alex shijualexonl...@gmail.com
 wrote:
  Recently I had a discussion with one of my fellow Malayalam wikipedian (
  http://ml.wikipedia.org) about the creation of new articles in small
  wikipedias like ours. He is one the few users who is keen on creating new
  articles *based on the requirement of our readers*. (Of course we have
 many
  people who only reads our wiki)
 
  During discussion he raised this interesting point:
 
  Some feature is required in the MediaWiki software that enable us to see
 a
  list of keywords used most frequently by the users to search for
 non-exist
  articles. If we get such a list then some users like him can concentrate
 on
  creating articles using that key words.
 
  Of course, I know that this feature may not be helpful for big wikis like
  English. But for small wikis (especially small non-Latin language wikis),
  this will be of great help. It is almost like* creating wiki articles
 based
  on user requirement*.
 
 
  I would like to know your opinion regarding the same.
 
 
  Shiju
 




Re: [Foundation-l] Fwd: change of registered TMs in Persian wikipedia

2010-06-09 Thread Shiju Alex
The best option, according to us (Malayalam Wikipedians,
http://ml.wikipedia.org), is the second option for article titles: that is,
transliterate the name into your language.

There is no copyright issue attached to that, I suppose. We do it
throughout our daily life: in local-language newspapers, TV reports, and
so on.


According to me, translating (the third option) the book name, film name,
company name, and so on is a bad idea from an encyclopedic point of view.
You can have a translated article title if a translated version of the
book/film is available in your language.

For example, in the Malayalam wikipedia we have an article about The
Imitation of Christ (http://ml.wikipedia.org/wiki/The_Imitation_of_Christ_%28book%29).
In this article, the information about the original book is provided.
Please note that the article title is a transliteration of the original
title of the book.

We also have another article, ക്രിസ്തുദേവാനുകരണം
(http://ml.wikipedia.org/wiki/%E0%B4%95%E0%B5%8D%E0%B4%B0%E0%B4%BF%E0%B4%B8%E0%B5%8D%E0%B4%A4%E0%B5%81%E0%B4%A6%E0%B5%87%E0%B4%B5%E0%B4%BE%E0%B4%A8%E0%B5%81%E0%B4%95%E0%B4%B0%E0%B4%A3%E0%B4%82),
for the Malayalam translation of this book. That article provides
information about the translated version of the original book.

Hope this helps.


Shiju





On Thu, Jun 10, 2010 at 7:09 AM, Ryan Lomonaco wiki.ral...@gmail.com wrote:

 Forwarded on behalf of a non-list-member.

 The question pertains to translation of trademarks within articles; to my
 knowledge, there's nothing wrong with us doing so, and I think this is done
 in many Wikipedias.  But I'll defer to the list on this question.

 -- Forwarded message --
 From: Amir sarabadani ladsgr...@gmail.com
 Date: Tue, Jun 8, 2010 at 3:58 PM
 Subject: change of registered TMs in Persian wikipedia
 To: foundation-l-ow...@lists.wikimedia.org


 Hello,
 I'm one of Persian wikipedia users.for making pages same name of
 trademarks (e.g. films ,games.etc.) we have several choices:
 1-we use  same name and same alphabetical with trademark(e.g.
 Google--Google)
 2-we same name but Persian Script(e.g. Call of duty--کال آو دیوتی/KAL
 AV DIUTI/)
 3-we translate it(Prince of Persian--شاهزاده ایرانی /SHAHZADE IRANI
 means Prince of Persia)
 Users of Persian wikipedia (with consequence) use third way usually
 but I think change of trademarks is crime and maybe create legal
 problem for the Foundation

 Please tell us what we do or maybe i think wrong please tell me.
 Thanks and best wishes
 --
 Amir



Re: [Foundation-l] Renaming Flagged Protections

2010-05-23 Thread Alex
On 5/23/2010 1:58 PM, Philippe Beaudette wrote:
 tbh, I'm very fond of Double check.  It seems to imply exactly what  
 we want: the edit isn't being accepted automatically, nor rejected,  
 but simply getting a second look.  It's fairly neutral in tone, and  
 understandable to the average person.

Except that, unless we consider the initial edit to be the first check,
it's not correct. Only one person independent of the editor is reviewing
each edit.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Renaming Flagged Protections

2010-05-23 Thread Alex
On 5/23/2010 8:40 PM, William Pietri wrote:
 On 05/23/2010 02:13 PM, David Levy wrote:
 James Alexander wrote:

 That is basically exactly how I see it, most times you double check
 something you are only the 2nd person because the first check is done by the
 original author. We assume good faith, we assume that they are putting
 legitimate and correct information into the article and checked to make sure
 it didn't break any policies, it's just that because of problems on that
 page we wanted to have someone double check.
  
 That's a good attitude, but such an interpretation is far from
 intuitive.  Our goal is to select a name that stands on its own as an
 unambiguous description, not one that requires background knowledge of
 our philosophies.

 I'll also point out that one of the English Wikipedia's most important
 policies is ignore all rules, a major component of which is the
 principle that users needn't familiarize themselves with our policies
 (let alone check to make sure they aren't breaking them) before
 editing.

 
 Allow me to quote the whole policy: If a rule prevents you from 
 improving or maintaining Wikipedia, ignore it. That implies, in my view 
 correctly, that the person editing is presumed to set out with the 
 intention of making the encyclopedia better.
 
 I think that fits in nicely with James Alexander's view: we can and 
 should assume that most editors have already checked their work. Not 
 against the minutiae of our rules, but against their own intent, and 
 their understanding of what constitutes an improvement to Wikipedia.
 
 Given that, I think double-check fits in fine, both in a very literal 
 sense and in the colloquial one. I ask people to double-check my work 
 all the time, with the implied first check always being my own.
 

We can assume most, but we cannot assume all, and it is the ones who don't
that we're especially concerned about. So the revisions that get
double-checked are mostly the ones that don't actually need it; the
intentionally bad edits are only getting a single check.

And of course, this raises the question: if we're assuming that most
editors check their work and are trying to improve the encyclopedia, why
do we need to double-check their work? We wouldn't call the system
"Second guess", but that's rather what this explanation sounds like.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] New project proposal: wiki-based troubleshooting

2010-05-09 Thread Alex
On 5/4/2010 5:16 PM, Yao Ziyuan wrote:
 Thomas Dalton wrote:

 We definitely do not want to be giving medical advice to people. If
 you get that wrong, people die. Medical advice should be got by going
 to the doctors. Can you give another example of what your idea could
 
 Yes, medical troubleshooting is both extremely useful and extremely
 sensitive, and that's why I said Like Wikipedia, WikiTroubleshooting
 should cite credible references. We could put a warning and a
 disclaimer on every medical troubleshooting page telling the visitor
 to check cited references and other sources before adopting any
 advice.

A disclaimer would probably shield us from lawsuits, but there would
still be a lot of ethical issues in "the free medical advice anyone can
edit" (since we know most people won't check sources, especially print
sources). Setting aside the issue of vandalism, even a well-intentioned
edit by someone who doesn't have adequate medical training could cause
problems if they misread a source or use a source that isn't as reliable
as they think. A much higher standard of reliability would be needed for
something like that.

 How can a wiki implement a troubleshooting wizard? A wizard is a set
 of pages. Each page assumes you have specified certain symptoms (e.g.
 symptom1, symptom3, symptom5) of your problem and asks you a question
 to specify a new symptom (e.g. symptom10); then it redirects you to a
 next page that assumes you have specified symptoms 1, 3, 5 and 10 and
 asks you yet another question or shows you possible causes and
 solutions for the symptoms you have specified so far (1, 3, 5, 10).
 
 Therefore they're just static HTML pages where each page can link to
 one or more next pages. This is exactly what a wiki can do.

The main issue I can see (other than the one with medical advice and the
like) is that troubleshooters don't lend themselves as well to
incremental building. A Wikipedia article with only a few sentences or a
Wikibook with only a couple of chapters is still slightly useful. A
troubleshooter with only a couple of steps is much less so.

Say you have a troubleshooter for a printer not working:
1. Is the printer plugged in and on?
Yes
2. Is there paper loaded?
Yes
3. Sorry, that's all this troubleshooter can help you with for now.
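The wizard-as-linked-pages idea, including the printer example above, can be sketched as a small decision tree. The node layout and wording below are just an illustration of the structure, not a proposed implementation:

```python
# Each page of the wizard is a node: a question plus where each answer leads.
# Strings that are not node names are leaves, i.e. the advice itself.
TREE = {
    "start": {"q": "Is the printer plugged in and on?",
              "yes": "paper",
              "no": "Plug in and power on the printer."},
    "paper": {"q": "Is there paper loaded?",
              "yes": "Sorry, that's all this troubleshooter covers for now.",
              "no": "Load paper into the tray."},
}

def run(answers, tree=TREE):
    """Walk the wizard with a sequence of yes/no answers until a leaf is hit."""
    node = "start"
    for ans in answers:
        node = tree[node]["yes" if ans else "no"]
        if node not in tree:      # reached a leaf: return the advice
            return node
    return tree[node]["q"]        # ran out of answers: next question to ask

print(run([True, False]))  # plugged in, but no paper
```

A wiki page per node maps directly onto this: each "yes"/"no" value is just a link to the next page. The sketch also makes the incompleteness problem visible: a tree with only two questions runs out of advice almost immediately.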

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Where things stand now

2010-05-09 Thread Alex
On 5/8/2010 9:08 AM, Jimmy Wales wrote:
 
 Much of the cleanup is done, although there was so much hardcore 
 pornography on commons that there's still some left in nooks and crannies.
 
 I'm taking the day off from deleting, both today and tomorrow, but I do 
 encourage people to continue deleting the most extreme stuff.
 
 But as the immediate crisis has passed (successfully!) there is not 
 nearly the time pressure that there was.  I'm shifting into a slower mode.
 
 We were about to be smeared in all media as hosting hardcore pornography 
 and doing nothing about it.  Now, the correct storyline is that we are 
 cleaning up.  I'm proud to have made sure that storyline broke the way 
 it did, and I'm sorry I had to step on some toes to make it happen.
 
 Now, the key is: let's continue to move forward with a responsible 
 policy discussion.
 
 

The "correct storyline" now is that Wikimedia is purging historical works
by notable artists and bending to pressure from American conservative
media. Outside of Fox News, I've yet to see this picked up by any
significant media outlet.[1]

The way I see it, given the rushed and ham-fisted way this was done,
we'll be lucky if it doesn't completely backfire on us: the
non-conservative media could make it look like we're burning books, or
misconstrue it and assume we've adopted some sort of outright no-nudity
policy.

[1] http://bit.ly/d8y5vy

-- 
Alex (wikipedia:en:User:Mr.Z-man)





Re: [Foundation-l] Sue Gardner, Erik Möller, William Pietri: Where is FlaggedRevisions?

2010-03-01 Thread Alex
The English Wikipedia isn't asking for a total rewrite of the extension.
FlaggedRevs was always (at least since it was first deployed) highly
customizable. I believe the Flagged Protection feature could have been
implemented, or very nearly so, at the time the proposal was finalized.
Supposedly the changes being made now are mostly UI and workflow changes
to make it easier to use. (Why this wasn't done before it was deployed on
dewiki or anywhere else, I don't know.)

It's not as if enwiki deciding to use FlaggedRevs was a total surprise.
Erik had always assumed that enwiki would get it eventually, so why did
the Foundation wait until six months /after/ enwiki requested it to hire
people to work on it?

-- 
Alex (wikipedia:en:User:Mr.Z-man)

On 3/1/2010 7:58 AM, Gerard Meijssen wrote:
 Hoi,
 One of the things developers are not necessarily good at is communication as
 in keeping everyone up to date. With the many channels secret and not so
 secret. With the ferocity that many say typify the mailing lists, it is no
 wonder that we hear few if any updates.
 
 In my opinion the reason why the English language Wikipedia does not have
 Flagged Revisions already is because they did not want the fully functional
 Flagged Revisions that is used for some years now on the German language
 Wikipedia. Wanting something different is its prerogative but it does not
 follow that it is easy or quick. Remember the 80/20 rule and remember that
 the special wishes makes the software more complicated.
 
 The English language Wikipedia is also spoiled because it gets the things
 programmed. When you consider that many of the issues with RTL languages and
 font issues like with the Malayalam language get hardly the attention they
 require, it is rather obvious that tantrums prevent information becoming
 available on the public mailinglists.
 Thanks,
   GerardM
 
 On 1 March 2010 13:18, Casey Brown li...@caseybrown.org wrote:
 
 On Sun, Feb 28, 2010 at 11:18 PM, Aphaia aph...@gmail.com wrote:
 Not a sarcasm, but I would like to point out SUL, single user login
 took years to implement to the project wikis, and we even called once
 it Godot. FlaggedRevs implementation also - it took years to
 realize. Months are relatively shorter, and I hope you guys could wait
 for in a less pain.


 Yes, but no one was contracted for work on SUL.  People are being paid
 to work on *just* FlaggedRevs, it's not something that the tech team
 has to fit into their time to develop.

 --
 Casey Brown
 Cbrown1023


 





Re: [Foundation-l] Sue Gardner, Erik Möller, William Pietri: Where is FlaggedRevisions?

2010-02-28 Thread Alex
On 2/28/2010 10:32 PM, MZMcBride wrote:
 William Pietri wrote:
 As soon as that's ready, I will be very excited to put up test versions
 of both the English Wikipedia and the German one, so that the community
 can test, give feedback, and opine on whether it's ready to go.
 
 When might that be? Is there a specific deadline? If not, why? And if there
 is a deadline and it slips by yet again, what's the consequence to those
 running the project?
 

I second this. Are William and Howie just under contract indefinitely
until FlaggedRevs is finally "ready"? Whom are they responsible to, and
why is that person apparently not giving them any sort of priorities
(like creating a plan or a deadline)?

Why is there so little transparency in this whole process? Rather than
use the normal bug tracker that all other MediaWiki developers use and
that the community is used to, they're using an entirely separate one,
hosted on a third-party website. As far as I can tell, there has been
only one unprompted communication with the community regarding this: the
techblog post in January, which had little new information.

It's been more than four months, and we haven't been able to get even a
vague timeline. IMO, setting a deadline, missing it, and explaining why
it was missed is better than not setting a deadline until you know you
can meet it (which rather defeats the purpose of setting one).

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] I'm here to request a new Wikimedia project

2010-02-27 Thread Alex
I wonder which would be harder: trying to start the first new project in
years, or trying to get the English Wikipedia to make a significant
policy change?

-- 
Alex (wikipedia:en:User:Mr.Z-man)


On 2/27/2010 12:40 PM, David Goodman wrote:
 WP contains many of  the essential elements  of an almanac already, and
 could very easily cover all the rest-- it doesn't take a new project, just a
 relaxation of some of the self-imposed strictures.  Relaxing, without
 eliminating , NOT NEWS ,  NOT DIRECTORY, and   NOT INDISCRIMINATE . there's
 only one point that would need actual removal:  NOT INDISCRIMINATE point 3,
 Excessive listing of statistics.
 
 The basic change could be accomplished by doing just that one
 deletion--there is not need for another project, or even another space for
 data.
 
 David Goodman, Ph.D, M.L.S.
 http://en.wikipedia.org/wiki/User_talk:DGG
 
 
 On Sat, Feb 27, 2010 at 11:12 AM, Casey Brown li...@caseybrown.org wrote:
 
 On Sat, Feb 27, 2010 at 11:02 AM, Tyler programmer...@comcast.net wrote:
  I was just wondering, how would you like to start an almanac, guys? That
 would be neat, a wiki almanac.


 http://meta.wikimedia.org/wiki/Proposals_for_new_projects :-)

 --
 Casey Brown
 Cbrown1023


 




Re: [Foundation-l] Frequency of Seeing Bad Versions - now with traffic data

2009-08-27 Thread Alex
Anthony wrote:
 On Thu, Aug 27, 2009 at 2:58 PM, Anthony wikim...@inbox.org wrote:
 
 On Thu, Aug 27, 2009 at 2:50 PM, Thomas Dalton 
 thomas.dal...@gmail.comwrote:

 I would put money on a significant majority of reverts being
 reverts of vandalism rather than BRD reverts, it may not be an
 overwhelming majority, though.

 I don't know about that, though I won't take the other end of the bet.
  Have you done much editing while not logged in?  If so, I think you have to
 admit that it's quite common to find yourself reverted for things which are
 not properly classified as vandalism.

 
 Just going through recent changes looking for rv (which is not the only
 thing detected by Robert's software, and is probably the most likely to be
 actual vandalism)...
 

Most vandalism reversion on enwiki (I believe) is done with automated
tools and/or rollback rather than by manual reversion.

These typically leave more detailed summaries:
"Reverted N edits by X identified as vandalism to last revision by Y"
"Reverted edits by X (talk) to last version by Y"
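As a rough illustration, a classifier could pattern-match on the two summary styles quoted above. Real tools use varying formats, so these regexes are only a sketch, not an exhaustive detector:

```python
import re

# Patterns for the two automated-revert summary styles quoted above.
REVERT_PATTERNS = [
    re.compile(r"^Reverted \d+ edits? by .+ identified as vandalism"),
    re.compile(r"^Reverted edits by .+ \(talk\) to last version by .+"),
]

def looks_like_auto_revert(summary):
    """Return True if an edit summary matches a known automated-revert style."""
    return any(p.match(summary) for p in REVERT_PATTERNS)

print(looks_like_auto_revert(
    "Reverted 2 edits by Example identified as vandalism to last revision by Admin"))
# True
```

A study that counts only "rv"-style summaries would therefore miss most tool-assisted vandalism reverts.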

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] How much of Wikipedia is vandalized? 0.4% of Articles

2009-08-20 Thread Alex
Robert Rohde wrote:
 
 Does anyone have a nice comprehensive set of page traffic aggregated
 at say a month level?  The raw data used by stats.grok.se, etc. is
 binned hourly which opens one up to issues of short-term fluctuations,
 but I'm not at all interested in downloading 35 GB of hourly files
 just to construct my own long-term averages.
 

I don't have every article, but I have the data for July 2009 for
~600,000 pages on enwiki (mostly articles). It also has the hit counts
for redirects aggregated with the article; I'm not sure whether that
would be more or less useful for you. Let me know if you want it; it's
in a MySQL table on the toolserver right now.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Proposal for Wikimedia Weather

2009-07-08 Thread Alex
Tris Thomas wrote:
 Dear All,
 I don't know whether this has been discussed before, apologies if it has.
 
 I'm interested in people's thoughts on a new Wikimedia project-maybe 
 WikiWeather, which basically would do what it says on the tin.  Along 
 with importing national weather from other sources(especially to begin 
 with), contributors could then put their weather where they are.  This 
 could evolve into many contributors giving very localised weather 
 forecasts worldwide, which could be used by many of the other projects 
 and anybody else.
 
 Would people be interested in this proposal/have any thoughts on it?
 
 Thanks!
 
 Wikinews User Page http://en.wikinews.org/wiki/User:Tristan%20Thomas

Except for forecasts (though it might be interesting to see how wiki
users compare to professional meteorologists), weather is mostly just
data, so computers can generally provide it better than people.

The only way I could see this as possibly being better than existing
services would be if it was set up so that people who had home weather
stations that could connect to a computer could automatically update the
site.

But (a) that kind of equipment is expensive (at minimum ~$100 USD for
something that records only temperature), and (b) it wouldn't really be
a wiki, as the majority of the content would be updated automatically;
only the forecasts would be human-produced.

Though you'd start to run into the principle of diminishing returns with
that - how much better is a weather report from 2 miles away versus 5
miles away? After a point, it just becomes redundant. And of course we'd
still have to rely on the weather services for things like radar and
satellite images.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Wikipedia tracks user behaviour via third party companies

2009-06-05 Thread Alex
John at Darkstar wrote:
 
 Alex skrev:
 John at Darkstar wrote:
 Hmm? There's no reason to do anything like that. The AbuseFilter would
 just prevent sitewide JS pages from being saved with the particular URLs
 or a particular code block in them. It'll stop the well-meaning but
 misguided admins. Short of restricting site JS to the point of
 uselessness, you'll never be able to stop determined abusers.

 A very typical code fragment to make a stat url is something like

 document.write('<img src="' + server + digest + '">');

 - server is some kind of external url
 - digest is just some random garbage to bypass caching

 This kind of code exists in so many variants that it is very difficult
 to say anything about how it may be implemented. Often it will not use a
 document.write on systems like Wikipedia but instead use createElement()
 Very often someone claims that the definition of server will be
 complete and may be used to identify the external server sufficiently.
 That is not a valid claim as many such sites can be referred for other
 purposes. 
 Other purposes that have valid uses loading 3rd party content on a
 Wikimedia wiki? Like what?
 
 If you don't trust other sites you also has to accept that you can't
 trust ant kind of «toolserver» where you don't have complete control.
 That opens a lot of problems

It's not just a matter of trust; it's a matter of use. Why would people
be loading content from, or linking to, servers used to collect website
stats in the sitewide JS on a Wikimedia wiki?

 Note also that the number of urls will be huge as this type of
 service is very popular, not to say that anyone that want may set up a
 special stat aggregator on an otherwise unknown domain.

 Basically, simple regexps are not sufficient for detecting this kind of
 code.
 I don't think I said it would be perfect, the idea isn't to 100% prevent
 it, just to try to stop the most obvious cases like Google analytics.
 
 Its not that it won't be perfect, it simply will not work.

And anything more complex would likely be too complicated and/or too
inefficient to be worthwhile.

 John
 
 


-- 
Alex (wikipedia:en:User:Mr.Z-man)




Re: [Foundation-l] Wikipedia tracks user behaviour via third party companies #2

2009-06-05 Thread Alex
effe iets anders wrote:
 2009/6/5 Peter Gervai grin...@gmail.com
 
 snip
 The stats (which have, by surprise, a dedicated domain under th hu
 wikipedia domain) runs on a dedicated server, with nothing else on it.
 Its sole purpose to gather and publish the stats. Basically nobody
 have permission to log in the servers but me, and I since I happen to
 be checkuser as well it wouldn't even be ntertaining to read it, even
 if it wasn't big enough making this useless. I happen to be the one
 who have created the Hungarian checkuser policy, which is, as far as I
 know, the strictest one in WMF projects, and it's no joke, and I
 intend to follow it. (And those who are unfamiliar with me, I happen
 to be the founder of huwp as well, apart from my job in computer
 security.)
 snip

 
 Just a remark on the checkuser argument. Checkuser actions and checks are
 logged, and can be double checked by other checkusers and stewards. This
 server can not. I can imagine that this would pose a problem.
 

Checkuser also stores the data only for a known period of time (three
months) and, with the fairly recent exception of user-to-user email, it
records only actions that are publicly logged by MediaWiki (edits and
other logged actions), not individual pageviews.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Wikipedia tracks user behaviour via third party companies

2009-06-04 Thread Alex
Dan Rosenthal wrote:
 Installing Google Analytics, even for our own purposes, is a bad idea.  
 For one, it creates a link to google that is not necessarily what we  
 want; it would be a big target for people to try and hack, and it  
 presents tempting security risks on Google's end.  Not to mention, as  
 far as I know the program is proprietary.
 
 If we're going to do something like this, it should be open source,  
 and it should something that we can internally install and monitor  
 without external options. That is, again, assuming we do something  
 like that. That's not a foregone presumption. I'm not convinced that  
 we need to be tracking user behavior at this point in time, or that  
 the tradeoffs for doing so are worth any benefits, or that doing so is  
 in furtherance of our mission.
 

The plain pageview stats are already available.
Erik Zachte has been doing some work on other stats.
http://stats.wikimedia.org/EN/VisitorsSampledLogRequests.htm

If I were to compile a wishlist of stats things:

1. stats.grok.se data for non-Wikipedia projects
2. A better interface for stats.wikimedia.org. There's a lot of data
there, but it can be hard to find and it's not well publicized. The
only reason I knew about the link above is that someone pointed it
out to me once and I bookmarked it.
3. Pageview stats at http://dammit.lt/wikistats/ split into per-project
files. It would be a lot easier for people at the West Flemish
Wikipedia to analyze statistics themselves if they didn't have to
download tons of data they don't need.
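Splitting the combined dump per project is itself straightforward. A minimal sketch, assuming the usual space-separated "project title count bytes" line format of the pagecounts files (the sample lines are invented):

```python
def split_by_project(lines, wanted):
    """Keep only lines whose first field (the project code) is in `wanted`.

    Assumes each line looks like: "<project> <title> <count> <bytes>".
    """
    out = []
    for line in lines:
        project = line.split(" ", 1)[0]
        if project in wanted:
            out.append(line)
    return out

sample = [
    "vls Hoofdpagina 42 90210",
    "en Main_Page 100000 123456789",
    "vls Brugge 7 14000",
]
print(split_by_project(sample, {"vls"}))
```

Run once over each hourly file, this would give small communities a per-project download instead of the full firehose.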

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Wikipedia tracks user behaviour via third party companies

2009-06-04 Thread Alex
John at Darkstar wrote:
 One idea is the proposal to install the AbuseFilter in a global mode,
 i.e. rules loaded at Meta that apply everywhere.  If that were done
 (and there are some arguments about whether it is a good idea), then
 it could be used to block these types of URLs from being installed,
 even by admins.
 
 Identifying client-side generated urls from the server side opens up a whole
 lot of problems of its own. Basically you need a script that runs in a
 hostile environment and reports back to a server when a whole series of
 urls are injected from code loaded from some sources (mediawiki-space)
 but not from other sources (user space); still, code loaded from user
 space through a call to mediawiki space should be allowed. Add to this
 that your url-identifying code has to run after a script has generated
 the url and before it does any cleanup. The url verification can't just
 say that a url is hostile, it has to check it somehow, and that leads to
 reporting of the url - if the reporting code still executes at that
 moment. Urk...
 

Hmm? There's no reason to do anything like that. The AbuseFilter would
just prevent sitewide JS pages from being saved with the particular URLs
or a particular code block in them. It'll stop the well-meaning but
misguided admins. Short of restricting site JS to the point of
uselessness, you'll never be able to stop determined abusers.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Wikipedia tracks user behaviour via third party companies

2009-06-04 Thread Alex
John at Darkstar wrote:
 Hmm? There's no reason to do anything like that. The AbuseFilter would
 just prevent sitewide JS pages from being saved with the particular URLs
 or a particular code block in them. It'll stop the well-meaning but
 misguided admins. Short of restricting site JS to the point of
 uselessness, you'll never be able to stop determined abusers.

 
 A very typical code fragment to make a stat url is something like
 
 document.write('<img src="' + server + digest + '">');
 
 - server is some kind of external url
 - digest is just some random garbage to bypass caching
 
 This kind of code exists in so many variants that it is very difficult
 to say anything about how it may be implemented. Often it will not use
 document.write() on systems like Wikipedia, but instead use createElement().
 Very often someone claims that the definition of server will be
 complete and may be used to identify the external server sufficiently.
 That is not a valid claim, as many such sites can be referenced for other
 purposes. 

Other purposes that involve legitimately loading 3rd party content on a
Wikimedia wiki? Like what?

 Note also that the number of urls will be huge, as this type of
 service is very popular, not to say that anyone who wants to may set up a
 special stat aggregator on an otherwise unknown domain.
 
 Basically, simple regexps are not sufficient for detecting this kind of
 code.

I don't think I said it would be perfect; the idea isn't to 100% prevent
it, just to try to stop the most obvious cases like Google Analytics.

 Otherwise, take a look at Simetricals earlier post.
 
 John
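For concreteness, the createElement() variant John describes might look something like the following sketch. It is illustrative only, with a made-up hostname, not code taken from any actual site script:

```javascript
// Illustrative only: the hostname and path are made up. A beacon like the
// one described assembles its URL at runtime, so no literal tracker domain
// appears in the saved text of the script page.
let beaconCounter = 0;

function beaconUrl() {
  const server = ['stats', 'example', 'com'].join('.'); // hypothetical host
  const digest = (++beaconCounter).toString(36);        // cache-buster
  return '//' + server + '/pixel.gif?' + digest;
}

// In a browser this would then be attached via createElement():
//   const img = document.createElement('img');
//   img.src = beaconUrl();
//   document.body.appendChild(img);
```

Because the hostname is assembled at runtime, a save-time regexp over the page text never sees the literal domain, which is exactly the evasion problem described above.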
 

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Long-term archiving of Wikimedia content

2009-05-05 Thread Alex
David Goodman wrote:
 That is like saying, Why should I back up my computer now, when there
 will be high-capacity media in a few years, or when the next version
 of the OS will do it automatically?
 
 or, more closely,
 
 Why should a book-scanning project even be bothered with now? In a
 future generation we might well have scanners that will do it much
 more efficiently without opening the books.
 

Because hard drive failure is far more likely than civilization
collapsing or all computers ceasing to work or exist at the same time.
Wikimedia has backups and redundancy; just not in a non-electronic form
designed to survive 1000 years/nuclear war/asteroid impact/etc.

This is somewhat the opposite of book scanning. With book scanning,
you're taking something that may only be available to a handful of
people and allowing many more people to access it by creating
distributable electronic copies. With the proposals here, we'd be taking
something that's already available to everyone electronically and
etching it onto metal plates, engraving in stone, etc. and presumably
locking it in a bomb shelter somewhere, so the benefits/costs aren't the
same.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Cross-wiki articles

2009-05-02 Thread Alex
Yoni Weiden wrote:
 The question is - shouldn't there be one set of standards for all
 Wikipedias? I think it is unfair that I can read about Simpsons episodes
 in the English Wikipedia, while those who speak Hebrew cannot.
 

Such a policy would have had to be decided several years ago. At this
point, even the best compromise proposal would likely mean major changes
for some projects. And on the largest projects, even a small change to
the primary inclusion guideline would likely have a big impact.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Board statement regarding biographies of living people

2009-04-30 Thread Alex
Anthony wrote:
 On Thu, Apr 30, 2009 at 1:10 PM, Domas Mituzas midom.li...@gmail.com wrote:
 
 Actually, I'd be happy if you were right (and you probably are!) - it
 shows, that lots of people had the motivation to come to this
 excursion.
 
 
 But yet you can't classify it as leisure?

It may not be for the board members, but I imagine for the volunteer
developers and other community members who had few or no real
commitments it was.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Board statement regarding biographies of living people

2009-04-30 Thread Alex
Michael Bimmler wrote:
 On Thu, Apr 30, 2009 at 2:15 PM, Alex mrzmanw...@gmail.com wrote:
 Anthony wrote:
 On Thu, Apr 30, 2009 at 1:10 PM, Domas Mituzas midom.li...@gmail.com wrote:

 Actually, I'd be happy if you were right (and you probably are!) - it
 shows, that lots of people had the motivation to come to this
 excursion.

 But yet you can't classify it as leisure?
 It may not be for the board members, but I imagine for the volunteer
 developers and other community members who had few or no real
 commitments it was.


 
 I'm not sure why you regard the commitments a chapter board member has
 towards his/her chapter as less real or less serious than the
 commitments a WMF board member or staff employee has towards the WMF.
 Really, this was not a wiki-meetup...
 

I'm not sure why you feel the need to read more into my comment than was
there. It was a short comment, so I thought a short reply would be
adequate. My apologies for not researching and specifically mentioning
every group that was at the event.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] [WikiEN-l] Flagged revs poll take 2

2009-03-30 Thread Alex
Nathan wrote:
 CC'd this to Foundation-l.
 
 There is a poll currently on the English Wikipedia to implement a version of
 FlaggedRevisions. The poll was introduced into the vacuum which
 remained after the first poll failed to result in concrete action. At the
 close of poll #1, Jimmy indicated that he thought it had passed and should
 result in an FR implementation. When he received some protest, he announced
 that he would shortly unveil a new compromise proposal.
 
 While I'm sure he had the best of intentions, this proposal hasn't
 materialized and the result has been limbo. Into the limbo rides another
 proposal, this one masquerading as the hoped-for compromise. Unfortunately,
 it isn't - at least, not in the sense that it is a middle ground between
 those who want FR implemented and those who oppose it. What it does do is
 compromise, as in fundamentally weaken, the concept of FR and the effort to
 improve our handling of BLPs.
 
 The proposed implementation introduces all the bureaucracy and effort of
 FlaggedRevisions, with few of the benefits. FlaggedProtection, similar to
 semi-protection, can be placed on any article. In some instances,
 FlaggedProtection is identical to normal full protection - only, it still
 allows edit wars on unsighted versions (woohoo). Patrolled revisions are
 passive - you can patrol them, but doing so won't impact what the general
 reader will see. It gives us the huge and useless backlog which is exactly
 what we should not want, and exactly what the opposition has predicted. The
 only likely result is that inertia will prevent any further FR
 implementation, and we'll be stuck with a substitute that grants no real
 benefit.
 
 What I would like to see, and what I have been hoping to see, is either
 implementation of the prior proposal (taking a form similar to that used by
 de.wp) or actual proposal of a true compromise version. The current poll
 asks us to just give up.
 

How is it not a compromise? It's a version that most of the supporters
still support and that many of the opposers now support. Compromise
involves both sides making concessions, not repeatedly proposing the
same thing in hopes of a different outcome. So far it has far more
community support than the previously proposed version (which had what?
60% support?).

I'm getting really mad at the people opposing every version of
FlaggedRevs that doesn't provide some ultimate level of protection for
BLPs. If you want something that helps BLPs, PROPOSE SOMETHING! Sitting
around and opposing everything in favor of some non-existent system is
unhelpful and basically saying that articles are worthless unless they
are BLPs. The proposed system can potentially help some articles, while
this un-proposed system that will be a magic bullet for the BLP problem
currently helps nothing, because it doesn't exist.

I agree that patrolled revisions have a high likelihood of failing. It's
too bad we aren't proposing to use it as a trial instead, so if they
don't work, we can come up with a different system. Oh, wait...

If FlaggedProtection results in a manageable system, then we can
consider expanding it to more articles than the current policy would
allow. Enwiki is big and slow; expecting it to do some massive, visible
change over hundreds of thousands of articles all at once is rather
unrealistic. Several months ago, I told Erik on this list that enwiki
would never be able to get consensus for FR; it looks like I'm wrong
about that. Perhaps there's some hope left after all.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Abuse filter

2009-03-25 Thread Alex
Bence Damokos wrote:
 On Wed, Mar 25, 2009 at 5:55 PM, Robert Rohde raro...@gmail.com wrote:
 
 Just so everyone is clear:

 1) The abuse log is public.  Anyone, including completely anonymous
 IPs, can read the log.

 2) The information in the log is either a) already publicly available
 by other means, or b) would have been made public had the edit been
 completed.  So abuse logging doesn't release any new information that
 wouldn't have been available had the edit been completed.  (Some of
 the information it does release, such as User ID number and time of
 email address confirmation, is extremely obscure though.  While
 public in the sense that it could be located by the public, some of
 the things in the log would be challenging to find otherwise.)
 
 
 Is it a wild assumption on the part of an editor, after he has been
 warned for an abuse and has not pursued it (by forcing a save, if the save
 button is available), to assume that his action was lost, and thus be
 surprised to see it publicly logged?
 
 In my opinion pressing the preview button and then not saving is a similar
 use case as being warned by the abuse filter and not saving -- you should
 not expect the lost edit in either case to be publicly available. I think at
 the least the abuse warning should make it clear that the action and *x, y, z
 data of the user* were publicly logged.

Except his assumption when clicking save, before ever seeing the abuse
filter warning, was that his edit would be publicly viewable
immediately. Unless the user was purposely intending to do something
that he knew would be disallowed by the abuse filter, he was fully
intending for whatever he wrote to be made public.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Request for your input: biographies of living people

2009-03-04 Thread Alex
Chad wrote:
 
 While working with OTRS, I actually sent several articles through AfD. And I
 typically didn't announce that it was an OTRS thing, so as to let the
 community judge the article on its own merits. This would actually be a
 decent policy to follow: encourage OTRS respondents to send the marginally
 notable through the normal AfD process (like any other) and let those in
 the community better equipped to deal with deletion/BLP issues handle it.
 

This assumes that both of those groups are the same. Many people
involved in the deletion processes are rather unconcerned with BLP
issues (or things like sourcing and NPOV, as long as it's notable), and
many people concerned about BLPs don't involve themselves in the
deletion process.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Wikia leasing office space to WMF

2009-01-24 Thread Alex
Brian wrote:
 If the CIA were to hand you an improved-mediawiki binary, sure
PHP is an interpreted language. Surely you wouldn't use someone else's byte
code.
 
 
 On Sat, Jan 24, 2009 at 8:32 AM, Platonides platoni...@gmail.com wrote:
 
 Nikola Smolenski wrote:
 Given that we know that the NSA conducts massive illegal spying
 operations, there is a possibility that SELinux is altered in a fashion
 that will make it easier for the NSA to spy on SELinux users. I don't
 know what the CIA's contributions to MediaWiki are, but unless it is
 trivial to review them, I would not accept them.
 If the CIA were to hand you an improved-mediawiki binary, sure. You could
 very well be suspicious about it. But we're talking about open source.
 They would be providing the changes, which are to be reviewed, like any
 other code, or perhaps even more, due to coming from the CIA.

 Take into account that the CIA and NSA need good software, too. So if they
 add a backdoor, they would need to add it *and* at the same time make it
 easy to protect from it, as they wouldn't want their own systems spied on
 by their own rootkit (and someone will end up forgetting to apply it).

 Instead, contributing good fixes makes everything easier.

 OTOH I encourage you to review selinux. That would make a great heading
 'Nikola Smolenski discovers NSA backdoor on Linux code'


This is getting rather off-topic, especially for this thread, and
possibly for the list as well.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Wikia leasing office space to WMF

2009-01-24 Thread Alex
I'm criticizing the switch from "Wikia leasing office space to WMF" to
"Is the CIA evil?" I just responded to the most recent email in my
inbox; I thought that would be more appropriate than responding to all
17 CIA/NSA-related emails. I was not criticizing you in particular.

The topic of this thread is Wikia leasing office space to WMF, as
should be clear from the subject. And the topic of the list is
Wikimedia-related issues. It's almost on topic for the list (MediaWiki
is at least mentioned occasionally), but it's certainly not at all related
to the topic of the thread.

Brian wrote:
 It was a clear factual error which I corrected. If you aren't going to
 criticize the original comment you have no basis for criticizing the
 correction.
 At any rate, what exactly is the topic of this thread, in your opinion?
 
 
 On Sat, Jan 24, 2009 at 10:42 AM, Alex mrzmanw...@gmail.com wrote:
 
 Brian wrote:
 If the CIA were to hand you a improved-mediawiki binary, sure
 PHP is an interpreted language. Surely you wouldn't use someone elses
 byte
 code.



 This is getting rather off-topic, especially for this thread, and
 possibly for the list as well.

 


-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] transparency or translucency?

2009-01-10 Thread Alex
James Rigg wrote:
 This 'principle':
 
 The mailing list will remain open, well-advertised, and will be
 regarded as the place for meta-discussions about the nature of
 Wikipedia.
 
 does seem to be referring to not just content, but also the running of
 Wikipedia. But the 'private' mailing lists which now exist seem to be
 a departure from this.
 

As has been said, certain things require privacy, if not by law, by
common sense or courtesy. Obviously things like CheckUser data can't be
discussed in public and making things like emails to OTRS and
oversight-l public would greatly reduce their usefulness to the projects.

The biggest departure from that principle is that most of the day-to-day
running isn't done on the mailing lists; mostly everything at the
project level is done on-wiki on discussion pages.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Why is the software out of reach of the community?

2009-01-10 Thread Alex
Brian wrote:
 Mark,
 Keep in mind regarding my Semantic drum beating that I am not a developer of
 Semantic Mediawiki or Semantic Forms. I am just a user, and as Erik put it,
 an advocate.
 
 That said, I believe these two extensions together solve the problem you are
 talking about. And for whatever reason, the developers of MediaWiki are
 willing to create new complicated syntax, but not new interfaces.
 
 In your assessment, do these extensions solve the interface extensibility
 problem you describe?
 
 To the list,
 
 Regarding development process, why weren't a variety of sophisticated
 solutions, in addition to ParserFunctions, thoroughly considered before they
 were enabled on the English Wikipedia?
 
 Should ParserFunctions be reverted (a simple procedure by my estimate, which
 is a good thing) based solely on the fact that they are the most clear
 violation of Jimbo's principle that I am aware of?
 

A simple procedure? Yes, disabling the extension would be rather simple;
repairing the thousands of templates that would be broken in the
process, not so much.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] GFDL QA update and question

2009-01-09 Thread Alex
geni wrote:
 2009/1/10 Anthony wikim...@inbox.org:
  It isn't clear what it means.
 There seems to be a belief that it can be interpreted to only require
 attribution of 5 authors, and I don't like that at all.
 
 The word five doesn't appear in the license and 5 only appears in
 a section name and one reference to the section.
 
 There might be a way to use one of the clauses to do this but it would
 be darn hard and the foundation has made statements that it won't use
 the relevant clause.
 

It's actually the GFDL that has the 5 principal authors thing, section 4.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] GFDL QA update and question

2009-01-08 Thread Alex
Anthony wrote:
 There are very few offline reusers of Wikipedia content.  I know of none
 that are using more than de minimis portions of my content without
 attributing me.  If you know of any, please, tell me who they are, and I'll
 send a cease and desist to them.
 
 This switch to CC-BY-SA is clearly going to open the door for offline
 reusers to use Wikipedia content without attributing authors beyond listing
 one or more URLs.  In fact, it's quite clear from discussions which have
 taken place on this list that this is the main point of making the switch.
 The WMF condoning and facilitating such behavior is absolutely unacceptable,
 no matter how many people vote to do so.
 

This is a bad thing? Whatever happened to that spreading free content
goal we had? Or does that only apply on the internet?

There probably aren't many offline reusers because they're either
entirely non-compliant and we have no idea that they exist or they want
to be compliant, read the terms of the GFDL, and decide not to bother
with our content.
-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Remembering the People (was Fundraiser update)

2009-01-08 Thread Alex
Marc Riddell wrote:
 on 1/8/09 9:20 PM, Erik Moeller at e...@wikimedia.org wrote:
 
 2009/1/8 Marc Riddell michaeldavi...@comcast.net:
 This is pure unsubstantiated rhetoric. There are real-life, real-time
 problems - serious problems - that directly involve the people occurring in
 the English Wikipedia for example. Where is your help?
 Marc, can you give examples of what kind of help you'd like to see?
 
 Yes, Erik, I can. Just two for now, it's been a long day for me and I still
 have tomorrow's sessions to prepare for.
 
 * A person at the Foundation level who has true, sensitive inter-personal as
 well as inter-group skills, and who would keep a close eye on the Project,
 looking for impasses when they arise. The person would need to be objective
 and lobby-resistant ;-). This would be the person of absolute last resort in
 settling community-confounding problems.

Why are local ArbComs insufficient for this? If the community is unable
to resolve the dispute, I highly doubt someone who's a relative outsider
stepping in the middle would be able to unless they just issue an
official, non-negotiable edict.

 *This is more of a cultural issue: I would like to see the more established
 members of the community be more open to criticism and dissent from within
 the community. As it is now that tolerance is extremely low. I'm not talking
 about me; I'm an old Berkeley war horse and have been called things I had to
 look up :-). But I have gotten private emails from persons in the community
 with legitimate beefs, along with some good ideas for change, who are very
 reluctant to voice them because of how they believe they will be received.
 

And how is the foundation supposed to resolve this? Counsel people into
changing their opinions? Ban people who appear to be suppressing
criticism? Forcibly change policies? Act as proxies for people afraid of
criticism? I'm struggling to think of anything that could be done on a
foundation level that would be effective here.


Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Remembering the People (was Fundraiser update)

2009-01-08 Thread Alex
Marc Riddell wrote:
 on 1/8/09 11:02 PM, Alex at mrzmanw...@gmail.com wrote:
 And how is the foundation supposed to resolve this? Counsel people into
 changing their opinions? Ban people who appear to be suppressing
 criticism? Forcibly change policies? Act as proxies for people afraid of
 criticism? I'm struggling to think of anything that could be done on a
 foundation level that would be effective here.


 Alex, your hostile attitude in both your responses proves my second point.
 You, and attitudes like this, are a part of the problem.

And your attitude illustrates the problem with many (not all) of the
critics and dissenters. Rather than reply to my points or explain
your ideas further when questioned, you choose to attack me. Those
weren't rhetorical questions. I really was curious as to what you were
suggesting.

-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Foundation-l] Site notice suggestion needed.

2008-12-05 Thread Alex
Brion Vibber wrote:
 Gregory Maxwell wrote:
 On Fri, Dec 5, 2008 at 2:52 PM, effe iets anders
 [EMAIL PROTECTED] wrote:
 http://stats.wikimedia.org/EN/TablesPageViewsMonthly.htm

 More than 10 billion page views per month... (or 3900 page views per second)
 The 24-hour average HTTP request rate is 46347. So that's 120,131,424,000
 HTTP requests per month. I have a hard time believing that we're
 averaging more than 12 HTTP requests per page view. I think
 something is inaccurate, and I think the HTTP request rate is the more
 trustworthy number.
 
 Sounds on the right order of magnitude to me. I just tried a full-reload
 on a random short article with no images:
 http://en.wikipedia.org/wiki/Andrew_J._Harlan
 
 which totaled 25 HTTP requests (including the fundraiser notice banner):
 
 1 HTML page
 3 static style sheets
 3 dynamic style sheets
 4 static JavaScripts
 2 dynamic JavaScripts
 1 fundraising banner JS
 3 fundraising banner images
 8 UI images (logo, background, icons)
 
 If you hit multiple pages, more of those will be cached, but you'll end
 up loading additional images as well.
 
 At some point we'll probably do some more consolidation on the CSS and
 JS files that get loaded most frequently to reduce the number of server
 round-trips on a first hit.
 
 -- brion

Based on Domas's pageview stats[1] for the past 14 days (11/21 - 12/04),
we get an average of 4112 pageviews per second, 4.9 billion total views
for the 2-week period. The results per day are available at [2].

Out of curiosity, I also checked the 3 days around the recent US
election day (11/03 - 11/05), the average views per second was 4599, an
additional 42 million pageviews per day on average (though that's
somewhat misleading, as that range is only weekdays and the number of
pageviews tends to decrease significantly on weekends)
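The averages above reduce to simple arithmetic over the per-day totals in Domas's logs; a minimal sketch, with illustrative numbers rather than the actual data:

```javascript
// Deriving an average pageview rate from per-day totals, as in the
// figures above. The sample numbers below are illustrative only.
function avgViewsPerSecond(dailyTotals) {
  const total = dailyTotals.reduce((sum, n) => sum + n, 0);
  return total / (dailyTotals.length * 86400); // 86400 seconds per day
}

// 14 days at ~355 million views/day is roughly 4.97 billion views total,
// i.e. on the order of 4100 views per second.
const fortnight = new Array(14).fill(355e6);
```

The same division works in reverse for sanity-checking a claimed rate against a monthly total.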

[1] http://dammit.lt/wikistats/ Caution: fairly large page
[2] http://en.wikipedia.org/wiki/User:Mr.Z-man/views
-- 
Alex (wikipedia:en:User:Mr.Z-man)
