Re: [Wikimedia-l] CheckUser openness
Hi Nathan, For a moment, let's suppose that there is a global policy that all CU checks must be disclosed to the person being checked, with the information disclosed in private email, and only consisting of the date of the check and the user who performed the check. What benefit does this have to the user who was checked? This information doesn't make the user more secure, it doesn't make the user's information more private, and there are no actions that the user is asked to take. Perhaps there is a benefit, but I am having difficulty thinking of what that benefit would be. I can think of how this information would benefit a dishonest user, but not how it would benefit an honest user. If there is a valuable benefit that an honest user receives from this information, what is it? Thanks, Pine ___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] CheckUser openness
Two points that might help bring people on different sides of the issue closer together.

1. How about notifying people that they have been check-usered 2 months after the fact? By that time I hope all investigations are complete, and the risk of tipping off the nefarious should be over.

2. Though the strategies of when to checkuser and how to interpret the results are private, the workings of CheckUser are not. It is free software, and its usage is described at http://www.mediawiki.org/wiki/Extension:CheckUser. I would imagine any tech-savvy user with malicious intent will check how CheckUser can be used to detect their malicious editing, and what means they have to avoid detection. Notifying someone they have been checkusered does not give them any information they didn't have already, apart from being under investigation.

On Fri, Jun 15, 2012 at 8:43 AM, Neil Babbage n...@thebabbages.com wrote: Notification of some checks would always have to be withheld to allow complex investigations to be completed without tipping off. There is public information that suggests there have been complex abuse cases (real abuse, like harassment, not vandalism). To notify parties suspected of involvement while these long-running investigations are underway is broadly analogous to receiving an automated email when your name is searched on the FBI national computer: the innocent want an explanation that wastes police time; the guilty realise they are being investigated and are tipped off to adapt their behaviour. As soon as there is an option to suppress the alert you are back to square 1: CUs may suppress the notification to hide what they are doing. End of the day, the communities elected the CUs knowing they'd be able to secretly check private data - so you have to trust them to do what you ask them to do or elect someone else you do trust. Neil / QuiteUnusual@Wikibooks

-Original Message- From: Nathan nawr...@gmail.com Sender: wikimedia-l-boun...@lists.wikimedia.org Date: Thu, 14 Jun 2012 22:10:33 To: Wikimedia Mailing List wikimedia-l@lists.wikimedia.org Reply-To: Wikimedia Mailing List wikimedia-l@lists.wikimedia.org Subject: Re: [Wikimedia-l] CheckUser openness

On Thu, Jun 14, 2012 at 8:06 PM, Dominic McDevitt-Parks mcdev...@gmail.com wrote: I think the idea that making the log of checks public will necessarily be a service to those subject to CheckUser is misguided. One of the best reasons for keeping the logs private is not security through obscurity but the prevention of unwarranted stigma and drama. Most checks (which aren't just scanning a vandal or persistent sockpuppeteer's IP for other accounts) are performed because there is some amount of uncertainty. Not all checks are positive, and a negative result doesn't necessarily mean the check was unwarranted. I think those who have been checked without a public request deserve not to have suspicion cast on them by public logs if the check did not produce evidence of guilt. At the same time, because even justified checks will often upset the subject, the CheckUser deserves to be able to act on valid suspicions without fear of retaliation. The community doesn't need the discord that a public log would generate. That's not to say that there should be no oversight, but that a public log is not the way to do it. Dominic

The threat of stigma can be ameliorated by not making the logs public, which was never suggested.
A simple system notification of "The data you provide to the Wikimedia web servers has been checked by a checkuser on this project, see [[wp:checkuser]] for more information" would be enough.

En Pine's reply to my queries seems calibrated for someone who is unfamiliar with SPI and checkuser work. I'm not - in fact I worked as a clerk with checkusers at SPI for a long time and am quite familiar with the process and its limitations. I know what's disclosed, approximately how frequently checks are run, the general proportion of checks that are public vs. all checks, etc. I still am not clear on how disclosing the fact of a check helps socks avoid detection, and I still believe that it's worthwhile for a transparent organization like Wikimedia to alert users when their private information (information that is, as Risker has mentioned, potentially personally identifying) has been disclosed to another volunteer. Nathan

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
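(As a purely illustrative aside: below is a minimal sketch, in Python with made-up names, of how the delayed notification Martijn proposes in point 1 above could be wired up. This is not an existing MediaWiki feature; the two-month figure and the message text are simply taken from the posts above, and record_check/send_due_notifications are hypothetical helpers.)

    from datetime import datetime, timedelta

    NOTIFY_DELAY = timedelta(days=60)  # the two-month delay suggested above

    # Each entry records when a check was run and who was checked;
    # nothing is sent to the user at check time.
    pending_notifications = []

    def record_check(checked_user):
        pending_notifications.append((datetime.utcnow(), checked_user))

    def send_due_notifications(send_message):
        # Run periodically; only checks older than the delay trigger a notice,
        # so an ongoing investigation is not tipped off immediately.
        now = datetime.utcnow()
        for check_time, user in list(pending_notifications):
            if now - check_time >= NOTIFY_DELAY:
                send_message(user,
                    "The data you provide to the Wikimedia web servers has been "
                    "checked by a checkuser on this project, see [[wp:checkuser]] "
                    "for more information.")
                pending_notifications.remove((check_time, user))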
Re: [Wikimedia-l] CheckUser openness
On Fri, Jun 15, 2012 at 4:52 AM, Martijn Hoekstra martijnhoeks...@gmail.com wrote: Two points that might help bring people on different sides of the issue closer together. 1. How about notifying people that they have been check-usered 2 months after the fact? By that time I hope all investigations are complete, and the risk of tipping off the nefarious should be over.

That's an interesting concept, and I'd think this would be the only way to notify users without compromising the effectiveness of the tool, but I still have serious reservations about disclosure here for reasons previously cited and below. Also, there are conceivably complex abuse cases where an investigation would take longer than 2 months, particularly in the sort of cases that eventually end up before en.wiki's arbcom.

2. Though the strategies of when to checkuser and how to interpret the results are private, the workings of CheckUser are not. It is free software, and its usage is described at http://www.mediawiki.org/wiki/Extension:CheckUser. I would imagine any tech-savvy user with malicious intent will check how CheckUser can be used to detect their malicious editing, and what means they have to avoid detection. Notifying someone they have been checkusered does not give them any information they didn't have already, apart from being under investigation.

The privacy rules surrounding it are very much public as well. That makes the effectiveness of checkuser as a tool very much dependent on the carelessness or ignorance of the person targeted, things we want to preserve as much as possible lest checkuser stop being effective or a massive relaxation of privacy policies become necessary to preserve its effectiveness.

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
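(Also purely as an illustration, and emphatically not the extension's real schema: the kind of per-edit record a CheckUser-style tool keeps, and the roughly three-month expiry mentioned later in this thread, might look something like the following Python sketch. The field names and the purge_expired helper are invented for the example.)

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    RETENTION = timedelta(days=90)  # approximately the three months discussed below

    @dataclass
    class EditRecord:
        username: str
        ip: str
        user_agent: str
        xff: str            # X-Forwarded-For header, relevant when proxies are involved
        timestamp: datetime

    def purge_expired(records, now=None):
        # Records older than the retention window are dropped.
        now = now or datetime.utcnow()
        return [r for r in records if now - r.timestamp < RETENTION]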
Re: [Wikimedia-l] Who invoked principle of least surprise for the image filter?
On Thu, Jun 14, 2012 at 11:40 PM, Risker risker...@gmail.com wrote: On 14 June 2012 16:19, David Gerard dger...@gmail.com wrote: On 14 June 2012 20:36, Andrew Gray andrew.g...@dunelm.org.uk wrote: Least surprise is one way to try and get around this problem of not relying on the community's own judgement in all edge cases; I'm not sure it's the best one, but I'm not sure leaving it out is any better.

The present usage (to mean you disagree with our editorial judgement therefore you must be a juvenile troll) is significantly worse.

I'm not entirely certain that you've got the usage case correct, David. An example would be that one should not be surprised/astonished to see an image including nudity on the article [[World Naked Gardening Day]], but the same image would be surprising on the article [[Gardening]]. The Commons parallel would be that an image depicting nude gardening would be appropriately categorized as [[Cat:Nude gardening]], but would be poorly categorized as [[Cat:Gardening]]. One expects to see a human and gardening but not nudity in the latter, and humans, gardening, *and* nudity in the former. Now, in fairness, we all know that trolling with images has been a regular occurrence on many projects for years, much of it very obviously trolling, but edge cases can be more difficult to determine. Thus, the more neutral principle of least astonishment (would an average reader be surprised to see this image on this article?/in this category?) comes into play. I'd suggest that the principle of least astonishment is an effort to assume good faith. Risker

There is a serious issue here. least astonishment is very much distinct from least offence. We don't guarantee the latter, and never should. The former was hijacked by a silly board resolution, and should be rescinded. -- -- Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] CheckUser openness
On Fri, Jun 15, 2012 at 11:18 AM, Stephanie Daugherty sdaughe...@gmail.com wrote: On Fri, Jun 15, 2012 at 4:52 AM, Martijn Hoekstra martijnhoeks...@gmail.com wrote: Two points that might help bring people on different sides of the issue closer together. 1. How about notifying people that they have been check-usered 2 months after the fact? By that time I hope all investigations are complete, and the risk of tipping off the nefarious should be over.

That's an interesting concept, and I'd think this would be the only way to notify users without compromising the effectiveness of the tool, but I still have serious reservations about disclosure here for reasons previously cited and below. Also, there are conceivably complex abuse cases where an investigation would take longer than 2 months, particularly in the sort of cases that eventually end up before en.wiki's arbcom.

2. Though the strategies of when to checkuser and how to interpret the results are private, the workings of CheckUser are not. It is free software, and its usage is described at http://www.mediawiki.org/wiki/Extension:CheckUser. I would imagine any tech-savvy user with malicious intent will check how CheckUser can be used to detect their malicious editing, and what means they have to avoid detection. Notifying someone they have been checkusered does not give them any information they didn't have already, apart from being under investigation.

The privacy rules surrounding it are very much public as well. That makes the effectiveness of checkuser as a tool very much dependent on the carelessness or ignorance of the person targeted, things we want to preserve as much as possible lest checkuser stop being effective or a massive relaxation of privacy policies become necessary to preserve its effectiveness.

Am I correct to summarise here then that CU works because people don't know it doesn't?

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] CheckUser openness
Am I correct to summarise here then that CU works because people don't know it doesn't?

Almost. It works because people don't know how, don't care how, or don't think they are attracting enough attention to avoid being targeted.

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] CheckUser openness
On Fri, Jun 15, 2012 at 2:22 AM, En Pine deyntest...@hotmail.com wrote: Hi Nathan, For a moment, let's suppose that there is a global policy that all CU checks must be disclosed to the person being checked, with the information disclosed in private email, and only consisting of the date of the check and the user who performed the check. What benefit does this have to the user who was checked? This information doesn't make the user more secure, it doesn't make the user's information more private, and there are no actions that the user is asked to take. Perhaps there is a benefit, but I am having difficulty thinking of what that benefit would be. I can think of how this information would benefit a dishonest user, but not how it would benefit an honest user. If there is a valuable benefit that an honest user receives from this information, what is it? Thanks, Pine

Pine: As you have said, checkuser oversight comes from AUSC, ArbCom and the ombudspeople. These groups typically respond to requests and complaints (well, the ombuds commission typically doesn't respond at all). But you only know to make a request or complaint if you know you've been CU'd. So notifying people that they have been CU'd would allow them to follow up with the oversight bodies. My guess is most would choose not to, but at least some might have a reason to. It's also plain that even if there is no recourse, people will want to know if their identifying information has been disclosed.

Neil: The difference between the FBI and checkusers is clear: checkusers are volunteers. They are elected on some projects, appointed on others, and the process can often be murky or poorly attended. The background check as such for checkusers is minimal. People with an intention to abuse the system have become checkusers in the past.

Martijn: A delay makes sense. Two months seems like a long time, but two weeks or a week might be reasonable.

Stephanie: Supposedly, the data only survives 3 months. If data is being retained much longer than this for investigations that go on for months on the checkuser wiki, that's concerning. ~Nathan

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] Who invoked principle of least surprise for the image filter?
On 14.06.2012 19:31, geni wrote: On 14 June 2012 18:01, David Gerard dger...@gmail.com wrote: Yes, but this is called editorial judgement

No, it's called censorship. Or at least it will be called censorship by enough people to make any debate not worth the effort.

It is called censorship right at that moment when useful illustrations are removed because of their shock value, while arguing with the principle of XYZ from a rather extreme position. Good editorial judgment would include such depictions if they further the understanding of a topic. But bad editorial judgment tends to exclude useful depictions and to include useless/unrelated, shocking or not, depictions.

rather than something that can be imposed by filtering. True for wikipedia but commons in particular needs some way or another to provide more focused search results.

I already made a workable suggestion for Commons, but the interest from any side was very low: http://commons.wikimedia.org/wiki/Commons:Requests_for_comment/improving_search#A_little_bit_of_intelligence Some seem not to like giving up the idea of filtering (labeling) and others seem not to care. Overall we have a proposal that would be workable, would benefit all users and would not introduce any controversy or additional work once implemented.

(Although the board and staff claim that editorial judgement they disagree with must just be trolling is how principle of least surprise becomes we need a filter system.) Perhaps but I wasn't aware that their opinions were considered to be of any significance at this point. Okay, they did block [[user:Beta_M]] but the fact that it very much came out of the blue shows how little consideration they are given these days. The fact remains that anyone who actually wants a filter could probably put one together in the form of an Adblock plus filter list within a few days. So far the only list I'm aware of is one I put together to filter out images of Giant isopods.

I argued some time ago that if there was a strong need for such a filter, there would already be services in place that filter the content or images. So far I have seen some very weak approaches using the Google APIs, but no real filter lists. Judging from your approach to filter out Giant isopods, we see that there is no general rule about what should be filtered. Some dislike X, others Y and the next one likes X and Y but not Z. Overall this results in the wish to have as many suitable filters as possible, which at the same time results in massive tagging work.

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
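(For the curious: a personal filter list of the kind geni mentions really can be only a few lines of Adblock Plus syntax. The rules below are only a guess at what a giant-isopod filter might look like, not his actual list; the file-name patterns are assumptions.)

    ! Hypothetical personal image filter, Adblock Plus syntax
    ! Hide Commons-hosted images whose file names mention giant isopods
    ||upload.wikimedia.org/wikipedia/commons/*Giant_isopod*$image
    ||upload.wikimedia.org/wikipedia/commons/*Bathynomus*$image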
Re: [Wikimedia-l] Who invoked principle of least surprise for the image filter?
On 15 June 2012 13:15, Tobias Oelgarte tobias.oelga...@googlemail.com wrote: I argued some time ago that if there was a strong need for such a filter, there would already be services in place that filter the content or images. So far I have seen some very weak approaches using the Google APIs, but no real filter lists. Judging from your approach to filter out Giant isopods, we see that there is no general rule about what should be filtered. Some dislike X, others Y and the next one likes X and Y but not Z. Overall this results in the wish to have as many suitable filters as possible, which at the same time results in massive tagging work.

I don't recall seeing any, but did anyone actually explain why the market had not provided a filtering solution for Wikipedia, if there's actually a demand for one? (IIRC the various netnannies for workplaces don't filter Wikipedia, or do so only by keyword, i.e. [[Scunthorpe problem]]-susceptible, methods.) I ask because of recent statements by board members that the filter is alive and well, and not at all dead. - d.

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] Who invoked principle of least surprise for the image filter?
On Friday, 15 June 2012 at 13:21, David Gerard wrote: I don't recall seeing any, but did anyone actually explain why the market had not provided a filtering solution for Wikipedia, if there's actually a demand for one?

Market failures do sometimes exist. Also, because as far as I can tell, the proposed filter isn't a NetNanny type thing, it's an "I don't want to see pictures of boobies" AdBlock type thing. Which is a different thing entirely. Of course, there's some confusion here. Larry Sanger, for instance, is very very angry about how Wikipedia hasn't implemented a filter, even though he seems slightly confused as to the difference between an AdBlock type filter and a NetNanny type filter. Preventing people who don't want to see pictures of naked people from seeing pictures of naked people is a lot easier a task than preventing people who DO want to see pictures of naked people from doing so. -- Tom Morris http://tommorris.org/

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] CheckUser openness
On 15 June 2012 04:55, Nathan nawr...@gmail.com wrote: Supposedly, the data only survives 3 months. If data is being retained much longer than this for investigations that go on for months on the checkuser wiki, that's concerning. We have well-known trolls and repeat vandals who have been coming back to the various wiki communities for many years - in some cases, for nearly a decade now. Why is it concerning to you that the people responsible for detecting, tracking and defeating these individuals keep track of these users and their work over time (whilst of course always being within the Privacy and CheckUser policies)? Yours, -- James D. Forrester jdforres...@gmail.com [[Wikipedia:User:Jdforrester|James F.]] (speaking purely in a personal capacity) ___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] CheckUser openness
Hi Nathan, For a moment, let's suppose that there is a global policy that all CU checks must be disclosed to the person being checked, with the information disclosed in private email, and only consisting of the date of the check and the user who performed the check. What benefit does this have to the user who was checked? This information doesn't make the user more secure, it doesn't make the user's information more private, and there are no actions that the user is asked to take. Perhaps there is a benefit, but I am having difficulty thinking of what that benefit would be. I can think of how this information would benefit a dishonest user, but not how it would benefit an honest user. If there is a valuable benefit that an honest user receives from this information, what is it? Thanks, Pine

Pine: As you have said, checkuser oversight comes from AUSC, ArbCom and the ombudspeople. These groups typically respond to requests and complaints (well, the ombuds commission typically doesn't respond at all). But you only know to make a request or complaint if you know you've been CU'd. So notifying people that they have been CU'd would allow them to follow up with the oversight bodies. My guess is most would choose not to, but at least some might have a reason to. It's also plain that even if there is no recourse, people will want to know if their identifying information has been disclosed.

Hi Nathan, Thanks, I think I understand your points better now. Let me see if I can respond. I'm not a Checkuser or CU clerk, and I am commenting only from my limited ability to get information as an outsider.

If we notify all users who have been CU'd as we are discussing, what I speculate will happen is an increase in the volume of people who contact the CU who used the tool, their local AUSC or ArbCom, other local CUs, OTRS, and the ombudsmen. This will increase the workload of emailed questions for the CU who used the tool and anyone else who might be contacted. This increase in workload could require an increase in the number of people on AUSC or other audit groups who have access to the tool in order to supervise the CUs who are doing the front-line work, and this increase in the number of CUs makes it more possible for a bad CU to slip through.

Another problem that I foresee is that if a user appeals the original CU decision to another CU or any group that audits CUs, then the user is put in the position of trusting that whoever reviews the first CU's work is themselves trustworthy and competent. The user still doesn't get the personal authority to review and debate the details of the CU's work. Since my understanding is that CUs already check each other's work, I'm unsure that an increase in inquiries and appeals to supervisory groups would lead to a meaningful improvement in CU accuracy or data privacy as compared to the current system. So, what I foresee is an increase in workload for audit groups, but little meaningful increase in the assurance that the CU tool and data are used and contained properly. Additionally, as has been mentioned before, I worry about the risk of giving sockpuppets additional information that they might be able to use to evade detection.

I agree with you that there might be bad CUs in the current system, although personally I haven't heard of any. Where I think we differ is on the question of what should be done to limit the risk of bad CUs while balancing other considerations.
At this point, I think the available public evidence is that there are more problems with sophisticated and persistent sockpuppets than there are problems with current CUs. I hope and believe that current CUs and auditors are generally honest, competent, and vigilant about watching each other's work. Pine ___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] Who invoked principle of least surprise for the image filter?
On Fri, Jun 15, 2012 at 1:21 PM, David Gerard dger...@gmail.com wrote: On 15 June 2012 13:15, Tobias Oelgarte tobias.oelga...@googlemail.com wrote: I argued some time ago that if there was a strong need for such a filter, there would already be services in place that filter the content or images. So far I have seen some very weak approaches using the Google APIs, but no real filter lists. Judging from your approach to filter out Giant isopods, we see that there is no general rule about what should be filtered. Some dislike X, others Y and the next one likes X and Y but not Z. Overall this results in the wish to have as many suitable filters as possible, which at the same time results in massive tagging work.

I don't recall seeing any, but did anyone actually explain why the market had not provided a filtering solution for Wikipedia, if there's actually a demand for one? (IIRC the various netnannies for workplaces don't filter Wikipedia, or do so only by keyword, i.e. [[Scunthorpe problem]]-susceptible, methods.)

UK schools of course filter, but both the bestiality video and everything that comes up in a multimedia search for "male human" were accessible on computers in my son's school. Much to their surprise. The one thing their filter did catch was the masturbation videos category page in Commons.

I ask because of recent statements by board members that the filter is alive and well, and not at all dead.

Which board members other than Jimbo have said that?

___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
Re: [Wikimedia-l] CheckUser openness
On Fri, Jun 15, 2012 at 9:51 PM, ENWP Pine deyntest...@hotmail.com wrote: Hi Nathan, For a moment, let's suppose that there is a global policy that all CU checks must be disclosed to the person being checked, with the information disclosed in private email, and only consisting of the date of the check and the user who performed the check. What benefit does this have to the user who was checked? This information doesn't make the user more secure, it doesn't make the user's information more private, and there are no actions that the user is asked to take. Perhaps there is a benefit, but I am having difficulty thinking of what that benefit would be. I can think of how this information would benefit a dishonest user, but not how it would benefit an honest user. If there is a valuable benefit that an honest user receives from this information, what is it? Thanks, Pine

Pine: As you have said, checkuser oversight comes from AUSC, ArbCom and the ombudspeople. These groups typically respond to requests and complaints (well, the ombuds commission typically doesn't respond at all). But you only know to make a request or complaint if you know you've been CU'd. So notifying people that they have been CU'd would allow them to follow up with the oversight bodies. My guess is most would choose not to, but at least some might have a reason to. It's also plain that even if there is no recourse, people will want to know if their identifying information has been disclosed.

Hi Nathan, Thanks, I think I understand your points better now. Let me see if I can respond. I'm not a Checkuser or CU clerk, and I am commenting only from my limited ability to get information as an outsider.

If we notify all users who have been CU'd as we are discussing, what I speculate will happen is an increase in the volume of people who contact the CU who used the tool, their local AUSC or ArbCom, other local CUs, OTRS, and the ombudsmen. This will increase the workload of emailed questions for the CU who used the tool and anyone else who might be contacted. This increase in workload could require an increase in the number of people on AUSC or other audit groups who have access to the tool in order to supervise the CUs who are doing the front-line work, and this increase in the number of CUs makes it more possible for a bad CU to slip through.

Another problem that I foresee is that if a user appeals the original CU decision to another CU or any group that audits CUs, then the user is put in the position of trusting that whoever reviews the first CU's work is themselves trustworthy and competent. The user still doesn't get the personal authority to review and debate the details of the CU's work. Since my understanding is that CUs already check each other's work, I'm unsure that an increase in inquiries and appeals to supervisory groups would lead to a meaningful improvement in CU accuracy or data privacy as compared to the current system. So, what I foresee is an increase in workload for audit groups, but little meaningful increase in the assurance that the CU tool and data are used and contained properly. Additionally, as has been mentioned before, I worry about the risk of giving sockpuppets additional information that they might be able to use to evade detection.

I agree with you that there might be bad CUs in the current system, although personally I haven't heard of any. Where I think we differ is on the question of what should be done to limit the risk of bad CUs while balancing other considerations.
At this point, I think the available public evidence is that there are more problems with sophisticated and persistent sockpuppets than there are problems with current CUs. I hope and believe that current CUs and auditors are generally honest, competent, and vigilant about watching each other's work. Pine

I do hear and understand the argument here, but it is somewhat problematic to have to have the argument "if we do this, we'll be handing over information to sockpuppeteers we don't want them to have, and we can't tell you what that information is, because otherwise we'll be handing over information to sockpuppeteers we don't want them to have." While I think the methods currently used are probably sound, and the information would indeed give them more possibilities to evade the system, I can't be sure of it, because I can't be told what that information is. I don't think this is a viable long-term strategy. The Audit Committee is a way around this, but as indicated before, there is somewhat of an overlap between the committee and the Check-User in-crowd, which could (again, could, I'm not sure if it is indeed true) limit how independent that oversight really is. Apart from the 'timed release' of information I proposed earlier, I don't really see a viable solution for this, as I doubt we have enough people that are sufficiently qualified on a technical level to actually judge the checkuser results, who also have
Re: [Wikimedia-l] [Wikimedia Announcements] Wikimedia Foundation Report, May 2012
Looking into it. Thanks for the notice! On Fri, Jun 15, 2012 at 5:46 PM, James Salsman jsals...@gmail.com wrote: ... == Visitors and Guests == Visitors to the WMF office in May 2012 1. Jocelyn Berl (NexGenEdu) Jocelyn was visiting on behalf of hackthefuture.org, not NextGenEdu. I would correct that, but I am not permitted to edit Meta because two separate Foundation employees have claimed that I did not discuss the Inactive Administrators Survey with them before it was distributed. In fact, I did in both cases. Sincerely, James Salsman ___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l -- Tilman Bayer Senior Operations Analyst (Movement Communications) Wikimedia Foundation IRC (Freenode): HaeB ___ Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l