Re: Google Summer of Code 2017
The proposed project list is here: https://wiki.mozilla.org/Community:SummerOfCode17 ...

But bear in mind that while Mozilla has applied to participate in GSoC, Google's GSoC team will not announce whether or not Mozilla will be accepted for a few weeks yet.

- mhoye

On Feb 11, 2017 12:35, wrote:
> Hello everyone, I am interested in contributing to Mozilla this year as
> part of GSoC, is the ideas list not published yet?

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
Re: windows build anti-virus exclusion list?
Depending on your AV, if you don't exempt mozilla-central some of our tests will get quarantined and you won't be able to build at all.

- mhoye

On Mar 16, 2017 20:34, "Ben Kelly" wrote:

On Thu, Mar 16, 2017 at 11:26 PM, Ben Kelly wrote:
> - mozilla-build install dir
> - visual studio install dir
> - /users/bkelly/appdata/local/temp
> - /users/bkelly (because temp dir was not enough)

FWIW, adding all these extra exclusions dropped my build time from ~22 minutes to ~14 minutes. I'm hoping I can narrow my home directory exclusion down to things like .bash_profile, .cargo, etc.
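[For Windows Defender specifically, exclusions along the lines of the list above can be added from an elevated PowerShell prompt with the `Add-MpPreference` cmdlet. A sketch - the paths here are illustrative examples, not the exact directories from the thread; substitute your own checkout and install locations:]

```powershell
# Run from an elevated PowerShell prompt; applies to Windows Defender only.
# Paths are illustrative - adjust to your own machine.
Add-MpPreference -ExclusionPath "C:\mozilla-build"
Add-MpPreference -ExclusionPath "C:\mozilla-source\mozilla-central"
Add-MpPreference -ExclusionPath "$env:LOCALAPPDATA\Temp"

# Excluding the build tools by process name can help as well:
Add-MpPreference -ExclusionProcess "cl.exe"
Add-MpPreference -ExclusionProcess "link.exe"

# Verify what's currently excluded:
Get-MpPreference | Select-Object -Property ExclusionPath, ExclusionProcess
```

[Other AV products have their own equivalents; the general shape - exclude the source tree, the toolchain, and the temp directory - carries over.]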
Re: Intent to remove: sensor APIs
On Aug 2, 2017 15:54, "Enrico Weigelt, metux IT consult" <enrico.weig...@gr13.net> wrote:
> Making that information visible to websites (even worse: movement
> tracking via g-sensor, etc), definitively looks like security nightmare
> which even the Stasi never dared dreaming of.

You need to dial this rhetoric back about 100%. It is not acceptable to bring even an implied accusation like that to a technical discussion, or indeed any conversation at all, at Mozilla.

We're always happy to listen to honest criticism and walk back our mistakes, but we are going to have those discussions without demeaning the work or comparing the people doing that work to volkscryptopolitzei collaborators.

I encourage you to read our community participation guidelines carefully, and to take them to heart before continuing.

Thank you.

-mhoye
Re: Code Review Session
- Original Message -
> From: "Benjamin Smedberg"
> To: "Scott Johnson"
> Cc: dev-platform@lists.mozilla.org, "Michael Hoye"
>
> * Automated tools: mhoye has identified lack of automated review as one
> of our biggest blockers to getting more mentors involved and having
> successful mentoring for new volunteers. It turns out that nobody wants
> to mentor bugs when most of the interaction involves "please fix this
> whitespace/style/etc". cc'ing him so he can provide more details.

Yeah, so, about that. Having spoken to a handful of developers, this is #2 on the List Of Things People Dislike About Mentoring: having to go back and forth with a contributor about style conventions and whitespace. In short, everyone hates it, and everyone understands that it should be completely and utterly automated.

A question I'm trying to clear up for myself: I understand that clang-format is somehow inadequate to our needs, but I see that there's an explicit "clang-format -style=Mozilla" option that claims to be doing the right Mozilla thing. So I'm hoping somebody can give me a clearer idea of how clang-format is broken. Given the number of hours wasted every year on formatting nits, though, and the broader disenchantment with mentoring that it fosters, I'd really like to solve the hell out of this problem.

As well, one of the really interesting things covered at MSR 2013 was how much machine learning and automation you can do if you've got tooling in place to keep track of your code-review process over a few months or years, and how there are significant quality and productivity gains to be found once you've got a corpus of knowledge built up there. So there's also that.

So I'm looking at a few code-review tools, but the fact of it is I'm not really qualified to recommend one over the other. Feasible open-source options include, to my eye, and in no particular order:

- ReviewBoard
- Phabricator
- BarKeep
- Gerrit
I understand there's some love here for ReviewBoard and Gerrit, but I'm going to look into what sort of logging and integration they provide, and I'll let you know what I find.

Thanks,

- mhoye
Re: Code Review Session
- Original Message -
> From: "Mike Hommey"
> To: "Michael Hoye"
>
> clang-format unfortunately only deals with whitespaces. It does have
> neat formatting with them, but it's limited to that.

I don't think that's true - or at least, it looks like that's only true for the Mozilla style option in the current clang-format code, and clang-format could be taught to do a lot more.

The decisions clang-format makes about the Mozilla style start here:

http://clang.llvm.org/doxygen/Format_8cpp_source.html

down around line 178, but this:

http://clang.llvm.org/doxygen/namespaceclang_1_1format.html#nested-classes

implies that it could do a lot more (maybe most?) of what we need if it got a bit of love, and even just "most" would be a big win.

- mhoye
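[One relevant detail here: clang-format can read a `.clang-format` file checked into the tree, which starts from a built-in preset like Mozilla's and overrides individual options - so gaps in the preset can, in principle, be patched per-repository rather than upstream. A minimal sketch; the override values below are illustrative, not Mozilla's actual settings:]

```yaml
# .clang-format - placed at the tree root; clang-format picks it up automatically.
BasedOnStyle: Mozilla       # start from the built-in Mozilla preset
ColumnLimit: 80             # then override individual options as needed
IndentWidth: 2
BreakBeforeBraces: Mozilla
```

[With that file in place, `clang-format <file>` applies the combined style without any `-style=` flag, and `clang-format -style=Mozilla -dump-config` will print the full set of options the preset implies.]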
Furthering The Cause Of Science
Hi, everyone -

In a little while, the Mozilla Science Lab will be running an experiment, and I want to know if you'd like to help. (Some of you are thinking, "you had me at Mozilla Science Lab." I know; that's how they got me, too. Keep reading!)

The gist of it is: this is a pilot program, an experiment where we pair up Mozilla's programmers with Scientists Who Do Actual Science - people who rely on code they've written themselves to do their research. We'd like to see if the sort of code review we do here routinely, which is pretty much unheard of in the academic-research environment, can make those scientists' software, research and lives better.

This isn't expected to be a huge commitment of time - an hour or three, if that, sometime in the next two months? - and I'd like to be able to pair people up with somebody working in a field that interests them, if at all possible.

The specific questions we're trying to answer are:

1. How much scientific software can be reviewed by non-specialists, and how often is domain expertise required?
2. How much effort does this take compared to reviews of other kinds of software, and to reviews of the papers themselves?
3. How useful do scientists find these reviews?

If you'd like to help, please let me know; if you have a particular interest in a specific field, please say so. Again, this is a pilot project - whether or not it grows beyond that depends on whether this turns out to be a valuable experience for all concerned. But at the very least I think it will be interesting.

Let me know,

- mhoye