RE: DevTools: how to get list of mutation observers for an element
>> Should I file a bug for this?
>
> Yes, please. CC me

Done: https://bugzilla.mozilla.org/show_bug.cgi?id=912874

Honza

-----Original Message-----
From: smaug [mailto:sm...@welho.com]
Sent: Wednesday, September 04, 2013 9:21 PM
To: Jan Odvarko
Subject: Re: DevTools: how to get list of mutation observers for an element

On 09/04/2013 09:43 AM, Jan Odvarko wrote:
> It's currently possible to get registered event listeners for a
> specific target (element, window, XHR, etc.) using
> nsIEventListenerService.getListenerInfoFor. Is there any API that would
> also allow getting mutation observers?

No.

> Should I file a bug for this?

Yes, please. CC me.

-Olli :smaug

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
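For context, the existing event-listener API the thread refers to is usable from chrome-privileged JS roughly as follows. This is a sketch only: the mutation-observer equivalent being requested did not exist at the time, and the exact invocation may differ between Gecko versions.

```
// Chrome-privileged sketch (not runnable in web content):
// enumerate listeners via nsIEventListenerService.getListenerInfoFor.
const els = Components.classes["@mozilla.org/eventlistenerservice;1"]
                      .getService(Components.interfaces.nsIEventListenerService);

// `node` is assumed to be some DOM element of interest.
const infos = els.getListenerInfoFor(node, {});
for (const info of infos) {
  // Each nsIEventListenerInfo exposes the event type and listener flags.
  dump(info.type + " capturing=" + info.capturing + "\n");
}
```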
XUL splitmenu
Two questions about the splitmenu element:

#1) I wanted to display a checkbox in front of the splitmenu element,
but setting type="checkbox" and checked="true" doesn't help. Shouldn't
this just work? Is this a bug?

#2) It looks like the splitmenu element doesn't work on OS X. Correct?

Honza
nsTHashtable changes
nsTHashtable and its subclasses no longer have an Init method; they are
fully initialized on construction, like any good C++ object. You can
specify an initial number of buckets by passing an integer parameter to
the constructor.

nsTHashtables are always initialized now; there is no uninitialized
state. If you need an uninitialized nsTHashtable, you'll have to wrap it
in an nsAutoPtr and treat null as uninitialized, or something like that.

Fallible initialization of nsTHashtables was removed because no one uses
it. Fallible Put is still available. The thread-safe (MT) hashtable
subclasses were removed because no one uses them.

Rob
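A minimal sketch of the new contract, using a made-up stand-in class rather than the real nsTHashtable (which needs the Gecko tree to compile): construction is initialization, the constructor takes an optional initial bucket count, and "uninitialized" has to be modeled externally, e.g. with a null smart pointer.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <string>
#include <unordered_map>

// Illustrative stand-in for an nsTHashtable subclass; names and the
// Put/Get shape mimic the Gecko style but this is not Gecko code.
class MockHashtable {
 public:
  MockHashtable() = default;
  // New style: an initial bucket count goes to the constructor,
  // replacing the removed Init() method.
  explicit MockHashtable(uint32_t aInitialBuckets) {
    mMap.reserve(aInitialBuckets);
  }
  void Put(uint32_t aKey, const std::string& aValue) { mMap[aKey] = aValue; }
  bool Get(uint32_t aKey, std::string* aOut) const {
    auto it = mMap.find(aKey);
    if (it == mMap.end()) return false;
    if (aOut) *aOut = it->second;
    return true;
  }
 private:
  std::unordered_map<uint32_t, std::string> mMap;
};
```

Where code previously relied on an uninitialized table, the pattern becomes wrapping the table in a smart pointer (nsAutoPtr in Gecko; std::unique_ptr here) and treating null as "uninitialized".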
Re: No more Makefile.in boilerplate
Hi,

out of curiosity: I recall that relativesrcdir was actually the trigger
to switch some l10n functionality in jar packaging on and off. Is that
now on everywhere?

Axel

On 9/5/13 2:34 AM, Mike Hommey wrote:
> Hi,
>
> Assuming it sticks, bug 912293 made it unnecessary to start Makefile.in
> files with the usual boilerplate:
>
>   DEPTH = @DEPTH@
>   topsrcdir = @top_srcdir@
>   srcdir = @srcdir@
>   VPATH = @srcdir@
>   relativesrcdir = @relativesrcdir@
>
>   include $(DEPTH)/config/autoconf.mk
>
> All of the above can now be skipped. Directories that do require a
> different value for e.g. VPATH or relativesrcdir can still set a value
> that will be taken instead of the default. It is not recommended to do
> that in new Makefile.in files, or to change existing files to do that,
> but the existing files that did require such different values still use
> those different values.
>
> Also, if the last line of a Makefile.in is:
>
>   include $(topsrcdir)/config/rules.mk
>
> that can be skipped as well.
>
> Mike
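Concretely, a post-bug-912293 leaf Makefile.in can shrink to just its directory-specific content. A hypothetical before/after (the variable and file names below are made up for illustration):

```make
# Before bug 912293, every Makefile.in began with:
#
#   DEPTH = @DEPTH@
#   topsrcdir = @top_srcdir@
#   srcdir = @srcdir@
#   VPATH = @srcdir@
#   relativesrcdir = @relativesrcdir@
#   include $(DEPTH)/config/autoconf.mk
#
# and ended with:
#
#   include $(topsrcdir)/config/rules.mk
#
# Afterwards, the whole file can be just the directory-specific part,
# e.g. (hypothetical content):

EXTRA_JS_MODULES = ExampleModule.jsm
```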
Re: Changes to how EXPORTS are handled
Mike Hommey wrote:
> On Wed, Sep 04, 2013 at 11:35:19AM -0700, Gregory Szorc wrote:
>> It's worth explicitly mentioning that tiers limit the ability of the
>> build system to build concurrently. So, we have to choose between
>> speed and a moving/complex target of dependency correctness. We have
>> chosen to sacrifice this dependency correctness to achieve faster
>> build speeds. If we want to keep ensuring dependency correctness, I
>> believe we should accomplish that via static analysis, compiler
>> plugins, standalone builds, or special build modes.
>
> Or not using -I$(DIST)/include.

Some headers install into $(DIST)/include/mozilla and there's no other
way to include them.
Re: No more Makefile.in boilerplate
On Thu, Sep 05, 2013 at 12:24:11PM +0200, Axel Hecht wrote:
> Hi,
>
> out of curiosity: I recall that relativesrcdir was actually the trigger
> to switch some l10n functionality in jar packaging on and off. Is that
> now on everywhere?

I didn't find any l10n jar.mn without a relativesrcdir in the
corresponding Makefile.in. But maybe I didn't look properly?

Mike
Re: nsTHashtable changes
On Thu, Sep 5, 2013 at 10:16 PM, Kyle Huey m...@kylehuey.com wrote:
> Did you fix LDAP in comm-central to not use nsInterfaceHashtableMT?
> That's why I haven't finished bug 849654. I guess that should get duped
> to wherever this happened.

No, I completely stuffed up my search through comm-central and didn't
find any uses of nsTHashtable and its subclasses. I'll fix that now.

Rob
Re: partial GL buffer swap
From an API/feature point of view, the partial buffer swap does not
sound like a bad idea, especially since, as Matt said, the OMTC
BasicLayers will need something along these lines to work efficiently.

One thing to watch out for, though, is that this is the kind of fine
tuning that, I suspect, will give very different results depending on
the hardware. On tile-based GPUs, doing this without well-working
extensions like QCOM_tiled_rendering will most likely yield bad
performance, for example. More importantly, I am not sure how much we
can rely on these extensions behaving reliably across different
hardware. Would we use something like WebGL's blacklisting for this
optimization? I hear that our WebGL blacklisting is a bit of a mess.
Are these the lowest-hanging fruit for improving performance?

On Sun, Sep 1, 2013 at 6:53 AM, Matt Woodrow m...@mozilla.com wrote:
> We actually have code that does the computation of the dirty area
> already; see
> http://mxr.mozilla.org/mozilla-central/ident?i=LayerProperties&tree=mozilla-central
>
> The idea is that we take a snapshot of the layer tree before we update
> it, and then do a comparison after we've finished updating it. We're
> currently only using this for main-thread BasicLayers, but we're almost
> certainly going to need to extend it to work on the compositor side too
> for OMTC BasicLayers. It shouldn't be too much work: we just need to
> make ThebesLayers shadow their invalid region, and update some of the
> LayerProperties comparison code to understand the LayerComposite way of
> doing things.
>
> Once we have that, adding compositor-specific implementations of
> restricting composition to that area should be easy!
>
> - Matt
>
> On 1/09/13 4:50 AM, Andreas Gal wrote:
>> Soon we will be using GL (and its Windows equivalent) on most
>> platforms to implement a hardware-accelerated compositor. We draw into
>> a back buffer and, at up to 60 Hz, perform a buffer swap to display
>> the back buffer and make the front buffer the new back buffer (double
>> buffering). As a result, we have to recomposite the entire window at
>> up to 60 Hz, even if we are only animating a single pixel. On desktop,
>> this is merely bad for battery life. On mobile, this can genuinely hit
>> hardware limits, and we won't hit 60 fps because we waste a lot of
>> time recompositing pixels that don't change, sucking up memory
>> bandwidth.
>>
>> Most platforms support some way to update only a partial rect of the
>> frame buffer (AGL_SWAP_RECT on Mac, eglPostSubBufferNV for Linux,
>> setUpdateRect for Gonk/JB).
>>
>> I would like to add a protocol to layers to indicate whether the layer
>> has changed since the last composition. I propose the following API:
>>
>>   void ClearDamage();      // called by the compositor after the buffer swap
>>   void NotifyDamage(Rect); // called for every update to the layer, in
>>                            // window coordinate space (is that a good choice?)
>>
>> I am using "Damage" here to avoid overloading "Invalidate". Bike
>> shedding welcome. I would put these directly on Layer. When a color
>> layer changes, we damage the whole layer. Thebes layers receive damage
>> as the underlying buffer is updated. The compositor accumulates damage
>> rects during composition and then does a buffer swap of that rect
>> only, if supported by the driver. Damage rects could also be used to
>> shrink the scissor rect when drawing the layer. I am not sure yet
>> whether it's easily doable to take advantage of this, but we can try
>> as a follow-up patch.
>>
>> Feedback very welcome. Thanks,
>>
>> Andreas
>>
>> PS: Does anyone know how this works on Windows?
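A self-contained sketch of the proposed damage protocol. The method names mirror Andreas's proposal, but the Rect type, the bounding-box union, and the Layer class here are all illustrative, not Gecko code; a real implementation would accumulate a region (e.g. Gecko's nsIntRegion) rather than a single bounding rect.

```cpp
#include <algorithm>
#include <cassert>

struct Rect {
  int x = 0, y = 0, w = 0, h = 0;
  bool IsEmpty() const { return w <= 0 || h <= 0; }
};

// Bounding-box union of two rects (a stand-in for real region math).
Rect Union(const Rect& a, const Rect& b) {
  if (a.IsEmpty()) return b;
  if (b.IsEmpty()) return a;
  int x1 = std::min(a.x, b.x), y1 = std::min(a.y, b.y);
  int x2 = std::max(a.x + a.w, b.x + b.w);
  int y2 = std::max(a.y + a.h, b.y + b.h);
  return {x1, y1, x2 - x1, y2 - y1};
}

class Layer {
 public:
  // Called for every update to the layer, in window coordinates.
  void NotifyDamage(const Rect& aRect) { mDamage = Union(mDamage, aRect); }
  // Called by the compositor after the buffer swap.
  void ClearDamage() { mDamage = Rect(); }
  const Rect& Damage() const { return mDamage; }
 private:
  Rect mDamage;
};
```

The compositor would union the Damage() of all layers during composition, restrict the swap (and possibly the scissor rect) to that area when the driver supports it, and then call ClearDamage() on each layer.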
Re: No more Makefile.in boilerplate
On Thu, Sep 05, 2013 at 04:08:50PM +0200, Axel Hecht wrote:
> On 9/5/13 1:17 PM, Mike Hommey wrote:
>> On Thu, Sep 05, 2013 at 12:24:11PM +0200, Axel Hecht wrote:
>>> out of curiosity: I recall that relativesrcdir was actually the
>>> trigger to switch some l10n functionality in jar packaging on and
>>> off. Is that now on everywhere?
>>
>> I didn't find any l10n jar.mn without a relativesrcdir in the
>> corresponding Makefile.in. But maybe I didn't look properly?
>
> There shouldn't have been. But
> https://hg.mozilla.org/mozilla-central/file/45097bc3a578/config/config.mk#l681
> seems to be always on now? (also 685)

Indeed. Although it's also possible to set relativesrcdir to nothing in
a Makefile.in to get back to the case without relativesrcdir.

Mike
Re: No more Makefile.in boilerplate
On 9/5/2013 10:25 AM, Mike Hommey wrote:
>> There shouldn't have been. But
>> https://hg.mozilla.org/mozilla-central/file/45097bc3a578/config/config.mk#l681
>> seems to be always on now? (also 685)
>
> Indeed. Although it's also possible to set relativesrcdir to nothing in
> a Makefile.in to get back to the case without relativesrcdir.

I think we should fix anything that depends on relativesrcdir being
unset. This has become a standard part of our Makefiles.

-Ted
Re: Detection of unlabeled UTF-8
On 9/5/13 09:10, Henri Sivonen wrote:
> Why should we surface this class of authoring error to the UI in a way
> that asks the user to make a decision, considering how rare this class
> of authoring error is?

It's not a matter of the user judging the rarity of the condition; it's
the user being able, by casual observation, to look at a web page and
tell that something is messed up in a way that makes it unusable for
them.

> Are there other classes of authoring errors that you think should have
> UI for the user to second-guess the author? If yes, why? If not, why
> not?

In theory, yes. In practice, I can't immediately think of any instances
that fit the class other than this one and certain Content-Encoding
issues. If you want to reduce it to a principle, I would say that we
should consider it for any authoring error that is (a) relatively common
in the wild; (b) trivially detectable by a lay user; (c) trivially
detectable by the browser; (d) mechanically reparable by the browser;
and (e) has the potential to make a page completely useless.

I would argue that we do, to some degree, already do this for things
like Content-Encoding. For example, if a website attempts to send
gzip-encoded bodies without a Content-Encoding header, we don't simply
display the compressed body as if it were encoded according to the
indicated type; we pop up a dialog box to ask the user what to do with
the body. I'm proposing nothing more radical than this existing
behavior, except in a more user-friendly form.

As to the why, it comes down to balancing the need to let the publisher
know that they've done something wrong against punishing the user for
the publisher's sins.

--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
Re: Detection of unlabeled UTF-8
On 2013-09-05 10:10 AM, Henri Sivonen wrote:
> It's worth noting that for other classes of authoring errors (except
> for errors in https deployment) we don't give the user the tools to
> remedy authoring errors.

Firefox silently remedies all kinds of authoring errors.

- mhoye
Re: Deploying more robust hg-git replication for gecko
On 2013-09-03 9:39 PM, Aki Sasaki wrote:
> On 9/3/13 8:25 AM, Ehsan Akhgari wrote:
>> Thanks for the update on this, John! I have a number of questions
>> about this:
>>
>> 1. On the issue of the hg tags, can you please be more specific about
>> the problem that you're encountering? In my experience, git deals with
>> updated tags the same way as it does with any non-fast-forward merge,
>> and I've never experienced any problems with that.
>
> I'm explicitly not converting most tags, and only whitelisting certain
> ones, to avoid the issue caused by moving tags. To quote
> https://www.kernel.org/pub/software/scm/git/docs/git-tag.html#_on_re_tagging
> on moving git tags:
>
>   "Does this seem a bit complicated? It should be. There is no way that
>   it would be correct to just fix it automatically. People need to know
>   that their tags might have been changed."
>
> We move tags regularly on hg repos; this is standard operating
> procedure for a release build 2, or if a build 1 has an automation
> hiccup. While we *could* convert the tags automatically, then either
> never move them or move them behind users' backs, users would then
> never get the updated tags unless they explicitly delete and re-fetch
> the tag by name, something people wouldn't typically do without
> prompting. In my opinion, tags pointing at the wrong revision are worse
> than no tags.

Huh, interesting! I was actually under the impression that tags are
updated in a non-fast-forward manner similar to branches if you want,
but given the documentation it seems that is not the case.

> Also, I need to limit the tags pushed to gecko.git, as there is a hard
> fastforward-only rule there, and notifying partners to delete and
> recreate tags seems like a non-starter. So I built in tag-limiting
> whitelists for safety. However, there appears to be an issue with the
> way I'm limiting tags. Rather than delay things further, I decided to
> publish as-is and see if anyone really cares about the tags, or if they
> would be fine using the tip of each relbranch instead.
>
> Since you bring up the point of tags, can you give examples of how you
> use tags, or how you have seen others use tags?

I don't use tags myself at all. I remember someone pointing out that my
repo did not contain the hg tags a long time ago (2+ years), so I
unfortunately don't remember who that person was. That's when I started
pushing the tags as well. I just pointed this out as something that
caught my attention.

>> 2. How frequently are these branches updated?
>
> The current job is running on a 5-minute cron job. We can, of course,
> change that if needed. When I add a new repo or a slew of new branches
> to convert, the job can take longer to complete, but it typically
> finishes in about 6 minutes.

Sounds good; that's exactly the same setup and frequency that I have as
well. It seems to be working fine for most people.

>> 3. What are the plans for adding the rest of the branches that we
>> currently have in https://github.com/mozilla/mozilla-central, and what
>> is the process for developers to request their project branches to be
>> added to the conversion jobs? Right now the process is pretty
>> light-weight (they usually just ping me on IRC or send me an email).
>> It would be nice if we kept that property.
>
> There are two concerns here: project branches are often reset, and
> individual developers care about different subsets of branches.
> Providing a core set of branches that everyone uses, and which we can
> safely support at scale, seems a good set to include by default. For
> users of other branches, one approach we're looking at is to provide
> supported documentation that shows developers how to add
> any-branch-of-their-choice to their local repo. We're still figuring
> out what this default set should be, and as you have clearly expressed
> opinions on this in the past, we'd of course be interested in your
> current opinions here.

I strongly disagree that providing the current subset of branches is
enough.

Firstly, to address your point about project branches being reset: it's
true, and it works perfectly fine for the people who are using those
branches, since they will get a non-fast-forward merge which signals
this change to them, which they can choose to avoid if they want. Also,
if somebody gives up their project branch (such as is the case for
twigs), we can always delete the branch from the main remote, and doing
that will not affect people who have local branches based on it. Git
does the right thing in every case.

About the fact that individual developers care about different subsets
of branches: that's precisely correct, and it's fine, since when using
git you *always* ignore branches that you're not interested in, and are
never affected by what changes happen in those branches. We have an
extensive amount of experience with the current github mirror on both of
these points, and we _know_ that neither of these is an issue here. Part
of the value of having a git mirror for developers is that we don't
Re: Detection of unlabeled UTF-8
Zack Weinberg schrieb:
> It is possible to distinguish UTF-8 from most legacy encodings
> heuristically with high reliability, and I'd like to suggest that we
> ought to do so, independent of locale.

I would very much agree with doing that. UTF-8 is what is being
suggested everywhere as the encoding to go with, and since we should be
able to detect it easily enough, we should do so and switch to it when
we find it.

Robert Kaiser
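A sketch of why UTF-8 is easy to detect heuristically: the byte-sequence structure is rigid enough that legacy-encoded text almost never validates as UTF-8. This is an illustration, not Gecko's detector; it checks only lead/continuation structure (no overlong-form or surrogate checks), and a real heuristic would also require at least one multi-byte sequence, since pure ASCII validates under any encoding.

```cpp
#include <cassert>
#include <string>

// Returns true if the bytes are structurally well-formed UTF-8
// (simplified: overlong encodings and surrogates are not rejected).
bool LooksLikeUtf8(const std::string& bytes) {
  size_t i = 0;
  while (i < bytes.size()) {
    unsigned char c = static_cast<unsigned char>(bytes[i]);
    size_t extra;
    if (c < 0x80) extra = 0;                   // ASCII
    else if ((c & 0xE0) == 0xC0) extra = 1;    // 2-byte sequence lead
    else if ((c & 0xF0) == 0xE0) extra = 2;    // 3-byte sequence lead
    else if ((c & 0xF8) == 0xF0) extra = 3;    // 4-byte sequence lead
    else return false;  // stray continuation byte or invalid lead
    if (i + extra >= bytes.size()) return false;  // truncated sequence
    for (size_t j = 1; j <= extra; ++j) {
      // Every trailing byte must match 10xxxxxx.
      if ((static_cast<unsigned char>(bytes[i + j]) & 0xC0) != 0x80)
        return false;
    }
    i += extra + 1;
  }
  return true;
}
```

For example, the Latin-1 byte for "é" (0xE9) reads as a 3-byte lead in UTF-8 and then fails the continuation check, which is exactly why legacy text rarely false-positives.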
Python 2.7.3 now required to build
Just landed in inbound and making its way to a tree near you is the
requirement that you use Python 2.7.3 or greater (but not Python 3) to
build the tree. If you see any issues, bug 870420 is responsible.

Previously we required Python 2.7.0 or greater. This change has been
planned and agreed to for months and was recently unblocked to land.

Hopefully things should just work for most people. MozillaBuild (the
Windows dev environment) is shipping Python 2.7.5, and most Linux
distros are running 2.7.3+. OS X, however, lags: OS X 10.8 ships 2.7.2,
so attempting to build out of the box will give you an error.

If your Python is too old, the build system will instruct you what to
do. For most people, run `mach bootstrap` and it will hopefully install
a modern Python on your machine. If that doesn't work, patches welcome
(code in /python/mozboot).
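The version gate described above amounts to a check like the following. The function name and shape are illustrative; the real check lives in the build system and mozboot code.

```python
# Illustrative sketch of the build requirement: Python must be 2.x and
# at least 2.7.3; Python 3 is explicitly not supported for building.
def python_ok(version_info, minimum=(2, 7, 3)):
    """Return True if this interpreter version can build the tree."""
    return version_info[0] == 2 and tuple(version_info[:3]) >= minimum

# Typical use at build-system startup would be something like:
#   import sys
#   if not python_ok(sys.version_info):
#       sys.exit("Python 2.7.3+ (but not 3.x) required; try `mach bootstrap`.")
```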
Re: Detection of unlabeled UTF-8
On 9/5/13 11:15 AM, Adam Roach wrote:
> I would argue that we do, to some degree, already do this for things
> like Content-Encoding. For example, if a website attempts to send
> gzip-encoded bodies without a Content-Encoding header, we don't simply
> display the compressed body as if it were encoded according to the
> indicated type

Actually, we do, unless the indicated type is text/plain. The one fixup
I'm aware of with Content-Encoding is that if the content type is
application/gzip, the Content-Encoding is gzip, and the file extension
is .gz, we ignore the Content-Encoding. Both of these are workarounds
for a very widespread server misconfiguration (in particular, the
default Apache configuration for many years had the text/plain problem,
and the default Apache configuration on most Linux distributions had the
gzip problem).

-Boris
window.opener always returns null
I'm creating a Firefox extension that needs to detect whether a window
was opened via JavaScript with the window.open() command. I've tried
many different ways: window.opener, parent.window.opener, this.opener,
etc., and it always returns null.

I was also trying it using the code editor here:
http://www.w3schools.com/js/tryit.asp?filename=try_win_focus

The only way it returned a valid opener object was when I changed the
line to:

  myWindow.document.write(myWindow.opener);

However, this implies that you have to know the object reference name in
order to get the opener, but there is no way that I'm aware of to detect
this in a loaded website. Any ideas of how I can reliably detect whether
a window was opened via JavaScript?
Re: vsync proposal
I had some off-thread discussion with Bas about lowering latency. We
seem to have agreed on the following plan, running on some non-main
thread:

1. Wait for the vsync event.
2. Dispatch refresh driver ticks to all windows that don't already have
   a pending refresh driver tick unacknowledged by a layer-tree update
   response.
3. Wait for N ms, processing any layer-tree updates (where N = vsync
   interval - a generous estimate of the time to composite all windows).
4. Composite all updated windows.
5. Go to 1.

This could happen on the compositor thread or some other thread,
depending on how "wait for vsync event" is implemented on each platform.

Rob
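The five steps above can be sketched as a loop with the platform-specific pieces injected as callables. Everything here is illustrative (names, the 16.7 ms default, the 4 ms composite estimate); it only demonstrates the control flow and the budget computation for step 3.

```python
def wait_budget_ms(vsync_interval_ms, composite_estimate_ms):
    """Step 3's N: the vsync interval minus a generous estimate of the
    time needed to composite all windows."""
    return max(0.0, vsync_interval_ms - composite_estimate_ms)

def compositor_loop(wait_for_vsync, dispatch_ticks, process_updates,
                    composite_all, frames,
                    vsync_interval_ms=16.7, composite_estimate_ms=4.0):
    """Run `frames` iterations of the proposed compositor loop."""
    budget = wait_budget_ms(vsync_interval_ms, composite_estimate_ms)
    for _ in range(frames):
        wait_for_vsync()         # 1. wait for the vsync event
        dispatch_ticks()         # 2. tick refresh drivers that need it
        process_updates(budget)  # 3. handle layer-tree updates for N ms
        composite_all()          # 4. composite all updated windows
                                 # 5. loop back to waiting for vsync
```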
Re: window.opener always returns null
On 9/5/13 10:41 PM, digitalc...@gmail.com wrote:
> I'm creating a FF extension that needs to detect whether a window was
> opened via javascript with the window.open() command.

Generally, testing .opener on the relevant window will work. Note,
however, that pages can explicitly null this out to break references to
the opener...

-Boris