Re: PSA: New C++11 features made available by dropping support for gcc-4.6 in Gecko 38

2015-04-30 Thread Robert O'Callahan
On Sat, Mar 21, 2015 at 4:14 AM, bo...@mozilla.com wrote:

 * member initializers


Should we have any rules around these, or should we use them
indiscriminately? I wonder particularly about initializers which are
complicated expressions.

Rob
-- 
I tell you that anyone who is angry with a brother or sister will be
subject to judgment. Again, anyone who says to a brother or sister,
'Raca,' is answerable to the court. And anyone who says, 'You fool!'
will be in danger of the fire of hell.


Re: PSA: New C++11 features made available by dropping support for gcc-4.6 in Gecko 38

2015-04-30 Thread Xidorn Quan
On Thu, Apr 30, 2015 at 10:14 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Sat, Mar 21, 2015 at 4:14 AM, bo...@mozilla.com wrote:

  * member initializers
 

 Should we have any rules around these, or should we use them
 indiscriminately? I wonder particularly about initializers which are
 complicated expressions.


I guess we probably should forbid using any expression with side effects for
member initializers.
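For instance, a sketch of the distinction (names invented):

  class Widget {
    uint32_t mCount = 0;             // constant initializer: clearly fine
    nsTArray<int> mItems;            // default-constructed: fine
    uint32_t mId = sNextWidgetId++;  // mutates a global: the kind of thing to forbid
  };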

- Xidorn


String array from JS - C++ via IDL

2015-04-30 Thread janjongboom
I have an IDL file and I want to add a new attribute that contains an array of 
strings. The interface is implemented in JavaScript and I'm writing C++ code.

IDL:

readonly attribute nsIArray osPaths; // DOMString[]

Consuming in C++:

nsCOMPtr<nsIArray> bla;
app->GetOsPaths(getter_AddRefs(bla));

uint32_t length;
bla->GetLength(&length);
printf("Length=%d\n", length);

All OK. Prints 'Length=1' when I add one element to the array. But now... how
do I get the strings out of here? I found do_QueryElementAt and its nsISupports*
friends, but none of them say they can handle it...

for (uint32_t j = 0; j < length; ++j) {
  nsCOMPtr<nsISupportsPrimitive> isp = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsCString> iscs = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsString> iss = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsPRBool> isb = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsPRUint8> isu8 = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsPRUint16> isu16 = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsPRUint32> isu32 = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsChar> isc = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsPRInt16> isi16 = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsPRInt32> isi32 = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsFloat> isf = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsDouble> isd = do_QueryElementAt(bla, j);
  nsCOMPtr<nsISupportsInterfacePointer> isip = do_QueryElementAt(bla, j);

  printf("isp=%d ", !!isp);
  printf("iscs=%d ", !!iscs);
  printf("iss=%d ", !!iss);
  printf("isb=%d ", !!isb);
  printf("isu8=%d ", !!isu8);
  printf("isu16=%d ", !!isu16);
  printf("isu32=%d ", !!isu32);
  printf("isc=%d ", !!isc);
  printf("isi16=%d ", !!isi16);
  printf("isi32=%d ", !!isi32);
  printf("isf=%d ", !!isf);
  printf("isd=%d ", !!isd);
  printf("isip=%d ", !!isip);
  printf("\n");
}

Result: isp=0 iscs=0 iss=0 isb=0 isu8=0 isu16=0 isu32=0 isc=0 isi16=0 isi32=0 
isf=0 isd=0 isip=0

So what type is in here, and how can I get it out of the array? I just want 
normal nsString objects.
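My best guess so far, which I haven't verified: if the JS implementation
wrapped each string in an nsISupportsString before appending it, the C++ side
could unwrap the values like this (sketch):

  for (uint32_t j = 0; j < length; ++j) {
    nsCOMPtr<nsISupportsString> iss = do_QueryElementAt(bla, j);
    if (iss) {
      nsString value;
      iss->GetData(value);  // unwrap the string payload
      printf("Path=%s\n", NS_ConvertUTF16toUTF8(value).get());
    }
  }

But given the iss=0 in the output above, whatever the JS side is putting in
the array evidently isn't an nsISupportsString yet.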


Re: RFC: Navigation transitions

2015-04-30 Thread Borja Salguero
Based on the great article Chris has posted, I've been playing a little with 
the idea about how 'navigation' could fit within the new architecture model 
(based on 'threads.js') we are working on for Contacts App.

As we know, there are no clear guidelines regarding how panels should move in
our OS (check how the 'Settings' panel is shown in every app in Gaia), so this
lack of specification has sometimes created discrepancies between apps.
This is an issue when thinking about the 'Transition API' proposal in our apps,
because every panel must declare (using meta tags) how it should be
navigated (both ways). If this panel is reused (or if the UX changes and we
want a different transition), we would need to modify the HTML and the new set
of styles within every view in order to achieve this.

So taking into account all these issues, I've created a library (just a
prototype) in order to expose an API where a 'Content Wrapper' will manage all
transitions between 2 iframes in a declarative way, adding more value
(for example, letting us render something in a panel before the navigation
happens). This is based on 'Threads.js', so all communication is between the
iframe and the 'navigation.js' worker, which will be in charge of notifying all
events.

API exposed through 'Threads.js' would be something like:

var navigation = threads.client('navigation-service');
navigation.method(
  'goto',
  origin, // URL or #id
  {
destination: destination, // URL or #id
effect: 'fade', // You can specify the transition (left, right, fade...);
    params: {...} // All params you want to pass to 'destination'
  }
);

As you may have noticed, we can pass params to the 'destination' before the
navigation happens, which is great for preloading data to show (imagine moving
from the list of contacts to a specific contact: we would have 'Contact detail'
rendering content before moving from one to the other).

On 'destination' we will receive these params just by listening to an event:

navigation.on('beforenavigating', function(params) {
  // Do whatever you need
});

When 'origin' is ready, it will notify the service, so the navigation is done 
using:

navigation.method('navigationready');

Once the navigation between iframes is done, we will notify the panels involved
with a 'navigationend' event, so by declaring a listener we can do any task we
want.

navigation.on('navigationend', function(params) {
// Do whatever you need
});

This is the ‘minimum’ set of events and methods that I’ve identified for a
basic navigation model, but it would be great if you could add your ideas and
suggestions about the features you would expect from a library like this.

You can find the example working in [1] (it actually uses threads.js for
retrieving contacts as a 'service', which is really cool!). Feedback and
comments are really welcome!

[1] https://github.com/borjasalguero/contacts_prototype



From: Samuel Foster sfos...@mozilla.com
Date: Tuesday, 28 April 2015, 20:21
To: Christopher Lord cl...@mozilla.com
CC: Ting-Yu Chou tc...@mozilla.com, dev-g...@lists.mozilla.org,
dev-platform@lists.mozilla.org
Subject: Re: RFC: Navigation transitions

It would be good to know how this plays with the Visibility API. When does the
outgoing document become hidden - at the end of the animation? And likewise
for the incoming document. If visibility state is being used for, say, stopping
some media being played, it makes sense to flip the state before animating away
from a page. For the incoming page, I guess the same is true - for the purposes
of the Visibility API, the document is only visible once the animation ends?

/Sam

On Thu, Apr 23, 2015 at 3:39 AM, Christopher Lord cl...@mozilla.com wrote:
Seems it has, sorry about that - here's a new one:
http://chrislord.net/?p=273&preview=1&_ppp=d17048fbc3

I plan on publishing this (on my blog) today. The proposal and shim source is 
also visible permanently in git: https://gitlab.com/Cwiiis/gaia-navigator



A question about do_QueryInterface()

2015-04-30 Thread ISHIKAWA, Chiaki
Lately, I refreshed comm-central thunderbird code
and tested my local modification to enable buffering of writing
downloaded message to a local mail store.

(This is about when one uses POP3. Please bear this in mind.
Imap testing is further along.)

I noticed a couple of things:

(1) File API semantic change? : unexpected file pointer position.

First, this could be my mixup when refreshing the source tree
and merging the local modification to create a new set of hg patches,
but has there been a change to the file-handling code so that
a newly opened file's seek position (the file pointer for the next read
or write, in other words) is placed at the beginning (i.e., offset 0)
even if the file already existed with non-zero size and the intention
is to append?
I didn't write the original code, so I am unsure, but the intention was to
append to the file. I don't think I touched that part of the code.

It seems to me that the file pointer used to be at the end of the file when
writing to the Berkeley-style mbox-format Inbox (so we append to it)
before the refresh of the source tree.
But after the refresh, I realized it is no longer positioned at the end,
but at offset 0, and so a call to |Seek()| before appending is now
indeed necessary.

I have been attempting to remove a call to |Seek()| in a busy loop that
writes to the Inbox: I don't think this |Seek()| is needed.
[1]
The offending |Seek()| nullifies our attempt to write to the Inbox with
buffering. This slows down I/O.

After a couple of months of testing and debugging on and off, even
monitoring the file positions before and after the |Seek()|,
I thought I could remove the offending |Seek()| safely, at least in my
limited testing.

Well, what about other people's environments?
Screwing up one's mailbox is something that needs to be avoided at all costs.

Based on the suggestion of aceman, I was about to upload a proposed
patch for Beta to verify that we can remove the |Seek()|
and still be safe in other people's environments: the patch is meant to
check that the |Seek()| has no effect, i.e., that the file pointer position
does not change before and after the offending |Seek()|. There are a few
preferences that can change the behavior of TB, so I wanted to make
sure. Once we know for sure that it is safe to remove the |Seek()|, I intend
to upload the final patch to remove it.

Given the slight change in the file pointer position after opening the
mbox-style Inbox, I now need to insert a |Seek()| once after we begin
writing to it. Just once.

All is well now.

But I am curious if anyone is aware of such file API change.
I tried to read through the changes to M-C tree for the last week, but
it was long and I could not figure out if the change is in the last week
or not. Thus I have to ask experts here.


(2)  This is more like C++ semantic issue and the way XPCOM is built.

(After I wrote down the following, I found out from system trace, that
maybe my change does not buffer the writing to temporary file still :-(.
I will follow up when the dust settles.)

With a user preference setting of mailnews.downloadToTempFile to true,
thunderbird mail client can be taught to download a POP3 message
first to a temporary file and close it, and then
read the content of this file and append it to mbox-style Inbox.

I think this is meant to deal with arcane, very spartan anti-virus
software that would REMOVE/QUARANTINE a file when a virus or malware is
detected. If you receive a spam e-mail with malware, and your whole
Inbox is removed/quarantined, it is a disaster.

I think today's AV software is more benign and handles the situation more
elegantly (I don't know the details).

Although mailnews.downloadToTempFile does not even exist by default,
and you need to manually set it to test the feature,
the code path to handle this feature DOES exist, and I
needed to test the path in my quest for better I/O error handling in
TB.[2][3][4][5].

When I enabled buffering of writing to a local file during
downloading as in [1], and the local file is temporary (by setting
mailnews.downloadToTempFile to true), I hit a snag.

There is code in the "mailnews.downloadToTempFile is true" path:
http://mxr.mozilla.org/comm-central/source/mailnews/local/src/nsPop3Sink.cpp#787


  787   nsCOMPtr<nsIInputStream> inboxInputStream =
          do_QueryInterface(m_outFileStream);
  788   rv = MsgReopenFileStream(m_tmpDownloadFile, inboxInputStream);

Before, as in the current release, m_outFileStream is not buffered.
And the code on line 787 produces non-null inboxInputStream.

However, once m_outFileStream is turned into a buffered output stream
using, say,

  m_outFileStream = NS_BufferOutputStream(m_outFileStream, 64 * 1024 );

the code on line 787 produces nullptr.

Is this to be expected?

Up until now, I thought of do_QueryInterface() as mere sugar-coating for
certain type-mutation or something. But I now know I am wrong.

I read a page about do_QueryInterface() but it does not
explain the principle very much.

Re: A question about do_QueryInterface()

2015-04-30 Thread Boris Zbarsky
On 4/30/15 2:25 PM, ISHIKAWA, Chiaki wrote:
 Is this to be expected?

Sure.  You're taking an _output_ stream and QIing it to
nsI_Input_Stream.

It might happen that some objects implement both interfaces (and it looks
like nsMsgFileStream does).  The object returned by
NS_BufferOutputStream does not: it's an output stream, but not an input
stream.

I recommend keeping track of both the input _and_ output stream in
members, and buffering only the output, or buffering them separately, as
you prefer.
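
Something like this sketch, reusing the member name from the thread (the raw
stream variable is assumed):

  // rawFileStream is the underlying nsMsgFileStream, which implements both
  // nsIInputStream and nsIOutputStream; buffer only the output direction.
  nsCOMPtr<nsIInputStream> m_inFileStream = do_QueryInterface(rawFileStream);
  m_outFileStream = NS_BufferOutputStream(rawFileStream, 64 * 1024);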

 I read a page about do_QueryInterface() but it does not
 explain the principle very much.

It's simple:  You pass the object a 128-bit IID, it returns a pointer to
something that implements that interface if that's an IID it implements,
and null otherwise.
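
In code terms (a minimal sketch; the stream variable is invented):

  nsCOMPtr<nsIInputStream> in = do_QueryInterface(someStream);
  if (!in) {
    // The object behind someStream does not claim to implement
    // nsIInputStream, so the QI returned null.
  }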

   X = do_QueryInterface(Y) is possible only when X is the direct or
 indirect descendant of Y?

No, that has nothing to do with it.  It's all about what the object
claims to implement.

-Boris


Re: A question about do_QueryInterface()

2015-04-30 Thread ISHIKAWA, Chiaki
Thank you for the clarification.

On 2015/05/01 3:38, Boris Zbarsky wrote:
 On 4/30/15 2:25 PM, ISHIKAWA, Chiaki wrote:
 Is this to be expected?
 
 Sure.  You're taking an _output_ stream and QIing it to
 nsI_Input_Stream.
 

Yes, that is how the original code was written.

 It might happen that some objects implement both interfaces (and looks
 like nsMsgFileStream  does).  The object returned by
 NS_BufferOutputStream does not: it's an output stream, but not an input
 stream.

Oh, I see. So the non-buffered version was OK, but now that I've introduced
buffering, it is no longer possible.

 I recommend keeping track of both the input _and_ output stream in
 members, and buffering only the output, or buffering them separately, as
 you prefer.
 

I will try to do something sensible, but I also do not want to meddle with
the original code more than necessary.

 I read a page about do_QueryInterface() but it does not
 explain the principle very much.
 
 It's simple:  You pass the object a 128-bit IID, it returns a pointer to
 something that implements that interface if that's an IID it implements,
 and null otherwise.
 
    X = do_QueryInterface(Y) is possible only when X is the direct or
 indirect descendant of Y?
 
 No, that has nothing to do with it.  It's all about what the object
 claims to implement.

Now I see. I thought do_QueryInterface() had something to do with the
class hierarchy, but it has nothing to do with it.
It is about the object's claim that it implements a certain interface.

Thank you again.

BTW, the seeming lack of buffering was because I failed to
remove the offending |Seek()| that negates buffering.
Once I disabled it in my local builds, everything seems AOK!



 -Boris

Chiaki Ishikawa

 



Re: A question about do_QueryInterface()

2015-04-30 Thread Joshua Cranmer 

On 4/30/2015 1:25 PM, ISHIKAWA, Chiaki wrote:

  787   nsCOMPtr<nsIInputStream> inboxInputStream =
          do_QueryInterface(m_outFileStream);
  788   rv = MsgReopenFileStream(m_tmpDownloadFile, inboxInputStream);

Before, as in the current release, m_outFileStream is not buffered.
And the code on line 787 produces non-null inboxInputStream.

However, once m_outFileStream is turned into a buffered output stream
using, say,

   m_outFileStream = NS_BufferOutputStream(m_outFileStream, 64 * 1024 );

the code on line 787 produces nullptr.

Is this to be expected?


In short, yes. What happens is that the original m_outFileStream happens 
to be of type nsFileStreams (or something like that), which inherits 
from both nsIInputStream and nsIOutputStream. When you wrap that in a 
buffered output stream, the resulting type of m_outFileStream is of 
nsBufferedOutputStream, which does not inherit nsIInputStream; therefore 
the cast to nsIInputStream fails.


Up until now, I thought of do_QueryInterface() as mere sugar-coating for
certain type-mutation or something. But I now know I am wrong.


do_QueryInterface is the equivalent of a type-checked downcast, e.g. 
(ClassName)foo in Java.  (Regular C++ downcasts are not dynamically 
type-checked).


I read a page about do_QueryInterface() but it does not
explain the principle very much.

Is the reason for the failure something like the following?
I am using a very general class hierarchy.


        A           base class
        |
    +---+---+
    |       |
    B       C       B and C are derived from base class A
            |
            D       D is derived further from class C.

Let's say Class B and C are derived from Class A.
Class D is further derived from Class C.
Let us assume there are corresponding XPCOM class/object A', B', C', D'.

By using do_QueryInterface() on objects,
 we can follow the path of the direct derivation relation
  B' = do_QueryInterface(A') (or is it the other way round?)

 and maybe between B' and C' (? Not sure about this.)

 but we can NOT follow the direction of
   B' = do_QueryInterface(D')
 That is,
X = do_QueryInterface(Y) is possible only when X is the direct or
indirect descendant of Y?


No, you are incorrect. The issue is the dynamic type of the object (if 
you have A *x = new B, the static type of x is A whereas the dynamic 
type is B). In the pre-modified code, the dynamic type of 
m_outFileStream supported the interface in question, but your 
modification changed the dynamic type to one that did not support the 
interface.
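
Using the hierarchy from your diagram, the same distinction in plain C++
(sketch; assumes A has a virtual member so the casts are legal):

  A *x = new D();              // static type of x: A*, dynamic type: D
  C *c = dynamic_cast<C*>(x);  // non-null: the dynamic type D derives from C
  B *b = dynamic_cast<B*>(x);  // null: D is not a B, whatever x's static type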


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist



Cleopatra can take larger profiles now.

2015-04-30 Thread Mike Conley
Good news everybody!

TL;DR: Cleopatra used to have a limitation where it would only accept
profiles of around 10MB. We've now made it so that Cleopatra can accept
much, much larger profiles. This is going to become increasingly
important as more profile-able processes get added.

As an added bonus, uploads should be much smaller, resulting in faster
upload times.

Here's the nitty-gritty:

The storage system we use for profiles is hosted on Google AppEngine.
When a user uploads a profile, we used to do server-side compression -
but the machine would run out of memory while attempting to compress a
large profile, and the OS would kill the process.

We have now exposed an endpoint that accepts pre-compressed profiles,
and added some code[1] to Cleopatra to do the client-side gzip[2]
compression of profiles.

So if you were all bummed out because Cleopatra couldn't accept your
gargantuan profiles, rejoice!

Thanks to jrmuizel and mstange for their server-side work and reviews.

-Mike

[1]:
https://github.com/bgirard/cleopatra/commit/fd7a0e08e51472c3f9b76447e386c44323dc4cbc
[2]: Emscripten'd zlib! https://github.com/kripken/zee.js


Tab titles no longer underlined in e10s

2015-04-30 Thread Bill McCloskey
A very minor announcement:

Starting in tomorrow's nightly, we will no longer underline tab titles in
e10s. If you want to find out if a tab is remote, look at its tooltip. For
remote tabs it will be "title - e10s".

The "New e10s window" menu item is also going away. If you want an e10s
window, you need to set the e10s preference and restart. "New non-e10s
window" will still be around for testing.

-Bill


PSA: NS_Alloc, NS_Realloc, NS_Free are no more (Was: PSA: moz_malloc, moz_realloc, moz_calloc and moz_free are no more)

2015-04-30 Thread Mike Hommey
Today was NS_Alloc, NS_Realloc and NS_Free's turn.

Mike

On Thu, Apr 02, 2015 at 08:31:17AM +0900, Mike Hommey wrote:
 And now, nsMemory::Alloc, nsMemory::Free and nsMemory::Realloc are gone
 as well.
 
 Mike
 
 On Tue, Mar 31, 2015 at 02:59:20PM +0900, Mike Hommey wrote:
  Hi,
  
  In the next few weeks, there is going to be a reduction in the number of
  our memory allocator wrappers/functions, for essentially the following
  reasons:
  - we have too many of them,
  - developers rarely know which one to use, which results in:
  - developers often make mistakes[1]
  
  This started today with the landing of bug 1138293, which effectively
  removed moz_malloc, moz_realloc, moz_calloc and moz_free.
  
  They were replaced, respectively, with malloc, realloc, calloc and free,
  because that works™.
  
  If you have pending patches that use moz_malloc, moz_realloc,
  moz_calloc, moz_free, you can just remove the moz_ prefix.
  
  The infallible moz_xmalloc, moz_xrealloc and moz_xcalloc still do exist,
  and memory allocated with them can be freed with free.
  
  With that being said, please refrain from using any of the functions
  mentioned above. Please prefer the C++y new and delete. new is
  infallible by default (equivalent to moz_xmalloc). If you need an
  equivalent to moz_malloc, use fallible new instead:
  
  new (fallible) Foo()
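   
   For instance, a sketch of both spellings (Foo is a placeholder class):
   
     Foo* a = new Foo();                     // infallible: never returns null
     Foo* b = new (mozilla::fallible) Foo(); // fallible: may return nullptr
     if (!b) {
       // handle allocation failure
     }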
  
  Please note that this shouldn't make uplifting harder. Platform patches
  using malloc/free/new/delete should apply and work just fine on beta,
  aurora and esr (with a few exceptions; if you uplift something from
  mfbt/mozglue that uses the memory allocator, please check with me).
  
  Cheers,
  
  Mike
  
  
  1. if you look for it, you'll find cases of one family used for
 allocation and another for deallocation, for possibly close to all
 combinations of families (NS_Alloc, nsMemory, moz_malloc, malloc,
 new).


Re: Intent to deprecate: Insecure HTTP

2015-04-30 Thread Eric Rescorla
On Thu, Apr 30, 2015 at 5:57 PM, diaf...@gmail.com wrote:

 Here's two relevant Bugzilla bugs:

 Self-signed certificates are treated as errors:
 https://bugzilla.mozilla.org/show_bug.cgi?id=431386

 Switch generic icon to negative feedback for non-https sites:
 https://bugzilla.mozilla.org/show_bug.cgi?id=1041087

 Here's a proposed way of phasing this plan in over time:

 1. Mid-2015: Start treating self signed certificates as unencrypted
 connections (i.e. stop showing a warning, but the UI would just show the
 globe icon, not the lock icon). This would allow website owners to choose
 to block passive surveillance without causing any cost to them or any
 problems for their users.


I think you're over-focusing on the lock icon and not thinking enough about
the referential semantics.

The point of the https: URI is that it tells the browser that this is
supposed to be a secure connection and the browser needs to enforce
this regardless of the UI it shows.

To give a concrete example, say the user enters his password in a form that
is intended to be submitted over HTTPS and the site presents a self-signed
certificate. If the browser sends the password, then it has possibly
compromised the user's password even if it subsequently doesn't show the
secure UI (because the attacker could supply a self-signed certificate).

-Ekr


Re: Intent to deprecate: Insecure HTTP

2015-04-30 Thread imfasterthanneutrino
 1.Setting a date after which all new features will be available only to 
 secure websites

I propose the date to be one year after Let's Encrypt is launched, which is 
about mid-2016. 

By the way, I hope Mozilla's own official website (mozilla.org) moves to
HTTPS-only as soon as possible. Currently www.mozilla.org forces HTTPS, but
many mozilla.org subdomains do not, such as http://people.mozilla.org/,
http://release.mozilla.org/, and http://website-archive.mozilla.org. It would
be great if *.mozilla.org could be added to browsers' built-in HSTS lists.


Re: Using rust in Gecko. rust-url compatibility

2015-04-30 Thread Chris Hofmann
check to see if we still have any automated crawlers running that
could go looking for problems.

give the folks that run the crawlers an instrumented build, and strong
liquor for best results.

-chofmann

On Thu, Apr 30, 2015 at 4:00 PM, Jason Duell jdu...@mozilla.com wrote:

 +1 to asserting during tests. I'd feel better about doing it on nightly too
 if there were a way to include the offending URI in the crash report.  But
 I'm guessing there's not?

 On Thu, Apr 30, 2015 at 3:42 PM, Jet Villegas jville...@mozilla.com
 wrote:

  I wonder why we'd allow *any* parsing differences here? Couldn't you just
  assert and fail hard while you're testing against our tests and in
 Nightly?
  I imagine the differences you don't catch this way will be so subtle that
  crowd-sourcing is unlikely to catch them either.
 
  --Jet
 
  On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu valentin.g...@gmail.com
  wrote:
 
   As some of you may know, Rust is approaching its 1.0 release in a
 couple
  of
   weeks. One of the major goals for Rust is using a rust library in
 Gecko.
   The specific one I'm working at the moment is adding rust-url as a
 safer
   alternative to nsStandardURL.
  
   This project is still in its infancy, but we're making good progress. A
  WIP
   patch is posted in bug 1151899, while infrastructure support for the
 rust
   compiler is tracked in bug 1135640.
  
   One of the main problems in this endeavor is compatibility. It would be
   best if this change wouldn't introduce any changes in the way we parse
  and
   encode/decode URLs, however rust-url does differ a bit from Gecko's own
   parser. While we can account for the differences we know of, there may
  be a
   lot of other cases we are not aware of. I propose using our volunteer
  base
   in trying to find more of these differences by reporting them on
 Nightly.
  
   My patch currently uses printf to note when a parsing difference
 occurs,
  or
   when any of the getters (GetHost, GetPath, etc) returns a string that's
   different from our native implementation. Printf might not be the best
  way
   of logging these differences though. NSPR logging might work, or even
   writing to a log file in the current directory.
  
   These differences are quite privacy sensitive, so an automatic
 reporting
   tool probably wouldn't work. Has anyone done something like this
 before?
   Would fuzzing be a good way of finding more cases?
  
   I'm waiting for any comments and suggestions you may have.
   Thanks!
 



 --

 Jason


Re: Some more PLDHashTable API tweaks

2015-04-30 Thread Nicholas Nethercote
An update on some pldhash changes that were backed out and then
gradually relanded...

On Wed, Feb 4, 2015 at 7:45 PM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
 Hi,

 I just landed the patches in
 https://bugzilla.mozilla.org/show_bug.cgi?id=1050035. They
 affect PLDHashTable's API in the following ways.

 - PLDHashTable now allocates its entry storage lazily. (nsTHashtable and
   friends do too, since they are just layers on top of PLDHashTable.) This is
   a nice space win because about 45% of all created PLDHashTables never get
   any elements inserted into them.

This relanded in late February as part of bug 1050035.

 - As a result, PL_DHashTableInit() is now infallible. This is possible because
   the allocation of entry storage now only occurs on table insertion, in
   PL_DHashTableAdd().

This just relanded on mozilla-inbound as part of bug 1159972.

 - An infallible version of PL_DHashTableAdd() has been added. To use the
   fallible version you need a |fallible_t| argument. All the old callsites
   were updated appropriately, to keep them fallible.

This relanded in mid-February as part of bug 1131901.

 - PL_NewDHashTable() and PL_DHashTableDestroy() have been removed. You should
   now just use |new|+PL_DHashTableInit() and PL_DHashTableFinish()+|delete|,
   which are more obvious and only slightly more verbose. (And I have plans to
   add a destructor and an initializing constructor, so in the end you'll be
   able to just use |new| and |delete|.)

This has not yet relanded. I hope to get to all that soon.
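
Putting the relanded pieces together, usage now looks roughly like this
(a sketch assembled from the bullets above; ops, key and MyEntry are
placeholders, and exact signatures may differ):

  PLDHashTable* table = new PLDHashTable();
  PL_DHashTableInit(table, ops, sizeof(MyEntry));  // infallible; storage is lazy
  // Infallible add (the new default):
  auto* e = static_cast<MyEntry*>(PL_DHashTableAdd(table, key));
  // Fallible add takes an explicit fallible_t argument:
  auto* maybe =
      static_cast<MyEntry*>(PL_DHashTableAdd(table, key, mozilla::fallible));
  if (!maybe) { /* OOM: handle it */ }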

Nick


Re: Intent to deprecate: Insecure HTTP

2015-04-30 Thread Matthew Phillips
I think this is a grave mistake.

The simplicity of the web was the primary factor in its explosive growth. By 
putting up barriers to entry you are discouraging experimentation, discouraging 
one-off projects, and discouraging leaving inactive websites running (as 
keeping certs up to date will be a yearly burden).

I understand that there are proposed solutions to these problems but they don't 
exist today and won't be ubiquitous for a while.  That *has* to come first. 
Nothing is more important than the free speech the web allows. Not even 
security.

That the leading minds of the web no longer value this makes me feel like an 
old fogey, an incredibly sad one.


Re: Intent to deprecate: Insecure HTTP

2015-04-30 Thread peter . eckersley
On Thursday, April 30, 2015 at 6:02:44 PM UTC-7, peter.e...@gmail.com wrote:
 On Thursday, April 30, 2015 at 5:57:13 PM UTC-7, dia...@gmail.com wrote:
 
  1. Mid-2015: Start treating self signed certificates as unencrypted 
  connections (i.e. stop showing a warning, but the UI would just show the 
  globe icon, not the lock icon). This would allow website owners to choose 
  to block passive surveillance without causing any cost to them or any 
  problems for their users.
 
 In Mid-2015 we will be launching Let's Encrypt to issue free certificates 
 using automated protocols, so we shouldn't need to do this.

The thing that may actually be implemented, which is similar to what you want, 
is the HTTP opportunistic encryption feature of HTTP/2.0.  That's strictly 
better than unencrypted HTTP (since it is safe against passive attacks) and 
strictly worse than authenticated HTTPS (because it fails instantly against 
active attacks).  So if clients implement it, it has a natural ordinal position 
in the UI and feature-access hierarchy.

If the Let's Encrypt launch goes as planned, it would probably be a mistake to 
encourage sites to use unauthenticated opportunistic HTTP encryption.


Re: Intent to deprecate: Insecure HTTP

2015-04-30 Thread diafygi
Here's two relevant Bugzilla bugs:

Self-signed certificates are treated as errors: 
https://bugzilla.mozilla.org/show_bug.cgi?id=431386

Switch generic icon to negative feedback for non-https sites: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1041087

Here's a proposed way of phasing this plan in over time:

1. Mid-2015: Start treating self signed certificates as unencrypted connections 
(i.e. stop showing a warning, but the UI would just show the globe icon, not 
the lock icon). This would allow website owners to choose to block passive 
surveillance without causing any cost to them or any problems for their users.

2. Late-2015: Switch the globe icon for http sites to a gray unlocked lock.
Self-signed certs would still get the globe icon. This would incentivize website
owners to at least start blocking passive surveillance if they want to keep the
same user experience as before. Also, this new icon wouldn't be loud or
intrusive to the user.

3. Late-2016: Change the unlocked icon for http sites to a yellow icon. 
Hopefully, by the end of 2016, Let's Encrypt has taken off and has a lot of 
frameworks like wordpress including tutorials on how to use it. This increased 
uptake of free authenticated https, plus the ability to still use self-signed 
certs for unauthenticated https (remember, this still blocks passive 
adversaries), would allow website owners enough alternative options to start 
switching to https. The yellow icon would push most over the edge.

4. Late-2017: Switch the unlocked icon for http to red. After a year of yellow, 
most websites should already have switched to https (authenticated or 
self-signed), so now it's time to drive the nail in the coffin and kill http on 
any production site with a red icon.

5. Late-2018: Show a warning for http sites. This experience would be similar 
to the self-signed cert experience now, where users have to manually choose to 
continue. Developers building websites would still be able to choose to 
continue to load their dev sites, but no production website would in their 
right mind choose to use http only.


Re: A question about do_QueryInterface()

2015-04-30 Thread ishikawa
On 2015-05-01 06:30, Seth Fowler wrote:
 
 On Apr 30, 2015, at 12:09 PM, Joshua Cranmer  pidgeo...@gmail.com wrote:

 do_QueryInterface is the equivalent of a type-checked downcast, e.g. 
 (ClassName)foo in Java.  (Regular C++ downcasts are not dynamically 
 type-checked).
 
 do_QueryInterface is, in other words, essentially equivalent to dynamic_cast 
 in C++, except that because it’s implemented manually people can do strange 
 things if they want to. They almost never do, though, so dynamic_cast is a 
 pretty good mental model.
 
 - Seth
 

Quoting Joshua,
 if you have A *x = new B, the static type of x is A whereas the dynamic type 
 is B). 

do_QueryInterface() handles the XPCOM interface issues, though.
I need to investigate a little more how similar class/type objects can
produce the difference between the set of XPCOM interfaces supported by A
and the set of interfaces supported by B.

I think my main question, after the knowledge gained, would be:
does a derived class inherit the XPCOM interfaces supported by its
base class automatically?
[Come to think of it, no, I don't think so. XPCOM is an artificial framework
tacked onto C++. Or is the XPCOM interface code in C++ written in such a
manner that interfaces are automatically inherited?]
Or do we have to manually and explicitly state that
a set of interfaces is inherited (and, of course, implemented, too)?
Or, considering the implementation issues, maybe the answer is not quite
crystal clear (???)
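
Looking at the macros, my guess of the usual pattern is something like this
sketch (class names invented): the interfaces are stated explicitly, and a
derived class forwards QueryInterface to its base, so the base's interfaces
come along for the ride:

  class Base : public nsIOutputStream {
    NS_DECL_ISUPPORTS            // Base claims nsIOutputStream here
    NS_DECL_NSIOUTPUTSTREAM
  };
  NS_IMPL_ISUPPORTS(Base, nsIOutputStream)

  class Derived : public Base, public nsIInputStream {
    NS_DECL_ISUPPORTS_INHERITED  // unknown IIDs are forwarded to Base
    NS_DECL_NSIINPUTSTREAM
  };
  NS_IMPL_ISUPPORTS_INHERITED(Derived, Base, nsIInputStream)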


TIA




Re: Using rust in Gecko. rust-url compatibility

2015-04-30 Thread Kevin Brosnan
URLs are a user decision to submit.

Kevin

On Thu, Apr 30, 2015 at 4:00 PM, Jason Duell jdu...@mozilla.com wrote:

 +1 to asserting during tests. I'd feel better about doing it on nightly too
 if there were a way to include the offending URI in the crash report.  But
 I'm guessing there's not?

 On Thu, Apr 30, 2015 at 3:42 PM, Jet Villegas jville...@mozilla.com
 wrote:

  I wonder why we'd allow *any* parsing differences here? Couldn't you just
  assert and fail hard while you're testing against our tests and in
 Nightly?
  I imagine the differences you don't catch this way will be so subtle that
  crowd-sourcing is unlikely to catch them either.
 
  --Jet
 
  On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu valentin.g...@gmail.com
  wrote:
 
   As some of you may know, Rust is approaching its 1.0 release in a
 couple
  of
   weeks. One of the major goals for Rust is using a rust library in
 Gecko.
   The specific one I'm working at the moment is adding rust-url as a
 safer
   alternative to nsStandardURL.
  
   This project is still in its infancy, but we're making good progress. A
  WIP
   patch is posted in bug 1151899, while infrastructure support for the
 rust
   compiler is tracked in bug 1135640.
  
   One of the main problems in this endeavor is compatibility. It would be
   best if this change wouldn't introduce any changes in the way we parse
  and
   encode/decode URLs, however rust-url does differ a bit from Gecko's own
   parser. While we can account for the differences we know of, there may
  be a
   lot of other cases we are not aware of. I propose using our volunteer
  base
   in trying to find more of these differences by reporting them on
 Nightly.
  
   My patch currently uses printf to note when a parsing difference
 occurs,
  or
   when any of the getters (GetHost, GetPath, etc) returns a string that's
   different from our native implementation. Printf might not be the best
  way
   of logging these differences though. NSPR logging might work, or even
   writing to a log file in the current directory.
  
   These differences are quite privacy sensitive, so an automatic
 reporting
   tool probably wouldn't work. Has anyone done something like this
 before?
   Would fuzzing be a good way of finding more cases?
  
   I'm waiting for any comments and suggestions you may have.
   Thanks!
 



 --

 Jason


Re: Tab titles no longer underlined in e10s

2015-04-30 Thread Bill McCloskey
There are some bugs that only happen when opening a new e10s window from a
non-e10s browser (the worst is that the Java plugin doesn't work, and will
cause crashes, if you try to open it in an e10s window of a non-e10s
browser). 75% of our nightly population has e10s enabled and we're hoping
to enable it for another 19% when we fix some accessibility issues. So it
didn't seem worth keeping.

-Bill

On Thu, Apr 30, 2015 at 5:16 PM, Daniel Dabrowski 
daniel_dabrow...@hotmail.com wrote:

 Any reason the “New e10s window” is going away? Was quite useful to test a
 few things here and there, without having to fully enable e10s.

 From: Bill McCloskey
 Sent: 30 Apr 2015 21:35
 To: dev-platform ; firefox-dev-owner list
 Subject: Tab titles no longer underlined in e10s

 A very minor announcement:


 Starting in tomorrow's nightly, we will no longer underline tab titles in
 e10s. If you want to find out if a tab is remote, look at its tooltip. For
 remote tabs it will be "title - e10s".


 The "New e10s window" menu item is also going away. If you want an e10s
 window, you need to set the e10s preference and restart. "New non-e10s
 window" will still be around for testing.


 -Bill





 


Re: Tab titles no longer underlined in e10s

2015-04-30 Thread Daniel Dabrowski
Any reason the “New e10s window” is going away? Was quite useful to test a few 
things here and there, without having to fully enable e10s. 

From: Bill McCloskey 
Sent: 30 Apr 2015 21:35
To: dev-platform ; firefox-dev-owner list 
Subject: Tab titles no longer underlined in e10s

A very minor announcement:


Starting in tomorrow's nightly, we will no longer underline tab titles in e10s.
If you want to find out if a tab is remote, look at its tooltip. For remote
tabs it will be "title - e10s".


The "New e10s window" menu item is also going away. If you want an e10s window,
you need to set the e10s preference and restart. "New non-e10s window" will
still be around for testing.


-Bill







Re: A question about do_QueryInterface()

2015-04-30 Thread Seth Fowler

 On Apr 30, 2015, at 12:09 PM, Joshua Cranmer  pidgeo...@gmail.com wrote:
 
 do_QueryInterface is the equivalent of a type-checked downcast, e.g. 
 (ClassName)foo in Java.  (Regular C++ downcasts are not dynamically 
 type-checked).

do_QueryInterface is, in other words, essentially equivalent to dynamic_cast in 
C++, except that because it’s implemented manually people can do strange things 
if they want to. They almost never do, though, so dynamic_cast is a pretty good 
mental model.
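
One of those strange things, for the record: a hand-written QueryInterface may
return a different object than the one you started with (tear-off
implementations do this), so object identity is only guaranteed through
nsISupports itself. Sketch:

  nsCOMPtr<nsISupports> id1 = do_QueryInterface(a);  // canonical identity of a
  nsCOMPtr<nsISupports> id2 = do_QueryInterface(b);  // canonical identity of b
  bool sameObject = (id1 == id2);  // the XPCOM way to compare object identity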

- Seth


Re: Using rust in Gecko. rust-url compatibility

2015-04-30 Thread Jet Villegas
I wonder why we'd allow *any* parsing differences here? Couldn't you just
assert and fail hard while you're testing against our tests and in Nightly?
I imagine the differences you don't catch this way will be so subtle that
crowd-sourcing is unlikely to catch them either.

--Jet

On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu valentin.g...@gmail.com
wrote:

 As some of you may know, Rust is approaching its 1.0 release in a couple of
 weeks. One of the major goals for Rust is using a rust library in Gecko.
 The specific one I'm working at the moment is adding rust-url as a safer
 alternative to nsStandardURL.

 This project is still in its infancy, but we're making good progress. A WIP
 patch is posted in bug 1151899, while infrastructure support for the rust
 compiler is tracked in bug 1135640.

 One of the main problems in this endeavor is compatibility. It would be
 best if this change wouldn't introduce any changes in the way we parse and
 encode/decode URLs, however rust-url does differ a bit from Gecko's own
 parser. While we can account for the differences we know of, there may be a
 lot of other cases we are not aware of. I propose using our volunteer base
 in trying to find more of these differences by reporting them on Nightly.

 My patch currently uses printf to note when a parsing difference occurs, or
 when any of the getters (GetHost, GetPath, etc) returns a string that's
 different from our native implementation. Printf might not be the best way
 of logging these differences though. NSPR logging might work, or even
 writing to a log file in the current directory.
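 
  (To make that concrete, each check might look roughly like this
  hypothetical sketch; the member names are invented:
 
    nsAutoCString geckoHost, rustHost;
    mGeckoURL->GetHost(geckoHost);
    mRustURL->GetHost(rustHost);
    if (!geckoHost.Equals(rustHost)) {
      printf("rust-url difference in GetHost: '%s' vs '%s'\n",
             geckoHost.get(), rustHost.get());
    }
  )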

 These differences are quite privacy sensitive, so an automatic reporting
 tool probably wouldn't work. Has anyone done something like this before?
 Would fuzzing be a good way of finding more cases?

 I'm waiting for any comments and suggestions you may have.
 Thanks!


Re: Using rust in Gecko. rust-url compatibility

2015-04-30 Thread Mike Hommey
On Thu, Apr 30, 2015 at 04:00:33PM -0700, Jason Duell wrote:
 +1 to asserting during tests. I'd feel better about doing it on nightly too
 if there were a way to include the offending URI in the crash report.  But
 I'm guessing there's not?

CrashReporter::AnnotateCrashReport, but as Valentin said, the URL might
be privacy-sensitive.
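
E.g., a sketch (the annotation key is made up):

  CrashReporter::AnnotateCrashReport(NS_LITERAL_CSTRING("URLParseDifference"),
                                     urlSpec);  // urlSpec: the differing URL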

Mike


Re: Using rust in Gecko. rust-url compatibility

2015-04-30 Thread Jason Duell
+1 to asserting during tests. I'd feel better about doing it on nightly too
if there were a way to include the offending URI in the crash report.  But
I'm guessing there's not?

On Thu, Apr 30, 2015 at 3:42 PM, Jet Villegas jville...@mozilla.com wrote:

 I wonder why we'd allow *any* parsing differences here? Couldn't you just
 assert and fail hard while you're testing against our tests and in Nightly?
 I imagine the differences you don't catch this way will be so subtle that
 crowd-sourcing is unlikely to catch them either.

 --Jet

 On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu valentin.g...@gmail.com
 wrote:

  As some of you may know, Rust is approaching its 1.0 release in a couple
 of
  weeks. One of the major goals for Rust is using a rust library in Gecko.
  The specific one I'm working at the moment is adding rust-url as a safer
  alternative to nsStandardURL.
 
  This project is still in its infancy, but we're making good progress. A
 WIP
  patch is posted in bug 1151899, while infrastructure support for the rust
  compiler is tracked in bug 1135640.
 
  One of the main problems in this endeavor is compatibility. It would be
  best if this change wouldn't introduce any changes in the way we parse
 and
  encode/decode URLs, however rust-url does differ a bit from Gecko's own
  parser. While we can account for the differences we know of, there may
 be a
  lot of other cases we are not aware of. I propose using our volunteer
 base
  in trying to find more of these differences by reporting them on Nightly.
 
  My patch currently uses printf to note when a parsing difference occurs,
 or
  when any of the getters (GetHost, GetPath, etc) returns a string that's
  different from our native implementation. Printf might not be the best
 way
  of logging these differences though. NSPR logging might work, or even
  writing to a log file in the current directory.
 
  These differences are quite privacy sensitive, so an automatic reporting
  tool probably wouldn't work. Has anyone done something like this before?
  Would fuzzing be a good way of finding more cases?
 
  I'm waiting for any comments and suggestions you may have.
  Thanks!




-- 

Jason


Re: Intent to deprecate: Insecure HTTP

2015-04-30 Thread Richard Barnes
Hey all,

Thanks a lot for the really robust discussion here.  There have been
several important points raised here:

1. People are more comfortable with requiring HTTPS for new features than
requiring it for features that are currently accessible to non-HTTPS
origins.  Removing or limiting features that are currently available will
require discussion of trade-offs between security and compatibility.

2. Enabling HTTPS can still be a challenge for some website operators.

3. HTTP caching is an important feature for constrained networks.  Content
served over HTTPS cannot be cached by shared intermediaries.

4. There will still be a need for the browser to be able to connect to
things like home routers, which often don’t have certificates

5. It may be productive to take some interim steps, such as placing
limitations on cookies stored by non-HTTPS sites.

It seems to me that these are important considerations to keep in mind as
we move more of the web to HTTPS, but they don’t need to be blocking on a
gradual deprecation of non-secure HTTP.  (More detailed comments are
below.)  So I’m concluding that there’s rough consensus here behind the
idea of limiting features to secure contexts as a way to move the web
toward HTTPS.   I’ve posted a summary of our plans going forward on the
Mozilla security blog [1].

Thanks
--Richard

[1]
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/

Some more detailed thoughts:

1. Obviously, lots of caution will be necessary if and when we start
removing features from non-secure contexts.  However, based on the
discussions of things like limiting cookies in this thread, and other
discussions in the “powerful features” threads, it seems that there’s still
some interest in trying to find features where the degree of breakage is
small enough to be compensated by the security benefit.  So it makes sense
to keep the removal or limitation of existing features on the overall
roadmap, with the caveat that we will need to calibrate the
breakage/security trade-offs before taking action.

2. While enabling HTTPS inherently involves more work than enabling
non-secure HTTP, there’s a lot of work going on to make it easier, ranging
from Cloudflare’s Universal SSL to Let’s Encrypt.  Speaking practically,
this non-secure HTTP deprecation process won’t be causing problems for
existing non-secure websites for some time, so there’s time for these
efforts to make progress before the pressure to use HTTPS really sets in.

3. Caching and performance are important, but so is user privacy.  It is
possible to do secure caching, but it will need to be carefully engineered
to avoid leaking more information than necessary.  (I think Martin Thomson
and Patrick McManus have done some initial work here.)  As with the prior
point, the fact that this non-secure HTTP deprecation will happen gradually
means that we have time to evaluate the requirements here and develop any
technology that might be necessary.

4. This seems like a problem that can be solved by the home router vendors
if they want to solve it.  For example, Vendor X could provision routers
with names like “router-123.vendorx.com” and certificates for those names,
and print the router name on the side of the router (just like WPA keys
today).  Also, interfaces to these sorts of devices don’t typically use a
lot of advanced web features, so may not be impacted by this deprecation
plan for a long time (if ever).

5. We can take these interim steps *and* work toward deprecation.


On Mon, Apr 13, 2015 at 7:57 AM, Richard Barnes rbar...@mozilla.com wrote:

 There's pretty broad agreement that HTTPS is the way forward for the web.
 In recent months, there have been statements from IETF [1], IAB [2], W3C
 [3], and even the US Government [4] calling for universal use of
 encryption, which in the case of the web means HTTPS.

 In order to encourage web developers to move from HTTP to HTTPS, I would
 like to propose establishing a deprecation plan for HTTP without security.
 Broadly speaking, this plan would entail  limiting new features to secure
 contexts, followed by gradually removing legacy features from insecure
 contexts.  Having an overall program for HTTP deprecation makes a clear
 statement to the web community that the time for plaintext is over -- it
 tells the world that the new web uses HTTPS, so if you want to use new
 things, you need to provide security.  Martin Thomson and I drafted a
 one-page outline of the plan with a few more considerations here:


 https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

 Some earlier threads on this list [5] and elsewhere [6] have discussed
 deprecating insecure HTTP for powerful features.  We think it would be a
 simpler and clearer statement to avoid the discussion of which features are
 powerful and focus on moving all features to HTTPS, powerful or not.

 The goal of this thread is to determine whether there is support in the
 Mozilla community for a plan of this general form.  

Re: It is now possible to apply arbitrary tags to tests/manifests and run all tests with a given tag

2015-04-30 Thread Christopher Manchester
You can now add --tag arguments to try syntax and they will get passed to
test harnesses in your try push. Details of the implementation are in bug
978846, but if you're interested in passing other arguments from try syntax
to a test harness, this can be done by adding those arguments to
testing/config/mozharness/try_arguments.py. Note this is still rather
coarse in the sense that arguments are forwarded without regard for whether
a harness supports a particular argument, but I can imagine it being useful
in a number of cases (for instance, when testing the feature with xpcshell
and --tag devtools, I was able to get feedback in about ten minutes
whether things were working rather than waiting for every xpcshell test to
run).
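
For example, a try push along these lines should exercise it (a sketch; pick
your own platforms and builders):

  try: -b o -p linux64 -u xpcshell --tag devtools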

Chris

On Thu, Apr 2, 2015 at 2:22 PM, Andrew Halberstadt ahalberst...@mozilla.com
 wrote:

 Minor update. It was pointed out that other list-like manifestparser
 attributes (like head and support-files) are whitespace delimited instead
 of comma delimited. To be consistent I switched tags to whitespace
 delimitation as well.

 E.g both these forms are ok:

 [test_foo.html]
 tags = foo bar baz

 [test_bar.html]
 tags =
 foo
 bar
 baz

 -Andrew


 On 31/03/15 12:30 PM, Andrew Halberstadt wrote:

 As of bug 987360, you can now run all tests with a given tag for
 mochitest (and variants), xpcshell and marionette based harnesses. Tags
 can be applied to either individual tests, or the DEFAULT section in
 manifests. Tests can have multiple tags, in which case they should be
 comma delimited. To run all tests with a given tag, pass in "--tag <tag
 name>" to the mach command.

 For example, let's say we want to group all mochitest-plain tests
 related to canvas together. First we'd add a 'canvas' tag to the DEFAULT
 section in

 https://dxr.mozilla.org/mozilla-central/source/dom/canvas/test/mochitest.ini


 [DEFAULT]
 tags = canvas

 We notice there is also a canvas related test under dom/media, namely:

 https://dxr.mozilla.org/mozilla-central/source/dom/media/test/mochitest.ini#541


 Let's pretend it is already tagged with the 'media' tag, but that's ok,
 we can add a second tag no problem:

 [test_video_to_canvas.html]
 tags = media,canvas

 Repeat above for any other tests or manifests scattered in the tree that
 are related to canvas. Now we can run all mochitest-plain tests with:

 ./mach mochitest-plain --tag canvas

 You can also run the union of two tags by specifying --tag more than
 once (though the intersection of two tags is not supported):

 ./mach mochitest-plain --tag canvas --tag media

 So far the xpcshell (./mach xpcshell-test --tag name) and marionette
 (./mach marionette-test --tag name) commands are also supported. Reftest
 is not supported as it has its own special manifest format.

 Applying tags to tests will not affect automation or other people's
 tags. So each organization or team should feel free to use tags in
 whatever creative ways they see fit. Eventually, we'll start using tags
 as a foundation for some more advanced features and analysis. For
 example, we may implement a way to run all tests with a given tag across
 multiple different suites.

 If you have any questions or things aren't working, please let me know!

 Cheers,
 Andrew


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using rust in Gecko. rust-url compatibility

2015-04-30 Thread Bob Clary

On 04/30/2015 04:08 PM, Chris Hofmann wrote:

check to see if we still have any automated crawlers running that
could go looking for problems.

give the folks that run the crawlers an instrumented build, and strong
liquor for best results.

-chofmann


I run a system called Bughunter that scans a subset of the URLs 
submitted to the crash reports system in an attempt to reproduce the 
crashes our users see. It runs on Windows 7, OSX 10.{6,8,9} and RHEL6.


If this were to land in Nightly and if it were to produce a fatal 
assertion such as "Assertion failure:..." or "ABORT:..." or even a 
non-fatal ASSERTION, it would automatically be recorded in our webapp [*].


I've also used these VMs for other scanning purposes. For example, I 
scanned the top 1 million sites from Alexa over a period in December 
2014 - January 2015 and produced data and files for all Flash files 
discovered for use by the Shumway folks.


So, this is something we could support with Bughunter or a subset of the 
Bughunter VMs. Hopefully this will not produce a large number of 
assertions, since I wouldn't want it to detract from our finding 
security-sensitive crashes.


I might need some help in producing these Frankenbuilds on all three 
platforms.


/bc

[*] The URLs tested come from our users via the Crash Reports system and 
are privacy-sensitive. They also frequently contain extreme NSFW 
content and have in the past contained illegal content, which may be 
harmful to the viewer.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used

2015-04-30 Thread Trevor Saunders
On Wed, Apr 29, 2015 at 02:53:03PM -0400, Ehsan Akhgari wrote:
 On 2015-04-27 9:54 PM, Trevor Saunders wrote:
 On Mon, Apr 27, 2015 at 09:07:51PM -0400, Ehsan Akhgari wrote:
 On Mon, Apr 27, 2015 at 5:45 PM, Trevor Saunders tbsau...@tbsaunde.org
 wrote:
 
 On Mon, Apr 27, 2015 at 03:48:48PM -0400, Ehsan Akhgari wrote:
 Right now, our coding style requires that both the virtual and override
 keywords be specified for overridden virtual functions.  A few things
 have changed since we decided that a number of years ago:
 
 1. The override and final keywords are now available on all of the
 compilers that we build with, and we have stopped supporting compilers
 that
 do not support these features.
 2. We have very recently gained the ability to run clang-based mass
 source
 code transformations over our code that would let us enforce the coding
 style [1].
 
 I would like to propose a change to our coding style, and run a mass
 transformation over our code in order to enforce it.  I think we should
 change it to require the usage of exactly one of these keywords per
 *overridden* function: virtual, override, and final.  Here are the
 advantages:
 
 I believe we have some cases in the tree where a virtual function
 doesn't override but is final, so you need to write virtual final.  I
 believe one of those cases may be so a function can be called indirectly
 from outside libxul, and another might be to prevent people using some
 cc macros incorrectly.
 
 
 Any chance you could point us to those functions, please?
 
 http://mxr.mozilla.org/mozilla-central/source/xpcom/glue/nsCycleCollectionParticipant.h#548
 
 Hmm, we can probably make an exception for this one.
 
 and this one isn't final, but we could make it final if the TU will be
 built into libxul (because then devirtualization is fine)
 http://mxr.mozilla.org/mozilla-central/source/dom/base/nsIDocument.h#1287
 
 I'm not sure why that function is virtual.  If it's just in order to make it
 possible to call it through a vtable from outside of libxul, I'm not sure
 why we need to treat it differently than other XPCOM APIs for example.  If
 this is not used outside of libxul, we can probably make it non-virtual.

it's the former.  We don't need to do anything special, but we could if
we wanted.  All that said, it turns out it's not terribly related.

 * It is more succinct, as |virtual void foo() override;| doesn't convey
 more information than |void foo() override;|.
 
 personally I think it takes significantly more mental effort to read
 |void foo() final;| to mean it overrides something and is virtual than if
 it's explicit, as in |virtual void foo() override final;|
 
 and actually I'd also like it if C++ changed to allow override and final
 on non-virtual functions, in which case this would be a loss of
 information.
 
 
 Well, we're not talking about changing C++.  ;-)  But why do you find it
 
 I didn't say we were, but should such a change happen, this would then be
 confusing.
 
 C++ doesn't usually change in backwards incompatible ways, and for all
 practical intents and purposes we can proceed under the assumption that this
 will never happen, so we don't need to protect against it.

I don't think it would actually be backward incompatible; the only
changes would be turning invalid programs into valid ones.

 more clear to say |virtual ... final| than |... final|?  They both convey
 the exact same amount of information.  Is it just habit and/or personal
 preference?
 
 it's explicit vs. implicit. Yes, it is true that you can derive that foo
 is virtual from |void foo() final;|. It doesn't take any habit or thinking
 to see that from |virtual void foo() override final;|, but I would claim
 it's not as obvious with |void foo() final;|.  I don't think that's really
 a preference, any more than, say, preferring text files to gzipped files ;)
 
 I disagree.  I understand the argument that someone who doesn't know these
 new keywords won't understand the distinction, but since we are already using
 these keywords, I think it is reasonable to expect people to either know or
 learn what these keywords mean.  And once they do, then it really becomes a
 matter of preference.
 
 Another way to phrase this would be: if someone wonders whether foo in |void
 foo() final;| is virtual or not, they need to study the meaning of these
 keywords.  :-)

yes, but they also need to think about the meaning instead of just
reading.
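
To make the spellings under debate concrete, here is a minimal C++11 sketch
(class and method names are hypothetical):

  class Base {
  public:
    virtual void Foo();
  };

  class Derived : public Base {
  public:
    // Current style: both keywords spelled out.
    virtual void Foo() override;
  };

  class Derived2 : public Base {
  public:
    // Proposed style: 'override' alone, since it already implies that
    // the function is virtual.
    void Foo() override;
  };

  class Derived3 : public Base {
  public:
    // Proposed style, leaf case: 'final' alone implies both virtual
    // and (here) overriding -- the implicit reading debated above.
    void Foo() final;
  };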

 * It can be easily enforced using clang-tidy across the entire code base.
 
 That doesn't really seem like an argument for choosing this particular
 style; we could as easily check that virtual functions are always marked
 virtual, and marked override if possible.
 
 
 Except that is more tricky to get right.  Please see bug 1158776.
 
 not if we have a static analysis that checks override is always present.
 
 We don't have that.  Please note that I'm really not interested in building
 a whole set of infrastructure just in order to keep the current wording of
 the coding style.  I 

Re: Announcing Operation Instrument

2015-04-30 Thread Honza Bambas

On 4/30/2015 2:08, Robert O'Callahan wrote:

On Thu, Apr 30, 2015 at 2:52 AM, Honza Bambas hbam...@mozilla.com wrote:


Just to let you know about my intensive work on Backtrack, or Caller Chain,
which is about connecting Gecko Profiler and Task Tracer together to catch
all (instrumented) inter-object and inter-thread calls + IO blocking, lock
waits, queuing, event (nsIRunnable) dispatches, network query/response
pairing, etc.  In the captured chain it's then possible to select a path,
which is then back-walked, with the markers on it annotated as being on the
path.  The result is a per-thread tape-like timeline showing where your
selected path is going through and how it is blocked by other stuff running in
parallel/sequentially on other threads (dispatch/IO/dequeue/network
response delays).  This all should completely replace Visual Event Tracer,
which is a good logging tool but a poor analytical tool.  I'm piggybacking
the existing PROFILER_LABEL_* macros and TT's automatic instrumentation.
The UI should eventually end up being part of Cleopatra (the Gecko Profiler UI).


That sounds awesome!

It does seem like a good idea to integrate this with devtools somehow...

Rob


I think devtools are for web developers.  What I do is more on the SPS 
level - for us, Gecko developers.  Hence I'm writing it as a feature 
you have to turn on at build time, rather than flooding the webdev tools 
with it.  But we'll see.
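
For context on the piggybacking mentioned above: PROFILER_LABEL annotations
are scoped markers in Gecko C++ code, roughly like the sketch below (the
function is hypothetical, and the exact macro arity has changed across
Gecko versions, so check GeckoProfiler.h before copying):

  #include "GeckoProfiler.h"

  void DoExpensiveWork()
  {
    // Pushes a pseudo-stack entry on scope entry and pops it on scope
    // exit; the profiler samples these entries, and Backtrack can hang
    // its path data off the same annotation points.
    PROFILER_LABEL("Example", "DoExpensiveWork",
                   js::ProfileEntry::Category::OTHER);
    // ... actual work ...
  }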


-hb-
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: String array from JS - C++ via IDL

2015-04-30 Thread Jan Jongboom
On Thursday, April 30, 2015 at 4:29:21 PM UTC+2, Jan Jongboom wrote:
 I have an IDL file and I want to add a new attribute that contains an array 
 of strings. The interface is implemented in JavaScript and I'm writing C++ 
 code.
 
 IDL:
 
 readonly attribute nsIArray osPaths; // DOMString[]
 
 Consuming in C++:
 
 nsCOMPtr<nsIArray> bla;
 app->GetOsPaths(getter_AddRefs(bla));
 
 uint32_t length;
 bla->GetLength(&length);
 printf("Length=%d\n", length);
 
 All OK. Prints 'Length=1' when I add one element to the array. But now... how 
 do I get the strings out of here? I found do_QueryElementAt and its 
 nsISupports* friends, but none of them say they can handle it...
 
 for (uint32_t j = 0; j < length; ++j) {
   nsCOMPtr<nsISupportsPrimitive> isp = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsCString> iscs = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsString> iss = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsPRBool> isb = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsPRUint8> isu8 = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsPRUint16> isu16 = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsPRUint32> isu32 = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsChar> isc = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsPRInt16> isi16 = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsPRInt32> isi32 = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsFloat> isf = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsDouble> isd = do_QueryElementAt(bla, j);
   nsCOMPtr<nsISupportsInterfacePointer> isip = do_QueryElementAt(bla, j);
 
   printf("isp=%d ", !!isp);
   printf("iscs=%d ", !!iscs);
   printf("iss=%d ", !!iss);
   printf("isb=%d ", !!isb);
   printf("isu8=%d ", !!isu8);
   printf("isu16=%d ", !!isu16);
   printf("isu32=%d ", !!isu32);
   printf("isc=%d ", !!isc);
   printf("isi16=%d ", !!isi16);
   printf("isi32=%d ", !!isi32);
   printf("isf=%d ", !!isf);
   printf("isd=%d ", !!isd);
   printf("isip=%d ", !!isip);
   printf("\n");
 }
 
 Result: isp=0 iscs=0 iss=0 isb=0 isu8=0 isu16=0 isu32=0 isc=0 isi16=0 isi32=0 
 isf=0 isd=0 isip=0
 
 So what type is in here, and how can I get it out of the array? I just want 
 normal nsString objects.

Solved it. I was not passing an nsIArray from the JS side but a plain JS array.
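
For anyone landing here later, a minimal sketch of the consuming side once a
real nsIArray of nsISupportsString elements is passed from JS (reusing the
variable names from the code above):

  for (uint32_t j = 0; j < length; ++j) {
    nsCOMPtr<nsISupportsString> iss = do_QueryElementAt(bla, j);
    if (iss) {
      nsString value;
      iss->GetData(value);
      // |value| now holds the j-th DOMString element.
    }
  }

On the JS side the array needs to be an nsIMutableArray (contract ID
"@mozilla.org/array;1") whose elements are nsISupportsString instances,
rather than a plain JS array.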
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform