Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-21 Thread Dao

On 21.05.2014 01:27, Rik Cabanier wrote:

Likewise here. I don't think anyone is saying that hardwareConcurrency
is failing on the grounds of exposing too much system information alone.
The way I read this thread, people either aren't convinced that it's the
right compromise given its usefulness, or that it's the right API for the
task at hand in the first place.



Yeah, I don't think anyone has the answer. My thoughts are that if this
proposed feature works on other platforms, why not here? I understand
Ehsan's points, but they can be made about any other platform where this is
used successfully (e.g. Photoshop, parallel builds, web servers, databases,
games, etc.).
I don't understand people's assertions about why the web platform needs to
be different.


It's generally expected that native applications need to be updated, 
recompiled or completely rewritten after some time as platforms and 
hardware architectures change. (Microsoft traditionally tries hard to 
keep Windows backward compatible, but this is only ever a compromise and 
doesn't work universally.) This is not how the Web is supposed to work 
-- Web technology needs to be forward compatible. People have previously 
pointed out that navigator.hardwareConcurrency could become increasingly 
meaningless if not harmful in the foreseeable future.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-20 Thread Gavin Sharp
I think it might help your case to acknowledge the often significant
difference between "technically possible, but expensive and
unreliable" and "extremely simple and 100% reliable". That something
is already technically possible does not mean that making it easier
has no consequences. Arguing that the incremental fingerprinting risk
is negligible is reasonable, but you lose credibility if you suggest
it doesn't exist.

Gavin

On Tue, May 20, 2014 at 12:30 AM, Rik Cabanier caban...@gmail.com wrote:
 On Tue, May 20, 2014 at 12:24 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, May 19, 2014 at 7:14 PM, Mike Hommey m...@glandium.org wrote:
  On Mon, May 19, 2014 at 06:35:49PM -0700, Jonas Sicking wrote:
  On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com
 wrote:
   I don't see why the web platform is special here and we should trust
 that
   authors can do the right thing.
 
  I'm fairly sure people have already pointed this out to you. But the
  reason the web platform is different is that we allow
  arbitrary application logic to run on the user's device without any
  user opt-in.
 
  I.e. the web is designed such that it is safe for a user to go to any
  website without having to consider the risks of doing so.
 
  This is why we for example don't allow websites to have arbitrary
  read/write access to the user's filesystem. Something that all the
  other platforms that you have pointed out do.
 
  Those platforms instead rely on users making a security decision
  before allowing any code to run. This has both advantages (easier to
  design APIs for those platforms) and disadvantages (malware is pretty
  prevalent on for example Windows).
 
  As much as I agree the API is not useful, I don't buy this argument
  either. What prevents a web app from just using n workers, where n is a
  much bigger number than what would be returned by the API?

 Nothing. The attack I'm trying to prevent is fingerprinting. Allowing
 websites to run a large number of workers does not enable
 fingerprinting.


 Eli's polyfill can already be used to do fingerprinting [1]. It's not very
 good at giving consistent and accurate results, which makes it less
 suitable for planning your workload. It also wastes a lot of CPU cycles.

 1: http://wg.oftn.org/projects/core-estimator/demo/
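For context, the inference such a polyfill performs can be sketched as follows. This is an editor's illustration, not the polyfill's actual code: the timing step is stubbed out as input data so the logic stays deterministic, and `estimateCores` and the 80% threshold are assumptions.

```javascript
// Sketch of the inference a polyfill like core-estimator performs: run the
// same busy-loop under k concurrent workers, measure total throughput, and
// report the largest k that still scaled close to linearly. A real
// polyfill would measure actual workers (and burn CPU doing it).
function estimateCores(throughputByWorkers) {
  const base = throughputByWorkers[1];
  let estimate = 1;
  for (const [k, throughput] of Object.entries(throughputByWorkers)) {
    const n = Number(k);
    // "close to linearly": within 80% of an ideal n-times speedup
    if (throughput >= base * n * 0.8) {
      estimate = Math.max(estimate, n);
    }
  }
  return estimate;
}
```

On a quad-core machine the measured throughput might look like `{ 1: 100, 2: 198, 4: 390, 8: 405 }`, from which this sketch infers 4; the noisiness of real measurements is exactly the inconsistency Rik describes above.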


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-19 Thread Jonas Sicking
On Mon, May 12, 2014 at 5:03 PM, Rik Cabanier caban...@gmail.com wrote:
 Primary eng emails
 caban...@adobe.com, bugm...@eligrey.com

 *Proposal*
 http://wiki.whatwg.org/wiki/NavigatorCores

 *Summary*
 Expose a property on navigator called hardwareConcurrency that returns the
 number of logical cores on a machine.

 *Motivation*
 All native platforms expose this property. It's reasonable to expose the
 same capabilities that native applications get, so web applications can be
 developed with equivalent features and performance.

 *Mozilla bug*
 https://bugzilla.mozilla.org/show_bug.cgi?id=1008453
 The patch is currently not behind a runtime flag, but I could add it if
 requested.

 *Concerns*
 The original proposal required that a platform return the exact number
 of logical CPU cores. To mitigate the fingerprinting concern, the proposal
 was updated so a user agent can lie about this.
 In the case of WebKit, it will return a maximum of 8 logical cores so that
 high-value machines can't be discovered. (Note that it's already possible
 to make a rough estimate of the number of cores.)
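As an illustration of the intended use, a page might size a worker pool from the property like this. This is a sketch only: the fallback value, mirroring the cap of 8, and reserving a core for the UI are assumptions, not part of the proposal.

```javascript
// Pick a worker-pool size from the reported concurrency. The property may
// be missing (older browsers) or deliberately capped by the UA, so treat
// it as a hint, not a guarantee.
function pickPoolSize(nav) {
  const reported = nav.hardwareConcurrency || 2; // assumed fallback guess
  const capped = Math.min(reported, 8);          // mirror WebKit's cap
  return Math.max(1, capped - 1);                // leave a core for the UI
}
```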

Here's the responses that I sent to blink-dev before you sent the
above email here.

For what it's worth, in Firefox we've avoided implementing this due to
the increased fingerprintability. Obviously we can't forbid any APIs
which increase fingerprintability, however in this case we felt that
the utility wasn't high enough given that the number of cores on the
machine often does not equal the number of cores available to a
particular webpage.

A better approach is an API which enables the browser to determine how
much to parallelize a particular computation.

and

Do note that the fact that you can already approximate this API using
workers is just as much an argument that no additional fingerprinting
entropy is exposed here as it is an argument that this use case has
already been addressed.

Additionally, many of the people that are fingerprinting right now are
unlikely to be willing to peg the CPU for 20 seconds in order to get a
reliable fingerprint. Though obviously there are exceptions.

Another relevant piece of data here is that we simply haven't gotten
high priority requests for this feature. This lowers the relative
value-to-risk ratio.

I still feel like the value-to-risk ratio here isn't good enough. It
would be relatively easy to define a WorkerPool API which spins up
additional workers as needed.

A very simple version could be something like:

page.html:
var wp = new WorkerPool("worker.js");
wp.onmessage = resultHandler;
myArrayOfWorkTasks.forEach(x => wp.postMessage(x));

worker.js:
onmessage = function(e) {
  var res = doHeavyComputationWith(e.data);
  postMessage(res);
}
function doHeavyComputationWith(val) {
  ...
}

This obviously is very handwavey. It's definitely missing some
mechanism to make sure that you get the results back in a reasonable
order. But it's not rocket science to get this to be a coherent
proposal.
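For what it's worth, the ordering mechanism alluded to above can be sketched without Workers at all. This is an editor's illustration: promise-based tasks stand in for worker messages, and `runPool` is a made-up name, not a proposed API.

```javascript
// Run `tasks` (functions returning promises) with at most `limit` in
// flight at once, resolving to the results in submission order regardless
// of completion order.
async function runPool(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function runner() {
    while (next < tasks.length) {
      const i = next++;              // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  const width = Math.max(1, Math.min(limit, tasks.length));
  await Promise.all(Array.from({ length: width }, () => runner()));
  return results;
}
```

The browser (or a library) would choose `limit` internally, which is exactly the "let the browser decide how much to parallelize" shape described earlier in the thread.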

/ Jonas


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-19 Thread Benoit Jacob
+1000! Thanks for articulating so clearly the difference between the
Web-as-an-application-platform and other application platforms.

Benoit




2014-05-19 21:35 GMT-04:00 Jonas Sicking jo...@sicking.cc:

 On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com wrote:
  I don't see why the web platform is special here and we should trust that
  authors can do the right thing.

 I'm fairly sure people have already pointed this out to you. But the
 reason the web platform is different is that we allow
 arbitrary application logic to run on the user's device without any
 user opt-in.

 I.e. the web is designed such that it is safe for a user to go to any
 website without having to consider the risks of doing so.

 This is why we for example don't allow websites to have arbitrary
 read/write access to the user's filesystem. Something that all the
 other platforms that you have pointed out do.

 Those platforms instead rely on users making a security decision
 before allowing any code to run. This has both advantages (easier to
 design APIs for those platforms) and disadvantages (malware is pretty
 prevalent on for example Windows).

 / Jonas


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-19 Thread Rik Cabanier
On Mon, May 19, 2014 at 6:35 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com wrote:
  I don't see why the web platform is special here and we should trust that
  authors can do the right thing.

 I'm fairly sure people have already pointed this out to you. But the
 reason the web platform is different is that we allow
 arbitrary application logic to run on the user's device without any
 user opt-in.

 I.e. the web is designed such that it is safe for a user to go to any
 website without having to consider the risks of doing so.

 This is why we for example don't allow websites to have arbitrary
 read/write access to the user's filesystem. Something that all the
 other platforms that you have pointed out do.

 Those platforms instead rely on users making a security decision
 before allowing any code to run. This has both advantages (easier to
 design APIs for those platforms) and disadvantages (malware is pretty
 prevalent on for example Windows)


I'm unsure what point you are trying to make.
This is not an API that exposes any more information than a user-agent
sniffer can approximate.
It will just be more precise and less wasteful. For high-value systems
(= lots of cores), we intentionally limited the number of cores to 8. That
number of cores is very common, and most applications won't see much
benefit above 8 anyway.


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-19 Thread Rik Cabanier
On Mon, May 19, 2014 at 6:46 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:

 +1000! Thanks for articulating so clearly the difference between the
 Web-as-an-application-platform and other application platforms.


It really surprises me that you would make this objection.
WebGL certainly would *not* fall into this Web-as-an-application-platform
category, since it exposes machine information [1] and is generally
considered insecure [2] by Apple and (in the past) Microsoft.

Please note that I really like WebGL and am not worried about these issues.
I'm just pointing out the double standard.

1: http://renderingpipeline.com/webgl-extension-viewer/
2: http://lists.w3.org/Archives/Public/public-fx/2012JanMar/0136.html


 2014-05-19 21:35 GMT-04:00 Jonas Sicking jo...@sicking.cc:

  On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com
 wrote:
  I don't see why the web platform is special here and we should trust
 that
  authors can do the right thing.

 I'm fairly sure people have already pointed this out to you. But the
 reason the web platform is different is that we allow
 arbitrary application logic to run on the user's device without any
 user opt-in.

 I.e. the web is designed such that it is safe for a user to go to any
 website without having to consider the risks of doing so.

 This is why we for example don't allow websites to have arbitrary
 read/write access to the user's filesystem. Something that all the
 other platforms that you have pointed out do.

 Those platforms instead rely on users making a security decision
 before allowing any code to run. This has both advantages (easier to
 design APIs for those platforms) and disadvantages (malware is pretty
 prevalent on for example Windows).

 / Jonas


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-18 Thread Rik Cabanier
FYI this attribute landed in WebKit today:
http://trac.webkit.org/changeset/169017


On Thu, May 15, 2014 at 1:26 AM, Rik Cabanier caban...@gmail.com wrote:




 On Wed, May 14, 2014 at 11:39 AM, Ehsan Akhgari 
 ehsan.akhg...@gmail.com wrote:

 On 2014-05-13, 9:01 PM, Rik Cabanier wrote:

 ...

 The problem is that the API doesn't really make it obvious that
 you're not supposed to take the value that the getter returns and
 just spawn N workers.  IOW, the API encourages the wrong behavior by
 design.


 That is simply untrue.


 I'm assuming that the goal of this API is to allow authors to spawn as
 many workers as possible so that they can exhaust all of the cores in the
 interest of finishing their computation faster.


 That is one way of using it but not the only one.
 For instance, let's say that I'm writing on a cooperative game. I might
 want to put all my network logic in a worker and want to make sure that
 worker is scheduled. This worker consumes little (if any) cpu, but I want
 it to be responsive.
 NumCores = 1 -> do everything in the main thread and try to make sure the
 network code executes
 NumCores = 2 -> spin up a worker for the network code. Everything else in
 the main thread
 NumCores = 3 -> spin up a worker for the network code + another one for
 physics and image decompression. Everything else in the main thread
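The tiering just described could be sketched as a simple lookup on the reported count. This is an illustrative sketch by the editor; `planWorkers` and the worker names are not part of any proposal.

```javascript
// Decide which dedicated workers a cooperative game might spin up,
// keyed off the reported core count (hypothetical helper).
function planWorkers(numCores) {
  if (numCores <= 1) return [];            // everything on the main thread
  if (numCores === 2) return ["network"];  // keep network logic responsive
  // 3+ cores: also offload physics and image decompression
  return ["network", "physics-and-image-decode"];
}
```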


  I have provided reasons why any thread which is running at a higher
 priority on the system busy doing work is going to make this number an
 overapproximation. I have given you two examples of higher-priority
 threads that we're currently shipping in Firefox (Chrome Workers and the
 MediaStreamGraph thread)


 You're arguing against basic multithreading functionality. I'm unsure how
 ANY thread framework in a browser could fix this since there might be other
 higher priority tasks in the system.
 For your example of Chrome Workers and MediaStreamGraph, I assume those
 don't run at a constant 100% so a webapp that grabs all cores will still
 get more work done.


  and have provided you with experimental evidence that running Eli's test
 case, trying to exhaust as many cores as it can, fails to predict the
 number of cores in these situations.


 Eli's code is an approximation. It doesn't prove anything.
 I don't understand your point here.


  If you don't find any of this convincing, I'd respectfully ask us to
 agree to disagree on this point.


 OK.


  For the sake of argument, let's say you are right. How are things worse
 than before?


 I don't think we should necessarily try to find a solution that is just
 not worse than the status quo, I'm more interested in us implementing a
 good solution here (and yes, I'm aware that there is no concrete proposal
 out there that is better at this point.)


 So, worst case, there's no harm.
 Best case, we have a more responsive application.

  ...


 That's fine but we're coming right back to the start: there is no way
 for informed authors to make a decision today.


 Yes, absolutely.


  The "let's build something complex that solves everything" proposal
 won't be done for a long time. Meanwhile apps can make responsive UIs
 and fluid games.


 That's I think one fundamental issue we're disagreeing on.  I think that
 apps can build responsive UIs and fluid games without this today on the Web.


 Sure. You can build apps that don't tax the system or that are
 specifically tailored to work well on a popular system.


  There were 24,000 hits for "java", which is on the web and a VM, but now
 you say that it's not a vote of popularity?


 We may have a different terminology here, but to me, positive feedback
 from web developers should indicate a large amount of demand from the web
 developer community for us to solve this problem at this point, and also a
 strong positive signal from them on this specific solution with the flaws
 that I have described above in mind.  That simply doesn't map to searching
 for API names on non-Web technologies on github. :-)


 This was not a simple search. Please look over the examples especially the
 node.js ones and see how it's being used.
 This is what we're trying to achieve with this attribute.


 Also, FTR, I strongly disagree that we should implement all popular Java
 APIs just because there is a way to run Java code on the web.  ;-)

  ...

 Can you restate the actual problem? I reread your message but didn't
 find anything that indicates this is a bad idea.


 See above where I re-described why this is not a good technical solution
 to achieve the goal of the API.

  Also, as I've mentioned several times, this API basically ignores the
 fact that there are AMP systems shipping *today* and does not take into
 account that future Web engines may try to use as many cores as they can
 at a higher priority (Servo being one example.)


 OK. They're free to do so. This is not a problem (see previous messages)
 It seems like you're arguing against basic multithreading again.



Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-18 Thread Xidorn Quan
IMO, though we may have a better model in the future, it is at least not
harmful to have such an attribute with some limitation. The WebKit folks
think it is not a fingerprinting risk when limiting the max value to 8. I
think it might be meaningful to also limit the number to a power of 2
(very few people have 3 or 6 cores, so those users could be fingerprinted
as well).

And I think it makes sense to state that the UA does not guarantee this
value to be constant, so that the UA can return whatever value it feels
comfortable with when the getter is invoked. Maybe in the future we can
even have an event to notify the script that the number has changed.

In addition, considering that WebKit has landed this feature and Blink is
also going to implement it, it is not a bad idea for us to have the
attribute as well.
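The cap-then-power-of-two idea above could be sketched as follows. This is an editor's illustration; `fuzzedConcurrency` is a made-up name, and the choice of clamping before rounding is an assumption.

```javascript
// Report the largest power of two not exceeding the real core count,
// capped at 8, so unusual counts (3, 6, ...) don't stand out.
function fuzzedConcurrency(actualCores, cap = 8) {
  const clamped = Math.max(1, Math.min(actualCores, cap));
  return 2 ** Math.floor(Math.log2(clamped));
}
```

A 6-core machine would report 4 under this scheme, blending in with common quad-core hardware.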



Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-18 Thread Rik Cabanier
On Sun, May 18, 2014 at 4:51 PM, Xidorn Quan quanxunz...@gmail.com wrote:

 IMO, though we may have a better model in the future, it is at least not
 harmful to have such attribute with some limitation. The WebKit guys think
 it is not a fingerprinting when limiting the max value to 8. I think it
 might be meaningful to also limit the number to power of 2 (very few people
 has 3 or 6 cores so they will be fingerprinting as well).


There are CPUs from Intel [1], AMD [2], Samsung [3] and possibly others
that have 6 cores. I'm unsure why we would treat them differently since
they're not high-value systems.

And I think it makes sense to announce that, UA does not guarantee this
 value a constant, so that UA can return whatever value it feels comfortable
 with when the getter is invoked. Maybe in the future, we can even have an
 event to notify the script that the number has been changed.


Yes, if a user agent wants to return a lower number (i.e. so a
well-behaved application leaves a CPU free), it's free to do so.
I'm unsure if the event is needed, but that can be addressed later.


 In addition, considering that WebKit has landed this feature, and Blink is
 also going to implement that, it is not a bad idea for us to have the
 attribute as well.


The WebKit patch limits the maximum number to 8. The Blink patch currently
does not limit what it returns.
My proposed Mozilla patch [4] makes the maximum return value configurable
through a dom.maxHardwareConcurrency preference key. It currently has a
default value of 8.

1: http://ark.intel.com/products/63697 http://ark.intel.com/products/77780
2:
http://products.amd.com/en-us/DesktopCPUDetail.aspx?id=811&f1=&f2=&f3=&f4=&f5=&f6=&f7=&f8=&f9=&f10=&f11=&f12=
3:
http://www.samsung.com/global/business/semiconductor/minisite/Exynos/products5hexa.html
4: https://bugzilla.mozilla.org/show_bug.cgi?id=1008453



Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-16 Thread lrbabe
Do you think it would be feasible that the browser fires events every time
the number of cores available for a job changes? That might allow building
an efficient event-based worker pool.

In the meantime, there are developers out there who are downloading
micro-benchmarks on every client to stress-test the browser and determine
the number of physical cores. This is nonsense, we can all agree, but
unless you give them a short-term alternative, they'll keep doing exactly
that. And native will keep looking a lot more usable than the web.


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-16 Thread lrbabe
Here's the naive worker pool implementation I was thinking about. It
requires that the browser fires an event every time a core becomes
available (only in an active tab, of course), and provides a property that
tells whether or not a core is available at a given time:

// a handler that runs when a job is added to the queue or when a core
// becomes available
function jobHandler() {
  if ( isJobInTheQueue && isCoreAvailable ) {
    if ( noWorkerAvailable ) {
      pool.spawnWorker();
    }
    pool.distribute( queue.pullJob() );
  }
}



Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-16 Thread Rik Cabanier
On Fri, May 16, 2014 at 11:03 AM, lrb...@gmail.com wrote:

 Do you think it would be feasible that the browser fires events every time
 the number of cores available for a job changes? That might allow to build
 an efficient event-based worker pool.


I think this will be very noisy and might cause a lot of confusion.
Also, I'm unsure how we could even implement this, since operating
systems don't give us such information.


 In the meantime, there are developers out there who are downloading
 micro-benchmarks on every client to stress-test the browser and determine
 the number of physical core. This is nonsense, we can all agree, but unless
 you give them a short-term alternative, they'll keep doing exactly that.
 And native will keep looking a lot more usable than the web.


I agree.
Do you have pointers where people are describing this?


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-16 Thread lrbabe
  Do you think it would be feasible that the browser fires events every time
  the number of cores available for a job changes? That might allow to build
  an efficient event-based worker pool.
 
 I think this will be very noisy and might cause a lot of confusion.
 Also I'm unsure how we could even implement this since the operating
 systems don't give us such information.

  In the meantime, there are developers out there who are downloading
  micro-benchmarks on every client to stress-test the browser and determine
  the number of physical core. This is nonsense, we can all agree, but unless
  you give them a short-term alternative, they'll keep doing exactly that.
  And native will keep looking a lot more usable than the web.
 
 I agree.
 Do you have pointers where people are describing this?

I'm about to add a worker pool to Prototypo[1] and I'll have to use this 
script. Today it provides the most reliable source of info on how many jobs I 
can run concurrently. And it saddens me.

[1] http://www.prototypo.io


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-15 Thread Rik Cabanier
On Wed, May 14, 2014 at 11:39 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 On 2014-05-13, 9:01 PM, Rik Cabanier wrote:

 ...

 The problem is that the API doesn't really make it obvious that
 you're not supposed to take the value that the getter returns and
 just spawn N workers.  IOW, the API encourages the wrong behavior by
 design.


 That is simply untrue.


 I'm assuming that the goal of this API is to allow authors to spawn as
 many workers as possible so that they can exhaust all of the cores in the
 interest of finishing their computation faster.


That is one way of using it but not the only one.
For instance, let's say that I'm writing a cooperative game. I might
want to put all my network logic in a worker and want to make sure that
worker is scheduled. This worker consumes little (if any) CPU, but I want
it to be responsive.
NumCores = 1: do everything in the main thread and try to make sure the
network code executes.
NumCores = 2: spin up a worker for the network code; everything else in
the main thread.
NumCores = 3: spin up a worker for the network code plus another one for
physics and image decompression; everything else in the main thread.
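This kind of tiering could be sketched as a small helper. This is illustrative only; the role names are assumptions for the example, not part of any proposed API:

```javascript
// Map a core count (e.g. navigator.hardwareConcurrency) to a list of
// workers to spawn; anything not listed stays on the main thread.
function planWorkers(numCores) {
  if (numCores <= 1) return [];               // everything on the main thread
  if (numCores === 2) return ["network"];     // one worker for network code
  return ["network", "physics-and-decoding"]; // extra worker for heavy tasks
}
```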


 I have provided reasons why any higher-priority thread on the system that
  is busy doing work is going to make this number an over-approximation, I
  have given you two examples of higher-priority threads that we're currently
  shipping in Firefox (Chrome Workers and the MediaStreamGraph thread)


You're arguing against basic multithreading functionality. I'm unsure how
ANY thread framework in a browser could fix this, since there might be other
higher-priority tasks in the system.
For your example of Chrome Workers and MediaStreamGraph, I assume those
don't run at a constant 100%, so a webapp that grabs all cores will still
get more work done.


 and have provided you with experimental evidence that running Eli's test
  case, which tries to exhaust as many cores as it can, fails to predict the
  number of cores in these situations.


Eli's code is an approximation. It doesn't prove anything.
I don't understand your point here.


  If you don't find any of this convincing, I'd respectfully ask us to
 agree to disagree on this point.


OK.


  For the sake of argument, let's say you are right. How are things worse
 than before?


 I don't think we should necessarily try to find a solution that is just
 not worse than the status quo, I'm more interested in us implementing a
 good solution here (and yes, I'm aware that there is no concrete proposal
 out there that is better at this point.)


So, worst case, there's no harm.
Best case, we have a more responsive application.

...

 That's fine but we're coming right back to the start: there is no way
 for informed authors to make a decision today.


 Yes, absolutely.


  The let's build something complex that solves everything proposal
 won't be done in a long time. Meanwhile apps can make responsive UI's
 and fluid games.


 That's I think one fundamental issue we're disagreeing on.  I think that
 apps can build responsive UIs and fluid games without this today on the Web.


Sure. You can build apps that don't tax the system or that are specifically
tailored to work well on a popular system.


 There were 24,000 hits for Java, which is on the web and a VM, but now you
  say that it's not a vote of popularity?


 We may have a different terminology here, but to me, positive feedback
 from web developers should indicate a large amount of demand from the web
 developer community for us to solve this problem at this point, and also a
 strong positive signal from them on this specific solution with the flaws
 that I have described above in mind.  That simply doesn't map to searching
 for API names on non-Web technologies on github. :-)


This was not a simple search. Please look over the examples especially the
node.js ones and see how it's being used.
This is what we're trying to achieve with this attribute.


 Also, FTR, I strongly disagree that we should implement all popular Java
 APIs just because there is a way to run Java code on the web.  ;-)

...

 Can you restate the actual problem? I reread your message but didn't
 find anything that indicates this is a bad idea.


 See above where I re-described why this is not a good technical solution
 to achieve the goal of the API.

 Also, as I've mentioned several times, this API basically ignores the fact
  that there are AMP systems shipping *today* and does not take into account
  the fact that future Web engines may try to use as many cores as they can
  at a higher priority (Servo being one example).


OK. They're free to do so. This is not a problem (see previous messages)
It seems like you're arguing against basic multithreading again.


"Others do this" is just not going to convince me here.

 What would convince you? The fact that every other framework provides
 this and people use it, is not a strong indication?
 It's not possible for me 

Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-15 Thread Ben Kelly
On May 15, 2014, at 1:26 AM, Rik Cabanier caban...@gmail.com wrote:
 On Wed, May 14, 2014 at 11:39 AM, Ehsan Akhgari 
  ehsan.akhg...@gmail.com wrote:
...
 
 Make it possible for authors to make a semi-informed decision on how to
 divide the work among workers.
 
 
  That can already be done using timing attacks, at the cost of some wasted
  CPU time.
 
 
 It's imprecise and wasteful. A simple attribute check is all this should
 take.

If we want to support games on mobile platforms like Firefox OS, then this 
seems like a pretty important point.

Do we really want apps on buri (or tarako) wasting CPU, memory, and power to 
determine that they should not spin up web workers?

Ben


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-14 Thread Ehsan Akhgari

On 2014-05-13, 9:01 PM, Rik Cabanier wrote:




On Tue, May 13, 2014 at 3:16 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:


...


That is not the point of this attribute. It's just a hint for the author
so he can tune his application accordingly.
Maybe the application is tuned to use fewer cores, or maybe more. It all
depends...


The problem is that the API doesn't really make it obvious that
you're not supposed to take the value that the getter returns and
just spawn N workers.  IOW, the API encourages the wrong behavior by
design.


That is simply untrue.


I'm assuming that the goal of this API is to allow authors to spawn as
many workers as possible so that they can exhaust all of the cores in
the interest of finishing their computation faster.  I have provided
reasons why any higher-priority thread on the system that is busy doing
work is going to make this number an over-approximation, I have given you
two examples of higher-priority threads that we're currently shipping in
Firefox (Chrome Workers and the MediaStreamGraph thread), and have
provided you with experimental evidence that running Eli's test case,
which tries to exhaust as many cores as it can, fails to predict the
number of cores in these situations.  If you don't find any of this
convincing, I'd respectfully ask us to agree to disagree on this point.



For the sake of argument, let's say you are right. How are things worse
than before?


I don't think we should necessarily try to find a solution that is just 
not worse than the status quo, I'm more interested in us implementing a 
good solution here (and yes, I'm aware that there is no concrete 
proposal out there that is better at this point.)



 (Note that I would be very eager to discuss a proposal that
actually
 tries to solve that problem.)


You should do that! People have brought this up in the past but no
progress has been made in the last 2 years.
However, if this simple attribute is able to stir people's
emotions, can
you imagine what would happen if you propose something complex? :-)


Sorry, but I have a long list of things on my todo list, and honestly
this one is nowhere near the top, because I'm not aware of people asking
for this feature very often. I'm sure there are some people who would
like it, but there are many problems that we are trying to solve here,
and this one doesn't look very high priority.


That's fine but we're coming right back to the start: there is no way
for informed authors to make a decision today.


Yes, absolutely.


The let's build something complex that solves everything proposal
won't be done in a long time. Meanwhile apps can make responsive UI's
and fluid games.


That's I think one fundamental issue we're disagreeing on.  I think that 
apps can build responsive UIs and fluid games without this today on the Web.



 I don't have any other cases where this is done.


 That really makes me question the positive feedback from web
 developers cited in the original post on this thread.  Can you
 please point us to places where that feedback is documented?

...
Python:

 multiprocessing.cpu_count()

 11,295 results


https://github.com/search?q=multiprocessing.cpu_count%28%29+extension%3Apy&type=Code&ref=advsearch&l=

...
Java:

 Runtime.getRuntime().availableProcessors()

 23,967 results


https://github.com/search?q=availableProcessors%28%29+extension%3Ajava&type=Code&ref=searchresults

...

node.js is also exposing it:

 require('os').cpus()

 4,851 results


https://github.com/search?q=require%28%27os%27%29.cpus%28%29+extension%3Ajs&type=Code&ref=searchresults


I don't view platform parity as a checklist of features, so I really
have no interest in checking this checkbox just so that the Web
platform can be listed in these kinds of lists.  Honestly a list of
github hits without more information on what this value is actually
used for etc. is not really that helpful.  We're not taking a vote
of popularity here.  ;-)


Wait, you stated:

Native apps don't typically run in a VM which provides highly
sophisticated functionality for them.

and

That really makes me question the positive feedback from
web developers cited in the original post on 

Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Rik Cabanier
On Tue, May 13, 2014 at 8:20 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 On Tue, May 13, 2014 at 2:37 AM, Rik Cabanier caban...@gmail.com wrote:

 On Mon, May 12, 2014 at 10:15 PM, Joshua Cranmer  pidgeo...@gmail.com
 wrote:

  On 5/12/2014 7:03 PM, Rik Cabanier wrote:
 
  *Concerns*
 
  The original proposal required that a platform must return the exact
  number
  of logical CPU cores. To mitigate the fingerprinting concern, the
 proposal
  was updated so a user agent can lie about this.
  In the case of WebKit, it will return a maximum of 8 logical cores so
 high
  value machines can't be discovered. (Note that it's already possible
 to do
  a rough estimate of the number of cores)
 
 
  The discussion on the WHATWG mailing list covered a lot more than the
  fingerprinting concern. Namely:
  1. The user may not want to let web applications hog all of the cores
 on a
  machine, and exposing this kind of metric makes it easier for
 (good-faith)
  applications to inadvertently do this.
 

 Web applications can already do this today. There's nothing stopping them
  from figuring out the number of CPUs and trying to use them all.
 Worse, I think they will likely optimize for popular platforms which
 either
 overtax or underutilize non-popular ones.


 Can you please provide some examples of actual web applications that do
 this, and what they're exactly trying to do with the number once they
 estimate one?  (Eli's timing attack demos don't count. ;-)


Eli's listed some examples:
http://wiki.whatwg.org/wiki/NavigatorCores#Example_use_cases
I don't have any other cases where this is done. Maybe PDF.js would be
interested. They use workers to render pages and decompress images so I
could see how this is useful to them.


  2. It's not clear that this feature is necessary to build high-quality
  threading workload applications. In fact, it's possible that this
 technique
  makes it easier to build inferior applications, relying on a potentially
  inferior metric. (Note, for example, the disagreement on figuring out
 what
  you should use for make -j if you have N cores).


 Everyone is in agreement that that is a hard problem to fix and that there
 is no clear answer.
 Whatever solution is picked (maybe like Grand Central or Intel TBB), most
 solutions will still want to know how many cores are available.
 Looking at the native platform (and Adobe's applications), many query the
 operating system for this information to balance the workload. I don't see
 why this would be different for the web platform.


 I don't think that the value exposed by the native platforms is
 particularly useful.  Really if the use case is to try to adapt the number
 of workers to a number that will allow you to run them all concurrently,
 that is not the same number as reported traditionally by the native
 platforms.


Why not? How is the web platform different?


 If you try Eli's test case in Firefox under different workloads (for
 example, while building Firefox, doing a disk intensive operation, etc.),
 the utter inaccuracy of the results is proof in the ineffectiveness of this
 number in my opinion.


As Eli mentioned, you can run the algorithm for longer and get a more
accurate result. Again, if the native platform didn't support this, doing
this in C++ would give the same result.


 Also, I worry that this API is too focused on the past/present.  For
 example, I don't think anyone sufficiently addressed Boris' concern on the
 whatwg thread about AMP vs SMP systems.


Can you provide a link to that? Are there systems that expose this to the
user? (AFAIK slow cores are substituted with fast ones on the fly.)


 This proposal also assumes that the UA itself is mostly content with
  using a single core, which is true for the current browser engines, but
  we're working on changing that assumption in Servo.  It also doesn't take
  into account the possibility of several of these web applications running
  at the same time.


How is this different from the native platform?


 Until these issues are addressed, I do not think we should implement or
 ship this feature.


FWIW these issues were already discussed in the WebKit bug.
I find it odd that we don't want to give authors access to such a basic
feature. Not everything needs to be solved by a complex framework.


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Tom Schuster
I recently saw this bug about implementing navigator.getFeature, wouldn't
it make sense for this to be like hardware.memory, but hardware.cores?




Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Ehsan Akhgari

On 2014-05-13, 9:55 AM, Tom Schuster wrote:

I recently saw this bug about implementing navigator.getFeature,
wouldn't it make sense for this to be like hardware.memory, but
hardware.cores?


No, because that would have all of the same issues as the current API.

Cheers,
Ehsan



Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Ehsan Akhgari

On 2014-05-13, 10:54 AM, Eli Grey wrote:

On Tue, May 13, 2014 at 1:43 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

supporting a worker pool that actually scales to how many cores you have 
available


1) What is an available core to you? An available core to me is a
core that I can use to compute. A core under load (even 100% load) is
still a core I can use to compute.


No, you're wrong.  An available core is a core which your application
can use to run computations on.  If other code is already keeping it
busy at a higher priority, it's unavailable by definition.



2) Web workers were intentionally made to be memory-heavy, long-lived,
reusable interfaces. The startup and unload overhead is massive if you
actually want to dynamically resize your threadpool. Ask the people
who put Web Workers in the HTML5 spec or try benchmarking it (rapid
threadpool resizing) yourself--they are not meant to be lightweight.


How does this support your argument exactly?


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Eli Grey
On Tue, May 13, 2014 at 1:43 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 supporting a worker pool that actually scales to how many cores you have 
 available

1) What is an available core to you? An available core to me is a
core that I can use to compute. A core under load (even 100% load) is
still a core I can use to compute.
2) Web workers were intentionally made to be memory-heavy, long-lived,
reusable interfaces. The startup and unload overhead is massive if you
actually want to dynamically resize your threadpool. Ask the people
who put Web Workers in the HTML5 spec or try benchmarking it (rapid
threadpool resizing) yourself--they are not meant to be lightweight.


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Eli Grey
On Tue, May 13, 2014 at 1:57 PM, Ehsan Akhgari ehsan.akhg...@gmail.com
 wrote:

 No, you're wrong.  An available core is a core which your application can
  use to run computations on.  If other code is already keeping it busy
  at a higher priority, it's unavailable by definition.


Run this code https://gist.github.com/eligrey/9a48b71b2f5da67b834b in
your browser. All cores are at 100% CPU usage, so clearly by your
definition all cores are now unavailable. How are you able to interact
with your OS? It must be some kind of black magic... or maybe it's because
your OS scheduler knows how to prioritize threads properly so that you can
multitask under load.

On Tue, May 13, 2014 at 1:57 PM, Ehsan Akhgari ehsan.akhg...@gmail.com
 wrote:

 How does this support your argument exactly?


It has nothing to do with my argument, it has to do with yours. You are
suggesting that people should dynamically resize their threadpools. I'm
bringing up the fact that web workers were *designed* to not be used in
this manner in the first place.


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Kip Gilbert

Just wish to throw in my 2c...

Many game engines will query the core count to determine if they should 
follow a simple (one main thread, one render thread, one audio thread, 
one streamer thread) or more parallel (multiple render threads, multiple 
audio threads, gameplay/physics/ai broken up into separate workers) 
approach.  If there are sufficient cores, this is necessary to get the 
greatest possible framerate (keep the GPU fed), best quality audio (i.e. 
more channels, longer reverb), and things such as secondary animations 
that would not be enabled otherwise.


Even if not enabling all features and quality levels, the overhead of 
fencing, double buffering, etc, should be avoided on systems with fewer 
cores.
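The tiering Kip describes could be sketched like this. The thresholds and configuration fields here are assumptions made up for illustration, not taken from any real engine:

```javascript
// Pick an engine configuration from the core count: more cores enable
// extra render/audio threads and dedicated workers; few cores avoid
// the fencing and double-buffering overhead entirely.
function engineConfig(coreCount) {
  if (coreCount >= 8) {
    return { renderThreads: 2, audioThreads: 2, workers: ["physics", "ai", "gameplay"] };
  }
  if (coreCount >= 4) {
    return { renderThreads: 1, audioThreads: 1, workers: ["physics"] };
  }
  return { renderThreads: 1, audioThreads: 1, workers: [] }; // simple path
}
```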


I also see that there are reasons that this may not be good for the 
web.  NUMA (Non Uniform Memory Architecture) and Hyper-threading 
attributes also need to be taken into account to effectively optimize 
for core count.  This seems out of place given the level of abstraction 
web developers expect.  I can also imagine a very-short-term future 
where CPU core count will be an outdated concept.


Cheers,
- Kearwood Kip Gilbert


On 2014-05-13, 10:58 AM, Joshua Cranmer  wrote:

On 5/13/2014 12:35 PM, Eli Grey wrote:

Can you back that up with a real-world example desktop application
that behaves as such?


The OpenMP framework?





Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Ehsan Akhgari

On 2014-05-13, 11:14 AM, Eli Grey wrote:

On Tue, May 13, 2014 at 1:57 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

No, you're wrong.  An available core is a core which your
application can use to run computations on.  If other code is
already keeping it busy at a higher priority, it's unavailable by
definition.


Run this code https://gist.github.com/eligrey/9a48b71b2f5da67b834b in
your browser. All cores are at 100% CPU usage, so clearly by your
definition all cores are now unavailable.


They are unavailable to *all* threads running on your system with a 
lower priority.  (Note that Gecko runs Web Workers with a low priority 
already, so that they won't affect any of your normal apps, including 
Firefox's UI.)



How are you able to interact
with your OS? It must be some kind of black magic... or maybe it's
because your OS scheduler knows how to prioritize threads properly so
that you can multitask under load.


There is no magic involved here.


On Tue, May 13, 2014 at 1:57 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

How does this support your argument exactly?


It has nothing to do with my argument, it has to do with yours. You are
suggesting that people should dynamically resize their threadpools. I'm
bringing up the fact that web workers were /designed/ to not be used in
this manner in the first place.


OK, so you're asserting that it's impossible to implement a resizing 
worker pool on top of Web Workers.  I think you're wrong, but I'll grant 
you this assumption.  ;-)  Just wanted to make it clear that doing that 
won't bring us closer to a conclusion in this thread.
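A resizing pool of the kind being debated could be sketched minimally as follows. This is an illustration, not a claim about how anyone should ship it; `spawn` and `terminate()` stand in for `new Worker(url)` and `worker.terminate()` so the resizing logic itself is self-contained:

```javascript
// A pool that grows or shrinks to a target size. Whether resizing is
// *cheap* is Eli's point; whether it is *possible* is Ehsan's.
class ResizablePool {
  constructor(spawn, size) {
    this.spawn = spawn;
    this.workers = Array.from({ length: size }, () => spawn());
  }
  resize(newSize) {
    while (this.workers.length > newSize) {
      this.workers.pop().terminate(); // shrink: stop surplus workers
    }
    while (this.workers.length < newSize) {
      this.workers.push(this.spawn()); // grow: start new workers
    }
  }
  get size() { return this.workers.length; }
}
```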


Cheers,
Ehsan


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-13 Thread Xidorn Quan
As the main usage of this number is to maintain a fixed thread pool, I feel
it might be better to have a higher-level API, such as a worker pool.

I do agree that a thread pool is very useful, but exposing the number of
cores directly seems not to be the best solution. We could have a better
abstraction, and let UAs dynamically control the pool for better
throughput.


On Wed, May 14, 2014 at 8:17 AM, Rik Cabanier caban...@gmail.com wrote:

 On Tue, May 13, 2014 at 2:59 PM, Boris Zbarsky bzbar...@mit.edu wrote:

  On 5/13/14, 2:42 PM, Rik Cabanier wrote:
 
   Why would that be? Are you burning more CPU resources in Servo to do the
   same thing?
 
 
  In some cases, possibly yes.
 
 
   If so, that sounds like a problem.
 
 
   It depends on what your goals are.  Any sort of speculation, prefetch, or
   prerender burns more CPU resources to do the same thing in the end.
   But it may provide responsiveness benefits that are worth the extra CPU
   cycles.
 
   Current browsers don't do those things very much in the grand scheme of
   things because they're hard to do without janking the UI.  Servo should
   not have that problem, so it may well do things like speculatively starting
  layout in the background when a script changes styles, for example, and
  throwing the speculation away if more style changes happen before the
  layout is done.


 I agree that this isn't a problem. Sorry if I sounded critical.
