Re: Non-extensibility of Typed Arrays

2013-08-28 Thread Steve Fink
On 08/27/2013 09:35 AM, Oliver Hunt wrote:
 My complaint is that this appears to be removing functionality that has been 
 present in the majority of shipping TA implementations, assuming from LH's 
 comment that Chakra supports expandos.

Note that even in the engines that support expandos, they will probably
not survive a structured clone. I just tried in Chrome and they get
stripped off. This further limits their utility in today's Web.
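For reference, the experiment is easy to reproduce in any engine that exposes `structuredClone` (in 2013 the equivalent test went through `postMessage`):

```javascript
// Expando on a typed array survives locally but not a structured clone.
const ta = new Uint8Array([1, 2, 3]);
ta.expando = "note";               // legal: typed arrays accept named expandos
const clone = structuredClone(ta); // the element data is copied...
console.log(clone[0]);             // 1
console.log(clone.expando);        // undefined -- the expando was stripped
```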
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Steve Fink
On 09/04/2013 02:41 PM, Brendan Eich wrote:
 But lost expandos due to loss of identity are an especially nasty
 kind of bug to find. Is there any use-case here? We've never had a bug
 report asking us to make SpiderMonkey's typed arrays extensible, AFAIK.

We have: https://bugzilla.mozilla.org/show_bug.cgi?id=695438



Re: Non-extensibility of Typed Arrays

2013-09-05 Thread Steve Fink
On 09/04/2013 04:15 PM, Filip Pizlo wrote:

 On Sep 4, 2013, at 3:09 PM, Brendan Eich bren...@mozilla.com wrote:

 Filip Pizlo fpi...@apple.com
 September 4, 2013 12:34 PM
 My point is that having custom properties, or not, doesn't change
 the overhead for the existing typed array spec and hence has no
 effect on small arrays.  The reasons for this include:

 - Typed arrays already have to be objects, and hence have a
 well-defined behavior on '=='.

 - Typed arrays already have to be able to tell you that they are in
 fact typed arrays, since JS doesn't have static typing.

 - Typed arrays already have prototypes, and those are observable
 regardless of expandability.  A typed array from one global object
 will have a different prototype than a typed array from a different
 global object.  Or am I misunderstanding the spec?

 - Typed arrays already have to know about their buffer.

 - Typed arrays already have to know about their offset into the
 buffer.  Or, more likely, they have to have a second pointer that
 points directly at the base from which they are indexed.

 - Typed arrays already have to know their length.

 You're not proposing changing these aspects of typed arrays, right?

 Of course not, but for very small fixed length arrays whose .buffer
 is never accessed, an implementation might optimize harder.

 As I said, of course you can do this, and one way you could try
 harder is to put the buffer pointer in a side table.  The side table
 maps array object pointers to their buffers, and you only make an
 entry in this table if .buffer is mentioned.

 But if we believe that this is a sensible thing for a VM to do - and
 of course it is! - then the same thing can be done for the custom
 property storage pointer.

 It's hard for me to say no, Filip's analysis shows that's never
 worthwhile, for all time.

 The super short message is this: so long as an object obeys object
 identity on '==' then you can have "free if unused, suboptimal if
 you use them" custom properties by using a weak map on the side.
  This is true of typed arrays and it would be true of any other
 object that does object-style ==.  If you allocate such an object
 and never add a custom property then the weak map will never have an
 entry for it; but if you put custom properties in the object then
 the map will have things in it.  But with typed arrays you can do
 even better as my previous message suggests: so long as an object
 has a seldom-touched field and you're willing to eat an extra
 indirection or an extra branch on that field, you can have "free if
 unused, still pretty good if you use them" custom properties by
 displacing that field.  Typed arrays have both of these properties
 right now and so expandability is a free lunch.

 The last sentence makes a for-all assertion I don't think
 implementations must be constrained by.

 How so?  It is true that some VM implementations will be better than
 others.  But ultimately every VM can implement every optimization that
 every other VM has; in fact my impression is that this is exactly what
 is happening as we speak.

 So, it doesn't make much sense to make language design decisions
 because it might make some implementor's life easier right now.  If
 you could argue that something will /never/ be efficient if we add
 feature X, then that might be an interesting argument.  But as soon as
 we identify one sensible optimization strategy for making something
 free, I would tend to think that this is sufficient to conclude that
 the feature is free and there is no need to constrain it.  If we don't
 do this then we risk adding cargo-cult performance features that
 rapidly become obsolete.

This general argument bothers me slightly, because it assumes no
opportunity cost in making something free(ish). Even if you can
demonstrate that allowing X can be made fast, it isn't a complete
argument for allowing X, since disallowing X might enable some other
optimization or feature or semantic simplification.  Such demonstrations
are still useful, since they can shoot down objections based solely on
performance.

But maybe I'm misinterpreting "...sufficient to conclude...that there is
no need to constrain [the feature]". Perhaps you only meant that there
is no need to constrain it *for reasons of performance*? If so, then you
only need to consider the opportunity cost of other optimizations.
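For concreteness, the "weak map on the side" scheme Filip describes is easy to sketch in user code (setExpando/getExpando are made-up names; an engine would do the same thing with internal tables):

```javascript
// Expandos live in a weak side table: objects that never get one pay nothing.
const expandos = new WeakMap();

function setExpando(obj, name, value) {
  let table = expandos.get(obj);
  if (!table) expandos.set(obj, table = Object.create(null));
  table[name] = value;
}

function getExpando(obj, name) {
  const table = expandos.get(obj);
  return table ? table[name] : undefined;
}

const ta = new Uint8Array(4);
setExpando(ta, "label", "samples");
getExpando(ta, "label");                // "samples"
getExpando(new Uint8Array(4), "label"); // undefined -- no side-table entry
```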



Re: RegExps that don't modify global state?

2014-09-16 Thread Steve Fink
On 09/16/2014 10:13 PM, Jussi Kalliokoski wrote:
 On Wed, Sep 17, 2014 at 3:21 AM, Alex Kocharin a...@kocharin.ru wrote:

  
 What's the advantage of `re.test(str); RegExp.$1` over `let
 m=re.exec(str); m[1]`?


 Nothing. However, with control structures it removes a lot of awkwardness:

 * `if ( /foo:(\d+)/.test(str) && parseInt(RegExp.$1, 10) > 15 ) { ...`
 * `if ( /name: (\w+)/.test(str) ) { var name = RegExp.$1; ...`

Is

  if ((m = /foo:(\d+)/.exec(str)) && parseInt(m[1], 10) > 15) { ... }

so bad? JS assignment is an expression; make use of it. Much better than
Python's refusal to allow such a thing, requiring indentation trees of
doom or hacky workarounds when you just want to case-match a string
against a couple of regexes.

The global state *is* bad, and you don't need turns or parallelism to be
bitten by it.

function f(s) {
  if (/foo:(\d+)/.test(s)) {
    print("Found in " + formatted(s));
    return RegExp.$1; // Oops! formatted() does a match internally.
  }
}

Global variables are bad. They halfway made sense in Perl, but even the
Perl folks wish they'd been lexical all along.
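Rewritten with exec and a local, the bug disappears (print/formatted below are stand-ins for the helpers in the example, not real APIs):

```javascript
const print = s => console.log(s);
const formatted = s => s.replace(/foo:(\d+)/, "foo:<$1>"); // matches internally

function f(s) {
  const m = /foo:(\d+)/.exec(s); // result captured locally, not in RegExp statics
  if (m) {
    print("Found in " + formatted(s));
    return m[1]; // unaffected by formatted()'s internal match
  }
}

f("foo:42"); // "42"
```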


 I personally find this functionality very useful and would be saddened
 if /u which I want to use all of the sudden broke this feature. Say
 what you mean. Unicode flag disabling features to enable parallelism
 is another footnote for WTFJS.
  

  
 I assume RegExp["$'"] and RegExp["$`"] are nice to have, I
 remember them from perl, but never actually used them in javascript.
  
  
 16.09.2014, 23:03, Andrea Giammarchi andrea.giammar...@gmail.com:
 I personally find the `re.test(str)` case a good reason to keep
 further access to `RegExp.$1` and others available instead of
 needing to test **and** grab eventually a match (redundant,
 slower, etc)
  
 As mentioned already `/u` will be used by default as soon as
 supported; having this implicit opt-out feels very wrong to me
 since `/u` meaning is completely different.
  
 Moreover, AFAIK JavaScript is single threaded per each EventLoop
 so I don't see conflicts possible if parallel execution is
 performed elsewhere, where also globals will (will them?) be a
 part, as every sandbox/iframe/worker has worked until now.
  
 I'd personally +1 an explicit opt-out and indifferently accept a
 re-opt as modifier such `/us` where `s` would mean stateful (or
 any other char would do as long as `RegExp.prototype.test` won't
 lose its purpose and power).
  
 Regards
  
 P.S. there's no such thing as RegExp.$0 but RegExp['$&'] will
 provide the (probably) intended result
 P.S. to know more about RegExp and these properties my old slides
 from BerlinJS event should do:
 http://webreflection.blogspot.co.uk/2012/02/berlin-js-regexp-slides.html

 On Tue, Sep 16, 2014 at 7:35 PM, Allen Wirfs-Brock al...@wirfs-brock.com wrote:


 On Sep 16, 2014, at 11:16 AM, Domenic Denicola wrote:

  I had a conversation with Jaswanth at JSConf EU that
 revealed that RegExps cannot be used in parallel JS because
 they modify global state, i.e. `RegExp.$0` and friends.
 
  We were thinking it would be nice to find some way of
 getting rid of this wart. One idea would be to bundle the
 don't-modify-global-state behavior with the `/u` flag.
 Another would be to introduce a new flag to opt-out. The
 former is a bit more attractive since people will probably
 want to use `/u` all the time anyway. I imagine there might
 be other possibilities others can think of.
 
  I also noticed today that the static `RegExp` properties
 are not specced, which seems at odds with our new mandate to
 at least Annex B-ify the required-for-web-compat stuff.

 Yes, they should be in Annex B.  But that means that somebody
 needs to write a spec. that defines their behavior.

 We could then add that extension to clause 16.1 as being
 forbidden for RegExps created with the /u flag.

 Allen



Re: Maximally minimal stack trace standardization

2014-09-29 Thread Steve Fink
On 09/29/2014 09:14 AM, Sam Tobin-Hochstadt wrote:
 On Mon, Sep 29, 2014 at 10:55 AM, John Lenz concavel...@gmail.com wrote:
 I really have no idea what the behavior should be in the face of optimized
 tail calls (which is much broader than simply self-recursive methods that
 can be rewritten as a loop).   I've seen various suggestions (a capped call
 history) but I'm curious how efficient functional languages deal with this.
 Different functional languages do a variety of things here:

 - simply show the current stack, without the functions that made tail
 calls (this is probably the most common)
 - have a bounded buffer for stack traces
 - implement tail calls via a trampoline; this has the side-effect that
 the stack has recent tail calls in it already

 I'm sure there are other choices here that people have made.

Stack traces are really an overload of (at least?) 3 different concepts:

1. A record of how execution reached the current state. What debuggers
want, mostly.
2. The continuation from this point on - what function will be returned
to when the current function returns normally, recursively up the call
chain.
3. A description of the actual state of the stack.

In all of these, the semantics of the youngest frame are different from
all other frames in the stack trace.

For #2, thrown exceptions make the implied continuation ordering a lie,
or at least a little more nuanced. You sort of want to see what frames
will catch exceptions. (But that's not a trivial determination if you
have some native frames mixed in there, with arbitrary logic for
determining whether to catch or propagate an exception. Even JS frames
may re-throw.)

Inlined functions may cause gaps in #1 and #2, unless the implementation
takes pains to fill them in with dummy frames (in which case it's not
really #3 anymore.)

Unless the implementation plays games, tail calls can make #1 lie as
well. You really called f(), but it doesn't appear because its frame was
used for executing g() before pushing the remaining frames on your
stack. Tail calls don't really muck with #2 afaict.

All three meanings are legitimate things to want, and all of them
require some implementation effort. Even #3 is tricky with a JIT
involved. And I'm not even considering floating generator frames, which
may not fit into a linear structure at all. Or when users want long
stacks for callbacks, where the stack in effect when a callback was set
is relevant.
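For the record, the trampoline variant Sam mentions is simple to sketch: tail calls return thunks, a driver loop runs them, and the stack stays flat, so the most recent "call" is an ordinary frame:

```javascript
function trampoline(fn, ...args) {
  let result = fn(...args);
  while (typeof result === "function") // a thunk means "tail call pending"
    result = result();
  return result;
}

// Mutual recursion that would overflow the stack if called directly:
const isEven = n => (n === 0 ? true : () => isOdd(n - 1));
const isOdd  = n => (n === 0 ? false : () => isEven(n - 1));

trampoline(isEven, 100000); // true, with O(1) stack depth
```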



Re: Proposal: Abstract References

2014-10-22 Thread Steve Fink
On 10/22/2014 07:45 AM, Mark S. Miller wrote:

 * Only objects that have been used as keys in FastWeakMaps would ever
 have their [[Shadow]] set, so this could also be allocated on demand,
 given only a bit saying whether it is present. Besides this storage of
 this bit, there is no other effect or cost on any non-weakmap objects.

 * Since non-weakmap code doesn't need to test this bit, there is zero
 runtime cost on non-weakmap code.

 * Whether an object has been used as a key or not (and therefore
 whether an extra shadow has been allocated or not), normal non-weak
 property lookup on the object is unaffected, and pays no additional cost.

Maybe it's because I work on a garbage collector, but I always think of
the primary cost of WeakMaps as being the GC. The above analysis doesn't
take GC into account.

In the straightforward iterative implementation, you record all of the
live WeakMaps found while scanning through the heap. Then you go through
them, checking each key to see if it is live. For each such key, you
recursively mark the value. This marking can discover new live WeakMaps,
so you iterate to a fixed point.
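In pseudo-JS, using a toy heap where edges live in a refs array and each weakmap is just a list of [key, value] entries (all names made up; a real collector works over raw heap cells):

```javascript
function mark(roots, weakmaps) {
  const live = new Set();
  const visit = obj => {
    if (live.has(obj)) return;
    live.add(obj);
    (obj.refs || []).forEach(visit);
  };
  roots.forEach(visit);

  // Iterate to a fixed point: marking a value can make new keys live.
  let changed = true;
  while (changed) {
    changed = false;
    for (const wm of weakmaps) {
      for (const [key, value] of wm.entries) {
        if (live.has(key) && !live.has(value)) {
          visit(value);
          changed = true;
        }
      }
    }
  }
  return live;
}

const a = {}, b = {}, c = {}, deadKey = {}, deadVal = {};
const wm = { entries: [[a, b], [b, c], [deadKey, deadVal]] };
mark([a], [wm]); // a, b, c live; deadKey and deadVal are not
```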

In the current web, this implementation seems to work fine. The worst
case is O(n^2) in the size of the heap, which is pretty much fatal if
you ever hit it. But that requires lots of paths through multiple
WeakMaps, and in practice, it seems WeakMaps aren't being used much.
I've never seen our WeakMap marking phase show up as a significant cost.

For an algorithmically more robust solution, you could add a check
whenever marking an object. The check would test whether the object is
(or might be) used as a WeakMap key. This would slow down marking all
objects, so in practice you want to be clever about avoiding the test.

Anyway, my point is that WeakMaps have serious GC ramifications,
possibly extending to non-key objects, and any performance impact
analysis of using WeakMaps more extensively is incomplete without
considering GC costs.

 A realistic implementation should seek to avoid allocating the extra
 shadow objects. However, even if not, we are much better off with the
 above scheme than we are with the current slow WeakMap.

Perhaps. But multiple WeakMaps introduce the potential for many more
cycles than a single WeakMap. So I think a final conclusion is premature.



Re: Proposal: Abstract References

2014-10-22 Thread Steve Fink
On 10/22/2014 02:26 PM, Mark Miller wrote:


  On Wed, Oct 22, 2014 at 1:44 PM, Steve Fink sph...@gmail.com wrote:

 On 10/22/2014 07:45 AM, Mark S. Miller wrote:
 
  * Only objects that have been used as keys in FastWeakMaps would
 ever
  have their [[Shadow]] set, so this could also be allocated on
 demand,
  given only a bit saying whether it is present. Besides this
 storage of
  this bit, there is no other effect or cost on any non-weakmap
 objects.
 
  * Since non-weakmap code doesn't need to test this bit, there is
 zero
  runtime cost on non-weakmap code.
 
  * Whether an object has been used as a key or not (and therefore
  whether an extra shadow has been allocated or not), normal non-weak
  property lookup on the object is unaffected, and pays no
 additional cost.

 Maybe it's because I work on a garbage collector, but I always
 think of
 the primary cost of WeakMaps as being the GC. The above analysis
 doesn't
 take GC into account.


 I should have been more explicit, but GC costs are almost my entire
 point. These costs aside, my FastWeakMaps are more expensive in all
 ways than SlowWeakMaps, though only by a constant factor, since each
 FastWeakMap operation must also perform the corresponding SlowWeakMap
 operation.

Ah, sorry, I totally missed your point.



 In the straightforward iterative implementation, you record all of the
 live WeakMaps found while scanning through the heap. Then you go
 through
 them, checking each key to see if it is live. For each such key, you
 recursively mark the value. This marking can discover new live
 WeakMaps,
 so you iterate to a fixed point.


 That is when you find yourself doing an ephemeron collection. The
 point of the transposed representation is to collect most ephemeron
 garbage using conventional collection. Consider

Ok, I get it now, and completely agree with your analysis, with the
caveat that supporting [[Shadow]] gives me the heebie-jeebies. It turns
a read into a write, for one thing. (The read of the key, I mean.) Could
the extra shadow table be kept separate from the key object? I know!
Let's use a WeakMap! :-)
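To make the tradeoff concrete, here's a toy transposed map: a per-map Symbol stands in for the [[Shadow]] slot, and every write is mirrored into a slow map to keep the real weak semantics (all names made up, and this toy makes the read-into-write issue easy to see: set() mutates the key):

```javascript
class FastWeakMap {
  constructor() {
    this.shadow = Symbol("shadow"); // plays the role of [[Shadow]]
    this.slow = new WeakMap();      // fallback with true weak semantics
  }
  set(key, value) {
    key[this.shadow] = value;       // transposed: the entry lives on the key
    this.slow.set(key, value);
    return this;
  }
  get(key) {
    return this.shadow in key ? key[this.shadow] : this.slow.get(key);
  }
  has(key) {
    return this.shadow in key || this.slow.has(key);
  }
}

const secrets = new FastWeakMap();
const k = {};
secrets.set(k, "hidden");
secrets.get(k); // "hidden", via the fast path
```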

 Here's the key important thing: In a generational collector, at this
 point we'd typically postpone ephemeron collection. To do so, we would
 complete the mark phase conventionally, by simply marking the values
 held by slowField. This marks slowValue, causing it to get promoted to
 the next older generation. THIS IS EXPENSIVE.

Yes, this is a big deal.


 In the current web, this implementation seems to work fine. The worst
 case is O(n^2) in the size of the heap, which is pretty much fatal if
 you ever hit it. But that requires lots of paths through multiple
 WeakMaps, and in practice, it seems WeakMaps aren't being used much.
 I've never seen our WeakMap marking phase show up as a significant
 cost.


 Chicken and egg. If WeakMaps are used for private state (and
 trademarks and...), they will be used a lot. But they will only be
 used for those things if it isn't fatally slow to do so.

Yes, I fully expect WeakMaps to start mattering soon-ish, though I'm
still procrastinating on doing anything about our current implementation.



 For an algorithmically more robust solution, you could add a check
 whenever marking an object. The check would test whether the object is
 (or might be) used as a WeakMap key. This would slow down marking all
 objects, so in practice you want to be clever about avoiding the test.


 Yeah, I'm very curious about whether this can be made cheap enough
 that implementations would be willing to do it. If so, then everything
 is much better, whether we transpose the representation or not.

We'll probably all end up at some messy point in the middle. Maybe a
fast initial pass without the checks. It'll be something that depends on
a bunch of assumptions for normal-case performance, but doesn't
completely break down in the pathological cases.
 


 Anyway, my point is that WeakMaps have serious GC ramifications,
 possibly extending to non-key objects, and any performance impact
 analysis of using WeakMaps more extensively is incomplete without
 considering GC costs.


 Exactly! I should have been clearer that these were the only costs I
 am concerned about here. Regarding all other costs, my example code
 only adds expense.

If I had read more closely, I probably would have noticed that...



Re: typed array filling convenience AND performance

2014-10-30 Thread Steve Fink
On 10/30/2014 06:14 AM, Adrian Perez de Castro wrote:
 On Thu, 30 Oct 2014 09:29:36 +0100, Florian Bösch pya...@gmail.com wrote:

 The usecases:

 [...]

 *3) Initializing an existing array with a repeated numerical value*

 For audio processing, physics and a range of other tasks it's important to
 initialize an array with the same data.

 for(var i=0; i<size; i++){ someArray[i] = 0; }
 For this use case there is %TypedArray%.prototype.fill(), see:

   
 http://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.fill

 JavaScript engines are expected to implement it at some point. For example
 I am implementing this in V8, along with other new typed array methods. The
 engines should be able to generate quite good code for uses of this function
 and/or provide optimized versions relying on knowledge of the underlying
 element type of the typed array they are applied to.

I implemented this for Firefox 2 years ago, but never landed it -
https://bugzilla.mozilla.org/show_bug.cgi?id=730880

Now there is %TypedArray%.prototype.fill. But I've become generally
skeptical about it as an answer to performance concerns. I would rather
see engines hyperoptimize

  for(var i=0; i<size; i++){ someArray[i] = 0; }

based on observed type information. Which is not to say that we wouldn't
want to make TA#fill fast too, but the above seems more generally useful.
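For reference, the spec'ed method covers the common cases directly:

```javascript
const samples = new Float32Array(8);
samples.fill(0.5);    // every element becomes 0.5
samples.fill(0, 4);   // optional start (and end) indices: zero the tail
samples[0];           // 0.5
samples[7];           // 0
```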

On a related note, I *would* like to have some way of getting the OS to
decommit memory. See https://bugzilla.mozilla.org/show_bug.cgi?id=855669
(start reading at about comment 22) for our discussion and attempt at
this, which looks like it mysteriously trailed off this last March.
Theoretically, the above loop could also trigger a decommit, but I think
it's too much to expect the engine to guess when that's going to be a
good idea. On the other hand, from a spec POV it's unobservable
behavior, which makes it weird.



Re: typed array filling convenience AND performance

2014-11-04 Thread Steve Fink
On 11/04/2014 11:08 AM, Brendan Eich wrote:
 Steve Fink wrote:
  On a related note, I *would* like to have some way of getting the OS to
  decommit memory. See https://bugzilla.mozilla.org/show_bug.cgi?id=855669
  (start reading at about comment 22) for our discussion and attempt at
  this, which looks like it mysteriously trailed off this last March.
 Theoretically, the above loop could also trigger a decommit, but I think
 it's too much to expect the engine to guess when that's going to be a
 good idea. On the other hand, from a spec POV it's unobservable
 behavior, which makes it weird.

 ArrayBuffer.transfer
 (https://gist.github.com/andhow/95fb9e49996615764eff) is an ES7 stage
 0 proposal, needs to move to stage 1 soon. It enables decommitting
 memory.

I'm not sure we're talking about the same thing. I'm talking about what
would be madvise(MADV_DONTNEED) on POSIX or VirtualAlloc(MEM_RESET) on
Windows. Er... actually, I think it would be MEM_RESET followed by
MEM_COMMIT to get the zero-filling while still releasing the physical pages.

Unless there's some tricky way of using ArrayBuffer.transfer to signal
that memory can be decommitted, but I don't see it.



Re: Proposal: Syntax sugar for single exit and early exit functions.

2014-11-18 Thread Steve Fink
I have wanted something similar to this. But I think of it as having
RAII in JS.

So what I would like is:

function f() {
  let x = g();
  finally { x.cleanup(); }
  let y = h();
  finally { y.cleanup(); }
  doStuff(x, y);
}

You can sort of do this with try..finally:

function f() {
  let x, y;
  try {
x = g();
y = h();
doStuff(x, y);
  } finally {
x.cleanup();
y.cleanup();
  }
}

The difference is that my 'finally' (1) may be placed directly after the
related setup code, and (2) is lexically scoped. (Also note that my
syntax suggestion probably wouldn't work, because a finally directly
after a try..catch block is ambiguous -- or would it work, because they
have identical behavior?)

As for:

function f() {
  if (cond) {
finally { print(1); }
  } else {
finally { print(2); }
  }
}

I was kind of hoping that each finally{} could use a minimal surrounding
lexical scope (so here, the bodies of the 'if' consequents), so only one
of these finally{} blocks would run. But perhaps that's a new sort of
scope from what already exists? There could be tension between finally{}
and let scoping.

Also, in

for (var x of foo()) {
  finally { print(x); }
}

I would expect the finally{} block to run on every iteration.

Could something like this fly?
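In the meantime the pattern can be approximated in userland, at the cost of a wrapper (scoped/defer are made-up names for this sketch):

```javascript
function scoped(body) {
  const cleanups = [];
  const defer = fn => cleanups.push(fn);
  try {
    return body(defer);
  } finally {
    while (cleanups.length) cleanups.pop()(); // LIFO, like RAII destructors
  }
}

const log = [];
scoped(defer => {
  log.push("open x");
  defer(() => log.push("close x"));
  log.push("open y");
  defer(() => log.push("close y"));
  log.push("work");
});
// log: open x, open y, work, close y, close x
```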



Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Steve Fink
On 12/04/2014 08:00 PM, Katelyn Gadd wrote:
 I do still use WeakMap in a few other places, for example to implement
 Object.GetHashCode. This is a case where the transposed representation
 is likely optimal - though in practice, I shouldn't need any sort of
 container here, if only the hashing mechanisms clearly built into the
 VM were exposed to user JS.

If I am understanding correctly, I don't think there is any such hashing
mechanism in the Spidermonkey VM. We hash on an object's pointer
address, which can change during a moving GC. (We update any hashtables
that incorporate an object's numeric address into their hash key
computations.)

I'm a little curious what you're generating the hashcode from. Is this
mimicking a value object? If the contents of the object change, would
you want the hashcode to change? Or are the hashcodes just
incrementing numerical object ids?
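(For the incrementing-id flavor, a WeakMap is about the best you can do today without VM support; a sketch:)

```javascript
// Stable per-object ids, independent of where the GC moves the object.
const ids = new WeakMap();
let nextId = 1;

function getHashCode(obj) {
  let id = ids.get(obj);
  if (id === undefined) ids.set(obj, id = nextId++);
  return id;
}

const o = {};
getHashCode(o) === getHashCode(o); // true: the id is stable per object
```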

(Sorry for the tangent to the current thread.)



Re: Should `use strict` be a valid strict pragma?

2015-02-05 Thread Steve Fink

On 02/05/2015 05:12 AM, Andy Earnshaw wrote:
I think you're missing the point Leon is trying to make.  He's saying 
that, in ES 6 we have a new way to write strings.  In some ways, these 
more powerful strings may condition some people to use ` as their main 
string delimiter. An unsuspecting person may liken this to PHP's 
double quotes vs single quotes, thinking that the only difference is 
that you can use `${variable}` in strings that are delimited with 
backticks, but other than that everything is the same.  When they 
write this in their code:


```
`use strict`;
```

They may introduce bugs by writing non-strict code that doesn't throw 
when it should.  Adding it to the spec wouldn't be difficult and it 
would avoid any potential confusion or difficult-to-debug issues.  
It's definitely easier than educating people, IMO.


'use strict' and "use strict" are magic tokens and should stay that way, 
not propagate to other ways of writing literal strings. Literal strings 
are different things, which happen to share the same syntax for 
backwards-compatibility reasons.


If people switch to backticks for all their literal strings, so much the 
better -- then single and double quotes will only be used for 
directives, and there will be less confusion. (I don't actually believe 
that. At the very least, I don't expect JSON to allow backticks anytime 
soon. Nor do I think that using backticks indiscriminately is good 
practice.)
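The difference is observable; a quick sketch (indirect eval forces a non-strict context, so the demo works even when pasted into strict code):

```javascript
// Build a function source whose first statement is the given "directive".
const src = dir =>
  "(function () { " + dir + "; return (function () { return this; })(); })()";

// Template literal: an expression statement, NOT a Use Strict Directive.
const sloppyThis = (0, eval)(src("`use strict`"));
// String literal: a real directive, so the function body is strict.
const strictThis = (0, eval)(src("'use strict'"));

sloppyThis; // globalThis -- the backtick "directive" did nothing
strictThis; // undefined
```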




Re: Maybe we need a reflect API to iterate over instance members

2015-05-31 Thread Steve Fink

Forgive me for golfing it, but

function getAllPropertyNames(o) {
    if (!o) return [];
    return Object.getOwnPropertyNames(o).concat(
        getAllPropertyNames(Object.getPrototypeOf(o)));
}

or as a generator

function* allPropertyNames(o) {
if (!o) return;
yield* Object.getOwnPropertyNames(o);
yield* allPropertyNames(Object.getPrototypeOf(o));
}

don't seem too onerous.

Though on the other hand, didn't I hear that prototype loops are now 
possible with Proxies? If so, then you'd need to handle that.


Then again, if you're going to handle weird cases, then what should it 
even return if you go through a Proxy's getPrototypeOf trap that mutates 
the set of properties?
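Guarding with a visited set handles the prototype-loop case, at least (a sketch):

```javascript
function* allPropertyNames(o) {
  const seen = new Set();
  while (o !== null && o !== undefined && !seen.has(o)) {
    seen.add(o);
    yield* Object.getOwnPropertyNames(o);
    o = Object.getPrototypeOf(o);
  }
}

// Terminates even on a proxy whose prototype chain loops back on itself:
let p;
p = new Proxy({}, { getPrototypeOf: () => p });
[...allPropertyNames(p)]; // [] rather than an infinite loop
```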


On 05/31/2015 04:42 AM, Gray Zhang wrote:


Since a class's members are non-enumerable by default (which is a good 
choice) we cannot use `for..in` to iterate over all members of an 
instance; the same problem exists for a plain object when we use the 
`Object.defineProperty` API.


In real world there are some scenarios where we need to iterate over 
members, A common example is we need to find all |set{SomeThing}| 
methods so we can do an auto dependency injection.


Certainly we can write a 3rd-party function to find all members 
through prototype chain:


function getAllMembersKeys(obj) {
    let keys = [];

    while (obj) {
        keys.push(...Object.getOwnPropertyNames(obj));
        obj = Object.getPrototypeOf(obj);
    }

    return keys;
}

But it doesn’t look nice and lacks considerations of many things such 
as Symbol’d keys.


Look around other languages with reflection API, most of them would 
provide a method to iterate over all members / properties / methods of 
an instance, so why not we provide a set of utility API:


  * `Reflect.getAllMembersNames`
  * `Reflect.getAllMemberDescriptors`
  * `Reflect.getAllMethodNames`
  * `Reflect.getAllMethodDescriptors`
  * `Reflect.getAllPropertyNames`
  * `Reflect.getAllPropertyDescriptors`





Best regards

Gray Zhang






Re: let function

2015-05-19 Thread Steve Fink

On 05/19/2015 12:23 AM, Alan Schmitt wrote:

On 2015-05-19 06:09, Bergi a.d.be...@web.de writes:


Alternatively just use a single equals sign with a parameter list:

let f(x) = y
let f() = y

This looks very nice indeed.


That visually collides with destructuring for me.

let [a, b] = foo();
let {a, b} = foo();
let f(a, b) = foo(); // Very different

I almost expect that last one to use f as a custom matcher of some sort, 
given the previous two.




Re: Existential Operator / Null Propagation Operator (Laurentiu Macovei)

2015-10-29 Thread Steve Fink

Uh, isn't that a pretty large compatibility risk?



You're suddenly calling doUntestedStuff() where before it was harmlessly 
erroring.
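Concretely, code of the shape Ron mentions below (hypothetical names; the last line uses `?.` only as a stand-in for the proposed plain-dot semantics):

```javascript
const config = {};              // no 'servers' property at all
let host = "localhost";         // intended fallback

try {
  host = config.servers.primary.host; // throws today, so the fallback survives
} catch (e) {
  /* keep the default */
}
host; // still "localhost"

// Under the proposal the same access would silently yield undefined
// and the assignment would clobber the fallback:
const silent = config.servers?.primary?.host; // undefined, no throw
```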



On 10/29/2015 12:30 PM, Ron Waldon wrote:


Has anyone considered just making dot-property access return 
undefined when intermediate values are undefined or null, by default?


Not having to introduce new syntax would be a bonus. I'm trying to 
think of existing code that this would break and can't think of any 
good examples.


The only compatibility issue I have thought of so far is code that 
relies on an Error being thrown but also does not check the value:


```js
let value;
try { value = deep.deep.deep.prop; } catch (err) { /* ... */ }
// use value without even a basic truthy test
```

On Fri, 30 Oct 2015, 06:07  > wrote:



-- Forwarded message --
From: Laurentiu Macovei
To: Sander Deryckere
Cc: "es-discuss@mozilla.org list"
Date: Thu, 29 Oct 2015 19:52:37 +0100
Subject: Re: Re: Existential Operator / Null Propagation Operator

Yes! I have updated my answer using markdown and also posted on
the original issue of TypeScript:
https://github.com/Microsoft/TypeScript/issues/16


Is there a better place to propose it for `ES6`/`ES7` ?

This would be amazing operator!! Especially for
`ES6`/`ES7`/`TypeScript`

```js

var error = a.b.c.d; //this would fail with error if a, b or c are
null or undefined.

var current = a && a.b && a.b.c && a.b.c.d; // the current messy
way to handle this

var currentBrackets = a && a['b'] && a['b']['c'] &&
a['b']['c']['d']; //the current messy way to handle this

var typeScript = a?.b?.c?.d; // The typescript way of handling the
above mess with no errors

var typeScriptBrackets = a?['b']?['c']?['d']; //The typescript of
handling the above mess with no errors

```

However I propose a more clear one - as not to confuse ? from the
a ? b : c statements with a?.b statements:

```js

var doubleDots = a..b..c..d; //this would be ideal to understand
that you assume that if any of a, b, c is null or undefined the
result will be null or undefined.

var doubleDotsWithBrackets = a..['b']..['c']..['d'];

```

For the bracket notation, I recommend two dots instead of a single
one as it's consistent with the others when non brackets are used.
Hence only the property name is static or dynamic via brackets.

Two dots, means if its null or undefined stop processing further
and assume the result of expression is null or undefined. (as d
would be null or undefined).

Two dots make it more clear, more visible and more space-wise so
you understand what's going on.

This is not messing with numbers too - as is not the same case e.g.

```js

1..toString(); // works returning '1'

var x = {};

x.1 = {y: 'test' }; //fails currently

x[1] = {y: 'test' }; //works currently

var current = x[1].y; //works

var missing= x[2].y; //throws exception

var assume= x && x[2] && x[2].y; // works but very messy

```

About numbers two options: Your call which one can be adopted, but
I recommend first one for compatibility with existing rules!

1. Should fail as it does now (`x.1.y` == `runtime error`)

```js

var err = x..1..y; // should fail as well, since 1 is not a good
property name, nor a number to call a method, since it's after x
object.

```

2. Should work since it understands that is not a number calling a
property from `Number.prototype`

```js

var err = x..1..y; // should work as well, resulting 'test' in
this case

var err = x..2..y; // should work as well, resulting undefined in
this case

```

With dynamic names:

```js

var correct1 = x..[1]..y; //would work returning 'test'

var correct2 = x..[2]..y; //would work returning undefined;

```

What do you think folks?

Best Regards,

Laurenţiu Macovei




___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss




Re: Weak Graph

2015-11-06 Thread Steve Fink

On 11/04/2015 08:09 AM, Jussi Kalliokoski wrote:
It provides the needed interface and the unused child revisions get 
cleaned up properly. However:


* This is a complete nightmare for GC performance because of cyclical 
weak references.


Not necessarily. Current Spidermonkey should handle it ok, using a 
linear-time algorithm for weakmap marking. (I think an inverted 
representation would also be good for this? I haven't thought it through.)



* Any reference to a child will maintain references to all its parents.

However this doesn't necessarily need to be the case because the 
stored ancestry is not observable to anything that creates a 
WeakGraph, except to the oldest ancestor that has a reference elsewhere.


I'm not sure if this use case alone warrants adding a new feature to 
the language, or if I'm just missing something and it can be 
implemented with existing constructs or if there should be some other 
lower level primitive that would allow building a WeakGraph on the 
user level.


The brute-force approach would be to give each node its own weakmap, and 
add an entry for *every* ancestor:


```JS
this.getLineage = function (node, ancestor) {
  const lineage = [];
  while (node) {
lineage.push(node);
if (node == ancestor)
  return lineage;
node = node.parentForAncestor.get(ancestor);
  }
}

this.addNode = function (node, parent, ancestor) {
  node.parentForAncestor = new WeakMap();
  for (let key of this.getLineage(parent, ancestor))
node.parentForAncestor.set(key, parent);
};
```

...but notice that addNode now requires an ancestor to be specified, and 
it will only work as far back as that. And of course, this defeats the 
whole point of the exercise, in that it requires space quadratic in the 
number of live nodes. And insertion is linear in the distance from the 
node to the chosen ancestor. Which could still be a win in rare cases, I 
guess, but my point is only to show that the leak could be "fixed".


I should note that in your example, you have a WeakMap with basically 
infinite lifetime. That means that keys always hold values live, in 
which case you might as well store them directly as properties on the 
key (node.parent = parent).
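A sketch of that direct-property version, keeping the WeakGraph surface from the original code (the traversal details are my reconstruction):

```javascript
// With an effectively-immortal WeakMap, keys keep their values alive
// anyway, so a plain property is equivalent and simpler.
function WeakGraph() {
  this.addNode = function (node, parent) {
    node.parent = parent;
  };
  this.getLineage = function (node, ancestor) {
    const lineage = [];
    for (let n = node; n !== undefined; n = n.parent) {
      lineage.push(n);
      if (n === ancestor) return lineage;
    }
    throw new Error("node is not a descendant of ancestor");
  };
}

const g = new WeakGraph();
const root = {}, mid = {}, leaf = {};
g.addNode(mid, root);
g.addNode(leaf, mid);
console.log(g.getLineage(leaf, root).length); // 3: leaf, mid, root
```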


I also pondered what language primitives would fix this case. The 
straightforward approach is to expose something tailored specifically to 
this use case -- call it a WeakList. You cannot look up elements by 
index. You can call WeakList.prototype.get(begin, end) and it will 
return an Array of elements from begin..end (where 'begin' and 'end' are 
actual elements of the list), or undefined if either begin or end is not 
present in the list. Internally, the implementation would allowed to 
discard all leading and trailing dead elements. It would be stored as a 
dag to share space with overlapping lists.
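A non-weak stand-in can pin down that surface (the class name, and what happens when begin comes after end, are my inventions; a real WeakList would additionally be allowed to drop dead leading and trailing elements):

```javascript
// Plain-Array sketch of the WeakList interface described above.
class WeakListSketch {
  #items;
  constructor(items) { this.#items = Array.from(items); }
  // Returns the elements from begin..end inclusive, or undefined if
  // either endpoint is not present (or they are out of order).
  get(begin, end) {
    const i = this.#items.indexOf(begin);
    const j = this.#items.indexOf(end);
    if (i === -1 || j === -1 || i > j) return undefined;
    return this.#items.slice(i, j + 1);
  }
}

const a = {}, b = {}, c = {};
const list = new WeakListSketch([a, b, c]);
console.log(list.get(a, c).length); // 3
console.log(list.get(b, {}));       // undefined: end is not in the list
```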


A totally different option is something I'm calling a VagueMap. I should 
mention here that I don't know of any prior art, nor have I looked for 
any, so my apologies if this is known by another name. A VagueMap is 
sort of the dual of a WeakMap, where the value keeps the entry (and key) 
alive rather than the key keeping the value alive. But to avoid exposing 
GC behavior, you cannot just look up values by key (since then if you 
wanted to know when something gets GC'ed, you'd just stick it in a 
VagueMap under a well-known key and look it up periodically.) Instead, 
get(key) returns a "vague value", which can only be used in two ways: 
for equality testing, and as a VagueMap key. VagueMap lookups would 
treat vague values and their equivalent non-vague values identically. 
VagueMap would also support has().


Note that VagueMap does not automatically keep its keys live (as in, 
only the ones with live values will be kept live.) So you still can't 
iterate over the keys.


This still doesn't fix the original example. The simple replacement of 
WeakMap with VagueMap will "work", but getLineage() will return an array 
of vague values, which aren't much use. So I'll need to add another 
funky feature to VagueMap: if you give it a (non-vague) value, it will 
hand you back the non-vague, guaranteed live, key. I'll call it 
VagueMap.prototype.getKey(vkey, value) where vkey is a possibly-vague 
key that maps to value.


We get back very close to the original code, with an added loop to reify 
the vague nodes (oh, and I include the ancestor in the lineage -- which 
means the do/while loop could now easily be a for loop, but I'll stick 
close to the original):


```JS
function WeakGraph () {
const parentByNode = new VagueMap();

this.getLineage = function (node, ancestor) {
const lineage = [];

let currentNode = node;
do {
lineage.push(currentNode);
if ( !parentByNode.has(currentNode) ) { throw new 
Error("node is not a descendant of ancestor"); }

currentNode = parentByNode.get(currentNode);
} while ( currentNode !== ancestor );
lineage.push(ancestor);

// reify the vague values into live nodes, working back from the
// non-vague ancestor at the end
for (let i = lineage.length - 2; i > 0; i--) {
lineage[i] = parentByNode.getKey(lineage[i], lineage[i + 1]);
}

return lineage;
};
}
```

Re: Swift style syntax

2015-10-13 Thread Steve Fink

On 10/12/2015 11:06 PM, Isiah Meadows wrote:


+1 for operators as functions (I frequently use them in languages that 
have them), but there is an ambiguous case that frequently gets me: 
does `(-)` represent subtraction or negation? It's usually the former 
in languages with operators as functions.


But here's a couple other potential syntactical ambiguities, dealing 
with ASI:


```js
// Is this `x => f(x)` or `x = (>); f(x)`
x =>
f(x)

// Is this `-x` or `-; x`?
-
x
```

Those can be addressed with a cover production to be used for 
expression statements and direct value assignment, requiring 
parentheses to clarify the latter case in each.


A similar ambiguity problem, arguably harder to resolve, is partially 
applied subtraction, such as `(- 2)`. Is that a -2 or is it equivalent 
to `x => x - 2`? I will caution on this idea, as I know that's the 
next logical step.




It it just me? I find all this talk of bare operators to be 
completely... uh, I'll go with "inadvisable".


I can believe that you could carve out an unambiguous path through the 
grammar. But (a) it's going the way of line noise, (b) it uses up lots 
of possibilities for future expansion on something that isn't all that 
useful in the first place, and (c) it seems to be choosing concise 
syntax over readability in a big way.


C++ has an 'operator' keyword (and even then it comes out pretty ugly -- 
operator()(), anyone?) Perl6 has better syntax (syntax syntax?) for this:


infix:<+>
circumfix:«( )»

or whatever. And of course Python uses double __underscores__ with ASCII 
operator names. All those are preferable to bare operators, to me.


   -compose(+, *)(++x, +(3, 4), --y) - (3 + 4) - -(1, 2);

I don't really *want* that to parse! At least make it

  list.sort(#`>`);

or something.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exponentiation operator precedence

2015-08-27 Thread Steve Fink

On 08/27/2015 09:25 AM, Dean Tribble wrote:
Ideally syntax proposals should include some frequency information to 
motivate any change. Is there an easy search to estimate the frequency 
of Math.pow? In my application codebase (financial app with only 
modest JS use), there are very few uses, and there are as many uses of 
Math.sin as there are of Math.pow.


Frequency relative to what, though? If code that does nontrivial math is 
a very small proportion of total JS code, and yet the exponentiation 
operator makes that code much more readable, then what is the 
conclusion? I would argue that ** precedence confusion is irrelevant to 
code that isn't going to use Math.pow in the first place. So it's a 
question of whether ** is a big enough readability win in code that 
computes exponents.




Anecdotally, my eyes caught on: -Math.pow(2,-10*a/1) (from a 
charting library) which makes me not want to have to review code where 
I'm worried about the precedence of exponentiation.




I'd have to write that out: -2**(-10*a/1). That doesn't seem too bad.

For myself, I do very much prefer Math.sqrt(a**2 + b**2) to 
Math.sqrt(Math.pow(a, 2) + Math.pow(b, 2)). The verbosity and uneven 
density of notation is really bothersome -- for any equation like the 
second one, I guarantee that I'll rewrite it on paper to figure out what 
it's saying. (Ok, maybe not for that specific formula, but even there 
I'll mentally render it.) I would not need to do so with the first one. 
Jumping between prefix and infix is jarring.


Then again, I could make the same argument for Math.sqrt(a**2 + b**2) vs 
(a**2 + b**2) ** 0.5. And I don't like the second one much. But people 
don't interchange those when handwriting formulas, either.


Math.sqrt(a.pow(2) + b.pow(2)) is an interesting middle point. I 
initially thought it struck the right balance, but seeing it written 
out, it still looks far inferior to me.


A more complex example might help:

  a * (b - a)**(x - 1/2 * (b - a)**2)

vs

  a * Math.pow(b - a, x - 1/2 * Math.pow(b - a, 2))

vs

  a * (b - a).pow(x - 1/2 * (b - a).pow(2))

For me, the middle one is a mess. I can't make sense of it, and I can't 
spot the common (b - a) expression at all. The first one is as readable 
as such formulas ever are when written out with ASCII text. The third 
one is somewhere in between. I can see the common (b - a), and perhaps 
if I got more used to seeing .pow I could mentally make use of it 
without writing it on paper, but for now I cannot. Part of the problem 
is that I can easily translate x**2 into "x squared", but x.pow(2) 
is "raising x to the power of 2".


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Exponentiation operator precedence

2015-08-27 Thread Steve Fink

On 08/27/2015 11:20 AM, Ethan Resnick wrote:

Long-time esdiscuss lurker; hopefully this perspective is helpful.

I think the problem here is that traditional mathematic notation uses 
visual cues to imply precedence that JS can't take advantage of. When 
-3 ** 2 is written out on paper, the 2 is very clearly grouped 
visually with the 3. In fact, the superscript almost makes the 2 feel 
like an appendage of the 3. That makes it more natural to read it as 
two items: the negative sign, and (3 ** 2).


By contrast, when (-3 ** 2) is written out in code, the negative sign 
is way closer visually to the 3 than the 2 is, so I find myself 
instinctively pulling out a -3 first and reading the expression as 
(-3)**2.


If we're making ** bind tighter than unary -, then I would hope it would 
be written -3**2, not -3 ** 2. The latter is indeed deceptive.


For me, x**y**z is rare enough that I don't really care if ** is right 
associative or nonassociative. Parentheses are part of the cost you have 
to pay for rendering things as plain text -- and yet, I see no reason 
not to make x**y**z just do the right thing.
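Written out with Math.pow, the two possible groupings differ, and right associativity is the one that matches handwritten exponent towers:

```javascript
// x ** y ** z read right-associatively is x ** (y ** z).
console.log(Math.pow(2, Math.pow(3, 2))); // 512: the handwritten reading
console.log(Math.pow(Math.pow(2, 3), 2)); // 64: the left-associative reading
```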


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: rest parameters

2015-10-02 Thread Steve Fink

On 10/02/2015 11:52 AM, Michaël Rouges wrote:

Hi all,

I'm coming to you for a tiny question... excuse me if already replied...

Where the rest parameter are only declarable at the end of the 
arguments list, like this, please?


`
void function (a, ...b, c) {
// b = [2, 3]
}(1, 2, 3, 4);
`

Any real reasons?


I don't know, but I can speculate. It's not at all obvious how ...args 
in the middle should behave: what if you have two rest arguments? Is 
that forbidden, or is one greedy? What if one of the trailing parameters 
has a default value? Also, iiuc the spec treats "undefined" the same as 
"nonexistent" in most places. So what should your function do when 
passed (1, 2, 3, undefined)?


In short, it seems like a hairball of complexity for no real gain.
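For what it's worth, the behavior in the quoted example can be emulated today with a trailing rest parameter (a sketch):

```javascript
// Emulates `function (a, ...b, c)` by peeling the last argument off a
// trailing rest array.
function f(a, ...rest) {
  const c = rest.pop(); // last positional argument
  const b = rest;       // everything in between
  return { a, b, c };
}

const { a, b, c } = f(1, 2, 3, 4);
console.log(b); // [ 2, 3 ], as in the quoted example
```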
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: PRNG - currently available solutions aren't addressing many use cases

2015-12-02 Thread Steve Fink

On 12/01/2015 01:45 PM, David Bruant wrote:

On 12/01/2015 20:20, Michał Wadas wrote:


As we all know, JavaScript as language lacks builtin randomness 
related utilities.
All we have is Math.random() and environment provided RNG - 
window.crypto in browser and crypto module in NodeJS.

Sadly, these APIs have serious disadvantages for many applications:

Math.random
- implementation dependant
- not seedable
- unknown entropy
- unknown cycle
(...)

I'm surprised by the level of control you describe (knowing the cycle, 
seeding, etc.). If you have all of this, then, your PRNG is just a 
deterministic function. Why generating numbers which "look" random if 
you want to control how they're generated?


I don't think the idea is that you need to know the cycle length, it's 
more that the spec does not currently mandate a minimum cycle length so 
implementations can and do implement Math.random in a way that produces 
cycle lengths much too short for some uses that might be expected to be 
reasonable. For example, if the generator internally uses independent 32 
bit values and doesn't mix them together before producing a 64 bit 
result, then the cycle length of each half of that result is at most 
2^32. You could record the whole set of them and perfectly predict the 
sequence with a couple of GB storage, much less if you can side-effect 
the generator you're after by drawing values from it yourself. Which 
perhaps doesn't matter, since you should be using a CPRNG if you're 
worried about prediction in the first place, but having a short cycle 
length for a subset of the bits will still bite you if you're masking 
off most of the bits (directly or indirectly). Having a birthday 
collision can really suck -- you thought you rented out the whole place 
for your party, only to find you have to share it with 3 other people. 
And they like *very* different music.


On the other hand, mandating a minimum cycle length may not help that, 
if the problem is with subsets of bits.
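The state-size bound itself is mechanical: a generator with n bits of internal state must repeat within 2^n draws. A deliberately tiny demonstration (the constants are from a classic 16-bit LCG; the function name is invented):

```javascript
// A 16-bit LCG: only 65536 possible states, so the sequence must
// cycle within 65536 draws, however random it looks locally.
function makeLcg16(seed) {
  let state = seed & 0xffff;
  return () => (state = (state * 25173 + 13849) & 0xffff);
}

const next = makeLcg16(1);
const seen = new Set();
let v = next();
while (!seen.has(v)) {
  seen.add(v);
  v = next();
}
console.log(seen.size <= 65536); // true: cycle capped by the state space
```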


I'm not sure what "unknown entropy" means. I mean, in a way, if you seed 
it then there's zero entropy. Perhaps this refers to the capability of 
pulling from an external entropy source?


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Error stack strawman

2016-02-24 Thread Steve Fink

On 02/19/2016 01:26 AM, Andreas Rossberg wrote:
On 19 February 2016 at 03:13, Gary Guo wrote:


If we are not going to indicate tail call some way, debugging
might be extremely difficult, and the stack result might be making
no sense at all.


A tail call is a jump. Just like other jumps, you shouldn't expect 
their history to be visible in the continuation (which is what a stack 
trace represents). I agree that JS programmers might be surprised, and 
will have to relearn what they know. But wrt to debugging the 
situation is the same as for loops: you can't inspect their history 
either. (And functional programmers in fact see loops as just an ugly 
way to express self tail recursion. :) )


To be even more pedantic: the stack trace isn't "the" continuation, it 
is one possible continuation. Other continuations are possible if you 
throw an exception. I guess you could say the stack trace plus the code 
allows you to statically derive the full set of possible continuations.


But I agree that it's worthwhile to remember the difference, since what 
is being requested for stacks really *is* a history, not a continuation. 
For example, it is desireable to encode "long stacks" or "async stacks" 
or whatever they're being called these days, where eg for an event 
handler you get the stack trace at the point the handler was installed. 
That is not a continuation, that is history. I would be very wary of 
mandating that full history be preserved, since it's easy for it to 
prevent optimizations or inadvertently leak details of the underlying 
implementation (tail calls, inlining, captured environments).


Does it work to specify something like "if and only if the information 
is available, it shall be encoded like this:..."? That can still leak 
information if not handled carefully, but at least it doesn't inhibit 
optimizations.


For a wild handwavy example of an information leak: say you do not 
include inlined calls in stack frames, and you only inline a call after 
the 10th invocation. Further assume that you self-host some JS feature. 
The caller can now learn something about how many times that self-hosted 
feature has been used. That feature might happen to be Math.something 
used only for processing non-latin1 characters in a password, or more 
likely just some feature used only if you are logged into a certain 
site. (Perhaps Error.stack is already specced to avoid this, by 
requiring all frames to be included whether inlined or not? Sorry, I 
don't know anything about it; I'm just posting to ask the question about 
what specifying stack formats encompasses.)


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Error stack strawman

2016-02-24 Thread Steve Fink

On 02/24/2016 01:30 PM, Mark S. Miller wrote:
[2] This solves only one of the cross-realm issue with stacks. It does 
nothing to address worries about cross-realm stacks.




We do have code in FF that handles cross-realm stacks, or at least a 
close moral equivalent to them. The stacks are stored internally as 
objects, and each frame records where it comes from, so a user will only 
see frames that it has privileges for. Obviously, once you convert to a 
string, you're past the point of control.


(Or at least, that's my understanding of what is going on. I'm not sure 
if that stuff is used for Error.stack yet.)


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: es7 proposal/polyfill?: base Map/WeakMap off Proxy + Weak References

2016-02-22 Thread Steve Fink

On 02/19/2016 01:06 PM, Coroutines wrote:

On Fri, Feb 19, 2016 at 1:03 PM, Tab Atkins Jr.  wrote:

On Fri, Feb 19, 2016 at 12:59 PM, Boris Zbarsky  wrote:

On 2/19/16 3:50 PM, Coroutines wrote:

Side discussion: Why does Javascript have this limitation? - what I
view as a limitation?  You'd think this could be supported without
breaking older JS..

I don't see how it could.  I'll bet $50 someone out there is using
obj[location] for example.

Yes, relying on the stringification behavior is very common.
Absolutely no way to change it at this point without something like
AWB's (abandoned) Object Model Reformation proposal

that would allow changing the behavior of [].

Learning to dance around the broken relics of the old world forever... :(


Not in this case, imho. A string-keyed map is a very different data 
structure in my mind than an object-keyed map. An object-keyed map works 
off of object identity, which is straightforward until you start 
introducing value objects. But it also means that in order to look 
something up, you have to have the *original* object. That data 
structure is useful, and I'm glad we finally have it available as Map, 
but restricting property keys to strings (and Symbols) makes things a 
lot simpler both from users' and implementers' points of view. (We 
[SpiderMonkey] used to have object-keyed properties with E4X, and it was 
a nuisance.)
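The identity requirement is the crux; a quick illustration:

```javascript
const m = new Map();
m.set({ x: 1 }, "a");
console.log(m.get({ x: 1 })); // undefined: same shape, different identity

const key = { x: 1 };
m.set(key, "b");
console.log(m.get(key)); // "b": lookup requires the original object
```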


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: GC/Compile requests, with 3D engine demonstration

2016-03-14 Thread Steve Fink

On 03/13/2016 02:50 PM, Brian Barnes wrote:
On Mar 13, 2016, at 5:22 PM, Steve Fink <sph...@gmail.com> wrote:


This is a good time to bring up the other half of my original email 
because a number of other people have chimed in with their 
experiences with GC when attempting to develop more time critical 
applications without stutter.


I really don't think you want a System.gc() call for that. What if 
you call that after you're placed in a background tab, when you're 
sharing the JS heap with a latency-sensitive foreground tab? Do you 
really want to stutter the foreground tab for (up to) a few seconds? 
Probably not, in which case the name System.gc() would be a lie.


I think the closest you would want to go is to offer hints to the 
runtime. AppIsIdleStartingNowForAtLeast(500)? 
IDoNotMindIfYouDontCallMeAgainFor(500)? (units are milliseconds). The 
runtime can ignore it, or it can schedule an up-to-500ms incremental 
GC slice, or whatever. That does not negate all of the issues Boris 
was referring to, but IMHO it's a reasonable middle ground. We could 
double-check it against pending timers or whatever.


System.gc() would have a callback; it would block until you regained 
front status.  That has some edge cases, but that’s something the 
programmer would have to be aware of.


That was one example off the top of my head, which as you say can be 
resolved if specifically addressed, and it still spawns edge cases. 
There are sure to be many other problematic cases, and if you don't 
handle all of them and their edge cases, then by implementing this we 
are likely to paint ourselves into a corner -- you use it, everything 
works fine until some other engine or user code optimization exposes 
edge case X, but we can't fix that without breaking other users.


Not to mention that your specific workaround make GC observable, which 
is an information leak making it possible to depend on GC scheduling, 
which means making it possible to prevent engines' GC behavior from 
changing.


In a specific embedding, what you're asking for is reasonable. I would 
even go so far as saying that it might be good to reserve some chunk of 
the namespace for hints or whatever (though perhaps Symbols make that 
unnecessary.) If you are in an embedding that ships with a specific 
version of a JS engine and doesn't need to share anything with other 
things running JS, then it's fine to give user control over GC 
scheduling, manipulate bare pointers, suppress type guards and operate 
blindly on known types, generate machine code and run it, or whatever 
else your application desires. But you're not going to get stuff like 
that added to the spec for a language that needs to work in environments 
with evolving JS engines running frozen shipped JS code or hostile code 
sharing the same VM.


For all I know, es *may* start carving out parts of the spec that only 
apply to "privileged contexts". I haven't been following it, but I could 
imagine such a thing might be needed for SharedArrayBuffers and related 
things. Heck, browsers have fullscreen modes, where I am free to mock up 
your bank site's login page. But privileged stuff will need very careful 
handling to avoid hobbling ourselves with long-term 
evolution/compatibility hazards and security problems.






The second part of this is native primitive types; having int/etc 
means they can be passed by value which means these checks are 
easier, but that’s probably something others have argued back and 
forth for a long time :)


The dynamic profiling can figure out whether *in practice* particular 
values are always int/etc while certain code executes, and compile 
with that assumption. Declaring types can give that a head start, but 
it'll still need to be double-checked when not easily statically 
provable, and may end up just wasting time prematurely compiling code 
with incorrect assumptions. JS right now is simply too dynamic to 
prevent all possibility of strange things happening to seemingly 
simple values. Besides, just observing the types seems to work pretty 
well in practice. The main benefit of type declarations would be in 
preventing errors via type checking, IMO.


Right, I understand all that, and to me, that would be part of 
compilation.  If a lower level pass through, or if a number of them, 
is required, than that’ll will have to be done (before types.)  If 
there are levels before full compilation that can be skipped, then 
that’s all this hint would ask for.


I have to apologize because I think people keep thinking I’m asking 
for something that solves a specific problem in a specific way; what 
I’m asking for is something that is more contactually simpler … “X 
always gets the most aggressive compilation”.  If that takes multiple 
slow passes, then that’s fine.  It’s not “NO slow passes” it’s “always 
strive to maximize speed over start up time."


Well, "the most aggressive compilation" 

Re: GC/Compile requests, with 3D engine demonstration

2016-03-14 Thread Steve Fink

On 03/14/2016 06:35 AM, Brian Barnes wrote:
The more we discuss this, the more I think this problem isn't solvable 
without something radical that makes Javascript more C like. Which, I 
think, is probably some of the reason for asm.js.


The problem: People want to create realtime games, VR, animations, 
without stutter.  You can get away with this by pre-allocating 
everything into global (I do a lot of that and get solid, predictable 
frame rates: www.klinksoftware.com/ws) GC engines just aren't the 
place for that.


A real multi-part solution would be:

1. Introduce types.  Make them pass by value, and "set by value". This 
allows local variables to be on the stack, and allows for faster to 
compilation steps as you don't have to run functions to analyze the 
types.


foo(x)
{
int y; // local, on stack

foo2(y); // the y inside of foo2 is a local in foo2, passed by value
globalY=y; // this is also copying the value, y is still local

y=0.0; // probably an error, should be forced to convert
y=Math.trunc(0.0); // not an error

return(y); // also a copying of the value
// y is popped from the stack
}

This isn't new, it's how C obviously does it.


Your example is of primitive types, so I will only address that. You are 
asserting that this would help performance. I assert that the existing 
dynamic type-discovery mechanisms get you almost all of the performance 
benefit already (and note that I'm talking about overall performance -- 
code that runs 1 million times matters far, far more than code that runs 
a handful of times during startup.) And they work for more situations, 
since they capture types of unnamed temporaries and cases where the type 
is not statically fixed but during actual execution either never changes 
or stops changing after a startup period. And they do not require the 
programmer to get things right.


Do you have evidence for your assertion? (I haven't provided evidence 
for mine, but you're the one proposing a change.)


2. Introduce a new mode that changes the behavior of new in scripts to 
require a free, say "use free"; .  Heaps are compacted (if required) 
on new, so realtime can avoid that by pre-allocating.


{
let x=new myClass(y);
free(x);
}

This would probably require a separate heap, away from the GC heap.

I'm just thinking that anytime spent on a solution that tries to put 
some kind of control over GC is destined to meet resistance because of 
the variety of GC schemes and worse, never actually solve the problem, 
just move it around.


Please don't ignore resistance just because it's getting in the way of 
what you want. Your needs are real, it's just that there's a huge 
constellation of issues that are not immediately obvious in the context 
of individual problems. Language design is all about threading the 
needle, finding ways to address at least some of the issues without 
suffering from longer term side effects.




Yes, this is something really radical that would be years in showing 
up in any browser.  I think for now I'll just have to stick to a lot 
of pre-allocated globals.


As asm.js shows, you have this already. Your heap is an ArrayBuffer, and 
cannot point to GC things so it will be ignored by the GC. You "just" 
have to manage the space manually. (And you can emscripten-compile a 
malloc library to help out, if you like.)
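A minimal sketch of that manual style (a bump allocator over the buffer; the names are invented here):

```javascript
// The GC sees one Float64Array and never the "objects" packed inside
// it, so allocating and reusing slots costs no GC work.
const heap = new Float64Array(new ArrayBuffer(1 << 16)); // 8192 doubles
let top = 0;

function allocF64(count) { // returns an index standing in for a pointer
  if (top + count > heap.length) throw new Error("out of heap");
  const base = top;
  top += count;
  return base;
}

const vec = allocF64(3);
heap[vec] = 1; heap[vec + 1] = 2; heap[vec + 2] = 3;
console.log(heap[vec + 1]); // 2
```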


Typed Objects should go a long way towards making this nicer to use from 
hand-written JS.


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: GC/Compile requests, with 3D engine demonstration

2016-03-13 Thread Steve Fink

On 03/13/2016 01:06 PM, Brian Barnes wrote:

This is a good time to bring up the other half of my original email because a 
number of other people have chimed in with their experiences with GC when 
attempting to develop more time critical applications without stutter.


I really don't think you want a System.gc() call for that. What if you 
call that after you're placed in a background tab, when you're sharing 
the JS heap with a latency-sensitive foreground tab? Do you really want 
to stutter the foreground tab for (up to) a few seconds? Probably not, 
in which case the name System.gc() would be a lie.


I think the closest you would want to go is to offer hints to the 
runtime. AppIsIdleStartingNowForAtLeast(500)? 
IDoNotMindIfYouDontCallMeAgainFor(500)? (units are milliseconds). The 
runtime can ignore it, or it can schedule an up-to-500ms incremental GC 
slice, or whatever. That does not negate all of the issues Boris was 
referring to, but IMHO it's a reasonable middle ground. We could 
double-check it against pending timers or whatever.



The second part was a hint to tell the engine to always take the most 
aggressive route with optimization; for instance, in Safari’s engine, as I 
remember, I think there are three levels, interpreted, a half-n-half solution, 
and an actual full compile of the code.  This hint would say “always compile to 
native” or if an engine never goes that far, always compile to the VM soup 
(though I suspect at this point most engines can do a native version.)


That's a nice simple mental model, but it's inaccurate in some important 
ways.



This would be used, again, for trading time in one place for time in another.  
A longer start-up time is something you want for a thing that will run 
continually and will almost always be guaranteed to fall into the compile path 
eventually.  It’s not something you’d want for javascript that runs on a normal 
button click to post a form.


But you may *need* to run it in a slower mode 1 or more times in order 
for that native compilation to be effective. The fastest JIT levels 
compile the code under certain simplifying assumptions to make the 
generated code more efficient -- it may assume that you're not going to 
randomly add or delete a property from an object handled by the compiled 
code, for example, so it can compile in direct offsets to the object's 
slots. If you violate one of those assumptions, the compiled code will 
need to be discarded and re-generated with a new weaker set of assumptions.


So you don't want to go straight to the highest optimization level, as 
it might then not optimize as highly. You need to first run in a slower 
mode, perhaps even one that pays some overhead in order to gather 
information about the types and control paths actually used. And that's 
the next gotcha -- it's reasonable to skip collecting that info on the 
first time (or few times?) you run the code, because the majority of 
code is only run once so any gathered profiling info is useless.

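A toy model of the tier-up behavior described above — the threshold, tier names, and call-count trigger are all invented for illustration; real engines tier up based on gathered profiling data, not just call counts:

```javascript
// Toy tiered-execution model: a function runs "interpreted" until it
// has been called `threshold` times, then flips to "optimized".
function makeTieredFunction(body, threshold = 3) {
  let calls = 0;
  let tier = "interpreter";
  return function (...args) {
    calls++;
    if (tier === "interpreter" && calls >= threshold) {
      tier = "optimized"; // a real engine would also require profiling data
    }
    return { result: body(...args), tier };
  };
}

const f = makeTieredFunction((x) => x * 2);
console.log(f(1).tier); // "interpreter"
console.log(f(2).tier); // "interpreter"
console.log(f(3).tier); // "optimized"
```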

Which is not to say that there's nothing useful you can tell the system. 
If you hinted to it that something was going to be important and run 
frequently, then it could choose to gather profiling information earlier 
and additionally lower the threshold for jumping up optimization levels. 
As with the GC case, though, you do *not* want to tell the VM exactly 
what to do. The best approach might be something like: spawn off a 
background compile, and in the meantime interpret the code without 
gathering profiling info, then switch to the compiled profiling code 
when it's ready, and then do an optimized compile based on that 
information as soon as you've observed "enough". The user code has no 
way to know how long that background compilation will take, whether 
there are spare resources to do it in the background, etc. And it varies 
by platform. So at best, I think you can drop hints like "this code is 
going to run a lot, and I'm pretty sure its runtime matters to me." In 
practice, people will end up cutting & pasting the magic bits that make 
things fast from stackoverflow and misapplying them all over the place, to 
the extent that engines might end up just completely ignoring them, but 
it may also turn out that they're a good enough signal that engines will 
pay attention. I don't know; I can't predict.




Being compiled gives you another benefit, say you have this class:

class Example {
  test() {
    let x, y, z;
    ...
    return x;
  }
}

The locals y and z never move beyond the function scope; in this manner 
they could be stack based variables that never touch the heap (yes, I know 
that’s probably a very difficult implementation.)  Or even variables that fall 
into a special section of the heap that only deals with locals that never scope 
outside the function and are always cleaned up automatically on function exit 
(basically, reference counting where you know the reference is always 0.)
I'm a little confused about variables vs values here, but 

Re: Is `undefined` garabage collectable?

2016-05-04 Thread Steve Fink

On 05/04/2016 01:43 PM, /#!/JoePea wrote:
For example, I have some code that uses a Map just to keep a 
collection of things (the keys) but values are not important, so they 
are undefined, like this:


```js
let something = {}
let otherThing = {}
let m = new Map

m.set(something)
m.set(otherThing)
```

where the values for those object keys are `undefined` since there's 
no second arg to `m.set`. If I add the line


```js
m.clear()
```

and retain references to `something`, `otherThing`, and `m`, I know 
that those objects won't be GCed. What about the `undefined` values? 
Are those `undefined` values something that the GC has to collect? Or 
do `undefined` values literally reference nothing, not needing to be 
collected?


First, use Set instead of Map if a set is what you want.

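For instance, the Map-of-undefined example from above, rewritten with Set:

```javascript
const something = {};
const otherThing = {};

// A Set expresses "a collection of keys with no values" directly,
// instead of simulating it with a Map whose values are all undefined.
const s = new Set();
s.add(something);
s.add(otherThing);

console.log(s.has(something)); // true
s.clear();
console.log(s.size); // 0
```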
Second, this is an implementation question, since current GC is 
unobservable except for performance and even with WeakRef or something 
making it observable, it wouldn't come into play for your example. So 
this isn't really the list for it.


But nobody's going to make undefined be a GC thing.

On the other hand, there are language-invisible entries in a hashtable 
hiding behind your Map or Set. And those *might* be GC-able. Heck, they 
probably will be, since you wouldn't really want to run out of memory if 
you repetitively threw stuff into a Set and clear()ed it out over and 
over again.




Just wondering because I want to avoid GC while rendering animations 
at 60fps. I know I can prevent GC if I retain a value to some 
constant, as in


```js
let something = {}
let otherThing = {}
const foo = true
let m = new Map

m.set(something, foo)
m.set(otherThing, foo)

m.clear()
```

so then if I retain the reference to `foo` then there's no GC; I'm 
just sticking things in and out of the Map, but I'm curious to know 
how `undefined` is treated, because if that prevents GC, then the code 
can be cleaner.


In practical terms, the behavior with undefined is not going to have any 
more GCs. But are you sure the above is completely GC-safe?


For example, in SpiderMonkey, the below GCs 30 times when run from the 
shell:


var o1 = {};
var o2 = {};
var s = new Set();

for (i = 0; i < 1000; i++) {
  s.add(o1);
  s.add(o2);
  s.clear();
}

If I remove the s.add lines, it will GC 2 times (there are 2 forced 
shutdown GCs).


The only v8 shell I have lying around is too old (3.14.5.10) to have 
Set, so I can't tell you what it would do.


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Feature-Request: Add Range type data

2016-07-15 Thread Steve Fink

On 07/14/2016 09:33 PM, Dayvson Lima wrote:

Example:

var myRange = new Range(0,4);

myRange == (0..4)   #=> true


This (0..4) syntax doesn't exist, afaik. Do you mean myRange == 
[0,1,2,3,4]? Given that [1,2] != [1,2], I don't think so. I'm assuming 
you meant that as shorthand.





new Array(myRange)  #=> [0, 1, 2, 3, 4]


I'm not sure what this gives you over

  var Range = function*(start, end) { let i = start; while (i <= end) 
yield i++; };


  var myRange = Range(0, 4);
  Array.from(myRange); // [0, 1, 2, 3, 4], but it empties out myRange
  [...Range(0, 4)];    // [0, 1, 2, 3, 4]

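One caveat with a bare generator like the Range above: the result is one-shot. A hedged sketch of a reusable variant, which hands out a fresh iterator on each use:

```javascript
// A reusable Range: an iterable object whose Symbol.iterator method
// returns a fresh generator on every call, so it can be consumed
// more than once (unlike a bare generator object).
function Range(start, end) {
  return {
    *[Symbol.iterator]() {
      for (let i = start; i <= end; i++) yield i;
    },
  };
}

const r = Range(0, 4);
console.log([...r]); // [0, 1, 2, 3, 4]
console.log([...r]); // [0, 1, 2, 3, 4] again -- not emptied
```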

var charRange = new Range('a', 'd');  #=> ('a'..'d')


Ugh. This is very latin1-centric. What is 'a'..'d', again? a ä á à b ç c 
d, perhaps? (Yes, charCodeAt(0) offers a possible interpretation, but 
it's somewhat random.) And what is Range('aa', 'bb')? Range('a', 'bb')? 
Range('A', 'a')? Keep away from characters; they aren't numbers drawn 
from any useful 1-dimensional space.


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proposal: anaphoric if and while syntax

2016-09-15 Thread Steve Fink

On 09/12/2016 05:32 PM, Danielle McLean wrote:

In current ECMAScript, it is legal to place a variable declaration inside the
initialiser of a `for` loop, as well as to declare the variable used by a
`for...in` or `for...of` loop within the declaring expression:

 for (let i = 0; i < 5; ++i) console.log(i);
 for (let item of collection) process(item);

When this syntax is used with `let` or `const`, the resulting variable is
scoped to the loop and is not visible to the rest of the surrounding block.

I propose that this syntax be extended, making it legal to place a variable
declaration within the condition of an `if` or `while` statement. Any truthy
value will cause the `if` block to run or `while` loop to repeat, as usual -
the advantage is that the particular truthy value is bound to a variable and
can be used inside the conditional block.


My initial reaction was positive, but now I don't think it works.

First, other places in the grammar do not restrict let/const to a single 
variable. Should


  if (let a=0, b=1, c=0) { ... }

execute the if block or not? The obvious solution is to require a single 
variable, which means the grammar for these let/consts is different from 
others. What about


  x = { a: 1 };
  if (let {a} = x) { ... }

Second, that previous example makes it unclear to me at first glance 
what the intended semantics *should* be. I could imagine this printing 
either 1 or 2:


  h = { foo: 0};
  if (let {bar=1} = h) {
print(1);
  } else {
print(2);
  }

Is the conditional based on the variable's final value, or on whether or 
not the destructuring found a match? I could argue for either one, so 
even if there's a natural way to resolve my first problem, I think the 
code looks ambiguous to the eye.


  if (let { children } = node) {
print("interior node");
  } else {
print("leaf node");
  }

Again, the simplest way to resolve this is to restrict it to "let/const 
IDENTIFIER = expression", but it feels weird to have different rules for 
this particular case. for(let...) on the other hand, does not attempt to 
use the let expression as a value, so it does not encounter any of these 
problems.


As a minor issue, it also feels a little awkward to special-case this 
conditional expression. I can do


  if (let x = foo()) print(x)

but not

  (let x = foo()) && print(x)

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Power operator, why does -2**3 throws?

2016-10-18 Thread Steve Fink
If tc39 wanted to implement it one way or the other, they would indeed 
use precedence. The problem is that the precedence of unary '-' vs 
binary '**' is ambiguous *between different people's heads* -- not just 
a little, but a lot. So whichever precedence you pick, some people will 
be very surprised. It will be *obviously* wrong to some people, and 
obviously correct to others.


No matter how good your mechanism is, it can't fix a policy problem.

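The two readings genuinely diverge, which is exactly what the early SyntaxError forces you to spell out with parens:

```javascript
// The unparenthesized form `-2 ** 2` is an early SyntaxError in
// ES2016+. The two explicit readings give different answers:
console.log((-2) ** 2); // 4  -- the "(-x) ** y" reading
console.log(-(2 ** 2)); // -4 -- the "-(x ** y)" reading
```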
On 10/18/2016 01:05 AM, medikoo wrote:

There are many other cases when with no parens involved, people have
different expectations on the outcome.
If expression looks ambigous the actual result always depends on operators
precedence, it's how language worked for years, and I don't remember any big
problems due to that.


Jordan Harband wrote

It's quite simple (as has already been stated): some people expect `-x **
y` to be `-(x ** y)`. Some expect it to be `(-x) ** y`.

The early SyntaxError ensures that nobody is confused - programmers will
immediately add parens to disambiguate.

Avoiding a potential footgun for the next 50 years, at the insignificant
cost of adding two characters so that it parses seems like a very cheap
price to pay.

On Tue, Oct 18, 2016 at 12:20 AM, medikoo 
medikoo+mozilla.org@

wrote:


I must say throwing here, instead of relying on math dictated operators
precedence looks really bad.
It's very surprising to those well experienced with the language, and
totally inconsistent with how operators worked so far (there is no
previous
case where one will throw for similar reason).

Also argument that it's inconsistent with Math.pow(-2, 2), is total miss
in
my eyes.
I believe to most programmers `Math.pow(-2, 2)`, translates to
`(-2)**(2)`
and not to `-2**2`,
same as `Math.pow(a ? b : c, 2)` intuitively translates to `(a ? b :
c)**(2)` and not to `a ? b : c**2`




--
View this message in context: http://mozilla.6506.n7.nabble.
com/Power-operator-why-does-2-3-throws-tp359609p359731.html
Sent from the Mozilla - ECMAScript 4 discussion mailing list archive at
Nabble.com.
___
es-discuss mailing list


es-discuss@

https://mail.mozilla.org/listinfo/es-discuss


___
es-discuss mailing list
es-discuss@
https://mail.mozilla.org/listinfo/es-discuss





___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Weak Reference proposal

2016-12-27 Thread Steve Fink

On 12/27/2016 04:45 AM, Isiah Meadows wrote:
The weak reference proposal hasn't seen a lot of 
activity, and I haven't found much news elsewhere on it. What's the 
status on it?


Where I'm building a language-integrated process pool in Node.js, 
complete with shared "references" and async iterator support, I really 
badly need weak references to avoid otherwise inevitable memory leaks 
across multiple processes if the references aren't explicitly 
released. So far, my only option is to take a native dependency (I 
have no other dependencies), but that's very suboptimal, and it 
eliminates the possibility of porting to browsers. So I really badly 
need language-level weak references.


Would weak references be enough to solve cross-process garbage 
collection? How would you recover a cycle of references among your 
processes?


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proposal: Boolean.parseBoolean

2017-03-18 Thread Steve Fink

On 03/16/2017 09:40 PM, Dmitry Soshnikov wrote:
On Thu, Mar 16, 2017 at 7:04 PM, Karl Cheng wrote:


On 17 March 2017 at 08:03, Ben Newman wrote:
> Just to check my understanding, would
>
>   Boolean.parseBoolean = function (value) {
> return !! (value && JSON.parse(String(value)));
>   };
>
> be a reasonable polyfill for your proposed function?

Not quite -- that would throw for strings that are not valid JSON,
e.g.:

```
Boolean.parseBoolean('{dd]');
```

It'd probably be more like:

```
Boolean.parseBoolean = function (val) {
  if (val === 'false') return false;
  return !!val;
};
```


Looks good either way (probably worth making it case-insensitive).

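A case-insensitive variant of that polyfill might look like the sketch below — `Boolean.parseBoolean` is only a proposal, so this is written as a free function with invented semantics:

```javascript
// Hypothetical case-insensitive parseBoolean: only the string "false"
// (in any case, ignoring surrounding whitespace) maps to false;
// everything else falls through to ordinary truthiness.
function parseBooleanLoose(val) {
  if (typeof val === "string" && val.trim().toLowerCase() === "false") {
    return false;
  }
  return Boolean(val);
}

console.log(parseBooleanLoose("False")); // false
console.log(parseBooleanLoose(""));      // false (empty string is falsy)
console.log(parseBooleanLoose("no?"));   // true (any other non-empty string)
```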

There are many, many reasonable choices for a function that maps a 
string to a boolean. Even more for a function that maps an arbitrary 
value to a boolean. The choice of the function is highly context 
dependent. That context includes language/locale/whatever the right l10n 
term is. It's true that JS could arbitrarily pick one, but then it would 
implicitly be favoring one context over another. And not even Node and 
the Web would completely agree on the most appropriate definition. It 
makes sense for JSON to pick a single function, because it's a specified 
interchange format.


-1 from me.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Declaration types inside destructuring declarations

2017-07-07 Thread Steve Fink

On 07/06/2017 10:39 AM, Oriol _ wrote:

And why not just use

```js
const result = complexExpression();
const {a} = result;
let {b} = result;
```

and with arrays:

```js
const result = complexExpression();
const a = result[0];
let b = result[2];
```


That's exactly what I do now. It's tolerable, but inelegant. I suspect 
it's also not identical semantics, with getters being called different 
numbers of times and things, which is mostly irrelevant but could matter 
for optimizations. Or maybe destructuring assignment is specced to be 
identical, I don't know.


The array ones are pretty ugly. If I'm packing multiple return values 
into an array, then the numeric indexes are pure noise.


  const [a, _, b] = complexExpression();

is *way* better than

  const a = result[0];
  const b = result[2];

IMHO, and similarly for objects. Not to mention

  const [a, [b, c], d] = complexExpression();

My coding aesthetic is to have as much of the program text as possible 
be stuff related to the problem I'm solving, with some amount of 
unavoidable wiring (the amount varies widely by language). A temporary 
like this requires a name, which with my style implies that it is 
significant enough to deserve a name. A reader/reviewer must come to the 
realization that it's just a workaround for a language deficiency and 
not something semantically meaningful, which is a small speed bump to 
understanding. (And in fact, even if the facilities *were* available, I 
would at times introduce an intermediate variable anyway, if I felt it 
aided understanding of the semantics. The presence or absence would be a 
conscious choice based on conveying meaning to the (human) reader, even 
though the computer couldn't care less. If there's a good name for the 
intermediate, it should probably be there, and if not, it shouldn't.)


In the whole scheme of things, this is minor enough that I wouldn't 
personally argue for any special syntax here, but I could support 
something if others were interested and it had clean syntax.



If nothing else references `result`, it can be garbage-collected.


In the absence of optimizations (scalar replacement), there's no GC 
difference here. complexExpression() is going to create and return a 
garbage-collectable object whether you bind it to a name or not. The 
optimization is certainly simpler and I would guess much more likely to 
be applied if you don't have the temporary, though.


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Declaration types inside destructuring declarations

2017-07-06 Thread Steve Fink

On 07/03/2017 12:25 PM, Jordan Harband wrote:

```
const { a } = o;
let { b } = o;
b = 1;
```
seems like a much simpler workaround than adding this complexity to 
the language.


I dunno. I've wanted this numerous times. The use case is really

  const { a, let b } = complexExpression();

A typical scenario is when I'm splitting something up and returning the 
parts, and I want one for a short-lifetime const and the other to update 
a state variable.


On the other hand, it reads pretty oddly to me, and I would probably 
vote against complicating the parsing for this anyway.


  { const a, let b } = complexExpression();

reads a little better to me. Another alternative would be

  const { a } = let { b } = complexExpression();

and I suppose that suggests a better workaround than what I normally 
use. Workaround:


  let b;
  const { a } = { b } = complexExpression();

Or with arrays:

  let b;
  const [ a ] = [ ,,b ] = complexExpression();

Tangent: it would be nice to have a "don't care" value; those commas are 
hard to spot. 'undefined' doesn't work and is too long anyway. _ is ok 
if you predeclare with let _; but you can only use it once. This won't work:


  const [ _, a, _, b ] = o;

nor will

  let _;
  const [ _, a ] = [ b, _, c ] = foo();

I guess it's better to *not* predeclare with let, but then you pollute 
the global object, and you still can't use it with


  const [ _, a ] = foo();
  const [ _, b ] = bar();

Given your two examples, I'd find it bizarre for one to work and the 
other not, so we'd want to support both. It also raises the question 
of declaration-less assignments - `({ a, b } = o);` could become `({ 
const a, let b } = o);`?


On Mon, Jul 3, 2017 at 10:49 AM, Bob Myers wrote:


Totally minor, but

```
const {a, b} = o;
b = 1;
```

Complains that `b` is `const` and can't be assigned.

```
let {a, b} = o;
b = 1;
```

Now lint complains that `a` is never modified, and ought to be
`const`.

So I would like to write:

```
const {a, let b} = o;
b = 1;
```

or alternatively

```
let {const a, b} = o;
b = 1;
```




___
es-discuss mailing list
es-discuss@mozilla.org 
https://mail.mozilla.org/listinfo/es-discuss





___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Removal of language features

2017-07-22 Thread Steve Fink
This makes sense to me. Though I kind of feel like the discussion has 
veered off on a less useful direction because of reactions to words like 
"policing" or "gatekeeping". It may be more productive to consider 
whether it might be useful to have a mechanism whereby frameworks could 
leverage the expertise of people close to tc39. If I were a framework 
author (and I'm not), I would appreciate having the ability to say "hey, 
I'm thinking of doing X. What current or potential problems could X run 
into with respect to ES?" The expectation is that I would take the 
feedback into account (so tc39 people wouldn't feel like they were 
shouting into the void, or participating in a meaningless feel-good 
opportunity.) TC39 would benefit by having some degree of influence (not 
control!) over the more unfortunate directions of frameworks, as well as 
getting more exposure to the sorts of problems people are running into.


Anyway, I don't have a dog in any of these races. (Hell, I'm more of a 
cat person to begin with.) I just see the conversation taking a less 
than useful path, and wanted to point it out.


On 07/22/2017 11:35 AM, Naveen Chawla wrote:

Typescript allows breaking changes, ES doesn't.

Hence it would be an acceptable decision for ES to clash with an 
existing Typescript keyword and force Typescript to update accordingly.


Typescript developers shouldn't be unprepared, and ES can continue on 
its path.


None of this makes Typescript "bad". Developers can keep using their 
existing version of Typescript and its transpiler if they don't want 
to risk disruption.


So this kind of works for everybody: those who want bleeding edge 
ideas implemented and are prepared to update in the face of breaking 
changes can use e.g. Typescript and keep updating its version; those 
who want current bleeding edge ideas implemented but not risk breaking 
changes can use e.g. Typescript but stick to the same version; those 
who want to use the latest features of ES can do so directly; those 
who want old ES code to continue to work can have that. So it seems 
all of these cases are serviced OK.


I'm not sure it's TC39's job to mark the implementation of preliminary 
ideas as "unfriendly". If anything such implementations could expose 
any weaknesses of these ideas such that they can be improved upon, or 
if not, exposed as required as-is, potentially more clearly than a 
hypothetical discussion on them, and that would carry value in of itself.


So Javascript and Typescript serve different purposes. Typescript, 
being as it is transpiled to Javascript, has the luxury of not having 
to be backwards compatible, whereas because Javascript is run directly 
on browsers, it has to be.


On Sat, 22 Jul 2017 at 23:26 Andrea Giammarchi wrote:


CSP to name one, but you picked 1% of my reply.

On Sat, 22 Jul 2017 at 19:52, Claude Petit wrote:

“TC39 consider the usage of `eval` inappropriate for production”

And what about dynamic code, expression evaluation, ...? Who woke
up one day and decided that nobody should use “eval”?

*From:* Andrea Giammarchi [mailto:andrea.giammar...@gmail.com]
*Sent:* Saturday, July 22, 2017 1:44 PM
*To:* kai zhu
*Cc:* es-discuss
*Subject:* Re: Removal of language features

answering to all questions here:

> What problems would this address?

It will give developers a clear indication of what's good and
future proof and what's not so cool.

MooTools and Prototype extending natives in all ways didn't
translate into "cool, people like these methods, let's put
them on specs" ... we all know the story.

Having bad practices promoted as "cool stuff" is not a great
way to move the web forward, which AFAIK is part of the
manifesto too.

> In general, the committee sees any tool with significant adoption as 
an
opportunity to learn/draw ideas from, not a plague.

That's the ideal situation, reality is that there are so many
Stage 0 proposals instantly adopted by many that have been
discarded by TC39.

This spans to other standards like W3C or WHATWG, see Custom
Elements builtin extends as clear example of what I mean.

Committee might have the *right* opinion even about proposed
standards, not even developers experimenting, so as much I
believe what you stated is true, I'm not sure that's actually
what happens. There are more things to consider than hype, and
thanks gosh it's like that.

> you wouldn't see any interest in policing libraries and frameworks 
from
the 

Re: Removal of language features

2017-07-22 Thread Steve Fink

On 07/21/2017 03:00 PM, kai zhu wrote:
Can you produce any data at all to back that up? I've never seen any 
appetite in that regard at all.
no hard data admittedly.  i regularly attend tech meetups in hong 
kong.  at these gatherings, the general sentiment from frontend 
developers is that creating webapps has gotten considerably more 
difficult with the newish technologies.  even the local presenters for 
react and angular2+ at these talks can’t hide their lack of enthusiasm 
for these frameworks (like they’re doing it mainly to hustle more 
side-business for themselves).  on-the-job, we all generally try to 
avoid taking on such technical risks, until we are inevitably asked to 
by our manager.  the ones i see who are enthusiastic are typically 
non-frontend-engineers who write mostly backend nodejs code, that 
isn’t all that more scalable or interesting with what you can do in 
java/c#/ruby.


I think this is mixing up frameworks with the language. There is indeed 
extreme framework fatigue, and has been for quite some time. Language 
changes have been much slower and mind-bending, and it is my 
(uninformed) impression that people generally haven't had too much 
difficulty incorporating them. Or at least, not nearly as much as 
learning the mindset of eg React or Flux or Angular or whatever. And 
there seems to usually be a sense of relief when something gets added to 
the language that removes the need for the workarounds that the 
libraries and frameworks have been using for some time.


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: InterleavedTypedArray type

2017-07-03 Thread Steve Fink

On 07/02/2017 11:20 AM, Lars Hansen wrote:
On Sun, Jul 2, 2017 at 9:12 AM, J Decker wrote:




On Sun, Jul 2, 2017 at 8:25 AM, Lars Hansen wrote:

The TypedObjects proposal does this, for what it calls
non-opaque types (you can define types and then map them onto
an ArrayBuffer in various ways).  I'm not 100% sure what the
latest text is, I expect it is here:
https://github.com/tschneidereit/typed-objects-explainer but
it could also be here:
https://github.com/nikomatsakis/typed-objects-explainer.

That's about a single structure; as is the thing Isiah suggested
(ref-struct) and not an array of packed structures such as would
be used for interleaved vertex data.


​No, the TypedObjects proposal allows for packed arrays of structures, 
without references.  See 
https://github.com/tschneidereit/typed-objects-explainer/blob/master/core.md#struct-arrays.


--lars​

TypedObjects is currently a stalled proposal.  I expect it may
be revived when WebAssembly integration into JS becomes a more
seriously discussed topic.



TypedObjects are exactly what you want for this sort of use case, and 
are really quite nice. I'm no expert, but TypedArrays probably ought to 
be subsumed by the TypedObject spec since AFAICT they are a proper 
subset of TypedObject arrays, at least for practical purposes.


Spidermonkey has had them implemented since sometime in 2013, though we 
haven't used them much and the constructors are of course not exposed to 
the Web. (And the implementation of TypedArrays is still separate, and 
has better JIT support.) They're really quite nice when you have the 
sorts of problems they're meant for. For other problems, I would guess 
they would be quite an attractive nuisance. ;-)


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Object.isEqual

2017-05-01 Thread Steve Fink
It would be nice to have *something* for this. Some remaining problems I 
see with using JSON serialization, let's call it JSON.isEqual:

 - JS has order, JSON does not
 - JSON lacks NaN, +/-Infinity (and JSON.stringify maps these to null, 
which means JSON.isEqual({x: 0/0}, {x: 1/0}) would be true)

 - cycles
 - ...and everything under your "trivial generalisation"

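The NaN/Infinity point is easy to demonstrate:

```javascript
// JSON.stringify maps NaN and +/-Infinity to null, so any equality
// based on serialized output cannot distinguish them:
const a = JSON.stringify({ x: 0 / 0 }); // NaN serializes as null
const b = JSON.stringify({ x: 1 / 0 }); // Infinity serializes as null
console.log(a); // '{"x":null}'
console.log(b); // '{"x":null}'
console.log(a === b); // true, even though NaN and Infinity are different
```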
It still seems like it'd be unfortunate if !JSON.isEqual({foo: val1}, 
{foo: val2}) where val1 === val2 (because val1/2 is not serializable, eg 
it has a cycle).



Also, what is

var x = 4;
JSON.isEqual({get foo() { return x++; }}, {foo: 4})

? If you went purely by "whatever JSON.stringify would return", then 
this would be true once and false afterwards.


This may seem like nitpicking, but if you don't nail down the exact 
semantics, then engines will end up doing the JSON serialization and a 
string compare, which rather defeats the purpose. If you stick to 
something simple like comparing JSON.stringify output, then they will 
pretty much *have* to do this, since there are so many observable side 
effects like getter invocation and proxy traps. You *could* define 
semantics that cover a large percentage of the interesting cases, but 
JSON isn't going to be of much help.


And for the record, JSON does not have an intuitive semantics at all. It 
has intuitive semantics for a small subset of values, a subset that is 
rarely adhered to except near interchange points where JSON makes sense. 
(And even then, it's common to accidentally step outside of it, for 
example by having something overflow to Infinity or accidentally produce 
a NaN.)


On 05/01/2017 02:04 PM, Alexander Jones wrote:
I hear this argument a lot but it strikes me with cognitive 
dissonance! JSON defines a very intuitive notion of object 
value-semantics - whether the serialized JSON is an equivalent string. 
Granted that many value types are not supported by JSON, but it's a 
trivial generalisation.


Let's just give the above a name and get on with it. For 99% of use 
cases it would be ideal, no?


Thoughts?

On 1 May 2017 at 20:58, Oriol _ wrote:


This is not easy to generalize. Comparing objects is one thing
lots of people want, but not everybody needs the same kind of
comparison.
For example, you compare own property strings. But what about
symbols? Somebody might consider two objects to be different if
they have different symbol properties.
Or the opposite, somebody may think that checking enumerable
properties is enough, and non-enumerable ones can be skipped.
Then some property values might be objects. Are they compared with
=== or recursively with this algorithm (be aware of cycles)?
Similarly, for the [[Prototype]]. Do inherited properties matter?
Should [[Prototype]]s be compared with === or recursively?
There is also the problem of getters: each time you read a
property, it might give a different value! You might want to get
the property descriptor and compare the values or the getter
functions.
And then there are proxies. Taking them into account, I don't
think there is any reasonable way to compare objects.

So I think it's better if each person writes the code that
compares objects according to their needs.

--Oriol



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Lazy evaluation

2017-09-11 Thread Steve Fink

On 9/11/17 5:36 AM, Matthew Robb wrote:
> I think it's irrelevant if internally VMs are not too happy. VMs are 
there to solve our problems, not vice-versa ;-)

This ^ is very important for everyone to get on board with. 
Regardless, the cost should be negligible as the shape is only changing 
at the point of delayed init. This will cause, for example V8, to deop 
the object and have to build a new hidden class but only the one time. 
I guess it would potentially be interesting to support an own property 
that when undefined would delegate up the proto chain.


(I don't know, but) I would expect it to be worse than this. The shape 
is changing at the point of delayed init, which means that if an engine 
is associating the possible set of shapes with the constructor (or some 
other form of allocation site + mandatory initialization), then that 
site will produce multiple shapes. All code using such objects, if they 
ever see both shapes, will have to handle them both. Even worse, if you 
have several of these delayed init properties and you end up lazily 
initializing them in different orders (which seems relatively easy to 
do), then the internal slot offsets will vary.


You don't need to bend over backwards to make things easy for the VMs, 
but you don't want to be mean to them either. :-)


Not to mention that the observable property iteration order will vary.

On Mon, Sep 11, 2017 at 7:09 AM, Andrea Giammarchi wrote:


Hi Peter.

Unless you have a faster way to do lazy property assignment, I
think it's irrelevant if internally VMs are not too happy. VMs are
there to solve our problems, not vice-versa ;-)

Regards



On Mon, Sep 11, 2017 at 11:54 AM, peter miller wrote:

Hi Andrea,

```
class CaseLazy {
  get bar() {
    var value = Math.random();
    Object.defineProperty(this, 'bar', {value});
    return value;
  }
}
```


Doesn't this count as redefining the shape of the object? Or
are the compilers fine with it?





Re: Lazy evaluation

2017-09-12 Thread Steve Fink
My intent was only to respond to the performance analysis, specifically 
the implication that the only performance cost is in building the new 
hidden class. That is not the case; everything that touches those 
objects is affected as well.


Whether or not it's still the right way to accomplish what you're after, 
I wasn't venturing an opinion. I could probably come up with a benchmark 
showing that your WeakMap approach can be faster -- eg by only accessing 
the property once, but feeding the old and new versions of the object 
into code that executes many many many times (doing something that never 
looks at that property, but is now slightly slower because it isn't 
monomorphic). But I suspect that for practical usage, redefining the 
property *is* faster than a WeakMap.
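For reference, a sketch of the WeakMap flavor being compared (the class name is invented): the cached value lives in a side table, so the instance's shape never changes, at the cost of a WeakMap lookup on every access.

```js
// Side-table caching: the object is never reshaped after construction.
const barCache = new WeakMap();

class CaseWeakMap {
  get bar() {
    if (!barCache.has(this)) barCache.set(this, Math.random());
    return barCache.get(this);
  }
}
```

Both versions compute `bar` once per instance; they differ only in where the cached value lives.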


If I were to look beyond for other solutions for your problem, then I'm 
just speculating. Can decorators populate multiple properties once the 
expensive work is done?


I really want to tell the VM what's going on. I guess if it knew that 
accessing a getter property would convert it into a value property, and 
that it was doing something that would access the getter, then it could 
know to use the outgoing shape instead of the incoming shape. If only it 
knew that the getter was pure... but that way lies madness.


Given that most code that would slow down would also trigger the lazy 
defineProperty(), it's really not going to be that much of an issue. Any 
access after the first will see a single shape.


meh. Just take the perf hit, with awareness that you may be triggering 
slight slowdowns in all users of that object. Or you might not. I doubt 
it'll be that big, since you'll probably just end up with an inline 
cache for both shapes and there won't be all that much to optimize based 
on knowing a single shape.


Oh, and I think I was wrong about property enumeration order. The 
properties already existed, so defineProperty shouldn't modify the order 
IIUC. (I am awful with language semantics.)


On 9/11/17 2:48 PM, Andrea Giammarchi wrote:
Steve it's not solved in any other way. Even if you use a WeakMap with 
an object, you gonna lazy attach properties to that object.


I honestly would like to see alternatives, if any, 'cause so far there 
is a benchmark and it proves already lazy property assignment is 
around 4x faster.


So, it's easy to say "it's not the best approach" but apparently hard 
to prove that's the case?


Looking forward to see better alternatives.


On Mon, Sep 11, 2017 at 8:15 PM, Steve Fink <sph...@gmail.com> wrote:


On 9/11/17 5:36 AM, Matthew Robb wrote:

> I think it's irrelevant if internally VMs are not too happy. VMs
are there to solve our problems, not vice-versa ;-)

This ^ is very important for everyone to get on board with.
Regardless, the cost should be negligible, as the shape is only
changing at the point of delayed init. This will cause, for
example, V8 to deopt the object and have to build a new hidden
class, but only the one time. I guess it would potentially be
interesting to support an own property that, when undefined, would
delegate up the proto chain.


(I don't know, but) I would expect it to be worse than this. The
shape is changing at the point of delayed init, which means that
if an engine is associating the possible set of shapes with the
constructor (or some other form of allocation site + mandatory
initialization), then that site will produce multiple shapes. All
code using such objects, if they ever see both shapes, will have
to handle them both. Even worse, if you have several of these
delayed init properties and you end up lazily initializing them in
different orders (which seems relatively easy to do), then the
internal slot offsets will vary.

You don't need to bend over backwards to make things easy for the
VMs, but you don't want to be mean to them either. :-)

Not to mention that the observable property iteration order will vary.

On Mon, Sep 11, 2017 at 7:09 AM, Andrea Giammarchi
<andrea.giammar...@gmail.com>
wrote:


Hi Peter.

Unless you have a faster way to do lazy property assignment,
I think it's irrelevant if internally VMs are not too happy.
VMs are there to solve our problems, not vice-versa ;-)

Regards



On Mon, Sep 11, 2017 at 11:54 AM, peter miller
<fuchsia.gr...@virgin.net>
wrote:

Hi Andrea,

```
class CaseLazy {
  get bar() {
    var value = Math.random();
    Object.defineProperty(this, 'bar', {value});
    return value;
  }
}
```


Doesn't this count as redefining the shape of the object? Or
are the compilers fine with it?

Re: super return

2017-08-30 Thread Steve Fink

On 08/29/2017 08:56 AM, Allen Wirfs-Brock wrote:


On Aug 28, 2017, at 12:29 PM, Sebastian Malton wrote:


The outcome of this basically means “return from current context up
one level and then return from there”.


This would be a terrible violation of functional encapsulation. How
do you know that the (e.g.) forOf function isn’t internally using an
encapsulated helper function that is making the actual call to the
callback? You simply have no way to predict what returning from the
current context “up one” means.



I agree. I think this would be much better as

```js
function someThing(doWith) {
return doWith.map(elem => {
typeCheckLabel:
return elem * 2;
});

come from typeCheckLabel if (typeof elem !== "number");
return "Not all are numbers" ;
}
```

:-)

(I agree with the encapsulation argument.)




Re: Toplevel 'let' binding can be left permanently uninitialized after an error

2017-11-28 Thread Steve Fink

The spidermonkey REPL shell has a special cut-out for this:

js> throw 0; let x;
uncaught exception: 0
(Unable to print stack trace)
Warning: According to the standard, after the above exception,
Warning: the global bindings should be permanently uninitialized.
Warning: We have non-standard-ly initialized them to `undefined` for you.
Warning: This nicety only happens in the JS shell.

It looks like the Firefox console does something similar, just silently. 
The Chrome console and Node REPLs wedge you permanently, from my brief 
testing. I don't have anything else within easy reach to test on.


Separately, I ran into it with a JS debugger REPL that also runs under 
the spidermonkey shell -- I have a 'run' command that reruns the 
toplevel script, which fails if you have any toplevel let/const. And the 
above cutout doesn't help; there is no error.


The bindings are created and exist, they're just set to undefined. So if 
you repeat the above line, you'll get


typein:3:1 SyntaxError: redeclaration of let x
Stack:
  @typein:3:1

These days, if I have a script that I might want to debug with my hacky 
debugger REPL, I'm careful to use only 'var' at the toplevel.


All REPLs and REPL-like things run into this. Perhaps it would be useful 
to agree on a common behavior? Or at least share coping strategies.
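For anyone following along, a minimal block-scoped demonstration of the temporal dead zone; the global/REPL case is the same, except that there the initializer can never run again after the throw:

```js
// Accessing a `let` binding before its initializer runs throws a
// ReferenceError: the binding exists for the whole block, but stays
// uninitialized until the `let` statement actually executes.
let threw = false;
{
  try {
    x; // TDZ access: the block's `x` is hoisted but uninitialized
  } catch (e) {
    threw = e instanceof ReferenceError;
  }
  let x = 1;
}
// threw is now true
```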


On 11/28/2017 11:59 AM, Isiah Meadows wrote:


And this is why I use `var` instead of `let` in REPLs. They're doing 
what they're supposed to do; it's just unintuitive.


As a secondary proposal, I feel `let`/`const` in scripts should be 
allowed to shadow existing globals at the top level *provided* they 
are not declared in the same script. It'd solve the globals issue as 
well as not require the parser to make calls to the runtime 
environment. Of course, this means engines can't assume global `const` 
is immutable, but they already do similar global checks, anyways (like 
if the variable was not defined in that script).



On Tue, Nov 28, 2017, 14:31 Joseph wrote:


Re "x is irreparably hosed in your REPL"; you can still use it in
subscope, eg <{let x=1;console.log(1)}>.

On 29 November 2017 at 01:30, T.J. Crowder wrote:

On Tue, Nov 28, 2017 at 5:05 PM, Joseph wrote:
> You can still do `{x}`.

Can you expand on that? It doesn't seem to me you can. I mean,
if even `x = 42;` won't work (https://jsfiddle.net/tw3ohac6/),
I fail to see how anything else using `x` would work,
including `{x}` (https://jsfiddle.net/tw3ohac6/1/,
https://jsfiddle.net/tw3ohac6/2/). `x` is permanently in the
TDZ as far as I can tell.

-- T.J. Crowder




Re: Observable GC

2017-10-26 Thread Steve Fink

On 10/20/17 10:52 AM, Filip Pizlo wrote:



On Oct 20, 2017, at 10:29 AM, Mark Miller wrote:


There is a glaring inconsistency in the criteria we use to evaluate 
these issues. While we are understandably reluctant to admit more 
non-determinism into the platform via weakrefs, we have admitted an 
astonishingly greater degree of non-determinism into the platform via 
"Shared Array Buffers" (SAB), i.e., shared memory multithreading with 
data races.


The scenario we legitimately fear for weakrefs: A developer writes 
code that is not correct according to the weakref specification but 
happens to work on present implementations. Perhaps it relies on 
something being collected that is not guaranteed to be collected.


Being collected when it shouldn’t have been?  Like a dangling 
reference.  The game theory of security exploits forces 
implementations to keep things alive longer, not shorter.


Perhaps it relies on something not being collected that is not 
guaranteed not to be collected. A later correct change to an 
implementation, or another correct implementation, causes that code 
to break. The game theory punishes the correct implementation rather 
than the incorrect code.


Java had weak refs and multiple different implementations.  My claim, 
as someone who implemented lots of weird GC algorithms in Java, is 
that I don’t know of a case where different weak ref semantics breaks 
something.  The only time that getting it wrong ever had an observably 
bad effect is when you break weak refs or finalizers so badly that 
they never get cleared or called, and then some resource has an 
unbounded leak.  This usually results in not being able to run any 
benchmarks that have weak references or finalizers, so you fix those 
bugs pretty quickly.


Here are the motivations:
- Competitive perf motivates GC authors to try to free things as soon 
as possible.  Smaller heaps mean more speed.  Some benchmarks won’t 
run to completion if you aren’t aggressive enough.


I don't follow this. My GC optimization work usually pushes in the 
opposite direction -- scanning less, not more (but hopefully not 
*collecting* much less). We [spidermonkey] partition the heap in all 
kinds of ways so we don't have to collect the whole thing all the time. 
It's partitioned into processes, the processes have thread-local heaps, 
and the thread-local heaps are partitioned into 
independently-collectable zones specific to various purposes (in the web 
browser, they're for tabs, iframes, and some internal things.) It 
doesn't seem unlikely to have a weakref in a lightly-used zone pointing 
into a more active zone. So yes, we'd aggressively collect the pointee 
zone to keep the heap small, but scanning the pointer zones is a waste 
of time. And we're always looking for more ways to subdivide the heap, 
given that the overhead of GC is mostly determined by the amount of live 
stuff you have to scan.


Generational GC similarly partitions the heap, for the same reason. If 
nothing is surviving minor GCs, you won't bother doing any of the major 
GCs that would collect the weakref pointees. I have considered (and I 
believe other engines have implemented) having more generations, by 
splitting off very long-lived (or alternatively, observed to be 
read-only) portions of the tenured heap and not scanning those during 
most major GCs. (I haven't looked enough to decide whether the extra 
cost and complexity of the write barriers is worth it for us at this point.)


That said, we *could* treat a weakref as a cue to collect the source and 
destination zones together. Which would mean weakrefs would be something 
of a "go slow" bit, but it might help contain the damage.


- The need to avoid dangling references forces us to keep alive at 
least what we need to, and sometimes a bit more.


I guess a program could rely on the weak references actually being 
strong in some implementation.  I haven’t heard of Java programs ever 
doing that.  It’s unlikely to happen because the major implementations 
will try to clear weak refs as aggressively as they can to compete on 
benchmarks.


GC-related competition on benchmarks gets really silly without anyone 
even intentionally gaming things. I remember making a minor improvement 
and seeing a benchmark score absolutely plummet. I tracked it down to 
the benchmark having a "warmup" phase to each subtest (which is 
important for eliminating variance that prevents detecting small 
changes, so it's not a wholly bad thing). The minor change shifted 
several GCs from the unmeasured warmup phase into the measured phase. At 
which point I realized that much of our parameter tuning, in particular, 
had been having the effect of hiding GCs under the covers, not 
necessarily speeding anything up. If a benchmark score started depending 
on the promptness of weakref cleanup, then you're right, we'll probably 
end up messing up our heap partitioning to satisfy whatever 

Re: Small Proposal "!in"

2018-07-23 Thread Steve Fink
This reads a little oddly, but another syntax option would be `prop 
in.own obj` (equivalent to `obj.hasOwnProperty(prop)`) and then `prop 
!in.own obj`.


Or perhaps `in.own` should be Object.prototype.hasOwnProperty.call(obj, 
prop)?


Though this makes me think it would be nice to have something like `name 
in.map mymap` (equivalent to `mymap.has(name)`)
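Sketched as plain functions (the operator spellings above are invented, so these helper names are too):

```js
// What `prop in.own obj` / `prop !in.own obj` would desugar to:
const inOwn = (prop, obj) => Object.prototype.hasOwnProperty.call(obj, prop);
const notInOwn = (prop, obj) => !inOwn(prop, obj);

const base = { inherited: 1 };
const obj = Object.create(base);
obj.own = 2;

// "own" in obj and inOwn("own", obj) are both true;
// "inherited" in obj is true, but inOwn("inherited", obj) is false.
```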



On 07/20/2018 10:09 AM, Augusto Moura wrote:
The only use that came to mind was detecting a property descriptor in
a prototype chain. Sure, it's not a day-to-day use case, but it's useful
when writing libraries that involve descriptor modifications
(decorators, for example, will largely involve it). Recently I had to
get the descriptor of properties in a potentially deep inheritance
chain (current Object helpers only return own descriptors), and used
the `in` operator to guard the prototype recursive search.


``` js
const searchRecursivelyPropDescriptor = (obj, prop) =>
  !obj
    ? undefined
    : Object.getOwnPropertyDescriptor(obj, prop) ||
      searchRecursivelyPropDescriptor(Object.getPrototypeOf(obj), prop);


const getPropertyDescriptor = (obj, prop) =>
  prop in obj ? searchRecursivelyPropDescriptor(obj, prop) : undefined;
```

Anyways, we can't simply ignore the operator; if we are getting a
`!instanceof` and opening precedence to future operators (`!on` or
`!hasOwn`), I don't see any problems with a `!in`. Legacy bad design
should not affect language consistency of new features.


On Thu, Jul 19, 2018 at 12:07, Mike Samuel wrote:


On Thu, Jul 19, 2018 at 10:40 AM Augusto Moura
<augusto.borg...@gmail.com> wrote:

Of course the usage of `in` is most of the time not
recommended, but it has its place.


What places does it have?
I remain unconvinced that `in` has significant enough use cases to
warrant high-level ergonomics
were it being proposed today.

It exists, and it'll probably never be removed from the language,
but I don't think it should be taught
as a good part of the language, and linters should probably flag it.

--
Augusto Moura




Re: Fwd: Boolean equivalent to pre-increment and post-icnrement

2018-08-30 Thread Steve Fink

On 08/29/2018 12:13 PM, Bob Myers wrote:
In the stupid idea of the day department, for some reason I have felt 
the urge more than once in recent months for an operator which would 
invert the value of a boolean variable while evaluating to its 
pre-inversion value. For example:


```js
if (bool!!) console.log("used to be true");
```

The post-inversion case is less important since I can just write `if 
(bool = !bool)`.


There's always

    if (counter++ & 1) console.log("used to be true");
    if (++counter & 1) console.log("is now true");

or (counter++ % 2) if you prefer. And you get a free cycle counter in 
the bargain! (At the cost of flatlining to false at 2**53.)
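Mentally executing that idiom (variable names invented):

```js
// A counter whose low bit serves as the boolean: `counter++ & 1` reads
// the old parity then flips it; `++counter & 1` flips first, then reads.
let counter = 1; // odd, i.e. "true"
const wasTrue = Boolean(counter++ & 1); // reads old parity (true)
const isTrueNow = Boolean(counter & 1); // counter is now 2, so false
```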




Re: Small Proposal "!in"

2018-07-09 Thread Steve Fink

+1 from me for !in. It's a surprisingly common nuisance.

And I don't care for the !obj.x workaround at all -- even if you can 
survive the difference in semantics, from a code reading point of view 
this is saying something entirely different.


And it is very different semantically. 'x' in obj does [[HasProperty]];
obj.x does [[Get]]. With


  obj = { get x() { print("getter"); return 3; } };

then |"x" in obj| does not print "getter" while |obj.x| does.
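The same example in runnable form, counting getter invocations instead of printing (`print` being a SpiderMonkey-shell builtin):

```js
// `in` answers [[HasProperty]] without running the getter;
// a property read performs [[Get]] and does run it.
let calls = 0;
const obj2 = { get x() { calls++; return 3; } };

const has = "x" in obj2; // getter not invoked; calls is still 0
const val = obj2.x;      // getter invoked once; calls is now 1
```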

On 06/29/2018 12:26 AM, Cyril Auburtin wrote:


```js
if (!obj.x && !obj.y) {
     doit()
}
```
The cases where they are equal to 0, '', null, or undefined shouldn't
matter imo; if, for example, those x and y are numbers, they would be
defined, defaulted to 0, and you would test for `!== 0` instead if needed


On Thu, Jun 28, 2018 at 18:31, Guylian Cox wrote:


I agree, it's very annoying to have to write it !(x in y). I've
been wanting this operator for a very, very long time.

If there is interest for !in, I think !instanceof deserves to be
included too.

On Thu, Jun 28, 2018 at 18:19, T.J. Crowder
<tj.crow...@farsightsoftware.com> wrote:

On Thu, Jun 28, 2018 at 5:14 PM, Tobias Buschor
<tobias.busc...@shwups.ch> wrote:
> I dont like to write:
> if ( !('x' in obj) &&  !('y' in obj) ) {
>      doit()
> }
>
> I was even tempted to write it that way:
> if ('x' in obj  ||  'y' in obj) { } else {
>      doit()
> }

There's

```js
if (!('x' in obj  ||  'y' in obj)) {
     doit()
}
```

That said, I've wanted !in many a time, in a minor sort of way...

-- T.J. Crowder


Re: Use hashes as keys instead of toString() for values like object, function to index objects

2019-09-09 Thread Steve Fink

On 9/8/19 1:24 PM, Michael Luder-Rosefield wrote:
I'd suggest that the best way of doing this, without breaking existing 
code, is to put some sugar around Maps so that they can be used in a 
more Object-y way. For starters, Map literal declarations and 
assignment could benefit from this.


+1

Not to mention, there is no guarantee that a persistent ID exists. 
Engines have to keep an object usable as a Map key even if its address 
changes (eg by a generational GC's minor/nursery collection), but there 
are ways to do that without associating a permanent numeric ID with the 
objects. Spidermonkey, in particular, used to rekey its internal 
hashtables when objects moved in memory (essentially removing the old 
address and re-inserting with the new, though in practice it was 
trickier than that because we needed to prevent it from resizing the 
table awkward time). It no longer does, and indeed now any object you 
use as a Map key will be lazily given a unique numeric ID, but the point 
is that implementing this in arbitrary engines is not necessarily as 
straightforward or free as it might seem. (Though the security argument 
is an even stronger counterargument.)





Re: Array.prototype.toggle

2020-02-10 Thread Steve Fink
If you're looking for data points: I have never wanted exactly this, and 
would find it a pretty bizarre thing to find in the standard library. 
The most similar thing I've wanted would be to toggle something's 
presence in a Set. Far more often than that, I've wanted something like 
upsert or setdefault. Far more often than *that*, I've wanted 
Map.prototype.get with a default value, though probably ?? covers that 
scenario well enough now.
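Sketches of those near-misses, with invented helper names:

```js
// Toggle membership in a Set: delete() returns false when the value
// was absent, in which case we add it instead.
const toggleInSet = (set, v) => {
  if (!set.delete(v)) set.add(v);
  return set;
};

// Map.get with a default, without storing the default:
const getOr = (map, key, fallback) =>
  map.has(key) ? map.get(key) : fallback;
```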


One reason why toggle's inclusion would seem weird to me is that it's 
not clear to me whether it should remove all copies, or just the first 
(so you'd need to toggle N times to clear out an array with N copies). 
Nor is it obvious why an added-by-toggle element should be pushed onto 
the end as opposed to somewhere else in the list (eg if I had a sorted 
array, I'd probably expect it to be in the middle.)  The main reason, 
though, is that it feels rather niche.


On 2/8/20 4:45 AM, manuelbarzi wrote:
no intention in this proposal to discuss the `how`, but just the 
`what`, as i assume everybody here knows how to implement it in a 
polyfill, single function or any other approach. the proposal just 
goes on the idea that "hey, we have already semantic things like 
`some`, `every`, etc... in array, wouldn't it be useful to have the 
`toggle` too? which in my case i found using and reusing in various 
projects already. how about you, guys?" then if there is enough 
quorum, just thinking about integrating it or not. that's all. thank you.


On Fri, Feb 7, 2020 at 10:36 PM Scott Rudiger wrote:


I believe this wouldn't result in the OP's desired results since
the filtered array is no longer the same length as the original
array:

```js
var toggle = (arr, el) => Object.assign(arr, arr.filter(n => n !==
el));
toggle([1, 2, 3, 2, 1], 1); // [2, 3, 2, 2, 1]
```

Here's a helper function that would work (and also push the
element if it's not included in the original array):

```js
var toggle = (arr, el) => {
var len = arr.length;
for (var i = 0; i < arr.length; i++)
if (arr[i] === el)
arr.splice(i--, 1);
if (arr.length === len)
arr.push(el);
return arr;
};
var a = toggle([1, 2, 3, 2, 1], 1); // mutates the original array
removing 1 => [2, 3, 2]
toggle(a, 1); // mutates the original array adding 1 => [2, 3, 2, 1]
```


On Fri, Feb 7, 2020 at 11:26 AM Herby Vojčík <he...@mailbox.sk> wrote:

On 7. 2. 2020 13:11, Scott Rudiger wrote:
> `Array.prototype.filter` seems more versatile (although it
doesn't
> mutate the original array) since it removes elements based
on a function:
>
> ```js
> [1, 2, 3, 2, 1].filter(n => n !== 1); // [2, 3, 2]
> ```

But what if one wants to mutate in-place. Would this work?

   Object.assign(arr, arr.filter(n => n !== 1))

If not, maybe there can be

   aCollection.replaceWith(anIterable)

Herby




Re: A way to construct Functions with custom scopes?

2020-06-10 Thread Steve Fink

On 6/10/20 11:06 AM, #!/JoePea wrote:

For example, what if we could write something like

```js
function foo() {
   var x = 10
}

const [scope, returnValue] = scope from foo()

// can't do anything with `scope` directly, it's like an empty object
// (an exotic object?).
console.log(getOwnAndInheritedKeys(scope)) // empty array

new Function('console.log(x)', { scope }) // logs 10
```


-1 from me. I think it would be disastrous for performance. It prevents 
any escape analysis and resulting optimizations. It prevents constant 
propagation. It might even interfere with inlining. It would add more 
points where JITs might need to invalidate their compiled code. In 
general, it eliminates much of the remaining freedom that JS engines 
have to optimize in the face of the wildly dynamic nature of JavaScript 
code.


I didn't understand your use case with Element attributes.

Also, how do you specify which scope you want?

```js
function foo() {
    // Scope 1, with y
    let x = 1;
    // Scope 2, with x and y
    var y = 2;
    // Scope 3
    if (...) {
        let z = 3; // Scope 4, with x, y, and z
    }
}
```


etc. And with default parameters and things, there are a lot of scopes 
to choose from. I won't even explore the messiness of temporal dead zones.


To make it at all practical, I think you'd need to somehow statically 
declare which functions to permit this for. But at that point, you're 
better off creating and storing a closure at the exact points where you 
want to capture scopes (which fixes the "which scope" problem as well.) 
And it sounds like that wouldn't work for what you want.
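For comparison, a closure-based sketch of the original example (names invented): you decide at authoring time exactly which bindings escape, which is what leaves the engine's analyses intact.

```js
// Instead of extracting the whole scope after the fact, export exactly
// the bindings that should escape, as closures:
function foo() {
  var x = 10;
  return { getX: () => x, setX: (v) => { x = v; } };
}

const scope = foo();
// scope.getX() is 10; setX mutates the captured binding.
```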


