Re: Proposal: rest operator in middle of array

2019-06-10 Thread Ethan Resnick
maintainability and debuggability

> 1. Maintainability
> If you want to extend the function with additional args, then you'll have
> to retroactively modify all existing calls to avoid an off-by-one arg
> count:
>
> ```js
> // if extending the function with additional args
> function pad(targetLength, ...opts, data) ->
> function pad(targetLength, ...opts, data, meta)
>
> // then you must retroactively append null/undefined to all existing calls
> pad(1, opt1, opt2, "data") ->
> pad(1, opt1, opt2, "data", null)
> ```
>
> 2. Debuggability
> When debugging, it takes longer for a human to figure out which arg is
> which:
>
> ```js
> // function pad(targetLength, ...opts, data)
> pad(aa, bb, cc, dd);
> pad(aa, bb, cc, dd, ee);
>
> // vs
>
> // function pad(targetLength, opts, data)
> pad(aa, [bb, cc], dd);
> pad(aa, [bb, cc, dd], ee);
> ```


Proposal: rest operator in middle of array

2019-06-06 Thread Ethan Resnick
Long-time mostly-lurker on here. I deeply appreciate all the hard work that
folks here put into JS.

I've run into a couple cases now where it'd be convenient to use a rest
operator at the beginning or middle of an array destructuring, as in:

```
const [...xs, y] = someArray;
```

Or, similarly, in function signatures:

```
function(...xs, y) { }
```

The semantics would be simple: exhaust the iterable to create the array of
`xs`, like a standard rest operator would do, but then slice off the last
item and put it in `y`.
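
A rough desugaring of the destructuring form, as a sketch of the proposed
semantics (the `_all` temporary is hypothetical):

```
// const [...xs, y] = someArray;  -- roughly equivalent to:
const _all = [...someArray];      // exhaust the iterable first
const y = _all[_all.length - 1];  // the trailing binding takes the last item
const xs = _all.slice(0, -1);     // the rest binding keeps everything before it
```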

For example, I was working with some variable argument functions that, in
FP style, always take their data last. So I had a function like this:

```
function match(...matchersAndData) {
  const matchers = matchersAndData.slice(0, -1);
  const data = matchersAndData[matchersAndData.length - 1];
  // do matching against data
}
```

Under this proposal, the above could be rewritten:

```
function match(...matchers, data) { /* ... */ }
```

Another example: a function `pad`, which takes a target length and a string
to pad, with an optional padding character argument in between:

```
function pad(targetLength, ...paddingCharAndOrData) {
  const [paddingChar = " "] = paddingCharAndOrData.slice(0, -1);
  const data = paddingCharAndOrData[paddingCharAndOrData.length - 1];

  // pad data with paddingChar to targetLength;
}
```

With this proposal, that could be rewritten:

```
function pad(targetLength, ...opts, data) {
  const [paddingChar = " "] = opts;
  // pad data with paddingChar to targetLength;
}
```
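
Call sites would then bind like this (a sketch of the proposed behavior,
not existing syntax):

```
pad(5, "abc");       // opts = [],    data = "abc" -- paddingChar defaults to " "
pad(5, "*", "abc");  // opts = ["*"], data = "abc"
```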

I'm curious if this has been considered before, and what people think of
the idea.

Obviously, if `...a` appeared at the beginning or middle of a list, there
would have to be a fixed number of items following it, so a subsequent rest
operator in the same list would not be allowed.
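
For illustration, under that restriction (hypothetical forms):

```
const [first, ...middle, last] = arr;  // OK: fixed bindings after the rest
const [...xs, ...ys] = arr;            // still a SyntaxError: ambiguous split
```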

Thanks


Re: NumberFormat maxSignificantDigits Limit

2019-01-23 Thread Ethan Resnick
>
> Well, if you remove the trailing 0s you get an entirely different number.
> That's pretty significant.
> Note that this is the default ES serialization as well.
>

This makes no sense to me. Yes, removing trailing 0s, and therefore
changing the magnitude of the number, changes its meaning. But significant
digits are about capturing precision, not magnitude.

Let's make this concrete:

The number 1344499984510435328 happens to have an exact floating
point representation. However, because that number is larger than the max
safe integer, many other integers are best approximated by the same
floating point value. 13444999800 is one such number.

So, if you do:

1344499984510435328..toLocaleString('en', { maximumSignificantDigits:
21, useGrouping: false })

and

13444999800..toLocaleString('en', { maximumSignificantDigits:
21, useGrouping: false })

you actually get the same output in each case, which makes sense, because
both numbers are represented by the same floating point behind the scenes.
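
A self-contained illustration of the same collapse, using a smaller pair of
integers (not the numbers from the original message):

```
// 2**53 + 1 is not representable as a double; the literal rounds to 2**53,
// so both names hold the same value and serialize identically.
const a = 9007199254740993; // 2**53 + 1
const b = 9007199254740992; // 2**53
console.log(a === b); // true
console.log(
  a.toLocaleString("en", { maximumSignificantDigits: 21, useGrouping: false })
); // same output as for b
```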

Now, it seems like the serialization logic in `toLocaleString` (or
`toPrecision`) has two options.

First, it could assume that the number it's serializing started life as a
decimal and got converted to the nearest floating point, in which case the
serialization code doesn't know the original intended number. In this case,
its best bet is probably to output 0s in those places where the original
decimal digits are unknown (i.e., for all digits beyond the precision that
was stored). This is actually what toLocaleString does; i.e., all digits
after the 17th are 0, because 64-bit floating points can only store 17
decimal digits of precision. This is where my original question came in,
though: if a float can only encode 17 digits of precision, why would the
maximumSignificantDigits be capped at 21? It seems like the values 18–21
are all just equivalent to 17.

The other option is that the serialization code could assume that the
number stored in the float is exactly the number the user intended (rather
than a best approximation of some other decimal number). This is actually
what `toPrecision` does. I.e., if you call `toPrecision(21)` on either of
the numbers given above, you get 21 non-zero digits, matching the first 21
digits of the underlying float value: `"1.344499984510e+26"`. But,
again, the limit of 21 seems odd here too. Because, if you're going to
assume the float represents exactly the intended number, why not be willing
to output all 27 significant digits in the decimal above? Or more than 27
digits for the decimal representation of bigger floats?
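
The exact-digits behavior is easy to see with a value whose double
representation is well known; a small sketch (the digits follow from the
IEEE-754 double nearest 0.1):

```
// The double nearest 0.1 is exactly
// 0.1000000000000000055511151231257827021181583404541015625,
// and toPrecision reads those digits out past the shortest representation:
console.log((0.1).toPrecision(17)); // "0.10000000000000001"
console.log((0.1).toPrecision(21)); // "0.100000000000000005551"
```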

In other words, it seems like `maximumSignificantDigits` should either be
capped at 17 (the real precision of the underlying float) or at 309 (the
length of the decimal representation of the largest float). But neither of
those are 21, hence my original question...

On Mon, Jan 21, 2019 at 2:32 AM Anders Rundgren <
anders.rundgren@gmail.com> wrote:

> This limit seems a bit strange though:
>
> console.log(new Intl.NumberFormat('en', { maximumFractionDigits: 20
> }).format(-0.03));
>
> Result: -0.0333
>
> That's actually two digits less than produced by the default ES
> serialization.
> "maximumFractionDigits" is limited to 20.
>
> Anders

Re: NumberFormat maxSignificantDigits Limit

2019-01-20 Thread Ethan Resnick
>
> if you input this in a browser debugger it will indeed respond with the
> same 21 [sort of] significant digits
>
0

I'm pretty sure the 0s don't count as significant digits
<https://www.wikiwand.com/en/Significant_figures> (and, with floating point
numbers, it makes sense that they wouldn't).

> I feel this is probably best asked at https://github.com/tc39/ecma402,
> since it seems to imply a potential spec bug.


Although my question was framed in terms of NumberFormat, I don't actually
think this is Ecma 402-specific. Specifically, I believe the limit originated with,
or at least also applies to, the Number.prototype.toPrecision
<https://www.ecma-international.org/ecma-262/6.0/#sec-number.prototype.toprecision>
API from Ecma 262 (where it is equally unexplained).

> That's true for decimal values, but the limit of 21 would also include the
> fractional portion of the double value as well, so would need more than 17,
> I think?
>

My understanding of floating point encoding is that 17 digits will also
cover the fractional portion. The only case I can think of where 17 digits
might not be enough is if the number system is not base 10; e.g., a base 6
number system would presumably require more digits. But, I don't see any
such number systems as output options in the NumberFormat API, and such
localization concerns don't really explain the limit in N.p.toPrecision
linked above, which is definitely dealing with base 10.
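
A quick way to convince yourself of that claim (a sketch; it samples values
rather than proving anything):

```
// 17 significant decimal digits uniquely identify every double,
// fractional part and all, so parsing them back recovers the exact value.
for (let i = 0; i < 100000; i++) {
  const x = Math.exp(200 * Math.random() - 100); // spans roughly 1e-44 .. 1e43
  if (Number(x.toPrecision(17)) !== x) {
    throw new Error(`round-trip failed for ${x}`);
  }
}
console.log("all sampled values round-tripped at 17 significant digits");
```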

On Sun, Jan 20, 2019 at 4:48 PM Logan Smyth  wrote:

> It does seem unclear why the limit is 21. Is that maybe the most you need
> to uniquely stringify any double value?
>
> > can only encode up to 17 significant decimal digits
>
> That's true for decimal values, but the limit of 21 would also include the
> fractional portion of the double value as well, so would need more than 17,
> I think?
>
> On Sun, Jan 20, 2019 at 1:18 PM Isiah Meadows 
> wrote:
>
>> I feel this is probably best asked at https://github.com/tc39/ecma402,
>> since it seems to imply a potential spec bug.
>>
>> -
>>
>> Isiah Meadows
>> cont...@isiahmeadows.com
>> www.isiahmeadows.com
>>
>>
>> On Sun, Jan 20, 2019 at 2:31 PM Anders Rundgren <
>> anders.rundgren@gmail.com> wrote:
>>
>>> On 2019-01-20 20:18, Ethan Resnick wrote:
>>> > Hi,
>>> >
>>> > Apologies if es-discuss is the wrong venue for this; I've tried first
>>> poring through the specs and asking online to no avail.
>>> >
>>> > My question is: why is the limit for the `maximumSignificantDigits`
>>> option in the `NumberFormat` API set at 21? This seems rather arbitrary —
>>> and especially odd to me given that, iiuc, all Numbers in JS, as 64 bit
>>> floats, can only encode up to 17 significant decimal digits. Is this some
>>> sort of weird historical artifact of something? Should the rationale be
>>> documented anywhere?
>>>
>>> I don't know for sure but if you input this in a browser debugger it
>>> will indeed respond with the same 21 [sort of] significant digits
>>> 0
>>>
>>> rgds,
>>> Anders
>>> >
>>> > Thanks!
>>> >
>>> > Ethan


NumberFormat maxSignificantDigits Limit

2019-01-20 Thread Ethan Resnick
Hi,

Apologies if es-discuss is the wrong venue for this; I've tried first
poring through the specs and asking online to no avail.

My question is: why is the limit for the `maximumSignificantDigits` option
in the `NumberFormat` API set at 21? This seems rather arbitrary — and
especially odd to me given that, iiuc, all Numbers in JS, as 64 bit floats,
can only encode up to 17 significant decimal digits. Is this some sort of
weird historical artifact of something? Should the rationale be documented
anywhere?

Thanks!

Ethan


Re: Javascript Code Churn Rate?

2015-11-10 Thread Ethan Resnick
> To the extent that the web is used for applications, this is probably OK,
> but for documents this is really a bad approach because we (well at least
> some of us) want those to continue to be readable as the web evolves.

Sure, I can appreciate that. And the academic/researcher in me definitely
likes the idea of never removing a language feature.

I guess I was just asking in case anyone felt there could be some (very,
very low) level of breakage that's tolerable. After all, links/images
already go bad pretty regularly and removing bits of JS wouldn't make the
web the only medium for which old equipment (here, an old browser) is
required to view old content. On that front, print is the remarkable
exception; most everything else (audio recordings, video recordings,
conventional software) is pretty tightly bound to its original technology.
Of course, "other mediums suck at longevity too" isn't much of an argument,
but if there's a tradeoff here, maybe it's worth keeping in mind.

Regardless, it seems like there are many less radical approaches that
deprioritize old features without making them strictly unavailable, so I'm
still curious to know about JS churn rates, if that data exists, to get a
sense of the timescale for those approaches.
On Nov 10, 2015 6:58 AM, "Boris Zbarsky" <bzbar...@mit.edu> wrote:

> On 11/10/15 7:41 AM, Ethan Resnick wrote:
>
>> And how long until they could remove support for the rest of the
>> language altogether?
>>
>
> This makes the fundamental assumption that it's OK to break old things
> just because they're old.  To the extent that the web is used for
> applications, this is probably OK, but for documents this is really a bad
> approach because we (well at least some of us) want those to continue to be
> readable as the web evolves.  Otherwise we end up with a "dark ages" later
> on where things that appeared in print continue to be readable while later
> digital stuff, even if still available, is not.
>
> And in this case "documents" includes things like interactive New York
> Times stuff and whatnot...
>
> -Boris
>


Javascript Code Churn Rate?

2015-11-10 Thread Ethan Resnick
I've been trying to think through possible ways to address JS's growing
complexity (something I know I'm not alone in worrying about) that are
consistent with "don't break the web". I understand going in that the
solution will largely lie in controlling future growth rather than removing
existing features, which will always be hard and is currently near
impossible. Still, I feel like deprecation/subsetting approaches might not
have been adequately explored.

Before I go on proposing things without knowing what I'm talking about,
though, I was hoping y'all could point me to (or help me by collecting?)
some relevant data. In particular, I'm wondering: what's the distribution
of the age of js files on the web, accounting for the frequency with which
each page is visited? Or, more concretely: suppose you could magically get
all new/newly-modified JS to only use a particular subset of the language;
how long would it take for that subset to dominate the web, such that
engines could heavily optimize for it? And how long until they could remove
support for the rest of the language altogether?

Cheers,
Ethan

P.S. Long time es-discuss lurker and I really admire all the great work you
folks do here.


Re: Javascript Code Churn Rate?

2015-11-10 Thread Ethan Resnick
Thanks Nelo! Yes, I've seen strong mode and I think it's an interesting
idea (though it trades away a bit more usability for performance than I
personally would). Still, I'm curious to see the JS churn data if anyone
has it, as that affects all approaches in the strong-mode vein.

On Tue, Nov 10, 2015 at 4:57 AM, Nelo Mitranim  wrote:

> In case you happen to be unaware, the V8 team recently came out with ideas
> about a “stricter mode” that cuts down on questionable features that hurt
> optimisation: https://developers.google.com/v8/experiments?hl=en,
> relevant HN discussion: https://news.ycombinator.com/item?id=9178765


Re: Re: Exponentiation operator precedence

2015-08-27 Thread Ethan Resnick
Long-time esdiscuss lurker; hopefully this perspective is helpful.

I think the problem here is that traditional mathematic notation uses
visual cues to imply precedence that JS can't take advantage of. When -3 **
2 is written out on paper, the 2 is very clearly grouped visually with the
3. In fact, the superscript almost makes the 2 feel like an appendage of
the 3. That makes it more natural to read it as two items: the negative
sign, and (3 ** 2).

By contrast, when (-3 ** 2) is written out in code, the negative sign is
way closer visually to the 3 than the 2 is, so I find myself
instinctively pulling out a -3 first and reading the expression as
(-3)**2.
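
Spelled out with explicit parentheses (a sketch, not from the original
message), the two competing readings are:

```
(-3) ** 2; //  9 -- the reading the eye suggests: the minus binds to the 3
-(3 ** 2); // -9 -- the reading where exponentiation binds tighter
```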

Treating -3 ** 2 as -(3 ** 2) seems technologically possible and
mathematically sensible, but also like it's going against the grain of the
human visual system. I think that's at least one reason to value the
"binary precedence should be lower than unary" principle.

The `Number.prototype.pow` approach might be an improvement, as it has the
effect of mandating parentheses in some cases that might otherwise be
confusing (e.g. x ** y ** z), and it offers most of the conciseness of
**. But -x.pow(2) still feels unpredictable to me as an everyday programmer
switching between languages. (Also, pow() requires an extra set of
parentheses if I want to operate on a literal.)
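
To make the method's precedence quirk concrete, here is a throwaway sketch;
`Number.prototype.pow` is hypothetical and not part of the language:

```
// Hypothetical pow, defined only for this demonstration:
Object.defineProperty(Number.prototype, "pow", {
  value(exp) {
    return this ** exp;
  },
});

const x = 3;
console.log(-x.pow(2));  // -9: member access binds tighter than unary minus,
                         //     so this parses as -(x.pow(2))
console.log((3).pow(2)); //  9: a literal receiver needs parentheses,
                         //     since bare `3.pow(2)` is a SyntaxError
```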

Maybe it's ok if the operator surprises some people in some cases, and the
guidance will just become to use parentheses if you're unsure. That
occasional uncertainty for the vast majority of JS programmers that aren't
doing much exponentiation might be worth it if ** makes a minority of JS
programmers much more productive. I don't know.