Re: One-shot Delimited Continuations with Effect Handlers

2016-03-15 Thread Sebastian Markbåge
It's possible that this pattern matching feature can be decoupled into a
separate proposal, though. You'd need a way to call an iterator but only
receive specific matches and re-yield the rest.
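
Roughly something like this with today's generators, as a sketch
(handleMatching, matches, and handle are all made-up names):

// Forward an iterator, consuming only the yields we recognize and
// re-yielding everything else to whoever is driving us.
function* handleMatching(iterator, matches, handle) {
  let input;
  while (true) {
    const { value, done } = iterator.next(input);
    if (done) return value;
    if (matches(value)) {
      input = handle(value);      // recognized: handle it and resume
    } else {
      input = yield value;        // unrecognized: re-yield to the outer caller
    }
  }
}

An outer handler would wrap the result the same way, which is where the
re-yield chain starts to add up.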

On Tue, Mar 15, 2016 at 6:05 PM, Sebastian Markbåge 
wrote:

>
> Perhaps if there was a way to wrap any arbitrary expression with a
>> generator that captured any yielded values and allowed resumption by
>> calling .next(), then you could accomplish this without inventing new
>> try-catch syntax?
>>
>
> Yea, except you need to be able to nest them inside each other as well.
>
> If a generator captured any yielded values, then it would be yielded at
> the innermost caller.
>
> You could handle this the way JavaScript does exception handling by
> "rethrowing" errors it didn't handle. I.e. if you see a yield that you
> don't recognize you would re-yield it.
>
> In my use case I can have many nested handlers and I want to handle a
> particular type at the top of the stack. If you have to conditionally
> "reyield" all the way up there, you miss out on potential important
> optimizations.
>
> This is related to "enums" and "pattern matching" too. The current pattern
> matching proposal doesn't have any such optimizations either.
>
>
>
>


Re: One-shot Delimited Continuations with Effect Handlers

2016-03-15 Thread Sebastian Markbåge
> Perhaps if there was a way to wrap any arbitrary expression with a
> generator that captured any yielded values and allowed resumption by
> calling .next(), then you could accomplish this without inventing new
> try-catch syntax?
>

Yea, except you need to be able to nest them inside each other as well.

If a generator captured any yielded values, then it would be yielded at the
innermost caller.

You could handle this the way JavaScript does exception handling by
"rethrowing" errors it didn't handle. I.e. if you see a yield that you
don't recognize you would re-yield it.

In my use case I can have many nested handlers and I want to handle a
particular type at the top of the stack. If you have to conditionally
"reyield" all the way up there, you miss out on potential important
optimizations.
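
To make the nesting concrete, here's a hedged sketch with today's
generators (every name and effect shape below is invented): the inner
handler only understands "log" effects, so the "random" effect has to be
re-yielded through it before the top-level driver ever sees it, and every
extra layer adds another hop.

function* program() {
  yield { type: 'log', msg: 'start' };   // handled by the inner handler
  const n = yield { type: 'random' };    // has to bubble through every layer
  return n * 2;
}

function* logHandler(inner) {
  let input;
  while (true) {
    const { value, done } = inner.next(input);
    if (done) return value;
    if (value.type === 'log') {
      console.log(value.msg);
      input = undefined;                 // resume immediately
    } else {
      input = yield value;               // re-yield unknown effects upward
    }
  }
}

function runWithRandom(iter) {
  let input;
  while (true) {
    const { value, done } = iter.next(input);
    if (done) return value;
    if (value.type === 'random') {
      input = Math.random();             // the top-level handler answers it
    } else {
      throw new Error('unhandled effect: ' + value.type);
    }
  }
}

console.log(runWithRandom(logHandler(program())));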

This is related to "enums" and "pattern matching" too. The current pattern
matching proposal doesn't have any such optimizations either.


Re: One-shot Delimited Continuations with Effect Handlers

2016-03-15 Thread Ben Newman
Ok, I think I understand how your needs differ from the async/relaxed-await
idea. You need a way to

   - extract values from deeply nested yields, which is not allowed by
   await expressions,
   - resume execution whenever you choose, which is a decision await
   expressions make for you, i.e. they resume whenever the Promise is
   resolved/rejected, and
   - execute everything synchronously (if desired).

Perhaps if there was a way to wrap any arbitrary expression with a
generator that captured any yielded values and allowed resumption by
calling .next(), then you could accomplish this without inventing new
try-catch syntax?
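
Something in that direction is already expressible as a sketch with
today's generators (runSync, wrappedExpression, and the effect names are
all invented): the wrapped code becomes a generator, and the surrounding
code drives it with .next() on whatever schedule it likes, including
fully synchronously.

function* wrappedExpression() {
  // Anything the expression needs from outside is surfaced as a yield.
  const config = yield { effect: 'readConfig' };
  const items = yield { effect: 'fetch', url: config.url };
  return items.length;
}

function runSync(genFn, handlers) {
  const it = genFn();
  let step = it.next();
  while (!step.done) {
    // The caller decides when and with what to resume.
    step = it.next(handlers[step.value.effect](step.value));
  }
  return step.value;
}

const result = runSync(wrappedExpression, {
  readConfig: () => ({ url: '/items' }),
  fetch: () => ['a', 'b', 'c'],   // answered synchronously here, for illustration
});
console.log(result); // 3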

On Tue, Mar 15, 2016 at 8:27 PM Sebastian Markbåge 
wrote:

> async functions only address the async use case and they're all scheduled
> on a simple micro-task queue. We need fine grained control over scheduling.
> Perhaps Zones can help a bit with that, but that's just one of several
> concepts that need this.
>
> It doesn't solve other more general generator use cases. You could
> potentially expand it to generators as well.
>
> However, then you'd also need to solve the nested handlers case
> efficiently. That's my use case. What if you have a layout handler inside
> an iteration handler inside an async scheduler handler?
>
> The async functions could be implemented in terms of this though since
> they're just a more specific and locked down version.
>
>
>
> On Tue, Mar 15, 2016 at 5:16 PM, Ben Newman  wrote:
>
>> What if we simply allowed await expressions anywhere in the call stack of
>> an async function, rather than only in the bodies of async functions? That
>> would give us all the power of "yielding deep in a fiber" with a much more
>> familiar syntax, and an easy way to capture effects, since an async
>> function returns a Promise that allows for asynchronous error handling. No
>> new catch effect … syntax needed.
>>
>> I've talked about this before, and it's my understanding
>> that TC39 needs a lot of convincing that coroutines are a good idea. We
>> haven't really discussed the topic in the context of async functions, which
>> are soon to be an official part of the language, so perhaps the time for
>> discussing coroutines/continuations/etc. is drawing near!
>>
>> On Tue, Mar 15, 2016 at 7:44 PM Sebastian Markbåge 
>> wrote:
>>
>>> Has anyone previously proposed Algebraic Effects (e.g. Eff-like
>>> handlers of continuations) for ECMAScript? #lazyweb  I could only find
>>> variations that aren't quite this.
>>>
>>> I'm specifically looking at the OCaml implementation for inspiration:
>>>
>>> http://kcsrk.info/ocaml/multicore/2015/05/20/effects-multicore/
>>>
>>> http://kcsrk.info/slides/multicore_fb16.pdf
>>>
>>> I'm not proposing the multicore aspect of this but just the delimited
>>> continuations.
>>>
>>> Basically, the idea is that you can capture effects by wrapping a call
>>> in a "try". That spawns a fiber. Then any function can "perform" an effect
>>> as a new language feature. That works effectively like "throw" except it
>>> also captures a reified continuation, in a tuple. The continuation can be
>>> invoked to continue where you left off.
>>>
>>> Imaginary syntax:
>>>
>>> function otherFunction() {
>>>   console.log(1);
>>>   let a = perform { x: 1, y: 2 };
>>>   console.log(a);
>>>   return a;
>>> }
>>>
>>> do try {
>>>   let b = otherFunction();
>>>   b + 1;
>>> } catch effect -> [{ x, y }, continuation] {
>>>   console.log(2);
>>>   let c = continuation(x + y);
>>>   console.log(c);
>>> }
>>>
>>> Prints:
>>> 1
>>> 2
>>> 3
>>> 4
>>>
>>> (`perform` is a contextual keyword to "throw" and `catch effect` is a
>>> keyword for catching it.)
>>>
>>> We've experimented with changing React's implementation to use these
>>> internally to handle concurrency and being able to solve complex algorithms
>>> that require a lot of back and forth such as layout calculation. It seems
>>> to make these implementations much easier while remaining efficient.
>>>
>>> It also allows for seamless async I/O handling by yielding deep in a
>>> fiber.
>>>
>>> Effectively this is just solving the same thing as generators and
>>> async/await.
>>>
>>> However, the benefit is that you don't have a split between "async"
>>> functions, generator functions and synchronous functions. You still have an
>>> explicit entry point through the place where you catch effects.
>>>
>>> With generators and async functions, anytime you want to change any deep
>>> effects you have to unwind all potential callers. Any intermediate library
>>> has to be turned into async functions. The refactoring is painful and
>>> leaves you with a lot of syntax overhead.
>>>
>>> If you want to nest different effects such as layout, iterations and
>>> async functions that complexity explodes because now every intermediate
>>> function has to be able to handle all those concepts.
>>>
>>> The performance 

Re: One-shot Delimited Continuations with Effect Handlers

2016-03-15 Thread Sebastian Markbåge
Async functions only address the async use case, and they're all scheduled
on a simple micro-task queue. We need fine-grained control over scheduling.
Perhaps Zones can help a bit with that, but that's just one of several
concepts that need this.

It doesn't solve other more general generator use cases. You could
potentially expand it to generators as well.

However, then you'd also need to solve the nested handlers case
efficiently. That's my use case. What if you have a layout handler inside
an iteration handler inside an async scheduler handler?

Async functions could be implemented in terms of this, though, since
they're just a more specific and locked-down version.
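
As a hedged sketch of that last point (runAsync and the sample generator
below are invented, not part of any proposal): with plain generators,
"await" is just one particular effect, and the scheduling policy, here
Promise.prototype.then, is whatever the handler chooses.

function runAsync(genFn, ...args) {
  return new Promise((resolve, reject) => {
    const it = genFn(...args);
    function step(method, value) {
      let result;
      try {
        result = it[method](value);
      } catch (err) {
        return reject(err);
      }
      if (result.done) return resolve(result.value);
      // The "await" effect: this handler resumes on promise settlement,
      // but a different handler could schedule the resumption however it wants.
      Promise.resolve(result.value).then(
        v => step('next', v),
        e => step('throw', e)
      );
    }
    step('next', undefined);
  });
}

runAsync(function* () {
  const a = yield Promise.resolve(1);   // roughly `await Promise.resolve(1)`
  const b = yield Promise.resolve(2);
  return a + b;
}).then(sum => console.log(sum));       // 3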



On Tue, Mar 15, 2016 at 5:16 PM, Ben Newman  wrote:

> What if we simply allowed await expressions anywhere in the call stack of
> an async function, rather than only in the bodies of async functions? That
> would give us all the power of "yielding deep in a fiber" with a much more
> familiar syntax, and an easy way to capture effects, since an async
> function returns a Promise that allows for asynchronous error handling. No
> new catch effect … syntax needed.
>
> I've talked about this before, and it's my understanding
> that TC39 needs a lot of convincing that coroutines are a good idea. We
> haven't really discussed the topic in the context of async functions, which
> are soon to be an official part of the language, so perhaps the time for
> discussing coroutines/continuations/etc. is drawing near!
>
> On Tue, Mar 15, 2016 at 7:44 PM Sebastian Markbåge 
> wrote:
>
>> Has anyone previously proposed Algebraic Effects (e.g. Eff-like
>> handlers of continuations) for ECMAScript? #lazyweb  I could only find
>> variations that aren't quite this.
>>
>> I'm specifically looking at the OCaml implementation for inspiration:
>>
>> http://kcsrk.info/ocaml/multicore/2015/05/20/effects-multicore/
>>
>> http://kcsrk.info/slides/multicore_fb16.pdf
>>
>> I'm not proposing the multicore aspect of this but just the delimited
>> continuations.
>>
>> Basically, the idea is that you can capture effects by wrapping a call in
>> a "try". That spawns a fiber. Then any function can "perform" an effect as
>> a new language feature. That works effectively like "throw" except it also
>> captures a reified continuation, in a tuple. The continuation can be
>> invoked to continue where you left off.
>>
>> Imaginary syntax:
>>
>> function otherFunction() {
>>   console.log(1);
>>   let a = perform { x: 1, y: 2 };
>>   console.log(a);
>>   return a;
>> }
>>
>> do try {
>>   let b = otherFunction();
>>   b + 1;
>> } catch effect -> [{ x, y }, continuation] {
>>   console.log(2);
>>   let c = continuation(x + y);
>>   console.log(c);
>> }
>>
>> Prints:
>> 1
>> 2
>> 3
>> 4
>>
>> (`perform` is a contextual keyword to "throw" and `catch effect` is a
>> keyword for catching it.)
>>
>> We've experimented with changing React's implementation to use these
>> internally to handle concurrency and being able to solve complex algorithms
>> that require a lot of back and forth such as layout calculation. It seems
>> to make these implementations much easier while remaining efficient.
>>
>> It also allows for seamless async I/O handling by yielding deep in a
>> fiber.
>>
>> Effectively this is just solving the same thing as generators and
>> async/await.
>>
>> However, the benefit is that you don't have a split between "async"
>> functions, generator functions and synchronous functions. You still have an
>> explicit entry point through the place where you catch effects.
>>
>> With generators and async functions, anytime you want to change any deep
>> effects you have to unwind all potential callers. Any intermediate library
>> has to be turned into async functions. The refactoring is painful and
>> leaves you with a lot of syntax overhead.
>>
>> If you want to nest different effects such as layout, iterations and
>> async functions that complexity explodes because now every intermediate
>> function has to be able to handle all those concepts.
>>
>> The performance characteristics demonstrated by KC Sivaramakrishnan are
>> also much more promising than JS VMs have been able to do with async/await
>> and generators so far. It's plausible that VMs can optimize this in a
>> similar way, in time. I suspect that the leakiness of the microtask queue might
>> cause problems though.
>>
>> I converted the OCaml example scheduler to this ECMAScript compatible
>> syntax:
>>
>> // RoundRobinScheduler.js
>>
>> class Fork {
>>   constructor(fn) {
>> this.fn = fn;
>>   }
>> }
>> function fork(f) {
>>   perform new Fork(f)
>> }
>>
>> class Yield { }
>> function yieldToScheduler() {
>>   perform new Yield();
>> }
>>
>> export function run(main) {
>>   const run_q = [];
>>   function enqueue(k) {
>> run_q.push(k);
>>   }
>>   function dequeue() {
>> if (run_q.length) {
>>   run_q.shift()();
>> }
>>  

One-shot Delimited Continuations with Effect Handlers

2016-03-15 Thread Sebastian Markbåge
Has anyone previously proposed Algebraic Effects (e.g. Eff-like handlers
of continuations) for ECMAScript? #lazyweb I could only find variations
that aren't quite this.

I'm specifically looking at the OCaml implementation for inspiration:

http://kcsrk.info/ocaml/multicore/2015/05/20/effects-multicore/

http://kcsrk.info/slides/multicore_fb16.pdf

I'm not proposing the multicore aspect of this but just the delimited
continuations.

Basically, the idea is that you can capture effects by wrapping a call in a
"try". That spawns a fiber. Then any function can "perform" an effect as a
new language feature. That works effectively like "throw" except it also
captures a reified continuation in a tuple. The continuation can be
invoked to continue where you left off.

Imaginary syntax:

function otherFunction() {
  console.log(1);
  let a = perform { x: 1, y: 2 };
  console.log(a);
  return a;
}

do try {
  let b = otherFunction();
  b + 1;
} catch effect -> [{ x, y }, continuation] {
  console.log(2);
  let c = continuation(x + y);
  console.log(c);
}

Prints:
1
2
3
4

(`perform` is a contextual keyword analogous to "throw", and `catch effect`
is a keyword for catching it.)

We've experimented with changing React's implementation to use these
internally to handle concurrency and being able to solve complex algorithms
that require a lot of back and forth such as layout calculation. It seems
to make these implementations much easier while remaining efficient.

It also allows for seamless async I/O handling by yielding deep in a fiber.

Effectively this is just solving the same thing as generators and
async/await.
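
To make the comparison concrete, here is roughly the same example written
with today's generators (a hedged sketch; the hand-rolled driver is only
for illustration). Note that otherFunction itself has to become a
generator and every intermediate caller has to forward its yields, which
is exactly the split and unwinding described below:

function* otherFunction() {
  console.log(1);
  let a = yield { x: 1, y: 2 };     // stands in for `perform { x: 1, y: 2 }`
  console.log(a);
  return a;
}

function* body() {
  let b = yield* otherFunction();   // intermediate callers must forward yields
  return b + 1;
}

const it = body();
const { x, y } = it.next().value;   // the "performed" value surfaces here
console.log(2);
const c = it.next(x + y);           // resume the "continuation" with x + y
console.log(c.value);

// Prints 1, 2, 3, 4 just like the sketch above.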

However, the benefit is that you don't have a split between "async"
functions, generator functions and synchronous functions. You still have an
explicit entry point through the place where you catch effects.

With generators and async functions, anytime you want to change any deep
effects you have to unwind all potential callers. Any intermediate library
has to be turned into async functions. The refactoring is painful and
leaves you with a lot of syntax overhead.

If you want to nest different effects such as layout, iterations, and async
functions, that complexity explodes because now every intermediate function
has to be able to handle all those concepts.

The performance characteristics demonstrated by KC Sivaramakrishnan are
also much more promising than JS VMs have been able to achieve with
async/await and generators so far. It's plausible that VMs can optimize
this in a similar way, in time. I suspect that the leakiness of the
microtask queue might cause problems, though.

I converted the OCaml example scheduler to this ECMAScript-compatible
syntax:

// RoundRobinScheduler.js

class Fork {
  constructor(fn) {
    this.fn = fn;
  }
}
export function fork(f) {
  perform new Fork(f);
}

class Yield { }
export function yieldToScheduler() {
  perform new Yield();
}

export function run(main) {
  const run_q = [];
  function enqueue(k) {
    run_q.push(k);
  }
  function dequeue() {
    if (run_q.length) {
      run_q.shift()();
    }
  }
  function spawn(f) {
    try {
      f();
      dequeue();
    } catch (e) {
      console.log(e.toString());
    } catch effect Yield -> [_, k] {
      enqueue(k);
      dequeue();
    } catch effect Fork -> [fork, k] {
      enqueue(k);
      spawn(fork.fn);
    }
  }
  spawn(main);
}

// Example.js

import * as Sched from "RoundRobinScheduler";

function f(id, depth) {
  console.log("Starting number %i", id);
  if (depth > 0) {
    console.log("Forking number %i", id * 2 + 1);
    Sched.fork(() => f(id * 2 + 1, depth - 1));
    console.log("Forking number %i", id * 2 + 2);
    Sched.fork(() => f(id * 2 + 2, depth - 1));
  } else {
    console.log("Yielding in number %i", id);
    Sched.yieldToScheduler();
    console.log("Resumed number %i", id);
  }
  console.log("Finishing number %i", id);
}

Sched.run(() => f(0, 2));


Re: stable sort proposal

2016-03-15 Thread Vic99999
> What about Timsort?

I doubt it will be faster on a random int array. TimSort is based on
MergeSort and, it seems, in its worst cases it cannot be better than MergeSort.
I have tried https://github.com/mziccard/node-timsort/ with my old node.js
(0.10.4) and Chrome 49 (win32), and I see that the random int array case is
much slower than the native sort in Chrome, and in node.js too if I replace
"native" with the function from https://github.com/v8/v8/blob/master/src/js/array.js .
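
For anyone who wants to reproduce that kind of comparison, a rough harness
along these lines is enough (a sketch only; the commented-out node-timsort
call assumes the sort(array, comparator) export described in its README):

function randomInts(n) {
  const a = new Array(n);
  for (let i = 0; i < n; i++) a[i] = (Math.random() * 1e9) | 0;
  return a;
}

function time(label, sortFn) {
  const arr = randomInts(1e6);
  const start = Date.now();
  sortFn(arr);
  console.log(label + ': ' + (Date.now() - start) + ' ms');
}

const cmp = (a, b) => a - b;
time('native sort', arr => arr.sort(cmp));
// Plug the implementation under test in here, e.g.:
// time('timsort', arr => require('timsort').sort(arr, cmp));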

Perhaps implementers will want to leave the behaviour of
`array.sort(comparefn)` as it is, for backward compatibility.


Re: stable sort proposal

2016-03-15 Thread Dean Landolt
On Tue, Mar 15, 2016 at 12:00 AM, Vic9  wrote:

> @Tab Atkins Jr.,
>
> The only question is how much slower/more expensive stable sorting is
> than unstable sorting, and whether that cost is sufficient to outweigh the
> usefulness of a stable sort.
> My own experiment shows, that QuickSort (unstable) is ~20% (or more)
> faster on "random" array of integers (a good case for QuickSort). Probably,
> there are many Chrome users who sort large integer arrays, and so Chrome
> devs will never switch to a stable sort. (see comments at
> https://bugs.chromium.org/p/v8/issues/detail?id=90 )
>


I bet the Chrome team would switch if it were spec'd that way -- this would
put them on an even playing field with other implementations. But yes, 20+%
definitely makes the case that if stable sort were spec'd, a hook to get at
any faster unstable sort should be specified as well.

But again, unstable sort should be opt-in, used by those that fully grok
the implications. Devs who don't really know the difference (a *large*
proportion, IME) are very likely to expect stable sort semantics.
Correctness should trump performance for the default case.
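
For reference, the usual userland workaround when the built-in sort may be
unstable is to decorate each element with its original index and use that
index as the tiebreaker (a sketch; stableSort is just an illustrative name):

function stableSort(array, compare) {
  return array
    .map((value, index) => ({ value, index }))   // remember original positions
    .sort((a, b) => compare(a.value, b.value) || (a.index - b.index))
    .map(entry => entry.value);
}

// Equal keys keep their original relative order:
const input = [{ k: 1, tag: 'a' }, { k: 0 }, { k: 1, tag: 'b' }];
console.log(stableSort(input, (a, b) => a.k - b.k));
// [{ k: 0 }, { k: 1, tag: 'a' }, { k: 1, tag: 'b' }]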


RE: Async iteration

2016-03-15 Thread Domenic Denicola
From: es-discuss [mailto:es-discuss-boun...@mozilla.org] On Behalf Of Isiah 
Meadows

> By the way, I think observables are getting more headway than async 
> generators. 

This is very inaccurate. They have gotten more conference talks, but not more 
committee headway.


Re: Async iteration

2016-03-15 Thread Isiah Meadows
I'm referring to async generators primarily, though. And observables do
have the forEach method to listen and iterate with.
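
For example, something like this against the proposal's surface (a sketch;
treat the exact constructor and forEach shapes as assumptions based on the
zenparsing/es-observable README, and assume an implementation or polyfill
of Observable is in scope):

const ticks = new Observable(observer => {
  let i = 0;
  const id = setInterval(() => {
    observer.next(i++);
    if (i > 2) observer.complete();
  }, 100);
  return () => clearInterval(id);   // cleanup when unsubscribed
});

// forEach consumes the whole sequence and settles when it completes or errors.
ticks.forEach(value => console.log('tick', value))
  .then(() => console.log('done'))
  .catch(err => console.error(err));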

On Tue, Mar 15, 2016, 08:46 Benjamin Gruenbaum  wrote:

> Observables and async iterators are not at-odds. The two proposals
> complement each other and address two sides of the same coin (pull and
> push).
>
> I'm also not sure what "headway" means here, but a few months ago the
> observable counterpart to the `async for` loop (the `for..on` loop
> proposed by Jafar) was (possibly temporarily) scrapped in favor of
> async iteration with async iterators.
>
> On Tue, Mar 15, 2016 at 2:26 PM, Isiah Meadows 
> wrote:
>
>> By the way, I think observables are getting more headway than async
>> generators.
>>
>> https://github.com/zenparsing/es-observable
>>
>> On Mon, Mar 14, 2016, 16:01 Benjamin Gruenbaum  wrote:
>>
>>> I would be super surprised if I could use `var` everywhere _except_
>>> async iteration.
>>>
>>> So I'd say consistency triumphs. Same reason all the ES2015 features
>>> exist in non-strict mode.
>>>
>>> Also, you might want to look at the async/await pep for why Python has
>>> added async iteration in 3.5
>>>
>>> On 14 Mar 2016, at 20:29, John Lenz  wrote:
>>>
>>>
>>> On Mon, Mar 14, 2016 at 11:19 AM, Kevin Smith 
>>> wrote:
>>>
 Is there a summary of the motivation for "for-await" and "async
> iteration" in general?
>

 There's a short section at:
 https://github.com/tc39/proposal-async-iteration#overview-and-motivation

>>>
>>> Thanks
>>>
>>>


> Has there any discussion in not supporting "var" in "for-await"
> initializers?
>

 Symmetry with for-of is definitely an important consideration.  Is
 there a particular reason why you think "var" should be disallowed?


>>> There is not a backward compatibility concern and "var" is a smell for
>>> all the reasons that "let" was introduced (per-iteration bindings don't
>>> happen, it isn't scoped to the block).  (I'm unclear why "for-of" allows
>>> "var" for the same reason). I'm not convinced 100% either way, I was
>>> just wondering if it was discussed.
>>>
>>>
>>>
>>
>


Re: Async iteration

2016-03-15 Thread Benjamin Gruenbaum
Observables and async iterators are not at odds. The two proposals
complement each other and address two sides of the same coin (pull and
push).

I'm also not sure what "headway" means here, but a few months ago the
observable counterpart to the `async for` loop (the `for..on` loop
proposed by Jafar) was (possibly temporarily) scrapped in favor of
async iteration with async iterators.
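
For readers who haven't looked at it, the pull side of that proposal looks
roughly like this (a sketch; countdown and the timings are made up):

// An async generator is *pulled* one value at a time with for-await-of.
async function* countdown(from) {
  for (let i = from; i >= 0; i--) {
    await new Promise(resolve => setTimeout(resolve, 100)); // async work per item
    yield i;
  }
}

async function main() {
  for await (const n of countdown(3)) {
    console.log(n); // 3, 2, 1, 0
  }
}

main();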

On Tue, Mar 15, 2016 at 2:26 PM, Isiah Meadows 
wrote:

> By the way, I think observables are getting more headway than async
> generators.
>
> https://github.com/zenparsing/es-observable
>
> On Mon, Mar 14, 2016, 16:01 Benjamin Gruenbaum  wrote:
>
>> I would be super surprised if I could use `var` everywhere _except_ async
>> iteration.
>>
>> So I'd say consistency triumphs. Same reason all the ES2015 features
>> exist in non-strict mode.
>>
>> Also, you might want to look at the async/await pep for why Python has
>> added async iteration in 3.5
>>
>> On 14 Mar 2016, at 20:29, John Lenz  wrote:
>>
>>
>> On Mon, Mar 14, 2016 at 11:19 AM, Kevin Smith 
>> wrote:
>>
>>> Is there a summary of the motivation for "for-await" and "async
 iteration" in general?

>>>
>>> There's a short section at:
>>> https://github.com/tc39/proposal-async-iteration#overview-and-motivation
>>>
>>
>> Thanks
>>
>>
>>>
>>>
 Has there any discussion in not supporting "var" in "for-await"
 initializers?

>>>
>>> Symmetry with for-of is definitely an important consideration.  Is there
>>> a particular reason why you think "var" should be disallowed?
>>>
>>>
>> There is not a backward compatibility concern and "var" is a smell for
>> all the reasons that "let" was introduced (per-iteration bindings don't
>> happen, it isn't scoped to the block).  (I'm unclear why "for-of" allows
>> "var" for the same reason). I'm not convinced 100% either way, I was
>> just wondering if it was discussed.
>>
>>
>>
>


Re: Async iteration

2016-03-15 Thread Isiah Meadows
By the way, I think observables are getting more headway than async
generators.

https://github.com/zenparsing/es-observable

On Mon, Mar 14, 2016, 16:01 Benjamin Gruenbaum  wrote:

> I would be super surprised if I could use `var` everywhere _except_ async
> iteration.
>
> So I'd say consistency triumphs. Same reason all the ES2015 features exist
> in non-strict mode.
>
> Also, you might want to look at the async/await pep for why Python has
> added async iteration in 3.5
>
> On 14 Mar 2016, at 20:29, John Lenz  wrote:
>
>
> On Mon, Mar 14, 2016 at 11:19 AM, Kevin Smith 
> wrote:
>
>> Is there a summary of the motivation for "for-await" and "async
>>> iteration" in general?
>>>
>>
>> There's a short section at:
>> https://github.com/tc39/proposal-async-iteration#overview-and-motivation
>>
>
> Thanks
>
>
>>
>>
>>> Has there any discussion in not supporting "var" in "for-await"
>>> initializers?
>>>
>>
>> Symmetry with for-of is definitely an important consideration.  Is there
>> a particular reason why you think "var" should be disallowed?
>>
>>
> There is not a backward compatibility concern and "var" is a smell for all
> the reasons that "let" was introduced (per-iteration bindings don't
> happen, it isn't scoped to the block).  (I'm unclear why "for-of" allows
> "var" for the same reason). I'm not convinced 100% either way, I was
> just wondering if it was discussed.
>
>
>


Re: Feature Request : Access Closures of a function, outside of the Function!

2016-03-15 Thread Isiah Meadows
Oh, and closures being accessible outside of them also would have huge
memory implications. IMHO that's when you start just using traditional OOP,
and bind methods where necessary.
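
That is, something along these lines rather than reaching into a closure
from the outside (a sketch; Counter is just an illustrative name):

// State that outside code needs to see lives on the instance, not in a closure.
class Counter {
  constructor() {
    this.count = 0;
    this.increment = this.increment.bind(this); // bind where it will be passed around
  }
  increment() {
    this.count++;
  }
}

const c = new Counter();
[1, 2, 3].forEach(c.increment); // safe to detach because it's bound
console.log(c.count);           // 3, and the state is inspectable from outside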

On Sun, Mar 13, 2016, 20:42 Gorgi Kosev  wrote:

> Have you tried experimenting with babel to achieve this? Here is a
> starting point for a transform that makes all arrow functions behave this
> way: https://gist.github.com/spion/adc5b0110f9466efc0a2 (note: may need
> more work)
>
> Unfortunately, as far as I know a whole new type of function would probably
> have to be introduced for this. Some security capabilities of JS are based
> on closure captures being inaccessible outside their scope.
>


Fwd: stable sort proposal

2016-03-15 Thread Isiah Meadows
Missed the list...

-- Forwarded message -
From: Isiah Meadows 
Date: Tue, Mar 15, 2016, 07:42
Subject: Re: stable sort proposal
To: Vic9 


What about Timsort? It's used in Android's Java implementation, as well as
in Python since 2.3. I don't know that much about its performance aspects
beyond it requiring O(n) memory (compared to QuickSort's O(log n)) and
being adaptive (QuickSort is not).

On Tue, Mar 15, 2016, 00:00 Vic9  wrote:

> @Tab Atkins Jr.,
>
> The only question is how much slower/more expensive stable sorting is
> than unstable sorting, and whether that cost is sufficient to outweigh the
> usefulness of a stable sort.
> My own experiment shows, that QuickSort (unstable) is ~20% (or more)
> faster on "random" array of integers (a good case for QuickSort). Probably,
> there are many Chrome users who sort large integer arrays, and so Chrome
> devs will never switch to a stable sort. (see comments at
> https://bugs.chromium.org/p/v8/issues/detail?id=90 )
>


Re: stable sort proposal

2016-03-15 Thread Isiah Meadows
Yeah...you got the intent, though. :)

On Mon, Mar 14, 2016, 14:16 Tab Atkins Jr.  wrote:

> On Fri, Mar 11, 2016 at 8:17 PM, Isiah Meadows 
> wrote:
> > In my honest opinion, there's not much reason to just require the sort
> to be
> > stable. Some engines have done this in the past, and the spec technically
> > allows it. At this point, stable sorts are about as fast as unstable
> ones,
> > both in theory and practice (wasn't the case 10 years ago IIRC).
>
> I think you meant "to not just require", yes?  As in, you think the
> spec *should* require .sort() to be stable?
>
> On Sun, Mar 13, 2016 at 12:23 PM, Vic9  wrote:
> > Stable sort is useful sometimes, but it is possible to implement it in
> js.
>
> Most things in the JS standard library are possible in userland code,
> so that's not a useful rebuttal to anything by itself.  The question
> is whether it's worth requiring that the standard sort be stable.
> Stable sort is never *bad* - there are no use-cases where a stable
> sort gives the wrong answer while an unstable sort is correct, but the
> opposite is definitely true.  The only question is how much
> slower/more expensiver stable sorting is than unstable sorting, and
> whether that cost is sufficient to outweigh the usefulness of a stable
> sort.
>
> ~TJ
>