Hello,

On Fri, 12 Feb 2021 18:26:53 +1100
Chris Angelico <ros...@gmail.com> wrote:

> On Fri, Feb 12, 2021 Paul Sokolovsky <pmis...@gmail.com> wrote:
> > ... And on the 2nd thought, that won't work. The reason it works in
> > JS is that it doesn't have tuples. In Python, "(a, b) => (1, 2)"
> > means "compare a tuple for greater-or-equal".  
> 
> Should be safe actually - "=>" is not a valid comparison operator.

To punish myself for making such stupid mistakes, I volunteered to
implement a PoC of that. And by long, honorable tradition, improvements
to Python get implemented in Python first. So with my "imphook" thingy,
https://pypi.org/project/imphook/, and with the hook module at the end
of this mail, the following works as expected:

===========
$ cat example_arrow_func.py 
f = (a, b) => a + b
print(f(1, 2))

res = ((a, b) => a + b)(3, 4)
print(res)

print(list(map((x) => x * 2, [1, 2, 3, 4])))

# Confirm there's no crash on a bare tuple at the tail of the file.
(1, 2)

$ python3 -m imphook -i mod_arrow_func -m example_arrow_func
3
7
[2, 4, 6, 8]
===========
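
For reference, the hook below rewrites the arrow syntax at the token
level into a plain lambda, so the first line of the example compiles
as the equivalent of (modulo untokenize's spacing):

===========
f = lambda a, b: a + b
===========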

The implementation was written a bit cowboyishly in maybe 15 minutes,
so, just maybe, you can still crash it. For example, it clearly doesn't
support newlines in the arrow parameter list (a sketch of such a
breaking input follows the listing):

===== mod_arrow_func.py ======
import sys
import tokenize

import imphook


class TokBuf:

    def __init__(self):
        self.tokens = []

    def append(self, t):
        self.tokens.append(t)

    def clear(self):
        self.tokens.clear()

    def empty(self):
        return not self.tokens

    def spool(self):
        yield from self.tokens
        self.clear()


def xform(token_stream):
    tokbuf = TokBuf()

    for t in token_stream:

        if t[1] == "(":
            # We're interested only in the deepest parens.
            if not tokbuf.empty():
                yield from tokbuf.spool()
            tokbuf.append(t)
        elif t[1] == ")":
            # Look ahead 2 tokens, to check for the "=>" arrow.
            nt1 = next(token_stream)
            nt2 = next(token_stream)
            if nt1[1] == "=" and nt2[1] == ">":
                # Rewrite "(params) =>" as "lambda params:", dropping
                # the opening paren at the head of the buffer.
                yield (tokenize.NAME, "lambda")
                yield from tokbuf.tokens[1:]
                tokbuf.clear()
                yield (tokenize.OP, ":")
            else:
                # Not an arrow - replay the buffered tokens unchanged.
                yield from tokbuf.tokens
                tokbuf.clear()
                yield t
                yield nt1
                yield nt2
        elif not tokbuf.empty():
            tokbuf.append(t)
        else:
            yield t


def hook(modname, filename):
    with open(filename, "rb") as f:
        # Frankly speaking, tokenizing just to convert back to string form
        # isn't too efficient, but CPython doesn't offer us a way to compile
        # a token stream so far, so we have no choice.
        source = tokenize.untokenize(xform(tokenize.tokenize(f.readline)))
    # type(imphook) is simply the module type; create a fresh module.
    mod = type(imphook)(modname)
    exec(source, vars(mod))
    return mod


imphook.add_import_hook(hook, (".py",))
===========
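
For completeness, here's my guess (untested) at an input that breaks
the transform: the newline inside the parens is buffered as an NL
token, and after the rewrite to "lambda" the parameters are no longer
parenthesized, so the result shouldn't even compile:

===========
f = (a,
     b) => a + b
===========

And a minimal sketch of exercising the transform standalone, without
going through the import machinery (this assumes imphook is installed,
as importing mod_arrow_func registers the hook as a side effect):

===========
import io
import tokenize

from mod_arrow_func import xform

src = b"f = (a, b) => a + b\n"
toks = tokenize.tokenize(io.BytesIO(src).readline)
out = tokenize.untokenize(xform(toks))
# untokenize() returns bytes here, as the stream starts with an
# ENCODING token; the spacing is whatever its compatibility mode
# produces, but the result is equivalent to: f = lambda a, b: a + b
print(out.decode("utf-8"))
===========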

-- 
Best regards,
 Paul                          mailto:pmis...@gmail.com