[issue39232] asyncio crashes when tearing down the proactor event loop

2020-09-12 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue39232>



[issue41505] asyncio.gather of large streams with limited resources

2020-08-23 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

The traditional way this is done is with a finite number of workers pulling work 
off a queue. This is straightforward to set up with builtins:


from uuid import uuid4
import asyncio, random


async def worker(q: asyncio.Queue):
    while job := await q.get():
        print(f"working on job {job}")
        await asyncio.sleep(random.random() * 5)
        print(f"Completed job {job}")
        q.task_done()


async def scheduler(q, max_concurrency=5):
    workers = []
    for i in range(max_concurrency):
        w = asyncio.create_task(worker(q))
        workers.append(w)

    try:
        await asyncio.gather(*workers)
    except asyncio.CancelledError:
        pass


async def main():
    jobs = [uuid4().hex for i in range(1_000)]
    q = asyncio.Queue()
    for job in jobs:
        await q.put(job)

    t = asyncio.create_task(scheduler(q))
    await q.join()
    t.cancel()
    await t


if __name__ == "__main__":
    asyncio.run(main())


A neater API would be something like our Executor API in concurrent.futures, 
but we don't yet have one of those for asyncio.  I started playing with some 
ideas for this a while ago here: https://github.com/cjrh/coroexecutor

Alas, I have not yet added a "max_workers" parameter, so that isn't available in 
my lib. I discuss options for implementing it in an issue: 
https://github.com/cjrh/coroexecutor/issues/2

I believe that the core devs are working on a feature that might also help for 
this, called "task groups", but I haven't been following closely so I don't 
know where that's at currently.

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue41505>



[issue40894] asyncio.gather() cancelled() always False

2020-06-12 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

Kyle is correct.  By analogy with Kyle's example, the following example has no 
gather, only two nested futures:

```
# childfut.py
import asyncio

async def f(fut):
    await fut

async def g(t):
    await asyncio.sleep(t)

async def main():
    fut_g = asyncio.create_task(g(1))
    fut_f = asyncio.create_task(f(fut_g))

    try:
        # Cancel the "child" future
        fut_g.cancel()

        await fut_f
    except asyncio.CancelledError as e:
        pass

    print(f'fut_f done? {fut_f.done()} fut_f cancelled? {fut_f.cancelled()}')
    print(f'fut_g done? {fut_g.done()} fut_g cancelled? {fut_g.cancelled()}')

asyncio.run(main())
```

It produces:

```
$ python childfut.py
fut_f done? True fut_f cancelled? True
fut_g done? True fut_g cancelled? True
```

The outer future f has f.cancelled() == True even though it was the inner 
future that got cancelled.

I think `gather()` should work the same. It would be confusing if 
`future_gather.cancelled()` is false when a child is cancelled, while a plain old 
outer future returns `future.cancelled() == True` when the futures it waits on 
are cancelled.
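
For comparison, here is a minimal sketch that probes the same question with an 
actual gather future. (Illustrative only: what `fut_gather.cancelled()` reports 
here is exactly the behaviour under discussion, so the output depends on the 
Python version.)

```
import asyncio

async def g(t):
    await asyncio.sleep(t)

async def main():
    child = asyncio.create_task(g(1))
    fut_gather = asyncio.gather(child)
    await asyncio.sleep(0)  # let the child start

    # Cancel only the child, not the gather future itself
    child.cancel()
    try:
        await fut_gather
    except asyncio.CancelledError:
        pass

    print(f'gather done? {fut_gather.done()} gather cancelled? {fut_gather.cancelled()}')
    print(f'child done? {child.done()} child cancelled? {child.cancelled()}')

asyncio.run(main())
```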

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue40894>



[issue39839] Non-working error handler when creating a task with assigning a variable

2020-03-15 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

Can also reproduce on 3.8. Another version that "works" (raises the exception) 
is:

task = loop.create_task(test())
del task

This suggests there's something going on with reference counting or garbage 
collection. In the version that "doesn't work", the task exception only appears 
in the custom exception handler when the loop is stopped, not before. I've 
added a log message to show each second that passes, and the loop is stopped 
after 5 seconds:

import asyncio
import logging


def handle_exception(loop, context):
    msg = context.get("exception", context["message"])
    logging.error("Caught exception: %s", msg)


async def test():
    await asyncio.sleep(1)
    raise Exception("Crash.")


def second_logger(loop, count) -> None:
    logging.warning(count)
    loop.call_later(1, lambda count=count: second_logger(loop, count + 1))


def main():
    loop = asyncio.get_event_loop()
    loop.call_later(1, lambda: second_logger(loop, 0))
    loop.call_later(5, loop.stop)
    loop.set_exception_handler(handle_exception)
    task = loop.create_task(test())
    try:
        loop.run_forever()
    finally:
        loop.close()


if __name__ == "__main__":
    main()

OUTPUT:

$ py -3.8 -u bpo-issue39839.py
WARNING:root:0
WARNING:root:1
WARNING:root:2
WARNING:root:3
WARNING:root:4
ERROR:root:Caught exception: Crash.

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue39839>



[issue39857] subprocess.run: add an extra_env kwarg to complement existing env kwarg

2020-03-10 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

The dict unpacking generalizations that I posted were added in Python 3.5, 
which is pretty old by now. (But, true, it is in Python 3 and not Python 2.) This 
is the PEP: https://www.python.org/dev/peps/pep-0448/

The new syntax that Brandt posted will indeed only be available from 3.9 on.

--

Python tracker <https://bugs.python.org/issue39857>



[issue39857] subprocess.run: add an extra_env kwarg to complement existing env kwarg

2020-03-07 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

dict syntax tools make it fairly easy to compose new dicts from old ones with 
overrides:

subprocess.run(..., env={**os.environ, 'FOO': ..., 'BAR': ...}, ...)

Would this be sufficient to avoid the copy/pasting boilerplate?
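
For example, a minimal runnable sketch of that approach (the FOO/BAR names and 
values are just placeholders):

import os
import subprocess
import sys

result = subprocess.run(
    [sys.executable, '-c', 'import os; print(os.environ["FOO"], os.environ["BAR"])'],
    # Copy the parent environment, then extend/override specific variables.
    env={**os.environ, 'FOO': 'foo-value', 'BAR': 'bar-value'},
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # foo-value bar-value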

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue39857>



[issue39609] Set the thread_name_prefix for asyncio's default executor ThreadPoolExecutor

2020-02-17 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

This change seems fine.

Markus,

I'm curious if there is a specific reason you prefer to use the default 
executor rather than replacing it with your own? Is it just convenience or are 
there other reasons?
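
(For reference, replacing it yourself is only a couple of lines at startup; a 
minimal sketch, with an illustrative prefix string:)

import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor

async def main():
    loop = asyncio.get_running_loop()
    # Install a default executor whose worker threads have a recognisable name.
    loop.set_default_executor(ThreadPoolExecutor(thread_name_prefix='my-app-worker'))
    await loop.run_in_executor(None, lambda: print(threading.current_thread().name))

asyncio.run(main())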

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue39609>



[issue38988] Killing asyncio subprocesses on timeout?

2020-02-01 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

@dontbugme This is a very old problem with threads and sub-processes.  In the 
general case (cross-platform, etc) it is difficult to kill threads and 
sub-processes from the outside. The traditional solution is to somehow send a 
message to the thread or subprocess to tell it to finish up. Then, you have to 
write the code running the thread or subprocess to notice such a message, and 
then shut itself down. With threads, the usual solution is to pass `None` on a 
queue, and have the thread pull data off that queue. When it receives a `None` 
it knows that it's time to shut down, and the thread terminates itself. This 
model can also be used with the multiprocessing module because there is a Queue 
instance provided there that works across the inter-process boundary.  
Unfortunately, we don't have that feature in the asyncio subprocess machinery 
yet. For subprocesses, there are three options available:

1) Send a "shutdown" sentinel via STDIN (asyncio.subprocess.Process.communicate)
2) Send a process signal (via asyncio.subprocess.Process.send_signal)
3) Pass messages between main process and child process via socket connections

My experience has been that (3) is the most practical, esp. in a cross-platform 
sense. The added benefit of (3) is that this also works, unchanged, if the 
"worker" process is running on a different machine. There are probably things 
we can do to make (3) easier. Not sure.
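
For example, here is a minimal sketch of option (1), assuming a toy child 
program that exits when it reads a "shutdown" line on STDIN:

import asyncio
import sys

CHILD = """\
import sys
for line in sys.stdin:
    if line.strip() == "shutdown":
        break
print("child exiting cleanly")
"""

async def main():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", CHILD,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    await asyncio.sleep(1)  # pretend the child is doing useful work here
    # Send the sentinel, then wait for the child to finish up by itself.
    out, _ = await proc.communicate(b"shutdown\n")
    print(out.decode().strip())

asyncio.run(main())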

I don't know if my comment helps, but I feel your pain. You are correct that 
`wait_for` is not an alternative to `timeout` because there is no actual 
cancellation that happens.

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue38988>



[issue39483] Proposial add loop parametr to run in asyncio

2020-02-01 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

Hmm, my recent comment looks rude, but I didn't intend it that way. What I 
mean is: there are many, many more users of asyncio.run than there are of 
teleton, so any change made to asyncio.run is going to affect more people than 
the other way round. So before we regard this as something that will be 
generally useful (run() taking a loop parameter), it will be faster to check 
whether the 3rd-party library provides a way to work with the std library.  
FWIW I have an alternative run() implementation, the PyPI package "aiorun", in 
which I do allow a loop parameter to be passed in. So far, it's only caused 
headaches for me as the maintainer :)

--

Python tracker <https://bugs.python.org/issue39483>



[issue39483] Proposial add loop parametr to run in asyncio

2020-02-01 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

@heckad You should instead ask the maintainers of teleton how to use their 
library with asyncio.run, not the other way round.

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue39483>



[issue39010] ProactorEventLoop raises unhandled ConnectionResetError

2020-02-01 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue39010>



[issue37334] Add a cancel method to asyncio Queues

2019-11-22 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

Ok, I see now. The improvement with only a single producer/consumer might be 
marginal, but the proposition that `queue.cancel()` might simplify the 
situation with multiple producers and/or consumers is more compelling.

Usually, assuming M producers and N consumers (and 1 queue), one would typically 
stop the M producers outright, then place N `None` values on the queue, and 
each consumer shuts itself down when it receives a `None`.  It's clunky but it 
works.
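
(For reference, a minimal asyncio sketch of that N-sentinel idiom, with purely 
illustrative names:)

import asyncio

async def consumer(q: asyncio.Queue):
    while True:
        item = await q.get()
        if item is None:  # sentinel: this consumer shuts itself down
            break
        print('processing', item)

async def main():
    q = asyncio.Queue()
    n_consumers = 3
    consumers = [asyncio.create_task(consumer(q)) for _ in range(n_consumers)]
    for item in range(10):  # stand-in for the M producers
        await q.put(item)
    for _ in range(n_consumers):
        await q.put(None)  # one sentinel per consumer
    await asyncio.gather(*consumers)

asyncio.run(main())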

This is from experience with threaded code, but the difference with asyncio is 
that we have cancellation available to us, whereas we could not cancel threads. 
I think this is why I've carried over my queue-idioms from threads to asyncio. 
So this is an interesting idea: if we introduce cancellation to queue handling, 
do things get better for the MxN producers and consumers design?

Rehashing, what I might expect to happen when I call `queue.cancel()` is that 

1. A CancelledError (or maybe `QueueCancelled`?) exception is raised in all 
producers and consumers - this gives a producer a chance to handle the error 
and do something with the waiting item that could not be `put()`
2. Items currently on the queue still get processed in the consumers before the 
consumers exit.

I think (1) is easy, but I expected (2) to be more tricky. You said it already 
works that way in your PR. I didn't believe it so I wrote a silly program to 
check, and it does! All pending items on the queue are still consumed by 
consumers even after the queue._getters futures are cancelled.  

So yes, both (1) and (2) appear to work.

> As for the "why don't I just cancel the task?", well, if you know it. There 
> may be many consumer or producer tasks waiting for their turn. Sure, you can 
> keep a list of all those tasks. But that's exactly the point of the proposed 
> change: the Queue already knows all the waiting tasks, no need to keep 
> another list up-to-date!

Finally - while I think the MxN producers/consumers case might be simplified by 
this, it's worth noting that usually in shutdown, *all* the currently-pending 
tasks are cancelled anyway. And as I said before, in an MxN queue scenario, one 
might place N `None` values on the queue, and then just send `CancelledError` 
to everything anyway (consumers will ignore the cancellation and just wait for 
the `None`s to exit). This works well.  

Thus, if `queue.cancel()` were available to me right now, the primary 
difference as I see it would be that during shutdown, instead of placing N 
`None` values on the queue, I would instead call `queue.cancel()`. I agree 
that's a bit neater.  (It will, however, still be necessary to absorb CancelledError 
in the consumers, e.g. what is raised by `asyncio.run()` during shutdown, so 
that's unchanged).

I agree with Yury that I don't like `queue.close`. "Cancel" seems better after 
all.

I disagree with Yury that items are discarded - I checked that already-present 
items on the queue will still be consumed by consumers, before the 
`queue.close` cancellation is actually raised.

--

Python tracker <https://bugs.python.org/issue37334>



[issue38529] Python 3.8 improperly warns about closing properly closed streams

2019-10-19 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue38529>



[issue38501] multiprocessing.Pool hangs atexit (and garbage collection sometimes)

2019-10-19 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue38501>



[issue38306] High level API for loop.run_in_executor(None, ...)?

2019-10-12 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

We can't allow both an `executor=` kwarg and **kwargs for the target func, 
unfortunately. If we accept `executor=`, we'll again have to force users to 
use functools.partial to wrap their functions that take kwargs.

--

Python tracker <https://bugs.python.org/issue38306>



[issue38306] High level API for loop.run_in_executor(None, ...)?

2019-10-12 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

Even before task groups land, this API can be easily improved by adding

asyncio.run_in_executor(func, *args, **kwargs)

- Only valid inside a coro or async context (uses get_running_loop internally)
- Analogous to how `loop.create_task` became `asyncio.create_task`
- Drop having to specify `None` for the default executor
- Users already know the `run_in_executor` name
- Allow both positional and kwargs (we can partial internally before calling 
loop.run_in_executor)
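
A minimal sketch of what that helper could look like (the name and placement 
are hypothetical; this is not an existing asyncio API):

import asyncio
import functools

async def run_in_executor(func, *args, **kwargs):
    """Hypothetical helper: run func(*args, **kwargs) in the default executor."""
    loop = asyncio.get_running_loop()
    # loop.run_in_executor only accepts positional args, so bind kwargs here.
    return await loop.run_in_executor(
        None, functools.partial(func, *args, **kwargs))

# Usage sketch (inside a coroutine):
#     data = await run_in_executor(some_blocking_read, path, encoding='utf-8')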

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue38306>



[issue37736] asyncio.wait_for is still confusing

2019-10-05 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

> asyncio.wait_for is still confusing

Perhaps the confusion can be fixed with improvements to the docs? To me, these 
specific docs seem pretty clear now, but I might not be a good judge of that.

> However, we still have the case where a misbehaving Task can cause wait_for 
> to hang indefinitely.

The key word here is "misbehaving". Cooperative concurrency does require 
cooperation. There are many ways in which coroutines can misbehave, the popular 
one being calling blocking functions when they shouldn't. I would be very 
uncomfortable with my coroutine being killable (e.g. by wait_for) by some other 
means besides CancelledError (which I can intercept and manage cleanup).  

The contract is: if my coroutine has a CancelledError raised, I take that to 
mean that I need to clean up whatever resources need cleanup, in a timely 
manner and then exit. If my coro refuses to exit, it is my coroutine that is 
wrong, not wait_for being unable to kill the coroutine.
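
A minimal sketch of that contract in action (the names are illustrative only; 
with the fixed behaviour, TimeoutError is raised only after the cleanup has 
run):

import asyncio

async def well_behaved():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        # Clean up resources promptly, then let the cancellation propagate.
        print('cleaning up')
        raise

async def main():
    try:
        await asyncio.wait_for(well_behaved(), timeout=0.1)
    except asyncio.TimeoutError:
        print('timed out, after cleanup completed')

asyncio.run(main())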

I definitely agree with Yury that the previous behaviour, the one where 
wait_for could raise TimeoutError *before* the inner coro has exited, was buggy 
and needed to be fixed.

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue37736>



[issue38164] polishing asyncio Streams API

2019-09-29 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue38164>



[issue38242] Revert the new asyncio Streams API

2019-09-29 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue38242>



[issue34037] asyncio: BaseEventLoop.close() shutdowns the executor without waiting causing leak of dangling threads

2019-09-11 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue34037>



[issue37334] Add a cancel method to asyncio Queues

2019-06-21 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

I'm interested in how this change would affect the pattern of shutting down a 
queue-processing task.

How would one decide whether to cancel the queue or the task? (I'm asking for 
real; this is not an objection to the PR.) For example, looking at the two 
tests in the PR:

def test_cancel_get(self):
    queue = asyncio.Queue(loop=self.loop)

    getter = self.loop.create_task(queue.get())
    test_utils.run_briefly(self.loop)
    queue.cancel()   # < HERE
    test_utils.run_briefly(self.loop)
    with self.assertRaises(asyncio.CancelledError):
        self.loop.run_until_complete(getter)

This test would work exactly the same if the `getter` task was cancelled 
instead, right?  Like this:

def test_cancel_get(self):
    queue = asyncio.Queue(loop=self.loop)

    getter = self.loop.create_task(queue.get())
    test_utils.run_briefly(self.loop)
    getter.cancel()   # < HERE
    test_utils.run_briefly(self.loop)
    with self.assertRaises(asyncio.CancelledError):
        self.loop.run_until_complete(getter)

So my initial reaction is that I'm not sure under what conditions it would be 
more useful to cancel the queue instead of the task. I am very used to applying 
cancellation to tasks rather than the queues they contain, so I might lack 
imagination in this area. The idiom I've been using so far for consuming queues 
looks roughly something like this:

async def consumer(q: asyncio.Queue):
    while True:
        try:
            data = await q.get()
        except asyncio.CancelledError:
            q.put_nowait(None)  # ignore QueueFull for this discussion
            continue

        try:
            if not data:
                logging.info('Queue shut down cleanly')
                return  # <-- The only way to leave the coro

            # ... process `data` here ...
        except Exception:
            logging.exception('Unexpected exception:')
            continue
        finally:
            q.task_done()

^^ With this pattern, I can shut down the `consumer` task either by cancelling 
the task (internally it'll put a `None` on the queue) or by placing a `None` on 
the queue outright from anywhere else. The key point is that in either case, 
existing items on the queue will still get processed before the `None` is 
consumed, terminating the task from the inside.

(A) If the queue itself is cancelled (as in the proposed PR), would it still be 
possible to catch the `CancelledError` and continue processing whatever items 
have already been placed onto the queue? (and in this case, I think I'd still 
need to place a sentinel onto the queue to mark the "end"...is that correct?)

(B) The `task_done()` is important for app shutdown so that the application 
shutdown process waits for all currently-pending queue items to be processed 
before proceeding with the next shutdown step. So, if the queue itself is 
cancelled (as in the proposed PR), what happens to the application-level call 
to `await queue.join()` during the shutdown sequence, if a queue was cancelled 
while there were still unprocessed items on the queue for which `task_done()` 
had not been called?

It would be great to have an example of how the proposed `queue.cancel()` would 
be used idiomatically, w.r.t. the two questions above.  It might be intended 
that the idiomatic usage of `queue.cancel()` is for situations where one 
doesn't care about dropping items previously placed on the queue. Is that the 
case?

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue37334>



[issue34831] Asyncio Tutorial

2019-06-19 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

I'm removing the GUI section of the chat case study. Yury was right, it's not 
going to add anything useful. The CLI chat client will work well because 
prompt-toolkit has actual support for asyncio.  Tkinter does not, and I think 
it'll be better to add a GUI section to this tutorial only once Tkinter gets 
first-class support for asyncio.

--

Python tracker <https://bugs.python.org/issue34831>



[issue34831] Asyncio Tutorial

2019-06-16 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

FYI I'm going to be using the 3rd-party prompt-toolkit for the chat client. 
(The server depends only on asyncio.)  I put several hours of research into 
finding a way for the CLI chat client to be not terrible, but it gets very 
complicated trying to manage stdin and stdout with asyncio.  OTOH 
prompt-toolkit just gives us exactly what we need, and the code looks short, 
neat and easy to understand. I hope that's ok (that I'll be mentioning a 3rd 
party lib in the tutorial).

--

Python tracker <https://bugs.python.org/issue34831>



[issue34831] Asyncio Tutorial

2019-06-15 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

That was a long two months, apologies.  I've made some fixes based on review 
comments and cleaned up some more of the code samples.  The primary outstanding 
pieces are the client component of the chat application case study, and the GUI 
integration section of the client case study.

--

Python tracker <https://bugs.python.org/issue34831>



[issue19495] context manager for measuring duration of blocks of code

2019-03-25 Thread Caleb Hattingh

Caleb Hattingh  added the comment:

Somehow I missed that there's been an open issue on this. Like others I've 
written a bunch of different incarnations of an "elapsed" context manager over 
the years.  Always for the more crude "how long did this take" reason like 
David mentioned, never the microbenchmarks scenario that timeit serves. The 
work is never quite substantial enough to make a PyPI package of it, but always 
annoying to have to do again the next time the need arises. 


The overall least fiddly scheme I've found is to just use a callback. It's the 
simplest option:


import time
from contextlib import contextmanager
from typing import Callable


@contextmanager
def elapsed(cb: Callable[[float, float], None], counter=time.perf_counter):
    t0 = counter()
    try:
        yield
    finally:
        t1 = counter()
        cb(t0, t1)


The simple case, the one most people want when they just want a quick check of 
elapsed time on a chunk of code, is then quite easy:


with elapsed(lambda t0, t1: print(f'read_excel: {t1 - t0:.2f} s')):
    # very large spreadsheet
    df = pandas.read_excel(filename, dtype=str)


(I rarely need to use a timer decorator for functions, because the profiler 
tracks function calls. But within the scope of a function it can sometimes be 
difficult to get timing information, particularly if the calls made there are 
into native extensions)

One of the consequences of using a callback strategy is that an additional 
version might be required for async callbacks (I have used these in production 
also):


@asynccontextmanager
async def aioelapsed(acb: Callable[[float, float], Awaitable[None]],
                     counter=time.perf_counter):
    t0 = counter()
    try:
        yield
    finally:
        t1 = counter()
        await acb(t0, t1)


So, the interesting thing here is that there is a general form for which an 
"elapsed" function is just a special case:


T = TypeVar('T')


@contextmanager
def sample_before_and_after(cb: Callable[[T, T], None], sample: Callable[[], T]):
    before = sample()
    try:
        yield
    finally:
        after = sample()
        cb(before, after)


The version of "elapsed" given further above is just this with 
sample=time.perf_counter.  So, it might be sufficient to cover the use-case of 
an "elapsed" context manager with something like the above, which is more 
general. However, I don't actually have any use cases for this more general 
form other than "elapsed", but I thought it was interesting.
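
For instance, reusing sample_before_and_after from above, the "elapsed" special 
case is just:

from functools import partial
import time

elapsed_cm = partial(sample_before_and_after, sample=time.perf_counter)

with elapsed_cm(lambda t0, t1: print(f'took {t1 - t0:.3f} s')):
    sum(range(1_000_000))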


Whether any of this merits being in the stdlib or not is hard to say. These 
code snippets are all short and easy to write. But I've written them multiple 
times to make "elapsed".


---


Once the "elapsed" abstraction is available, other cool ideas become a little 
bit easier to think about. These would be things that are user code (not in 
the stdlib), but which can make use of the "elapsed" cm; for example, a clever 
logger for slow code blocks (written a few of these too):


@contextmanager
def slow_code_logging(logger_name, msg, *args, threshold_sec=1.0, **kwargs):
    logger = logging.getLogger(logger_name)

    if logger.isEnabledFor(logging.INFO):
        def cb(t0: float, t1: float) -> None:
            dt = t1 - t0
            if dt < threshold_sec:
                # Change the logger level depending on threshold
                logger.debug(msg, dt, *args, **kwargs)
            else:
                logger.info(msg, dt, *args, **kwargs)

        cm = elapsed(cb)
    else:
        # Logger is not even enabled, do nothing.
        cm = nullcontext()

    with cm:
        yield


with slow_code_logging(__name__, 'Took longer to run than expected: %.4g s'):
    ...


And a super-hacky timing histogram generator (which would be quite interesting 
to use for measuring latency in network calls, e.g. with asyncio code):


@contextmanager
def histobuilder(counter, bin_size):
    def cb(t0, t1):
        dt = t1 - t0
        bucket = dt - dt % bin_size
        counter[bucket, bucket + bin_size] += 1

    with elapsed(cb, counter=time.perf_counter_ns):
        yield

counter = Counter()

for i in range(100):
    with histobuilder(counter, bin_size=int(5e4)):  # 50 us
        time.sleep(0.01)  # 10 ms

for (a, b), v in sorted(counter.items(), key=lambda _: _[0][0]):
    print(f'{a/1e6:6.2f} ms - {b/1e6:>6.2f} ms: {v:4} ' + '\u2588' * v)


output:


  9.85 ms -   9.90 ms:    1 █
  9.90 ms -   9.95 ms:   10 ██████████
  9.95 ms -  10.00 ms:   17 █████████████████
 10.00 ms -  10.05 ms:    8 ████████
 10.05 ms -  10.10 ms:   12 ████████████
 10.10 ms -  10.15 ms:    5 █████
 10.15 ms -  10.20 ms:    4 ████
 10.20 ms -  10.25 ms:    4 ████
 10.25 ms -  10.30 ms:    6 ██████
 10.30 ms -  10.35 ms:    9 █████████
 10.35 ms -  10.40 ms:    3 ███
 10.40 ms -  10.45 ms:    5 █████
 10.45 ms -  10.50 ms:   12 ████████████
 10.50 ms -  10.55 ms:    3 ███
 10.55 ms -  10.60 ms:    1 █


T

[issue34831] Asyncio Tutorial

2019-01-05 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

A quick note to say I have not abandoned this, it's just that life got 
complicated for me in late 2018. I intend to pick up progress again within the 
next month or two.

--

Python tracker <https://bugs.python.org/issue34831>



[issue30487] DOC: automatically create a venv and install Sphinx when running make

2019-01-05 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

@cheryl.sabella I am ok with closing this, but the original motivation for this 
work was from @zack.ware so he should weigh in.

I am not going to work on this any further for the foreseeable future (I've got 
my hands full already with the asyncio docs I'm trying to write in #34831).

--

Python tracker <https://bugs.python.org/issue30487>



[issue34831] Asyncio Tutorial

2018-10-21 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

I've added a few ideas for items in the "cookbook" page, which you'll see in 
the PR. If anyone has suggestions for more or better cookbook entries 
(recipes?), feel free to mention here or in the PR, I check both places. I 
expect to get more time to work on this next weekend, so it would be great to 
get ideas and reviews in during the week.

--

Python tracker <https://bugs.python.org/issue34831>



[issue35036] logger failure in suspicious.py

2018-10-21 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue35036>



[issue34831] Asyncio Tutorial

2018-10-07 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

I tested the Python 3.7.0 release version for Mac, the download called "macOS 
64-bit installer" with checksum ae0717a02efea3b0eb34aadc680dc498 on this page:

https://www.python.org/downloads/release/python-370/

I downloaded and installed that on a Mac, and the chat client app launched with 
no problem. (I added a screenshot to my GitHub repo readme).

Your error "ModuleNotFoundError: No module named '_tkinter'" suggests that the 
python you were using is a different one, because in the release download, Tk 
8.6.8 is bundled and the _tkinter module was built against it and must exist.

Anyway, I just wanted to check that it does work on mac :)

--

Python tracker <https://bugs.python.org/issue34831>



[issue34831] Asyncio Tutorial

2018-10-07 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

I set up a basic structure under "Doc/library/asyncio-tutorial" as suggested, 
and opened a PR to show you how that looks. When I make
more progress on a section, I'll post an update here.

--

Python tracker <https://bugs.python.org/issue34831>



[issue34831] Asyncio Tutorial

2018-10-07 Thread Caleb Hattingh


Change by Caleb Hattingh :


--
keywords: +patch
pull_requests: +9134
stage:  -> patch review

Python tracker <https://bugs.python.org/issue34831>



[issue34831] Asyncio Tutorial

2018-10-06 Thread Caleb Hattingh


Caleb Hattingh  added the comment:

A CLI client is a necessary step along the way anyway, so that sounds good to 
me.

You suggested:

> I'd organize the tutorial in a dedicated directory like 
> "Doc/library/asyncio-tutorial/" 

I had a look at the source tree; there is an existing "howto" directory. Do you 
still prefer your suggestion of using "Doc/library/" over something like 
"Doc/howto/asyncio.rst"?

--

Python tracker <https://bugs.python.org/issue34831>



[issue34831] Asyncio Tutorial

2018-10-01 Thread Caleb Hattingh

Caleb Hattingh  added the comment:

> * I think we should stick to your structure and push things to 
> docs.python.org as soon as every next section is somewhat ready.

Ok. I'll get a PR going for the start page of the tutorial.

> * Every big section should probably have its own page, linking prev/next 
> tutorial pages.
> * I'd organize the tutorial in a dedicated directory like 
> "Doc/library/asyncio-tutorial/".

Agree.

> BTW, maybe we should consider using the new iPythonn async repl: 
> https://blog.jupyter.org/ipython-7-0-async-repl-a35ce050f7f7   What do you 
> think about that?

I saw Matthias' tweets about that recently too. It's cool! But... for teaching 
purposes it's not great to introduce a whole new complex tool (IPython) to 
explain a different complex tool (asyncio). My experience is that *every* 
single new thing that is mentioned adds cognitive load for learners. For this 
tutorial my feeling is to keep as much to "ordinary" Python stuff as possible, 
i.e., stdlib.

> Just a quick note: I'd try to not mention the low-level loop APIs as long as 
> possible (e.g. no loop.run_until_complete() etc).

For sure, I agree with you 100% on this. But I find it hard to do as soon as I 
have to make a real thing. I think you're right that we should focus initially 
on only the high-level stuff (and for most of the tutorial). That is doable.

> I think we'll collapse first two section into one ("Coroutines" and 
> "Awaitables" into "Awaitables") and link the tutorial from that new section.

ok

> Yay for streams!
> I never use tkinter myself :( I remember trying to use it and it didn't work 
> on my macOS.  So I'd try to either:
> * build a simple browser app (that would require us to implement HTTP 0.9 
> which can be fun);
> * build a terminal app;
> * use iPython repl to connect to our asyncio server (might end up being more 
> complicated than the first two options).

I too have bashed my head for many hours over the years trying to get Tkinter 
to work on Mac, but a lot of work has gone into this recently and the newer 
(release) Pythons have bundled Tk 8.6: 
https://www.python.org/download/mac/tcltk/ (this is what learners will probably use 
on Mac)

Tkinter gets a bad rap, but it's really quite powerful--and portable. Runs 
great on a Raspberry Pi for example.

Noticing your hesitation towards tkinter ;) , I spent a few hours on Sunday 
sketching out my "chat server/client" idea a little more, using Tkinter for the 
client UI:

https://github.com/cjrh/chat

(Notice especially in the README all the different aspects of asyncio, streams 
etc. that we would be able to cover and explain with an actual use-case. THESE 
are the kinds of tricky things people desperately want help with.)

It's still rough, obviously (I can probably reduce the total LOC footprint by 
20%, and I'm sure you can improve on some parts), but I just wanted to show you 
something runnable you can prod and poke to get a concrete idea of what I'm 
suggesting. It works on Windows and should work on Linux, but I haven't tested 
that yet.

My proposal is that we slowly build up towards this, starting with the "hello 
world" simple case (asyncio.run calling main() which prints out "hello world" 
or something), and then adding the necessary features, piece by piece, with 
commentary along the way on what each piece does, and why it is done in a 
particular way. (I specifically like to write like this: simplistic case first, 
and then improve incrementally)

- Only requires stdlib (so we don't have to explain or link to pip/virtualenv 
etc. etc.)
- shows a wide range of *interoperating* asyncio features in a condensed app
- client has a proper GUI, i.e. "looks" like an actual application, not just an 
ugly CLI thing
- client handles reconnection, if the server goes down and comes back later.
- using signal handling to trigger shutdown (esp. the server)
- signal handling works on Windows (CTRL-C and CTRL-BREAK near-instant 
controlled shutdown)
- server is 100% asyncio (so that situation is covered), but client requires 
marrying two loops (so this situation is also covered), one for tkinter and one 
for asyncio. (This is a common problem, not just with UI frameworks but also 
with game programming frameworks like pygame, pyarcade and so on. Again, this 
is the kind of problem many people ask for help with.)
- thus, an example showing how to run asyncio in a thread (see the sketch after 
this list; asyncio.run works great in a thread, nice job!)
- an actual SSL example that works (this was surprisingly hard to find, 
eventually found one at PyMOTW)
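
(Not tutorial text, just a minimal sketch of the "asyncio.run in a thread" 
piece so it's concrete what the case study has to explain; the names are 
illustrative:)

import asyncio
import threading

async def async_main():
    # The whole asyncio side of the client (connection, queues, ...) lives here.
    await asyncio.sleep(1)
    print('asyncio side finished')

# asyncio.run() works fine off the main thread; the GUI keeps the main thread.
t = threading.Thread(target=asyncio.run, args=(async_main(),), daemon=True)
t.start()
print('main (GUI) thread is free to run the Tkinter mainloop here')
t.join()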

I fully realise that this case study implementation might look weird and ugly, 
and we don't really want to mention threads at all, and we don't want to 
explicitly refer to the loop, or create a Future instance, etc., but this is 
the kind of case study that will give people guidance on ho

[issue34831] Asyncio Tutorial

2018-09-28 Thread Caleb Hattingh


New submission from Caleb Hattingh :

Hi Yury,

As discussed, below is a very rough outline of a proposed TOC for an asyncio 
tutorial. No content has been written yet (only what you see below). I think we 
should nail down the TOC first.

Asyncio Tutorial


Proposed Table of Contents:

- Why asyncio?
- Reason #1: thread safety by not using threads at all.
- Reason #2: very many concurrent socket connections, which threads
  make cumbersome.
- Demo: concurrency without threads
- (only goals here are to *briefly* introduce the syntax, and then
  demonstrate concurrency without threads)
- show two tasks: one prints out integers 0-9, while the other
  prints out letters of the alphabet, A-I (a rough sketch of this
  demo appears after the outline below).
- pose the question: "they're running concurrently but we're
  not using threads: how is that possible?"
- explain that there is a thing called an "event loop" behind
  the scenes that allows execution to switch between the two
  every time one of them "waits".
- emphasize that you can easily spot where the switch can happen,
  it is where you see the "await" keyword. Switches will occur
  nowhere else.

- The difference between functions and `async def` functions
- async & await
- show `async def` functions, compare to `def` functions.  
- use inspect module to show what things actually are, e.g. function,
  coroutine function, coroutine, generator function, generator,
  asynchronous generator function, asynchronous generator.
- point out you can only use "await" inside an async def fn
- point out that `await ` is an expression, and you can
  use it in most places any other expression can be used.

- How to run `async def` functions
- point out there are two different issues: (a) `async def` functions
  can call other functions, and other `async def` functions using
  `await`, but also, (b) how to "get started" with the first
  `async def` function? Answer: run the event loop.
- show `asyncio.run()`, in particular running an `async def main()`
  function, which itself can call others.

- Dealing with concurrent functions
- (This is maybe too similar to the new docs in the section
  
https://docs.python.org/3/library/asyncio-task.html?highlight=asyncio%20run#coroutines-and-tasks)
- What if you want to run an `async def` function, but you don't
  want to wait for it? Answer: create a "task" for it.
- What if you want to run multiple functions concurrently, and you
  do want to wait for all of them to finish? Answer: use `gather`
  or `wait()`

- Case Study: chat server/client (my proposal)
- (goal is to walk through building a realistic example of using 
  asyncio)
- (This will be a lot more fun than a web-crawler. Web-crawlers are
  super boring!)
- (I'm pretty confident the size of the code can be kept small. A 
  lot can be done in under 50 lines, as I showed in the case studies
  in my book)
- server uses streams API
- server receives many long-lived connections
- user can create/join a "room", and then start typing messages.
  Other connected clients in the same room will see the messages.
- client implementation has some options:
- could use Tkinter gui, using streams API in an event loop 
  on a separate thread. (This would show how asyncio isn't some 
  alien thing, but is part of python. Would also show how to 
  set up asyncio to work in a separate thread. Finally, would not
  require any external dependencies, only the stdlib is needed
  for the entire case study.)
- could use a browser client, connecting back to the server 
  over a websocket connection. (This might seem simpler, but 
  in fact introduces a lot more complexity than just using
  tkinter. We need to bring in html, css, probably some js and
  probably also a third-party websockets python library. I feel
  like it isn't a good fit for a stdlib python tutorial, but it 
  is a more modern approach.)
- there are not a lot of other options. terminal-based client 
  might be possible, but probably hard, not cross-platform, and 
  will be unappealing to many people. 
- When describing the code, point out:
- you have to choose a "message protocol" (size-prefixed is fine)
- you must put `send` and `recv` in separate tasks
- server will "keep track" of connected clients and the room
  (or rooms?) they've joined
- startup and shutdown, specific references to the new `run()`
  function
- ?
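
(A rough sketch of the "concurrency without threads" demo described near the 
top of the outline; purely illustrative:)

import asyncio
import string

async def numbers():
    for i in range(10):
        print(i)
        await asyncio.sleep(0.1)  # every await is a potential switch point

async def letters():
    for c in string.ascii_uppercase[:9]:  # A to I
        print(c)
        await asyncio.sleep(0.1)

async def main():
    # Both run concurrently on one thread; the event loop switches at each await.
    await asyncio.gather(numbers(), letters())

asyncio.run(main())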

--
assignee: docs@python
components: Documentation, asyncio
messages: 326628
nosy: asvetlo

[issue30487] DOC: automatically create a venv and install Sphinx when running make

2017-11-27 Thread Caleb Hattingh

Caleb Hattingh <caleb.hatti...@gmail.com> added the comment:

Yep, sounds good.

--

Python tracker <https://bugs.python.org/issue30487>



[issue30487] DOC: automatically create a venv and install Sphinx when running make

2017-11-27 Thread Caleb Hattingh

Caleb Hattingh <caleb.hatti...@gmail.com> added the comment:

It looks like the check for an existing sphinx-build passes, and so no new venv 
is made, but this also means that blurb doesn't get installed.  I was concerned 
about this, but figured that at least the buildbots would create new envs each 
time, and this would only be an issue that a user with an odd configuration 
might have.

Sorry for the trouble.

The feature was simpler when it was only sphinx.  Now that blurb is there too, 
the logic for checking what is and isn't already present becomes a bit complex 
to reason through.

--

Python tracker <https://bugs.python.org/issue30487>



[issue30487] DOC: automatically create a venv and install Sphinx when running make

2017-11-27 Thread Caleb Hattingh

Caleb Hattingh <caleb.hatti...@gmail.com> added the comment:

Hi Ned

It's still supposed to allow both. It sounds like it's not working properly. 
I'll have a look. FYI, I worked on this for Zach Ware who is the primary 
stakeholder for this feature. 

Rgds
Caleb

> On 28 Nov 2017, at 7:12 AM, Ned Deily <rep...@bugs.python.org> wrote:
> 
> 
> Ned Deily <n...@python.org> added the comment:
> 
> I don't think this is a good idea. It has already caused problems with one 
> buildbot (Issue32149) and will likely break other build scripts.  As the Doc 
> Makefile stood previous to this commit, the Doc builds could take advantage 
> of either a system-installed or user-supplied Sphinx and blurb without having 
> to use the venv step.  Unless there are major objections, I am going to at 
> least temporarily revert it.
> 
> --
> nosy: +ned.deily
> resolution: fixed -> 
> stage: resolved -> needs patch
> status: closed -> open
> 
> ___
> Python tracker <rep...@bugs.python.org>
> <https://bugs.python.org/issue30487>
> ___

--

Python tracker <https://bugs.python.org/issue30487>



[issue30487] DOC: automatically create a venv and install Sphinx when running make

2017-11-09 Thread Caleb Hattingh

Caleb Hattingh <caleb.hatti...@gmail.com> added the comment:

No worries. I've made a new PR 4346. The old one was unsalvageable, I'm afraid. 
Too many other people got added to the notifications list as a result of my 
incorrect rebase.  The new one is fine.

--

Python tracker <https://bugs.python.org/issue30487>



[issue30487] DOC: automatically create a venv and install Sphinx when running make

2017-11-09 Thread Caleb Hattingh

Change by Caleb Hattingh <caleb.hatti...@gmail.com>:


--
keywords: +patch
pull_requests: +4303

Python tracker <https://bugs.python.org/issue30487>



[issue30487] DOC: automatically create a venv and install Sphinx when running make

2017-11-08 Thread Caleb Hattingh

Caleb Hattingh <caleb.hatti...@gmail.com> added the comment:

I messed up the PR through a failed rebase (trying to rebase my PR on top of 
upstream). I closed the PR as a result.  I have now fixed up my feature branch, 
but I have not resubmitted the PR.  Since the PR was left alone for many 
months, I'm ok with leaving things as is; shall we close this issue?

--

Python tracker <https://bugs.python.org/issue30487>



[issue31620] asyncio.Queue leaks memory if the queue is empty and consumers poll it frequently

2017-09-29 Thread Caleb Hattingh

Caleb Hattingh <caleb.hatti...@gmail.com> added the comment:

This looks like a dupe, or at least quite closely related to 
https://bugs.python.org/issue26259. If the PR resolves both issues, that one 
should be closed too.

--
nosy: +cjrh

Python tracker <https://bugs.python.org/issue31620>



[issue30487] DOC: automatically create a venv and install Sphinx when running make

2017-05-26 Thread Caleb Hattingh

New submission from Caleb Hattingh:

Under guidance from zware during PyCon sprints, I've changed the Doc/ Makefile 
to automatically create a virtual environment and install Sphinx, all as part 
of the `make html` command.

--
assignee: docs@python
components: Documentation
messages: 294556
nosy: cjrh, docs@python, willingc, zach.ware
priority: normal
pull_requests: 1909
severity: normal
status: open
title: DOC: automatically create a venv and install Sphinx when running make
versions: Python 3.7

Python tracker <http://bugs.python.org/issue30487>



[issue30433] Devguide lacks instructions for building docs

2017-05-26 Thread Caleb Hattingh

Caleb Hattingh added the comment:

The PR has been merged by Mariatta so I think this can be closed.

--

Python tracker <http://bugs.python.org/issue30433>



[issue30433] Devguide lacks instructions for building docs

2017-05-22 Thread Caleb Hattingh

Caleb Hattingh added the comment:

Oops, sorry!  The PR link was wrong because the tracker auto-assumes the main 
cpython repo, but my PR is in the devguide repo. This is the URL for the PR:

https://github.com/python/devguide/pull/206

--

Python tracker <http://bugs.python.org/issue30433>



[issue30433] Devguide lacks instructions for building docs

2017-05-22 Thread Caleb Hattingh

Changes by Caleb Hattingh <caleb.hatti...@gmail.com>:


--
pull_requests:  -1812

Python tracker <http://bugs.python.org/issue30433>



[issue30433] Devguide lacks instructions for building docs

2017-05-22 Thread Caleb Hattingh

New submission from Caleb Hattingh:

The official devguide at https://github.com/python/devguide does not include 
instructions on exactly how to build the docs!  If, after cloning, you simply 
type `make`, you get some helpful output:

$ make
Please use `make <target>' where <target> is one of
  html       to make standalone HTML files
  dirhtml    to make HTML files named index.html in directories
  singlehtml to make a single large HTML file
  pickle     to make pickle files
  json       to make JSON files
  htmlhelp   to make HTML files and a HTML help project
  qthelp     to make HTML files and a qthelp project
  devhelp    to make HTML files and a Devhelp project
  epub       to make an epub
  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
  latexpdf   to make LaTeX files and run them through pdflatex
  text       to make text files
  man        to make manual pages
  changes    to make an overview of all changed/added/deprecated items
  linkcheck  to check all external links for integrity
  doctest    to run all doctests embedded in the documentation (if enabled)
  check      to run a check for frequent markup errors

However, in order to build, say, HTML, you need to have sphinx installed in 
your environment.  I would like to add a `requirements.txt` file that will 
specify which dependencies must be installed (into a virtualenv, probably), in 
order to successfully build the documentation.  In the GitHub PR, I have also 
added a BUILDING.rst that explains how to install the dependencies for building 
the documentation.

--
assignee: docs@python
components: Documentation
messages: 294170
nosy: cjrh, docs@python
priority: normal
pull_requests: 1812
severity: normal
status: open
title: Devguide lacks instructions for building docs
versions: Python 3.7

Python tracker <http://bugs.python.org/issue30433>



[issue15663] Investigate providing Tcl/Tk 8.6 with OS X installers

2016-09-01 Thread Caleb Hattingh

Changes by Caleb Hattingh <caleb.hatti...@gmail.com>:


--
nosy: +cjrh

Python tracker <http://bugs.python.org/issue15663>



[issue12294] multiprocessing.Pool: Need a way to find out if work are finished.

2016-08-19 Thread Caleb Hattingh

Changes by Caleb Hattingh <caleb.hatti...@gmail.com>:


--
nosy: +cjrh

Python tracker <http://bugs.python.org/issue12294>



[issue12982] Document that importing .pyo files needs python -O

2016-08-19 Thread Caleb Hattingh

Caleb Hattingh added the comment:

Presumably PEP488 (and the 4 years of inactivity) means that this issue could 
be closed?

--
nosy: +cjrh

Python tracker <http://bugs.python.org/issue12982>



[issue11602] python-config code should be in sysconfig

2016-08-19 Thread Caleb Hattingh

Changes by Caleb Hattingh <caleb.hatti...@gmail.com>:


--
nosy: +cjrh

Python tracker <http://bugs.python.org/issue11602>



[issue25572] _ssl doesn't build on OSX 10.11

2016-06-23 Thread Caleb Hattingh

Caleb Hattingh added the comment:

I struggled with this issue, and eventually found the recommendations about 
linking with homebrew's OpenSSL on StackOverflow or similar, and then only 
later found this issue here (and with it the link to the devguide); but the 
*first* places I looked were the README in the source root, and then the README 
in the Mac/ directory. That may however just be ignorance on my part of where I 
should have been looking. Yet another reminder that I need to become much more 
familiar with the devguide.

The README only mentions the devguide in the context of contributing, but not 
that it will contain further information required for building. Under "Build 
Instructions", the README says:

***

Build Instructions
------------------

On Unix, Linux, BSD, OSX, and Cygwin:

./configure
make
make test
sudo make install

This will install Python as python3.

You can pass many options to the configure script; run "./configure --help" to 
find out more.  On OSX and Cygwin, the executable is called python.exe;
elsewhere it's just python.

On Mac OS X, if you have configured Python with --enable-framework, you should 
use "make frameworkinstall" to do the installation.  Note that this installs 
the Python executable in a place that is not normally on your PATH, you may 
want to set up a symlink in /usr/local/bin.

***

It might be helpful to add to the README (in the "Build Instructions" section): 

"The devguide may include further information about specific build dependencies 
for your platform here: 
https://docs.python.org/devguide/setup.html#build-dependencies"

--

Python tracker <http://bugs.python.org/issue25572>



[issue25572] _ssl doesn't build on OSX 10.11

2016-06-15 Thread Caleb Hattingh

Changes by Caleb Hattingh <caleb.hatti...@gmail.com>:


--
nosy: +cjrh

Python tracker <http://bugs.python.org/issue25572>