On 7/24/2010 10:08 AM, Guido van Rossum wrote:

- Commit privileges: Maybe we've been too careful with only giving
commit privileges to experienced and trusted new developers. I
spoke to Ezio Melotti and from his experience with getting commit
privileges, it seems to be a case of "the lion is much more afraid of
you than you are afraid of the lion". I.e. having got privileges he
was very concerned about doing something wrong, worried about the
complexity of SVN, and so on. Since we've got lots of people watching
the commit stream, I think that there really shouldn't need to be a
worry at all about a new committer doing something malicious, and
there shouldn't be much worry about honest beginners' mistakes either
-- the main worry remains that new committers don't use their
privileges enough.

My initial inclination is to start with 1- or 2-line patches that I am 99.99% certain are correct. But it has occurred to me that it might be better for Python if I were willing to take a greater than 1/10000 chance of making a mistake. But how much greater? What error rate do *you* consider acceptable?

- Concurrency and parallelism: Russell Winder and Sarah Mount pushed
the idea of CSP
(http://en.wikipedia.org/wiki/Communicating_sequential_processes) in
several talks at the conference. They (at least Russell) emphasized
the difference between concurrency (interleaved event streams)

This improves perceived, and maybe actual, responsiveness to user input.

and parallelism (using many processors to speed things up).

This reduces total time.

Their
prediction is that as machines with many processing cores become more
prevalent, the relevant architecture will change from cores sharing a
single coherent memory (the model on which threads are based) to one
where each core has a limited amount of private memory, and
communication is done via message passing between the cores.

I take this as a prediction that current prototypes, if not current products, will be both technically and commercially successful. My impression is that the odds are enough better than 50/50 to be worth taking into account. It does not seem like much of a leap from private caches that write through to common memory to private memory that is not written through, especially on 64-bit machines with memory space to spare.
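
To make the contrast with threads concrete, here is a rough sketch of that style as one can already approximate it today with multiprocessing: each worker runs in its own process with private memory, and the only communication is message passing over queues (the channels of CSP). The worker function and the numbers are made up for illustration.

from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Loop over incoming messages; all state here is private to the process.
    while True:
        item = inbox.get()
        if item is None:          # sentinel: no more work
            break
        outbox.put(item * item)   # send the result back as a message

if __name__ == '__main__':
    inbox, outbox = Queue(), Queue()
    workers = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
    for p in workers:
        p.start()
    for n in range(20):
        inbox.put(n)
    for p in workers:
        inbox.put(None)           # one sentinel per worker
    results = [outbox.get() for _ in range(20)]
    for p in workers:
        p.join()
    print(sorted(results))

Whether the queues are backed by pipes on one machine or by links between cores, the program text stays the same, which I take to be the point the CSP people are making.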

- After seeing Raymond's talk about monocle (search for it on PyPI) I
am getting excited again about PEP 380 (yield from, return values from
generators). Having read the PEP on the plane back home I didn't see
anything wrong with it, so it could just be accepted in its current
form. Implementation will still have to wait for Python 3.3 because of
the moratorium. (Although I wouldn't mind making an exception to get
it into 3.2.)
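
For anyone who has not read the PEP, here is a minimal sketch of what it proposes: a delegating generator that binds the return value of a subgenerator. The averager example is my own illustration, and of course it will only run once 'yield from' is actually implemented.

def averager():
    # Subgenerator: accumulates values sent to it and *returns* the average.
    total, count = 0, 0
    while True:
        value = yield
        if value is None:       # caller signals end of data
            break
        total += value
        count += 1
    return total / count        # PEP 380: return value from a generator

def grouper(results, key):
    # Delegating generator: 'yield from' forwards send()/throw() to the
    # subgenerator and evaluates to its return value.
    results[key] = yield from averager()

results = {}
g = grouper(results, 'sample')
next(g)                         # prime the delegating generator
for v in [10, 20, 30]:
    g.send(v)
try:
    g.send(None)                # ends averager, hence grouper as well
except StopIteration:
    pass
print(results)                  # {'sample': 20.0}

Today the delegation line has to be spelled as an explicit loop around next()/send()/throw(), which is exactly the boilerplate the PEP removes.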

While initially -0, I now think the moratorium was a good idea. It seems to be successful at letting, and even encouraging, people to target 3.2 by working with 3.1. A big exception like this would probably annoy lots of people who had *their* 'equally good' ideas put off, and it might annoy alternative implementors counting on core 3.2 staying as is. So the only exception I would make would be one with a really good technical reason, like making Python work better on multi-core processors.

- This made me think of how the PEP process should evolve so as to not
require my personal approval for every PEP. I think the model for
future PEPs should be the one we used for PEP 3148 (futures, which was
just approved by Jesse):

+1

--
Terry Jan Reedy
