Author: Philip Jenvey <[email protected]>
Branch: extradoc
Changeset: r4940:065d72cf8cdf
Date: 2013-01-09 09:45 -0800
http://bitbucket.org/pypy/extradoc/changeset/065d72cf8cdf/
Log:	merge upstream

diff --git a/blog/draft/numpy-internship-2.rst b/blog/draft/numpy-internship-2.rst
new file mode 100644
--- /dev/null
+++ b/blog/draft/numpy-internship-2.rst
@@ -0,0 +1,28 @@
+
+Hello everyone
+
+I would like to advertise a PyPy-related summer internship at
+the National Center for Atmospheric Research, which is located in lovely
+Boulder, Colorado. As last year, the mentor will be Davide del Vento,
+with my possible support on the PyPy side.
+
+The full details of the application can be found in
+`the internship description`_; make sure you read `the requirements`_
+first. Important requirements:
+
+* Must currently be enrolled in a United States university.
+
+* Only students authorized to work for any employer in the United
+  States will be considered for the SIParCS program.
+
+* Must be a graduate or undergraduate who has completed their sophomore year.
+
+If you happen to fulfill the requirements, this sounds to me like
+a great opportunity to spend a summer at NCAR in Boulder hacking on atmospheric
+models using PyPy.
+
+.. _`the internship description`: http://cisl.catsone.com/careers/index.php?m=portal&a=details&jobOrderID=1694159
+.. _`the requirements`: https://www2.cisl.ucar.edu/siparcs
+
+Cheers,
+fijal

diff --git a/blog/draft/numpy-status-update-6.rst b/blog/draft/numpy-status-update-6.rst
new file mode 100644
--- /dev/null
+++ b/blog/draft/numpy-status-update-6.rst
@@ -0,0 +1,29 @@
+NumPy status update #6
+----------------------
+
+Hello.
+
+This is an update on the last two months of activity on the NumPyPy project.
+
+First the update:
+
+* **dtype support** - NumPy on PyPy now supports all the numeric dtypes in numpy,
+  including non-native storage formats, longdouble, clongdouble and friends.
+
+* **missing ndarray attributes** - progress has been made toward supporting the
+  attributes on ndarrays. We are progressing alphabetically, and have made it to d.
+
+* **pickling support for numarray** - hasn't started yet, but next on the list
+
+* There has been some work on exposing FFT routines into numpypy.
+
+More importantly, we're closing in on being able to run the pure-python part of
+numpy without modifications. This does not yet get us close to passing all
+the tests, but it's a good start.
+
+The most important part is the funding. While we managed to raise a significant
+amount of money in donations, we have only managed to spend around $10 000 of it
+so far. XXX
+
+Cheers,
+Matti Picus, Maciej Fijalkowski

diff --git a/talk/stm2012/stmimpl.rst b/talk/stm2012/stmimpl.rst
--- a/talk/stm2012/stmimpl.rst
+++ b/talk/stm2012/stmimpl.rst
@@ -149,6 +149,8 @@
 ``list_of_read_objects`` is a set of all global objects read from, in the
 revision that was used for reading.  It is actually implemented as a
 list, but the order or repetition of elements in the list is irrelevant.
+This list includes all objects that are keys in ``global_to_local``: we
+don't have the notion of a "write-only object".
 
 ``recent_reads_cache`` is a fixed-size cache that remembers recent
 additions to the preceding list, in order to avoid inserting too much
@@ -233,7 +235,7 @@
         if v > start_time:             # object too recent?
             if V >= LOCKED:            # object actually locked?
                 goto retry             # spin-loop to start of func
-            ValidateDuringTransaction()   # try to move start_time forward
+            ValidateNow()                 # try to move start_time forward
             goto retry                 # restart searching from R
     PossiblyUpdateChain(G, R, ...)     # see below
     return R
@@ -292,6 +294,7 @@
     L->h_revision = R        # back-reference to the original
     L->objectbody... = R->objectbody...
     global_to_local[R] = L
+    list_of_read_objects.append(R)
     return L
 
 def LocalizeReadReady(R):
@@ -443,12 +446,12 @@
 Validation
 ------------------------------------
 
-``ValidateDuringTransaction`` is called during a transaction to update
-``start_time``.  It makes sure that none of the read objects have been
-modified since ``start_time``.
If one of these objects is modified by
-another commit in parallel, then we want this transaction to eventually
-fail.  More precisely, it will fail the next time one of the
-``ValidateDuring*`` functions is called.
+``ValidateDuringTransaction`` is called during a transaction just after
+``start_time`` has been updated.  It makes sure that none of the read
+objects have been modified since ``start_time``.  If one of these
+objects is modified by another commit in parallel, then we want this
+transaction to eventually fail.  More precisely, it will fail the next
+time ``ValidateDuringTransaction`` is called.
 
 Note a subtle point: if an object is currently locked, we have to wait
 until it gets unlocked, because it might turn out to point to a more
@@ -456,30 +459,30 @@
 
 Here is ``ValidateDuringTransaction``::
 
-    def ValidateDuringTransaction():
-        start_time = GetGlobalCurTime()   # copy from the global time
+    def ValidateDuringTransaction(during_commit):
         for R in list_of_read_objects:
             v = R->h_revision
             if not (v & 1):           # "is a pointer", i.e.
-                AbortTransaction()    #   "has a more recent revision"
+                return False          #   "has a more recent revision"
             if v >= LOCKED:           # locked
-                spin loop retry       #   jump back to the "v = ..." line
+                if not during_commit:
+                    assert v != my_lock   # we don't hold any lock
+                    spin loop retry       # jump back to the "v = ..." line
+                else:
+                    if v != my_lock:      # not locked by me: conflict
+                        return False
+        return True
 
-The last detection for inconsistency is during commit, when
-``ValidateDuringCommit`` is called.  It is a slightly more complex
-version than ``ValidateDuringTransaction`` because it has to handle
-"locks" correctly.
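The new ``ValidateDuringTransaction(during_commit)`` logic above can be sketched in ordinary Python. This is a toy, single-threaded model for illustration only: the ``GlobalObject`` class, the concrete ``LOCKED`` value, and returning ``False`` instead of spin-waiting outside commit are assumptions of this sketch, not the actual implementation.

```python
# Toy model of the validation loop. Assumed conventions (simplified from
# stmimpl.rst): h_revision is a plain integer, odd values are timestamps,
# even values stand for "pointer to a newer revision", and values >= LOCKED
# mean the object is currently locked.

LOCKED = 1 << 40          # arbitrary threshold chosen for this sketch

class GlobalObject:
    def __init__(self, h_revision):
        self.h_revision = h_revision

def validate(read_objects, my_lock, during_commit):
    """Return False on conflict, True if all read objects are unchanged.

    Outside commit we hold no locks, so a locked object is reported as a
    conflict here instead of spin-waiting as the real pseudo-code does.
    """
    for r in read_objects:
        v = r.h_revision
        if not (v & 1):              # even: a newer revision exists
            return False
        if v >= LOCKED:              # object is locked
            if not during_commit:
                return False         # simplification: no spin loop here
            if v != my_lock:         # locked by someone else: conflict
                return False
    return True
```

For example, an object whose ``h_revision`` is an even value (a pointer to a newer revision) makes validation fail, while during commit an object locked with our own ``my_lock`` value still validates.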
It also returns a True/False result instead of
-aborting::
+    def ValidateNow():
+        start_time = GetGlobalCurTime()       # copy from the global time
+        if not ValidateDuringTransaction(0):  # do validation
+            AbortTransaction()                # if it fails, abort
 
-    def ValidateDuringCommit():
-        for R in list_of_read_objects:
-            v = R->h_revision
-            if not (v & 1):           # "is a pointer", i.e.
-                return False          #   "has a more recent revision"
-            if v >= LOCKED:           # locked
-                if v != my_lock:      #   and not by me
-                    return False
-        return True
+Checking for ``my_lock`` is only useful when ``ValidateDuringTransaction``
+is called during commit, which is when we actually hold locks.  In that
+case, detecting other already-locked objects causes a conflict.  Note that
+we should in general never spin-loop during commit; other threads might be
+blocked by the fact that we already own locks, causing a deadlock.
 
 
 Local garbage collection
@@ -583,8 +586,8 @@
     while not CMPXCHG(&global_cur_time, cur_time, cur_time + 2):
         cur_time = global_cur_time    # try again
     if cur_time != start_time:
-        if not ValidateDuringCommit():        # only call it if needed
-            AbortTransaction()                # last abort point
+        if not ValidateDuringTransaction(1):  # only call it if needed
+            AbortTransaction()                # last abort point
     UpdateChainHeads(cur_time)
 
 Note the general style of usage of CMPXCHG: we first read normally the
@@ -627,7 +630,7 @@
 
 After this, ``CommitTransaction`` increases the global time and then
-calls ``ValidateDuringCommit`` defined above.  It may still abort.  In
+calls ``ValidateDuringTransaction`` defined above.  It may still abort.  In
 case ``AbortTransaction`` is called, it must release the locks.
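The CMPXCHG pattern used by ``CommitTransaction`` in the hunks above (read ``global_cur_time``, try to atomically swap in ``cur_time + 2``, reread and retry on failure) can be sketched as follows. Python has no CMPXCHG instruction, so a lock-protected ``compare_and_swap`` stands in for it; ``GlobalTime`` and ``bump_global_time`` are names invented for this illustration.

```python
# Sketch of the CMPXCHG retry loop from CommitTransaction, under the
# assumption that global time starts odd and advances in steps of 2.

import threading

class GlobalTime:
    def __init__(self, value=1):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Stand-in for the hardware CMPXCHG instruction: atomically
        # replace the value only if it still equals `expected`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def bump_global_time(gt):
    """Reserve the next time slot, as CommitTransaction does."""
    cur_time = gt.read()
    while not gt.compare_and_swap(cur_time, cur_time + 2):
        cur_time = gt.read()       # lost the race: reread and try again
    return cur_time                # the value we successfully replaced
```

The returned ``cur_time`` is then compared against ``start_time``; only if they differ does the (renamed) validation function need to run, which is the "only call it if needed" comment in the diff.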
 This is done by writing back the original timestamps in the
 ``h_revision`` fields::
 
@@ -707,7 +710,7 @@
         cur_time = global_cur_time   # try again
     if start_time != cur_time:
         start_time = cur_time
-        if not ValidateDuringCommit():
+        if not ValidateDuringTransaction(0):
             global_cur_time = cur_time   # must restore the value
             inevitable_mutex.release()
             AbortTransaction()
@@ -741,8 +744,8 @@
     while not CMPXCHG(&global_cur_time, cur_time, cur_time + 2):
         cur_time = GetGlobalCurTimeInCommit()   # try again
     if cur_time != start_time:
-        if not ValidateDuringCommit():        # only call it if needed
-            AbortTransaction()                # last abort point
+        if not ValidateDuringTransaction(1):  # only call it if needed
+            AbortTransaction()                # last abort point
     UpdateChainHeads(cur_time)
 
 def GetGlobalCurTimeInCommit():

_______________________________________________
pypy-commit mailing list
[email protected]
http://mail.python.org/mailman/listinfo/pypy-commit
