Author: Armin Rigo <[email protected]>
Branch:
Changeset: r85:6187b6951c91
Date: 2013-06-09 19:02 +0200
http://bitbucket.org/pypy/stmgc/changeset/6187b6951c91/
Log: Update the document with a simpler version which is a better
starting point.
diff --git a/c4/doc-objects.txt b/c4/doc-objects.txt
--- a/c4/doc-objects.txt
+++ b/c4/doc-objects.txt
@@ -27,17 +27,17 @@
Private freshly created
\ Private, with backup
- \ ^ | ^
- \ / commit | |
- commit \ modify / | |
- \ / | | modify
- V / | |
- Protected, no backup V |
- ^ ^ Protected, with backup
- / | gc |
- commit / `----------------'
- /
- /
+ \ ^ /
+ commit \ / / commit
+ \ modify / /
+ \ / /
+ V / V
+ Protected
+ ^
+ /
+ commit /
+ /
+ /
Private copy of
a public obj
@@ -48,32 +48,27 @@
Private objects:
- freshly created PRN
-- converted from a protected obj PRN
+- converted from a protected obj ptr to backup
- private copy of a public obj PRN
Protected objects:
- converted from fresh private obj (old PRN)
-- converted from a private obj with backup ptr to backup
-- converted from a private obj from public GT
+- converted from a private obj GT
+
+Backup copy:
- backup copy of a private obj original h_revision
-- backup copy still attached to a protected GT
-- original obj after GC killed the backup GT
Public objects:
- prebuilt object, never modified 1
- other public object, never modified GT
-- outdated, has a protected copy HANDLE to prot/priv copy
-- outdated, target stolen ptr to a more recent public copy
+- outdated ptr to a more recent public copy
-Public stubs:
-- from stealing: like outdated public objects
-- from major GC: like outdated public objects with target stolen
+Public stubs (have also a ref to one thread):
+- from stealing ptr (maybe to priv/prot) | 2
PRN = Private revision number (negative odd number)
GT = Global time (positive odd number)
-HANDLE = Reference to a prot/priv copy and its thread
- (positive even number, such that: handle % 4 == 2)
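
The number encodings above (PRN odd and negative, GT odd and positive, a
stub's h_revision being a pointer with 2 or-ed in) can be sketched as small
predicates.  This is an illustrative model, not the stmgc source:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef intptr_t revision_t;

/* PRN: private revision number, odd and negative. */
static bool is_prn(revision_t v) { return (v & 1) && v < 0; }

/* GT: global time, odd and positive. */
static bool is_gt(revision_t v) { return (v & 1) && v > 0; }

/* A plain pointer stored in h_revision: aligned, low two bits are 0. */
static bool is_plain_ptr(revision_t v) { return (v & 3) == 0; }

/* A stub's h_revision: a pointer with 2 or-ed in, so v % 4 == 2. */
static bool is_stub_rev(revision_t v) { return (v & 3) == 2; }
```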
@@ -83,54 +78,67 @@
- the PRN (private revision number): odd, negative, changes for every
transaction that commits
-- list active_backup_copies = [(private, backup copy)]
+- list private_from_protected = [private obj converted from protected]
- dict public_to_private = {public obj: private copy}
- list read_set containing the objects in the read set, with possibly
some duplicates (but hopefully not too many)
-- list stolen_objects = [(priv/prot object, public copy)]
+- collection_lock: a thread-local lock that is acquired to change
+ the status of private/protected objects
-Kind of object copy distinguishing feature
+
+Kind of object copy distinguishing feature
-------------------------------------------------------------------
-Any private object h_revision == PRN
-Private with a backup in active_backup_copies
-Backup copy GCFLAG_BACKUP_COPY
-Any public object GCFLAG_PUBLIC
-Any protected object h_revision != PRN && !GCFLAG_PUBLIC
-Stubs GCFLAG_STUB
+Any private object h_revision == PRN or GCFLAG_PRIVATE_FROM_PROTECTED
+Private with a backup GCFLAG_PRIVATE_FROM_PROTECTED
+Backup copy GCFLAG_BACKUP_COPY (flag for debugging)
+Any public object GCFLAG_PUBLIC
+Stubs GCFLAG_STUB (flag for debugging)
A public object that might \
-be key in public_to_private has additionally GCFLAG_PUBLIC_TO_PRIVATE
+be key in public_to_private has additionally GCFLAG_PUBLIC_TO_PRIVATE
Read barrier
-----------------------------------------
-Inline check: if P in read_barrier_cache, we don't call the slow path.
+Inline check: if h_revision == PRN or if P in read_barrier_cache,
+ we don't call the slow path.
Slow path:
- if h_revision == PRN, just add P to read_barrier_cache and return
+ if GCFLAG_PRIVATE_FROM_PROTECTED:
+
+ check P->h_revision->h_revision: if a pointer, then it means
+ the backup copy has been stolen into a public object and then
+ modified by some other thread. Abort.
+
+ add P to 'read_barrier_cache' and return
if GCFLAG_PUBLIC:
follow the chained list of h_revision's as long as they are
regular pointers
- if it ends with an odd revision number, check that it's older
- than start_time; extend the start timestamp if not
+ if it ends with h_revision % 4 == 2:
+ then we're in a stub
- if it ends with a handle (L, Thread):
-
- if Thread is the current thread: set P = L
+ if Thread is the current thread: follow P = h_revision - 2
else: do stealing and restart the read barrier
- if we land on a P in read_barrier_cache: return
+ if we land on a P in read_barrier_cache:
+ return P
+
+ if P has GCFLAG_PUBLIC_TO_PRIVATE and is in 'public_to_private':
+ return the private object
+
+ if it ends with an odd revision number, check that it's older
+ than start_time; extend the start timestamp if not
add P to 'read_set'
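
The inline check above can be sketched as follows; the object layout and the
one-entry cache are a toy model, not the real stmgc structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef intptr_t revision_t;

typedef struct toy_obj {
    revision_t h_tid;        /* GC flags */
    revision_t h_revision;   /* PRN, GT, or a pointer */
} toy_obj;

static revision_t stm_private_rev_num = -3;   /* this thread's PRN */
static toy_obj *read_barrier_cache;           /* toy one-entry cache */

/* True when the read-barrier slow path can be skipped entirely. */
static bool read_barrier_fast(toy_obj *p)
{
    return p->h_revision == stm_private_rev_num ||
           p == read_barrier_cache;
}
```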
@@ -143,29 +151,26 @@
private objects which might be a majority, vs. making the inline check
larger).
-Handles are stored for example in a global list, and the actual handle
-encodes an index in the list. Every entry in the list is a pointer to a
-prot/priv object --- excepted once every N positions, where it is a
-thread descriptor giving the thread to which all N-1 following pointers
-belong. The pair (L, Thread) is thus `(list[H], list[H rounded down to
-a multiple of N])`.
+Stub objects are public, always outdated (with h_revision a pointer) and
+contain only a header; additionally they have a thread descriptor that
+tells which thread the h_revision object belongs to as a protected/private
+object.
-Stealing of an object copy L is done with the "collection lock" of
-the target Thread. The target would also acquire its own lock in
-when doing some operations, like a minor collection, which can't
-occur in parallel with stealing.
+Stealing of an object copy L is done with the "collection lock" of the
+target Thread.  The target would also acquire its own lock when doing
+some operations, like a minor collection or a write barrier on a
+protected object, which can't occur in parallel with stealing.
Once we have the lock, stealing is:
if the situation changed while we were waiting for the lock, return
- if L has got a backup copy, turn it public;
- else L must be protected, and we make a public copy of it
+ if L has GCFLAG_PRIVATE_FROM_PROTECTED:
+ set L = L->h_revision (the backup copy)
- update the original P->h_revision to point directly to the new
- public copy
+ change L from protected to public, i.e. add GCFLAG_PUBLIC
- add (P, new public copy) to stolen_objects
+ update the original P->h_revision to point directly to L
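
A minimal sketch of the stealing steps above, assuming the target's
collection_lock is already held and skipping the "situation changed" check;
the flag bit values and object layout are illustrative, not the real ones:

```c
#include <assert.h>
#include <stdint.h>

typedef intptr_t revision_t;

enum {
    GCFLAG_PUBLIC                 = 0x01,   /* illustrative bit values */
    GCFLAG_PRIVATE_FROM_PROTECTED = 0x02,
};

typedef struct steal_obj {
    revision_t h_tid;
    revision_t h_revision;
} steal_obj;

/* 'stub' is a public stub whose h_revision is L | 2 for some
   protected/private object L belonging to the target thread. */
static void steal(steal_obj *stub)
{
    steal_obj *l = (steal_obj *)(stub->h_revision - 2);
    if (l->h_tid & GCFLAG_PRIVATE_FROM_PROTECTED)
        l = (steal_obj *)l->h_revision;     /* take the backup copy */
    l->h_tid |= GCFLAG_PUBLIC;              /* protected -> public */
    stub->h_revision = (revision_t)l;       /* stub now points to L */
}
```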
@@ -174,18 +179,15 @@
The write barrier works for both STM purposes and for GC purposes.
-Inline check: if h_revision == PRN && !GCFLAG_WRITE_BARRIER, we're done.
+Inline check: if h_revision == PRN or GCFLAG_PRIVATE_FROM_PROTECTED, we're
done.
Slow path:
R = read_barrier(P) # always do a full read_barrier first
- if h_revision == PRN:
+ if h_revision == PRN or GCFLAG_PRIVATE_FROM_PROTECTED:
+ return R
- GC only: remove GCFLAG_WRITE_BARRIER, add R to the GC list of
- modified old objects to trace at the next minor collection,
- and return R
-
- elif GCFLAG_PUBLIC:
+ if GCFLAG_PUBLIC:
add the flag GCFLAG_PUBLIC_TO_PRIVATE to R, if needed
@@ -193,19 +195,50 @@
add {R: L} in 'public_to_private'
+ remove R from read_barrier_cache
+
return L
- else: # protected object
+ # else, R is a protected object
+ with collection_lock:
- if h_revision is not a pointer:
+ allocate a backup copy and copy the object into the backup copy
- allocate a backup copy, and attach it to h_revision
+ change R->h_revision to be the backup copy
+
+ set GCFLAG_PRIVATE_FROM_PROTECTED on R
- copy the object into the backup copy
-
- change h_revision to be PRN (i.e. turn private)
-
- if GCFLAG_WRITE_BARRIER: remove it, add R to the GC list of
- modified old objects to trace at the next minor collection
+ add R in 'private_from_protected'
return R
+
+
+
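
The protected-object branch of the slow path above can be sketched as a toy
single-threaded model, without the collection_lock; the names and flag bits
are illustrative, not the stmgc API:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef intptr_t revision_t;

enum {
    WGCFLAG_PRIVATE_FROM_PROTECTED = 0x01,  /* illustrative bit values */
    WGCFLAG_BACKUP_COPY            = 0x02,
};

typedef struct wobj {
    revision_t h_tid;
    revision_t h_revision;
    int payload;                 /* stands in for the object body */
} wobj;

/* Protected -> private-from-protected, as in the slow path above:
   allocate a backup, copy the object into it, link it via h_revision,
   and set the flag. */
static wobj *localize_protected(wobj *r)
{
    wobj *b = malloc(sizeof(*b));
    memcpy(b, r, sizeof(*b));                /* backup keeps old contents */
    b->h_tid |= WGCFLAG_BACKUP_COPY;         /* debugging flag */
    r->h_revision = (revision_t)b;           /* h_revision -> backup copy */
    r->h_tid |= WGCFLAG_PRIVATE_FROM_PROTECTED;
    return r;
}
```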
+Commit-time change of flags
+---------------------------
+
+(This occurs during commit, when we have got the collection_lock.)
+
+public_to_private:
+
+ write GT into the private object
+
+ make a stub with h_revision = private object | 2
+
+    after a CPU write barrier, make the public h_revision point to the stub
+
+private_from_protected:
+
+ get the backup B from P->h_revision
+
+ set P->h_revision to GT
+
+ if B has GCFLAG_PUBLIC: it has been stolen
+
+ if it has been modified: conflict, abort transaction
+
+ B->h_revision = P
+
+ else:
+ possibly free B now, it's not used any more
+
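
The private_from_protected commit steps above could be sketched as follows;
the "has the stolen backup been modified" test is abstracted into a boolean
parameter, and the flag bit value is illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef intptr_t revision_t;

enum { CGCFLAG_PUBLIC = 0x01 };   /* illustrative bit value */

typedef struct cobj {
    revision_t h_tid;
    revision_t h_revision;
} cobj;

/* Returns false when the backup was stolen and then modified:
   that is a conflict and the transaction must abort. */
static bool commit_private_from_protected(cobj *p, revision_t new_gt,
                                          bool backup_modified)
{
    cobj *b = (cobj *)p->h_revision;         /* the backup copy B */
    p->h_revision = new_gt;                  /* P becomes the newest revision */
    if (b->h_tid & CGCFLAG_PUBLIC) {         /* B has been stolen */
        if (backup_modified)
            return false;                    /* conflict: abort transaction */
        b->h_revision = (revision_t)p;       /* outdated B -> newer P */
    }
    else {
        free(b);                             /* B is not used any more */
    }
    return true;
}
```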
diff --git a/c4/et.c b/c4/et.c
--- a/c4/et.c
+++ b/c4/et.c
@@ -289,12 +289,21 @@
return L;
}
+static gcptr LocalizePublic(struct tx_descriptor *d, gcptr R);
+
static gcptr LocalizeProtected(struct tx_descriptor *d, gcptr P)
{
gcptr B;
+ spinlock_acquire(d->public_descriptor->collection_lock, 'L');
+
+ if (P->h_tid & GCFLAG_PUBLIC)
+ {
+ /* became PUBLIC while waiting for the collection_lock */
+ spinlock_release(d->public_descriptor->collection_lock);
+ return LocalizePublic(d, P);
+ }
assert(P->h_revision != stm_private_rev_num);
- assert(!(P->h_tid & GCFLAG_PUBLIC));
assert(!(P->h_tid & GCFLAG_PUBLIC_TO_PRIVATE));
assert(!(P->h_tid & GCFLAG_BACKUP_COPY));
assert(!(P->h_tid & GCFLAG_STUB));
@@ -307,21 +316,17 @@
}
else
{
- size_t size = stmcb_size(P);
B = (gcptr)P->h_revision;
assert(B->h_tid & GCFLAG_BACKUP_COPY);
+ size_t size = stmcb_size(P);
memcpy(B + 1, P + 1, size - sizeof(*B));
}
assert(B->h_tid & GCFLAG_BACKUP_COPY);
- gcptrlist_locked_insert2(&d->public_descriptor->active_backup_copies, P, B,
- &d->public_descriptor->collection_lock);
+ gcptrlist_insert2(&d->public_descriptor->active_backup_copies, P, B);
+ P->h_revision = stm_private_rev_num;
- smp_wmb(); /* guarantees that stm_steal_stub() will see the list
- up to the (P, B) pair in case it goes the path
- h_revision == *foreign_pd->private_revision_ref */
-
- P->h_revision = stm_private_rev_num;
+ spinlock_release(d->public_descriptor->collection_lock);
return P;
}
@@ -368,6 +373,7 @@
struct tx_descriptor *d = thread_descriptor;
assert(d->active >= 1);
+ retry:
P = stm_read_barrier(P);
if (P->h_tid & GCFLAG_PUBLIC)
diff --git a/c4/lists.c b/c4/lists.c
--- a/c4/lists.c
+++ b/c4/lists.c
@@ -171,23 +171,6 @@
gcptrlist->size = i + 2;
}
-void gcptrlist_locked_insert2(struct GcPtrList *gcptrlist, gcptr newitem1,
- gcptr newitem2, revision_t *lock)
-{
- gcptr *items;
- long i = gcptrlist->size;
- if (UNLIKELY((gcptrlist->alloc - i) < 2))
- {
- spinlock_acquire(*lock, 'I');
- _gcptrlist_grow(gcptrlist);
- spinlock_release(*lock);
- }
- items = gcptrlist->items;
- items[i+0] = newitem1;
- items[i+1] = newitem2;
- gcptrlist->size = i + 2;
-}
-
void gcptrlist_insert3(struct GcPtrList *gcptrlist, gcptr newitem1,
gcptr newitem2, gcptr newitem3)
{
diff --git a/c4/lists.h b/c4/lists.h
--- a/c4/lists.h
+++ b/c4/lists.h
@@ -164,9 +164,6 @@
void gcptrlist_merge(struct GcPtrList *, struct GcPtrList *gcptrlist_source);
void gcptrlist_move(struct GcPtrList *, struct GcPtrList *gcptrlist_source);
-void gcptrlist_locked_insert2(struct GcPtrList *gcptrlist, gcptr newitem1,
- gcptr newitem2, revision_t *lock);
-
/************************************************************/
/* The fxcache_xx functions implement a fixed-size set of gcptr's.
diff --git a/c4/steal.c b/c4/steal.c
--- a/c4/steal.c
+++ b/c4/steal.c
@@ -53,33 +53,41 @@
if ((v & 3) != 2)
goto done; /* un-stubbed while we waited for the lock */
- gcptr Q, L = (gcptr)(v - 2);
- revision_t w = ACCESS_ONCE(L->h_revision);
+ gcptr L = (gcptr)(v - 2);
+ revision_t w = L->h_revision;
if (w == *foreign_pd->private_revision_ref) {
/* The stub points to a private object L. Because it cannot point
to "really private" objects, it must mean that L used to be
a protected object, and it has an attached backed copy.
XXX find a way to optimize this search, maybe */
- long i;
+ long i, size = foreign_pd->active_backup_copies.size;
gcptr *items = foreign_pd->active_backup_copies.items;
- /* we must find L as the first item of a pair in the list. We
- cannot rely on how big the list is here, but we know that
- it will not be resized while we hold collection_lock. */
- for (i = 0; items[i] != L; i += 2)
- ;
+ for (i = size - 2; ; i -= 2) {
+ assert(i >= 0);
+ if (items[i] == L)
+ break;
+ }
L = items[i + 1];
assert(L->h_tid & GCFLAG_BACKUP_COPY);
+ L->h_tid &= ~GCFLAG_BACKUP_COPY;
}
- /* duplicate L */
- Q = stmgc_duplicate(L); XXX RACE
- Q->h_tid &= ~GCFLAG_BACKUP_COPY;
- Q->h_tid |= GCFLAG_PUBLIC;
- gcptrlist_insert2(&foreign_pd->stolen_objects, L, Q);
+ else if (L->h_tid & GCFLAG_PUBLIC) {
+ /* The stub already points to a public object */
+ goto unstub;
+ }
+ else if (!(w & 1)) {
+ /* The stub points to a protected object L which has a backup
+ copy attached. Forget the backup copy. */
+ w = ((gcptr)w)->h_revision;
+ assert(w & 1);
+ L->h_revision = w;
+ }
+ /* turn L into a public object */
+ L->h_tid |= GCFLAG_PUBLIC;
- smp_wmb();
-
- P->h_revision = (revision_t)Q;
+ unstub:
+ P->h_revision = (revision_t)L;
done:
spinlock_release(foreign_pd->collection_lock);
_______________________________________________
pypy-commit mailing list
[email protected]
http://mail.python.org/mailman/listinfo/pypy-commit