On 23/10/2014 21:36, Adam Roach wrote:
> In particular, what I dislike the most is that you're going to
> introduce a moment in time where someone generates a URL, and then it
> goes away almost immediately.
>
> So I think it would be good to have a migration plan in place here, at
> least for some of the data:
>
> * Recently used HAWK tokens (note that this is recently USED, not
>   recently GENERATED -- if you're not tracking last login time, we
>   probably need to copy everything)
>
> * Recently generated call tokens; we're probably going back only two
>   weeks (which is what I think Tarek suggested elsewhere).

Thanks Adam, that's exactly the kind of feedback I'm looking for.
If we have to migrate this data then let's do it. This mail was mainly
to see if we were putting effort into something that wouldn't impact a
lot of people. My thinking was along the lines of "well, that's Firefox
Beta, so users expect it to fail".

> I will note that you could limit downtime during migration (and not
> have to worry about filtering existing data by freshness) by glassing
> the redis database and switching over to the SQL instance. Whenever
> you get a query miss in the SQL database, you read through to the
> glassed redis instance to see if there's a match. If so, copy it to
> the SQL database and proceed as normal. After some migration period,
> turn down the redis instance.
>
> This is conceptually very easy to implement, saves the trouble of
> taking the whole thing offline for a while while copying data over,
> preserves the utility of all data still in use (while allowing you to
> abandon old cruft), and avoids surprising service behavior.

This is exactly the process we're willing to implement, but wanted to
avoid it if not needed.

— Alexis

_______________________________________________
dev-media mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-media
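For concreteness, the read-through migration Adam describes could be sketched roughly as follows. This is an illustrative sketch only, not code from the Loop server: plain Python dicts stand in for the new SQL store and the frozen ("glassed") redis snapshot, and all names are made up for the example.

```python
# Hypothetical sketch of a read-through migration: the new store is
# authoritative, and misses fall back to a frozen snapshot of the old
# store, copying still-live records forward as they are touched.

class ReadThroughStore:
    def __init__(self, sql, frozen_redis):
        self.sql = sql              # new authoritative store (writable)
        self.frozen = frozen_redis  # old store, read-only during migration

    def get(self, key):
        # 1. Try the new store first.
        if key in self.sql:
            return self.sql[key]
        # 2. On a miss, read through to the glassed redis snapshot.
        if key in self.frozen:
            value = self.frozen[key]
            # 3. Copy the still-in-use record forward, then serve it.
            self.sql[key] = value
            return value
        # 4. Absent from both: the record never existed, or it is old
        #    cruft that gets abandoned when redis is turned down.
        return None

    def set(self, key, value):
        # All new writes go only to the new store.
        self.sql[key] = value


store = ReadThroughStore(sql={}, frozen_redis={"hawk-token-1": "session-a"})
print(store.get("hawk-token-1"))    # served from the snapshot, copied to sql
print("hawk-token-1" in store.sql)  # the record is now migrated
print(store.get("unknown-token"))   # miss in both stores
```

Once query misses against the snapshot become rare, the redis instance can be shut down and the fallback branch removed, which is what makes this scheme avoid a hard offline copy step.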

