On 10/23/14 09:41, Alexis Métaireau wrote:

Specifically, we would like to know whether it would make sense, from a product perspective, to drop the data we have in Redis right now rather than migrating it from Redis to MySQL. In other words, the data generated for Beta / Nightly would stop being available at some point in time: call URLs would stop working, and Hawk sessions would no longer be valid.

On the client, how are we handling this? Would the Hawk session be re-created in case of a 401?

If this data loss is acceptable, I would like a firm date by which it can happen.

I've started a wiki page to try to summarize what we're trying to do and how; it's at https://wiki.mozilla.org/CloudServices/Loop/MySQL

I'm worried about the effects of dumping service data.

We're seeing more than 10,000 people register for the service on any given day (and that's with 90% of Beta users unable to use it! It may well be in the hundreds of thousands by early November), and we regularly see more than 1,000 calls in a day (again, this is historical data, from when the throttle was set to 10%). This is a large enough scale that I would hesitate to do a complete drop of the database.

In particular, what I dislike most is that you're going to introduce a moment in time where someone generates a URL and then it goes away almost immediately.

So I think it would be good to have a migration plan in place here, at least for some of the data:

 * Recently used HAWK tokens (note that this is recently USED, not
   recently GENERATED -- if you're not tracking last login time, we
   probably need to copy everything)

 * Recently generated call tokens; we're probably going back only two
   weeks (which is what I think Tarek suggested elsewhere).
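If you do go with a one-shot copy, the two-week cutoff for call tokens could be applied at dump time. A minimal sketch, assuming each record carries a creation timestamp (the `created_at` field name is illustrative, not the Loop server's actual schema):

```python
import time

TWO_WEEKS = 14 * 24 * 3600  # seconds

def fresh_call_tokens(records, now=None):
    """Keep only call tokens generated within the last two weeks.

    `records` is an iterable of dicts with a hypothetical
    'created_at' Unix timestamp; anything older is dropped
    rather than migrated.
    """
    now = time.time() if now is None else now
    cutoff = now - TWO_WEEKS
    return [r for r in records if r["created_at"] >= cutoff]
```

The same filter, keyed on a last-used timestamp instead, would cover the HAWK-token case above, but only if last login time is actually being tracked.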

I will note that you could limit downtime during migration (and not have to worry about filtering existing data by freshness) by glassing the Redis database (freezing it read-only) and switching over to the SQL instance. Whenever you get a query miss in the SQL database, you read through to the glassed Redis instance to see if there's a match. If so, copy it to the SQL database and proceed as normal. After some migration period, turn down the Redis instance.

This is conceptually very easy to implement, saves the trouble of taking the whole thing offline while copying data over, preserves the utility of all data still in use (while allowing you to abandon old cruft), and avoids surprising service behavior.
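A minimal sketch of that read-through fallback, using plain dict-like stores as stand-ins (the `sql` / `redis` names and `get`/`set` interface are illustrative assumptions, not the Loop server's actual storage API):

```python
class ReadThroughStore:
    """Serve reads from SQL, falling back to a frozen Redis snapshot.

    On a SQL miss, look in the read-only ("glassed") Redis instance;
    if the key exists there, copy it into SQL so subsequent reads hit
    SQL directly. Once traffic stops producing Redis hits, the legacy
    instance can be turned down and this fallback removed.
    """

    def __init__(self, sql, redis):
        self.sql = sql      # primary store (dict-like in this sketch)
        self.redis = redis  # frozen legacy store (dict-like, read-only)

    def get(self, key):
        value = self.sql.get(key)
        if value is not None:
            return value
        value = self.redis.get(key)
        if value is not None:
            # Lazily migrate: copy the record forward on first use.
            self.sql[key] = value
        return value

    def set(self, key, value):
        # All new writes go to SQL only; Redis is never written.
        self.sql[key] = value
```

Data nobody ever touches during the migration window simply never gets copied, which is exactly the "abandon old cruft" property described above.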

--
Adam Roach
Principal Platform Engineer
[email protected]
+1 650 903 0800 x863
_______________________________________________
dev-media mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-media
