This generally looks fine.
A couple of thoughts from reading through it:

a.) I was able to use the metadata crawler code to crawl the metadata service.
    code: http://paste.ubuntu.com/8404509/
    output: http://paste.ubuntu.com/8404520/
    I kind of like the idea of re-using that, since DigitalOcean went through
the exercise of creating an indexed metadata service.

  What do you think about that?
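
  For reference, roughly the shape of crawler I have in mind (just a sketch,
  not the code from the paste above; the base URL and names here are only
  illustrative):

    # walk an indexed metadata service like DigitalOcean's: directory
    # entries in the index end with "/", everything else is a leaf value.
    import urllib.request

    MD_BASE = "http://169.254.169.254/metadata/v1/"

    def crawl(url=MD_BASE):
        index = urllib.request.urlopen(url, timeout=2).read().decode()
        data = {}
        for name in index.splitlines():
            if not name:
                continue
            if name.endswith("/"):
                # recurse into sub-directories of the index
                data[name.rstrip("/")] = crawl(url + name)
            else:
                data[name] = urllib.request.urlopen(
                    url + name, timeout=2).read().decode()
        return data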

b.) Generally we want to have a single GET attempt determine presence of the
    datasource.

c.) Please also add unit tests and documentation under doc/.
d.) I would prefer it if the datasource looked for one URL, with no timeout/retry
   on that, and then, only if successful there, retried on the subsequent items.
   As it is right now, I think that if this datasource runs somewhere else, there
   will be 3 retries with 1-second sleeps, and then a WARN message.
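
   Something along these lines is what I have in mind for (b) and (d). This is a
   stdlib-only sketch with illustrative names, not how the real datasource would
   be wired up (that would go through cloud-init's existing url helpers):

    import time
    import urllib.request

    MD_BASE = "http://169.254.169.254/metadata/v1/"

    def metadata_present(url=MD_BASE, timeout=2):
        # single GET, no retries: cheaply decide whether this even looks
        # like a DigitalOcean instance before doing anything else
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except OSError:
            return False

    def read_md_key(name, retries=3, sec_between=1, timeout=2):
        # only once presence is established do we retry individual keys
        for attempt in range(retries + 1):
            try:
                return urllib.request.urlopen(
                    MD_BASE + name, timeout=timeout).read().decode()
            except OSError:
                if attempt == retries:
                    raise
                time.sleep(sec_between)

   That way a system that is not on DigitalOcean pays for at most one quick GET
   instead of the retry/sleep/WARN cycle.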

Please don't let any of the above discourage you. Generally this looks great,
and thanks.


-- 
https://code.launchpad.net/~nshrader/cloud-init/digitalocean-datasource/+merge/238590
Your team cloud init development team is requested to review the proposed merge 
of lp:~nshrader/cloud-init/digitalocean-datasource into lp:cloud-init.
