. Chengxing
. data: getting the data set for the 2016 summer school
(need Benyang’s help running a script to get the ranges)
may need to bring in more model data (possible data time range
extension)
discussed the size of the source data files (as big as 1 GB)
do we need to split them? decide after testing on Amazon
need to convert some files to CMIP5 format (done, need to check for more)
will announce the availability of the data for 2016
. plot resolution: using Python matplotlib to solve the problem
. updated the conditional sampling service
. added data-saving capability for conditional sampling
. Benyang
. installed new packages for climate science (Anaconda, Ferret, etc.)
. where is the latest entry.py?
. demoed the new “customization” and “interactive map plotting” features
will roll out these features to other services
. Lei
. pushed the latest Docker container from JPL to Docker Hub
so the CMU team (Qihao) can pick it up
. pulled the Docker container to the Amazon Cloud instance
. configured port forwarding and file folder sharing on Amazon
so the Docker container is accessible via SSH, the JPL frontend,
and the JPL service backend
. worked with Qihao so he can get the latest Docker container
from JPL and run the container on Amazon with
additional ports forwarded for CMU services
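The pull/run steps above can be sketched roughly as follows; the image name, tag, port numbers, and shared folder are placeholders for illustration, not the team’s actual values:

```shell
# Sketch only -- image name, tag, ports, and paths are placeholders.

# JPL side: push the latest image to Docker Hub
docker push jplteam/climate-services:latest

# Amazon EC2 instance: pull the image and run it with port forwarding
# (-p host:container) and a shared file folder (-v host:container),
# so the container is reachable via SSH and by the JPL frontend and
# backend, plus an extra port forwarded for the CMU services
docker pull jplteam/climate-services:latest
docker run -d \
  -p 2222:22 \
  -p 8080:8080 \
  -p 9090:9090 \
  -p 7070:7070 \
  -v /data/cmip5:/data/cmip5 \
  jplteam/climate-services:latest
```

This is a deployment fragment (it needs a Docker daemon and the real image to run); the exact ports would match whatever the JPL and CMU services actually listen on.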
. designed with Jia the deployment strategy for the 2016 summer school
. one Docker image with all JPL-CMU frontends and backends deployed
. when the Docker container is up, auto-start the JPL frontend, JPL backend,
and the CMU frontend. This is for our students
. fire up one Amazon instance with the Docker container, and manually
start up the CMU backend (provenance, knowledge graph, social network,
etc.)
This single instance will serve as a “virtual Einstein” for provenance
collection, etc.
. configure entry.py to point to this virtual Einstein
. configure the MySQL db on Einstein to make it the secondary db for the
primary db on virtual Einstein (optional)
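The optional secondary-db step might look like a classic (pre-GTID, MySQL 5.x style) replication setup, sketched below; the host name, replication user, password, and binlog coordinates are all placeholders:

```shell
# Config fragment, placeholders throughout (hosts, user, password,
# log file/position).
# On virtual Einstein (the primary), my.cnf needs binary logging:
#   [mysqld]
#   server-id = 1
#   log-bin   = mysql-bin
# On Einstein (the secondary), use a distinct server-id (e.g. 2), then
# point it at the primary and start replication:
mysql -u root -p -e "
  CHANGE MASTER TO
    MASTER_HOST='virtual-einstein.example.org',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
  START SLAVE;"
```

The real log file and position would come from `SHOW MASTER STATUS` on the primary after a snapshot of the db is loaded on the secondary.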
. working with Qihao on auto-starting the JPL-CMU services inside the Docker
container, and the Docker container inside the Amazon EC2 instance; some
details still to figure out
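One way to wire up that auto-start chain is sketched below; the service paths and image name are assumed for illustration, not the settled design:

```shell
# Sketch only -- service paths and image name are placeholders.

# 1. Inside the image: an entrypoint script that starts the JPL
#    frontend/backend and the CMU frontend when the container comes up
#    (Dockerfile: ENTRYPOINT ["/opt/start_services.sh"]).
cat > start_services.sh <<'EOF'
#!/bin/sh
/opt/jpl/backend/run.sh &
/opt/jpl/frontend/run.sh &
/opt/cmu/frontend/run.sh &
wait   # keep the container alive as long as the services run
EOF
chmod +x start_services.sh

# 2. On the EC2 instance: a restart policy so Docker brings the
#    container back up whenever the instance (and Docker daemon) restarts.
docker run -d --restart unless-stopped jplteam/climate-services:latest
```

The CMU backend services (provenance, knowledge graph, social network, etc.) would stay out of the entrypoint script and be started manually, per the deployment strategy above.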