This is an automated email from the ASF dual-hosted git repository.

lahirujayathilake pushed a commit to branch agent-framewok-refactoring
in repository https://gitbox.apache.org/repos/asf/airavata.git


The following commit(s) were added to refs/heads/agent-framewok-refactoring by this push:
     new 83eb5a3d1c md notebook and cs settings file changes
83eb5a3d1c is described below

commit 83eb5a3d1ccfbdfe3524cead2cbd18d43bb301ea
Author: lahiruj <[email protected]>
AuthorDate: Tue Dec 17 12:36:38 2024 -0500

    md notebook and cs settings file changes
---
 .../jupyterhub/user-container/MD/poc.ipynb         | 236 ++++++++++-----------
 .../jupyterhub/user-container/MD/settings.ini      |  33 +--
 2 files changed, 118 insertions(+), 151 deletions(-)

diff --git a/dev-tools/deployment/jupyterhub/user-container/MD/poc.ipynb b/dev-tools/deployment/jupyterhub/user-container/MD/poc.ipynb
index eda73e05a9..f0e015c53f 100644
--- a/dev-tools/deployment/jupyterhub/user-container/MD/poc.ipynb
+++ b/dev-tools/deployment/jupyterhub/user-container/MD/poc.ipynb
@@ -4,17 +4,17 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Airavata Experiment SDK - Molecular Dynamics Example\n",
+    "# Cybershuttle SDK - Molecular Dynamics\n",
+    "> Define, run, monitor, and analyze molecular dynamics experiments in an HPC-agnostic way.\n",
     "\n",
-    "This SDK allows users to define, plan, and execute molecular dynamics experiments with ease.\n",
-    "Here we demonstrate how to authenticate, set up a NAMD experiment, add replicas, create an execution plan, and monitor the execution."
+    "This notebook shows how users can set up and launch a **NAMD** experiment with replicas, monitor its execution, and run analyses both during and after execution."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Install the required packages\n",
+    "## Installing Required Packages\n",
     "\n",
     "First, install the `airavata-python-sdk-test` package from the pip repository."
    ]
@@ -32,12 +32,12 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Import the Experiments SDK"
+    "## Importing the SDK"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -49,7 +49,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Authenticate for Remote Execution\n",
+    "## Authenticating\n",
     "\n",
     "To authenticate for remote execution, call the `ae.login()` method.\n",
     "This method will prompt you to enter your credentials and authenticate your session."
@@ -57,17 +57,9 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Using saved token\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "ae.login()"
    ]
@@ -76,7 +68,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Once authenticated, the `ae.list_runtimes()` function can be called to list HPC resources that the user can access."
+    "Once authenticated, the `ae.list_runtimes()` function can be called to list HPC resources that the user has access to."
    ]
   },
   {
@@ -86,14 +78,14 @@
    "outputs": [],
    "source": [
     "runtimes = ae.list_runtimes()\n",
-    "display(runtimes)"
+    "ae.display(runtimes)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Upload Experiment Files\n",
+    "## Uploading Experiment Files\n",
     "\n",
     "Drag and drop experiment files onto the workspace that this notebook is run on.\n",
     "\n",
@@ -119,7 +111,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Define a NAMD Experiment\n",
+    "## Defining a NAMD Experiment\n",
     "\n",
     "The `md.NAMD.initialize()` is used to define a NAMD experiment.\n",
     "Here, provide the paths to the `.conf` file, the `.pdb` file, the `.psf` file, any optional files you want to run NAMD on.\n",
@@ -136,7 +128,13 @@
     "    parallelism: Literal['CPU', 'GPU'] = \"CPU\",\n",
     "    num_replicas: int = 1\n",
     ") -> Experiment[ExperimentApp]\n",
-    "```"
+    "```\n",
+    "\n",
+    "To add replica runs, simply call the `exp.add_replica()` function.\n",
+    "You can call the `add_replica()` function as many times as you want replicas.\n",
+    "Any optional resource constraint can be provided here.\n",
+    "\n",
+    "You can also call `ae.display()` to pretty-print the experiment."
    ]
   },
   {
@@ -147,7 +145,7 @@
    "source": [
     "exp = md.NAMD.initialize(\n",
     "    name=\"yasith_namd_experiment\",\n",
-    "    config_file=\"data/pull.conf\",\n",
+    "    config_file=\"data/pull_cpu.conf\",\n",
     "    pdb_file=\"data/structure.pdb\",\n",
     "    psf_file=\"data/structure.psf\",\n",
     "    ffp_files=[\n",
@@ -160,17 +158,20 @@
     "      \"data/b4pull.restart.vel\",\n",
     "      \"data/b4pull.restart.xsc\",\n",
     "    ],\n",
-    "    parallelism=\"GPU\",\n",
-    ")"
+    "    parallelism=\"CPU\",\n",
+    "    num_replicas=1,\n",
+    ")\n",
+    "exp.add_replica(*ae.list_runtimes(cluster=\"login.expanse.sdsc.edu\", category=\"cpu\"))\n",
+    "ae.display(exp)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To add replica runs, simply call the `exp.add_replica()` function.\n",
-    "You can call the `add_replica()` function as many times as you want replicas.\n",
-    "Any optional resource constraint can be provided here."
+    "## Creating an Execution Plan\n",
+    "\n",
+    "Call the `exp.plan()` function to transform the experiment definition + replicas into a stateful execution plan."
    ]
   },
   {
@@ -179,17 +180,17 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "exp.add_replica()"
+    "plan = exp.plan()\n",
+    "ae.display(plan)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Create Execution Plan\n",
+    "## Saving the Plan\n",
     "\n",
-    "Call the `exp.plan()` function to transform the experiment definition + replicas into a stateful execution plan.\n",
-    "This plan can be exported in JSON format and imported back."
+    "A created plan can be saved locally (in JSON) or remotely (in a user-local DB) for later reference."
    ]
   },
   {
@@ -198,16 +199,19 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "plan = exp.plan()  # this will create a plan for the experiment\n",
-    "plan.describe()  # this will describe the plan\n",
-    "plan.save_json(\"plan.json\")  # save the plan state"
+    "plan.save()  # this will save the plan in DB\n",
+    "plan.save_json(\"plan.json\")  # save the plan state locally"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Execute the Plan"
+    "## Launching the Plan\n",
+    "\n",
+    "A created plan can be launched using the `plan.launch()` function.\n",
+    "Changes to plan states will be automatically saved onto the remote.\n",
+    "However, plan state can also be tracked locally by invoking `plan.save_json()`."
    ]
   },
   {
@@ -216,16 +220,16 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "plan = ae.load_plan(\"plan.json\")\n",
     "plan.launch()\n",
-    "plan.save_json(\"plan.json\")  # save the plan state"
+    "plan.save_json(\"plan.json\")"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Option A - Wait for Completion"
+    "## Checking the Plan Status\n",
+    "The status of a plan can be retrieved by calling `plan.status()`."
    ]
   },
   {
@@ -234,57 +238,37 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "plan = ae.load_plan(\"plan.json\")\n",
-    "plan.describe()"
+    "plan.status()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Loading a Saved Plan\n",
+    "\n",
+    "A saved plan can be loaded by calling `ae.plan.load_json(plan_path)` (for local plans) or `ae.plan.load(plan_id)` (for remote plans)."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 8,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "123ad583417a4ced989b4b9b1b99b315",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "Output()"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "data": {
-      "text/html": [
-       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
-      ],
-      "text/plain": []
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Interrupted by user.\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
-    "plan = ae.load_plan(\"plan.json\")\n",
-    "plan.join()"
+    "plan = ae.plan.load_json(\"plan.json\")\n",
+    "plan = ae.plan.load(plan.id)\n",
+    "plan.status()\n",
+    "ae.display(plan)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Option B - Terminate Execution"
+    "## Fetching User-Defined Plans\n",
+    "\n",
+    "The `ae.plan.query()` function retrieves all plans stored in the remote."
    ]
   },
   {
@@ -293,64 +277,74 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "plan = ae.load_plan(\"plan.json\")\n",
-    "plan.stop()"
+    "plans = ae.plan.query()\n",
+    "ae.display(plans)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Option C - Monitor Files During Execution"
+    "## Managing Plan Execution\n",
+    "\n",
+    "The `plan.stop()` function will stop a currently executing plan.\n",
+    "The `plan.wait_for_completion()` function will block until the plan finishes executing."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# plan.stop()\n",
+    "plan.wait_for_completion()"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Displaying the status and files generated by each replica (task)"
+    "## Interacting with Files\n",
+    "\n",
+    "The `task` object has several helper functions to perform file operations within its context.\n",
+    "\n",
+    "* `task.ls()` - list all remote files (inputs, outputs, logs, etc.)\n",
+    "* `task.upload(<local_path>, <remote_path>)` - upload a local file to remote\n",
+    "* `task.cat(<remote_path>)` - display the contents of a remote file\n",
+    "* `task.download(<remote_path>, <local_path>)` - fetch a remote file to local"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 9,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "status=ExperimentStatus(state=4, timeOfStateChange=1733386271725, reason='process  started', statusId='EXPERIMENT_STATE_e5b4246d-9d7c-41c7-8a03-df8292941518')\n"
-    },
-    {
-     "ename": "Exception",
-     "evalue": "Agent not found",
-     "output_type": "error",
-     "traceback": [
-      
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
-      "\u001b[0;31mException\u001b[0m                                 
Traceback (most recent call last)",
-      "Cell \u001b[0;32mIn[9], line 5\u001b[0m\n\u001b[1;32m      3\u001b[0m 
status \u001b[38;5;241m=\u001b[39m 
task\u001b[38;5;241m.\u001b[39mstatus()\n\u001b[1;32m      4\u001b[0m 
\u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstatus=\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mstatus\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m---->
 5\u001b[0m files \u001b[38;5;241m=\u001b[39m \u001b[43mtask\u001b[49m\ [...]
-      "File 
\u001b[0;32m~/projects/artisan/airavata/airavata-api/airavata-client-sdks/airavata-experiment-sdk/airavata_experiments/task.py:54\u001b[0m,
 in \u001b[0;36mTask.files\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     
52\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m 
\u001b[38;5;21mfiles\u001b[39m(\u001b[38;5;28mself\u001b[39m) 
\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m 
\u001b[38;5;28mlist\u001b[39m[\u001b[38;5;28mstr\u001b[39m]:\n\u001b[1;32m     
53\u001b[0m   \u001b[38;5; [...]
-      "File 
\u001b[0;32m~/projects/artisan/airavata/airavata-api/airavata-client-sdks/airavata-experiment-sdk/airavata_experiments/runtime.py:166\u001b[0m,
 in \u001b[0;36mRemote.ls\u001b[0;34m(self, task)\u001b[0m\n\u001b[1;32m    
164\u001b[0m data \u001b[38;5;241m=\u001b[39m 
res\u001b[38;5;241m.\u001b[39mjson()\n\u001b[1;32m    165\u001b[0m 
\u001b[38;5;28;01mif\u001b[39;00m 
data[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124merror\u001b[39m\u001b[38;5;124m\"\u001b[39m]
 \u001b[38;5;129;01mi [...]
-      "\u001b[0;31mException\u001b[0m: Agent not found"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
-    "plan = ae.load_plan(\"plan.json\")\n",
     "for task in plan.tasks:\n",
-    "    status = task.status()\n",
-    "    print(f\"status={status}\")\n",
-    "    files = task.files()\n",
-    "    print(f\"files={files}\")"
+    "    print(task.name, task.pid)\n",
+    "    # display files\n",
+    "    display(task.ls())\n",
+    "    # upload a file\n",
+    "    task.upload(\"data/sample.txt\")\n",
+    "    # preview contents of a file\n",
+    "    display(task.cat(\"sample.txt\"))\n",
+    "    # download a specific file\n",
+    "    task.download(\"sample.txt\", f\"./results_{task.name}\")\n",
+    "    # download all files\n",
+    "    task.download_all(f\"./results_{task.name}\")"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Displaying the intermediate results generated by each replica (task)"
+    "## Executing Task-Local Code Remotely\n",
+    "\n",
+    "The `@task.context()` decorator can be applied to Python functions to run them remotely within the task context.\n",
+    "Functions executed this way have access to the task files, as well as the remote compute resources.\n",
+    "\n",
+    "**NOTE**: Currently, remote code execution is only supported for ongoing tasks. In future updates, we will support both ongoing and completed tasks. Stay tuned!"
    ]
   },
   {
@@ -359,19 +353,15 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from matplotlib import pyplot as plt\n",
-    "import pandas as pd\n",
-    "\n",
     "for index, task in enumerate(plan.tasks):\n",
-    "\n",
-    "    @cs.task_context(task)\n",
-    "    def visualize():\n",
-    "        data = pd.read_csv(\"data.csv\")\n",
-    "        plt.figure(figsize=(8, 6))\n",
-    "        plt.plot(data[\"x\"], data[\"y\"], marker=\"o\", linestyle=\"-\", linewidth=2, markersize=6)\n",
-    "        plt.title(f\"Plot for Replica {index} of {len(plan.tasks)}\")\n",
-    "\n",
-    "    visualize()"
+    "    @task.context(packages=[\"numpy\", \"pandas\"])\n",
+    "    def analyze() -> None:\n",
+    "        import numpy as np\n",
+    "        with open(\"pull.conf\", \"r\") as f:\n",
+    "            data = f.read()\n",
+    "        print(\"pull.conf has\", len(data), \"chars\")\n",
+    "        print(np.arange(10))\n",
+    "    analyze()"
    ]
   }
  ],
diff --git a/dev-tools/deployment/jupyterhub/user-container/MD/settings.ini b/dev-tools/deployment/jupyterhub/user-container/MD/settings.ini
index 695ba507ba..afa10efda6 100644
--- a/dev-tools/deployment/jupyterhub/user-container/MD/settings.ini
+++ b/dev-tools/deployment/jupyterhub/user-container/MD/settings.ini
@@ -2,39 +2,16 @@
 API_HOST = api.gateway.cybershuttle.org
 API_PORT = 9930
 API_SECURE = True
-
-[KeycloakServer]
-CLIENT_ID = cybershuttle-agent
-CLIENT_SECRET = "" # not used
-TOKEN_URL = "" # not used
-USER_INFO_URL = "" # not used
-VERIFY_SSL = False
-CERTIFICATE_FILE_PATH = None
-REALM = 10000000
-API_URL = https://auth.cybershuttle.org
-LOGIN_DESKTOP_URI = https://gateway.cybershuttle.org/auth/login-desktop
+CONNECTION_SVC_URL = https://api.gateway.cybershuttle.org/api/v1
+FILEMGR_SVC_URL = http://3.142.234.94:8050
 
 [Gateway]
 GATEWAY_ID = default
 GATEWAY_URL = gateway.cybershuttle.org
 GATEWAY_DATA_STORE_DIR = /var/www/portals/gateway-user-data/iguide-cybershuttle
-
-[Thrift]
-THRIFT_CLIENT_POOL_KEEPALIVE = 5
-
-[ExperimentConf]
 STORAGE_RESOURCE_HOST = iguide-cybershuttle.che070035.projects.jetstream-cloud.org
 SFTP_PORT = 9000
-PROJECT_NAME = Default Project
-# everything below should be dynamic
-APPLICATION_NAME = NAMD
-COMPUTE_HOST_DOMAIN = login.expanse.sdsc.edu
-GROUP_RESOURCE_PROFILE_NAME = Default
-NODE_COUNT = 1
-TOTAL_CPU_COUNT = 24
-WALL_TIME_LIMIT = 30
-QUEUE_NAME = gpu-shared
-MONITOR_STATUS = True
 
-[ConnectionServer]
-CONNECTION_SERVER_URL = https://api.gateway.cybershuttle.org/api/v1/agent
\ No newline at end of file
+[User]
+PROJECT_NAME = Default Project
+GROUP_RESOURCE_PROFILE_NAME = Default
\ No newline at end of file
