damondouglas commented on code in PR #27516:
URL: https://github.com/apache/beam/pull/27516#discussion_r1270028777
##########
examples/notebooks/healthcare/beam_nlp.ipynb:
##########
@@ -0,0 +1,642 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ "<a
href=\"https://colab.research.google.com/github/apache/beam/blob/healthcarenlp/examples/notebooks/healthcare/beam_nlp.ipynb\"
target=\"_parent\"><img
src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In
Colab\"/></a>"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# @title ###### Licensed to the Apache Software Foundation (ASF),
Version 2.0 (the \"License\")\n",
+ "\n",
+ "# Licensed to the Apache Software Foundation (ASF) under one\n",
+ "# or more contributor license agreements. See the NOTICE file\n",
+ "# distributed with this work for additional information\n",
+ "# regarding copyright ownership. The ASF licenses this file\n",
+ "# to you under the Apache License, Version 2.0 (the\n",
+ "# \"License\"); you may not use this file except in compliance\n",
+ "# with the License. You may obtain a copy of the License at\n",
+ "#\n",
+ "# http://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing,\n",
+ "# software distributed under the License is distributed on an\n",
+ "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+ "# KIND, either express or implied. See the License for the\n",
+ "# specific language governing permissions and limitations\n",
+ "# under the License"
+ ],
+ "metadata": {
+ "id": "lBuUTzxD2mvJ",
+ "cellView": "form"
+ },
+ "execution_count": 1,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Natural Language Processing Pipeline**\n",
+ "\n",
+ "**Note**: This example is used from
[here](https://github.com/rasalt/healthcarenlp/blob/main/nlp_public.ipynb).\n",
+ "\n",
+ "\n",
+ "\n",
+ "This example demonstrates how to set up an Apache Beam pipeline that reads a file from [Google Cloud Storage](https://cloud.google.com/storage) and calls the [Google Cloud Healthcare NLP API](https://cloud.google.com/healthcare-api/docs/how-tos/nlp) to extract information from unstructured data. This pipeline can be used in contexts such as reading scanned clinical documents and extracting structured data from them.\n",
+ "\n",
+ "An Apache Beam pipeline is a pipeline that reads input data,
transforms that data, and writes output data. It consists of PTransforms and
PCollections. A PCollection represents a distributed data set that your Beam
pipeline operates on. A PTransform represents a data processing operation, or a
step, in your pipeline. It takes one or more PCollections as input, performs a
processing function that you provide on the elements of that PCollection, and
produces zero or more output PCollection objects.\n",
+ "\n",
+ "For details about Apache Beam pipelines, including PTransforms and
PCollections, visit the [Beam Programming
Guide](https://beam.apache.org/documentation/programming-guide/).\n",
+ "\n",
+ "You'll be able to use this notebook to explore the data in each
PCollection."
+ ],
+ "metadata": {
+ "id": "nEUAYCTx4Ijj"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "First, let's install Apache Beam."
+ ],
+ "metadata": {
+ "id": "ZLBB0PTG5CHw"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "pip install apache-beam[gcp]"
+ ],
+ "metadata": {
+ "id": "O7hq2sse8K4u"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Set the variables in the next cell based on your project and preferences. The nlpsample*.csv files referred to in this notebook contain one\n",
+ "blurb of clinical notes per line.\n",
+ "\n",
+ "Note that below, **us-central1** is hardcoded as the location. This
is because of the limited number of
[locations](https://cloud.google.com/healthcare-api/docs/how-tos/nlp) the API
currently supports."
+ ],
+ "metadata": {
+ "id": "D7lJqW2PRFcN"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "DATASET=\"<YOUR-DATASET-HERE>\"\n",
+ "TEMP_LOCATION=\"<YOUR-TEMP-LOCATION-HERE>\"\n",
+ "PROJECT='<YOUR-PROJECT-ID-HERE>'\n",
+ "LOCATION='us-central1'\n",
+
"URL=f'https://healthcare.googleapis.com/v1/projects/{PROJECT}/locations/{LOCATION}/services/nlp:analyzeEntities'\n",
+
"NLP_SERVICE=f'projects/{PROJECT}/locations/{LOCATION}/services/nlp'\n",
+ "GCS_BUCKET=PROJECT"
Review Comment:
We should probably follow this naming guidance for security best practice:
https://cloud.google.com/storage/docs/buckets#:~:text=Don%27t%20use%20user%20IDs%2C%20email%20addresses%2C%20project%20names%2C%20project%20numbers%2C%20or%20any%20personally%20identifiable%20information%20(PII)%20in%20bucket%20names%20because%20anyone%20can%20probe%20for%20the%20existence%20of%20a%20bucket.
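For example, a minimal sketch of a non-identifying bucket name (the `make_bucket_name` helper and the `beam-nlp-sample` prefix are hypothetical, not from the notebook):

```python
import uuid

# Hypothetical helper (not in the notebook): build a bucket name that does
# not embed the project ID or other identifying information, per the
# bucket-naming guidance linked above.
def make_bucket_name(prefix="beam-nlp-sample"):
    # A random hex suffix keeps the name globally unique without leaking PII.
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

bucket_name = make_bucket_name()
print(bucket_name)
```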
##########
examples/notebooks/healthcare/beam_nlp.ipynb:
##########
@@ -0,0 +1,642 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ "<a
href=\"https://colab.research.google.com/github/apache/beam/blob/healthcarenlp/examples/notebooks/healthcare/beam_nlp.ipynb\"
target=\"_parent\"><img
src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In
Colab\"/></a>"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# @title ###### Licensed to the Apache Software Foundation (ASF),
Version 2.0 (the \"License\")\n",
+ "\n",
+ "# Licensed to the Apache Software Foundation (ASF) under one\n",
+ "# or more contributor license agreements. See the NOTICE file\n",
+ "# distributed with this work for additional information\n",
+ "# regarding copyright ownership. The ASF licenses this file\n",
+ "# to you under the Apache License, Version 2.0 (the\n",
+ "# \"License\"); you may not use this file except in compliance\n",
+ "# with the License. You may obtain a copy of the License at\n",
+ "#\n",
+ "# http://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing,\n",
+ "# software distributed under the License is distributed on an\n",
+ "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+ "# KIND, either express or implied. See the License for the\n",
+ "# specific language governing permissions and limitations\n",
+ "# under the License"
+ ],
+ "metadata": {
+ "id": "lBuUTzxD2mvJ",
+ "cellView": "form"
+ },
+ "execution_count": 1,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Natural Language Processing Pipeline**\n",
+ "\n",
+ "**Note**: This example is used from
[here](https://github.com/rasalt/healthcarenlp/blob/main/nlp_public.ipynb).\n",
+ "\n",
+ "\n",
+ "\n",
+ "This example demonstrates how to set up an Apache Beam pipeline that reads a file from [Google Cloud Storage](https://cloud.google.com/storage) and calls the [Google Cloud Healthcare NLP API](https://cloud.google.com/healthcare-api/docs/how-tos/nlp) to extract information from unstructured data. This pipeline can be used in contexts such as reading scanned clinical documents and extracting structured data from them.\n",
+ "\n",
+ "An Apache Beam pipeline is a pipeline that reads input data,
transforms that data, and writes output data. It consists of PTransforms and
PCollections. A PCollection represents a distributed data set that your Beam
pipeline operates on. A PTransform represents a data processing operation, or a
step, in your pipeline. It takes one or more PCollections as input, performs a
processing function that you provide on the elements of that PCollection, and
produces zero or more output PCollection objects.\n",
+ "\n",
+ "For details about Apache Beam pipelines, including PTransforms and
PCollections, visit the [Beam Programming
Guide](https://beam.apache.org/documentation/programming-guide/).\n",
+ "\n",
+ "You'll be able to use this notebook to explore the data in each
PCollection."
+ ],
+ "metadata": {
+ "id": "nEUAYCTx4Ijj"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "First, let's install Apache Beam."
+ ],
+ "metadata": {
+ "id": "ZLBB0PTG5CHw"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "pip install apache-beam[gcp]"
+ ],
+ "metadata": {
+ "id": "O7hq2sse8K4u"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Set the variables in the next cell based on your project and preferences. The nlpsample*.csv files referred to in this notebook contain one\n",
+ "blurb of clinical notes per line.\n",
+ "\n",
+ "Note that below, **us-central1** is hardcoded as the location. This
is because of the limited number of
[locations](https://cloud.google.com/healthcare-api/docs/how-tos/nlp) the API
currently supports."
+ ],
+ "metadata": {
+ "id": "D7lJqW2PRFcN"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "DATASET=\"<YOUR-DATASET-HERE>\"\n",
+ "TEMP_LOCATION=\"<YOUR-TEMP-LOCATION-HERE>\"\n",
+ "PROJECT='<YOUR-PROJECT-ID-HERE>'\n",
+ "LOCATION='us-central1'\n",
+
"URL=f'https://healthcare.googleapis.com/v1/projects/{PROJECT}/locations/{LOCATION}/services/nlp:analyzeEntities'\n",
+
"NLP_SERVICE=f'projects/{PROJECT}/locations/{LOCATION}/services/nlp'\n",
+ "GCS_BUCKET=PROJECT"
+ ],
+ "metadata": {
+ "id": "s9lhe5CZ5F3o"
+ },
+ "execution_count": 5,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**BigQuery Setup**\n",
+ "\n",
+ "We will use BigQuery to warehouse the structured data produced by the Healthcare NLP API. For this purpose, we create three tables to organize the data: an entity table, a relations table, and an entity mentions table, which correspond to the outputs of interest from the Healthcare NLP API."
+ ],
+ "metadata": {
+ "id": "DI_Qkyn75LO-"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from google.cloud import bigquery\n",
+ "\n",
+ "# Construct a BigQuery client object.\n",
+ "\n",
+ "TABLE_ENTITY=\"entity\"\n",
+ "\n",
+ "\n",
+ "schemaEntity = [\n",
+ " bigquery.SchemaField(\"entityId\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"preferredTerm\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"vocabularyCodes\", \"STRING\",
mode=\"REPEATED\"),\n",
+ "]\n",
+ "\n",
+ "\n",
+ "client = bigquery.Client()\n",
+ "\n",
+ "# Create Table IDs\n",
+ "table_ent = PROJECT+\".\"+DATASET+\".\"+TABLE_ENTITY\n",
+ "\n",
+ "\n",
+ "# If table exists, delete the tables.\n",
+ "client.delete_table(table_ent, not_found_ok=True)\n",
+ "\n",
+ "\n",
+ "# Create tables\n",
+ "\n",
+ "table = bigquery.Table(table_ent, schema=schemaEntity)\n",
+ "table = client.create_table(table) # Make an API request.\n",
+ "\n",
+ "print(\n",
+ " \"Created table {}.{}.{}\".format(table.project,
table.dataset_id, table.table_id)\n",
+ ")"
+ ],
+ "metadata": {
+ "id": "bZDqtFVE5Wd_"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from google.cloud import bigquery\n",
+ "\n",
+ "# Construct a BigQuery client object.\n",
+ "\n",
+ "TABLE_REL=\"relations\"\n",
+ "\n",
+ "schemaRelations = [\n",
+ " bigquery.SchemaField(\"subjectId\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"objectId\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"confidence\", \"FLOAT64\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"id\", \"STRING\", mode=\"NULLABLE\"),\n",
+ "]\n",
+ "\n",
+ "client = bigquery.Client()\n",
+ "\n",
+ "# Create Table IDs\n",
+ "\n",
+ "table_rel = PROJECT+\".\"+DATASET+\".\"+TABLE_REL\n",
+ "\n",
+ "# If table exists, delete the tables.\n",
+ "\n",
+ "client.delete_table(table_rel, not_found_ok=True)\n",
+ "\n",
+ "# Create tables\n",
+ "\n",
+ "table = bigquery.Table(table_rel, schema=schemaRelations)\n",
+ "table = client.create_table(table) # Make an API request.\n",
+ "print(\n",
+ " \"Created table {}.{}.{}\".format(table.project,
table.dataset_id, table.table_id)\n",
+ ")\n",
+ "\n",
+ "\n"
+ ],
+ "metadata": {
+ "id": "YK-G7uV5APuP"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from google.cloud import bigquery\n",
+ "\n",
+ "# Construct a BigQuery client object.\n",
+ "\n",
+ "TABLE_ENTITYMENTIONS=\"entitymentions\"\n",
+ "\n",
+ "schemaEntityMentions = [\n",
+ " bigquery.SchemaField(\"mentionId\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"type\", \"STRING\", mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\n",
+ " \"text\",\n",
+ " \"RECORD\",\n",
+ " mode=\"NULLABLE\",\n",
+ " fields=[\n",
+ " bigquery.SchemaField(\"content\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"beginOffset\", \"INTEGER\",
mode=\"NULLABLE\"),\n",
+ " ],\n",
+ " ),\n",
+ " bigquery.SchemaField(\n",
+ " \"linkedEntities\",\n",
+ " \"RECORD\",\n",
+ " mode=\"REPEATED\",\n",
+ " fields=[\n",
+ " bigquery.SchemaField(\"entityId\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " ],\n",
+ " ),\n",
+ " bigquery.SchemaField(\n",
+ " \"temporalAssessment\",\n",
+ " \"RECORD\",\n",
+ " mode=\"NULLABLE\",\n",
+ " fields=[\n",
+ " bigquery.SchemaField(\"value\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"confidence\", \"FLOAT64\",
mode=\"NULLABLE\"),\n",
+ " ],\n",
+ " ),\n",
+ " bigquery.SchemaField(\n",
+ " \"certaintyAssessment\",\n",
+ " \"RECORD\",\n",
+ " mode=\"NULLABLE\",\n",
+ " fields=[\n",
+ " bigquery.SchemaField(\"value\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"confidence\", \"FLOAT64\",
mode=\"NULLABLE\"),\n",
+ " ],\n",
+ " ),\n",
+ " bigquery.SchemaField(\n",
+ " \"subject\",\n",
+ " \"RECORD\",\n",
+ " mode=\"NULLABLE\",\n",
+ " fields=[\n",
+ " bigquery.SchemaField(\"value\", \"STRING\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"confidence\", \"FLOAT64\",
mode=\"NULLABLE\"),\n",
+ " ],\n",
+ " ),\n",
+ " bigquery.SchemaField(\"confidence\", \"FLOAT64\",
mode=\"NULLABLE\"),\n",
+ " bigquery.SchemaField(\"id\", \"STRING\", mode=\"NULLABLE\")\n",
+ "]\n",
+ "\n",
+ "client = bigquery.Client()\n",
+ "\n",
+ "# Create Table IDs\n",
+ "\n",
+ "table_mentions = PROJECT+\".\"+DATASET+\".\"+TABLE_ENTITYMENTIONS\n",
+ "\n",
+ "# If table exists, delete the tables.\n",
+ "\n",
+ "client.delete_table(table_mentions, not_found_ok=True)\n",
+ "\n",
+ "# Create tables\n",
+ "\n",
+ "table = bigquery.Table(table_mentions,
schema=schemaEntityMentions)\n",
+ "table = client.create_table(table) # Make an API request.\n",
+ "print(\n",
+ " \"Created table {}.{}.{}\".format(table.project,
table.dataset_id, table.table_id)\n",
+ ")"
+ ],
+ "metadata": {
+ "id": "R9IHgZKoAQWj"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Pipeline Setup**\n",
+ "\n",
+ "We will use InteractiveRunner in this notebook."
+ ],
+ "metadata": {
+ "id": "jc_iS_BP5aS4"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Python's regular expression library\n",
+ "import re\n",
+ "from sys import argv\n",
+ "# Beam and interactive Beam imports\n",
+ "import apache_beam as beam\n",
+ "from apache_beam.runners.interactive.interactive_runner import
InteractiveRunner\n",
+ "import apache_beam.runners.interactive.interactive_beam as ib\n",
+ "\n",
+ "#Reference
https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#python_1\n",
+ "from apache_beam.options.pipeline_options import PipelineOptions\n",
+ "\n",
+ "runnertype = \"InteractiveRunner\"\n",
+ "\n",
+ "options = PipelineOptions(\n",
+ " flags=argv,\n",
+ " runner=runnertype,\n",
+ " project=PROJECT,\n",
+ " job_name=\"my-healthcare-nlp-job\",\n",
+ " temp_location=TEMP_LOCATION,\n",
+ " region=LOCATION)"
+ ],
+ "metadata": {
+ "id": "07ct6kf55ihP"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "The following defines a `PTransform` named `ReadLinesFromText` that extracts lines from a file."
+ ],
+ "metadata": {
+ "id": "dO1A9_WK5lb4"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "class ReadLinesFromText(beam.PTransform):\n",
+ "\n",
+ " def __init__(self, file_pattern):\n",
+ " self._file_pattern = file_pattern\n",
+ "\n",
+ " def expand(self, pcoll):\n",
+ " return (pcoll.pipeline\n",
+ " | beam.io.ReadFromText(self._file_pattern))"
+ ],
+ "metadata": {
+ "id": "t5iDRKMK5n_B"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "The following sets up an Apache Beam pipeline with the *Interactive
Runner*. The *Interactive Runner* is the runner suitable for running in
notebooks. A runner is an execution engine for Apache Beam pipelines."
+ ],
+ "metadata": {
+ "id": "HI_HVB185sMQ"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from google.auth import default\n",
+ "\n",
+ "credentials, _ = default()  # default() returns a (credentials, project) tuple"
+ ],
+ "metadata": {
+ "id": "dMc10Dlgtp1c"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "p = beam.Pipeline(options = options)"
+ ],
+ "metadata": {
+ "id": "7osCZ1om5ql0"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "The following sets up a PTransform that reads lines from a Google Cloud Storage file. In our example, each line is a medical notes excerpt that will be passed through the Healthcare NLP API.\n",
+ "\n",
+ "**\"|\"** is an overloaded operator that applies a PTransform to a
PCollection to produce a new PCollection. Together with |, >> allows you to
optionally name a PTransform.\n",
+ "\n",
+ "Usage: [PCollection] | [PTransform], **or** [PCollection] | [name] >> [PTransform]"
+ ],
+ "metadata": {
+ "id": "EaF8NfC_521y"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "lines = p | 'read' >> ReadLinesFromText(GCS_BUCKET +
\"nlpsample500.csv\")"
Review Comment:
I received this error at this cell:
```
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies
required for Interactive Beam PCollection visualization are not available,
please use: `pip install apache-beam[interactive]` to install necessary
dependencies to enable all data visualization features.
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args:
['/usr/local/lib/python3.10/dist-packages/ipykernel_launcher.py', '-f',
'/root/.local/share/jupyter/runtime/kernel-dffda434-c113-4e04-b963-0f893638a253.json']
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
[<ipython-input-14-01de179c8151>](https://localhost:8080/#) in <cell line:
1>()
----> 1 lines = p | 'read' >> ReadLinesFromText(GCS_BUCKET +
"nlpsample500.csv")
12 frames
[/usr/local/lib/python3.10/dist-packages/apache_beam/io/filebasedsource.py](https://localhost:8080/#)
in _validate(self)
188 match_result = FileSystems.match([pattern], limits=[1])[0]
189 if len(match_result.metadata_list) <= 0:
--> 190 raise IOError('No files found based on the file pattern %s' %
pattern)
191
192 def split(
OSError: No files found based on the file pattern
47378da8-91cd-4362-abe4-0932d933acccnlpsample500.csv
```
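The failure looks like a path-construction bug: `GCS_BUCKET + "nlpsample500.csv"` concatenates the bucket name and file name with no `gs://` scheme and no `/` separator. A minimal sketch of the fix (the bucket value below is a placeholder, not the notebook's actual bucket):

```python
# Build a full gs:// URI instead of concatenating bucket + filename directly.
GCS_BUCKET = "example-bucket"  # placeholder bucket name
file_pattern = f"gs://{GCS_BUCKET}/nlpsample500.csv"
print(file_pattern)
```

`ReadLinesFromText` (via the underlying `beam.io.ReadFromText`) expects a full file pattern such as `gs://bucket/path`, so the missing scheme and separator would explain the `No files found` error above.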
##########
examples/notebooks/healthcare/beam_nlp.ipynb:
##########
@@ -0,0 +1,642 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ "<a
href=\"https://colab.research.google.com/github/apache/beam/blob/healthcarenlp/examples/notebooks/healthcare/beam_nlp.ipynb\"
target=\"_parent\"><img
src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In
Colab\"/></a>"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# @title ###### Licensed to the Apache Software Foundation (ASF),
Version 2.0 (the \"License\")\n",
+ "\n",
+ "# Licensed to the Apache Software Foundation (ASF) under one\n",
+ "# or more contributor license agreements. See the NOTICE file\n",
+ "# distributed with this work for additional information\n",
+ "# regarding copyright ownership. The ASF licenses this file\n",
+ "# to you under the Apache License, Version 2.0 (the\n",
+ "# \"License\"); you may not use this file except in compliance\n",
+ "# with the License. You may obtain a copy of the License at\n",
+ "#\n",
+ "# http://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing,\n",
+ "# software distributed under the License is distributed on an\n",
+ "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+ "# KIND, either express or implied. See the License for the\n",
+ "# specific language governing permissions and limitations\n",
+ "# under the License"
+ ],
+ "metadata": {
+ "id": "lBuUTzxD2mvJ",
+ "cellView": "form"
+ },
+ "execution_count": 1,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Natural Language Processing Pipeline**\n",
+ "\n",
+ "**Note**: This example is used from
[here](https://github.com/rasalt/healthcarenlp/blob/main/nlp_public.ipynb).\n",
+ "\n",
+ "\n",
+ "\n",
+ "This example demonstrates how to set up an Apache Beam pipeline that reads a file from [Google Cloud Storage](https://cloud.google.com/storage) and calls the [Google Cloud Healthcare NLP API](https://cloud.google.com/healthcare-api/docs/how-tos/nlp) to extract information from unstructured data. This pipeline can be used in contexts such as reading scanned clinical documents and extracting structured data from them.\n",
+ "\n",
+ "An Apache Beam pipeline is a pipeline that reads input data,
transforms that data, and writes output data. It consists of PTransforms and
PCollections. A PCollection represents a distributed data set that your Beam
pipeline operates on. A PTransform represents a data processing operation, or a
step, in your pipeline. It takes one or more PCollections as input, performs a
processing function that you provide on the elements of that PCollection, and
produces zero or more output PCollection objects.\n",
+ "\n",
+ "For details about Apache Beam pipelines, including PTransforms and
PCollections, visit the [Beam Programming
Guide](https://beam.apache.org/documentation/programming-guide/).\n",
+ "\n",
+ "You'll be able to use this notebook to explore the data in each
PCollection."
+ ],
+ "metadata": {
+ "id": "nEUAYCTx4Ijj"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "First, let's install Apache Beam."
+ ],
+ "metadata": {
+ "id": "ZLBB0PTG5CHw"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "pip install apache-beam[gcp]"
+ ],
+ "metadata": {
+ "id": "O7hq2sse8K4u"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Set the variables in the next cell based on your project and preferences. The nlpsample*.csv files referred to in this notebook contain one\n",
+ "blurb of clinical notes per line.\n",
+ "\n",
+ "Note that below, **us-central1** is hardcoded as the location. This
is because of the limited number of
[locations](https://cloud.google.com/healthcare-api/docs/how-tos/nlp) the API
currently supports."
+ ],
+ "metadata": {
+ "id": "D7lJqW2PRFcN"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "DATASET=\"<YOUR-DATASET-HERE>\"\n",
+ "TEMP_LOCATION=\"<YOUR-TEMP-LOCATION-HERE>\"\n",
+ "PROJECT='<YOUR-PROJECT-ID-HERE>'\n",
+ "LOCATION='us-central1'\n",
+
"URL=f'https://healthcare.googleapis.com/v1/projects/{PROJECT}/locations/{LOCATION}/services/nlp:analyzeEntities'\n",
+
"NLP_SERVICE=f'projects/{PROJECT}/locations/{LOCATION}/services/nlp'\n",
+ "GCS_BUCKET=PROJECT"
+ ],
+ "metadata": {
+ "id": "s9lhe5CZ5F3o"
+ },
+ "execution_count": 5,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**BigQuery Setup**\n",
+ "\n",
+ "We will use BigQuery to warehouse the structured data produced by the Healthcare NLP API. For this purpose, we create three tables to organize the data: an entity table, a relations table, and an entity mentions table, which correspond to the outputs of interest from the Healthcare NLP API."
+ ],
+ "metadata": {
+ "id": "DI_Qkyn75LO-"
+ }
+ },
+ {
+ "cell_type": "code",
Review Comment:
Should there be a cell prior to this one that prompts the user to log in? I ran
this cell and, as expected, received an authentication error.
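One possible shape for such a cell, as a sketch: it assumes a Colab runtime and falls back to Application Default Credentials elsewhere (the `authenticate` helper name is hypothetical).

```python
def authenticate():
    """Authenticate in Colab if possible, otherwise fall back to ADC."""
    try:
        # google.colab is only importable inside a Colab runtime.
        from google.colab import auth
        auth.authenticate_user()
        return "colab"
    except ImportError:
        # Outside Colab, rely on Application Default Credentials,
        # e.g. set up via `gcloud auth application-default login`.
        return "adc"

mode = authenticate()
```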
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]