ASF GitHub Bot commented on BAHIR-122:

Github user ckadner commented on the issue:

    @ire7715 -- I created a Google API Service account and added the generated key to our Jenkins server. All your tests appear to be enabled and complete successfully now.
    [INFO] --- scalatest-maven-plugin:1.0:test (test) @ 
spark-streaming-pubsub_2.11 ---
    Discovery starting.
    Google Pub/Sub tests that actually send data has been enabled by setting 
the environment
    variable ENABLE_PUBSUB_TESTS to 1.
    This will create Pub/Sub Topics and Subscriptions in Google cloud platform.
    Please be aware that this may incur some Google cloud costs.
    Set the environment variable GCP_TEST_PROJECT_ID to the desired project.
    Discovery completed in 135 milliseconds.
    Run starting. Expected test count is: 10
    - should build application default
    - should build json service account
    - should provide json creds
    - should build p12 service account
    - should provide p12 creds
    - should build metadata service account
    - SparkGCPCredentials classes should be serializable
    Using project apache-bahir-pubsub for creating Pub/Sub topic and 
subscription for tests.
    - PubsubUtils API
    - pubsub input stream
    - pubsub input stream, create pubsub
    Run completed in 14 seconds, 143 milliseconds.
    Total number of tests run: 10
    Suites: completed 3, aborted 0
    Tests: succeeded 10, failed 0, canceled 0, ignored 0, pending 0
    All tests passed.
    Would you **please add a short paragraph** to the PubSub documentation describing how to enable your unit tests by setting the environment variables (and how to set up a Google API *service account*, generate *key files*, and how to minimally configure the *Roles*, like "Pub/Sub Publisher", etc.)? i.e.:
    mvn clean package -DskipTests -pl streaming-pubsub
    export GCP_TEST_PROJECT_ID="apache-bahir-pubsub"
    mvn test -pl streaming-pubsub
    **Thank you!**
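The test log above names two environment variables (ENABLE_PUBSUB_TESTS and GCP_TEST_PROJECT_ID). A fuller environment sketch could look like the following; note that GOOGLE_APPLICATION_CREDENTIALS is an assumption based on the "should build application default" test case and the standard Google Application Default Credentials convention, not something confirmed by this thread:

```shell
# Enable the Pub/Sub integration tests that send real data.
# These create topics/subscriptions in GCP and may incur costs.
export ENABLE_PUBSUB_TESTS=1

# Project in which test topics and subscriptions will be created.
export GCP_TEST_PROJECT_ID="apache-bahir-pubsub"

# Assumption: the Google client libraries locate the service-account key
# via Application Default Credentials; Bahir's tests may use another path.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```

With these set, the `mvn test -pl streaming-pubsub` invocation above should pick up the credentials and run the full suite.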

> [PubSub] Make "ServiceAccountCredentials" really broadcastable
> --------------------------------------------------------------
>                 Key: BAHIR-122
>                 URL: https://issues.apache.org/jira/browse/BAHIR-122
>             Project: Bahir
>          Issue Type: Improvement
>          Components: Spark Streaming Connectors
>            Reporter: Ire Sun
> The original implementation broadcasts the key file path to the Spark 
> cluster, and the executors then read the key file from that broadcast path. 
> This is problematic: if you are using a shared Spark cluster in a 
> group/company, you certainly do not want to (and may have no right to) put 
> your key file on each instance of the cluster.
> If you store the key file on the driver node and submit your job to a remote 
> cluster, you get the following warning:
> {{WARN ReceiverTracker: Error reported by receiver for stream 0: Failed to 
> pull messages - java.io.FileNotFoundException}}
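The fix the issue describes amounts to broadcasting the key file *contents* rather than its path, so executors never touch the driver's filesystem. A minimal sketch of the idea (the `ServiceAccountKey` name and `roundTrip` helper are hypothetical, not Bahir's actual API; Spark's broadcast uses Java serialization, which the round-trip below imitates):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream,
                ObjectInputStream, ObjectOutputStream}

// Hypothetical holder: keep the raw JSON key bytes in a Serializable class
// so Spark can broadcast the credential itself, not a path to it.
case class ServiceAccountKey(jsonKeyBytes: Array[Byte]) extends Serializable {
  // On an executor, a credential could be rebuilt from the bytes, e.g. with
  // GoogleCredential.fromStream(new ByteArrayInputStream(jsonKeyBytes)).
  def asStream: ByteArrayInputStream = new ByteArrayInputStream(jsonKeyBytes)
}

object BroadcastableKeyDemo {
  // Serialize and deserialize through Java serialization, as a Spark
  // broadcast would, to show the key survives shipping to executors.
  def roundTrip(key: ServiceAccountKey): ServiceAccountKey = {
    val bos = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bos)
    out.writeObject(key)
    out.close()
    val in = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray))
    in.readObject().asInstanceOf[ServiceAccountKey]
  }

  def main(args: Array[String]): Unit = {
    val original = ServiceAccountKey("""{"type":"service_account"}""".getBytes("UTF-8"))
    val copied = roundTrip(original)
    assert(new String(copied.jsonKeyBytes, "UTF-8") ==
           new String(original.jsonKeyBytes, "UTF-8"))
    println("serializable credential round-trips: ok")
  }
}
```

Because only in-memory bytes are broadcast, the `FileNotFoundException` from executors resolving a driver-local path cannot occur.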

This message was sent by Atlassian JIRA
