ASF GitHub Bot commented on BAHIR-122:

Github user ire7715 commented on the issue:

    Thanks @ckadner for reviewing 😄 
    ### Re: not force-pushing
    Since it is preferable to push changes as separate commits, and the PR will be squashed when merging anyway, I have restored the original commits and made the latest changes as new commits (including
    ### Re: Invalid paths should be an error, not silently ignored
    You are right, I hadn't thought about that. The test case no longer checks for file existence; instead, `ServiceAccountCredentials` checks it at construction time and raises a `FileNotFoundException` if the file does not exist.
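    As a rough sketch, the fail-fast check at construction time could look like the following. The class shape, constructor signature, and accessor here are assumptions for illustration, not the actual Bahir code:

    ```java
    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.Serializable;

    // Hypothetical sketch of a fail-fast existence check at construction time.
    // Only the idea is shown; the real connector class has a different shape.
    public class ServiceAccountCredentials implements Serializable {
      private final String keyFilePath;

      public ServiceAccountCredentials(String keyFilePath) throws FileNotFoundException {
        // Fail on the driver, at construction, instead of later on an executor.
        if (!new File(keyFilePath).exists()) {
          throw new FileNotFoundException("Service account key file not found: " + keyFilePath);
        }
        this.keyFilePath = keyFilePath;
      }

      public String getKeyFilePath() {
        return keyFilePath;
      }
    }
    ```

    This way an invalid path surfaces immediately where the job is submitted, rather than as a delayed receiver error on the cluster.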
    ### Re: Generate a key file for the Jenkins PR builder
    I would love to do it, but it might be a little absurd. Here are my points:
    1. `org.apache.spark.streaming.pubsub.PubsubTestUtils` already uses environment variables to locate the GCP project and the JSON/P12 key files. I guess a set of key files is already configured on the Jenkins server? Or was the original `PubsubTestUtils` never enabled because those environment variables were not set?
    2. I might be the wrong person to generate the key files. Since I am not a member of the project, if I generated the key files today and then someday accidentally deleted them (or my Google account got banned), the keys would be useless, and it could be hard for you to contact me. Maybe there is a Google account for the Jenkins server that you could use to generate the key file?
    If I have misunderstood you, or if anything is wrong, please tell me.
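    For reference, the kind of environment-variable gating I mean could be sketched like this. The variable names below are placeholders, not necessarily the ones `PubsubTestUtils` actually reads:

    ```java
    // Hypothetical sketch: run GCP-dependent tests only when the key file
    // locations are provided via environment variables. The variable names
    // are placeholders, not the actual ones used by PubsubTestUtils.
    public class GcpTestEnv {
      public static boolean shouldRunGcpTests() {
        String project = System.getenv("GCP_TEST_PROJECT_ID");
        String jsonKey = System.getenv("GCP_TEST_JSON_KEY_PATH");
        return project != null && jsonKey != null;
      }

      public static void main(String[] args) {
        if (!shouldRunGcpTests()) {
          System.out.println("GCP env vars not set; skipping Pub/Sub tests");
        }
      }
    }
    ```

    If Jenkins already exports such variables, no new key generation may be needed at all; the tests would simply start running.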
    #### To generate a service account key file
    1. Go to the [Google API Console](https://console.developers.google.com)
    2. Choose the `Credentials` tab > the `Create credentials` button > `Service account key`
    3. Fill in the form to create one. I did it twice: once for a JSON key file and once for a P12 key file.

> [PubSub] Make "ServiceAccountCredentials" really broadcastable
> --------------------------------------------------------------
>                 Key: BAHIR-122
>                 URL: https://issues.apache.org/jira/browse/BAHIR-122
>             Project: Bahir
>          Issue Type: Improvement
>          Components: Spark Streaming Connectors
>            Reporter: Ire Sun
> The original implementation broadcasts the key file path to the Spark 
> cluster, and the executors read the key file from that path. This is absurd: 
> if you are using a shared Spark cluster in a group/company, you certainly do 
> not want to (and have no right to) put your key file on every instance of the 
> cluster.
> If you store the key file on the driver node and submit your job to a remote 
> cluster, you get the following warning:
> {{WARN ReceiverTracker: Error reported by receiver for stream 0: Failed to 
> pull messages - java.io.FileNotFoundException}}

This message was sent by Atlassian JIRA
