>
> So the method(s) exist, but are not covered in the Scala API doc.
>
> How do you raise this as a ‘bug’?
>
> Thx
>
> -Mike
>
> --
Scott Reynolds
Principal Engineer
EMAIL sreyno...@twilio.com
It is always the case that 0.8 and 0.9 will work with a 0.10 broker.
On Fri, Oct 7, 2016 at 1:28 PM Michael Armbrust wrote:
>
> 0.10 consumers won't work on an earlier broker.
> Earlier consumers will (should?) work on a 0.10 broker.
>
>
> This lines up with my testing. Is there a page I'm missing?
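The compatibility rule above (older clients work against newer brokers, but not the reverse) means the Spark Kafka integration artifact should be chosen to match the *oldest* broker in play. A hedged sketch, assuming Spark 2.x artifact names and Scala 2.11; `my-streaming-app.jar` is a placeholder:

```shell
# Older clients talk to newer brokers, not the reverse, so pick the
# integration artifact by the OLDEST broker you must support.

# 0.8 client API: works against 0.8, 0.9, and 0.10 brokers.
spark-submit \
  --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.0.1 \
  my-streaming-app.jar

# 0.10 client API: requires a 0.10 (or newer) broker.
spark-submit \
  --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.0.1 \
  my-streaming-app.jar
```

These commands obviously need a Spark installation and a running cluster; they are shown only to make the version-pairing rule concrete.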
Following the documentation on spark-submit,
http://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit
- application-jar: Path to a bundled jar including your application and
all dependencies. The URL must be globally visible inside of your cluster.
On Oct 16, 2015 at 2:25 AM, Steve Loughran wrote:
>
> > On 15 Oct 2015, at 19:04, Scott Reynolds wrote:
> >
> > List,
> >
> > Right now we build our spark jobs with the s3a hadoop client. We do this
> > because our machines are only allowed to use IAM access to the s3 store.
> your EMR cluster.
> And that brings s3a jars to the worker nodes and it becomes available to
> your application.
>
> On Thu, Oct 15, 2015 at 11:04 AM, Scott Reynolds wrote:
>
>> List,
>>
>> Right now we build our spark jobs with the s3a hadoop client. We do this
>> because our machines are only allowed to use IAM access to the s3 store.
List,
Right now we build our spark jobs with the s3a hadoop client. We do this
because our machines are only allowed to use IAM access to the s3 store. We
can build our jars with the s3a filesystem and the aws sdk just fine, and
these jars run great in *client mode*.
We would like to move from client mode to cluster mode.
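One common way to get s3a working in cluster mode with IAM-only access is to ship the hadoop-aws and AWS SDK jars with the job rather than baking them into every node. A hedged sketch, not the thread's confirmed solution: jar versions and paths are placeholders, and the `fs.s3a.aws.credentials.provider` key assumes a Hadoop version that supports it (2.8+); on earlier versions s3a falls back to its default credential chain, which already includes instance-profile credentials.

```shell
# Submit in cluster mode, distributing the s3a jars to the executors
# and pointing s3a at EC2 instance-profile (IAM) credentials.
spark-submit \
  --master yarn --deploy-mode cluster \
  --jars /opt/libs/hadoop-aws-2.7.1.jar,/opt/libs/aws-java-sdk-1.7.4.jar \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
  --class com.example.MyJob \
  hdfs:///apps/my-job-assembly.jar
```

With instance profiles there are no access keys to configure anywhere: s3a obtains temporary credentials from the EC2 instance metadata service, which is exactly the constraint described above.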