That might be a reasonable and much simpler approach to try... but if
we resolve these issues, we should make the tests part of some frequent build
to make sure the build doesn't regress and that the actual functionality
doesn't regress either. Let me look into this again...
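
As a rough sketch of the CLI-scripting idea Josh mentions below (the object
and method names are hypothetical, just to illustrate shelling out to the
docker binary from Scala instead of depending on docker-client):

import scala.sys.process._

// Hypothetical helper; assumes the `docker` binary is on PATH.
object DockerCli {

  // Start a container in detached mode and return the container id that
  // `docker run -d` prints on stdout.
  def run(image: String, ports: Seq[(Int, Int)], env: Map[String, String]): String = {
    val portArgs = ports.flatMap { case (host, container) => Seq("-p", s"$host:$container") }
    val envArgs  = env.toSeq.flatMap { case (k, v) => Seq("-e", s"$k=$v") }
    (Seq("docker", "run", "-d") ++ portArgs ++ envArgs ++ Seq(image)).!!.trim
  }

  // Force-remove the container once the suite is done with it.
  def remove(containerId: String): Unit = {
    Seq("docker", "rm", "-f", containerId).!
  }
}

Something along those lines would leave the suite with no JVM-side Docker
client dependency at all; the trade-off is that we would be parsing CLI
output ourselves.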

On Wed, Sep 7, 2016 at 2:46 PM, Josh Rosen <joshro...@databricks.com> wrote:

> I think that these tests are valuable so I'd like to keep them. If
> possible, though, we should try to get rid of our dependency on the Spotify
> docker-client library, since it's a dependency hell nightmare. Given our
> relatively simple use of Docker here, I wonder whether we could just write
> some simple scripting over the `docker` command-line tool instead of
> pulling in such a problematic library.
>
> On Wed, Sep 7, 2016 at 2:36 PM Luciano Resende <luckbr1...@gmail.com>
> wrote:
>
>> It looks like there is nobody running these tests, and after some
>> dependency upgrades in Spark 2.0 they have stopped working. I have tried
>> to bring them back up, but I am having some issues getting the right
>> dependencies loaded and satisfying the docker-client expectations.
>>
>> The question then is: does the community find value in having these tests
>> available? If so, we can focus on bringing them up and I can push my
>> previous experiments as a WIP PR. Otherwise we should just get rid of
>> these tests.
>>
>> Thoughts?
>>
>>
>> On Tue, Sep 6, 2016 at 4:05 PM, Suresh Thalamati <
>> suresh.thalam...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>>
>>> I am getting the following error when I am trying to run the JDBC docker
>>> integration tests on my laptop. Any ideas what I might be doing wrong?
>>>
>>> build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive-thriftserver -Phive -DskipTests clean install
>>> build/mvn -Pdocker-integration-tests -pl :spark-docker-integration-tests_2.11 compile test
>>>
>>> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>>> MaxPermSize=512m; support was removed in 8.0
>>> Discovery starting.
>>> Discovery completed in 200 milliseconds.
>>> Run starting. Expected test count is: 10
>>> MySQLIntegrationSuite:
>>>
>>> Error:
>>> 16/09/06 11:52:00 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 9.31.117.25, 51868)
>>> *** RUN ABORTED ***
>>>   java.lang.AbstractMethodError:
>>>   at org.glassfish.jersey.model.internal.CommonConfig.configureAutoDiscoverableProviders(CommonConfig.java:622)
>>>   at org.glassfish.jersey.client.ClientConfig$State.configureAutoDiscoverableProviders(ClientConfig.java:357)
>>>   at org.glassfish.jersey.client.ClientConfig$State.initRuntime(ClientConfig.java:392)
>>>   at org.glassfish.jersey.client.ClientConfig$State.access$000(ClientConfig.java:88)
>>>   at org.glassfish.jersey.client.ClientConfig$State$3.get(ClientConfig.java:120)
>>>   at org.glassfish.jersey.client.ClientConfig$State$3.get(ClientConfig.java:117)
>>>   at org.glassfish.jersey.internal.util.collection.Values$LazyValueImpl.get(Values.java:340)
>>>   at org.glassfish.jersey.client.ClientConfig.getRuntime(ClientConfig.java:726)
>>>   at org.glassfish.jersey.client.ClientRequest.getConfiguration(ClientRequest.java:285)
>>>   at org.glassfish.jersey.client.JerseyInvocation.validateHttpMethodAndEntity(JerseyInvocation.java:126)
>>>   ...
>>> 16/09/06 11:52:00 INFO SparkContext: Invoking stop() from shutdown hook
>>> 16/09/06 11:52:00 INFO MapOutputTrackerMasterEndpoint:
>>> MapOutputTrackerMasterEndpoint stopped!
>>>
>>>
>>>
>>> Thanks
>>> -suresh
>>>
>>>
>>
>>
>> --
>> Luciano Resende
>> http://twitter.com/lresende1975
>> http://lresende.blogspot.com/
>>
>


-- 
Luciano Resende
http://twitter.com/lresende1975
http://lresende.blogspot.com/
