On Wed, May 30, 2018 at 7:15 AM, Miller, Clifford <
clifford.mil...@phoenix-opsgroup.com> wrote:
0.12.1 with HDP Spark on YARN
That's the command that I'm using but it gives me the exception that I
listed in the previous email. I've installed a Spark standalone cluster
and am using that for training for now but would like to use Spark on YARN
eventually.
Are you using HDP? If so, what version?
I use 'pio train -- --master yarn'.
It works for me to train the Universal Recommender.
On Tue, May 29, 2018 at 8:31 PM, Miller, Clifford <
clifford.mil...@phoenix-opsgroup.com> wrote:
To add more details to this: when I attempt to execute my training job
using the command 'pio train -- --master yarn', I get the exception that
I've included below. Can anyone tell me how to correctly submit the
training job, or what setting I need to change to make this work? I've made
not
So updating the version in the RELEASE file to 2.1.1 fixed the version
detection problem, but I'm still not able to submit Spark jobs unless they
are strictly local. How are you submitting to Spark on HDP?
Thanks,
--Cliff.
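One cluster-side setting worth checking (an assumption on my part, not something confirmed in this thread): spark-submit can only reach the YARN ResourceManager if it can find the Hadoop client configuration, and PredictionIO normally picks this up from conf/pio-env.sh. A sketch with illustrative HDP paths, which will differ on your cluster:

```shell
# conf/pio-env.sh -- illustrative paths for an HDP layout; adjust to yours.
SPARK_HOME=/usr/hdp/current/spark2-client
HADOOP_CONF_DIR=/etc/hadoop/conf
```

If HADOOP_CONF_DIR (or YARN_CONF_DIR) is not visible to spark-submit, Spark refuses to start with a "When running with master 'yarn'..." error, which would be consistent with jobs only working in local mode.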
On Mon, May 28, 2018 at 1:12 AM, suyash kharade
wrote:
Hi Miller,
I faced the same issue.
It gives an error because the RELEASE file has a '-' in the version string.
Put a simple version in the RELEASE file, something like 2.6.
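The workaround above can be sketched as below. The extended HDP version string is my own illustration (the exact string stamped on your cluster will differ), and the demo edits a scratch copy rather than the live file; point the same sed at your real $SPARK_HOME/RELEASE only after backing it up.

```shell
# Demonstrate the RELEASE-file fix on a scratch copy (illustrative
# version string; your HDP build will stamp a different one).
DEMO_HOME=$(mktemp -d)
echo "Spark 2.1.1.2.6.5.0-292 built for Hadoop 2.7.3" > "$DEMO_HOME/RELEASE"

# The version check chokes on the '-' suffix; rewrite the field to a
# plain three-part version.
sed -i 's/[0-9][0-9.]*-[0-9]*/2.1.1/' "$DEMO_HOME/RELEASE"
cat "$DEMO_HOME/RELEASE"   # prints: Spark 2.1.1 built for Hadoop 2.7.3
```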
On Mon, May 28, 2018 at 4:32 AM, Miller, Clifford <
clifford.mil...@phoenix-opsgroup.com> wrote:
> I've installed an HDP cluster with HBase