All,
I have some questions about dependencies for the UR:
- Is there a specific version of Spark that I must use with the UR or is
any version that's compatible with PIO sufficient?
- Is the UR capable of using Elasticsearch as the metadata and event
store?
- Is it a good idea to
> /jira/browse/PIO-72. If it is still happening in
> 0.12.0+ we will need to investigate.
>
> Regards,
> Donald
>
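
On the Elasticsearch-as-metadata-store question above: a sketch of the relevant pio-env.sh entries (key names follow the pio-env.sh.template shipped with PIO 0.12.x; the repository name, host, and port here are placeholders):

```
# pio-env.sh fragment -- metadata store backed by Elasticsearch
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH

PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9200
```

Metadata in Elasticsearch is a standard configuration; whether the event store can also live in Elasticsearch depends on the PIO version and build, so treat that part as something to verify against your install.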
> On Tue, Jun 5, 2018 at 1:35 PM Miller, Clifford <clifford.mil...@phoenix-opsgroup.com> wrote:
>
>> I'm running a PIO with all remote dependencies. I have the following:
>>
.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
On Tue, May 29, 2018 at 12:01 AM, Miller, Clifford <
clifford.mil...@phoenix-opsgroup.com> wrote:
> I faced the same issue.
> The error occurs because the release file has a '-' in the version string.
> Put a simple version in the release file, something like 2.6.
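
The fix above can be sketched as a one-liner. Assuming the release file holds a vendor version string like "2.6.5.0-292" (the exact value and file path depend on your HDP install) that PIO's version parser rejects, strip everything from the '-' onward before writing it back:

```shell
# Hypothetical vendor version string from an HDP release file
ver="2.6.5.0-292"

# Keep only the plain numeric version (drop the '-' suffix PIO chokes on)
simple="${ver%%-*}"

echo "$simple"    # -> 2.6.5.0
```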
>
> On Mon, May 28, 2018 at 4:32 AM, Miller, Clifford <
> clifford.mil...@phoenix-opsgroup.com> wrote:
>
>> I've
> pio status should be fine with the remote HBase
>
>
> From: Miller, Clifford <clifford.mil...@phoenix-opsgroup.com>
> <clifford.mil...@phoenix-opsgroup.com>
> Reply: Miller, Clifford <clifford.mil...@phoenix-opsgroup.com>
> <clifford.mil...@phoenix-opsgroup.com>
I'm attempting to use a remote cluster with PIO 0.12.1. When I run
pio-start-all, it starts HBase locally and does not use the remote
cluster as configured. I've copied the HBase and Hadoop conf files from
the cluster and put them into the locally configured directories. I set
this up in the
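
For reference, a sketch of the pio-env.sh entries involved in pointing PIO at a remote cluster (key names per the PIO 0.12.x pio-env.sh.template; the paths here are placeholders, and the actual connection details come from the hbase-site.xml copied from the cluster):

```
# pio-env.sh fragment -- remote HBase/Hadoop, hypothetical paths
HADOOP_CONF_DIR=/opt/remote-conf/hadoop   # cluster's core-site.xml, hdfs-site.xml
HBASE_CONF_DIR=/opt/remote-conf/hbase     # cluster's hbase-site.xml

PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=/opt/hbase # local client install, not the server
```

Note that pio-start-all also tries to launch a local HBase; with a fully remote cluster it may be cleaner to skip it and start only the pieces you need (e.g. pio eventserver), then confirm connectivity with pio status.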
ccamsmachete.com> wrote on Fri, May 25, 2018 at 7:56 AM:
>
>> I’m having a java.lang.NoClassDefFoundError in a different context and
>> different class. Have you tried this without Yarn? Sorry I can’t find the
>> rest of this thread.
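>>
>> To take YARN out of the loop, one option (a sketch; in PIO, arguments after `--` are passed through to spark-submit, and the memory value here is just an example) is to train against a local Spark master:
>>
>> pio train -- --master local[4] --driver-memory 4g
>>
>> If the NoClassDefFoundError disappears, the problem is likely a YARN-side classpath issue rather than the engine itself.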
>>
>>
>> From: Miller, Clifford
I've set up a cluster using Hortonworks HDP with Ambari, all running in AWS.
I then created a separate EC2 instance and installed PIO 0.12.1, Hadoop,
Elasticsearch, HBase, and Spark 2. I copied the configurations from the HDP
cluster and then ran pio-start-all. The pio-start-all completes successfully
I'm exploring cost-saving options for a customer that wants to use
PredictionIO. We plan on running multiple engines/templates, all in
AWS, and are hoping not to have all data loaded for all templates at
once. The hope is to:
1. start up the HBase