One quick thing to check is the number of regions in the HBase table for
your app. If it's smaller than the number of cores you have, then you won't
be utilizing all of your computing power. Hope this helps.
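As a rough illustration of why region count matters (the function and numbers below are hypothetical, not from PredictionIO itself): when Spark scans an HBase table it typically creates one partition per region, so the initial scan stage can keep at most one core busy per region.

```python
# Sketch: Spark reads roughly one partition per HBase region, so the
# scan stage can keep at most one core busy per region.
def busy_cores(num_regions: int, total_cores: int) -> int:
    """Cores that can work concurrently on the initial scan stage."""
    return min(num_regions, total_cores)

# Hypothetical cluster: 4 machines x 40 CPUs = 160 cores, but a table
# with only 20 regions leaves 140 cores idle during the scan.
print(busy_cores(20, 160))  # -> 20
```

If the region count turns out to be low, pre-splitting the table (or repartitioning the RDD right after the load so later stages aren't constrained) are common remedies.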

Tom

On Sep 5, 2016 9:05 PM, "Digambar Bhat" <[email protected]> wrote:

> Any update, please?
>
> On 30-Aug-2016 8:06 pm, "Digambar Bhat" <[email protected]> wrote:
>
>> I am using Universal Recommender.
>>
>> On 30-Aug-2016 8:05 pm, "Pat Ferrel" <[email protected]> wrote:
>>
>>> Training time is also template dependent, what template are you using?
>>>
>>> On Aug 30, 2016, at 12:21 AM, Digambar Bhat <[email protected]>
>>> wrote:
>>>
>>> Hello,
>>>
>>> I have been using PredictionIO for the last year, and it has been working
>>> fine for me.
>>>
>>> Earlier, importing and training worked flawlessly, but training has become
>>> very slow as events have increased; it now takes almost 9-10 hours.
>>>
>>> Currently there are about 15 million events and about 10 million items.
>>>
>>> The architecture is as follows: Spark and Elasticsearch are on two
>>> machines; Hadoop and HBase are on another two separate machines.
>>>
>>> Each machine has the following configuration:
>>> 160 GB RAM, 40 CPUs (10 cores per socket), CPU clock 3000 MHz
>>>
>>> So please let me know the right configuration for such a large number of
>>> events. Also, what should I plan for as my events grow to a billion? Will
>>> it work for such a large data set?
>>>
>>> Thanks in advance.
>>>
>>> Thanks,
>>> Digambar
>>>
>>>