The main reason is that if you compile it yourself then nobody can reproduce 
what you did, whereas a distribution can be downloaded by anyone, so others can 
follow your setup exactly. As far as I recall you had described several 
problems that the distributions did not have (e.g. you could not compile Tez, 
Spark was only available in an outdated version, etc.). Furthermore, the 
distributions provide a clear baseline for the configuration of several complex 
pieces of software.

Hence, even for production use, a self-compiled version of something as complex 
as the Hadoop/Hive/Spark toolkit is clearly a no-go.

> On 19 Jul 2016, at 08:25, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:
> 
> 
> 
> "Do not use a self-compiled Hive or Spark version, but only the ones supplied 
> by distributions (Cloudera, Hortonworks, Bigtop...). You will face performance 
> problems, strange errors, etc. when building and testing your code using 
> self-compiled versions."
> 
> This comment does not make sense and is meaningless without any evidence. 
> Either provide evidence that you have done this work and encountered these 
> errors, or better not to mention it at all. It sounds like scaremongering.
> 
> Dr Mich Talebzadeh
>  
> LinkedIn  
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  
> http://talebzadehmich.wordpress.com
> 
> Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
> damage or destruction of data or any other property which may arise from 
> relying on this email's technical content is explicitly disclaimed. The 
> author will in no case be liable for any monetary damages arising from such 
> loss, damage or destruction.
>  
> 
>> On 19 July 2016 at 06:51, Jörn Franke <jornfra...@gmail.com> wrote:
>> Do not use a self-compiled Hive or Spark version, but only the ones supplied 
>> by distributions (Cloudera, Hortonworks, Bigtop...). You will face 
>> performance problems, strange errors, etc. when building and testing your 
>> code using self-compiled versions.
>> 
>> If you use the Hive APIs then the engine should not be relevant for your 
>> storage handler. Nevertheless, the APIs of the storage handler might have 
>> changed. 
>> 
>> However, I wonder why a 1-1 mapping does not work for you.
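[Editor's note: the storage-handler contract Jörn refers to is Hive's org.apache.hadoop.hive.ql.metadata.HiveStorageHandler interface. The standalone sketch below uses simplified types with no Hive dependency, and all class and table names are hypothetical; it only illustrates the idea of a handler resolving one Hive table to a database-qualified table in the external store, rather than a fixed 1-to-1 name mapping.]

```java
import java.util.HashMap;
import java.util.Map;

// Simplified, hypothetical sketch of the mapping a storage handler maintains.
// The real contract is org.apache.hadoop.hive.ql.metadata.HiveStorageHandler;
// this standalone version only illustrates the table-resolution idea.
public class ExternalTableResolver {

    // One Hive table may point at "db.table" in the external store.
    private final Map<String, String> hiveToExternal = new HashMap<>();

    // Register a Hive table against a database-qualified external table --
    // exactly what a strict 1-to-1 (table-name-to-table-name) mapping lacks.
    public void register(String hiveTable, String externalDb, String externalTable) {
        hiveToExternal.put(hiveTable, externalDb + "." + externalTable);
    }

    // Resolve the external target for a Hive table, or null if unmapped.
    public String resolve(String hiveTable) {
        return hiveToExternal.get(hiveTable);
    }

    public static void main(String[] args) {
        ExternalTableResolver r = new ExternalTableResolver();
        r.register("sales", "emea_db", "orders");
        r.register("sales_us", "us_db", "orders");
        // Two Hive tables resolve into different external databases.
        System.out.println(r.resolve("sales"));
        System.out.println(r.resolve("sales_us"));
    }
}
```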
>> 
>>> On 18 Jul 2016, at 22:46, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:
>>> 
>>> Hi,
>>> 
>>> You can move up to Hive 2, which works fine and is pretty stable. You can 
>>> opt for Hive 1.2.1 if you wish.
>>> 
>>> If you want to use Spark (the replacement for Shark) as the execution 
>>> engine for Hive, then the version that I have managed to make work with 
>>> Hive is Spark 1.3.1, which you will need to build from source.
>>> 
>>> It works and it is stable.
>>> 
>>> Otherwise you may decide to use the Spark Thrift Server (STS), which allows 
>>> JDBC access to Spark SQL (through beeline, SQuirreL, Zeppelin) and has a 
>>> Hive SQL context built into it, as if you were using the Hive Thrift Server 
>>> (HiveServer2).
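[Editor's note: the practical point of STS is that it speaks the same Thrift/JDBC protocol as HiveServer2, so clients use the same `jdbc:hive2://host:port/db` URL shape. The hypothetical sketch below only builds and prints such a URL; host, port, and database are placeholder assumptions, and an actual connection would additionally need the hive-jdbc driver on the classpath and a running Thrift server.]

```java
// Hypothetical sketch: STS accepts the same HiveServer2-style JDBC URL
// (jdbc:hive2://host:port/db), which is why beeline, SQuirreL, and Zeppelin
// can talk to it unchanged. Values below are placeholders, not live endpoints.
public class StsUrl {

    // Build a HiveServer2-style JDBC URL; STS accepts the same format.
    static String jdbcUrl(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        // In beeline this would be used as: !connect <this URL>
        System.out.println(jdbcUrl("localhost", 10000, "default"));
    }
}
```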
>>> 
>>> HTH
>>> 
>>> 
>>> 
>>>> On 18 July 2016 at 21:38, Lavelle, Shawn <shawn.lave...@osii.com> wrote:
>>>> Hello,
>>>> 
>>>>  
>>>> 
>>>>     I am working with an external storage handler written for Hive 0.11 
>>>> and run on the Shark execution engine.  I’d like to move forward and 
>>>> upgrade to Hive 1.2.1 on Spark 1.6, or even 2.0.  
>>>> 
>>>>    This storage handler needs to run queries across tables that exist in 
>>>> different databases in the external data store, so existing drivers that 
>>>> map Hive to external storage in 1-to-1 fashion are insufficient. I have 
>>>> attempted this upgrade already, but found that predicate pushdown was 
>>>> not occurring.  Was this behaviour changed in 1.2?
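[Editor's note: in Hive, pushdown to a storage handler goes through HiveStoragePredicateHandler.decomposePredicate, which splits a filter into a part the handler evaluates remotely and a residual part that Hive re-checks. The standalone sketch below mimics only that split, with plain strings instead of Hive's expression trees and a toy "only simple equalities are pushable" rule -- both are assumptions for illustration, not Hive's actual logic.]

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, standalone sketch of predicate decomposition. Hive's real hook
// is HiveStoragePredicateHandler.decomposePredicate; here predicates are plain
// strings, and a predicate is "pushable" only if it is a simple equality --
// a toy rule chosen for this demo.
public class PredicateSplit {

    final List<String> pushed = new ArrayList<>();
    final List<String> residual = new ArrayList<>();

    // Decompose a conjunction of predicates under the toy pushability rule.
    static PredicateSplit decompose(List<String> conjuncts) {
        PredicateSplit s = new PredicateSplit();
        for (String p : conjuncts) {
            if (p.contains("=") && !p.contains("<") && !p.contains(">")) {
                s.pushed.add(p);     // the handler can evaluate this remotely
            } else {
                s.residual.add(p);   // Hive must re-check this one itself
            }
        }
        return s;
    }

    public static void main(String[] args) {
        PredicateSplit s = decompose(List.of("id = 42", "ts > 100"));
        System.out.println("pushed: " + s.pushed);
        System.out.println("residual: " + s.residual);
    }
}
```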
>>>> 
>>>>    Can I update and use the same storage handler in Hive, or has this 
>>>> concept been replaced by the RDD and DataFrame APIs?  
>>>> 
>>>>    Are these questions better for the Spark list?
>>>> 
>>>>  
>>>> 
>>>>    Thank you,
>>>> 
>>>>  
>>>> 
>>>> ~ Shawn M Lavelle
>>>> 
>>>>  
>>>> 
>>>> 
>>>> 
>>>> 
>>>> Shawn Lavelle
>>>> Software Development
>>>> 
>>>> 4101 Arrowhead Drive
>>>> Medina, Minnesota 55340-9457
>>>> Phone: 763 551 0559
>>>> Fax: 763 551 0750
>>>> Email: shawn.lave...@osii.com
>>>> Website: www.osii.com
> 
