For reference, I've created OOZIE-2310 to improve this situation.

On Fri, Jul 17, 2015 at 11:39 AM, Robert Kanter <[email protected]>
wrote:

> Also, did you setup Oozie's Hadoop settings?
> http://oozie.apache.org/docs/4.2.0/AG_HadoopConfiguration.html
> The oozie.service.HadoopAccessorService.hadoop.configurations property is
> very important.
> Make sure to do that before the sharelib command I mentioned.
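> For reference, the relevant entry in oozie-site.xml would look something
> like this (the Hadoop conf directory below is just an assumption for a
> typical local install; point it at wherever your *-site.xml files live):
>
>   <property>
>     <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
>     <value>*=/usr/local/hadoop/etc/hadoop</value>
>   </property>
>
> The "*=" prefix maps all clusters/authorities to that configuration directory.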
>
> On Fri, Jul 17, 2015 at 11:34 AM, Robert Kanter <[email protected]>
> wrote:
>
>> Hi Matteo,
>>
>> I took a quick look at the code and it looks like you got that ugly (and
>> not helpful) NullPointerException because of OOZIE-1877.  I'll create a
>> JIRA to fix that.
>>
>> Anyway, to fix the problem, you need to deploy the ShareLib in HDFS.
>> oozie-setup.sh sharelib create -fs hdfs://HOST:8020
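>>
>> Once the sharelib is deployed, you should be able to confirm the server
>> sees it with something like (adjust the URL to your server):
>>
>>   oozie admin -oozie http://localhost:11000/oozie -shareliblist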
>>
>>
>> - Robert
>>
>> On Fri, Jul 17, 2015 at 1:06 AM, Matteo Luzzi <[email protected]>
>> wrote:
>>
>>> Hello! I'm new to this mailing list since I just started looking into
>>> Oozie.
>>>
>>> I'm trying to run Oozie 4.2.0 against Hadoop 2.7.0 running in
>>> pseudo-distributed mode on my local machine. I built an Oozie distro
>>> bound to the Hadoop version I'm using and then, following the
>>> documentation, did these steps:
>>> Created a folder called *libext* and put a .zip of the *ext-2.2* library in it
>>> Launched the commands: oozie-setup.sh prepare-war, ooziedb.sh create
>>> -sqlfile oozie.sql -run, oozied.sh start
>>>
>>> Everything went fine: I can browse localhost:11000/oozie/, and if I
>>> execute oozie admin -oozie http://localhost:11000/oozie -status the
>>> server replies with *NORMAL*.
>>>
>>> However, I'm not able to launch any jobs, not even the examples. After
>>> moving the examples folder to the correct location on HDFS, if I execute
>>> oozie job -oozie localhost:11000/oozie -config
>>> /path/to/examples/apps/map-reduce/job.properties -run the server replies
>>> with *Error: HTTP error code: 500 : Internal Server Error*.
>>> The same happens if I try to submit a workflow programmatically using the
>>> Java API.
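>>>
>>> (For context, the examples' job.properties for a pseudo-distributed
>>> Hadoop 2.x setup would typically look something like the following; the
>>> localhost ports are assumptions based on the default NameNode and
>>> ResourceManager ports:)
>>>
>>>   nameNode=hdfs://localhost:8020
>>>   jobTracker=localhost:8032
>>>   queueName=default
>>>   examplesRoot=examples
>>>   oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce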
>>>
>>> Looking at the oozie.log file I get the following error/warning:
>>>
>>> ERROR V2AdminServlet:517 - SERVER[myserver] USER[-] GROUP[-] TOKEN[-]
>>> APP[-] JOB[-] ACTION[-] URL[GET
>>> http://localhost:11000/oozie/v2/admin/instrumentation?_dc=1437056225186]
>>> error, null
>>> java.lang.NullPointerException
>>>   at org.apache.oozie.service.ShareLibService.getLatestLibPath(ShareLibService.java:687)
>>>   at org.apache.oozie.service.ShareLibService$7.getValue(ShareLibService.java:742)
>>>   at org.apache.oozie.service.ShareLibService$7.getValue(ShareLibService.java:737)
>>>   at org.apache.oozie.servlet.BaseAdminServlet.instrElementsToJson(BaseAdminServlet.java:312)
>>>   at org.apache.oozie.servlet.BaseAdminServlet.instrToJson(BaseAdminServlet.java:339)
>>>   at org.apache.oozie.servlet.BaseAdminServlet.sendInstrumentationResponse(BaseAdminServlet.java:396)
>>>   at org.apache.oozie.servlet.V2AdminServlet.sendInstrumentationResponse(V2AdminServlet.java:124)
>>>   at org.apache.oozie.servlet.BaseAdminServlet.doGet(BaseAdminServlet.java:127)
>>>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>>>
>>> every time I refresh the web UI, and
>>>
>>> AuthenticationToken
>>> ignored:org.apache.hadoop.security.authentication.util.SignerException:
>>> Invalid signature
>>>
>>> every time I submit a workflow, either via the command line or the Java API.
>>>
>>> I can't even get it working in local mode: all the jobs submitted to the
>>> local server get killed. I thought it could have been a Hadoop problem, but
>>> everything seems fine.
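>>>
>>> (As a quick sanity check of Hadoop itself, one of the bundled example
>>> jobs can be run directly, e.g. something like the command below; the jar
>>> path assumes a standard 2.7.0 tarball layout:)
>>>
>>>   hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar pi 2 5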
>>>
>>> My working environment:
>>>
>>> Mac OS X Yosemite
>>> java version "1.8.0_45"
>>> Hadoop 2.7.0
>>> Oozie 4.2.0
>>> Apache Maven 3.3.3
>>>
>>> Can anyone help me to get it working?
>>>
>>> --
>>> Matteo Remo Luzzi
>>>
>>
>>
>
