Build failed in Jenkins: Knox-master-patch-scan #25074

2016-11-02 Thread Apache Jenkins Server
See 

--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H1 (yahoo-not-h2 Hadoop Mapreduce Hdfs Pig) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/knox.git # timeout=10
Fetching upstream changes from https://github.com/apache/knox.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://github.com/apache/knox.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision c51b0feb65d3bfe90227b54cd3d02a8938fff0c4 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c51b0feb65d3bfe90227b54cd3d02a8938fff0c4
 > git rev-list c51b0feb65d3bfe90227b54cd3d02a8938fff0c4 # timeout=10
[Knox-master-patch-scan] $ /bin/bash /tmp/hudson965696927105708175.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 98323    0 98323    0     0   144k      0 --:--:-- --:--:-- --:--:--  144k
100  131k    0  131k    0     0   133k      0 --:--:-- --:--:-- --:--:--  133k
curl: (18) transfer closed with outstanding read data remaining
Could not retrieve available patches from JIRA
Build step 'Execute shell' marked build as failure


[jira] [Created] (KNOX-770) Update NOTICE file with new year

2016-11-02 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created KNOX-770:
--

 Summary: Update NOTICE file with new year
 Key: KNOX-770
 URL: https://issues.apache.org/jira/browse/KNOX-770
 Project: Apache Knox
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


Currently it is "Copyright 2012-2015".
It should be 2016 instead of 2015.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: HA support for YARN and NameNode UI level

2016-11-02 Thread Mohammad Islam
Thanks Sumit for the pointer.
Is there any plan to work on it soon? The reason I'm asking is that, with some 
guidance, I could also take a stab at it if Jeffrey (the current assignee) is busy.

Regards,
Mohammad
  

On Wednesday, November 2, 2016 4:43 PM, Sumit Gupta 
 wrote:
 

 Hi Mohammad,

You are right that we have HA support for various services that is somewhat generic 
and based on connectivity failures. Our WebHDFS HA support is a bit more 
specific and maps to some of the nuanced states that the NameNodes can be in. 
We have yet to put in specific support for YARN, and as you point out there is 
a redirect that occurs that needs to be handled.

There is an open JIRA to add support for Yarn’s resource manager in HA mode.

https://issues.apache.org/jira/browse/KNOX-719
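
(For reference, the existing WebHDFS HA support mentioned above is driven by the HaProvider in the cluster topology. Below is a minimal sketch of the relevant entries; the scratch file path, NameNode host names, and parameter values are illustrative placeholders, not taken from any particular cluster.)

# Sketch only: the HaProvider goes inside the <gateway> section of a topology file
# (e.g. conf/topologies/sandbox.xml) and the WEBHDFS service lists both NameNodes.
# Written to a scratch file here so it can be merged into the topology by hand.
cat > /tmp/webhdfs-ha-snippet.xml <<'EOF'
<provider>
    <role>ha</role>
    <name>HaProvider</name>
    <enabled>true</enabled>
    <param>
        <name>WEBHDFS</name>
        <value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
    </param>
</provider>
<service>
    <role>WEBHDFS</role>
    <url>http://namenode-1:50070/webhdfs</url>
    <url>http://namenode-2:50070/webhdfs</url>
</service>
EOF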

Sumit

From: Mohammad Islam <misla...@yahoo.com>
Reply-To: "u...@knox.apache.org" <u...@knox.apache.org>, Mohammad Islam 
<misla...@yahoo.com>
Date: Wednesday, November 2, 2016 at 6:57 PM
To: "u...@knox.apache.org" <u...@knox.apache.org>, 
"dev@knox.apache.org" <dev@knox.apache.org>
Subject: Re: HA support for YARN and NameNode UI level

Just to add a more specific use case for the YARN UI.

BTW my KNOX service was running behind another FE proxy.

When I typed https:///gateway/sandbox/yarn in my browser, I saw 
the message "This is standby RM. Redirecting to the current active RM: 
http://:8088/cluster".

At this time, my browser also tried to connect to 
"http://:8088/cluster", but it timed out because the YARN internal host 
was behind the firewall and was not directly accessible through the browser.


It looks like Knox was sending a response to the FE proxy with the header "Refresh: 3; 
url=http://:8088/cluster". My question is: is 
there a way for Knox to replace the internal YARN host or endpoint with 
something like knoxy endpoint.
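
(A quick way to see the header in question is to hit the YARN UI path through the gateway and filter the response headers; a sketch, where <knox-host> and the credentials are placeholders:)

# Sketch: show the Refresh header Knox sends back for the YARN UI path.
# <knox-host> is a placeholder for the gateway host sitting behind the FE proxy.
curl -sik -u guest:guest-password "https://<knox-host>:8443/gateway/sandbox/yarn" | grep -i '^Refresh:'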

Regards,
Mohammad




On Tuesday, November 1, 2016 12:24 PM, Mohammad Islam 
<misla...@yahoo.com> wrote:


Hi,
I'm wondering if there is any built-in support in Knox for the (NN/RM) UIs to 
automatically forward to the active NN/RM.

I saw there is HA support at the WebHDFS level, but I'm not sure if I need to do 
anything special for HDFSUI.
The same is true for YARNUI.

Regards,
Mohammad





   

[jira] [Commented] (KNOX-767) Knox transforms XML files written to WebHDFS

2016-11-02 Thread Sumit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630971#comment-15630971
 ] 

Sumit Gupta commented on KNOX-767:
--

That's correct [~jeffreyr97], I think your fix for the CDATA is good for when 
the XML is parsed. That part is a good enhancement to the xmlfilterreader.

> Knox transforms XML files written to WebHDFS
> 
>
> Key: KNOX-767
> URL: https://issues.apache.org/jira/browse/KNOX-767
> Project: Apache Knox
>  Issue Type: Bug
>Reporter: Sumit Gupta
>Assignee: Jeffrey E  Rodriguez
> Fix For: 0.11.0
>
> Attachments: KNOX-767.patch
>
>
> When you write an XML file to WebHDFS through Knox with the Content-Type 
> header set to text/xml or application/xml it is transformed by Knox so that 
> empty tags like  are written as  and CDATA is interpreted. 
> This does not happen if written directly to WebHDFS. For example:
> {code}
> [root@hdp250 ~]# cat xxx
> 
>
>
> 
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT 
> "https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx?op=CREATE&overwrite=true";
> HTTP/1.1 307 Temporary Redirect
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Set-Cookie: 
> JSESSIONID=bt3timb9jl7k546fcntrj8s;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:35 GMT
> Cache-Control: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> X-FRAME-OPTIONS: SAMEORIGIN
> Location: 
> https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48
> Server: Jetty(6.1.26.hwx)
> Content-Length: 0
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT -T xxx -H 
> 'Content-Type: text/xml' 
> "https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48";
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> Date: Thu, 27 Oct 2016 19:25:54 GMT
> Set-Cookie: 
> JSESSIONID=3o27jby7c2a6mdpxducddqac;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:54 GMT
> Location: https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx
> Connection: close
> Server: Jetty(9.2.15.v20160210)
> [root@hdp250 ~]# hdfs dfs -cat /tmp/xxx
> 
>
>wibble
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KNOX-767) Knox transforms XML files written to WebHDFS

2016-11-02 Thread Jeffrey E Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630956#comment-15630956
 ] 

Jeffrey E  Rodriguez commented on KNOX-767:
---

Thanks again Sumit.
I will check your suggestion about bypassing the xmlfilterreader if there are no 
rules or filters to be applied. 
So basically, for that case, we would not use the XML event reader at all, but 
just a straight content reader without any XML parsing.

I think I can check that in a couple of days. That option would work for all 
cases, since we won't do any XML parsing when no rules or filters apply.

In the other case, when we have rules or filters, we may still have to 
figure out how to avoid collapsing start/end tags into an empty-element tag. In XML 
syntax they are equivalent: start/end element tags with no content == empty (null) 
element tag.

I think you didn't report any issue with the CDATA.

> Knox transforms XML files written to WebHDFS
> 
>
> Key: KNOX-767
> URL: https://issues.apache.org/jira/browse/KNOX-767
> Project: Apache Knox
>  Issue Type: Bug
>Reporter: Sumit Gupta
>Assignee: Jeffrey E  Rodriguez
> Fix For: 0.11.0
>
> Attachments: KNOX-767.patch
>
>
> When you write an XML file to WebHDFS through Knox with the Content-Type 
> header set to text/xml or application/xml it is transformed by Knox so that 
> empty tags like  are written as  and CDATA is interpreted. 
> This does not happen if written directly to WebHDFS. For example:
> {code}
> [root@hdp250 ~]# cat xxx
> 
>
>
> 
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT 
> "https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx?op=CREATE&overwrite=true";
> HTTP/1.1 307 Temporary Redirect
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Set-Cookie: 
> JSESSIONID=bt3timb9jl7k546fcntrj8s;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:35 GMT
> Cache-Control: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> X-FRAME-OPTIONS: SAMEORIGIN
> Location: 
> https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48
> Server: Jetty(6.1.26.hwx)
> Content-Length: 0
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT -T xxx -H 
> 'Content-Type: text/xml' 
> "https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48";
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> Date: Thu, 27 Oct 2016 19:25:54 GMT
> Set-Cookie: 
> JSESSIONID=3o27jby7c2a6mdpxducddqac;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:54 GMT
> Location: https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx
> Connection: close
> Server: Jetty(9.2.15.v20160210)
> [root@hdp250 ~]# hdfs dfs -cat /tmp/xxx
> 
>
>wibble
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KNOX-767) Knox transforms XML files written to WebHDFS

2016-11-02 Thread Jeffrey E Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630932#comment-15630932
 ] 

Jeffrey E  Rodriguez commented on KNOX-767:
---

Sumit, thanks for your reviews. The XML event parser is the one that converts an 
empty element into two events: a start and an end element tag.

You are right that, in that case, the start/end tags may get collapsed into an 
empty element tag.
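
(Purely as an illustration of that equivalence, and not Knox's code path: once a document has been turned into start/end events, a serializer is free to emit either form. libxml2's xmllint, for example, collapses the long form on a plain parse/serialize round trip:)

# Illustration only: a parse/serialize round trip may rewrite <a></a> as <a/>,
# because both forms produce the same start-element/end-element events.
echo '<root><a></a></root>' | xmllint -
# Output:
# <?xml version="1.0"?>
# <root><a/></root>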

> Knox transforms XML files written to WebHDFS
> 
>
> Key: KNOX-767
> URL: https://issues.apache.org/jira/browse/KNOX-767
> Project: Apache Knox
>  Issue Type: Bug
>Reporter: Sumit Gupta
>Assignee: Jeffrey E  Rodriguez
> Fix For: 0.11.0
>
> Attachments: KNOX-767.patch
>
>
> When you write an XML file to WebHDFS through Knox with the Content-Type 
> header set to text/xml or application/xml it is transformed by Knox so that 
> empty tags like  are written as  and CDATA is interpreted. 
> This does not happen if written directly to WebHDFS. For example:
> {code}
> [root@hdp250 ~]# cat xxx
> 
>
>
> 
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT 
> "https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx?op=CREATE&overwrite=true";
> HTTP/1.1 307 Temporary Redirect
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Set-Cookie: 
> JSESSIONID=bt3timb9jl7k546fcntrj8s;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:35 GMT
> Cache-Control: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> X-FRAME-OPTIONS: SAMEORIGIN
> Location: 
> https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48
> Server: Jetty(6.1.26.hwx)
> Content-Length: 0
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT -T xxx -H 
> 'Content-Type: text/xml' 
> "https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48";
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> Date: Thu, 27 Oct 2016 19:25:54 GMT
> Set-Cookie: 
> JSESSIONID=3o27jby7c2a6mdpxducddqac;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:54 GMT
> Location: https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx
> Connection: close
> Server: Jetty(9.2.15.v20160210)
> [root@hdp250 ~]# hdfs dfs -cat /tmp/xxx
> 
>
>wibble
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Knox 0.10.0

2016-11-02 Thread Sumit Gupta
Thanks Sandeep for the testing and verification!


On 11/2/16, 4:18 PM, "Sandeep More"  wrote:

>Great ! thanks Sumit.
>
>+1 (non-binding)
>
>* Downloaded and built from source (Java 8)
>* Checked LICENSE and NOTICE files
>* Verified GPG/MD5/SHA signatures
>* Installed pseudo-distributed instance (Mac OS X )
>* Ran through knox tests
>* Checked websocket functionality
>* Checked java script compression (Ambari UI)
>
>Best,
>Sandeep
>
>On Tue, Nov 1, 2016 at 11:10 PM, sumit gupta  wrote:
>
>> A candidate for the Apache Knox 0.10.0 (RC1) release is available at:
>>
>> https://dist.apache.org/repos/dist/dev/knox/knox-0.10.0/
>>
>> The release candidate is a zip archive of the sources in:
>>
>> https://git-wip-us.apache.org/repos/asf/knox.git
>> Branch v0.10.0 (git checkout -b v{gateway-version})
>>
>> The KEYS file for signature validation is available at:
>> https://dist.apache.org/repos/dist/release/knox/KEYS
>>
>> Please vote on releasing this package as Apache Knox 0.10.0.
>> The vote is open for the next 72 hours and passes if a majority of at
>> least three +1 Apache Knox PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Knox 0.10.0
>> [ ] -1 Do not release this package because...
>>



Re: [VOTE] Release Apache Knox 0.10.0

2016-11-02 Thread Sumit Gupta
Thanks for the verification Mohammad and certainly feel free to file a
JIRA for tracking purposes.

Thanks again!
Sumit

On 11/2/16, 5:59 PM, "Mohammad Islam"  wrote:

>+1 (non-binding)
>* Downloaded and successfully built with full test suite
>* Checked LICENSE file
>* Checked NOTICE file -- need to adjust the year from 2015 to 2016. Not a blocker
>* Checked rat.txt; looks fine
>* Checked the release notes in the CHANGES file; they look ok
>
>Very good effort Sumit! One very minor point: should we create a JIRA to
>update the NOTICE file with year 2016 for future releases?
>
>Regards,
>Mohammad
>
> 
>
>On Wednesday, November 2, 2016 1:18 PM, Sandeep More
> wrote:
> 
>
> Great ! thanks Sumit.
>
>+1 (non-binding)
>
>* Downloaded and built from source (Java 8)
>* Checked LICENSE and NOTICE files
>* Verified GPG/MD5/SHA signatures
>* Installed pseudo-distributed instance (Mac OS X )
>* Ran through knox tests
>* Checked websocket functionality
>* Checked java script compression (Ambari UI)
>
>Best,
>Sandeep
>
>On Tue, Nov 1, 2016 at 11:10 PM, sumit gupta  wrote:
>
>> A candidate for the Apache Knox 0.10.0 (RC1) release is available at:
>>
>> https://dist.apache.org/repos/dist/dev/knox/knox-0.10.0/
>>
>> The release candidate is a zip archive of the sources in:
>>
>> https://git-wip-us.apache.org/repos/asf/knox.git
>> Branch v0.10.0 (git checkout -b v{gateway-version})
>>
>> The KEYS file for signature validation is available at:
>> https://dist.apache.org/repos/dist/release/knox/KEYS
>>
>> Please vote on releasing this package as Apache Knox 0.10.0.
>> The vote is open for the next 72 hours and passes if a majority of at
>> least three +1 Apache Knox PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Knox 0.10.0
>> [ ] -1 Do not release this package because...
>>
>
>
>   



Re: HA support for YARN and NameNode UI level

2016-11-02 Thread Sumit Gupta
Hi Mohammad,

You are right that we have HA support for various services that is somewhat generic 
and based on connectivity failures. Our WebHDFS HA support is a bit more 
specific and maps to some of the nuanced states that the NameNodes can be in. 
We have yet to put in specific support for YARN, and as you point out there is 
a redirect that occurs that needs to be handled.

There is an open JIRA to add support for Yarn’s resource manager in HA mode.

https://issues.apache.org/jira/browse/KNOX-719

Sumit

From: Mohammad Islam <misla...@yahoo.com>
Reply-To: "u...@knox.apache.org" <u...@knox.apache.org>, Mohammad Islam 
<misla...@yahoo.com>
Date: Wednesday, November 2, 2016 at 6:57 PM
To: "u...@knox.apache.org" <u...@knox.apache.org>, 
"dev@knox.apache.org" <dev@knox.apache.org>
Subject: Re: HA support for YARN and NameNode UI level

Just to add a more specific use case for the YARN UI.

BTW my KNOX service was running behind another FE proxy.

When I typed https:///gateway/sandbox/yarn in my browser, I saw 
the message "This is standby RM. Redirecting to the current active RM: 
http://:8088/cluster".

At this time, my browser also tried to connect to 
"http://:8088/cluster", but it timed out because the YARN internal host 
was behind the firewall and was not directly accessible through the browser.


It looks like Knox was sending a response to the FE proxy with the header "Refresh: 3; 
url=http://:8088/cluster". My question is: is 
there a way for Knox to replace the internal YARN host or endpoint with 
something like knoxy endpoint.

Regards,
Mohammad




On Tuesday, November 1, 2016 12:24 PM, Mohammad Islam 
<misla...@yahoo.com> wrote:


Hi,
I'm wondering if there is any built-in support in Knox for the (NN/RM) UIs to 
automatically forward to the active NN/RM.

I saw there is HA support at the WebHDFS level, but I'm not sure if I need to do 
anything special for HDFSUI.
The same is true for YARNUI.

Regards,
Mohammad





Re: HA support for YARN and NameNode UI level

2016-11-02 Thread Mohammad Islam
Just to add a more specific use case for the YARN UI.
BTW my KNOX service was running behind another FE proxy.
When I typed https:///gateway/sandbox/yarn in my browser, I saw 
the message "This is standby RM. Redirecting to the current active RM: 
http://:8088/cluster". 
At this time, my browser also tried to connect to 
"http://:8088/cluster", but it timed out because 
the YARN internal host was behind the firewall and was not directly accessible 
through the browser.

It looks like Knox was sending a response to the FE proxy with the header "Refresh: 3; 
url=http://:8088/cluster". My question is: is 
there a way for Knox to replace the internal YARN host or endpoint with 
something like knoxy endpoint.
Regards,
Mohammad

 

On Tuesday, November 1, 2016 12:24 PM, Mohammad Islam  
wrote:
 

Hi,
I'm wondering if there is any built-in support in Knox for the (NN/RM) UIs 
to automatically forward to the active NN/RM.
I saw there is HA support at the WebHDFS level, but I'm not sure if I need to do 
anything special for HDFSUI. The same is true for YARNUI.

Regards,
Mohammad


   

Re: [VOTE] Release Apache Knox 0.10.0

2016-11-02 Thread Mohammad Islam
+1 (non-binding)
* Downloaded and successfully built with full test suite
* Checked LICENSE file
* Checked NOTICE file -- need to adjust the year from 2015 to 2016. Not a blocker
* Checked rat.txt; looks fine
* Checked the release notes in the CHANGES file; they look ok

Very good effort Sumit! One very minor point: should we create a JIRA to update 
the NOTICE file with year 2016 for future releases?

Regards,
Mohammad

 

On Wednesday, November 2, 2016 1:18 PM, Sandeep More 
 wrote:
 

 Great ! thanks Sumit.

+1 (non-binding)

* Downloaded and built from source (Java 8)
* Checked LICENSE and NOTICE files
* Verified GPG/MD5/SHA signatures
* Installed pseudo-distributed instance (Mac OS X )
* Ran through knox tests
* Checked websocket functionality
* Checked java script compression (Ambari UI)

Best,
Sandeep

On Tue, Nov 1, 2016 at 11:10 PM, sumit gupta  wrote:

> A candidate for the Apache Knox 0.10.0 (RC1) release is available at:
>
> https://dist.apache.org/repos/dist/dev/knox/knox-0.10.0/
>
> The release candidate is a zip archive of the sources in:
>
> https://git-wip-us.apache.org/repos/asf/knox.git
> Branch v0.10.0 (git checkout -b v{gateway-version})
>
> The KEYS file for signature validation is available at:
> https://dist.apache.org/repos/dist/release/knox/KEYS
>
> Please vote on releasing this package as Apache Knox 0.10.0.
> The vote is open for the next 72 hours and passes if a majority of at
> least three +1 Apache Knox PMC votes are cast.
>
> [ ] +1 Release this package as Apache Knox 0.10.0
> [ ] -1 Do not release this package because...
>


   

Re: [VOTE] Release Apache Knox 0.10.0

2016-11-02 Thread Mohammad Islam

I tried it many times with the same issue. I checked the central Maven repository 
for "hadoop-common-test". It looks like the only available version is 0.22, not 
0.23. Then I found the jar here. I manually downloaded it and installed it in my 
local Maven repository. It worked!
Thanks for your help.--Mohammad
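
(For anyone hitting the same thing, the manual install step is just the install:install-file invocation from the error message with the downloaded jar plugged in; the local path below is a placeholder for wherever the jar was saved:)

# Sketch: install the manually downloaded jar into the local Maven repository.
mvn install:install-file \
    -DgroupId=org.apache.hadoop \
    -DartifactId=hadoop-common-test \
    -Dversion=0.23.0-SNAPSHOT \
    -Dpackaging=jar \
    -Dfile=/path/to/hadoop-common-test-0.23.0-SNAPSHOT.jar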







 

On Wednesday, November 2, 2016 12:22 PM, Sumit Gupta 
 wrote:
 

 Hey Mohammad,

Thanks for helping with the release. The instructions for building are
here:

https://cwiki.apache.org/confluence/display/KNOX/Build+Process


I essentially do "ant release" or "mvn -Prelease clean install" to get all
the tests to run and packaging to work. The other option is "ant package"
or "mvn -Ppackage clean install", which excludes some of the tests that
take longer to run.
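
(Put another way, the two build flavors described above boil down to:)

# Full build: runs all tests and produces the release packaging.
ant release        # or: mvn -Prelease clean install

# Faster build: packages but skips the longer-running tests.
ant package        # or: mvn -Ppackage clean install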

The error you are getting with "mvn clean package" seems to be related to
a glitch with downloading a dependency from the main maven repo. I have
not seen this myself when I tried, but I likely have that dependency
already in my local maven repo. Please try again and see if you are able
to get the dependency in another attempt.

Sumit.


On 11/2/16, 2:44 PM, "Mohammad Islam"  wrote:

>Thanks Sumit for the initiative.
>I checked out v0.10.0 and ran "mvn clean package". I got the following
>error for module "gateway-test-release". Do I need to do anything? On a
>related note, what is the full "mvn" command to create a distro
>in either tar or zip format? I didn't find it in the dev documents.
>
>
>Failed to execute goal
>org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process
>(default) on project gateway-test-release: Failed to resolve dependencies
>for one or more projects in the reactor. Reason: Missing:
>[ERROR] --
>[ERROR] 1) org.apache.hadoop:hadoop-common-test:jar:0.23.0-SNAPSHOT
>[ERROR]
>[ERROR] Try downloading the file manually from the project website.
>[ERROR]
>[ERROR] Then, install it using the command:
>[ERROR] mvn install:install-file -DgroupId=org.apache.hadoop -DartifactId=hadoop-common-test -Dversion=0.23.0-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file
>[ERROR]
>[ERROR] Alternatively, if you host your own repository you can deploy the file there:
>[ERROR] mvn deploy:deploy-file -DgroupId=org.apache.hadoop -DartifactId=hadoop-common-test -Dversion=0.23.0-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
>[ERROR]
>[ERROR] Path to dependency:
>[ERROR] 1) org.apache.knox:gateway-test-release:pom:0.10.0
>[ERROR] 2) org.apache.hadoop:hadoop-common-test:jar:0.23.0-SNAPSHOT
>[ERROR]
>[ERROR] --
>[ERROR] 1 required artifact is missing.
>
>
> 
>
>    On Tuesday, November 1, 2016 8:10 PM, sumit gupta 
>wrote:
> 
>
> A candidate for the Apache Knox 0.10.0 (RC1) release is available at:
>
>https://dist.apache.org/repos/dist/dev/knox/knox-0.10.0/
>
>The release candidate is a zip archive of the sources in:
>
>https://git-wip-us.apache.org/repos/asf/knox.git
>Branch v0.10.0 (git checkout -b v{gateway-version})
>
>The KEYS file for signature validation is available at:
>https://dist.apache.org/repos/dist/release/knox/KEYS
>
>Please vote on releasing this package as Apache Knox 0.10.0.
>The vote is open for the next 72 hours and passes if a majority of at
>least three +1 Apache Knox PMC votes are cast.
>
>[ ] +1 Release this package as Apache Knox 0.10.0
>[ ] -1 Do not release this package because...
>
>
>  


   

[jira] [Commented] (KNOX-767) Knox transforms XML files written to WebHDFS

2016-11-02 Thread Sumit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630457#comment-15630457
 ] 

Sumit Gupta commented on KNOX-767:
--

BTW, I think that the real fix for this issue is bypassing the xmlfilterreader 
entirely if there are no rules or filters to be applied to the body. Basically, 
the noop filter reader path.

> Knox transforms XML files written to WebHDFS
> 
>
> Key: KNOX-767
> URL: https://issues.apache.org/jira/browse/KNOX-767
> Project: Apache Knox
>  Issue Type: Bug
>Reporter: Sumit Gupta
>Assignee: Jeffrey E  Rodriguez
> Fix For: 0.11.0
>
> Attachments: KNOX-767.patch
>
>
> When you write an XML file to WebHDFS through Knox with the Content-Type 
> header set to text/xml or application/xml it is transformed by Knox so that 
> empty tags like  are written as  and CDATA is interpreted. 
> This does not happen if written directly to WebHDFS. For example:
> {code}
> [root@hdp250 ~]# cat xxx
> 
>
>
> 
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT 
> "https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx?op=CREATE&overwrite=true";
> HTTP/1.1 307 Temporary Redirect
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Set-Cookie: 
> JSESSIONID=bt3timb9jl7k546fcntrj8s;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:35 GMT
> Cache-Control: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> X-FRAME-OPTIONS: SAMEORIGIN
> Location: 
> https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48
> Server: Jetty(6.1.26.hwx)
> Content-Length: 0
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT -T xxx -H 
> 'Content-Type: text/xml' 
> "https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48";
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> Date: Thu, 27 Oct 2016 19:25:54 GMT
> Set-Cookie: 
> JSESSIONID=3o27jby7c2a6mdpxducddqac;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:54 GMT
> Location: https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx
> Connection: close
> Server: Jetty(9.2.15.v20160210)
> [root@hdp250 ~]# hdfs dfs -cat /tmp/xxx
> 
>
>wibble
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KNOX-767) Knox transforms XML files written to WebHDFS

2016-11-02 Thread Sumit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630419#comment-15630419
 ] 

Sumit Gupta commented on KNOX-767:
--

[~jeffreyr97], the patch looks mostly good except for the conversion of an 
empty XML element to the short form, i.e.  is converted to 
. The crux of the issue filed is that the XML body should remain unchanged 
if no transformation is needed. This patch doesn't satisfy that requirement, 
even though I understand that the two are the same thing.

> Knox transforms XML files written to WebHDFS
> 
>
> Key: KNOX-767
> URL: https://issues.apache.org/jira/browse/KNOX-767
> Project: Apache Knox
>  Issue Type: Bug
>Reporter: Sumit Gupta
>Assignee: Jeffrey E  Rodriguez
> Fix For: 0.11.0
>
> Attachments: KNOX-767.patch
>
>
> When you write an XML file to WebHDFS through Knox with the Content-Type 
> header set to text/xml or application/xml it is transformed by Knox so that 
> empty tags like  are written as  and CDATA is interpreted. 
> This does not happen if written directly to WebHDFS. For example:
> {code}
> [root@hdp250 ~]# cat xxx
> 
>
>
> 
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT 
> "https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx?op=CREATE&overwrite=true";
> HTTP/1.1 307 Temporary Redirect
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Set-Cookie: 
> JSESSIONID=bt3timb9jl7k546fcntrj8s;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:35 GMT
> Cache-Control: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Expires: Thu, 27 Oct 2016 19:25:35 GMT
> Date: Thu, 27 Oct 2016 19:25:35 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> X-FRAME-OPTIONS: SAMEORIGIN
> Location: 
> https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48
> Server: Jetty(6.1.26.hwx)
> Content-Length: 0
> [root@hdp250 ~]# curl -u guest:guest-password -i -k -X PUT -T xxx -H 
> 'Content-Type: text/xml' 
> "https://hdp250.local:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/xxx?_=CBEQ-XO5sAM86ubmjRdUYXJEZkpM4Vdv3vmIprBetQwfaKaZNN4uc9O1IN8jujDD9GpPPCDJCKxebul_GlCFxDIZzbkhZ1tnhY5rZ6V12SVJgLo5DxMxC8zECeaM4M8OFLqHxamNnvduuUkD5y23RJczzHHJ9SyYuG6yiCpDJKB_5MffZIWFaEEcYM7jOkjStZHU_7cjIg_vRJL2nFCVTWKf1FPkB00QCbXHN-Ua6MfEG8p2aoQB70tfVHnmhhnBWx2PZARJ-kHp42rrpA1yrI86v3Q-OGI4Ya3pnPRWhPj0wbdDr_p_FDinsw2KRu1_aRSIXXznmJ--aX6TflbBGZvDImkw4x0QM48UGFpOChaLtHk73rlMMUbbbAwOew0gJ2-69PuXiL4QB48";
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> Date: Thu, 27 Oct 2016 19:25:54 GMT
> Set-Cookie: 
> JSESSIONID=3o27jby7c2a6mdpxducddqac;Path=/gateway/default;Secure;HttpOnly
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; 
> Expires=Wed, 26-Oct-2016 19:25:54 GMT
> Location: https://hdp250.local:8443/gateway/default/webhdfs/v1/tmp/xxx
> Connection: close
> Server: Jetty(9.2.15.v20160210)
> [root@hdp250 ~]# hdfs dfs -cat /tmp/xxx
> 
>
>wibble
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Knox 0.10.0

2016-11-02 Thread Sandeep More
Great ! thanks Sumit.

+1 (non-binding)

* Downloaded and built from source (Java 8)
* Checked LICENSE and NOTICE files
* Verified GPG/MD5/SHA signatures
* Installed pseudo-distributed instance (Mac OS X )
* Ran through knox tests
* Checked websocket functionality
* Checked java script compression (Ambari UI)

Best,
Sandeep

On Tue, Nov 1, 2016 at 11:10 PM, sumit gupta  wrote:

> A candidate for the Apache Knox 0.10.0 (RC1) release is available at:
>
> https://dist.apache.org/repos/dist/dev/knox/knox-0.10.0/
>
> The release candidate is a zip archive of the sources in:
>
> https://git-wip-us.apache.org/repos/asf/knox.git
> Branch v0.10.0 (git checkout -b v{gateway-version})
>
> The KEYS file for signature validation is available at:
> https://dist.apache.org/repos/dist/release/knox/KEYS
>
> Please vote on releasing this package as Apache Knox 0.10.0.
> The vote is open for the next 72 hours and passes if a majority of at
> least three +1 Apache Knox PMC votes are cast.
>
> [ ] +1 Release this package as Apache Knox 0.10.0
> [ ] -1 Do not release this package because...
>


Re: [VOTE] Release Apache Knox 0.10.0

2016-11-02 Thread Sumit Gupta
Hey Mohammad,

Thanks for helping with the release. The instructions for building are
here:

https://cwiki.apache.org/confluence/display/KNOX/Build+Process


I essentially do "ant release" or "mvn -Prelease clean install" to get all
the tests to run and packaging to work. The other option is "ant package"
or "mvn -Ppackage clean install", which excludes some of the tests that
take longer to run.

The error you are getting with "mvn clean package" seems to be related to
a glitch with downloading a dependency from the main maven repo. I have
not seen this myself when I tried, but I likely have that dependency
already in my local maven repo. Please try again and see if you are able
to get the dependency in another attempt.

Sumit.


On 11/2/16, 2:44 PM, "Mohammad Islam"  wrote:

>Thanks Sumit for the initiative.
>I checked out v0.10.0 and ran "mvn clean package". I got the following
>error for module "gateway-test-release". Do I need to do anything? On a
>related note, what is the full "mvn" command to create a distro
>in either tar or zip format? I didn't find it in the dev documents.
>
>
>Failed to execute goal
>org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process
>(default) on project gateway-test-release: Failed to resolve dependencies
>for one or more projects in the reactor. Reason: Missing:
>[ERROR] --
>[ERROR] 1) org.apache.hadoop:hadoop-common-test:jar:0.23.0-SNAPSHOT
>[ERROR]
>[ERROR] Try downloading the file manually from the project website.
>[ERROR]
>[ERROR] Then, install it using the command:
>[ERROR] mvn install:install-file -DgroupId=org.apache.hadoop -DartifactId=hadoop-common-test -Dversion=0.23.0-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file
>[ERROR]
>[ERROR] Alternatively, if you host your own repository you can deploy the file there:
>[ERROR] mvn deploy:deploy-file -DgroupId=org.apache.hadoop -DartifactId=hadoop-common-test -Dversion=0.23.0-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
>[ERROR]
>[ERROR] Path to dependency:
>[ERROR] 1) org.apache.knox:gateway-test-release:pom:0.10.0
>[ERROR] 2) org.apache.hadoop:hadoop-common-test:jar:0.23.0-SNAPSHOT
>[ERROR]
>[ERROR] --
>[ERROR] 1 required artifact is missing.
>
>
> 
>
>On Tuesday, November 1, 2016 8:10 PM, sumit gupta 
>wrote:
> 
>
> A candidate for the Apache Knox 0.10.0 (RC1) release is available at:
>
>https://dist.apache.org/repos/dist/dev/knox/knox-0.10.0/
>
>The release candidate is a zip archive of the sources in:
>
>https://git-wip-us.apache.org/repos/asf/knox.git
>Branch v0.10.0 (git checkout -b v{gateway-version})
>
>The KEYS file for signature validation is available at:
>https://dist.apache.org/repos/dist/release/knox/KEYS
>
>Please vote on releasing this package as Apache Knox 0.10.0.
>The vote is open for the next 72 hours and passes if a majority of at
>least three +1 Apache Knox PMC votes are cast.
>
>[ ] +1 Release this package as Apache Knox 0.10.0
>[ ] -1 Do not release this package because...
>
>
>   



Re: [VOTE] Release Apache Knox 0.10.0

2016-11-02 Thread Mohammad Islam
Thanks Sumit for the initiative.
I checked out v0.10.0 and ran "mvn clean package". I got the following error 
for module "gateway-test-release ". Do I need to do anything? In another 
related note, what is the full "mvn" command to create a distro in either tar 
or zip format. I didn't find it in dev documents.


Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project gateway-test-release: Failed to resolve dependencies for one or more 
projects in the reactor. Reason: Missing:
[ERROR] --
[ERROR] 1) org.apache.hadoop:hadoop-common-test:jar:0.23.0-SNAPSHOT
[ERROR]
[ERROR] Try downloading the file manually from the project website.
[ERROR]
[ERROR] Then, install it using the command:
[ERROR] mvn install:install-file -DgroupId=org.apache.hadoop -DartifactId=hadoop-common-test -Dversion=0.23.0-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file
[ERROR]
[ERROR] Alternatively, if you host your own repository you can deploy the file there:
[ERROR] mvn deploy:deploy-file -DgroupId=org.apache.hadoop -DartifactId=hadoop-common-test -Dversion=0.23.0-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
[ERROR]
[ERROR] Path to dependency:
[ERROR] 1) org.apache.knox:gateway-test-release:pom:0.10.0
[ERROR] 2) org.apache.hadoop:hadoop-common-test:jar:0.23.0-SNAPSHOT
[ERROR]
[ERROR] --
[ERROR] 1 required artifact is missing.


 

On Tuesday, November 1, 2016 8:10 PM, sumit gupta  wrote:
 

 A candidate for the Apache Knox 0.10.0 (RC1) release is available at:

https://dist.apache.org/repos/dist/dev/knox/knox-0.10.0/

The release candidate is a zip archive of the sources in:

https://git-wip-us.apache.org/repos/asf/knox.git
Branch v0.10.0 (git checkout -b v{gateway-version})

The KEYS file for signature validation is available at:
https://dist.apache.org/repos/dist/release/knox/KEYS

Please vote on releasing this package as Apache Knox 0.10.0.
The vote is open for the next 72 hours and passes if a majority of at
least three +1 Apache Knox PMC votes are cast.

[ ] +1 Release this package as Apache Knox 0.10.0
[ ] -1 Do not release this package because...


   

[jira] [Commented] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Nishant Bangarwa (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629428#comment-15629428
 ] 

Nishant Bangarwa commented on KNOX-758:
---

A Druid cluster can be set up via the Docker environment at 
https://github.com/druid-io/docker-druid
You can set up a Druid environment using the above and verify the patch. 

I also modified the current Druid integration tests and ran them via the Knox proxy 
gateway to make sure the query/status APIs work fine. 
This required some changes to the Druid integration tests to enable them to connect 
to the Knox proxy - 
https://github.com/druid-io/druid/pull/3646
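
(A rough sketch of standing that environment up, assuming the docker-druid repo builds from its top-level Dockerfile; the image tag and published ports below are illustrative, so check the repo's README for the actual ports it exposes:)

# Sketch only: build and run a local Druid cluster image for verifying the patch.
git clone https://github.com/druid-io/docker-druid.git
cd docker-druid
docker build -t druid-cluster .
# Port mappings are illustrative; the README lists the coordinator/broker/overlord ports.
docker run -d -p 8081:8081 -p 8082:8082 druid-cluster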

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Sumit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629215#comment-15629215
 ] 

Sumit Gupta commented on KNOX-758:
--

[~nishantbangarwa], thanks for the patch! I hope to take a look at it soon. Is 
there a test environment (docker?) that can be used for verification?

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629187#comment-15629187
 ] 

Hadoop QA commented on KNOX-758:


Testing JIRA KNOX-758


Patch 
[KNOX-758.patch|https://issues.apache.org/jira/secure/attachment/12836536/KNOX-758.patch]
 downloaded at Wed Nov  2 14:19:30 UTC 2016



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:red}-1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any line longer than 
120
.{color:red}-1{color} the patch does not add/modify any testcase
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warnings
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
.{color:red}WARNING{color}: the current HEAD has 76 javac warning(s)
{color:green}+1 FINDBUGS{color}
.{color:green}+1{color} the patch does not seem to introduce new Findbugs 
warnings
.{color:red}WARNING: the current HEAD has  Findbugs warning(s), they should 
be addressed ASAP{color}
{color:green}+1 TESTS{color}
.Tests run: 674
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 


{color:red}*-1 Overall result, please check the reported -1(s)*{color}

{color:red}.   There is at least one warning, please check{color}

The full output of the test-patch run is available at

.   https://builds.apache.org/job/Knox-master-patch-verify/92/

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KNOX-769) SecureClusterTest fails for Java 1.7

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629105#comment-15629105
 ] 

Hadoop QA commented on KNOX-769:


Testing JIRA KNOX-769


Patch 
[KNOX-769.002.patch|https://issues.apache.org/jira/secure/attachment/12836559/KNOX-769.002.patch]
 downloaded at Wed Nov  2 13:40:02 UTC 2016



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:green}+1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any line longer than 
120
.{color:green}+1{color} the patch adds/modifies 1 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warnings
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
.{color:red}WARNING{color}: the current HEAD has 76 javac warning(s)
{color:green}+1 FINDBUGS{color}
.{color:green}+1{color} the patch does not seem to introduce new Findbugs 
warnings
.{color:red}WARNING: the current HEAD has  Findbugs warning(s), they should 
be addressed ASAP{color}
{color:green}+1 TESTS{color}
.Tests run: 674
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 


{color:green}*+1 Overall result, good!, no -1s*{color}

{color:red}.   There is at least one warning, please check{color}

The full output of the test-patch run is available at

.   https://builds.apache.org/job/Knox-master-patch-verify/91/

> SecureClusterTest fails for Java 1.7
> 
>
> Key: KNOX-769
> URL: https://issues.apache.org/jira/browse/KNOX-769
> Project: Apache Knox
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
>Reporter: Sandeep More
>Assignee: Sandeep More
> Fix For: 0.10.0
>
> Attachments: KNOX-769.001.patch, KNOX-769.002.patch
>
>
> SecureClusterTest seems to be failing for java versions less than 1.8. 
> The error is :
> [exec] Running org.apache.hadoop.gateway.SecureClusterTest
>  [exec] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 0.001 sec <<< FAILURE! - in org.apache.hadoop.gateway.SecureClusterTest
>  [exec] initializationError(org.apache.hadoop.gateway.SecureClusterTest)  
> Time elapsed: 0.001 sec  <<< ERROR!
>  [exec] java.lang.UnsupportedClassVersionError: 
> org/apache/hadoop/minikdc/MiniKdc : Unsupported major.minor version 52.0
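
(For context, class file major version 52 corresponds to Java 8, so the MiniKdc class here was compiled for 1.8 while the test JVM was 1.7. A quick way to confirm what a given jar targets; the jar path below is a placeholder:)

# Sketch: print the class file version a class in the jar was compiled for.
javap -verbose -cp /path/to/hadoop-minikdc.jar org.apache.hadoop.minikdc.MiniKdc | grep 'major version'
#   major version: 52   -> compiled for Java 8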



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KNOX-769) SecureClusterTest fails for Java 1.7

2016-11-02 Thread Sandeep More (JIRA)

 [ 
https://issues.apache.org/jira/browse/KNOX-769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep More updated KNOX-769:
--
Attachment: KNOX-769.002.patch

Patch with a different format (git diff); no code changes compared to the previous 
one. Somehow, I managed to screw up the previous patch and offend Jenkins :(

> SecureClusterTest fails for Java 1.7
> 
>
> Key: KNOX-769
> URL: https://issues.apache.org/jira/browse/KNOX-769
> Project: Apache Knox
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
>Reporter: Sandeep More
>Assignee: Sandeep More
> Fix For: 0.10.0
>
> Attachments: KNOX-769.001.patch, KNOX-769.002.patch
>
>
> SecureClusterTest seems to be failing for java versions less than 1.8. 
> The error is :
> [exec] Running org.apache.hadoop.gateway.SecureClusterTest
>  [exec] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 0.001 sec <<< FAILURE! - in org.apache.hadoop.gateway.SecureClusterTest
>  [exec] initializationError(org.apache.hadoop.gateway.SecureClusterTest)  
> Time elapsed: 0.001 sec  <<< ERROR!
>  [exec] java.lang.UnsupportedClassVersionError: 
> org/apache/hadoop/minikdc/MiniKdc : Unsupported major.minor version 52.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated KNOX-758:
--
Status: Patch Available  (was: Open)

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated KNOX-758:
--
Status: Open  (was: Patch Available)

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated KNOX-758:
--
Attachment: (was: Druid-Apache-Knox-Mappings.pdf)

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628790#comment-15628790
 ] 

Hadoop QA commented on KNOX-758:


Testing JIRA KNOX-758


Patch 
[Druid-Apache-Knox-Mappings.pdf|https://issues.apache.org/jira/secure/attachment/12836539/Druid-Apache-Knox-Mappings.pdf]
 downloaded at Wed Nov  2 12:06:45 UTC 2016



{color:red}-1{color} Patch failed to apply to head of branch



> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: Druid-Apache-Knox-Mappings.pdf, KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated KNOX-758:
--
Status: Patch Available  (was: Open)

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: Druid-Apache-Knox-Mappings.pdf, KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated KNOX-758:
--
Attachment: Druid-Apache-Knox-Mappings.pdf

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: Druid-Apache-Knox-Mappings.pdf, KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KNOX-758) Add service definition for proxying Druid Queries, Coordinator & Indexer UI

2016-11-02 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/KNOX-758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated KNOX-758:
--
Attachment: KNOX-758.patch

attached patch

> Add service definition for proxying Druid Queries, Coordinator & Indexer UI 
> 
>
> Key: KNOX-758
> URL: https://issues.apache.org/jira/browse/KNOX-758
> Project: Apache Knox
>  Issue Type: New Feature
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
> Attachments: KNOX-758.patch
>
>
> Druid (http://druid.io) is an Open Source distributed Database for providing 
> sub-second OLAP queries. This task is to add service definitions for proxying 
> Druid UI and queries using KNOX gateway. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)