[jira] [Commented] (FALCON-2281) HiveDRTest tests are getting permissions denied

2017-02-10 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-2281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861752#comment-15861752
 ] 

Sowmya Ramesh commented on FALCON-2281:
---

From the stack trace "Message: default/Failed to copy extension artifacts to 
clusterAbe0e31d0-180c1ef2": the issue is seen only if Hive DR was attempted, 
which generates files in /apps/falcon/extensions/mirroring, and a new cluster 
submission is then done.
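To see why the READ fails: the inode in the trace is owned by hrt_qa, group users, with what appears to be mode 700 (the permission string looks truncated in the log), so the falcon user matches neither the owner nor, by assumption here, the group, and falls through to the empty "other" bits. A minimal, hypothetical re-creation of that check (not Falcon or HDFS code):

```python
def can_read(perm, user, groups, owner, group):
    """Mimic the HDFS read check from the trace: pick the owner, group,
    or other triplet of an 'rwxrwxrwx'-style mode and test the 'r' bit."""
    bits = perm.lstrip("-d")  # drop the file-type flag, keep the 9 mode chars
    if user == owner:
        triplet = bits[0:3]
    elif group in groups:
        triplet = bits[3:6]
    else:
        triplet = bits[6:9]
    return triplet[0] == "r"

# The inode in the stack trace: owner hrt_qa, group users, assumed mode 700.
# falcon is neither the owner nor (here, by assumption) in 'users', so READ fails.
print(can_read("-rwx------", "falcon", ["falcon"], "hrt_qa", "users"))  # False
print(can_read("-rwx------", "hrt_qa", ["users"], "hrt_qa", "users"))   # True
```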

> HiveDRTest tests are getting permissions denied
> ---
>
> Key: FALCON-2281
> URL: https://issues.apache.org/jira/browse/FALCON-2281
> Project: Falcon
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: Centos 6
>Reporter: Cheng Xu
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> HiveDRTest tests are getting permissions denied.   
> {noformat}
> 2016-11-17 09:31:22,681 INFO  - [pool-28-thread-1:] ~ 
> java.lang.AssertionError: Status should be SUCCEEDED. Message: default/Failed 
> to copy extension artifacts to clusterAbe0e31d0-180c1ef2
> CausedBy: Permission denied: user=falcon, access=READ, 
> inode="/apps/falcon/extensions/mirroring/Events/HiveDRTest7df2bcd6/HiveDRTest7df2bcd6.id":hrt_qa:users:-rwx------
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1785)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1868)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1837)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
>  expected [SUCCEEDED] but found [FAILED]
> at org.testng.Assert.fail(Assert.java:94)
> at org.testng.Assert.failNotEquals(Assert.java:494)
> at org.testng.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.falcon.regression.core.util.AssertUtil.assertSucceeded(AssertUtil.java:199)
> at 
> org.apache.falcon.regression.core.bundle.Bundle.submitClusters(Bundle.java:597)
> at 
> org.apache.falcon.regression.core.bundle.Bundle.submitClusters(Bundle.java:590)
> at 
> org.apache.falcon.regression.extensions.HiveDRTest.setUp(HiveDRTest.java:92)
> at 
> org.apache.falcon.regression.extensions.HiveDRTest.drChangeCommentAndPropertyTest(HiveDRTest.java:452)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at 
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>  (TestngListener:122)
> {noformat}

[jira] [Created] (FALCON-2282) Update committer affiliations

2017-02-09 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-2282:
-

 Summary: Update committer affiliations  
 Key: FALCON-2282
 URL: https://issues.apache.org/jira/browse/FALCON-2282
 Project: Falcon
  Issue Type: Bug
  Components: site
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


Update committers and committer affiliations  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FALCON-2281) HiveDRTest tests are getting permissions denied

2017-02-09 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh reassigned FALCON-2281:
-

Assignee: Sowmya Ramesh

> HiveDRTest tests are getting permissions denied
> ---
>
> Key: FALCON-2281
> URL: https://issues.apache.org/jira/browse/FALCON-2281
> Project: Falcon
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: Centos 6
>Reporter: Cheng Xu
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> HiveDRTest tests are getting permissions denied.   
> {noformat}
> 2016-11-17 09:31:22,681 INFO  - [pool-28-thread-1:] ~ 
> java.lang.AssertionError: Status should be SUCCEEDED. Message: default/Failed 
> to copy extension artifacts to clusterAbe0e31d0-180c1ef2
> CausedBy: Permission denied: user=falcon, access=READ, 
> inode="/apps/falcon/extensions/mirroring/Events/HiveDRTest7df2bcd6/HiveDRTest7df2bcd6.id":hrt_qa:users:-rwx------
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1785)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1868)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1837)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
>  expected [SUCCEEDED] but found [FAILED]
> at org.testng.Assert.fail(Assert.java:94)
> at org.testng.Assert.failNotEquals(Assert.java:494)
> at org.testng.Assert.assertEquals(Assert.java:123)
> at 
> org.apache.falcon.regression.core.util.AssertUtil.assertSucceeded(AssertUtil.java:199)
> at 
> org.apache.falcon.regression.core.bundle.Bundle.submitClusters(Bundle.java:597)
> at 
> org.apache.falcon.regression.core.bundle.Bundle.submitClusters(Bundle.java:590)
> at 
> org.apache.falcon.regression.extensions.HiveDRTest.setUp(HiveDRTest.java:92)
> at 
> org.apache.falcon.regression.extensions.HiveDRTest.drChangeCommentAndPropertyTest(HiveDRTest.java:452)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at 
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>  (TestngListener:122)
> {noformat}





[jira] [Updated] (FALCON-2280) Unable to create mirror on WASB target due to "Cluster entity not found"

2017-02-09 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-2280:
--
Fix Version/s: trunk

> Unable to create mirror on WASB target due to "Cluster entity not found"
> 
>
> Key: FALCON-2280
> URL: https://issues.apache.org/jira/browse/FALCON-2280
> Project: Falcon
>  Issue Type: Bug
>  Components: extensions
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> 1. Create a mirror which source cluster is the one created , and dummy target 
> cluster 
> 2. Save
> Submitting the mirror from CLI (see log below):
> 1) with WASB url on "targetCluster", it complained that the WASB was invalid 
> or 
> 2) with nothing on "targetCluster", it complained that "Cluster entity not 
> found" 
> # cat /tmp/hdfsMirroring.props
> jobName=testjob 
> jobClusterName=primaryCluster 
> jobValidityStart=2017-01-13T00:00Z 
> jobValidityEnd=2017-12-30T00:00Z 
> jobFrequency=minutes(5) 
> sourceDir=/data/testdir 
> sourceCluster=primaryCluster
> targetDir=/data/testdir 
> targetCluster= 
>  # su - falcon 
> # /usr/hdp/current/falcon-server/bin/falcon extension -submit -extensionName 
> hdfs-mirroring -file /tmp/hdfsMirroring.props
> ERROR: Internal Server Error;Cluster entity not found 
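The empty targetCluster= in the props file is what trips the server. A hypothetical pre-submit sanity check (not part of the Falcon CLI) could flag absent or empty required keys before the round-trip; the required-key list below is inferred from the template above:

```python
REQUIRED = ["jobName", "jobClusterName", "jobValidityStart", "jobValidityEnd",
            "jobFrequency", "sourceDir", "sourceCluster", "targetDir",
            "targetCluster"]

def missing_props(text):
    """Parse .props-style key=value lines and return required keys
    that are absent or have an empty value."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return [k for k in REQUIRED if not props.get(k)]

sample = """jobName=testjob
jobClusterName=primaryCluster
sourceCluster=primaryCluster
targetCluster=
"""
# targetCluster shows up as empty, alongside the other keys the sample omits.
print(missing_props(sample))
```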





[jira] [Updated] (FALCON-2280) Unable to create mirror on WASB target due to "Cluster entity not found"

2017-02-09 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-2280:
--
Description: 
1. Create a mirror whose source cluster is the one already created, and a 
dummy target cluster
2. Save

Submitting the mirror from the CLI (see log below):
1) with a WASB URL in "targetCluster", it complained that the WASB URL was 
invalid, or
2) with nothing in "targetCluster", it complained "Cluster entity not 
found"

# cat /tmp/hdfsMirroring.props
jobName=testjob 
jobClusterName=primaryCluster 
jobValidityStart=2017-01-13T00:00Z 
jobValidityEnd=2017-12-30T00:00Z 
jobFrequency=minutes(5) 
sourceDir=/data/testdir 
sourceCluster=primaryCluster
targetDir=/data/testdir 
targetCluster= 

 # su - falcon 
# /usr/hdp/current/falcon-server/bin/falcon extension -submit -extensionName 
hdfs-mirroring -file /tmp/hdfsMirroring.props
ERROR: Internal Server Error;Cluster entity not found 


  was:
1. Create a mirror which source cluster is the one created , and dummy target 
cluster 
2. Save

Submitting the mirror from CLI (see log below):
1) with WASB url on "targetCluster", it complained that the WASB was invalid 
or 
2) with nothing on "targetCluster", it complained that "Cluster entity not 
found" 


azmbl0002:/var/log/falcon # cat /tmp/hdfsMirroring.props
jobName=testjob 
jobClusterName=primaryCluster 
jobValidityStart=2017-01-13T00:00Z 
jobValidityEnd=2017-12-30T00:00Z 
jobFrequency=minutes(5) 
sourceDir=/data/testdir 
sourceCluster=primaryCluster
targetDir=/data/testdir 
targetCluster= 

azmbl0002:/var/log/falcon # su - falcon 
falcon@azmbl0002:~> /usr/hdp/current/falcon-server/bin/falcon extension -submit 
-extensionName hdfs-mirroring -file /tmp/falcon.conf.txt 
ERROR: Internal Server Error;Cluster entity not found 



> Unable to create mirror on WASB target due to "Cluster entity not found"
> 
>
> Key: FALCON-2280
> URL: https://issues.apache.org/jira/browse/FALCON-2280
> Project: Falcon
>  Issue Type: Bug
>  Components: extensions
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>
> 1. Create a mirror which source cluster is the one created , and dummy target 
> cluster 
> 2. Save
> Submitting the mirror from CLI (see log below):
> 1) with WASB url on "targetCluster", it complained that the WASB was invalid 
> or 
> 2) with nothing on "targetCluster", it complained that "Cluster entity not 
> found" 
> # cat /tmp/hdfsMirroring.props
> jobName=testjob 
> jobClusterName=primaryCluster 
> jobValidityStart=2017-01-13T00:00Z 
> jobValidityEnd=2017-12-30T00:00Z 
> jobFrequency=minutes(5) 
> sourceDir=/data/testdir 
> sourceCluster=primaryCluster
> targetDir=/data/testdir 
> targetCluster= 
>  # su - falcon 
> # /usr/hdp/current/falcon-server/bin/falcon extension -submit -extensionName 
> hdfs-mirroring -file /tmp/hdfsMirroring.props
> ERROR: Internal Server Error;Cluster entity not found 





[jira] [Created] (FALCON-2280) Unable to create mirror on WASB target due to "Cluster entity not found"

2017-02-09 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-2280:
-

 Summary: Unable to create mirror on WASB target due to "Cluster 
entity not found"
 Key: FALCON-2280
 URL: https://issues.apache.org/jira/browse/FALCON-2280
 Project: Falcon
  Issue Type: Bug
  Components: extensions
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh


1. Create a mirror whose source cluster is the one already created, and a 
dummy target cluster
2. Save

Submitting the mirror from the CLI (see log below):
1) with a WASB URL in "targetCluster", it complained that the WASB URL was 
invalid, or
2) with nothing in "targetCluster", it complained "Cluster entity not 
found"


azmbl0002:/var/log/falcon # cat /tmp/hdfsMirroring.props
jobName=testjob 
jobClusterName=primaryCluster 
jobValidityStart=2017-01-13T00:00Z 
jobValidityEnd=2017-12-30T00:00Z 
jobFrequency=minutes(5) 
sourceDir=/data/testdir 
sourceCluster=primaryCluster
targetDir=/data/testdir 
targetCluster= 

azmbl0002:/var/log/falcon # su - falcon 
falcon@azmbl0002:~> /usr/hdp/current/falcon-server/bin/falcon extension -submit 
-extensionName hdfs-mirroring -file /tmp/falcon.conf.txt 
ERROR: Internal Server Error;Cluster entity not found 






[jira] [Updated] (FALCON-2117) Implement X-Frame-Options header for Falcon UI

2016-08-16 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-2117:
--
Description: 
implement the X-Frame-Options for Falcon UI: DENY header in response.

For security this should be implemented to prevent potential security issue 
allowing click-jacking.

1. Access Falcon UI via curl or browser. 
2. Check for X-Frame-Options in the Response Header.

  was:
implement the X-Frame-Options for Falcon UI: DENY header in HTTP response.

For security this should be implemented to prevent potential security issue 
allowing click-jacking.

1. Access Falcon UI via curl or browser. 
2. Check for X-Frame-Options in the Response Header.


> Implement X-Frame-Options header for Falcon UI
> --
>
> Key: FALCON-2117
> URL: https://issues.apache.org/jira/browse/FALCON-2117
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> implement the X-Frame-Options for Falcon UI: DENY header in response.
> For security this should be implemented to prevent potential security issue 
> allowing click-jacking.
> 1. Access Falcon UI via curl or browser. 
> 2. Check for X-Frame-Options in the Response Header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FALCON-2117) Implement X-Frame-Options header for Falcon UI

2016-08-16 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-2117:
-

 Summary: Implement X-Frame-Options header for Falcon UI
 Key: FALCON-2117
 URL: https://issues.apache.org/jira/browse/FALCON-2117
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


Implement the X-Frame-Options: DENY header in the Falcon UI HTTP response.

This should be implemented to prevent a potential security issue allowing 
click-jacking.

1. Access the Falcon UI via curl or a browser.
2. Check for X-Frame-Options in the response headers.
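Step 2 can be automated: given the response headers (e.g. captured with curl -I), a small checker can assert the header is present with a framing-denying value. This is an illustrative helper, not Falcon code:

```python
def frames_blocked(headers):
    """Return True if the response headers opt out of framing
    (X-Frame-Options: DENY or SAMEORIGIN, case-insensitive)."""
    value = {k.lower(): v for k, v in headers.items()}.get("x-frame-options", "")
    return value.strip().upper() in ("DENY", "SAMEORIGIN")

print(frames_blocked({"X-Frame-Options": "DENY"}))    # True
print(frames_blocked({"Content-Type": "text/html"}))  # False -> click-jacking risk
```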





[jira] [Resolved] (FALCON-2115) UT test failure on FalconCSRFFilterTest

2016-08-12 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-2115.
---
Resolution: Fixed

> UT test failure on FalconCSRFFilterTest
> ---
>
> Key: FALCON-2115
> URL: https://issues.apache.org/jira/browse/FALCON-2115
> Project: Falcon
>  Issue Type: Bug
>Reporter: Ying Zheng
>Assignee: Ying Zheng
>
> Need to add the property falcon.security.csrf.header to startup properties 
> when testing custom header for CSRF filter.
> Tests run: 5, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 1.186 sec <<< 
> FAILURE! - in org.apache.falcon.security.FalconCSRFFilterTest
> testCSRFEnabledIncludeCustomHeaderFromBrowser(org.apache.falcon.security.FalconCSRFFilterTest)
>   Time elapsed: 0.012 sec  <<< FAILURE!
> org.mockito.exceptions.verification.NeverWantedButInvoked: 
> mockResponse.sendError(
> 403,
> "Missing Required Header for CSRF Vulnerability Protection"
> );
> Never wanted here:
> -> at 
> org.apache.falcon.security.FalconCSRFFilterTest.testCSRFEnabledIncludeCustomHeaderFromBrowser(FalconCSRFFilterTest.java:83)
> But invoked here:
> -> at 
> org.apache.falcon.security.RestCsrfPreventionFilter$ServletFilterHttpInteraction.sendError(RestCsrfPreventionFilter.java:173)
>   at 
> org.apache.falcon.security.FalconCSRFFilterTest.testCSRFEnabledIncludeCustomHeaderFromBrowser(FalconCSRFFilterTest.java:83)
> testCSRFEnabledNoCustomHeaderFromBrowser(org.apache.falcon.security.FalconCSRFFilterTest)
>   Time elapsed: 0.003 sec  <<< FAILURE!
> org.mockito.exceptions.verification.TooManyActualInvocations: 
> mockResponse.sendError(
> 403,
> "Missing Required Header for CSRF Vulnerability Protection"
> );
> Wanted 1 time:
> -> at 
> org.apache.falcon.security.FalconCSRFFilterTest.testCSRFEnabledNoCustomHeaderFromBrowser(FalconCSRFFilterTest.java:73)
> But was 2 times. Undesired invocation:
> -> at 
> org.apache.falcon.security.RestCsrfPreventionFilter$ServletFilterHttpInteraction.sendError(RestCsrfPreventionFilter.java:173)
>   at 
> org.apache.falcon.security.FalconCSRFFilterTest.testCSRFEnabledNoCustomHeaderFromBrowser(FalconCSRFFilterTest.java:73)





[jira] [Resolved] (FALCON-1945) Handle TDE enabled for feed replication

2016-08-01 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1945.
---
Resolution: Fixed

Implemented as part of FALCON-1944

> Handle TDE enabled for feed replication
> ---
>
> Key: FALCON-1945
> URL: https://issues.apache.org/jira/browse/FALCON-1945
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> If TDE is enabled then DistCP options update should be set to true and skip 
> CRC should be set to true.
> The user can pass whether TDE is enabled as a custom property. If the 
> replication feed has the TDE-enabled custom property, the replication action 
> XML will be updated at runtime to include the TDE option. In FeedReplicator, 
> if TDE is enabled, the DistCP options should be set accordingly.
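The rule described above, TDE enabled implies DistCp's update and skip-CRC behavior, can be sketched as a small option builder. The -update and -skipcrccheck flag names are standard DistCp options; the helper itself is hypothetical, not the FeedReplicator implementation:

```python
def distcp_options(tde_enabled, extra=()):
    """Build a DistCp argument list; TDE forces -update and -skipcrccheck,
    since checksums differ across encryption zones."""
    opts = list(extra)
    if tde_enabled:
        for flag in ("-update", "-skipcrccheck"):
            if flag not in opts:
                opts.append(flag)
    return opts

print(distcp_options(True))                   # ['-update', '-skipcrccheck']
print(distcp_options(False, ["-overwrite"]))  # ['-overwrite']
```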





[jira] [Updated] (FALCON-1944) Ability to provide additional DistCP options for mirroring extensions and feed replication

2016-08-01 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1944:
--
Summary: Ability to provide additional DistCP options for mirroring 
extensions and feed replication  (was: Ability to provide additional DistCP 
options for mirroring extensions)

> Ability to provide additional DistCP options for mirroring extensions and 
> feed replication
> --
>
> Key: FALCON-1944
> URL: https://issues.apache.org/jira/browse/FALCON-1944
> Project: Falcon
>  Issue Type: Improvement
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> Mirroring extensions should have the ability to provide additional DistCp 
> options. Also handle TDE enabled option





[jira] [Updated] (FALCON-1944) Ability to provide additional DistCP options for mirroring extensions and feed replication

2016-08-01 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1944:
--
Description: Mirroring extensions and feed replication should have the 
ability to provide additional DistCp options. Also handle TDE enabled option  
(was: Mirroring extensions should have the ability to provide additional DistCp 
options. Also handle TDE enabled option)

> Ability to provide additional DistCP options for mirroring extensions and 
> feed replication
> --
>
> Key: FALCON-1944
> URL: https://issues.apache.org/jira/browse/FALCON-1944
> Project: Falcon
>  Issue Type: Improvement
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> Mirroring extensions and feed replication should have the ability to provide 
> additional DistCp options. Also handle TDE enabled option





[jira] [Updated] (FALCON-2072) Hive2 URLs in Falcon should allow additional configuration elements in the URL

2016-08-01 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-2072:
--
Fix Version/s: trunk

> Hive2 URLs in Falcon should allow additional configuration elements in the URL
> --
>
> Key: FALCON-2072
> URL: https://issues.apache.org/jira/browse/FALCON-2072
> Project: Falcon
>  Issue Type: Bug
>Affects Versions: 0.6.1, 0.8, 0.9, 0.10
>Reporter: Venkat Ranganathan
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> When configuring the Hive2 URLs, SSL, trust store password may need to be 
> specified.
> Furthermore, the URL can be configured to use zookeeper discovery mode where 
> zookeeper can be used as a single point of configuration changes that are 
> automatically available to the Hive2 clients.   This mode also supports HA 
> for the HiveServer
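For reference, ZooKeeper discovery and SSL settings ride in the session-configuration part of a HiveServer2 JDBC URL as semicolon-separated key=value pairs. A hypothetical URL builder (host names and trust-store path are made up for illustration):

```python
def hive2_url(hosts, database="default", **session_conf):
    """Compose a HiveServer2 JDBC URL whose session part carries extra
    key=value settings (SSL, trust store, ZooKeeper discovery, ...)."""
    conf = ";".join(f"{k}={v}" for k, v in session_conf.items())
    return f"jdbc:hive2://{','.join(hosts)}/{database}" + (";" + conf if conf else "")

# ZooKeeper discovery plus SSL: the client resolves a live HiveServer2
# instance via the quorum, which also gives HA.
print(hive2_url(["zk1:2181", "zk2:2181"], "default",
                serviceDiscoveryMode="zooKeeper",
                zooKeeperNamespace="hiveserver2",
                ssl="true",
                sslTrustStore="/etc/hive/truststore.jks"))
```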





[jira] [Assigned] (FALCON-2072) Hive2 URLs in Falcon should allow additional configuration elements in the URL

2016-07-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh reassigned FALCON-2072:
-

Assignee: Sowmya Ramesh

> Hive2 URLs in Falcon should allow additional configuration elements in the URL
> --
>
> Key: FALCON-2072
> URL: https://issues.apache.org/jira/browse/FALCON-2072
> Project: Falcon
>  Issue Type: Bug
>Affects Versions: 0.6.1, 0.8, 0.9, 0.10
>Reporter: Venkat Ranganathan
>Assignee: Sowmya Ramesh
>
> When configuring the Hive2 URLs, SSL, trust store password may need to be 
> specified.
> Furthermore, the URL can be configured to use zookeeper discovery mode where 
> zookeeper can be used as a single point of configuration changes that are 
> automatically available to the Hive2 clients.   This mode also supports HA 
> for the HiveServer





[jira] [Assigned] (FALCON-2047) HiveDR tests are failed due to data-mirroring does not have correct ownership/permissions

2016-06-30 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh reassigned FALCON-2047:
-

Assignee: Sowmya Ramesh

> HiveDR tests are failed due to data-mirroring does not have correct 
> ownership/permissions
> -
>
> Key: FALCON-2047
> URL: https://issues.apache.org/jira/browse/FALCON-2047
> Project: Falcon
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.10
>Reporter: Murali Ramasami
>Assignee: Sowmya Ramesh
>Priority: Critical
> Fix For: trunk
>
>
> When I executed the HiveDR cases on an unsecured cluster, they failed because 
> /apps/data-mirroring does not have the correct ownership/permissions. Please 
> see the error message below:
> {noformat}
> LogType:stderr
> Log Upload Time:Tue Jun 21 21:02:06 + 2016
> LogLength:860
> Log Contents:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/grid/0/hadoop/yarn/local/filecache/317/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/grid/0/hadoop/yarn/local/filecache/39/mapreduce.tar.gz/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Invalid arguments: Base dir 
> hdfs://mramasami-falcon-multicluster-10.openstacklocal:8020/apps/data-mirroring
>  does not have correct ownership/permissions. Please set group to users and 
> permissions to rwxrwx---
> Intercepting System.exit(-1)
> Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], 
> exit code [-1]
> End of LogType:stderr
> LogType:stdout
> Log Upload Time:Tue Jun 21 21:02:06 + 2016
> LogLength:192002
> Log Contents:
> Oozie Launcher starts
> {noformat}
> Permission :
> {noformat}
> hrt_qa@mramasami-falcon-multicluster-13:/grid/0/hadoopqe/tests/falcon/falcon-regression/falcon-regression/merlin/target/surefire-reports$
>  hdfs dfs -ls /apps/
> Found 3 items
> drwxrwxrwx   - falcon hdfs  0 2016-06-21 19:04 /apps/data-mirroring
> drwxrwxrwx   - falcon hdfs  0 2016-06-21 19:04 /apps/falcon
> drwxr-xr-x   - hdfs   hdfs  0 2016-06-21 19:07 /apps/hive
> {noformat}
> So /apps/data-mirroring has full (777) permissions, but 770 is expected. 
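The mismatch is easier to see in octal: the listing shows drwxrwxrwx on /apps/data-mirroring, while the launcher demands group users and mode rwxrwx---. A small converter from the ls-style permission string (a hypothetical helper):

```python
def to_octal(symbolic):
    """Convert an 'rwxrwxrwx'-style mode into its octal string, e.g. '770'."""
    digits = ""
    for i in range(0, 9, 3):
        triplet = symbolic[i:i + 3]
        # Sum r=4, w=2, x=1 for each set bit in the triplet.
        digits += str(sum(v for c, v in zip(triplet, (4, 2, 1)) if c != "-"))
    return digits

print(to_octal("rwxrwxrwx"))  # '777' -- what the listing shows
print(to_octal("rwxrwx---"))  # '770' -- what the launcher expects
```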





[jira] [Assigned] (FALCON-2046) HDFS Replication failing in secure Mode

2016-06-30 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh reassigned FALCON-2046:
-

Assignee: Sowmya Ramesh

> HDFS Replication failing in secure Mode
> ---
>
> Key: FALCON-2046
> URL: https://issues.apache.org/jira/browse/FALCON-2046
> Project: Falcon
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.10
>Reporter: Murali Ramasami
>Assignee: Sowmya Ramesh
>Priority: Critical
> Fix For: trunk, 0.10
>
>
> HDFS Replication failing in secure Mode with the Authentication required 
> error.
> Scenario:
> HDFS replication from single source to single target
> Extension property file
> {noformat}
> [hrt_qa@nat-os-r6-upns-falcon-multicluster-14 hadoopqe]$ cat 
> /tmp/falcon-extension/HdfsExtensionTesta1b85962.properties
> #
> # Licensed to the Apache Software Foundation (ASF) under one
> # or more contributor license agreements.  See the NOTICE file
> # distributed with this work for additional information
> # regarding copyright ownership.  The ASF licenses this file
> # to you under the Apache License, Version 2.0 (the
> # "License"); you may not use this file except in compliance
> # with the License.  You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> # NOTE: This is a TEMPLATE file which can be copied and edited
> jobName = HdfsExtensionTesta1b85962
> jobClusterName = Aa39cd108-b6c6e9ff
> jobValidityStart = 2016-06-21T04:27Z
> jobValidityEnd = 2016-06-21T04:47Z
> jobFrequency = days(1)
> sourceCluster = Aa39cd108-b6c6e9ff
> sourceDir = /tmp/falcon-regression/HdfsExtensionTest/HdfsDR/source
> targetCluster = Aa39cd108-38a7a9cc
> targetDir = /tmp/falcon-regression/HdfsExtensionTest/HdfsDR/target
> jobAclOwner = hrt_qa
> jobAclGroup = users
> jobAclPermission = *
> extensionName = hdfs-mirroring
> jobProcessFrequency = minutes(5)
> {noformat}
> Please see the application log below:
> {noformat}
> =
> >>> Invoking Main class now >>>
> Fetching child yarn jobs
> tag id : oozie-62d207ec7d2c61db9dd3220d0fda7c22
> Child yarn jobs are found -
> Main class: org.apache.falcon.replication.FeedReplicator
> Arguments :
> -Dmapred.job.queue.name=default
> -Dmapred.job.priority=NORMAL
> -maxMaps
> 1
> -mapBandwidth
> 100
> -sourcePaths
> 
> webhdfs://nat-os-r6-upns-falcon-multicluster-14.openstacklocal:20070/tmp/falcon-regression/HdfsExtensionTest/HdfsDR/source
> -targetPath
> 
> hdfs://nat-os-r6-upns-falcon-multicluster-10.openstacklocal:8020/tmp/falcon-regression/HdfsExtensionTest/HdfsDR/target
> -falconFeedStorageType
> FILESYSTEM
> -availabilityFlag
> NA
> -counterLogDir
> 
> hdfs://nat-os-r6-upns-falcon-multicluster-14.openstacklocal:8020/tmp/fs/falcon/workflows/process/HdfsExtensionTesta1b85962/logs/job-2016-06-21-04-27/
> <<< Invocation of Main class completed <<<
> Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], 
> main() threw exception, org.apache.hadoop.security.AccessControlException: 
> Authentication required
> org.apache.oozie.action.hadoop.JavaMainException: 
> org.apache.hadoop.security.AccessControlException: Authentication required
> at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:59)
> at 
> org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:51)
> at org.apache.oozie.action.hadoop.JavaMain.main(JavaMain.java:35)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:242)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at 

[jira] [Updated] (FALCON-1914) Extensions: Hive mirroring should work for secure to unsecure & viceversa

2016-06-30 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1914:
--
Description: 
Extensions: Update the code to make it work for secure to unsecure & vice versa.
Also clean up HiveDR artifacts.

  was:Extensions: Update the code to make it work for secure to unsecure & 
vice versa


> Extensions: Hive mirroring should work for secure to unsecure & viceversa
> -
>
> Key: FALCON-1914
> URL: https://issues.apache.org/jira/browse/FALCON-1914
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk, 0.10
>
>
> Extensions: Update the code to make it work for secure to unsecure & vice versa
> Also clean up HiveDR artifacts



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FALCON-1914) Extensions: Hive mirroring should work for secure to unsecure & viceversa

2016-06-30 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1914:
--
Fix Version/s: 0.10

> Extensions: Hive mirroring should work for secure to unsecure & viceversa
> -
>
> Key: FALCON-1914
> URL: https://issues.apache.org/jira/browse/FALCON-1914
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk, 0.10
>
>
> Extensions: Update the code to make it work for secure to unsecure & vice versa





[jira] [Resolved] (FALCON-1915) Hdfs mirroring extension should work in secure environment

2016-06-30 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1915.
---
Resolution: Duplicate

> Hdfs mirroring extension should work in secure environment
> --
>
> Key: FALCON-1915
> URL: https://issues.apache.org/jira/browse/FALCON-1915
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk, 0.10
>
>
> Hdfs mirroring extension should work in secure environment





[jira] [Updated] (FALCON-2056) HiveDR doesn't work with multiple users

2016-06-30 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-2056:
--
Fix Version/s: 0.10

> HiveDR doesn't work with multiple users
> ---
>
> Key: FALCON-2056
> URL: https://issues.apache.org/jira/browse/FALCON-2056
> Project: Falcon
>  Issue Type: Bug
>  Components: replication
>Affects Versions: trunk
>Reporter: Murali Ramasami
>Assignee: Sowmya Ramesh
> Fix For: trunk, 0.10
>
>
> When Falcon HiveDR jobs are executed by multiple users, the jobs fail with 
> the following error:
> {noformat}
> Invalid arguments: Permission denied: user=ambari-qa, access=WRITE, 
> inode="/apps/data-mirroring/Events/sowmya-hivedr":hrt_qa:users:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.j
> {noformat}





[jira] [Updated] (FALCON-2056) HiveDR doesn't work with multiple users

2016-06-30 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-2056:
--
Description: 
When Falcon HiveDR jobs are executed by multiple users, the jobs fail with 
the following error:

{noformat}
Invalid arguments: Permission denied: user=ambari-qa, access=WRITE, 
inode="/apps/data-mirroring/Events/sowmya-hivedr":hrt_qa:users:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.j
{noformat}



  was:
When Falcon HiveDR jobs are executed by multiple users, the jobs fail with 
the following error:

{noformat}
Invalid arguments: Permission denied: user=ambari-qa, access=WRITE, 
inode="/apps/data-mirroring/Events/sowmya-hivedr":hrt_qa:users:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.j
{noformat}

Reason:
/apps/data-mirroring/Events is created by the first user who submits the 
job, say ambari-qa. The permission of this dir was 
inode="/apps/data-mirroring/Events":ambari-qa:users:drwxr-xr-x. Now if a second 
user, hrt_qa, submits the job, they won't have write access. The permission 
should be set to 770 for /apps/data-mirroring/Events/.


> HiveDR doesn't work with multiple users
> ---
>
> Key: FALCON-2056
> URL: https://issues.apache.org/jira/browse/FALCON-2056
> Project: Falcon
>  Issue Type: Bug
>  Components: replication
>Affects Versions: trunk
>Reporter: Murali Ramasami
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> When Falcon HiveDR jobs are executed by multiple users, the jobs fail 
> with the following error:
> {noformat}
> Invalid arguments: Permission denied: user=ambari-qa, access=WRITE, 
> inode="/apps/data-mirroring/Events/sowmya-hivedr":hrt_qa:users:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.j
> {noformat}





[jira] [Assigned] (FALCON-2056) HiveDR doesn't work with multiple users

2016-06-30 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh reassigned FALCON-2056:
-

Assignee: Sowmya Ramesh

> HiveDR doesn't work with multiple users
> ---
>
> Key: FALCON-2056
> URL: https://issues.apache.org/jira/browse/FALCON-2056
> Project: Falcon
>  Issue Type: Bug
>  Components: replication
>Affects Versions: trunk
>Reporter: Murali Ramasami
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> When Falcon HiveDR jobs are executed by multiple users, the jobs fail 
> with the following error:
> {noformat}
> Invalid arguments: Permission denied: user=ambari-qa, access=WRITE, 
> inode="/apps/data-mirroring/Events/sowmya-hivedr":hrt_qa:users:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.j
> {noformat}
> Reason:
> /apps/data-mirroring/Events is created by the first user who submits 
> the job, say ambari-qa. The permission of this dir was 
> inode="/apps/data-mirroring/Events":ambari-qa:users:drwxr-xr-x. Now if a 
> second user, hrt_qa, submits the job, they won't have write access. The 
> permission should be set to 770 for /apps/data-mirroring/Events/.





[jira] [Commented] (FALCON-2056) HiveDR doesn't work with multiple users

2016-06-30 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15357877#comment-15357877
 ] 

Sowmya Ramesh commented on FALCON-2056:
---

Reason:
/apps/data-mirroring/Events is created by the first user who submits the 
job, say ambari-qa. The permission of this dir was 
inode="/apps/data-mirroring/Events":ambari-qa:users:drwxr-xr-x. Now if a second 
user, hrt_qa, submits the job, they won't have write access. The permission 
should be set to 770 for /apps/data-mirroring/Events/.
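The comment above reduces to plain POSIX mode arithmetic: drwxr-xr-x (755) grants WRITE only to the owner, while 770 extends it to the owning group. A minimal sketch of that rule (illustrative only, not the actual HDFS FSPermissionChecker code):

```python
import stat

def may_write(mode: int, is_owner: bool, in_group: bool) -> bool:
    """Evaluate the WRITE bit the way a POSIX-style checker does:
    owner bits take precedence, then group bits, then 'other' bits."""
    if is_owner:
        return bool(mode & stat.S_IWUSR)
    if in_group:
        return bool(mode & stat.S_IWGRP)
    return bool(mode & stat.S_IWOTH)

# /apps/data-mirroring/Events owned by hrt_qa:users with mode 755:
# another member of 'users' (e.g. ambari-qa) is denied WRITE ...
assert not may_write(0o755, is_owner=False, in_group=True)
# ... whereas with mode 770 every member of 'users' may write.
assert may_write(0o770, is_owner=False, in_group=True)
```

This is why the proposed fix of creating the shared staging dir with 770 lets a second submitter in the same group write under it.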

> HiveDR doesn't work with multiple users
> ---
>
> Key: FALCON-2056
> URL: https://issues.apache.org/jira/browse/FALCON-2056
> Project: Falcon
>  Issue Type: Bug
>  Components: replication
>Affects Versions: trunk
>Reporter: Murali Ramasami
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> When Falcon HiveDR jobs are executed by multiple users, the jobs fail 
> with the following error:
> {noformat}
> Invalid arguments: Permission denied: user=ambari-qa, access=WRITE, 
> inode="/apps/data-mirroring/Events/sowmya-hivedr":hrt_qa:users:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.j
> {noformat}





[jira] [Resolved] (FALCON-2032) Update the extension documentation to add ExtensionService before ConfigurationStore in startup properties

2016-06-16 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-2032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-2032.
---
Resolution: Fixed

> Update the extension documentation to add ExtensionService before 
> ConfigurationStore in startup properties
> --
>
> Key: FALCON-2032
> URL: https://issues.apache.org/jira/browse/FALCON-2032
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk, 0.10
>
>
> If Falcon cluster entities exist before the Extension service is enabled, 
> the reload step that copies extension artifacts to all Falcon cluster 
> entities will be skipped when ExtensionService is registered after the 
> configuration store.
> To avoid this, it should be mandatory to add ExtensionService before 
> ConfigurationStore in the startup properties when the extension service is 
> enabled.





[jira] [Created] (FALCON-2032) Update the extension documentation to add ExtensionService before ConfigurationStore in startup properties

2016-06-15 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-2032:
-

 Summary: Update the extension documentation to add 
ExtensionService before ConfigurationStore in startup properties
 Key: FALCON-2032
 URL: https://issues.apache.org/jira/browse/FALCON-2032
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk, 0.10


If Falcon cluster entities exist before the Extension service is enabled, the 
reload step that copies extension artifacts to all Falcon cluster entities 
will be skipped when ExtensionService is registered after the configuration 
store.

To avoid this, it should be mandatory to add ExtensionService before 
ConfigurationStore in the startup properties when the extension service is 
enabled.
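Concretely, the ordering in startup.properties would look like the fragment below. This is an illustrative excerpt: the `*.application.services` key is assumed from a typical Falcon startup.properties, and the fully qualified class names are elided since the exact list varies per deployment:

```properties
# ExtensionService must appear before ConfigurationStore so that the
# extension-artifact copy step runs during reload.
*.application.services=...ExtensionService,\
                       ...ConfigurationStore,\
                       ...
```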





[jira] [Created] (FALCON-2028) HDFS extension: Validate and append/remove the scheme://authority for the paths

2016-06-14 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-2028:
-

 Summary: HDFS extension: Validate and append/remove the 
scheme://authority for the paths
 Key: FALCON-2028
 URL: https://issues.apache.org/jira/browse/FALCON-2028
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


In hdfs-mirroring-workflow.xml the target endpoint (${targetClusterFS}) is prepended only for the target dir.

{noformat}
-sourcePaths
${sourceDir}
-targetPath
${targetClusterFS}${targetDir}
{noformat}

In the request, if sourceDir is not sent as a fully qualified path, the 
extension should generate fully qualified paths before submitting the process. 
For targetDir, if a fully qualified path is passed, the scheme://authority 
should be removed.

e.g.

{noformat}
sourceDir=/user/ambari-qaqa/dr/test/primaryCluster/input1, 
/user/ambari-qaqa/dr/test/primaryCluster/input2

After processing:
sourceDir=hdfs://240.0.0.10:8020/user/ambari-qaqa/dr/test/primaryCluster/input1,
 hdfs://240.0.0.10:8020/user/ambari-qaqa/dr/test/primaryCluster/input2
{noformat}

Also, if the user has already specified a fully qualified path, the 
scheme://authority should not be added again for sourceDir, but it should be 
removed for targetDir:

{noformat}
sourceDir=hdfs://240.0.0.10:8020/user/ambari-qa/dr/test/primaryCluster/input1, 
hdfs://240.0.0.10:8020/user/ambari-qa/dr/test/primaryCluster/input2

After processing:
sourceDir=hdfs://240.0.0.10:8020/user/ambari-qa/dr/test/primaryCluster/input1, 
hdfs://240.0.0.10:8020/user/ambari-qa/dr/test/primaryCluster/input2


targetDir=hdfs://240.0.0.11:8020/user/ambari-qa/dr/test/backupCluster/outputs

After processing:
targetDir=/user/ambari-qa/dr/test/backupCluster/outputs

{noformat}
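The append/remove rules above can be sketched in Python. This is an illustrative model only — Falcon's extension code is Java, and the helper names here (`qualify_source_dirs`, `strip_target_authority`) are hypothetical:

```python
from urllib.parse import urlparse

def qualify_source_dirs(source_dirs: str, source_fs: str) -> str:
    """Prepend scheme://authority (e.g. hdfs://nn:8020) to every
    comma-separated source path that is not already fully qualified."""
    qualified = []
    for path in source_dirs.split(","):
        path = path.strip()
        # Paths that already carry a scheme (hdfs://, webhdfs://, ...)
        # are left untouched.
        if urlparse(path).scheme:
            qualified.append(path)
        else:
            qualified.append(source_fs.rstrip("/") + path)
    return ",".join(qualified)

def strip_target_authority(target_dir: str) -> str:
    """Remove scheme://authority from the target dir, because the
    workflow template already prepends ${targetClusterFS}."""
    parsed = urlparse(target_dir.strip())
    return parsed.path if parsed.scheme else target_dir.strip()
```

With these rules, a bare source path gains the source cluster's scheme://authority while a fully qualified targetDir collapses back to its bare path, matching the examples above.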





[jira] [Created] (FALCON-2017) Fix HiveDR extension issues

2016-06-07 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-2017:
-

 Summary: Fix HiveDR extension issues
 Key: FALCON-2017
 URL: https://issues.apache.org/jira/browse/FALCON-2017
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk, 0.10


1> Fix the argument mismatch issue
2> Clean up the extension WF artifacts
3> Fix warnings
4> Fix to allow multiple DB mirroring





[jira] [Comment Edited] (FALCON-1894) HDFS Data replication cannot be initiated independent of Oozie server location

2016-06-06 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313342#comment-15313342
 ] 

Sowmya Ramesh edited comment on FALCON-1894 at 6/6/16 5:57 PM:
---

[~abush]: The issue will be seen only if HDFS DR is initiated from the Falcon 
UI, as the UI doesn't use the logic in FalconClient.
Recipes submitted through the Falcon CLI will not hit this issue.

The code below in HdfsReplicationRecipeTool takes care of generating the 
absolute path:

{noformat}
// Construct fully qualified hdfs src path
String srcPaths = recipeProperties.getProperty(
        HdfsReplicationRecipeToolOptions.REPLICATION_SOURCE_DIR.getName());
StringBuilder absoluteSrcPaths = new StringBuilder();
String srcFsPath = recipeProperties.getProperty(
        HdfsReplicationRecipeToolOptions.REPLICATION_SOURCE_CLUSTER_FS_WRITE_ENDPOINT.getName());
if (StringUtils.isNotEmpty(srcFsPath)) {
    srcFsPath = StringUtils.removeEnd(srcFsPath, File.separator);
}
if (StringUtils.isNotEmpty(srcPaths)) {
    String[] paths = srcPaths.split(COMMA_SEPARATOR);

    for (String path : paths) {
        StringBuilder srcpath = new StringBuilder(srcFsPath);
        srcpath.append(path.trim());
        srcpath.append(COMMA_SEPARATOR);
        absoluteSrcPaths.append(srcpath);
    }
}
{noformat}

The fix you have recommended will not work if the user wants to replicate 
multiple comma-separated directories.
A better workaround is for the user to provide the fully qualified source 
paths to replicate, instead of modifying the workflow template, as below:

{noformat}
drSourceDir=hftp://c6402.ambari.apache.org:50070/user/ambari-qa/test1/, 
hftp://c6402.ambari.apache.org:50070/user/ambari-qa/test2/
{noformat}

Fully qualified paths should be specified only for drSourceDir and not for 
drTargetDir.

With https://issues.apache.org/jira/browse/FALCON-1107, recipes are renamed to 
extensions and processing is moved to the server side. The logic to construct 
the fully qualified path is on the server side and hence works for both the UI 
and the command line.

I am closing this as fixed, as the issue will not be seen after the 0.10 release.

Thanks!






was (Author: sowmyaramesh):
[~abush]: The issue will be seen only if HDFS DR is initiated from the Falcon 
UI, as the UI doesn't use the logic in FalconClient.
Recipes submitted through the Falcon CLI will not hit this issue.

The code below in HdfsReplicationRecipeTool takes care of generating the 
absolute path:

{noformat}
// Construct fully qualified hdfs src path
String srcPaths = recipeProperties.getProperty(
        HdfsReplicationRecipeToolOptions.REPLICATION_SOURCE_DIR.getName());
StringBuilder absoluteSrcPaths = new StringBuilder();
String srcFsPath = recipeProperties.getProperty(
        HdfsReplicationRecipeToolOptions.REPLICATION_SOURCE_CLUSTER_FS_WRITE_ENDPOINT.getName());
if (StringUtils.isNotEmpty(srcFsPath)) {
    srcFsPath = StringUtils.removeEnd(srcFsPath, File.separator);
}
if (StringUtils.isNotEmpty(srcPaths)) {
    String[] paths = srcPaths.split(COMMA_SEPARATOR);

    for (String path : paths) {
        StringBuilder srcpath = new StringBuilder(srcFsPath);
        srcpath.append(path.trim());
        srcpath.append(COMMA_SEPARATOR);
        absoluteSrcPaths.append(srcpath);
    }
}
{noformat}

The fix you have recommended will not work if the user wants to replicate 
multiple comma-separated directories.
A better workaround is for the user to provide the fully qualified source 
paths to replicate, instead of modifying the workflow template, as below:

{noformat}
drSourceDir=hftp://c6402.ambari.apache.org:50070/user/ambari-qa/test1/, 
hftp://c6402.ambari.apache.org:50070/user/ambari-qa/test2/
{noformat}

With https://issues.apache.org/jira/browse/FALCON-1107, recipes are renamed to 
extensions and processing is moved to the server side. The logic to construct 
the fully qualified path is on the server side and hence works for both the UI 
and the command line.

I am closing this as fixed, as the issue will not be seen after the 0.10 release.

Thanks!





> HDFS Data replication cannot be initiated independent of Oozie server location
> --
>
> Key: FALCON-1894
> URL: https://issues.apache.org/jira/browse/FALCON-1894
> Project: Falcon
>  Issue Type: Bug
>  Components: general
>Affects Versions: trunk
>Reporter: Alex Bush
>Assignee: Sowmya Ramesh
>Priority: Minor
> Fix For: trunk, 0.10
>
>
> The HDFS mirroring scripts allow replication between two clusters.
> Currently, even though the UI allows the replication in any direction 

[jira] [Resolved] (FALCON-1894) HDFS Data replication cannot be initiated independent of Oozie server location

2016-06-02 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1894.
---
   Resolution: Fixed
Fix Version/s: 0.10

> HDFS Data replication cannot be initiated independent of Oozie server location
> --
>
> Key: FALCON-1894
> URL: https://issues.apache.org/jira/browse/FALCON-1894
> Project: Falcon
>  Issue Type: Bug
>  Components: general
>Affects Versions: trunk
>Reporter: Alex Bush
>Assignee: Sowmya Ramesh
>Priority: Minor
> Fix For: trunk, 0.10
>
>
> The HDFS mirroring scripts allow replication between two clusters.
> Currently, even though the UI allows replication in any direction between 
> clusters, independent of which cluster the Falcon and Oozie servers belong 
> to, this is not observed: the source cluster is always the cluster hosting 
> the Oozie/Falcon server.
> Steps to reproduce:
> 1) Define both clusters in Falcon server on cluster 2
> 2) Set up HDFS mirroring in Falcon server on cluster 2 from Cluster 1 to 
> Cluster 2 and set to run on Oozie server of Cluster 2
> Result:
> Falcon will replicate data from Cluster 2 to Cluster 2
> Cause:
> In hdfs-replication-workflow.xml, the source dir should be defined like the 
> target dir by including clusterfs:
> https://github.com/apache/falcon/blob/master/addons/recipes/hdfs-replication/src/main/resources/hdfs-replication-workflow.xml#L63
> ${drSourceDir}
> should be
> ${drSourceClusterFS}${drSourceDir}
>  
> like
> https://github.com/apache/falcon/blob/master/addons/recipes/hdfs-replication/src/main/resources/hdfs-replication-workflow.xml#L65





[jira] [Assigned] (FALCON-1894) HDFS Data replication cannot be initiated independent of Oozie server location

2016-06-02 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh reassigned FALCON-1894:
-

Assignee: Sowmya Ramesh

> HDFS Data replication cannot be initiated independent of Oozie server location
> --
>
> Key: FALCON-1894
> URL: https://issues.apache.org/jira/browse/FALCON-1894
> Project: Falcon
>  Issue Type: Bug
>  Components: general
>Affects Versions: trunk
>Reporter: Alex Bush
>Assignee: Sowmya Ramesh
>Priority: Minor
> Fix For: trunk
>
>
> The HDFS mirroring scripts allow replication between two clusters.
> Currently, even though the UI allows replication in any direction between 
> clusters, independent of which cluster the Falcon and Oozie servers belong 
> to, this is not observed: the source cluster is always the cluster hosting 
> the Oozie/Falcon server.
> Steps to reproduce:
> 1) Define both clusters in Falcon server on cluster 2
> 2) Set up HDFS mirroring in Falcon server on cluster 2 from Cluster 1 to 
> Cluster 2 and set to run on Oozie server of Cluster 2
> Result:
> Falcon will replicate data from Cluster 2 to Cluster 2
> Cause:
> In hdfs-replication-workflow.xml, the source dir should be defined like the 
> target dir by including clusterfs:
> https://github.com/apache/falcon/blob/master/addons/recipes/hdfs-replication/src/main/resources/hdfs-replication-workflow.xml#L63
> ${drSourceDir}
> should be
> ${drSourceClusterFS}${drSourceDir}
>  
> like
> https://github.com/apache/falcon/blob/master/addons/recipes/hdfs-replication/src/main/resources/hdfs-replication-workflow.xml#L65





[jira] [Commented] (FALCON-1894) HDFS Data replication cannot be initiated independent of Oozie server location

2016-06-02 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313342#comment-15313342
 ] 

Sowmya Ramesh commented on FALCON-1894:
---

[~abush]: The issue will be seen only if HDFS DR is initiated from the Falcon 
UI, as the UI doesn't use the logic in FalconClient.
Recipes submitted through the Falcon CLI will not hit this issue.

The code below in HdfsReplicationRecipeTool takes care of generating the 
absolute path:

{noformat}
// Construct fully qualified hdfs src path
String srcPaths = recipeProperties.getProperty(
        HdfsReplicationRecipeToolOptions.REPLICATION_SOURCE_DIR.getName());
StringBuilder absoluteSrcPaths = new StringBuilder();
String srcFsPath = recipeProperties.getProperty(
        HdfsReplicationRecipeToolOptions.REPLICATION_SOURCE_CLUSTER_FS_WRITE_ENDPOINT.getName());
if (StringUtils.isNotEmpty(srcFsPath)) {
    srcFsPath = StringUtils.removeEnd(srcFsPath, File.separator);
}
if (StringUtils.isNotEmpty(srcPaths)) {
    String[] paths = srcPaths.split(COMMA_SEPARATOR);

    for (String path : paths) {
        StringBuilder srcpath = new StringBuilder(srcFsPath);
        srcpath.append(path.trim());
        srcpath.append(COMMA_SEPARATOR);
        absoluteSrcPaths.append(srcpath);
    }
}
{noformat}

The fix you have recommended will not work if the user wants to replicate 
multiple comma-separated directories.
A better workaround is for the user to provide the fully qualified source 
paths to replicate, instead of modifying the workflow template, as below:

{noformat}
drSourceDir=hftp://c6402.ambari.apache.org:50070/user/ambari-qa/test1/, 
hftp://c6402.ambari.apache.org:50070/user/ambari-qa/test2/
{noformat}

With https://issues.apache.org/jira/browse/FALCON-1107, recipes are renamed to 
extensions and processing is moved to the server side. The logic to construct 
the fully qualified path is on the server side and hence works for both the UI 
and the command line.

I am closing this as fixed, as the issue will not be seen after the 0.10 release.

Thanks!





> HDFS Data replication cannot be initiated independent of Oozie server location
> --
>
> Key: FALCON-1894
> URL: https://issues.apache.org/jira/browse/FALCON-1894
> Project: Falcon
>  Issue Type: Bug
>  Components: general
>Affects Versions: trunk
>Reporter: Alex Bush
>Priority: Minor
> Fix For: trunk
>
>
> The HDFS mirroring scripts allow replication between two clusters.
> Currently, even though the UI allows replication in any direction between 
> clusters, independent of which cluster the Falcon and Oozie servers belong 
> to, this is not observed: the source cluster is always the cluster hosting 
> the Oozie/Falcon server.
> Steps to reproduce:
> 1) Define both clusters in Falcon server on cluster 2
> 2) Set up HDFS mirroring in Falcon server on cluster 2 from Cluster 1 to 
> Cluster 2 and set to run on Oozie server of Cluster 2
> Result:
> Falcon will replicate data from Cluster 2 to Cluster 2
> Cause:
> In hdfs-replication-workflow.xml, the source dir should be defined like the 
> target dir by including clusterfs:
> https://github.com/apache/falcon/blob/master/addons/recipes/hdfs-replication/src/main/resources/hdfs-replication-workflow.xml#L63
> ${drSourceDir}
> should be
> ${drSourceClusterFS}${drSourceDir}
>  
> like
> https://github.com/apache/falcon/blob/master/addons/recipes/hdfs-replication/src/main/resources/hdfs-replication-workflow.xml#L65





[jira] [Updated] (FALCON-1894) HDFS Data replication cannot be initiated independent of Oozie server location

2016-06-02 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1894:
--
Fix Version/s: trunk

> HDFS Data replication cannot be initiated independent of Oozie server location
> --
>
> Key: FALCON-1894
> URL: https://issues.apache.org/jira/browse/FALCON-1894
> Project: Falcon
>  Issue Type: Bug
>  Components: general
>Affects Versions: trunk
>Reporter: Alex Bush
>Priority: Minor
> Fix For: trunk
>
>
> The HDFS mirroring scripts allow replication between two clusters.
> Currently, even though the UI allows replication in any direction between 
> clusters, independent of which cluster the Falcon and Oozie servers belong 
> to, this is not observed: the source cluster is always the cluster hosting 
> the Oozie/Falcon server.
> Steps to reproduce:
> 1) Define both clusters in Falcon server on cluster 2
> 2) Set up HDFS mirroring in Falcon server on cluster 2 from Cluster 1 to 
> Cluster 2 and set to run on Oozie server of Cluster 2
> Result:
> Falcon will replicate data from Cluster 2 to Cluster 2
> Cause:
> In hdfs-replication-workflow.xml, the source dir should be defined like the 
> target dir by including clusterfs:
> https://github.com/apache/falcon/blob/master/addons/recipes/hdfs-replication/src/main/resources/hdfs-replication-workflow.xml#L63
> ${drSourceDir}
> should be
> ${drSourceClusterFS}${drSourceDir}
>  
> like
> https://github.com/apache/falcon/blob/master/addons/recipes/hdfs-replication/src/main/resources/hdfs-replication-workflow.xml#L65





[jira] [Resolved] (FALCON-1962) Extension related bugs

2016-05-20 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1962.
---
Resolution: Fixed

> Extension related bugs
> --
>
> Key: FALCON-1962
> URL: https://issues.apache.org/jira/browse/FALCON-1962
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>
> 1> If authorization is disabled, the ACL is not retained in the generated 
> extension job
> 2> Hive mirroring extension submission fails if clusters named 
> sourceCluster, targetCluster, or jobClusterName don't exist
> {noformat}
> "Cluster entity sourceCluster  not found"
> {noformat}
> 3> HiveMirror extension update API fails since a timestamp is added 
> to jobName before the update





[jira] [Created] (FALCON-1972) Handling cases when Extension service or "extension.store.uri" is not present in startup properties

2016-05-18 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1972:
-

 Summary: Handling cases when Extension service or 
"extension.store.uri" is not present in startup properties
 Key: FALCON-1972
 URL: https://issues.apache.org/jira/browse/FALCON-1972
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh


1> If ExtensionService is added in application.services but 
"extension.store.uri" is not set, ExtensionService is initialized and the 
Falcon service is up and running. The expected behavior is that 
ExtensionService should not be initialized and Falcon server start should fail.

2> The Extension REST API/CLI should return an "ExtensionService is not 
enabled" error if ExtensionService is not added in the startup properties. 
Today it returns "Property extension.store.uri not set in startup properties".
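The expected fail-fast behavior can be sketched as follows. This is an illustrative Python model, not Falcon's actual service initializer; the class and function names are hypothetical, but the two error strings are the ones discussed above:

```python
class FalconException(Exception):
    """Stand-in for Falcon's server-side exception type."""

def init_extension_service(startup_props: dict) -> None:
    """Case 1: server start must fail when ExtensionService is enabled
    but its mandatory extension.store.uri property is missing."""
    if not startup_props.get("extension.store.uri"):
        raise FalconException(
            "Property extension.store.uri not set in startup properties")

def extension_api_guard(registered_services: set) -> None:
    """Case 2: the REST API/CLI should report the service-level error
    when ExtensionService itself is not registered, instead of the
    store-uri message."""
    if "ExtensionService" not in registered_services:
        raise FalconException("ExtensionService is not enabled")
```

The point of the split is that each entry point reports the error for its own precondition rather than a downstream one.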





[jira] [Updated] (FALCON-1944) Ability to provide additional DistCP options for mirroring extensions

2016-05-17 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1944:
--
Issue Type: Improvement  (was: Bug)

> Ability to provide additional DistCP options for mirroring extensions
> -
>
> Key: FALCON-1944
> URL: https://issues.apache.org/jira/browse/FALCON-1944
> Project: Falcon
>  Issue Type: Improvement
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> Mirroring extensions should have the ability to provide additional DistCp 
> options, and should also handle the TDE-enabled option.





[jira] [Resolved] (FALCON-1943) Extension API/CLI fails when authorization is enabled

2016-05-17 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1943.
---
Resolution: Fixed

commit 4ccd3b119389e9707cc69ef89e6d38dd6639cb32

> Extension API/CLI fails when authorization is enabled
> -
>
> Key: FALCON-1943
> URL: https://issues.apache.org/jira/browse/FALCON-1943
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>
> Extension REST APIs fail with Authorization failed : 400/Illegal resource
> {noformat}
> 2016-05-06 00:42:59,616 INFO  - [main:] ~ Generated 
> unique-instance-id=ac165f6421108-os-r6-mhblfu-falcon-3-31 
> (GraphDatabaseConfiguration:1469)
> 2016-05-06 00:42:59,660 INFO  - [main:] ~ Initiated backend operations thread 
> pool of size 4 (Backend:168)
> 2016-05-06 00:42:59,744 INFO  - [main:] ~ Loaded unidentified ReadMarker 
> start time Timepoint[1462495379743000 μs] into 
> com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller@3520a0d7 
> (KCVSLog:733)
> 2016-05-06 00:42:59,947 INFO  - [main:] ~ Indexes already exist for graph 
> (MetadataMappingService:148)
> 2016-05-06 00:42:59,948 INFO  - [main:] ~ Initialized graph db: 
> titangraph[berkeleyje:/hadoop/falcon/data/lineage/graphdb] 
> (MetadataMappingService:93)
> 2016-05-06 00:42:59,950 INFO  - [main:] ~ Init vertex property keys: [name, 
> type, version, timestamp] (MetadataMappingService:96)
> 2016-05-06 00:42:59,953 INFO  - [main:] ~ Init edge property keys: [name, 
> type, version, timestamp] (MetadataMappingService:99)
> 2016-05-06 00:42:59,958 INFO  - [main:] ~ Service initialized: 
> org.apache.falcon.metadata.MetadataMappingService (ServiceInitializer:52)
> 2016-05-06 00:42:59,961 INFO  - [main:] ~ FalconAuditFilter initialization 
> started (FalconAuditFilter:49)
> 2016-05-06 00:42:59,965 INFO  - [main:] ~ FalconAuthenticationFilter 
> initialization started (FalconAuthenticationFilter:83)
> 2016-05-06 00:42:59,985 INFO  - [main:] ~ Falcon is running with 
> authorization enabled (FalconAuthorizationFilter:62)
> 2016-05-06 00:43:01,623 INFO  - [main:] ~ Started 
> SocketConnector@0.0.0.0:15000 (log:67)
> 2016-05-06 00:43:08,832 INFO  - [1706292388@qtp-1610702581-0 - 
> 88ba0bf7-2300-48a5-92a7-fede4775b523:] ~ HttpServletRequest RemoteUser is 
> null (Servlets:47)
> 2016-05-06 00:43:08,836 INFO  - [1706292388@qtp-1610702581-0 - 
> 88ba0bf7-2300-48a5-92a7-fede4775b523:] ~ HttpServletRequest user.name param 
> value is falcon (Servlets:53)
> 2016-05-06 00:43:08,838 DEBUG - [1706292388@qtp-1610702581-0 - 
> 88ba0bf7-2300-48a5-92a7-fede4775b523:] ~ Audit: falcon/172.22.95.100 
> performed request 
> http://os-r6-mhblfu-falcon-3-3.openstacklocal:15000/api/options?user.name=falcon
>  (172.22.95.100) at time 2016-05-06T00:43Z (FalconAuditFilter:86)
> 2016-05-06 00:43:08,875 INFO  - [1706292388@qtp-1610702581-0 - 
> ba3e5632-1457-415e-bdd4-4a81d31c4f70:] ~ HttpServletRequest RemoteUser is 
> null (Servlets:47)
> 2016-05-06 00:43:08,876 INFO  - [1706292388@qtp-1610702581-0 - 
> ba3e5632-1457-415e-bdd4-4a81d31c4f70:] ~ HttpServletRequest user.name param 
> value is falcon (Servlets:53)
> 2016-05-06 00:43:08,876 DEBUG - [1706292388@qtp-1610702581-0 - 
> ba3e5632-1457-415e-bdd4-4a81d31c4f70:] ~ Audit: falcon/172.22.95.100 
> performed request 
> http://os-r6-mhblfu-falcon-3-3.openstacklocal:15000/api/options?user.name=falcon=falcon
>  (172.22.95.100) at time 2016-05-06T00:43Z (FalconAuditFilter:86)
> 2016-05-06 00:43:08,917 INFO  - [1706292388@qtp-1610702581-0 - 
> 87cabbc5-87d9-45d3-aac8-5620c7cee207:] ~ HttpServletRequest RemoteUser is 
> falcon (Servlets:47)
> 2016-05-06 00:43:08,919 INFO  - [1706292388@qtp-1610702581-0 - 
> 87cabbc5-87d9-45d3-aac8-5620c7cee207:falcon:GET//extension/enumerate/] ~ 
> Logging in falcon (CurrentUser:65)
> 2016-05-06 00:43:08,919 INFO  - [1706292388@qtp-1610702581-0 - 
> 87cabbc5-87d9-45d3-aac8-5620c7cee207:falcon:GET//extension/enumerate/] ~ 
> Request from authenticated user: falcon, URL=/api/extension/enumerate/, doAs 
> user: null (FalconAuthenticationFilter:185)
> 2016-05-06 00:43:08,921 INFO  - [1706292388@qtp-1610702581-0 - 
> 87cabbc5-87d9-45d3-aac8-5620c7cee207:falcon:GET//extension/enumerate/] ~ 
> Authorizing user=falcon against request=RequestParts{resource='extension', 
> action='enumerate'} (FalconAuthorizationFilter:78)
> 2016-05-06 00:43:08,922 ERROR - [1706292388@qtp-1610702581-0 - 
> 87cabbc5-87d9-45d3-aac8-5620c7cee207:falcon:GET//extension/enumerate/] ~ 
> Authorization failed : 400/Illegal resource: extension 
> (FalconAuthorizationFilter:155)
> 2016-05-06 00:43:08,931 DEBUG - [1706292388@qtp-1610702581-0 - 
> 87cabbc5-87d9-45d3-aac8-5620c7cee207:] ~ Audit: falcon/172.22.95.100 
> performed request 
> 

[jira] [Created] (FALCON-1962) Extension related bugs

2016-05-16 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1962:
-

 Summary: Extension related bugs
 Key: FALCON-1962
 URL: https://issues.apache.org/jira/browse/FALCON-1962
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh


1> If authorization is disabled, the ACL is not retained in the generated 
extension job.

2> Hive mirroring extension submission fails if clusters named sourceCluster, 
targetCluster, or jobClusterName don't exist:

{noformat}
"Cluster entity sourceCluster  not found"
{noformat}


3> The HiveMirror extension update API fails because a timestamp is appended 
to the jobName before the update.






[jira] [Created] (FALCON-1945) Handle TDE enabled for feed replication

2016-05-10 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1945:
-

 Summary: Handle TDE enabled for feed replication
 Key: FALCON-1945
 URL: https://issues.apache.org/jira/browse/FALCON-1945
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


If TDE is enabled, then the DistCp options "update" and "skip CRC" should both 
be set to true.

The user can indicate that TDE is enabled via a custom property. If the 
replication feed has the TDE-enabled custom property, the replication action 
XML will be updated at runtime to include the TDE option, and in 
FeedReplicator the DistCp options should be built accordingly when TDE is 
enabled.
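The conditional flag handling described above can be sketched as follows. This is a minimal illustration, not Falcon's actual code: the property name "tdeEncryptionEnabled" and the DistcpFlags holder are assumptions for the example.

```java
import java.util.Properties;

public class TdeDistcpSketch {
    // Hypothetical flag holder mirroring DistCp's -update / -skipcrccheck options.
    static class DistcpFlags {
        boolean syncFolder; // corresponds to DistCp -update
        boolean skipCrc;    // corresponds to DistCp -skipcrccheck
    }

    // If the (illustrative) "tdeEncryptionEnabled" custom property is true,
    // force update=true and skipCRC=true: files copied across encryption
    // zones have different checksums, so the CRC comparison must be skipped.
    static DistcpFlags buildFlags(Properties feedProps) {
        DistcpFlags flags = new DistcpFlags();
        if (Boolean.parseBoolean(feedProps.getProperty("tdeEncryptionEnabled", "false"))) {
            flags.syncFolder = true;
            flags.skipCrc = true;
        }
        return flags;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("tdeEncryptionEnabled", "true");
        DistcpFlags flags = buildFlags(props);
        System.out.println(flags.syncFolder + " " + flags.skipCrc);
    }
}
```

The key design point is that both flags must flip together: -update alone would still fail the copy, because identical files in an encryption zone never match by CRC.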





[jira] [Created] (FALCON-1944) Ability to provide additional DistCP options for mirroring extensions

2016-05-10 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1944:
-

 Summary: Ability to provide additional DistCP options for 
mirroring extensions
 Key: FALCON-1944
 URL: https://issues.apache.org/jira/browse/FALCON-1944
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


Mirroring extensions should have the ability to provide additional DistCp 
options. The TDE-enabled option should also be handled.
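One way to plumb user-supplied options through is sketched below. This is a hedged illustration, not Falcon's actual API: the helper name buildDistcpArgs and the idea of a single whitespace-separated options string are assumptions for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class DistcpOptionsSketch {
    // Hypothetical helper: merges extra options supplied by the user
    // (e.g. via an illustrative "distcpOptions" extension property)
    // into the base DistCp argument list, ahead of source and target.
    static List<String> buildDistcpArgs(String source, String target, String extraOptions) {
        List<String> args = new ArrayList<>();
        if (extraOptions != null && !extraOptions.trim().isEmpty()) {
            // Pass each whitespace-separated user option straight through to DistCp.
            for (String opt : extraOptions.trim().split("\\s+")) {
                args.add(opt);
            }
        }
        args.add(source);
        args.add(target);
        return args;
    }

    public static void main(String[] args) {
        System.out.println(buildDistcpArgs("hdfs://src/a", "hdfs://tgt/a", "-update -skipcrccheck"));
    }
}
```

With this shape, options like -update, -skipcrccheck, or bandwidth limits can be forwarded without the extension having to enumerate every DistCp flag itself.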





[jira] [Created] (FALCON-1943) Extension API/CLI fails when authorization is enabled

2016-05-10 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1943:
-

 Summary: Extension API/CLI fails when authorization is enabled
 Key: FALCON-1943
 URL: https://issues.apache.org/jira/browse/FALCON-1943
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh


Extension REST APIs fail with "Authorization failed : 400/Illegal resource"
{noformat}
2016-05-06 00:42:59,616 INFO  - [main:] ~ Generated 
unique-instance-id=ac165f6421108-os-r6-mhblfu-falcon-3-31 
(GraphDatabaseConfiguration:1469)
2016-05-06 00:42:59,660 INFO  - [main:] ~ Initiated backend operations thread 
pool of size 4 (Backend:168)
2016-05-06 00:42:59,744 INFO  - [main:] ~ Loaded unidentified ReadMarker start 
time Timepoint[1462495379743000 μs] into 
com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller@3520a0d7 
(KCVSLog:733)
2016-05-06 00:42:59,947 INFO  - [main:] ~ Indexes already exist for graph 
(MetadataMappingService:148)
2016-05-06 00:42:59,948 INFO  - [main:] ~ Initialized graph db: 
titangraph[berkeleyje:/hadoop/falcon/data/lineage/graphdb] 
(MetadataMappingService:93)
2016-05-06 00:42:59,950 INFO  - [main:] ~ Init vertex property keys: [name, 
type, version, timestamp] (MetadataMappingService:96)
2016-05-06 00:42:59,953 INFO  - [main:] ~ Init edge property keys: [name, type, 
version, timestamp] (MetadataMappingService:99)
2016-05-06 00:42:59,958 INFO  - [main:] ~ Service initialized: 
org.apache.falcon.metadata.MetadataMappingService (ServiceInitializer:52)
2016-05-06 00:42:59,961 INFO  - [main:] ~ FalconAuditFilter initialization 
started (FalconAuditFilter:49)
2016-05-06 00:42:59,965 INFO  - [main:] ~ FalconAuthenticationFilter 
initialization started (FalconAuthenticationFilter:83)
2016-05-06 00:42:59,985 INFO  - [main:] ~ Falcon is running with authorization 
enabled (FalconAuthorizationFilter:62)
2016-05-06 00:43:01,623 INFO  - [main:] ~ Started SocketConnector@0.0.0.0:15000 
(log:67)
2016-05-06 00:43:08,832 INFO  - [1706292388@qtp-1610702581-0 - 
88ba0bf7-2300-48a5-92a7-fede4775b523:] ~ HttpServletRequest RemoteUser is null 
(Servlets:47)
2016-05-06 00:43:08,836 INFO  - [1706292388@qtp-1610702581-0 - 
88ba0bf7-2300-48a5-92a7-fede4775b523:] ~ HttpServletRequest user.name param 
value is falcon (Servlets:53)
2016-05-06 00:43:08,838 DEBUG - [1706292388@qtp-1610702581-0 - 
88ba0bf7-2300-48a5-92a7-fede4775b523:] ~ Audit: falcon/172.22.95.100 performed 
request 
http://os-r6-mhblfu-falcon-3-3.openstacklocal:15000/api/options?user.name=falcon
 (172.22.95.100) at time 2016-05-06T00:43Z (FalconAuditFilter:86)
2016-05-06 00:43:08,875 INFO  - [1706292388@qtp-1610702581-0 - 
ba3e5632-1457-415e-bdd4-4a81d31c4f70:] ~ HttpServletRequest RemoteUser is null 
(Servlets:47)
2016-05-06 00:43:08,876 INFO  - [1706292388@qtp-1610702581-0 - 
ba3e5632-1457-415e-bdd4-4a81d31c4f70:] ~ HttpServletRequest user.name param 
value is falcon (Servlets:53)
2016-05-06 00:43:08,876 DEBUG - [1706292388@qtp-1610702581-0 - 
ba3e5632-1457-415e-bdd4-4a81d31c4f70:] ~ Audit: falcon/172.22.95.100 performed 
request 
http://os-r6-mhblfu-falcon-3-3.openstacklocal:15000/api/options?user.name=falcon=falcon
 (172.22.95.100) at time 2016-05-06T00:43Z (FalconAuditFilter:86)
2016-05-06 00:43:08,917 INFO  - [1706292388@qtp-1610702581-0 - 
87cabbc5-87d9-45d3-aac8-5620c7cee207:] ~ HttpServletRequest RemoteUser is 
falcon (Servlets:47)
2016-05-06 00:43:08,919 INFO  - [1706292388@qtp-1610702581-0 - 
87cabbc5-87d9-45d3-aac8-5620c7cee207:falcon:GET//extension/enumerate/] ~ 
Logging in falcon (CurrentUser:65)
2016-05-06 00:43:08,919 INFO  - [1706292388@qtp-1610702581-0 - 
87cabbc5-87d9-45d3-aac8-5620c7cee207:falcon:GET//extension/enumerate/] ~ 
Request from authenticated user: falcon, URL=/api/extension/enumerate/, doAs 
user: null (FalconAuthenticationFilter:185)
2016-05-06 00:43:08,921 INFO  - [1706292388@qtp-1610702581-0 - 
87cabbc5-87d9-45d3-aac8-5620c7cee207:falcon:GET//extension/enumerate/] ~ 
Authorizing user=falcon against request=RequestParts{resource='extension', 
action='enumerate'} (FalconAuthorizationFilter:78)
2016-05-06 00:43:08,922 ERROR - [1706292388@qtp-1610702581-0 - 
87cabbc5-87d9-45d3-aac8-5620c7cee207:falcon:GET//extension/enumerate/] ~ 
Authorization failed : 400/Illegal resource: extension 
(FalconAuthorizationFilter:155)
2016-05-06 00:43:08,931 DEBUG - [1706292388@qtp-1610702581-0 - 
87cabbc5-87d9-45d3-aac8-5620c7cee207:] ~ Audit: falcon/172.22.95.100 performed 
request 
http://os-r6-mhblfu-falcon-3-3.openstacklocal:15000/api/extension/enumerate/ 
(172.22.95.100) at time 2016-05-06T00:43Z (FalconAuditFilter:86)
{noformat}
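The failure mode in the log above can be illustrated with a sketch. The resource set below is illustrative, not the actual FalconAuthorizationFilter code: if the authorization layer validates the first path component against a fixed set of known resources and "extension" was never added to it, every /api/extension/* call is rejected with 400 before authorization proper even runs.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ResourceCheckSketch {
    // Illustrative whitelist of API resources known to the authorization
    // layer; "extension" is missing, so all extension calls fail validation.
    static final Set<String> KNOWN_RESOURCES =
            new HashSet<>(Arrays.asList("admin", "entities", "instance", "metadata"));

    // Returns null if the resource is recognized, or an error message
    // mimicking the log line otherwise.
    static String validateResource(String pathInfo) {
        String resource = pathInfo.split("/")[1]; // "/extension/enumerate/" -> "extension"
        if (!KNOWN_RESOURCES.contains(resource)) {
            return "Authorization failed : 400/Illegal resource: " + resource;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(validateResource("/extension/enumerate/"));
        System.out.println(validateResource("/entities/list/"));
    }
}
```

Under this reading, the fix is simply to register the new resource name with the authorization layer rather than to change any ACL.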





[jira] [Resolved] (FALCON-1767) Improve Falcon retention policy documentation

2016-05-03 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1767.
---
Resolution: Fixed

> Improve Falcon retention policy documentation
> -
>
> Key: FALCON-1767
> URL: https://issues.apache.org/jira/browse/FALCON-1767
> Project: Falcon
>  Issue Type: Sub-task
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> Improve Falcon retention policy documentation with a few examples.





[jira] [Resolved] (FALCON-1922) Documentation for extension repository management

2016-05-02 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1922.
---
Resolution: Fixed

Fixed as part of FALCON-1106.

> Documentation for extension repository management
> -
>
> Key: FALCON-1922
> URL: https://issues.apache.org/jira/browse/FALCON-1922
> Project: Falcon
>  Issue Type: Sub-task
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> Add documentation for extension repository management





[jira] [Updated] (FALCON-1106) Documentation for extension

2016-05-02 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1106:
--
Component/s: (was: client)

> Documentation for extension
> ---
>
> Key: FALCON-1106
> URL: https://issues.apache.org/jira/browse/FALCON-1106
> Project: Falcon
>  Issue Type: Sub-task
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: extensions
>
> Placeholder for documenting recipes





[jira] [Updated] (FALCON-1106) Documentation for extension

2016-05-02 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1106:
--
Labels: extensions  (was: Recipe)

> Documentation for extension
> ---
>
> Key: FALCON-1106
> URL: https://issues.apache.org/jira/browse/FALCON-1106
> Project: Falcon
>  Issue Type: Sub-task
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: extensions
>
> Placeholder for documenting recipes





[jira] [Created] (FALCON-1922) Documentation for extension repository management

2016-04-19 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1922:
-

 Summary: Documentation for extension repository management
 Key: FALCON-1922
 URL: https://issues.apache.org/jira/browse/FALCON-1922
 Project: Falcon
  Issue Type: Sub-task
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


Add documentation for extension repository management





[jira] [Created] (FALCON-1921) Server side extension repository management REST API and CLI IT tests

2016-04-19 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1921:
-

 Summary: Server side extension repository management REST API and 
CLI IT tests 
 Key: FALCON-1921
 URL: https://issues.apache.org/jira/browse/FALCON-1921
 Project: Falcon
  Issue Type: Sub-task
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


Add IT tests for extension repository management.





[jira] [Updated] (FALCON-1914) Extensions: Hive mirroring should work for secure to unsecure & viceversa

2016-04-19 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1914:
--
Summary: Extensions: Hive mirroring should work for secure to unsecure & 
viceversa  (was: Extensions: Update the code to make it work for secure to 
unsecure & viceversa)

> Extensions: Hive mirroring should work for secure to unsecure & viceversa
> -
>
> Key: FALCON-1914
> URL: https://issues.apache.org/jira/browse/FALCON-1914
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> Extensions: Update the code to make it work for secure to unsecure & vice versa





[jira] [Created] (FALCON-1917) Fix build failure

2016-04-19 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1917:
-

 Summary: Fix build failure
 Key: FALCON-1917
 URL: https://issues.apache.org/jira/browse/FALCON-1917
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


{noformat}
[INFO] Apache Falcon . FAILURE [01:59 min]
[INFO] Apache Falcon UI .. SUCCESS [ 11.010 s]
[INFO] Build Tools ... SUCCESS [  5.004 s]
[INFO] Apache Falcon Java client . SUCCESS [  8.537 s]
[INFO] Apache Falcon Metrics . SUCCESS [  4.022 s]
[INFO] Apache Falcon Hadoop Dependencies . SUCCESS [  2.343 s]
[INFO] Apache Falcon Test Utility  SUCCESS [  1.859 s]
[INFO] Apache Falcon Commons . SUCCESS [ 13.568 s]
[INFO] Apache Falcon CLI client .. SUCCESS [  4.252 s]
[INFO] Apache Falcon Oozie EL Extension .. SUCCESS [  2.597 s]
[INFO] Apache Falcon Embedded Hadoop - Test Cluster .. SUCCESS [ 22.301 s]
[INFO] Apache Falcon Sharelib Hive - Test Cluster  SUCCESS [  0.706 s]
[INFO] Apache Falcon Sharelib Pig - Test Cluster . SUCCESS [  0.189 s]
[INFO] Apache Falcon Sharelib Hcatalog - Test Cluster  SUCCESS [  0.358 s]
[INFO] Apache Falcon Sharelib Oozie - Test Cluster ... SUCCESS [  0.118 s]
[INFO] Apache Falcon Test Tools - Test Cluster ... SUCCESS [  0.075 s]
[INFO] Apache Falcon Messaging ... SUCCESS [  2.590 s]
[INFO] Apache Falcon extensions .. SUCCESS [  2.439 s]
[INFO] Apache Falcon LIfecycle Module  SUCCESS [  4.377 s]
[INFO] Apache Falcon Oozie Adaptor ... SUCCESS [  7.804 s]
[INFO] Apache Falcon Scheduler ... SUCCESS [  6.852 s]
[INFO] Apache Falcon Acquisition . SUCCESS [  0.442 s]
[INFO] Apache Falcon Distcp Replication .. SUCCESS [  2.249 s]
[INFO] Apache Falcon Retention ... SUCCESS [  2.218 s]
[INFO] Apache Falcon Archival  SUCCESS [  0.458 s]
[INFO] Apache Falcon Rerun ... SUCCESS [  3.479 s]
[INFO] Apache Falcon Prism ... SUCCESS [ 46.954 s]
[INFO] falcon-unit ... SUCCESS [  2.798 s]
[INFO] Apache Falcon Web Application . SUCCESS [ 13.455 s]
[INFO] Apache Falcon Documentation ... SUCCESS [  8.802 s]
[INFO] Apache Falcon Distro .. SUCCESS [ 35.181 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 05:39 min
[INFO] Finished at: 2016-04-19T07:36:25+00:00
[INFO] Final Memory: 833M/2269M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.6:assembly (default-cli) on 
project falcon-main: Failed to create assembly: Error adding file to archive: 

 -> [Help 1]
{noformat}





[jira] [Updated] (FALCON-1907) Package new CLI module added

2016-04-15 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1907:
--
Description: After moving CLI-related files to the cli module, it's not 
packaged and hence falcon-cli-.jar is missing from the packaging.  (was: After 
moving CLI related files to client module its not packaged and hence 
falcon-cli-.jar is missing in the packaging.)

> Package new CLI module added
> 
>
> Key: FALCON-1907
> URL: https://issues.apache.org/jira/browse/FALCON-1907
> Project: Falcon
>  Issue Type: Bug
>  Components: client
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> After moving CLI-related files to the cli module, it's not packaged and 
> hence falcon-cli-.jar is missing from the packaging.





[jira] [Created] (FALCON-1907) Package new CLI module added

2016-04-15 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1907:
-

 Summary: Package new CLI module added
 Key: FALCON-1907
 URL: https://issues.apache.org/jira/browse/FALCON-1907
 Project: Falcon
  Issue Type: Bug
  Components: client
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


After moving CLI-related files to the client module, it's not packaged and 
hence falcon-cli-.jar is missing from the packaging.





[jira] [Resolved] (FALCON-637) Add packaging for recipes

2016-04-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-637.
--
Resolution: Fixed

Implemented as part of FALCON-1107.

> Add packaging for recipes
> -
>
> Key: FALCON-637
> URL: https://issues.apache.org/jira/browse/FALCON-637
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Venkatesh Seetharam
>Assignee: Sowmya Ramesh
>  Labels: recipes
> Attachments: FALCON-637.patch
>
>
> * Package each recipe and its dependencies into an uber jar or a tar.
> * We need a super POM for recipes that packages all the recipes into an uber 
> tar file that can be distributed.





[jira] [Resolved] (FALCON-1901) Intermittent IT test failures

2016-04-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1901.
---
Resolution: Fixed

[~yzheng-hortonworks], [~bvellanki]: Thanks for the quick review. Appreciate it!

> Intermittent IT test failures
> -
>
> Key: FALCON-1901
> URL: https://issues.apache.org/jira/browse/FALCON-1901
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>Priority: Blocker
> Fix For: trunk
>
>
> {noformat}
> Running org.apache.falcon.cli.FalconCLIIT
> Tests run: 27, Failures: 19, Errors: 0, Skipped: 0, Time elapsed: 17.927 sec 
> <<< FAILURE! - in org.apache.falcon.cli.FalconCLIIT
> testClientProperties(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 1.834 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
> at 
> org.apache.falcon.cli.FalconCLIIT.testClientProperties(FalconCLIIT.java:858)
> testContinue(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 0.826 sec  <<< 
> FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
> at org.apache.falcon.cli.FalconCLIIT.testContinue(FalconCLIIT.java:796)
> testDefinitionEntityValidCommands(org.apache.falcon.cli.FalconCLIIT)  Time 
> elapsed: 0.748 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
> at 
> org.apache.falcon.cli.FalconCLIIT.testDefinitionEntityValidCommands(FalconCLIIT.java:167)
> testDeleteEntityValidCommands(org.apache.falcon.cli.FalconCLIIT)  Time 
> elapsed: 0.706 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
> at 
> org.apache.falcon.cli.FalconCLIIT.testDeleteEntityValidCommands(FalconCLIIT.java:311)
> testEntityLineage(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 0.757 sec 
>  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at org.apache.falcon.cli.FalconCLIIT.testEntityLineage(FalconCLIIT.java:614)
> testEntityPaginationFilterByCommands(org.apache.falcon.cli.FalconCLIIT)  Time 
> elapsed: 1.258 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at 
> org.apache.falcon.cli.FalconCLIIT.testEntityPaginationFilterByCommands(FalconCLIIT.java:642)
> testInstanceGetLogs(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 0.681 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
> at org.apache.falcon.cli.FalconCLIIT.testInstanceGetLogs(FalconCLIIT.java:890)
> testInstanceKillAndRerun(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 
> 0.665 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at 

[jira] [Created] (FALCON-1903) Flaky test in scheduler

2016-04-13 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1903:
-

 Summary: Flaky test in scheduler
 Key: FALCON-1903
 URL: https://issues.apache.org/jira/browse/FALCON-1903
 Project: Falcon
  Issue Type: Sub-task
  Components: scheduler
Reporter: Sowmya Ramesh
 Fix For: trunk


{noformat}

org.apache.falcon.notification.service.SchedulerServiceTest
testDeRegistration(org.apache.falcon.notification.service.SchedulerServiceTest) 
 Time elapsed: 0.748 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:160)
at 
org.apache.falcon.notification.service.SchedulerServiceTest.testDeRegistration(SchedulerServiceTest.java:274)

Running org.apache.falcon.workflow.engine.WorkflowEngineFactoryTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.496 sec - in 
org.apache.falcon.workflow.engine.WorkflowEngineFactoryTest

Results :

Failed tests: 
  SchedulerServiceTest.testDeRegistration:274 expected:<1> but was:

{noformat}





[jira] [Created] (FALCON-1902) Server side extension repository management CLI support

2016-04-13 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1902:
-

 Summary: Server side extension repository management CLI support
 Key: FALCON-1902
 URL: https://issues.apache.org/jira/browse/FALCON-1902
 Project: Falcon
  Issue Type: Sub-task
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


Add command-line support for server-side extension repository management.





[jira] [Updated] (FALCON-1105) Server side extension repository management REST API support

2016-04-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1105:
--
Summary: Server side extension repository management REST API support  
(was: Server side extension repository management API's and CLI support)

> Server side extension repository management REST API support
> 
>
> Key: FALCON-1105
> URL: https://issues.apache.org/jira/browse/FALCON-1105
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: extensions
> Attachments: ExtensionStore-RESTAPI, 
> FALCON-1105.v0.patch
>
>
> Provide REST APIs for the extension store hosted in HDFS.





[jira] [Updated] (FALCON-1901) Intermittent IT test failures

2016-04-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1901:
--
Priority: Blocker  (was: Major)

> Intermittent IT test failures
> -
>
> Key: FALCON-1901
> URL: https://issues.apache.org/jira/browse/FALCON-1901
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>Priority: Blocker
> Fix For: trunk
>
>
> {noformat}
> Running org.apache.falcon.cli.FalconCLIIT
> Tests run: 27, Failures: 19, Errors: 0, Skipped: 0, Time elapsed: 17.927 sec 
> <<< FAILURE! - in org.apache.falcon.cli.FalconCLIIT
> testClientProperties(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 1.834 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
> at 
> org.apache.falcon.cli.FalconCLIIT.testClientProperties(FalconCLIIT.java:858)
> testContinue(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 0.826 sec  <<< 
> FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)
> at org.testng.Assert.assertEquals(Assert.java:365)
> at org.testng.Assert.assertEquals(Assert.java:375)
> at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
> at org.apache.falcon.cli.FalconCLIIT.testContinue(FalconCLIIT.java:796)
> testDefinitionEntityValidCommands(org.apache.falcon.cli.FalconCLIIT)  Time 
> elapsed: 0.748 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.testng.Assert.fail(Assert.java:89)
> at org.testng.Assert.failNotEquals(Assert.java:489)
> at org.testng.Assert.assertEquals(Assert.java:118)

[jira] [Created] (FALCON-1901) Intermittent IT test failures

2016-04-13 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1901:
-------------------------------------

 Summary: Intermittent IT test failures
 Key: FALCON-1901
 URL: https://issues.apache.org/jira/browse/FALCON-1901
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


{noformat}
Running org.apache.falcon.cli.FalconCLIIT
Tests run: 27, Failures: 19, Errors: 0, Skipped: 0, Time elapsed: 17.927 sec 
<<< FAILURE! - in org.apache.falcon.cli.FalconCLIIT
testClientProperties(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 1.834 
sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-1>
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:365)
at org.testng.Assert.assertEquals(Assert.java:375)
at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
at org.apache.falcon.cli.FalconCLIIT.testClientProperties(FalconCLIIT.java:858)

testContinue(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 0.826 sec  <<< 
FAILURE!
java.lang.AssertionError: expected:<0> but was:<-1>
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:365)
at org.testng.Assert.assertEquals(Assert.java:375)
at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
at org.apache.falcon.cli.FalconCLIIT.testContinue(FalconCLIIT.java:796)

testDefinitionEntityValidCommands(org.apache.falcon.cli.FalconCLIIT)  Time 
elapsed: 0.748 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-1>
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:365)
at org.testng.Assert.assertEquals(Assert.java:375)
at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
at 
org.apache.falcon.cli.FalconCLIIT.testDefinitionEntityValidCommands(FalconCLIIT.java:167)

testDeleteEntityValidCommands(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 
0.706 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-1>
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:365)
at org.testng.Assert.assertEquals(Assert.java:375)
at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
at 
org.apache.falcon.cli.FalconCLIIT.testDeleteEntityValidCommands(FalconCLIIT.java:311)

testEntityLineage(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 0.757 sec  
<<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-1>
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:365)
at org.testng.Assert.assertEquals(Assert.java:375)
at org.apache.falcon.cli.FalconCLIIT.testEntityLineage(FalconCLIIT.java:614)

testEntityPaginationFilterByCommands(org.apache.falcon.cli.FalconCLIIT)  Time 
elapsed: 1.258 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-1>
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:365)
at org.testng.Assert.assertEquals(Assert.java:375)
at 
org.apache.falcon.cli.FalconCLIIT.testEntityPaginationFilterByCommands(FalconCLIIT.java:642)

testInstanceGetLogs(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 0.681 sec 
 <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-1>
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:365)
at org.testng.Assert.assertEquals(Assert.java:375)
at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
at org.apache.falcon.cli.FalconCLIIT.testInstanceGetLogs(FalconCLIIT.java:890)

testInstanceKillAndRerun(org.apache.falcon.cli.FalconCLIIT)  Time elapsed: 
0.665 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-1>
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertEquals(Assert.java:118)
at org.testng.Assert.assertEquals(Assert.java:365)
at org.testng.Assert.assertEquals(Assert.java:375)
at org.apache.falcon.cli.FalconCLIIT.submitTestFiles(FalconCLIIT.java:974)
at 
org.apache.falcon.cli.FalconCLIIT.testInstanceKillAndRerun(FalconCLIIT.java:565)

testInstanceRunningAndStatusCommands(org.apache.falcon.cli.FalconCLIIT)  Time 
elapsed: 0.668 sec  <<< 

[jira] [Updated] (FALCON-1105) Server side extension repository management API's and CLI support

2016-03-24 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1105:
--
Attachment: ExtensionStore-RESTAPI

Please find attached the design document for REST API and CLI support for the
extension store, and please provide any feedback. Thanks!

> Server side extension repository management API's and CLI support
> -
>
> Key: FALCON-1105
> URL: https://issues.apache.org/jira/browse/FALCON-1105
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: extensions
> Attachments: ExtensionStore-RESTAPI, 
> FALCON-1105.v0.patch
>
>
> Provide REST API's for extensions store hosted in HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FALCON-1105) Server side extension repository management API's and CLI support

2016-03-23 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1105:
--
Summary: Server side extension repository management API's and CLI support  
(was: Recipe repository management API's and CLI support)

> Server side extension repository management API's and CLI support
> -
>
> Key: FALCON-1105
> URL: https://issues.apache.org/jira/browse/FALCON-1105
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: extensions
> Attachments: FALCON-1105.v0.patch
>
>
> Provide REST API's for recipe store hosted in HDFS.





[jira] [Updated] (FALCON-1105) Server side extension repository management API's and CLI support

2016-03-23 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1105:
--
Description: Provide REST API's for extensions store hosted in HDFS.  (was: 
Provide REST API's for recipe store hosted in HDFS.)

> Server side extension repository management API's and CLI support
> -
>
> Key: FALCON-1105
> URL: https://issues.apache.org/jira/browse/FALCON-1105
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: extensions
> Attachments: FALCON-1105.v0.patch
>
>
> Provide REST API's for extensions store hosted in HDFS.





[jira] [Updated] (FALCON-1105) Recipe repository management API's and CLI support

2016-03-23 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1105:
--
Labels: extensions  (was: Recipe)

> Recipe repository management API's and CLI support
> --
>
> Key: FALCON-1105
> URL: https://issues.apache.org/jira/browse/FALCON-1105
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: extensions
> Attachments: FALCON-1105.v0.patch
>
>
> Provide REST API's for recipe store hosted in HDFS.





[jira] [Updated] (FALCON-634) Add server side extensions in Falcon

2016-03-23 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-634:
-
Summary: Add server side extensions in Falcon  (was: Add recipes in Falcon)

> Add server side extensions in Falcon
> 
>
> Key: FALCON-634
> URL: https://issues.apache.org/jira/browse/FALCON-634
> Project: Falcon
>  Issue Type: New Feature
>Affects Versions: 0.6
>Reporter: Venkatesh Seetharam
>  Labels: recipes
>
> Falcon offers many services OOTB and caters to a wide array of use cases. 
> However, there have been many asks that do not fit the functionality offered 
> by Falcon. I'm proposing that we add recipes to Falcon, which are similar to 
> recipes in Whirr and other management solutions such as Puppet and Chef.
> Overview:
> A recipe is essentially a static process template with a parameterized workflow 
> to realize a specific use case. For example:
> * replicating directories from one HDFS cluster to another (not timed 
> partitions)
> * replicating hive metadata (database, table, views, etc.)
> * replicating between HDFS and Hive - either way
> * anonymization of data based on schema
> * data masking
> * etc.
> Proposal:
> Falcon provides a Process abstraction that encapsulates the configuration 
> for a user workflow with scheduling controls. All recipes can be modeled 
> as a Process within Falcon, which executes the user workflow 
> periodically. The process and its associated workflow are parameterized. The 
> user will provide a properties file with name-value pairs that are 
> substituted by Falcon before scheduling it.
> This is a client-side concept. The server does not know about recipes; it 
> only accepts the cooked recipe as a process entity. 
> The CLI would look something like this:
> falcon -recipe $recipe_name -properties $properties_file
> Recipes will reside inside addons (contrib) with source code and will have an 
> option to package them.
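The parameterized process template described above can be sketched minimally as name-value substitution. The `##name##` placeholder syntax and the property names below are illustrative assumptions for this sketch, not Falcon's actual recipe format.

```python
import re

def load_properties(text):
    """Parse simple name=value lines, ignoring blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            name, _, value = line.partition("=")
            props[name.strip()] = value.strip()
    return props

def cook_template(template, props):
    """Substitute ##name## placeholders with values from the properties map."""
    return re.sub(r"##(\w+)##", lambda m: props[m.group(1)], template)

props = load_properties("sourceCluster=primary\ntargetCluster=backup")
cooked = cook_template("<process name='##sourceCluster##-to-##targetCluster##-dr'/>",
                       props)
```

The same "cooking" step run on the client would then submit `cooked` to the server as an ordinary process entity.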





[jira] [Updated] (FALCON-1105) Recipe repository management API's and CLI support

2016-03-03 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1105:
--
Description: Provide REST API's for recipe store hosted in HDFS.  (was: 
*Recipe Listing*
A GET method may be added to the Prism server on a new Jersey resource to list 
recipes and their corresponding root locations in the recipe repository. 
Corresponding CLI methods will also be present.

{noformat}
[hrt_qa@node-1 recipe]$ falcon recipe -list
List of recipes
{
  "totalSize": 2,
  "results": {
"hdfs-replication": 
"hdfs://node-1.example.com:8020/apps/falcon/recipe/hdfs-replication",
"hive-disaster-recovery": 
"hdfs://node-1.example.com:8020/apps/falcon/recipe/hive-disaster-recovery"
  }
}
{noformat}

*Get resources of a given recipe*
List all the resources of a given recipe

{noformat}
[hrt_qa@node-1 recipe]$ falcon recipe -getResources -name hdfs-replication
{
  "totalSize": 3,
  "results": {
"hdfs-replication-template.xml": 
"hdfs://node-1.example.com:8020/apps/falcon/recipe/hdfs-replication/resources/build/hdfs-replication-template.xml",
"hdfs-replication-workflow.xml": 
"hdfs://node-1.example.com:8020/apps/falcon/recipe/hdfs-replication/resources/runtime/hdfs-replication-workflow.xml",
"hdfs-replication.properties": 
"hdfs://node-1.example.com:8020/apps/falcon/recipe/hdfs-replication/resources/build/hdfs-replication.properties"
  }
}
{noformat}

*Recipe Description*
A GET method may be added to the Prism server to echo the README as 
documentation for the users. This may contain brief on the functionality 
offered by the recipe and any operability notes of importance

{noformat}
[hrt_qa@node-1 recipe]$ falcon recipe -describe -name hdfs-replication
HDFS Directory Replication Recipe

Overview
This recipe implements replicating arbitrary directories on HDFS from one
Hadoop cluster to another Hadoop cluster.
This piggybacks on the replication solution in Falcon, which uses the DistCp tool.

Use Case
* Copy directories between HDFS clusters without dated partitions
* Archive directories from HDFS to Cloud. Ex: S3, Azure WASB

Limitations
As the data volume and number of files grow, this can get inefficient.
{noformat}
)
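A sketch of how the proposed `-list` response could be assembled on the server side: each top-level directory under the recipe root maps to its location under the store URI. A local temp directory stands in for HDFS here; a real implementation would go through the Hadoop FileSystem API.

```python
import json
import os
import tempfile

def list_recipes(recipe_root, uri_prefix):
    """Map each top-level recipe directory to its location under the store URI."""
    names = sorted(d for d in os.listdir(recipe_root)
                   if os.path.isdir(os.path.join(recipe_root, d)))
    return {"totalSize": len(names),
            "results": {n: "%s/%s" % (uri_prefix, n) for n in names}}

# Stand-in for the HDFS recipe root shown in the -list output above.
root = tempfile.mkdtemp()
for name in ("hdfs-replication", "hive-disaster-recovery"):
    os.mkdir(os.path.join(root, name))

listing = list_recipes(root, "hdfs://node-1.example.com:8020/apps/falcon/recipe")
print(json.dumps(listing, indent=2))
```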

> Recipe repository management API's and CLI support
> --
>
> Key: FALCON-1105
> URL: https://issues.apache.org/jira/browse/FALCON-1105
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Recipe
> Attachments: FALCON-1105.v0.patch
>
>
> Provide REST API's for recipe store hosted in HDFS.





[jira] [Commented] (FALCON-1107) Move trusted recipe processing to server side

2016-02-24 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166652#comment-15166652
 ] 

Sowmya Ramesh commented on FALCON-1107:
---

As per our discussion, the issue with having a centralized location for the recipe 
repository is that for cross-cluster access it requires additional security handling 
and can also result in performance overhead if there are multiple 
files/libraries to be accessed.

I agree with the solution proposed above. With this approach, the existing 
assumption that the WF and WF libs are present on the cluster where the process 
instance runs will still hold true.

For recipe repository management (say, list and describe), the path specified in the 
startup properties will be used as the recipe store URI, which is the default HDFS 
associated with the Falcon server. 

With this implementation, the addition of new recipes to the recipe repository requires 
a Falcon restart. This can be automated in the future when we provide the upload 
operation for the recipe repository. 
 
Trusted recipes will be made available to all. For custom recipes, users might 
have proprietary code and may not be willing to share it with other users. We need 
to provide additional recipe repository management features to handle such 
cases.

> Move trusted recipe processing to server side
> -
>
> Key: FALCON-1107
> URL: https://issues.apache.org/jira/browse/FALCON-1107
> Project: Falcon
>  Issue Type: Sub-task
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Recipe
> Fix For: trunk
>
> Attachments: ApacheFalcon-RecipeDesignDocument.V1.pdf, 
> ApacheFalcon-RecipeDesignDocument.pdf
>
>
> Today, recipe cooking is client-side logic. Recipes also support extensions, 
> i.e., users can cook their own custom recipes.
> The decision to make it client-side logic was made for the following reasons:
>   *   Keep it isolated from the Falcon server
>   *   As custom recipe cooking is supported, user recipes can introduce 
> security vulnerabilities and can also bring down the Falcon server
> Today, falcon provides HDFS DR recipe out of the box. There is a plan to add 
> UI support for DR in Falcon.
> REST API support cannot be added for recipes as long as they are client-side processing.
> If the UI is pure JavaScript [JS], then all the recipe cooking logic has to be 
> repeated in JS. This is not a feasible solution - if more recipes are added, 
> say DR for Hive, HBase and others, the UI won't be extensible.
> For the above-mentioned reasons, recipe processing should be made server-side logic.
> Provided/trusted recipes [recipes provided out of the box] can run in the Falcon 
> process. Recipe cooking will be done in a new process if it is a custom recipe 
> [user code].
> For cooking of custom recipes, the proposed design should consider handling 
> security implications, handling issues where custom user code can 
> bring down the Falcon server (trapping System.exit), and handling class path 
> isolation.
> Also, it shouldn't in any way destabilize the Falcon system.
> There are a couple of approaches that were discussed:
> *Approach 1:*
> Custom Recipe cooking can be carried out separately in another Oozie WF, this 
> will ensure isolation. Oozie already has the ability to schedule jobs as a 
> user and handles all the security aspects of it.
> Pros:
> - Provides isolation
> - Piggyback on Oozie as it already provides the required functionality
> Cons:
> - As recipe processing is done in a different WF, from an operations point of view 
> the user cannot figure out the recipe processing status, which adds to the 
> operational pain. The operational issue with this approach is the overall 
> apparatus needed to monitor and manage the recipe-cooking workflows.
> - Oozie scheduling can bring arbitrary delays. Granted, we can design around the 
> limitations and make use of the strengths of the approach, but it seems like 
> something we can avoid if we can.
> - There have been a few discussions about moving away from Oozie as the scheduling 
> engine for Falcon. If this is the plan going forward, it is good not to add new 
> functionality using Oozie.
> *Approach 2:*
> Custom recipe cooking is done on the server side in a separate, independent 
> process from the Falcon process, i.e., it runs in a different JVM. Throttling 
> should be added for how many recipe cooking processes can be launched, keeping 
> in mind the machine configuration.
> Pros:
> - Provides isolation, as recipe cooking is done in an independent process
> Cons:
> - Performance overhead, as a new process is launched for custom recipe cooking
> - Adds more complexity to the system
> This bug will be used to move recipe processing for trusted recipes to server 
> side.
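The throttling idea in Approach 2 can be sketched with a bounded semaphore gating how many cooks run at once. In a real system each cook would be a separate JVM process; the in-process callable and the limit below are placeholders, not Falcon code.

```python
import threading

class RecipeCookerPool:
    """Bound how many recipe-cooking tasks may run concurrently."""

    def __init__(self, max_concurrent=4):
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def cook(self, cook_fn):
        # Blocks when max_concurrent cooks are already in flight,
        # protecting the host from an unbounded number of launches.
        with self._slots:
            return cook_fn()

pool = RecipeCookerPool(max_concurrent=2)
result = pool.cook(lambda: "cooked")
```

A subprocess-based variant would acquire the same slot, spawn the cooking JVM, and release the slot when the child exits.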





[jira] [Commented] (FALCON-1107) Move trusted recipe processing to server side

2016-02-10 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141901#comment-15141901
 ] 

Sowmya Ramesh commented on FALCON-1107:
---

Opening up for discussion about where to publish the recipe artifacts.

Recipe artifacts repo is structured as 
{noformat}
RecipeRoot
|-- Recipe1
    |-- README
    |-- META
    |-- libs
    |   |-- build
    |   |-- runtime
    |-- resources
        |-- build
        |-- runtime
{noformat}
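Given that layout, the server could sanity-check a recipe directory before registering it. The required entries below are read off the tree above; treating all of them as mandatory is an assumption of this sketch.

```python
import os
import tempfile

# Layout entries taken from the recipe repo structure shown above.
REQUIRED = ("README", "META", "libs/build", "libs/runtime",
            "resources/build", "resources/runtime")

def missing_entries(recipe_dir):
    """Return the required layout entries that are absent from recipe_dir."""
    return [entry for entry in REQUIRED
            if not os.path.exists(os.path.join(recipe_dir, entry))]

recipe = tempfile.mkdtemp()
open(os.path.join(recipe, "README"), "w").close()
missing = missing_entries(recipe)  # everything but README is still absent
```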

The Falcon server should be aware of where recipe artifacts are hosted, as this is 
required for cooking the recipe and for providing the recipe repository 
management feature. Multiple clients can use the same recipe, so it should be 
hosted in one centralized location.

For trusted recipes, which are provided by Falcon OOTB, the artifacts are 
published by Falcon; for custom recipes, instructions are provided to the 
user about where to publish the artifacts.
Also, this should be designed so that it works for both unsecured and secure 
cluster setups.

Today in Falcon, during process entity validation it is assumed that the 
workflow [WF] and WF libs are present on the cluster where the process instance 
runs. This has to change, as in the case of recipes the WF and libs can reside on 
a different cluster. Also, in the case of a secure cluster, the required NN 
principals should be passed to access the files, and the configuration 
"mapreduce.job.hdfs-servers" should be updated for job execution to succeed. 

One approach, similar to the config store URI, is to introduce another config for 
the recipe store URI in the startup properties, and another config to set the NN 
principal. For trusted recipes, Ambari can be used to copy the required artifacts 
to this location; in the case of custom recipes, the user has to copy them 
manually. One issue with this approach is that, since it is configurable, if the 
user changes this config later, say when implementing custom recipes, it will 
break the trusted recipes if the artifacts are not copied to the new location.

Please let me know if there are any suggestions or better approaches, thanks!
cc: [~sriksun], [~venkatnrangan]

> Move trusted recipe processing to server side
> -
>
> Key: FALCON-1107
> URL: https://issues.apache.org/jira/browse/FALCON-1107
> Project: Falcon
>  Issue Type: Sub-task
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Recipe
> Fix For: trunk
>
> Attachments: ApacheFalcon-RecipeDesignDocument.V1.pdf, 
> ApacheFalcon-RecipeDesignDocument.pdf
>
>

[jira] [Assigned] (FALCON-887) Support for multiple lib paths in falcon process

2016-02-08 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh reassigned FALCON-887:


Assignee: Sowmya Ramesh

> Support for multiple lib paths in falcon process
> 
>
> Key: FALCON-887
> URL: https://issues.apache.org/jira/browse/FALCON-887
> Project: Falcon
>  Issue Type: New Feature
>  Components: process
>Reporter: Akshay Goyal
>Assignee: Sowmya Ramesh
>Priority: Minor
>
> Consider cases where I have multiple jobs which use common libraries and also 
> have some libraries specific to each job. There are two options:
> 1: Use the same lib path for all jobs, which has all libraries (common and 
> specific). But then all the jobs will end up loading all libraries.
> 2: Use a central lib location for the common libraries and another lib 
> location that is specific to each job. 
> We can enable this by supporting comma-separated lib paths in the falcon process.
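The comma-separated lib path idea resolves naturally to an order-preserving, de-duplicated list, sketched below; how Falcon would actually expose the attribute on the process entity is an assumption of this sketch.

```python
def parse_lib_paths(value):
    """Split a comma-separated lib attribute into unique paths, keeping order."""
    seen = set()
    paths = []
    for part in value.split(","):
        part = part.strip()
        if part and part not in seen:
            seen.add(part)
            paths.append(part)
    return paths

# Common libs first, then job-specific libs; duplicates are dropped.
paths = parse_lib_paths("/apps/falcon/common/lib, /apps/jobA/lib, /apps/falcon/common/lib")
```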





[jira] [Commented] (FALCON-1107) Move trusted recipe processing to server side

2016-02-03 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15131423#comment-15131423
 ] 

Sowmya Ramesh commented on FALCON-1107:
---

Attached the updated design document for trusted recipes. Please review!

> Move trusted recipe processing to server side
> -
>
> Key: FALCON-1107
> URL: https://issues.apache.org/jira/browse/FALCON-1107
> Project: Falcon
>  Issue Type: Sub-task
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Recipe
> Fix For: trunk
>
> Attachments: ApacheFalcon-RecipeDesignDocument.V1.pdf, 
> ApacheFalcon-RecipeDesignDocument.pdf
>
>





[jira] [Updated] (FALCON-1107) Move trusted recipe processing to server side

2016-02-03 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1107:
--
Attachment: ApacheFalcon-RecipeDesignDocument.V1.pdf

> Move trusted recipe processing to server side
> -
>
> Key: FALCON-1107
> URL: https://issues.apache.org/jira/browse/FALCON-1107
> Project: Falcon
>  Issue Type: Sub-task
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Recipe
> Fix For: trunk
>
> Attachments: ApacheFalcon-RecipeDesignDocument.V1.pdf, 
> ApacheFalcon-RecipeDesignDocument.pdf
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FALCON-1787) Oozie pig-action.xml requires hive sharelib for HCatalog use

2016-01-28 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1517#comment-1517
 ] 

Sowmya Ramesh commented on FALCON-1787:
---

No RB link, one line change.

> Oozie pig-action.xml requires hive sharelib for HCatalog use
> -
>
> Key: FALCON-1787
> URL: https://issues.apache.org/jira/browse/FALCON-1787
> Project: Falcon
>  Issue Type: Bug
>  Components: oozie
>Affects Versions: 0.6.1
> Environment: HDP-2.3.2.0-2950
> Pig   0.15.0.2.3
> Hive  1.2.1.2.3
> Oozie 4.2.0.2.3
> Falcon0.6.1.2.3
>Reporter: Mark Greene
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
> Attachments: EDL-COMMON-CERTIFIED-PRF-OF-DLVR.xml, 
> EDL-COMMON-LOAD-DLVR-RCPT.xml, EDL-COMMON-PRF-OF-DLVR-LOAD.xml, 
> FALCON-1787.V0.patch, Stack Trace.txt, Workflow Job Configuration.txt, 
> Workflow Pig Action Configuration.txt, prf_of_dlvr_lz_to_cz.pig
>
>
> I have a Pig script that I am using as the workflow for my Falcon process. 
> The Pig script uses HCatalogStorer to write to an HCatalog URI that is the 
> output feed defined in my Falcon process entity. The Pig action in the 
> resulting Oozie workflow generated by Falcon fails with the attached stack 
> trace. The root cause is a missing class definition of 
> org/apache/hadoop/hive/shims/ShimLoader.
> Running the script manually using pig -x tez -useHCatalog (with the same 
> parameters passed by Oozie) results in a successful execution. It's only once 
> this is called as a Pig action in the Falcon-generated Oozie workflow that 
> the missing class definition manifests.
> After some investigation I found that the Oozie workflow.xml is missing a 
> required sharelib declaration.
> From the workflow.xml generated by Falcon:
> <property>
>   <name>oozie.action.sharelib.for.pig</name>
>   <value>pig,hcatalog</value>
> </property>
> If I modify the value to include the hive sharelib then the Pig action 
> succeeds and does not throw a missing class definition error.
> Modified workflow.xml property (works):
> <property>
>   <name>oozie.action.sharelib.for.pig</name>
>   <value>hive,pig,hcatalog</value>
> </property>





[jira] [Updated] (FALCON-1787) Oozie pig-action.xml requires hive sharelib for HCatalog use

2016-01-28 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1787:
--
Attachment: FALCON-1787.V0.patch

> Oozie pig-action.xml requires hive sharelib for HCatalog use
> -
>
> Key: FALCON-1787
> URL: https://issues.apache.org/jira/browse/FALCON-1787
> Project: Falcon
>  Issue Type: Bug
>  Components: oozie
>Affects Versions: 0.6.1
> Environment: HDP-2.3.2.0-2950
> Pig   0.15.0.2.3
> Hive  1.2.1.2.3
> Oozie 4.2.0.2.3
> Falcon0.6.1.2.3
>Reporter: Mark Greene
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
> Attachments: EDL-COMMON-CERTIFIED-PRF-OF-DLVR.xml, 
> EDL-COMMON-LOAD-DLVR-RCPT.xml, EDL-COMMON-PRF-OF-DLVR-LOAD.xml, 
> FALCON-1787.V0.patch, Stack Trace.txt, Workflow Job Configuration.txt, 
> Workflow Pig Action Configuration.txt, prf_of_dlvr_lz_to_cz.pig
>
>
> I have a Pig script that I am using as the workflow for my Falcon process. 
> The Pig script uses HCatalogStorer to write to an HCatalog URI that is the 
> output feed defined in my Falcon process entity. The Pig action in the 
> resulting Oozie workflow generated by Falcon fails with the attached stack 
> trace. The root cause is a missing class definition of 
> org/apache/hadoop/hive/shims/ShimLoader.
> Running the script manually using pig -x tez -useHCatalog (with the same 
> parameters passed by Oozie) results in a successful execution. It's only once 
> this is called as a Pig action in the Falcon-generated Oozie workflow that 
> the missing class definition manifests.
> After some investigation I found that the Oozie workflow.xml is missing a 
> required sharelib declaration.
> From the workflow.xml generated by Falcon:
> <property>
>   <name>oozie.action.sharelib.for.pig</name>
>   <value>pig,hcatalog</value>
> </property>
> If I modify the value to include the hive sharelib then the Pig action 
> succeeds and does not throw a missing class definition error.
> Modified workflow.xml property (works):
> <property>
>   <name>oozie.action.sharelib.for.pig</name>
>   <value>hive,pig,hcatalog</value>
> </property>





[jira] [Created] (FALCON-1789) REST API and CLI support for server side Recipes

2016-01-27 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1789:
-

 Summary: REST API and CLI support for server side Recipes
 Key: FALCON-1789
 URL: https://issues.apache.org/jira/browse/FALCON-1789
 Project: Falcon
  Issue Type: Sub-task
Reporter: Sowmya Ramesh
Assignee: Ying Zheng
 Fix For: trunk


REST API infrastructure for server-side recipes should be generic enough that 
it can be reused for custom recipes too in the future. 

Today, Falcon provides OOTB HDFS & Hive DR recipes. Generic REST API support 
should be extended to provide mirroring as a service.






[jira] [Resolved] (FALCON-1570) Falcon and Apache Atlas integration

2016-01-27 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1570.
---
Resolution: Fixed

https://issues.apache.org/jira/browse/ATLAS-379 was implemented as part of this.

> Falcon and Apache Atlas integration
> ---
>
> Key: FALCON-1570
> URL: https://issues.apache.org/jira/browse/FALCON-1570
> Project: Falcon
>  Issue Type: New Feature
>Affects Versions: trunk
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
>
> Falcon and Apache Atlas integration





[jira] [Resolved] (FALCON-794) Recipe enhancement to support additional functionality

2016-01-27 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-794.
--
Resolution: Fixed

Already implemented as part of FALCON-1188.

> Recipe enhancement to support additional functionality
> --
>
> Key: FALCON-794
> URL: https://issues.apache.org/jira/browse/FALCON-794
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Recipe
>
> Recipe v1 is restrictive and should be enhanced for better usability:
> * Today, if the user wants to pass additional custom properties, 
> -template.xml has to be manually edited to add the property tag. 
> Using JAXB instead will allow the Recipe tool to add these properties at run 
> time
> * Support replication of multiple source directories
> * A knob for the user to choose whether replication should be triggered at 
> the source or target cluster
> * Support for other optional DistCp options





[jira] [Updated] (FALCON-1107) Move trusted recipe processing to server side

2016-01-27 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1107:
--
Summary: Move trusted recipe processing to server side  (was: Move recipe 
processing to server side)

> Move trusted recipe processing to server side
> -
>
> Key: FALCON-1107
> URL: https://issues.apache.org/jira/browse/FALCON-1107
> Project: Falcon
>  Issue Type: Sub-task
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Recipe
> Fix For: trunk
>
> Attachments: ApacheFalcon-RecipeDesignDocument.pdf
>
>
> Today, recipe cooking is client-side logic. Recipes also support extensions, 
> i.e., users can cook their own custom recipes.
> The decision to make it client-side logic was made for the following reasons:
>   *   Keep it isolated from the Falcon server
>   *   As custom recipe cooking is supported, user recipes can introduce 
> security vulnerabilities and can also bring down the Falcon server
> Today, Falcon provides an HDFS DR recipe out of the box. There is a plan to 
> add UI support for DR in Falcon.
> REST API support cannot be added for recipes as they are processed client side.
> If the UI is pure JavaScript (JS), then all the recipe-cooking logic has to be 
> repeated in JS. This is not a feasible solution: if more recipes are added, 
> say DR for Hive, HBase, and others, the UI won't be extensible.
> For the above-mentioned reasons, recipes should be made server-side logic.
> Provided/trusted recipes [recipes provided out of the box] can run in the 
> Falcon process. Recipe cooking will be done in a new process if it is a custom 
> recipe [user code].
> For cooking of custom recipes, the proposed design should consider handling 
> security implications, handling issues where custom user code can bring down 
> the Falcon server (trapping System.exit), and handling class-path isolation.
> Also, it shouldn't in any way destabilize the Falcon system.
> A couple of approaches were discussed:
> *Approach 1:*
> Custom recipe cooking can be carried out separately in another Oozie workflow 
> (WF); this will ensure isolation. Oozie already has the ability to schedule 
> jobs as a user and handles all the security aspects of it.
> Pros:
> - Provides isolation
> - Piggybacks on Oozie, as it already provides the required functionality
> Cons:
> - As recipe processing is done in a different WF, from an operations point of 
> view the user cannot figure out the recipe-processing status, which adds to 
> the operational pain. The operational issue with this approach is the overall 
> apparatus needed to monitor and manage the recipe-cooking workflows. Oozie 
> scheduling can bring arbitrary delays. Granted, we can design around the 
> limitations and make use of the strengths of the approach, but it seems 
> something we can avoid if we can.
> - There have been a few discussions about moving away from Oozie as the 
> scheduling engine for Falcon. If this is the plan going forward, it's good 
> not to add new functionality using Oozie.
> *Approach 2:*
> Custom recipe cooking is done on the server side in a separate, independent 
> process from the Falcon process, i.e., it runs in a different JVM. Throttling 
> should be added for how many recipe-cooking processes can be launched, 
> keeping the machine configuration in mind.
> Pros:
> - Provides isolation, as recipe cooking is done in an independent process
> Cons:
> - Performance overhead, as a new process is launched for custom recipe cooking
> - Adds more complexity to the system
> This bug will be used to move recipe processing for trusted recipes to the 
> server side.
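The throttling called for in Approach 2 can be sketched as follows. This is a hypothetical illustration, not Falcon's actual implementation: the class and method names are invented, and a plain Runnable stands in for launching a separate JVM via ProcessBuilder.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of Approach 2's throttling: bound how many
// recipe-cooking processes may run concurrently on one machine.
public class RecipeCookingThrottle {
    private final Semaphore slots;

    public RecipeCookingThrottle(int maxConcurrent) {
        // Permits model the machine's capacity for concurrent cooking JVMs.
        this.slots = new Semaphore(maxConcurrent);
    }

    /** Runs one cooking job, blocking while all slots are taken; false if interrupted. */
    public boolean cook(Runnable launchProcess) {
        try {
            slots.acquire();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        try {
            // Stand-in for: new ProcessBuilder("java", ...).start().waitFor()
            launchProcess.run();
            return true;
        } finally {
            slots.release(); // free the slot once the cooking process exits
        }
    }

    public int freeSlots() {
        return slots.availablePermits();
    }
}
```

Blocking on acquire() keeps the number of concurrent cooking JVMs at or below the configured bound; a real implementation might instead queue or reject requests beyond the limit.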





[jira] [Commented] (FALCON-1107) Move trusted recipe processing to server side

2016-01-27 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120529#comment-15120529
 ] 

Sowmya Ramesh commented on FALCON-1107:
---

I am using this jira to work on moving trusted recipes to the server side [HDFS 
& Hive DR]. 

Provided/trusted recipes [OOTB recipes] can run in the Falcon Java process. 
There is no need to handle provided recipes the same way as custom recipes, as 
they cannot introduce any security vulnerabilities, and hence the overhead of 
handling them in a separate Java process can be avoided.

[~ajayyadava], [~pallavi.rao]: Let me know if you are OK with this approach; 
it's detailed in the design doc. 

Custom recipes will be worked on as part of jira 
[FALCON-1108|https://issues.apache.org/jira/browse/FALCON-1108]

> Move trusted recipe processing to server side
> -
>
> Key: FALCON-1107
> URL: https://issues.apache.org/jira/browse/FALCON-1107
> Project: Falcon
>  Issue Type: Sub-task
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Recipe
> Fix For: trunk
>
> Attachments: ApacheFalcon-RecipeDesignDocument.pdf
>
>
> Today, recipe cooking is client-side logic. Recipes also support extensions, 
> i.e., users can cook their own custom recipes.
> The decision to make it client-side logic was made for the following reasons:
>   *   Keep it isolated from the Falcon server
>   *   As custom recipe cooking is supported, user recipes can introduce 
> security vulnerabilities and can also bring down the Falcon server
> Today, Falcon provides an HDFS DR recipe out of the box. There is a plan to 
> add UI support for DR in Falcon.
> REST API support cannot be added for recipes as they are processed client side.
> If the UI is pure JavaScript (JS), then all the recipe-cooking logic has to be 
> repeated in JS. This is not a feasible solution: if more recipes are added, 
> say DR for Hive, HBase, and others, the UI won't be extensible.
> For the above-mentioned reasons, recipes should be made server-side logic.
> Provided/trusted recipes [recipes provided out of the box] can run in the 
> Falcon process. Recipe cooking will be done in a new process if it is a custom 
> recipe [user code].
> For cooking of custom recipes, the proposed design should consider handling 
> security implications, handling issues where custom user code can bring down 
> the Falcon server (trapping System.exit), and handling class-path isolation.
> Also, it shouldn't in any way destabilize the Falcon system.
> A couple of approaches were discussed:
> *Approach 1:*
> Custom recipe cooking can be carried out separately in another Oozie workflow 
> (WF); this will ensure isolation. Oozie already has the ability to schedule 
> jobs as a user and handles all the security aspects of it.
> Pros:
> - Provides isolation
> - Piggybacks on Oozie, as it already provides the required functionality
> Cons:
> - As recipe processing is done in a different WF, from an operations point of 
> view the user cannot figure out the recipe-processing status, which adds to 
> the operational pain. The operational issue with this approach is the overall 
> apparatus needed to monitor and manage the recipe-cooking workflows. Oozie 
> scheduling can bring arbitrary delays. Granted, we can design around the 
> limitations and make use of the strengths of the approach, but it seems 
> something we can avoid if we can.
> - There have been a few discussions about moving away from Oozie as the 
> scheduling engine for Falcon. If this is the plan going forward, it's good 
> not to add new functionality using Oozie.
> *Approach 2:*
> Custom recipe cooking is done on the server side in a separate, independent 
> process from the Falcon process, i.e., it runs in a different JVM. Throttling 
> should be added for how many recipe-cooking processes can be launched, 
> keeping the machine configuration in mind.
> Pros:
> - Provides isolation, as recipe cooking is done in an independent process
> Cons:
> - Performance overhead, as a new process is launched for custom recipe cooking
> - Adds more complexity to the system
> This bug will be used to move recipe processing for trusted recipes to the 
> server side.





[jira] [Assigned] (FALCON-1787) Oozie pig-action.xml requires hive sharelib for HCatalog use

2016-01-27 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh reassigned FALCON-1787:
-

Assignee: Sowmya Ramesh

> Oozie pig-action.xml requires hive sharelib for HCatalog use
> -
>
> Key: FALCON-1787
> URL: https://issues.apache.org/jira/browse/FALCON-1787
> Project: Falcon
>  Issue Type: Bug
>  Components: oozie
>Affects Versions: 0.6.1
> Environment: HDP-2.3.2.0-2950
> Pig   0.15.0.2.3
> Hive  1.2.1.2.3
> Oozie 4.2.0.2.3
> Falcon0.6.1.2.3
>Reporter: Mark Greene
>Assignee: Sowmya Ramesh
> Attachments: EDL-COMMON-CERTIFIED-PRF-OF-DLVR.xml, 
> EDL-COMMON-LOAD-DLVR-RCPT.xml, EDL-COMMON-PRF-OF-DLVR-LOAD.xml, Stack 
> Trace.txt, Workflow Job Configuration.txt, Workflow Pig Action 
> Configuration.txt, prf_of_dlvr_lz_to_cz.pig
>
>
> I have a Pig script that I am using as the workflow for my Falcon process. 
> The Pig script uses HCatalogStorer to write to an HCatalog URI that is the 
> output feed defined in my Falcon process entity. The Pig action in the 
> resulting Oozie workflow generated by Falcon fails with the attached stack 
> trace. The root cause is a missing class definition of 
> org/apache/hadoop/hive/shims/ShimLoader.
> Running the script manually using pig -x tez -useHCatalog (with the same 
> parameters passed by Oozie) results in a successful execution. It's only once 
> this is called as a Pig action in the Falcon-generated Oozie workflow that 
> the missing class definition manifests.
> After some investigation I found that the Oozie workflow.xml is missing a 
> required sharelib declaration.
> From the workflow.xml generated by Falcon:
> <property>
>   <name>oozie.action.sharelib.for.pig</name>
>   <value>pig,hcatalog</value>
> </property>
> If I modify the value to include the hive sharelib then the Pig action 
> succeeds and does not throw a missing class definition error.
> Modified workflow.xml property (works):
> <property>
>   <name>oozie.action.sharelib.for.pig</name>
>   <value>hive,pig,hcatalog</value>
> </property>





[jira] [Updated] (FALCON-1787) Oozie pig-action.xml requires hive sharelib for HCatalog use

2016-01-27 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1787:
--
Fix Version/s: trunk

> Oozie pig-action.xml requires hive sharelib for HCatalog use
> -
>
> Key: FALCON-1787
> URL: https://issues.apache.org/jira/browse/FALCON-1787
> Project: Falcon
>  Issue Type: Bug
>  Components: oozie
>Affects Versions: 0.6.1
> Environment: HDP-2.3.2.0-2950
> Pig   0.15.0.2.3
> Hive  1.2.1.2.3
> Oozie 4.2.0.2.3
> Falcon0.6.1.2.3
>Reporter: Mark Greene
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
> Attachments: EDL-COMMON-CERTIFIED-PRF-OF-DLVR.xml, 
> EDL-COMMON-LOAD-DLVR-RCPT.xml, EDL-COMMON-PRF-OF-DLVR-LOAD.xml, Stack 
> Trace.txt, Workflow Job Configuration.txt, Workflow Pig Action 
> Configuration.txt, prf_of_dlvr_lz_to_cz.pig
>
>
> I have a Pig script that I am using as the workflow for my Falcon process. 
> The Pig script uses HCatalogStorer to write to an HCatalog URI that is the 
> output feed defined in my Falcon process entity. The Pig action in the 
> resulting Oozie workflow generated by Falcon fails with the attached stack 
> trace. The root cause is a missing class definition of 
> org/apache/hadoop/hive/shims/ShimLoader.
> Running the script manually using pig -x tez -useHCatalog (with the same 
> parameters passed by Oozie) results in a successful execution. It's only once 
> this is called as a Pig action in the Falcon-generated Oozie workflow that 
> the missing class definition manifests.
> After some investigation I found that the Oozie workflow.xml is missing a 
> required sharelib declaration.
> From the workflow.xml generated by Falcon:
> <property>
>   <name>oozie.action.sharelib.for.pig</name>
>   <value>pig,hcatalog</value>
> </property>
> If I modify the value to include the hive sharelib then the Pig action 
> succeeds and does not throw a missing class definition error.
> Modified workflow.xml property (works):
> <property>
>   <name>oozie.action.sharelib.for.pig</name>
>   <value>hive,pig,hcatalog</value>
> </property>





[jira] [Resolved] (FALCON-1109) Rest API support for recipe

2016-01-27 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1109.
---
Resolution: Duplicate

https://issues.apache.org/jira/browse/FALCON-1109



> Rest API support for recipe
> ---
>
> Key: FALCON-1109
> URL: https://issues.apache.org/jira/browse/FALCON-1109
> Project: Falcon
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 0.6
>Reporter: Sowmya Ramesh
>  Labels: Recipe
>
> Recipe processing will be done on the server side. REST API functionality 
> should be provided to prepare and schedule recipes, list recipes, delete 
> them, and perform other operations.





[jira] [Created] (FALCON-1767) Improve Falcon retention policy documentation

2016-01-22 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1767:
-

 Summary: Improve Falcon retention policy documentation
 Key: FALCON-1767
 URL: https://issues.apache.org/jira/browse/FALCON-1767
 Project: Falcon
  Issue Type: Sub-task
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


Improve Falcon retention policy documentation with a few examples.





[jira] [Commented] (FALCON-1682) Falcon server starts successfully even if application services fail to start

2016-01-19 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15107463#comment-15107463
 ] 

Sowmya Ramesh commented on FALCON-1682:
---

[~pavan kumar], [~pallavi.rao]: Sounds good. Can we create another jira and tag 
it for the next release? Thanks!

> Falcon server starts successfully even if application services fail to start
> 
>
> Key: FALCON-1682
> URL: https://issues.apache.org/jira/browse/FALCON-1682
> Project: Falcon
>  Issue Type: Bug
>  Components: general
>Affects Versions: 0.8
>Reporter: Pragya Mittal
>Assignee: pavan kumar kolamuri
> Attachments: FALCON-1682.patch
>
>
> If Falcon is configured to run with a MySQL DB and the user does not create 
> the DB, then server start should fail and throw an error. But the server 
> currently starts successfully, although the error is logged in the server logs:
> {noformat}
> 2015-12-21 13:41:01,899 ERROR - [main:] ~ Failed to initialize service 
> org.apache.falcon.state.store.service.FalconJPAService (ServiceInitializer:49)
>  
> org.apache.openjpa.persistence.PersistenceException: Cannot create 
> PoolableConnectionFactory (Access denied for user 'sa'@'localhost' (using 
> password: NO))
>   at 
> org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:106)
>   at 
> org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(JDBCConfigurationImpl.java:603)
>   at 
> org.apache.openjpa.jdbc.meta.MappingRepository.endConfiguration(MappingRepository.java:1518)
>   at 
> org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:533)
>   at 
> org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:458)
>   at 
> org.apache.openjpa.lib.conf.PluginValue.instantiate(PluginValue.java:121)
>   at 
> org.apache.openjpa.conf.MetaDataRepositoryValue.instantiate(MetaDataRepositoryValue.java:68)
>   at 
> org.apache.openjpa.lib.conf.ObjectValue.instantiate(ObjectValue.java:83)
>   at 
> org.apache.openjpa.conf.OpenJPAConfigurationImpl.newMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:967)
>   at 
> org.apache.openjpa.conf.OpenJPAConfigurationImpl.getMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:958)
>   at 
> org.apache.openjpa.kernel.AbstractBrokerFactory.makeReadOnly(AbstractBrokerFactory.java:642)
>   at 
> org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:202)
>   at 
> org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:154)
>   at 
> org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:226)
>   at 
> org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:153)
>   at 
> org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:59)
>   at 
> org.apache.falcon.state.store.service.FalconJPAService.getEntityManager(FalconJPAService.java:169)
>   at 
> org.apache.falcon.state.store.service.FalconJPAService.init(FalconJPAService.java:91)
>   at 
> org.apache.falcon.service.ServiceInitializer.initialize(ServiceInitializer.java:47)
>   at 
> org.apache.falcon.listener.ContextStartupListener.contextInitialized(ContextStartupListener.java:56)
>   at 
> org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:550)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:136)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:519)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.falcon.util.EmbeddedServer.start(EmbeddedServer.java:57)
>   at org.apache.falcon.FalconServer.main(FalconServer.java:102)
> Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create 
> PoolableConnectionFactory (Access denied for user 'sa'@'localhost' (using 
> password: NO))
> {noformat}





[jira] [Commented] (FALCON-1747) Falcon instance status listing is throwing error message.

2016-01-13 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15096800#comment-15096800
 ] 

Sowmya Ramesh commented on FALCON-1747:
---

It will be helpful if you can update the jira with the stack trace too. Thanks!

> Falcon instance status listing is throwing error message.
> -
>
> Key: FALCON-1747
> URL: https://issues.apache.org/jira/browse/FALCON-1747
> Project: Falcon
>  Issue Type: Bug
>  Components: prism
>Reporter: Peeyush Bishnoi
>Assignee: Peeyush Bishnoi
> Fix For: 0.9
>
> Attachments: FALCON-1747.patch
>
>
> When trying to list Falcon instances, the following error message appears:
> {code:java}
> ERROR: Bad Request;default/Specified End date 2016-01-12T10:20Z is before the 
> entity was scheduled 221558424-03-21T20:03Z
> {code}
> The process entity's validity start date and end date are valid.
> This issue has been reported by one of the users internally at Hortonworks.





[jira] [Commented] (FALCON-1747) Falcon instance status listing is throwing error message.

2016-01-13 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097116#comment-15097116
 ] 

Sowmya Ramesh commented on FALCON-1747:
---

[~peeyushb]: If I am not wrong, the actual issue is the exception 
"org.apache.falcon.FalconException: Specified End date 2015-12-16T10:55Z is 
before the entity was scheduled 221558424-02-23T20:38Z" being thrown.

I think in addition to the fix provided we should have an upper bound on 
numResults. The fix you have provided might not ensure correctness and can 
result in OOM issues if the difference between the start and end dates is too 
large. Falcon can fix an upper bound for numResults and add validation to 
throw an exception if numResults exceeds it. Thanks!
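The suggested bound could look like the sketch below. The class and constant names are hypothetical (not Falcon's actual API), and the cap value is an assumption; the point is only that an explicit upper bound on numResults prevents a huge date range from producing an unbounded result set.

```java
// Hypothetical sketch of a server-side upper bound on numResults, so a
// very large start/end date range cannot drive an OOM-sized result set.
public class InstanceListValidator {
    // Assumed cap; a real server would read this from its configuration.
    static final int MAX_NUM_RESULTS = 500;

    /** Returns the requested numResults if valid, otherwise throws. */
    static int validateNumResults(int requested) {
        if (requested <= 0) {
            throw new IllegalArgumentException(
                    "numResults must be positive, got " + requested);
        }
        if (requested > MAX_NUM_RESULTS) {
            throw new IllegalArgumentException("numResults " + requested
                    + " exceeds upper bound " + MAX_NUM_RESULTS);
        }
        return requested;
    }
}
```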

> Falcon instance status listing is throwing error message.
> -
>
> Key: FALCON-1747
> URL: https://issues.apache.org/jira/browse/FALCON-1747
> Project: Falcon
>  Issue Type: Bug
>  Components: prism
>Reporter: Peeyush Bishnoi
>Assignee: Peeyush Bishnoi
> Fix For: 0.9
>
> Attachments: FALCON-1747.patch
>
>
> When trying to list Falcon instances, the following error message appears:
> {code:java}
> ERROR: Bad Request;default/Specified End date 2016-01-12T10:20Z is before the 
> entity was scheduled 221558424-03-21T20:03Z
> {code}
> The process entity's validity start date and end date are valid.
> This issue has been reported by one of the users internally at Hortonworks.





[jira] [Commented] (FALCON-1682) Falcon server starts successfully even if application services fail to start

2016-01-12 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095044#comment-15095044
 ] 

Sowmya Ramesh commented on FALCON-1682:
---

[~pavan kumar]: I don't think checking the version with retries is a clean 
solution.

A quick Google search yielded a couple of options. Can we explore them? Thanks!

* 
http://stackoverflow.com/questions/16153212/how-to-cancel-start-or-shutdown-jetty-if-webappcontext-fails-to-start
* 
http://stackoverflow.com/questions/8645792/jetty-detect-if-webapp-failed-to-start
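The linked suggestions boil down to a fail-fast pattern: record any service-initialization failure during startup and abort the server instead of continuing with broken services. A minimal, library-free sketch with hypothetical names (not Falcon's or Jetty's actual API):

```java
// Fail-fast startup sketch: the startup listener records the first
// service-initialization failure, and the launcher checks that flag and
// exits instead of serving requests with broken services.
public class FailFastStartup {
    /** Set when a service fails to initialize during startup. */
    static volatile Throwable startupFailure;

    /** Initializes services in order, recording the first failure. */
    static void initServices(Runnable... services) {
        startupFailure = null;
        for (Runnable service : services) {
            try {
                service.run();
            } catch (RuntimeException e) {
                startupFailure = e; // remember it instead of only logging
                return;             // stop initializing further services
            }
        }
    }

    /** Launcher-side check: the server should exit if this is false. */
    static boolean startedCleanly() {
        return startupFailure == null;
    }
}
```

In a real server, the launcher would call the equivalent of startedCleanly() right after starting the web application context and shut the process down (non-zero exit code) when it returns false.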


> Falcon server starts successfully even if application services fail to start
> 
>
> Key: FALCON-1682
> URL: https://issues.apache.org/jira/browse/FALCON-1682
> Project: Falcon
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Pragya Mittal
>Assignee: pavan kumar kolamuri
> Attachments: FALCON-1682.patch
>
>
> If Falcon is configured to run with a MySQL DB and the user does not create 
> the DB, then server start should fail and throw an error. But the server 
> currently starts successfully, although the error is logged in the server logs:
> {noformat}
> 2015-12-21 13:41:01,899 ERROR - [main:] ~ Failed to initialize service 
> org.apache.falcon.state.store.service.FalconJPAService (ServiceInitializer:49)
>  
> org.apache.openjpa.persistence.PersistenceException: Cannot create 
> PoolableConnectionFactory (Access denied for user 'sa'@'localhost' (using 
> password: NO))
>   at 
> org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:106)
>   at 
> org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(JDBCConfigurationImpl.java:603)
>   at 
> org.apache.openjpa.jdbc.meta.MappingRepository.endConfiguration(MappingRepository.java:1518)
>   at 
> org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:533)
>   at 
> org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:458)
>   at 
> org.apache.openjpa.lib.conf.PluginValue.instantiate(PluginValue.java:121)
>   at 
> org.apache.openjpa.conf.MetaDataRepositoryValue.instantiate(MetaDataRepositoryValue.java:68)
>   at 
> org.apache.openjpa.lib.conf.ObjectValue.instantiate(ObjectValue.java:83)
>   at 
> org.apache.openjpa.conf.OpenJPAConfigurationImpl.newMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:967)
>   at 
> org.apache.openjpa.conf.OpenJPAConfigurationImpl.getMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:958)
>   at 
> org.apache.openjpa.kernel.AbstractBrokerFactory.makeReadOnly(AbstractBrokerFactory.java:642)
>   at 
> org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:202)
>   at 
> org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:154)
>   at 
> org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:226)
>   at 
> org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:153)
>   at 
> org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:59)
>   at 
> org.apache.falcon.state.store.service.FalconJPAService.getEntityManager(FalconJPAService.java:169)
>   at 
> org.apache.falcon.state.store.service.FalconJPAService.init(FalconJPAService.java:91)
>   at 
> org.apache.falcon.service.ServiceInitializer.initialize(ServiceInitializer.java:47)
>   at 
> org.apache.falcon.listener.ContextStartupListener.contextInitialized(ContextStartupListener.java:56)
>   at 
> org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:550)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:136)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:519)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.falcon.util.EmbeddedServer.start(EmbeddedServer.java:57)
>   at org.apache.falcon.FalconServer.main(FalconServer.java:102)
> Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create 
> PoolableConnectionFactory (Access denied for user 'sa'@'localhost' (using 
> password: NO))
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FALCON-1565) Listing API non-intuitive response if time > endTime

2015-11-17 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15009263#comment-15009263
 ] 

Sowmya Ramesh commented on FALCON-1565:
---

[~Praveen]: I am not working on it. Thanks for picking it up!

> Listing API non-intuitive response if time > endTime
> 
>
> Key: FALCON-1565
> URL: https://issues.apache.org/jira/browse/FALCON-1565
> Project: Falcon
>  Issue Type: Bug
>Affects Versions: 0.8
> Environment: QA
>Reporter: Pragya Mittal
>Assignee: Praveen Adlakha
>  Labels: newbie
> Fix For: trunk
>
>
> While listing instances for an entity with a time > endTime, the response shows a 
> stack trace. An error message should be returned as the response instead.
> Feed definition is :
> {noformat}
> [feed XML elided by the mail archive; the surviving fragments show
> frequency minutes(2), timezone UTC, and a location
> path="/tmp/falcon-regression/FeedSlaMonitoring/input/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}"]
> {noformat}
> Listing API response
> {noformat}
> dataqa@lda01:/mnt/users/pragya/defn/sla$ falcon instance -type feed -name 
> sla-feed -start 2015-10-30T11:20Z -listing
> ERROR: Bad Request; standalone="yes"?>FAILEDua1/org.apache.falcon.FalconException::javax.ws.rs.WebApplicationException:
>  javax.xml.bind.UnmarshalException: unexpected element (uri:, 
> local:instancesResult). Expected elements are 
> {}feedInstanceResult,{}instance,{}result
> {noformat}
> Stack trace is :
> {noformat}
> 2015-10-28 12:15:54,414 ERROR - [1963200284@qtp-2030538903-5 - 
> 0f02eeb7-1f02-4bea-bbb7-85d5e61b568f:dataqa:GET//instance/listing/feed/sla-feed]
>  ~ Failed to get instances listing (AbstractInstanceManager:528)
> org.apache.falcon.FalconException: Specified End date 2015-10-28T12:15Z is 
> before the entity was scheduled 2015-10-30T11:20Z
>   at 
> org.apache.falcon.resource.AbstractInstanceManager.getStartAndEndDate(AbstractInstanceManager.java:853)
>   at 
> org.apache.falcon.resource.AbstractInstanceManager.getStartAndEndDate(AbstractInstanceManager.java:842)
>   at 
> org.apache.falcon.resource.AbstractInstanceManager.getListing(AbstractInstanceManager.java:524)
>   at 
> org.apache.falcon.resource.InstanceManager.getListing(InstanceManager.java:141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> {noformat}
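The fix the reporter asks for amounts to validating the requested window before building the listing, and returning the message as a clean response body rather than letting the exception surface as a stack trace. A minimal sketch of that check (names are illustrative assumptions, not Falcon's actual API):

```java
import java.time.Instant;

// Hypothetical sketch: reject a listing request whose end date precedes
// the entity's scheduled start, returning a plain error message instead
// of propagating the exception to the client.
public class ListingValidationSketch {

    // Returns an error message when the window is invalid, null otherwise.
    static String validateWindow(Instant entityStart, Instant requestEnd) {
        if (requestEnd.isBefore(entityStart)) {
            return "Specified End date " + requestEnd
                    + " is before the entity was scheduled " + entityStart;
        }
        return null;
    }

    public static void main(String[] args) {
        // Mirrors the dates from the stack trace above.
        System.out.println(validateWindow(
                Instant.parse("2015-10-30T11:20:00Z"),
                Instant.parse("2015-10-28T12:15:00Z")));
    }
}
```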





[jira] [Commented] (FALCON-1458) Update documentation on site and announce the 0.8 release

2015-11-17 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15008314#comment-15008314
 ] 

Sowmya Ramesh commented on FALCON-1458:
---

Done!

> Update documentation on site and announce the 0.8 release
> -
>
> Key: FALCON-1458
> URL: https://issues.apache.org/jira/browse/FALCON-1458
> Project: Falcon
>  Issue Type: Sub-task
>  Components: ease
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Release
> Fix For: 0.8
>
>
> * Update the documentation on site. 
> * Announce the release.





[jira] [Resolved] (FALCON-1450) Prepare Falcon Release v0.8

2015-11-17 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1450.
---
Resolution: Fixed

0.8 release done!

> Prepare Falcon Release v0.8
> ---
>
> Key: FALCON-1450
> URL: https://issues.apache.org/jira/browse/FALCON-1450
> Project: Falcon
>  Issue Type: Task
>  Components: ease
>Affects Versions: 0.7
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Release
> Fix For: 0.8
>
>
> This is a parent holding ticket for tracking all tasks relating to 0.8 
> release.





[jira] [Resolved] (FALCON-1458) Update documentation on site and announce the 0.8 release

2015-11-17 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1458.
---
Resolution: Fixed

> Update documentation on site and announce the 0.8 release
> -
>
> Key: FALCON-1458
> URL: https://issues.apache.org/jira/browse/FALCON-1458
> Project: Falcon
>  Issue Type: Sub-task
>  Components: ease
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: Release
> Fix For: 0.8
>
>
> * Update the documentation on site. 
> * Announce the release.





[jira] [Updated] (FALCON-1594) Update master changes.txt to change (Proposed Release version: 0.8) to Release version

2015-11-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1594:
--
Attachment: (was: FALCON-1594.V0.patch)

> Update master changes.txt to change (Proposed Release version: 0.8) to 
> Release version
> --
>
> Key: FALCON-1594
> URL: https://issues.apache.org/jira/browse/FALCON-1594
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: release
> Fix For: trunk, 0.8
>
>
> Update master changes.txt to change (Proposed Release version: 0.8) to 
> Release version after 0.8 release





[jira] [Updated] (FALCON-1594) Update master changes.txt to change (Proposed Release version: 0.8) to Release version

2015-11-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1594:
--
Attachment: FALCON-1594.V0.patch

> Update master changes.txt to change (Proposed Release version: 0.8) to 
> Release version
> --
>
> Key: FALCON-1594
> URL: https://issues.apache.org/jira/browse/FALCON-1594
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: release
> Fix For: trunk, 0.8
>
> Attachments: FALCON-1594.V0.patch
>
>
> Update master changes.txt to change (Proposed Release version: 0.8) to 
> Release version after 0.8 release





[jira] [Resolved] (FALCON-1454) Verify source tarball and run few end to end tests

2015-11-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh resolved FALCON-1454.
---
Resolution: Fixed

> Verify source tarball and run few end to end tests
> --
>
> Key: FALCON-1454
> URL: https://issues.apache.org/jira/browse/FALCON-1454
> Project: Falcon
>  Issue Type: Sub-task
>  Components: build-tools
>Affects Versions: 0.7
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: release
> Fix For: 0.8
>
>
> Verify the released source tarball and run a few tests in distributed and 
> standalone modes.
> Additionally, track +1s from the community.





[jira] [Updated] (FALCON-1594) Update master changes.txt to change (Proposed Release version: 0.8) to Release version

2015-11-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1594:
--
Attachment: FALCON-1594.V0.patch

> Update master changes.txt to change (Proposed Release version: 0.8) to 
> Release version
> --
>
> Key: FALCON-1594
> URL: https://issues.apache.org/jira/browse/FALCON-1594
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: release
> Fix For: trunk, 0.8
>
> Attachments: FALCON-1594.V0.patch
>
>
> Update master changes.txt to change (Proposed Release version: 0.8) to 
> Release version after 0.8 release





[jira] [Updated] (FALCON-1594) Update master changes.txt to change (Proposed Release version: 0.8) to Release version

2015-11-13 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1594:
--
Fix Version/s: 0.8

> Update master changes.txt to change (Proposed Release version: 0.8) to 
> Release version
> --
>
> Key: FALCON-1594
> URL: https://issues.apache.org/jira/browse/FALCON-1594
> Project: Falcon
>  Issue Type: Bug
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
>  Labels: release
> Fix For: trunk, 0.8
>
>
> Update master changes.txt to change (Proposed Release version: 0.8) to 
> Release version after 0.8 release





[jira] [Commented] (FALCON-1590) User friendly release notes

2015-11-10 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1545#comment-1545
 ] 

Sowmya Ramesh commented on FALCON-1590:
---

Thanks for uploading the document!

> User friendly release notes
> ---
>
> Key: FALCON-1590
> URL: https://issues.apache.org/jira/browse/FALCON-1590
> Project: Falcon
>  Issue Type: Sub-task
>  Components: ease
>Affects Versions: 0.8
>Reporter: Pallavi Rao
>Assignee: Sowmya Ramesh
> Fix For: 0.8
>
> Attachments: ReleaseNotes-ApacheFalcon-0.8.pdf
>
>
> From the 0.7 release onwards, we have been publishing user-friendly release 
> notes, published here -> 
> https://cwiki.apache.org/confluence/display/FALCON/Release+Notes
> We need something similar for 0.8 too.





[jira] [Commented] (FALCON-1590) User friendly release notes

2015-11-10 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1543#comment-1543
 ] 

Sowmya Ramesh commented on FALCON-1590:
---

[~sriksun]: It's sowmyaramesh. Thanks!

> User friendly release notes
> ---
>
> Key: FALCON-1590
> URL: https://issues.apache.org/jira/browse/FALCON-1590
> Project: Falcon
>  Issue Type: Sub-task
>  Components: ease
>Affects Versions: 0.8
>Reporter: Pallavi Rao
>Assignee: Sowmya Ramesh
> Fix For: 0.8
>
> Attachments: ReleaseNotes-ApacheFalcon-0.8.pdf
>
>
> From the 0.7 release onwards, we have been publishing user-friendly release 
> notes, published here -> 
> https://cwiki.apache.org/confluence/display/FALCON/Release+Notes
> We need something similar for 0.8 too.





[jira] [Commented] (FALCON-1569) Bug in setting the frequency of Feed retention coordinator

2015-11-09 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997421#comment-14997421
 ] 

Sowmya Ramesh commented on FALCON-1569:
---

commit d97052f3423fae8213e4e3bf677ee21b632b82b0
[~ajayyadava], [~pallavi.rao], [~pavan kumar]: Thanks for the quick review!

> Bug in setting the frequency of Feed retention coordinator 
> ---
>
> Key: FALCON-1569
> URL: https://issues.apache.org/jira/browse/FALCON-1569
> Project: Falcon
>  Issue Type: Bug
>  Components: retention
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
> Attachments: FALCON-1569.V0.patch
>
>
> Currently in FeedRetentionCoordinatorBuilder, timeUnit is used to determine 
> frequency of the retention coordinator.
> {code}
> TimeUnit timeUnit = entity.getFrequency().getTimeUnit();
> if (timeUnit == TimeUnit.hours || timeUnit == TimeUnit.minutes) {
> coord.setFrequency("${coord:hours(6)}");
> } else {
> coord.setFrequency("${coord:days(1)}");
> }
> {code}
> days(2) can equivalently be expressed as hours(48). If a user specifies 
> hours(48), the retention coordinator runs every 6 hours instead of daily, 
> wasting compute resources. Instead, compute the total time duration and use 
> that to determine the retention job frequency.
> Also fix it in FalconUnitTestBase.
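The duration-based fix described above can be sketched as follows. This is a hypothetical illustration, not Falcon's actual code: it uses `java.util.concurrent.TimeUnit` rather than Falcon's `Frequency.TimeUnit`, and decides on the normalized number of hours instead of the raw unit:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: pick the retention coordinator frequency from the
// feed's total duration, so hours(48) behaves the same as days(2).
public class RetentionFrequencySketch {

    static String retentionFrequency(long value, TimeUnit unit) {
        long hours = unit.toHours(value);
        // Feeds materializing more often than daily get 6-hourly cleanup;
        // anything with a period of a day or longer gets daily cleanup.
        return hours < 24 ? "${coord:hours(6)}" : "${coord:days(1)}";
    }

    public static void main(String[] args) {
        // hours(48) now maps to daily cleanup, not 6-hourly.
        System.out.println(retentionFrequency(48, TimeUnit.HOURS));
        System.out.println(retentionFrequency(2, TimeUnit.MINUTES));
    }
}
```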





[jira] [Commented] (FALCON-1595) Falcon server loses ability to communicate with HDFS over time

2015-11-09 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997368#comment-14997368
 ] 

Sowmya Ramesh commented on FALCON-1595:
---

[~bvellanki]: What is the root cause of this issue? Why doesn't the relogin 
done in AuthenticationInitializationService handle this case? I am trying to 
understand whether it's a one-off case where the token is just expiring and we 
try to dole out a FileSystem handle just before relogin. In that case, similar 
to checkTGTAndReloginFromKeytab, shouldn't we relogin when the ticket is close 
to expiry rather than waiting until it has expired, which is the current 
implementation?
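The proactive-relogin policy referenced here (as in Hadoop's `UserGroupInformation.checkTGTAndReloginFromKeytab()`) renews once a fraction of the ticket lifetime has elapsed, instead of waiting for expiry. A minimal sketch of that policy; the class name and the 80% window are illustrative assumptions, not Falcon's implementation:

```java
// Hypothetical sketch: decide whether to relogin based on how much of
// the Kerberos ticket's lifetime has already elapsed.
public class TgtRenewalPolicy {

    // Relogin once 80% of the ticket lifetime has passed (assumed window).
    static final double RENEW_WINDOW = 0.80;

    static boolean shouldRelogin(long startMillis, long endMillis, long nowMillis) {
        long lifetime = endMillis - startMillis;
        return nowMillis >= startMillis + (long) (lifetime * RENEW_WINDOW);
    }

    public static void main(String[] args) {
        long start = 0L;
        long end = 24L * 60 * 60 * 1000; // one-day ticket, as in this cluster
        // 20 hours in: past the 80% window, so relogin proactively.
        System.out.println(shouldRelogin(start, end, 20L * 60 * 60 * 1000));
        // 12 hours in: still well within the window.
        System.out.println(shouldRelogin(start, end, 12L * 60 * 60 * 1000));
    }
}
```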

> Falcon server loses ability to communicate with HDFS over time
> --
>
> Key: FALCON-1595
> URL: https://issues.apache.org/jira/browse/FALCON-1595
> Project: Falcon
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Balu Vellanki
>Assignee: Balu Vellanki
> Attachments: FALCON-1595.patch
>
>
> In a kerberos secured cluster where the Kerberos ticket validity is one day, 
> Falcon server eventually lost the ability to read and write to and from HDFS. 
> In the logs we saw typical Kerberos-related errors like "GSSException: No 
> valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)". 
> {code}
> 2015-10-28 00:04:59,517 INFO  - [LaterunHandler:] ~ Creating FS impersonating 
> user testUser (HadoopClientFactory:197)
> 2015-10-28 00:04:59,519 WARN  - [LaterunHandler:] ~ Exception encountered 
> while connecting to the server : javax.security.sasl.SaslException: GSS 
> initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Failed to find any Kerberos tgt)] (Client:680)
> 2015-10-28 00:04:59,520 WARN  - [LaterunHandler:] ~ Late Re-run failed for 
> instance sample-process:2015-10-28T03:58Z after 42 
> (AbstractRerunConsumer:84)
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: 
> "sample.host.com/127.0.0.1"; destination host is: "sample.host.com":8020; 
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1431)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1358)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>   at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy23.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
>   at 
> org.apache.falcon.rerun.handler.LateRerunConsumer.detectLate(LateRerunConsumer.java:108)
>   at 
> org.apache.falcon.rerun.handler.LateRerunConsumer.handleRerun(LateRerunConsumer.java:67)
>   at 
> org.apache.falcon.rerun.handler.LateRerunConsumer.handleRerun(LateRerunConsumer.java:47)
>   at 
> org.apache.falcon.rerun.handler.AbstractRerunConsumer.run(AbstractRerunConsumer.java:73)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS 
> initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Failed to find any Kerberos tgt)]
>   at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:685)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at 
> 

[jira] [Commented] (FALCON-1595) Falcon server loses ability to communicate with HDFS over time

2015-11-09 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997336#comment-14997336
 ] 

Sowmya Ramesh commented on FALCON-1595:
---

[~bvellanki]: Don't we already do that in AuthenticationInitializationService? 
Why is it required in HadoopClientFactory too? Ideally that logic shouldn't be 
added to HadoopClientFactory, as it is mainly a factory implementation that 
doles out FileSystem handles based on the logged-in user.

> Falcon server loses ability to communicate with HDFS over time
> --
>
> Key: FALCON-1595
> URL: https://issues.apache.org/jira/browse/FALCON-1595
> Project: Falcon
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Balu Vellanki
>Assignee: Balu Vellanki
> Attachments: FALCON-1595.patch
>
>
> In a kerberos secured cluster where the Kerberos ticket validity is one day, 
> Falcon server eventually lost the ability to read and write to and from HDFS. 
> In the logs we saw typical Kerberos-related errors like "GSSException: No 
> valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)". 
> {code}
> 2015-10-28 00:04:59,517 INFO  - [LaterunHandler:] ~ Creating FS impersonating 
> user testUser (HadoopClientFactory:197)
> 2015-10-28 00:04:59,519 WARN  - [LaterunHandler:] ~ Exception encountered 
> while connecting to the server : javax.security.sasl.SaslException: GSS 
> initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Failed to find any Kerberos tgt)] (Client:680)
> 2015-10-28 00:04:59,520 WARN  - [LaterunHandler:] ~ Late Re-run failed for 
> instance sample-process:2015-10-28T03:58Z after 42 
> (AbstractRerunConsumer:84)
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: 
> "sample.host.com/127.0.0.1"; destination host is: "sample.host.com":8020; 
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1431)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1358)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>   at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy23.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
>   at 
> org.apache.falcon.rerun.handler.LateRerunConsumer.detectLate(LateRerunConsumer.java:108)
>   at 
> org.apache.falcon.rerun.handler.LateRerunConsumer.handleRerun(LateRerunConsumer.java:67)
>   at 
> org.apache.falcon.rerun.handler.LateRerunConsumer.handleRerun(LateRerunConsumer.java:47)
>   at 
> org.apache.falcon.rerun.handler.AbstractRerunConsumer.run(AbstractRerunConsumer.java:73)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS 
> initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Failed to find any Kerberos tgt)]
>   at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:685)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:648)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:735)
>   at 

[jira] [Created] (FALCON-1594) Update master changes.txt to change (Proposed Release version: 0.8) to Release version

2015-11-09 Thread Sowmya Ramesh (JIRA)
Sowmya Ramesh created FALCON-1594:
-

 Summary: Update master changes.txt to change (Proposed Release 
version: 0.8) to Release version
 Key: FALCON-1594
 URL: https://issues.apache.org/jira/browse/FALCON-1594
 Project: Falcon
  Issue Type: Bug
Reporter: Sowmya Ramesh
Assignee: Sowmya Ramesh
 Fix For: trunk


Update master changes.txt to change (Proposed Release version: 0.8) to Release 
version after 0.8 release





[jira] [Commented] (FALCON-1585) Add Falcon recipes DR document

2015-11-06 Thread Sowmya Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/FALCON-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14994286#comment-14994286
 ] 

Sowmya Ramesh commented on FALCON-1585:
---

[~ajayyadava]: I will take care of committing as I have to start working on 
RC0. Thanks for the review and all your help for 0.8 release!

> Add Falcon recipes DR document
> --
>
> Key: FALCON-1585
> URL: https://issues.apache.org/jira/browse/FALCON-1585
> Project: Falcon
>  Issue Type: Bug
>  Components: docs
>Reporter: Peeyush Bishnoi
>Assignee: Peeyush Bishnoi
> Fix For: trunk, 0.8
>
> Attachments: FALCON-1585.patch, FALCON-1585.v1.patch
>
>
> Add Falcon recipes (HDFS, Hive) DR document.





[jira] [Updated] (FALCON-1569) Bug in setting the frequency of Feed retention coordinator

2015-11-06 Thread Sowmya Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/FALCON-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sowmya Ramesh updated FALCON-1569:
--
Attachment: FALCON-1569.V0.patch

> Bug in setting the frequency of Feed retention coordinator 
> ---
>
> Key: FALCON-1569
> URL: https://issues.apache.org/jira/browse/FALCON-1569
> Project: Falcon
>  Issue Type: Bug
>  Components: retention
>Reporter: Sowmya Ramesh
>Assignee: Sowmya Ramesh
> Fix For: trunk
>
> Attachments: FALCON-1569.V0.patch
>
>
> Currently in FeedRetentionCoordinatorBuilder, timeUnit is used to determine 
> frequency of the retention coordinator.
> {code}
> TimeUnit timeUnit = entity.getFrequency().getTimeUnit();
> if (timeUnit == TimeUnit.hours || timeUnit == TimeUnit.minutes) {
> coord.setFrequency("${coord:hours(6)}");
> } else {
> coord.setFrequency("${coord:days(1)}");
> }
> {code}
> days(2) can equivalently be expressed as hours(48). If a user specifies 
> hours(48), the retention coordinator runs every 6 hours instead of daily, 
> wasting compute resources. Instead, compute the total time duration and use 
> that to determine the retention job frequency.
> Also fix it in FalconUnitTestBase.




