[jira] [Created] (HADOOP-14109) improvements to S3GuardTool destroy command

2017-02-23 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14109:
---

 Summary: improvements to S3GuardTool destroy command
 Key: HADOOP-14109
 URL: https://issues.apache.org/jira/browse/HADOOP-14109
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Steve Loughran
Priority: Minor


The S3GuardTool destroy operation initializes DynamoDB, and in doing so has 
some issues:

# if the version of the table is incompatible, init fails, so the table isn't 
deletable
# if the system is configured to create the table on demand, then whenever 
destroy is called for a table that doesn't exist, it gets created and then 
destroyed.
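A minimal sketch of the destroy semantics being asked for, with a hypothetical {{TableStore}} interface standing in for the real S3Guard/DynamoDB classes (none of these names are from the actual codebase): probe for the table first, skip version checks on the delete path, and never create a table on demand just to destroy it.

```java
// Hedged sketch: TableStore and the names below are hypothetical, not the
// actual S3Guard/DynamoDB API. It illustrates the two fixes: (1) don't run
// version checks before a delete, (2) never auto-create a table that is
// about to be destroyed.
import java.util.HashSet;
import java.util.Set;

public class DestroySketch {
    /** Minimal stand-in for a DynamoDB-like table store (hypothetical). */
    interface TableStore {
        boolean tableExists(String name);
        void deleteTable(String name);
    }

    static class InMemoryStore implements TableStore {
        final Set<String> tables = new HashSet<>();
        public boolean tableExists(String name) { return tables.contains(name); }
        public void deleteTable(String name) { tables.remove(name); }
    }

    /**
     * Destroy a table without initializing it first: if the table is absent,
     * report that and return rather than creating it on demand.
     * @return true iff a table was actually deleted.
     */
    static boolean destroy(TableStore store, String table) {
        if (!store.tableExists(table)) {
            System.out.println("Table " + table + " does not exist; nothing to destroy");
            return false;
        }
        // No version-marker validation here: an incompatible table
        // must still be deletable.
        store.deleteTable(table);
        return true;
    }

    public static void main(String[] args) {
        InMemoryStore store = new InMemoryStore();
        store.tables.add("metadata");
        System.out.println(destroy(store, "metadata")); // existing table: deleted
        System.out.println(destroy(store, "metadata")); // already gone: no-op
    }
}
```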





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13271) Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory

2017-02-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-13271:
-

> Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory
> -
>
> Key: HADOOP-13271
> URL: https://issues.apache.org/jira/browse/HADOOP-13271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> I'm seeing an intermittent failure of 
> {{TestS3AContractRootDir.testListEmptyRootDirectory}}
> The sequence {{deleteFiles(listStatus(Path("/")))}} is failing because the 
> file to delete is root ... yet the code is passing in the children of /, not / 
> itself.
> Hypothesis: when you call listStatus on an empty root dir, you get a file 
> entry back that says isFile, not isDirectory.
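One defensive fix can be sketched with plain strings rather than the real Hadoop {{Path}}/{{FileStatus}} types (an assumption for illustration only): filter any root entry out of the listing before the bulk delete, so a bad status entry for / can never escalate into an attempted root delete.

```java
// Hedged sketch, not the actual S3A contract-test code: drop the root
// path from a listing before a bulk delete. Plain strings stand in for
// Hadoop's Path/FileStatus types here.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RootFilterSketch {
    /** Return the listing with any entry for the root path removed. */
    static List<String> withoutRoot(List<String> listing) {
        List<String> out = new ArrayList<>();
        for (String p : listing) {
            if (!p.equals("/")) {   // never hand "/" to a delete loop
                out.add(p);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> listing = Arrays.asList("/", "/a", "/b");
        System.out.println(withoutRoot(listing)); // [/a, /b]
    }
}
```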






[jira] [Created] (HADOOP-14110) mark S3AFileSystem.getAmazonClient package private, export getBucketLocation(fs.getBucket());

2017-02-23 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14110:
---

 Summary: mark S3AFileSystem.getAmazonClient package private, 
export getBucketLocation(fs.getBucket());
 Key: HADOOP-14110
 URL: https://issues.apache.org/jira/browse/HADOOP-14110
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Steve Loughran
Priority: Critical


I've just noticed we are making the S3 client visible again. I really don't 
want that to happen, as at that point you lose control of how it gets used. 
Better to make getBucketLocation a visible method, possibly on the 
internal-use-only {{WriteOperationHelper}} class.
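The encapsulation being proposed can be sketched like this; the {{Client}} interface below is a hypothetical stand-in for the AWS SDK client, not the real S3A API, and the class names are made up for illustration:

```java
// Hedged sketch of the encapsulation idea: keep the raw client getter
// package-private and expose only the one narrow operation publicly.
public class EncapsulationSketch {
    /** Stand-in for the AWS S3 client (hypothetical, one-method). */
    interface Client {
        String getBucketLocation(String bucket);
    }

    static class FileSystemSketch {
        private final Client client;
        private final String bucket;

        FileSystemSketch(Client client, String bucket) {
            this.client = client;
            this.bucket = bucket;
        }

        /** Package-private: code outside the package cannot grab the raw client. */
        Client getAmazonClient() {
            return client;
        }

        /** Public, narrow entry point: expose only the operation callers need. */
        public String getBucketLocation() {
            return client.getBucketLocation(bucket);
        }
    }

    public static void main(String[] args) {
        FileSystemSketch fs = new FileSystemSketch(b -> "us-west-2", "data");
        System.out.println(fs.getBucketLocation()); // us-west-2
    }
}
```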






[jira] [Created] (HADOOP-14111) cut some obsolete, ignored s3 tests in TestS3Credentials

2017-02-23 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14111:
---

 Summary: cut some obsolete, ignored s3 tests in TestS3Credentials
 Key: HADOOP-14111
 URL: https://issues.apache.org/jira/browse/HADOOP-14111
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Priority: Minor


There are a couple of tests in {{TestS3Credentials}} which are tagged 
{{@Ignore}}. They aren't running, still have a maintenance cost, and appear in 
test runs as skipped. 

Proposed: cut them out entirely.







[jira] [Created] (HADOOP-14112) Über-jira ADL Phase I: Stabilization

2017-02-23 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14112:
---

 Summary: Über-jira ADL Phase I: Stabilization
 Key: HADOOP-14112
 URL: https://issues.apache.org/jira/browse/HADOOP-14112
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran


Uber-JIRA to track the work needed for ADL to be considered stable.






[jira] [Created] (HADOOP-14113) review/correct ADL Docs

2017-02-23 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14113:
---

 Summary: review/correct ADL Docs
 Key: HADOOP-14113
 URL: https://issues.apache.org/jira/browse/HADOOP-14113
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/adl
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


Do a quick review of the ADL docs and edit where appropriate.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-02-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/

[Feb 22, 2017 11:43:48 AM] (stevel) HADOOP-14099 Split S3 testing documentation 
out into its own file.
[Feb 22, 2017 5:41:07 PM] (mingma) HDFS-11411. Avoid OutOfMemoryError in 
TestMaintenanceState test runs.
[Feb 22, 2017 7:17:09 PM] (wangda) YARN-6143. Fix incompatible issue caused by 
YARN-3583. (Sunil G via
[Feb 22, 2017 9:34:20 PM] (liuml07) HADOOP-14102. Relax error message assertion 
in S3A test
[Feb 22, 2017 11:16:09 PM] (wang) HDFS-11438. Fix typo in error message of 
StoragePolicyAdmin tool.
[Feb 22, 2017 11:38:11 PM] (templedf) MAPREDUCE-6825. 
YARNRunner#createApplicationSubmissionContext method is
[Feb 22, 2017 11:46:07 PM] (kasha) YARN-6210. FairScheduler: Node reservations 
can interfere with
[Feb 22, 2017 11:58:49 PM] (kasha) YARN-6194. Cluster capacity in 
SchedulingPolicy is updated only on
[Feb 23, 2017 12:33:38 AM] (jing9) HDFS-4025. QJM: Sychronize past log segments 
to JNs that missed them.




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/diff-compile-javac-root.txt
  [168K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [128K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/326/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-02-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/

[Feb 22, 2017 5:41:07 PM] (mingma) HDFS-11411. Avoid OutOfMemoryError in 
TestMaintenanceState test runs.
[Feb 22, 2017 7:17:09 PM] (wangda) YARN-6143. Fix incompatible issue caused by 
YARN-3583. (Sunil G via
[Feb 22, 2017 9:34:20 PM] (liuml07) HADOOP-14102. Relax error message assertion 
in S3A test
[Feb 22, 2017 11:16:09 PM] (wang) HDFS-11438. Fix typo in error message of 
StoragePolicyAdmin tool.
[Feb 22, 2017 11:38:11 PM] (templedf) MAPREDUCE-6825. 
YARNRunner#createApplicationSubmissionContext method is
[Feb 22, 2017 11:46:07 PM] (kasha) YARN-6210. FairScheduler: Node reservations 
can interfere with
[Feb 22, 2017 11:58:49 PM] (kasha) YARN-6194. Cluster capacity in 
SchedulingPolicy is updated only on
[Feb 23, 2017 12:33:38 AM] (jing9) HDFS-4025. QJM: Sychronize past log segments 
to JNs that missed them.
[Feb 23, 2017 8:49:07 AM] (sunilg) YARN-6211. Synchronization improvement for 
moveApplicationAcrossQueues




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.TestEncryptedTransfer 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   
hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-compile-root.txt
  [124K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-compile-root.txt
  [124K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-compile-root.txt
  [124K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [272K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/238/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-jav

[jira] [Created] (HADOOP-14114) S3A can no longer handle unencoded + in URIs

2017-02-23 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14114:
--

 Summary: S3A can no longer handle unencoded + in URIs 
 Key: HADOOP-14114
 URL: https://issues.apache.org/jira/browse/HADOOP-14114
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory


Amazon secret access keys can include alphanumeric characters, but also / and + 
(I wish there were an official source that was really specific about what they 
can contain, but I'll have to rely on a few blog posts and my own experience).

Keys containing slashes used to be impossible to embed in the URL (e.g. 
s3a://access_key:secret_key@bucket/) but it is now possible via URL encoding. 
Pluses used to work, but that is now *only* possible via URL encoding.

In the case of pluses, they don't appear to cause any other problems for 
parsing. So IMO the best all-around solution here is for people to always 
URL-encode these keys; but so that keys which used to work fine can continue to 
work, all we need to do is detect an unencoded plus, log a warning, and 
re-encode it for the user.
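A sketch of that detect-warn-re-encode step, assuming the decoder on this path turns an unencoded {{+}} into a space (as {{java.net.URLDecoder}} does for query-style decoding); the method name and warning text are made up for illustration, not the actual S3A patch:

```java
// Hedged sketch of the proposed fallback, JDK-only. Assumption: an
// unencoded '+' in the secret arrives as a space after decoding, so a
// space in a decoded secret is taken to be a mangled '+'.
public class PlusKeySketch {
    /** Restore a '+' that decoding turned into a space, warning as we go. */
    static String recoverSecret(String decodedSecret) {
        if (decodedSecret.contains(" ")) {
            System.err.println(
                "WARNING: '+' in the secret key should be URL-encoded as %2B");
            return decodedSecret.replace(" ", "+");
        }
        return decodedSecret;
    }

    public static void main(String[] args) {
        // "ab+cd" embedded unencoded arrives as "ab cd" after decoding.
        System.out.println(recoverSecret("ab cd")); // ab+cd
        System.out.println(recoverSecret("abcd"));  // abcd, unchanged
    }
}
```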






Hadoop 3.0.0-alpha2 startup issue

2017-02-23 Thread Pol, Daniel (BigData)
Hi !

I have a lab system running ok with alpha1 and I wanted to switch to alpha2. 
Unfortunately I ran into an issue trying to bring up HDFS, even after 
reformatting it. I keep getting this type of error in the HDFS daemon logs:
2017-02-22 15:05:45,577 ERROR namenode.NameNode (NameNode.java:main(1709)) - 
Failed to start namenode.
java.lang.AbstractMethodError: 
org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.init(Lorg/apache/commons/configuration2/SubsetConfiguration;)V
at 
org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:208)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:531)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:503)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:479)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:188)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:163)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:62)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:58)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1635)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1704)
2017-02-22 15:05:45,578 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
Exiting with status 1

Wondering if you have seen similar errors or know how to fix this. I made sure 
all my settings point to the alpha2 install.

Have a nice day,
Dani
"The more I learn, the less I know"



Re: Hadoop 3.0.0-alpha2 startup issue

2017-02-23 Thread Wei-Chiu Chuang
Haven't looked into this in detail, but we bumped the commons-configuration 
version in Hadoop 3.0 alpha2 in HADOOP-13660 (Upgrade commons-configuration 
version).

Pretty sure some APIs were changed in between. 

I’ll look into this and file a jira if this is a real issue.

Thanks!
Wei-Chiu Chuang
A very happy Clouderan

> On Feb 23, 2017, at 12:31 PM, Pol, Daniel (BigData)  
> wrote:
> 
> java.lang.AbstractMethodError



[jira] [Created] (HADOOP-14115) SimpleDateFormat instances are constructed w/default Locale, causing malformed dates on some platforms

2017-02-23 Thread Hoss Man (JIRA)
Hoss Man created HADOOP-14115:
-

 Summary: SimpleDateFormat instances are constructed w/default Locale, 
causing malformed dates on some platforms
 Key: HADOOP-14115
 URL: https://issues.apache.org/jira/browse/HADOOP-14115
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hoss Man


In at least one place I know of in Hadoop, {{SimpleDateFormat}} is used to 
serialize a {{Date}} object in a format intended for machine consumption -- and 
should be following strict formatting rules -- but the {{SimpleDateFormat}} 
instance is not constructed with an explicit {{Locale}}, so the platform default 
is used instead.  This causes things like "Day name in week" ({{E}}) to 
generate unexpected results depending on the Locale of the machine where the 
code is running, resulting in date-time strings that violate the formatting 
rules.

A specific example of this is {{AuthenticationFilter.createAuthCookie}} which 
has code that looks like this...

{code}
  Date date = new Date(expires);
  SimpleDateFormat df = new SimpleDateFormat("EEE, " +
      "dd-MMM-yyyy HH:mm:ss zzz");
  df.setTimeZone(TimeZone.getTimeZone("GMT"));
  sb.append("; Expires=").append(df.format(date));
{code}

...which can cause invalid expiration attributes in the {{Set-Cookie}} header 
like this (as noted by Apache HttpComponents' {{ResponseProcessCookies}} class)...

{noformat}
WARN: Invalid cookie header: "Set-Cookie: hadoop.auth=; Path=/; 
Domain=127.0.0.1; Expires=Ara, 01-Sa-1970 00:00:00 GMT; HttpOnly". Invalid 
'expires' attribute: Ara, 01-Sa-1970 00:00:00 GMT
{noformat}

There are very likely many other places in the hadoop code base where the 
default {{Locale}} is being unintentionally used when formatting Dates/Numbers.
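The fix for the specific example is to pass an explicit {{Locale}} to the constructor. A self-contained sketch (the cookie-building code around it is elided; {{Locale.US}} matches what HTTP cookie expiry dates expect):

```java
// Minimal sketch of the fix: an explicit Locale makes locale-sensitive
// fields like day-of-week (EEE) and month (MMM) render in English,
// regardless of the platform default Locale.
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class CookieDateSketch {
    static String formatExpires(long expires) {
        SimpleDateFormat df = new SimpleDateFormat(
            "EEE, dd-MMM-yyyy HH:mm:ss zzz", Locale.US); // explicit Locale
        df.setTimeZone(TimeZone.getTimeZone("GMT"));
        return df.format(new Date(expires));
    }

    public static void main(String[] args) {
        // Epoch zero formats identically on any platform default Locale.
        System.out.println(formatExpires(0L)); // Thu, 01-Jan-1970 00:00:00 GMT
    }
}
```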








Re: Hadoop 3.0.0-alpha2 startup issue

2017-02-23 Thread Wei-Chiu Chuang
IIUC, HadoopTimelineMetricsSink is a class in the Ambari project, so I filed 
AMBARI-20156  to track this 
bug.

Basically what happened was a Hadoop metrics API change, because we bumped the 
major version of commons-configuration. For Ambari to adopt this change, the 
code will need to be patched (s/configuration/configuration2/) and recompiled.

I also updated HADOOP-13660 
 and marked it as an 
incompatible change as a result.

You should be able to work around this issue by disabling this metrics sink in 
Ambari (not sure how to, though), but you will then lose those metrics, of 
course.

Hope that helps,
Wei-Chiu Chuang
A very happy Clouderan

> On Feb 23, 2017, at 12:31 PM, Pol, Daniel (BigData)  
> wrote:
> 
> Hi !
> 
> I have a lab system running ok with alpha1 and I wanted to switch to alpha2. 
> Unfortunately I run into issue trying to bring up HDFS, even after 
> reformatting it. I keep getting this type of error in the HDFS daemon logs:
> 2017-02-22 15:05:45,577 ERROR namenode.NameNode (NameNode.java:main(1709)) - 
> Failed to start namenode.
> java.lang.AbstractMethodError: 
> org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.init(Lorg/apache/commons/configuration2/SubsetConfiguration;)V
>at 
> org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:208)
>at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:531)
>at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:503)
>at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:479)
>at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:188)
>at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:163)
>at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:62)
>at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:58)
>at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1635)
>at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1704)
> 2017-02-22 15:05:45,578 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status 1
> 
> Wondering if you have seen similar errors or how to fix it. I made sure all 
> my settings point to alpha2 install
> 
> Have a nice day,
> Dani
> "The more I learn, the less I know"
> 



[jira] [Created] (HADOOP-14116) FailoverOnNetworkExceptionRetry does not wait when failover on certain exception

2017-02-23 Thread Jian He (JIRA)
Jian He created HADOOP-14116:


 Summary: FailoverOnNetworkExceptionRetry does not wait when 
failover on certain exception 
 Key: HADOOP-14116
 URL: https://issues.apache.org/jira/browse/HADOOP-14116
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He


In the code below, when doing a failover it does not wait as the other 
conditions do, which leads to a busy loop. 
{code}
} else if (e instanceof SocketException
    || (e instanceof IOException && !(e instanceof RemoteException))) {
  if (isIdempotentOrAtMostOnce) {
    return RetryAction.FAILOVER_AND_RETRY;
{code}
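The proposed behavior can be sketched with simplified stand-in types (this {{RetryAction}} is not Hadoop's, and {{java.rmi.RemoteException}} stands in for Hadoop's IPC one): attach a backoff delay to the failover decision, so the caller sleeps between attempts instead of spinning.

```java
// Hedged sketch with stand-in types: on a network-style failure, fail
// over but carry a backoff delay so the retry loop cannot go busy.
import java.io.IOException;
import java.net.SocketException;
import java.rmi.RemoteException;

public class RetrySketch {
    /** Simplified retry decision: an action plus a delay in milliseconds. */
    static class RetryAction {
        final String kind;
        final long delayMillis;
        RetryAction(String kind, long delayMillis) {
            this.kind = kind;
            this.delayMillis = delayMillis;
        }
    }

    static RetryAction shouldRetry(Exception e, boolean isIdempotentOrAtMostOnce,
                                   long failoverSleepMillis) {
        if (e instanceof SocketException
            || (e instanceof IOException && !(e instanceof RemoteException))) {
            if (isIdempotentOrAtMostOnce) {
                // The fix: attach a sleep to the failover instead of
                // retrying immediately with zero delay.
                return new RetryAction("FAILOVER_AND_RETRY", failoverSleepMillis);
            }
        }
        return new RetryAction("FAIL", 0);
    }

    public static void main(String[] args) {
        RetryAction a = shouldRetry(new SocketException("reset"), true, 500);
        System.out.println(a.kind + " after " + a.delayMillis + "ms");
    }
}
```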






[jira] [Created] (HADOOP-14117) TestUpdatePipelineWithSnapshots#testUpdatePipelineAfterDelete fails with bind exception

2017-02-23 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-14117:
-

 Summary: 
TestUpdatePipelineWithSnapshots#testUpdatePipelineAfterDelete fails with bind 
exception
 Key: HADOOP-14117
 URL: https://issues.apache.org/jira/browse/HADOOP-14117
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula


{noformat}

at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:317)
at 
org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1100)
at 
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1131)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1193)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1049)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:169)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:885)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:721)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:947)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:926)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1635)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2080)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2054)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots.testUpdatePipelineAfterDelete(TestUpdatePipelineWithSnapshots.java:100)
{noformat}

 *reference* 
https://builds.apache.org/job/PreCommit-HDFS-Build/18434/testReport/
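A common remedy for this class of flakiness, sketched here in isolation (not the actual MiniDFSCluster fix), is to bind to port 0 and let the OS pick a free ephemeral port, then read back the port actually assigned:

```java
// Hedged sketch: binding to port 0 asks the OS for any free ephemeral
// port, which avoids BindException races between concurrent test runs.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortSketch {
    static int bindEphemeral() throws IOException {
        try (ServerSocket socket = new ServerSocket()) {
            socket.bind(new InetSocketAddress("127.0.0.1", 0)); // 0 = any free port
            return socket.getLocalPort(); // the port the OS actually chose
        }
    }

    public static void main(String[] args) throws IOException {
        int port = bindEphemeral();
        System.out.println("bound to free port " + port);
    }
}
```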






[jira] [Reopened] (HADOOP-14091) AbstractFileSystem implementation for 'wasbs' scheme

2017-02-23 Thread Varada Hemeswari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varada Hemeswari reopened HADOOP-14091:
---

> AbstractFileSystem implementation for 'wasbs' scheme
> ---
>
> Key: HADOOP-14091
> URL: https://issues.apache.org/jira/browse/HADOOP-14091
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
> Environment: humboldt
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: SECURE, WASB
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14091.001.patch, HADOOP-14091.002.patch
>
>
> Currently org.apache.hadoop.fs.azure.Wasb provides the AbstractFileSystem 
> implementation for the 'wasb' scheme.
> This task refers to providing an AbstractFileSystem implementation for the 
> 'wasbs' scheme.






[jira] [Resolved] (HADOOP-14091) AbstractFileSystem implementation for 'wasbs' scheme

2017-02-23 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HADOOP-14091.

Resolution: Fixed

Sure, but let's track that in our private channel; this one is for the 
community effort. Thanks

> AbstractFileSystem implementation for 'wasbs' scheme
> ---
>
> Key: HADOOP-14091
> URL: https://issues.apache.org/jira/browse/HADOOP-14091
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
> Environment: humboldt
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: SECURE, WASB
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14091.001.patch, HADOOP-14091.002.patch
>
>
> Currently org.apache.hadoop.fs.azure.Wasb provides the AbstractFileSystem 
> implementation for the 'wasb' scheme.
> This task refers to providing an AbstractFileSystem implementation for the 
> 'wasbs' scheme.


