[DISCUSS] Move to gitbox

2018-12-07 Thread Akira Ajisaka
Hi all,

The Apache Hadoop git repository is on the git-wip-us server, which will
be decommissioned.
If there are no objections, I'll file a JIRA ticket with INFRA to
migrate to https://gitbox.apache.org/ and update the documentation.

According to ASF infra team, the timeframe is as follows:

> - December 9th 2018 -> January 9th 2019: Voluntary (coordinated) relocation
> - January 9th -> February 6th: Mandated (coordinated) relocation
> - February 7th: All remaining repositories are mass migrated.
> This timeline may change to accommodate various scenarios.

If we reach consensus by January 9th, I can file a ticket with INFRA and
migrate the repository.
Even if we cannot reach consensus, the repository will be migrated by
February 7th.

Regards,
Akira

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Merge HDFS-12943 branch to trunk - Consistent Reads from Standby

2018-12-07 Thread Konstantin Shvachko
Hi Daryn,

Wanted to back up Chen's earlier response to your concerns about rotating
calls in the call queue.
Our design:
1. Directly targets the livelock problem by rejecting calls on the Observer
that are not likely to be responded to in a timely manner: HDFS-13873.
2. Rotates the call queue only on Observers, and never on the active NN, so
the active NN stays free of attacks like the one you suggest.
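For readers following along, the Observer-side gating described in points 1 and 2 can be sketched roughly as follows (hypothetical names, not the actual HDFS-12943 classes; the real logic lives in the branch's sub-tasks):

```java
// Illustrative sketch of observer-side read gating. A read is served only
// once the observer's applied transaction id has caught up with the state
// id the client last observed on the active NN; calls that would wait too
// long are rejected outright rather than requeued forever (the HDFS-13873
// mitigation against livelock).
public class ObserverReadGate {
    private long appliedTxId;          // last edit applied on this observer
    private final long maxLagTxns;     // reject reads lagging beyond this

    public ObserverReadGate(long maxLagTxns) {
        this.maxLagTxns = maxLagTxns;
    }

    /** Called as the observer tails edits from the journal. */
    public synchronized void applyEdits(long txId) {
        appliedTxId = Math.max(appliedTxId, txId);
    }

    /** True if the call can be answered now. */
    public synchronized boolean canServe(long clientSeenTxId) {
        return appliedTxId >= clientSeenTxId;
    }

    /** True if the observer is so far behind that the call should be
     *  rejected instead of requeued, keeping handlers from looping. */
    public synchronized boolean shouldReject(long clientSeenTxId) {
        return clientSeenTxId - appliedTxId > maxLagTxns;
    }
}
```

A call that neither condition admits is requeued, but only on Observers; the active NN never requeues.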

If this is a satisfactory mitigation for the problem, could you please
reconsider your -1, so that people can continue voting on this thread.

Thanks,
--Konst

On Thu, Dec 6, 2018 at 10:38 AM Daryn Sharp  wrote:

> -1 pending additional info.  After a cursory scan, I have serious concerns
> regarding the design.  This seems like a feature that should have been
> purely implemented in hdfs w/o touching the common IPC layer.
>
> The biggest issue is in the alignment context.  Its purpose appears to be
> to allow handlers to reinsert calls back into the call queue.  That's
> completely unacceptable.  A buggy or malicious client can easily cause
> livelock in the IPC layer, with handlers looping only on calls that never
> satisfy the condition.  Why is this not implemented via RetriableExceptions?
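The RetriableException alternative suggested above could be sketched roughly like this (hypothetical names; the sketch uses an unchecked exception for brevity, whereas Hadoop's actual org.apache.hadoop.ipc.RetriableException is a checked IOException). Instead of a handler reinserting an unready call, the server throws immediately and the client retries, so the handler pool never loops:

```java
// Sketch of the retriable-exception alternative: the server either answers
// or throws; it never requeues. A buggy client then only hurts itself,
// not the shared handler pool.
public class RetriableReadSketch {
    public static class RetriableException extends RuntimeException {
        public RetriableException(String msg) { super(msg); }
    }

    /** Server side: answer if caught up, otherwise throw back to the client. */
    public static String serveRead(long appliedTxId, long clientSeenTxId) {
        if (appliedTxId < clientSeenTxId) {
            throw new RetriableException("observer behind, retry");
        }
        return "result@" + appliedTxId;
    }

    /** Client side: bounded retry (backoff sleeps elided for brevity).
     *  Each element of observerTxIds models the state seen on one attempt. */
    public static String readWithRetry(long[] observerTxIds, long seenTxId) {
        RetriableException last = null;
        for (long txId : observerTxIds) {
            try {
                return serveRead(txId, seenTxId);
            } catch (RetriableException e) {
                last = e;   // back off, then retry against newer state
            }
        }
        throw last;
    }
}
```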
>
> On Thu, Dec 6, 2018 at 1:24 AM Yongjun Zhang 
> wrote:
>
>> Great work guys.
>>
>> I wonder if we can elaborate on the impact of not having #2 fixed, and why
>> #2 is not needed for the feature to be complete?
>> 2. Need to fix automatic failover with ZKFC. Currently it does not
>> know about ObserverNodes and tries to convert them to SBNs.
>>
>> Thanks.
>> --Yongjun
>>
>>
>> On Wed, Dec 5, 2018 at 5:27 PM Konstantin Shvachko 
>> wrote:
>>
>> > Hi Hadoop developers,
>> >
>> > I would like to propose to merge to trunk the feature branch HDFS-12943
>> > for Consistent Reads from Standby Node. The feature is intended to scale
>> > read RPC workloads. On large clusters, reads comprise 95% of all RPCs to
>> > the NameNode. We should be able to accommodate higher overall RPC
>> > workloads (up to 4x by some estimates) by adding multiple ObserverNodes.
>> >
>> > The main functionality has been implemented; see sub-tasks of HDFS-12943.
>> > We followed up with the test plan. Testing was done on two independent
>> > clusters (see HDFS-14058 and HDFS-14059) with security enabled.
>> > We ran standard HDFS commands, MR jobs, and admin commands including
>> > manual failover.
>> > We know of one cluster running this feature in production.
>> >
>> > There are a few outstanding issues:
>> > 1. Need to provide proper documentation - a user guide for the new feature
>> > 2. Need to fix automatic failover with ZKFC. Currently it does not
>> > know about ObserverNodes and tries to convert them to SBNs.
>> > 3. Scale testing and performance fine-tuning
>> > 4. As testing progresses, we continue fixing non-critical bugs like
>> > HDFS-14116.
>> >
>> > I attached a unified patch to the umbrella jira for the review and
>> > Jenkins build.
>> > Please vote on this thread. The vote will run for 7 days, until Wed Dec 12.
>> >
>> > Thanks,
>> > --Konstantin
>> >
>>
>
>
> --
>
> Daryn
>


[NOTICE] Mandatory relocation of Apache git repositories on git-wip-us.apache.org

2018-12-07 Thread Daniel Gruno

[IF YOUR PROJECT DOES NOT HAVE GIT REPOSITORIES ON GIT-WIP-US PLEASE
 DISREGARD THIS EMAIL; IT WAS MASS-MAILED TO ALL APACHE PROJECTS]

Hello Apache projects,

I am writing to you because you may have git repositories on the
git-wip-us server, which is slated to be decommissioned in the coming
months. All repositories will be moved to the new gitbox service which
includes direct write access on github as well as the standard ASF
commit access via gitbox.apache.org.

## Why this move? ##
The move comes as a result of retiring the git-wip service, as the
hardware it runs on is longing for retirement. In light of this, we
have decided to consolidate the two services (git-wip and gitbox), to
ease the management of our repository systems and future-proof the
underlying hardware. The move is fully automated, and ideally, nothing
will change in your workflow other than added features and access to
GitHub.

## Timeframe for relocation ##
Initially, we are asking that projects voluntarily request to move
their repositories to gitbox, hence this email. The voluntary
timeframe is between now and January 9th 2019, during which projects
are free to either move over to gitbox or stay put on git-wip. After
this phase, we will require the remaining projects to move within one
month, after which we will migrate any remaining repositories ourselves.

To have your project moved in this initial phase, you will need:

- Consensus in the project (documented via the mailing list)
- A JIRA ticket filed with INFRA to voluntarily move your project repos
  over to gitbox (as stated, this is highly automated and will take
  between a minute and an hour, depending on the size and number of
  your repositories)

To sum up, the preliminary timeline is:

- December 9th 2018 -> January 9th 2019: Voluntary (coordinated)
  relocation
- January 9th -> February 6th: Mandated (coordinated) relocation
- February 7th: All remaining repositories are mass migrated.

This timeline may change to accommodate various scenarios.

## Using GitHub with ASF repositories ##
When your project has moved, you are free to use either the ASF
repository system (gitbox.apache.org) OR GitHub for your development
and code pushes. To be able to use GitHub, please follow the primer
at: https://reference.apache.org/committer/github


We appreciate your understanding of this issue, and hope that your
project can coordinate voluntarily moving your repositories in a
timely manner.

All settings, such as commit mail targets, issue linking, PR
notification schemes etc will automatically be migrated to gitbox as
well.

With regards, Daniel on behalf of ASF Infra.

PS: For inquiries, please reply to us...@infra.apache.org, not your
project's dev list :-).







[jira] [Created] (HADOOP-15988) Set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative mode

2018-12-07 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15988:
---

 Summary: Set empty directory flag to TRUE in 
DynamoDBMetadataStore#innerGet when using authoritative mode
 Key: HADOOP-15988
 URL: https://issues.apache.org/jira/browse/HADOOP-15988
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


We have the following comment and implementation in DynamoDBMetadataStore:
{noformat}
// When this class has support for authoritative
// (fully-cached) directory listings, we may also be able to answer
// TRUE here.  Until then, we don't know if we have full listing or
// not, thus the UNKNOWN here:
meta.setIsEmptyDirectory(
hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
{noformat}

We have had authoritative listings in DynamoDB since HADOOP-15621, so we should
resolve this comment, implement the solution, and test it.
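A minimal sketch of the proposed change, assuming an authoritative-mode flag is available at the point of the comment (hypothetical helper; the real fix would live in DynamoDBMetadataStore#innerGet):

```java
// Sketch of the resolved empty-directory flag logic. With authoritative
// (fully-cached) listings from HADOOP-15621, a directory with no cached
// children can be reported as TRUE instead of UNKNOWN.
public class EmptyDirFlagSketch {
    public enum Tristate { TRUE, FALSE, UNKNOWN }

    public static Tristate emptyDirFlag(boolean hasChildren, boolean authoritative) {
        if (hasChildren) {
            return Tristate.FALSE;   // children present: definitely not empty
        }
        // No cached children: only authoritative mode guarantees the
        // listing is complete, so only then can we answer TRUE.
        return authoritative ? Tristate.TRUE : Tristate.UNKNOWN;
    }
}
```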



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-12-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/

[Dec 6, 2018 12:50:28 PM] (vinayakumarb) HDFS-14113. EC : Add Configuration to 
restrict UserDefined Policies.
[Dec 6, 2018 7:39:59 PM] (bharat) HDDS-864. Use strongly typed codec 
implementations for the tables of the
[Dec 6, 2018 8:48:17 PM] (jlowe) MAPREDUCE-7159. FrameworkUploader: ensure 
proper permissions of
[Dec 6, 2018 9:27:28 PM] (ajay) HDDS-880. Create api for ACL handling in Ozone. 
(Contributed by Ajay
[Dec 6, 2018 9:33:52 PM] (hanishakoneru) HDDS-858. Start a Standalone Ratis 
Server on OM
[Dec 6, 2018 11:37:34 PM] (bharat) HDDS-892. Parse aws v2 headers without 
spaces in Ozone s3 gateway.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests:

   hadoop.registry.secure.TestSecureLogins
   hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.hdfs.server.datanode.TestDirectoryScanner
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/diff-compile-javac-root.txt  [336K]

   checkstyle:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/diff-checkstyle-root.txt  [17M]

   hadolint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/diff-patch-hadolint.txt  [4.0K]

   pathlen:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/pathlen.txt  [12K]

   pylint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/diff-patch-pylint.txt  [40K]

   shellcheck:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/whitespace-eol.txt  [9.3M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/whitespace-tabs.txt  [1.1M]

   findbugs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-hdds_framework.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt  [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/branch-findbugs-hadoop-ozone_tools.txt  [8.0K]

   javadoc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/diff-javadoc-javadoc-root.txt  [752K]

   unit:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/patch-unit-hadoop-common-project_hadoop-registry.txt  [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/980/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [592K]

[jira] [Created] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before the test

2018-12-07 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15987:
---

 Summary: ITestDynamoDBMetadataStore should check if test ddb table 
set properly before the test
 Key: HADOOP-15987
 URL: https://issues.apache.org/jira/browse/HADOOP-15987
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


The jira covers the following:
* We should assert that the table name is configured when DynamoDBMetadataStore 
is used for testing, so the test should fail if it's not configured.
* We should assert that the test table is not the same as the production table, 
as the test table could be modified and destroyed multiple times during the 
test.
* This behavior should be added to the testing docs.

[Assume from junit 
doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
{noformat}
A set of methods useful for stating assumptions about the conditions in which a 
test is meaningful. A failed assumption does not mean the code is broken, but 
that the test provides no useful information. The default JUnit runner treats 
tests with failing assumptions as ignored.
{noformat}

A failed assert will cause test failure instead of just skipping the test.
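The distinction can be sketched with self-contained stand-ins (the real test would use org.junit.Assume and org.junit.Assert; these classes only model the semantics):

```java
// A failed assumption makes the runner skip the test; a failed assertion
// fails it. The jira asks for the hard-fail behavior when the test table
// is missing or is the same as the production table.
public class AssumeVsAssertSketch {
    public static class TestSkipped extends RuntimeException {
        public TestSkipped(String msg) { super(msg); }
    }
    public static class TestFailed extends RuntimeException {
        public TestFailed(String msg) { super(msg); }
    }

    public static void assumeTrue(String msg, boolean cond) {
        if (!cond) throw new TestSkipped(msg);   // runner reports "ignored"
    }

    public static void assertTrue(String msg, boolean cond) {
        if (!cond) throw new TestFailed(msg);    // runner reports "failed"
    }

    /** Guard proposed by the jira: hard-fail on a misconfigured table. */
    public static void checkTestTable(String testTable, String prodTable) {
        assertTrue("test ddb table not configured",
                testTable != null && !testTable.isEmpty());
        assertTrue("test table must differ from the production table",
                !testTable.equals(prodTable));
    }
}
```

The table names and method names here are illustrative only.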







[jira] [Reopened] (HADOOP-15959) revert HADOOP-12751

2018-12-07 Thread Bolke de Bruin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bolke de Bruin reopened HADOOP-15959:
-

As mentioned off-Jira, the report underpinning this Jira is invalid, and the
revert should be reverted.

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.2.0, 2.7.8, 3.0.4, 3.1.2, 2.8.6, 2.9.3
>
> Attachments: HADOOP-15959-001.patch, HADOOP-15959-branch-2-002.patch, 
> HADOOP-15959-branch-2.7-003.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)







[jira] [Created] (HADOOP-15986) Allowing files to be moved between encryption zones having the same encryption key

2018-12-07 Thread Adam Antal (JIRA)
Adam Antal created HADOOP-15986:
---

 Summary: Allowing files to be moved between encryption zones 
having the same encryption key
 Key: HADOOP-15986
 URL: https://issues.apache.org/jira/browse/HADOOP-15986
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Adam Antal


Currently HDFS blocks you from moving files from one encryption zone to 
another. On the surface this is fine, but we also allow multiple encryption 
zones to use the same encryption zone key. If we allow multiple zones to use 
the same zone key, we should also allow files to be moved between those zones. 
Either we should disallow using the same key for multiple encryption zones, or 
we should allow moving files between zones when the key is the same. The 
latter is more user-friendly and allows for different HDFS directory 
structures.
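The proposed policy could be sketched as a simple check (hypothetical helper; the real check would sit in the HDFS rename path, and the zone key would come from each zone's metadata):

```java
// Sketch of the relaxed rename policy: a cross-zone move is allowed only
// when both encryption zones reference the same key name. A null key
// models a path outside any encryption zone, where today's restriction
// (no move across the zone boundary) is kept.
public class EzRenamePolicySketch {
    public static boolean renameAllowed(String srcZoneKey, String dstZoneKey) {
        if (srcZoneKey == null || dstZoneKey == null) {
            // Moving into or out of an encryption zone stays blocked.
            return srcZoneKey == null && dstZoneKey == null;
        }
        return srcZoneKey.equals(dstZoneKey);
    }
}
```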



