Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-07-02 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/516/

[Jul 2, 2018 10:02:19 AM] (wang) HDFS-13703. Avoid allocation of CorruptedBlocks hashmap when no
[Jul 2, 2018 10:11:06 AM] (wang) HDFS-13702. Remove HTrace hooks from DFSClient to reduce CPU usage.




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.TestDatanodeReport 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLeaseRecovery 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController 
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestLeveldbConfigurationStore 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   

[jira] [Created] (HDDS-211) Add a create container Lock

2018-07-02 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-211:
---

 Summary: Add a create container Lock
 Key: HDDS-211
 URL: https://issues.apache.org/jira/browse/HDDS-211
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Add a lock to guard multiple creations of the same container.

When multiple clients try to create a container with the same containerID, 
exactly one client should succeed; the remaining clients should receive a 
StorageContainerException. 
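
For illustration, a minimal sketch of one way to make the create atomic, so 
that exactly one caller wins. The class and exception names below are 
stand-ins, not the actual Ozone types, and this is not the HDDS-211 patch:

{noformat}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Stand-in for the real StorageContainerException.
class ContainerAlreadyExistsException extends Exception {
  ContainerAlreadyExistsException(String msg) { super(msg); }
}

// Stand-in for the real container metadata type.
class ContainerData {
  final long containerID;
  ContainerData(long id) { this.containerID = id; }
}

public class ContainerCreateGuard {
  private final ConcurrentMap<Long, ContainerData> containers =
      new ConcurrentHashMap<>();

  /** Atomically claim the containerID; exactly one concurrent caller wins. */
  public ContainerData create(long containerID)
      throws ContainerAlreadyExistsException {
    ContainerData fresh = new ContainerData(containerID);
    ContainerData prev = containers.putIfAbsent(containerID, fresh);
    if (prev != null) {
      // Another client already created (or is creating) this container.
      throw new ContainerAlreadyExistsException(
          "Container " + containerID + " already exists");
    }
    return fresh;
  }
}
{noformat}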

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13715) diskbalancer does not work if one of the blockpools is empty on a Federated cluster

2018-07-02 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDFS-13715:
---

 Summary: diskbalancer does not work if one of the blockpools is 
empty on a Federated cluster
 Key: HDFS-13715
 URL: https://issues.apache.org/jira/browse/HDFS-13715
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Namit Maheshwari


Run diskbalancer when one of the blockpools is empty on a Federated cluster.

The diskbalancer process runs and completes successfully within seconds, but 
the actual disk balancing does not happen. 

cc - [~bharatviswa], [~anu]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Merge ContainerIO branch (HDDS-48) into trunk

2018-07-02 Thread Hanisha Koneru
+1

Thanks,
Hanisha

On 7/2/18, 9:24 AM, "Ajay Kumar"  wrote:

>+1 (non-binding)
>
>On 7/1/18, 11:21 PM, "Mukul Kumar Singh"  wrote:
>
>+1
>
>On 30/06/18, 11:33 AM, "Shashikant Banerjee"  
> wrote:
>
>+1(non-binding)
>
>Thanks
>Shashi
>
>On 6/30/18, 11:19 AM, "Nandakumar Vadivelu" 
>  wrote:
>
>+1
>
>On 6/30/18, 3:44 AM, "Bharat Viswanadham" 
>  wrote:
>
>Fixing subject line of the mail.
>
>
>Thanks,
>Bharat
>
>
>
>On 6/29/18, 3:10 PM, "Bharat Viswanadham" 
>  wrote:
>
>Hi All,
>
>Given the positive response to the discussion thread [1], 
> here is the formal vote thread to merge HDDS-48 into trunk.
>
>Summary of code changes:
>1. Code changes for this branch are done in the 
> hadoop-hdds subproject and hadoop-ozone subproject; there is no impact to 
> hadoop-hdfs.
>2. Added support for multiple container types in the 
> datanode code path.
>3. Added disk layout logic for the containers to support 
> future upgrades.
>4. Added support for a volume choosing policy to distribute 
> containers across disks on the datanode.
>5. Changed the format of the .container file to a 
> human-readable format (YAML).
>
>
> The vote will run for 7 days, ending Fri July 6th. I will 
> start this vote with my +1.
>
>Thanks,
>Bharat
>
>[1] 
> https://lists.apache.org/thread.html/79998ebd2c3837913a22097102efd8f41c3b08cb1799c3d3dea4876b@%3Chdfs-dev.hadoop.apache.org%3E
>
>
>
>
>-
>To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org


Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-07-02 Thread Jonathan Eagles
Release 3.0.3 is still broken due to the missing artifacts. Any update on
when these artifacts will be published?

On Wed, Jun 27, 2018 at 8:25 PM, Chen, Sammi  wrote:

> Hi Yongjun,
>
>
>
>
>
> The artifacts will be pushed to
> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
> after step 6 of the Publishing steps.
>
> For 2.9.1, I remember I did complete that step before. I redid step 6
> today, and now 2.9.1 is pushed to the mvn repo.
>
> You can double-check it. I suspect Nexus sometimes fails to notify the
> user when there are unexpected failures.
>
>
>
>
>
> Bests,
>
> Sammi
>
> *From:* Yongjun Zhang [mailto:yzh...@cloudera.com]
> *Sent:* Sunday, June 17, 2018 12:17 PM
> *To:* Jonathan Eagles ; Chen, Sammi <
> sammi.c...@intel.com>
> *Cc:* Eric Payne ; Hadoop Common <
> common-...@hadoop.apache.org>; Hdfs-dev ;
> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> *Subject:* Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)
>
>
>
> + Junping, Sammi
>
>
>
> Hi Jonathan,
>
>
>
> Many thanks for reporting the issues and sorry for the inconvenience.
>
>
>
> 1. Shouldn't the build be looking for artifacts in
>
>
>
> https://repository.apache.org/content/repositories/releases
>
> rather than
>
>
>
> https://repository.apache.org/content/repositories/snapshots
>
> ?
>
>
>
> 2.
>
> Not seeing the artifact published here as well.
>
> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
>
>
>
> Indeed, I did not see 2.9.1 there either, so I have included Sammi Chen.
>
>
>
> Hi Junping, would you please share which step in
>
> https://wiki.apache.org/hadoop/HowToRelease
>
> should have done this?
>
>
>
> Thanks a lot.
>
>
>
> --Yongjun
>
>
>
> On Fri, Jun 15, 2018 at 10:52 PM, Jonathan Eagles 
> wrote:
>
> Upgraded Tez dependency to hadoop 3.0.3 and found this issue. Anyone else
> seeing this issue?
>
>
>
> [ERROR] Failed to execute goal on project hadoop-shim: Could not resolve
> dependencies for project org.apache.tez:hadoop-shim:jar:0.10.0-SNAPSHOT:
> Failed to collect dependencies at org.apache.hadoop:hadoop-yarn-api:jar:3.0.3:
> Failed to read artifact descriptor for 
> org.apache.hadoop:hadoop-yarn-api:jar:3.0.3:
> Could not find artifact org.apache.hadoop:hadoop-project:pom:3.0.3 in
> apache.snapshots.https
> (https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
>
> [ERROR]
>
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
>
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>
> [ERROR]
>
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
>
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
>
> [ERROR]
>
> [ERROR] After correcting the problems, you can resume the build with the
> command
>
> [ERROR]   mvn  -rf :hadoop-shim
>
>
>
> Not seeing the artifact published here as well.
>
> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
>
>
>
> On Tue, Jun 12, 2018 at 6:44 PM, Yongjun Zhang 
> wrote:
>
> Thanks Eric!
>
> --Yongjun
>
>
> On Mon, Jun 11, 2018 at 8:05 AM, Eric Payne 
> wrote:
>
> > Sorry, Yongjun. My +1 is also binding
> > +1 (binding)
> > -Eric Payne
> >
> > On Friday, June 1, 2018, 12:25:36 PM CDT, Eric Payne <
> > eric.payne1...@yahoo.com> wrote:
> >
> >
> >
> >
> > Thanks a lot, Yongjun, for your hard work on this release.
> >
> > +1
> > - Built from source
> > - Installed on 6 node pseudo cluster
> >
> >
> > Tested the following in the Capacity Scheduler:
> > - Verified that running apps in labelled queues restricts tasks to the
> > labelled nodes.
> > - Verified that various queue config properties for CS are refreshable
> > - Verified streaming jobs work as expected
> > - Verified that user weights work as expected
> > - Verified that FairOrderingPolicy in a CS queue will evenly assign
> > resources
> > - Verified running yarn shell application runs as expected
> >
> >
> >
> >
> >
> >
> >
> > On Friday, June 1, 2018, 12:48:26 AM CDT, Yongjun Zhang <
> > yjzhan...@apache.org> wrote:
> >
> >
> >
> >
> >
> > Greetings all,
> >
> > I've created the first release candidate (RC0) for Apache Hadoop
> > 3.0.3. This is our next maintenance release to follow up 3.0.2. It
> includes
> > about 249
> > important fixes and improvements, among which there are 8 blockers. See
> > https://issues.apache.org/jira/issues/?filter=12343997
> >
> > The RC artifacts are available at:
> > https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/
> >
> > The maven artifacts are available via
> > https://repository.apache.org/content/repositories/orgapachehadoop-1126
> >
> > Please try the release and vote; the vote will run for the usual 5
> working
> > days, ending on 06/07/2018 PST time. Would really appreciate your
> > participation here.
> >
> > I bumped into quite a few issues along the way; many thanks to the
> > people who helped, especially Sammi Chen, Andrew Wang, 

[jira] [Created] (HDFS-13713) Add specification of new API to FS specification, with contract tests

2018-07-02 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13713:
-

 Summary: Add specification of new API to FS specification, with 
contract tests
 Key: HDFS-13713
 URL: https://issues.apache.org/jira/browse/HDFS-13713
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: fs, test
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Ewan Higgs


There's nothing in the FS spec covering the new API. Add it in a new .md file.

* Add an FS model with the notion of a function mapping (uploadID -> Upload) and 
the operations (list, commit, abort). The [TLA+ 
model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf] 
of HADOOP-13786 shows how to do this (a hypothetical sketch follows this list).
* Contract tests of not just the successful path, but all the invalid ones.
* Implementations of the contract tests for all FSs which support the new API.
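
For illustration only, a hypothetical Java sketch of the (uploadID -> Upload) 
model and its operations; none of these names are the real Hadoop API, which 
is defined by HADOOP-13786 and the new .md file:

{noformat}
import java.io.IOException;
import java.io.InputStream;
import java.util.List;

// Hypothetical model of a multipart-upload API: a function mapping
// uploadID -> Upload, plus list/commit/abort operations.
interface MultipartUploadModel {
  /** Start an upload; the returned ID keys the upload in the FS model. */
  String initiate(String path) throws IOException;

  /** Add one part to an in-progress upload. */
  void putPart(String uploadID, int partNumber, InputStream data)
      throws IOException;

  /** List the IDs of all in-progress uploads. */
  List<String> list() throws IOException;

  /** Commit: the upload's parts become the file; the ID is removed. */
  void commit(String uploadID) throws IOException;

  /** Abort: discard all parts; the ID is removed. Unknown IDs must fail. */
  void abort(String uploadID) throws IOException;
}
{noformat}

Contract tests would then cover both the valid path and the invalid ones 
(commit after abort, unknown uploadID, and so on).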



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13712) BlockReaderRemote.read() logging improvement

2018-07-02 Thread Gergo Repas (JIRA)
Gergo Repas created HDFS-13712:
--

 Summary: BlockReaderRemote.read() logging improvement
 Key: HDFS-13712
 URL: https://issues.apache.org/jira/browse/HDFS-13712
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 3.1.0
Reporter: Gergo Repas
Assignee: Gergo Repas


Logger.isTraceEnabled() shows up as a hot method via calls from 
BlockReaderRemote.read(). The attached patch reduces the number of such calls 
when trace-logging is turned off, and performance is on par when trace-logging 
is turned on.
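
For context, a minimal sketch of the usual remedy for this pattern: evaluate 
the trace guard once per read() call instead of once per loop iteration. This 
is illustrative only and assumes the common approach; the actual change is in 
the attached patch.

{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ReadLoopSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ReadLoopSketch.class);

  int read(byte[] buf, int off, int len) {
    // Hoisted out of the loop: one isTraceEnabled() call per read().
    final boolean traceEnabled = LOG.isTraceEnabled();
    int total = 0;
    while (total < len) {
      int n = readChunk(buf, off + total, len - total); // stand-in for real I/O
      if (n <= 0) {
        break;
      }
      if (traceEnabled) {
        LOG.trace("read {} bytes at offset {}", n, off + total);
      }
      total += n;
    }
    return total;
  }

  private int readChunk(byte[] buf, int off, int len) {
    return -1; // placeholder; the real reader pulls from the remote block
  }
}
{noformat}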



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-210) ozone getKey command always expects the filename to be present along with file-path in "-file" argument

2018-07-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-210:
---

 Summary: ozone getKey command always expects the filename to be 
present along with file-path in "-file" argument
 Key: HDDS-210
 URL: https://issues.apache.org/jira/browse/HDDS-210
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
 Environment: ozone getKey command always expects the filename to be 
present along with the file-path for the "-file" argument.

It throws an error if the filename is not provided.
{noformat}
[root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
/test1/
2018-07-02 06:45:27,355 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Command Failed : {"httpCode":0,"shortMessage":"/test1/exists. Download will 
overwrite an existing file. 
Aborting.","resource":null,"message":"/test1/exists. Download will overwrite an 
existing file. Aborting.","requestID":null,"hostName":null}
[root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
/test1/passwd
2018-07-02 06:45:39,722 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-07-02 06:45:40,354 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
2018-07-02 06:45:40,366 INFO conf.ConfUtils: raft.grpc.message.size.max = 
33554432 (custom)
2018-07-02 06:45:40,372 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
300 ms (default)
2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
raft.client.async.outstanding-requests.max = 100 (default)
2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
raft.client.async.scheduler-threads = 3 (default)
2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
1MB (=1048576) (default)
2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.message.size.max = 
33554432 (custom)
2018-07-02 06:45:40,814 INFO conf.ConfUtils: raft.client.rpc.request.timeout = 
3000 ms (default){noformat}
 

Expectation:

--

ozone getKey should work even when only the file-path is provided (without a 
filename). It should create a file in the given file-path, using the key's 
name as the filename.

i.e., given that /test1 is a directory, if

./ozone oz -getKey /nnvolume1/bucket123/passwd -file /test1

is run, the file 'passwd' should be created in the directory /test1.
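
A minimal sketch of the expected -file handling (illustrative only, not the 
Ozone shell code): if the target is a directory, derive the filename from the 
key's last path component.

{noformat}
import java.io.File;

class GetKeyTargetSketch {
  static File resolveTarget(String filePathArg, String keyName) {
    File target = new File(filePathArg);
    if (target.isDirectory()) {
      // e.g. key "/nnvolume1/bucket123/passwd" + directory "/test1"
      // resolves to the file "/test1/passwd".
      String baseName = new File(keyName).getName();
      target = new File(target, baseName);
    }
    return target;
  }
}
{noformat}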

 
Reporter: Nilotpal Nandi
 Fix For: 0.2.1






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-209) createVolume command throws error when user is not present locally but creates the volume

2018-07-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-209:
---

 Summary: createVolume command throws error when user is not 
present locally but creates the volume
 Key: HDDS-209
 URL: https://issues.apache.org/jira/browse/HDDS-209
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


user "test_user3" does not exist locally. 

When -createVolume command is ran for the user "test_user3", it throws error on 
standard output but successfully creates the volume.

The exit code for the command execution is non-zero.

 

 
{noformat}
[root@ozone-vm bin]# ./ozone oz -createVolume /testvolume121 -user test_user3
2018-07-02 06:01:37,020 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-07-02 06:01:37,605 WARN security.ShellBasedUnixGroupsMapping: unable to 
return groups for user test_user3
PartialGroupNameException The user name 'test_user3' is not found. id: 
test_user3: no such user
id: test_user3: no such user
at 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
 at 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
 at 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
 at 
org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
 at 
org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
 at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
 at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
 at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
 at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
 at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
 at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
 at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
 at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
 at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
 at 
org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1547)
 at 
org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1535)
 at 
org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
 at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
 at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
 at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
 at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
 at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
2018-07-02 06:01:37,611 [main] INFO - Creating Volume: testvolume121, with 
test_user3 as owner and quota set to 1152921504606846976 bytes.
{noformat}
 
{noformat}
[root@ozone-vm bin]# ./ozone oz -listVolume / -user test_user3
2018-07-02 06:02:20,385 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
 "owner" : {
 "name" : "test_user3"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "testvolume121",
 "createdOn" : "Thu, 05 Jun +50470 19:07:00 GMT",
 "createdBy" : "test_user3"
} ]

{noformat}
Expectation:

--

The volume should not be created if the local user is not present.
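
A minimal sketch of the expected fail-fast behavior (illustrative only; the 
GroupLookup interface below is a stand-in for Hadoop's group-mapping 
machinery, not the real RpcClient code):

{noformat}
import java.io.IOException;
import java.util.List;

class CreateVolumeSketch {
  interface GroupLookup {
    List<String> getGroups(String user) throws IOException;
  }

  static void createVolume(String volume, String owner, GroupLookup groups)
      throws IOException {
    try {
      groups.getGroups(owner); // fails for an unknown local user
    } catch (IOException e) {
      // Abort before changing any state, instead of printing an error,
      // returning a non-zero exit code, and creating the volume anyway.
      throw new IOException(
          "User " + owner + " not found; refusing to create " + volume, e);
    }
    // ... proceed with the actual volume creation ...
  }
}
{noformat}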

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-208) ozone createVolume command ignores the first character of the "volume name" given as argument

2018-07-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-208:
---

 Summary: ozone createVolume command ignores the first character of 
the "volume name" given as argument
 Key: HDDS-208
 URL: https://issues.apache.org/jira/browse/HDDS-208
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


The createVolume command was run to create the volume "testvolume123".

The volume was created with the name "estvolume123" instead of 
"testvolume123"; the command ignores the first character of the volume name.
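
For what it's worth, a speculative illustration of this class of bug (HDDS-208 
does not confirm the root cause): a parser that unconditionally strips a 
leading '/' eats the first character of any argument supplied without one.

{noformat}
class VolumeNameParseSketch {
  // Buggy: always drops the first character.
  static String buggy(String arg) {
    return arg.substring(1);              // "testvolume123" -> "estvolume123"
  }

  // Fixed: strip the separator only when it is actually present.
  static String fixed(String arg) {
    return arg.startsWith("/") ? arg.substring(1) : arg;
  }
}
{noformat}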

 
{noformat}
[root@ozone-vm bin]# ./ozone oz -createVolume testvolume123 -user root
2018-07-02 05:33:35,510 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-07-02 05:33:36,093 [main] INFO - Creating Volume: estvolume123, with root 
as owner and quota set to 1152921504606846976 bytes.

{noformat}
 

Output of the ozone listVolume command:

 
{noformat}
[root@ozone-vm bin]# ./ozone oz -listVolume /
2018-07-02 05:36:47,835 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
 "owner" : {
 "name" : "root"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "nnvolume1",
 "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
 "createdBy" : "root"
..
..
}, {
 "owner" : {
 "name" : "root"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "estvolume123",
 "createdOn" : "Sat, 17 May +50470 08:01:41 GMT",
 "createdBy" : "root"
} ]
{noformat}
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Merge ContainerIO branch (HDDS-48) into trunk

2018-07-02 Thread Mukul Kumar Singh
+1

On 30/06/18, 11:33 AM, "Shashikant Banerjee"  wrote:

+1(non-binding)

Thanks
Shashi

On 6/30/18, 11:19 AM, "Nandakumar Vadivelu"  
wrote:

+1

On 6/30/18, 3:44 AM, "Bharat Viswanadham" 
 wrote:

Fixing subject line of the mail.


Thanks,
Bharat



On 6/29/18, 3:10 PM, "Bharat Viswanadham" 
 wrote:

Hi All,

Given the positive response to the discussion thread [1], here 
is the formal vote thread to merge HDDS-48 into trunk.

Summary of code changes:
1. Code changes for this branch are done in the hadoop-hdds 
subproject and hadoop-ozone subproject; there is no impact to hadoop-hdfs.
2. Added support for multiple container types in the datanode 
code path.
3. Added disk layout logic for the containers to support 
future upgrades.
4. Added support for a volume choosing policy to distribute 
containers across disks on the datanode.
5. Changed the format of the .container file to a 
human-readable format (YAML).


 The vote will run for 7 days, ending Fri July 6th. I will 
start this vote with my +1.

Thanks,
Bharat

[1] 
https://lists.apache.org/thread.html/79998ebd2c3837913a22097102efd8f41c3b08cb1799c3d3dea4876b@%3Chdfs-dev.hadoop.apache.org%3E








-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org