[jira] [Created] (HDFS-10994) Support "XOR-2-1-64k" policy in "hdfs erasurecode" command

2016-10-10 Thread SammiChen (JIRA)
SammiChen created HDFS-10994:


 Summary: Support "XOR-2-1-64k" policy in "hdfs erasurecode" command
 Key: HDFS-10994
 URL: https://issues.apache.org/jira/browse/HDFS-10994
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: SammiChen
Assignee: SammiChen


So far, the "hdfs erasurecode" command supports three policies: RS-DEFAULT-3-2-64k, 
RS-DEFAULT-6-3-64k, and RS-LEGACY-6-3-64k. This task will add the XOR-2-1-64k 
policy to the command.
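For background, "XOR-2-1-64k" denotes a scheme with 2 data units and 1 parity unit per stripe (64k cell size), where the parity cell is the bitwise XOR of the two data cells, so any single lost cell can be rebuilt from the other two. A minimal illustrative sketch (plain Java, not HDFS coder code; the class and method names are made up for illustration):

```java
// Illustrative sketch of what an XOR-2-1 scheme computes: the parity cell
// is the bitwise XOR of the two data cells, and XOR-ing any two surviving
// cells reproduces the third.
public class Xor21 {
    static byte[] parity(byte[] d0, byte[] d1) {
        byte[] p = new byte[d0.length];
        for (int i = 0; i < p.length; i++) {
            p[i] = (byte) (d0[i] ^ d1[i]);
        }
        return p;
    }

    public static void main(String[] args) {
        byte[] d0 = {1, 2, 3};
        byte[] d1 = {4, 5, 6};
        byte[] p = parity(d0, d1);
        // Recover the lost cell d0 from the surviving cell d1 and the parity.
        byte[] rebuilt = parity(d1, p);
        System.out.println(java.util.Arrays.equals(rebuilt, d0)); // prints "true"
    }
}
```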



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.6.5 (RC1)

2016-10-10 Thread Sangjin Lee
Thanks everyone for checking out this release! The vote passes with 15
+1's, 7 of which are binding, and no -1's.

I'll go ahead and wrap up the release, and will send an announcement in the
next day or two.

Regards,
Sangjin

On Mon, Oct 10, 2016 at 6:38 AM, Jason Lowe  wrote:

> +1 (binding)
>
> - Verified signatures and digests
> - Built native from source
> - Deployed to a single-node cluster and ran some sample jobs
>
> Jason
>
>
> On Sunday, October 2, 2016 7:13 PM, Sangjin Lee  wrote:
>
>
> Hi folks,
>
> I have pushed a new release candidate (RC1) for the Apache Hadoop 2.6.5
> release (the next maintenance release in the 2.6.x release line). RC1
> contains fixes to CHANGES.txt, and is otherwise identical to RC0.
>
> Below are the details of this release candidate:
>
> The RC is available for validation at:
> http://home.apache.org/~sjlee/hadoop-2.6.5-RC1/.
>
> The RC tag in git is release-2.6.5-RC1 and its git commit is
> e8c9fe0b4c252caf2ebf1464220599650f119997.
>
> The maven artifacts are staged via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1050/.
>
> You can find my public key at
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.
>
> Please try the release and vote. The vote will run for the usual 5 days. I
> would greatly appreciate your timely vote. Thanks!
>
> Regards,
> Sangjin
>
>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-10-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/

[Oct 9, 2016 9:53:36 AM] (kai.zheng) HADOOP-13641. Update 
UGI#spawnAutoRenewalThreadForUserCreds to reduce
[Oct 10, 2016 5:55:49 AM] (kai.zheng) HDFS-10895. Update HDFS Erasure Coding 
doc to add how to use ISA-L based
[Oct 10, 2016 11:32:39 AM] (stevel) HADOOP-13696. change hadoop-common 
dependency scope of jsch to provided.




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestWriteReadStripedFile
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
   hadoop.hdfs.server.datanode.TestDataNodeLifeline
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
   hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
   hadoop.hdfs.web.TestWebHDFS
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
   hadoop.yarn.server.timeline.TestRollingLevelDB
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
   hadoop.yarn.server.timeline.TestTimelineDataManager
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
   hadoop.yarn.server.resourcemanager.TestRMRestart
   hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
   hadoop.yarn.server.TestContainerManagerSecurity
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
   hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
   hadoop.yarn.applications.distributedshell.TestDistributedShell
   hadoop.mapred.TestShuffleHandler
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
   hadoop.mapreduce.TestMRJobClient

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption


   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/artifact/out/patch-compile-root.txt
  [312K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/artifact/out/patch-compile-root.txt
  [312K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/artifact/out/patch-compile-root.txt
  [312K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [260K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/120/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [76K]
   

[jira] [Created] (HDFS-10993) rename may fail without a clear message indicating the failure reason.

2016-10-10 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10993:


 Summary: rename may fail without a clear message indicating the 
failure reason.
 Key: HDFS-10993
 URL: https://issues.apache.org/jira/browse/HDFS-10993
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Yongjun Zhang


Currently, FSDirRenameOp#unprotectedRenameTo looks like this:
{code}
 static INodesInPath unprotectedRenameTo(FSDirectory fsd,
  final INodesInPath srcIIP, final INodesInPath dstIIP, long timestamp)
  throws IOException {
assert fsd.hasWriteLock();
final INode srcInode = srcIIP.getLastINode();
try {
  validateRenameSource(fsd, srcIIP);
} catch (SnapshotException e) {
  throw e;
} catch (IOException ignored) {
  return null;
}

String src = srcIIP.getPath();
String dst = dstIIP.getPath();
// validate the destination
if (dst.equals(src)) {
  return dstIIP;
}

try {
  validateDestination(src, dst, srcInode);
} catch (IOException ignored) {
  return null;
}

if (dstIIP.getLastINode() != null) {
  NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: " +
  "failed to rename " + src + " to " + dst + " because destination " +
  "exists");
  return null;
}
INode dstParent = dstIIP.getINode(-2);
if (dstParent == null) {
  NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: " +
  "failed to rename " + src + " to " + dst + " because destination's " +
  "parent does not exist");
  return null;
}

fsd.ezManager.checkMoveValidity(srcIIP, dstIIP, src);
// Ensure dst has quota to accommodate rename
verifyFsLimitsForRename(fsd, srcIIP, dstIIP);
verifyQuotaForRename(fsd, srcIIP, dstIIP);

RenameOperation tx = new RenameOperation(fsd, srcIIP, dstIIP);

boolean added = false;

INodesInPath renamedIIP = null;
try {
  // remove src
  if (!tx.removeSrc4OldRename()) {
return null;
  }

  renamedIIP = tx.addSourceToDestination();
  added = (renamedIIP != null);
  if (added) {
if (NameNode.stateChangeLog.isDebugEnabled()) {
  NameNode.stateChangeLog.debug("DIR* FSDirectory" +
  ".unprotectedRenameTo: " + src + " is renamed to " + dst);
}

tx.updateMtimeAndLease(timestamp);
tx.updateQuotasInSourceTree(fsd.getBlockStoragePolicySuite());

return renamedIIP;
  }
} finally {
  if (!added) {
tx.restoreSource();
  }
}
NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: " +
"failed to rename " + src + " to " + dst);
return null;
  }
{code}

There are several places that return null without a clear message. Though that 
appears to be intentional, it leaves the user guessing what went wrong.

It seems to make sense to log a warning for each failure scenario.
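One way to do that (a minimal sketch, not the actual patch: the helper name logAndFail is hypothetical, and java.util.logging stands in for NameNode.stateChangeLog) is to funnel every silent failure path through a helper that records the reason before returning null:

```java
import java.util.logging.Logger;

// Sketch of the suggested change: instead of a bare "return null", each
// failure path calls a helper that logs why the rename failed first.
public class RenameLogging {
    private static final Logger LOG = Logger.getLogger("FSDirectory");

    // Returns null (preserving the existing contract) but always logs the reason.
    static <T> T logAndFail(String src, String dst, String reason) {
        LOG.warning("DIR* FSDirectory.unprotectedRenameTo: failed to rename "
                + src + " to " + dst + " because " + reason);
        return null;
    }

    public static void main(String[] args) {
        // e.g. the validateRenameSource failure path would become:
        Object result = logAndFail("/a", "/b", "validation of the source failed");
        System.out.println("result=" + result); // prints "result=null"
    }
}
```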






[jira] [Created] (HDFS-10992) file is under construction but no leases found

2016-10-10 Thread Chernishev Aleksandr (JIRA)
Chernishev Aleksandr created HDFS-10992:
---

 Summary: file is under construction but no leases found
 Key: HDFS-10992
 URL: https://issues.apache.org/jira/browse/HDFS-10992
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.1
 Environment: Hortonworks 2.3 build 2557; 10 DataNodes, 2 NameNodes in 
auto failover
Reporter: Chernishev Aleksandr


On HDFS, after writing a relatively small number of files (at least 1000) of size 
150 MB - 1.6 GB, we found 13 damaged files with an incomplete last block.

hadoop fsck 
/staging/landing/stream/itc_dwh/811-ITF-ZO-P-bad/load_tarifer-zf-4_20160902165521521.csv
 -openforwrite -files -blocks -locations
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8
Connecting to namenode via http://hadoop-m1:50070/fsck?ugi=hdfs=1=1=1=1=%2Fstaging%2Flanding%2Fstream%2Fitc_dwh%2F811-ITF-ZO-P-bad%2Fload_tarifer-zf-4_20160902165521521.csv
FSCK started by hdfs (auth:SIMPLE) from /10.42.12.178 for path /staging/landing/stream/itc_dwh/811-ITF-ZO-P-bad/load_tarifer-zf-4_20160902165521521.csv at Mon Oct 10 17:12:25 MSK 2016
/staging/landing/stream/itc_dwh/811-ITF-ZO-P-bad/load_tarifer-zf-4_20160902165521521.csv 920596121 bytes, 7 block(s), OPENFORWRITE:  MISSING 1 blocks of total size 115289753 B
0. BP-1552885336-10.42.12.178-1446159880991:blk_1084952841_17798971 len=134217728 repl=4 [DatanodeInfoWithStorage[10.42.12.188:50010,DS-9ba44a76-113a-43ac-87dc-46aa97ba3267,DISK], DatanodeInfoWithStorage[10.42.12.183:50010,DS-eccd375a-ea32-491b-a4a3-5ea3faca4171,DISK], DatanodeInfoWithStorage[10.42.12.184:50010,DS-ec462491-6766-490a-a92f-38e9bb3be5ce,DISK], DatanodeInfoWithStorage[10.42.12.182:50010,DS-cef46399-bb70-4f1a-ac55-d71c7e820c29,DISK]]
1. BP-1552885336-10.42.12.178-1446159880991:blk_1084952850_17799207 len=134217728 repl=3 [DatanodeInfoWithStorage[10.42.12.184:50010,DS-412769e0-0ec2-48d3-b644-b08a516b1c2c,DISK], DatanodeInfoWithStorage[10.42.12.181:50010,DS-97388b2f-c542-417d-ab06-c8d81b94fa9d,DISK], DatanodeInfoWithStorage[10.42.12.187:50010,DS-e7a11951-4315-4425-a88b-a9f6429cc058,DISK]]
2. BP-1552885336-10.42.12.178-1446159880991:blk_1084952857_17799489 len=134217728 repl=3 [DatanodeInfoWithStorage[10.42.12.184:50010,DS-7a08c597-b0f4-46eb-9916-f028efac66d7,DISK], DatanodeInfoWithStorage[10.42.12.180:50010,DS-fa6a4630-1626-43d8-9988-955a86ac3736,DISK], DatanodeInfoWithStorage[10.42.12.182:50010,DS-8670e77d-c4db-4323-bb01-e0e64bd5b78e,DISK]]
3. BP-1552885336-10.42.12.178-1446159880991:blk_1084952866_17799725 len=134217728 repl=3 [DatanodeInfoWithStorage[10.42.12.185:50010,DS-b5ff8ba0-275e-4846-b5a4-deda35aa0ad8,DISK], DatanodeInfoWithStorage[10.42.12.180:50010,DS-9cb6cade-9395-4f3a-ab7b-7fabd400b7f2,DISK], DatanodeInfoWithStorage[10.42.12.183:50010,DS-e277dcf3-1bce-4efd-a668-cd6fb2e10588,DISK]]
4. BP-1552885336-10.42.12.178-1446159880991:blk_1084952872_17799891 len=134217728 repl=4 [DatanodeInfoWithStorage[10.42.12.184:50010,DS-e1d8f278-1a22-4294-ac7e-e12d554aef7f,DISK], DatanodeInfoWithStorage[10.42.12.186:50010,DS-5d9aeb2b-e677-41cd-844e-4b36b3c84092,DISK], DatanodeInfoWithStorage[10.42.12.183:50010,DS-eccd375a-ea32-491b-a4a3-5ea3faca4171,DISK], DatanodeInfoWithStorage[10.42.12.182:50010,DS-8670e77d-c4db-4323-bb01-e0e64bd5b78e,DISK]]
5. BP-1552885336-10.42.12.178-1446159880991:blk_1084952880_17800120 len=134217728 repl=3 [DatanodeInfoWithStorage[10.42.12.181:50010,DS-79185b75-1938-4c91-a6d0-bb6687ca7e56,DISK], DatanodeInfoWithStorage[10.42.12.184:50010,DS-dcbd20aa-0334-49e0-b807-d2489f5923c6,DISK], DatanodeInfoWithStorage[10.42.12.183:50010,DS-f1d77328-f3af-483e-82e9-66ab0723a52c,DISK]]
6. BP-1552885336-10.42.12.178-1446159880991:blk_1084952887_17800316{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-5f3eac72-eb55-4df7-bcaa-a6fa35c166a0:NORMAL:10.42.12.188:50010|RBW], ReplicaUC[[DISK]DS-a2a0d8f0-772e-419f-b4ff-10b4966c57ca:NORMAL:10.42.12.184:50010|RBW], ReplicaUC[[DISK]DS-52984aa0-598e-4fff-acfa-8904ca7b585c:NORMAL:10.42.12.185:50010|RBW]]} len=115289753 MISSING!

Status: CORRUPT
 Total size:920596121 B
 Total dirs:0
 Total files:   1
 Total symlinks:0
 Total blocks (validated):  7 (avg. block size 131513731 B)
  
  UNDER MIN REPL'D BLOCKS:  1 (14.285714 %)
  dfs.namenode.replication.min: 1
  CORRUPT FILES:1
  MISSING BLOCKS:   1
  MISSING SIZE: 115289753 B
  
 Minimally replicated blocks:   6 (85.71429 %)
 Over-replicated blocks:2 (28.571428 %)
 Under-replicated blocks:   0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor:3
 Average block replication: 2.857143
 Corrupt blocks:   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-10-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/

[Oct 9, 2016 9:07:03 AM] (kai.zheng) HADOOP-12579. Deprecate 
WriteableRPCEngine. Contributed by Wei Zhou
[Oct 9, 2016 9:33:26 AM] (kai.zheng) MAPREDUCE-6780. Add support for HDFS 
directory with erasure code policy
[Oct 9, 2016 9:53:36 AM] (kai.zheng) HADOOP-13641. Update 
UGI#spawnAutoRenewalThreadForUserCreds to reduce
[Oct 10, 2016 5:55:49 AM] (kai.zheng) HDFS-10895. Update HDFS Erasure Coding 
doc to add how to use ISA-L based




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.hdfs.server.namenode.TestAddBlock 
   hadoop.hdfs.server.datanode.TestDataNodeLifeline 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [148K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/190/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




Re: [VOTE] Release Apache Hadoop 2.6.5 (RC1)

2016-10-10 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Built native from source
- Deployed to a single-node cluster and ran some sample jobs
Jason
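For anyone reproducing the signature/digest checks above, here is a rough sketch of the digest half (the filenames are illustrative stand-ins, not the actual RC artifact names; in practice the tarball and its published checksum come from the RC staging URL, and the GPG signature check additionally needs the KEYS file):

```shell
# Stand-in for the downloaded RC tarball (illustrative only).
echo "release bits" > hadoop-2.6.5.tar.gz
# Record its SHA-256 digest, then re-verify it as you would against the
# published checksum file.
sha256sum hadoop-2.6.5.tar.gz > hadoop-2.6.5.tar.gz.sha256
sha256sum -c hadoop-2.6.5.tar.gz.sha256 && echo "digest OK"
```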
 

On Sunday, October 2, 2016 7:13 PM, Sangjin Lee  wrote:
 

 Hi folks,

I have pushed a new release candidate (RC1) for the Apache Hadoop 2.6.5
release (the next maintenance release in the 2.6.x release line). RC1
contains fixes to CHANGES.txt, and is otherwise identical to RC0.

Below are the details of this release candidate:

The RC is available for validation at:
http://home.apache.org/~sjlee/hadoop-2.6.5-RC1/.

The RC tag in git is release-2.6.5-RC1 and its git commit is
e8c9fe0b4c252caf2ebf1464220599650f119997.

The maven artifacts are staged via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1050/.

You can find my public key at
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.

Please try the release and vote. The vote will run for the usual 5 days. I
would greatly appreciate your timely vote. Thanks!

Regards,
Sangjin


   

[jira] [Created] (HDFS-10991) libhdfs : Client compilation is failing for hdfsTruncateFile API

2016-10-10 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-10991:
-

 Summary: libhdfs :  Client compilation is failing for 
hdfsTruncateFile API
 Key: HDFS-10991
 URL: https://issues.apache.org/jira/browse/HDFS-10991
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
Priority: Blocker


{noformat}
/tmp/ccJNUj6m.o: In function `main':
test.c:(.text+0x812): undefined reference to `hdfsTruncateFile'
collect2: ld returned 1 exit status
{noformat}


