[jira] Commented: (HDFS-1508) Ability to do savenamespace without being in safemode

2010-11-30 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965599#action_12965599
 ] 

Konstantin Shvachko commented on HDFS-1508:
---

> Ops wants to do a checkpoint without the BNN/SNN (say it is down) and is 
> willing to make the NN unresponsive for a short while.

So they set it into safe mode, run saveNamespace, then leave safe mode. A few 
seconds won't make a difference.
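
For reference, the manual sequence looks roughly like this (the standard dfsadmin 
commands; the exact invocation may vary by version and deployment):

{noformat}
hadoop dfsadmin -safemode enter
hadoop dfsadmin -saveNamespace
hadoop dfsadmin -safemode leave
{noformat}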

> Ability to do savenamespace without being in safemode
> -
>
> Key: HDFS-1508
> URL: https://issues.apache.org/jira/browse/HDFS-1508
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Attachments: savenamespaceWithoutSafemode.txt
>
>
> In the current code, the administrator can run savenamespace only after 
> putting the namenode in safemode. This means that applications that are 
> writing to HDFS encounters errors because the NN is in safemode. We would 
> like to allow saveNamespace even when the namenode is not in safemode.
> The savenamespace command already acquires the FSNamesystem writelock. There 
> is no need to require that the namenode is in safemode too.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1523) TestLargeBlock is failing on trunk

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965569#action_12965569
 ] 

Konstantin Boudnik commented on HDFS-1523:
--

Here's the scenario of the test (the arithmetic is spelled out below):
- read the file in chunks of 134217728 bytes (128 MB)
- after the last full read there are 513 bytes left to be read
- 512 of those bytes have to be read from the first block
- 1 byte is going to be read from the last (second) block
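
The arithmetic behind those numbers, taken from the sizes that show up in the 
logs below:

{noformat}
file length       = 2147484161 bytes
first block       = 2147484160 bytes, second block = 1 byte
full 128 MB reads = 16 * 134217728 = 2147483648 bytes
remainder         = 2147484161 - 2147483648 = 513 bytes
                  = 512 bytes left in the first block + 1 byte in the second block
{noformat}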

When the test passes, the call to FSNameSystem.getBlockLocationsInternal made 
before reading the last 513 bytes returns the last block of the file (size=1):

{noformat}
2010-11-30 19:44:49,439 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_-6779333650185181528_1001, blk_-3599865432887782445_1001]
2010-11-30 19:44:49,440 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(881)) - last = blk_-3599865432887782445_1001
2010-11-30 19:44:49,457 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=cos   ip=/127.0.0.1   cmd=open   src=/home/cos/work/H0.23/git/hdfs/build/test/data/2147484160.dat   dst=null   perm=null
2010-11-30 19:44:49,459 DEBUG hdfs.DFSClient (DFSInputStream.java:openInfo(113)) - newInfo = LocatedBlocks{
  fileLength=2147484161
  underConstruction=false
  blocks=[LocatedBlock{blk_-6779333650185181528_1001; getBlockSize()=2147484160; corrupt=false; offset=0; locs=[127.0.0.1:35608]}]
  lastLocatedBlock=LocatedBlock{blk_-3599865432887782445_1001; getBlockSize()=1; corrupt=false; offset=2147484160; locs=[127.0.0.1:35608]}
  isLastBlockComplete=true}
...
2010-11-30 19:45:23,880 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35608, dest: /127.0.0.1:51640, bytes: 2164261380, op: HDFS_READ, cliID: DFSClient_1273505070, offset: 0, srvID: DS-212336177-192.168.102.126-35608-1291175019763, blockid: blk_-6779333650185181528_1001, duration: 34361753463
2010-11-30 19:45:24,030 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_-6779333650185181528_1001, blk_-3599865432887782445_1001]
2010-11-30 19:45:24,030 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(881)) - last = blk_-3599865432887782445_1001
2010-11-30 19:45:24,031 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=cos   ip=/127.0.0.1   cmd=open   src=/home/cos/work/H0.23/git/hdfs/build/test/data/2147484160.dat   dst=null   perm=null
2010-11-30 19:45:24,032 DEBUG datanode.DataNode (DataXceiver.java:<init>(86)) - Number of active connections is: 2
2010-11-30 19:45:24,099 DEBUG datanode.DataNode (DataXceiver.java:run(135)) - DatanodeRegistration(127.0.0.1:35608, storageID=DS-212336177-192.168.102.126-35608-1291175019763, infoPort=46218, ipcPort=38099):Number of active connections is: 3
2010-11-30 19:45:24,099 DEBUG datanode.DataNode (BlockSender.java:<init>(140)) - block=blk_-3599865432887782445_1001, replica=FinalizedReplica, blk_-3599865432887782445_1001, FINALIZED
  getNumBytes()     = 1
  getBytesOnDisk()  = 1
  getVisibleLength()= 1
  getVolume()       = /home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized
  getBlockFile()    = /home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized/blk_-3599865432887782445
  unlinked=false
2010-11-30 19:45:24,101 DEBUG datanode.DataNode (BlockSender.java:<init>(231)) - replica=FinalizedReplica, blk_-3599865432887782445_1001, FINALIZED
  getNumBytes()     = 1
  getBytesOnDisk()  = 1
  getVisibleLength()= 1
  getVolume()       = /home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized
  getBlockFile()    = /home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized/blk_-3599865432887782445
  unlinked=false
2010-11-30 19:45:24,103 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35608, dest: /127.0.0.1:51644, bytes: 5, op: HDFS_READ, cliID: DFSClient_1273505070, offset: 0, srvID: DS-212336177-192.168.102.126-35608-1291175019763, blockid: blk_-3599865432887782445_1001, duration: 1854472
{noformat}

In case of failure:
{noformat}
2010-11-30 19:35:49,426 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_1170274882140601397_1001, blk_2891916181488413346_1001]
2010-11-30 19:35:49,426 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(881)) - last = blk_2891916181488413346_1001
2010-11-30 19:35:49,427 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=cos   ip=/127.0.0.1   cmd=open   src=/home/cos/work/hadoop/git/hdfs/build/test/data/2147484160.dat   dst=null   perm=null
2010-11-30 19:35:49,428 DEBUG hdfs.DFSClient (DFSInputStream.java:openInfo(113)) - newInfo = LocatedBlocks{
  fileLength=2147484161
  underConstruction=false
  blocks=[LocatedBlock{blk

[jira] Commented: (HDFS-1418) DFSClient Uses Deprecated "mapred.task.id" Configuration Key Causing Unnecessary Warning Messages

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965560#action_12965560
 ] 

Konstantin Boudnik commented on HDFS-1418:
--

Yes, I have seen the code. It poses a problem nonetheless (and we already 
face it here): when a configuration parameter gets deprecated in the upstream 
project (MR), we are forced to change the downstream library. Pretty bad, I think.

> DFSClient Uses Deprecated "mapred.task.id" Configuration Key Causing 
> Unnecessary Warning Messages
> 
>
> Key: HDFS-1418
> URL: https://issues.apache.org/jira/browse/HDFS-1418
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Ranjit Mathew
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HDFS-1418.patch
>
>
> Every invocation of the "hadoop fs" command leads to an unnecessary warning 
> like the following:
> {noformat}
> $ $HADOOP_HOME/bin/hadoop fs -ls /
> 10/09/24 15:10:23 WARN conf.Configuration: mapred.task.id is deprecated. 
> Instead, use mapreduce.task.attempt.id
> {noformat}
> This is easily fixed by updating 
> "src/java/org/apache/hadoop/hdfs/DFSClient.java".

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1523) TestLargeBlock is failing on trunk

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965559#action_12965559
 ] 

Konstantin Boudnik commented on HDFS-1523:
--

The test fails every time on:
- 32-bit Ubuntu (kernel 2.6.28-11)
- 64-bit Ubuntu (kernel 2.6.28-18) (Apache Hudson hadoop8)
It works on:
- 64-bit Ubuntu (kernel 2.6.35-23)

Is it a kernel problem?

> TestLargeBlock is failing on trunk
> --
>
> Key: HDFS-1523
> URL: https://issues.apache.org/jira/browse/HDFS-1523
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>
> TestLargeBlock has been failing for more than a week now on 0.22 and trunk with
> {noformat}
> java.io.IOException: Premeture EOF from inputStream
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
>   at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1418) DFSClient Uses Deprecated "mapred.task.id" Configuration Key Causing Unnecessary Warning Messages

2010-11-30 Thread Ranjit Mathew (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965558#action_12965558
 ] 

Ranjit Mathew commented on HDFS-1418:
-

bq. Actually, the use of MR-related configuration parameters (e.g. from the 
upstream project) looks like some sort of circular dependency to me. Doesn't it?

In this case, it isn't - it's merely used as a distinguishing identifier, if 
available; a random number is used otherwise. In
{{src/java/org/apache/hadoop/hdfs/DFSClient.java}} we have:
{code}
256 String taskId = conf.get("mapred.task.id");
257 if (taskId != null) {
258   this.clientName = "DFSClient_" + taskId;
259 } else {
260   this.clientName = "DFSClient_" + r.nextInt();
261 }
{code}
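
A minimal sketch of the kind of change the summary suggests (not necessarily what 
the attached patch does), assuming Configuration's deprecation mapping makes a 
value set under the old key visible through the new one:

{code}
// Hypothetical fix sketch: read the non-deprecated key so Configuration does
// not log the deprecation warning; a value set as "mapred.task.id" is still
// resolved here via the deprecation mapping.
String taskId = conf.get("mapreduce.task.attempt.id");
if (taskId != null) {
  this.clientName = "DFSClient_" + taskId;
} else {
  this.clientName = "DFSClient_" + r.nextInt();
}
{code}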

> DFSClient Uses Deprecated "mapred.task.id" Configuration Key Causing 
> Unnecessary Warning Messages
> 
>
> Key: HDFS-1418
> URL: https://issues.apache.org/jira/browse/HDFS-1418
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Ranjit Mathew
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HDFS-1418.patch
>
>
> Every invocation of the "hadoop fs" command leads to an unnecessary warning 
> like the following:
> {noformat}
> $ $HADOOP_HOME/bin/hadoop fs -ls /
> 10/09/24 15:10:23 WARN conf.Configuration: mapred.task.id is deprecated. 
> Instead, use mapreduce.task.attempt.id
> {noformat}
> This is easily fixed by updating 
> "src/java/org/apache/hadoop/hdfs/DFSClient.java".

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1508) Ability to do savenamespace without being in safemode

2010-11-30 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965556#action_12965556
 ] 

Sanjay Radia commented on HDFS-1508:


> saveNamespace with -f to indicate that NN should automatically enter safe mode
I was suggesting that -f means: even though the NN is not in safemode, go 
ahead anyway (without putting it into safemode). The motivation is that 
saveNamespace is a big deal, as it could make the NN unresponsive for a while.

I had not thought about the checkpoint issue that Kons mentioned. If 
saveNamespace comes in the middle of a checkpoint, can the NN ignore the 
checkpoint sent by the secondary?
(BTW, with HDFS-1073 this problem disappears - one can take as many checkpoints 
as one likes.)
If it complicates the BNN/SNN's checkpointing, then this Jira should wait until 
HDFS-1073 is done.

As for the motivation: Ops wants to do a checkpoint without the BNN/SNN (say it 
is down) and is willing to make the NN unresponsive for a short while.


> Ability to do savenamespace without being in safemode
> -
>
> Key: HDFS-1508
> URL: https://issues.apache.org/jira/browse/HDFS-1508
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Attachments: savenamespaceWithoutSafemode.txt
>
>
> In the current code, the administrator can run savenamespace only after 
> putting the namenode in safemode. This means that applications that are 
> writing to HDFS encounters errors because the NN is in safemode. We would 
> like to allow saveNamespace even when the namenode is not in safemode.
> The savenamespace command already acquires the FSNamesystem writelock. There 
> is no need to require that the namenode is in safemode too.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1514) project.version in aop.xml is out of sync with build.xml

2010-11-30 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-1514:
-

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

It (and more) has been fixed as a part of HDFS-1516.

> project.version in aop.xml is out of sync with build.xml
> 
>
> Key: HDFS-1514
> URL: https://issues.apache.org/jira/browse/HDFS-1514
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Patrick Kling
>Assignee: Patrick Kling
> Attachments: HDFS-1514.patch
>
>
> project.version in aop.xml is set to 0.22.0-SNAPSHOT whereas version in 
> build.xml is set to 0.23.0-SNAPSHOT. This causes ant test-patch to fail when 
> using a local maven repository.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1058) reading from file under construction fails if the reader beats writer to DN for new block

2010-11-30 Thread Thanh Do (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965545#action_12965545
 ] 

Thanh Do commented on HDFS-1058:


I think this is a design decision (stated in the append design document at 
HDFS-265). Here we trade performance for consistency.

> reading from file under construction fails if the reader beats writer to DN 
> for new block
> 
>
> Key: HDFS-1058
> URL: https://issues.apache.org/jira/browse/HDFS-1058
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node, hdfs client
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Todd Lipcon
>
> If there is a writer and concurrent reader, the following can occur:
> - The writer allocates a new block from the NN
> - The reader calls getBlockLocations
> - Reader connects to the DN and calls getReplicaVisibleLength
> - writer still has not talked to the DN, so DN doesn't know about the block 
> and throws an error

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1362) Provide volume management functionality for DataNode

2010-11-30 Thread Wang Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965543#action_12965543
 ] 

Wang Xu commented on HDFS-1362:
---

Hi Jerry, 

This is what I am doing now :)

> Provide volume management functionality for DataNode
> 
>
> Key: HDFS-1362
> URL: https://issues.apache.org/jira/browse/HDFS-1362
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: data-node
>Reporter: Wang Xu
>Assignee: Wang Xu
> Attachments: HDFS-1362.txt, Provide_volume_management_for_DN_v1.pdf
>
>
> The current management unit in Hadoop is a node, i.e. if a node fails, it 
> will be kicked out and all the data on the node will be replicated.
> As almost all SATA controllers support hotplug, we add a new command line 
> interface to the datanode, so it can list, add or remove a volume online, which 
> means we can change a disk without node decommission. Moreover, if the failed 
> disk is still readable and the node has enough space, it can migrate data on the 
> disk to other disks in the same node.
> A more detailed design document will be attached.
> The original version in our lab is implemented against the 0.20 datanode 
> directly; would it be better to implement it in contrib? Or any other 
> suggestion?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1187) Modify fetchdt to allow renewing and canceling token

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965537#action_12965537
 ] 

Konstantin Boudnik commented on HDFS-1187:
--

bq. I noticed the same set of Fault Inject tests got run 4 times. 

Could you be more specific about this "4 times"? Which tests? Are they 
executed 4 times again and again? It's not clear.

> Modify fetchdt to allow renewing and canceling token
> 
>
> Key: HDFS-1187
> URL: https://issues.apache.org/jira/browse/HDFS-1187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.22.0
>
> Attachments: fetchdt.patch, h1187-14.patch, h1187-15.patch
>
>
> I would like to extend fetchdt to allow renewing and canceling tokens.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1473) Refactor storage management into separate classes than fsimage file reading/writing

2010-11-30 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965525#action_12965525
 ] 

Konstantin Shvachko commented on HDFS-1473:
---

Sorry for the late response. I was just looking at it on Sunday when Eli committed 
it, so I stopped. Now, looking at it in trunk, I feel it needs more work, mostly 
because I see lots of unnecessary public methods and classes.
# FSImageCompression should not have public methods except for toString().
# FSImageFormat should not be abstract and should not have anything public.
# FSImageSerialization should not be abstract. It does not have any abstract 
methods. I understand we will get rid of other public qualifiers once OIV moves 
inside the namenode package. Could we please state that at the beginning of the 
class.
# FSImageFormat.Loader and FSImageFormat.Writer seem like a mismatch. Loader 
rhymes with saver, reader with writer. Just a terminology issue. We used to 
load and save the image. Now we load and write it. I'd stay with the traditional 
terms.
# FSImageSerialization.TL_DATA is the new name for the former FSImage.FILE_PERM. I 
agree FILE_PERM is not the best choice, but TL_DATA lacks any meaning 
attributed to the object. And if it ever becomes a non-thread-local object, 
the name will be completely irrelevant. I'd go with the original name and 
rename in a separate issue. In general, refactoring patches should avoid 
renaming.

> Refactor storage management into separate classes than fsimage file 
> reading/writing
> ---
>
> Key: HDFS-1473
> URL: https://issues.apache.org/jira/browse/HDFS-1473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.22.0, 0.23.0
>
> Attachments: hdfs-1473-prelim.txt, hdfs-1473.txt, hdfs-1473.txt, 
> hdfs-1473.txt
>
>
> Currently the FSImage class is responsible both for storage management (eg 
> moving around files, tracking file names, the VERSION file, etc) as well as 
> for the actual serialization and deserialization of the "fsimage" file within 
> the storage directory.
> I'd like to refactor the loading and saving code into new classes. This will 
> make testing easier and also make the major changes in HDFS-1073 easier to 
> understand.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1523) TestLargeBlock is failing on trunk

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965510#action_12965510
 ] 

Konstantin Boudnik commented on HDFS-1523:
--

Full stack trace:
{noformat}
java.io.IOException: Premeture EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
        at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
        at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:273)
        at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:225)
        at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193)
        at org.apache.hadoop.hdfs.BlockReader.read(BlockReader.java:117)
        at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:477)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:528)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at org.apache.hadoop.hdfs.TestLargeBlock.checkFullFile(TestLargeBlock.java:142)
        at org.apache.hadoop.hdfs.TestLargeBlock.runTest(TestLargeBlock.java:210)
        at org.apache.hadoop.hdfs.TestLargeBlock.__CLR3_0_2y8q39ovnu(TestLargeBlock.java:171)
        at org.apache.hadoop.hdfs.TestLargeBlock.testLargeBlockSize(TestLargeBlock.java:169)
{noformat}

> TestLargeBlock is failing on trunk
> --
>
> Key: HDFS-1523
> URL: https://issues.apache.org/jira/browse/HDFS-1523
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>
> TestLargeBlock has been failing for more than a week now on 0.22 and trunk with
> {noformat}
> java.io.IOException: Premeture EOF from inputStream
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
>   at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1523) TestLargeBlock is failing on trunk

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965511#action_12965511
 ] 

Konstantin Boudnik commented on HDFS-1523:
--

I can not reproduce this issue on my Ubuntu machine.

> TestLargeBlock is failing on trunk
> --
>
> Key: HDFS-1523
> URL: https://issues.apache.org/jira/browse/HDFS-1523
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>
> TestLargeBlock has been failing for more than a week now on 0.22 and trunk with
> {noformat}
> java.io.IOException: Premeture EOF from inputStream
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
>   at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1523) TestLargeBlock is failing on trunk

2010-11-30 Thread Konstantin Boudnik (JIRA)
TestLargeBlock is failing on trunk
--

 Key: HDFS-1523
 URL: https://issues.apache.org/jira/browse/HDFS-1523
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.22.0
Reporter: Konstantin Boudnik


TestLargeBlock has been failing for more than a week now on 0.22 and trunk with
{noformat}
java.io.IOException: Premeture EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
{noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1406) TestCLI fails on Ubuntu with default /etc/hosts

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965504#action_12965504
 ] 

Konstantin Boudnik commented on HDFS-1406:
--

I have seen the same issue running Herriot system tests, where the framework 
couldn't connect to a datanode for exactly this reason.

> TestCLI fails on Ubuntu with default /etc/hosts
> ---
>
> Key: HDFS-1406
> URL: https://issues.apache.org/jira/browse/HDFS-1406
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Todd Lipcon
>Priority: Minor
> Attachments: Test.java
>
>
> Depending on the order of entries in /etc/hosts, TestCLI can fail. This is 
> because it sets fs.default.name to "localhost", and then the bound IPC socket 
> on the NN side reports its hostname as "foobar-host" if the entry for 
> 127.0.0.1 lists "foobar-host" before "localhost". This seems to be the 
> default in some versions of Ubuntu.
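> An /etc/hosts entry that triggers the failure looks like this (hypothetical 
> example of the ordering described above):
> {noformat}
> 127.0.0.1   foobar-host localhost
> {noformat}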

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1522) Merge Block.BLOCK_FILE_PREFIX and DataStorage.BLOCK_FILE_PREFIX into one constant

2010-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1522:
--

Summary: Merge Block.BLOCK_FILE_PREFIX and DataStorage.BLOCK_FILE_PREFIX 
into one constant  (was: Merge Block.BLOCK_FILE_PREFIX and 
DataStorage.BLOCK_FILE_PREFIX into one contant)

> Merge Block.BLOCK_FILE_PREFIX and DataStorage.BLOCK_FILE_PREFIX into one 
> constant
> -
>
> Key: HDFS-1522
> URL: https://issues.apache.org/jira/browse/HDFS-1522
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.21.0
>Reporter: Konstantin Shvachko
> Fix For: 0.22.0
>
>
> Two semantically identical constants, {{Block.BLOCK_FILE_PREFIX}} and 
> {{DataStorage.BLOCK_FILE_PREFIX}}, should be merged into one. It should be 
> defined in {{Block}}, imo.
> Also, uses of "blk_", like in {{DirectoryScanner}}, should be replaced by 
> this constant.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1522) Merge Block.BLOCK_FILE_PREFIX and DataStorage.BLOCK_FILE_PREFIX into one contant

2010-11-30 Thread Konstantin Shvachko (JIRA)
Merge Block.BLOCK_FILE_PREFIX and DataStorage.BLOCK_FILE_PREFIX into one contant


 Key: HDFS-1522
 URL: https://issues.apache.org/jira/browse/HDFS-1522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
 Fix For: 0.22.0


Two semantically identical constants, {{Block.BLOCK_FILE_PREFIX}} and 
{{DataStorage.BLOCK_FILE_PREFIX}}, should be merged into one. It should be defined 
in {{Block}}, imo.
Also, uses of "blk_", like in {{DirectoryScanner}}, should be replaced by 
this constant.
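
A minimal sketch of the proposed cleanup (placement and usage site are 
illustrative only):

{code}
// In Block: the single shared constant.
public static final String BLOCK_FILE_PREFIX = "blk_";

// Hypothetical usage sketch, e.g. in DirectoryScanner, instead of a
// hard-coded "blk_" literal or DataStorage.BLOCK_FILE_PREFIX:
boolean isBlockFile = fileName.startsWith(Block.BLOCK_FILE_PREFIX);
{code}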

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1502) TestBlockRecovery triggers NPE in assert

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965502#action_12965502
 ] 

Konstantin Boudnik commented on HDFS-1502:
--

Looking somewhat more into this, I see in a debugger that 
{{reply.getNumBytes()}} is 6000 while {{newBlock.getNumBytes()}} is only 5000 (as 
set in the test class). They are different, which causes the assertion to fail.

{noformat}
if(rState == bestState) {
  minLength = Math.min(minLength, r.rInfo.getNumBytes());
  participatingList.add(r);
}
{noformat}

The difference is set by the code above, apparently. I am not that familiar 
with the recovery process in particular, but it seems like there are some 
issues with this part of the code. Thoughts?


> TestBlockRecovery triggers NPE in assert
> 
>
> Key: HDFS-1502
> URL: https://issues.apache.org/jira/browse/HDFS-1502
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Eli Collins
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HDFS-1502.patch
>
>
> {noformat}
> Testcase: testRBW_RWRReplicas took 10.333 sec
> Caused an ERROR
> null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1881)
> at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
> at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:305)
> {noformat}
> {noformat}
> Block reply = r.datanode.updateReplicaUnderRecovery(
> r.rInfo, recoveryId, newBlock.getNumBytes());
> assert reply.equals(newBlock) &&
>reply.getNumBytes() == newBlock.getNumBytes() :
>   "Updated replica must be the same as the new block.";<- 
> line 1881
> {noformat}
> Not sure how reply could be null since  updateReplicaUnderRecovery always 
> returns a newly instantiated object.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1502) TestBlockRecovery triggers NPE in assert

2010-11-30 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-1502:
-

Attachment: HDFS-1502.patch

I have a patch which fixes the NPE issue, but I still see two test cases failing 
on the assertion on the same line where the NPE used to happen.

> TestBlockRecovery triggers NPE in assert
> 
>
> Key: HDFS-1502
> URL: https://issues.apache.org/jira/browse/HDFS-1502
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Eli Collins
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HDFS-1502.patch
>
>
> {noformat}
> Testcase: testRBW_RWRReplicas took 10.333 sec
> Caused an ERROR
> null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1881)
> at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
> at 
> org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:305)
> {noformat}
> {noformat}
> Block reply = r.datanode.updateReplicaUnderRecovery(
> r.rInfo, recoveryId, newBlock.getNumBytes());
> assert reply.equals(newBlock) &&
>reply.getNumBytes() == newBlock.getNumBytes() :
>   "Updated replica must be the same as the new block.";<- 
> line 1881
> {noformat}
> Not sure how reply could be null since  updateReplicaUnderRecovery always 
> returns a newly instantiated object.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1508) Ability to do savenamespace without being in safemode

2010-11-30 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965480#action_12965480
 ] 

Konstantin Shvachko commented on HDFS-1508:
---

We can use Sanjay's idea of using saveNamespace with -f to indicate that NN 
should automatically enter safe mode before saving the namespace and then leave 
it upon completion.
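
A rough sketch of that behaviour on the NN side (the method and helper names are 
assumptions for illustration, not the actual FSNamesystem API):

{code}
// Hypothetical sketch: enter safe mode if not already in it, save the
// namespace, then restore the previous safe-mode state.
void saveNamespaceWithForce() throws IOException {
  boolean wasInSafeMode = isInSafeMode();
  if (!wasInSafeMode) {
    enterSafeMode();            // assumed helper
  }
  try {
    saveNamespace();
  } finally {
    if (!wasInSafeMode) {
      leaveSafeMode();          // assumed helper
    }
  }
}
{code}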

> Ability to do savenamespace without being in safemode
> -
>
> Key: HDFS-1508
> URL: https://issues.apache.org/jira/browse/HDFS-1508
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Attachments: savenamespaceWithoutSafemode.txt
>
>
> In the current code, the administrator can run savenamespace only after 
> putting the namenode in safemode. This means that applications that are 
> writing to HDFS encounters errors because the NN is in safemode. We would 
> like to allow saveNamespace even when the namenode is not in safemode.
> The savenamespace command already acquires the FSNamesystem writelock. There 
> is no need to require that the namenode is in safemode too.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1508) Ability to do savenamespace without being in safemode

2010-11-30 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965475#action_12965475
 ] 

Konstantin Shvachko commented on HDFS-1508:
---

Dhruba, what is the use case for that? Why is safemode a problem?
It could be more complicated than just removing the verification. Think of how it 
will work with checkpointing, if saveNamespace() comes in the middle of a 
checkpoint. I think this was the main reason why it required safe mode.

> Ability to do savenamespace without being in safemode
> -
>
> Key: HDFS-1508
> URL: https://issues.apache.org/jira/browse/HDFS-1508
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Attachments: savenamespaceWithoutSafemode.txt
>
>
> In the current code, the administrator can run savenamespace only after 
> putting the namenode in safemode. This means that applications that are 
> writing to HDFS encounters errors because the NN is in safemode. We would 
> like to allow saveNamespace even when the namenode is not in safemode.
> The savenamespace command already acquires the FSNamesystem writelock. There 
> is no need to require that the namenode is in safemode too.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1418) DFSClient Uses Deprecated "mapred.task.id" Configuration Key Causing Unnecessary Warning Messages

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965454#action_12965454
 ] 

Konstantin Boudnik commented on HDFS-1418:
--

Actually, the use of MR-related configuration parameters (e.g. from the upstream 
project) looks like some sort of circular dependency to me. Doesn't it?

> DFSClient Uses Deprecated "mapred.task.id" Configuration Key Causing 
> Unnecessary Warning Messages
> 
>
> Key: HDFS-1418
> URL: https://issues.apache.org/jira/browse/HDFS-1418
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Ranjit Mathew
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HDFS-1418.patch
>
>
> Every invocation of the "hadoop fs" command leads to an unnecessary warning 
> like the following:
> {noformat}
> $ $HADOOP_HOME/bin/hadoop fs -ls /
> 10/09/24 15:10:23 WARN conf.Configuration: mapred.task.id is deprecated. 
> Instead, use mapreduce.task.attempt.id
> {noformat}
> This is easily fixed by updating 
> "src/java/org/apache/hadoop/hdfs/DFSClient.java".

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965433#action_12965433
 ] 

Konstantin Boudnik commented on HDFS-1519:
--

It isn't like ant is the recommended way to build libhdfs. It might be the 
simplest way, though. Basically, all you need (if you have the source tree handy) 
is to run {{ant compile-contrib -Dcompile.c++=yes -Dlibhdfs=yes}}, and after a 
short while you'll find the library binaries under {{build/c++}}.
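
For example (the exact layout under build/c++ may vary by platform):

{noformat}
$ cd hdfs                                        # root of the source tree
$ ant compile-contrib -Dcompile.c++=yes -Dlibhdfs=yes
$ ls build/c++                                   # libhdfs binaries end up under here
{noformat}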

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>Assignee: Konstantin Boudnik
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> BUILD FAILED
> /usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
> dependencies:
>   resolve failed - see output for details
> Total time: 3 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1508) Ability to do savenamespace without being in safemode

2010-11-30 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965424#action_12965424
 ] 

Sanjay Radia commented on HDFS-1508:


The main difference between doing it in safemode and outside safemode is that 
the clients will not get the safemode exception but will instead find that they 
are blocked waiting on the NN; new connections will also be accepted up to the 
OS's limit.

Given that the NN will be unresponsive, I suggest adding a -f parameter to 
indicate that it's okay that it is not in safemode.

> Ability to do savenamespace without being in safemode
> -
>
> Key: HDFS-1508
> URL: https://issues.apache.org/jira/browse/HDFS-1508
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Attachments: savenamespaceWithoutSafemode.txt
>
>
> In the current code, the administrator can run savenamespace only after 
> putting the namenode in safemode. This means that applications that are 
> writing to HDFS encounters errors because the NN is in safemode. We would 
> like to allow saveNamespace even when the namenode is not in safemode.
> The savenamespace command already acquires the FSNamesystem writelock. There 
> is no need to require that the namenode is in safemode too.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2010-11-30 Thread Jon Wilson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965415#action_12965415
 ] 

Jon Wilson commented on HDFS-1519:
--

I've been following the instructions at 
http://wiki.apache.org/hadoop/MountableHDFS, which worked (with minor 
modifications) with the release tarball for 0.20.2.  However, my application 
(ROOT: http://root.cern.ch) opens files with mode O_RDWR (it doesn't do 
anything that hdfs doesn't support with them, though), so I needed the later 
version with HDFS-861 fixed.

I did a bit of poking, and it looks like all the code for fuse-dfs and for 
libhdfs is present, but that libhdfs is not built in the release tarball.  So I 
do need to build libhdfs.  Is ant the recommended way to do this?  Looking at 
http://wiki.apache.org/hadoop/LibHDFS, perhaps I should just use make instead.  
I'll try that, and report back.

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>Assignee: Konstantin Boudnik
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> BUILD FAILED
> /usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
> dependencies:
>   resolve failed - see output for details
> Total time: 3 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965407#action_12965407
 ] 

Konstantin Boudnik commented on HDFS-1519:
--

I am not certain whether the release tarball has fuse-dfs included in it (check for 
yourself), but I am sure you should be able to build it from the source code by 
following the couple of simple steps in {{src/contrib/fuse-dfs/README}}.

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>Assignee: Konstantin Boudnik
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> BUILD FAILED
> /usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
> dependencies:
>   resolve failed - see output for details
> Total time: 3 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1508) Ability to do savenamespace without being in safemode

2010-11-30 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965389#action_12965389
 ] 

dhruba borthakur commented on HDFS-1508:


code review here: https://reviews.apache.org/r/125/

> Ability to do savenamespace without being in safemode
> -
>
> Key: HDFS-1508
> URL: https://issues.apache.org/jira/browse/HDFS-1508
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Attachments: savenamespaceWithoutSafemode.txt
>
>
> In the current code, the administrator can run savenamespace only after 
> putting the namenode in safemode. This means that applications that are 
> writing to HDFS encounters errors because the NN is in safemode. We would 
> like to allow saveNamespace even when the namenode is not in safemode.
> The savenamespace command already acquires the FSNamesystem writelock. There 
> is no need to require that the namenode is in safemode too.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1508) Ability to do savenamespace without being in safemode

2010-11-30 Thread dhruba borthakur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhruba borthakur updated HDFS-1508:
---

Attachment: savenamespaceWithoutSafemode.txt

The namenode need not be in safemode while running the saveNamespace command. 
The saveNamespace command acquires the FSNamesystem writelock, thus preventing 
anybody else from modifying the namespace.

The lease expiry thread in the LeaseManager acquires the FSNamesystem writelock 
too, so it is well protected.
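
A condensed sketch of the locking pattern being relied on (simplified; the exact 
call shape into FSImage is an assumption for illustration):

{code}
// The whole image save runs under the FSNamesystem write lock, so neither
// RPC handlers nor the LeaseManager monitor can mutate the namespace while
// the fsimage is being written.
writeLock();
try {
  getFSImage().saveNamespace();   // illustrative call, not the exact signature
} finally {
  writeUnlock();
}
{code}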

> Ability to do savenamespace without being in safemode
> -
>
> Key: HDFS-1508
> URL: https://issues.apache.org/jira/browse/HDFS-1508
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Attachments: savenamespaceWithoutSafemode.txt
>
>
> In the current code, the administrator can run savenamespace only after 
> putting the namenode in safemode. This means that applications that are 
> writing to HDFS encounters errors because the NN is in safemode. We would 
> like to allow saveNamespace even when the namenode is not in safemode.
> The savenamespace command already acquires the FSNamesystem writelock. There 
> is no need to require that the namenode is in safemode too.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2010-11-30 Thread Jon Wilson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965387#action_12965387
 ] 

Jon Wilson commented on HDFS-1519:
--

I want to build fuse-dfs, which requires libhdfs.  Do I need to get a 
subversion checkout to run ant, instead of using a release tarball?  Or perhaps 
libhdfs and fuse-dfs are already built in the release tarball?  Thanks for the 
help so far.

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>Assignee: Konstantin Boudnik
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> BUILD FAILED
> /usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
> dependencies:
>   resolve failed - see output for details
> Total time: 3 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1377) Quota bug for partial blocks allows quotas to be violated

2010-11-30 Thread Raghu Angadi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965378#action_12965378
 ] 

Raghu Angadi commented on HDFS-1377:


Thanks Eli. Will review the patch tonight (tomorrow night at the latest).

> Quota bug for partial blocks allows quotas to be violated 
> --
>
> Key: HDFS-1377
> URL: https://issues.apache.org/jira/browse/HDFS-1377
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.1, 0.20.2, 0.21.0, 0.22.0, 0.23.0
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Blocker
> Fix For: 0.20.3, 0.21.1, 0.22.0, 0.23.0
>
> Attachments: hdfs-1377-1.patch, hdfs-1377-b20-1.patch, 
> hdfs-1377-b20-2.patch, hdfs-1377-b20-3.patch, HDFS-1377.patch
>
>
> There's a bug in the quota code that causes them not to be respected when a 
> file is not an exact multiple of the block size. Here's an example:
> {code}
> $ hadoop fs -mkdir /test
> $ hadoop dfsadmin -setSpaceQuota 384M /test
> $ ls dir/ | wc -l           # dir contains 101 files
> 101
> $ du -ms dir                # each is 3mb
> 304   dir
> $ hadoop fs -put dir /test
> $ hadoop fs -count -q /test
> none    inf    402653184    -550502400    2    101    317718528 hdfs://haus01.sf.cloudera.com:10020/test
> $ hadoop fs -stat "%o %r" /test/dir/f30
> 134217728 3                 # three 128mb blocks
> {code}
> INodeDirectoryWithQuota caches the number of bytes consumed by its children 
> in {{diskspace}}. The quota adjustment code has a bug that causes 
> {{diskspace}} to get updated incorrectly when a file is not an exact multiple 
> of the block size (the value ends up being negative). 
> This causes the quota checking code to think that the files in the directory 
> consume less space than they actually do, so verifyQuota does not throw 
> a QuotaExceededException even when the directory is over quota. However the 
> bug isn't visible to users because {{fs count -q}} reports the numbers 
> generated by INode#getContentSummary which adds up the sizes of the blocks 
> rather than use the cached INodeDirectoryWithQuota#diskspace value.
> In FSDirectory#addBlock the disk space consumed is set conservatively to the 
> full block size * the number of replicas:
> {code}
> updateCount(inodes, inodes.length-1, 0,
> fileNode.getPreferredBlockSize()*fileNode.getReplication(), true);
> {code}
> In FSNameSystem#addStoredBlock we adjust for this conservative estimate by 
> subtracting out the difference between the conservative estimate and what the 
> number of bytes actually stored was:
> {code}
> //Updated space consumed if required.
> INodeFile file = (storedBlock != null) ? storedBlock.getINode() : null;
> long diff = (file == null) ? 0 :
> (file.getPreferredBlockSize() - storedBlock.getNumBytes());
> if (diff > 0 && file.isUnderConstruction() &&
> cursize < storedBlock.getNumBytes()) {
> ...
> dir.updateSpaceConsumed(path, 0, -diff*file.getReplication());
> {code}
> We do the same in FSDirectory#replaceNode when completing the file, but at a 
> file granularity (I believe the intent here is to correct for the cases when 
> there's a failure replicating blocks and recovery). Since oldnode is under 
> construction INodeFile#diskspaceConsumed will use the preferred block size  
> (vs of Block#getNumBytes used by newnode) so we will again subtract out the 
> difference between the full block size and what the number of bytes actually 
> stored was:
> {code}
> long dsOld = oldnode.diskspaceConsumed();
> ...
> //check if disk space needs to be updated.
> long dsNew = 0;
> if (updateDiskspace && (dsNew = newnode.diskspaceConsumed()) != dsOld) {
>   try {
> updateSpaceConsumed(path, 0, dsNew-dsOld);
> ...
> {code}
> So in the above example we start with diskspace at 384mb (3 * 128mb) and then 
> subtract 375mb (to reflect that only 9mb raw was actually used) twice, so for 
> each file the diskspace charged to the directory is -366mb (384mb minus 
> 2 * 375mb). This is why the quota goes negative and yet we can still write 
> more files.
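> To make the arithmetic concrete, here is a rough standalone re-derivation of 
> those numbers (illustrative only, not HDFS code; the sizes are the ones from 
> the example above):
> {code}
> // Illustrative only: re-deriving the -366mb per-file figure quoted above.
> public class QuotaArithmetic {
>   public static void main(String[] args) {
>     long preferredBlockSize = 128L * 1024 * 1024;   // 134217728 (128mb)
>     long actualBytes = 3L * 1024 * 1024;            // 3145728 (3mb)
>     int replication = 3;
>     // FSDirectory#addBlock conservatively charges the full block size per replica
>     long charged = preferredBlockSize * replication;               // 402653184 (384mb)
>     // the over-estimate that should be given back exactly once
>     long diff = (preferredBlockSize - actualBytes) * replication;  // 393216000 (375mb)
>     // but it is subtracted in addStoredBlock *and* again in replaceNode
>     System.out.println(charged - diff - diff);                     // -383778816 (-366mb)
>   }
> }
> {code}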
> So a directory with lots of single-block files (if you have multiple blocks, 
> only the final partial block ends up subtracting from the diskspace used) 
> ends up having a quota that's way off.
> I think the fix is, in FSDirectory#replaceNode, to not have the 
> diskspaceConsumed calculations differ when the old and new INode have the 
> same blocks. I'll work on a patch that also adds a quota test for files whose 
> sizes are not multiples of the block size and warns in 
> INodeDirectory#computeContentSummary if the computed size does not match 
> the cached value.
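> As a rough sketch of that direction (not the actual patch; it assumes 
> INodeFile#getBlocks is available and that Block equality is enough to detect 
> an unchanged block list), the second adjustment in FSDirectory#replaceNode 
> would simply be skipped when the old and new INodes carry the same blocks:
> {code}
> // Sketch only: avoid applying the addStoredBlock correction a second time
> // when replaceNode swaps in an INode that references the very same blocks.
> boolean sameBlocks = Arrays.equals(oldnode.getBlocks(), newnode.getBlocks());
> long dsOld = oldnode.diskspaceConsumed();
> long dsNew = newnode.diskspaceConsumed();
> if (updateDiskspace && !sameBlocks && dsNew != dsOld) {
>   updateSpaceConsumed(path, 0, dsNew - dsOld);  // error handling as in the existing code omitted
> }
> {code}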

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

[jira] Commented: (HDFS-1362) Provide volume management functionality for DataNode

2010-11-30 Thread Jerry Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965365#action_12965365
 ] 

Jerry Tang commented on HDFS-1362:
--

I would suggest you go ahead and modify FSDataset only. If there is enough 
interest to make it part of the interface, it would be trivial to add the 
method signatures to FSDatasetInterface.

> Provide volume management functionality for DataNode
> 
>
> Key: HDFS-1362
> URL: https://issues.apache.org/jira/browse/HDFS-1362
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: data-node
>Reporter: Wang Xu
>Assignee: Wang Xu
> Attachments: HDFS-1362.txt, Provide_volume_management_for_DN_v1.pdf
>
>
> The current management unit in Hadoop is a node, i.e. if a node fails, it 
> will be kicked out and all the data on the node will be re-replicated.
> As almost all SATA controllers support hotplug, we add a new command line 
> interface to the datanode so that it can list, add or remove a volume online, 
> which means we can change a disk without decommissioning the node. Moreover, 
> if the failed disk is still readable and the node has enough space, it can 
> migrate data from that disk to other disks in the same node.
> A more detailed design document will be attached.
> The original version in our lab is implemented against the 0.20 datanode 
> directly; would it be better to implement it in contrib? Or any other 
> suggestion?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965346#action_12965346
 ] 

Konstantin Boudnik commented on HDFS-1519:
--

The tarball you have downloaded is the release tarball. Once again, why do you 
need to run ant on a released version? You might have a perfectly legit reason 
for this, I just don't know your intentions. I am not sure release tarballs 
are even suitable for running builds on.

If you want to build Hadoop for yourself, take a look at [this 
link|http://wiki.apache.org/hadoop/HowToContribute].

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>Assignee: Konstantin Boudnik
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> BUILD FAILED
> /usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
> dependencies:
>   resolve failed - see output for details
> Total time: 3 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2010-11-30 Thread Jon Wilson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965322#action_12965322
 ] 

Jon Wilson commented on HDFS-1519:
--

I downloaded the tarball from 
http://mirror.nyi.net/apache//hadoop/core/hadoop-0.21.0/hadoop-0.21.0.tar.gz 
(or possibly some other mirror...), so I don't have a .svn directory.
I unpacked the tarball into /usr/products/hadoop/v0_21_0/ANY/ (a sort of /opt 
type space), as user horton, and set the environment variable HADOOP_DIR to 
/usr/products/hadoop/v0_21_0/ANY/.  I am running ant from $HADOOP_DIR/hdfs as 
user horton.
When I run ant from $HADOOP_DIR/mapred, I see the same sort of problems.  When 
I run ant from $HADOOP_DIR/common, I do not encounter any problems.

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>Assignee: Konstantin Boudnik
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> BUILD FAILED
> /usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
> dependencies:
>   resolve failed - see output for details
> Total time: 3 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2010-11-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965285#action_12965285
 ] 

Konstantin Boudnik commented on HDFS-1519:
--

Which workspace are you using (e.g. what does .svn/entries say at the top), 
and what is the revision in your workspace?

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>Assignee: Konstantin Boudnik
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> BUILD FAILED
> /usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
> dependencies:
>   resolve failed - see output for details
> Total time: 3 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2010-11-30 Thread Jon Wilson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Wilson reopened HDFS-1519:
--


$ rm -r ~/.ivy2
HADOOP_DIR/hdfs$ ant clean compile
-- snip --
[ivy:resolve] :: problems summary ::
[ivy:resolve]  WARNINGS
[ivy:resolve]   module not found: org.apache.hadoop#hadoop-common;0.21.0
[ivy:resolve]    apache-snapshot: tried
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
[ivy:resolve]    maven2: tried
[ivy:resolve] 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
[ivy:resolve] 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
[ivy:resolve]   ::
[ivy:resolve]   ::  UNRESOLVED DEPENDENCIES ::
[ivy:resolve]   ::
[ivy:resolve]   :: org.apache.hadoop#hadoop-common;0.21.0: not found
[ivy:resolve]   ::
[ivy:resolve] 
[ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

BUILD FAILED
/usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
dependencies:
resolve failed - see output for details

Total time: 1 minute 45 seconds


I still see the same problem.  It looks like the URL template is somehow not 
correct, as the attempted URL is
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
rather than
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0-SNAPSHOT/hadoop-common-0.21.0-20101120.093342-38.jar

I've attempted to look around in the ivy xml files, but with no ivy experience, 
I couldn't figure out just what was constructing this URL.

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>Assignee: Konstantin Boudnik
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

[jira] Commented: (HDFS-1521) Persist transaction ID on disk between NN restarts

2010-11-30 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965145#action_12965145
 ] 

dhruba borthakur commented on HDFS-1521:


can we instead use the generationStamp that the NN maintains persistently on 
disk?

> Persist transaction ID on disk between NN restarts
> --
>
> Key: HDFS-1521
> URL: https://issues.apache.org/jira/browse/HDFS-1521
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>
> For HDFS-1073 and other future work, we'd like to have the concept of a 
> transaction ID that is persisted on disk with the image/edits. We already 
> have this concept in the NameNode but it resets to 0 on restart. We can also 
> use this txid to replace the _checkpointTime_ field, I believe.
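> As a minimal standalone sketch of the general idea (not the HDFS-1073 design; 
> the file name and layout here are made up for illustration), the txid just 
> needs to be written somewhere durable in the storage directory and read back 
> on startup:
> {code}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.StandardCopyOption;
>
> // Illustrative only: persist a monotonically increasing txid across restarts.
> public class TxIdStore {
>   private final Path file;
>
>   public TxIdStore(Path storageDir) {
>     this.file = storageDir.resolve("seen_txid");   // hypothetical file name
>   }
>
>   /** Returns the last persisted txid, or 0 if nothing has been written yet. */
>   public long load() throws IOException {
>     return Files.exists(file) ? Long.parseLong(Files.readString(file).trim()) : 0L;
>   }
>
>   /** Durably records the given txid (write a temp file, then atomic rename). */
>   public void save(long txid) throws IOException {
>     Path tmp = file.resolveSibling(file.getFileName() + ".tmp");
>     Files.writeString(tmp, Long.toString(txid));
>     Files.move(tmp, file, StandardCopyOption.ATOMIC_MOVE,
>         StandardCopyOption.REPLACE_EXISTING);
>   }
> }
> {code}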

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.