Usually I pass -Dtest.build.data.basedirectory=D:/testDir as a VM argument 
when running the tests on Windows to avoid this problem.
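The same thing can be done from test setup code instead of a VM argument. A
minimal sketch, assuming the HBase test utilities read this property as in the
tip above (D:/testDir is just the example value; any short directory works):

```scala
// Set the base test directory to a short path *before* any mini-cluster
// code runs, so the generated dfscluster_* paths stay well under
// Windows' historical 260-character MAX_PATH limit.
val shortBase = "D:/testDir"
sys.props("test.build.data.basedirectory") = shortBase

println(s"base test dir: ${sys.props("test.build.data.basedirectory")}")
```

Setting it before the cluster starts matters: the utility resolves the base
directory once, when the mini-cluster directories are created.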

Regards,
Ashish

-----Original Message-----
From: Nkechi Achara [mailto:[email protected]] 
Sent: 15 March 2016 03:52
To: Ted Yu
Cc: [email protected]
Subject: Re: Example of spinning up a Hbase mock style test for integration 
testing in scala

Hi Ted,

I believe it is an issue with long file name lengths on Windows: when I 
attempt to get to the directory it is trying to replicate the block to, I 
receive the ever-annoying error:

The filename or extension is too long.

Does anyone know how to fix this?


On 14 March 2016 at 18:42, Ted Yu <[email protected]> wrote:

> You can inspect the output from 'mvn dependency:tree' to see if any 
> incompatible hadoop dependency exists.
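One way to narrow that output, assuming the project builds with Maven (the
-Dincludes filter is a standard maven-dependency-plugin option):

```shell
# Show only hadoop artifacts in the resolved dependency tree; look for
# hadoop-core, or for two different hadoop versions side by side.
mvn dependency:tree -Dincludes='org.apache.hadoop:*'
```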
>
> FYI
>
> On Mon, Mar 14, 2016 at 10:26 AM, Parsian, Mahmoud 
> <[email protected]>
> wrote:
>
>> Hi Keech,
>>
>> Please post your sample test, its run log, and your versions of HBase, 
>> Hadoop, … And make sure that hadoop-core-1.2.1.jar is not on your 
>> classpath (it causes many errors!).
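If dependency:tree shows hadoop-core coming in transitively, one way to keep it
off the classpath is a Maven exclusion. This is only a sketch: some.group /
some-artifact / x.y.z below are placeholders for whichever dependency actually
drags it in (org.apache.hadoop:hadoop-core is the jar's real coordinate):

```xml
<dependency>
  <!-- placeholder: the dependency that pulls in hadoop-core transitively -->
  <groupId>some.group</groupId>
  <artifactId>some-artifact</artifactId>
  <version>x.y.z</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```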
>>
>> Best,
>> Mahmoud
>> From: Nkechi Achara <[email protected]<mailto:
>> [email protected]>>
>> Date: Monday, March 14, 2016 at 10:14 AM
>> To: "[email protected]<mailto:[email protected]>" < 
>> [email protected]<mailto:[email protected]>>, Mahmoud Parsian 
>> < [email protected]<mailto:[email protected]>>
>>
>> Subject: Re: Example of spinning up a Hbase mock style test for 
>> integration testing in scala
>>
>>
>> Thanks Mahmoud,
>>
>> This is what I am using, but as the previous reply stated, I am 
>> receiving an exception when starting the cluster.
>> Thinking about it, it looks to be more of a build problem with my hbase 
>> mini cluster, as I am receiving the following error:
>>
>> 16/03/14 12:29:00 WARN datanode.DataNode: IOException in BlockReceiver.run():
>>
>> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, blk_1073741825_1001, RBW
>>   getNumBytes()     = 7
>>   getBytesOnDisk()  = 7
>>   getVisibleLength()= 7
>>   getVolume()       = C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current
>>   getBlockFile()    = C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\rbw\blk_1073741825
>>   bytesAcked=7
>>   bytesOnDisk=7 from C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\rbw\blk_1073741825_1001.meta to C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\finalized\subdir0\subdir0\blk_1073741825_1001.meta
>>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:615)
>>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addBlock(BlockPoolSlice.java:250)
>>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlock(FsVolumeImpl.java:229)
>>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1119)
>>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1100)
>>   at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.finalizeBlock(BlockReceiver.java:1293)
>>   at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1233)
>>   at java.lang.Thread.run(Thread.java:745)
>> Caused by: 3: The system cannot find the path specified.
>>   at org.apache.hadoop.io.nativeio.NativeIO.renameTo0(Native Method)
>>   at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:830)
>>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:613)
>>   ... 7 more
>>
>> 16/03/14 12:29:00 INFO datanode.DataNode: Starting CheckDiskError Thread
>>
>> Thanks,
>>
>> Keech
>>
>> On 14 Mar 2016 6:10 pm, "Parsian, Mahmoud" <[email protected]<mailto:
>> [email protected]>> wrote:
>> Hi Keech,
>>
>> You may use the org.apache.hadoop.hbase.HBaseCommonTestingUtility 
>> class to start ZooKeeper and an HBase cluster, and then run your unit 
>> and integration tests against it.
>> I am using this with JUnit and it works very well, but I am using 
>> Java only.
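For the Scala side of the original question, a minimal sketch of the pattern
Mahmoud describes, using HBaseTestingUtility (the subclass that adds the
mini-cluster start/stop methods, as of the HBase 1.x testing API). It will not
compile standalone: it assumes hbase-testing-util and a matching hadoop-client
are on the test classpath, and the table/column-family names are made up:

```scala
import org.apache.hadoop.hbase.{HBaseTestingUtility, TableName}
import org.apache.hadoop.hbase.util.Bytes

object MiniClusterSketch {
  def main(args: Array[String]): Unit = {
    val util = new HBaseTestingUtility()
    // Starts an in-process ZooKeeper, mini-DFS, and HBase master/regionserver.
    util.startMiniCluster()
    try {
      // "test" / "cf" are placeholder names for whatever the code under test uses.
      val table = util.createTable(TableName.valueOf("test"), Bytes.toBytes("cf"))
      // ... exercise the code under test against `table` here ...
    } finally {
      util.shutdownMiniCluster() // always tear the cluster down, even on failure
    }
  }
}
```

On Windows, pairing this with a short test.build.data.basedirectory (as Ashish
suggests) helps keep the generated dfscluster_* paths under the path-length
limit that triggers the error in the log.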
>>
>> Best regards,
>> Mahmoud Parsian
>>
>>
>> On 3/13/16, 11:52 PM, "Nkechi Achara" <[email protected]<mailto:
>> [email protected]>> wrote:
>>
>> >Hi,
>> >
>> >I am trying to find an example of how to spin up an HBase server in a 
>> >mock or integration style, so I can test my code locally in my IDE.
>> >I have tried fake-hbase and the HBase testing utility, and I receive 
>> >errors, especially when trying to start the cluster.
>> >Does anyone have any examples in Scala for doing this?
>> >
>> >Thanks,
>> >
>> >Keech
>>
>>
>
