Hi Yanbo,

I'm using hadoop-1.0.1 on a local machine (not EC2).

conf/masters : localhost
conf/slaves : localhost
conf/core-site.xml :
  <property>
    <name>fs.default.name</name>
    <value>s3://bucketname</value>
  </property>

  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>***********</value>
  </property>

  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>**************</value>
  </property>

/etc/hosts :
192.168.2.24    DRPM4    # Added by NetworkManager
127.0.0.1    localhost.localdomain    localhost


$ bin/hadoop namenode -format : successful

$ bin/start-dfs.sh
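
(As I mentioned in my previous mail, plain S3 access from the shell does
work, e.g.

$ bin/hadoop dfs -ls s3://bucketname/

runs without error, so the credentials themselves look OK; it's only the
namenode startup that fails.)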

Namenode log :

2012-07-25 13:19:32,896 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = DRPM4/192.168.2.24
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2012-07-25 13:19:33,037 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2012-07-25 13:19:33,049 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2012-07-25 13:19:33,050 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2012-07-25 13:19:33,050 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
started
2012-07-25 13:19:33,889 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2012-07-25 13:19:33,896 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
exists!
2012-07-25 13:19:33,904 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
registered.
2012-07-25 13:19:33,906 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
NameNode registered.
2012-07-25 13:19:33,946 INFO org.apache.hadoop.hdfs.util.GSet: VM
type       = 32-bit
2012-07-25 13:19:33,946 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
memory = 17.77875 MB
2012-07-25 13:19:33,946 INFO org.apache.hadoop.hdfs.util.GSet:
capacity      = 2^22 = 4194304 entries
2012-07-25 13:19:33,946 INFO org.apache.hadoop.hdfs.util.GSet:
recommended=4194304, actual=4194304
2012-07-25 13:19:33,986 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=alok
2012-07-25 13:19:33,986 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-07-25 13:19:33,986 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2012-07-25 13:19:33,995 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.block.invalidate.limit=100
2012-07-25 13:19:33,995 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
accessTokenLifetime=0 min(s)
2012-07-25 13:19:34,396 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStateMBean and NameNodeMXBean
2012-07-25 13:19:34,416 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
occuring more than 10 times
2012-07-25 13:19:34,442 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files = 1
2012-07-25 13:19:34,449 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files under construction = 0
2012-07-25 13:19:34,449 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 110 loaded in 0 seconds.
2012-07-25 13:19:34,449 INFO org.apache.hadoop.hdfs.server.common.Storage:
Edits file /home/alok/work/s3s/hadoop/namenode/current/edits of size 4
edits # 0 loaded in 0 seconds.
2012-07-25 13:19:34,451 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 110 saved in 0 seconds.
2012-07-25 13:19:35,161 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 110 saved in 0 seconds.
2012-07-25 13:19:35,873 INFO
org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
entries 0 lookups
2012-07-25 13:19:35,873 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
FSImage in 1900 msecs
2012-07-25 13:19:35,888 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
= 0
2012-07-25 13:19:35,888 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
blocks = 0
2012-07-25 13:19:35,888 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
under-replicated blocks = 0
2012-07-25 13:19:35,888 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
over-replicated blocks = 0
2012-07-25 13:19:35,889 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Safe mode termination scan for invalid, over- and under-replicated blocks
completed in 15 msec
2012-07-25 13:19:35,889 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Leaving safe mode after 1 secs.
2012-07-25 13:19:35,889 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2012-07-25 13:19:35,889 INFO org.apache.hadoop.hdfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2012-07-25 13:19:35,897 INFO org.apache.hadoop.util.HostsFileReader:
Refreshing hosts (include/exclude) list
2012-07-25 13:19:35,905 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
QueueProcessingStatistics: First cycle completed 0 blocks in 7 msec
2012-07-25 13:19:35,905 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
QueueProcessingStatistics: Queue flush completed 0 blocks in 7 msec
processing time, 7 msec clock time, 1 cycles
2012-07-25 13:19:35,905 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
FSNamesystemMetrics registered.
2012-07-25 13:19:35,906 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2012-07-25 13:19:35,906 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
processing time, 1 msec clock time, 1 cycles
2012-07-25 13:19:35,916 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
transactions: 0 Total time for transactions(ms): 0Number of transactions
batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2012-07-25 13:19:35,916 WARN
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
thread received InterruptedException.java.lang.InterruptedException: sleep
interrupted
2012-07-25 13:19:35,917 INFO
org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at
org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:662)
2012-07-25 13:19:35,993 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException:
Problem binding to *bucket*/67.215.65.132:8020 : Cannot assign requested
address
    at org.apache.hadoop.ipc.Server.bind(Server.java:227)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind(Native Method)
    at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    ... 8 more

2012-07-25 13:19:35,994 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at DRPM4/192.168.2.24
************************************************************/
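
From the BindException above, it looks like the namenode is taking the
authority of fs.default.name (the bucket name) as its RPC bind address,
which of course cannot be assigned on this machine. Just as a sketch of
what I revert to when I only want a local HDFS namenode (port 9000, as in
my first mail; the S3 key properties stay only for s3:// access):

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>

With that value the namenode starts normally, but then I'm back on HDFS
rather than S3.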

Thanks,


On Wed, Jul 25, 2012 at 12:05 PM, Yanbo Liang <yanboha...@gmail.com> wrote:

> Could you provide your execution environment, the steps you ran, and the
> detailed log output?
>
>
> 2012/7/24 Alok Kumar <alok...@gmail.com>
>
>> Hi Yanbo,
>>
>> Thank you for your reply.
>>
>> I've now made the changes exactly as described at this link:
>> http://wiki.apache.org/hadoop/AmazonS3
>>
>> But my namenode still fails to come up, throwing the exception below (I
>> tried both locally and inside EC2):
>>
>> ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
>> java.net.BindException: Problem binding to <bucket-name>.
>> s3.amazonaws.com/207.171.163.14:8020 : Cannot assign requested address
>>
>> With fs.default.name = s3://<mybucket> I get an UnknownHostException in
>> the namenode log; with fs.default.name = s3://<mybucket>.s3.amazonaws.com
>> I get the BindException above.
>>
>> Also, I can't see any directory created inside my bucket (even though I
>> am able to run: $ bin/hadoop dfs -ls s3://<bucket>/ ).
>>
>> "bin/hadoop namenode -format " is saying succesfully formatted namenode
>> dir S3://bucket/hadoop/namenode , when it is not even existing there!
>>
>>
>> Any suggestions?
>>
>> Thanks again.
>>
>>
>> On Tue, Jul 24, 2012 at 4:11 PM, Yanbo Liang <yanboha...@gmail.com> wrote:
>>
>>> I think there is some confusion about how Hadoop integrates with S3.
>>> 1) If you set "dfs.data.dir=s3://******", you are using S3 as the
>>> DataNode's local storage while still running HDFS as the underlying
>>> storage layer. As far as I know, that is not supported at present.
>>> 2) The right way to integrate S3 with Hadoop is to replace HDFS with S3
>>> entirely, as you have tried, but I think you are missing some
>>> configuration parameters. "fs.default.name=s3://<mybucket>" is the most
>>> important one when replacing HDFS with S3, but it is not enough on its
>>> own. The detailed configuration can be found here:
>>> http://wiki.apache.org/hadoop/AmazonS3
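>>>
>>> Roughly (just a minimal sketch following that wiki page; the bucket name
>>> and keys below are placeholders), core-site.xml would contain:
>>>
>>>   <property>
>>>     <name>fs.default.name</name>
>>>     <value>s3://YOUR-BUCKET</value>
>>>   </property>
>>>   <property>
>>>     <name>fs.s3.awsAccessKeyId</name>
>>>     <value>YOUR-ACCESS-KEY-ID</value>
>>>   </property>
>>>   <property>
>>>     <name>fs.s3.awsSecretAccessKey</name>
>>>     <value>YOUR-SECRET-KEY</value>
>>>   </property>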
>>>
>>> Yanbo
>>>
>>>
>>> 2012/7/23 Alok Kumar <alok...@gmail.com>
>>>
>>>> Hello Group,
>>>>
>>>> I have a Hadoop setup running locally.
>>>>
>>>> Now I want to use Amazon s3://<mybucket> as my data store, so I set
>>>> "dfs.data.dir=s3://<mybucket>/hadoop/" in my hdfs-site.xml. Is that the
>>>> correct way?
>>>> I'm getting this error:
>>>>
>>>> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory
>>>> in dfs.data.dir: can not create directory: s3://<mybucket>/hadoop
>>>> 2012-07-23 13:15:06,260 ERROR
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in
>>>> dfs.data.dir are invalid.
>>>>
>>>> And when I changed it to "dfs.data.dir=s3://<mybucket>/", I got this
>>>> error:
>>>>  ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
>>>> java.lang.IllegalArgumentException: Wrong FS: s3://<mybucket>/, expected:
>>>> file:///
>>>>     at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:381)
>>>>     at
>>>> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55)
>>>>     at
>>>> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:393)
>>>>     at
>>>> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>>>>     at
>>>> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:146)
>>>>     at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:162)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1574)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>>>>
>>>> Also, when I change fs.default.name to s3://<mybucket>, the namenode
>>>> does not come up; it fails with: ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException
>>>> (In any case I want to run the namenode locally, so I reverted it back
>>>> to hdfs://localhost:9000.)
>>>>
>>>> Your help is highly appreciated!
>>>> Thanks
>>>> --
>>>> Alok Kumar
>>>>
>>>
>>>
>>
>>
>> --
>> Alok Kumar
>>
>>
>>
>


-- 
Alok Kumar
