Thank you, Justin! I would try the same thing you did: adding
hdfs-site.xml to the classpath on all supervisor nodes.
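
A quick way to verify that the file is actually visible from a worker's JVM is to look it up the same way Hadoop's Configuration loads its resources. This is a plain-JDK sketch with no Hadoop dependency; run it with the same classpath your topology gets:

```java
// Check whether hdfs-site.xml is visible on the classpath -- the same
// resource lookup Hadoop's Configuration uses to load its XML files.
public class ClasspathCheck {

    // Returns a human-readable message saying where (or whether) the
    // named resource was found on the classpath.
    static String locate(String name) {
        java.net.URL url = Thread.currentThread()
                .getContextClassLoader().getResource(name);
        return (url != null)
                ? name + " found at " + url
                : name + " NOT on classpath";
    }

    public static void main(String[] args) {
        System.out.println(locate("hdfs-site.xml"));
    }
}
```

If it prints "NOT on classpath" on a supervisor node, the topology's Hadoop client will fall back to the defaults (including hostname-use set to false).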

On Monday, June 22, 2015, Justin Workman <[email protected]> wrote:

> Also, as a side note: I have never tried adding the configuration in
> code as you are. I have generally just made sure the same hdfs-site.xml,
> core-site.xml, or hbase-site.xml cluster files are available on the
> classpath for my topologies on each supervisor node.
>
> Justin
>
> Sent from my iPhone
>
> On Jun 22, 2015, at 8:52 PM, Ajay Chander <[email protected]> wrote:
>
> Thank you Taylor! And Justin!
>
> I have tried doing the same with open-source Apache Hadoop, and it's the
> same problem there too. I specified the filesystem URL like this:
> new HdfsBolt().withFsUrl("hdfs://namenodepath:9000")
>
> This is where my Storm application goes and talks to HDFS. On the
> (open-source Apache Hadoop) HDFS side, on each node I set
> dfs.client.use.datanode.hostname and dfs.datanode.use.datanode.hostname
> to true in hdfs-site.xml, expecting that the communication
> would happen through hostnames.
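>
> For reference, the fragment on the HDFS side would look something like
> the following (a sketch of the two properties described above; note that
> dfs.client.use.datanode.hostname is consulted by the *client*, so it also
> has to reach the Hadoop Configuration inside the Storm workers, not just
> the datanodes):
>
> ```xml
> <!-- hdfs-site.xml: make HDFS client/datanode traffic use hostnames -->
> <property>
>   <name>dfs.client.use.datanode.hostname</name>
>   <value>true</value>
> </property>
> <property>
>   <name>dfs.datanode.use.datanode.hostname</name>
>   <value>true</value>
> </property>
> ```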
>
> On my Storm side, how do I make my topology aware of those configuration
> changes? How do I pass those two properties on my Storm client side to my
> HdfsBolt?
>
> Basically, after building the topology with the builder, I used
> addConfigurations() to add those two properties, namely
> dfs.client.use.datanode.hostname and dfs.datanode.use.datanode.hostname:
>
> public StormTopology build() {
>     LOGGER.info("Building topology");
>
>     Map<String, Object> clientHdfsConfig = new HashMap<String, Object>();
>     clientHdfsConfig.put("dfs.client.use.datanode.hostname", true);
>     clientHdfsConfig.put("dfs.datanode.use.datanode.hostname", true);
>
>     TopologyBuilder builder = new TopologyBuilder();
>     builder.setSpout(this.realtimeConfiguration.getKafkaSpoutId(),
>             this.kafkaSpout);
>     builder.setBolt(MESSAGE_KEY_ASSIGNER_BOLT_ID,
>             new MessageKeyAssignerBolt()).shuffleGrouping(
>             this.realtimeConfiguration.getKafkaSpoutId());
>     builder.setBolt(this.realtimeConfiguration.getHdfsBoltId(),
>             this.hdfsBolt).shuffleGrouping(MESSAGE_KEY_ASSIGNER_BOLT_ID)
>             .addConfigurations(clientHdfsConfig);
>
>     StormTopology stormTopology = builder.createTopology();
>     LOGGER.info("Successfully built topology");
>     return stormTopology;
> }
>
>
> I did this assuming that Storm would then be aware of those two property
> changes on the Hadoop side, but it doesn't resolve the problem.
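>
> One alternative worth trying, hedged since I have not verified it against
> storm-hdfs 0.9.3: newer storm-hdfs releases expose
> HdfsBolt.withConfigKey(String), which makes the bolt copy a map of Hadoop
> properties out of the topology config into its own Hadoop Configuration.
> The key name "hdfs.config" below is an arbitrary choice of mine, not
> anything from the original mail:
>
> ```java
> // Hedged sketch: requires a storm-hdfs version that has
> // HdfsBolt.withConfigKey(String); the key name "hdfs.config" is arbitrary.
> HdfsBolt bolt = new HdfsBolt()
>         .withFsUrl("hdfs://namenodepath:9000")
>         .withConfigKey("hdfs.config");
>
> Map<String, Object> hdfsConf = new HashMap<String, Object>();
> hdfsConf.put("dfs.client.use.datanode.hostname", "true");
> hdfsConf.put("dfs.datanode.use.datanode.hostname", "true");
>
> Config conf = new Config();
> conf.put("hdfs.config", hdfsConf); // the bolt applies these to its Configuration
> // StormSubmitter.submitTopology("topology-name", conf, builder.createTopology());
> ```
>
> This keeps the properties in the topology config (which Storm does ship to
> the workers), instead of in a map the bolt never reads.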
>
>
> Below are the relevant dependencies from my pom.xml:
>
>
> <dependency>
>     <groupId>org.apache.storm</groupId>
>     <artifactId>storm-core</artifactId>
>     <version>${storm.version}</version>
>     <scope>provided</scope>
> </dependency>
>
> <dependency>
>     <groupId>org.apache.storm</groupId>
>     <artifactId>storm-hdfs</artifactId>
>     <version>0.9.3</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.slf4j</groupId>
>             <artifactId>slf4j-log4j12</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
>
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-client</artifactId>
>     <version>${hadoop.version}</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.slf4j</groupId>
>             <artifactId>slf4j-log4j12</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
>
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs</artifactId>
>     <version>2.6.0-cdh5.4.2</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.slf4j</groupId>
>             <artifactId>slf4j-log4j12</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
>
>
> Any pointers are highly appreciated.
>
>
> Thank you,
>
> Ajay
>
>
> On Monday, June 22, 2015, P. Taylor Goetz <[email protected]> wrote:
>
>> There might be others here that are using Storm with CDH, but since
>> Cloudera Manager is closed source/proprietary, you may well be better off
>> asking the question on a Cloudera forum for specific details.
>>
>> That being said, the community here is likely willing to help, but you’ll
>> need to get past the Cloudera-specific pieces and down to the open source
>> parts. If you know the HDFS URL for the cluster, you should be able to
>> connect.
>>
>> What version of Storm are you using? When you say “trying to write to
>> hadoop” do you mean HDFS, or something else? Are you using the storm-hdfs
>> component that ships with Apache Storm?
>>
>> The more details you can provide the better the community will be able to
>> help you.
>>
>> -Taylor
>>
>> On Jun 22, 2015, at 8:06 PM, Ajay Chander <[email protected]> wrote:
>>
>> > Hi Everyone,
>> >
>> > I am trying to write data into Hadoop from my Storm topology. For
>> > this communication to happen through hostnames, I have enabled two
>> > properties, "dfs.client.use.datanode.hostname" = true and
>> > "dfs.datanode.use.datanode.hostname" = true, in my Cloudera Manager.
>> > How do I make Storm aware of those two properties? When my Storm
>> > topology is running, it takes those properties as false by default.
>> > How do I override those two Hadoop-side properties in my HdfsBolt?
>> >
>> > Any help is highly appreciated.
>> >
>> > Thank you,
>> > Ajay
>>
>>
