I don't really know where the "30s" comes from. Could you please verify
whether my configuration is correct?
I set the secondaryFileSystem property with configPaths as below:
<property name="secondaryFileSystem">
                        <bean
class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <property name="fileSystemFactory">
                                <bean
class="org.apache.ignite.hadoop.fs.CachingHadoopFileSystemFactory">
                                    <property name="uri"
value="hdfs://localhost:9000/"/>
                                    <property name="configPaths">
                                        <list>

<value>/usr/local/hadoop/etc/hadoop/core-site.xml</value>
                                        </list>
                                    </property>

                                </bean>
                            </property>
                        </bean>
                    </property>
where the /usr/local/hadoop/etc/hadoop/core-site.xml file contains:

<configuration>
   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
   </property>

   <property>
      <name>hadoop.tmp.dir</name>
      <value>/app/hadoop/tmp</value>
   </property>
</configuration>
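For what it's worth, the "30s" in the error likely comes from Hadoop 3.x defaults such as dfs.client.datanode-restart.timeout, whose default value carries a time-unit suffix ("30s"); an older Hadoop client code path parses it with Long.parseLong and fails. A possible workaround (an assumption on my side, not verified) is to override such properties with plain numeric values in the core-site.xml that Ignite loads:

```xml
<!-- Hypothetical workaround: override the time-suffixed Hadoop 3.x default
     with a plain number so legacy Long.parseLong-based parsing succeeds. -->
<property>
   <name>dfs.client.datanode-restart.timeout</name>
   <value>30</value>
</property>
```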
But when I want to execute the wordcount example in Hadoop, I have to use
the configuration in the ignite_conf folder, which contains two files named
core-site.xml and mapred-site.xml.
Hint: the command for executing the wordcount example in Hadoop:
time hadoop --config
/home/mehdi/ignite-conf/ignite-configs-master/igfs-hadoop-fs-cache/ignite_conf
jar
/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar
wordcount /user/input/ /output

I have attached core-site.xml and mapred-site.xml. Could you please confirm
whether this configuration is correct for using IGFS as a cache for HDFS?



On Tue, Mar 12, 2019 at 6:46 PM Ilya Kasnacheev <[email protected]>
wrote:

> Hello!
>
> Where does this "30s" setting come from? I guess it should be expressed
> in ms, as a number. However, unless this value comes from Ignite in some
> way, we're not related to that.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вт, 12 мар. 2019 г. в 13:14, Mehdi Seydali <[email protected]>:
>
>> I added the library containing com.ctc.wstx.io.InputBootstrapper (the
>> class reported missing in my previous email) to the Ignite lib folder.
>> After adding this library the Ignite node started, but I encountered
>> another error, just like below:
>>
>> [13:03:28]    __________  ________________
>> [13:03:28]   /  _/ ___/ |/ /  _/_  __/ __/
>> [13:03:28]  _/ // (7 7    // /  / / / _/
>> [13:03:28] /___/\___/_/|_/___/ /_/ /___/
>> [13:03:28]
>> [13:03:28] ver. 2.6.0#20180710-sha1:669feacc
>> [13:03:28] 2018 Copyright(C) Apache Software Foundation
>> [13:03:28]
>> [13:03:28] Ignite documentation: http://ignite.apache.org
>> [13:03:28]
>> [13:03:28] Quiet mode.
>> [13:03:28]   ^-- Logging to file
>> '/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-41a0490a.log'
>> [13:03:28]   ^-- Logging by 'Log4JLogger [quiet=true,
>> config=/usr/local/apache-ignite-fabric-2.6.0-bin/config/ignite-log4j.xml]'
>> [13:03:28]   ^-- To see **FULL** console log here add
>> -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
>> [13:03:28]
>> [13:03:28] OS: Linux 4.15.0-46-generic amd64
>> [13:03:28] VM information: Java(TM) SE Runtime Environment
>> 1.8.0_192-ea-b04 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM
>> 25.192-b04
>> [13:03:28] Configured plugins:
>> [13:03:28]   ^-- Ignite Native I/O Plugin [Direct I/O]
>> [13:03:28]   ^-- Copyright(C) Apache Software Foundation
>> [13:03:28]
>> [13:03:28] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
>> [tryStop=false, timeout=0]]
>> [13:03:28] Message queue limit is set to 0 which may lead to potential
>> OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due
>> to message queues growth on sender and receiver sides.
>> [13:03:29] Security status [authentication=off, tls/ssl=off]
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/htrace%20dependency/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-rest-http/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-yarn/ignite-yarn-2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-zookeeper/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
>> [13:03:31] HADOOP_HOME is set to /usr/local/hadoop
>> [13:03:31] Resolved Hadoop classpath locations:
>> /usr/local/hadoop/share/hadoop/common, /usr/local/hadoop/share/hadoop/hdfs,
>> /usr/local/hadoop/share/hadoop/mapreduce
>> [13:03:32] Nodes started on local machine require more than 20% of
>> physical RAM what can lead to significant slowdown due to swapping (please
>> decrease JVM heap size, data region size or checkpoint buffer size)
>> [required=5344MB, available=7953MB]
>> [13:03:34] Performance suggestions for grid  (fix if possible)
>> [13:03:34] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>> [13:03:34]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
>> options)
>> [13:03:34]   ^-- Set max direct memory size if getting 'OOME: Direct
>> buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM
>> options)
>> [13:03:34]   ^-- Disable processing of calls to System.gc() (add
>> '-XX:+DisableExplicitGC' to JVM options)
>> [13:03:34] Refer to this page for more performance suggestions:
>> https://apacheignite.readme.io/docs/jvm-and-system-tuning
>> [13:03:34]
>> [13:03:34] To start Console Management & Monitoring run
>> ignitevisorcmd.{sh|bat}
>> [13:03:34]
>> [13:03:34] Ignite node started OK (id=41a0490a)
>> [13:03:34] Topology snapshot [ver=4, servers=2, clients=0, CPUs=8,
>> offheap=3.1GB, heap=2.0GB]
>> [13:03:34]   ^-- Node [id=41A0490A-8A7B-4977-A48B-B5D98D49CF1B,
>> clusterState=ACTIVE]
>> [13:03:34] Data Regions Configured:
>> [13:03:34]   ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB,
>> persistenceEnabled=false]
>> [13:03:42] New version is available at ignite.apache.org: 2.7.0
>> [13:03:56] Topology snapshot [ver=5, servers=1, clients=0, CPUs=8,
>> offheap=1.6GB, heap=1.0GB]
>> [13:03:56]   ^-- Node [id=41A0490A-8A7B-4977-A48B-B5D98D49CF1B,
>> clusterState=ACTIVE]
>> [13:03:56] Data Regions Configured:
>> [13:03:56]   ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB,
>> persistenceEnabled=false]
>> [2019-03-12 13:04:08,878][ERROR][igfs-igfs-ipc-#64][IgfsImpl] File info
>> operation in DUAL mode failed [path=/output]
>> class org.apache.ignite.IgniteException: For input string: "30s"
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:43)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.fileSystemForUser(HadoopIgfsSecondaryFileSystemDelegateImpl.java:517)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.info
>> (HadoopIgfsSecondaryFileSystemDelegateImpl.java:296)
>>     at
>> org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.info
>> (IgniteHadoopIgfsSecondaryFileSystem.java:240)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsImpl.resolveFileInfo(IgfsImpl.java:1600)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsImpl.access$800(IgfsImpl.java:110)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:524)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:517)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsImpl.safeOp(IgfsImpl.java:1756)
>>     at org.apache.ignite.internal.processors.igfs.IgfsImpl.info
>> (IgfsImpl.java:517)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:341)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:332)
>>     at
>> org.apache.ignite.igfs.IgfsUserContext.doAs(IgfsUserContext.java:54)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.processPathControlRequest(IgfsIpcHandler.java:332)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.execute(IgfsIpcHandler.java:241)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.access$000(IgfsIpcHandler.java:57)
>>     at
>> org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$1.run(IgfsIpcHandler.java:167)
>>     at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>     at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>     at java.lang.Thread.run(Thread.java:748)
>> Caused by: class org.apache.ignite.IgniteCheckedException: For input
>> string: "30s"
>>     at
>> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7307)
>>     at
>> org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259)
>>     at
>> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:171)
>>     at
>> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.getValue(HadoopLazyConcurrentMap.java:191)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:93)
>>     ... 22 more
>> Caused by: java.lang.NumberFormatException: For input string: "30s"
>>     at
>> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>>     at java.lang.Long.parseLong(Long.java:589)
>>     at java.lang.Long.parseLong(Long.java:631)
>>     at
>> org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1538)
>>     at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:430)
>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:540)
>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
>>     at
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
>>     at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:217)
>>     at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:214)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:422)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:214)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.create(HadoopBasicFileSystemFactoryDelegate.java:117)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.getWithMappedName(HadoopBasicFileSystemFactoryDelegate.java:95)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.access$001(HadoopCachingFileSystemFactoryDelegate.java:32)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:37)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:35)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.init(HadoopLazyConcurrentMap.java:173)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.access$100(HadoopLazyConcurrentMap.java:154)
>>     at
>> org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:82)
>>     ... 22 more
>> ============================================
>> I have three example configuration files, just as below. I executed the
>> Hadoop job with the command below, using the --config switch:
>>
>> time hadoop --config
>> /home/mehdi/ignite-conf/ignite-configs-master/igfs-hadoop-fs-cache/ignite_conf
>> jar
>> /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar
>> wordcount /user/input/ /output
>>
>> I have created the /user/input directory in HDFS and I want to put the
>> results in the /output folder.
>>
>>
>>
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>ignite</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>127.0.0.1:11211</value>
  </property>
</configuration>
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>igfs://igfs@</value>
  </property>

  <property>
    <!-- FS driver class for the 'igfs://' URIs. -->
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>

  <property>
    <!-- FS driver class for the 'igfs://' URIs in Hadoop2.x -->
    <name>fs.AbstractFileSystem.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
  </property>



</configuration>
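One small note on the file above: fs.default.name is the deprecated key; in Hadoop 2.x/3.x the preferred name is fs.defaultFS (Hadoop still honors the old key, as far as I know). The equivalent property would be:

```xml
<!-- Preferred key name in Hadoop 2.x/3.x; equivalent to fs.default.name. -->
<property>
  <name>fs.defaultFS</name>
  <value>igfs://igfs@</value>
</property>
```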
