Hi, Petar,
An alternative way to configure IGFS over HDFS is to leave the original
configs untouched, exactly as they were in the original cluster, and instead
use a separate set of configs for IGFS together with a special client script
that refers to them, like the following:
---------------------------------
export IGNITE_HOME=....

VERSION=1.5.0-SNAPSHOT

# Build HADOOP_CLASSPATH from the Ignite client libraries.
for X in "${IGNITE_HOME}/libs/ignite-core-${VERSION}.jar" \
         "${IGNITE_HOME}/libs/ignite-hadoop/ignite-hadoop-${VERSION}.jar" \
         "${IGNITE_HOME}/libs/ignite-shmem-1.0.0.jar"
do
  if [ ! -f "${X}" ]; then
     echo "Error: Library [${X}] not found."
     exit 1
  fi

  HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${X}"
done

export HADOOP_CLASSPATH

hadoop --config ...../igfs-conf-dir/ "${@}"
---------------------------------

, where "igfs-conf-dir" stands for a folder where ignite IGFS configs
reside. These configs should be similar to examples provided in Ignite
distributions in $IGNITE_HOME/config/hadoop/ .
This way allows to have different fs.defaultFs and other settings for
underlying HDFS and IGFS, and does nor require to specify file system
schema in all operations explicitly.
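
For reference, the core-site.xml inside "igfs-conf-dir" might look roughly like
the following (a minimal sketch modeled on the examples under
$IGNITE_HOME/config/hadoop/; the endpoint name "igfs" and the host are
assumptions and must match your actual IGFS configuration):
---------------------------------
<configuration>
  <!-- Make IGFS the default file system for clients using this config dir. -->
  <property>
    <name>fs.defaultFS</name>
    <value>igfs://igfs@localhost/</value>
  </property>

  <!-- File system implementation for the igfs:// scheme (Hadoop 1.x API). -->
  <property>
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>

  <!-- The same for the Hadoop 2.x AbstractFileSystem API. -->
  <property>
    <name>fs.AbstractFileSystem.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
  </property>
</configuration>
---------------------------------
With such a script saved as, say, hadoop-igfs.sh (the name is just an example),
the usual commands can be run against IGFS, e.g. "./hadoop-igfs.sh fs -ls /",
while the stock hadoop command keeps talking to the original HDFS.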


On Fri, Feb 5, 2016 at 10:29 AM, Vladimir Ozerov <voze...@gridgain.com>
wrote:

> Hi Petar,
>
> Yes, if you overwrite the fs.defaultFS property, you will not be able to start
> HDFS anymore (please see corresponding note in docs:
> https://apacheignite-fs.readme.io/docs/installing-on-apache-hadoop).
>
> So if you want to use both IGFS and HDFS simultaneously, you should only
> specify IGFS file system class in core-site.xml:
>
> <property>
>     <name>fs.igfs.impl</name>
>     <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
> </property>
>
>
> And as fs.defaultFS still points to HDFS, you will have to access IGFS
> using a fully-qualified path, e.g. "igfs:///path/to/my/file" instead of
> "/path/to/my/file".
>
> Please let me know if you have any further questions.
>
> Vladimir.
>
>
> On Fri, Feb 5, 2016 at 1:57 AM, pshomov <pe...@activitystream.com> wrote:
>
>> Hi Val,
>>
>> Thank you for your quick response!
>>
>> >> I think you should download Hadoop Accelerator edition [1] and refer to
>> >> [2] for instructions on how to install it. It will plug into your
>> >> existing Hadoop installation and switch it to IGFS and Ignite's
>> >> map-reduce engine.
>>
>> It is exactly what I tried to do. Ran sbin/start-dfs.sh and got this
>>
>> Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI
>> for NameNode address (check fs.defaultFS): igfs://igfs@localhost is not
>> of
>> scheme 'hdfs'.
>>
>> Found this -
>>
>> http://apache-ignite-users.70518.x6.nabble.com/IllegalArgumentException-Invalid-URI-for-NameNode-address-check-fs-defaultFS-igfs-igfs-localhost-is--td1978.html
>> where I see again that the HDFS service stops being available (which is all
>> we care about).
>>
>> >> You can also configure HDFS as a secondary file system for IGFS [3], so
>> >> you don't need to preload the data to IGFS - it will act as a caching
>> >> layer between your application and your data. As a result, Drill
>> >> application should be able to work with in-memory data without any code
>> >> changes.
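>> >>
>> >> For reference, the relevant part of the Ignite (Spring) node configuration
>> >> for such a setup would look roughly like this (a sketch only; the IGFS name
>> >> "igfs" and the NameNode URI hdfs://localhost:9000 are assumptions):
>> >>
>> >> <property name="fileSystemConfiguration">
>> >>     <list>
>> >>         <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
>> >>             <property name="name" value="igfs"/>
>> >>             <!-- HDFS acts as the persistent layer behind the in-memory IGFS. -->
>> >>             <property name="secondaryFileSystem">
>> >>                 <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
>> >>                     <constructor-arg value="hdfs://localhost:9000/"/>
>> >>                 </bean>
>> >>             </property>
>> >>         </bean>
>> >>     </list>
>> >> </property>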
>>
>> Are you implying that I might leave Hadoop as is and rather integrate IGFS
>> in Apache Drill instead?
>>
>>
>> Thank you once again for taking the time to share with me! Much
>> appreciated!
>>
>> Best regards,
>>
>> Petar
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Apache-Drill-querying-IGFS-accelerated-H-DFS-tp2840p2842.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
