Petar,

Client nodes do not hold any data. In your scenario the opposite will
happen: all file system data will be stored remotely on the data nodes,
while the file system operations will be performed on the current node,
even if it is a client.
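
To make that concrete, here is a minimal sketch of what such a client could
look like (this assumes a running cluster with an IGFS named "igfs"; the
path and contents are just placeholders):

import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteFileSystem;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.igfs.IgfsPath;

public class IgfsClientSketch {
    public static void main(String[] args) throws Exception {
        // Start this node in client mode: it joins the topology but does
        // not store any IGFS data blocks itself.
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            // The create() call runs on this client node, but the written
            // blocks end up on the remote data nodes.
            IgniteFileSystem fs = ignite.fileSystem("igfs");
            try (OutputStream out = fs.create(new IgfsPath("/example.txt"), true)) {
                out.write("hello".getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}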

However, you can route file system requests to a concrete node using the
Ignite implementation of the Hadoop FileSystem,
org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem. You can access it
like any other Hadoop file system:

Configuration conf = new Configuration();
FileSystem igfs = FileSystem.get(
    URI.create("igfs://igfs@*data_node_address*:*data_node_endpoint_port*/"), conf);

This way all file system requests will be routed to the concrete node you
specified in the URI.
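
A slightly more complete sketch of the same idea (host, port and path are
placeholders; 10500 is assumed here as the default IGFS endpoint port, and
fs.igfs.impl is expected to be mapped to IgniteHadoopFileSystem in your
Hadoop configuration):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class IgfsExplicitNodeRead {
    public static void main(String[] args) throws Exception {
        // All requests made through this FileSystem instance go to the
        // node whose endpoint is specified in the URI (host and port
        // below are placeholders).
        Configuration conf = new Configuration();
        FileSystem igfs = FileSystem.get(
            URI.create("igfs://igfs@data-node-host:10500/"), conf);

        try (FSDataInputStream in = igfs.open(new Path("/some/file"))) {
            byte[] buf = new byte[4096];
            int read = in.read(buf); // served by the node addressed in the URI
            System.out.println("Read " + read + " bytes");
        }
    }
}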

Note, however, that this is not the main use case for IGFS, and Ignite will
first attempt to find a local node with IGFS. You can disable this behavior
using special configuration parameters (see the
org.apache.ignite.internal.processors.hadoop.igfs.HadoopIgfsUtils class).
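
If I recall the parameter names correctly, they look roughly like the
snippet below; please verify them against the constants declared in
HadoopIgfsUtils before relying on them ("<authority>" stands for the
authority part of your igfs:// URI):

import org.apache.hadoop.conf.Configuration;

// Sketch only: property names recalled from memory, verify them against
// HadoopIgfsUtils; replace <authority> with the authority of your URI.
Configuration conf = new Configuration();
conf.setBoolean("fs.igfs.<authority>.endpoint.no_embed", true);       // do not use an embedded (in-process) node
conf.setBoolean("fs.igfs.<authority>.endpoint.no_local_shmem", true); // do not use the local shared-memory endpoint
conf.setBoolean("fs.igfs.<authority>.endpoint.no_local_tcp", true);   // do not use the local TCP endpoint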

Please let me know if you need any further assistance.

Vladimir.


On Mon, Feb 8, 2016 at 6:39 PM, pshomov <pe...@activitystream.com> wrote:

> Hi Vladimir,
>
> Regarding option 1
>
> >> 1) Have only one IGFS node in the cluster. That is, single machine with
> >> file system -> single node.
>
> If I have client nodes (running on my laptop or wherever) that connect to
> the single server node, they will cache the data but not do the work
> (operations on the secondaryFilesystem), yes?
>
> Petar
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/IGFS-dedicated-writer-node-multiple-reader-nodes-tp2882p2888.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
