I'm not sure I'm getting your problem yet, but would you mind sharing the
specific error you see? That'd give me more clues.

On Fri, May 17, 2013 at 2:39 PM, Steve Lewis <lordjoe2...@gmail.com> wrote:
> Here is the issue -
> 1 - I am running a Java client on a machine unknown to the cluster. My
> default name on this PC is HYPERCHICKEN\local_admin; the name known to the
> cluster is slewis.
>
> 2 - The listed code
>       String connectString = "hdfs://" + host + ":" + port + "/";
>       Configuration config = new Configuration();
>       config.set("fs.default.name", connectString);
>       FileSystem fs = FileSystem.get(config);
>
> attempts to get a file system. It has not (to the best of my knowledge)
> altered the cluster. Yes, the code after it will try to write files in a
> directory where I may have permission (at least slewis does), but I cannot
> even get the file system.
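>
> As an aside, a minimal sketch of one way to identify the client as "slewis"
> on a simple-auth cluster (untested against this cluster; host and port are
> placeholders) would be to wrap the FileSystem call in a
> UserGroupInformation.doAs():
>
>     import java.security.PrivilegedExceptionAction;
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.security.UserGroupInformation;
>
>     public class RemoteUserHdfsClient {
>         public static void main(String[] args) throws Exception {
>             final String connectString = "hdfs://" + args[0] + ":" + args[1] + "/";
>             // Report "slewis" to the simple-auth cluster instead of the
>             // local HYPERCHICKEN\local_admin account.
>             UserGroupInformation ugi = UserGroupInformation.createRemoteUser("slewis");
>             FileSystem fs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
>                 public FileSystem run() throws Exception {
>                     Configuration config = new Configuration();
>                     config.set("fs.default.name", connectString);
>                     return FileSystem.get(config);
>                 }
>             });
>             System.out.println("Home directory: " + fs.getHomeDirectory());
>         }
>     }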
>
>
>
> This is the relevant section of hdfs-site.xml
> <!-- Permissions configuration -->
> <property>
> <name>dfs.umaskmode</name>
> <value>077</value>
> <description>
> The octal umask used when creating files and directories.
> </description>
> </property>
>
> <property>
> <name>dfs.block.access.token.enable</name>
> <value>false</value>
> <description>
> Whether access tokens are used as capabilities for accessing datanodes.
> </description>
> </property>
>
> <property>
> <name>dfs.namenode.kerberos.principal</name>
> <value>nn/_HOST@${local.realm}</value>
> <description>
> Kerberos principal name for the NameNode
> </description>
> </property>
>
> <property>
> <name>dfs.secondary.namenode.kerberos.principal</name>
> <value>nn/_HOST@${local.realm}</value>
> <description>
> Kerberos principal name for the secondary NameNode.
> </description>
> </property>
>
>
> <property>
> <name>dfs.namenode.kerberos.https.principal</name>
> <value>host/_HOST@${local.realm}</value>
> <description>
> The Kerberos principal for the host that the NameNode runs on.
> </description>
> </property>
>
> <property>
> <name>dfs.secondary.namenode.kerberos.https.principal</name>
> <value>host/_HOST@${local.realm}</value>
> <description>
> The Kerberos principal for the host that the secondary NameNode runs on.
> </description>
> </property>
>
> <property>
> <name>dfs.secondary.https.port</name>
> <value>50490</value>
> <description>The https port where secondary-namenode binds</description>
>
> </property>
>
> <property>
> <name>dfs.datanode.kerberos.principal</name>
> <value>dn/_HOST@${local.realm}</value>
> <description>
> The Kerberos principal that the DataNode runs as. "_HOST" is replaced by
> the real host name.
> </description>
> </property>
>
> <property>
> <name>dfs.web.authentication.kerberos.principal</name>
> <value>HTTP/_HOST@${local.realm}</value>
> <description>
> The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
>
> The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos
> HTTP SPNEGO specification.
> </description>
> </property>
>
> <property>
> <name>dfs.web.authentication.kerberos.keytab</name>
> <value>/etc/security/keytabs/nn.service.keytab</value>
> <description>
> The Kerberos keytab file with the credentials for the
> HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
> </description>
> </property>
>
> <property>
> <name>dfs.namenode.keytab.file</name>
> <value>/etc/security/keytabs/nn.service.keytab</value>
> <description>
> Combined keytab file containing the namenode service and host principals.
> </description>
> </property>
>
> <property>
> <name>dfs.secondary.namenode.keytab.file</name>
> <value>/etc/security/keytabs/nn.service.keytab</value>
> <description>
> Combined keytab file containing the namenode service and host principals.
> </description>
> </property>
>
> <property>
> <name>dfs.datanode.keytab.file</name>
> <value>/etc/security/keytabs/dn.service.keytab</value>
> <description>
> The filename of the keytab file for the DataNode.
> </description>
> </property>
>
> <property>
> <name>dfs.https.port</name>
> <value>50470</value>
> <description>The https port where namenode binds</description>
> </property>
>
> <property>
> <name>dfs.https.address</name>
> <value>hadoop-master-01.ebi.ac.uk:50470</value>
> <description>The https address where namenode binds</description>
> </property>
>
> <property>
> <name>dfs.datanode.data.dir.perm</name>
> <value>700</value>
> <description>The permissions that should be there on dfs.data.dir
> directories. The datanode will not come up if the permissions are
> different on existing dfs.data.dir directories. If the directories
> don't exist, they will be created with this permission.
> </description>
> </property>
>
> <property>
> <name>dfs.cluster.administrators</name>
> <value>hdfs</value>
> <description>ACL controlling who can view the default servlets in
> HDFS.</description>
> </property>
>
> <property>
> <name>dfs.permissions.superusergroup</name>
> <value>hadoop</value>
> <description>The name of the group of super-users.</description>
> </property>
>
> <property>
> <name>dfs.secondary.http.address</name>
> <value>hadoop-login.ebi.ac.uk:50090</value>
> <description>
> The secondary namenode http server address and port.
> If the port is 0 then the server will start on a free port.
> </description>
> </property>
>
> <property>
> <name>dfs.hosts</name>
> <value>/etc/hadoop/dfs.include</value>
> <description>Names a file that contains a list of hosts that are
> permitted to connect to the namenode. The full pathname of the file
> must be specified. If the value is empty, all hosts are
> permitted.</description>
> </property>
>
> <property>
> <name>dfs.hosts.exclude</name>
> <value>/etc/hadoop/dfs.exclude</value>
> <description>Names a file that contains a list of hosts that are
> not permitted to connect to the namenode. The full pathname of the
> file must be specified. If the value is empty, no hosts are
> excluded.
> </description>
> </property>
> <property>
> <name>dfs.webhdfs.enabled</name>
> <value>false</value>
> <description>Enable or disable webhdfs. Defaults to false</description>
> </property>
> <property>
> <name>dfs.support.append</name>
> <value>true</value>
> <description>Enable or disable append. Defaults to false</description>
> </property>
> </configuration>
>
> Here is the relevant section of core-site.xml
> <property>
> <name>hadoop.security.authentication</name>
> <value>simple</value>
> <description>
> Set the authentication for the cluster. Valid values are: simple or
> kerberos.
> </description>
> </property>
>
> <property>
> <name>hadoop.security.authorization</name>
> <value>false</value>
> <description>
> Enable authorization for different protocols.
> </description>
> </property>
>
> <property>
> <name>hadoop.security.groups.cache.secs</name>
> <value>14400</value>
> </property>
>
> <property>
> <name>hadoop.kerberos.kinit.command</name>
> <value>/usr/kerberos/bin/kinit</value>
> </property>
>
> <property>
> <name>hadoop.http.filter.initializers</name>
> <value>org.apache.hadoop.http.lib.StaticUserWebFilter</value>
> </property>
>
> </configuration>
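>
> Since hadoop.security.authentication is "simple" here, the NameNode just
> trusts whatever user name the client-side libraries report. A quick sanity
> check along these lines (a sketch; the /user/slewis path is only an example)
> shows which user that is and what permissions the target directory carries:
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileStatus;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>     import org.apache.hadoop.security.UserGroupInformation;
>
>     public class WhoAmIOnHdfs {
>         public static void main(String[] args) throws Exception {
>             // The user name the Hadoop client will present to the NameNode.
>             System.out.println("Client-side user: "
>                     + UserGroupInformation.getCurrentUser().getShortUserName());
>
>             Configuration config = new Configuration();
>             config.set("fs.default.name", "hdfs://" + args[0] + ":" + args[1] + "/");
>             FileSystem fs = FileSystem.get(config);
>
>             // Owner, group and mode of the directory we want to write into.
>             FileStatus status = fs.getFileStatus(new Path("/user/slewis"));
>             System.out.println(status.getOwner() + ":" + status.getGroup()
>                     + " " + status.getPermission());
>         }
>     }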
>
>
>
> On Mon, May 13, 2013 at 5:26 PM, Harsh J <ha...@cloudera.com> wrote:
>
>> Hi Steve,
>>
>> A normally-written client program would work normally on both
>> permissions and no-permissions clusters. There is no concept of a
>> "password" for users in Apache Hadoop as of yet, unless you're dealing
>> with a specific cluster that has custom-implemented it.
>>
>> Setting a specific user is not the right way to go. In secure and
>> non-secure environments both, the user is automatically inferred by
>> the user actually running the JVM process - its better to simply rely
>> on this.
>>
>> An AccessControlException occurs when a program tries to write or
>> alter a defined path where it lacks permission. To bypass this, the
>> HDFS administrator needs to grant you access to such defined paths,
>> rather than you having to work around that problem.
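>>
>> A minimal sketch of that pattern (assuming the client picks up the cluster
>> configuration from its classpath; the file name is arbitrary) is to write
>> only under the inferred user's home directory and treat an
>> AccessControlException as a signal to request access:
>>
>>     import org.apache.hadoop.conf.Configuration;
>>     import org.apache.hadoop.fs.FSDataOutputStream;
>>     import org.apache.hadoop.fs.FileSystem;
>>     import org.apache.hadoop.fs.Path;
>>     import org.apache.hadoop.security.AccessControlException;
>>
>>     public class WriteWhereAllowed {
>>         public static void main(String[] args) throws Exception {
>>             // Uses whatever fs.default.name the classpath configuration provides.
>>             FileSystem fs = FileSystem.get(new Configuration());
>>             Path target = new Path(fs.getHomeDirectory(), "probe.txt");
>>             try {
>>                 FSDataOutputStream out = fs.create(target);
>>                 out.writeUTF("hello");
>>                 out.close();
>>                 System.out.println("Wrote " + target + " as the inferred user");
>>             } catch (AccessControlException e) {
>>                 // Ask the HDFS administrator to grant access to the path,
>>                 // rather than impersonating another user.
>>                 System.err.println("No permission on " + target + ": " + e.getMessage());
>>             }
>>         }
>>     }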
>>
>> On Mon, May 13, 2013 at 3:25 PM, Steve Lewis <lordjoe2...@gmail.com>
>> wrote:
>> > I have been running Hadoop on a cluster set not to check permissions. I
>> > would run a Java client on my local machine, and it would run as the local
>> > user on the cluster.
>> >
>> > I say
>> >       String connectString = "hdfs://" + host + ":" + port + "/";
>> >       Configuration config = new Configuration();
>> >       config.set("fs.default.name", connectString);
>> >       FileSystem fs = FileSystem.get(config);
>> > The above code works.
>> >
>> > I am trying to port to a cluster where permissions are checked. I have an
>> > account, but I need to set a user and password to avoid access exceptions.
>> >
>> > How do I do this, and if I can only access certain directories, how do I do
>> > that?
>> >
>> > Also, are there some directories my code MUST be able to access outside
>> > those belonging only to my user?
>> >
>> > Steven M. Lewis PhD
>> > 4221 105th Ave NE
>> > Kirkland, WA 98033
>> > 206-384-1340 (cell)
>> > Skype lordjoe_com
>>
>>
>>
>> --
>> Harsh J
>>
>
>
>
> --
> Steven M. Lewis PhD
> 4221 105th Ave NE
> Kirkland, WA 98033
> 206-384-1340 (cell)
> Skype lordjoe_com



-- 
Harsh J
