[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925446#comment-13925446 ]
Ted Yu commented on HBASE-8304:
-------------------------------
Thanks for the catch, Andy.
I was looking at https://builds.apache.org/job/hbase-0.98/214/console :
{code}
Started by an SCM change
Building remotely on ubuntu2 in workspace /home/jenkins/jenkins-slave/workspace/HBase-0.98
Cleaning local Directory 0.98
java.io.IOException: remote file operation failed: /home/jenkins/jenkins-slave/workspace/HBase-0.98 at hudson.remoting.Channel@11dab09b:ubuntu2
	at hudson.FilePath.act(FilePath.java:910)
	at hudson.FilePath.act(FilePath.java:887)
	at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:848)
	at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:786)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1411)
	at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:651)
	at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:560)
	at hudson.model.Run.execute(Run.java:1670)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:231)
Caused by: java.nio.file.DirectoryNotEmptyException: /home/jenkins/jenkins-slave/workspace/HBase-0.98/0.98/hbase-server/src/main/jamon/org/.svn/tmp
	at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242)
{code}
I thought the above failure was due to the build environment.
I will be more careful next time.
> Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
> ---------------------------------------------------------------------------------------------------
>
> Key: HBASE-8304
> URL: https://issues.apache.org/jira/browse/HBASE-8304
> Project: HBase
> Issue Type: Bug
> Components: HFile, regionserver
> Affects Versions: 0.94.5
> Reporter: Raymond Liu
> Assignee: haosdent
> Labels: bulkloader
> Fix For: 0.98.1, 0.99.0
>
> Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch
>
>
> When fs.default.name or fs.defaultFS in the hadoop core-site.xml is configured as
> hdfs://ip and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir, where
> port is the hdfs namenode's default port, the bulkload operation will not remove
> the files from the bulk output dir. Store::bulkLoadHfile treats hdfs://ip and
> hdfs://ip:port as different filesystems and goes with the copy approach instead
> of rename.
> The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS
> according to hbase.rootdir when the regionserver starts; thus the dest fs uri
> used by the hregion will not match the src fs uri passed from the client.
> Any suggestion on the best approach to fix this issue?
> I kind of think that we could check for the default port if the src uri comes
> without port info.
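>
> A minimal sketch of that default-port idea, assuming an hdfs namenode default port of 8020 and a hypothetical helper class (not the actual HBASE-8304 patch): compare the scheme, host, and port of the two filesystem URIs, filling in the default port when one side omits it.
> {code}
> import java.net.URI;
>
> /**
>  * Hypothetical helper illustrating the "check for the default port" idea;
>  * not the actual HBASE-8304 patch.
>  */
> public class FsUriCompare {
>   // Assumed hdfs namenode default port, used when a URI omits the port.
>   private static final int DEFAULT_HDFS_PORT = 8020;
>
>   /**
>    * Returns true if the two URIs point at the same filesystem,
>    * treating a missing port as the default port.
>    */
>   public static boolean isSameFs(URI src, URI dest) {
>     if (!equalsIgnoreCase(src.getScheme(), dest.getScheme())) {
>       return false;
>     }
>     if (!equalsIgnoreCase(src.getHost(), dest.getHost())) {
>       return false;
>     }
>     int srcPort = src.getPort() == -1 ? DEFAULT_HDFS_PORT : src.getPort();
>     int destPort = dest.getPort() == -1 ? DEFAULT_HDFS_PORT : dest.getPort();
>     return srcPort == destPort;
>   }
>
>   private static boolean equalsIgnoreCase(String a, String b) {
>     return a == null ? b == null : a.equalsIgnoreCase(b);
>   }
>
>   public static void main(String[] args) {
>     URI src = URI.create("hdfs://10.0.0.1/bulk/output");        // no port
>     URI dest = URI.create("hdfs://10.0.0.1:8020/hbaserootdir"); // default port
>     // With the default-port check these compare as the same filesystem,
>     // so bulkload could rename instead of copying.
>     System.out.println(isSameFs(src, dest)); // prints true
>   }
> }
> {code}
> With such a check, Store::bulkLoadHfile could stay on the rename path when the two uris differ only in an omitted default port.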
--
This message was sent by Atlassian JIRA
(v6.2#6252)