[
https://issues.apache.org/jira/browse/HADOOP-18291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743334#comment-17743334
]
ASF GitHub Bot commented on HADOOP-18291:
-----------------------------------------
virajjasani commented on PR #5843:
URL: https://github.com/apache/hadoop/pull/5843#issuecomment-1636603969
Not sure what is going wrong with the Jenkins env:
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5843/1/console
```
Error when executing cleanup post condition:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
    at org.jenkinsci.plugins.workflow.steps.StepDescriptor.checkContextAvailability(StepDescriptor.java:265)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:299)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:196)
    at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:124)
```
```
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to hadoop2
    at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1784)
    at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:356)
    at hudson.remoting.Channel.call(Channel.java:1000)
    at hudson.FilePath.act(FilePath.java:1194)
    at hudson.FilePath.act(FilePath.java:1183)
    at hudson.FilePath.mkdirs(FilePath.java:1374)
    at hudson.plugins.git.GitSCM.createClient(GitSCM.java:844)
    at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1296)
    at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:129)
    at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:97)
    at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:84)
java.nio.file.FileSystemException: /home/jenkins/jenkins-home/workspace/hadoop-multibranch: Read-only file system
    at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
    at java.base/sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:389)
    at java.base/java.nio.file.Files.createDirectory(Files.java:690)
    at java.base/java.nio.file.Files.createAndCheckIsDirectory(Files.java:797)
```
> S3A prefetch - Implement LRU cache for SingleFilePerBlockCache
> --------------------------------------------------------------
>
> Key: HADOOP-18291
> URL: https://issues.apache.org/jira/browse/HADOOP-18291
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.4.0
> Reporter: Ahmar Suhail
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.9
>
>
> Currently there is no limit on the size of the disk cache. This means we could
> end up with a large number of block files on disk, especially for access
> patterns that are very random and do not always read the block fully.
>
> e.g.:
> in.seek(5);
> in.read();
> in.seek(blockSize + 10); // block 0 gets saved to disk as it's not fully read
> in.read();
> in.seek(2 * blockSize + 10); // block 1 gets saved to disk
> ... and so on
>
> The in-memory cache is bounded and by default has a limit of 72MB (9
> blocks). When a block is fully read and a seek is issued, the block is released
> [here|https://github.com/apache/hadoop/blob/feature-HADOOP-18028-s3a-prefetch/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3CachingInputStream.java#L109].
> We could also delete the on-disk file for the block at that point, if it exists.
>
> We could also add an upper limit on disk space and, when that limit is reached,
> delete the file storing the block furthest from the current block (similar to
> the in-memory cache); see the eviction sketch below.
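For illustration, here is a minimal sketch of the disk-side eviction the issue title asks for: a size-bounded, LRU-ordered map from block number to block file that deletes the least-recently-used file once the limit is exceeded. The class name `DiskBlockLruCache`, its methods, and the block limit are hypothetical and are not the actual `SingleFilePerBlockCache` API; the 9-block default is taken from the description above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical sketch only: bounds the number of cached block files on disk
 * and deletes the least-recently-used block file once the limit is exceeded.
 * This is not the SingleFilePerBlockCache implementation, just an outline of
 * the LRU idea.
 */
public class DiskBlockLruCache {

  private final int maxBlocks; // assumed limit, e.g. 9 blocks

  // access-order LinkedHashMap gives LRU ordering of block number -> file path
  private final Map<Integer, Path> blockFiles;

  public DiskBlockLruCache(int maxBlocks) {
    this.maxBlocks = maxBlocks;
    this.blockFiles = new LinkedHashMap<Integer, Path>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<Integer, Path> eldest) {
        if (size() > DiskBlockLruCache.this.maxBlocks) {
          deleteQuietly(eldest.getValue()); // evict the LRU block file
          return true;
        }
        return false;
      }
    };
  }

  /** Record that a block was written to (or re-read from) the disk cache. */
  public synchronized void put(int blockNumber, Path file) {
    blockFiles.put(blockNumber, file);
  }

  /** Remove and delete a block's file when the stream releases the block. */
  public synchronized void release(int blockNumber) {
    Path file = blockFiles.remove(blockNumber);
    if (file != null) {
      deleteQuietly(file);
    }
  }

  private static void deleteQuietly(Path file) {
    try {
      Files.deleteIfExists(file);
    } catch (IOException ignored) {
      // best-effort cleanup of the cached block file
    }
  }
}
```

A `LinkedHashMap` in access order keeps the bookkeeping trivial; a real implementation would also have to coordinate with readers that still hold a reference to an evicted block file before deleting it.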