cacheCapacity -= node.getCacheCapacity();
cacheUsed -= node.getCacheUsed();
short-circuit access. " +
Shall I create the jira directly?
On Thu, Oct 26, 2017 at 12:34 PM, Xie Gang <xiegang...@gmail.com> wrote:
> We use HDFS 2.4 & 2.6, and recently hit an issue where the DFSClient domain
> socket is disabled when the datanode throws a block-invalid exception.
> The block is
Got the root cause: it's a dup of HDFS-8072.
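For context on the failure mode in this thread: when a short-circuit attempt fails, the HDFS client marks the DataNode's domain-socket path as disabled for a cooldown window, so one bad block can suppress short-circuit reads for all later requests (the behavior HDFS-8072 addresses). Below is a minimal sketch of that kind of disabled-path cache; all class and method names here are illustrative, not Hadoop's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a cache that disables a domain-socket path for a
// fixed cooldown after any short-circuit failure, as the client does.
class DisabledPathCache {
    private final long expiryMs;
    private final Map<String, Long> disabledUntil = new HashMap<>();

    DisabledPathCache(long expiryMs) { this.expiryMs = expiryMs; }

    // Called when a short-circuit attempt throws (e.g. block invalid).
    void disable(String path, long nowMs) {
        disabledUntil.put(path, nowMs + expiryMs);
    }

    // Whether the client may attempt short-circuit access via this path.
    boolean isUsable(String path, long nowMs) {
        Long until = disabledUntil.get(path);
        return until == null || nowMs >= until;
    }

    public static void main(String[] args) {
        DisabledPathCache cache = new DisabledPathCache(600_000); // 10 min cooldown
        String sock = "/var/lib/hadoop-hdfs/dn_socket";           // example path
        cache.disable(sock, 0);                                   // one bad block...
        System.out.println(cache.isUsable(sock, 1_000));   // still in cooldown
        System.out.println(cache.isUsable(sock, 700_000)); // cooldown expired
    }
}
```

The effect reported above follows directly: every read through that path falls back to remote reads until the cooldown expires, even though only one block was bad.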
On Wed, Jan 10, 2018 at 2:20 PM, Xie Gang <xiegang...@gmail.com> wrote:
> Recently, we hit an issue that, there is a difference between the
> freeSpace of the datanode volume
YARN shares the same server as the DN and has some file cache. Could it
The direct cause is that the freeSpace reported by the DN is quite different
from the available space reported by df. After tracking down the code, the
DN's freeSpace comes from dirFile.getUsableSpace(). Could it have some p
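The freeSpace figure discussed above comes from java.io.File.getUsableSpace(). A small self-contained sketch of the two JDK calls one can compare against df when chasing such a discrepancy (the directory is just an example; a co-located cache writing between samples is one way the numbers drift apart):

```java
import java.io.File;
import java.nio.file.FileStore;
import java.nio.file.Files;

// Compare the two JDK views of free space on a volume.
public class SpaceCheck {
    public static void main(String[] args) throws Exception {
        File dir = new File(".");
        // What the DN samples: bytes usable by this JVM on the volume.
        long usable = dir.getUsableSpace();
        // NIO view of the same volume; usually close to df's "Avail" column.
        FileStore store = Files.getFileStore(dir.toPath());
        long nioUsable = store.getUsableSpace();
        // Both are point-in-time samples and can legitimately disagree
        // with each other and with df if other processes are writing.
        System.out.println(usable >= 0);
        System.out.println(nioUsable >= 0);
    }
}
```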
> In which version of Hadoop are you seeing this?
> On 29 Jan 2018 3:26 pm, "Xie Gang" <xiegang...@gmail.com> wrote:
> We recently hit an issue where almost all the disks of the datanode got full
> even we
logNodeIsNotChosen(storage, "the node does not have enough "
+ storage.getStorageType() + " space"
+ " (required=" + requiredSize
+ ", scheduled=" + scheduledSize
+ ", remaining=" + remaining + ")");
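For reference, the message above is emitted when the placement policy's space check fails. A simplified, hedged sketch of that check (adapted from BlockPlacementPolicyDefault; the numbers in main are made up):

```java
// A storage is rejected when the space still required exceeds what remains
// after subtracting bytes already scheduled for in-flight block writes.
public class SpaceRule {
    static boolean hasEnoughSpace(long requiredSize, long scheduledSize,
                                  long remaining) {
        return requiredSize <= remaining - scheduledSize;
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;
        long remaining = 1024L * 1024 * 1024; // 1 GiB left on the volume
        // 8 blocks already scheduled eat all the remaining space: node skipped.
        System.out.println(hasEnoughSpace(blockSize, 8 * blockSize, remaining));
        // Nothing scheduled: node can be chosen.
        System.out.println(hasEnoughSpace(blockSize, 0, remaining));
    }
}
```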
, and will look into it further.
But I'm not sure if we tried this before.
targets.length * dnConf.socketTimeout);<<<<-*
long writeTimeout = dnConf.socketWriteTimeout +
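The snippet quoted above scales socket timeouts by pipeline length: each extra downstream datanode adds a fixed extension to the base timeout. A minimal sketch of that arithmetic; the extension values mirror Hadoop's READ/WRITE_TIMEOUT_EXTENSION constants, but treat the exact numbers here as assumptions:

```java
// Timeouts grow with the number of downstream targets in the write pipeline,
// since each hop adds latency before an ack can come back.
public class PipelineTimeout {
    static final int READ_TIMEOUT_EXTENSION = 5_000;  // ms per downstream node (assumed)
    static final int WRITE_TIMEOUT_EXTENSION = 5_000; // ms per downstream node (assumed)

    static int readTimeout(int socketTimeout, int targets) {
        return socketTimeout + READ_TIMEOUT_EXTENSION * targets;
    }

    static int writeTimeout(int socketWriteTimeout, int targets) {
        return socketWriteTimeout + WRITE_TIMEOUT_EXTENSION * targets;
    }

    public static void main(String[] args) {
        // Example: 60 s base read timeout, 480 s base write timeout,
        // two downstream datanodes in the pipeline.
        System.out.println(readTimeout(60_000, 2));
        System.out.println(writeTimeout(480_000, 2));
    }
}
```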
FileInputStream dataStream, FileInputStream metaStream,
ShortCircuitCache cache, long creationTimeMs, Slot slot) throws
, is there any other way to do this?
The rough idea is to change the RPC engine to map the shaded package
name back to the original one, but I'm not sure if it would work.
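For what it's worth, Hadoop's shade relocation is what turns com.google.protobuf into org.apache.hadoop.thirdparty.protobuf, so "changing the name back" amounts to the inverse mapping. A toy illustration on class-name strings only; real relocation rewrites bytecode references, so this is purely illustrative, not a workable patch:

```java
// Invert the shade plugin's package relocation on a class-name string.
public class Relocate {
    static String unshade(String className) {
        return className.replace(
            "org.apache.hadoop.thirdparty.protobuf.",
            "com.google.protobuf.");
    }

    public static void main(String[] args) {
        System.out.println(
            unshade("org.apache.hadoop.thirdparty.protobuf.Message"));
    }
}
```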