[
https://issues.apache.org/jira/browse/HDDS-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16823375#comment-16823375
]
Arpit Agarwal commented on HDDS-1452:
-------------------------------------
Yeah, we will have to benchmark it.
If a container is full of 1KB files, it may not be a good candidate for Erasure
Coding. If your entire cluster is full of 1KB files, then we have other serious
problems, of course.
The one downside of putting multiple blocks in the same file (can we call it a
superblock?) is that deletes become harder. We will need to do some kind of
background GC/compaction of the superblocks.
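To make the GC/compaction part concrete, here is a rough sketch of what a superblock with logical deletes and a background compaction pass could look like. All names and the on-disk layout below are illustrative assumptions, not the actual Ozone datanode format:
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative "superblock": many small blocks appended to one file, with an
 * index of (offset, length) per block. Deleting a block only drops its index
 * entry; the bytes are reclaimed later by a background compaction that copies
 * the surviving blocks into a new file and swaps it in.
 */
public class SuperblockSketch {

  private static final class Extent {
    final long offset;
    final int length;
    Extent(long offset, int length) {
      this.offset = offset;
      this.length = length;
    }
  }

  private final Path file;
  // blockId -> location of the block's bytes inside the superblock file
  private final Map<Long, Extent> index = new LinkedHashMap<>();

  public SuperblockSketch(Path file) {
    this.file = file;
  }

  /** Append a block's bytes to the end of the superblock file and index it. */
  public synchronized void putBlock(long blockId, byte[] data) throws IOException {
    try (FileChannel ch = FileChannel.open(file,
        StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
      long offset = ch.size();
      ch.write(ByteBuffer.wrap(data));
      index.put(blockId, new Extent(offset, data.length));
    }
  }

  /** Logical delete: forget the extent; the bytes stay until compaction runs. */
  public synchronized void deleteBlock(long blockId) {
    index.remove(blockId);
  }

  /** Background compaction: rewrite only the live blocks into a fresh file. */
  public synchronized void compact() throws IOException {
    Path tmp = file.resolveSibling(file.getFileName() + ".compacting");
    Map<Long, Extent> newIndex = new LinkedHashMap<>();
    try (FileChannel src = FileChannel.open(file, StandardOpenOption.READ);
         FileChannel dst = FileChannel.open(tmp, StandardOpenOption.CREATE,
             StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
      for (Map.Entry<Long, Extent> e : index.entrySet()) {
        Extent old = e.getValue();
        long newOffset = dst.position();
        long copied = 0;
        while (copied < old.length) {
          copied += src.transferTo(old.offset + copied, old.length - copied, dst);
        }
        newIndex.put(e.getKey(), new Extent(newOffset, old.length));
      }
    }
    Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING);
    index.clear();
    index.putAll(newIndex);
  }
}
{code}
The trade-off is the usual log-structured one: deletes are cheap (drop an index entry), but the space only comes back once compaction rewrites the surviving blocks, which is extra I/O we would want to capture in the benchmark.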
> All chunks should happen to a single file for a block in datanode
> -----------------------------------------------------------------
>
> Key: HDDS-1452
> URL: https://issues.apache.org/jira/browse/HDDS-1452
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Datanode
> Affects Versions: 0.5.0
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Major
> Fix For: 0.5.0
>
>
> Currently, each chunk of a block is written to its own chunk file on the
> datanode. The idea here is to write all chunks of a block to a single file
> on the datanode.