[
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959419#comment-16959419
]
Bharat Viswanadham edited comment on HDDS-2356 at 10/25/19 4:25 AM:
--------------------------------------------------------------------
The above is an issue in OM, which can happen intermittently when a handler
thread in OM is updating the partInfo map while the flush thread commits
those entries. (During commit we convert OmMultipartKeyInfo to proto, and
that is when we hit the above error.)
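A minimal sketch of that race (illustrative Java only, not Ozone code; the map
contents and thread names are made up): TreeMap.forEach is fail-fast, so if one
thread keeps adding parts while another thread iterates the same map to build
the proto, the iterating thread can hit a ConcurrentModificationException, just
as in the stack trace below.
{code:java}
import java.util.TreeMap;

public class TreeMapRaceDemo {
  public static void main(String[] args) throws InterruptedException {
    // Stand-in for the partInfo map that the handler thread keeps updating.
    TreeMap<Integer, String> parts = new TreeMap<>();
    for (int i = 0; i < 1_000; i++) {
      parts.put(i, "part-" + i);
    }

    // Handler thread: keeps committing new parts into the map.
    Thread handler = new Thread(() -> {
      for (int i = 1_000; i < 1_000_000; i++) {
        parts.put(i, "part-" + i);
      }
    });

    // Flush thread: iterates the map (like getProto does) while it changes.
    // TreeMap.forEach is fail-fast, so this can throw
    // ConcurrentModificationException; being a race, it is not guaranteed
    // to fire on every run.
    Thread flusher = new Thread(() ->
        parts.forEach((partNumber, partName) -> { /* build proto entry */ }));

    handler.start();
    flusher.start();
    handler.join();
    flusher.join();
  }
}
{code}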
The above configs are not related to OM; they are for the SCM end.
{quote}However, writing fails due to no more blocks allocated. I guess my
cluster cannot keep up with the writing.
{quote}
We can check the SCM logs to see why no more blocks are being allocated; OM
will also receive this exception.
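For illustration only (a general pattern, not necessarily the fix applied for
this JIRA): one way to keep a flush thread from iterating a map that a handler
thread may still mutate is to serialize a snapshot copy taken under the same
lock that guards updates.
{code:java}
import java.util.TreeMap;

// Hypothetical holder class, not Ozone code.
class PartInfoHolder {
  private final TreeMap<Integer, String> partInfoMap = new TreeMap<>();

  // Handler thread path: add or replace a part under the lock.
  synchronized void putPart(int partNumber, String partName) {
    partInfoMap.put(partNumber, partName);
  }

  // Flush thread path: copy under the same lock, then serialize the copy.
  // Later updates to partInfoMap cannot cause a
  // ConcurrentModificationException in code iterating the snapshot.
  synchronized TreeMap<Integer, String> snapshotForFlush() {
    return new TreeMap<>(partInfoMap);
  }
}
{code}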
> Multipart upload report errors while writing to ozone Ratis pipeline
> --------------------------------------------------------------------
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Manager
> Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM
> on a separate VM
> Reporter: Li Cheng
> Assignee: Bharat Viswanadham
> Priority: Blocker
> Fix For: 0.5.0
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, plus 1 OM & 1 SCM on a separate
> VM, call it VM0.
> I use goofys as a FUSE client and enable the Ozone S3 gateway to mount Ozone
> to a path on VM0, then read data from VM0's local disk and write it to the
> mount path. The dataset contains ~50,000 files, ranging in size from 0 bytes
> to GB-level.
> Writing is slow (~10 minutes per GB) and stops after around 4GB. Looking at
> the hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors
> related to multipart upload. These errors eventually cause the write to
> terminate and OM to shut down.
>
> 2019-10-24 16:01:59,527 [OMDoubleBufferFlushThread] ERROR - Terminating with
> exit status 2: OMDoubleBuffer flush
> threadOMDoubleBufferFlushThreadencountered Throwable error
> java.util.ConcurrentModificationException
> at java.util.TreeMap.forEach(TreeMap.java:1004)
> at org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo.getProto(OmMultipartKeyInfo.java:111)
> at org.apache.hadoop.ozone.om.codec.OmMultipartKeyInfoCodec.toPersistedFormat(OmMultipartKeyInfoCodec.java:38)
> at org.apache.hadoop.ozone.om.codec.OmMultipartKeyInfoCodec.toPersistedFormat(OmMultipartKeyInfoCodec.java:31)
> at org.apache.hadoop.hdds.utils.db.CodecRegistry.asRawData(CodecRegistry.java:68)
> at org.apache.hadoop.hdds.utils.db.TypedTable.putWithBatch(TypedTable.java:125)
> at org.apache.hadoop.ozone.om.response.s3.multipart.S3MultipartUploadCommitPartResponse.addToDBBatch(S3MultipartUploadCommitPartResponse.java:112)
> at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$flushTransactions$0(OzoneManagerDoubleBuffer.java:137)
> at java.util.Iterator.forEachRemaining(Iterator.java:116)
> at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:135)
> at java.lang.Thread.run(Thread.java:745)
> 2019-10-24 16:01:59,629 [shutdown-hook-0] INFO - SHUTDOWN_MSG: