[
https://issues.apache.org/jira/browse/HDDS-3694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jitendra Nath Pandey updated HDDS-3694:
---------------------------------------
Priority: Critical (was: Minor)
> Reduce dn-audit log
> -------------------
>
> Key: HDDS-3694
> URL: https://issues.apache.org/jira/browse/HDDS-3694
> Project: Hadoop Distributed Data Store
> Issue Type: Improvement
> Reporter: Rajesh Balamohan
> Assignee: Dinesh Chitlangia
> Priority: Critical
> Labels: Triaged, performance, pull-request-available
> Attachments: write_to_dn_audit_causing_high_disk_util.png
>
>
> Do we really need such a fine-grained audit log? It ends up creating far too
> many entries, one for every single chunk write, for example:
> {noformat}
> 2020-05-31 23:31:48,477 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324230275483 bcsId: 93943} | ret=SUCCESS |
> 2020-05-31 23:31:48,482 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267323565871437 bcsId: 93940} | ret=SUCCESS |
> 2020-05-31 23:31:48,487 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324230275483 bcsId: 93943} | ret=SUCCESS |
> 2020-05-31 23:31:48,497 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267324172472725 bcsId: 93934} | ret=SUCCESS |
> 2020-05-31 23:31:48,501 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267323675906396 bcsId: 93958} | ret=SUCCESS |
> 2020-05-31 23:31:48,504 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324230275483 bcsId: 93943} | ret=SUCCESS |
> 2020-05-31 23:31:48,509 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267323685343583 bcsId: 93974} | ret=SUCCESS |
> 2020-05-31 23:31:48,512 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267324172472725 bcsId: 93934} | ret=SUCCESS |
> 2020-05-31 23:31:48,516 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324332380586 bcsId: 0} | ret=SUCCESS |
> 2020-05-31 23:31:48,726 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267324232634780 bcsId: 93964} | ret=SUCCESS |
> 2020-05-31 23:31:48,733 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267323976323460 bcsId: 93967} | ret=SUCCESS |
> 2020-05-31 23:31:48,740 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324131512723 bcsId: 93952} | ret=SUCCESS |
> 2020-05-31 23:31:48,752 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267324230275483 bcsId: 93943} | ret=SUCCESS |
> 2020-05-31 23:31:48,760 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 165 locID: 104267323675906396 bcsId: 93958} | ret=SUCCESS |
> 2020-05-31 23:31:48,772 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 166 locID: 104267323685343583 bcsId: 93974} | ret=SUCCESS |
> 2020-05-31 23:31:48,780 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 164 locID: 104267324304724389 bcsId: 0} | ret=SUCCESS |
> 2020-05-31 23:31:48,787 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 164 locID: 104267323991724421 bcsId: 93970} | ret=SUCCESS |
> 2020-05-31 23:31:48,794 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK {blockData=conID: 164 locID: 104267323725189479 bcsId: 93963} | ret=SUCCESS |
> {noformat}
> It also ends up saturating disk utilization while delivering lower write
> throughput (MB/sec). Refer to the attached screenshot: 100+ writes at only
> 0.52 MB/sec choke the entire disk.
> !write_to_dn_audit_causing_high_disk_util.png|width=726,height=300!
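> One possible direction (a rough sketch only, not the committed fix): keep
> failure audits at INFO but demote the per-chunk success audits to DEBUG so
> they can still be enabled on demand. The class below is purely illustrative
> and uses the plain Log4j2 API rather than Ozone's AuditLogger:
> {code:java}
> import org.apache.logging.log4j.LogManager;
> import org.apache.logging.log4j.Logger;
> import org.apache.logging.log4j.Marker;
> import org.apache.logging.log4j.MarkerManager;
>
> /**
>  * Illustrative sketch only: per-chunk success audits are demoted to DEBUG,
>  * while failures stay at INFO. All names here are hypothetical, not Ozone code.
>  */
> public final class DnChunkAuditSketch {
>   private static final Logger AUDIT = LogManager.getLogger("DNAudit");
>   private static final Marker WRITE_CHUNK = MarkerManager.getMarker("WRITE_CHUNK");
>
>   private DnChunkAuditSketch() {
>   }
>
>   public static void auditWriteChunk(String blockData, boolean success) {
>     if (success) {
>       // High-volume, low-value entries: keep them out of the default INFO
>       // audit stream; they show up only when DEBUG is enabled for DNAudit.
>       AUDIT.debug(WRITE_CHUNK,
>           "op=WRITE_CHUNK {blockData={}} | ret=SUCCESS |", blockData);
>     } else {
>       // Failures are rare and operationally important, so they stay at INFO.
>       AUDIT.info(WRITE_CHUNK,
>           "op=WRITE_CHUNK {blockData={}} | ret=FAILURE |", blockData);
>     }
>   }
> }
> {code}
> Alternatively, a similar effect might be achievable purely in the datanode
> audit log4j2 configuration (e.g. a marker filter), leaving the code untouched.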
>
> Also, the username and IP are currently logged as null. They should be
> populated with the caller details available from gRPC, as sketched below.
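> A minimal sketch of how the caller details could be pulled from gRPC, using
> the standard io.grpc server-interceptor pattern (the class and key names are
> illustrative, not Ozone's actual code):
> {code:java}
> import io.grpc.Context;
> import io.grpc.Contexts;
> import io.grpc.Grpc;
> import io.grpc.Metadata;
> import io.grpc.ServerCall;
> import io.grpc.ServerCallHandler;
> import io.grpc.ServerInterceptor;
>
> /** Hypothetical interceptor capturing the remote address for audit messages. */
> public class ClientInfoInterceptor implements ServerInterceptor {
>   // Read later by whatever builds the audit message, instead of ip=null.
>   public static final Context.Key<String> REMOTE_IP = Context.key("remote-ip");
>
>   @Override
>   public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
>       ServerCall<ReqT, RespT> call, Metadata headers,
>       ServerCallHandler<ReqT, RespT> next) {
>     // TRANSPORT_ATTR_REMOTE_ADDR is a standard gRPC transport attribute.
>     String remoteIp = String.valueOf(
>         call.getAttributes().get(Grpc.TRANSPORT_ATTR_REMOTE_ADDR));
>     Context ctx = Context.current().withValue(REMOTE_IP, remoteIp);
>     return Contexts.interceptCall(ctx, call, headers, next);
>   }
> }
> {code}
> The username could be filled in the same way from the authenticated caller
> (for example the block token subject) when security is enabled.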
--
This message was sent by Atlassian Jira
(v8.3.4#803005)