[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2018-03-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18517:
--
Fix Version/s: 2.0.0

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0, 2.0.0-alpha-2
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
>Priority: Major
> Fix For: 1.5.0, 2.0.0-alpha-2, 2.0.0
>
> Attachments: HBASE-18517.branch-1.001.patch, 
> HBASE-18517.master.001.patch
>
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity was also noted on this cluster, although we are 
> not 100% sure it was related to the above issue. 
> We should consider limiting the message size in the logs, which can easily be 
> done by adding a maximum-width format modifier to the message conversion 
> character in log4j.properties, changing
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n
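
For reference, a minimal sketch of how such a cap might look in a Log4j 1.x 
log4j.properties, assuming the console appender referenced above with a 
PatternLayout; the 1000-character limit mirrors the release note on this issue, 
and the surrounding date/level/category conversions are illustrative rather than 
the exact committed values:

  log4j.appender.console=org.apache.log4j.ConsoleAppender
  log4j.appender.console.layout=org.apache.log4j.PatternLayout
  # %.1000m caps each rendered log message at 1000 characters
  log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %.1000m%n

Per the Log4j 1.x PatternLayout documentation, the maximum-width modifier trims 
excess characters from the beginning of the message rather than the end, so the 
tail of an oversized message is what survives.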



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-09 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18517:
--
Release Note: Sets a log length max of 1000 characters.  (was: Sets a log length max of 1k.)

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0, 2.0.0-alpha-2
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
> Fix For: 1.5.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18517.branch-1.001.patch, 
> HBASE-18517.master.001.patch
>
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity was also noted on this cluster, although we are 
> not 100% sure it was related to the above issue. 
> We should consider limiting the message size in the logs, which can easily be 
> done by adding a maximum-width format modifier to the message conversion 
> character in log4j.properties, changing
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18517:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0, 2.0.0-alpha-2
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
> Fix For: 1.5.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18517.branch-1.001.patch, 
> HBASE-18517.master.001.patch
>
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity was also noted on this cluster, although we are 
> not 100% sure it was related to the above issue. 
> We should consider limiting the message size in the logs, which can easily be 
> done by adding a maximum-width format modifier to the message conversion 
> character in log4j.properties, changing
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18517:
--
Affects Version/s: (was: 3.0.0)
                   2.0.0-alpha-2
     Hadoop Flags: Incompatible change, Reviewed
     Release Note: Sets a log length max of 1k.
    Fix Version/s: (was: 3.0.0)
                   2.0.0-alpha-2

Pushed to branch-1 and branch-2 as well as master. Thanks [~vik.karma]

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0, 2.0.0-alpha-2
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
> Fix For: 1.5.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18517.branch-1.001.patch, 
> HBASE-18517.master.001.patch
>
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity was also noted on this cluster, although we are 
> not 100% sure it was related to the above issue. 
> We should consider limiting the message size in the logs, which can easily be 
> done by adding a maximum-width format modifier to the message conversion 
> character in log4j.properties, changing
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-05 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-18517:
--
Attachment: (was: HBASE-18374.master.001.patch)

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
> Fix For: 3.0.0, 1.5.0
>
> Attachments: HBASE-18517.branch-1.001.patch, 
> HBASE-18517.master.001.patch
>
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity was also noted on this cluster, although we are 
> not 100% sure it was related to the above issue. 
> We should consider limiting the message size in the logs, which can easily be 
> done by adding a maximum-width format modifier to the message conversion 
> character in log4j.properties, changing
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-05 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-18517:
--
Attachment: HBASE-18517.master.001.patch

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
> Fix For: 3.0.0, 1.5.0
>
> Attachments: HBASE-18517.branch-1.001.patch, 
> HBASE-18517.master.001.patch
>
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity was also noted on this cluster, although we are 
> not 100% sure it was related to the above issue. 
> We should consider limiting the message size in the logs, which can easily be 
> done by adding a maximum-width format modifier to the message conversion 
> character in log4j.properties, changing
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-05 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-18517:
--
    Fix Version/s: 1.5.0
                   3.0.0
Affects Version/s: 1.5.0
                   3.0.0
           Status: Patch Available  (was: Open)

QA run

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
> Fix For: 3.0.0, 1.5.0
>
> Attachments: HBASE-18374.master.001.patch, 
> HBASE-18517.branch-1.001.patch
>
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity was also noted on this cluster, although we are 
> not 100% sure it was related to the above issue. 
> We should consider limiting the message size in the logs, which can easily be 
> done by adding a maximum-width format modifier to the message conversion 
> character in log4j.properties, changing
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-05 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-18517:
--
Attachment: HBASE-18374.master.001.patch
            HBASE-18517.branch-1.001.patch

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
> Attachments: HBASE-18374.master.001.patch, 
> HBASE-18517.branch-1.001.patch
>
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity was also noted on this cluster, although we are 
> not 100% sure it was related to the above issue. 
> We should consider limiting the message size in the logs, which can easily be 
> done by adding a maximum-width format modifier to the message conversion 
> character in log4j.properties, changing
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-03 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-18517:
--
Description: 
We have now had two cases in our prod / pilot setups that led to humongous 
log lines in the RegionServer logs. 

In the first case, one of the Phoenix users had constructed a query with a really 
large list of Id filters (61 MB) that translated into an HBase scan that ran 
slowly, leading to responseTooSlow messages in the logs with the entire 
filter list being printed, for example:
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

There was another case where a use case had created a table with really large 
key/region names. This caused humongous log lines for flush and compaction 
on these regions, filling up the RS logs.

These large logs usually cause issues with disk I/O load, load on the Splunk 
servers, and even machine performance degradation. With 61 MB log lines, basic 
log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
getting stuck. High GC activity was also noted on this cluster, although we are 
not 100% sure it was related to the above issue. 

We should consider limiting the message size in the logs, which can easily be 
done by adding a maximum-width format modifier to the message conversion 
character in log4j.properties, changing
log4j.appender.console.layout.ConversionPattern=...: %m%n
to 
log4j.appender.console.layout.ConversionPattern=...: %.1m%n


  was:
We have now had two cases in our prod / pilot setups that led to humongous 
log lines in the RegionServer logs. 
In one case, one of the Phoenix users had constructed a query with a really large 
list of Id filters (61 MB) that translated into an HBase scan that ran slowly, 
leading to responseTooSlow messages in the logs with the entire filter 
list being printed, for example:
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

There was another case where a use case had created a table with really large 
key/region names. This caused humongous log lines for flush and compaction 
on these regions, filling up the RS logs.

These large logs usually cause issues with disk I/O load, load on the Splunk 
servers, and even machine performance degradation. With 61 MB log lines, basic 
log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
getting stuck. High GC activity was also noted on this cluster, although we are 
not 100% sure it was related to the above issue. 

We should consider limiting the message size in the logs, which can easily be 
done by adding a maximum-width format modifier to the message conversion 
character in log4j.properties, changing
log4j.appender.console.layout.ConversionPattern=...: %m%n
to 
log4j.appender.console.layout.ConversionPattern=...: %.1m%n



> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In the first case, one of the Phoenix users had constructed a query with a 
> really large list of Id filters (61 MB) that translated into an HBase scan 
> that ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
> getting stuck. High GC activity 

[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-03 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-18517:
--
Description: 
We have now had two cases in our prod / pilot setups that led to humongous 
log lines in the RegionServer logs. 
In one case, one of the Phoenix users had constructed a query with a really large 
list of Id filters (61 MB) that translated into an HBase scan that ran slowly, 
leading to responseTooSlow messages in the logs with the entire filter 
list being printed, for example:
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

There was another case where a use case had created a table with really large 
key/region names. This caused humongous log lines for flush and compaction 
on these regions, filling up the RS logs.

These large logs usually cause issues with disk I/O load, load on the Splunk 
servers, and even machine performance degradation. With 61 MB log lines, basic 
log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
getting stuck. High GC activity was also noted on this cluster, although we are 
not 100% sure it was related to the above issue. 

We should consider limiting the message size in the logs, which can easily be 
done by adding a maximum-width format modifier to the message conversion 
character in log4j.properties, changing
log4j.appender.console.layout.ConversionPattern=...: %m%n
to 
log4j.appender.console.layout.ConversionPattern=...: %.1m%n


  was:
We have now had two cases in our prod / pilot setups that led to humongous 
log lines in the RegionServer logs. 
In one case, one of the Phoenix users had constructed a query with a really large 
list of Id filters (61 MB) that translated into an HBase scan that ran slowly, 
leading to responseTooSlow messages in the logs with the entire filter 
list being printed, for example:
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

There was another case where a use case had created a table with really large 
key/region names. This caused humongous log lines for flush and compaction 
on these regions, filling up the RS logs.

These large logs usually cause issues with disk I/O load, load on the Splunk 
servers, and even machine performance degradation. With 61 MB log lines, basic 
log-processing commands like vim, scrolling through the logs, wc -l, etc. were 
getting stuck. High GC activity was also noted on this cluster, although we are 
not 100% sure it was related to the above issue. 

We should consider limiting the message size in the logs, which can easily be 
done by adding a maximum-width format modifier to the message conversion 
character in log4j.properties, changing
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
to 
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %.1m%n



> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
>
> We have now had two cases in our prod / pilot setups that led to humongous 
> log lines in the RegionServer logs. 
> In one case, one of the Phoenix users had constructed a query with a really 
> large list of Id filters (61 MB) that translated into an HBase scan that 
> ran slowly, leading to responseTooSlow messages in the logs with the 
> entire filter list being printed, for example:
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This caused humongous log lines for flush and 
> compaction on these regions, filling up the RS logs.
> These large logs usually cause issues with disk I/O load, load on the Splunk 
> servers, and even machine performance degradation. With 61 MB log lines, basic 
> log-processing commands like vim, scrolling through the logs, wc -l,