On 9/24/20 11:10 PM, Stephane Eranian via perfmon2-devel wrote:
> Will,
>
> Can you test that the current git tree has all the bugs you reported fixed?
> Thanks.
I have attached two more patches (typos and strncpy usage) from the Debian packaging.

Andreas
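(The strncpy patch, PATCH 2/2, is not quoted below. For readers unfamiliar with the issue such cleanups usually address: strncpy() does not NUL-terminate the destination when the source is at least as long as the size argument. A minimal sketch of the hazard and the common fix, with hypothetical buffer and string names, not code taken from the patch:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[8];
	const char *name = "UNC_C_TOR_OCCUPANCY";	/* longer than buf */

	strncpy(buf, name, sizeof(buf));	/* fills all 8 bytes: buf is NOT NUL-terminated */
	buf[sizeof(buf) - 1] = '\0';		/* the usual fix: terminate explicitly */

	printf("%s\n", buf);			/* safe to print only after the explicit termination */
	return 0;
}

Without the explicit terminator, passing buf to any str* function reads past the end of the array.)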
From 47db6919e94239d07085adb3761cb144e98a6c9e Mon Sep 17 00:00:00 2001 From: Andreas Beckmann <a.beckm...@fz-juelich.de> Date: Sun, 22 Apr 2018 12:56:02 +0200 Subject: [PATCH 1/2] fix typos and normalize spacing most typos were found by Lintian Signed-off-by: Andreas Beckmann <a.beckm...@fz-juelich.de> --- lib/events/amd64_events_fam16h.h | 2 +- lib/events/amd64_events_fam17h_zen2.h | 2 +- lib/events/intel_bdx_unc_cbo_events.h | 8 +- lib/events/intel_bdx_unc_ha_events.h | 12 +-- lib/events/intel_bdx_unc_imc_events.h | 2 +- lib/events/intel_bdx_unc_pcu_events.h | 4 +- lib/events/intel_bdx_unc_qpi_events.h | 14 +-- lib/events/intel_bdx_unc_r3qpi_events.h | 2 +- lib/events/intel_knl_unc_cha_events.h | 2 +- lib/events/intel_skx_unc_cha_events.h | 10 +- lib/events/intel_skx_unc_imc_events.h | 2 +- lib/events/intel_skx_unc_m3upi_events.h | 12 +-- lib/events/intel_skx_unc_pcu_events.h | 2 +- lib/events/intel_skx_unc_upi_events.h | 4 +- lib/events/mips_74k_events.h | 2 +- lib/events/power4_events.h | 4 +- lib/events/power5+_events.h | 14 +-- lib/events/power5_events.h | 14 +-- lib/events/power6_events.h | 8 +- lib/events/power7_events.h | 18 ++-- lib/events/power8_events.h | 136 ++++++++++++------------ lib/events/power9_events.h | 76 ++++++------- lib/events/ppc970_events.h | 4 +- lib/events/ppc970mp_events.h | 4 +- lib/events/s390x_cpumf_events.h | 24 ++--- lib/pfmlib_itanium2.c | 2 +- lib/pfmlib_montecito.c | 2 +- 27 files changed, 193 insertions(+), 193 deletions(-) diff --git a/lib/events/amd64_events_fam16h.h b/lib/events/amd64_events_fam16h.h index 2eab1dc..b92e728 100644 --- a/lib/events/amd64_events_fam16h.h +++ b/lib/events/amd64_events_fam16h.h @@ -675,7 +675,7 @@ static const amd64_umask_t amd64_fam16h_cache_cross_invalidates[]={ .ucode = 0x4, }, { .uname = "IC_INVALIDATES_DC_DIRTY", - .udesc = "Exection of modified instruction or data too close to code", + .udesc = "Execution of modified instruction or data too close to code", .ucode = 0x8, }, { .uname = "IC_HITS_DC_CLEAN_LINE", diff --git a/lib/events/amd64_events_fam17h_zen2.h b/lib/events/amd64_events_fam17h_zen2.h index 71616e5..b61f75f 100644 --- a/lib/events/amd64_events_fam17h_zen2.h +++ b/lib/events/amd64_events_fam17h_zen2.h @@ -842,7 +842,7 @@ static const amd64_entry_t amd64_fam17h_zen2_pe[]={ .ngrp = 0, }, { .name = "DECODER_OVERRIDE_BRANCH_PRED", - .desc = "Numbner of decoder overrides of existing brnach prediction. This is a speculative event.", + .desc = "Number of decoder overrides of existing branch prediction. This is a speculative event.", .modmsk = AMD64_FAM17H_ATTRS, .code = 0x91, .flags = 0, diff --git a/lib/events/intel_bdx_unc_cbo_events.h b/lib/events/intel_bdx_unc_cbo_events.h index c359821..28e1faf 100644 --- a/lib/events/intel_bdx_unc_cbo_events.h +++ b/lib/events/intel_bdx_unc_cbo_events.h @@ -936,7 +936,7 @@ static intel_x86_entry_t intel_bdx_unc_c_pe[]={ }, { .name = "UNC_C_COUNTER0_OCCUPANCY", .code = 0x1f, - .desc = "Since occupancy counts can only be captured in the Cbos 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0. The filtering available is found in the control register - threshold, invert and edge detect. E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entryy.", + .desc = "Since occupancy counts can only be captured in the Cbos 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0. 
The filtering available is found in the control register - threshold, invert and edge detect. E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entry.", .modmsk = BDX_UNC_CBO_ATTRS, .cntmsk = 0xf, }, @@ -948,7 +948,7 @@ static intel_x86_entry_t intel_bdx_unc_c_pe[]={ }, { .name = "UNC_C_LLC_LOOKUP", .code = 0x34, - .desc = "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.", + .desc = "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.", .modmsk = BDX_UNC_CBO_NID_ATTRS, .flags = INTEL_X86_NO_AUTOENCODE, .cntmsk = 0xf, @@ -1127,7 +1127,7 @@ static intel_x86_entry_t intel_bdx_unc_c_pe[]={ }, { .name = "UNC_C_TOR_INSERTS", .code = 0x35, - .desc = "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent filters but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x1(0x182).", + .desc = "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent filters but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).", .modmsk = BDX_UNC_CBO_NID_ATTRS | _SNBEP_UNC_ATTR_ISOC | _SNBEP_UNC_ATTR_NC, .flags = INTEL_X86_NO_AUTOENCODE, .cntmsk = 0xf, @@ -1137,7 +1137,7 @@ static intel_x86_entry_t intel_bdx_unc_c_pe[]={ }, { .name = "UNC_C_TOR_OCCUPANCY", .code = 0x36, - .desc = "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent filters but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x (0x182)", + .desc = "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent filters but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. 
If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).", .modmsk = BDX_UNC_CBO_NID_ATTRS | _SNBEP_UNC_ATTR_ISOC | _SNBEP_UNC_ATTR_NC, .flags = INTEL_X86_NO_AUTOENCODE, .cntmsk = 0x1, diff --git a/lib/events/intel_bdx_unc_ha_events.h b/lib/events/intel_bdx_unc_ha_events.h index a4ab858..764e8f4 100644 --- a/lib/events/intel_bdx_unc_ha_events.h +++ b/lib/events/intel_bdx_unc_ha_events.h @@ -1014,7 +1014,7 @@ static intel_x86_entry_t intel_bdx_unc_h_pe[]={ }, { .name = "UNC_H_SNOOP_OCCUPANCY", .code = 0x9, - .desc = "Accumulates the occupancy of either the local HA tracker pool that have snoops pending in every cycle. This can be used in conjection with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (HomeTracker) entry is available and this occupancy is decremented when all the snoop responses have retureturned.", + .desc = "Accumulates the occupancy of either the local HA tracker pool that have snoops pending in every cycle. This can be used in conjection with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (HomeTracker) entry is available and this occupancy is decremented when all the snoop responses have returned.", .modmsk = BDX_UNC_HA_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -1023,7 +1023,7 @@ static intel_x86_entry_t intel_bdx_unc_h_pe[]={ }, { .name = "UNC_H_SNOOP_RESP", .code = 0x21, - .desc = "Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.", + .desc = "Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.", .modmsk = BDX_UNC_HA_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -1032,7 +1032,7 @@ static intel_x86_entry_t intel_bdx_unc_h_pe[]={ }, { .name = "UNC_H_SNP_RESP_RECV_LOCAL", .code = 0x60, - .desc = "Number of snoop responses received for a Local request", + .desc = "Number of snoop responses received for a Local request", .modmsk = BDX_UNC_HA_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -1050,7 +1050,7 @@ static intel_x86_entry_t intel_bdx_unc_h_pe[]={ }, { .name = "UNC_H_TAD_REQUESTS_G0", .code = 0x1b, - .desc = "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. 
It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save powewer.", + .desc = "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.", .modmsk = BDX_UNC_HA_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -1059,7 +1059,7 @@ static intel_x86_entry_t intel_bdx_unc_h_pe[]={ }, { .name = "UNC_H_TAD_REQUESTS_G1", .code = 0x1c, - .desc = "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save powewer.", + .desc = "Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.", .modmsk = BDX_UNC_HA_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -1122,7 +1122,7 @@ static intel_x86_entry_t intel_bdx_unc_h_pe[]={ }, { .name = "UNC_H_TXR_BL", .code = 0x10, - .desc = "Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.", + .desc = "Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.", .modmsk = BDX_UNC_HA_ATTRS, .cntmsk = 0xf, .ngrp = 1, diff --git a/lib/events/intel_bdx_unc_imc_events.h b/lib/events/intel_bdx_unc_imc_events.h index e6406d9..e3850b6 100644 --- a/lib/events/intel_bdx_unc_imc_events.h +++ b/lib/events/intel_bdx_unc_imc_events.h @@ -429,7 +429,7 @@ static intel_x86_entry_t intel_bdx_unc_m_pe[]={ }, { .name = "UNC_M_MAJOR_MODES", .code = 0x7, - .desc = "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.", + .desc = "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. 
Major modes are channel-wide, and not a per-rank (or dimm or bank) mode.", .modmsk = BDX_UNC_IMC_ATTRS, .cntmsk = 0xf, .ngrp = 1, diff --git a/lib/events/intel_bdx_unc_pcu_events.h b/lib/events/intel_bdx_unc_pcu_events.h index 24b0bd5..0d67e09 100644 --- a/lib/events/intel_bdx_unc_pcu_events.h +++ b/lib/events/intel_bdx_unc_pcu_events.h @@ -316,7 +316,7 @@ static intel_x86_entry_t intel_bdx_unc_p_pe[]={ }, { .name = "UNC_P_PROCHOT_INTERNAL_CYCLES", .code = 0x9, - .desc = "Counts the number of cycles that we are in Interal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.", + .desc = "Counts the number of cycles that we are in internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.", .modmsk = BDX_UNC_PCU_ATTRS, .cntmsk = 0xf, }, @@ -346,7 +346,7 @@ static intel_x86_entry_t intel_bdx_unc_p_pe[]={ }, { .name = "UNC_P_UFS_TRANSITIONS_NO_CHANGE", .code = 0x79, - .desc = "Ring GV with same final and inital frequency", + .desc = "Ring GV with same final and initial frequency", .modmsk = BDX_UNC_PCU_ATTRS, .cntmsk = 0xf, }, diff --git a/lib/events/intel_bdx_unc_qpi_events.h b/lib/events/intel_bdx_unc_qpi_events.h index 18c010a..a4d1747 100644 --- a/lib/events/intel_bdx_unc_qpi_events.h +++ b/lib/events/intel_bdx_unc_qpi_events.h @@ -304,7 +304,7 @@ static intel_x86_entry_t intel_bdx_unc_q_pe[]={ }, { .name = "UNC_Q_DIRECT2CORE", .code = 0x13, - .desc = "Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exlusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.", + .desc = "Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.", .modmsk = BDX_UNC_QPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -331,7 +331,7 @@ static intel_x86_entry_t intel_bdx_unc_q_pe[]={ }, { .name = "UNC_Q_RXL_BYPASSED", .code = 0x9, - .desc = "Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transfered, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.", + .desc = "Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.", .modmsk = BDX_UNC_QPI_ATTRS, .cntmsk = 0xf, }, @@ -376,7 +376,7 @@ static intel_x86_entry_t intel_bdx_unc_q_pe[]={ }, { .name = "UNC_Q_RXL_FLITS_G1", .code = 0x2 | (1 << 21), /* extra ev_sel_ext bit set */ - .desc = "Counts the number of flits received from the QPI Link. 
This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: datld therefore do: data flits * 8B / time.", + .desc = "Counts the number of flits received from the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.", .modmsk = BDX_UNC_QPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -385,7 +385,7 @@ static intel_x86_entry_t intel_bdx_unc_q_pe[]={ }, { .name = "UNC_Q_RXL_FLITS_G2", .code = 0x3 | (1 << 21), /* extra ev_sel_ext bit set */ - .desc = "Counts the number of flits received from the QPI Link. This is one of three groups that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: datld therefore do: data flits * 8B / time.", + .desc = "Counts the number of flits received from the QPI Link. 
This is one of three groups that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.", .modmsk = BDX_UNC_QPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -538,7 +538,7 @@ static intel_x86_entry_t intel_bdx_unc_q_pe[]={ }, { .name = "UNC_Q_TXL_FLITS_G0", .code = 0x0, - .desc = "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instfor L0) or 4B instead of 8B for L0p.", + .desc = "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. 
To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.", .modmsk = BDX_UNC_QPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -547,7 +547,7 @@ static intel_x86_entry_t intel_bdx_unc_q_pe[]={ }, { .name = "UNC_Q_TXL_FLITS_G1", .code = 0x0 | (1 << 21), /* extra ev_sel_ext bit set */ - .desc = "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instfor L0) or 4B instead of 8B for L0p.", + .desc = "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.", .modmsk = BDX_UNC_QPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -556,7 +556,7 @@ static intel_x86_entry_t intel_bdx_unc_q_pe[]={ }, { .name = "UNC_Q_TXL_FLITS_G2", .code = 0x1 | (1 << 21), /* extra ev_sel_ext bit set */ - .desc = "Counts the number of flits trasmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. 
For example, when we are transfering a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: datld therefore do: data flits * 8B / time.", + .desc = "Counts the number of flits trasmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.", .modmsk = BDX_UNC_QPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, diff --git a/lib/events/intel_bdx_unc_r3qpi_events.h b/lib/events/intel_bdx_unc_r3qpi_events.h index 8d7f6a3..cbcc8c4 100644 --- a/lib/events/intel_bdx_unc_r3qpi_events.h +++ b/lib/events/intel_bdx_unc_r3qpi_events.h @@ -732,7 +732,7 @@ static intel_x86_entry_t intel_bdx_unc_r3_pe[]={ }, { .name = "UNC_R3_VNA_CREDITS_ACQUIRED", .code = 0x33, - .desc = "Number of QPI VNA Credit acquisitions. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder. VNA credits are used by all message classes in order to communicate across QPI. If a packet is unable to acquire credits, it will then attempt to use credts from the VN0 pool. Note that a single packet may require multiple flit buffers (i.e. when data is being transfered). Therefore, this event will increment by the number of credits acquired in each cycle. Filtering based on message class is not provided. One can count the number of packets transfered in a given message class using an qfclk event.", + .desc = "Number of QPI VNA Credit acquisitions. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder. VNA credits are used by all message classes in order to communicate across QPI. If a packet is unable to acquire credits, it will then attempt to use credts from the VN0 pool. Note that a single packet may require multiple flit buffers (i.e. when data is being transferred). Therefore, this event will increment by the number of credits acquired in each cycle. Filtering based on message class is not provided. 
One can count the number of packets transferred in a given message class using an qfclk event.", .modmsk = BDX_UNC_R3QPI_ATTRS, .cntmsk = 0x3, .ngrp = 1, diff --git a/lib/events/intel_knl_unc_cha_events.h b/lib/events/intel_knl_unc_cha_events.h index 5cd401b..c63e4b7 100644 --- a/lib/events/intel_knl_unc_cha_events.h +++ b/lib/events/intel_knl_unc_cha_events.h @@ -1154,7 +1154,7 @@ static const intel_x86_entry_t intel_knl_unc_cha_pe[]={ .code = 0xa4, }, { .name = "UNC_H_FAST_ASSERTED", - .desc = "Counts cycles source throttling is adderted", + .desc = "Counts cycles source throttling is asserted", .cntmsk = 0xf, .code = 0xa5, .ngrp = 1, diff --git a/lib/events/intel_skx_unc_cha_events.h b/lib/events/intel_skx_unc_cha_events.h index c94caa5..893237b 100644 --- a/lib/events/intel_skx_unc_cha_events.h +++ b/lib/events/intel_skx_unc_cha_events.h @@ -3120,7 +3120,7 @@ static intel_x86_entry_t intel_skx_unc_c_pe[]={ }, { .name = "UNC_C_COUNTER0_OCCUPANCY", .code = 0x1f, - .desc = "Since occupancy counts can only be captured in the Cbos 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0. The filtering available is found in the control register - threshold, invert and edge detect. E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entryy.", + .desc = "Since occupancy counts can only be captured in the Cbos 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0. The filtering available is found in the control register - threshold, invert and edge detect. E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entry.", .modmsk = SKX_UNC_CHA_ATTRS, .cntmsk = 0xf, }, @@ -3270,7 +3270,7 @@ static intel_x86_entry_t intel_skx_unc_c_pe[]={ }, { .name = "UNC_C_LLC_LOOKUP", .code = 0x34, - .desc = "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.", + .desc = "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.", .modmsk = SKX_UNC_CHA_ATTRS, .cntmsk = 0xf, .ngrp = 2, @@ -3606,7 +3606,7 @@ static intel_x86_entry_t intel_skx_unc_c_pe[]={ }, { .name = "UNC_C_SNOOP_RESP", .code = 0x5c, - .desc = "Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.", + .desc = "Counts the total number of RspI snoop responses received. 
Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.", .modmsk = SKX_UNC_CHA_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -3660,7 +3660,7 @@ static intel_x86_entry_t intel_skx_unc_c_pe[]={ }, { .name = "UNC_C_TOR_INSERTS", .code = 0x35, - .desc = "Counts the number of entries successfuly inserted into the TOR that match qualifications specified by the subevent.", + .desc = "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.", .modmsk = SKX_UNC_CHA_FILT1_ATTRS, .cntmsk = 0xf, .flags = INTEL_X86_NO_AUTOENCODE, @@ -3670,7 +3670,7 @@ static intel_x86_entry_t intel_skx_unc_c_pe[]={ }, { .name = "UNC_C_TOR_OCCUPANCY", .code = 0x36, - .desc = "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T", + .desc = "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.", .modmsk = SKX_UNC_CHA_FILT1_ATTRS, .cntmsk = 0x1, .flags = INTEL_X86_NO_AUTOENCODE, diff --git a/lib/events/intel_skx_unc_imc_events.h b/lib/events/intel_skx_unc_imc_events.h index 39b0f27..87f8afb 100644 --- a/lib/events/intel_skx_unc_imc_events.h +++ b/lib/events/intel_skx_unc_imc_events.h @@ -386,7 +386,7 @@ static intel_x86_entry_t intel_skx_unc_m_pe[]={ }, { .name = "UNC_M_MAJOR_MODES", .code = 0x7, - .desc = "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.", + .desc = "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. 
Major modes are channel-wide, and not a per-rank (or dimm or bank) mode.", .modmsk = SKX_UNC_IMC_ATTRS, .cntmsk = 0xf, .ngrp = 1, diff --git a/lib/events/intel_skx_unc_m3upi_events.h b/lib/events/intel_skx_unc_m3upi_events.h index 3accb44..7e0273e 100644 --- a/lib/events/intel_skx_unc_m3upi_events.h +++ b/lib/events/intel_skx_unc_m3upi_events.h @@ -826,15 +826,15 @@ static intel_x86_umask_t skx_unc_m3_rxc_flits_slot_bl[]={ }, { .uname = "P1_NOT_REQ", .ucode = 0x1000, - .udesc = "Slotting BL Message Into Header Flit -- Dont Need Pump 1", + .udesc = "Slotting BL Message Into Header Flit -- Don't Need Pump 1", }, { .uname = "P1_NOT_REQ_BUT_BUBBLE", .ucode = 0x2000, - .udesc = "Slotting BL Message Into Header Flit -- Dont Need Pump 1 - Bubblle", + .udesc = "Slotting BL Message Into Header Flit -- Don't Need Pump 1 - Bubble", }, { .uname = "P1_NOT_REQ_NOT_AVAIL", .ucode = 0x4000, - .udesc = "Slotting BL Message Into Header Flit -- Dont Need Pump 1 - Not Avaiil", + .udesc = "Slotting BL Message Into Header Flit -- Don't Need Pump 1 - Not Avail", }, { .uname = "P1_WAIT", .ucode = 0x800, @@ -845,7 +845,7 @@ static intel_x86_umask_t skx_unc_m3_rxc_flits_slot_bl[]={ static intel_x86_umask_t skx_unc_m3_rxc_flit_gen_hdr1[]={ { .uname = "ACCUM", .ucode = 0x100, - .udesc = "Flit Gen - Header 1 -- Acumullate", + .udesc = "Flit Gen - Header 1 -- Accumulate", }, { .uname = "ACCUM_READ", .ucode = 0x200, @@ -3277,7 +3277,7 @@ static intel_x86_entry_t intel_skx_unc_m3_pe[]={ }, { .name = "UNC_M3_TXC_AD_SPEC_ARB_NO_OTHER_PEND", .code = 0x32, - .desc = "AD speculative arb request asserted due to no other channel being active (have a valid entry but dont have credits to sendd)", + .desc = "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)", .modmsk = SKX_UNC_M3UPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -3343,7 +3343,7 @@ static intel_x86_entry_t intel_skx_unc_m3_pe[]={ }, { .name = "UNC_M3_TXC_BL_SPEC_ARB_NO_OTHER_PEND", .code = 0x37, - .desc = "BL speculative arb request asserted due to no other channel being active (have a valid entry but dont have credits to sendd)", + .desc = "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)", .modmsk = SKX_UNC_M3UPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, diff --git a/lib/events/intel_skx_unc_pcu_events.h b/lib/events/intel_skx_unc_pcu_events.h index 42b8a58..131a4f7 100644 --- a/lib/events/intel_skx_unc_pcu_events.h +++ b/lib/events/intel_skx_unc_pcu_events.h @@ -170,7 +170,7 @@ static intel_x86_entry_t intel_skx_unc_p_pe[]={ }, { .name = "UNC_P_PROCHOT_INTERNAL_CYCLES", .code = 0x9, - .desc = "Counts the number of cycles that we are in Interal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.", + .desc = "Counts the number of cycles that we are in internal PROCHOT mode. 
This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.", .modmsk = SKX_UNC_PCU_ATTRS, .cntmsk = 0xf, }, diff --git a/lib/events/intel_skx_unc_upi_events.h b/lib/events/intel_skx_unc_upi_events.h index ff12e93..2769cdf 100644 --- a/lib/events/intel_skx_unc_upi_events.h +++ b/lib/events/intel_skx_unc_upi_events.h @@ -882,7 +882,7 @@ static intel_x86_entry_t intel_skx_unc_upi_pe[]={ }, { .name = "UNC_UPI_DIRECT_ATTEMPTS", .code = 0x12, - .desc = "Counts the number of Data Response(DRS) packets UPI attempted to send directly to the core or to a different UPI link. Note: This only counts attempts on valid candidates such as DRS packets destined for CHAs.", + .desc = "Counts the number of Data Response(DRS) packets UPI attempted to send directly to the core or to a different UPI link. Note: This only counts attempts on valid candidates such as DRS packets destined for CHAs.", .modmsk = SKX_UNC_UPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, @@ -979,7 +979,7 @@ static intel_x86_entry_t intel_skx_unc_upi_pe[]={ }, { .name = "UNC_UPI_RXL_BYPASSED", .code = 0x31, - .desc = "Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transfered, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.", + .desc = "Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.", .modmsk = SKX_UNC_UPI_ATTRS, .cntmsk = 0xf, .ngrp = 1, diff --git a/lib/events/mips_74k_events.h b/lib/events/mips_74k_events.h index 7615ebc..35e05c8 100644 --- a/lib/events/mips_74k_events.h +++ b/lib/events/mips_74k_events.h @@ -489,7 +489,7 @@ static const mips_entry_t mips_74k_pe []={ { .name = "NO_INSTRUCTIONS_FROM_REPLAY_CYCLES", .code = 0xb8, - .desc = "Number of cycles no instructions graduated from the time the pipe was flushed because of a replay until the first new instruction graduates. This is an indicator of the graduation bandwidth loss due to replay. Often times this replay is a result of event 25 and therefor an indicator of bandwidth lost due to cache misses", + .desc = "Number of cycles no instructions graduated from the time the pipe was flushed because of a replay until the first new instruction graduates. This is an indicator of the graduation bandwidth loss due to replay. 
Often times this replay is a result of event 25 and therefore an indicator of bandwidth lost due to cache misses", }, { .name = "MISPREDICTION_BRANCH_NODELAY_CYCLES", diff --git a/lib/events/power4_events.h b/lib/events/power4_events.h index 479eac2..8eeda17 100644 --- a/lib/events/power4_events.h +++ b/lib/events/power4_events.h @@ -450,7 +450,7 @@ static const pme_power_entry_t power4_pe[] = { .pme_name = "PM_GRP_DISP_SUCCESS", .pme_code = 0x5001, .pme_short_desc = "Group dispatch success", - .pme_long_desc = "Number of groups sucessfully dispatched (not rejected)", + .pme_long_desc = "Number of groups successfully dispatched (not rejected)", }, [ POWER4_PME_PM_LSU1_LDF ] = { .pme_name = "PM_LSU1_LDF", @@ -1542,7 +1542,7 @@ static const pme_power_entry_t power4_pe[] = { .pme_name = "PM_LARX_LSU0", .pme_code = 0xc73, .pme_short_desc = "Larx executed on LSU0", - .pme_long_desc = "A larx (lwarx or ldarx) was executed on side 0 (there is no coresponding unit 1 event since larx instructions can only execute on unit 0)", + .pme_long_desc = "A larx (lwarx or ldarx) was executed on side 0 (there is no corresponding unit 1 event since larx instructions can only execute on unit 0)", }, [ POWER4_PME_PM_GCT_EMPTY_CYC ] = { .pme_name = "PM_GCT_EMPTY_CYC", diff --git a/lib/events/power5+_events.h b/lib/events/power5+_events.h index 4d575fd..3b5c1ac 100644 --- a/lib/events/power5+_events.h +++ b/lib/events/power5+_events.h @@ -983,7 +983,7 @@ static const pme_power_entry_t power5p_pe[] = { .pme_name = "PM_FPU_FEST", .pme_code = 0x1010a8, .pme_short_desc = "FPU executed FEST instruction", - .pme_long_desc = "The floating point unit has executed an estimate instructions. This could be fres* or frsqrte* where XYZ* means XYZ or XYZ. Combined Unit 0 + Unit 1.", + .pme_long_desc = "The floating point unit has executed an estimate instructions. This could be fres* or frsqrte* where XYZ* means XYZ or XYZ. Combined Unit 0 + Unit 1.", }, [ POWER5p_PME_PM_FAB_M1toP1_SIDECAR_EMPTY ] = { .pme_name = "PM_FAB_M1toP1_SIDECAR_EMPTY", @@ -1457,7 +1457,7 @@ static const pme_power_entry_t power5p_pe[] = { .pme_name = "PM_FAB_HOLDtoNN_EMPTY", .pme_code = 0x722e7, .pme_short_desc = "Hold buffer to NN empty", - .pme_long_desc = "Fabric cyles when the Next Node out hold-buffers are emtpy. The signal is delivered at FBC speed and the count must be scaled accordingly.", + .pme_long_desc = "Fabric cyles when the Next Node out hold-buffers are empty. The signal is delivered at FBC speed and the count must be scaled accordingly.", }, [ POWER5p_PME_PM_DATA_FROM_LMEM ] = { .pme_name = "PM_DATA_FROM_LMEM", @@ -2135,7 +2135,7 @@ static const pme_power_entry_t power5p_pe[] = { .pme_name = "PM_GRP_DISP_SUCCESS", .pme_code = 0x300002, .pme_short_desc = "Group dispatch success", - .pme_long_desc = "Number of groups sucessfully dispatched (not rejected)", + .pme_long_desc = "Number of groups successfully dispatched (not rejected)", }, [ POWER5p_PME_PM_THRD_PRIO_DIFF_1or2_CYC ] = { .pme_name = "PM_THRD_PRIO_DIFF_1or2_CYC", @@ -2189,7 +2189,7 @@ static const pme_power_entry_t power5p_pe[] = { .pme_name = "PM_FAB_HOLDtoVN_EMPTY", .pme_code = 0x721e7, .pme_short_desc = "Hold buffer to VN empty", - .pme_long_desc = "Fabric cycles when the Vertical Node out hold-buffers are emtpy. The signal is delivered at FBC speed and the count must be scaled accordingly.", + .pme_long_desc = "Fabric cycles when the Vertical Node out hold-buffers are empty. 
The signal is delivered at FBC speed and the count must be scaled accordingly.", }, [ POWER5p_PME_PM_FPU1_FEST ] = { .pme_name = "PM_FPU1_FEST", @@ -2357,7 +2357,7 @@ static const pme_power_entry_t power5p_pe[] = { .pme_name = "PM_MEM_PW_CMPL", .pme_code = 0x724e6, .pme_short_desc = "Memory partial-write completed", - .pme_long_desc = "Number of Partial Writes completed. This event is sent from the Memory Controller clock domain and must be scaled accordingly.", + .pme_long_desc = "Number of Partial Writes completed. This event is sent from the Memory Controller clock domain and must be scaled accordingly.", }, [ POWER5p_PME_PM_THRD_PRIO_DIFF_minus5or6_CYC ] = { .pme_name = "PM_THRD_PRIO_DIFF_minus5or6_CYC", @@ -2807,7 +2807,7 @@ static const pme_power_entry_t power5p_pe[] = { .pme_name = "PM_MEM_NONSPEC_RD_CANCEL", .pme_code = 0x711c6, .pme_short_desc = "Non speculative memory read cancelled", - .pme_long_desc = "A non-speculative read was cancelled because the combined response indicated it was sourced from aother L2 or L3. This event is sent from the Memory Controller clock domain and must be scaled accordingly", + .pme_long_desc = "A non-speculative read was cancelled because the combined response indicated it was sourced from aother L2 or L3. This event is sent from the Memory Controller clock domain and must be scaled accordingly.", }, [ POWER5p_PME_PM_BR_PRED_CR_TA ] = { .pme_name = "PM_BR_PRED_CR_TA", @@ -2849,7 +2849,7 @@ static const pme_power_entry_t power5p_pe[] = { .pme_name = "PM_LSU0_DERAT_MISS", .pme_code = 0x800c2, .pme_short_desc = "LSU0 DERAT misses", - .pme_long_desc = "Total D-ERAT Misses by LSU0. Requests that miss the Derat are rejected and retried until the request hits in the Erat. This may result in multiple erat misses for the same instruction.", + .pme_long_desc = "Total D-ERAT Misses by LSU0. Requests that miss the Derat are rejected and retried until the request hits in the Erat. This may result in multiple erat misses for the same instruction.", }, [ POWER5p_PME_PM_FPU_STALL3 ] = { .pme_name = "PM_FPU_STALL3", diff --git a/lib/events/power5_events.h b/lib/events/power5_events.h index 683fe28..a65ff5a 100644 --- a/lib/events/power5_events.h +++ b/lib/events/power5_events.h @@ -974,7 +974,7 @@ static const pme_power_entry_t power5_pe[] = { .pme_name = "PM_FPU_FEST", .pme_code = 0x401090, .pme_short_desc = "FPU executed FEST instruction", - .pme_long_desc = "The floating point unit has executed an estimate instructions. This could be fres* or frsqrte* where XYZ* means XYZ or XYZ. Combined Unit 0 + Unit 1.", + .pme_long_desc = "The floating point unit has executed an estimate instructions. This could be fres* or frsqrte* where XYZ* means XYZ or XYZ. Combined Unit 0 + Unit 1.", }, [ POWER5_PME_PM_FAB_M1toP1_SIDECAR_EMPTY ] = { .pme_name = "PM_FAB_M1toP1_SIDECAR_EMPTY", @@ -1430,7 +1430,7 @@ static const pme_power_entry_t power5_pe[] = { .pme_name = "PM_FAB_HOLDtoNN_EMPTY", .pme_code = 0x722e7, .pme_short_desc = "Hold buffer to NN empty", - .pme_long_desc = "Fabric cyles when the Next Node out hold-buffers are emtpy. The signal is delivered at FBC speed and the count must be scaled accordingly.", + .pme_long_desc = "Fabric cyles when the Next Node out hold-buffers are empty. 
The signal is delivered at FBC speed and the count must be scaled accordingly.", }, [ POWER5_PME_PM_DATA_FROM_LMEM ] = { .pme_name = "PM_DATA_FROM_LMEM", @@ -2084,7 +2084,7 @@ static const pme_power_entry_t power5_pe[] = { .pme_name = "PM_GRP_DISP_SUCCESS", .pme_code = 0x300002, .pme_short_desc = "Group dispatch success", - .pme_long_desc = "Number of groups sucessfully dispatched (not rejected)", + .pme_long_desc = "Number of groups successfully dispatched (not rejected)", }, [ POWER5_PME_PM_THRD_PRIO_DIFF_1or2_CYC ] = { .pme_name = "PM_THRD_PRIO_DIFF_1or2_CYC", @@ -2138,7 +2138,7 @@ static const pme_power_entry_t power5_pe[] = { .pme_name = "PM_FAB_HOLDtoVN_EMPTY", .pme_code = 0x721e7, .pme_short_desc = "Hold buffer to VN empty", - .pme_long_desc = "Fabric cycles when the Vertical Node out hold-buffers are emtpy. The signal is delivered at FBC speed and the count must be scaled accordingly.", + .pme_long_desc = "Fabric cycles when the Vertical Node out hold-buffers are empty. The signal is delivered at FBC speed and the count must be scaled accordingly.", }, [ POWER5_PME_PM_SNOOP_RD_RETRY_RQ ] = { .pme_name = "PM_SNOOP_RD_RETRY_RQ", @@ -2294,7 +2294,7 @@ static const pme_power_entry_t power5_pe[] = { .pme_name = "PM_MEM_PW_CMPL", .pme_code = 0x724e6, .pme_short_desc = "Memory partial-write completed", - .pme_long_desc = "Number of Partial Writes completed. This event is sent from the Memory Controller clock domain and must be scaled accordingly.", + .pme_long_desc = "Number of Partial Writes completed. This event is sent from the Memory Controller clock domain and must be scaled accordingly.", }, [ POWER5_PME_PM_THRD_PRIO_DIFF_minus5or6_CYC ] = { .pme_name = "PM_THRD_PRIO_DIFF_minus5or6_CYC", @@ -2738,7 +2738,7 @@ static const pme_power_entry_t power5_pe[] = { .pme_name = "PM_MEM_NONSPEC_RD_CANCEL", .pme_code = 0x711c6, .pme_short_desc = "Non speculative memory read cancelled", - .pme_long_desc = "A non-speculative read was cancelled because the combined response indicated it was sourced from aother L2 or L3. This event is sent from the Memory Controller clock domain and must be scaled accordingly", + .pme_long_desc = "A non-speculative read was cancelled because the combined response indicated it was sourced from aother L2 or L3. This event is sent from the Memory Controller clock domain and must be scaled accordingly.", }, [ POWER5_PME_PM_BR_PRED_CR_TA ] = { .pme_name = "PM_BR_PRED_CR_TA", @@ -2780,7 +2780,7 @@ static const pme_power_entry_t power5_pe[] = { .pme_name = "PM_LSU0_DERAT_MISS", .pme_code = 0x800c2, .pme_short_desc = "LSU0 DERAT misses", - .pme_long_desc = "Total D-ERAT Misses by LSU0. Requests that miss the Derat are rejected and retried until the request hits in the Erat. This may result in multiple erat misses for the same instruction.", + .pme_long_desc = "Total D-ERAT Misses by LSU0. Requests that miss the Derat are rejected and retried until the request hits in the Erat. 
This may result in multiple erat misses for the same instruction.", }, [ POWER5_PME_PM_L2SB_RCLD_DISP ] = { .pme_name = "PM_L2SB_RCLD_DISP", diff --git a/lib/events/power6_events.h b/lib/events/power6_events.h index 90bd26a..05b2f59 100644 --- a/lib/events/power6_events.h +++ b/lib/events/power6_events.h @@ -800,8 +800,8 @@ static const pme_power_entry_t power6_pe[] = { [ POWER6_PME_PM_LSU_FLUSH_ALIGN ] = { .pme_name = "PM_LSU_FLUSH_ALIGN", .pme_code = 0x220cc, - .pme_short_desc = "Flush caused by alignement exception", - .pme_long_desc = "Flush caused by alignement exception", + .pme_short_desc = "Flush caused by alignment exception", + .pme_long_desc = "Flush caused by alignment exception", }, [ POWER6_PME_PM_DPU_HELD_FPU_CR ] = { .pme_name = "PM_DPU_HELD_FPU_CR", @@ -3494,8 +3494,8 @@ static const pme_power_entry_t power6_pe[] = { [ POWER6_PME_PM_FAB_ADDR_COLLISION ] = { .pme_name = "PM_FAB_ADDR_COLLISION", .pme_code = 0x5018e, - .pme_short_desc = "local node launch collision with off-node address ", - .pme_long_desc = "local node launch collision with off-node address ", + .pme_short_desc = "local node launch collision with off-node address", + .pme_long_desc = "local node launch collision with off-node address", }, [ POWER6_PME_PM_MRK_FXU_FIN ] = { .pme_name = "PM_MRK_FXU_FIN", diff --git a/lib/events/power7_events.h b/lib/events/power7_events.h index 7bfdf15..8def11b 100644 --- a/lib/events/power7_events.h +++ b/lib/events/power7_events.h @@ -598,8 +598,8 @@ static const pme_power_entry_t power7_pe[] = { [ POWER7_PME_PM_VSU0_16FLOP ] = { .pme_name = "PM_VSU0_16FLOP", .pme_code = 0xa0a4, - .pme_short_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt) ", - .pme_long_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt) ", + .pme_short_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt)", + .pme_long_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt)", }, [ POWER7_PME_PM_MRK_LSU_DERAT_MISS ] = { .pme_name = "PM_MRK_LSU_DERAT_MISS", @@ -1792,8 +1792,8 @@ static const pme_power_entry_t power7_pe[] = { [ POWER7_PME_PM_IC_BANK_CONFLICT ] = { .pme_name = "PM_IC_BANK_CONFLICT", .pme_code = 0x4082, - .pme_short_desc = "Read blocked due to interleave conflict. ", - .pme_long_desc = "Read blocked due to interleave conflict. 
", + .pme_short_desc = "Read blocked due to interleave conflict.", + .pme_long_desc = "Read blocked due to interleave conflict.", }, [ POWER7_PME_PM_BR_MPRED_CR_TA ] = { .pme_name = "PM_BR_MPRED_CR_TA", @@ -1984,8 +1984,8 @@ static const pme_power_entry_t power7_pe[] = { [ POWER7_PME_PM_VSU1_2FLOP_DOUBLE ] = { .pme_name = "PM_VSU1_2FLOP_DOUBLE", .pme_code = 0xa08e, - .pme_short_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp) ", - .pme_long_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp) ", + .pme_short_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp)", + .pme_long_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp)", }, [ POWER7_PME_PM_THRD_PRIO_6_7_CYC ] = { .pme_name = "PM_THRD_PRIO_6_7_CYC", @@ -3143,7 +3143,7 @@ static const pme_power_entry_t power7_pe[] = { .pme_name = "PM_IC_PREF_WRITE", .pme_code = 0x408e, .pme_short_desc = "Instruction prefetch written into IL1", - .pme_long_desc = "Number of Instruction Cache entries written because of prefetch. Prefetch entries are marked least recently used and are candidates for eviction if they are not needed to satify a demand fetch.", + .pme_long_desc = "Number of Instruction Cache entries written because of prefetch. Prefetch entries are marked least recently used and are candidates for eviction if they are not needed to satisfy a demand fetch.", }, [ POWER7_PME_PM_BR_PRED ] = { .pme_name = "PM_BR_PRED", @@ -3670,8 +3670,8 @@ static const pme_power_entry_t power7_pe[] = { [ POWER7_PME_PM_VSU0_2FLOP_DOUBLE ] = { .pme_name = "PM_VSU0_2FLOP_DOUBLE", .pme_code = 0xa08c, - .pme_short_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp) ", - .pme_long_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp) ", + .pme_short_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp)", + .pme_long_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp)", }, [ POWER7_PME_PM_LSU_DC_PREF_STRIDED_STREAM_CONFIRM ] = { .pme_name = "PM_LSU_DC_PREF_STRIDED_STREAM_CONFIRM", diff --git a/lib/events/power8_events.h b/lib/events/power8_events.h index 92337f8..54f3d9e 100644 --- a/lib/events/power8_events.h +++ b/lib/events/power8_events.h @@ -1122,7 +1122,7 @@ static const pme_power_entry_t power8_pe[] = { .pme_name = "PM_ALL_CHIP_PUMP_CPRED", .pme_code = 0x610050, .pme_short_desc = "Initial and Final Pump Scope was chip pump (prediction=correct) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)", - .pme_long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types ( demand load,data,inst prefetch,inst fetch,xlate (I or d)", + .pme_long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types (demand load,data,inst prefetch,inst fetch,xlate (I or d)", }, [ POWER8_PME_PM_ALL_GRP_PUMP_CPRED ] = { .pme_name = "PM_ALL_GRP_PUMP_CPRED", @@ -1355,14 +1355,14 @@ static const pme_power_entry_t 
power8_pe[] = { [ POWER8_PME_PM_BR_UNCOND_BR0 ] = { .pme_name = "PM_BR_UNCOND_BR0", .pme_code = 0x40a0, - .pme_short_desc = "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.", - .pme_long_desc = "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.", + .pme_short_desc = "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was converted to a Resolve.", + .pme_long_desc = "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was converted to a Resolve.", }, [ POWER8_PME_PM_BR_UNCOND_BR1 ] = { .pme_name = "PM_BR_UNCOND_BR1", .pme_code = 0x40a2, - .pme_short_desc = "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.", - .pme_long_desc = "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.", + .pme_short_desc = "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was converted to a Resolve.", + .pme_long_desc = "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. 
This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was converted to a Resolve.", }, [ POWER8_PME_PM_BR_UNCOND_CMPL ] = { .pme_name = "PM_BR_UNCOND_CMPL", @@ -1386,7 +1386,7 @@ static const pme_power_entry_t power8_pe[] = { .pme_name = "PM_CHIP_PUMP_CPRED", .pme_code = 0x10050, .pme_short_desc = "Initial and Final Pump Scope was chip pump (prediction=correct) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)", - .pme_long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types ( demand load,data,inst prefetch,inst fetch,xlate (I or d).", + .pme_long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types (demand load,data,inst prefetch,inst fetch,xlate (I or d).", }, [ POWER8_PME_PM_CLB_HELD ] = { .pme_name = "PM_CLB_HELD", @@ -1427,8 +1427,8 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_CMPLU_STALL_DMISS_L21_L31 ] = { .pme_name = "PM_CMPLU_STALL_DMISS_L21_L31", .pme_code = 0x2c018, - .pme_short_desc = "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3)", - .pme_long_desc = "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3).", + .pme_short_desc = "Completion stall by Dcache miss which resolved on chip (excluding local L2/L3)", + .pme_long_desc = "Completion stall by Dcache miss which resolved on chip (excluding local L2/L3).", }, [ POWER8_PME_PM_CMPLU_STALL_DMISS_L2L3 ] = { .pme_name = "PM_CMPLU_STALL_DMISS_L2L3", @@ -1458,7 +1458,7 @@ static const pme_power_entry_t power8_pe[] = { .pme_name = "PM_CMPLU_STALL_DMISS_REMOTE", .pme_code = 0x2c01c, .pme_short_desc = "Completion stall by Dcache miss which resolved from remote chip (cache or memory)", - .pme_long_desc = "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3).", + .pme_long_desc = "Completion stall by Dcache miss which resolved on chip (excluding local L2/L3).", }, [ POWER8_PME_PM_CMPLU_STALL_ERAT_MISS ] = { .pme_name = "PM_CMPLU_STALL_ERAT_MISS", @@ -1817,14 +1817,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_DATA_ALL_FROM_RL4 ] = { .pme_name = "PM_DATA_ALL_FROM_RL4", .pme_code = 0x62c04a, - .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either demand loads or data prefetch", - .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1", + .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to either demand loads or data prefetch", + .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1", }, [ POWER8_PME_PM_DATA_ALL_FROM_RMEM ] = { .pme_name = "PM_DATA_ALL_FROM_RMEM", .pme_code = 0x63c04a, - .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either demand loads or data prefetch", - .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] 
is 1", + .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to either demand loads or data prefetch", + .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1", }, [ POWER8_PME_PM_DATA_ALL_GRP_PUMP_CPRED ] = { .pme_name = "PM_DATA_ALL_GRP_PUMP_CPRED", @@ -2069,14 +2069,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_DATA_FROM_RL4 ] = { .pme_name = "PM_DATA_FROM_RL4", .pme_code = 0x2c04a, - .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a demand load", - .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.", + .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to a demand load", + .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.", }, [ POWER8_PME_PM_DATA_FROM_RMEM ] = { .pme_name = "PM_DATA_FROM_RMEM", .pme_code = 0x3c04a, - .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a demand load", - .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.", + .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to a demand load", + .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.", }, [ POWER8_PME_PM_DATA_GRP_PUMP_CPRED ] = { .pme_name = "PM_DATA_GRP_PUMP_CPRED", @@ -2453,14 +2453,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_DPTEG_FROM_RL4 ] = { .pme_name = "PM_DPTEG_FROM_RL4", .pme_code = 0x2e04a, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request.", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a data side request", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a data side request.", }, [ POWER8_PME_PM_DPTEG_FROM_RMEM ] = { .pme_name = "PM_DPTEG_FROM_RMEM", .pme_code = 0x3e04a, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request.", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a data side request", + 
.pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a data side request.", }, [ POWER8_PME_PM_DSLB_MISS ] = { .pme_name = "PM_DSLB_MISS", @@ -2928,7 +2928,7 @@ static const pme_power_entry_t power8_pe[] = { .pme_name = "PM_IBUF_FULL_CYC", .pme_code = 0x4086, .pme_short_desc = "Cycles No room in ibuff", - .pme_long_desc = "Cycles No room in ibufffully qualified tranfer (if5 valid).", + .pme_long_desc = "Cycles No room in ibufffully qualified transfer (if5 valid).", }, [ POWER8_PME_PM_IC_DEMAND_CYC ] = { .pme_name = "PM_IC_DEMAND_CYC", @@ -2939,14 +2939,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_IC_DEMAND_L2_BHT_REDIRECT ] = { .pme_name = "PM_IC_DEMAND_L2_BHT_REDIRECT", .pme_code = 0x4098, - .pme_short_desc = "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles)", - .pme_long_desc = "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles)", + .pme_short_desc = "L2 I cache demand request due to BHT redirect, branch redirect (2 bubbles 3 cycles)", + .pme_long_desc = "L2 I cache demand request due to BHT redirect, branch redirect (2 bubbles 3 cycles)", }, [ POWER8_PME_PM_IC_DEMAND_L2_BR_REDIRECT ] = { .pme_name = "PM_IC_DEMAND_L2_BR_REDIRECT", .pme_code = 0x409a, - .pme_short_desc = "L2 I cache demand request due to branch Mispredict ( 15 cycle path)", - .pme_long_desc = "L2 I cache demand request due to branch Mispredict ( 15 cycle path)", + .pme_short_desc = "L2 I cache demand request due to branch Mispredict (15 cycle path)", + .pme_long_desc = "L2 I cache demand request due to branch Mispredict (15 cycle path)", }, [ POWER8_PME_PM_IC_DEMAND_REQ ] = { .pme_name = "PM_IC_DEMAND_REQ", @@ -3209,14 +3209,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_INST_ALL_FROM_RL4 ] = { .pme_name = "PM_INST_ALL_FROM_RL4", .pme_code = 0x52404a, - .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to instruction fetches and prefetches", - .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1", + .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to instruction fetches and prefetches", + .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1", }, [ POWER8_PME_PM_INST_ALL_FROM_RMEM ] = { .pme_name = "PM_INST_ALL_FROM_RMEM", .pme_code = 0x53404a, - .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to instruction fetches and prefetches", - .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1", + .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Remote) due to instruction fetches and prefetches", + .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Remote) due to either an instruction fetch or 
instruction fetch plus prefetch if MMCR1[17] is 1", }, [ POWER8_PME_PM_INST_ALL_GRP_PUMP_CPRED ] = { .pme_name = "PM_INST_ALL_GRP_PUMP_CPRED", @@ -3467,14 +3467,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_INST_FROM_RL4 ] = { .pme_name = "PM_INST_FROM_RL4", .pme_code = 0x2404a, - .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to an instruction fetch (not prefetch)", - .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .", + .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to an instruction fetch (not prefetch)", + .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .", }, [ POWER8_PME_PM_INST_FROM_RMEM ] = { .pme_name = "PM_INST_FROM_RMEM", .pme_code = 0x3404a, - .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to an instruction fetch (not prefetch)", - .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .", + .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Remote) due to an instruction fetch (not prefetch)", + .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .", }, [ POWER8_PME_PM_INST_GRP_PUMP_CPRED ] = { .pme_name = "PM_INST_GRP_PUMP_CPRED", @@ -3497,7 +3497,7 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_INST_IMC_MATCH_CMPL ] = { .pme_name = "PM_INST_IMC_MATCH_CMPL", .pme_code = 0x1003a, - .pme_short_desc = "IMC Match Count ( Not architected in P8)", + .pme_short_desc = "IMC Match Count (Not architected in P8)", .pme_long_desc = "IMC Match Count.", }, [ POWER8_PME_PM_INST_IMC_MATCH_DISP ] = { @@ -3719,14 +3719,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_IPTEG_FROM_RL4 ] = { .pme_name = "PM_IPTEG_FROM_RL4", .pme_code = 0x2504a, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a instruction side request", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a instruction side request.", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a instruction side request", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a instruction side request.", }, [ POWER8_PME_PM_IPTEG_FROM_RMEM ] = { .pme_name = "PM_IPTEG_FROM_RMEM", .pme_code = 0x3504a, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a instruction side request", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from 
another chip's memory on the same Node or Group ( Remote) due to a instruction side request.", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a instruction side request", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a instruction side request.", }, [ POWER8_PME_PM_ISIDE_DISP ] = { .pme_name = "PM_ISIDE_DISP", @@ -3923,8 +3923,8 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_L1_ICACHE_RELOADED_PREF ] = { .pme_name = "PM_L1_ICACHE_RELOADED_PREF", .pme_code = 0x30068, - .pme_short_desc = "Counts all Icache prefetch reloads ( includes demand turned into prefetch)", - .pme_long_desc = "Counts all Icache prefetch reloads ( includes demand turned into prefetch).", + .pme_short_desc = "Counts all Icache prefetch reloads (includes demand turned into prefetch)", + .pme_long_desc = "Counts all Icache prefetch reloads (includes demand turned into prefetch).", }, [ POWER8_PME_PM_L2_CASTOUT_MOD ] = { .pme_name = "PM_L2_CASTOUT_MOD", @@ -4181,8 +4181,8 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_L3_CO ] = { .pme_name = "PM_L3_CO", .pme_code = 0x438088, - .pme_short_desc = "l3 castout occuring ( does not include casthrough or log writes (cinj/dmaw)", - .pme_long_desc = "l3 castout occuring ( does not include casthrough or log writes (cinj/dmaw)", + .pme_short_desc = "l3 castout occurring (does not include casthrough or log writes (cinj/dmaw)", + .pme_long_desc = "l3 castout occurring (does not include casthrough or log writes (cinj/dmaw)", }, [ POWER8_PME_PM_L3_CO0_ALLOC ] = { .pme_name = "PM_L3_CO0_ALLOC", @@ -4199,8 +4199,8 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_L3_CO_L31 ] = { .pme_name = "PM_L3_CO_L31", .pme_code = 0x28086, - .pme_short_desc = "L3 CO to L3.1 OR of port 0 and 1 ( lossy)", - .pme_long_desc = "L3 CO to L3.1 OR of port 0 and 1 ( lossy)", + .pme_short_desc = "L3 CO to L3.1 OR of port 0 and 1 (lossy)", + .pme_long_desc = "L3 CO to L3.1 OR of port 0 and 1 (lossy)", }, [ POWER8_PME_PM_L3_CO_LCO ] = { .pme_name = "PM_L3_CO_LCO", @@ -4211,14 +4211,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_L3_CO_MEM ] = { .pme_name = "PM_L3_CO_MEM", .pme_code = 0x28084, - .pme_short_desc = "L3 CO to memory OR of port 0 and 1 ( lossy)", - .pme_long_desc = "L3 CO to memory OR of port 0 and 1 ( lossy)", + .pme_short_desc = "L3 CO to memory OR of port 0 and 1 (lossy)", + .pme_long_desc = "L3 CO to memory OR of port 0 and 1 (lossy)", }, [ POWER8_PME_PM_L3_CO_MEPF ] = { .pme_name = "PM_L3_CO_MEPF", .pme_code = 0x18082, - .pme_short_desc = "L3 CO of line in Mep state ( includes casthrough", - .pme_long_desc = "L3 CO of line in Mep state ( includes casthrough", + .pme_short_desc = "L3 CO of line in Mep state (includes casthrough)", + .pme_long_desc = "L3 CO of line in Mep state (includes casthrough)", }, [ POWER8_PME_PM_L3_GRP_GUESS_CORRECT ] = { .pme_name = "PM_L3_GRP_GUESS_CORRECT", @@ -4367,8 +4367,8 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_L3_P0_SN_INV ] = { .pme_name = "PM_L3_P0_SN_INV", .pme_code = 0x118080, - .pme_short_desc = "Port0 snooper detects someone doing a store to a line thats Sx", - .pme_long_desc = "Port0 snooper detects someone doing a store to a line thats Sx", + .pme_short_desc = "Port0 snooper detects someone doing a store to a line that is Sx", + .pme_long_desc = "Port0 snooper detects someone doing a store 
to a line that is Sx", }, [ POWER8_PME_PM_L3_P0_SN_MISS ] = { .pme_name = "PM_L3_P0_SN_MISS", @@ -4445,8 +4445,8 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_L3_P1_SN_INV ] = { .pme_name = "PM_L3_P1_SN_INV", .pme_code = 0x118082, - .pme_short_desc = "Port1 snooper detects someone doing a store to a line thats Sx", - .pme_long_desc = "Port1 snooper detects someone doing a store to a line thats Sx", + .pme_short_desc = "Port1 snooper detects someone doing a store to a line that is Sx", + .pme_long_desc = "Port1 snooper detects someone doing a store to a line that is Sx", }, [ POWER8_PME_PM_L3_P1_SN_MISS ] = { .pme_name = "PM_L3_P1_SN_MISS", @@ -5207,8 +5207,8 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_LSU_REJECT_LMQ_FULL ] = { .pme_name = "PM_LSU_REJECT_LMQ_FULL", .pme_code = 0x1e05c, - .pme_short_desc = "LSU reject due to LMQ full ( 4 per cycle)", - .pme_long_desc = "LSU reject due to LMQ full ( 4 per cycle).", + .pme_short_desc = "LSU reject due to LMQ full (4 per cycle)", + .pme_long_desc = "LSU reject due to LMQ full (4 per cycle).", }, [ POWER8_PME_PM_LSU_SET_MPRED ] = { .pme_name = "PM_LSU_SET_MPRED", @@ -5711,26 +5711,26 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_MRK_DATA_FROM_RL4 ] = { .pme_name = "PM_MRK_DATA_FROM_RL4", .pme_code = 0x2d14a, - .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load", - .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load.", + .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to a marked load", + .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to a marked load.", }, [ POWER8_PME_PM_MRK_DATA_FROM_RL4_CYC ] = { .pme_name = "PM_MRK_DATA_FROM_RL4_CYC", .pme_code = 0x4d12a, - .pme_short_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load", - .pme_long_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load.", + .pme_short_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group (Remote) due to a marked load", + .pme_long_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group (Remote) due to a marked load.", }, [ POWER8_PME_PM_MRK_DATA_FROM_RMEM ] = { .pme_name = "PM_MRK_DATA_FROM_RMEM", .pme_code = 0x3d14a, - .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load", - .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load.", + .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to a marked load", + .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to a marked load.", }, [ POWER8_PME_PM_MRK_DATA_FROM_RMEM_CYC ] = { .pme_name = "PM_MRK_DATA_FROM_RMEM_CYC", .pme_code = 0x2c12a, - .pme_short_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load", - .pme_long_desc = "Duration in cycles to reload from another chip's memory on 
the same Node or Group ( Remote) due to a marked load.", + .pme_short_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group (Remote) due to a marked load", + .pme_long_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group (Remote) due to a marked load.", }, [ POWER8_PME_PM_MRK_DCACHE_RELOAD_INTV ] = { .pme_name = "PM_MRK_DCACHE_RELOAD_INTV", @@ -5945,14 +5945,14 @@ static const pme_power_entry_t power8_pe[] = { [ POWER8_PME_PM_MRK_DPTEG_FROM_RL4 ] = { .pme_name = "PM_MRK_DPTEG_FROM_RL4", .pme_code = 0x2f14a, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request.", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a marked data side request", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a marked data side request.", }, [ POWER8_PME_PM_MRK_DPTEG_FROM_RMEM ] = { .pme_name = "PM_MRK_DPTEG_FROM_RMEM", .pme_code = 0x3f14a, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request.", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a marked data side request", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a marked data side request.", }, [ POWER8_PME_PM_MRK_DTLB_MISS ] = { .pme_name = "PM_MRK_DTLB_MISS", @@ -7176,7 +7176,7 @@ static const pme_power_entry_t power8_pe[] = { .pme_name = "PM_VSU0_1FLOP", .pme_code = 0xa080, .pme_short_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished", - .pme_long_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finishedDecode into 1,2,4 FLOP according to instr IOP, multiplied by #vector elements according to route( eg x1, x2, x4) Only if instr sends finish to ISU", + .pme_long_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finishedDecode into 1,2,4 FLOP according to instr IOP, multiplied by #vector elements according to route(eg x1, x2, x4) Only if instr sends finish to ISU", }, [ POWER8_PME_PM_VSU0_2FLOP ] = { .pme_name = "PM_VSU0_2FLOP", diff --git a/lib/events/power9_events.h b/lib/events/power9_events.h index f352ace..40412a0 100644 --- a/lib/events/power9_events.h +++ b/lib/events/power9_events.h @@ -1216,8 +1216,8 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_CMPLU_STALL_DMISS_L21_L31 ] = { .pme_name = "PM_CMPLU_STALL_DMISS_L21_L31", .pme_code = 0x000002C018, - .pme_short_desc = "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3)", - .pme_long_desc = "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3)", + .pme_short_desc = "Completion stall by Dcache miss which resolved on chip (excluding local L2/L3)", + .pme_long_desc = "Completion 
stall by Dcache miss which resolved on chip (excluding local L2/L3)", }, [ POWER9_PME_PM_CMPLU_STALL_DMISS_L2L3_CONFLICT ] = { .pme_name = "PM_CMPLU_STALL_DMISS_L2L3_CONFLICT", @@ -1829,14 +1829,14 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_DATA_FROM_RL4 ] = { .pme_name = "PM_DATA_FROM_RL4", .pme_code = 0x000002C04A, - .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a demand load", - .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a demand load", + .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to a demand load", + .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to a demand load", }, [ POWER9_PME_PM_DATA_FROM_RMEM ] = { .pme_name = "PM_DATA_FROM_RMEM", .pme_code = 0x000003C04A, - .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a demand load", - .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a demand load", + .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to a demand load", + .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to a demand load", }, [ POWER9_PME_PM_DATA_GRP_PUMP_CPRED ] = { .pme_name = "PM_DATA_GRP_PUMP_CPRED", @@ -2243,14 +2243,14 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_DPTEG_FROM_RL4 ] = { .pme_name = "PM_DPTEG_FROM_RL4", .pme_code = 0x000002E04A, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request.", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request. When using Radix Page Translation, this count excludes PDE reloads. Only PTE reloads are included", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a data side request.", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a data side request. When using Radix Page Translation, this count excludes PDE reloads. Only PTE reloads are included", }, [ POWER9_PME_PM_DPTEG_FROM_RMEM ] = { .pme_name = "PM_DPTEG_FROM_RMEM", .pme_code = 0x000003E04A, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request.", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request. When using Radix Page Translation, this count excludes PDE reloads. Only PTE reloads are included", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a data side request.", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a data side request. When using Radix Page Translation, this count excludes PDE reloads. 
Only PTE reloads are included", }, [ POWER9_PME_PM_DSIDE_L2MEMACC ] = { .pme_name = "PM_DSIDE_L2MEMACC", @@ -2507,14 +2507,14 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_IC_DEMAND_L2_BHT_REDIRECT ] = { .pme_name = "PM_IC_DEMAND_L2_BHT_REDIRECT", .pme_code = 0x0000004098, - .pme_short_desc = "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles)", - .pme_long_desc = "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles)", + .pme_short_desc = "L2 I cache demand request due to BHT redirect, branch redirect (2 bubbles 3 cycles)", + .pme_long_desc = "L2 I cache demand request due to BHT redirect, branch redirect (2 bubbles 3 cycles)", }, [ POWER9_PME_PM_IC_DEMAND_L2_BR_REDIRECT ] = { .pme_name = "PM_IC_DEMAND_L2_BR_REDIRECT", .pme_code = 0x0000004898, - .pme_short_desc = "L2 I cache demand request due to branch Mispredict ( 15 cycle path)", - .pme_long_desc = "L2 I cache demand request due to branch Mispredict ( 15 cycle path)", + .pme_short_desc = "L2 I cache demand request due to branch Mispredict (15 cycle path)", + .pme_long_desc = "L2 I cache demand request due to branch Mispredict (15 cycle path)", }, [ POWER9_PME_PM_IC_DEMAND_REQ ] = { .pme_name = "PM_IC_DEMAND_REQ", @@ -2881,14 +2881,14 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_INST_FROM_RL4 ] = { .pme_name = "PM_INST_FROM_RL4", .pme_code = 0x000002404A, - .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to an instruction fetch (not prefetch)", - .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to an instruction fetch (not prefetch)", + .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to an instruction fetch (not prefetch)", + .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to an instruction fetch (not prefetch)", }, [ POWER9_PME_PM_INST_FROM_RMEM ] = { .pme_name = "PM_INST_FROM_RMEM", .pme_code = 0x000003404A, - .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to an instruction fetch (not prefetch)", - .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to an instruction fetch (not prefetch)", + .pme_short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Remote) due to an instruction fetch (not prefetch)", + .pme_long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Remote) due to an instruction fetch (not prefetch)", }, [ POWER9_PME_PM_INST_GRP_PUMP_CPRED ] = { .pme_name = "PM_INST_GRP_PUMP_CPRED", @@ -3109,14 +3109,14 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_IPTEG_FROM_RL4 ] = { .pme_name = "PM_IPTEG_FROM_RL4", .pme_code = 0x000002504A, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a instruction side request", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a instruction side request", + .pme_short_desc = "A Page Table Entry was loaded into the TLB 
from another chip's L4 on the same Node or Group (Remote) due to a instruction side request", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a instruction side request", }, [ POWER9_PME_PM_IPTEG_FROM_RMEM ] = { .pme_name = "PM_IPTEG_FROM_RMEM", .pme_code = 0x000003504A, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a instruction side request", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a instruction side request", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a instruction side request", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a instruction side request", }, [ POWER9_PME_PM_ISIDE_DISP_FAIL_ADDR ] = { .pme_name = "PM_ISIDE_DISP_FAIL_ADDR", @@ -3241,8 +3241,8 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_L1_ICACHE_RELOADED_PREF ] = { .pme_name = "PM_L1_ICACHE_RELOADED_PREF", .pme_code = 0x0000030068, - .pme_short_desc = "Counts all Icache prefetch reloads ( includes demand turned into prefetch)", - .pme_long_desc = "Counts all Icache prefetch reloads ( includes demand turned into prefetch)", + .pme_short_desc = "Counts all Icache prefetch reloads (includes demand turned into prefetch)", + .pme_long_desc = "Counts all Icache prefetch reloads (includes demand turned into prefetch)", }, [ POWER9_PME_PM_L1PF_L2MEMACC ] = { .pme_name = "PM_L1PF_L2MEMACC", @@ -5006,26 +5006,26 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_MRK_DATA_FROM_RL4_CYC ] = { .pme_name = "PM_MRK_DATA_FROM_RL4_CYC", .pme_code = 0x000004D12A, - .pme_short_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load", - .pme_long_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load", + .pme_short_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group (Remote) due to a marked load", + .pme_long_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group (Remote) due to a marked load", }, [ POWER9_PME_PM_MRK_DATA_FROM_RL4 ] = { .pme_name = "PM_MRK_DATA_FROM_RL4", .pme_code = 0x000003515C, - .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load", - .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load", + .pme_short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to a marked load", + .pme_long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group (Remote) due to a marked load", }, [ POWER9_PME_PM_MRK_DATA_FROM_RMEM_CYC ] = { .pme_name = "PM_MRK_DATA_FROM_RMEM_CYC", .pme_code = 0x000002C12A, - .pme_short_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load", - .pme_long_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load", + .pme_short_desc = "Duration in cycles to reload from another 
chip's memory on the same Node or Group (Remote) due to a marked load", + .pme_long_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group (Remote) due to a marked load", }, [ POWER9_PME_PM_MRK_DATA_FROM_RMEM ] = { .pme_name = "PM_MRK_DATA_FROM_RMEM", .pme_code = 0x000001D148, - .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load", - .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load", + .pme_short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to a marked load", + .pme_long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Remote) due to a marked load", }, [ POWER9_PME_PM_MRK_DCACHE_RELOAD_INTV ] = { .pme_name = "PM_MRK_DCACHE_RELOAD_INTV", @@ -5240,14 +5240,14 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_MRK_DPTEG_FROM_RL4 ] = { .pme_name = "PM_MRK_DPTEG_FROM_RL4", .pme_code = 0x000002F14A, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request.", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request. When using Radix Page Translation, this count excludes PDE reloads. Only PTE reloads are included", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a marked data side request.", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group (Remote) due to a marked data side request. When using Radix Page Translation, this count excludes PDE reloads. Only PTE reloads are included", }, [ POWER9_PME_PM_MRK_DPTEG_FROM_RMEM ] = { .pme_name = "PM_MRK_DPTEG_FROM_RMEM", .pme_code = 0x000003F14A, - .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request.", - .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request. When using Radix Page Translation, this count excludes PDE reloads. Only PTE reloads are included", + .pme_short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a marked data side request.", + .pme_long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Remote) due to a marked data side request. When using Radix Page Translation, this count excludes PDE reloads. 
Only PTE reloads are included", }, [ POWER9_PME_PM_MRK_DTLB_MISS_16G ] = { .pme_name = "PM_MRK_DTLB_MISS_16G", @@ -5576,8 +5576,8 @@ static const pme_power_entry_t power9_pe[] = { [ POWER9_PME_PM_MRK_ST_DONE_L2 ] = { .pme_name = "PM_MRK_ST_DONE_L2", .pme_code = 0x0000010134, - .pme_short_desc = "marked store completed in L2 ( RC machine done)", - .pme_long_desc = "marked store completed in L2 ( RC machine done)", + .pme_short_desc = "marked store completed in L2 (RC machine done)", + .pme_long_desc = "marked store completed in L2 (RC machine done)", }, [ POWER9_PME_PM_MRK_ST_DRAIN_TO_L2DISP_CYC ] = { .pme_name = "PM_MRK_ST_DRAIN_TO_L2DISP_CYC", diff --git a/lib/events/ppc970_events.h b/lib/events/ppc970_events.h index 38cc6c4..2a2c60b 100644 --- a/lib/events/ppc970_events.h +++ b/lib/events/ppc970_events.h @@ -385,7 +385,7 @@ static const pme_power_entry_t ppc970_pe[] = { .pme_name = "PM_GRP_DISP_SUCCESS", .pme_code = 0x5001, .pme_short_desc = "Group dispatch success", - .pme_long_desc = "Number of groups sucessfully dispatched (not rejected)", + .pme_long_desc = "Number of groups successfully dispatched (not rejected)", }, [ PPC970_PME_PM_LSU1_LDF ] = { .pme_name = "PM_LSU1_LDF", @@ -1375,7 +1375,7 @@ static const pme_power_entry_t ppc970_pe[] = { .pme_name = "PM_LARX_LSU0", .pme_code = 0x727, .pme_short_desc = "Larx executed on LSU0", - .pme_long_desc = "A larx (lwarx or ldarx) was executed on side 0 (there is no coresponding unit 1 event since larx instructions can only execute on unit 0)", + .pme_long_desc = "A larx (lwarx or ldarx) was executed on side 0 (there is no corresponding unit 1 event since larx instructions can only execute on unit 0)", }, [ PPC970_PME_PM_GCT_EMPTY_CYC ] = { .pme_name = "PM_GCT_EMPTY_CYC", diff --git a/lib/events/ppc970mp_events.h b/lib/events/ppc970mp_events.h index f01f61a..54690c8 100644 --- a/lib/events/ppc970mp_events.h +++ b/lib/events/ppc970mp_events.h @@ -406,7 +406,7 @@ static const pme_power_entry_t ppc970mp_pe[] = { .pme_name = "PM_GRP_DISP_SUCCESS", .pme_code = 0x5001, .pme_short_desc = "Group dispatch success", - .pme_long_desc = "Number of groups sucessfully dispatched (not rejected)", + .pme_long_desc = "Number of groups successfully dispatched (not rejected)", }, [ PPC970MP_PME_PM_LSU1_LDF ] = { .pme_name = "PM_LSU1_LDF", @@ -1474,7 +1474,7 @@ static const pme_power_entry_t ppc970mp_pe[] = { .pme_name = "PM_LARX_LSU0", .pme_code = 0x727, .pme_short_desc = "Larx executed on LSU0", - .pme_long_desc = "A larx (lwarx or ldarx) was executed on side 0 (there is no coresponding unit 1 event since larx instructions can only execute on unit 0)", + .pme_long_desc = "A larx (lwarx or ldarx) was executed on side 0 (there is no corresponding unit 1 event since larx instructions can only execute on unit 0)", }, [ PPC970MP_PME_PM_GCT_EMPTY_CYC ] = { .pme_name = "PM_GCT_EMPTY_CYC", diff --git a/lib/events/s390x_cpumf_events.h b/lib/events/s390x_cpumf_events.h index 0a91224..965fe61 100644 --- a/lib/events/s390x_cpumf_events.h +++ b/lib/events/s390x_cpumf_events.h @@ -396,14 +396,14 @@ static const pme_cpumf_ctr_t cpumcf_z10_counters[] = { .name = "L1I_CACHELINE_INVALIDATES", .desc = "A cache line in the Level-1 I-Cache has been" " invalidated by a store on the same CPU as the Level-" - " 1 I-Cache", + "1 I-Cache", }, { .ctrnum = 138, .ctrset = CPUMF_CTRSET_EXTENDED, .name = "ITLB1_WRITES", .desc = "A translation entry has been written into the Level-" - " 1 Instruction Translation Lookaside Buffer", + "1 Instruction Translation Lookaside Buffer", }, { 
.ctrnum = 139, @@ -552,7 +552,7 @@ static const pme_cpumf_ctr_t cpumcf_z196_counters[] = { .name = "DTLB1_HPAGE_WRITES", .desc = "A translation entry has been written to the Level-1" " Data Translation Lookaside Buffer for a one-" - " megabyte page", + "megabyte page", }, { .ctrnum = 141, @@ -730,7 +730,7 @@ static const pme_cpumf_ctr_t cpumcf_zec12_counters[] = { .name = "DTLB1_HPAGE_WRITES", .desc = "A translation entry has been written to the Level-1" " Data Translation Lookaside Buffer for a one-" - " megabyte page", + "megabyte page", }, { .ctrnum = 140, @@ -963,7 +963,7 @@ static const pme_cpumf_ctr_t cpumcf_z13_counters[] = { .name = "DTLB1_HPAGE_WRITES", .desc = "A translation entry has been written to the Level-1" " Data Translation Lookaside Buffer for a one-" - " megabyte page", + "megabyte page", }, { .ctrnum = 132, @@ -971,7 +971,7 @@ static const pme_cpumf_ctr_t cpumcf_z13_counters[] = { .name = "DTLB1_GPAGE_WRITES", .desc = "A translation entry has been written to the Level-1" " Data Translation Lookaside Buffer for a two-" - " gigabyte page.", + "gigabyte page.", }, { .ctrnum = 133, @@ -1038,7 +1038,7 @@ static const pme_cpumf_ctr_t cpumcf_z13_counters[] = { .ctrset = CPUMF_CTRSET_EXTENDED, .name = "TX_NC_TEND", .desc = "A TEND instruction has completed in a non-" - " constrained transactional-execution mode", + "constrained transactional-execution mode", }, { .ctrnum = 143, @@ -1350,7 +1350,7 @@ static const pme_cpumf_ctr_t cpumcf_z13_counters[] = { .ctrset = CPUMF_CTRSET_EXTENDED, .name = "TX_NC_TABORT", .desc = "A transaction abort has occurred in a non-" - " constrained transactional-execution mode", + "constrained transactional-execution mode", }, { .ctrnum = 219, @@ -1493,7 +1493,7 @@ static const pme_cpumf_ctr_t cpumcf_z14_counters[] = { .ctrset = CPUMF_CTRSET_EXTENDED, .name = "TX_NC_TEND", .desc = "A TEND instruction has completed in a non-" - " constrained transactional-execution mode", + "constrained transactional-execution mode", }, { .ctrnum = 143, @@ -1777,7 +1777,7 @@ static const pme_cpumf_ctr_t cpumcf_z14_counters[] = { .ctrset = CPUMF_CTRSET_EXTENDED, .name = "TX_NC_TABORT", .desc = "A transaction abort has occurred in a non-" - " constrained transactional-execution mode", + "constrained transactional-execution mode", }, { .ctrnum = 244, @@ -1919,7 +1919,7 @@ static const pme_cpumf_ctr_t cpumcf_z15_counters[] = { .ctrset = CPUMF_CTRSET_EXTENDED, .name = "TX_NC_TEND", .desc = "A TEND instruction has completed in a non-" - " constrained transactional-execution mode", + "constrained transactional-execution mode", }, { .ctrnum = 143, @@ -2203,7 +2203,7 @@ static const pme_cpumf_ctr_t cpumcf_z15_counters[] = { .ctrset = CPUMF_CTRSET_EXTENDED, .name = "TX_NC_TABORT", .desc = "A transaction abort has occurred in a non-" - " constrained transactional-execution mode", + "constrained transactional-execution mode", }, { .ctrnum = 244, diff --git a/lib/pfmlib_itanium2.c b/lib/pfmlib_itanium2.c index 0ecd123..e9c378e 100644 --- a/lib/pfmlib_itanium2.c +++ b/lib/pfmlib_itanium2.c @@ -1495,7 +1495,7 @@ pfm_dispatch_irange(pfmlib_input_param_t *inp, pfmlib_ita2_input_param_t *mod_in * * - if the fine mode fails, then for all events, except IA64_TAGGED_INST_RETIRED_*, only * the first pair of ibr is available: ibrp0. This imposes some severe restrictions on the - * size and alignement of the range. It can be bigger than 4KB and must be properly aligned + * size and alignment of the range. It can be bigger than 4KB and must be properly aligned * on its size. 
The library relaxes these constraints by allowing the covered areas to be * larger than the expected range. It may start before and end after. You can determine how * far off the range is in either direction for each range by looking at the rr_soff (start diff --git a/lib/pfmlib_montecito.c b/lib/pfmlib_montecito.c index 88fa0b3..796065e 100644 --- a/lib/pfmlib_montecito.c +++ b/lib/pfmlib_montecito.c @@ -1661,7 +1661,7 @@ pfm_dispatch_irange(pfmlib_input_param_t *inp, pfmlib_mont_input_param_t *mod_in * * - if the fine mode fails, then for all events, except IA64_TAGGED_INST_RETIRED_*, only * the first pair of ibr is available: ibrp0. This imposes some severe restrictions on the - * size and alignement of the range. It can be bigger than 64KB and must be properly aligned + * size and alignment of the range. It can be bigger than 64KB and must be properly aligned * on its size. The library relaxes these constraints by allowing the covered areas to be * larger than the expected range. It may start before and end after the requested range. * You can determine the amount of overrun in either direction for each range by looking at -- 2.20.1
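As an aside, the comment fixed above in pfmlib_itanium2.c and pfmlib_montecito.c describes code ranges that must be aligned on their own size, with the covered area allowed to start before and end after the requested range. A minimal standalone sketch of that arithmetic (the function and variable names are illustrative only, not libpfm's actual API):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Given a requested [start, end) range, find the smallest power-of-two
 * sized region that is aligned on its own size and covers the request.
 * *soff and *eoff report how far the region overruns the request at the
 * start and at the end. */
static uint64_t
covering_range(uint64_t start, uint64_t end,
               uint64_t *base, uint64_t *soff, uint64_t *eoff)
{
	uint64_t size = 1;

	/* grow the size until the size-aligned region holding 'start'
	 * also holds 'end' */
	while ((start & ~(size - 1)) + size < end)
		size <<= 1;

	*base = start & ~(size - 1);
	*soff = start - *base;      /* overrun before the requested range */
	*eoff = *base + size - end; /* overrun after the requested range  */
	return size;
}

int main(void)
{
	uint64_t base, soff, eoff;
	uint64_t size = covering_range(0x401200, 0x403500,
				       &base, &soff, &eoff);

	printf("region 0x%" PRIx64 "-0x%" PRIx64 " (size 0x%" PRIx64
	       "), soff=0x%" PRIx64 ", eoff=0x%" PRIx64 "\n",
	       base, base + size, size, soff, eoff);
	return 0;
}

For a requested range of 0x401200-0x403500 this reports a 16KB region at 0x400000, overrunning by 0x1200 bytes at the start and 0xb00 at the end, which is the same kind of information the comment says rr_soff exposes.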
From f19300539e42c3da6ac82c6bbadbd211200555b1 Mon Sep 17 00:00:00 2001 From: Andreas Beckmann <a.beckm...@fz-juelich.de> Date: Wed, 4 Dec 2019 23:53:35 +0100 Subject: [PATCH 2/2] fix incorrect strncpy() usage MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit gcc 9 failed on mips* with: /usr/include/mips64el-linux-gnuabi64/bits/string_fortified.h:106:10: error: ‘__builtin___strncpy_chk’ output truncated before terminating nul copying as many bytes from a string as its length [-Werror=stringop-truncation] pfmlib_mips.c: In function ‘pfm_mips_detect’: pfmlib_mips.c:147:2: note: length computed here 147 | strncpy(pfm_mips_cfg.model,buffer,strlen(buffer)); strncpy(dest, src, strlen(src)) does *not* copy the terminating '\0' strncpy(dest, src, strlen(src)+1) is identical to strcpy(dest, src) but the third argument to strncpy() should rather be based on the size of 'dest', not 'src' Signed-off-by: Andreas Beckmann <a.beckm...@fz-juelich.de> --- lib/pfmlib_mips.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/pfmlib_mips.c b/lib/pfmlib_mips.c index 61db613..52cb69c 100644 --- a/lib/pfmlib_mips.c +++ b/lib/pfmlib_mips.c @@ -144,7 +144,7 @@ pfm_mips_detect(void *this) if (strstr(buffer,"MIPS") == NULL) return PFM_ERR_NOTSUPP; - strncpy(pfm_mips_cfg.model,buffer,strlen(buffer)); + strcpy(pfm_mips_cfg.model, buffer); /* ret = pfmlib_getcpuinfo_attr("CPU implementer", buffer, sizeof(buffer)); if (ret == -1) -- 2.20.1
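To spell out the pitfall from the commit message: when the length passed to strncpy() is derived from the source, the terminating '\0' is never written. A minimal sketch of the destination-bounded pattern (the buffer names here are hypothetical, not taken from pfmlib_mips.c):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char src[] = "MIPS (a string longer than dest)";
	char dest[16];

	/* Wrong: strncpy(dest, src, strlen(src)) copies strlen(src)
	 * bytes and never appends '\0' -- dest can be left unterminated
	 * (and can overflow if src is longer than dest). */

	/* Safe pattern: bound the copy by the size of dest, then make
	 * the termination explicit. */
	strncpy(dest, src, sizeof(dest) - 1);
	dest[sizeof(dest) - 1] = '\0';

	printf("%s\n", dest);	/* prints at most 15 characters of src */
	return 0;
}

The patch itself simply switches to strcpy(), which is equivalent to strncpy(dest, src, strlen(src)+1) and presumes the destination buffer is large enough; the bounded pattern above is the general-purpose alternative.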