This looks like a bug in the MIB.

Ask the vendor to make sure the MIB matches the data returned by the device.
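
In the meantime, a possible workaround (untested, and only valid if it matches what the device actually returns) is to patch your local copy of the MIB so the INDEX clause declares both sub-identifiers, then re-run the generator. Your walk output (ssdStatus.110.3 ... ssdStatus.110.8) suggests the rows are actually indexed by node number and slot, so the row declaration would become something like:

  hcpSsdNodeTableEntry  OBJECT-TYPE
         SYNTAX       HcpSsdNodeTableEntry
         MAX-ACCESS   not-accessible
         STATUS       current
         DESCRIPTION
           "Information about each SSD."
         -- assumption: the second sub-identifier is the slot,
         -- so declare both components as the index
         INDEX { nodeNumber, ssdSlot }
         ::= { hcpSsdNodeTable 1 }

The generator should then emit both index labels for you, much like the snmp.yml you patched by hand.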

On Mon, Mar 17, 2025 at 11:11 AM Hisham Attar <hashi...@gmail.com> wrote:

> I have this Hitachi HCP device that reports status of ssds per node, SNMP
> responds at a cluster level and contains data for each node. I'm having an
> issue using the generator for this config because the table that contains
> SSD information uses the node numbers as the index and not the SSD disks
> themselves.
>
> Here is the snippet from the MIB
>
> --- SSD status and wear level information --
>
> hcpSsdNodeTable  OBJECT-TYPE
>        SYNTAX        SEQUENCE OF HcpSsdNodeTableEntry
>        MAX-ACCESS    not-accessible
>        STATUS        current
>        DESCRIPTION
>          "Table of SSDs in the HCP system nodes."
>        ::= { hcpNodes 9 }
>
> -- node SSD table row declaration
> hcpSsdNodeTableEntry  OBJECT-TYPE
>        SYNTAX       HcpSsdNodeTableEntry
>        MAX-ACCESS   not-accessible
>        STATUS       current
>        DESCRIPTION
>          "Information about each SSD."
>        INDEX { nodeNumber }
>        ::= { hcpSsdNodeTable 1 }
>
> -- node SSD table row syntax
> HcpSsdNodeTableEntry ::=
>        SEQUENCE {
>            ssdNodeNumber                  INTEGER,
>            ssdEnclosure                   DisplayString,
>            ssdSlot                        INTEGER,
>            ssdModel                       DisplayString,
>            ssdStatus                      DisplayString,
>            ssdWear                        INTEGER,
>            ssdThreshold                   INTEGER,
>            ssdConfigThreshold             INTEGER,
>            ssdHealth                      DisplayString
>        }
>
>
> The generated config creates a metric in the snmp.yml like this:
>
>   - name: ssdStatus
>     oid: 1.3.6.1.4.1.116.5.46.1.9.1.5
>     type: DisplayString
>     help: Current status of the SSD - 1.3.6.1.4.1.116.5.46.1.9.1.5
>     indexes:
>     - labelname: nodeNumber
>       type: gauge
>
>
> SNMP walk metrics look like this:
>
> HCP-MIB::ssdStatus.110.3 = STRING: online
> HCP-MIB::ssdStatus.110.4 = STRING: online
> HCP-MIB::ssdStatus.110.5 = STRING: online
> HCP-MIB::ssdStatus.110.6 = STRING: online
> HCP-MIB::ssdStatus.110.7 = STRING: online
> HCP-MIB::ssdStatus.110.8 = STRING: online
>
>
> This doesn't work because the table's index is the node number, so it just
> creates duplicate metrics. I can't do a lookup on anything else in that
> table; the only way I've been able to remedy it is to manually add a label
> that makes the metrics unique to the snmp.yml config, like this:
>
>
>   - name: ssdStatus
>     oid: 1.3.6.1.4.1.116.5.46.1.9.1.5
>     type: DisplayString
>     help: Current status of the SSD - 1.3.6.1.4.1.116.5.46.1.9.1.5
>     indexes:
>     - labelname: nodeNumber
>       type: gauge
>     - labelname: ssdSlot
>       type: gauge
>
>
> My question is: is there a way to create a valid generator config for
> devices that return multiple entries in a table that uses a non-unique index?
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Prometheus Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to prometheus-users+unsubscr...@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/prometheus-users/3fe25b2a-66c1-42da-8706-6dc88c3344bdn%40googlegroups.com
> <https://groups.google.com/d/msgid/prometheus-users/3fe25b2a-66c1-42da-8706-6dc88c3344bdn%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
