On 7 Jan 2024 at 21:28, Anton Gustafsson via Freedos-user wrote:
>
> Hi Roderick! I agree, in this case it is mostly to satisfy my 
> supervisor's need to follow protocol. All machines in the network are 
> supposed to be monitored (load, mem, swap etc.) using software called 
> Zabbix, which can use either its own agent, ICMP or SNMP.
> 
> It is possible to use Zabbix without an agent and, for example, only 
> check availability with ICMP pings. It is a slight workaround, since 
> the Zabbix frontend shows hosts as unavailable when no agent is used, 
> though it is possible to configure the system not to send 
> "host down" emails. There seems to be disagreement within the Zabbix 
> developer community over whether this is a bug or a feature.
> 
> Anyway, it feels a bit redundant, since these machines are custom 
> hardware controllers integrated into the rest of the process system, 
> and the control room interacts with them continuously.
> 
DOS is not a multi-tasking OS: it has no process scheduler and uses 
only a single CPU core.
A single application runs in the foreground with total control over 
pretty much everything, and that's it.

Yes, you can have "resident" pieces of software (TSRs) that remain 
loaded and "can do things in the background", but that "background" 
is just a slice of CPU time, principally spent in an interrupt 
service routine. While the TSR is doing its thing, the software in 
the foreground is suspended. Not knowing the details, I'd suspect 
that a feature-complete SNMP stack could easily turn out to be quite 
hefty.
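
To make that "background" concrete, here is a minimal TSR sketch 
(assuming a Borland/Turbo C style 16-bit DOS compiler; the dos.h 
calls getvect/setvect/keep and the resident-size formula are the 
usual idiom there, and the tick counter is purely illustrative):

/* Minimal TSR sketch, assuming a Borland/Turbo C style 16-bit DOS
   compiler (dos.h: getvect/setvect/keep, _psp/_SS/_SP globals).
   It hooks the BIOS timer tick (INT 1Ch, ~18.2 Hz) and bumps a
   counter "in the background" while the foreground program sleeps. */
#include <dos.h>

static void interrupt (*old_int1c)();
static volatile unsigned long ticks = 0;

static void interrupt my_int1c()
{
    ticks++;          /* the "background" work happens inside the ISR */
    old_int1c();      /* chain to the previous timer handler */
}

int main(void)
{
    old_int1c = getvect(0x1C);
    setvect(0x1C, my_int1c);
    /* terminate, but keep the program's memory image resident */
    keep(0, _SS + (_SP + 15) / 16 - _psp);
    return 0;         /* never reached */
}

Everything this program ever does afterwards happens inside 
my_int1c(), i.e. in CPU time stolen from whatever was interrupted.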

SNMP is not necessarily a CPU hog, but a decent SNMP software stack 
will be neither simple nor minuscule in RAM.

All right, desktop software (and its users) is normally tolerant of 
a bit of lag here and there.

If you're talking industrial process control, that's a red flag.
(Are you?)
You have a piece of legacy control software running on DOS that 
assumes it has the whole machine to itself, and likely relies on 
deterministic timing and response to incoming events. The software 
possibly runs a tight loop clocked by the system timer and/or 
interrupts from dedicated I/O peripherals.

Losing critical CPU time to a nosy SNMP monitor is generally not an 
option! And there is no CPU process scheduler to sort tasks by 
priority. Spending too much time in an ISR is a big no-no.
And anything running in the normal foreground context (i.e. outside 
of an ISR) has a lower priority than any IRQ.
Yes, IRQs can be ordered by priority, but that generally requires 
reprogramming the interrupt controller (see the sketch below). This 
isn't something you want to do to unsuspecting legacy control 
software "behind its back".
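
For illustration only - and as an argument for *not* doing it - this 
is roughly what "programming of the IRQ controller" means on the 
classic 8259A PIC. The helper name is mine; outportb() is the 
Borland-style port I/O call:

/* Sketch only: reordering 8259A PIC priorities via OCW2.
   Writing 0xC0 | L to port 0x20 makes IRQ level L the LOWEST
   priority, so level (L+1) mod 8 becomes the highest. */
#include <dos.h>    /* outportb(), Borland-style */

void pic_set_lowest_priority(unsigned char irq_level)
{
    outportb(0x20, 0xC0 | (irq_level & 0x07));   /* OCW2, master PIC */
}

One such write, issued behind the legacy application's back, changes 
which of its interrupt sources can preempt which.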


So apart from that principal problem above:

Roderick has already mentioned DMH software.
I also recall Lantronix having an SNMP module for their Xport line of 
miniature "terminal servers" (serial-to-Ethernet gateways) - the old 
Lantronix parts had an embedded x86 environment similar to BIOS/DOS; 
I'm not sure about the current product line.
Both are principally SDKs: SNMP in the form of a library, 
documentation and "interface bindings" for some programming 
environment, allowing you to add the missing bits specific to your 
project or platform. I.e., to deploy these SNMP products, you need to 
do some programming.

The DMH SNMP API probably depends on DMH's own TCP/IP stack, which in 
your case would be a second TCP/IP stack besides the Microsoft 
stack... not sure if they can run side by side and share a NIC...

The Lantronix SNMP module depends on Lantronix hardware and a 
particular development environment. Not really made for generic DOS.


Yet again, apart from the problems mentioned above:

Someone in this thread has already raised the question:
*What* would you want to monitor in a PC running DOS?
DOS doesn't have a CPU scheduler, which means you cannot really 
monitor CPU usage as seen by the OS.

You *could* monitor a select few CPU MSRs related to system power 
management - the current state of EIST/Turbo and the CPU core 
temperature - provided that your (legacy?) CPUs support those HW 
features. Note that in DOS the CPU never goes idle, so the CPU's own 
internal power management will see the CPU's "control unit" as active 
all the time (100% CPU utilization).
You could possibly hazard access to a SuperIO HW monitor, which has 
information about fan RPM and voltages. A sketch of reading the 
thermal MSRs follows below.
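
As a sketch of what reading those MSRs could look like (assuming a 
DJGPP/GCC 32-bit DOS build, an Intel CPU with the digital thermal 
sensor, and an environment where RDMSR is actually allowed - it needs 
ring 0, and some DPMI hosts run clients at ring 3):

/* Sketch, assuming a DJGPP (32-bit DOS) build, an Intel CPU with the
   digital thermal sensor, and a ring-0 execution environment (RDMSR
   faults at ring 3). IA32_THERM_STATUS (0x19C) reports "degrees
   below TjMax" in bits 22:16; MSR_TEMPERATURE_TARGET (0x1A2) reports
   TjMax in bits 23:16. */
#include <stdio.h>

static unsigned long long rdmsr(unsigned int msr)
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
    unsigned int tjmax = (unsigned int)((rdmsr(0x1A2) >> 16) & 0xFF);
    unsigned int below = (unsigned int)((rdmsr(0x19C) >> 16) & 0x7F);
    printf("core temperature: ~%u C (TjMax %u)\n", tjmax - below, tjmax);
    return 0;
}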

Are there any disk drives involved? Those have SMART - but I don't 
think there's a BIOS service for SMART, or for crafting your own ATA 
command and passing it to the BIOS disk service... so you'd probably 
need your own disk access path, bypassing the running system's disk 
driver, and somehow avoid racing against it (see the sketch below)...
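
To give a feel for what "your own disk access path" means, here is a 
raw-port sketch of the ATA SMART RETURN STATUS command (legacy IDE 
controller at 0x1F0, primary channel, master drive, Borland-style 
port I/O, no error handling). Note that it does exactly what the 
paragraph above warns about: it talks to the controller behind the 
running driver's back.

/* Sketch only: SMART RETURN STATUS over raw ATA task-file ports.
   Assumes a legacy IDE controller at 0x1F0 (primary channel, master
   drive) and nothing else touching the controller at the same time -
   which is precisely the race-condition problem described above. */
#include <dos.h>    /* inportb()/outportb(), Borland-style */
#include <stdio.h>

#define ATA_BASE 0x1F0

static void wait_not_busy(void)
{
    while (inportb(ATA_BASE + 7) & 0x80)   /* status register: BSY bit */
        ;
}

int main(void)
{
    wait_not_busy();
    outportb(ATA_BASE + 6, 0xA0);   /* select master drive          */
    outportb(ATA_BASE + 1, 0xDA);   /* feature: SMART RETURN STATUS */
    outportb(ATA_BASE + 4, 0x4F);   /* SMART signature, LBA mid     */
    outportb(ATA_BASE + 5, 0xC2);   /* SMART signature, LBA high    */
    outportb(ATA_BASE + 7, 0xB0);   /* command: SMART               */
    wait_not_busy();
    if (inportb(ATA_BASE + 4) == 0xF4 && inportb(ATA_BASE + 5) == 0x2C)
        printf("SMART: threshold exceeded\n");
    else
        printf("SMART: drive reports OK\n");
    return 0;
}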

If the control system had some interface, such as a page of RAM at a 
"well known address", or a "software interrupt" for that purpose, you 
could theoretically take some variables from the running control app 
and export those to SNMP. But... that's already a pretty tall order.
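
Purely hypothetical sketch of that idea - the interrupt vector, the 
register convention and the meaning of the returned value are all 
invented here, just to show the shape of such an interface:

/* Hypothetical sketch: polling an imagined "control app API" exposed
   as a software interrupt. INT 0x66, the AX function codes, the BX
   result register and the carry-flag error convention are all made
   up for illustration; a real system would define its own contract. */
#include <dos.h>

#define CTRL_API_INT 0x66            /* invented vector */

int read_control_value(unsigned func, unsigned *value)
{
    union REGS r;
    r.x.ax = func;                   /* invented: AX selects a variable */
    int86(CTRL_API_INT, &r, &r);
    if (r.x.cflag)                   /* invented: carry set on error */
        return -1;
    *value = r.x.bx;                 /* invented: result in BX */
    return 0;
}

A TSR or external agent could then map such values onto SNMP OIDs - 
but, as said, that's already a tall order.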

I've seen a control system engine or two, from back in the DOS era, 
that effectively had an internal scheduler and ran multiple tasks. I 
cannot promise they'd use HLT / MWAIT once the work for a timer 
period was done, but they *could* run several internal "processes" in 
parallel. From the outside, to third-party programs including TSRs, 
such a piece of software is opaque. There's no way to "tap into its 
internal intelligence", other than possibly through an extension API, 
if one were available.

I've also seen combinations of Windows + an RTOS sharing a multi-core 
CPU, where Windows is politely asked at startup to vacate some cores, 
which the RTOS then occupies. The RTOS gets access to a select few 
peripherals, including IRQ handling. This predates virtualization 
becoming a thing; it is not a hypervisor + VM relationship.
This sort of hybrid OS architecture is possible, but it relies on 
some rudimentary API in Windows that allows it to happen, plus a bit 
of custom software running on the Windows side, plus obviously an 
RTOS engine built to execute on the "side by side" CPU cores.
It's definitely not possible to retrofit this onto an existing 
machine running DOS on bare metal with a custom-fit set of drivers 
and foreground software.


To sum up, especially if the context is "industrial process control", 
my advice is: hands off the legacy system.
It probably has its own human interface.
The requirement to have this monitored by SNMP seems to me like the 
networking admins poking their fingers into areas outside of their 
responsibility, expertise and historical awareness :-)
If there's a shared interest in this at the political level, and the 
legacy system cannot realistically be extended in a clean way (an 
upgrade or some such), I'd suggest adding external monitoring of some 
key signs of life.
External = using extra hardware and sensors, dedicated to the 
monitoring function.
Could be as little as an RPi, with some custom development.

If you have use for some general-purpose discrete analog and digital 
I/O, check out the ADAM-6000 series by Advantech. They also have an 
SNMP agent built in, with access to the external I/O.
https://www.advantech.com/en/support/details/firmware?id=1-1B2K5BB

Frank

P.S.: if against all odds there's a desire to do some custom 
programming for the old DOS-based system, Mr. Brutman has already 
chimed in. Knowing the software he has created for networking in 
DOS, I believe he is *the* person you should ask for help :-)

