It's all up to Wes (or the other net-snmp maintainers?) really. I don't know
what's up with SourceForge; my merge requests were showing some sort of error
where it couldn't list the commits. Maybe they go stale and time out? I
rejected them all and sent a new merge request for my one change to 'from
netsnmp import Session' - a minor fix that addresses an issue with error
propagation.
https://sourceforge.net/p/net-snmp/code/merge-requests/6/
I'd be happy to re-up the snmp_api.c merge request; maybe I'll wait to see how
that one goes and submit them one at a time this time.
Also, Wes, w...@net-snmp-pro.com is bouncing...
On Thu, Mar 17, 2016 at 1:33 PM, Gabe <r...@un1x.su> wrote:
> You're absolutely right regarding get_async.c - the way I designed it was
> solely for the performance benefit of having as much of the static logic as
> possible compiled in C. Usability was very much an afterthought there.
>
> My reasoning was also that defining the arrays in C opens the door to more
> segmentation faults or buffer overruns, plus the need to manage memory
> within C. I was passing data whose legitimacy I wasn't sure of into C, so
> that was a concern too, and Python was able to do "pre-checks" on it.
> Passing Python objects into C to work on also seems to let Python's garbage
> collection work automagically, as long as you maintain the references
> properly (I'm not sure I did 100%, but as I said previously the memory
> utilization didn't seem to leak).
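>
> By "maintain the references properly" I just mean the usual C-API
> bookkeeping, e.g. (a sketch only; the variable names are illustrative):
>
> /* PyList_GetItem returns a *borrowed* reference */
> PyObject *host = PyList_GetItem(hosts, i);
> Py_INCREF(host);              /* keep it alive while the C code holds it */
> /* ... hand 'host' to the worker ... */
> Py_DECREF(host);              /* done: Python's GC can reclaim it */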
>
> Regarding the GIL for the synchronous, thread-safe implementations, all I
> did was release it during the network I/O and re-acquire it afterward:
> https://github.com/xstaticxgpx/netsnmp-py3/blob/master/netsnmp/get.c#L79
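>
> i.e. roughly this shape (just a sketch, assuming an already-open
> single-session handle 'sessp' and a prepared 'pdu'; the exact code is at
> the link above):
>
> netsnmp_pdu *response = NULL;
> int status;
>
> /* drop the GIL only around the blocking network call */
> Py_BEGIN_ALLOW_THREADS
> status = snmp_sess_synch_response(sessp, pdu, &response);
> Py_END_ALLOW_THREADS
> /* back under the GIL: safe to build Python result objects from 'response' */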
>
> I used the synchronous implementations in several threaded Python
> applications and they worked wonderfully, but, put simply, they didn't scale
> nearly as well as get_async.c.
>
> I'll be following your progress on this :)
> Unfortunately my focus has shifted elsewhere, and I don't think I'll ever
> have the time to come back and give this the love and attention it deserves.
> But I would like to stay in contact.
>
> -Gabe
>
>
>
> On Thu, Mar 17, 2016 at 3:14 PM, David Hankins <dhank...@twitter.com>
> wrote:
>
>> SNMP async polling was also specifically what I wanted to rework the API
>> for ^_^. I actually never implemented any of the synchronous API functions;
>> I ran out of time during our 'hack week.'
>>
>> Perhaps this is a better link to my specific work;
>>
>> https://sourceforge.net/u/hcf64/net-snmp/ci/python_api/tree/python/netsnmp/snmp_api.c
>>
>> I've left the rest of the netsnmp/ python module basically untouched.
>>
>> I think we've done the same things with different approaches.
>>
>> Whereas your get_async.c internalizes a select loop, the snmp_api approach
>> lets you perform select in the (Python) caller. This is a tradeoff: the
>> obvious performance advantages of native C versus being able to put more
>> business logic inside the polling loop in Python. My personal (and totally
>> subjective) opinion is that I would probably just write the entire
>> application in C if I were considering putting the select loop there (and I
>> have done that at previous jobs ^_^).
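>>
>> (For contrast, the select loop you end up writing in plain C against the
>> Net-SNMP API is roughly this - a sketch only, with error handling omitted
>> and 'active_hosts' standing in for whatever your own bookkeeping counter
>> is:)
>>
>> while (active_hosts) {
>>     int fds = 0, block = 1;
>>     fd_set fdset;
>>     struct timeval timeout;
>>
>>     FD_ZERO(&fdset);
>>     snmp_select_info(&fds, &fdset, &timeout, &block);
>>     fds = select(fds, &fdset, NULL, NULL, block ? NULL : &timeout);
>>     if (fds < 0)
>>         break;                  /* select() error */
>>     if (fds)
>>         snmp_read(&fdset);      /* dispatches responses to the callbacks */
>>     else
>>         snmp_timeout();         /* retransmits / expires pending requests */
>> }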
>>
>> I guess that's the root of my (hopefully fair) criticism of get_async.c:
>> you are basically using Python to supply an iterable of hosts, and it is
>> not doing much else for you. This can just as easily be done in C with very
>> little additional effort, having tightly coupled the rest of the
>> application in C... and that's a perfectly fine and fair thing to do; my
>> last large network SNMP poller for billing data was all C, from config
>> (lex/yacc) to storage.
>>
>> Just as a contrast, with snmp_api.c's approach the typical model of async,
>> high-resolution, large-scale polling would look like this;
>>
>> from __future__ import print_function
>> import ipaddr  # undocumented requirement :(
>> from netsnmp import snmp_api
>> import select
>> import sys
>> import time
>>
>> class SnmpEx(object):
>>     OIDS = [[1, 3, 6, 1, 2, 1, 1, idx, 0] for idx in range(1, 9)]  # sys MIB for example
>>
>>     ERR_TO_NAME = {value: key for key, value in
>>                    snmp_api.errorcodes.items()}
>>
>>     def __init__(self, hostname, community, queries, oids):
>>         sess_cfg = snmp_api.snmp_sess_init()
>>         sess_cfg['peername'] = hostname
>>         sess_cfg['timeout'] = 250000
>>         sess_cfg['retries'] = 0
>>         sess_cfg['community'] = community
>>         sess_cfg['version'] = snmp_api.versions['2c']
>>         sess_cfg['callback'] = self.callback
>>         self._max_in_flight = 5
>>         self._in_flight = 0
>>         self._session = snmp_api.snmp_open(sess_cfg)
>>
>>     def callback(self, operation, reqid, pdu, raw_pdu):
>>         if pdu['errstat'] != 0:
>>             errstat = pdu['errstat']
>>             print('SNMP Error: %s/%d %s' %
>>                   (operation, reqid,
>>                    self.ERR_TO_NAME.get(errstat, str(errstat))),
>>                   file=sys.stderr)
>>             return 1
>>         if operation == 'RECEIVED_MESSAGE':
>>             delta = time.time() - self.send_time  # rough round-trip latency
>>             self.continue_sending()
>>             return 1
>>
>>     def continue_sending(self):
>>         # Or whatever your work flow is.
>>         while self._in_flight < self._max_in_flight:
>>             pdu = snmp_api.snmp_pdu_create(snmp_api.commands['GET'])
>>             for oid in self.OIDS:
>>                 snmp_api.snmp_add_null_var(pdu, oid)
>>             if snmp_api.snmp_send(self._session, pdu) == 0:
>>                 snmp_api.snmp_perror('snmp_bench: snmp_send(session, pdu):')
>>                 break
>>             else:
>>                 self.send_time = time.time()
>>                 self._in_flight += 1
>>
>>     def run(self):
>>         self.continue_sending()
>>         while True:
>>             (fd_set, timeout) = snmp_api.snmp_select_info(self._session)
>>             # Be very careful here. 'timeout' can be 0.0 with pending
>>             # queries!  Only a timeout of None means nothing is pending.
>>             if timeout is None:
>>                 break
>>             (rset, wset, eset) = select.select(fd_set, [], [], timeout)
>>             if rset:
>>                 snmp_api.snmp_read(rset)
>>             else:
>>                 snmp_api.snmp_timeout()
>>         snmp_api.snmp_close(self._session)
>>
>>
>> Sorry for the formatting. My thinking is that this could be the root of a
>> new net-snmp/python/netsnmp/netsnmp.py, which you could then sub-class on
>> the day you need to do a piece differently ... include your own event
>> timers to send out the queries in constant time, etc. Again, just
>> contrasting that with pulling out the C source code and building new static
>> wheels or eggs.
>>
>> The Net-SNMP code has grown organically over many years (and I've been
>> writing for it since CMU-SNMP, so it's a little easier for me to navigate,
>> maybe). I wouldn't say it's disgusting, but it has certainly grown ugly.
>> I'd just rather deal with its idiosyncrasies and warts on Python's side,
>> where I have more tools at my disposal to smooth its edges and where I'm
>> not as "locked in" to one application model. It's easier on the Python side
>> to quickly change how the application works.
>>
>> I don't really IRC, but stay in touch!
>>
>> On Thu, Mar 17, 2016 at 11:00 AM, Gabe <r...@un1x.su> wrote:
>>
>>> Hi David,
>>>
>>> Glad to see some shared interest in this!
>>>
>>> I've made a lot of updates to my project since my posting to the mailing
>>> list, including Python 2 (tested with 2.7 at least) support.
>>>
>>> The only thing I needed to add to support both Py2 and Py3 was a macro in
>>> the C code to paper over the PyString vs. PyUnicode difference, as seen at
>>> the top here:
>>> https://github.com/xstaticxgpx/netsnmp-py3/blob/master/netsnmp/_api.h
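>>>
>>> The idea is just a handful of #ifdef'd aliases, something along these
>>> lines (a sketch - the actual macro names in _api.h may differ):
>>>
>>> #if PY_MAJOR_VERSION >= 3
>>> #define PyStr_FromString   PyUnicode_FromString
>>> #define PyStr_AsString     PyUnicode_AsUTF8
>>> #define PyStr_Check        PyUnicode_Check
>>> #else
>>> #define PyStr_FromString   PyString_FromString
>>> #define PyStr_AsString     PyString_AsString
>>> #define PyStr_Check        PyString_Check
>>> #endif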
>>>
>>> I took a lazier route in terms of interfacing with C from Python. I
>>> didn't implement the PDU structures, and only partially implemented the
>>> session structure. Currently my project lacks all the stuff required for
>>> SNMPv3, for example.
>>>
>>> I only focused on implementing the basic (v1/v2c) polling functions
>>> (GET/GETNEXT/WALK), but basic (v1/v2c) SET should just be a matter of
>>> cloning one of the existing functions, adding a value variable somewhere,
>>> and updating the snmp_pdu_create(<type>) call.
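>>>
>>> On the C side a minimal v1/v2c SET boils down to roughly this (a sketch
>>> using the stock Net-SNMP calls rather than code from my repo, assuming an
>>> already-open session 'ss' and an OID string 'oidstr'):
>>>
>>> oid name[MAX_OID_LEN];
>>> size_t name_len = MAX_OID_LEN;
>>> netsnmp_pdu *pdu, *response = NULL;
>>>
>>> snmp_parse_oid(oidstr, name, &name_len);
>>>
>>> /* SNMP_MSG_SET instead of SNMP_MSG_GET ... */
>>> pdu = snmp_pdu_create(SNMP_MSG_SET);
>>> /* ... and a typed value instead of snmp_add_null_var() */
>>> snmp_add_var(pdu, name, name_len, 's', "new sysContact value");
>>>
>>> if (snmp_synch_response(ss, pdu, &response) != STAT_SUCCESS)
>>>     snmp_perror("snmp_synch_response");
>>> if (response)
>>>     snmp_free_pdu(response);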
>>>
>>> --
>>>
>>> Something you may be interested in looking at is the asynchronous
>>> polling methodology I developed;
>>>
>>>
>>> https://github.com/xstaticxgpx/netsnmp-py3/blob/master/netsnmp/get_async.c
>>> https://github.com/xstaticxgpx/netsnmp-py3/blob/master/test_async.py
>>>
>>> There's some cruft in that test_async.py you can ignore, but it boils
>>> down to this:
>>>
>>> Using Python multiprocessing, spawn ZeroMQ IPC receivers, then pass 4096
>>> chunks of devices to multiple get_async() C function instances, all of
>>> which communicate back via the ZeroMQ IPC socket. I was stuffing the
>>> returned data into redis in real time, but there are other ways to deal
>>> with it.
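>>>
>>> The C side of the IPC is nothing fancy - each get_async() worker just
>>> pushes its results over a ZeroMQ socket, roughly like this (a sketch; the
>>> endpoint name and 'result_buf' are made up here, see get_async.c for the
>>> real thing):
>>>
>>> #include <zmq.h>
>>>
>>> void *ctx  = zmq_ctx_new();
>>> void *push = zmq_socket(ctx, ZMQ_PUSH);
>>> zmq_connect(push, "ipc:///tmp/snmp_results");
>>>
>>> /* for each decoded response: */
>>> zmq_send(push, result_buf, strlen(result_buf), 0);
>>>
>>> zmq_close(push);
>>> zmq_ctx_term(ctx);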
>>>
>>> There's a lot going on there and I'm having a hard time typing up a
>>> decent overview. Basically, it lets you do MASSIVE amounts of SNMP polling
>>> very efficiently.
>>>
>>> For example, we had a use case where we needed to poll (SNMPGET) 8-12
>>> OIDs from upwards of 14 million devices. This get_async methodology could
>>> poll all that information, from a single Dell R710 (dual Xeon, single 1Gb
>>> NIC), in under 20 minutes. This was all to unique, remote devices.
>>>
>>> If you were just polling basic sysDescr or sysUpTime, it could do ~14
>>> million unique polls in 7 minutes, if I remember correctly. The memory
>>> utilization was totally reasonable during the entire run too; somehow I
>>> managed to avoid any serious memory leaks (at least in my experience)...
>>>
>>> I have since left the position where I was working on this use case. I
>>> had been looking to extend it to be distributable across more than one
>>> host, but I hadn't made any progress on that front.
>>>
>>> One of the last things I did, however, was *try* to reduce overhead
>>> during PDU creation.
>>>
>>> https://github.com/xstaticxgpx/netsnmp-py3/blob/master/netsnmp/interface.c#L148
>>> Here you pass an OID string, and it returns the void pointer for that
>>> object as a PyLong, which you can then pass back into C... as seen here:
>>>
>>> https://github.com/xstaticxgpx/netsnmp-py3/blob/master/netsnmp/get_async.c#L140
>>>
>>> https://github.com/xstaticxgpx/netsnmp-py3/blob/master/netsnmp/get_async.c#L147
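>>>
>>> Conceptually that helper is just this (a sketch only - the real code in
>>> interface.c differs in the details, the names here are illustrative, and
>>> the includes are omitted):
>>>
>>> /* parse the OID string once, keep the result, and hand Python an
>>>    opaque handle it can pass back into C on every poll */
>>> static PyObject *compile_oid(PyObject *self, PyObject *args)
>>> {
>>>     const char *oidstr;
>>>     oid parsed[MAX_OID_LEN];
>>>     size_t len = MAX_OID_LEN;
>>>     netsnmp_variable_list *var;
>>>
>>>     if (!PyArg_ParseTuple(args, "s", &oidstr))
>>>         return NULL;
>>>     if (!snmp_parse_oid(oidstr, parsed, &len)) {
>>>         PyErr_SetString(PyExc_ValueError, "could not parse OID");
>>>         return NULL;
>>>     }
>>>     var = calloc(1, sizeof(*var));
>>>     if (!var)
>>>         return PyErr_NoMemory();
>>>     snmp_set_var_objid(var, parsed, len);
>>>     var->type = ASN_NULL;
>>>
>>>     /* Python sees an int; C later recovers it with PyLong_AsVoidPtr() */
>>>     return PyLong_FromVoidPtr(var);
>>> }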
>>>
>>> In practice, I believe I found that snmp_clone_pdu() does essentially the
>>> same work as creating a brand-new PDU object, so I don't believe this had
>>> any beneficial impact on performance.
>>>
>>> --
>>>
>>> Depending on what you're trying to accomplish, the above may or may not
>>> make any sense, or seem appealing. I would love to speak with you further
>>> regarding it if you're interested, perhaps on IRC somewhere?
>>>
>>> Personally, I found the net-snmp source code disgustingly convoluted and
>>> hard to work with. My opinion is that trying to replicate it 1:1 in Python
>>> would not be very beneficial.
>>>
>>> Other than that, best of luck in your endeavors!! I hope some of this wall
>>> of text helped; please feel free to dig through my source code on GitHub...
>>>
>>> -Gabe
>>>
>>> On Thu, Mar 17, 2016 at 1:05 PM, David Hankins <dhank...@twitter.com>
>>> wrote:
>>>
>>>> Hi Gabe,
>>>>
>>>> I'm also working on improving the Python bindings (though I'm not
>>>> targeting Python 3 specifically, I'm trying to be future-proof there);
>>>>
>>>> https://sourceforge.net/u/hcf64/net-snmp/ci/python_api/tree/python/
>>>>
>>>> My approach was also a little different; I made the new Python bindings
>>>> (snmp_api.c) as close to a 1:1 map of the Net-SNMP API as possible. My
>>>> strategy is to build the "pythonic" SNMP library on the Python side
>>>> (either a rewrite of netsnmp.py, or a different module).
>>>>
>>>> In retrospect, one thing I would do differently is use custom types for
>>>> the PDU and for the two different kinds of Session objects (snmp_open()
>>>> vs. snmp_sess_open()), rather than a Capsule. The implementation I have
>>>> requires transforming the PDU structure into a dictionary to return to
>>>> the caller; this creates many Python objects and causes a GC storm later
>>>> under heavy use. A custom type wrapping the Net-SNMP structs would
>>>> construct objects on demand, so I don't think it would have that GC
>>>> penalty, and it would be more backward compatible (to Python 2.6 and
>>>> earlier).
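>>>>
>>>> Roughly what I have in mind (a sketch only - the field and type names
>>>> here are illustrative):
>>>>
>>>> /* thin wrapper: keep the netsnmp_pdu* and build Python objects only
>>>>    when an attribute is actually read */
>>>> typedef struct {
>>>>     PyObject_HEAD
>>>>     netsnmp_pdu *pdu;
>>>> } PduObject;
>>>>
>>>> static PyObject *Pdu_get_errstat(PduObject *self, void *closure)
>>>> {
>>>>     return PyLong_FromLong(self->pdu->errstat);
>>>> }
>>>>
>>>> static PyObject *Pdu_get_reqid(PduObject *self, void *closure)
>>>> {
>>>>     return PyLong_FromLong(self->pdu->reqid);
>>>> }
>>>>
>>>> static PyGetSetDef Pdu_getset[] = {
>>>>     {"errstat", (getter)Pdu_get_errstat, NULL, "SNMP error-status", NULL},
>>>>     {"reqid",   (getter)Pdu_get_reqid,   NULL, "request id",        NULL},
>>>>     {NULL}
>>>> };
>>>>
>>>> static PyTypeObject PduType = {
>>>>     PyVarObject_HEAD_INIT(NULL, 0)
>>>>     .tp_name      = "netsnmp.PDU",
>>>>     .tp_basicsize = sizeof(PduObject),
>>>>     .tp_flags     = Py_TPFLAGS_DEFAULT,
>>>>     .tp_getset    = Pdu_getset,
>>>> };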
>>>>
>>>> I also think I should consider a new strategy for releasing and
>>>> re-acquiring the GIL, to handle threaded Python applications.
>>>>
>>>> On Thu, Mar 3, 2016 at 9:48 AM, Wes Hardaker <w...@net-snmp-pro.com>
>>>> wrote:
>>>>
>>>>> Gabe <r...@un1x.su> writes:
>>>>>
>>>>> > I've been working on a Python3 C Extension for the net-snmp API.
>>>>> Still very much a work in progress, but I feel like
>>>>> > it's at a point now that I can share:
>>>>> >
>>>>> > https://github.com/xstaticxgpx/netsnmp-py3
>>>>> >
>>>>> > Mostly just wanted to gauge interest on this, however any feedback
>>>>> > would be much appreciated.
>>>>>
>>>>> Glad to hear someone is plugging away at it. Did you eventually want
>>>>> to
>>>>> contribute it back to the Net-SNMP project, or keep it as a separate
>>>>> piece of work?
>>>>> --
>>>>> Wes Hardaker
>>>>> Please mail all replies to net-snmp-coders@lists.sourceforge.net
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Network Tools & Automation
>>>> Twitter, Inc.
>>>>
>>>
>>>
>>
>>
>> --
>> Network Tools & Automation
>> Twitter, Inc.
>>
>
>
--
Network Tools & Automation
Twitter, Inc.
_______________________________________________
Net-snmp-coders mailing list
Net-snmp-coders@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/net-snmp-coders