[Assimilation] Assimilation progress report: Assimilation crash bug fixed for real.

2020-10-18 Thread Alan Robertson
Hi,

Below is the executive summary followed by the "story-form" of my report.

*CMA (Collective Management Authority)*
*Wins*: Previously mentioned crash bug wasn't really fixed (as I feared). 
However, the new and better tools allowed me to track it down and fix it when 
it recurred. I fixed a few other bugs as well. This time I know the specific 
cause of the problem and fixed that specific problem. I'm far more confident 
that it's fixed than I was before.
*Situation*: No doubt many more bugs to fix.
*Issues*: There are some communication bugs where communication with nanoprobes 
backs up and eventually stops - apparently waiting for ACKs. Occasionally, 
commands to create new Neo4j graph nodes fail.
*Next Action*: Fix more bugs: fix the communication issue; retry commands to 
work around the Neo4j node-creation issue.
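
A retry wrapper along these lines (a Python sketch with hypothetical names, 
not the actual CMA code) is one way to paper over intermittent node-creation 
failures:

```python
import time

def retry(operation, attempts=5, delay=0.1, backoff=2.0):
    """Retry a flaky operation with exponential backoff.

    `operation` is any zero-argument callable (e.g. a function that issues
    the Neo4j node-creation command). Hypothetical helper, for illustration.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise          # out of retries: propagate the failure
            time.sleep(delay)
            delay *= backoff   # wait longer before each retry
```

The real code would likely catch only the specific transient exception type 
rather than bare `Exception`.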

*Nanoprobe*
*Wins*: Fixed two bugs in discovery scripts. ("packages" and "netconfig").
*Situation*: Largely unchanged.
*Issues*: Communication with CMA is currently problematic as noted above.
*Next Action*: Track down and fix the communication issue(s).

*Story form report:*
To quote my friend Sarah Kiefhaber: "Problems that go away by themselves come 
back by themselves." This one basically went away by itself, and it came back 
by itself. But this time, I know why it went away. Hopefully, it won't be back 
again.
The crash issue came from not handling some C data-structure reference counts 
correctly. I wrote more instrumentation to detect these things going forward. 
The C code is quite protective of its data structures, and that protection can 
be used to detect this condition. I made this particular problem go away, and 
arranged for it to be detected without crashing in the future. There is a
rather longish version of this story in my development journal 
<https://docs.google.com/document/d/1CoEdOKE3l1HR-56pKpe2e8bpuy3Wd4sxTzWO3dk-xbY/edit?usp=sharing>.
 That document also contains details on the other miscellaneous bugs I've fixed 
along the way. Of course, there are other kinds of problems which might occur 
with respect to the C code, and those other types of problems might not be 
caught. But the majority of the use-after-free problems should be caught 
without a crash. Ask if you want access to that link to my development journal. 
It's just me talking to myself...
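
For the curious, the invariant being checked can be sketched in a few lines. 
This is a Python toy, purely illustrative - the real protection lives in the 
project's C code:

```python
class RefCounted:
    """Toy reference-counted object that detects use-after-free.

    Illustration only -- the Assimilation C structures have their own
    protection; this just shows the invariants the instrumentation checks.
    """
    def __init__(self):
        self.refcount = 1
        self.freed = False

    def ref(self):
        # Touching a freed object is reported instead of crashing.
        assert not self.freed, "use-after-free: ref() on freed object"
        self.refcount += 1

    def unref(self):
        assert not self.freed, "use-after-free: unref() on freed object"
        assert self.refcount > 0, "refcount underflow"
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True   # "poison" the object instead of really freeing
```

With checks like these, a mismatched ref/unref shows up as a diagnostic 
rather than a crash.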

Another bug: "Address xxx does not belong on this subnet". This was for a /32 
address (the IPv6 equivalent is a /128). It should be fixed now.
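
For illustration, Python's `ipaddress` module makes the /32 corner case easy 
to see (this is not the Assimilation code, just a demonstration of why 
single-host netmasks need care):

```python
import ipaddress

# A /32 network is a single-host "subnet": it contains exactly one address.
host_net = ipaddress.ip_network("192.0.2.7/32")
assert host_net.num_addresses == 1
assert ipaddress.ip_address("192.0.2.7") in host_net

# The same address is NOT on a neighboring /32 -- so code assuming every
# address shares a multi-host subnet with its peers breaks on this case.
assert ipaddress.ip_address("192.0.2.8") not in host_net

# The IPv6 equivalent: a /128.
v6 = ipaddress.ip_network("2001:db8::1/128")
assert v6.num_addresses == 1
```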

-- 
  Alan Robertson
  al...@unix.sh
___
Assimilation mailing list - Discovery-Driven Monitoring
Assimilation@lists.community.tummy.com
http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
http://assimmon.org/


[Assimilation] Assimilation progress report: Assimilation crash bug fixed? [YAY!!!]

2020-10-05 Thread Alan Robertson
Hi,

Below is the executive summary followed by the "story-form" of my report.

*CMA (Collective Management Authority)*
*Wins*: Terrible, nasty, horrible CMA crash appears to be fixed by finding 
out-of-date ctypes bindings. Several other bugs fixed. New memory-problem 
detection software written and installed, and much better diagnostic tools now 
in place (some written, some found).
*Situation*: No doubt many more bugs to fix.
*Issues*: At least two bugs that I observed need more investigation. I can 
only connect one nanoprobe, for some unknown reason. The new memory-detection 
methods are expensive, so they can't stay enabled in production.
*Next Action*: Fix more bugs. Fix nanoprobe connection issue.

*Nanoprobe*
*Wins*: None
*Situation*: Some regression.
*Issues*: Nanoprobes not running on the CMA node can't seem to connect to the 
CMA. Nanoprobes shutting down don't appear to declare themselves dead. 
Difficult to run unit tests in new environment.
*Next Action*: Add discovery scripts, and other directories and files.

*Story form report:*
Along the way, I discovered that something in the new build process allowed an 
old set of ctypes bindings to persist. Getting rid of that seems to have made 
the crash bug go away. Bad ctypes bindings have caused this problem before, so 
it's not surprising - if that was the real problem.

Either that, or it was the presence of much better debugging tools for the 
C/Python boundary, and the threat of good tools made it go into hiding ;-). 
[Some I found, and some I built.] But if it wasn't just stale bindings, then 
the good tools will make finding it much easier!

With this out of the way, making progress should be much easier. I fixed a 
handful of other bugs today, and will continue to do that no doubt for a while. 
The next one to attack is probably nanoprobes from other machines having 
trouble connecting.


-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Assimilation Progress report

2020-08-22 Thread Alan Robertson
Hi,

I'm going to try a new format of progress report, adapted from how my 
organization at Charter does it. Let me know if you have thoughts about the 
format. Anyone who can volunteer to run a copy of the nanoprobe and make sure 
it at least starts up correctly would be very much appreciated. I'm pretty 
excited about the progress made in recent weeks!

*Nanoprobe builds*
*Wins*: Every test so far has worked exactly as expected. Build concept and 
method appear to be working perfectly.
*Situation*: Tested on every version of Ubuntu going back to 2016, along with a 
few versions of Debian. Tests of many versions of Debian now integrated into 
our CI testing.
*Issues*: Need to test some recent versions of CentOS and SuSE. Hoping for 
more volunteers to test more versions. Should also investigate whether we 
should build glib2 and zlib from source as well.
*Next Action*: Copy to AssimilationSystems.com (CentOS) and try there. Try a 
few versions under Docker(?).

*Nanoprobe packaging*
*Wins*: First round attempts of building .deb packages appear to have succeeded.
*Situation*: Only the nanoprobe itself is currently included in the packages. 
Need to add other needed directories and discovery scripts.
*Issues*: None
*Next Action*: Add discovery scripts, and other directories and files.

*CMA (Collective Management Authority)*
*Wins*: Fixed a few bugs.
*Situation*: Making progress, a bit slowly. Testing uses the containerized 
version of Neo4j and an un-containerized version of the CMA.
*Issues*: At least one bug was seen which has not recurred. It seems likely it 
will come back.
*Next Action*: Continue testing and fixing bugs.

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Can you run this and make sure it starts up on your Linux box?

2020-08-06 Thread Alan Robertson
Hi,

It is my belief that the new method of building the nanoprobe results in a 
universal binary - one that can be run on 64-bit Intel machines with any glibc 
no older than the one in CentOS 6. This is the version of the nanoprobe I'm 
testing in my environment, and it seems to be fine on the latest Ubuntu stable 
version.

What I would like for you to do is see if it starts up correctly on the 
version(s) of Linux you have handy, and report your results to the mailing list 
(or just to me, your choice). Please include your OS version and/or glibc 
version.

Use this command line:
./nanoprobe --foreground

Although in practice, the nanoprobe needs to run as root, for this test you 
don't need to run it as root. It should start up and produce messages like this:

 ** (nanoprobe:16129): CRITICAL **: create_pid_file.474: Cannot create pid file 
[/var/run/nanoprobe]. Reason: Failed to create file 
'/var/run/nanoprobe.SVGKO0': Permission denied
create_pid_file.476: Cannot create pid file [/var/run/nanoprobe]. Reason: 
Failed to create file '/var/run/nanoprobe.SVGKO0': Permission denied

The sha256 checksum for this version is:
4be5fa9116cdf08a0eff3a60585af1832230df213f5fe734ea6f3c6ce8001640  nanoprobe

This is the same value in the sums file below. Please verify your checksum 
before trying it. Links to the executable and checksum file are below:

Dropbox nanoprobe link: https://www.dropbox.com/s/0rkjg9yr2mmeggt/nanoprobe?dl=0
Checksums link: https://www.dropbox.com/s/4p0w4r8naakrg7c/sums?dl=0
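
If you prefer, the verification can also be scripted; here's a minimal Python 
sketch (the file name is assumed to match the download above):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published value before running the binary:
expected = "4be5fa9116cdf08a0eff3a60585af1832230df213f5fe734ea6f3c6ce8001640"
# assert sha256_of("nanoprobe") == expected, "checksum mismatch -- do not run!"
```

On most Linux systems `sha256sum -c sums` does the same job from the shell.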

If you'd rather build your own version, then pull the current rel2 branch and 
then:
cd docker
./dockit
This should work for you, and it's my belief that it will produce exactly the 
same nanoprobe checksum (in the docker/nanoprobe directory). That result would 
also be interesting to me if you try it.

Curious to hear how it works for you.

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] GitHub "project" page for the Assimilation 2.0.0 release.

2020-08-03 Thread Alan Robertson
I recently discovered the Project pages for GitHub. They work a bit like a 
simplified version of Trello, but are tied into GitHub issues. You can find 
this page here:  https://github.com/orgs/assimilation/projects/1

It has most of the things I want to do in order to put out the 2.0.0 release.

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Assimilation update... [build system stuff]

2020-01-08 Thread Alan Robertson
JC found some things I'd overlooked in the nanoprobe build process. I believe 
that's now taken care of.

I am taking off Thursday and Friday to see if I can get the CMA container 
built and operating correctly.

JC has done a bunch of work on ensuring the integrity of our build process. 
This is particularly important since we are building Python, libsodium, and 
libpcap from source tarballs. I did a bit more this afternoon to make it 
better still. Once JC incorporates my changes into his, we'll have a 
high-integrity build chain. 
[https://github.com/assimilation/assimilation-official/blob/rel_2_dev/docker/rel2/getsigningkey.py].

We are building our nanoprobe binary so that it will run on every version of 
64-bit Intel Linux (similar to Go binaries), using meson. That seems to work 
as advertised - although we obviously haven't tried it everywhere yet ;-).

Nanoprobe builds appear to be 100% reproducible, and common across all Intel 
Linux systems. JC will still build packages for them - at least one RPM and 
one DEB package (or more if we need them). But since a package only contains 
text files and a universal binary, it seems like one RPM and one DEB for each 
architecture may very well be enough.

As I mentioned before, I'll be using Docker as a packaging mechanism for the 
CMA, and also for Neo4j.

Given that this emphasis on changing the build chain is delaying the release, 
you might wonder why I'm putting so much emphasis on it. The reason is simple: 
The old code was hard to install, and the installation script was fragile and 
hard to trust. Nanoprobes have to work everywhere. I don't have time to rewrite 
the C code in Go, so this is a good alternative.

By the way, we've been using Keybase (keybase.io) for project chats. It seems 
pretty cool...
  Assimilation Keybase Team: https://keybase.io/team/assimilation


  -- Alan




-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Two steps forward...

2019-12-31 Thread Alan Robertson
And one step back.

There is an unfortunate interaction between ctypesgen (which I've mentioned 
before) and the latest versions of Python 3.7 and 3.8.  Things were working 
great, and then Python upgraded itself. It took me several days to figure out 
what was going on and untangle it and figure out how to work around it. But 
that's done now.

Now, I'm off to creating a version of the CMA container to run the system tests 
against...

If you are interested in the details of the unfortunate interaction, you can 
read the details here: https://github.com/davidjamesca/ctypesgen/issues/77

It's possible I'll be a few days into 2020 before this gets out. I guess 
that's what vacation is for...

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] New build procedure picture...

2019-12-27 Thread Alan Robertson
Hi,

Below (and attached) is my current thinking on the new build procedure:


Lots of this is working - but not all of it ;-). I do have a few days left ;-)

The only thing I'm not sure of is whether the CMA container can directly use 
the Assimilation client shared library produced by the CentOS 6 (old crufty 
libc) build. If it can't, then we'll have to build it in the CMA container, 
then clean it up afterwards.

Our final target is a CMA container, and a fully-bound (universal-for-Linux) 
nanoprobe binary - which will then be packaged using fpm into a variety of 
target packages - as many as needed for the various OS versions. It *might* be 
the case that we only need a single RPM version and a single .deb version. Not 
sure yet. But all they are doing is making the nanoprobe binaries easy to 
install and manage across OS versions.

-- 
 Alan Robertson
 al...@unix.sh


Re: [Assimilation] Assimilation Release 2 and end of year

2019-12-23 Thread Alan Robertson
Unit tests are working again.

On Fri, Dec 20, 2019, at 1:45 PM, Alan Robertson wrote:
> I'm off between now and the end of the year.
> 
> My main goal besides family things associated with Christmas and New 
> Year's is to get Release 2 out before the end of the year.
> 
> -- 
>   Alan Robertson
>   al...@unix.sh

-- 
  Alan Robertson
  al...@assimilationsystems.com


[Assimilation] Assimilation Release 2 and end of year

2019-12-20 Thread Alan Robertson
I'm off between now and the end of the year.

My main goal besides family things associated with Christmas and New Year's is 
to get Release 2 out before the end of the year.

-- 
  Alan Robertson
  al...@unix.sh


Re: [Assimilation] Assimilation: Looking for help on...

2019-08-28 Thread Alan Robertson
Add the in-progress tag, and drop the help-wanted tag from the GitHub issue if 
you're going to work on it. I've cc'ed the mailing list...

On Wed, Aug 28, 2019, at 12:03 PM, borgified wrote:
> I figured out a way to get fpm used at work, so this is the 2nd day I'm 
> playing with this tool and it's pretty great. I already got rpm and deb 
> packages out of it. Today I'm figuring out Homebrew (Mac packaging). Super 
> excited to apply what I've learned so far... just need time!
> 
> -- JC
> 
> On Sat, Aug 24, 2019 at 2:18 PM Alan Robertson 
>  wrote:
>> I'd love some help with these items (mentioned in my previous email):
>> 
>>  - Update the cmake files to make nanoprobe as a fully bound binary
>>  - Create nanoprobe packages with FPM - for CentOS/RedHat, OpenSUSE, 
>> Ubuntu/Debian, (Snap?, Others?)
>>  - Create a CMA docker image
>> 
>>  I created Issues for them in GitHub. You can find them here with a [help 
>> wanted] tag.
>> 
>>  -- Alan
>> 
>>  On Sat, Aug 24, 2019, at 11:16 AM, Alan Robertson wrote:
>>  > Hi,
>>  > 
>>  > We now have a good version of ctypesgen to work from (I'm now a 
>>  > committer on the project) and the automated unit tests all pass with 
>>  > flying colors under Python 3.7. This includes using a containerized 
>>  > version of Neo4j, and managing it as a container.
>>  > 
>>  > My goal is for the primary packaging of the CMA to be a docker 
>>  > container. This will eliminate most of the installation pain. It won't 
>>  > _require_ being in a container, but that's how I'm going to package it.
>>  > 
>>  > I'm also thinking the right way to package the nanoprobe is as a 
>>  > fully-bound binary - which is the approach that Go and Ubuntu's Snap 
>>  > use. Then use FPM to create packages to install the nanoprobe on 
>>  > various OSes - all from the exact same fully-bound binary. Cmake is 
>>  > pretty terrible at creating packages (as I found out). In the process, 
>>  > I will make sure that the nanoprobe will make any directories it needs, 
>>  > etc.
>> 
>>  -- 
>>  Alan Robertson
>> al...@assimilationsystems.com

-- 
 Alan Robertson
 al...@assimilationsystems.com


[Assimilation] Assimilation: Looking for help on...

2019-08-24 Thread Alan Robertson
I'd love some help with these items (mentioned in my previous email):

- Update the cmake files to make nanoprobe as a fully bound binary
- Create nanoprobe packages with FPM - for CentOS/RedHat, OpenSUSE, 
Ubuntu/Debian, (Snap?, Others?)
- Create a CMA docker image

I created Issues for them in GitHub. You can find them here with a [help 
wanted] tag.

  -- Alan

On Sat, Aug 24, 2019, at 11:16 AM, Alan Robertson wrote:
> Hi,
> 
> We now have a good version of ctypesgen to work from (I'm now a 
> committer on the project) and the automated unit tests all pass with 
> flying colors under Python 3.7. This includes using a containerized 
> version of Neo4j, and managing it as a container.
> 
> My goal is for the primary packaging of the CMA to be a docker 
> container. This will eliminate most of the installation pain. It won't 
> _require_ being in a container, but that's how I'm going to package it.
> 
> I'm also thinking the right way to package the nanoprobe is as a 
> fully-bound binary - which is the approach that Go and Ubuntu's Snap 
> use. Then use FPM to create packages to install the nanoprobe on 
> various OSes - all from the exact same fully-bound binary. Cmake is 
> pretty terrible at creating packages (as I found out). In the process, 
> I will make sure that the nanoprobe will make any directories it needs, 
> etc.

-- 
  Alan Robertson
  al...@assimilationsystems.com


[Assimilation] Assimilation Status update and thoughts - 2019/August

2019-08-24 Thread Alan Robertson
Hi,

We now have a good version of ctypesgen to work from (I'm now a committer on 
the project) and the automated unit tests all pass with flying colors under 
Python 3.7. This includes using a containerized version of Neo4j, and managing 
it as a container.

My goal is for the primary packaging of the CMA to be a docker container. This 
will eliminate most of the installation pain. It won't _require_ being in a 
container, but that's how I'm going to package it.

I'm also thinking the right way to package the nanoprobe is as a fully-bound 
binary - which is the approach that Go and Ubuntu's Snap use. Then use FPM to 
create packages to install the nanoprobe on various OSes - all from the exact 
same fully-bound binary. Cmake is pretty terrible at creating packages (as I 
found out). In the process, I will make sure that the nanoprobe will make any 
directories it needs, etc.
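
Making sure required directories exist is cheap to do at startup; a minimal 
illustration in Python (the directory names here are made up - the nanoprobe 
itself is C code, so this only sketches the idea):

```python
import os

def ensure_dirs(*paths):
    """Create each directory (and parents) if missing; no error if present."""
    for path in paths:
        os.makedirs(path, exist_ok=True)

# Hypothetical example paths -- not the nanoprobe's actual directories.
ensure_dirs("/tmp/assim-demo/var/run", "/tmp/assim-demo/var/lib")
```

The call is idempotent, so it's safe to run on every startup.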

FPM => https://fpm.readthedocs.io/en/latest/intro.html

My next tasks are:

1) Completely rewrite the queries that start discovery from values in the JSON 
discovery data (necessary)
2) Get the system-level tests running again

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Python3 Assimilation progress...

2019-07-27 Thread Alan Robertson
All the "normal" tests now pass with Python3, with a slightly hacked ctypesgen 
output file. Since I'm using my version of ctypesgen, this should be easy to 
fix. But I'd like to add a test case for this anomaly in ctypesgen before I put 
out a new version of it. I also finally reached the ctypesgen developers, and 
they seem interested in picking up these changes, and letting me become a 
ctypesgen developer.

Regarding the one remaining broken test...
There is a set of "main" programs embedded in a number of Python files. Those 
fail with import errors. I'm confident that this is due to something about the 
test methodology, not the Assimilation code. I'll figure that out and fix it.

But this is very, very good progress. I'm nearly ready to start developing on 
Assimilation again. And I have a hard deadline (end of year) for getting this 
release out - which is a good thing too.

I'd guess this will take me a day or two to figure out or work around.

Once I get this done, there is one set of queries whose underlying code I need 
to rewrite. I also need to run the system-level tests. The progress so far 
feels really good!

-- 
  Alan Robertson
  al...@unix.sh


Re: [Assimilation] Ctypesgen, python 3, etc

2019-07-24 Thread Alan Robertson
I just generated the Assimilation ctypes bindings with the new ctypesgen I 
mentioned before, and ran my unit tests - which exercise pretty much all the 
C/Python bindings I use - and they passed!

So, that's progress!!

  -- Alan

On Wed, Jul 24, 2019, at 8:24 AM, N.J. van der Horn (Nico) wrote:
> Thumbs up, Die Hard 2019 !
> 
> Respect,
> Nico van der Horn
> 
> Op 24-7-2019 om 14:24 schreef Alan Robertson:
> > Hi,
> >
> > As many of you know, Python 2 reaches end of life at the end of this year. 
> > So, I've been working to get the current code moved to Python 3.
> >
> > One of the big impediments in this process is the fact that our code relies 
> > heavily on ctypesgen which has been unmaintained for several years. This 
> > caused problems with recent versions of libc which have some floating point 
> > constants which it didn't recognize. The code even had some Python 1 
> > constructs in it. Parts of it were a bit crufty.
> >
> > I've just finished a preliminary port of ctypesgen to python 3 including 
> > fixing these bugs, getting rid of some crufty constructs, reformatting the 
> > code with black, and updating the tests to use tox+pytest - my favorite 
> > combination. You can find that code here: 
> > https://github.com/Alan-R/ctypesgen This version works with Python 2 or 
> > Python 3 and all the libraries on my recent Ubuntu system.
> >
> > With this taken care of, I can now move the rest of the Assimilation code 
> > to Python >= 3.7 (exclusively). There are quite a few advantages to python 
> > 3. For example, the ctypesgen tests run about twice as fast on Python 3.7 
> > as they did on Python 2.7. It also supports type declarations and checks 
> > using mypy. I've been using Python 3.7 for quite a while at work, and I 
> > love it.
> >
> > I still plan on packaging the CMA as a docker container, which means 
> > dependency problems disappear, and ease of installation goes way up. Of 
> > course, if you want to build it yourself, you can always run it outside of 
> > a container.
> >
> > I've been using gRPC at work. I'm thinking about that for some of the CMA 
> > APIs. There are good bindings for most languages. In particular, there is a 
> > need for an event API to get notified when various kinds of events occur in 
> > the collective.
> >
> 

-- 
  Alan Robertson
  al...@assimilationsystems.com


[Assimilation] Containerized Neo4j

2019-07-06 Thread Alan Robertson
As I mentioned a few months ago, I've moved towards running Neo4j from a 
container by default. I just got the unit tests working right.

So, I haven't given up on this - just been busy at work.

When/If I can get the things I've been doing at work available as open source, 
Assimilation could make really good use of some of it.

-- 
  Alan Robertson
  al...@unix.sh


Re: [Assimilation] Neo4j in a container...

2019-04-07 Thread Alan Robertson
Gabe, 

I think you're also missing some context here - or assuming it.

I'm not certain that I'll be starting or stopping Neo4j in production from the 
Assimilation code. I might start it if it's not started, but that would be all.

But I will _certainly_ be managing it in the tests. And you certainly don't 
want the tests to crap on your "production" database like they do now. Right 
now for that and other reasons, only a few people run the tests. I'd like for 
others to be able to run them without as much arcane knowledge as now.

At the moment, I can't run tests because of a breakage between them and Neo4j. 
That's what motivated writing this code - and opening up the discussion on 
containers.

 -- Alan

On Sun, Apr 7, 2019, at 4:11 PM, Alan Robertson wrote:
> Thanks for looking at the code.
> 
> It is very opinionated* by design* - because it knows what it's doing so you 
> don't have to. It's only for running Neo4j - and only standalone. It knows 
> Neo4j. I'm OK with it knowing Neo4j. That's all it's supposed to know. It's 
> not general for any container. Running Neo4j in a cluster is a completely 
> different deal. Then you're probably dealing with k8s or Docker swarm - or 
> build-your-own infrastructure. Running it on a different machine is also a 
> different deal.
> 
> The only assumption it makes that's specific to Assimilation is the default 
> name of the container. It's not an issue - I just picked one.
> 
> If you know of something else that's not guaranteed to be correct, then 
> please do us all a favor, and detail it. General rants without specific 
> complaints aren't as helpful as specific things. Or better yet, pull requests 
> ;-)
> 
> Goals:
>  - to use Neo4j's Docker image
>  - to make it simple to use
>  - to make it reliable to use
>  - avoid going beyond Neo4j's documented and recommended methods and 
> procedures
> 
> Assumptions for this subclass:
>  - It's running Neo4j standalone (not clustered)
>  - Neo4j is running on the same machine as the CMA
> 
> Things that are OK or advantageous
>  - knowing a lot about Neo4j
>  - use the Docker API rather than the command line
>  - living within the assumptions above
>  - making other assumptions consistent with normal practice (e. g. a database 
> normally owns its files - 
>  it's typically a requirement). Of course, I could always add an option that 
> would allow you to screw up
>  to support the extremely rare case where you don't want your database to own 
> its files...
>  Me, I don't like screwing up...
> 
> Other implementations would be more complex - involving things like ssh or 
> k8s commands and so on which all introduce their own problems.
> 
> But they would all be different subclasses. Feel free to write them :-D.
> 
> The code does go beyond Neo4j's normal documented procedures in two ways:
>  - it allows setting the Neo4j password if you've forgotten the current one 
> (force=True)
>  - it is able to completely wipe out a database (useful for testing). Don't 
> use this in production ;-).
>  I've needed both of these things when developing.
> 
> Which of these assumptions do you object to?
> 
> I'm not trying to say this is the only way you can run it, but it's what will 
> happen if you install it out of the box. I want it to work immediately out of 
> the box. That's a great goal, IMHO...
> 
>  -- Alan
> 
> 
> 
> On Sun, Apr 7, 2019, at 2:46 PM, Gabe Nydick wrote:
>> Just a follow up. I've looked at the commit and the implementation and it 
>> proves my point exactly on how not to provide software. The runtime provided 
>> is entirely opinionated. It makes all sorts of assumptions about what will 
>> need to be done and what parameters are needed. Depending on the container 
>> implementation, these things are not guaranteed to be correct.
>> 
>> Old school Dockerfiles have all of those parameters and assumptions. People 
>> don't write those kind of Dockerfiles anymore for the reasons I mentioned. 
>> Orchestration and deployment require flexibility so complex Dockerfiles for 
>> runtime are going away, they're only remaining for build.
>> 
>> The proper way to implement a runtime for a to-be-containerized application 
>> is to provide no specifics. What I mean by that is anything provided by the 
>> vendor should be transparent to the config and environment flowing through 
>> it from the OS/orchestration system to the software, not a script with set 
>> parameters. The runtime Dockerfile could be as simple as
>> 
>> FROM assimilation:3.2-build-10
>> 
>> 
>> and that's it. The orchestration/runtime will provide everything else.
>> 
>> 
>> 
>> On Sun, A

Re: [Assimilation] Neo4j in a container...

2019-04-07 Thread Alan Robertson
e is who's writing the 
>> operational software. The actual implementation still takes the same 
>> forethought and experience. Scratch that, you now have to give MORE thought 
>> and have MORE experience to be able to think through the many layers of 
>> abstraction. Each of these layers has added more complexity and failure 
>> modes without solving all of the complexity and modes below them.
>> 
>> You're going to be handing your software to people to run in whatever 
>> environment they're used to.
>> 
>> Here's literally my best and most serious advice, off the top of my head.
>>  1. Forget about the container approach as a solution and keep it as a POC 
>> implementation for people to download and run. If they run infra, they'll be 
>> able to figure out how to install neo4j for themselves
>>  2. Make the software easily managed. That basically means...
>>1. allow for all boot-time config to be passed via command line 
>> parameters AND/OR environment variables.
>>2. The configuration file grammar should be rock solid. For example, don't 
>> follow Hashicorp's example with HCL. It's a config that can be defined via 
>> JSON, but at the same time you can use YAML, with YAML being the mainstream, 
>> yet in the YAML you can have objects with the same name. If you try to convert 
>> that config to JSON for easy processing and automation, you're fucked.
>>3. an effective but simple control plane should be implemented, for 
>> example `assim start`, `assim stop`, `assim status`, on both command line 
>> and network RPC
>>4. fast and accurate state inspection. be able to tell the health of the 
>> system (assim) in no more than 5 minutes, including any runtime state needed 
>> to make a decision on how to proceed when recovering from failure.
>>5. don't bind successful exec of the process to anything else, if the 
>> supporting systems aren't up yet, just hang out and wait until they are. Of 
>> course there are implications of that that I know you know what they are.
>>6. provide a network RPC endpoint for the aforementioned state 
>> inspection, generally it's http, but if you want to make inroads in the tech 
>> world, you'll need to also support thrift, fb-thrift, and protobuffs.
>> 
>> There may be more things, but that's off the top of my head.
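A minimal sketch of points 2.1 and 2.3 above, assuming nothing about the real project: every boot-time setting is reachable both as a command-line flag and as an environment variable (flag wins), and the control-plane verbs are plain subcommands. All option and variable names here are invented for illustration.

```python
import argparse
import os

def make_parser() -> argparse.ArgumentParser:
    """Each setting can come from a CLI flag OR an env var; the flag wins."""
    p = argparse.ArgumentParser(prog="assim")
    p.add_argument("--neo4j-url",
                   default=os.environ.get("ASSIM_NEO4J_URL",
                                          "bolt://localhost:7687"))
    p.add_argument("--listen-port", type=int,
                   default=int(os.environ.get("ASSIM_LISTEN_PORT", "1984")))
    # A minimal control plane as subcommand-style verbs:
    p.add_argument("command", choices=("start", "stop", "status"))
    return p

args = make_parser().parse_args(["--listen-port", "4242", "status"])
print(args.command, args.listen_port)  # status 4242
```

The same parser serves an orchestrator (env vars injected by the runtime) and a human at a shell (flags), with no separate code paths.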
>> 
>> In conclusion, you shouldn't be responsible for getting the components to 
>> work on every system, just make sure they work properly together without any 
>> sort of exotic requirements and trust the implementers will know how to 
>> install the components.
>> 
>> G
>> 
>> On Sun, Apr 7, 2019 at 12:45 PM Alan Robertson  wrote:
>>> Hi,
>>> 
>>>  For a variety of reasons, I've decided to move some things in Assimilation 
>>> to containers. The first of these is Neo4j.
>>> 
>>>  This is for several reasons:
>>> 
>>>  - I've had trouble with some versions of Neo4j not installing correctly 
>>> under some OSes in some circumstances.
>>>  - I can much more easily control where it puts database files and logs
>>>  - It's much easier to start and stop and manage a container than it is to 
>>> manage a service
>>>  - The same APIs work in Windows and on the Mac too ;-)
>>> 
>>>  I'm treating Docker as a universal packaging method. Yay universal 
>>> packaging!
>>> 
>>>  Anyway, I've just committed the source for this code. It was a bit bigger 
>>> than I thought (300+ lines), but it's clean and it works.
>>> 
>>>  You can find it here:
>>> https://github.com/assimilation/assimilation-official/blob/rel_2_dev/cma/neo4j.py
>>> 
>>>  -- 
>>>  Alan Robertson
>>>  al...@unix.sh
>>>  ___
>>>  Assimilation mailing list - Discovery-Driven Monitoring
>>> Assimilation@lists.community.tummy.com
>>> http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
>>> http://assimmon.org/

--
 Alan Robertson
 al...@unix.sh



[Assimilation] Neo4j in a container...

2019-04-07 Thread Alan Robertson
Hi,

For a variety of reasons, I've decided to move some things in Assimilation to 
containers. The first of these is Neo4j.

This is for several reasons:

 - I've had trouble with some versions of Neo4j not installing correctly under 
some OSes in some circumstances.
 - I can much more easily control where it puts database files and logs
  - It's much easier to start and stop and manage a container than it is to 
manage a service
  - The same APIs work in Windows and on the Mac too ;-)

I'm treating Docker as a universal packaging method. Yay universal packaging!

Anyway, I've just committed the source for this code. It was a bit bigger than 
I thought (300+ lines), but it's clean and it works.

You can find it here:
https://github.com/assimilation/assimilation-official/blob/rel_2_dev/cma/neo4j.py

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Merry Christmas and a Happy New Year!

2018-12-26 Thread Alan Robertson
Hi,

Hoping you all have a Merry Christmas and a Happy New Year - or all the 
holidays you celebrate at this time of year!

I will eventually send out an annual (Christmas?) letter as soon as my wife and 
I write it ;-)



PS: I just wrote the code to do the missing queries for release 2. Now all I 
have to do is make it work ;-)

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Progress on JSON queries...

2018-12-10 Thread Alan Robertson
As I mentioned in an earlier email, I've moved the JSON data into sqlite3. 
Going from the graph nodes to the JSON values works smashingly well, and is 
well-tested by the unit tests.

Recently, I've been looking at how to go the other direction - querying from 
JSON attributes back to graph nodes. Sqlite3 lets me write queries like that 
(on the JSON).  I now have a basic understanding of how to write those queries 
and make some sense out of them.

As an example, JSON for discovery of file attributes looks something like this:
{
  "discovertype": "fileattrs",
  "description": "file and directory attributes1",
  "host": "servidor",
  "proxy": "local/local",
  "source": "/home/alanr/monitor/src/discovery_agents/fileattrs",
  "data": {
"/bin/": {"owner": "root", "group": "root", "type": "d", "perms": 
{"owner":{"read":true, "write":true, "exec":true, "setid":false}, "group": 
{"read":true, "write":true, "exec":true, "setid":false}, "other": {"read":true, 
"write":false, "exec":true}, "sticky":true}, "octal": "0755"},
"/bin/ash": {"owner": "root", "group": "root", "type": "l", "perms": 
{"owner":{"read":true, "write":true, "exec":true, "setid":false}, "group": 
{"read":true, "write":true, "exec":true, "setid":false}, "other": {"read":true, 
"write":true, "exec":true}, "sticky":false}, "octal": "0777"},
"/bin/bash": {"owner": "root", "group": "root", "type": "-", "perms": 
{"owner":{"read":true, "write":true, "exec":true, "setid":false}, "group": 
{"read":true, "write":false, "exec":true, "setid":false}, "other": 
{"read":true, "write":false, "exec":true}, "sticky":false}, "octal": "0755"},
"/bin/bunzip2": {"owner": "root", "group": "root", "type": "-", "perms": 
{"owner":{"read":true, "write":true, "exec":true, "setid":false}, "group": 
{"read":true, "write":false, "exec":true, "setid":false}, "other": 
{"read":true, "write":false, "exec":true}, "sticky":false}, "octal": "0755"},
"/bin/busybox": {"owner": "root", "group": "root", "type": "-", "perms": 
{"owner":{"read":true, "write":true, "exec":true, "setid":false}, "group": 
{"read":true, "write":false, "exec":true, "setid":false}, "other": 
{"read":true, "write":false, "exec":true}, "sticky":false}, "octal": "0755"},
"/bin/bzcat": {"owner": "root", "group": "root", "type": "-", "perms": 
{"owner":{"read":true, "write":true, "exec":true, "setid":false}, "group": 
{"read":true, "write":false, "exec":true, "setid":false}, "other": 
{"read":true, "write":false, "exec":true}, "sticky":false}, "octal": "0755"},
"/bin/bzcmp": {"owner": "root", "group": "root", "type": "l", "perms": 
{"owner":{"read":true, "write":true, "exec":true, "setid":false}, "group": 
{"read":true, "write":true, "exec":true, "setid":false}, "other": {"read":true, 
"write":true, "exec":true}, "sticky":false}, "octal": "0777"},
"/bin/bzdiff": {"owner": "root", "group": "root", "type": "-", "perms": 
{"owner":{"read":true, "write":true, "exec":true, "setid":false}, "group": 
{"read":true, "write":false, "exec":true, "setid":false}, "other": 
{"read":true, "write":false, "exec":true}, "sticky":false}, "octal": "0755"}
}
}

A query looking for files with sticky bits looks like this:
SELECT hash, key, value
FROM (HASH_fileattrs
CROSS JOIN json_each(json_extract(data, '$.data'))
AS result) WHERE json_extract(result.value, '$.perms.sticky') == 1

The output from this query looks like this:
(u'1c87c30fb3119f04ab26bfc98ee5a9f91d55e420fee1f9d630e4bcf0', u'/bin/', 
u'{"owner":"root","group":"root","type":"d","perms":{"owner":{"read":true,"write":true,"exec":true,"setid":false},"group":{"read":true,"write":true,"exec":true,"setid":false},"other":{"read":true,"write":false,"exec":true},"sticky":true},"octal":"0755"}')

Although I don't know any reason to index this particular table, I could create 
indexes on file names, or owners, or permissions, etc. It takes a bit of 
getting used to, but it looks like it can get the job done.
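The query pattern above is easy to try end-to-end. Below is a runnable sketch: the schema and the (shortened) hash value are invented for illustration, but the `json_each()`/`json_extract()` calls mirror the sticky-bit query. Note that JSON1 represents JSON `true` as the integer 1, hence the comparison.

```python
import json
import sqlite3

# Requires an SQLite built with the JSON1 functions (the default in modern builds).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE HASH_fileattrs (hash TEXT PRIMARY KEY, data TEXT)")

doc = {"data": {
    "/bin/":     {"type": "d", "perms": {"sticky": True},  "octal": "0755"},
    "/bin/bash": {"type": "-", "perms": {"sticky": False}, "octal": "0755"},
}}
conn.execute("INSERT INTO HASH_fileattrs VALUES (?, ?)",
             ("1c87c3...", json.dumps(doc)))  # hash shortened for the example

# json_each() fans the "data" object out into one row per file entry;
# its columns "key" and "value" hold the filename and its JSON sub-object.
rows = conn.execute(
    "SELECT hash, key FROM HASH_fileattrs "
    "CROSS JOIN json_each(json_extract(data, '$.data')) "
    "WHERE json_extract(value, '$.perms.sticky') = 1").fetchall()
print(rows)  # [('1c87c3...', '/bin/')]
```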

One could then write up a query against Neo4j using these hash values to relate 
back to the corresponding nodes in the graph database. It's a bit clumsy, but 
should be completely hidden from the customers.

  -- Alan



-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Progress...

2018-11-14 Thread Alan Robertson
It appears that I can bring up release 2 without it doing anything weird (like 
getting exceptions). It may not be doing everything right yet, but it's no 
longer doing anything obviously wrong... Most of the hangups had been around 
processing ARP discovery. That's where a lot of the new code about subnets and 
network segments lived...

There is one bit of development that still needs to be done before getting 
ReallySeriousAboutTesting(TM). The way I reorganized storing discovery data 
means I need to rewrite the classes that do queries. Queries that start in 
Neo4j and get the discovery (JSON) data are already handled. But those that 
need to query discovery data and go back and find the Neo4j data that goes with 
it will need a bit more thought.

But this is pretty exciting :-).

-- 
  Alan Robertson
  al...@assimilationsystems.com


[Assimilation] Unit tests working again...

2018-11-10 Thread Alan Robertson
Hi,

When I upgraded to Ubuntu 18.04 a few months back, I ran into two new problems 
that kept tests from working.

The first: something changed so that one of the tests has to be executed with 
the testing program in the current directory. I don't know what the underlying 
cause is, but the test now always runs in that directory.

The second was that one of the tests was looking for a " in a log message. The 
new version of glib on 18.04 changed that quote in the log message to a 
different double-quote character (maybe alt+34?) that displayed _almost_ the 
same in my terminal font. So I changed the part of the message it was expecting 
to not include the quote symbol.

Progress...



-- 
  Alan Robertson
  al...@assimilationsystems.com


[Assimilation] Getting the cobwebs off my keyboard ;-)

2018-10-01 Thread Alan Robertson
OK. My keyboard doesn't have cobwebs - but the Assimilation synapses in my 
brain had a few ;-)

I've started back on development on Assimilation :-D.

I had some concerns about using ctypesgen - but it looks like others have 
picked up the slack - and things look a bit more hopeful -- 
https://lists.osgeo.org/pipermail/grass-dev/2018-September/089586.html

I'm going to contact them and ask how this is going, and if they're going to 
split out ctypesgen from Grass...

I just pushed a commit which seems to cause one of the tests to hang - but it 
runs fine outside of pytest.  Sigh...

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Working through some bugs...

2018-04-25 Thread Alan Robertson
Dejan has been diligently getting things ready for release 2 by testing it. 
Silly Dejan! ;-)

But there seem to be some issues that need straightening out, and they're 
taking a while to get settled.

We are still working, just not as fast as we'd like.

Thought you deserved to know what's going on.

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] pylint...

2018-03-22 Thread Alan Robertson
While I was tearing up everything and putting it back together again, I 
neglected to run pylint for a very long time.  As a result, there were lots and 
lots of things that needed tweaking. It also found a few places where some of 
the code was likely broken - as a result of the tearing up, rebuilding, and not 
fully testing yet. The operational code (stuff in the cma directory) is now 
clean.  I committed the changes that I made and pushed them upstream.

For those who don't know how this works (which is likely everyone but me):

You cd into the cma directory then run
../buildtools/lintme.sh > some-file-name

And we should not have any warnings. I got all of them out of the operational 
code, but not out of the system tests.

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] CodeDocs.xyz - hosted Doxygen automatically generated from github

2018-03-22 Thread Alan Robertson
Hi,

I'd like to do something better for the site at assimproj.org. While looking 
around, I discovered codedocs.xyz.  I didn't do anything but sign up for the 
service, and I got some pretty good results for a first attempt:

https://codedocs.xyz/assimilation/assimilation-official/

There are ways to tell it to use our Doxygen config file. I'll give that a 
shot. It's automatically generated when a push to the default branch is done.

I just changed the default branch - not sure if it's had an effect on this - 
but I'm pretty sure I have to do it to change branch names anyway ;-)

  -- Alan


Re: [Assimilation] What do you think of containers?

2018-03-18 Thread Alan Robertson
Hi Joe,

My thoughts were to package only the CMA as a Docker container. And in
a perfect world, this would be the only way we'd supply the CMA.
The previous release had the ability to support container discovery, and
built all our packages using Docker, but didn't use containers as part
of the product.
Customers would still need to install a nanoprobe on each (container)
host so we could discover everything - including containers and what's
going on inside them.
FWIW - the container support in previous releases alluded to above
was only for Docker - not openvz or lxc. It wouldn't be  hard to add
this support, but someone sufficiently motivated (maybe you?) would
have to do it.
You can see the code to support discovery of docker containers and
vagrant VMs here:
https://github.com/assimilation/assimilation-official/blob/rel_2_dev/discovery_agents/assim_common.sh
Basically, you need to implement something analogous to "docker exec"
and "docker ps" for each supported encapsulation method. Note that I
didn't say "container" - since VMs can work this way too...
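A hypothetical sketch of that shim idea: one ps/exec pair per encapsulation method, behind a generic dispatcher. The function names are invented; only the docker and lxc CLI verbs themselves are real.

```shell
#!/bin/sh
# One pair of functions per encapsulation method; callers use the generic
# enc_ps/enc_exec pair and never care whether the target is a container or a VM.
enc_ps_docker()   { docker ps --format '{{.Names}}'; }
enc_exec_docker() { _n="$1"; shift; docker exec "$_n" "$@"; }
enc_ps_lxc()      { lxc list --format csv -c n; }
enc_exec_lxc()    { _n="$1"; shift; lxc exec "$_n" -- "$@"; }

# Dispatch on the method name, e.g.: enc_exec docker web01 cat /etc/os-release
enc_exec() { _m="$1"; shift; "enc_exec_$_m" "$@"; }
enc_ps()   { _m="$1"; shift; "enc_ps_$_m" "$@"; }
```

Adding openvz support would then just mean writing one more pair of functions.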
--
  Alan Robertson
  al...@unix.sh



On Sun, Mar 18, 2018, at 10:35 AM, jjs - mainphrame wrote:
> Hi Alan,
> 
> I've been running dns, mail and anti spam infrastructure, as well as
> nagios and spacewalk, in containers. Mostly openvz 7, also some ubuntu
> based lxc containers. Containers themselves are all centos (6 & 7) or
> ubuntu 16.04. So I welcome any momentum towards making containers a
> first tier platform.> 
> Joe
> 
> On Sun, Mar 18, 2018 at 7:38 AM, Alan Robertson <al...@unix.sh> wrote:>> Hi,
>> 
>> In the last release, I had to make releases against 12 different
>> Linux variants. I expect this won't change a lot.
>> Building the nanoprobes requires 2 libraries beyond libc - so
>> creating those packages is pretty easy.>> 
>> Building the CMA for most of these is much more complicated - it has
>> lots more dependencies.>> 
>> So, I'm toying with the idea of releasing the nanoprobe packages for
>> all those environments, and only releasing the CMA as a container -
>> against a single platform - probably CentOS 7. This container would
>> need to have a single IP address visible on the system-wide network,
>> and would have to be able to subscribe to IP multicast packets (where
>> available), but off hand, I don't think it should need any unusual
>> capabilities.>> 
>> It would mean that what I'm testing would be a bit more like what
>> people are running (although the OS can still throw a container
>> monkey wrench in - it's just not very likely).>> 
>> What do you folks think of this?
>>
>>
>>
>> --
>>Alan Robertson al...@unix.sh
>>  ___
>>  Assimilation mailing list - Discovery-Driven Monitoring
>>  Assimilation@lists.community.tummy.com
>>  http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
>>  http://assimmon.org/


[Assimilation] Progress report - SQLite for JSON

2018-03-18 Thread Alan Robertson
Hi,

I've put the code for this into source control and run the unit tests on it. It 
passes all the tests.  The integration between Neo4j and the SQLite JSON store 
was pretty straightforward - perhaps a bit more straightforward than I expected.

There is one place where I know there's a problem in a query or two that need 
to be rewritten, but the bulk of the system's capabilities don't depend on 
those queries.  There are also a great many additional queries that could be 
written given that we can now do more powerful queries against the JSON.

I'm not yet creating indexes on the JSON - which would be necessary to make 
these new queries run in reasonable times.

But it's good progress.

-- 
  Alan Robertson
  al...@unix.sh


Re: [Assimilation] Thinking about adding Postgres to the Assimilation project

2018-02-21 Thread Alan Robertson
I've looked at a JSON implementation on top of SQLite a bit more. It seems 
pretty fast. I haven't tried creating JSON indexes, or creating queries that 
use them. The only quirk I can find so far is that it turns JSON bool values 
into integers.  That seems tolerable.
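That quirk is easy to see directly; a minimal illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# JSON1 has no boolean type at the SQL level: true/false come back as 1/0.
val = conn.execute(
    "SELECT json_extract('{\"sticky\": true}', '$.sticky')").fetchone()[0]
print(val, type(val).__name__)  # 1 int
```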

At first blush - without explicit indexes - it seems to be writing about 
70 MB/sec over NFS to my server in the basement. I did declare a uniqueness 
constraint on the hash value - so it created a primary index.

Next thing is to create an implementation similar to the one I created for flat 
files - keeping in mind I want to be able to create queries that take advantage 
of indexes.

-- 
  Alan Robertson
  al...@unix.sh

On Wed, Feb 14, 2018, at 8:41 AM, Alan Robertson wrote:
> Hi Clint,
> 
> On Wed, Feb 14, 2018, at 7:19 AM, Clint Byrum wrote:
> > Excerpts from Alan Robertson's message of 2018-02-12 19:12:41 -0700:
> > > So, that's a total of 4 possible methods of storing the data...
> > > (a) in Neo4j, (b) in Postgres, (c) in flat files, (d) in S3.  Of 
> > > those four, only Postgres will provide indexing.
> > > If on the other hand, we break out big JSON blobs into individual nodes 
> > > in Neo4j, we can search that
> > > instead (as Michael Hunger suggested).
> > 
> > What about SQLite?
> 
> I hadn't thought of it to be honest.
> 
> > It's no worse than flat JSON and gives you transactions
> > and indexing. The major drawback is that it uses database-level locking,
> > so writes are going to serialize entirely.
> 
> Not sure what it does with JSON.  If it can't index the JSON, then 
> there's no advantage that I can see. Given the nature of this JSON data 
> (invariant), transactions are not an issue...
> 
> _[Goes away and reads stuff...]_
> 
> They state that it's faster and smaller in SQLite than in flat files. I 
> believe that. It's certainly fewer inodes ;-).
> 
> It looks like one can use SQLite with the JSON1 extension to store and 
> query JSON with indexes (indexes based on JSON expressions) if one is 
> careful on how one writes the queries. I suspect we can do that...
> 
> Write locking isn't likely to kill us. It'll clearly be much better than 
> what we've been doing. There are solutions to that if it becomes an 
> issue. 
> 
> I'll go away and read some more about locking and how it would interact 
> with our (disabled-by-default) REST interface to the database... 
> 
> Thanks for the suggestion!
> 
> _[Goes away to read more stuff...]_
> 
> It looks like WAL mode in SQLite solves the problem of concurrent 
> writers :-D.  And  its locking (in any of its modes) would be friendly 
> to our use of the database.
> 
> So this looks reasonable. I'll give it more thought. Maybe write some code...
> 
> Everyone should feel free to offer your insights and experiences 
> regarding SQLite...
> 
> 
> -- 
>   Alan Robertson
>   al...@unix.sh
> ___
> Assimilation mailing list - Discovery-Driven Monitoring
> Assimilation@lists.community.tummy.com
> http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
> http://assimmon.org/


Re: [Assimilation] Thinking about adding Postgres to the Assimilation project

2018-02-14 Thread Alan Robertson
Hi Clint,

On Wed, Feb 14, 2018, at 7:19 AM, Clint Byrum wrote:
> Excerpts from Alan Robertson's message of 2018-02-12 19:12:41 -0700:
> > So, that's a total of 4 possible methods of storing the data...
> > (a) in Neo4j, (b) in Postgres, (c) in flat files, (d) in S3.  Of those 
> > four, only Postgres will provide indexing.
> > If on the other hand, we break out big JSON blobs into individual nodes in 
> > Neo4j, we can search that
> > instead (as Michael Hunger suggested).
> 
> What about SQLite?

I hadn't thought of it to be honest.

> It's no worse than flat JSON and gives you transactions
> and indexing. The major drawback is that it uses database-level locking,
> so writes are going to serialize entirely.

Not sure what it does with JSON.  If it can't index the JSON, then there's no 
advantage that I can see. Given the nature of this JSON data (invariant), 
transactions are not an issue...

_[Goes away and reads stuff...]_

They state that it's faster and smaller in SQLite than in flat files. I believe 
that. It's certainly fewer inodes ;-).

It looks like one can use SQLite with the JSON1 extension to store and query 
JSON with indexes (indexes based on JSON expressions) if one is careful on how 
one writes the queries. I suspect we can do that...
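For instance, a sketch with an invented table: the pattern is an index on a `json_extract()` expression, which the query planner can use whenever a query repeats exactly the same expression.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE discovery (hash TEXT PRIMARY KEY, data TEXT)")
# An index on a JSON expression, not on a plain column (SQLite >= 3.9):
conn.execute("CREATE INDEX idx_host ON discovery (json_extract(data, '$.host'))")
conn.execute("INSERT INTO discovery VALUES ('h1', '{\"host\": \"servidor\"}')")

# The WHERE clause repeats the indexed expression, so the plan should
# mention idx_host rather than a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT hash FROM discovery "
    "WHERE json_extract(data, '$.host') = 'servidor'").fetchall()
print(plan[0][-1])
```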

Write locking isn't likely to kill us. It'll clearly be much better than what 
we've been doing. There are solutions to that if it becomes an issue. 

I'll go away and read some more about locking and how it would interact with 
our (disabled-by-default) REST interface to the database... 

Thanks for the suggestion!

_[Goes away to read more stuff...]_

It looks like WAL mode in SQLite solves the problem of concurrent writers :-D.  
And  its locking (in any of its modes) would be friendly to our use of the 
database.
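Turning WAL on is a single pragma (sketch only; the path below is a throwaway temp file, since WAL needs an on-disk database rather than ":memory:"):

```python
import os
import sqlite3
import tempfile

# WAL requires a real file; an in-memory database would report "memory".
path = os.path.join(tempfile.mkdtemp(), "assim.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```

The setting is persistent: it is recorded in the database file, so every later connection gets WAL automatically.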

So this looks reasonable. I'll give it more thought. Maybe write some code...

Everyone should feel free to offer your insights and experiences regarding 
SQLite...


-- 
  Alan Robertson
  al...@unix.sh


Re: [Assimilation] Thinking about adding Postgres to the Assimilation project

2018-02-13 Thread Alan Robertson
I wrote some passably reasonable code to persist JSON to disk (using its hash 
as a key) as per an off-list suggestion.

You can see it here: 
https://github.com/assimilation/assimilation-official/blob/rel_2_dev/cma/invariant_data.py

The base code is very straightforward. It has lots of comments, and a little 
extra sugar to make it more Pythonic - which more than doubled the number of 
lines. Fortunately, it didn't make it much more complex - just more bulky.

The code to write the data is shown below:

try:
    os_openmode = os.O_EXCL + os.O_CREAT + os.O_WRONLY
    with os.fdopen(os.open(pathname, os_openmode, self.filemode), 'w') as file_obj:
        file_obj.write(value)
        file_obj.flush()
        if not self.delayed_sync:
            os.fsync(file_obj.fileno())  # Make sure the bits get saved for real...
except OSError as oopsie:
    if oopsie.errno == errno.EEXIST:    # Already exists
        pass
    elif oopsie.errno == errno.ENOENT:  # Directory is missing...
        self._create_missing_directories(
            os.path.join(self.root_directory, key[:self.hash_chars]))
        self.put(value, key)            # Try again (recursively)...
    else:
        raise oopsie
return key

If delayed_sync is enabled, then you should call object.sync() to do an 
"end transaction" type thing. It does a syncfs() on the filesystem. It's a bit 
faster to do one syncfs() on the filesystem than several separate fsync() 
calls.

We ignore EEXIST because if the data exists, it's the same data - so we don't 
need to write it again.

The search code isn't right yet. Searching will be slower than Postgres for 
most cases, but faster than the current code.

-- 
  Alan Robertson
  al...@unix.sh

On Mon, Feb 12, 2018, at 7:12 PM, Alan Robertson wrote:
> Two less heavyweight ways to store the JSON data from some off-list 
> discussions:
> 
> 1) Just store each JSON value in a flat file. Since the "keys" are 
> already hash values, one could imagine a directory structure that looks 
> a bit like this:
> 
>     [top-level-directory]/
>         [first-3-hex-chars-of-hash]/   # That's 16^3 (4096) possible 
> subdirectories
>             [JSON-string-in-full-key-filename]
>   This supports up to 16M different values for each discovery type 
> while not putting more than 4096 entries in
>any directory, and only two layers of directories.
>Since they're hash keys, they should spread across the 
> subdirectories nicely.
> It will generate a _lot_ of inodes for big sites.
> Since the contents are invariant (which makes them idempotent), 
> a simple fsync is likely enough to ensure
> data integrity and substitute for transactions.
> I wrote a bare-bones version of this code last night... It's as 
> simple as it sounds...
> 
> 2) Store them as S3 objects (if you're using AWS). Slightly concerned 
> about the transaction-like semantics
>  but I suspect AWS/S3 is pretty reliable... 
> 
> So, that's a total of 4 possible methods of storing the data...
> (a) in Neo4j, (b) in Postgres, (c) in flat files, (d) in S3.  Of 
> those four, only Postgres will provide indexing.
> If on the other hand, we break out big JSON blobs into individual nodes 
> in Neo4j, we can search that
> instead (as Michael Hunger suggested).
> 
> Even if we break it out into separate nodes in Neo4j for searching, we 
> still need to keep the original JSON around.
> 
> The flat file code I wrote is a subclass of a class that's intended to 
> hold 3 or 4 of these alternative implementations.
> 
> This way if we change our mind, it will be comparatively straightforward 
> to convert from one to another. Given the incredibly simple semantics of 
> the data, even doing the conversion on a live system wouldn't be that 
> hard...
> 
> Right now, I'm leaning towards flat files - understanding that it has 
> costs and potential complexities (particularly the number of inodes).
> 
> -- 
>   Alan Robertson
>   al...@unix.sh
> 
> On Fri, Feb 9, 2018, at 2:27 PM, Welch, Bill wrote:
> > Just intuition here, but embedding postrges inside Assimilation seems 
> > heavy for just JSON. Of course, you already embed neo4j so you know how 
> > to handle having a dependency on a large subsystem.
> > 
> > Then, there's mongodb vs postgres vs ...:
> > https://www.sisense.com/blog/postgres-vs-mongodb-for-storing-json-data/
> > https://www.quora.com/Which-is-better-storing-json-objects-in-json-files-in-Redis-or-MongoDB-RethinkDB
> > 
> > On 9/2/18, 10:05, "Assimila

Re: [Assimilation] Thinking about adding Postgres to the Assimilation project

2018-02-09 Thread Alan Robertson
V2 is in source control as the _rel_2_dev_ branch -- here
https://github.com/assimilation/assimilation-official/tree/rel_2_dev
--
  Alan Robertson
  al...@unix.sh



On Fri, Feb 9, 2018, at 2:02 PM, N.J. van der Horn (Nico) wrote:
> All efforts to save on resources now can be profitable in future
> additions ;-)
> Where can I take a peek at V2?
> 
>  Regards, Nico
> 
> 
> Op 9-2-2018 om 16:04 schreef Alan Robertson:
>> This data is generated by a variety of shell scripts that do
>> discovery -  potentially dozens of them - and each is different.
>> Some of the most critical data is decomposed to attributes - but not
>> most of it.

> -- 
> 
> Met vriendelijke groet / With best regards, 
>  Nico van der Horn
> 
> 
> Voorstraat 55
>  3135 HW Vlaardingen
>  The Netherlands
> *T *   +31 (0)10 - 248 60 60
> *W *  www.vanderhorn.nl
> *E *   n...@vanderhorn.nl
>  
> _
> Assimilation mailing list - Discovery-Driven Monitoring
> Assimilation@lists.community.tummy.com
> http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation> 
> http://assimmon.org/



Re: [Assimilation] Thinking about adding Postgres to the Assimilation project

2018-02-09 Thread Alan Robertson
This data is generated by a variety of shell scripts that do discovery -  
potentially dozens of them - and each is different. Some of the most critical 
data is decomposed to attributes - but not most of it.

-- 
  Alan Robertson
  al...@unix.sh

On Fri, Feb 9, 2018, at 2:58 AM, Michael Hunger wrote:
> I think this is ok. 
> I wished we had full document support yet.  
> 
> I know that pg has really good jsonb support, so go for it. 
> 
> Did you ever try to destructure the data into properties? Not sure how 
> deeply nested it is? And leave off all that are just defaults 
> 
> Von meinem iPhone gesendet
> 
> > Am 09.02.2018 um 04:10 schrieb Alan Robertson <al...@unix.sh>:
> > 
> > Hi,
> > 
> > There is one set of data that when I insert it into Neo4j - it's really, 
> > really slow. It's discovery data - which is JSON - and sometimes very large 
> > - a few megabytes. Many of them are smallish, but having items a few 
> > kilobytes is common, and dozens of kilobytes is also common, and some few 
> > things are in the megabyte+ range. [Because of compression, I can send up 
> > to 3 megabytes of this JSON over UDP].
> > 
> > There are a few things I can do with Neo4j to make inserting it faster, but 
> > I don't think a lot -- and when I get done, the data is very hard to query 
> > against (it involves regexes against unindexed data, and is a performance 
> > nightmare).
> > 
> > Postgres has JSON support, and it has real transactions and a reputation 
> > for being very solid. I did some benchmarking and it is a couple of 
> > orders of magnitude faster than Neo4j with both of them untuned. In 
> > addition, Postgres JSON (jsonb) can have indexes over the JSON information 
> > - greatly improving the query capabilities over what Neo4j can do for this 
> > same data.
> > 
> > I'm not thinking about doing anything except moving this one class of data 
> > to Postgres. This particular class of data is also idempotent, which has 
> > advantages when you have multiple databases involved...
> > 
> > Since this particular type of data is its own object in the Python code, 
> > it be in Postgres wouldn't likely be horrible to implement.
> > 
> > If I'm going to do this in the next year or two, it makes sense to couple 
> > it with the rest of the backwards-incompatible changes I'm already putting 
> > into release 2.
> > 
> > Does anyone think this is a show-stopper to use two databases?
> > 
> > -- 
> >  Alan Robertson
> >  al...@unix.sh


[Assimilation] Thinking about adding Postgres to the Assimilation project

2018-02-08 Thread Alan Robertson
Hi,

There is one set of data that, when I insert it into Neo4j, is really, really 
slow. It's discovery data - which is JSON - and sometimes very large - a few 
megabytes. Many of them are smallish, but having items of a few kilobytes is 
common, and dozens of kilobytes is also common, and some few things are in the 
megabyte+ range. [Because of compression, I can send up to 3 megabytes of this 
JSON over UDP].
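A small illustration of why compression buys so much headroom here: discovery JSON is highly repetitive (the same keys over and over), so zlib shrinks it dramatically and the round trip is lossless. This is an illustrative sketch, not the project's actual wire format:

```python
import json
import zlib

# Illustrative only: large discovery JSON compresses well, so a multi-megabyte
# document can fit a much smaller UDP budget after compression.
doc = {"packages": [{"name": "pkg%d" % i, "version": "1.0-%d" % (i % 7)}
                    for i in range(20000)]}
raw = json.dumps(doc).encode("utf-8")
packed = zlib.compress(raw, 9)

# Compression ratio is typically large for this kind of repetitive JSON.
ratio = len(raw) / len(packed)

# The round trip must be lossless.
unpacked = json.loads(zlib.decompress(packed))
```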

There are a few things I can do with Neo4j to make inserting it faster, but I 
don't think a lot -- and when I get done, the data is very hard to query 
against (it involves regexes against unindexed data, and is a performance 
nightmare).

Postgres has JSON support, and it has real transactions and a reputation for 
being very solid. I did some benchmarking and it is a couple of orders of 
magnitude faster than Neo4j with both of them untuned. In addition, Postgres 
JSON (jsonb) can have indexes over the JSON information - greatly improving the 
query capabilities over what Neo4j can do for this same data.

I'm not thinking about doing anything except moving this one class of data to 
Postgres. This particular class of data is also idempotent, which has 
advantages when you have multiple databases involved...
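A hypothetical sketch of what that one table might look like in Postgres (all table and column names invented here; `ON CONFLICT` requires Postgres 9.5+). The GIN index over the jsonb column is what enables the indexed queries mentioned above, and the upsert exploits the idempotency of this data:

```sql
-- Hypothetical sketch: one table for the discovery-JSON class of data.
-- A GIN index over the jsonb column makes containment queries (@>)
-- indexable - which is hard to do in Neo4j for the same blob.
CREATE TABLE discovery (
    drone      text  NOT NULL,
    disc_type  text  NOT NULL,
    data       jsonb NOT NULL,
    PRIMARY KEY (drone, disc_type)
);
CREATE INDEX discovery_data_gin ON discovery USING gin (data);

-- Idempotent write: the same discovery can be applied any number of times.
INSERT INTO discovery (drone, disc_type, data)
VALUES ('server1', 'packages', '{"bash": "4.3-11"}')
ON CONFLICT (drone, disc_type) DO UPDATE SET data = EXCLUDED.data;

-- Indexed containment query over the JSON:
SELECT drone FROM discovery WHERE data @> '{"bash": "4.3-11"}';
```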

Since this particular type of data is its own object in the Python code, having it 
be in Postgres wouldn't likely be horrible to implement.

If I'm going to do this in the next year or two, it makes sense to couple it 
with the rest of the backwards-incompatible changes I'm already putting into 
release 2.

Does anyone think this is a show-stopper to use two databases?

-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Release 2 progress...

2018-01-27 Thread Alan Robertson
Hi all,

Thanks to some fixes from Dejan, and some work on my part, release 2 is 
beginning to do useful things.

Release 2 is primarily about restructuring the database and how I use it, and 
switching to much more recent versions of the Neo4j bindings, and of Neo4j 
itself.

It also now knows about network segments and subnets. For people with no 
duplicate IPs or MAC addresses, this is unlikely to matter. But for those who 
have these problems, it is expected to be very helpful.

And as a bonus, there is now code to analyze all the recent patches issued by 
CentOS. The intent is to cover more patch sources, and then tell you which 
systems need updating to eliminate known security problems.

-- 
  Alan Robertson
  al...@unix.sh


Re: [Assimilation] Ramblings about IP addresses, MAC addresses, subnets and network segments

2017-10-12 Thread Alan Robertson
> So, it sounds like the tuple to fully qualify a device just needs to
> be longer?
That's the short answer.

The longer answer is that subnets and network segments (broadcast
domains) have the same ambiguity problem. And there's no obvious name
for a network segment. Even for subnets, which kind of have names, there
might be two 10.10.10.0/24 subnets ;-).

Network segments are a collection of subnets sharing a broadcast domain.
And of course, there's the usual misconfiguration that one will run
into. Two different IP addresses on a single subnet have different
netmasks... IP addresses that just don't belong anywhere showing up on a
network segment (as opposed to deliberately showing up).

IP addresses are associated with subnets, and NICs are associated with
network segments...
It took me some time to get all this straight in my head. It's so much
easier if everything is unique.
I believe I have code which handles this roughly as well as it can be
done. I'll know better once I can get tests running again. I seem to
have run into a new issue that I haven't figured out yet:
https://github.com/neo4j/neo4j/issues/10222
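A minimal sketch of the "longer tuple" idea from the top of this message, with invented names (not the project's actual classes): the segment or subnet identifier becomes part of the key, so the same MAC on two different segments is two distinct NICs:

```python
import ipaddress
from dataclasses import dataclass

# Hypothetical sketch: a MAC or IP is only unique within its network
# segment (broadcast domain), so the segment id must be part of the key.

@dataclass(frozen=True)
class NicKey:
    segment: str   # opaque id for the broadcast domain
    mac: str

@dataclass(frozen=True)
class IpKey:
    subnet: str    # e.g. "segment-1/10.10.10.0/24"
    addr: ipaddress.IPv4Address

# The same MAC on two segments is two distinct NICs:
a = NicKey("segment-1", "02:42:ac:11:00:02")
b = NicKey("segment-2", "02:42:ac:11:00:02")
nics = {a, b}
```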

--
  Alan Robertson
  al...@unix.sh



On Mon, Oct 9, 2017, at 01:51 PM, Gabe Nydick wrote:
> So, it sounds like the tuple to fully qualify a device just needs to
> be longer?
> On Sat, Oct 7, 2017 at 5:21 PM, David Lang <da...@lang.hm> wrote:
>> In the distant past I saw network cards where all ports on one card
>> used the same MAC address (the assumption was that they would be
>> plugged into different switches/networks and so it didn't hurt)
>>  In the recent past, I ran into problems on OpenWRT where different
>>  machines assigned the same MAC address to bridge interfaces. Things
>>  worked as long as they were on different VLANs...
>>  David Lang
>> 
>>   On Tue, 3 Oct 2017, Alan Robertson wrote:
>> 
>>> Hi Jeff,
>>>
>>>  I appreciate your comment - and I wish I believed it was certainly
>>>  true. But I suspect that it's not guaranteed - and  even if you
>>>  don't have insane or brain-dead admins... Every loopback device has
>>>  the MAC address of 00-00-00-00-00-00. It's clearly a special case,
>>>  but if you look at the fact that MAC addresses are generated on the
>>>  fly for virtual machines and containers - it now gets much more
>>>  scary. There's no guarantee that there won't be duplicates - if you
>>>  have enough infrastructure.  When  mergers and acquisitions happen,
>>>  they may have duplicate (virtual) MACs - but they don't know it. I
>>>  can just about guarantee you that duplicate *IP *addresses happen
>>>  when you have containers (not to mention 127.0.0.0/8). They
>>>  generate IP addresses which only have to be unique within that
>>>  particular network segment - which is completely under their
>>>  control. The truth is that the same constraint applies to the MAC
>>>  addresses they're generating - they also only have to be unique in
>>>  that broadcast domain (network segment). Here are a few virtual
>>>  that broadcast domain (network segment). Here are a few virtual
>>>  subnets from my desktop machine: 172.17.0.0/16, 172.18.0.0/16,
>>>  172.19.0.0/16, 172.20.0.0/16. They start addressing each segment
>>>  from "0.1". Those are the same ones that you'll get if you install
>>>  Docker and mess around with it for a while. It does look like Docker
>>>  is doing a reasonable job of generating at least semi-random MAC
>>>  addresses. But unlike real MAC addresses, virtual MAC addresses have
>>>  no central repository of addresses to pull from to make sure they're
>>>  unique. I certainly would like for you to be right - and the
>>>  loopback device was the only exception. Unfortunately, I suspect
>>>  that in real life duplicate MACs will show up across the enterprise.
>>>  But it won't matter to the people involved - and no one will know
>>>  unless somehow two duplicates show up on the same network segment.
>>>  That is much much less likely than having duplicates across the
>>>  enterprise. Impossible if the generator of those MAC addresses is
>>>  something like Kubernetes - which *is* in control of the
>>>  infrastructure for that subnet.
>>>
>>>  --
>>>   Alan Robertson al...@unix.sh
>>>
>>>
>>>
>>>  On Tue, Oct 3, 2017, at 12:39 PM, Jeff Silverman wrote:
>>>> Alan,
>>>>
>>>>  Duplicate MAC addresses are something that a sysadmin

[Assimilation] Unsubscribing from this list.

2017-09-15 Thread Alan Robertson
To unsubscribe from this list, please click on the link at the bottom of
every one of these emails.

Unsubscribing works just fine. The person who had it appear to not work
had subscribed from two different email addresses.

This will be much faster for you than waiting for me to notice your
email, and unsubscribe you myself.

-- 
  Alan Robertson
  al...@unix.sh


Re: [Assimilation] dropbox account is dead?

2017-09-15 Thread Alan Robertson
Hi Alba,

The account is active, but Dropbox broke all public links a few days ago
:-(. It wasn't an accident...

I just updated the link in the script in Dropbox. In case that doesn't
work, here are a few places to look around and see if you can install it by
hand:

Build:         
https://www.dropbox.com/sh/7dk588n0oxmjh27/AAAOiAqSecvhOI_mS0bJpPDsa?dl=0
Releases: 
https://www.dropbox.com/sh/g1ijoy1pbpbtpyq/AACzwZnensRymSSxFtHQcNrMa?dl=0
1.1.7:         
https://www.dropbox.com/sh/2197qf05cpvg7xp/AAB5D9ZpE4tfiZWo85u8VMEia?dl=0

Or you can modify the script yourself...


--
  Alan Robertson
  al...@unix.sh



On Fri, Sep 15, 2017, at 02:37 AM, Alba Ferri Fitó wrote:
> Hi everyone,
> I just heard about assimilation project, and I find it very interesting!
> 
> I'm trying to download and install the software but I guess, Alan, you do not 
> have your dropbox account activated anymore?
> 
> + echo 
> https://dl.dropboxusercontent.com/u/65564307/builds/Releases/1.1.7/debian_7-x86_64
> + : Could NOT retrieve 
> https://dl.dropboxusercontent.com/u/65564307/builds/Releases/1.1.7/debian_7-x86_64/build.out
> 
> Would love to give it a try!
> Alba.


Re: [Assimilation] Assimilation update...

2017-09-12 Thread Alan Robertson
Hi Jon,

Also, these are the kinds of things that I've been putting off because they
were too disruptive and too likely to not work. Might as well get them
all done -- and it feels good to do so ;-).   It also feels good just to
get back to the code ;-). 

Life's been kinda crazy. Still is, just a bit more structure to the
crazy from time to time. My current employer has a very similar project
they're doing - so I'm a really good match for that organization. It's
fun, but not as much fun as getting this back going. I had a new guy
sign up to contribute to the project. No contributions yet, but I kind
of expect some from him after this 2.0.0 release. He's using it at work.

I have other restructuring I want to do in the future, but I don't think
any of those change either the API to the clients, or the database
format. I want to create a sort of workload manager - for some of the
things I want to do in the future, and some of the things we're already
doing.

   Thanks for caring and checking back in with me!

-- 
  Alan Robertson
  al...@unix.sh

On Tue, Sep 12, 2017, at 11:10 AM, Jon Cotton wrote:
> Nice to hear from you, Alan. Thank you for the update!
> It makes a lot of sense to bundle multiple BC-breaking changes together
> into the same release. Looking forward to it.
> -Jon Cotton
> 
> 
> 
> 
> 
> 
> 
> CONFIDENTIALITY STATEMENT: This email message, together with all
> attachments, is intended only for the individual or entity to which it is
> addressed and may contain legally privileged or confidential information.
> Any dissemination, distribution or copying of this communication by
> persons or entities other than the intended recipient, is strictly
> prohibited, and may be unlawful. If you have received this communication
> in error please contact the sender immediately and delete the transmitted
> material and all copies from your system, or if received in hard copy
> format, return the material to us via the United States Postal Service.
> Thank you.


[Assimilation] Assimilation update...

2017-09-10 Thread Alan Robertson
Hi,

It's been a really long time since I sent any emails out about the open
source Assimilation project. There are quite a few reasons for that, but
I've gotten back to work on the project and thought it would be good to
send out an update.

For a few months, I've been working on updating how we use Neo4j to use
more "modern" Neo4j constructs, and to make a couple of other changes.
These will be part of "release 2" of the Assimilation project - since
several changes will not be backwards compatible.

The neo4j update went well, and looks pretty much complete. It was a lot
of work, and required a lot of testing, but I'm quite happy with it - it
simplifies the code - which is always goodness.

The second change is to the way I represent the heartbeat rings that
Assimilation uses. This was a medium-complex change, and is good to
bundle into a non-backwards-compatible release. This change is
essentially complete, and passes all my tests.

The last change relates to uniqueness of IP and MAC addresses. The old
code assumed IP and MAC addresses were unique across the enterprise.
This is obviously not true,  but that's what the code assumed - and for
the most part, we got away with it.

The new code (when written) will only expect MAC and IP addresses to be
unique within a subnet - not across the entire enterprise. Since things
break when this isn't true, it's a much more reasonable assumption. This
will be a medium-complex change.

There may be other things that come along with this, but I'll let you
know as they bubble up. The intent is to bundle any other
non-backwards-compatible changes into this release. No others come to
mind at the moment, but I'll let you know.


-- 
  Alan Robertson
  al...@unix.sh


[Assimilation] Fixed Assimilation Installer

2016-05-28 Thread Alan Robertson
Hi folks,

I've fixed the installer to deal with the recent changes in Neo4j and
py2neo (which I mentioned in a previous email). Feel free to get a new
copy via https://bit.ly/assiminstall for installing new systems.  This
link redirects here:
   
https://raw.githubusercontent.com/assimilation/assimilation-official/master/buildtools/installme

Because it's the installer, it can't be part of what's being installed -
so it's outside of the normal release procedures.

-- 

Alan Robertson / CTO
al...@assimilationsystems.com <mailto:al...@assimilationsystems.com>/ +1
303.947.7999

Assimilation Systems Limited
http://AssimilationSystems.com

Twitter <https://twitter.com/ossalanr> Linkedin
<https://www.linkedin.com/in/alanr> skype
<https://htmlsig.com/skype?username=alanr_unix.sh>



[Assimilation] Neo4j 3.x installer implications

2016-05-25 Thread Alan Robertson
Hi all,

It's come to my attention that the current installer will fail on
several platforms because it will try and install the latest version of
Neo4j.  This has several problems for the short term:

  * Neo4j requires Java 8 (or 1.8 depending) - which is not likely
what's installed
  * The installer just looks to see if Java is installed, and doesn't
worry about the version
  * If you fix those (which is easy enough), then the CMA will complain
that Neo4j 3.x isn't a supported version

I'll fix these to the degree that I can in the next release. I can't fix
it for the CMA for Debian Wheezy - because that version of Java isn't
available for Wheezy even in backports. I'll lock it down to Neo4j 2.x
and the appropriate Java version.

One thing I'll also do is lock down the versions of Neo4j and py2neo
when I install them so that this doesn't happen in the future.

Sorry for the inconvenience!

-- Alan

-- 

Alan Robertson / CTO
al...@assimilationsystems.com <mailto:al...@assimilationsystems.com>/ +1
303.947.7999

Assimilation Systems Limited
http://AssimilationSystems.com

Twitter <https://twitter.com/ossalanr> Linkedin
<https://www.linkedin.com/in/alanr> skype
<https://htmlsig.com/skype?username=alanr_unix.sh>



Re: [Assimilation] Visualization and graph queries...

2016-04-27 Thread Alan Robertson
Hi Gabe,

On 04/27/2016 07:46 PM, Gabe Nydick wrote:
>
> I apologize, I haven't contributed in a long time, so this may be
> moot, but in my experience, working on a system that manages nearly 1
> million servers, a regular rdbms database was more than powerful enough.
>
It's not a question of power. It's a question of the right tool for the
right job. Most people only have a relational database in their toolbox
- so in their mind it's obviously the right tool for the job. But if
you're willing to learn to use a screwdriver [Far Side cartoon
<http://www.fredwehner.de/buggsold/fs7.gif>], you might find that it
works better than a hammer for screwing in screws ;-). You can use a
hammer to fasten things with screws, but you'll get better results with
a screwdriver...
>
> You can use adjacency patterns to store graphs in an rdbms and do the
> other stuff you want more easily.
>
> Has this been discussed?
>
Thanks for asking. It's good to have this in the mailing list archives.

I thought about it early on in the project but rejected it. Until I
discovered graph databases, I didn't persist anything at all - it was
obviously going to be too ugly. I got incredibly excited when I heard
about Neo4j.

The part of the data where Neo4j isn't a perfect fit, is more like a
document database than a relational database. There is little or none of
this where a relational database is a better fit.

What I _love_ about Neo4j is that it's a near-perfect match for the
object model of my code. Neo4j is effectively an object store for me. It
makes the code clean and easy to write. I also love that it's schemaless
- as I add new capabilities, I don't have to stop the process,
restructure the database, and then restart it.

Writing a query like the one that Michael Hunger just wrote in 5 lines
would be an absolute nightmare in SQL. In fact, you can't write it in
SQL at all. You'd have to write code that would join against this table,
then against that one, then against another one - and again, and again,
and again -- until the result didn't get any bigger, and save all the
intermediate results so you could get the paths. Then you'd still have
to reconstruct the graph out of all that stuff that you got. And each
one of these joins would be slower than the single Cypher query.
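The "join again and again until the result didn't get any bigger" loop described above can be sketched as a fixed-point iteration over dependency edges. This is a toy illustration (plain Python over an edge list, invented names) of what a single variable-length Cypher match replaces:

```python
# Toy sketch of the relational-style approach: repeatedly "join" against
# the dependency edges until the result set stops growing. In Cypher this
# is one variable-length pattern match.
def transitive_dependents(edges, start):
    """edges: iterable of (dependent, dependee) pairs. Returns everything
    that depends on `start`, directly or indirectly."""
    result = set()
    grew = True
    while grew:            # iterate to a fixed point
        grew = False
        for dependent, dependee in edges:
            if (dependee == start or dependee in result) and dependent not in result:
                result.add(dependent)
                grew = True
    return result

edges = [("web", "app"), ("app", "db"), ("batch", "db"), ("ui", "web")]
deps = transitive_dependents(edges, "db")  # everything that depends on "db"
```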

Finding shortest path (or doing path enumeration) is something you _can_
do in SQL, but OMG it's a nightmare. If your only object is a human and
your only relationship type is "knows" or "friend", it's still horrible,
but not as nightmarish as the data center case where there are many kinds of
objects (currently around 12) and many kinds of relationships (currently
15 or so).

And there are still so many things we don't yet model...

Managing a graph in a relational database is like programming a Turing
machine <https://en.wikipedia.org/wiki/Turing_machine>. You can compute
_anything_ with a Turing machine, but you wouldn't want to. Just throw
people, money, iron and ibuprofen at it. No problem ;-).

Graphs are a well-known and well-studied weakness of relational
databases. Really.

-- Alan

> On Apr 27, 2016 5:37 PM, "Alan Robertson" <al...@unix.sh> wrote:
>
> Hi,
>
> Looking for thoughts or comments...
>
> It's pretty obvious that one of the big advantages of Graph
> databases is the fact that visualization is a natural thing to do.
> But it's also quite apparent that we haven't done it yet...
>
> We have a REST interface which returns JSON in a fashion similar
> to what an SQL database would return - rows of columns of data...
>
> But for visualization, what we need is a query which returns a
> subgraph.
>
> See this Todo item: https://trello.com/c/jN2IUKFn
>
> For example:
>
> I want everything which depends upon this service (or server, or
> switch) directly or indirectly as a graph - that is nodes and
> relationships. All of them.
>
> Or everything on which it depends
>
> Or everything in both directions
>
> Neo4j doesn't directly support it, but it does provide some good
> tools - you can get all the paths that start at the node(s) in
> question. But then you have to remove duplicates between the paths
> and create a single coherent graph. The ToDo item has a few links
> to thoughts in this area. Feel free to bring them into the
> discussion if you're interested.
>
> And I think it would be good to provide "shorthand" graphs. For
> example, clients and servers are connected to each other through
> an IP:port combination. For the purposes of looking at
> dependencies, the IP:port is irrelevant. It should go from client
> to server directly. From this perspective, the IP:port is an
> artifact of the IP protocol.
>
> Or

Re: [Assimilation] Visualization and graph queries...

2016-04-27 Thread Alan Robertson
Hi Michael,

This looks interesting!

On 04/27/2016 07:57 PM, Michael Hunger wrote:
> you can return the distinct data
>
> 
> match path = shortestPath( (n)-[*]->(m) )
> unwind nodes(path) as n
> with collect(distinct n) as nodes
> unwind rels(path) as r
> return collect(distinct r), nodes

I hadn't looked at either unwind or collect before... I read about them
just now. I was somewhat aware of collect before, but not unwind.
Interesting...

This looks really cool. I added your query to the Trello card:

https://trello.com/c/jN2IUKFn

If you think I got something wrong there - let me know...

It seems like (m) could just be (). And maybe a name like (start) would
be better than (n) in the match statement, just because of the confusion
with the next unwind statement. Is that right?

Transforming this into useful JSON would be pretty easy!
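For example (with invented record shapes standing in for what a Neo4j driver would return for the distinct nodes and relationships), turning the two lists into a subgraph-style JSON document is a short transform:

```python
import json

def to_subgraph_json(nodes, rels):
    """nodes: list of (id, labels, props); rels: list of (src, type, dst).
    These shapes are hypothetical stand-ins for driver results."""
    return json.dumps({
        "nodes": [{"id": i, "labels": ls, "properties": p} for i, ls, p in nodes],
        "links": [{"source": s, "type": t, "target": d} for s, t, d in rels],
    }, sort_keys=True)

nodes = [(1, ["Drone"], {"name": "server1"}), (2, ["IPaddr"], {"ip": "10.0.0.1"})]
rels = [(1, "ipowner", 2)]
graph = json.loads(to_subgraph_json(nodes, rels))
```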

As always, your wisdom is much appreciated!

-- Alan

>
>
> On Thu, Apr 28, 2016 at 2:37 AM, Alan Robertson <al...@unix.sh
> <mailto:al...@unix.sh>> wrote:
>
> Hi,
>
> Looking for thoughts or comments...
>
> It's pretty obvious that one of the big advantages of Graph
> databases is the fact that visualization is a natural thing to do.
> But it's also quite apparent that we haven't done it yet...
>
> We have a REST interface which returns JSON in a fashion similar
> to what an SQL database would return - rows of columns of data...
>
> But for visualization, what we need is a query which returns a
> subgraph.
>
> See this Todo item: https://trello.com/c/jN2IUKFn
>
> For example:
>
> I want everything which depends upon this service (or server, or
> switch) directly or indirectly as a graph - that is nodes and
> relationships. All of them.
>
> Or everything on which it depends
>
> Or everything in both directions
>
> Neo4j doesn't directly support it, but it does provide some good
> tools - you can get all the paths that start at the node(s) in
> question. But then you have to remove duplicates between the paths
> and create a single coherent graph. The ToDo item has a few links
> to thoughts in this area. Feel free to bring them into the
> discussion if you're interested.
>
> And I think it would be good to provide "shorthand" graphs. For
> example, clients and servers are connected to each other through
> an IP:port combination. For the purposes of looking at
> dependencies, the IP:port is irrelevant. It should go from client
> to server directly. From this perspective, the IP:port is an
> artifact of the IP protocol.
>
> Or if one is looking at servers, then one could even eliminate the
> services that cause the dependencies. It depends on the level of
> detail one wants to have.
>
> The idea of leaving out (compressing) detail seems essential to
> me. That could happen either in the client (JavaScript code) or on
> the server. If it happened on the JS side, it could be expanded
> or compressed out quickly - without re-issuing the query.
>
> What are your thoughts about this?  Is it important? What would
> you do with it?
>
> -- 
>
> Alan Robertson / CTO
> al...@assimilationsystems.com
> <mailto:al...@assimilationsystems.com>/ +1 303.947.7999
> <tel:%2B1%20303.947.7999>
>
> Assimilation Systems Limited
> http://AssimilationSystems.com
>
> Twitter <https://twitter.com/ossalanr> Linkedin
> <https://www.linkedin.com/in/alanr> skype
> <https://htmlsig.com/skype?username=alanr_unix.sh>
>
>


-- 

Alan Robertson / CTO
al...@assimilationsystems.com <mailto:al...@assimilationsystems.com>/ +1
303.947.7999

Assimilation Systems Limited
http://AssimilationSystems.com

Twitter <https://twitter.com/ossalanr> Linkedin
<https://www.linkedin.com/in/alanr> skype
<https://htmlsig.com/skype?username=alanr_unix.sh>



[Assimilation] Docker Support

2016-04-27 Thread Alan Robertson
Hi,

Right now, we have minimal support for Docker - basically what comes
from the fact that our agent is running on the host. We clearly don't
want to install a nanoprobe inside each container, that's way too
heavyweight - and some of the things we want to do can't be done from a
normal low-privilege container, but /can/ be done from the host OS.

Here are things I know I want to discover for Docker:

  * Discover what packages are installed
  * Discover what OS and version is installed
  * Discover checksums of binaries, libraries and JARs
  * Commands in $PATH
  * Installed monitoring agents
  * Probably quite a few other things too ;-)

Most of these items can only be done from inside the container (or at
least easily).

I've prototyped gathering installed packages from Docker instances. I
did this because I think that installed packages are a good example of the
kind of information you'd like to have. If you have a dozen Docker
containers, then you have a dozen sets of packages which might be out of
date and therefore vulnerable.

What I discovered in the process is that what you want to have is a
proxy function that will run a command inside the container, and then
divide the discovery agent up into parts that run standard commands
inside the container (rpm, dpkg-query, etc) and those that don't have to
run inside the container. This keeps you from having to install the
discovery agent inside the container. Not too surprising ;-).
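A hedged sketch of that proxy function idea (names invented; actually running it requires Docker, so only the command construction is exercised here): wrap an arbitrary discovery command in `docker exec` so nothing has to be installed inside the container:

```python
import subprocess

def proxy_argv(container, argv):
    """Build the argv that runs `argv` inside `container` via docker exec.
    Hypothetical helper, not the project's actual code."""
    return ["docker", "exec", container] + list(argv)

def run_in_container(container, argv):
    """Run the proxied command and return its stdout (requires Docker)."""
    return subprocess.run(proxy_argv(container, argv),
                          capture_output=True, text=True, check=True).stdout

# Example: list installed packages in a (hypothetical) container "web-1".
cmd = proxy_argv("web-1", ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"])
```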

Of course, you also want to gather the information from docker info +
docker ps + docker inspect as well.

After a little investigation it looks like the same approach works for
Vagrant, and probably other forms of virtualization as well.

All this "proxied" information would go into objects in the database.

Right now, we have a base class called System, with a subclass called
Drone. Systems are general boxes with some intelligence to them. Drones
are Systems which run our agents.  It seems that it would be appropriate
to create a new System subclass called ProxySystem - which might in turn
have subclasses like DockerSystem or VagrantSystem, etc.

A DockerSystem would have a relationship called "runningon" between
itself and its Drone host.

But I think if it's done right, that adding new types of proxy systems
should be straightforward once the DockerSystems are implemented.

Part of this means writing a common library for proxy functions that I
can share with (include (".") in) all the discovery agents.

This may open the doors for things like discovery of switches through
snmp and similar things I've been avoiding ;-). Perhaps even (though
generally a bad idea) SshSystems for things like my home router which
support ssh but are too small to put an agent on.

Some kinds of monitoring can be done from outside the container, but
some will have to run inside the container - depending on how the
monitoring script/agent is written. We'd have to add new metadata to our
monitoring agents so we can tell which ones can be run from outside the
container, and which have to run inside it.

There's also the NAT address and port mapping that has to be understood
too...

But for basic things, it shouldn't be too hard...


-- 

Alan Robertson / CTO
al...@assimilationsystems.com <mailto:al...@assimilationsystems.com>/ +1
303.947.7999

Assimilation Systems Limited
http://AssimilationSystems.com

Twitter <https://twitter.com/ossalanr> Linkedin
<https://www.linkedin.com/in/alanr> skype
<https://htmlsig.com/skype?username=alanr_unix.sh>



Re: [Assimilation] boilerplate.sh

2016-03-18 Thread Alan Robertson
Hi Jeff,

Sending this to the list is OK.

But you're right and you're wrong:

right: You're right about the shell script. It's been so long
I'd forgotten that.
wrong: Yes, it is what I meant to say -- I was just wrong ;-).
It's kind of you to think I mistyped - instead of misthought ;-)


On 03/16/2016 07:39 AM, Jeffrey S. Haemer wrote:
> Alan,
>
> On Wed, Mar 16, 2016 at 7:15 AM, Alan Robertson <al...@unix.sh
> <mailto:al...@unix.sh>> wrote:
>
>
> So, the only way to call a script is if it /has/ the #! at the
> top. With it you can call it either way. Without it you can /only/
> call it by /sh my_cool_script/.
>
>
> That may not be what you meant. You can call a shell script without a #! .
>
> $ echo 'echo hello, world' > hello; chmod +x hello; ./hello
> hello, world
>
>  
> -- 
> Jeffrey Haemer <jeffrey.hae...@gmail.com
> <mailto:jeffrey.hae...@gmail.com>>
> 720-837-8908 [cell], http://seejeffrun.blogspot.com [blog],
> http://www.youtube.com/user/goyishekop [vlog]
> /פרייהייט? דאס איז יאַנג דינען וואָרט./


-- 

Alan Robertson / CTO
al...@assimilationsystems.com <mailto:al...@assimilationsystems.com>/ +1
303.947.7999

Assimilation Systems Limited
http://AssimilationSystems.com

Twitter <https://twitter.com/ossalanr> Linkedin
<https://www.linkedin.com/in/alanr> skype
<https://htmlsig.com/skype?username=alanr_unix.sh>



[Assimilation] Best Practice Scoring

2016-02-05 Thread Alan Robertson
Hi,

I have implemented the per-host scoring system code and it's in source
control now. It was pretty straightforward.

I also implemented some new security rules for auditd.conf, and added a
new type of best practice capability.

One of the things that I had to do to measure auditd.conf compliance was
to validate the permissions of the log files that auditd creates.  To do
that I had to wait until the auditd.conf discovery came in, and then
trigger discovery of the permissions of the audit files based on where
auditd.conf said they were stored.

That means that when this discovery came in, I had to run a rule for
each file listed in the discovery. Until now there was no such
capability, so I added a FOREACH function.  It looks like this:

This rule makes sure that each file listed in the discovery is owned by
root.

Konsole output
 "nist_V-38495": {
   "category": "security",
   "rule": "FOREACH(\"EQ($owner, root)\")"
 }


The argument to FOREACH is a string that is evaluated (like 'eval' in
the shell) against each top-level item (here, each file) in the
discovery output.  Here's a sample of the kind of data that it's
being evaluated against:

Konsole output
{ "/var/log/audit": {
    "group": "root",
    "octal": "0750",
    "owner": "root",
    "perms": {
        "group": { "exec": true,  "read": true,  "setid": false, "write": false },
        "other": { "exec": false, "read": false, "write": false },
        "owner": { "exec": true,  "read": true,  "setid": false, "write": true },
        "sticky": false
    },
    "type": "d"
  }
}

I'm not running SELinux on this machine, but if it had SELinux security
attributes, or ACLs, or other extended attributes, those would also show
up here in a similar format.

The rule for validating that it's not group or other writable looks like
this:

Konsole output
 "nist_V-38493": {
   "category": "security",
   "rule": "MUST(FOREACH(\"AND(EQ($perms.group.write, False), EQ($perms.other.write, False))\"))"
 }

So, I make sure that perms.group.write and perms.other.write are both False.
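To make the semantics concrete, here is a minimal sketch of how FOREACH/EQ-style rules could be evaluated against discovery data like the sample above. This is an illustrative toy evaluator, not the project's actual rule engine; the function names are mine.

```python
import json

def eq(value, expected):
    # EQ compares as strings, mirroring the textual rule syntax
    return str(value) == str(expected)

def lookup(item, dotted):
    # Resolve a "$a.b.c" reference against one discovery item
    cur = item
    for part in dotted.lstrip("$").split("."):
        cur = cur[part]
    return cur

def foreach(discovery, check):
    # Apply a per-item check to every top-level discovery item;
    # the whole rule passes only if every item passes
    return all(check(item) for item in discovery.values())

discovery = json.loads("""
{ "/var/log/audit": {
    "owner": "root",
    "perms": { "group": { "write": false },
               "other": { "write": false } } } }
""")

# nist_V-38495: each file must be owned by root
print(foreach(discovery, lambda f: eq(lookup(f, "$owner"), "root")))  # True

# nist_V-38493: neither group- nor other-writable
print(foreach(discovery, lambda f:
              eq(lookup(f, "$perms.group.write"), False)
              and eq(lookup(f, "$perms.other.write"), False)))        # True
```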
http://assimmon.org/


Re: [Assimilation] Scoring systems for best practices...

2016-01-31 Thread Alan Robertson
Hi Atom,

On 01/31/2016 10:09 AM, Atom Powers wrote:
>
> Ostensibly this sounds like a good idea. I'm concerned that it might
> be too much feature creep and not really core to the project.
>

If the idea of looking at best practices is bad - then this is more bad.

If the idea of looking at best practices is good, then this makes it
useful. If you install the software and it tells you that 3000 servers
are out of compliance - where do you start? It's overwhelming.  This
should be helpful with that. Fix the machines with the highest scores,
then go from there.

One of the hardest things in the world is to prioritize things. This
gives you a method that will make that easy (but see below about the
algorithm). I'll write a query that shows you all the servers sorted by
score - in reverse order. It's a very simple query...
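A sketch of what that query might look like in Cypher. The label and property names here are hypothetical, since the actual graph schema may differ:

```cypher
// Worst machines first -- hypothetical label/property names
MATCH (s:Drone)
RETURN s.designation AS server, s.security_score AS score
ORDER BY score DESC
```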

> There is also the fact that different organizations will rate the best
> practices differently. A web service company has a very different risk
> surface than an embedded controller manufacturer.
>
Perfectly understood and I agree.

The deal is - I'm going to write a /very/ simple algorithm (count the
number of out of compliance rules). Feel free to write your own. Each
rule set can have its own algorithm. No one knows the answer to the
question of what's the perfect algorithm. Until someone pays me to do
it, I'm not going to spend many brain cycles on trying to come up with a
really good one.

A half-ass not-terrible algorithm is much better than none at all. This
is a good place for computer science students or open source
contributors to jump in.
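The "count the out-of-compliance rules" algorithm really is that simple. A minimal sketch (the data shapes here are my own illustration, not the project's actual storage format):

```python
def server_score(rule_results):
    # One point per out-of-compliance rule; "pass" and "n/a" score 0.
    # Golf scoring: 0 is perfect, bigger is worse.
    return sum(1 for status in rule_results.values() if status == "fail")

servers = {
    "web01": {"nist_V-38511": "fail", "nist_V-38523": "pass"},
    "db01":  {"nist_V-38511": "fail", "nist_V-38523": "fail",
              "nist_V-38597": "n/a"},
}

scores = {name: server_score(rules) for name, rules in servers.items()}

# The "fix these first" list: worst machines first
worst_first = sorted(scores, key=scores.get, reverse=True)
print(worst_first)  # ['db01', 'web01']
```

A weighted variant just replaces the `1` with a per-rule importance factor; each rule set can carry its own.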

It's not that any algorithm is ever going to be perfect - because there
is no good real-world analytical model of risk - but that's OK. This
lets you demonstrate that you're actively getting better - making
progress. It shows that you're spending the company's money in a way
that is improving things compared to how they were.

If you can demonstrate it to your management - then they can demonstrate
it to their management - and on up the line. Someone, somewhere will
make beautiful charts for managers and CFOs that show lines heading
toward zero...

It won't be perfect - but it will actually help, and perhaps more
importantly, it will help you get and keep funding for security.

And I'll likely have a first draft of this (at least at the server score
level) by this evening. It's easy and it makes sense. I won't have spent
much more than a day on it. I'm writing the test code for it as I write
this note...


>
> On Sun, Jan 31, 2016, 06:05 Alan Robertson <al...@unix.sh> wrote:
>
> Hi all,
>
> Yesterday, I talked to a guy who used to help manage security for
> some of Raytheon's high-security government work. He had an
> absolutely great suggestion - which I'll likely implement soon.
>
> He suggested that we need to devise a scoring system for best
> practices (by category). The idea is that 0 means that it has no
> known issues, and a big number is worse than a small number. So,
> it scores like golf - the guy with the lowest score wins ;-).
>
> *Why this matters:*
> This means that the management can demonstrate that they are
> continually making progress toward the goal of having all their
> machines be compliant - and show concrete progress that management
> can buy into (charts of progress over time). It also means they
> know which machines are in the worst shape. For getting started,
> it lets the security team know what to do next. They can also
> measure and demonstrate that they are actually making things
> better. Not only does it make it clear what you should do next,
> but it makes your management happy ;-).
>
> *Regarding implementation:*
> My thoughts are to associate each rule set with an algorithm for
> computing scores for the various categories of scores (security,
> network, etc). I may want to add importance to the various rules
> as well. Not sure if that should go in the IT best practices
> project or just in my code... I'm thinking just in the
> assimilation code - because getting a community to agree on that
> sounds hard...
>
> And then store the score for each evaluated category in the
> success/fail/NA hash table that we already keep.
>
> And in turn bubble those up to the score for the server, and the
> score for the domain the server is part of (currently only global
> is implemented). Domains are "political" divisions intended to be
> for multi-tenant environments.
>
> In the case of the global scores and likely the server scores,
> I'll need to make sure I use a form of query that won't have
> problems with concurrency (read/modify/write issues). May have to
> read up a littl

Re: [Assimilation] Scoring systems for best practices...

2016-01-31 Thread Alan Robertson
On 01/31/2016 06:12 PM, Brad Knowles wrote:
> On Jan 31, 2016, at 1:59 PM, Alan Robertson <al...@unix.sh> wrote:
>
>> If the idea of looking at best practices is good, then this makes it useful. 
>> If you install the software and it tells you that 3000 servers are out of 
>> compliance - where do you start? It's overwhelming.  This should be helpful 
>> with that. Fix the machines with the highest scores, then go from there.
> Compliance with what?  Whose idea of what is important, and by how much?
The idea that you choose. I don't care which one you choose. I'll
implement some as part of the open source project based on NIST STIGs,
and a few others. My company is likely to supply some more for specific
purposes.
>
> The wonderful thing about standards is that there are so many to choose from. 
>  ;)
>
>
> For example, what I’ve seen with PCI-DSS standards should scare the holy 
> living crap out of anyone with a credit card or a bank account.
>
> You mean, you want to keep these servers untouched for weeks and months on 
> end because that was what passed your latest official audit and you dare not 
> apply any patches or security fixes until the next audit run?
>
> And you don’t allow an automated audit to run on any kind of computerized 
> schedule, but instead only when you manually kick it off every quarter, or 
> when someone decided that the sky is falling and suddenly you’ve got to run 
> it across all the thousands of machines in your network and try to stop the 
> massive hemorrhaging of millions and billions of dollars per second?
We check continually, and alert you within minutes (or worst-case
hours) after something pops out of compliance.
>
> Or, no you can’t fix your SSL security problems for the next couple of years 
> because we’ve got too many problems trying to now quickly roll out these damn 
> chip cards to cover our asses that we’ve been leaving out hanging in the wind 
> for the past decade?
>
> To the entire PCI-DSS industry — Stick your head in the sand much?
80% of everyone who gets into compliance fails the next year (according
to Verizon). Audits are spot-checks. We check everything you let us
check. Makes little difference in the resource consumption.
>
>> One of the hardest things in the world is to prioritize things. This gives 
>> you a method that will make that easy (but see below about the algorithm). 
>> I'll write a query that shows you all the servers sorted by score - in 
>> reverse order. It's a very simple query…
> I can see two closely related projects that separately work to solve 
> different parts of the problem.
>
> One works on the standards that should be set regarding what should be 
> monitored, what constitutes the compliance spectrum for each rule, and how to 
> score and weight each rule against the others.  NIST Security Content 
> Automation Protocol (SCAP) would be one example.
>
> The other works on solving the more technical problem of how do you monitor 
> all these things and apply the rules as given.
Or the IT Best Practices project if you will ;-)
http://ITBestPractices.info/

It has the advantage of being open, not being English-only, and not
being based on XML ;-).
>
>
> IMO, this project seems to fall more towards the latter category.
Yes. We provide mechanisms to stay in compliance continually. I've
written a number of articles about this.






[Assimilation] Scoring systems for best practices...

2016-01-31 Thread Alan Robertson
Hi all,

Yesterday, I talked to a guy who used to help manage security for some
of Raytheon's high-security government work. He had an absolutely
great suggestion - which I'll likely implement soon.

He suggested that we need to devise a scoring system for best practices
(by category). The idea is that 0 means that it has no known issues, and
a big number is worse than a small number. So, it scores like golf - the
guy with the lowest score wins ;-).

*Why this matters:*
This means that the management can demonstrate that they are continually
making progress toward the goal of having all their machines be
compliant - and show concrete progress that management can buy into
(charts of progress over time). It also means they know which machines
are in the worst shape. For getting started, it lets the security team
know what to do next. They can also measure and demonstrate that they
are actually making things better. Not only does it make it clear what
you should do next, but it makes your management happy ;-).

*Regarding implementation:*
My thoughts are to associate each rule set with an algorithm for
computing scores for the various categories of scores (security,
network, etc). I may want to add importance to the various rules as
well. Not sure if that should go in the IT best practices project or
just in my code... I'm thinking just in the assimilation code - because
getting a community to agree on that sounds hard...

And then store the score for each evaluated category in the
success/fail/NA hash table that we already keep.

And in turn bubble those up to the score for the server, and the score
for the domain the server is part of (currently only global is
implemented). Domains are "political" divisions intended to be for
multi-tenant environments.

In the case of the global scores and likely the server scores, I'll need
to make sure I use a form of query that won't have problems with
concurrency (read/modify/write issues). May have to read up a little. I
can do that. I should double-check my work with the Neo4j community to
make sure I did it right.
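One common way to avoid read/modify/write races in Neo4j is to do the arithmetic inside a single Cypher statement, so the database applies the update atomically instead of the client reading a value and writing it back. A hedged sketch, with hypothetical label and property names:

```cypher
// Single-statement increment: no separate read-then-write from
// the client, so concurrent score updates can't clobber each other.
MATCH (d:Domain {name: 'global'})
SET d.security_score = coalesce(d.security_score, 0) + {delta}
RETURN d.security_score
```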

Thoughts?

-- Alan Robertson
   al...@unix.sh


[Assimilation] New system management / security survey - please take it :-)

2016-01-13 Thread Alan Robertson
Hi,

If you manage, secure, or plan for IT environments or DevOps, we’d love
for you to take our System Management survey. Right now, we’re busy
planning on how to make the Assimilation Suite better in 2016.  Your
responses will be a huge help in giving us a sharp focus on how best to
improve IT  management for you and others in the IT community. If you
can help us out, we’ll send you a small token of our appreciation.

Details are here:
*http://assimilationsystems.com/2016/01/12/take-our-survey/*



Re: [Assimilation] Received a ... byte packet from [...] that didn't make any FrameSets.

2015-12-30 Thread Alan Robertson
On 12/30/2015 04:03 AM, Markus "Shorty" Uckelmann wrote:
> Am 29.12.2015 um 19:10 schrieb Alan Robertson:
>> Hi,
>>
>> Someone posted this error on the #assimilation IRC channel on
>> irc.freenode.net, so I thought I'd share the messages here along with
>> the probable cause:
>
> That someone was me. Sorry for leaving the channel too soon.
I know. I wasn't worried. It seemed good to share this with the community.
>
> BTW: I run a Vagrant setup with two boxes. One runs the CMA (and
> nanoprobe) and the other only runs a Nanoprobe. I install CMA and
> Nanoprobe via the "assiminstall" script. After the CMA is installed
> I copy the keys like this:
>
> cd /usr/share/assimilation/crypto.d/
> cp '#CMA#0.pub' '#CMA#1.pub' /vagrant/
>
> On the "client" I install the nanoprobe service with assiminstall and
> copy the keys like this:
>
> chmod 700 /usr/share/assimilation/crypto.d
> cp /vagrant/*.pub /usr/share/assimilation/crypto.d/
That looks reasonable.
>
> Since the assiminstall script doesn't start the services I can easily
> copy the keys. After this is done, I start the services.
Is this what you did before, or what you did now?
>
>> If you get an error from a nanoprobe about a packet that didn't make any
>> framesets, accompanied by a crypto message like this, it's likely that
>> this particular nanoprobe didn't have a copy of the CMA public key(s).
>> These files are named #CMA#0.pub and #CMA#1.pub and are in the
>> directory /usr/share/assimilation/crypto.d/.
>>
>> You have to provide the CMA's public keys to nanoprobes that aren't
>> running on the CMA. You should *not* copy the private (.secret) keys
>> over.
>
> The public keys are the same:
>
> From the CMA
>
> [root@cma crypto.d]# ls -la
> total 28
> drwx------ 2 assimilation assimilation 4096 Dec 30 11:27 .
> drwxr-xr-x 8 root root  131 Dec 30 11:27 ..
> -rw-r--r-- 1 assimilation assimilation   32 Dec 30 11:27 #CMA#0.pub
> -rw------- 1 assimilation assimilation   32 Dec 30 11:27
> #CMA#0.secret
> -rw-r--r-- 1 assimilation assimilation   32 Dec 30 11:27 #CMA#1.pub
> -rw------- 1 assimilation assimilation   32 Dec 30 11:27
> #CMA#1.secret
> -rw-r--r-- 1 root root   32 Dec 30 11:27
> cma.local@@62197c2f903452bb59b2a9d8eb501db7.pub
> -rw------- 1 root root   32 Dec 30 11:27
> cma.local@@62197c2f903452bb59b2a9d8eb501db7.secret
> [root@cma crypto.d]# sha256sum \#*.pub
> bbd7a6ebb5d397d0e66c7fad5d2531779fc6a91e1c44322e3f4b140d1bc5b22e
> #CMA#0.pub
> bb01569d2dbb985edaf72892ae3de72b6b865b7adc5e37d541acc3963c3ecd29
> #CMA#1.pub
>
>
> From the "external" Nano (e.g. not the one running on the CMA):
>
> [root@box1 crypto.d]# sha256sum \#*.pub
> bbd7a6ebb5d397d0e66c7fad5d2531779fc6a91e1c44322e3f4b140d1bc5b22e
> #CMA#0.pub
> bb01569d2dbb985edaf72892ae3de72b6b865b7adc5e37d541acc3963c3ecd29
> #CMA#1.pub
> [root@box1 crypto.d]# ls -la
> total 20
> drwx------ 2 root root 4096 Dec 30 11:43 .
> drwxr-xr-x 8 root root  131 Dec 30 11:43 ..
> -rw-r--r-- 1 root root   32 Dec 30 11:43
> box1.local@@422d966c59c85a95391742b7661ab7dc.pub
> -rw------- 1 root root   32 Dec 30 11:43
> box1.local@@422d966c59c85a95391742b7661ab7dc.secret
> -rw-r----- 1 root root   32 Dec 30 11:43 #CMA#0.pub
> -rw-r----- 1 root root   32 Dec 30 11:43 #CMA#1.pub
>
>
>> If you try to connect this nanoprobe to a different CMA than the one it was
>> last connected to, then this could also happen. [This occasionally
>> happens in my test environment at home - where I have several CMAs].
>
> I have only one CMA.
>
>> Given how the software works, it's a good idea to give the systems names
>> other than 'localhost' - or it will get confused. Systems have to have
>> distinct names according to uname -n. Naming problems like this will
>> most likely show up on the CMA end.
>
> That was a problem with Vagrant. The hostname is set at boot time and
> somehow Journald doesn't catch up. After a reboot Journald logs with
> the correct hostname.
OK. This wasn't related to your problem, as far as I know.
> Everything else was using the right hostname before the reboot:
>
> [root@cma ~]# hostname -f
> cma.local
> [root@cma ~]# hostnamectl status
>Static hostname: cma.local
>  Icon name: computer-vm
>Chassis: vm
> Machine ID: d598365e3abc4d8ea37ef07821507343
>Boot ID: 994ebbd5ff674aa385570067fa7612bd
> Virtualization: kvm
>   Operating System: CentOS Linux 7 (Core)
>CPE OS Name: cpe:/o:centos:centos:7
> Kernel: Linux 3.10.0-327.3.1.e

Re: [Assimilation] Travis-CI and Assimilation

2015-12-03 Thread Alan Robertson
We support ipv4 addresses, but only by way of ipv6. So we can't
(currently) run on an ipv4-only platform. I couldn't imagine any such
platforms still existing - it hasn't made sense for 10 years or so.

I don't care if you can't route or send to/from "real" ipv6 addresses.
But internally, everything in Assimilation is an ipv6 address. When I
want to send to 10.1.2.3, I send instead to ::ffff:10.1.2.3. So, we
support ipv4 addresses - but we translate them to ipv6. On the wire, our
packets go out as ipv4 packets, not ipv6. But if you disable ipv6 in the
kernel, then we choke because that translation code is part of the ipv6
stack.
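The IPv4-mapped-IPv6 convention described above can be demonstrated with Python's standard `ipaddress` module. This is just an illustration of the addressing convention, not the project's networking code:

```python
import ipaddress

# An IPv4 address carried internally as an IPv4-mapped IPv6
# address (::ffff:a.b.c.d), as described above.
v4 = ipaddress.IPv4Address("10.1.2.3")
mapped = ipaddress.IPv6Address("::ffff:" + str(v4))

# The original IPv4 address is recoverable from the mapped form:
print(mapped.ipv4_mapped)  # 10.1.2.3

# On the wire the packet still goes out as IPv4; the mapping lives
# in the (IPv6) socket layer -- which is why disabling IPv6 in the
# kernel breaks this translation.
```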

The vendor who doesn't support ipv6 is Google, actually ;-).  And
Travis-CI inherits that from them...

I find it really hard to imagine that Google built anything that was
ipv4-only. For 10 years it hasn't made sense. Maybe you should talk to
them ;-)

I could put code at the bottom end of my network stack that would figure
out that an IPv6 address was ipv4 and then send it as ipv4. But that
would be ugly. As far as I know, it would only be needed for Travis-CI
(Google).

Until today, I had no idea that the "Google app cloud" was ipv4-only.

/me grumbles and thinks about putting the kludge into the code...


On 12/03/2015 03:08 PM, Jeff Silverman wrote:
> People,
>
> For the foreseeable future, both IPv4 and IPv6 will have to be
> supported by all vendors.  It's that simple.  You have to support IPv4
> because at the present time, there are devices that only support IPv4,
> and that will be the case for years, perhaps decades to come.  You
> have to support IPv6 because the planet is running out of IPv4
> addresses.  Everybody in the computer business in any capacity either
> knows this or should have known this.   I would be amazed if anybody
> argued with my position on this issue.  That's not the same as
> actually doing something about it, but nobody - nobody! - disagrees
> that supporting both protocols is best practices and all new systems
> should do so.
>
> So if you have a vendor who is unwilling or unable to support both
> protocols, then I would drop them.  If Assimilation does not support
> both protocols, then that is a major, serious bug and should be
> addressed ASAP.
>
>
> Jeff
>
>
> On Thu, Dec 3, 2015 at 1:06 PM, Alan Robertson <al...@unix.sh
> <mailto:al...@unix.sh>> wrote:
>
> Hi Atom,
>
> I'm aware that they are limited by their choice of suppliers,
> but they didn't ever mention ipv6 was going away until about 5
> days ago. I don't read their blog, so I didn't know until our
> builds started breaking randomly.
>
> If they supported ::1 (loopback), our testing would work just fine.
>
> They made a choice to eliminate this - and IMHO, a week is
> insufficient notice, and a note on a blog is an inadequate way to
> let people know.
>
> We now have to either wait for them to decide to keep a little old
> infrastructure around, or move somewhere else.
>
> From the perspective of the kernel, we don't support ipv4. It's
> something that would take a while to fix, and it's an incredibly
> stupid way to spend very limited resources.
>
> We can do everything we want to do using ipv6 - except test under
> Travis-CI :-(.
>
> -- Alan
>
>
> On 12/03/2015 01:52 PM, Atom Powers wrote:
>>
>> TravisCI can't really do anything until Google does something.
>> https://cloud.google.com/compute/docs/networks-and-firewalls#networks
>>
>>
>> On Thu, Dec 3, 2015 at 12:34 PM Alan Robertson <al...@unix.sh>
>> <mailto:al...@unix.sh> wrote:
>>
>> Hi,
>>
>> Travis-CI (continuous integration) has switched their build
>> infrastructure over to an /ipv4 only/ setup. They announced
>> this a few days ago:
>> https://blog.travis-ci.com/2015-11-27-moving-to-a-more-elastic-future
>>
>> The Assimilation software is ipv6-only - so this isn't a
>> match made in heaven :-(.
>>
>> I've created a Travis-CI GitHub issue for it here:
>> https://github.com/travis-ci/travis-ci/issues/5200
>>
>> I have no idea what they'll do about this - if anything...
>>
>> -- 
>>
>> Alan Robertson / CTO 
>>
>> al...@assimilationsystems.com
>> <mailto:al...@assimilationsystems.com>/ +1 303.947.7999
>> 
>>
>> Assimilation Systems Limited
>> http://AssimilationSystems.com
>>
>> Twitter <https://twitter.com/ossalanr> Linkedin
>> <https://www.linkedin.co

[Assimilation] push-button installer under development

2015-09-18 Thread Alan Robertson

Hi,

I was recently made aware of how annoying it is to install the 
Assimilation software.


My apologies for that. But the good news is that I'm working on a script 
which should install it successfully for any system we are producing 
packages for.


It's working quite well now - and I'm really excited about it. It kicks 
grass - without even playing soccer!


I'll also add Debian builds to what we're building via Docker.

If it goes well, I should be well on the way to the next release next week.

But the install script will currently install 1.0 - so that should be 
available even without a new release.


More next week!

-- Alan Robertson
al...@unix.sh


[Assimilation] Moving source control to github

2015-09-14 Thread Alan Robertson
Hi all,

I've had a lot of requests to move our source control to github. It
makes sense, and with a little help from my friends (like Jeff Haemer
and JC), we've moved our source control to github.

This will make it more visible to the world at large, more readily
accessible to more people, and we'll get to do what the cool kids do ;-).

The project URL is: https://github.com/assimilation
Our official repository is:
https://github.com/assimilation/assimilation-official

JC has travis-ci integration ready to go, and I have my docker build
system going. The documentation isn't updated yet, but we'll get to that
RealSoonNow. Since all the documentation is driven from source control,
I'm perfectly happy to get pull requests for that, or anything else for
that matter.

Have a great week!




[Assimilation] Discovered confusing information about the 1.0 release

2015-08-27 Thread Alan Robertson
Hi,

Simone Setti discovered that I made a mistake while building the final
1.0 packages. I built and tested them exactly as I intended to, but
there was one problem, which he pointed out to me. The versions in the
1.0 folder still show up as version 0.5 - because I forgot to update the
Cmake configuration. I am sorry for the confusion, but the code that's
up there is the 1.0 version. Sigh...

My thanks to Simone, and my apologies to everyone for the confusion!

-- Alan


 thanks for download link, i see in the 1.0 release folder that
 ubuntu package are 0.5xxx version, is this correct?

 Best regards

 Simone Setti
 Sistemi Informativi
 Lamp San Prospero S.p.A.








[Assimilation] Assimilation development update

2015-08-25 Thread Alan Robertson
Hi,

I've been working on the code for keeping systems continually in
compliance with best practices. The first real version of this code is
working and is in source control. The infrastructure for measuring
things against best practice rules is there. The code for evaluating
best practices and remembering the results is there. There are about 20
rules from the NIST STIGs in place. More coming!

Some sample output from my desktop machine appears below.

Konsole output
FAIL:   networking ID itbp-1 IN($net.core.default_qdisc, fq_codel,
codel)
FAIL:   security ID nist_V-38511 EQ($net.ipv4.ip_forward, 0)
PASS:   security ID nist_V-38523
EQ($net.ipv4.conf.all.accept_source_route, 0)
PASS:   security ID nist_V-38524 EQ($net.ipv4.conf.all.accept_redirects, 0)
FAIL:   security ID nist_V-38526 EQ($net.ipv4.conf.all.secure_redirects, 0)
FAIL:   security ID nist_V-38528 EQ($net.ipv4.conf.all.log_martians, 1)
FAIL:   security ID nist_V-38529
EQ($net.ipv4.conf.default.accept_source_route, 0)
FAIL:   security ID nist_V-38532
EQ($net.ipv4.conf.default.secure_redirects, 0)
FAIL:   security ID nist_V-38533
EQ($net.ipv4.conf.default.accept_redirects, 0)
PASS:   security ID nist_V-38535
EQ($net.ipv4.icmp_echo_ignore_broadcasts, 1)
PASS:   security ID nist_V-38537
EQ($net.ipv4.icmp_ignore_bogus_error_responses, 1)
PASS:   security ID nist_V-38539 EQ($net.ipv4.tcp_syncookies, 1)
PASS:   security ID nist_V-38542 EQ($net.ipv4.conf.all.rp_filter, 1)
PASS:   security ID nist_V-38544 EQ($net.ipv4.conf.default.rp_filter, 1)
FAIL:   security ID nist_V-38548
EQ($net.ipv6.conf.default.accept_redirects, 0)
PASS:   security ID nist_V-38596 EQ($kernel.randomize_va_space, 2)
n/a:security ID nist_V-38597 EQ($kernel.exec-shield, 1)
FAIL:   security ID nist_V-38600
EQ($net.ipv4.conf.default.send_redirects, 0)
FAIL:   security ID nist_V-38601 EQ($net.ipv4.conf.all.send_redirects, 0)

All but one of the rules are security-related. They are all taken from
the IT Best Practices project at http://ITBestPractices.info. Although
this particular set of STIGs is for RHEL6, they work just as well on my
Ubuntu desktop.
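On Linux, the sysctl values these rules test live under /proc/sys, so a rule like EQ($net.ipv4.ip_forward, 0) can be checked directly. The sketch below is illustrative only (my own function names, not the project's discovery code); the `root` parameter exists so the lookup can be pointed at a test directory:

```python
from pathlib import Path

def sysctl_get(name, root="/proc/sys"):
    # "net.ipv4.ip_forward" -> /proc/sys/net/ipv4/ip_forward
    return Path(root, *name.split(".")).read_text().strip()

def check_eq(name, expected, root="/proc/sys"):
    try:
        actual = sysctl_get(name, root)
    except OSError:
        # Key absent on this kernel -- reported as "n/a",
        # like nist_V-38597 (kernel.exec-shield) above
        return "n/a"
    return "PASS" if actual == str(expected) else "FAIL"
```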

The next step is to add more discovery scripts and rules. The next ones
I look at are likely going to be permissions-related.



Re: [Assimilation] Assimilation Project Installation Question

2015-08-21 Thread Alan Robertson
On 08/21/2015 11:22 AM, William Klausmeyer wrote:
 Alan,

 Thank you for the quick response.

 I found the .deb for libsodium. In the Ubuntu 14.04 directory, there
 isn't a .deb for it in the 1.0 release. You have to look in the 0.5
 release to get the .deb, but that installation file seems to work
 perfectly!

I should copy it over to the other 1.0 Ubuntu releases. Obviously I
missed that one :-(

OR MAYBE JC SHOULD INCLUDE IT AS PART OF HIS BUILDS ;-)

He builds the source and installs, just doesn't build a package for it.
When I do the Docker builds, I build a package for it - which is why I
have a copy and he doesn't.

Sorry for the confusion!  Glad it installed properly for you.

One thing that's happened around the time that I put the release out -
Neo4j added authentication to their REST interface. The easiest way to
deal with that for the moment, is to disable Neo4j authentication.
http://neo4j.com/docs/stable/security-server.html

You could also make sure the key goes in the environment (I've forgotten
the env var name).





Re: [Assimilation] Metric collection?

2015-07-14 Thread Alan Robertson
We don't collect time series data. Sorry :-(

There are some things that we could do to make sure the right custom
data is collected by someone else depending on configuration. But there
are lots of tools that do this, and it's actually a pretty different
problem. It's important, but we're currently not doing it.

The appropriate scaling methods are quite different, therefore the right
protocols, etc. are quite different.

On 07/14/2015 02:00 PM, Atom Powers wrote:
 As I'm looking at all the things that Assimilation can do for me I
 wonder if it can also help me get data into Graphite (etc.) Is there
 the ability to send collected data to Carbon Cache or some other data
 broker?





Re: [Assimilation] Best Practices as code - progress report!

2015-07-10 Thread Alan Robertson
Is that something you'd be willing to do? ;-)


On 07/10/2015 07:57 AM, glaws wrote:
 Alan:

 You might consider adding Assimilation to this page?

 https://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems

 Greg Lawson

 --

 On 07/09/2015 08:05 PM, Alan Robertson wrote:
 Hi Atom,

 Thanks for your reply. It gets kinda lonely when I write things and
 no one replies ;-)
 -snip






[Assimilation] Nagios Agent Support: WAS Re: Best Practices as code - progress report!

2015-07-10 Thread Alan Robertson
On 07/10/2015 08:43 AM, Atom Powers wrote:
  Although with this feature and if the Nagios agent support works the
 way I hope it does then I may be able to build a case for Assimilation
 sooner.

How do you hope the Nagios agent support works?

Which of the Nagios agents do you care about?




Re: [Assimilation] Best Practices as code - progress report!

2015-07-10 Thread Alan Robertson
On 07/10/2015 12:31 PM, Atom Powers wrote:
 On Fri, Jul 10, 2015 at 10:51 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 On 07/10/2015 08:43 AM, Atom Powers wrote:

  

 Having spent many mind-numbing hours reading over the 250+ rules
 I have so far, I don't believe there is any portable, useful,
 universal form. Each of the kinds of rules (and there appear to be
 more than 20 kinds so far) has its own vocabulary and domain of
 knowledge and methodology. It's hard to generalize this, and in
 the end, I suspect no one (possibly excepting you) would wind up
 using the result. At least no one that I know of. And that makes it
 a lot harder to do for /zero/ return on investment. If no one uses
 it (and I don't know of anyone who will), I'm not going to spend
 my time on it.


 If there is no method or intention of creating a portable, useful form
 of the checks then I must have misunderstood the meaning of
 mechanically-verifiable, or at least the scope of it.

 I hadn't looked at Lynis before but now that I have I see that he has
 already created a mechanical verification of these
Not so much /these/ rules (so far), as his own set of rules. I don't
think he's spent too much time on the NIST STIGs. He also has rules they
don't. I ran across Lynis while thinking about doing this work. He's a
good guy - and we keep in touch. I'm trying to bring this all together
in the same place, and then get people to volunteer to do translations,
add rules, etc.
 rules (although I'm dubious of the mechanical label). That, and your
 comments, suggest that the IT-bestpractices project is more about
 curating the best practices into one place to serve as a reference for
 Assimilation, Lynis, and other future projects.

 +1 to that

That's exactly what I intended - but also for manual verification when a
particular tool doesn't cover a particular rule yet. But more to the
point, the text descriptions are often the bigger pain in the neck. The
code is comparatively easy; writing a good explanation is actually
harder. And it would be nice to have translations in multiple
languages as well.

It's worth noting that a few of the NIST STIGs say things like
"interview the sysadmin" and so on. Although these are currently in the
database, they are not easily covered by mechanical rules. I need to
figure out how to deal with those over time.

The intent is that you will eventually be able to give a URL to the web
server at itbestpractices.info and get back the JSON that describes a
particular hardening recommendation in multiple languages - so it can be
shown to users by the developers of these products. This would be open
to all without charge - assuming it doesn't get too expensive.

In my case, I can imagine someone getting an alert saying "Rule Foo123
was violated", and then, when they want details, I'd go fetch them from
the web server. Or I might encapsulate a copy of the rules in my
product. They'll be licensed under Apache 2 - which seems to be the
most business-friendly license around.

I will be touching on this effort at my OSCON talk 2 weeks from yesterday.
http://assimilationsystems.com/events/oscon-2015/

Hopefully this makes some sense.

I assume that the +1 means that makes sense, and you like it ;-)

If so, make sure we have your favorite rules covered - get your security
friends involved!

*That goes for everyone on this list :-D.*



Re: [Assimilation] Nagios Agent Support: WAS Re: Best Practices as code - progress report!

2015-07-10 Thread Alan Robertson
On 07/10/2015 02:00 PM, Atom Powers wrote:
 A replacement for NRPE, with all the sexy that Assimilation provides.

It is that. Sexy is undiminished ;-).

You don't manually configure individual monitoring instances. Instead,
you give us templates that describe each particular kind of monitoring -
for example MySQL, or sshd, or whatever. You *can* tweak the different
instances of mysql and so on, but it's not especially pretty to do so,
and it's all done by editing files.

Once you do that, whenever we see that particular kind of service show
up, we just monitor it. And if the monitoring agent can be told what
port it's on, the templates can tell the agent things like ports, IP
addresses, and so on automatically.

There is a small difference between templates for monitoring a service
(like MySql or Neo4j, or sshd) and those for monitoring a server (like
sensors or load average).  But you can create templates for both types
of monitoring.

Sensors and load average are both configured automatically if the
underlying facilities they need are available. Our default load average
monitoring is very conservative (it tries to avoid false alarms). In my
experience, as important as load average is (and it's my
second-favorite metric), people underestimate the load average a server
can have and still perform adequately.

For load average, by default we scale the load average to the number of
CPUs available - because that's the right behavior.
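That scaling rule is simple to state in code. Here is a minimal sketch (not the project's actual implementation; the 2.0 threshold is an invented example knob) using Python's standard os module:

```python
import os

def scaled_load_average() -> float:
    """1-minute load average divided by the number of CPUs.

    A value around 1.0 means the machine is, on average, keeping every
    CPU busy; an unscaled threshold would false-alarm on big multi-core
    boxes.
    """
    one_minute, _, _ = os.getloadavg()
    return one_minute / (os.cpu_count() or 1)

def overloaded(threshold: float = 2.0) -> bool:
    # Conservative example policy: only alarm when the run queue is
    # roughly twice the CPU count (the threshold here is hypothetical).
    return scaled_load_average() > threshold
```

The point of dividing by the CPU count is that a load average of 8 is alarming on a 2-core box but perfectly healthy on a 32-core one.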

Of course, we still support the OCF resource agents as well. In my
experience, those agents are very solid. To my knowledge, OCF resource
agents all monitor services, not servers.




 On Fri, Jul 10, 2015 at 12:09 PM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 On 07/10/2015 08:43 AM, Atom Powers wrote:
  Although with this feature and if the Nagios agent support works
 the way I hope it does then I may be able to build a case for
 Assimilation sooner.

 How do you hope the Nagios agent support works?

 Which of the Nagios agents do you care about?






 -- 
 Perfection is just a word I use occasionally with mustard.
 --Atom Powers--




[Assimilation] New blog post: Simplicity Is King

2015-03-26 Thread Alan Robertson
Hi,

I wanted to share my new blog post on the Assimilation Project with you...




Simplicity Is King
http://assimilationsystems.com/2015/03/26/simplicity-is-king/


Our lives are filled with things designed to make our lives easier. In
many of these cases, these things we get to make our lives easier wind
up making it more complex.  Nowhere is this more apparent than in IT.
 We have so many choices of ways to create services, to deploy them, and
to manage […]

Read more of this post
http://assimilationsystems.com/2015/03/26/simplicity-is-king/


Thanks!

-- Alan Robertson
   al...@unix.sh



[Assimilation] Proposed common best practices as code support structure...

2015-03-18 Thread Alan Robertson
Hi,

I've been talking to Michael Boelen of the Lynis project about creating
a best practices community particularly around security, and he asked me
to make a concrete proposal about exactly what I had in mind - so here
it is ;-)
You can also find the document here:
https://docs.google.com/document/d/1EdlNR8W7lUqZQPkBoIGSH-t0BevlqOY6rDUzJ57Qfw8/edit?usp=sharing

PS:  Happy Birthday Michael!

*


  Introduction

This is a proposal for creating a useful common shared best practice
recommendations database.  Although our primary emphasis is security,
the database is structured to accommodate a variety of domains of best
practices.  Possible additional best-practice domains include
networking, performance and availability.  It is noteworthy that we are
primarily interested in best practices which can be automatically
verified - allowing associated vendors to translate these best practices
to code.  This database is a key portion of our common “best practices
as code” effort.  Because there are a variety of ways of ensuring
compliance with these best practices, this database does not contain the
details of exactly how one might best verify compliance with these best
practices.


/*This proposal is being sent out for comments, and is subject to change
until it is finalized.*/


  General Outline

The idea here is to be able to share certain information about best
practices in a way which is helpful to end-users.  Here are a few of the
design goals of this effort:

  * Data will be available to end users via a web server.

  * Each collection of data for a given best practice recommendation
    will have a unique identifier.  A given best practice might have
    different descriptions for different operating systems, but any
    given best practice recommendation will have exactly one identifier.

  * The structure must support multiple languages.  Although not every
    piece of text will be available in multiple languages, the structure
    we create must accommodate them.  All text data is encoded in UTF-8,
    which allows for any source language while minimizing the complexity
    of the implementation.

  * More generic and more specific versions of best practice texts are
    available for various operating systems and distributions.  The web
    server will allow the specification of attributes such as operating
    system, version, and language, and it will select the closest
    available version of the desired text.

  * Recommendations will be able to be tagged with external identifiers
    for the various external agencies which make the same recommendation.

  Web Server


  Directory and Data structure

The project directory structure is designed to ease management of the
body of recommendations and facilitate the work of the web server.  The
proposed directory structure is outlined in the next subsection.


Directory Structure

In general, the recommendation data structure is proposed to look
something like this:

  * top-level-directory
      o recommendation-domain.domain
          + os-class.class (for example posix.class or windows.class)
              # os-type.os
                  * distribution.distro (if applicable)
                      o optional-major-release-version-level.rel

Recommendations potentially exist at any level in the tree below the
recommendation-domain level.  The web server would take the supplied
set of attributes - and return the information which is the closest and
most complete match to the request.


For this directory structure, it is expected that the information for a
particular security identifier would be normally placed as high in the
tree as possible.  Versions should only be placed lower in the tree if
the text explaining the recommendation or its remedy differs significantly.


An example

If a request specified
os=linux,distro=redhat,release=6,id=42,domain=security and the following
files existed:

  * security.domain/posix.class/42.json
  * security.domain/posix.class/linux.os/42.json
  * security.domain/posix.class/linux.os/redhat.distro/42.json

The web application would return the information from the file
security.domain/posix.class/linux.os/redhat.distro/42.json.  If the same
request was made substituting distro=suse,release=11, then the
information found in security.domain/posix.class/linux.os/42.json would
be returned instead.
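The closest-match lookup described above can be sketched as a short resolver: try the most specific candidate path first and back off toward the generic one. Everything here (the function name, the attribute tuple) is illustrative, not part of the proposal itself:

```python
from pathlib import Path
from typing import Optional

def resolve(root: Path, domain: str, rec_id: str, *attrs: str) -> Optional[Path]:
    """Find the best file for `rec_id`, trying the most specific
    directory first and backing off to more generic ones.

    `attrs` are ordered most-generic-first, e.g.
    ("posix.class", "linux.os", "redhat.distro").
    """
    for depth in range(len(attrs), -1, -1):
        candidate = root.joinpath(f"{domain}.domain", *attrs[:depth],
                                  f"{rec_id}.json")
        if candidate.is_file():
            return candidate  # closest (deepest) existing match
    return None
```

With the three files from the example above in place, a redhat request resolves to the redhat.distro copy, while a suse request falls back to the linux.os copy.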


Proposed File Contents

Each file is named according to a unique identifier for this particular
recommendation.  Each file contains up to three different kinds of data:

  * Explanation of the recommendation, how to tell if it's complied
    with, and the potential consequences of not being in compliance.

  * Description of how to come into compliance with the recommendation.

  * External tags - references to outside 

[Assimilation] Project status

2015-03-09 Thread Alan Robertson
Hi all,

I've written several new discovery scripts - aimed at monitoring and
security.  They're in source control now.  I kicked off a build, so they
should be available in packages in a half-hour or so.

Here they are:

commands  - finds the set of commands installed in the usual places
findmnt   - details on mounted filesystems
mdadm - Linux software RAID configuration
nsswitch  - /etc/nsswitch.conf discovery
pam   - PAM discovery - all your PAM config in a single JSON object
partitions- discovery of available disk partitions
sshd  - discovery of sshd configuration /etc/ssh/sshd_config
monitoringagents - updated to discover Nagios monitoring agents
(previously found OCF, lsb and upstart)

If I have the time, I might put out a minor release this week.  I expect
the next major feature will be Nagios agent integration - so you can use
Nagios agents for monitoring.

-- Alan Robertson
   al...@unix.sh





[Assimilation] Projects underway...

2015-03-04 Thread Alan Robertson
I'm still looking at the right way to do comprehensive security
monitoring.  Started looking at the OpenSCAP project to see if I can
leverage their set of checks and (perhaps more importantly) their
detailed descriptions of things.

Carrie Oswald and Sebastian Berm have started on the effort to support
Nagios monitoring plugins.  I updated the monitoring_agents discovery
plugin to discover Nagios monitoring plugins.

I've written three more discovery agents:

findmnt - uses the findmnt command to discover details about mounted
filesystems (similar to 'mount')
commands - finds the names of commands installed in common places
(/bin, /usr/bin, etc)
nsswitch - discovers nsswitch settings

The updates mentioned above are available in source control.  I've just
kicked off an official build to make packages available.

-- Alan Robertson
   al...@assimilationsystems.com OR al...@unix.sh



Re: [Assimilation] Smartd interface

2015-02-24 Thread Alan Robertson
On 02/24/2015 07:43 AM, Bolesław Tokarski wrote:
 Hello, Alan,

 I am adding the mailing list to CC as I believe you have omitted it by
 accident. I have also put the whole text of your email (although I
 comment in-line), so you wouldn't need to re-post your previous one.
Sorry for getting in a hurry to respond.  I had to leave for the airport
and thought I could give a reasonable response before going.  I'm back
to a more normal schedule for a while...

Thanks for CCing the mailing list!

 2015-02-24 14:48 GMT+01:00 Alan Robertson al...@unix.sh
 mailto:al...@unix.sh:

 Hi Boleslaw,

 Thanks for the note!

  From your perspective, what would be the best interface between a
  nanoprobe and smartd running on the same machine?

 I would rather not add C library dependencies to the nanoprobe to
 connect to smartd.  It's not that universally used.  Of course, one of
 the reasons is that its such a pain to deal with.  If you're not going
 to link new code into the nanoprobe, then you'd need to write a script
 to do the monitoring and one to do discovery.

 We have a distinction between discovery - which is all JSON based, and
 monitoring which is communicated by exit codes.  Smartd is an
 interesting case, where we would likely want both.


 That would be all fine... almost. For the exit codes, this might
 possible with smartctl, as it returns some bitmask-based output. It
 provides OK/NOK for SMART status, error counter (status) and self-test
 failure (status).
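For reference, smartctl's exit status really is a bitmask, so decoding it takes only a table lookup. The bit meanings below are taken from the smartctl(8) man page and should be double-checked against the smartmontools version actually in use:

```python
# Bit meanings from the smartctl(8) man page (verify for your version).
SMARTCTL_BITS = {
    0: "command line did not parse",
    1: "device open failed",
    2: "a SMART or ATA command failed, or a checksum error occurred",
    3: "SMART status check returned DISK FAILING",
    4: "prefail attributes found at or below threshold",
    5: "attributes were at or below threshold at some point in the past",
    6: "device error log contains records of errors",
    7: "self-test log contains records of errors",
}

def decode_smartctl_status(code: int) -> list:
    """Return the human-readable meaning of each bit set in `code`."""
    return [msg for bit, msg in SMARTCTL_BITS.items() if code & (1 << bit)]
```

An exit code of 0 means everything was OK; bit 3 is the one that maps most directly to a monitoring-style OK/NOK result.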

 Now, the almost part. I find people out there in the wild doing
 precognitive HDD failure diagnosis. They are using some custom level
 of particular SMART attributes to put the red light on. The potential
 attributes are described in [1], an example script that does it per
 user-defined values (drive-specific) is described in [2]. This seems
 to be confirmed by the failures in our environment, where the SSD drives
 rarely had any SMART warnings on them before just disappearing.  Also,
 you cannot just run smartctl.  You need to provide it with a device
 to check. Pushing /dev/sd* is not an option, since if that's an array,
 you also need to provide additional arguments to smartctl. In the
 newer smartctl version (6.2), I found that smartctl --scan-open
 provides reliable enumeration of HDDs in the system (but it's only
 direct SATA and SAS behind a megaraid controller that I tested).

 
  Currently smartd queries smart data frequently (-i interval) so its
  daemon functionality is a cron equivalent, but it keeps the state in
  memory, so can be used to limit the number of smart calls to the
  underlying hdd (which might be dangerous). Aside the general smart
  status (ok/nok), it has counters for io errors and some vendor
  parameters, and optionally self-test log results. Due to differences
  in SCSI/ATA interfaces (and even vendor-specific parameters), the
  attributes kept vary, but can be provided too.
 
  I was thinking about some query-answer mechanism, perhaps a JSON
 API?
  That would force smartmontools guys to add a library dependency, I
  will need to raise the issue with them. And, maybe JSON is too
  complicated for the querying agent?
 Creating well-formed JSON is easy.  You don't need a library to do
 that.  I do it from shell scripts with no libraries ;-).  Parsing it
 usefully is harder, but they don't have to do that.  We do that very
 usefully in our C code.  We even use this same C code to handle
 JSON for
 our Python code.  The only thing that a library would do for you would
 be to escape strings that *might* have quote characters or control
 characters in them (although in practice, our code would likely not be
 bothered by non-NULL control characters).  If that isn't going to
 happen, then it's all just printfs after that...
 
  What is your opinion on that?
 I would love to be able to monitor hard drives.  If it spits out JSON,
 then that's a big win from the discovery perspective.  If we could
 also
 have an exit code interface, that would be good too.


 For an exit code interface to smartd (as opposed to running smartctl),
 we'd need a smartdc, which would query the daemon. For JSON, would you
 be launching a query against a (UNIX? TCP?) socket or would you run an
 external command that should produce the JSON data?
I would rather just have a command (with options) we could fork/exec and
produce JSON.  Otherwise, we have to build the code into the nanoprobe. 
Then normal UNIX permissions apply, and this is simpler than any kind of
network or socket permissions.  If we use any kind of socket, it is
likely that there would be corresponding header files, and libraries to
link against, and so on.  If the output is JSON based, then
fork/exec-and-read-a-JSON-string is likely to be much easier to manage
from a dependency perspective.
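The fork/exec-and-read-a-JSON-string approach amounts to very little code, which is part of its appeal. A sketch in Python (run_discovery is a hypothetical name, and the command being run is whatever discovery program you choose):

```python
import json
import subprocess

def run_discovery(argv, timeout=30):
    """Run `argv`, returning (exit_code, parsed_json_or_None).

    The child is an ordinary process, so normal UNIX permissions apply -
    no sockets, headers, or client libraries needed.
    """
    proc = subprocess.run(argv, capture_output=True, text=True,
                          timeout=timeout)
    try:
        data = json.loads(proc.stdout)
    except json.JSONDecodeError:
        # Monitoring-style callers may only care about the exit code.
        data = None
    return proc.returncode, data
```

The exit code covers the monitoring side of the interface and the parsed JSON covers the discovery side, matching the two-part distinction described earlier in the thread.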

My objection is to having library

[Assimilation] Valentine's Day release is out

2015-02-14 Thread Alan Robertson
Hi,

I've tagged version 0.5 today - Valentine's Day.

This release is primarily a bug fix release - with the fix for the
assimcli query bug, and a couple of fixes for some JSON parsing issues.

Currently doing the official builds for the release.  They'll be up in
the usual places in a few hours.  I also expect to publish some Docker
images to play with next week.  The infrastructure for all that is part
of this release.  I'll let you know when they're up.

Here's the release description:


  version 0.5 - the Valentine's day release - 14 February 2015

This release is the sixth in a series of releases intended to culminate
in an awesomely useful release.  It is primarily a bug fix release.  This
release is eminently suitable for deployments in environments where the
caveats are acceptable.  We have quite a few pre-built Ubuntu packages,
and a few CentOS/RHEL packages for this version in the 0.5 subdirectory
of http://bit.ly/assimreleases.  A tar ball for this version can be found
here: http://hg.linux-ha.org/assimilation/archive/v0.5.tar.gz


New Features

  * We now produce Docker images for several versions of Linux, suitable
for doing demos, testing, and learning about the software.


Bug Fixes

  * Fixed a bug where command line (assimcli) queries sometimes failed
due to interactions with Linux security modules
  * Fixed a longstanding-but-previously-unknown bug where the JSON
    parser didn't accept floating point numbers or negative integers


Caveats

  * No alerting, or interface to existing alerting (hooks to build your
own interface are included)
  * high availability option for the CMA is roll-your-own using
Pacemaker or similar
  * the queries need to have more indexes for larger installations.
  * The CMA may suffer performance problems when discovering IP
addresses when large numbers of nanoprobes are on a subnet.
  * no GUI




[Assimilation] Preparing for next release

2014-10-02 Thread Alan Robertson
Hi,

I just put out a set of public builds of the current source in
preparation for the next release - which I hope to put out next week.

This is the code I mentioned had been tested with a 100-node system.  I
put in several improvements since then, and toned down the debugging to
be tolerable.

I have done some testing with 200-node systems, but it's not always
reliable in 200-node systems.  This has a lot to do with my test
environment - but some is due to the code.  This is not an architecture
issue - just a coding issue.  Should fix it fairly soon.

There may have to be some not-yet-written code to make it scale
reliably for really big systems.

This version is well-worth testing.  The URL for the official builds is:
   
https://www.dropbox.com/sh/4olv5bw1cx8bduq/AADkfkqzXOLfA-cwHIdlcdGTa/builds

Let the list know what you think!

Let me know if you have questions about how to install it.

-- Alan Robertson
   al...@unix.sh



[Assimilation] Assimilation CMA on RHEL (and CentOS)

2014-06-04 Thread Alan Robertson
Hi,

I now have a Docker instance of the Assimilation CMA which passes all
tests on CentOS6.  It does this by using python 2.7 from the CentOS
software collections.  I now *completely* understand the use and value
of software collections.  I confess I missed out on the value of the
presentation that Joe Brockmeier gave.  My apologies to Joe - it was
truly my loss...  :-(.

This has been a problem because of how old RHEL6/CentOS6 is by now...

There is a good bit of swinging a dead chicken over one's head to set it
up and use it properly, but I now know everything that's involved - and
it's all done using supported mechanisms...  I had to rewrite a small
amount of my code, but that had other advantages - so I'm glad I did it.

It doesn't yet install everything so that all is transparent and golden,
but I now know all the things that need to happen - and I've seen it run
and things that failed before work now.

This is all documented (so to speak) by the CentOS Dockerfile which
you can find here:
http://hg.linux-ha.org/assimilation/file/tip/docker/CentOS/Dockerfile

Assuming my laryngitis and bronchitis don't get worse ;-) I should have
all the fixes in to have it work nicely in the RH-ecosystem put in this
week.

Next stop after that:  RPMs for RH-ecosystem systems...

Anyway, Hurray!!

-- 
Alan Robertson al...@unix.sh - @OSSAlanR

Openness is the foundation and preservative of friendship...  Let me claim 
from you at all times your undisguised opinions. - William Wilberforce


[Assimilation] Peer-to-peer connections and firewalls

2014-05-27 Thread Alan Robertson
Hi,

It has come to my attention that some enterprises (maybe even most of
them) won't allow the current peer-to-peer networking arrangement.  This
is because it requires allowing an any:any firewall rule on port 1984.

For those people, I'll develop an alternative topology - where all the
clients just heartbeat a limited number (like one or two) systems.

This will still be very low overhead, just not as low as the
peer-to-peer arrangement.

Packets will only go one way, so it will be half the traffic of periodic
pings.

Everything else about the architecture will be unchanged.

This is not complicated, since the nanoprobes don't know the topology --
and they don't care.




[Assimilation] Version

2014-03-29 Thread Alan Robertson
Hi folks,

Version 0.1.2 is now out.  Please download it, try it out, and comment
on it on the mailing list.  The main new features are the command line
query tool - which lets you see what the system has done for you - and
the checksum change notification tool.  The complete set of release
notes is on the project web site, and is also included below for your
reading pleasure.

Enjoy!!



Version 0.1.2 - the 'very interesting' release - 20 March 2014

This is the second in a series of releases intended to culminate in a
truly useful release. This release is suitable for limited trials in an
environment where the caveats are acceptable. You can find a few
pre-built Ubuntu packages for this version here:
https://www.dropbox.com/sh/h32lz3mtb8wwgmp/26AyspFaxL/Releases/0.1.2
https://www.dropbox.com/sh/h32lz3mtb8wwgmp/WZKH4OWw1h/Releases/0.1.2
A tar ball for this version can be found here:
http://hg.linux-ha.org/assimilation/archive/v0.1.2.tar.gz


Features

These features are new with release 0.1.2.

  * added /assimcli/ - a command line query tool with more than 15 cool
canned queries. These queries are also available through the REST
interface.
  o allipports: get all port/ip/service/hosts
  o allips: get all known IP addresses
  o allservers: get known servers
  o allservicestatus: status of all monitored services
  o allswitchports: get all switch port connections
  o crashed: get 'crashed' servers
  o down: get 'down' servers
  o downservices: get 'down' services
  o findip: get system owning IP
  o findmac: get system owning MAC addr
  o hostdependencies: host's service dependencies
  o hostipports: get all port/ip/service/hosts
  o hostservicestatus: monitored service status on host
  o hostswitchports: get switch port connections
  o list: list all queries
  o shutdown: get gracefully shutdown servers
  o unknownips: find unknown IPs
  o unmonitored: find unmonitored services
  * added a checksum monitoring capability - for network-facing
binaries, libraries and JARs.
  * updated to a newer and faster version of the py2neo library
  * updated the CMA to use the Glib mainloop event scheduler
  * added a certain amount of Docker compatibility. Assimilation now
builds and installs correctly for CentOS 6 (but some tests seem to
fail).


Bug Fixes

  * Fixed the memory leak from 0.1.1 - which turned out to be minor.
  * Fixed a subtle bug in the Store class where it would appear to lose
values put into node attributes
  * Fixed lots of bugs in the REST queries - and renamed them to be more
command line friendly


Caveats

  * Object deletion not yet reliable or complete
  * No alerting, or interface to alerting (hooks to build your own
interface are included)
  * Communication is neither authenticated nor confidential
  * No heterogeneous system support (POSIX and Windows - but someone is
now working on Windows!)
  * No statistical data collection
  * No CDP support for Cisco switch discovery
  * No high availability option for the CMA
  * The queries need more indexes for larger installations.

Features that are expected for a monitoring solution but
are *not* included:

  * useful alerting (but you can probably integrate your own)
  * heterogeneous system support (POSIX and Windows - but someone is now
looking at Windows - yay!)
  * statistical data collection

Note that these features are understood to be important and are
planned - but this first release does not include them.

-- 
Alan Robertson al...@unix.sh - @OSSAlanR

Openness is the foundation and preservative of friendship...  Let me claim 
from you at all times your undisguised opinions. - William Wilberforce

___
Assimilation mailing list - Discovery-Driven Monitoring
Assimilation@lists.community.tummy.com
http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
http://assimmon.org/


[Assimilation] Canned queries which we currently support

2014-03-28 Thread Alan Robertson
Hi,

Here is the set of queries which the command line query tool (and also
the REST interface) supports.


allipports       get all port/ip/service/hosts
allips           get all known IP addresses
allservers       get known servers
allswitchports   get all switch port connections
crashed          get 'crashed' servers
down             get 'down' servers
findip           get system owning IP
findmac          get system owning MAC addr
hostipports      get all port/ip/service/hosts
hostswitchports  get switch port connections for a server
list             list all queries
shutdown         get gracefully shutdown servers
unknownips       find unknown IPs
unmonitored      find unmonitored services


This list is the output of *assimcli query list* reformatted into a table.

Below is the output from *assimcli query unmonitored* from my desktop
machine.
servidor /home/alanr/.dropbox-dist/dropbox:{0.0.0.0:17500:tcp}
servidor /sbin/rpc.statd:{0.0.0.0:33469:tcp,:::45445:tcp6}
servidor /usr/bin/skype:{0.0.0.0:16270:tcp}
servidor /usr/bin/tprintdaemon:{0.0.0.0:5552:tcp}
servidor /usr/sbin/dnsmasq:{192.168.122.1:53:tcp}
servidor /usr/sbin/sshd:{0.0.0.0:22:tcp,:::22:tcp6}

Below is sample output from *assimcli query allswitchports* from my
desktop machine.
servidor:eth0-GS724T_10_10_10_250[Netgear Gigabit Smart
Switch]:*g6*[Alan's office, north wall, white jack]


These queries seem like a great match for a chat 'bot plugin
interface - for those of you who use chat rooms in your support processes.

Reactions?  Any queries you can think of that you'd like to see added?  They
only take a few minutes (5-10) to write.

-- 
Alan Robertson al...@unix.sh - @OSSAlanR



[Assimilation] What command line queries would be handy for you?

2014-03-25 Thread Alan Robertson

Hi,

The next release (due on Friday) will have the command line query tool 
in it, and a few built-in queries.


What queries would be handy for you to have?

That is, what kinds of things would you like to know about your 
infrastructure?


I can certainly add a few by then [in addition to fixing the ones I 
currently have in there ;-)].


Thanks!

-- Alan Robertson
al...@unix.sh


[Assimilation] The story of a new command line query capability

2014-03-19 Thread Alan Robertson
Hi,

I was at the Boulder DevOps meetup last Monday, and heard a great
presentation from Ned McClain from Applied Trust.

He was talking about how important communication is to a DevOps-oriented
organization, and specifically how important that chat rooms are to
their particular company and how they do business.  I've talked to them
a few times about doing a trial with them.

Although he said a lot of good things, three things in particular that
they do struck a chord in terms of the Assimilation project.
First:
Major system events (server down, etc) show up in a common chat room.
Second:
They have a 'bot that they query for lots of things including things
like DNS lookups
Third:
They use their chat room and its searchable logs for education of
their staff

So, what made sense to me is that I should write a command line query
capability - for lots of reasons, including integrating with 'bots (they
use Hubot), and I should also write a script to forward certain events
to the chat log just like they do.  The current fork/exec interface is
perfect for this.

Since they use their chat room extensively in their environment, and
they use it for education, it seems like a good thing to enable.

After talking to the speaker, I've decided to work with them to do a
trial sometime in the next month or so.

The command I wrote for this is a general command-line interface - not
at all specific to Hubot.

So -- making progress...


-- 
Alan Robertson al...@unix.sh - @OSSAlanR



[Assimilation] Next release...

2014-03-18 Thread Alan Robertson
Hi,

I had originally planned the next release for this Thursday.  However, I
was sick all last week, so I'm behind.  Let's try for next Friday.

-- 
Alan Robertson al...@unix.sh - @OSSAlanR



Re: [Assimilation] How many people are running tripwire?

2014-03-01 Thread Alan Robertson
The D3 Javascript library has been suggested as the way to perform such
visualizations.

As far as documentation - either the answer is no, and it's very
simple -- or yes, and it's self-describing...  Depends on what you
count as documentation.

Basically the API provides really only one kind of access - query
access.  That is, you can issue any registered query and get the results
via REST.  All the queries have metadata describing them, and there is a
query to get you the set of available queries.

The set of available queries is known to be inadequate, but they're easy
to add.
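Since every query carries metadata describing it, a client can turn the list-the-queries query into a simple catalog. A sketch of that pattern follows; the JSON field names ("queryname", "shortdesc", "parameters") are assumptions for illustration and may differ from the real metadata format.

```python
import json

# Hypothetical metadata shape -- the real Assimilation query metadata
# may use different field names; this only illustrates the pattern.
METADATA_JSON = """
[
  {"queryname": "down",   "shortdesc": "get 'down' servers",   "parameters": []},
  {"queryname": "findip", "shortdesc": "get system owning IP", "parameters": ["ipaddr"]}
]
"""

def describe_queries(metadata_json):
    """Map each query name to its one-line description."""
    return {q["queryname"]: q["shortdesc"] for q in json.loads(metadata_json)}

for name, desc in sorted(describe_queries(METADATA_JSON).items()):
    print("%-8s %s" % (name, desc))
```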



On 03/01/2014 12:57 PM, Jeff Mayrand wrote:
 If its rest it should be pretty accessible... Perhaps I can put
 together some visualizations for it... Imaging a live map of
 communication between nodes  It could be helpful for security to
 see holy cow why is this talking out to these nodes?  or for mapping
 back to things like Configuration Management Databases  Is your
 rest api fully documented?



 On Sat, Mar 1, 2014 at 1:15 PM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 I program in several languages. The wire protocol is quite
 specialized. For other external messaging I can support many
 interfaces. The question is what do you want to do, not what
 language is it in...

 We have good data storage and a reasonable REST API. What we need
 is something to use it :-)


 On March 1, 2014 11:05:25 AM MST, Jeff Mayrand
 jeff.mayr...@gmail.com mailto:jeff.mayr...@gmail.com wrote:

 A ok... Hey I'm still following along but I have been
 giving a bunch of data driven projects from our CIO who is
 moving to VP of Manufacturing and you know how that goes... 

 I've been meaning to try and allocate some time on the project
 with you guys but its just been a little crazy...   I was
 wondering if you are familiar with Meteor and Mongodb for an
 interface... I have been working pretty heavily with this
 reactive based website framework and it might be something
 that could be the basis for some of your dashboards.  That is
 the extent we are using it.  Building out broadcast event
 based data messages to different channels etc...   But it
 looks like its really good stuff albeit still in beta...  

 The other thing I was curious I know you are a C developer but
 have you ever considered something like Node.js as the basis
 for the messaging in assimilation?  

 Just a thought...

 Talk to ya later!



 On Sat, Mar 1, 2014 at 12:05 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 On 02/28/2014 03:27 PM, Jeff Mayrand wrote:
 We arent using Tripwire but we do have a threat detection
 system.  I'll ping some of our security guys and see what
 we are using.  Are you looking for test cases to see if
 assimilation's behavior will trigger an IDS?
 No, I was just really asking about tripwire.

 There really isn't anything that the Assimilation code
 does that could trigger a network-based IDS.  It doesn't
 send any packets, except to communicate between the
 nanoprobes and the CMA - and that's all UDP on our port
 (default: 1984).




 On Thu, Feb 27, 2014 at 5:37 PM, Alan Robertson
 al...@unix.sh mailto:al...@unix.sh wrote:

 I think these are pretty straightforward guys...

 I suspect they were telling the truth.  I've always
 found them to be a
 good bunch.

 They might not be doing a great job, but they have
 _something_ like
 tripwire (or so I suspect).

 It was more than 10% - but not hugely more.


 On 02/27/2014 03:26 PM, R P Herrold wrote:
  On Thu, 27 Feb 2014, Alan Robertson wrote:
 
  While I was up front, I asked how many folks were
 running tripwire or
  similar.  The result was something like 15-20% of
 the audience said they
  were.
  tripwire was supplanted by aide, when tripwire tried an
  unsuccessful conversion from FOSS to proprietary, a la
  sendmail, and then backed away from it.  Also,
 tripwire was
  not aware of how to keep up with pre-linking,
 unlike some
  later products
 
  I am surprised the count was that high, and suspect
 you got
  some raised hands because of peer pressure, so as
 to seem to
  be thoughtful as to host level security issues
 before peers

Re: [Assimilation] How many people are running tripwire?

2014-02-28 Thread Alan Robertson
On 02/28/2014 03:27 PM, Jeff Mayrand wrote:
 We arent using Tripwire but we do have a threat detection system. 
 I'll ping some of our security guys and see what we are using.  Are
 you looking for test cases to see if assimilation's behavior will
 trigger an IDS?
No, I was just really asking about tripwire.

There really isn't anything that the Assimilation code does that could
trigger a network-based IDS.  It doesn't send any packets, except to
communicate between the nanoprobes and the CMA - and that's all UDP on
our port (default: 1984).



 On Thu, Feb 27, 2014 at 5:37 PM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 I think these are pretty straightforward guys...

 I suspect they were telling the truth.  I've always found them to be a
 good bunch.

 They might not be doing a great job, but they have _something_ like
 tripwire (or so I suspect).

 It was more than 10% - but not hugely more.


 On 02/27/2014 03:26 PM, R P Herrold wrote:
  On Thu, 27 Feb 2014, Alan Robertson wrote:
 
  While I was up front, I asked how many folks were running
 tripwire or
  similar.  The result was something like 15-20% of the audience
 said they
  were.
  tripwire was supplanted by aide, when tripwire tried an
  unsuccessful conversion from FOSS to proprietary, a la
  sendmail, and then backed away from it.  Also, tripwire was
  not aware of how to keep up with pre-linking, unlike some
  later products
 
  I am surprised the count was that high, and suspect you got
  some raised hands because of peer pressure, so as to seem to
  be thoughtful as to host level security issues before peers
 
  -- Russ herrold


 --
 Alan Robertson al...@unix.sh - @OSSAlanR

 Openness is the foundation and preservative of friendship...  Let
 me claim from you at all times your undisguised opinions. -
 William Wilberforce
 ___
 Assimilation mailing list - Discovery-Driven Monitoring
 Assimilation@lists.community.tummy.com
 mailto:Assimilation@lists.community.tummy.com
 http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
 http://assimmon.org/




-- 
Alan Robertson al...@unix.sh - @OSSAlanR



[Assimilation] How many people are running tripwire?

2014-02-27 Thread Alan Robertson
Hi,

Last Monday, I did a demo of the current version of the Assimilation
code.  The demo worked (yay!) and I got quite positive responses from
the folks in the audience.

While I was up front, I asked how many folks were running tripwire or
similar.  The result was something like 15-20% of the audience said they
were.

This is a fairly forward-looking group of folks.  So, the
mini-tripwire-for-free capability seems like it could be worthwhile for
quite a few users.

I am currently working to make it optional, and have made good progress
on that.

-- 
Alan Robertson al...@unix.sh - @OSSAlanR



Re: [Assimilation] latest version in source control requires reinstalling ALL nanoprobes

2014-01-22 Thread Alan Robertson
This is more of a development issue than a main mailing list issue.  If
we want to continue past this, it would be good to switch to the
development list.

What I changed is more fundamental than that.  I essentially changed how
a protocol version number would be stored...

All layers above this are designed to not need version numbering.  But
the lowest level had a stupid oversight in it, and I really don't expect
it to be reasonable to modify this lowest layer.

The basic format of everything we send is type/length/value (TLV).  TLV
message formats are quite flexible with respect to growth and extension
without version numbering.  The LLDP spec is a good example of how one
can extend it significantly.

Unfortunately, because the packet format is UDP-based, I originally
limited the length field to 2 bytes.  HOWEVER, with compression, an
uncompressed field can be above 64K bytes.  So this was a brain fart - a
disconnect between the fact that this is UDP-based and the fact that I
had planned for compression - thereby supporting things larger than 64K
bytes.  Duhhh!!

Most of the interesting data is sent in JSON - and the JSON is very
flexible in terms of ignoring things you don't understand.  This is
where most interesting versioning will occur.

In addition, a single number for protocol versioning is likely
insufficient - since you may need to know if a particular nanoprobe
supports a particular directive.  That's not so much a protocol change
as it is an application change/version.  And there will be a lot more
than 256 of them...  And in open source, version numbers are not easy to
track, and are often not even linear...

Should it become necessary in the future, a more fruitful approach would
likely be to have each nanoprobe return a list of directives it supports
when it first starts up.  That would cover the peculiarities of the open
source fork/merge model better than a linear version number.  That
combined with the basic flexibility of a TLV protocol combined with JSON
for more complex things would cover all the kinds of oddities one is
likely to run into.

This addition would not require any kind of format change on the wire.

 low-level-format details

General packet format:
Frameset type:2 bytes
Frameset length: 3 bytes
Frameset flags:   2 bytes

[A UDP packet on the wire is permitted to contain more than one Frameset]

The Frameset header is followed by a series of Frames:

Each Frame is a TLV triple:
Frame Type:  2 bytes
Frame length:3 bytes
Frame value (n bytes where n is given by the Frame Length above)

The last Frame in a Frameset is supposed to be type 0, length 0.
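As a sketch of how a receiver might walk this layout, here is a small Python parser. The byte order (big-endian) is an assumption; the real implementation is in C and may differ in details.

```python
import struct

def parse_frameset(data):
    """Parse one Frameset laid out as described above:
    2-byte type, 3-byte length, 2-byte flags, then TLV Frames
    (2-byte type, 3-byte length, n-byte value), terminated by a
    type-0/length-0 Frame.  Big-endian byte order is an assumption."""
    fs_type, = struct.unpack('>H', data[0:2])
    fs_len = int.from_bytes(data[2:5], 'big')       # 3-byte length field
    fs_flags, = struct.unpack('>H', data[5:7])
    frames, off = [], 7
    while off + 5 <= len(data):
        f_type, = struct.unpack('>H', data[off:off+2])
        f_len = int.from_bytes(data[off+2:off+5], 'big')
        if f_type == 0 and f_len == 0:              # end-of-frameset sentinel
            break
        frames.append((f_type, data[off+5:off+5+f_len]))
        off += 5 + f_len
    return fs_type, fs_flags, frames
```

Because the length field is 3 bytes, a single Frame value can be up to 2^24 - 1 bytes, which is the limit discussed below.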

The main reason I can think of for updating this low-level format is one of:
  * 16777216 bytes isn't enough for a single uncompressed primitive value
  * 65536 isn't enough distinct message types

Given that the wire protocol is UDP, 2^24 seems like enough bytes.  If
we need more than 2^16 basic message types, then we've done something
wrong...

If someone knows of a reason why one would need more than 2^24 bytes for
a single data item (Frame), then that would be good to know.

Of course, if that comes up, one could always make a frame type that
says multi-part frame followed by multi-part-frame-continuation as a
kludge for this kind of frame - and the frame contents then encapsulated
and split across these frames.  Ditto for huge framesets...

Since this low-level format is perfectly happy to support binary, this
should work fine (assuming it comes up).


On 01/22/2014 03:00 PM, R P Herrold wrote:
 On Wed, 22 Jan 2014, Alan Robertson wrote:

 Hopefully this won't occur again any time in the foreseeable future...
 the conventional way the RFCs do this, so as to not have to 
 rely on hope, is to have a protocol version indicator (usually 
 not more than 8 bits wide, possibly less if it can be 
 'over-loaded' by bit-mapping onto a small range value) in the 
 headers, so that one might be strict in what one states, and 
 liberal in what one receives

 -- Russ herrold


-- 
Alan Robertson al...@unix.sh - @OSSAlanR



[Assimilation] Assimilation Interview at LCA (linux.conf.au)

2014-01-18 Thread Alan Robertson

Hi,

While I was at LCA week before last (or last week depending on when you 
read this), I did an interview about the Assimilation Project.


Seems like it came out pretty well, and gave a few good talking points 
about the project.


http://www.youtube.com/watch?v=o556f2C15z0&list=SPmiuOcBMoxjdzEQTvqwHH46VbCib-_Icd&index=2

-- Alan Robertson
al...@unix.sh


[Assimilation] Demo problem at LCA

2014-01-14 Thread Alan Robertson
Hi,

While I was at linux.conf.au (LCA) last week, I did a cool demo of
zero-configuration discovery-driven monitoring.

The discovery and monitoring worked mostly correctly.  It discovered
services, recognized them and started monitoring on them all automatically.

However, processing of switch discovery information failed.  I looked at
the issue today, and determined that it is caused by their HP switches
giving only the MAC address for the management address.  The LLDP spec
says that should be an IP address, but MAC addresses are permitted.

So, I'm fixing that right now.



-- 
Alan Robertson al...@unix.sh - @OSSAlanR



[Assimilation] New post: Introducing Automated service monitoring

2013-12-30 Thread Alan Robertson
Hi,

I've written a new blog post introducing automated service monitoring. 
You can find it here: http://bit.ly/1cBX2bN



From a monitoring perspective, one of the most exciting possibilities in
the Assimilation project http://assimproj.org/ comes from the
integration of monitoring and discovery.

We've recently implemented the rules which will cause services to be
automatically monitored once they're discovered.  In other words, you
don't have to tell the system to monitor these services, they'll just
get monitored automatically.

For example, if you have a rule which describes how to monitor mysql,
then whenever it sees the mysql service, it can start monitoring it
without human intervention.

In this blog post, I'll describe how to tell the system to monitor
services by using LSB-style init scripts
http://refspecs.linuxbase.org/LSB_3.1.1/LSB-Core-generic/LSB-Core-generic/iniscrptact.html.
 
Later posts will cover more interesting ways of monitoring.
-
Read more here: http://bit.ly/1cBX2bN

-- 
Alan Robertson al...@unix.sh - @OSSAlanR



Re: [Assimilation] Trial Dropbox build folder

2013-12-09 Thread Alan Robertson
Excellent.

Thanks!   [that makes two who can see it -- probably enough]

On 12/09/2013 09:13 AM, Wes Freeman wrote:
 I can see them.

 Wes

 On Mon, Dec 9, 2013 at 9:06 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 Hi,

 I'd like for folks on the list to see if they can see the builds under
 this URL.  There are a couple of more layers of directories, then two
 cpack packages for Ubuntu.

 Here's the URL:
 https://www.dropbox.com/sh/7dk588n0oxmjh27/bdS9TiY3cd

 Please let me (or the list) know if it works for you.

 --
 Alan Robertson al...@unix.sh - @OSSAlanR

 Openness is the foundation and preservative of friendship...  Let
 me claim from you at all times your undisguised opinions. -
 William Wilberforce
 ___
 Assimilation mailing list - Discovery-Driven Monitoring
 Assimilation@lists.community.tummy.com
 mailto:Assimilation@lists.community.tummy.com
 http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
 http://assimmon.org/




-- 
Alan Robertson al...@unix.sh - @OSSAlanR



Re: [Assimilation] Python AttributeError while doing tests with testify

2013-11-21 Thread Alan Robertson
Hi,

This appears to be some sort of incompatibility between Py2Neo 1.6.1 and
the Assimilation code.  From what I can tell, 1.6.1 shouldn't be any
kind of big change in py2neo - but still it broke our code.  Sigh...
I'll look into this tomorrow and see what I can learn.  I've CCed Nigel
Small about this - to see if he has any suggestions.

I'm committed to a conference show floor appearance this morning and a
presentation this evening - or I'd have already looked at it ;-).

FYI: This issue came up on the -devel list - where I more-or-less
ignored it too ;-).

And another FYI:  It's better to subscribe to the list - sometimes I
forget to CC the person sending the email.

But I will look at it.


On 11/21/2013 06:03 AM, Geldermann, Jan -bhn wrote:
 Hey,

 I am running Python 2.7 with all the packages you listed on your website on a 
 CentOs 6.  When I try to run testify tests I get the following errors in 
 the attached logfile. Also when I try to run the cma and a nanoprobe connects 
 to the cma I get the following message:

 CRITICAL: MessageDispatcher type 'exceptions.AttributeError' exception 
 ['tuple' object has no attribute 'get_properties'] occurred while handling 
 [JSDISCOVERY] FrameSetFrameset from [::x]:40991
  Begin JSDISCOVERY Message 'tuple' object has no attribute 
 'get_properties' Exception Traceback 
 messagedispatcher.py.57:dispatch: 
 self.dispatchtable[fstype].dispatch(origaddr, frameset)
 dispatchtarget.py.168:dispatch: drone.logjson(json)
 droneinfo.py.126:logjson: Drone._JSONprocessors[dtype](self, jsonobj)
 droneinfo.py.251:_add_tcplisteners: allourips = self.get_owned_ips()
 droneinfo.py.100:get_owned_ips: ,   params=params)]
 store.py.418:load_cypher_nodes: yield self.constructobj(cls, node)
 store.py.497:constructobj: kwargs = node.get_properties()

 Don't have any idea why it's crashing. Tried to set the debug level high for 
 more details but no success.

 Thanks for helping me out!

 Jan Geldermann



 ___
 Assimilation mailing list - Discovery-Driven Monitoring
 Assimilation@lists.community.tummy.com
 http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
 http://assimmon.org/


-- 
Alan Robertson al...@unix.sh - @OSSAlanR



Re: [Assimilation] Python AttributeError while doing tests with testify

2013-11-21 Thread Alan Robertson
I forgot to say that 1.6.0 doesn't have this problem.

On 11/21/2013 07:24 AM, Alan Robertson wrote:
 Hi,

 This appears to be some sort of incompatibility between Py2Neo 1.6.1
 and the Assimilation code.  From what I can tell, 1.6.1 shouldn't be
 any kind of big change in py2neo - but still it broke our code. 
 Sigh... I'll look into this tomorrow and see what I can learn.  I've
 CCed Nigel Small about this - to see if he has any suggestions.

 I'm committed to a conference show floor appearance this morning and a
 presentation this evening - or I'd have already looked at it ;-).

 FYI: This issue came up on the -devel list - where I also more-or-less
 ignored it there too ;-).

 And another FYI:  It's better to subscribe to the list - sometimes I
 forget to CC the person sending the email.

 But I will look at it.


 On 11/21/2013 06:03 AM, Geldermann, Jan -bhn wrote:
 Hey,

 I am running Python 2.7 with all the packages you listed on your website on 
 a CentOs 6.  When I try to run testify tests I get the following errors in 
 the attached logfile. Also when I try to run the cma and a nanoprobe 
 connects to the cma I get the following message:

 CRITICAL: MessageDispatcher type 'exceptions.AttributeError' exception 
 ['tuple' object has no attribute 'get_properties'] occurred while handling 
 [JSDISCOVERY] FrameSetFrameset from [::x]:40991
  Begin JSDISCOVERY Message 'tuple' object has no attribute 
 'get_properties' Exception Traceback 
 messagedispatcher.py.57:dispatch: 
 self.dispatchtable[fstype].dispatch(origaddr, frameset)
 dispatchtarget.py.168:dispatch: drone.logjson(json)
 droneinfo.py.126:logjson: Drone._JSONprocessors[dtype](self, jsonobj)
 droneinfo.py.251:_add_tcplisteners: allourips = self.get_owned_ips()
 droneinfo.py.100:get_owned_ips: ,   params=params)]
 store.py.418:load_cypher_nodes: yield self.constructobj(cls, node)
 store.py.497:constructobj: kwargs = node.get_properties()

 Don't have any idea why it's crashing. Tried to set the debug level high for 
 more details but no success.

 Thanks for helping me out!

 Jan Geldermann



 ___
 Assimilation mailing list - Discovery-Driven Monitoring
 Assimilation@lists.community.tummy.com
 http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
 http://assimmon.org/


 -- 
 Alan Robertson al...@unix.sh - @OSSAlanR

 Openness is the foundation and preservative of friendship...  Let me claim 
 from you at all times your undisguised opinions. - William Wilberforce


 ___
 Assimilation mailing list - Discovery-Driven Monitoring
 Assimilation@lists.community.tummy.com
 http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
 http://assimmon.org/


-- 
Alan Robertson al...@unix.sh - @OSSAlanR



[Assimilation] Continuous Integration milestone

2013-11-19 Thread Alan Robertson

  
  
Hi all,

Thanks to the tireless efforts of JC (aka fwiffo) and the occasional
source tweak by me, we now have an automated build system which
correctly builds and tests the project for 6 different flavors of
Ubuntu!

Hurray!

Up until now, those efforts have also been mostly thankless.  I want to
correct that and give JC a big THANK YOU! for his work on the
Assimilation project.

Thanks JC!

Below you'll find the tweets documenting the last set of builds
(that is, the first completely correct builds)!


  

  
@AllegNotifier: build #99 has finished with status = SUCCESS on vux (Ubuntu 12.10/i686) - https://gist.github.com/borgified/6575947
@AllegNotifier: build #99 has finished with status = SUCCESS on urquan (Ubuntu 13.04/x86_64) - https://gist.github.com/borgified/6575945
@AllegNotifier: build #99 has finished with status = SUCCESS on melnorme (Ubuntu 13.04/i686) - https://gist.github.com/borgified/6575946
@AllegNotifier: build #99 has finished with status = SUCCESS on chmmr (Ubuntu 12.10/x86_64) - https://gist.github.com/borgified/6575942
@AllegNotifier: build #99 has finished with status = SUCCESS on arilou (Ubuntu 12.04/x86_64) - https://gist.github.com/borgified/6575944
@AllegNotifier: build #99 has finished with status = SUCCESS on supox (Ubuntu 11.10/x86_64) - https://gist.github.com/borgified/6575939

  


So, now on to more platforms -- hopefully including CentOS (aka Red Hat).

Thanks again JC!

-- 
Alan Robertson al...@unix.sh - @OSSAlanR



[Assimilation] Building RPMS: WAS Re: Problems with cpack28 under CentOS 6

2013-11-17 Thread Alan Robertson

Generally, cpack sucks for RPMs. Sorry!

There are various people on the list (Jamie and some new folks) who know 
a lot more about building RPMs for the project.


I've CCed one of them (and changed the subject to get their attention).

Let's see what they have to say.

Thanks!

-- Alan Robertson
al...@unix.sh



On 11/18/2013 12:06 AM, Geldermann, Jan -bhn wrote:

Hi AssimilationCommunity,

I get an error when I run cpack28 on my CentOS 6 server. I used the source 
code from your repository and cmake seemed to work.

Here is the cmake28 log:
# cmake28 /tmp/assimilation/
-- The C compiler identification is GNU 4.4.7
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Found PkgConfig: /usr/bin/pkg-config (found version 0.23)
-- checking for module 'glib-2.0'
--   found glib-2.0, version 2.22.5
-- Looking for unistd.h
-- Looking for unistd.h - found
-- Looking for sys/utsname.h
-- Looking for sys/utsname.h - found
-- Looking for fcntl.h
-- Looking for fcntl.h - found
-- Looking for mcheck.h
-- Looking for mcheck.h - found
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for sys/socket.h
-- Looking for sys/socket.h - found
-- Looking for netdb.h
-- Looking for netdb.h - found
-- Looking for clock_gettime
-- Looking for clock_gettime - not found
-- Looking for endprotoent
-- Looking for endprotoent - found
-- Looking for fcntl
-- Looking for fcntl - found
-- Looking for g_get_real_time
-- Looking for g_get_real_time - not found
-- Looking for g_get_monotonic_time
-- Looking for g_get_monotonic_time - not found
-- Looking for g_get_environ
-- Looking for g_get_environ - not found
-- Looking for getaddrinfo
-- Looking for getaddrinfo - found
-- Looking for geteuid
-- Looking for geteuid - found
-- Looking for kill
-- Looking for kill - found
-- Looking for mcheck
-- Looking for mcheck - found
-- Looking for mcheck_pedantic
-- Looking for mcheck_pedantic - found
-- Looking for setpgid
-- Looking for setpgid - found
-- Looking for sigaction
-- Looking for sigaction - found
-- Looking for uname
-- Looking for uname - found
-- Looking for GetComputerNameA
-- Looking for GetComputerNameA - not found
-- Found Doxygen: /usr/bin/doxygen (found version 1.6.1)
-- Configuring done
-- Generating done
-- Build files have been written to: /data/binaries

(also when I run make install I get no errors)

Then I tried to run cpack28:
# cpack28
CPack: Create package using RPM
CPack: Install projects
CPack: - Run preinstall target for: assimilation
CPack: - Install project: assimilation
CPack: -   Install component: cma-component
CPack: -   Install component: nanoprobe-component
CPack: Create package
CPackRPM: Will use GENERATED spec file: 
/data/binaries/_CPack_Packages/Linux/RPM/SPECS/assimilation-NANOPROBE-COMPONENT.spec
CMake Error at /usr/share/cmake28/Modules/CPackRPM.cmake:946 (file):
   file Internal CMake error when trying to open file:
   /data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-NANOPROBE-COMPONENT.err
   for reading.


CMake Error at /usr/share/cmake28/Modules/CPackRPM.cmake:947 (file):
   file Internal CMake error when trying to open file:
   /data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-NANOPROBE-COMPONENT.out
   for reading.


CPackRPM:Debug: You may consult rpmbuild logs in:
CPackRPM:Debug:- 
/data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-NANOPROBE-COMPONENT.err
CPackRPM:Debug: ***  ***
CPackRPM:Debug:- 
/data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-NANOPROBE-COMPONENT.out
CPackRPM:Debug: ***  ***
CPack Error: Error while execution CPackRPM.cmake
CPackRPM: Will use GENERATED spec file: 
/data/binaries/_CPack_Packages/Linux/RPM/SPECS/assimilation-cma-component.spec
CMake Error at /usr/share/cmake28/Modules/CPackRPM.cmake:946 (file):
   file Internal CMake error when trying to open file:
   /data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-cma-component.err for
   reading.


CMake Error at /usr/share/cmake28/Modules/CPackRPM.cmake:947 (file):
   file Internal CMake error when trying to open file:
   /data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-cma-component.out for
   reading.


CPackRPM:Debug: You may consult rpmbuild logs in:
CPackRPM:Debug:- 
/data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-cma-component.err
CPackRPM:Debug: ***  ***
CPackRPM:Debug:- 
/data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-cma-component.out
CPackRPM:Debug: ***  ***
CPack Error: Error while execution CPackRPM.cmake
CPackRPM: Will use GENERATED spec file: 
/data/binaries/_CPack_Packages/Linux/RPM/SPECS/assimilation-nanoprobe-component.spec
CMake Error at /usr/share/cmake28/Modules/CPackRPM.cmake:946 (file):
   file Internal CMake error when trying to open file:
   /data/binaries/_CPack_Packages/Linux/RPM/rpmbuild-nanoprobe-component.err
   for reading.


CMake Error at /usr/share/cmake28/Modules/CPackRPM.cmake:947 (file):
   file Internal

Re: [Assimilation] Recent pushes to Mercurial...

2013-11-14 Thread Alan Robertson
I just changed the code so that if you set the environment variable
BROKENDNS it will ignore this error.

It will push upstream when the test sequence completes - usually about
10-12 minutes from now.


On 11/14/2013 06:35 AM, Alan Robertson wrote:

 This seems to be the problem...

 + testify tests
 ..** (process:20039): DEBUG: START of live C Class
 object dump:
 ** (process:20039): DEBUG: NetAddr object 66.35.48.29:80 at 0x188a120
 ref count 1
 ** (process:20039): DEBUG: NetAddr object 66.35.48.29:80 at 0x1593290
 ref count 1
 ** (process:20039): DEBUG: END of live C Class object dump.
 fail: tests.cclass_wrappers_test pyNetAddrTest.test_dns_strinit
 There were multiple errors in this test:
 Traceback (most recent call last):
   File "./tests/cclass_wrappers_test.py", line 269, in test_dns_strinit
     self.assertRaises(ValueError, pyNetAddr, 'www.frodo.middleearth')
 *AssertionError: ValueError not raised*
 Traceback (most recent call last):
   File "./tests/cclass_wrappers_test.py", line 278, in tearDown
     assert_no_dangling_Cclasses()
   File "./tests/cclass_wrappers_test.py", line 51, in assert_no_dangling_Cclasses
     raise AssertionError, "Dangling C-class objects - %d still around" % count
 AssertionError: Dangling C-class objects - 2 still around
   The real problem is the *AssertionError: ValueError not raised*.
 Everything else is caused by that.  This happened at my daughter's
 house on my last trip - so I know what it is...  You have a broken
 DNS.  When I do a lookup on www.frodo.middleearth I don't get an
 error.  Since there is no top level domain middleearth (at least in
 this universe) the code expects an error.  It doesn't get one, so it
 bitches.  The dangling classes are the result of stopping the test
 mid-stream.  Usually if you have dangling classes, you should check
 above to see if any other errors have occurred.

 I thought about putting a broken_dns flag in the test, but I didn't. 
 Guess I need to do that...  Or failing that, you could complain to
 your DNS provider.

 This is the crap where you give a bad domain to someone like Verizon,
 and it helpfully redirects you to some advertisement page (or
 equivalent).
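The skip logic described above can be sketched roughly like this (Python 3 syntax for illustration only; the project's tests are Python 2, and both helper names here are hypothetical):

```python
import os
import socket

def dns_is_broken():
    """Detect resolvers that hijack NXDOMAIN responses.

    A lookup of a name under a nonexistent TLD should fail; if it
    "succeeds", the resolver is redirecting bad names (as some ISPs do),
    and a test that expects a ValueError from that lookup cannot pass.
    """
    try:
        socket.getaddrinfo('www.frodo.middleearth', 80)
        return True   # lookup "succeeded": the resolver is lying
    except socket.gaierror:
        return False  # expected failure: the resolver is honest

def should_skip_dns_tests():
    # Honor the BROKENDNS environment variable, or auto-detect.
    return bool(os.environ.get('BROKENDNS')) or dns_is_broken()
```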



 On 11/14/2013 12:01 AM, Fwiffo wrote:
 Yup I ran it again with 1.6.0 version of py2neo but don't think that
 fixed the errors. I'll write more in detail tomorrow. You can see logs
 in the usual spot. 

 -- JC

 On Nov 13, 2013, at 2:04 PM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 JC:

 Have you had a chance to make this change yet?

 Have questions?


 On 11/10/2013 08:28 AM, Alan Robertson wrote:
 Hi,

 When I rewrote the database code I switched to requiring the latest
 version of Py2neo.  This is a required change.  I think this is
 version 1.6.0.  But it will NOT work with 1.5.x.

 It should run with the latest stable Neo4j, but I've been testing
 with the 2.0.x test version.

 I aim to have this run with the 2.x version of Neo4j when it
 becomes official.

 The problems now in the logs look like they're related to py2neo
 version.


 On 11/8/2013 1:26 PM, John Carpenter wrote:
 aw man! i guess it was user error after all.. i mustve missed
 those tweets (like this one):

 https://twitter.com/AllegNotifier/status/398615383633956864

 so good news... notifier still works. bad news still failing
 tests. also i should fix the logic in my script so that when it
 passes, it'll still send tweet.

 latest build console output are found
 here: http://www.spathiwa.com/jenkins/builds.html

 im running it again right now (in case new changes got added)

 -- JC




 On Fri, Nov 8, 2013 at 10:19 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 Sounds great!

 Fwiffo borgif...@gmail.com mailto:borgif...@gmail.com wrote:

 I ran a set of builds right after you said you had made
 changes and I was waiting for my twitter notification to
 pop up but it never did. Just now I realized that the way
 I scripted it was to send notification on build failure so
 the fact that I got no notice is surely a good sign! I'm
 eager to verify the status of the builds (and to fix my
 script logic). -- JC

 On Nov 8, 2013, at 5:00 AM, Alan Robertson
 al...@unix.sh wrote: Hi, As was noted earlier on the
 list, the code didn't pass tests. Yesterday I got it
 to pass tests (didn't install something correctly),
 and late last night, I fixed some bugs that kept it
 from running correctly. Next week after I'm home, I'll
 work a bit more in earnest to ensure that it is a bit
 better tested. The current unit tests are nice, but
 they are only unit tests. Anyone who wants to
 volunteer to look at higher level system testing -
 please send an email to the list. Thanks! -- Alan
 Robertson al...@unix.sh

Re: [Assimilation] Recent pushes to Mercurial...

2013-11-13 Thread Alan Robertson
JC:

Have you had a chance to make this change yet?

Have questions?


On 11/10/2013 08:28 AM, Alan Robertson wrote:
 Hi,

 When I rewrote the database code I switched to requiring the latest
 version of Py2neo.  This is a required change.  I think this is
 version 1.6.0.  But it will NOT work with 1.5.x.

 It should run with the latest stable Neo4j, but I've been testing with
 the 2.0.x test version.

 I aim to have this run with the 2.x version of Neo4j when it becomes
 official.

 The problems now in the logs look like they're related to py2neo version.
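A startup guard for that version requirement could look like this (a sketch only: modern importlib.metadata syntax is shown, and the minimum version is taken from the message above):

```python
from importlib import metadata

def py2neo_ok(minimum=(1, 6, 0)):
    """Return (ok, version): refuse to run against py2neo 1.5.x."""
    try:
        version = metadata.version('py2neo')
    except metadata.PackageNotFoundError:
        return (False, None)
    # Compare only the leading numeric components of the version string.
    parts = tuple(int(p) for p in version.split('.')[:3] if p.isdigit())
    return (parts >= minimum, version)
```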


 On 11/8/2013 1:26 PM, John Carpenter wrote:
 aw man! i guess it was user error after all.. i mustve missed those
 tweets (like this one):

 https://twitter.com/AllegNotifier/status/398615383633956864

 so good news... notifier still works. bad news still failing tests.
 also i should fix the logic in my script so that when it passes,
 it'll still send tweet.

 latest build console output are found
 here: http://www.spathiwa.com/jenkins/builds.html

 im running it again right now (in case new changes got added)

 -- JC




 On Fri, Nov 8, 2013 at 10:19 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 Sounds great!

 Fwiffo borgif...@gmail.com mailto:borgif...@gmail.com wrote:

 I ran a set of builds right after you said you had made
 changes and I was waiting for my twitter notification to pop
 up but it never did. Just now I realized that the way I
 scripted it was to send notification on build failure so the
 fact that I got no notice is surely a good sign! I'm eager to
 verify the status of the builds (and to fix my script logic).
 -- JC

 On Nov 8, 2013, at 5:00 AM, Alan Robertson
 al...@unix.sh wrote: Hi, As was noted earlier on the
 list, the code didn't pass tests. Yesterday I got it to
 pass tests (didn't install something correctly), and late
 last night, I fixed some bugs that kept it from running
 correctly. Next week after I'm home, I'll work a bit more
 in earnest to ensure that it is a bit better tested. The
 current unit tests are nice, but they are only unit
 tests. Anyone who wants to volunteer to look at higher
 level system testing - please send an email to the list.
 Thanks! -- Alan Robertson al...@unix.sh
 
 


 -- 
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.







-- 
Alan Robertson al...@unix.sh - @OSSAlanR

"Openness is the foundation and preservative of friendship...  Let me claim 
from you at all times your undisguised opinions." - William Wilberforce



[Assimilation] Anyone going to SC13?

2013-11-13 Thread Alan Robertson
Hi,

Is anyone going to Supercomputing 2013?

It's in my back yard -- and I'd love to meet any of you who come out
this way.

Let me know!

-- 
Alan Robertson al...@unix.sh - @OSSAlanR

"Openness is the foundation and preservative of friendship...  Let me claim 
from you at all times your undisguised opinions." - William Wilberforce


[Assimilation] Recent pushes to Mercurial...

2013-11-08 Thread Alan Robertson
Hi,

As was noted earlier on the list, the code didn't pass tests.  Yesterday
I got it to pass tests (didn't install something correctly), and late
last night, I fixed some bugs that kept it from running correctly.

Next week after I'm home, I'll work a bit more in earnest to ensure that
it is a bit better tested.

The current unit tests are nice, but they are only unit tests.

Anyone who wants to volunteer to look at higher level system testing -
please send an email to the list.

Thanks!

-- Alan Robertson
al...@unix.sh


Re: [Assimilation] Recent pushes to Mercurial...

2013-11-08 Thread Alan Robertson
Sounds great!

Fwiffo borgif...@gmail.com wrote:
I ran a set of builds right after you said you had made changes and I
was waiting for my twitter notification to pop up but it never did.
Just now I realized that the way I scripted it was to send notification
on build failure so the fact that I got no notice is surely a good
sign! I'm eager to verify the status of the builds (and to fix my
script logic). 

-- JC

 On Nov 8, 2013, at 5:00 AM, Alan Robertson al...@unix.sh wrote:
 
 Hi,
 
 As was noted earlier on the list, the code didn't pass tests. 
Yesterday
 I got it to pass tests (didn't install something correctly), and late
 last night, I fixed some bugs that kept it from running correctly.
 
 Next week after I'm home, I'll work a bit more in earnest to ensure
that
 it is a bit better tested.
 
 The current unit tests are nice, but they are only unit tests.
 
 Anyone who wants to volunteer to look at higher level system testing
-
 please send an email to the list.
 
Thanks!
 
-- Alan Robertson
al...@unix.sh

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Re: [Assimilation] cma on ubuntu common issue to all versions of ubuntu

2013-11-07 Thread Alan Robertson
Hi,

I pushed out changes to fix this.  I hadn't tested an installed version
for a while - there were a bunch of python modules not being installed. 
Stupid.

However, it looks like there are still other problems...

I'll let you know what I find out...


On 11/07/2013 05:28 AM, Alan Robertson wrote:
 Hmmm  Let me see if I have this problem (I don't think so - I run
 Ubuntu).


 OK.  I _do_ have this problem.  I'll figure it out in the next few
 hours before our booth opens.

 I kept only running it out of my workspace - where it worked just fine.

 Sorry!!

 -- Alan Robertson
 al...@unix.sh



 On 11/07/2013 02:41 AM, Peter Sørensen wrote:

 Hi,

  

 I have the same problem on Ubuntu.  Any solution to this ?

  

 Best regards

  

 Peter Sørensen/Univ.Of.Southern.Denmark/email: mas...@sdu.dk
 mailto:mas...@sdu.dk

  

 *Fra:*assimilation-boun...@lists.community.tummy.com
 [mailto:assimilation-boun...@lists.community.tummy.com] *På vegne af
 *John Carpenter
 *Sendt:* 19. oktober 2013 01:15
 *Til:* Assimilation Project
 *Emne:* [Assimilation] cma on ubuntu common issue to all versions of
 ubuntu

  

  
 + sudo /etc/init.d/cma stop
 Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/assimilation/cma.py", line 101, in <module>
     import cmainit
 ImportError: No module named cmainit
  
  
  
 cpack completes, resulting *.deb installs fine too. but there's some issue 
 with cma.py. 
 It's looking for a module named cmainit but I don't know where that might 
 come from.
  
  
 you can reproduce this problem by just running /usr/sbin/cma too.
  
  
 i think if this is fixed, then testify tests step might actually pass!
  
  
 if you need to inspect full output: 
 http://www.spathiwa.com/jenkins/builds.html
  
 urquan (13.04 x64) is one of the ones that got the furthest and will show this 
 problem.
  
  
 i've tried going into /usr/sbin/cma and editing the script to say import 
 CMAinit but that didnt seem to fix.
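As a general diagnostic for this kind of ImportError, it helps to print where Python searches and whether the module is visible anywhere on that path (Python 3 syntax shown for illustration; the installed tree above is Python 2.7):

```python
import importlib.util
import sys

def module_location(name):
    """Return the path a module would be imported from, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Where does Python search, and is the missing module anywhere on it?
print('\n'.join(sys.path))
print('cmainit ->', module_location('cmainit'))  # None means not installed
print('os      ->', module_location('os'))       # stdlib module, always found
```

A None result for cmainit confirms the module simply wasn't installed, rather than being installed somewhere off sys.path.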
  
  
  
 for rpm based (centos/fedora), havent been able to get past cpack yet cuz 
 some of the dependencies
 arent well supported like ctypesgen.
  




[Assimilation] Assimilation talk in Nuremberg - Open Source Monitoring Conference

2013-11-05 Thread Alan Robertson
http://www.youtube.com/watch?feature=player_embedded&v=qbk2-f31q-E


Re: [Assimilation] Problem when calling cpack

2013-11-03 Thread Alan Robertson
Santiago Newbery and David Klann will also be here.


On 11/02/2013 09:47 PM, Fwiffo wrote:
 Sure that is no problem. Each gist is saved in a git repository so we
 can pull up past commits where builds were successful and github even
 gives us a permalink to each revision. 

 Have fun at LISA, both of you. I'll catch up when you get back. 

 -- JC

 On Nov 2, 2013, at 5:41 PM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 Hi *Pamela*,

 I'd be delighted to have you take over the documentation :-D.  Really
 delighted.

 We'll talk at LISA.

 Hi *JC:*
 Can you keep two pointers - one to the last build, and one to the
 last successful build for each platform?



 On 11/02/2013 11:12 AM, Pamela Lynn Howell wrote:

 I would definitely vote for troubleshooting section.

 On Nov 2, 2013 1:07 PM, Fwiffo borgif...@gmail.com
 mailto:borgif...@gmail.com wrote:

 I'd be ok with either way. The script I wrote to copy the output
 and make it available in a GitHub gist can be run in any Jenkins
 instance (it isn't anything fancy) I just have to put better
 comments on it so others can use it too. 

 The advantage of using the links is that it'll be somewhat up to
 date (close to latest build). But the disadvantage is that it
 doesn't guarantee a successful build so broken builds would get
 linked too. 

 Maybe error messages would be just as valuable but I think that
 would belong in a diff section perhaps troubleshooting. 

 What do you think? Make new section on troubleshooting? I could
 provide a bunch of errors because I encounter them all the time.
 Would be useful for folks to look up issues like forgetting the
 ctypesgen dependency like we just had. 

 -- JC

 On Nov 2, 2013, at 4:34 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 Everything on the web site is produced from the source tree in
 Mercurial.

 Most of the base documents are from the docfiles directory -
 which you can find here online:
 http://hg.linux-ha.org/assimilation/file/tip/docfiles

 There are two possibilities that come to mind:
 - include some of our build output in these files
 - put a link to your latest build output live online in one
 of these files

 What say ye?

 Of course, I can't do much while traveling, but at least my
 laptop, brain and phone (auxiliary brain) all seem to be working.


 On 11/01/2013 09:26 PM, Fwiffo wrote:
 Oh you mean add it to the Linux ha website. Ya sure but I dunno
 how :)

 -- JC

 On Nov 1, 2013, at 2:30 PM, Pamela Lynn Howell
 pamelahow...@gmail.com mailto:pamelahow...@gmail.com wrote:

 Hey JC,
 I'm looking at the docs, do you think sample build log would
 be useful to add? Cuz I do.

 ---pam

 On Nov 1, 2013 4:39 PM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 There are a number of places we can host this.

 I'm sure we can host them at the Linux-HA site - like the
 source and the current web site.  Or we can host them
 somewhere else on our own (lots of options).

 Have you tried to digitally sign any yet?

 We need to establish a project signing key to use so
 people can know that the packages we put out are from us
 (not tampered with).
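To sketch the integrity half of that: a published checksum shows a package wasn't corrupted in transit, but only a signature made with the project key proves who built it (the signing itself would use something like gpg --detach-sign, which needs a key and isn't shown runnable here):

```python
import hashlib

def sha256sum(path):
    """Hash a package file for a published checksum list."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        # Read in chunks so large packages don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()
```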


 On 11/01/2013 12:33 PM, Fwiffo wrote:
 I've gotta find a place to upload them to then automate
 that too

 -- JC

 On Nov 1, 2013, at 10:11 AM, Alan Robertson
 al...@unix.sh mailto:al...@unix.sh wrote:

 Where can he find the package you built?

 On 11/01/2013 10:04 AM, Fwiffo wrote:
 Hi Peter

 Sorry for the lack of explanation earlier. I linked
 you the output logs of the whole building process. You
 can see what commands were issued cuz those lines are
 prepended with a + sign. You can also see the result
 of running those commands and the versions of each
 package and use those as a guide. 

 -- JC

 On Nov 1, 2013, at 4:37 AM, Peter Sørensen
 mas...@sdu.dk mailto:mas...@sdu.dk wrote:

 Thanks but what can I achieve with that ?

  

 Best regards

  

 Peter

 *Fra:*John Carpenter [mailto:borgif...@gmail.com]
 *Sendt:* 1. november 2013 11:37
 *Til:* Peter Sørensen
 *Cc:* soren.han...@ril.com
 mailto:soren.han...@ril.com;
 assimilation@lists.community.tummy.com
 mailto:assimilation@lists.community.tummy.com
 *Emne:* Re: [Assimilation] Problem when calling cpack

  

 this might be
 useful: http://www.spathiwa.com/jenkins/builds.html

 i even have your version (12.04)

  

  

 On Fri, Nov 1, 2013 at 2:37 AM, Peter Sørensen
 mas...@sdu.dk mailto:mas...@sdu.dk wrote

Re: [Assimilation] Problem when calling cpack

2013-11-02 Thread Alan Robertson
Everything on the web site is produced from the source tree in Mercurial.

Most of the base documents are from the docfiles directory - which you
can find here online:
http://hg.linux-ha.org/assimilation/file/tip/docfiles

There are two possibilities that come to mind:
- include some of our build output in these files
- put a link to your latest build output live online in one of these
files

What say ye?

Of course, I can't do much while traveling, but at least my laptop,
brain and phone (auxiliary brain) all seem to be working.


On 11/01/2013 09:26 PM, Fwiffo wrote:
 Oh you mean as it to the Linux ha website. Ya sure but I dunno how :)

 -- JC

 On Nov 1, 2013, at 2:30 PM, Pamela Lynn Howell pamelahow...@gmail.com
 mailto:pamelahow...@gmail.com wrote:

 Hey JC,
 I'm looking at the docs, do you think sample build log would be
 useful to add? Cuz I do.

 ---pam

 On Nov 1, 2013 4:39 PM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 There are a number of places we can host this.

 I'm sure we can host them at the Linux-HA site - like the source
 and the current web site.  Or we can host them somewhere else on
 our own (lots of options).

 Have you tried to digitally sign any yet?

 We need to establish a project signing key to use so people can
 know that the packages we put out are from us (not tampered with).


 On 11/01/2013 12:33 PM, Fwiffo wrote:
 I've gotta find a place to upload them to then automate that too

 -- JC

 On Nov 1, 2013, at 10:11 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 Where can he find the package you built?

 On 11/01/2013 10:04 AM, Fwiffo wrote:
 Hi Peter

 Sorry for the lack of explanation earlier. I linked you the
 output logs of the whole building process. You can see what
 commands were issued cuz those lines are prepended with a +
 sign. You can also see the result of running those commands
 and the versions of each package and use those as a guide. 

 -- JC

 On Nov 1, 2013, at 4:37 AM, Peter Sørensen mas...@sdu.dk
 mailto:mas...@sdu.dk wrote:

 Thanks but what can I achieve with that ?

  

 Best regards

  

 Peter

 *Fra:*John Carpenter [mailto:borgif...@gmail.com]
 *Sendt:* 1. november 2013 11:37
 *Til:* Peter Sørensen
 *Cc:* soren.han...@ril.com mailto:soren.han...@ril.com;
 assimilation@lists.community.tummy.com
 mailto:assimilation@lists.community.tummy.com
 *Emne:* Re: [Assimilation] Problem when calling cpack

  

 this might be useful: http://www.spathiwa.com/jenkins/builds.html

 i even have your version (12.04)

  

  

 On Fri, Nov 1, 2013 at 2:37 AM, Peter Sørensen mas...@sdu.dk
 mailto:mas...@sdu.dk wrote:

 Thanks - I overlooked this in the docs :-)

 Best regards

 Peter Sørensen

 -Oprindelig meddelelse-
 Fra: soren.han...@ril.com mailto:soren.han...@ril.com
 [mailto:soren.han...@ril.com mailto:soren.han...@ril.com]
 Sendt: 1. november 2013 10:30
 Til: Peter Sørensen; assimilation@lists.community.tummy.com
 mailto:assimilation@lists.community.tummy.com
 Emne: Re: [Assimilation] Problem when calling cpack


 You need to have ctypesgen installed.

 Soren Hansen
 Assistant VP, Chief Architect, Cloud
 Reliance Jio Infocomm, Ltd.
 Mobile: +45 28287542 tel:%2B45%2028287542
 Skype: sorenhansen1234
 Email: soren.han...@ril.com mailto:soren.han...@ril.com
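Soren's diagnosis can be checked mechanically; a hedged sketch (the pip spelling of the install command is an assumption, and a distro package may exist instead):

```python
import importlib.util
import shutil

# AssimCtypes.py is generated by ctypesgen during the build; when the
# tool is absent, cmake configures fine but the install step fails with
# the "cannot find .../AssimCtypes.py-install" error quoted below.
have_cli = shutil.which('ctypesgen') is not None
have_mod = importlib.util.find_spec('ctypesgen') is not None
if have_cli or have_mod:
    print("ctypesgen available")
else:
    print("ctypesgen missing - try: pip install ctypesgen")
```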

 On 01-11-2013 10:16, Peter Sørensen wrote:
  Hi,
 
  I've been away from this project the past year but would
 like to
  restart the process. So I grabbed a new source with:
 
  hg clone 'http://hg.linux-ha.org/%7Cexperimental/assimilation/'
 
  mkdir assimilation-bin
  cd assimilation-bin
  cmake ../assimilation
 
  BUT when I call cpack I get:
 
  CPack: Create package using DEB
  CPack: Install projects
  CPack: - Run preinstall target for: assimilation
  CPack: - Install project: assimilation
  CPack: -   Install component: cma-component
  CMake Error at
 /home/maspsr/assimilation-bin/cma/cmake_install.cmake:60 (FILE):
file INSTALL cannot find
/home/maspsr/assimilation/cma/AssimCtypes.py-install.
  Call Stack (most recent call first):
/home/maspsr/assimilation-bin/cmake_install.cmake:66
 (INCLUDE)
 
 
  CPack Error: Error when generating package: assimilation
 
 
  What am I missing ? I'm on Ubuntu 12.04
 
  Best regards
 
  Peter Sorensen/Univ Of Southern Denmark/email:
 mas...@sdu.dk mailto:mas...@sdu.dk
 Confidentiality Warning: This message and any attachments
 are intended only for the use of the intended recipient(s),
 are confidential and may be privileged. If you are not the
 intended recipient, you are hereby notified that any

Re: [Assimilation] Problem when calling cpack

2013-11-02 Thread Alan Robertson
Hi *Pamela*,

I'd be delighted to have you take over the documentation :-D.  Really
delighted.

We'll talk at LISA.

Hi *JC:*
Can you keep two pointers - one to the last build, and one to the last
successful build for each platform?



On 11/02/2013 11:12 AM, Pamela Lynn Howell wrote:

 I would definitely vote for troubleshooting section.

 On Nov 2, 2013 1:07 PM, Fwiffo borgif...@gmail.com
 mailto:borgif...@gmail.com wrote:

 I'd be ok with either way. The script I wrote to copy the output
 and make it available in a GitHub gist can be run in any Jenkins
 instance (it isn't anything fancy) I just have to put better
 comments on it so others can use it too. 

 The advantage of using the links is that it'll be somewhat up to
 date (close to latest build). But the disadvantage is that it
 doesn't guarantee a successful build so broken builds would get
 linked too. 

 Maybe error messages would be just as valuable but I think that
 would belong in a diff section perhaps troubleshooting. 

 What do you think? Make new section on troubleshooting? I could
 provide a bunch of errors because I encounter them all the time.
 Would be useful for folks to look up issues like forgetting the
 ctypesgen dependency like we just had. 

 -- JC

 On Nov 2, 2013, at 4:34 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 Everything on the web site is produced from the source tree in
 Mercurial.

 Most of the base documents are from the docfiles directory -
 which you can find here online:
 http://hg.linux-ha.org/assimilation/file/tip/docfiles

 There are two possibilities that come to mind:
 - include some of our build output in these files
 - put a link to your latest build output live online in one
 of these files

 What say ye?

 Of course, I can't do much while traveling, but at least my
 laptop, brain and phone (auxiliary brain) all seem to be working.


 On 11/01/2013 09:26 PM, Fwiffo wrote:
 Oh you mean add it to the Linux ha website. Ya sure but I dunno
 how :)

 -- JC

 On Nov 1, 2013, at 2:30 PM, Pamela Lynn Howell
 pamelahow...@gmail.com mailto:pamelahow...@gmail.com wrote:

 Hey JC,
 I'm looking at the docs, do you think sample build log would be
 useful to add? Cuz I do.

 ---pam

 On Nov 1, 2013 4:39 PM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 There are a number of places we can host this.

 I'm sure we can host them at the Linux-HA site - like the
 source and the current web site.  Or we can host them
 somewhere else on our own (lots of options).

 Have you tried to digitally sign any yet?

 We need to establish a project signing key to use so people
 can know that the packages we put out are from us (not
 tampered with).


 On 11/01/2013 12:33 PM, Fwiffo wrote:
 I've gotta find a place to upload them to then automate
 that too

 -- JC

 On Nov 1, 2013, at 10:11 AM, Alan Robertson al...@unix.sh
 mailto:al...@unix.sh wrote:

 Where can he find the package you built?

 On 11/01/2013 10:04 AM, Fwiffo wrote:
 Hi Peter

 Sorry for the lack of explanation earlier. I linked you
 the output logs of the whole building process. You can
 see what commands were issued cuz those lines are
 prepended with a + sign. You can also see the result of
 running those commands and the versions of each package
 and use those as a guide. 

 -- JC

 On Nov 1, 2013, at 4:37 AM, Peter Sørensen
 mas...@sdu.dk mailto:mas...@sdu.dk wrote:

 Thanks but what can I achieve with that ?

  

 Best regards

  

 Peter

 *Fra:*John Carpenter [mailto:borgif...@gmail.com]
 *Sendt:* 1. november 2013 11:37
 *Til:* Peter Sørensen
 *Cc:* soren.han...@ril.com
 mailto:soren.han...@ril.com;
 assimilation@lists.community.tummy.com
 mailto:assimilation@lists.community.tummy.com
 *Emne:* Re: [Assimilation] Problem when calling cpack

  

 this might be
 useful: http://www.spathiwa.com/jenkins/builds.html

 i even have your version (12.04)

  

  

 On Fri, Nov 1, 2013 at 2:37 AM, Peter Sørensen
 mas...@sdu.dk mailto:mas...@sdu.dk wrote:

 Thanks - I overlooked this in the docs :-)

 Best regards

 Peter Sørensen

 -Oprindelig meddelelse-
 Fra: soren.han...@ril.com mailto:soren.han...@ril.com
 [mailto:soren.han...@ril.com mailto:soren.han...@ril.com]
 Sendt: 1. november 2013 10:30
 Til: Peter Sørensen;
 assimilation@lists.community.tummy.com
 mailto:assimilation@lists.community.tummy.com
