Re: Unit test for database Put processor

2018-04-16 Thread Mike Thomsen
Anthony,

Sorry, forgot to answer your last question. The best thing you can do,
especially since MarkLogic doesn't have a convenient Docker image
published, is to do both unit and integration tests. With the way Maven is
configured here, a test class whose name starts or ends with Test is a unit
test, and one whose name ends with IT is an integration test. The
integration tests will not run unless you run *mvn integration-test
-Pintegration-tests*.
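A sketch of how such a naming split is commonly wired in Maven, using the Failsafe plugin bound to a profile whose id matches the command above (the actual plugin configuration in this repo may differ):

```xml
<!-- Hypothetical profile sketch: Surefire runs *Test classes during
     `mvn test`; Failsafe picks up *IT classes only when this profile
     is activated with -Pintegration-tests. -->
<profile>
  <id>integration-tests</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>integration-test</goal>
              <goal>verify</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```

Failsafe's default includes already match **/*IT.java, which is why the IT suffix alone is enough to route a class to the integration phase.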

Thanks,

Mike

On Mon, Apr 16, 2018 at 6:00 PM Mike Thomsen  wrote:

> The PutMongo tests fail because the Mongo processors haven't been fully
> migrated to the new method of declaring EL support. With that said, the
> Mongo tests are almost entirely integration tests, not unit tests,
> because Mongo is kinda painful to mock.
>
> On Mon, Apr 16, 2018 at 5:50 PM Anthony Roach 
> wrote:
>
>> We are writing a PutMarkLogic processor to ingest flowfiles into a
>> MarkLogic database.  Looking at the PutMongo test, it appears that a
>> running instance of MongoDB is expected.  Attempting to run with one ends
>> in failure.  Can we make the same assumption for tests we include in the
>> suite?
>>
>>
>>
>> Thanks…
>>
>> ___
>>
>> Anthony Roach
>>
>> Product Manager
>> MarkLogic Corporation
>>
>> Desk: +1 650 287 2587
>>
>> Mobile: +1 415 368 6460
>> www.marklogic.com
>> 
>>
>>
>>
>>
>


Re: Unit test for database Put processor

2018-04-16 Thread Mike Thomsen
The PutMongo tests fail because the Mongo processors haven't been fully
migrated to the new method of declaring EL support. With that said, the
Mongo tests are almost entirely integration tests, not unit tests,
because Mongo is kinda painful to mock.

On Mon, Apr 16, 2018 at 5:50 PM Anthony Roach 
wrote:

> We are writing a PutMarkLogic processor to ingest flowfiles into a
> MarkLogic database.  Looking at the PutMongo test, it appears that a
> running instance of MongoDB is expected.  Attempting to run with one ends
> in failure.  Can we make the same assumption for tests we include in the
> suite?
>
>
>
> Thanks…
>
> ___
>
> Anthony Roach
>
> Product Manager
> MarkLogic Corporation
>
> Desk: +1 650 287 2587
>
> Mobile: +1 415 368 6460
> www.marklogic.com
> 
>
>
>
>


Unit test for database Put processor

2018-04-16 Thread Anthony Roach
We are writing a PutMarkLogic processor to ingest flowfiles into a MarkLogic 
database.  Looking at the PutMongo test, it appears that a running instance of 
MongoDB is expected.  Attempting to run with one ends in failure.  Can we make 
the same assumption for tests we include in the suite?

Thanks...
___
Anthony Roach
Product Manager
MarkLogic Corporation
Desk: +1 650 287 2587
Mobile: +1 415 368 6460
www.marklogic.com



Re: Clustering not happening in Kubernetes cluster

2018-04-16 Thread Anil Rai
Hi Mark,

We are also looking at spinning up nifi on k8s. If you could share more
information on your architecture of Nifi on k8s, that would be very
helpful. Also any best practice that you would suggest.
I am interested in understanding the auto scaling of nifi on k8s. Before I
deep dive, wanted to check if you have explored the best way to scale up or
down and what parameters would qualify in triggering that action.

Thanks
Anil


On Mon, Apr 16, 2018 at 1:36 PM, Mark Payne  wrote:

> Jonathan,
>
> I've spent some time in the last few weeks playing around with getting
> some clusters running on Kubernetes
> as well. I know from the mailing lists that some others have been
> venturing into this also. I'm by no means
> a Kubernetes expert myself, but I've gotten it up & running on a few-node
> cluster in GCE, so I do have some
> familiarity.
>
> One thing to note is that the log file that you shared appears to be from
> an instance that's been running for a while, not
> one that was newly started, so it's possible that there were some
> important details regarding why the clustering is not
> working as expected at the beginning of the logs that were missed.
>
> I also notice that you're setting the "nifi.web.http.host" and
> "nifi.cluster.node.address" properties to "nifi-test".
> I'm not sure that this will work. Perhaps it will, I've never tried that,
> but I don't know if the NiFi process will be able to listen
> on that hostname. What I ended up doing was to update my docker image
> locally to update the "hostname" variable to be
> the fully-qualified hostname:
>
> export hostname=$(hostname -f)
>
> And then set the http.host and node.address properties to that. If setting
> to nifi-test works, though, then that would be great,
> certainly a lot easier. But I suspect we'll need to take a closer look at
> the logs just after startup to see if they reveal anything
> interesting.
>
> Thanks
> -Mark
>
>
> On Apr 16, 2018, at 12:26 PM, Jonathan Kosgei <jonat...@saharacluster.com> wrote:
>
> Hi,
>
>
>
> I'm trying to cluster nifi on kubernetes but haven't been able to get the
> pods to connect to each other.
>
>
>
> I have the following statefulset and services;
>
> https://gist.github.com/jonathan-kosgei/cdf7c9ec882948eac12beb6b28ffa748
>
>
>
> I'm running nifi 1.6.0 on Kubernetes v1.8.8-gke.0.
>
>
>
> My nifi-app.log:
>
> https://gist.github.com/jonathan-kosgei/527952976ec18cf3957cec4d1e186f68
>
>
>
> I set every log option to DEBUG in logback.xml
>
>
>
> I've noticed a lot of restarts and several files in the logs folder
>
>
>
> nifi@nifi2-0:/opt/nifi/nifi-1.6.0/logs$ ls
>
> nifi-app.log  nifi-app_2018-04-16_03.0.log
> nifi-app_2018-04-16_06.0.log  nifi-app_2018-04-16_09.0.log
> nifi-app_2018-04-16_12.0.log  nifi-app_2018-04-16_15.0.log
>
> nifi-app_2018-04-16_01.0.log  nifi-app_2018-04-16_04.0.log
> nifi-app_2018-04-16_07.0.log  nifi-app_2018-04-16_10.0.log
> nifi-app_2018-04-16_13.0.log  nifi-bootstrap.log
>
> nifi-app_2018-04-16_02.0.log  nifi-app_2018-04-16_05.0.log
> nifi-app_2018-04-16_08.0.log  nifi-app_2018-04-16_11.0.log
> nifi-app_2018-04-16_14.0.log  nifi-user.log
>
>
>
> Any ideas what I might be missing?
>
>
>
> The cluster configuration seemed very straightforward.
>
>
>
> Thanks!
>
>
>


Re: Clustering not happening in Kubernetes cluster

2018-04-16 Thread Mark Payne
Jonathan,

I've spent some time in the last few weeks playing around with getting some 
clusters running on Kubernetes
as well. I know from the mailing lists that some others have been venturing 
into this also. I'm by no means
a Kubernetes expert myself, but I've gotten it up & running on a few-node 
cluster in GCE, so I do have some
familiarity.

One thing to note is that the log file that you shared appears to be from an 
instance that's been running for a while, not
one that was newly started, so it's possible that there were some important 
details regarding why the clustering is not
working as expected at the beginning of the logs that were missed.

I also notice that you're setting the "nifi.web.http.host" and 
"nifi.cluster.node.address" properties to "nifi-test".
I'm not sure that this will work. Perhaps it will, I've never tried that, but I 
don't know if the NiFi process will be able to listen
on that hostname. What I ended up doing was to update my docker image locally 
to update the "hostname" variable to be
the fully-qualified hostname:

export hostname=$(hostname -f)

And then set the http.host and node.address properties to that. If setting to 
nifi-test works, though, then that would be great,
certainly a lot easier. But I suspect we'll need to take a closer look at the 
logs just after startup to see if they reveal anything
interesting.
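The `export hostname=$(hostname -f)` trick above can be folded into the image's entrypoint. A hypothetical sketch, assuming the two property names from the stock nifi.properties; the mktemp setup below just stands in for the real conf directory so the snippet is self-contained:

```shell
#!/bin/sh
# Resolve the pod's fully-qualified hostname and point NiFi's web and
# cluster properties at it, so each node advertises a reachable address.
FQDN=$(hostname -f)

# Stand-in for $NIFI_HOME/conf/nifi.properties with the two relevant keys.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/nifi.properties" <<EOF
nifi.web.http.host=nifi-test
nifi.cluster.node.address=nifi-test
EOF

# Rewrite both properties in place to the FQDN before launching NiFi.
sed -i "s|^nifi.web.http.host=.*|nifi.web.http.host=${FQDN}|" "$CONF_DIR/nifi.properties"
sed -i "s|^nifi.cluster.node.address=.*|nifi.cluster.node.address=${FQDN}|" "$CONF_DIR/nifi.properties"

grep "^nifi" "$CONF_DIR/nifi.properties"
```

In a real image the sed lines would target $NIFI_HOME/conf/nifi.properties directly, just before the start script runs.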

Thanks
-Mark


On Apr 16, 2018, at 12:26 PM, Jonathan Kosgei <jonat...@saharacluster.com> wrote:

Hi,



I'm trying to cluster nifi on kubernetes but haven't been able to get the pods 
to connect to each other.



I have the following statefulset and services;

https://gist.github.com/jonathan-kosgei/cdf7c9ec882948eac12beb6b28ffa748



I'm running nifi 1.6.0 on Kubernetes v1.8.8-gke.0.



My nifi-app.log:

https://gist.github.com/jonathan-kosgei/527952976ec18cf3957cec4d1e186f68



I set every log option to DEBUG in logback.xml



I've noticed a lot of restarts and several files in the logs folder



nifi@nifi2-0:/opt/nifi/nifi-1.6.0/logs$ ls

nifi-app.log  nifi-app_2018-04-16_03.0.log  
nifi-app_2018-04-16_06.0.log  nifi-app_2018-04-16_09.0.log  
nifi-app_2018-04-16_12.0.log  nifi-app_2018-04-16_15.0.log

nifi-app_2018-04-16_01.0.log  nifi-app_2018-04-16_04.0.log  
nifi-app_2018-04-16_07.0.log  nifi-app_2018-04-16_10.0.log  
nifi-app_2018-04-16_13.0.log  nifi-bootstrap.log

nifi-app_2018-04-16_02.0.log  nifi-app_2018-04-16_05.0.log  
nifi-app_2018-04-16_08.0.log  nifi-app_2018-04-16_11.0.log  
nifi-app_2018-04-16_14.0.log  nifi-user.log



Any ideas what I might be missing?



The cluster configuration seemed very straightforward.



Thanks!




Clustering not happening in Kubernetes cluster

2018-04-16 Thread Jonathan Kosgei
Hi,



I'm trying to cluster nifi on kubernetes but haven't been able to get the pods 
to connect to each other.



I have the following statefulset and services;

https://gist.github.com/jonathan-kosgei/cdf7c9ec882948eac12beb6b28ffa748



I'm running nifi 1.6.0 on Kubernetes v1.8.8-gke.0.
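For anyone following along without the gist, the general shape of such a setup is roughly the following. This is a hypothetical minimal sketch, not the manifests from the gist; apps/v1beta2 matches a 1.8.x cluster (newer clusters use apps/v1), and the image tag is an assumption:

```yaml
# Headless Service: gives each StatefulSet pod a stable DNS name,
# e.g. nifi-0.nifi.default.svc.cluster.local, which is the kind of
# address nifi.cluster.node.address needs to resolve to.
apiVersion: v1
kind: Service
metadata:
  name: nifi
spec:
  clusterIP: None        # headless: per-pod DNS records
  selector:
    app: nifi
  ports:
    - name: web
      port: 8080
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: nifi
spec:
  serviceName: nifi      # must match the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: nifi
  template:
    metadata:
      labels:
        app: nifi
    spec:
      containers:
        - name: nifi
          image: apache/nifi:1.6.0
          ports:
            - containerPort: 8080
```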



My nifi-app.log:

https://gist.github.com/jonathan-kosgei/527952976ec18cf3957cec4d1e186f68



I set every log option to DEBUG in logback.xml



I've noticed a lot of restarts and several files in the logs folder



nifi@nifi2-0:/opt/nifi/nifi-1.6.0/logs$ ls

nifi-app.log  nifi-app_2018-04-16_03.0.log  
nifi-app_2018-04-16_06.0.log  nifi-app_2018-04-16_09.0.log  
nifi-app_2018-04-16_12.0.log  nifi-app_2018-04-16_15.0.log

nifi-app_2018-04-16_01.0.log  nifi-app_2018-04-16_04.0.log  
nifi-app_2018-04-16_07.0.log  nifi-app_2018-04-16_10.0.log  
nifi-app_2018-04-16_13.0.log  nifi-bootstrap.log

nifi-app_2018-04-16_02.0.log  nifi-app_2018-04-16_05.0.log  
nifi-app_2018-04-16_08.0.log  nifi-app_2018-04-16_11.0.log  
nifi-app_2018-04-16_14.0.log  nifi-user.log



Any ideas what I might be missing?



The cluster configuration seemed very straightforward.



Thanks!