Great, thanks, Pierre!

On Tue, Dec 31, 2019 at 10:43 AM Pierre Villard <[email protected]>
wrote:

> Joe, if you're using the CA from NiFi 1.10, this is a known issue (already
> fixed in master, I don't have the JIRA handy). You can try with the CA from
> toolkit 1.9.2 if that's not too complex in your environment.
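>
> A rough sketch of what that could look like (the 1.9.2 install path is
> an assumption, and the exact flags should mirror whatever your CA pod
> passes today):
>
> # start the CA server from the 1.9.2 toolkit instead of the 1.10 one
> /opt/nifi/nifi-toolkit-1.9.2/bin/tls-toolkit.sh server -t <token>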
>
> On Mon, Dec 30, 2019 at 9:29 PM Joe Gresock <[email protected]> wrote:
>
>> Looks like it works when I exclude the --subjectAlternativeNames
>> parameter, whereas it fails with the null error when I specify this
>> parameter.
>> For example, this works:
>> /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh client -c nifi-ca-cs -t
>> a5s4d545h48s6d87fa6sd7g45f6g4fga84sd6a8e7f6ga786a --dn "CN=$(hostname
>> -f),OU=NIFI"
>>
>> But this doesn't:
>> /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh client -c nifi-ca-cs -t
>> a5s4d545h48s6d87fa6sd7g45f6g4fga84sd6a8e7f6ga786a --dn "CN=$(hostname
>> -f),OU=NIFI" --subjectAlternativeNames "<my proxy server's hostname>"
>>
>> I'll dig further to see if there is any way I can get the SAN to work.
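>>
>> Once a run with the SAN succeeds, one way to confirm it actually made
>> it into the certificate (keystore name and password are placeholders)
>> would be something like:
>>
>> # print the cert details and look for the SubjectAlternativeName extension
>> keytool -list -v -keystore keystore.jks -storepass <password> \
>>   | grep -A 2 "SubjectAlternativeName"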
>>
>> On Mon, Dec 30, 2019 at 4:51 PM Joe Gresock <[email protected]> wrote:
>>
>>> Thanks for the suggestion; apparently there was a typo in my nifi-ca-cs
>>> service that caused the pod not to be matched.  Unfortunately, even once I
>>> corrected this, I'm still getting the "Service client error: null".
>>>
>>> I'll keep working on it.  Thanks for your help!
>>>
>>> On Sat, Dec 28, 2019 at 5:53 AM Swarup Karavadi <[email protected]> wrote:
>>>
>>>> Hey Joe,
>>>>
>>>> I'm glad that the article was of some help. I, unfortunately, do not
>>>> recollect running into that specific error scenario. At the risk of stating
>>>> the obvious, can you check if the nifi-ca-cs service is reachable from your
>>>> nifi pod?
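>>>>
>>>> A quick way to test that (pod name is a placeholder, and this assumes
>>>> curl is available in the NiFi image; the CA listens on 8443 by default
>>>> as far as I recall):
>>>>
>>>> # resolve the service name and try to reach the CA port from the NiFi pod
>>>> kubectl exec -it <nifi-pod> -- getent hosts nifi-ca-cs
>>>> kubectl exec -it <nifi-pod> -- curl -kv https://nifi-ca-cs:8443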
>>>>
>>>> On the CRD bit, we are considering going this route with NiFi
>>>> stateless. We are experimenting to see if we can have something like "NiFi
>>>> K" that is in line with the way "Camel K" works.  Based on our current
>>>> (limited) understanding of NiFi stateless, we think the CRD approach will
>>>> help scale NiFi stateless horizontally much more easily.
>>>>
>>>> Cheers,
>>>> Swarup.
>>>>
>>>> On Sat, Dec 28, 2019 at 4:32 AM Mike Thomsen <[email protected]>
>>>> wrote:
>>>>
>>>>> If you don't see a value when you run "echo %JAVA_HOME%", you need to
>>>>> check whether it was set globally in Windows, and if it was, open a new
>>>>> command shell so the change is picked up.
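>>>>>
>>>>> For example, in a fresh command prompt (the JDK path below is just a
>>>>> placeholder, point it at wherever Java is actually installed):
>>>>>
>>>>> rem see what the current shell thinks JAVA_HOME is
>>>>> echo %JAVA_HOME%
>>>>> rem set it for your user, then open a new command prompt to pick it up
>>>>> setx JAVA_HOME "C:\Program Files\Java\jdk-11"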
>>>>>
>>>>> On Mon, Oct 21, 2019 at 12:37 PM <[email protected]> wrote:
>>>>>
>>>>>> Any suggestions?
>>>>>>
>>>>>>
>>>>>>
>>>>>> I downloaded NiFi, but when I run runnifi from the bin folder, nothing
>>>>>> happens except the following message: "The JAVA_HOME environment
>>>>>> variable is not defined correctly." I downloaded the latest JRE, but
>>>>>> still get the same error message.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *From:* Swarup Karavadi <[email protected]>
>>>>>> *Sent:* Monday, October 21, 2019 12:19 PM
>>>>>> *To:* [email protected]
>>>>>> *Cc:* [email protected]
>>>>>> *Subject:* Re: NiFi Kubernetes question
>>>>>>
>>>>>>
>>>>>>
>>>>>> If you are hosting in the cloud, I'd recommend going for dedicated
>>>>>> worker nodes for the NiFi cluster. There might be rare (or not) occasions
>>>>>> when a worker node is under high load and needs to evict pods. If your
>>>>>> NiFi deployment's pod disruption budget allows for eviction of pods, then
>>>>>> there is always a chance that an evicted NiFi pod gets rescheduled on a
>>>>>> different node that is tainted (tainted because the node may not meet the
>>>>>> pod's volume affinity requirements). Your best-case scenario when this
>>>>>> happens is that the pod keeps getting rescheduled on different nodes
>>>>>> until it starts up again. The worst-case scenario is that it gets stuck
>>>>>> in a CrashLoopBackOff limbo.
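>>>>>>
>>>>>> As a rough sketch of the dedicated-worker-node idea (node and label
>>>>>> names are made up), you could taint and label the NiFi nodes and give
>>>>>> the NiFi statefulset a matching toleration and node selector:
>>>>>>
>>>>>> # keep other workloads off this node and mark it for NiFi's node selector
>>>>>> kubectl taint nodes <node-name> dedicated=nifi:NoSchedule
>>>>>> kubectl label nodes <node-name> dedicated=nifi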
>>>>>>
>>>>>>
>>>>>>
>>>>>> Disclaimer - I speak from my experience on a non-production
>>>>>> environment. Our NiFi clusters will be deployed to a production k8s
>>>>>> environment a few weeks from now. I am only sharing some learnings
>>>>>> I've had w.r.t. k8s statefulsets along the way.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hope this helps,
>>>>>>
>>>>>> Swarup.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Oct 21, 2019, 9:32 PM Wyllys Ingersoll <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> We had success running a 3-node cluster in kubernetes using modified
>>>>>> configuration scripts from the AlexsJones GitHub repo -
>>>>>> https://github.com/AlexsJones/nifi
>>>>>>
>>>>>> Ours is on an internal bare-metal k8s lab configuration, not in a
>>>>>> public cloud at this time, but the basics are the same either way.
>>>>>>
>>>>>>
>>>>>>
>>>>>> - set up nifi as a stateful set so you can scale up or down as needed.
>>>>>> When a pod fails, k8s will spawn another to take its place and zookeeper
>>>>>> will manage the election of the master during transitions.
>>>>>>
>>>>>> - manage your certs as K8S secrets (see the example command after
>>>>>> this list).
>>>>>>
>>>>>> - you also need to have a stateful set of zookeeper pods for
>>>>>> managing the nifi servers.
>>>>>>
>>>>>> - use persistent volume mounts to hold the flowfile_repository,
>>>>>> database_repository, content_repository, and provenance_repository
>>>>>> directories
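>>>>>>
>>>>>> For the certs-as-secrets point above, a minimal sketch (file and
>>>>>> secret names are just examples) is to create the secret from the
>>>>>> store files and mount it into the NiFi statefulset:
>>>>>>
>>>>>> # package the keystore/truststore as a secret the pods can mount
>>>>>> kubectl create secret generic nifi-certs \
>>>>>>   --from-file=keystore.jks --from-file=truststore.jks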
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Oct 21, 2019 at 11:21 AM Joe Gresock <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>> Apologies if this has been answered on the list already..
>>>>>>
>>>>>>
>>>>>>
>>>>>> Does anyone have knowledge of the latest in the realm of nifi
>>>>>> kubernetes support?  I see some pages like
>>>>>> https://hub.helm.sh/charts/cetic/nifi and
>>>>>> https://github.com/AlexsJones/nifi, but am unsure which example to
>>>>>> start with.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I'm curious how well kubernetes maintains the nifi cluster state with
>>>>>> pod failures.  I.e., do any of the k8s implementations play well with the
>>>>>> nifi cluster list so that we don't have dangling downed nodes in the
>>>>>> cluster?  Also, I'm wondering how certs are managed in a secured cluster.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Appreciate any nudge in the right direction,
>>>>>>
>>>>>> Joe
>>>>>>
>>>
>>> --
>>> Be on your guard; stand firm in the faith; be courageous; be strong.
>>> Do everything in love.    -*1 Corinthians 16:13-14*
>>>
>>
>>
>> --
>> Be on your guard; stand firm in the faith; be courageous; be strong.  Do
>> everything in love.    -*1 Corinthians 16:13-14*
>>
>

-- 
Be on your guard; stand firm in the faith; be courageous; be strong.  Do
everything in love.    -*1 Corinthians 16:13-14*
