Hi Sushil,

I am using the cetic helm chart only. May I know which one you used? Where did you generate the certs?
Thanks,
Atul

On Sat, Jul 25, 2020 at 2:00 AM Sushil Kumar <[email protected]> wrote:

> Hello Atul
>
> I have recently tried using self-signed certificates generated with the
> NiFi toolkit while using a helm chart. The cetic helm chart is not written
> to accomplish this completely; I may be able to help if you can share your
> helm chart.
>
> However, as of now the error is in your values.yaml file.
>
> Thanks
> Sushil Kumar
>
> On Fri, Jul 24, 2020 at 9:14 AM Chris Sampson <[email protected]>
> wrote:
>
>> I don't use or know much about helm, but that error suggests you've got
>> something wrong on line 202 of your yaml, so what's on that line (or the
>> lines immediately before/after)?
>>
>> I notice you're using NiFi 1.11.1; it might be worth considering 1.11.4
>> if you can, to take advantage of several high-priority bug fixes in NiFi
>> (but that won't affect your helm chart). Also, I suggest using the
>> apache/nifi-toolkit image for running the toolkit in TLS server mode
>> (much lighter weight), but again that's not likely to be causing you a
>> problem here.
>>
>> Cheers,
>>
>> Chris Sampson
>>
>> On Fri, 24 Jul 2020, 15:05 Atul Wankhade, <[email protected]>
>> wrote:
>>
>>> Chris, I am trying what you have suggested. While passing the init
>>> container params in values.yaml I get the error below; can you please
>>> help me get around this issue?
>>> *Error: cannot load values.yaml: error converting YAML to JSON: yaml:
>>> line 202: did not find expected ',' or '}'*
>>>
>>> I am adding the init container config below; I have tried to edit it in
>>> multiple ways, no luck :(
>>>
>>> initContainers: {
>>>   name: nifi-init
>>>   image: "apache/nifi:1.11.1"
>>>   imagePullPolicy: "IfNotPresent"
>>>   command: ['sh', '-c',
>>> '/opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh client -c nifi-ca-cs -t
>>> Mytesttoken12345 --dn "CN=$(hostname -f), OU=NIFI"','>','/opt/certs']
>>>   volumeMounts:
>>>     - mountPath: /opt/certs/
>>>       name: certs
>>> }
>>>
>>> I created the CA service as below:
>>>
>>> apiVersion: apps/v1
>>> kind: ReplicaSet
>>> metadata:
>>>   name: nifi-ca
>>>   namespace: nifi
>>>   labels:
>>>     app: nifi-ca
>>> spec:
>>>   # modify replicas according to your case
>>>   replicas: 1
>>>   selector:
>>>     matchLabels:
>>>       app: nifi-ca
>>>   template:
>>>     metadata:
>>>       namespace: nifi
>>>       labels:
>>>         app: nifi-ca
>>>     spec:
>>>       containers:
>>>         - name: nifi-ca
>>>           image: apache/nifi:1.9.2
>>>           ports:
>>>             - containerPort: 8443
>>>               name: ca-client-port
>>>           command:
>>>             - bash
>>>             - -c
>>>             - |
>>>               ../nifi-toolkit-current/bin/tls-toolkit.sh server -c nifi-ca-cs -t <token>
>>> ---
>>> # Create service for the nifi-ca replica set
>>> apiVersion: v1
>>> kind: Service
>>> metadata:
>>>   name: nifi-ca-cs
>>>   namespace: nifi
>>>   labels:
>>>     app: nifi-ca
>>> spec:
>>>   ports:
>>>     - port: 8443
>>>       name: ca-client-port
>>>       targetPort: 8443
>>>   selector:
>>>     app: nifi-ca
>>>
>>> On Fri, Jul 24, 2020 at 10:13 AM Atul Wankhade <
>>> [email protected]> wrote:
>>>
>>>> Hi Andy,
>>>>
>>>> Sorry for the confusion. NiFi is running inside a container on the
>>>> node (the image has Java prebuilt). It seems I need to tweak the image
>>>> to generate the certs inside the container. I have done the same setup
>>>> (it worked fine) on Azure, where I used to generate the certs on the VM
>>>> itself for node identity, so I was trying the same on the Kubernetes
>>>> node, but no Java here.
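For anyone hitting the same parse error: the initContainers block quoted above mixes JSON-style flow syntax (`{ … }`) with a YAML block mapping, and passes the shell redirection `'>'` as a separate element of the `command` list, which YAML cannot parse as written. Going by the commented example in the chart's default values (`foo-init: # <- will be used as container name`), the cetic chart appears to expect a plain map keyed by the container name, with any redirection or chaining kept inside the single `sh -c` string. A sketch of a shape that should at least parse (untested against the chart; the `cd` is illustrative, since tls-toolkit writes its output files to the working directory rather than to stdout):

```yaml
initContainers:
  nifi-init:  # <- will be used as the container name
    image: "apache/nifi:1.11.1"
    imagePullPolicy: "IfNotPresent"
    # Run the toolkit client from /opt/certs so the generated keystore,
    # truststore and config.json land on the shared volume.
    command:
      - sh
      - -c
      - >
        cd /opt/certs &&
        /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh client
        -c nifi-ca-cs -t Mytesttoken12345
        --dn "CN=$(hostname -f), OU=NIFI"
    volumeMounts:
      - mountPath: /opt/certs
        name: certs
```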
>>>> I am new to K8S/Docker, so I am limited by imagination, I assume. The
>>>> TLS toolkit is part of the NiFi image, but how to use it inside the
>>>> container (in a k8s environment) is not documented anywhere.
>>>> I need to explore more of what Chris said.
>>>>
>>>> Thank you guys
>>>> Atul
>>>>
>>>> On Thu, Jul 23, 2020 at 9:27 PM Andy LoPresto <[email protected]>
>>>> wrote:
>>>>
>>>>> Chris has a lot of good suggestions there. NiFi can accept
>>>>> certificates from any provider as long as they meet certain
>>>>> requirements (EKU, SAN, no wildcard, etc.). The toolkit was designed
>>>>> to make the process easier for people who could not obtain their
>>>>> certificates elsewhere.
>>>>>
>>>>> Maybe I am misunderstanding your statement, but I am curious why the
>>>>> toolkit can’t run on the node — if you don’t have Java available, how
>>>>> does NiFi itself run?
>>>>>
>>>>> Andy LoPresto
>>>>> [email protected]
>>>>> *[email protected] <[email protected]>*
>>>>> He/Him
>>>>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
>>>>>
>>>>> On Jul 23, 2020, at 12:35 AM, Chris Sampson <[email protected]>
>>>>> wrote:
>>>>>
>>>>> My suggestion would be to run the apache/nifi-toolkit image as another
>>>>> Pod within your k8s namespace and have it running as a TLS Server [1].
>>>>> You'll probably need to do that separately from your Helm chart (I'm
>>>>> not familiar with Helm or this chart).
>>>>>
>>>>> Then connect to that from your NiFi instances as they start up, e.g.
>>>>> with an init-container based on the same apache/nifi-toolkit image
>>>>> using the TLS client function [1] to obtain the required TLS
>>>>> certificate files from the TLS Server. You can use an emptyDir [2]
>>>>> volume to pass the files from the init-container to the NiFi container
>>>>> within the Pod.
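The pattern Chris describes could be wired up roughly as below. This is a sketch, not taken from any chart: the image tag, resource names, the `<token>` placeholder, and in particular the toolkit path inside the apache/nifi-toolkit image are assumptions to illustrate the server/init-container/emptyDir split.

```yaml
# TLS server side: the toolkit CA running as its own long-lived Pod,
# using the lighter apache/nifi-toolkit image instead of the full NiFi image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nifi-ca
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nifi-ca
  template:
    metadata:
      labels:
        app: nifi-ca
    spec:
      containers:
        - name: nifi-ca
          image: apache/nifi-toolkit:1.11.4
          command:
            - sh
            - -c
            # Toolkit location inside the image is assumed here.
            - /opt/nifi-toolkit/nifi-toolkit-current/bin/tls-toolkit.sh server -c nifi-ca-cs -t <token>
---
# Client side, inside the NiFi Pod template: an init-container fetches the
# certificates and hands them to the NiFi container via an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: nifi-0
spec:
  volumes:
    - name: certs
      emptyDir: {}   # shared scratch space; lives as long as the Pod
  initContainers:
    - name: fetch-certs
      image: apache/nifi-toolkit:1.11.4
      command:
        - sh
        - -c
        - cd /opt/certs && /opt/nifi-toolkit/nifi-toolkit-current/bin/tls-toolkit.sh client -c nifi-ca-cs -t <token> --dn "CN=$(hostname -f), OU=NIFI"
      volumeMounts:
        - name: certs
          mountPath: /opt/certs
  containers:
    - name: nifi
      image: apache/nifi:1.11.4
      volumeMounts:
        - name: certs
          mountPath: /opt/certs
```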
>>>>> If you run the TLS Server as a StatefulSet (or a Deployment) with a
>>>>> Persistent Volume Claim that is backed by an external volume within
>>>>> your cloud provider (whatever the GKE equivalent is of AWS's EBS
>>>>> volumes), then the TLS Server can be set up with its own Certificate
>>>>> Authority that persists between Pod restarts, and thus your NiFi
>>>>> certificates shouldn't become invalid over time (if the TLS Server is
>>>>> restarted and generates a new CA, then subsequent NiFi restarts would
>>>>> mean your NiFi cluster instances would no longer be able to
>>>>> communicate with one another, as they wouldn't trust one another's
>>>>> certificates).
>>>>>
>>>>> An alternative, if it's available in your k8s cluster, is to use
>>>>> something like cert-manager [3] to provision certificates for your
>>>>> instances, then use an init-container within the NiFi Pods to convert
>>>>> the PEM files to Java KeyStore or PKCS12 format as required by NiFi.
>>>>>
>>>>> [1]: https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#client-server
>>>>> [2]: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
>>>>> [3]: https://github.com/jetstack/cert-manager
>>>>>
>>>>> *Chris Sampson*
>>>>> IT Consultant
>>>>> [email protected]
>>>>>
>>>>> On Thu, 23 Jul 2020 at 07:09, Atul Wankhade <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Thanks a lot, Andy, for your reply; it definitely helped pinpoint
>>>>>> what is going wrong. I tried simulating the same with the Docker
>>>>>> image from Apache and generating the keystore/truststore files on the
>>>>>> Docker host. For a one-node NiFi it worked fine. The problem comes
>>>>>> when I try the same on Kubernetes. Nodes in GKE have
>>>>>> Container-Optimized OS (no package installer), so it does not support
>>>>>> using the NiFi tls-toolkit, as Java cannot be installed.
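For the cert-manager route Chris mentions, the conversion step could look something like the following: cert-manager places `tls.crt`, `tls.key`, and `ca.crt` (PEM) into a Secret, and an init-container turns them into the PKCS12 keystore/truststore NiFi needs. A sketch only; the Secret and volume names are illustrative, the chosen image is assumed to ship both `openssl` and `keytool`, and the password handling is deliberately simplified:

```yaml
initContainers:
  - name: pem-to-pkcs12
    image: openjdk:8-jdk-slim   # illustrative; must provide openssl and keytool
    command:
      - sh
      - -c
      - |
        # Bundle the cert-manager-issued key pair into a PKCS12 keystore.
        openssl pkcs12 -export \
          -in /pem/tls.crt -inkey /pem/tls.key \
          -out /certs/keystore.p12 -name nifi-key \
          -password pass:$KEYSTORE_PASS
        # Import the issuing CA into a PKCS12 truststore.
        keytool -importcert -noprompt -trustcacerts \
          -alias nifi-ca -file /pem/ca.crt \
          -keystore /certs/truststore.p12 -storetype PKCS12 \
          -storepass $KEYSTORE_PASS
    env:
      - name: KEYSTORE_PASS
        value: changeit   # illustrative; source this from a Secret in practice
    volumeMounts:
      - name: nifi-tls    # Secret created by a cert-manager Certificate
        mountPath: /pem
      - name: certs       # emptyDir shared with the NiFi container
        mountPath: /certs
```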
>>>>>> Can you please give some pointers or a workaround for how to solve
>>>>>> this issue with k8s? Once the files are generated, we can mount them
>>>>>> using a host mount in the pod.
>>>>>>
>>>>>> Thanks again for your help :)
>>>>>> Atul
>>>>>>
>>>>>> On Tue, Jul 21, 2020 at 10:37 PM Andy LoPresto <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Atul,
>>>>>>>
>>>>>>> I am not a Kubernetes/ingress expert, but that error is indicating
>>>>>>> that you specified NiFi should be secure (i.e. use TLS/HTTPS) and
>>>>>>> yet there is no keystore or truststore provided to the application,
>>>>>>> so it fails to start. NiFi differs from some other applications in
>>>>>>> that you cannot configure authentication and authorization without
>>>>>>> explicitly enabling and configuring TLS for NiFi itself, not just
>>>>>>> delegating that data-in-transit encryption to an external system
>>>>>>> (like a load balancer, proxy, or service mesh).
>>>>>>>
>>>>>>> I suggest you read the NiFi walkthrough for “Securing NiFi with TLS”
>>>>>>> [1], which will provide some context around what the various
>>>>>>> requirements are, and the Admin Guide [2] sections on authentication
>>>>>>> and authorization for more background.
>>>>>>>
>>>>>>> [1] https://nifi.apache.org/docs/nifi-docs/html/walkthroughs.html#securing-nifi-with-tls
>>>>>>> [2] https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#security_configuration
>>>>>>>
>>>>>>> Andy LoPresto
>>>>>>> [email protected]
>>>>>>> *[email protected] <[email protected]>*
>>>>>>> He/Him
>>>>>>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
>>>>>>>
>>>>>>> On Jul 20, 2020, at 11:58 PM, Atul Wankhade <
>>>>>>> [email protected]> wrote:
>>>>>>>
>>>>>>> Hi All,
>>>>>>>
>>>>>>> I am trying to install NiFi with SSL on Kubernetes using Helm
>>>>>>> (cetic/nifi). Below is my values.yaml.
>>>>>>> I keep getting the error below on the NiFi containers. Am I missing
>>>>>>> something?
>>>>>>>
>>>>>>> *Caused by: org.springframework.beans.factory.BeanCreationException:
>>>>>>> Error creating bean with name 'clusterCoordinationProtocolSender'
>>>>>>> defined in class path resource [nifi-cluster-protocol-context.xml]:
>>>>>>> Cannot resolve reference to bean 'protocolSocketConfiguration' while
>>>>>>> setting constructor argument; nested exception is
>>>>>>> org.springframework.beans.factory.BeanCreationException: Error
>>>>>>> creating bean with name 'protocolSocketConfiguration': FactoryBean
>>>>>>> threw exception on object creation; nested exception is
>>>>>>> java.io.FileNotFoundException: (No such file or directory)*
>>>>>>>
>>>>>>> VALUES.YAML:
>>>>>>> ---
>>>>>>> # Number of nifi nodes
>>>>>>> replicaCount: 1
>>>>>>>
>>>>>>> ## Set default image, imageTag, and imagePullPolicy.
>>>>>>> ## ref: https://hub.docker.com/r/apache/nifi/
>>>>>>> ##
>>>>>>> image:
>>>>>>>   repository: apache/nifi
>>>>>>>   tag: "1.11.4"
>>>>>>>   pullPolicy: IfNotPresent
>>>>>>>
>>>>>>> ## Optionally specify an imagePullSecret.
>>>>>>> ## Secret must be manually created in the namespace.
>>>>>>> ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
>>>>>>> ##
>>>>>>> # pullSecret: myRegistrKeySecretName
>>>>>>>
>>>>>>> securityContext:
>>>>>>>   runAsUser: 1000
>>>>>>>   fsGroup: 1000
>>>>>>>
>>>>>>> sts:
>>>>>>>   # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
>>>>>>>   podManagementPolicy: Parallel
>>>>>>>   AntiAffinity: soft
>>>>>>>   hostPort: null
>>>>>>>
>>>>>>> ## Useful if using any custom secrets
>>>>>>> ## Pass in some secrets to use (if required)
>>>>>>> # secrets:
>>>>>>> #   - name: myNifiSecret
>>>>>>> #     keys:
>>>>>>> #       - key1
>>>>>>> #       - key2
>>>>>>> #     mountPath: /opt/nifi/secret
>>>>>>>
>>>>>>> ## Useful if using any custom configmaps
>>>>>>> ## Pass in some configmaps to use (if required)
>>>>>>> # configmaps:
>>>>>>> #   - name: myNifiConf
>>>>>>> #     keys:
>>>>>>> #       - myconf.conf
>>>>>>> #     mountPath: /opt/nifi/custom-config
>>>>>>>
>>>>>>> properties:
>>>>>>>   # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
>>>>>>>   externalSecure: true
>>>>>>>   isNode: true
>>>>>>>   httpPort: null
>>>>>>>   httpsPort: 8443
>>>>>>>   clusterPort: 6007
>>>>>>>   clusterSecure: true
>>>>>>>   needClientAuth: true
>>>>>>>   provenanceStorage: "8 GB"
>>>>>>>   siteToSite:
>>>>>>>     secure: true
>>>>>>>     port: 10000
>>>>>>>   authorizer: managed-authorizer
>>>>>>>   # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
>>>>>>>   safetyValve:
>>>>>>>     # nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
>>>>>>>     nifi.web.http.network.interface.default: eth0
>>>>>>>     # listen to loopback interface so "kubectl port-forward ..." works
>>>>>>>     nifi.web.http.network.interface.lo: lo
>>>>>>>
>>>>>>> ## Include additional libraries in the Nifi containers by using the postStart handler
>>>>>>> ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
>>>>>>> # postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar
>>>>>>>
>>>>>>> # Nifi User Authentication
>>>>>>> auth:
>>>>>>>   ldap:
>>>>>>>     enabled: false
>>>>>>>     host: ldap://<hostname>:<port>
>>>>>>>     searchBase: CN=Users,DC=example,DC=com
>>>>>>>     searchFilter: CN=john
>>>>>>>
>>>>>>> ## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service)
>>>>>>> ## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
>>>>>>> ## ref: http://kubernetes.io/docs/user-guide/services/
>>>>>>> ##
>>>>>>>
>>>>>>> # headless service
>>>>>>> headless:
>>>>>>>   type: ClusterIP
>>>>>>>   annotations:
>>>>>>>     service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
>>>>>>>
>>>>>>> # ui service
>>>>>>> service:
>>>>>>>   type: LoadBalancer
>>>>>>>   httpPort: 80
>>>>>>>   httpsPort: 443
>>>>>>>   annotations: {}
>>>>>>>   # loadBalancerIP:
>>>>>>>   ## Load Balancer sources
>>>>>>>   ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
>>>>>>>   ##
>>>>>>>   # loadBalancerSourceRanges:
>>>>>>>   #   - 10.10.10.0/24
>>>>>>>
>>>>>>>   # Enables additional port/ports to nifi service for internal processors
>>>>>>>   processors:
>>>>>>>     enabled: false
>>>>>>>     ports:
>>>>>>>       - name: processor01
>>>>>>>         port: 7001
>>>>>>>         targetPort: 7001
>>>>>>>         # nodePort: 30701
>>>>>>>       - name: processor02
>>>>>>>         port: 7002
>>>>>>>         targetPort: 7002
>>>>>>>         # nodePort: 30702
>>>>>>>
>>>>>>> ## Configure Ingress based on the documentation here:
>>>>>>> ## https://kubernetes.io/docs/concepts/services-networking/ingress/
>>>>>>> ##
>>>>>>> ingress:
>>>>>>>   enabled: false
>>>>>>>   annotations: {}
>>>>>>>   tls: []
>>>>>>>   hosts: []
>>>>>>>   path: /
>>>>>>>   rule: []
>>>>>>>   # If you want to change the default path, see this issue:
>>>>>>>   # https://github.com/cetic/helm-nifi/issues/22
>>>>>>>
>>>>>>> # Amount of memory to give the NiFi java heap
>>>>>>> jvmMemory: 2g
>>>>>>>
>>>>>>> # Separate image for tailing each log separately
>>>>>>> sidecar:
>>>>>>>   image: ez123/alpine-tini
>>>>>>>
>>>>>>> # Busybox image
>>>>>>> busybox:
>>>>>>>   image: busybox
>>>>>>>
>>>>>>> ## Enable persistence using Persistent Volume Claims
>>>>>>> ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
>>>>>>> ##
>>>>>>> persistence:
>>>>>>>   enabled: false
>>>>>>>
>>>>>>>   # When creating persistent storage, the NiFi helm chart can either reference an already-defined
>>>>>>>   # storage class by name, such as "standard", or can define a custom storage class by specifying
>>>>>>>   # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
>>>>>>>   # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
>>>>>>>   #
>>>>>>>   # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
>>>>>>>   # For example:
>>>>>>>   # storageClass: standard
>>>>>>>   #
>>>>>>>   # The default storage class is used if this variable is not set.
>>>>>>>
>>>>>>>   accessModes: [ReadWriteOnce]
>>>>>>>   ## Storage Capacities for persistent volumes
>>>>>>>   # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
>>>>>>>   dataStorage:
>>>>>>>     size: 1Gi
>>>>>>>   # Storage capacity for the FlowFile repository
>>>>>>>   flowfileRepoStorage:
>>>>>>>     size: 10Gi
>>>>>>>   # Storage capacity for the Content repository
>>>>>>>   contentRepoStorage:
>>>>>>>     size: 10Gi
>>>>>>>   # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above.
>>>>>>>   provenanceRepoStorage:
>>>>>>>     size: 10Gi
>>>>>>>   # Storage capacity for nifi logs
>>>>>>>   logStorage:
>>>>>>>     size: 5Gi
>>>>>>>
>>>>>>> ## Configure resource requests and limits
>>>>>>> ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
>>>>>>> ##
>>>>>>> resources: {}
>>>>>>>   # We usually recommend not to specify default resources and to leave this as a conscious
>>>>>>>   # choice for the user. This also increases chances charts run on environments with little
>>>>>>>   # resources, such as Minikube. If you do want to specify resources, uncomment the following
>>>>>>>   # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
>>>>>>>   # limits:
>>>>>>>   #   cpu: 100m
>>>>>>>   #   memory: 128Mi
>>>>>>>   # requests:
>>>>>>>   #   cpu: 100m
>>>>>>>   #   memory: 128Mi
>>>>>>>
>>>>>>> logresources:
>>>>>>>   requests:
>>>>>>>     cpu: 10m
>>>>>>>     memory: 10Mi
>>>>>>>   limits:
>>>>>>>     cpu: 50m
>>>>>>>     memory: 50Mi
>>>>>>>
>>>>>>> nodeSelector: {}
>>>>>>>
>>>>>>> tolerations: []
>>>>>>>
>>>>>>> initContainers: {}
>>>>>>> # foo-init:  # <- will be used as container name
>>>>>>> #   image: "busybox:1.30.1"
>>>>>>> #   imagePullPolicy: "IfNotPresent"
>>>>>>> #   command: ['sh', '-c', 'echo this is an initContainer']
>>>>>>> #   volumeMounts:
>>>>>>> #     - mountPath: /tmp/foo
>>>>>>> #       name: foo
>>>>>>>
>>>>>>> extraVolumeMounts: []
>>>>>>>
>>>>>>> extraVolumes: []
>>>>>>>
>>>>>>> ## Extra containers
>>>>>>> extraContainers: []
>>>>>>>
>>>>>>> terminationGracePeriodSeconds: 30
>>>>>>>
>>>>>>> ## Extra environment variables that will be passed onto deployment pods
>>>>>>> env: []
>>>>>>>
>>>>>>> # ------------------------------------------------------------------------------
>>>>>>> # Zookeeper:
>>>>>>> # ------------------------------------------------------------------------------
>>>>>>> zookeeper:
>>>>>>>   ## If true, install the Zookeeper chart
>>>>>>>   ## ref: https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
>>>>>>>   enabled: true
>>>>>>>   ## If the Zookeeper Chart is disabled a URL and port are required to connect
>>>>>>>   url: ""
>>>>>>>   port: 2181
>>>>>>>
>>>>>>> *Complete stacktrace:*
>>>>>>> Caused by: org.springframework.beans.factory.BeanCreationException:
>>>>>>> Error creating bean with name
>>>>>>> 'protocolSocketConfiguration': FactoryBean threw exception on object
>>>>>>> creation; nested exception is java.io.FileNotFoundException: (No
>>>>>>> such file or directory)
>>>>>>>   at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:359)
>>>>>>>   at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:108)
>>>>>>>   at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:648)
>>>>>>>   at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:145)
>>>>>>>   at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1198)
>>>>>>>   at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1100)
>>>>>>>   at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:511)
>>>>>>>   at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
>>>>>>>   at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
>>>>>>>   at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
>>>>>>>   at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
>>>>>>>   at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
>>>>>>>   at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:351)
>>>>>>>   ... 75 common frames omitted
>>>>>>> Caused by: org.springframework.beans.factory.BeanCreationException:
>>>>>>> Error creating bean with name 'protocolSocketConfiguration':
>>>>>>> FactoryBean threw exception on object creation; nested exception is
>>>>>>> java.io.FileNotFoundException: (No such file or directory)
>>>>>>>   at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:185)
>>>>>>>   at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
>>>>>>>   at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1640)
>>>>>>>   at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:323)
>>>>>>>   at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
>>>>>>>   at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:351)
>>>>>>>   ... 87 common frames omitted
>>>>>>> Caused by: java.io.FileNotFoundException: (No such file or directory)
>>>>>>>   at java.io.FileInputStream.open0(Native Method)
>>>>>>>   at java.io.FileInputStream.open(FileInputStream.java:195)
>>>>>>>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>>>>>>>   at java.io.FileInputStream.<init>(FileInputStream.java:93)
>>>>>>>   at org.apache.nifi.io.socket.SSLContextFactory.<init>(SSLContextFactory.java:66)
>>>>>>>   at org.apache.nifi.cluster.protocol.spring.SocketConfigurationFactoryBean.getObject(SocketConfigurationFactoryBean.java:45)
>>>>>>>   at org.apache.nifi.cluster.protocol.spring.SocketConfigurationFactoryBean.getObject(SocketConfigurationFactoryBean.java:30)
>>>>>>>   at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:178)
>>>>>>>   ... 92 common frames omitted
>>>>>>> 2020-07-17 11:04:25,204 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
>>>>>>> 2020-07-17 11:04:25,214 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@700f518a{SSL,[ssl, http/1.1]}{0.0.0.0:8443}
>>>>>>> 2020-07-17 11:04:25,214 INFO [Thread-1] org.eclipse.jetty.server.session node0 Stopped scavenging
>>>>>>>
>>>>>>> Any help to resolve this is appreciated.
>>>>>>> Atul Wankhade
>
> --
> Thanks
>
> Sushil Kumar
> +1-(206)-698-4116
