RE: DBCPConnectionPool Not looking up

2018-10-25 Thread Cardinal, Alexandre
Hey Martijn,

I had this exact issue once, and simply setting the catalog name fixed it for 
me. Can you share (without the server names, of course) the configuration of a
processor that does this?

Also, are you running NiFi 1.7? This issue never happened to me in 1.5 but 
happened in 1.7.

Thanks
-Alexandre

From: Martijn Dekkers [mailto:mart...@dekkers.org.uk]
Sent: October 25, 2018 5:22 PM
To: users@nifi.apache.org
Subject: Re: DBCPConnectionPool Not looking up

Apologies, I am talking about the DBCPConnectionPoolLookup. Long day...


On Thu, 25 Oct 2018, at 23:13, Martijn Dekkers wrote:
Hello all,

We ran into a weird issue. We use a DBCPConnectionPool to select the correct db 
for specific queries. Every so often (3 times now today, few times yesterday) 
the DBCPConnectionPool will bug out with "Attributes must contain an attribute 
name 'database.name'" even though the database.name attribute is present in 
the flowfiles, and correct.

Sometimes restarting the ConnectionPool will help, sometimes it just "fixes itself" 
again. We have several SQL processors using this pool, and some send lots of 
small queries, and these are in different process groups. The issue always 
occurs with a specific PutSQL processor. We have a few ExecuteSQL processors 
that work fine.

The logs don't show anything out of order.

Can anyone suggest a more thorough way of debugging this?

Thanks

Martijn
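For anyone unfamiliar with the lookup service being discussed, the failure mode can be sketched roughly like this (illustrative Python only, not NiFi source; the class and pool names are made up):

```python
# Rough sketch of the lookup contract described in this thread: a
# DBCPConnectionPoolLookup-style service resolves the target pool from the
# flowfile's "database.name" attribute and fails fast when it is missing.

class ConnectionPoolLookup:
    def __init__(self, pools):
        # pools: mapping of database name -> pool object (string stand-ins here)
        self.pools = pools

    def get_pool(self, attributes):
        name = attributes.get("database.name")
        if name is None:
            # Mirrors the error message quoted above.
            raise ValueError(
                "Attributes must contain an attribute name 'database.name'")
        return self.pools[name]

lookup = ConnectionPoolLookup({"sales": "sales-pool", "hr": "hr-pool"})
print(lookup.get_pool({"database.name": "sales"}))  # sales-pool
```

The point of the sketch: the error can only fire if the attribute map handed to the lookup lacks the key at that instant, which is why an intermittent failure suggests a race or a stale attribute map rather than a bad flow design.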





Re: DBCPConnectionPool Not looking up

2018-10-25 Thread Martijn Dekkers
Apologies, I am talking about the DBCPConnectionPoolLookup. Long day...

On Thu, 25 Oct 2018, at 23:13, Martijn Dekkers wrote:
> Hello all,
> 
> We ran into a weird issue. We use a DBCPConnectionPool to select the
> correct db for specific queries. Every so often (3 times now today,
> few times yesterday) the DBCPConnectionPool will bug out with
> "Attributes must contain an attribute name 'database.name'" even
> though the database.name attribute is present in the flowfiles, and
> correct.
> 
> Sometimes restarting the ConnectionPool will help, sometimes it just
> "fixes" again. We have several SQL processors using this pool, and
> some send lots of small queries, and these are in different process
> groups. The issue always occurs with a specific PutSQL processor. We
> have a few ExecuteSQL processors that work fine.
> 
> The logs don't show anything out of order. 
> 
> Can anyone suggest a more thorough way of debugging this? 
> 
> Thanks
> 
> Martijn



DBCPConnectionPool Not looking up

2018-10-25 Thread Martijn Dekkers
Hello all,

We ran into a weird issue. We use a DBCPConnectionPool to select the
correct db for specific queries. Every so often (3 times now today, few
times yesterday) the DBCPConnectionPool will bug out with "Attributes
must contain an attribute name 'database.name'" even though the
database.name attribute is present in the flowfiles, and correct.
Sometimes restarting the ConnectionPool will help, sometimes it just
"fixes" again. We have several SQL processors using this pool, and some
send lots of small queries, and these are in different process groups.
The issue always occurs with a specific PutSQL processor. We have a few
ExecuteSQL processors that work fine.
The logs don't show anything out of order. 

Can anyone suggest a more thorough way of debugging this? 

Thanks

Martijn
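One generic next step (an assumption on my part, not something confirmed in this thread) is to raise the log level for NiFi's DBCP components in conf/logback.xml; the logger name below is guessed from NiFi's org.apache.nifi.dbcp package layout and may need adjusting:

```xml
<!-- conf/logback.xml: log DBCP pool/lookup activity verbosely.
     Logger name is assumed from NiFi's package layout. -->
<logger name="org.apache.nifi.dbcp" level="DEBUG"/>
```

With that in place, the nifi-app.log should show each lookup attempt around the time the error fires, which helps tell a missing attribute apart from a lookup-side race.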


Re: NiFi Toolkit CLI issues with NiFi/Registry SSL handshake

2018-10-25 Thread Bryan Bende
Glad you were able to get it working.

Regarding the super user comment... I don't think it has to be a super
user, but it has to be a user that has permissions to perform the
action. For your example of "nifi pg-import" it would have to be a
user that has write permission to the parent process group where you
are importing. If you don't use a proxied entity and just the
keystore, the NiFi user by default does not have write to any process
groups, the servers only get /proxy and /controller, so you could go
in and grant the server nodes permissions to other things, but by
default it wouldn't work.

Regarding the slow execution... Currently you can only run 1 command
or use interactive mode, but it could be a nice improvement to execute
a series of commands using the one JVM instance. I would be curious to
know which part is slow though, is it really that slow spinning up the
JVM, or is the actual call to retrieve the buckets slow?
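To separate those two costs, one rough check (the cli.sh path and Registry URL are placeholders, not from the thread) is to time a bare JVM start against the full CLI invocation:

```shell
# Baseline: JVM startup alone on this host/container.
time java -version

# JVM startup plus the actual Registry call (placeholder URL).
time ./bin/cli.sh registry list-buckets -u https://registry.example.com:18443
```

If the first number is already close to 20 seconds, the container (CPU limits, entropy, memory) is the problem rather than the Registry call.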
On Wed, Oct 24, 2018 at 9:24 PM ara m.  wrote:
>
> D'oh. Thanks.. I forgot I'm spinning up jvm in the CLI. You'd think after a
> week of troubleshooting NiFi<->Reg ssl I'd have thought of this.
>
> Let's just say I had some jvm flags to set of my own besides the debugging
> one...
> -Dcom.ibm.jsse2.overrideDefaultTLS=true
>
> Then I hit a few stumbles and worked through them. I am not sure why I
> needed a super-user as a proxiedUser to get anything done. I should have
> been able to use the user in the keystore, as it has permissions in NiFI,
> but for some reason I was getting errors with "> nifi pg-import" code.
>
> The next thing to fix: it seems spinning up the JVM to run the tasks is
> quite slow. It takes 20+ seconds to run one command like list-buckets.
> Maybe I have to increase the memory for my container.
>
> Is there no way to pass the CLI several lines of commands to run? I know
> there are benefits to using back-references for bucket and flow, which I
> kind of lose when executing CLI commands from the outside.
>
> Still thank you very much for getting my head straight!
>
>
>
> --
> Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/


Re: Recommended NiFi Docker volume mappings?

2018-10-25 Thread Stephen Greszczyszyn
@Juan, thanks for the tip. I'm probably not going to mount logs, as I'm using
Elastic Filebeat to tail any live docker container logs - assuming that the
NiFi docker image is configured to use the standard "docker logs" API.

I've also found from experience that it is safer to move the default root
Docker volume location from /var/lib to a larger drive as it can blow up
and fill up root very quickly depending on how the internal docker app
behaves or writes logs.

On Thu, 25 Oct 2018 at 14:15, Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:

> I suggest being careful when mounting the log directory. In one day it can
> fill up several gigabytes. If you want to mount logs, adjust the logging.
>
> On Thu, 25 Oct 2018 at 10:07 Stephen Greszczyszyn 
> wrote:
>
>>
>>
>> On Thu, 25 Oct 2018 at 12:50, Peter Wilcsinszky <
>> peterwilcsins...@gmail.com> wrote:
>>
>> But even with 1.8 I'll need to declare the host mount directory somehow
>> via docker-compose, as how will the built docker image on dockerhub know
>> where to locally mount the internal $(NIFI_HOME) volumes as described below?
>>
>> VOLUME ${NIFI_LOG_DIR} \
>>    ${NIFI_HOME}/conf \
>>    ${NIFI_HOME}/database_repository \
>>    ${NIFI_HOME}/flowfile_repository \
>>    ${NIFI_HOME}/content_repository \
>>    ${NIFI_HOME}/provenance_repository \
>>    ${NIFI_HOME}/state

>>>
>>> Yes you should specify volumes explicitly if you use 1.7.1, but also you
>>> should specify an extra separate volume to use for your incoming SFTP data.
>>>
>>>


Re: DistributedMapCacheServer controller service information is not getting saved in template

2018-10-25 Thread Bryan Bende
Hello,

Currently templates still have the issue mentioned in that JIRA, but
if you are trying to move flows between environments I would recommend
taking a look at NiFi Registry, which is much more powerful than
templates; a versioned flow saved to the registry will contain all of
the controller services.

Thanks,

Bryan
On Thu, Oct 25, 2018 at 5:54 AM Kumara M S, Hemantha (Nokia -
IN/Bangalore)  wrote:
>
> Hi All,
>
>
>
> Currently, in one of my NiFi instances, a flow is using a DistributedMapCacheServer
> controller service, and I want to run the same flow in another instance. I tried
> saving and loading a template, but the DistributedMapCacheServer controller is not
> saved in the template.
>
> 1.   I see JIRA issue NIFI-1293 related to this, but there have been no updates
> for a long time. Can anyone suggest a way to save the DistributedMapCacheServer
> controller service in a template?
>
>
>
> Thanks,
>
> Hemantha


Re: PutParquet - Array contains null element at 0

2018-10-25 Thread Bryan Bende
Currently I don't think there is a way that config value can be set without
a code change, but if you want to create a JIRA it would probably make
sense to expose that as a property in the processor to toggle between true
and false, or we could also make it so that any dynamic properties get
passed through to the Parquet writer's conf.

On Thu, Oct 25, 2018 at 2:58 AM Ken Tore Tallakstad 
wrote:

> Hi,
>
> We have an issue with PutParquet (NiFi 1.7.1), well with the parquet lib
> to be precise, and array type data containing null values.
> This is a schema snippet of the field in question:
> {
>   "name": "adresse",
>   "type": ["null", {"type": "array", "items": ["null", "string"], "default": null}],
>   "default": null
> },
>
> And a corresponding data example:
> "adresse" : [ null, "value1" ],
> "adresse" : [ null, "value2" ],
> "adresse" : [ "value3", null, "value4" ],
>
> Avro does not seem to have a problem with this and all our records pass,
> but PutParquet fails with the following error: "Array contains a null
> element at X".
>
> Apparently there is a parquet config to allow
> this: parquet.avro.write-old-list-structure=false. Any tips on how to set
> it? Are there any other ways around this, besides stripping the raw data of
> nulls in arrays?
>
> Thanks!
>
> KT :)
>
>
> [image: parq1.png]
>
>
> [image: parq2.png]
>


Re: Recommended NiFi Docker volume mappings?

2018-10-25 Thread Peter Wilcsinszky
If you want them to be on your host machine then yes, you have to declare
those. By default docker will create directories for those volumes on the
docker host under /var/lib/docker/volumes/. Note: the docker host is
typically running in a VM, at least this is the case on Docker for Mac.
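To see where those directories actually land on the docker host, one can inspect the volumes (the volume name below is a placeholder):

```shell
# List the volumes docker has created, then print one volume's host mountpoint.
docker volume ls
docker volume inspect nifi_state --format '{{ .Mountpoint }}'
```

On Docker for Mac the printed path lives inside the VM, so it will not be directly browsable from the macOS filesystem.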

On Thu, Oct 25, 2018 at 3:07 PM Stephen Greszczyszyn 
wrote:

>
>
> On Thu, 25 Oct 2018 at 12:50, Peter Wilcsinszky <
> peterwilcsins...@gmail.com> wrote:
>
> But even with 1.8 I'll need to declare the host mount directory somehow
> via docker-compose, as how will the built docker image on dockerhub know
> where to locally mount the internal $(NIFI_HOME) volumes as described below?
>
> VOLUME ${NIFI_LOG_DIR} \
>    ${NIFI_HOME}/conf \
>    ${NIFI_HOME}/database_repository \
>    ${NIFI_HOME}/flowfile_repository \
>    ${NIFI_HOME}/content_repository \
>    ${NIFI_HOME}/provenance_repository \
>    ${NIFI_HOME}/state
>>>
>>
>> Yes you should specify volumes explicitly if you use 1.7.1, but also you
>> should specify an extra separate volume to use for your incoming SFTP data.
>>
>>


Re: Recommended NiFi Docker volume mappings?

2018-10-25 Thread Juan Pablo Gardella
I suggest being careful when mounting the log directory. In one day it can
fill up several gigabytes. If you want to mount logs, adjust the logging.

On Thu, 25 Oct 2018 at 10:07 Stephen Greszczyszyn 
wrote:

>
>
> On Thu, 25 Oct 2018 at 12:50, Peter Wilcsinszky <
> peterwilcsins...@gmail.com> wrote:
>
> But even with 1.8 I'll need to declare the host mount directory somehow
> via docker-compose, as how will the built docker image on dockerhub know
> where to locally mount the internal $(NIFI_HOME) volumes as described below?
>
> VOLUME ${NIFI_LOG_DIR} \
>    ${NIFI_HOME}/conf \
>    ${NIFI_HOME}/database_repository \
>    ${NIFI_HOME}/flowfile_repository \
>    ${NIFI_HOME}/content_repository \
>    ${NIFI_HOME}/provenance_repository \
>    ${NIFI_HOME}/state
>>>
>>
>> Yes you should specify volumes explicitly if you use 1.7.1, but also you
>> should specify an extra separate volume to use for your incoming SFTP data.
>>
>>


Re: Recommended NiFi Docker volume mappings?

2018-10-25 Thread Stephen Greszczyszyn
On Thu, 25 Oct 2018 at 12:50, Peter Wilcsinszky 
wrote:

But even with 1.8 I'll need to declare the host mount directory somehow via
docker-compose, as how will the built docker image on dockerhub know where
to locally mount the internal $(NIFI_HOME) volumes as described below?

VOLUME ${NIFI_LOG_DIR} \
   ${NIFI_HOME}/conf \
   ${NIFI_HOME}/database_repository \
   ${NIFI_HOME}/flowfile_repository \
   ${NIFI_HOME}/content_repository \
   ${NIFI_HOME}/provenance_repository \
   ${NIFI_HOME}/state
>
> Yes you should specify volumes explicitly if you use 1.7.1, but also you
> should specify an extra separate volume to use for your incoming SFTP data.
>
>


Re: Recommended NiFi Docker volume mappings?

2018-10-25 Thread Peter Wilcsinszky
On Thu, Oct 25, 2018 at 1:01 PM Stephen Greszczyszyn 
wrote:

> Thanks for the reply Peter,
>
> You are right, last night when I tried mapping just /opt/nifi from NiFi
> version 1.7.1 the container wasn't happy starting up and I couldn't figure
> out what folders were needed to store state and manage any configurations.
>
> Just to be clear, should I be mapping the following volumes to local
> folders that have read/write access for host user ID 1000 (or a Linux group
> that user 1000 is a member of) for the internal docker user nifi (UID 1000)
> to be able to access?  I guess there is no way to change the UID of docker
> user nifi without doing a custom docker build.  For security/LDAP, I'm
> assuming I can just pass the environment variables through as documented on
> the README.md?
>
> VOLUME ${NIFI_LOG_DIR} \
>${NIFI_HOME}/conf \
>${NIFI_HOME}/database_repository \
>${NIFI_HOME}/flowfile_repository \
>${NIFI_HOME}/content_repository \
>${NIFI_HOME}/provenance_repository \
>${NIFI_HOME}/state
>

Yes you should specify volumes explicitly if you use 1.7.1, but also you
should specify an extra separate volume to use for your incoming SFTP data.


>
> I'm trying to automate the docker config using docker-compose via ansible,
> so normally I use a framework like this:
>
> - name: Create local host nifi state directories in /data/nifi/
>   file:
> path: "{{ item }}"
> state: directory
> owner: 1000
> group: 1000
> mode: 0775
>   with_items:
>   - /data/nifi
>   - /data/nifi/conf
>   - /data/nifi/state
>   - /data/nifi/database_repository
>   - /data/nifi/flowfile_repository
>   - /data/nifi/content_repository
>   - /data/nifi/provenance_repository
>
> - name: Build NiFi Docker Image
>   docker_service:
> project_name: nifi
> definition:
>   version: '2'
>   services:
> nifi:
>   image: apache/nifi:{{ nifi_version }}
>   container_name: nifi
>   restart: on-failure
> #  environment:
>
>   volumes:
> # take uid/gid lists from host to give same user/group
> # permissions mapping as host
> #- /etc/passwd:/etc/passwd
> #- /etc/group:/etc/group
>
> # Give NiFi access to read/write in /data
> - /data:/data
>
> # Expose NiFi config and state directories
> - /data/nifi/conf:/opt/nifi/conf
> - /data/nifi/state:/data/nifi/state
> - /data/nifi/database_repository:/opt/nifi/database_repository
> - /data/nifi/flowfile_repository:/opt/nifi/flowfile_repository
> - /data/nifi/content_repository:/opt/nifi/content_repository
> - /data/nifi/provenance_repository:/opt/nifi/provenance_repository
>
>   ports:
> - 8080:8080
> - 8443:8443
> - 1:1
>
> On Thu, 25 Oct 2018 at 11:02, Peter Wilcsinszky <
> peterwilcsins...@gmail.com> wrote:
>
>> Hi Stephen,
>>
>> I don't recommend mounting /opt/nifi directly as it will copy all the
>> NiFi binaries over to the volume as well, which is unnecessary I believe.
>> The latest dockerfile that will be used to build the docker image for the
>> upcoming release already declares volumes that I recommend to leverage:
>>
>> https://github.com/apache/nifi/blob/master/nifi-docker/dockerhub/Dockerfile#L73
>>
>> However if you have special needs you can always tweak the dockerfile and
>> build your own image from it.
>>
>> On Wed, Oct 24, 2018 at 10:04 PM Stephen Greszczyszyn 
>> wrote:
>>
>>> Hi there,
>>>
>>> I'm trying to get a working configuration for the official vanilla NiFi
>>> docker image where it can read existing SFTP incoming data as well as allow
>>> me to pass in any necessary configuration files.
>>>
>>> The problem seems to be that by default the docker container picks up
>>> userID 1000 to run the nifi process, which is OK since I mapped my
>>> /etc/passwd and /etc/group volumes and I'm managing the directory
>>> read/write access through my underlying OS (Ubuntu 18.04).
>>>
>>> Where I am having problems is mapping the docker NiFi /opt/nifi
>>> directory to a local directory, despite the permissions looking OK.  I've
>>> even set my local /data/nifi directory to chmod 777, but the docker
>>> container fails to start.
>>>
>>> Any suggestions on how to resolve this?  Also any best practices for
>>> mapping the NiFi internal docker volumes to the local OS would be
>>> appreciated.
>>>
>>> Thanks,
>>>
>>> Stephen
>>>
>>


Re: Recommended NiFi Docker volume mappings?

2018-10-25 Thread Stephen Greszczyszyn
Thanks for the reply Peter,

You are right, last night when I tried mapping just /opt/nifi from NiFi
version 1.7.1 the container wasn't happy starting up and I couldn't figure
out what folders were needed to store state and manage any configurations.

Just to be clear, should I be mapping the following volumes to local
folders that have read/write access for host user ID 1000 (or a Linux group
that user 1000 is a member of) for the internal docker user nifi (UID 1000)
to be able to access?  I guess there is no way to change the UID of docker
user nifi without doing a custom docker build.  For security/LDAP, I'm
assuming I can just pass the environment variables through as documented on
the README.md?

VOLUME ${NIFI_LOG_DIR} \
   ${NIFI_HOME}/conf \
   ${NIFI_HOME}/database_repository \
   ${NIFI_HOME}/flowfile_repository \
   ${NIFI_HOME}/content_repository \
   ${NIFI_HOME}/provenance_repository \
   ${NIFI_HOME}/state

I'm trying to automate the docker config using docker-compose via ansible,
so normally I use a framework like this:

- name: Create local host nifi state directories in /data/nifi/
  file:
path: "{{ item }}"
state: directory
owner: 1000
group: 1000
mode: 0775
  with_items:
  - /data/nifi
  - /data/nifi/conf
  - /data/nifi/state
  - /data/nifi/database_repository
  - /data/nifi/flowfile_repository
  - /data/nifi/content_repository
  - /data/nifi/provenance_repository

- name: Build NiFi Docker Image
  docker_service:
project_name: nifi
definition:
  version: '2'
  services:
nifi:
  image: apache/nifi:{{ nifi_version }}
  container_name: nifi
  restart: on-failure
#  environment:

      volumes:
        # take uid/gid lists from host to give same user/group
        # permissions mapping as host
        #- /etc/passwd:/etc/passwd
        #- /etc/group:/etc/group

        # Give NiFi access to read/write in /data
        - /data:/data

        # Expose NiFi config and state directories
        - /data/nifi/conf:/opt/nifi/conf
        - /data/nifi/state:/data/nifi/state
        - /data/nifi/database_repository:/opt/nifi/database_repository
        - /data/nifi/flowfile_repository:/opt/nifi/flowfile_repository
        - /data/nifi/content_repository:/opt/nifi/content_repository
        - /data/nifi/provenance_repository:/opt/nifi/provenance_repository

      ports:
        - 8080:8080
        - 8443:8443
        - 1:1

On Thu, 25 Oct 2018 at 11:02, Peter Wilcsinszky 
wrote:

> Hi Stephen,
>
> I don't recommend mounting /opt/nifi directly as it will copy all the NiFi
> binaries over to the volume as well, which is unnecessary I believe. The
> latest dockerfile that will be used to build the docker image for the
> upcoming release already declares volumes that I recommend to leverage:
>
> https://github.com/apache/nifi/blob/master/nifi-docker/dockerhub/Dockerfile#L73
>
> However if you have special needs you can always tweak the dockerfile and
> build your own image from it.
>
> On Wed, Oct 24, 2018 at 10:04 PM Stephen Greszczyszyn 
> wrote:
>
>> Hi there,
>>
>> I'm trying to get a working configuration for the official vanilla NiFi
>> docker image where it can read existing SFTP incoming data as well as allow
>> me to pass in any necessary configuration files.
>>
>> The problem seems to be that by default the docker container picks up
>> userID 1000 to run the nifi process, which is OK since I mapped my
>> /etc/passwd and /etc/group volumes and I'm managing the directory
>> read/write access through my underlying OS (Ubuntu 18.04).
>>
>> Where I am having problems is mapping the docker NiFi /opt/nifi directory
>> to a local directory, despite the permissions looking OK.  I've even set my
>> local /data/nifi directory to chmod 777, but the docker container fails to
>> start.
>>
>> Any suggestions on how to resolve this?  Also any best practices for
>> mapping the NiFi internal docker volumes to the local OS would be
>> appreciated.
>>
>> Thanks,
>>
>> Stephen
>>
>


Re: Recommended NiFi Docker volume mappings?

2018-10-25 Thread Peter Wilcsinszky
Hi Stephen,

I don't recommend mounting /opt/nifi directly as it will copy all the NiFi
binaries over to the volume as well, which is unnecessary I believe. The
latest dockerfile that will be used to build the docker image for the
upcoming release already declares volumes that I recommend to leverage:
https://github.com/apache/nifi/blob/master/nifi-docker/dockerhub/Dockerfile#L73

However if you have special needs you can always tweak the dockerfile and
build your own image from it.

On Wed, Oct 24, 2018 at 10:04 PM Stephen Greszczyszyn 
wrote:

> Hi there,
>
> I'm trying to get a working configuration for the official vanilla NiFi
> docker image where it can read existing SFTP incoming data as well as allow
> me to pass in any necessary configuration files.
>
> The problem seems to be that by default the docker container picks up
> userID 1000 to run the nifi process, which is OK since I mapped my
> /etc/passwd and /etc/group volumes and I'm managing the directory
> read/write access through my underlying OS (Ubuntu 18.04).
>
> Where I am having problems is mapping the docker NiFi /opt/nifi directory
> to a local directory, despite the permissions looking OK.  I've even set my
> local /data/nifi directory to chmod 777, but the docker container fails to
> start.
>
> Any suggestions on how to resolve this?  Also any best practices for
> mapping the NiFi internal docker volumes to the local OS would be
> appreciated.
>
> Thanks,
>
> Stephen
>


DistributedMapCacheServer controller service information is not getting saved in template

2018-10-25 Thread Kumara M S, Hemantha (Nokia - IN/Bangalore)
Hi All,

Currently, in one of my NiFi instances, a flow is using a DistributedMapCacheServer 
controller service, and I want to run the same flow in another instance. I tried 
saving and loading a template, but the DistributedMapCacheServer controller is not 
saved in the template.
1.   I see JIRA issue NIFI-1293 related to this, but there have been no updates 
for a long time. Can anyone suggest a way to save the DistributedMapCacheServer 
controller service in a template?

Thanks,
Hemantha


PutParquet - Array contains null element at 0

2018-10-25 Thread Ken Tore Tallakstad
Hi,

We have an issue with PutParquet (NiFi 1.7.1), well with the parquet lib to
be precise, and array type data containing null values.
This is a schema snippet of the field in question:
{
  "name": "adresse",
  "type": ["null", {"type": "array", "items": ["null", "string"], "default": null}],
  "default": null
},

And a corresponding data example:
"adresse" : [ null, "value1" ],
"adresse" : [ null, "value2" ],
"adresse" : [ "value3", null, "value4" ],

Avro does not seem to have a problem with this and all our records pass,
but PutParquet fails with the following error: "Array contains a null
element at X".

Apparently there is a parquet config to allow
this: parquet.avro.write-old-list-structure=false. Any tips on how to set
it? Are there any other ways around this, besides stripping the raw data of
nulls in arrays?

Thanks!

KT :)


[image: parq1.png]


[image: parq2.png]
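As a sketch of the last workaround mentioned above (stripping nulls out of arrays before PutParquet), the pre-processing could look like this; illustrative Python only, not a drop-in NiFi script, and the field name "adresse" is taken from the schema in the thread:

```python
# Strip null elements from array-typed fields before writing Parquet.
# The surrounding record-handling (how records arrive) is an assumption.

def strip_array_nulls(record, array_fields):
    """Return a copy of record with None elements removed from the given array fields."""
    cleaned = dict(record)
    for field in array_fields:
        value = cleaned.get(field)
        if isinstance(value, list):
            cleaned[field] = [item for item in value if item is not None]
    return cleaned

record = {"adresse": ["value3", None, "value4"], "id": 1}
print(strip_array_nulls(record, ["adresse"]))
# {'adresse': ['value3', 'value4'], 'id': 1}
```

Note this changes the data, so it only fits when the null array entries carry no meaning downstream; otherwise the parquet.avro.write-old-list-structure route is the cleaner fix.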