Hi Pierre,

I have tested this situation on NiFi 1.15.0-RC3. The PutParquet processor
(Compression Type: SNAPPY) does not work correctly and gives the same error :(

To summarize the situation: this error happens only in NiFi 1.14.0 and
1.15.0.

Could you help us resolve this situation?

Thank you in advance,

--Bilal



-----Original Message-----
From: Pierre Villard <pierre.villard...@gmail.com> 
Sent: Wednesday, September 15, 2021 19:00
To: dev@nifi.apache.org
Subject: Re: [SPAM] RE: PutParquet - Compression Type: SNAPPY (NiFi 1.14.0)

OK so we upgraded the version of parquet-avro in [1]. The snappy-java version
between the two is not the same, so I *guess* there could be something weird
in terms of classloading, with another dependency conflicting. Checking.

[1]
https://github.com/apache/nifi/commit/5108d7cdd015ef14e98f2acfb61b0213972fe29e#diff-c2fdc63240d48e73764ca171426b65fc143249bd466b9ba71eb76664cceb4410
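
If it helps to check on your side, a minimal diagnostic sketch (my own
suggestion, not something NiFi ships) is to print which jar and classloader
each of the two Snappy classes resolves from; if SnappyApi and SnappyNative
come from different locations, that would confirm the conflict:

    import java.security.CodeSource;

    // Sketch: run on the same classpath as the failing code, e.g. as a
    // throwaway test in the nifi-parquet-processors module (an assumption).
    public class SnappyClassOrigin {
        public static void main(String[] args) throws ClassNotFoundException {
            for (String name : new String[] {
                    "org.xerial.snappy.SnappyApi",
                    "org.xerial.snappy.SnappyNative" }) {
                Class<?> c = Class.forName(name);
                // CodeSource is null when the bootstrap loader loaded the class.
                CodeSource src = c.getProtectionDomain().getCodeSource();
                System.out.println(name + " from " + src + " via " + c.getClassLoader());
            }
        }
    }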

On Wed, Sep 15, 2021 at 15:53, Bilal Bektas <bilal.bek...@obase.com> wrote:

> Hi Pierre,
>
> Thank you for your reply.
>
> To keep the testing lean, the default configuration was generally used:
> * nifi-env.sh file has the default configuration.
> * bootstrap.conf file has the default configuration.
> * nifi.properties file has the default configuration except for the
> security configuration.
> * PutParquet Processor has the default configuration except for the
> properties I mentioned in the previous post. (But SNAPPY compression is
> not working.)
> * ConvertAvroToParquet Processor has the default configuration.
> (SNAPPY compression is working correctly.)
> * There is no custom processor in our NiFi environment.
> * There is no custom lib location in the NiFi properties.
>
> As I mentioned in my previous email, this error happens only in NiFi
> 1.14.0.
>
> Thank you for helping,
>
> --Bilal
>
>
> -----Original Message-----
> From: Pierre Villard <pierre.villard...@gmail.com>
> Sent: Wednesday, September 15, 2021 13:50
> To: dev@nifi.apache.org
> Subject: Re: [SPAM] RE: PutParquet - Compression Type: SNAPPY (NiFi 1.14.0)
>
> Hi,
>
> Is there something you changed in how the Snappy libraries are loaded
> into the NiFi JVM?
> I'm not familiar with this specifically but you may need more recent 
> versions of the native libraries.
> Also, anything related to a snappy-java lib you added somewhere? /lib?
> custom processor? etc.
>
> I believe it may be related to 
> https://github.com/apache/nifi/pull/4601
> where we're adding snappy-java and maybe there is a version conflict there.
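>
> As a quick check (purely a sketch; the work/nar path below is an
> assumption, adjust it for your install), you could list every snappy-java
> jar that NiFi unpacked from its NARs to see whether two different versions
> end up on the classpath:
>
>     import java.io.IOException;
>     import java.nio.file.Files;
>     import java.nio.file.Path;
>     import java.nio.file.Paths;
>     import java.util.stream.Stream;
>
>     public class FindSnappyJars {
>         public static void main(String[] args) throws IOException {
>             // Assumption: NARs are unpacked under <NIFI_HOME>/work/nar.
>             Path root = Paths.get(args.length > 0 ? args[0] : "/opt/nifi/work/nar");
>             try (Stream<Path> files = Files.walk(root)) {
>                 files.filter(p -> {
>                         String n = p.getFileName().toString();
>                         return n.startsWith("snappy-java-") && n.endsWith(".jar");
>                     })
>                     .forEach(System.out::println);
>             }
>         }
>     }
>
> Running it against lib/ as well would catch a manually added copy.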
>
> Pierre
>
> On Wed, Sep 15, 2021 at 12:38, Bilal Bektas <bilal.bek...@obase.com>
> wrote:
>
> > Hi Dev Team,
> >
> > To help us detect the problem, is there anything you expect from us,
> > or is there anything we can help with?
> >
> > Thank you in advance,
> >
> > --Bilal
> >
> >
> > -----Original Message-----
> > From: Bilal Bektas <bilal.bek...@obase.com>
> > Sent: Wednesday, September 1, 2021 12:43
> > To: dev@nifi.apache.org
> > Subject: [SPAM] RE: PutParquet - Compression Type: SNAPPY (NiFi 1.14.0)
> >
> > Hi Dev Team,
> >
> > I have tested the different compression types supported by the
> > PutParquet and ConvertAvroToParquet Processors on different NiFi
> > versions.
> >
> > To avoid confusion, I will give a summary:
> >
> > * Compression types (UNCOMPRESSED, GZIP, SNAPPY) of the PutParquet
> > Processor work correctly on NiFi 1.12.1 and 1.13.2.
> > * Compression types (UNCOMPRESSED, GZIP) of the PutParquet Processor
> > work correctly on NiFi 1.14.0; SNAPPY gives the error quoted in my
> > previous email.
> >
> > * Compression types (UNCOMPRESSED, GZIP, SNAPPY) of the
> > ConvertAvroToParquet Processor work correctly on NiFi 1.12.1, 1.13.2,
> > and 1.14.0.
> >
> >
> > PutParquet - Properties:
> > * Hadoop Configuration Resources: File locations
> > * Kerberos Credentials Service: Keytab service
> > * Record Reader: AvroReader Service (Embedded Avro Schema)
> > * Overwrite Files: True
> > * Compression Type: SNAPPY
> > * Other Properties: Default
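> >
> > For context, the write path this configuration exercises is roughly the
> > following standalone sketch (the schema and output path are placeholders
> > of mine), which goes through the same parquet SnappyCompressor /
> > snappy-java codec path as the stack trace below:
> >
> >     import org.apache.avro.Schema;
> >     import org.apache.avro.generic.GenericData;
> >     import org.apache.avro.generic.GenericRecord;
> >     import org.apache.hadoop.fs.Path;
> >     import org.apache.parquet.avro.AvroParquetWriter;
> >     import org.apache.parquet.hadoop.ParquetWriter;
> >     import org.apache.parquet.hadoop.metadata.CompressionCodecName;
> >
> >     public class WriteSnappyParquet {
> >         public static void main(String[] args) throws Exception {
> >             // Placeholder one-field schema, just to drive the writer.
> >             Schema schema = new Schema.Parser().parse(
> >                 "{\"type\":\"record\",\"name\":\"r\",\"fields\":"
> >                 + "[{\"name\":\"x\",\"type\":\"string\"}]}");
> >             try (ParquetWriter<GenericRecord> writer = AvroParquetWriter
> >                     .<GenericRecord>builder(new Path("/tmp/test.snappy.parquet"))
> >                     .withSchema(schema)
> >                     .withCompressionCodec(CompressionCodecName.SNAPPY)
> >                     .build()) {
> >                 GenericRecord rec = new GenericData.Record(schema);
> >                 rec.put("x", "hello");
> >                 writer.write(rec); // compression happens on flush/close
> >             }
> >         }
> >     }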
> >
> > Error Log (nifi-app.log):
> > Class org.xerial.snappy.SnappyNative does not implement the 
> > requested interface org.xerial.snappy.SnappyApi
> >
> >
> > Could you help us resolve this situation?
> >
> > Thank you in advance,
> >
> > --Bilal
> >
> >
> >
> > -----Original Message-----
> > From: Bilal Bektas <bilal.bek...@obase.com>
> > Sent: Tuesday, August 3, 2021 14:55
> > To: dev@nifi.apache.org
> > Subject: PutParquet - Compression Type: SNAPPY (NiFi 1.14.0)
> >
> > Hi Dev Team,
> >
> > The SNAPPY compression feature of the PutParquet Processor works
> > correctly on NiFi 1.12.1, but NiFi 1.14.0 gives the following error.
> >
> > Could you help us resolve this situation?
> >
> > Thank you in advance,
> >
> > --Bilal
> >
> >
> > PutParquet - Config:
> > * Hadoop Configuration Resources: File locations
> > * Kerberos Credentials Service: Keytab service
> > * Kerberos Relogin Period: 4 Hours
> > * Record Reader: AvroReader Service (Embedded Avro Schema)
> > * Compression Type: SNAPPY
> >
> >
> >
> > Error Log (nifi-app.log):
> > 2021-08-03 14:13:01,955 ERROR [Timer-Driven Process Thread-12] o.a.nifi.processors.parquet.PutParquet PutParquet[id=6caab337-68e8-3834-b64a-1d2cbd93aba8] Failed to write due to java.lang.IncompatibleClassChangeError: Class org.xerial.snappy.SnappyNative does not implement the requested interface org.xerial.snappy.SnappyApi: java.lang.IncompatibleClassChangeError: Class org.xerial.snappy.SnappyNative does not implement the requested interface org.xerial.snappy.SnappyApi
> > java.lang.IncompatibleClassChangeError: Class org.xerial.snappy.SnappyNative does not implement the requested interface org.xerial.snappy.SnappyApi
> >         at org.xerial.snappy.Snappy.maxCompressedLength(Snappy.java:380)
> >         at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
> >         at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
> >         at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
> >         at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.compress(CodecFactory.java:167)
> >         at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:168)
> >         at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:59)
> >         at org.apache.parquet.column.impl.ColumnWriterBase.writePage(ColumnWriterBase.java:387)
> >         at org.apache.parquet.column.impl.ColumnWriteStoreBase.flush(ColumnWriteStoreBase.java:186)
> >         at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:29)
> >         at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:185)
> >         at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:124)
> >         at org.apache.parquet.hadoop.ParquetWriter.close(ParquetWriter.java:319)
> >         at org.apache.nifi.parquet.hadoop.AvroParquetHDFSRecordWriter.close(AvroParquetHDFSRecordWriter.java:49)
> >         at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:534)
> >         at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:466)
> >         at org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.lambda$null$0(AbstractPutHDFSRecord.java:326)
> >         at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2466)
> >         at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2434)
> >         at org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.lambda$onTrigger$1(AbstractPutHDFSRecord.java:303)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:360)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1822)
> >         at org.apache.nifi.processors.hadoop.AbstractPutHDFSRecord.onTrigger(AbstractPutHDFSRecord.java:271)
> >         at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> >         at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1202)
> >         at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
> >         at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:103)
> >         at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >         at java.lang.Thread.run(Thread.java:748)
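> >
> > For reference, the first failing frame is Snappy.maxCompressedLength. A
> > minimal standalone sketch like the following (my own test class, not
> > part of NiFi; it assumes a single snappy-java jar on the classpath) can
> > show whether the Snappy binding itself works outside NiFi:
> >
> >     import java.nio.charset.StandardCharsets;
> >     import java.util.Arrays;
> >     import org.xerial.snappy.Snappy;
> >
> >     public class SnappySmokeTest {
> >         public static void main(String[] args) throws Exception {
> >             byte[] input = "snappy smoke test".getBytes(StandardCharsets.UTF_8);
> >             // Same call as the first failing frame above.
> >             System.out.println("maxCompressedLength = "
> >                     + Snappy.maxCompressedLength(input.length));
> >             byte[] roundTrip = Snappy.uncompress(Snappy.compress(input));
> >             System.out.println("round trip ok: " + Arrays.equals(input, roundTrip));
> >         }
> >     }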