Vignesh Nageswaran created PARQUET-2194:
-------------------------------------------
Summary: parquet.encryption.plaintext.footer parameter being true,
code expects parquet.encryption.footer.key
Key: PARQUET-2194
URL: https://issues.apache.org/jira/browse/PARQUET-2194
Project: Parquet
Issue Type: Bug
Components: parquet-mr
Affects Versions: 1.12.0
Reporter: Vignesh Nageswaran
Hi Team,
I want the footer of my Parquet file to be unencrypted, so I set _parquet.encryption.plaintext.footer_ to {_}true{_}. However, when I run my code, parquet-mr still expects a value for the property _parquet.encryption.footer.key_.
Reproducer
Spark 3.3.0
Download the [test jar|https://repo1.maven.org/maven2/org/apache/parquet/parquet-hadoop/1.12.0/parquet-hadoop-1.12.0-tests.jar] and place it in Spark's jars directory.
Using spark-shell:
{code:java}
// Encryption setup: properties-driven crypto factory with the in-memory mock KMS
sc.hadoopConfiguration.set("parquet.crypto.factory.class", "org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory")
sc.hadoopConfiguration.set("parquet.encryption.kms.client.class", "org.apache.parquet.crypto.keytools.mocks.InMemoryKMS")
sc.hadoopConfiguration.set("parquet.encryption.key.list", "key1a: BAECAwQFBgcICQoLDA0ODw==, key2a: BAECAAECAAECAAECAAECAA==, keyz: BAECAAECAAECAAECAAECAA==")
sc.hadoopConfiguration.set("parquet.encryption.key.material.store.internally", "false")

val encryptedParquetPath = "/tmp/par_enc_footer_non_encrypted"
val partitionCol = 1

case class nestedItem(ic: Int = 0, sic: Double, pc: Int = 0)
case class SquareItem(int_column: Int, square_int_column: Double, partitionCol: Int, nestedCol: nestedItem)

val dataRange = (1 to 100).toList
val squares = sc.parallelize(dataRange.map(i => new SquareItem(i, scala.math.pow(i, 2), partitionCol, nestedItem(i, i))))
squares.toDS().show()

// Write with an encrypted column key and a plaintext footer; no footer key is configured
squares.toDS()
  .write
  .partitionBy("partitionCol")
  .mode("overwrite")
  .option("parquet.encryption.column.keys", "key1a:square_int_column,nestedCol.ic;")
  .option("parquet.encryption.plaintext.footer", true)
  .parquet(encryptedParquetPath){code}
Running the reproducer as written (without a footer key), I get the error below. My expectation is that if the footer is configured to be plaintext, no footer key should be required.
{code:java}
Caused by: org.apache.parquet.crypto.ParquetCryptoRuntimeException: Undefined footer key
  at org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory.getFileEncryptionProperties(PropertiesDrivenCryptoFactory.java:88)
  at org.apache.parquet.hadoop.ParquetOutputFormat.createEncryptionProperties(ParquetOutputFormat.java:554)
  at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:478)
  at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:420)
  at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:409)
  at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:36)
  at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:155)
  at org.apache.spark.sql.execution.datasources.BaseDynamicPartitionDataWriter.renewCurrentWriter(FileFormatDataWriter.scala:298)
  at org.apache.spark.sql.execution.datasources.DynamicPartitionDataSingleWriter.write(FileFormatDataWriter.scala:365)
  at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithMetrics(FileFormatDataWriter.scala:85)
  at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithIterator(FileFormatDataWriter.scala:92)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:331)
  at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:338)
  ... 9 more
{code}
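Side note: when a footer key is supplied and the write succeeds, whether the footer really ended up plaintext can be checked from the file's trailing magic bytes, since the Parquet format ends plaintext-footer files with "PAR1" and encrypted-footer files with "PARE". A small sketch (the part-file path below is hypothetical):
{code:java}
// Read the last 4 bytes of a written parquet file:
// "PAR1" indicates a plaintext footer, "PARE" an encrypted one.
import java.nio.file.{Files, Paths}

def footerMagic(path: String): String = {
  val bytes = Files.readAllBytes(Paths.get(path))
  new String(bytes.takeRight(4), "US-ASCII")
}

// Example (file name is hypothetical):
// footerMagic("/tmp/par_enc_footer_non_encrypted/partitionCol=1/part-00000.parquet"){code}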