Hi all,

Looks like it's a parquet-specific issue.
I can successfully write with a 512k block size if I use df.write.csv() or df.write.text() (the csv write works once I put hadoop-lzo-0.4.15-cdh5.13.0.jar into the jars dir).

sample code:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
from hdfs import InsecureClient

block_size = 512 * 1024

conf = SparkConf() \
    .setAppName("myapp") \
    .setMaster("spark://spark1:7077") \
    .set("spark.cores.max", 20) \
    .set("spark.executor.cores", 10) \
    .set("spark.executor.memory", "10g") \
    .set("spark.hadoop.dfs.blocksize", str(block_size)) \
    .set("spark.hadoop.dfs.block.size", str(block_size)) \
    .set("spark.hadoop.dfs.namenode.fs-limits.min-block-size", str(131072))

sc = SparkContext(conf=conf)
spark = SparkSession(sc)

# create DataFrame
df_txt = spark.createDataFrame([{'temp': "hello"}, {'temp': "world"}, {'temp': "!"}])

# save using DataFrameWriter, resulting 128MB block size
df_txt.write.mode('overwrite').format('parquet').save('hdfs://spark1/tmp/temp_with_df')

# save using DataFrameWriter.csv, resulting 512k block size
df_txt.write.mode('overwrite').csv('hdfs://spark1/tmp/temp_with_df_csv')

# save using DataFrameWriter.text, resulting 512k block size
df_txt.write.mode('overwrite').text('hdfs://spark1/tmp/temp_with_df_text')

# save using rdd, resulting 512k block size
client = InsecureClient('http://spark1:50070')
client.delete('/tmp/temp_with_rrd', recursive=True)
df_txt.rdd.saveAsTextFile('hdfs://spark1/tmp/temp_with_rrd')
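In case it helps anyone reproduce, this is roughly how I check the block size the namenode reports for the written files. It's only a minimal sketch, assuming the same `hdfs` Python package used above for InsecureClient; its status() call returns the WebHDFS FileStatus, which includes a blockSize field. The output directory name is just the parquet path from the sample above.

from hdfs import InsecureClient

# same WebHDFS endpoint as in the sample above
client = InsecureClient('http://spark1:50070')

# print the block size reported for every part file under the output dir
out_dir = '/tmp/temp_with_df'
for name in client.list(out_dir):
    st = client.status(out_dir + '/' + name)
    if st['type'] == 'FILE':
        print(name, st['blockSize'])

With this check, the parquet output shows the 128MB default while the csv/text/rdd outputs show 512k, which is what makes me think the block-size settings are being ignored only on the parquet path.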