Hi all,

I am using spark-2.2.1-bin-hadoop2.7 in standalone mode
(Python 3.5.2 on Ubuntu 16.04).

I am trying to write a DataFrame to HDFS with a customized block size, but
the setting is not applied to the output files (see the block-size check at
the end of the script). However, saving the corresponding RDD does pick up
the customized block size.

Could you help me figure out the issue?

Thanks!

Best regards,
Hsiao


The following is the test code:


##########
# init
##########
import os

from pyspark import SparkConf
from pyspark.sql import SparkSession
from hdfs import InsecureClient

os.environ['SPARK_HOME'] = '/opt/spark-2.2.1-bin-hadoop2.7'

block_size = 512 * 1024  # target HDFS block size: 512 KB

conf = (SparkConf()
        .setAppName("DCSSpark")
        .setMaster("spark://10.7.34.47:7077")
        .set("spark.cores.max", 20)
        .set("spark.executor.cores", 10)
        .set("spark.executor.memory", "10g")
        .set("spark.hadoop.dfs.blocksize", str(block_size))
        .set("spark.hadoop.dfs.block.size", str(block_size)))

spark = SparkSession.builder.config(conf=conf).getOrCreate()
# also set the block size directly on the Hadoop configuration
spark.sparkContext._jsc.hadoopConfiguration().setInt("dfs.blocksize", block_size)
spark.sparkContext._jsc.hadoopConfiguration().setInt("dfs.block.size", block_size)
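
# For completeness: since the DataFrame is saved as parquet, the Parquet
# row-group size may matter as well. parquet.block.size is the standard
# Parquet writer setting (default 128 MB); whether it interacts with
# dfs.blocksize here is only my assumption.
spark.sparkContext._jsc.hadoopConfiguration().setInt("parquet.block.size", block_size)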

##########
# main
##########

# create DataFrame
df_txt = spark.createDataFrame([("hello",), ("world",), ("!",)], ["temp"])

# save using DataFrameWriter
df_txt.write.mode('overwrite').format('parquet').save('hdfs://spark1/tmp/temp_with_df')

# save using the underlying RDD (saveAsTextFile cannot overwrite, so any
# previous output is removed over WebHDFS first)
client = InsecureClient('http://spark1:50070')
client.delete('/tmp/temp_with_rdd', recursive=True)
df_txt.rdd.saveAsTextFile('hdfs://spark1/tmp/temp_with_rdd')
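
##########
# check
##########

# For reference, a sketch of how the resulting block size can be checked
# over WebHDFS with the same client; 'blockSize' comes from the standard
# FileStatus response, and the directory names match the script above.
for dirname in ['/tmp/temp_with_df', '/tmp/temp_with_rdd']:
    for name in client.list(dirname):
        print(dirname, name, client.status(dirname + '/' + name)['blockSize'])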
