Hi,
I've done a lot of EMR->S3->Redshift loading using the Redshift COPY
command. I haven't done any of it from Spark yet, but I plan to soon and
have been doing some research. Take a look at this article - Best Practices
for Micro-Batch Loading on Amazon Redshift
https://blogs.aws.amazon.com/bigdata/post/Tx2ANLN1P
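For reference, the core of that pattern is a single COPY statement pointed
at an S3 prefix, issued over a normal Postgres connection. Here's a minimal
sketch using psycopg2 - the table, bucket, endpoint, and credentials are all
placeholders, not anything from the article:

  import psycopg2

  # Connect to the Redshift cluster (hypothetical endpoint/database).
  conn = psycopg2.connect(
      host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
      port=5439, dbname="dev", user="admin", password="...")
  conn.autocommit = True

  # COPY pulls every file under the S3 prefix in parallel across slices,
  # which is why staging micro-batches in S3 first is the recommended path.
  copy_sql = """
      COPY events
      FROM 's3://my-bucket/batches/2015-02-01/'
      CREDENTIALS 'aws_access_key_id=...;aws_secret_access_key=...'
      DELIMITER '\\t' GZIP;
  """

  cur = conn.cursor()
  cur.execute(copy_sql)
  cur.close()
  conn.close()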
Below is the trace from trying to access it with a ~/ path. I also did the
echo as per Nick's suggestion (see the last line), and it looks OK to me.
This is my development box running Spark 1.2.0 on CentOS 6.5 with Python
2.6.6.
[pete.zybrick@pz-lt2-ipc spark-1.2.0]$ ec2/spark-ec2
--key-pair=spark-streaming-kp
--identity-file=~
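One thing worth checking: the ~ may be passed through literally when it's
embedded in an --identity-file=~/... argument, so spark-ec2 (or the ssh call
underneath it) never sees a real path. A quick workaround sketch in Python -
the .pem filename below is just a placeholder:

  import os.path

  # Expand ~ to the real home directory before handing the path to spark-ec2.
  identity_file = os.path.expanduser("~/spark-streaming-kp.pem")
  print(identity_file)  # e.g. /home/pete.zybrick/spark-streaming-kp.pem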