[ https://issues.apache.org/jira/browse/SPARK-17467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Huang updated SPARK-17467:
--------------------------------
    Description: 
I have an Avro file on Swift (awclassic.avro). I ran Spark SQL to count its records, but got an incorrect result: the file has 60855 records, but the query returned 42451.

The following is the sample code I used:
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class SparkTest {

    public static void main(String[] args) throws InterruptedException, IOException {
        SparkConf sparkConf = new SparkConf()
                .setAppName("Spark-Swift-App")
                .setMaster("local[8]");
        JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
        Configuration hadoopConf = sparkContext.hadoopConfiguration();

        // swift-site.xml is a customized resource file that stores the Swift configuration.
        Configuration extraHadoopConf = new Configuration(false);
        extraHadoopConf.addResource("swift-site.xml");
        hadoopConf.addResource(extraHadoopConf);

        SQLContext sqlContext = new SQLContext(sparkContext);
        DataFrame df = sqlContext.read()
                .format("com.databricks.spark.avro")
                .load("swift://<container>.<provider>/awclassic/awclassic.avro");
        df.registerTempTable("awclassic");
        DataFrame result = sqlContext.sql("SELECT COUNT(*) FROM awclassic");
        result.show();
    }
}
{code}
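
For completeness: swift-site.xml in the snippet above holds the Swift endpoint and credentials for the hadoop-openstack SwiftNativeFileSystem. The sketch below shows roughly equivalent programmatic configuration; the configureSwift helper and all <...> values are placeholders (my actual swift-site.xml contents are not reproduced here), and only the standard fs.swift.service.* property names are assumed.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class SwiftConfSketch {

    // Hypothetical helper, sketching the kind of settings swift-site.xml holds.
    // Property names follow the hadoop-openstack SwiftNativeFileSystem convention;
    // all <...> values are placeholders.
    static void configureSwift(Configuration hadoopConf) {
        hadoopConf.set("fs.swift.impl",
                "org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem");
        hadoopConf.set("fs.swift.service.<provider>.auth.url", "<keystone-url>/tokens");
        hadoopConf.set("fs.swift.service.<provider>.tenant", "<tenant>");
        hadoopConf.set("fs.swift.service.<provider>.username", "<username>");
        hadoopConf.set("fs.swift.service.<provider>.password", "<password>");
        hadoopConf.set("fs.swift.service.<provider>.public", "true");
    }
}
{code}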

I also found an interesting thing: if I change the value of fs.swift.blocksize, I get a different count. For details, see the following table.
||fs.swift.blocksize (KB) ||SELECT COUNT\(*\) FROM awclassic ||
|16384 (16 MB) | 51683|
|32768 (32 MB, default) | 42451|
|65536 (64 MB) | 30459|
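
For reproducibility, the sketch below shows how each row of the table was obtained: override fs.swift.blocksize (value in KB) on the Hadoop configuration before loading the file, then re-run the count. It continues the example above; the blocksize could equally be set in swift-site.xml.
{code:java}
// Sketch, continuing the example above: override fs.swift.blocksize (in KB)
// before loading the Avro file, then re-run the count.
// 16384 = 16 MB, 32768 = 32 MB (the default), 65536 = 64 MB.
Configuration hadoopConf = sparkContext.hadoopConfiguration();
hadoopConf.set("fs.swift.blocksize", "16384");

DataFrame df = sqlContext.read()
        .format("com.databricks.spark.avro")
        .load("swift://<container>.<provider>/awclassic/awclassic.avro");
df.registerTempTable("awclassic");
sqlContext.sql("SELECT COUNT(*) FROM awclassic").show();
{code}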


> Spark SQL returns an incorrect result for data files on Swift
> --------------------------------------------------------------
>
>                 Key: SPARK-17467
>                 URL: https://issues.apache.org/jira/browse/SPARK-17467
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.2
>            Reporter: Kevin Huang
>



