GitHub user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/14491
  
    Could you look through the occurrences below? At least I found one more.
    
    ```
    src/main/java/org/apache/spark/examples/ml/JavaDecisionTreeClassificationExample.java: // Load the data stored in LIBSVM format as a DataFrame.
    src/main/java/org/apache/spark/examples/ml/JavaDecisionTreeRegressionExample.java: // Load the data stored in LIBSVM format as a DataFrame.
    src/main/java/org/apache/spark/examples/ml/JavaGradientBoostedTreeClassifierExample.java: // Load and parse the data file, converting it to a DataFrame.
    src/main/java/org/apache/spark/examples/ml/JavaGradientBoostedTreeRegressorExample.java: // Load and parse the data file, converting it to a DataFrame.
    src/main/java/org/apache/spark/examples/ml/JavaRandomForestClassifierExample.java: // Load and parse the data file, converting it to a DataFrame.
    src/main/java/org/apache/spark/examples/ml/JavaRandomForestRegressorExample.java: // Load and parse the data file, converting it to a DataFrame.
    src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java: // The results of SQL queries are themselves DataFrames and support all normal functions.
    src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java: // You can also use DataFrames to create temporary views within a SparkSession.
    src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java: // Queries can then join DataFrames data with data stored in Hive.
    src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java: // Displays the content of the DataFrame to stdout
    src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java: // Register the DataFrame as a SQL temporary view
    src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java: // DataFrames can be converted to a Dataset by providing a class. Mapping based on name
    src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java: // Apply a schema to an RDD of JavaBeans to get a DataFrame
    src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java: // Register the DataFrame as a temporary view
    src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java: // Creates a temporary view using the DataFrame
    src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java: // SQL can be run over a temporary view created using DataFrames
    src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java: // The results of SQL queries are DataFrames and support all the normal RDD operations
    src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java: // DataFrames can be saved as Parquet files, maintaining the schema information
    src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java: // The result of loading a parquet file is also a DataFrame
    src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java: // Create a simple DataFrame, store into a partition directory
    src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java: // Create another DataFrame in a new partition directory,
    src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java: // Creates a temporary view using the DataFrame
    src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredNetworkWordCount.java: // Create DataFrame representing the stream of input lines from connection to host:port
    src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredNetworkWordCountWindowed.java: // Create DataFrame representing the stream of input lines from connection to host:port
    src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java: * Use DataFrames and SQL to count words in UTF8 encoded, '\n' delimited text received from the
    src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java: // Convert RDDs of the words DStream to DataFrame and run SQL query
    src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java: // Convert JavaRDD[String] to JavaRDD[bean class] to DataFrame
    src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java: // Creates a temporary view using the DataFrame
    src/main/scala/org/apache/spark/examples/ml/DataFrameExample.scala: * An example of how to use [[org.apache.spark.sql.DataFrame]] for ML. Run with
    src/main/scala/org/apache/spark/examples/ml/DecisionTreeClassificationExample.scala: // Load the data stored in LIBSVM format as a DataFrame.
    src/main/scala/org/apache/spark/examples/ml/DecisionTreeExample.scala: * @param data  DataFrame with "prediction" and labelColName columns
    src/main/scala/org/apache/spark/examples/ml/DecisionTreeExample.scala: * @param data  DataFrame with "prediction" and labelColName columns
    src/main/scala/org/apache/spark/examples/ml/DecisionTreeRegressionExample.scala: // Load the data stored in LIBSVM format as a DataFrame.
    src/main/scala/org/apache/spark/examples/ml/GradientBoostedTreeClassifierExample.scala: // Load and parse the data file, converting it to a DataFrame.
    src/main/scala/org/apache/spark/examples/ml/GradientBoostedTreeRegressorExample.scala: // Load and parse the data file, converting it to a DataFrame.
    src/main/scala/org/apache/spark/examples/ml/MultilayerPerceptronClassifierExample.scala: // Load the data stored in LIBSVM format as a DataFrame.
    src/main/scala/org/apache/spark/examples/ml/NaiveBayesExample.scala: // Load the data stored in LIBSVM format as a DataFrame.
    src/main/scala/org/apache/spark/examples/ml/RandomForestClassifierExample.scala: // Load and parse the data file, converting it to a DataFrame.
    src/main/scala/org/apache/spark/examples/ml/RandomForestRegressorExample.scala: // Load and parse the data file, converting it to a DataFrame.
    src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala: // The results of SQL queries are themselves DataFrames and support all normal functions.
    src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala: // You can also use DataFrames to create temporary views within a SparkSession.
    src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala: // Queries can then join DataFrame data with data stored in Hive.
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // For implicit conversions like converting RDDs to DataFrames
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // Displays the content of the DataFrame to stdout
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // Register the DataFrame as a SQL temporary view
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // DataFrames can be converted to a Dataset by providing a class. Mapping will be done by name
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // For implicit conversions from RDDs to DataFrames
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // Register the DataFrame as a temporary view
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // Creates a temporary view using the DataFrame
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // SQL can be run over a temporary view created using DataFrames
    src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala: // The results of SQL queries are DataFrames and support all the normal RDD operations
    src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala: // DataFrames can be saved as Parquet files, maintaining the schema information
    src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala: // The result of loading a Parquet file is also a DataFrame
    src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala: // This is used to implicitly convert an RDD to a DataFrame.
    src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala: // Create a simple DataFrame, store into a partition directory
    src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala: // Create another DataFrame in a new partition directory,
    src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala: // Creates a temporary view using the DataFrame
    src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala: // Alternatively, a DataFrame can be created for a JSON dataset represented by
    src/main/scala/org/apache/spark/examples/sql/streaming/StructuredNetworkWordCount.scala: // Create DataFrame representing the stream of input lines from connection to host:port
    src/main/scala/org/apache/spark/examples/sql/streaming/StructuredNetworkWordCountWindowed.scala: // Create DataFrame representing the stream of input lines from connection to host:port
    src/main/scala/org/apache/spark/examples/streaming/SqlNetworkWordCount.scala: * Use DataFrames and SQL to count words in UTF8 encoded, '\n' delimited text received from the
    src/main/scala/org/apache/spark/examples/streaming/SqlNetworkWordCount.scala: // Convert RDDs of the words DStream to DataFrame and run SQL query
    src/main/scala/org/apache/spark/examples/streaming/SqlNetworkWordCount.scala: // Convert RDD[String] to RDD[case class] to DataFrame
    src/main/scala/org/apache/spark/examples/streaming/SqlNetworkWordCount.scala: // Creates a temporary view using the DataFrame
    src/main/scala/org/apache/spark/examples/streaming/SqlNetworkWordCount.scala: /** Case class for converting RDD to DataFrame */
    ```
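
    For reference, a listing like the one above can be regenerated with a plain grep over the examples module. This is only a sketch; the exact flags and working directory are assumptions, and the paths are taken from the listing:

    ```shell
    # Sketch: list every Java/Scala example line mentioning "DataFrame".
    # Run from the root of the examples/ module; -r recurses, -n prints
    # line numbers, and --include restricts the search to source files.
    grep -rn --include='*.java' --include='*.scala' 'DataFrame' \
      src/main/java/org/apache/spark/examples \
      src/main/scala/org/apache/spark/examples
    ```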

