Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4127#discussion_r23276701
  
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
    @@ -178,3 +180,34 @@ case class DescribeCommand(
         child.output.map(field => Row(field.name, field.dataType.toString, null))
       }
     }
    +
    +/**
    + * :: DeveloperApi ::
    + */
    +@DeveloperApi
    +case class DDLDescribeCommand(
    +    dbName: Option[String],
    +    tableName: String, isExtended: Boolean) extends RunnableCommand {
    +
    +  override def run(sqlContext: SQLContext) = {
    +    val tblRelation = dbName match {
    +      case Some(db) => UnresolvedRelation(Seq(db, tableName))
    +      case None => UnresolvedRelation(Seq(tableName))
    +    }
    +    val logicalRelation = sqlContext.executePlan(tblRelation).analyzed
    +    val rows = new ArrayBuffer[Row]()
    +    rows ++= logicalRelation.schema.fields.map{field =>
    +      Row(field.name, field.dataType.toSimpleString, null)}
    +
    +    /*
    +     * TODO if future support partition table, add header below:
    +     * # Partition Information
    +     * # col_name data_type comment
    --- End diff --
    
    I realize that was your intention and that is how Hive did it, but I
    disagree with that implementation.  It means that in order to know which
    columns are partition columns, you have to look at other rows of the
    results.  Any given row should be self-contained.
    
    I should be able to query the catalog just like I can query normal tables.
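
    A minimal sketch of that idea (the ColumnRow shape and describe helper
    are hypothetical illustrations, not the actual Spark Row or catalog
    API): if each DESCRIBE row carries its own partition flag, a consumer
    can filter rows directly instead of parsing a "# Partition Information"
    header elsewhere in the output.

    ```scala
    object DescribeSketch {
      // Hypothetical row shape: each entry says for itself whether it is a
      // partition column, so no row depends on header rows for its meaning.
      case class ColumnRow(name: String, dataType: String,
                           comment: String, isPartition: Boolean)

      // Build self-contained rows: data columns first, then partition
      // columns, each flagged individually.
      def describe(dataCols: Seq[(String, String)],
                   partCols: Seq[(String, String)]): Seq[ColumnRow] =
        dataCols.map { case (n, t) => ColumnRow(n, t, null, isPartition = false) } ++
          partCols.map { case (n, t) => ColumnRow(n, t, null, isPartition = true) }

      def main(args: Array[String]): Unit = {
        val rows = describe(
          Seq("viewtime" -> "int", "userid" -> "bigint", "ip" -> "string"),
          Seq("date" -> "string", "pos" -> "string"))
        // Filtering finds partition columns without scanning for headers.
        val partitionColumns = rows.filter(_.isPartition).map(_.name)
        println(partitionColumns.mkString(","))  // prints "date,pos"
      }
    }
    ```

    With this shape, the same rows could back a queryable catalog table,
    which is the point about querying the catalog like normal tables.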
    On Jan 20, 2015 7:40 PM, "Sheng, Li" <[email protected]> wrote:
    
    > In sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala
    > <https://github.com/apache/spark/pull/4127#discussion_r23276556>:
    >
    > > +    tableName: String, isExtended: Boolean) extends RunnableCommand {
    > > +
    > > +  override def run(sqlContext: SQLContext) = {
    > > +    val tblRelation = dbName match {
    > > +      case Some(db) => UnresolvedRelation(Seq(db, tableName))
    > > +      case None => UnresolvedRelation(Seq(tableName))
    > > +    }
    > > +    val logicalRelation = sqlContext.executePlan(tblRelation).analyzed
    > > +    val rows = new ArrayBuffer[Row]()
    > > +    rows ++= logicalRelation.schema.fields.map{field =>
    > > +      Row(field.name, field.dataType.toSimpleString, null)}
    > > +
    > > +    /*
    > > +     * TODO if future support partition table, add header below:
    > > +     * # Partition Information
    > > +     * # col_name data_type comment
    >
    > What I mean here is to display the normal column information first and
    > then append the partition columns at the bottom of the normal column
    > description, like below:
    >
    > CREATE TABLE temp_shengli (
    >   viewTime int,
    >   userid bigint,
    >   page_url string,
    >   referrer_url string,
    >   ip string comment 'IP Address of the User'
    > )
    > comment 'This is the page view table'
    > PARTITIONED BY(date string, pos string)
    >
    > To describe it:
    >
    > viewtime                int                     None
    > userid                  bigint                  None
    > page_url                string                  None
    > referrer_url            string                  None
    > ip                      string                  IP Address of the User
    > date                    string                  None
    > pos                     string                  None
    >
    > # Partition Information
    > # col_name              data_type               comment
    >
    > date                    string                  None
    > pos                     string                  None
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/spark/pull/4127/files#r23276556>.
    >


