[ https://issues.apache.org/jira/browse/PHOENIX-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14970271#comment-14970271 ]

ASF GitHub Bot commented on PHOENIX-2288:
-----------------------------------------

Github user JamesRTaylor commented on a diff in the pull request:

    https://github.com/apache/phoenix/pull/124#discussion_r42826328
  
    --- Diff: phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java ---
    @@ -459,19 +459,30 @@ public static ColumnInfo getColumnInfo(PTable table, String columnName) throws S
             return getColumnInfo(pColumn);
         }
     
    -   /**
    +    /**
          * Constructs a column info for the supplied pColumn
          * @param pColumn
          * @return columnInfo
          * @throws SQLException if the parameter is null.
          */
         public static ColumnInfo getColumnInfo(PColumn pColumn) throws SQLException {
    -        if (pColumn==null) {
    +        if (pColumn == null) {
                 throw new SQLException("pColumn must not be null.");
             }
             int sqlType = pColumn.getDataType().getSqlType();
    -        ColumnInfo columnInfo = new ColumnInfo(pColumn.toString(),sqlType);
    -        return columnInfo;
    +        if (pColumn.getMaxLength() == null) {
    +            return new ColumnInfo(pColumn.toString(), sqlType);
    +        }
    +        if (sqlType == Types.CHAR || sqlType == Types.VARCHAR) {
    --- End diff --
    
    ColumnInfo is a lightweight transport class solely for passing in the column metadata that the MR and Spark integrations need to run. It's passed in through the config, so it has some simple to/from-string methods; this saves us from having to look up the metadata from Phoenix through the regular JDBC metadata APIs (which would be another option). Having this ColumnInfo class was deemed slightly easier.
    
    PColumn has more information than we need, and it'd be best to keep it as an internal/private class as much as possible. It's the object representation of our column metadata.


> Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark 
> DataFrame
> -------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-2288
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2288
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.5.2
>            Reporter: Josh Mahonin
>
> When loading a Spark dataframe from a Phoenix table with a 'DECIMAL' type, 
> the underlying precision and scale aren't carried forward to Spark.
> The Spark catalyst schema converter should load these from the underlying 
> column. These appear to be exposed in the ResultSetMetaData, but if there was 
> a way to expose these somehow through ColumnInfo, it would be cleaner.
> I'm not sure if Pig has the same issues or not, but I suspect it may.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
