[ https://issues.apache.org/jira/browse/PHOENIX-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14970208#comment-14970208 ]

ASF GitHub Bot commented on PHOENIX-2288:
-----------------------------------------

Github user JamesRTaylor commented on a diff in the pull request:

    https://github.com/apache/phoenix/pull/124#discussion_r42823983
  
    --- Diff: phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java ---
    @@ -459,19 +459,30 @@ public static ColumnInfo getColumnInfo(PTable table, String columnName) throws S
             return getColumnInfo(pColumn);
         }
     
    -   /**
    +    /**
          * Constructs a column info for the supplied pColumn
          * @param pColumn
          * @return columnInfo
          * @throws SQLException if the parameter is null.
          */
        public static ColumnInfo getColumnInfo(PColumn pColumn) throws SQLException {
    -        if (pColumn==null) {
    +        if (pColumn == null) {
                 throw new SQLException("pColumn must not be null.");
             }
             int sqlType = pColumn.getDataType().getSqlType();
    -        ColumnInfo columnInfo = new ColumnInfo(pColumn.toString(),sqlType);
    -        return columnInfo;
    +        if (pColumn.getMaxLength() == null) {
    +            return new ColumnInfo(pColumn.toString(), sqlType);
    +        }
    +        if (sqlType == Types.CHAR || sqlType == Types.VARCHAR) {
    --- End diff --
    
    Rather than check for particular types, it'd be more general to check for null like this:
    
        Integer maxLength = pColumn.getMaxLength();
        Integer scale = pColumn.getScale();
        return new ColumnInfo(pColumn.toString(), sqlType, maxLength, scale);
    
    Then make sure that ColumnInfo handles a null maxLength and scale.
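
    To illustrate the suggestion, here is a minimal sketch of a ColumnInfo that tolerates a null maxLength and scale by storing them as nullable Integer fields. This mirrors the Phoenix class name for readability but is an illustration only, not the actual org.apache.phoenix.util.ColumnInfo implementation.

    ```java
    // Illustrative sketch only: a ColumnInfo that handles null maxLength/scale,
    // as suggested in the review. Not the real Phoenix implementation.
    class ColumnInfo {
        private final String columnName;
        private final int sqlType;
        private final Integer maxLength; // null for types with no length (e.g. INTEGER)
        private final Integer scale;     // null for types with no scale

        ColumnInfo(String columnName, int sqlType) {
            this(columnName, sqlType, null, null);
        }

        ColumnInfo(String columnName, int sqlType, Integer maxLength, Integer scale) {
            this.columnName = columnName;
            this.sqlType = sqlType;
            this.maxLength = maxLength;
            this.scale = scale;
        }

        Integer getMaxLength() { return maxLength; }
        Integer getScale() { return scale; }

        // Append (maxLength) or (maxLength,scale) only when present, so
        // callers never have to special-case CHAR/VARCHAR vs. DECIMAL.
        @Override
        public String toString() {
            if (maxLength == null) {
                return columnName + ":" + sqlType;
            }
            if (scale == null) {
                return columnName + ":" + sqlType + "(" + maxLength + ")";
            }
            return columnName + ":" + sqlType + "(" + maxLength + "," + scale + ")";
        }
    }
    ```

    With this shape, getColumnInfo can unconditionally pass pColumn.getMaxLength() and pColumn.getScale() through, and downstream consumers (e.g. the Spark schema converter) can read precision and scale back when they are non-null.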


> Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame
> -------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-2288
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2288
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.5.2
>            Reporter: Josh Mahonin
>
> When loading a Spark dataframe from a Phoenix table with a 'DECIMAL' type, the underlying precision and scale aren't carried forward to Spark.
> The Spark catalyst schema converter should load these from the underlying column. They appear to be exposed in the ResultSetMetaData, but if there were a way to expose them through ColumnInfo, it would be cleaner.
> I'm not sure if Pig has the same issues or not, but I suspect it may.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
