[jira] [Created] (CARBONDATA-670) Add new MD files for Data Types and File Structure.

2017-01-20 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-670:


 Summary: Add new MD files for Data Types and File Structure.
 Key: CARBONDATA-670
 URL: https://issues.apache.org/jira/browse/CARBONDATA-670
 Project: CarbonData
  Issue Type: Improvement
  Components: docs
Reporter: Pallavi Singh
Priority: Minor


Add MD files for: Data Types and File Structure.
Update the Overview Section.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-710) Add content to FAQs and Troubleshooting

2017-02-16 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-710:


 Summary: Add content to FAQs and Troubleshooting
 Key: CARBONDATA-710
 URL: https://issues.apache.org/jira/browse/CARBONDATA-710
 Project: CarbonData
  Issue Type: Improvement
  Components: docs
Reporter: Pallavi Singh
Assignee: Pallavi Singh






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CARBONDATA-414) Access array elements using index rather than loop

2016-11-16 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-414:


 Summary: Access array elements using index rather than loop
 Key: CARBONDATA-414
 URL: https://issues.apache.org/jira/browse/CARBONDATA-414
 Project: CarbonData
  Issue Type: Improvement
Reporter: Pallavi Singh
Priority: Trivial
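
The issue carries no description. As a purely hypothetical illustration of the cleanup the title suggests (not code from the CarbonData source tree), a loop that only reaches a known position can be replaced with direct index access:

final class ArrayAccessExample {

  // Before: loops over the whole array just to read one known position.
  static int lastValueWithLoop(int[] values) {
    int last = 0;
    for (int i = 0; i < values.length; i++) {
      if (i == values.length - 1) {
        last = values[i];
      }
    }
    return last;
  }

  // After: the same element read directly by index (assumes a non-empty array).
  static int lastValueByIndex(int[] values) {
    return values[values.length - 1];
  }
}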






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-428) Remove Redundant Condition Checks

2016-11-20 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-428:


 Summary: Remove Redundant Condition Checks
 Key: CARBONDATA-428
 URL: https://issues.apache.org/jira/browse/CARBONDATA-428
 Project: CarbonData
  Issue Type: Improvement
Reporter: Pallavi Singh
Priority: Trivial
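
The issue carries no description. A hypothetical before/after (not from the CarbonData source tree) of the kind of redundant check the title refers to:

final class RedundantCheckExample {

  // Before: the inner null check repeats what the outer condition already guarantees.
  static int lengthWithRedundantCheck(String value) {
    if (value != null && !value.isEmpty()) {
      if (value != null) { // redundant: value cannot be null here
        return value.length();
      }
    }
    return 0;
  }

  // After: the redundant check removed; behavior is unchanged.
  static int lengthWithoutRedundantCheck(String value) {
    if (value != null && !value.isEmpty()) {
      return value.length();
    }
    return 0;
  }
}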






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-426) Replace if-else with conditional operator

2016-11-20 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-426:


 Summary: Replace if-else with conditional operator
 Key: CARBONDATA-426
 URL: https://issues.apache.org/jira/browse/CARBONDATA-426
 Project: CarbonData
  Issue Type: Improvement
Reporter: Pallavi Singh
Priority: Trivial
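
The issue carries no description. A hypothetical before/after (not from the CarbonData source tree) of the refactoring named in the title:

final class ConditionalOperatorExample {

  // Before: a full if/else used only to choose between two values.
  static String statusWithIfElse(int rowCount) {
    String status;
    if (rowCount > 0) {
      status = "SUCCESS";
    } else {
      status = "EMPTY";
    }
    return status;
  }

  // After: the same choice expressed with the conditional (ternary) operator.
  static String statusWithTernary(int rowCount) {
    return rowCount > 0 ? "SUCCESS" : "EMPTY";
  }
}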






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-529) Add Unit Tests for processing.newflow.parser package

2016-12-13 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-529:


 Summary: Add Unit Tests for processing.newflow.parser package
 Key: CARBONDATA-529
 URL: https://issues.apache.org/jira/browse/CARBONDATA-529
 Project: CarbonData
  Issue Type: Test
Reporter: Pallavi Singh
Priority: Trivial
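
The issue carries no description. A sketch of the shape such a test could take, using a hypothetical parser interface since the real API of the processing.newflow.parser package is not quoted here:

import static org.junit.Assert.assertArrayEquals;

import org.junit.Test;

public class ParserSketchTest {

  // Stand-in for whatever parser the package exposes; an assumption, not the real API.
  interface RowParser {
    Object[] parseRow(String line);
  }

  @Test
  public void parsesDelimitedRowIntoFields() {
    RowParser parser = line -> line.split(",");
    assertArrayEquals(new Object[] {"1", "bob", "true"}, parser.parseRow("1,bob,true"));
  }
}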






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-631) Select Query Failure for table created in 0.2 with data loaded in 1.0

2017-01-12 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-631:


 Summary: Select Query Failure for table created in 0.2 with data loaded in 1.0
 Key: CARBONDATA-631
 URL: https://issues.apache.org/jira/browse/CARBONDATA-631
 Project: CarbonData
  Issue Type: Bug
 Environment: Spark 1.6 
Reporter: Pallavi Singh
 Fix For: 0.1.0-incubating


Created table with the 0.2 jar:

CREATE TABLE uniqdata (
  CUST_ID int, CUST_NAME String, ACTIVE_EMUI_VERSION string,
  DOB timestamp, DOJ timestamp,
  BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint,
  DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),
  Double_COLUMN1 double, Double_COLUMN2 double,
  INTEGER_COLUMN1 int)
STORED BY 'org.apache.carbondata.format'
TBLPROPERTIES ("TABLE_BLOCKSIZE"="256 MB");

Then loaded data:

LOAD DATA INPATH 'hdfs://localhost:54310/csv/2000_UniqData.csv' INTO TABLE uniqdata
OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE',
'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

Switched to the 1.0 jar and ran the same load:

LOAD DATA INPATH 'hdfs://localhost:54310/csv/2000_UniqData.csv' INTO TABLE uniqdata
OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE',
'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

After a successful load:

select count(*) from uniqdata;

I get the following error:
INFO  12-01 18:31:04,057 - Running query 'select count(*) from uniqdata' with 81129cf3-fcd4-429d-9adf-d37d35cdf051
INFO  12-01 18:31:04,058 - pool-27-thread-46 Query [SELECT COUNT(*) FROM UNIQDATA]
INFO  12-01 18:31:04,060 - Parsing command: select count(*) from uniqdata
INFO  12-01 18:31:04,060 - Parse Completed
INFO  12-01 18:31:04,061 - Parsing command: select count(*) from uniqdata
INFO  12-01 18:31:04,061 - Parse Completed
INFO  12-01 18:31:04,061 - 27: get_table : db=12jan17 tbl=uniqdata
INFO  12-01 18:31:04,061 - ugi=pallavi  ip=unknown-ip-addr  cmd=get_table : db=12jan17 tbl=uniqdata
INFO  12-01 18:31:04,061 - 27: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
INFO  12-01 18:31:04,063 - ObjectStore, initialize called
INFO  12-01 18:31:04,068 - Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
INFO  12-01 18:31:04,069 - Using direct SQL, underlying DB is DERBY
INFO  12-01 18:31:04,069 - Initialized ObjectStore
INFO  12-01 18:31:04,101 - pool-27-thread-46 Starting to optimize plan
ERROR 12-01 18:31:04,168 - pool-27-thread-46 Cannot convert12-01-2017 16:02:28 to Time/Long type valueUnparseable date: "12-01-2017 16:02:28"
ERROR 12-01 18:31:04,185 - pool-27-thread-46 Cannot convert12-01-2017 16:02:08 to Time/Long type valueUnparseable date: "12-01-2017 16:02:08"
ERROR 12-01 18:31:04,185 - pool-27-thread-46 Cannot convert12-01-2017 16:02:08 to Time/Long type valueUnparseable date: "12-01-2017 16:02:08"
ERROR 12-01 18:31:04,204 - pool-27-thread-46 Cannot convert12-01-2017 16:02:08 to Time/Long type valueUnparseable date: "12-01-2017 16:02:08"
ERROR 12-01 18:31:04,210 - Error executing query, currentState RUNNING, org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
CarbonDictionaryDecoder [CarbonDecoderRelation(Map(dob#280 -> dob#280, 
double_column1#287 -> double_column1#287, decimal_column1#285 -> 
decimal_column1#285, cust_id#282L -> cust_id#282L, integer_column1#289L -> 
integer_column1#289L, decimal_column2#286 -> decimal_column2#286, cust_name#278 
-> cust_name#278, double_column2#288 -> double_column2#288, 
active_emui_version#279 -> active_emui_version#279, bigint_column1#283L -> 
bigint_column1#283L, bigint_column2#284L -> bigint_column2#284L, doj#281 -> 
doj#281),CarbonDatasourceRelation(`12jan17`.`uniqdata`,None))], 
ExcludeProfile(ArrayBuffer()), CarbonAliasDecoderRelation()
+- TungstenAggregate(key=[], 
functions=[(count(1),mode=Final,isDistinct=false)], output=[_c0#750L])
   +- TungstenExchange SinglePartition, None
  +- TungstenAggregate(key=[], 
functions=[(count(1),mode=Partial,isDistinct=false)], output=[count#754L])
 +- CarbonScan CarbonRelation 12jan17, uniqdata, 
CarbonMetaData(ArrayBuffer(cust_name, active_emui_version, dob, 
doj),ArrayBuffer(cust_id, bigint_column1, bigint_column2, decimal_column1, 
decimal_column2, double_column1, double_column2, 
integer_column1),org.apache.carbondata.core.carbon.metadata.schema.table.CarbonTable@2302bcb1,DictionaryMap(Map(cust_name
 -> true, active_emui_version -> true, dob -> false, doj -> false))), 
org.apache.carbondata.spark.merger.TableMeta@2d38370a, None, true

at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
at 

[jira] [Created] (CARBONDATA-578) CarbonData V2 Format Default Behavior

2016-12-29 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-578:


 Summary: CarbonData V2 Format Default Behavior
 Key: CARBONDATA-578
 URL: https://issues.apache.org/jira/browse/CARBONDATA-578
 Project: CarbonData
  Issue Type: Bug
Reporter: Pallavi Singh
Priority: Minor


CarbonCommonConstants.java specifies the default as:

 public static final String CARBON_DATA_FILE_DEFAULT_VERSION = "V2";

Then why does CarbonDataReaderFactory.java

  public DimensionColumnChunkReader getDimensionColumnChunkReader(ColumnarFormatVersion version,
      BlockletInfo blockletInfo, int[] eachColumnValueSize, String filePath) {
    switch (version) {
      case V2:
        return new CompressedDimensionChunkFileBasedReaderV2(blockletInfo, eachColumnValueSize,
            filePath);
      case V1:
        return new CompressedDimensionChunkFileBasedReaderV1(blockletInfo, eachColumnValueSize,
            filePath);
      default:
        throw new IllegalArgumentException("invalid format version: " + version);
    }
  }

throw an exception for an invalid format when the default format version is V2? By default, the factory should fall back to V2 as the format version.
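
A minimal sketch of the change the report seems to ask for (illustrative only, not a committed fix): let the default branch of the switch fall back to the V2 reader instead of throwing.

  public DimensionColumnChunkReader getDimensionColumnChunkReader(ColumnarFormatVersion version,
      BlockletInfo blockletInfo, int[] eachColumnValueSize, String filePath) {
    switch (version) {
      case V1:
        return new CompressedDimensionChunkFileBasedReaderV1(blockletInfo, eachColumnValueSize,
            filePath);
      case V2:
      default:
        // Assumption: treat any unrecognized version as the documented default (V2)
        // instead of throwing IllegalArgumentException.
        return new CompressedDimensionChunkFileBasedReaderV2(blockletInfo, eachColumnValueSize,
            filePath);
    }
  }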




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-775) Update Documentation for Supported Datatypes

2017-03-15 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-775:


 Summary: Update Documentation for Supported Datatypes
 Key: CARBONDATA-775
 URL: https://issues.apache.org/jira/browse/CARBONDATA-775
 Project: CarbonData
  Issue Type: Improvement
  Components: docs
Reporter: Pallavi Singh






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CARBONDATA-897) Redundant Fields Inside * **Global Dictionary Configurations** in Configuration-parameters.md

2017-04-11 Thread Pallavi Singh (JIRA)
Pallavi Singh created CARBONDATA-897:


 Summary: Redundant Fields Inside * **Global Dictionary Configurations** in Configuration-parameters.md
 Key: CARBONDATA-897
 URL: https://issues.apache.org/jira/browse/CARBONDATA-897
 Project: CarbonData
  Issue Type: Bug
  Components: docs
Reporter: Pallavi Singh
Assignee: Pallavi Singh
Priority: Minor
 Attachments: Configurations.png

In the Configuration-parameters.md file, under the Global Dictionary Configurations table, the row for the high.cardinality.threshold field has extra columns with redundant values.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)