[GitHub] incubator-carbondata pull request #306: Update readme as per IPMC comments

2016-11-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/incubator-carbondata/pull/306




Re: Use of ANTLR instead of CarbonSqlParser

2016-11-08 Thread Jacky Li
Hi,

Yes, an ANTLR parser is needed in any case, and yes, we plan to support Spark 2
by the end of this year, so feel free to create a JIRA ticket for the parser.
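
For anyone picking this up, a minimal sketch of how an ANTLR4-generated parser is typically driven from Java (`CarbonSqlLexer`, `CarbonSqlParser`, and the `statement` start rule are hypothetical names that a grammar file would generate, not existing CarbonData classes):

```java
import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

public class AntlrParserSketch {
  public static void main(String[] args) {
    // Tokenize the SQL text with the lexer generated from the grammar.
    CarbonSqlLexer lexer = new CarbonSqlLexer(new ANTLRInputStream("SELECT a FROM t"));
    CommonTokenStream tokens = new CommonTokenStream(lexer);
    // Parse starting from the grammar's (hypothetical) top-level rule.
    CarbonSqlParser parser = new CarbonSqlParser(tokens);
    ParseTree tree = parser.statement();
    // A generated visitor or listener would then walk this tree to build commands.
    System.out.println(tree.toStringTree(parser));
  }
}
```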

Regards,
Jacky

> On Nov 7, 2016, at 3:59 PM, Anurag Srivastava wrote:
> 
> Hi,
> 
> We can use ANTLR with Spark 1.5/1.6; there is no compatibility issue. We can
> replace the Carbon parser with ANTLR even while CarbonData is integrated
> with Spark 1.5/1.6, and when we integrate with Spark 2.0, not many
> changes will be required in the ANTLR parser.
> 
> But if integration with Spark 2.0 is planned for the near future, we can
> switch to the ANTLR parser at that time as well.
> 
> On Mon, Nov 7, 2016 at 6:59 AM, Jacky Li  wrote:
> 
>> Hi,
>> 
>> It is because CarbonData is currently integrated with Spark 1.5/1.6 and
>> CarbonContext is based on HiveContext, so it relies on the Hive parser in
>> HiveContext. But you are right, there is no design limitation here;
>> Carbon can switch to ANTLR. I see that Spark 2.0 uses ANTLR as
>> well, so it may be a good time to switch the parser when doing the
>> integration with Spark 2.0.
>> What is your idea?
>> 
>> Regards,
>> Jacky
>> 
>> 
>>> On Nov 4, 2016, at 3:47 PM, Anurag Srivastava wrote:
>>> 
>>> Hi,
>>> 
>>> We are using CarbonSqlParser to parse queries, but we could use ANTLR
>>> for the same purpose.
>>> 
>>> Is there any specific reason for using CarbonSqlParser? In my view, we
>>> could handle parsing better with ANTLR.
>>> 
>>> 
>>> --
>>> Thanks
>>> Anurag Srivastava, Software Consultant
>>> Knoldus Software LLP
>>> India - US - Canada
>> 
>> 
> 
> 
> --
> Thanks
> Anurag Srivastava, Software Consultant
> Knoldus Software LLP
> India - US - Canada



Re: As planned, we are ready to make Apache CarbonData 0.2.0 release:

2016-11-08 Thread Jacky Li
+1

Regards,
Jacky

> On Nov 9, 2016, at 9:05 AM, Jay <2550062...@qq.com> wrote:
> 
> +1
> regards
> Jay
> 
> 
> 
> 
> -- Original Message --
> From: "向志强";
> Sent: Wednesday, November 9, 2016, 8:59 AM
> To: "dev";
> 
> Subject: Re: As planned, we are ready to make Apache CarbonData 0.2.0 release:
> 
> 
> 
> Not needing to install Thrift to build the project is great.
> 
> 2016-11-08 23:16 GMT+08:00 QiangCai :
> 
>> I look forward to releasing this version.
>> CarbonData has improved query and load performance, and it is good news that
>> Thrift is no longer needed to build the project.
>> Btw, how many PRs were merged into this version?





[GitHub] incubator-carbondata pull request #263: [CARBONDATA-2] Data load integration...

2016-11-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/263#discussion_r87125387
  
--- Diff: integration/spark/src/main/java/org/apache/carbondata/spark/load/CarbonLoaderUtil.java ---
@@ -215,6 +227,105 @@ public static void executeGraph(CarbonLoadModel loadModel, String storeLocation,
         info, loadModel.getPartitionId(), loadModel.getCarbonDataLoadSchema());
   }
 
+  public static void executeNewDataLoad(CarbonLoadModel loadModel, String storeLocation,
+      String hdfsStoreLocation, RecordReader[] recordReaders)
+      throws Exception {
+    if (!new File(storeLocation).mkdirs()) {
+      LOGGER.error("Error while creating the temp store path: " + storeLocation);
+    }
+    CarbonDataLoadConfiguration configuration = new CarbonDataLoadConfiguration();
+    String databaseName = loadModel.getDatabaseName();
+    String tableName = loadModel.getTableName();
+    String tempLocationKey = databaseName + CarbonCommonConstants.UNDERSCORE + tableName
+        + CarbonCommonConstants.UNDERSCORE + loadModel.getTaskNo();
+    CarbonProperties.getInstance().addProperty(tempLocationKey, storeLocation);
+    CarbonProperties.getInstance()
+        .addProperty(CarbonCommonConstants.STORE_LOCATION_HDFS, hdfsStoreLocation);
+    // CarbonProperties.getInstance().addProperty("store_output_location", outPutLoc);
+    CarbonProperties.getInstance().addProperty("send.signal.load", "false");
+
+    CarbonTable carbonTable = loadModel.getCarbonDataLoadSchema().getCarbonTable();
+    AbsoluteTableIdentifier identifier =
+        carbonTable.getAbsoluteTableIdentifier();
+    configuration.setTableIdentifier(identifier);
+    String csvHeader = loadModel.getCsvHeader();
+    String csvFileName = null;
+    if (csvHeader != null && !csvHeader.isEmpty()) {
+      configuration.setHeader(CarbonDataProcessorUtil.getColumnFields(csvHeader, ","));
+    } else {
+      CarbonFile csvFile =
+          CarbonDataProcessorUtil.getCsvFileToRead(loadModel.getFactFilesToProcess().get(0));
--- End diff --

The CSV file is not a `CarbonFile`; here we just need to pass the file path string to validateHeader, right?
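
In other words, roughly (a sketch only; `validateHeader`'s exact signature is not visible in this diff):

```java
// Pass the CSV file path string directly instead of wrapping it in a CarbonFile.
String csvFilePath = loadModel.getFactFilesToProcess().get(0);
CarbonDataProcessorUtil.validateHeader(csvFilePath, csvHeader);  // signature assumed
```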




[GitHub] incubator-carbondata pull request #263: [CARBONDATA-2] Data load integration...

2016-11-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/263#discussion_r87123603
  
--- Diff: processing/src/main/java/org/apache/carbondata/processing/newflow/steps/DataConverterProcessorStepImpl.java ---
@@ -47,20 +58,109 @@ public DataConverterProcessorStepImpl(CarbonDataLoadConfiguration configuration,
 
   @Override
   public void initialize() throws CarbonDataLoadingException {
-    encoder = new RowConverterImpl(child.getOutput(), configuration);
-    child.initialize();
+    super.initialize();
+    BadRecordslogger badRecordLogger = createBadRecordLogger();
+    converter = new RowConverterImpl(child.getOutput(), configuration, badRecordLogger);
+    converter.initialize();
+  }
+
+  /**
+   * Create the iterator using child iterator.
+   *
+   * @param childIter
+   * @return new iterator with step specific processing.
+   */
+  @Override
+  protected Iterator<CarbonRowBatch> getIterator(final Iterator<CarbonRowBatch> childIter) {
+    return new CarbonIterator<CarbonRowBatch>() {
+      RowConverter localConverter = converter.createCopyForNewThread();
+      @Override public boolean hasNext() {
+        return childIter.hasNext();
+      }
+
+      @Override public CarbonRowBatch next() {
+        return processRowBatch(childIter.next(), localConverter);
+      }
+    };
+  }
+
+  /**
+   * Process the batch of rows as per the step logic.
+   *
+   * @param rowBatch
+   * @return processed row.
+   */
+  protected CarbonRowBatch processRowBatch(CarbonRowBatch rowBatch, RowConverter localConverter) {
+    CarbonRowBatch newBatch = new CarbonRowBatch();
+    Iterator<CarbonRow> batchIterator = rowBatch.getBatchIterator();
+    while (batchIterator.hasNext()) {
+      newBatch.addRow(localConverter.convert(batchIterator.next()));
+    }
+    return newBatch;
   }
 
   @Override
   protected CarbonRow processRow(CarbonRow row) {
-    return encoder.convert(row);
+    // Not implemented
+    return null;
+  }
+
+  private BadRecordslogger createBadRecordLogger() {
+    boolean badRecordsLogRedirect = false;
+    boolean badRecordConvertNullDisable = false;
+    boolean badRecordsLoggerEnable = Boolean.parseBoolean(
+        configuration.getDataLoadProperty(DataLoadProcessorConstants.BAD_RECORDS_LOGGER_ENABLE)
+            .toString());
+    Object bad_records_action =
+        configuration.getDataLoadProperty(DataLoadProcessorConstants.BAD_RECORDS_LOGGER_ACTION)
+            .toString();
+    if (null != bad_records_action) {
+      LoggerAction loggerAction = null;
+      try {
+        loggerAction = LoggerAction.valueOf(bad_records_action.toString().toUpperCase());
+      } catch (IllegalArgumentException e) {
+        loggerAction = LoggerAction.FORCE;
+      }
+      switch (loggerAction) {
+        case FORCE:
+          badRecordConvertNullDisable = false;
+          break;
+        case REDIRECT:
+          badRecordsLogRedirect = true;
+          badRecordConvertNullDisable = true;
+          break;
+        case IGNORE:
+          badRecordsLogRedirect = false;
+          badRecordConvertNullDisable = true;
+          break;
+      }
+    }
+    CarbonTableIdentifier identifier =
+        configuration.getTableIdentifier().getCarbonTableIdentifier();
+    String key = identifier.getDatabaseName() + '/' + identifier.getTableName() + '_' + identifier
--- End diff --

Add this string concatenation to `CarbonTableIdentifier` as a utility function.
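
Something along these lines could work (a sketch; `getBadRecordLoggerKey` is a suggested name, and since the concatenation is cut off after `identifier` in the diff, the trailing `getTableId()` call is an assumption):

```java
// Hypothetical utility on CarbonTableIdentifier so the bad-record-logger key
// is built in one place rather than concatenated at every call site.
public String getBadRecordLoggerKey() {
  return getDatabaseName() + '/' + getTableName() + '_' + getTableId();
}
```

Callers would then simply use `identifier.getBadRecordLoggerKey()`.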




[GitHub] incubator-carbondata pull request #263: [CARBONDATA-2] Data load integration...

2016-11-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/263#discussion_r87123336
  
--- Diff: processing/src/main/java/org/apache/carbondata/processing/newflow/steps/DataConverterProcessorStepImpl.java ---
@@ -47,20 +58,109 @@ public DataConverterProcessorStepImpl(CarbonDataLoadConfiguration configuration,
 
   @Override
   public void initialize() throws CarbonDataLoadingException {
-    encoder = new RowConverterImpl(child.getOutput(), configuration);
-    child.initialize();
+    super.initialize();
+    BadRecordslogger badRecordLogger = createBadRecordLogger();
+    converter = new RowConverterImpl(child.getOutput(), configuration, badRecordLogger);
+    converter.initialize();
+  }
+
+  /**
+   * Create the iterator using child iterator.
+   *
+   * @param childIter
+   * @return new iterator with step specific processing.
+   */
+  @Override
+  protected Iterator<CarbonRowBatch> getIterator(final Iterator<CarbonRowBatch> childIter) {
+    return new CarbonIterator<CarbonRowBatch>() {
+      RowConverter localConverter = converter.createCopyForNewThread();
+      @Override public boolean hasNext() {
+        return childIter.hasNext();
+      }
+
+      @Override public CarbonRowBatch next() {
+        return processRowBatch(childIter.next(), localConverter);
+      }
+    };
+  }
+
+  /**
+   * Process the batch of rows as per the step logic.
+   *
+   * @param rowBatch
+   * @return processed row.
+   */
+  protected CarbonRowBatch processRowBatch(CarbonRowBatch rowBatch, RowConverter localConverter) {
+    CarbonRowBatch newBatch = new CarbonRowBatch();
+    Iterator<CarbonRow> batchIterator = rowBatch.getBatchIterator();
+    while (batchIterator.hasNext()) {
+      newBatch.addRow(localConverter.convert(batchIterator.next()));
+    }
+    return newBatch;
   }
 
   @Override
   protected CarbonRow processRow(CarbonRow row) {
-    return encoder.convert(row);
+    // Not implemented
+    return null;
--- End diff --

Throw an `UnsupportedOperationException` here?
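
That is, something like:

```java
@Override
protected CarbonRow processRow(CarbonRow row) {
  // This step converts whole batches in processRowBatch(), so a row-at-a-time
  // call indicates a programming error and should fail loudly rather than return null.
  throw new UnsupportedOperationException(
      "processRow is not supported in DataConverterProcessorStepImpl");
}
```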




[GitHub] incubator-carbondata pull request #263: [CARBONDATA-2] Data load integration...

2016-11-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/263#discussion_r87122653
  
--- Diff: processing/src/main/java/org/apache/carbondata/processing/newflow/parser/impl/RowParserImpl.java ---
@@ -18,22 +18,80 @@
  */
 package org.apache.carbondata.processing.newflow.parser.impl;
 
+import org.apache.carbondata.processing.newflow.CarbonDataLoadConfiguration;
+import org.apache.carbondata.processing.newflow.DataField;
+import org.apache.carbondata.processing.newflow.constants.DataLoadProcessorConstants;
+import org.apache.carbondata.processing.newflow.parser.CarbonParserFactory;
 import org.apache.carbondata.processing.newflow.parser.GenericParser;
 import org.apache.carbondata.processing.newflow.parser.RowParser;
 
 public class RowParserImpl implements RowParser {
 
   private GenericParser[] genericParsers;
 
-  public RowParserImpl(GenericParser[] genericParsers) {
-    this.genericParsers = genericParsers;
+  private int[] outputMapping;
+
+  private int[] inputMapping;
+
+  private int numberOfColumns;
+
+  public RowParserImpl(DataField[] output, CarbonDataLoadConfiguration configuration) {
+    String[] complexDelimiters =
+        (String[]) configuration.getDataLoadProperty(DataLoadProcessorConstants.COMPLEX_DELIMITERS);
+    String nullFormat =
+        configuration.getDataLoadProperty(DataLoadProcessorConstants.SERIALIZATION_NULL_FORMAT)
+            .toString();
+    DataField[] input = getInput(configuration);
+    genericParsers = new GenericParser[input.length];
+    for (int i = 0; i < genericParsers.length; i++) {
+      genericParsers[i] =
+          CarbonParserFactory.createParser(input[i].getColumn(), complexDelimiters, nullFormat);
+    }
+    outputMapping = new int[output.length];
+    for (int i = 0; i < input.length; i++) {
+      for (int j = 0; j < output.length; j++) {
+        if (input[i].getColumn().equals(output[j].getColumn())) {
+          outputMapping[i] = j;
+          break;
+        }
+      }
+    }
+  }
+
+  public DataField[] getInput(CarbonDataLoadConfiguration configuration) {
+    DataField[] fields = configuration.getDataFields();
+    String[] header = configuration.getHeader();
+    numberOfColumns = header.length;
+    DataField[] input = new DataField[fields.length];
+    inputMapping = new int[input.length];
+    int k = 0;
+    for (int i = 0; i < numberOfColumns; i++) {
+      for (int j = 0; j < fields.length; j++) {
+        if (header[i].equalsIgnoreCase(fields[j].getColumn().getColName())) {
+          input[k] = fields[j];
+          inputMapping[k] = i;
+          k++;
+          break;
+        }
+      }
+    }
+    return input;
   }
 
   @Override
   public Object[] parseRow(Object[] row) {
-    for (int i = 0; i < row.length; i++) {
-      row[i] = genericParsers[i].parse(row[i].toString());
+    // If number of columns are less in a row then create new array with same size of he
--- End diff --

The last word of this sentence is not correct.
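
(The comment presumably means "…with the same size as the header". As a sketch, the padding logic being described would look something like this; illustrative only, since the diff is cut off here:)

```java
// Pad rows that arrive with fewer cells than the header, so every row
// downstream has exactly numberOfColumns entries (missing cells stay null).
if (row.length < numberOfColumns) {
  Object[] padded = new Object[numberOfColumns];
  System.arraycopy(row, 0, padded, 0, row.length);
  row = padded;
}
```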




[GitHub] incubator-carbondata pull request #263: [CARBONDATA-2] Data load integration...

2016-11-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/263#discussion_r87122331
  
--- Diff: processing/src/main/java/org/apache/carbondata/processing/newflow/converter/impl/RowConverterImpl.java ---
@@ -43,35 +45,58 @@
 
   private CarbonDataLoadConfiguration configuration;
 
+  private DataField[] fields;
+
   private FieldConverter[] fieldConverters;
 
-  public RowConverterImpl(DataField[] fields, CarbonDataLoadConfiguration configuration) {
+  private BadRecordslogger badRecordLogger;
+
+  private BadRecordLogHolder logHolder;
+
+  public RowConverterImpl(DataField[] fields, CarbonDataLoadConfiguration configuration,
+      BadRecordslogger badRecordLogger) {
+    this.fields = fields;
     this.configuration = configuration;
+    this.badRecordLogger = badRecordLogger;
+  }
+
+  @Override
+  public void initialize() {
     CacheProvider cacheProvider = CacheProvider.getInstance();
     Cache cache =
         cacheProvider.createCache(CacheType.REVERSE_DICTIONARY,
             configuration.getTableIdentifier().getStorePath());
+    String nullFormat =
+        configuration.getDataLoadProperty(DataLoadProcessorConstants.SERIALIZATION_NULL_FORMAT)
+            .toString();
     List<FieldConverter> fieldConverterList = new ArrayList<>();
 
     long lruCacheStartTime = System.currentTimeMillis();
 
     for (int i = 0; i < fields.length; i++) {
       FieldConverter fieldConverter = FieldEncoderFactory.getInstance()
           .createFieldEncoder(fields[i], cache,
-              configuration.getTableIdentifier().getCarbonTableIdentifier(), i);
-      if (fieldConverter != null) {
-        fieldConverterList.add(fieldConverter);
-      }
+              configuration.getTableIdentifier().getCarbonTableIdentifier(), i, nullFormat);
+      fieldConverterList.add(fieldConverter);
     }
     CarbonTimeStatisticsFactory.getLoadStatisticsInstance()
         .recordLruCacheLoadTime((System.currentTimeMillis() - lruCacheStartTime) / 1000.0);
     fieldConverters = fieldConverterList.toArray(new FieldConverter[fieldConverterList.size()]);
+    logHolder = new BadRecordLogHolder();
   }
 
   @Override
   public CarbonRow convert(CarbonRow row) throws CarbonDataLoadingException {
+    CarbonRow copy = row.getCopy();
--- End diff --

Why copy it every time? Copy it only if it is a bad record.
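
A sketch of that suggestion (`isBadRecord()` is a hypothetical accessor on `BadRecordLogHolder`, and the loop body is illustrative since the diff does not show the rest of `convert`):

```java
@Override
public CarbonRow convert(CarbonRow row) throws CarbonDataLoadingException {
  CarbonRow copy = null;  // defer row.getCopy() instead of copying every row
  for (int i = 0; i < fieldConverters.length; i++) {
    fieldConverters[i].convert(row, logHolder);
    if (logHolder.isBadRecord() && copy == null) {
      copy = row.getCopy();  // copy only once a bad record is actually seen
    }
  }
  return row;
}
```

One trade-off to weigh: the converters mutate `row` in place, so a copy taken at failure time no longer holds the raw input values for the already-converted fields; if the logger must see the original row, the copy has to happen earlier.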




[GitHub] incubator-carbondata pull request #263: [CARBONDATA-2] Data load integration...

2016-11-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/263#discussion_r87120631
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchema.scala ---
@@ -1112,24 +1085,27 @@ case class LoadTableUsingKettle(
       val dataLoadSchema = new CarbonDataLoadSchema(table)
       // Need to fill dimension relation
       carbonLoadModel.setCarbonDataLoadSchema(dataLoadSchema)
-      var storeLocation = ""
       val configuredStore = CarbonLoaderUtil.getConfiguredLocalDirs(SparkEnv.get.conf)
-      if (null != configuredStore && configuredStore.nonEmpty) {
-        storeLocation = configuredStore(Random.nextInt(configuredStore.length))
-      }
-      if (storeLocation == null) {
-        storeLocation = System.getProperty("java.io.tmpdir")
-      }
 
       var partitionLocation = relation.tableMeta.storePath + "/partition/" +
           relation.tableMeta.carbonTableIdentifier.getDatabaseName + "/" +
           relation.tableMeta.carbonTableIdentifier.getTableName + "/"
 
-      storeLocation = storeLocation + "/carbonstore/" + System.nanoTime()
 
       val columinar = sqlContext.getConf("carbon.is.columnar.storage", "true").toBoolean
       val kettleHomePath = CarbonScalaUtil.getKettleHome(sqlContext)
 
+      val useKettle = options.get("use_kettle") match {
+        case Some(value) => value.toBoolean
+        case _ =>
+          val useKettleLocal = System.getProperty("use.kettle")
+          if (useKettleLocal == null) {
+            sqlContext.sparkContext.getConf.get("use_kettle_default", "true").toBoolean
--- End diff --

It does not seem good to embed this option string in this class; can we move it to `CarbonOption`?




Re: List the supported datatypes in carbondata

2016-11-08 Thread Liang Chen
Hi

Please find the data type list:
https://cwiki.apache.org/confluence/display/CARBONDATA/Carbon+Data+Types

Regards
Liang

cenyuhai wrote:
> I think we should make it clear which datatypes are supported in
> CarbonData.
> 
> These types are confusing (int or integer, short or smallint, long or
> bigint, double or numeric).
> 
> Some datatypes are not supported now (numeric, integer, short, smallint,
> long).
> 
> We should tell users in the docs which datatypes are supported; such a
> doc does not exist yet.
> 
> We need a doc like
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types
> 
> I suggest that we support (string, int, smallint, bigint, float, double,
> decimal, timestamp, array, struct) at first.







Re: As planned, we are ready to make Apache CarbonData 0.2.0 release:

2016-11-08 Thread QiangCai
I look forward to releasing this version.
CarbonData has improved query and load performance, and it is good news that
Thrift is no longer needed to build the project.
Btw, how many PRs were merged into this version?





[jira] [Created] (CARBONDATA-394) Carbon Loading data from files having invalid extensions or no extension

2016-11-08 Thread SWATI RAO (JIRA)
SWATI RAO created CARBONDATA-394:


 Summary: Carbon Loading data from files having invalid extensions 
or no extension
 Key: CARBONDATA-394
 URL: https://issues.apache.org/jira/browse/CARBONDATA-394
 Project: CarbonData
  Issue Type: Bug
Reporter: SWATI RAO
Priority: Trivial


When I try to run the following queries:

LOAD DATA inpath 'hdfs://localhost:54310/user/hive/warehouse/file1.csv.csv' INTO table empdata options('DELIMITER'=',', 'FILEHEADER'='id, name','QUOTECHAR'='"');

LOAD DATA inpath 'hdfs://localhost:54310/user/hive/warehouse/file2.csv.csv.csv.csv' INTO table empdata options('DELIMITER'=',', 'FILEHEADER'='id, name','QUOTECHAR'='"');

LOAD DATA inpath 'hdfs://localhost:54310/user/hive/warehouse/file3.txttt' INTO table empdata options('DELIMITER'=',', 'FILEHEADER'='id, name','QUOTECHAR'='"');

LOAD DATA inpath 'hdfs://localhost:54310/user/hive/warehouse/file4' INTO table empdata options('DELIMITER'=',', 'FILEHEADER'='id, name','QUOTECHAR'='"');

LOAD DATA inpath 'hdfs://localhost:54310/user/hive/warehouse/file5.txt.bat.csv' INTO table empdata options('DELIMITER'=',', 'FILEHEADER'='id, name','QUOTECHAR'='"');

We should get an input file error in each of these cases, but the data is loaded successfully into the Carbon table.
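
A minimal sketch of the kind of extension check that could reject these inputs (illustrative only; the class and method names here are not CarbonData API):

```java
import java.util.Arrays;
import java.util.List;

public class InputFileValidator {
  // Hypothetical whitelist of extensions accepted for LOAD DATA.
  private static final List<String> ALLOWED = Arrays.asList("csv");

  /** Accepts a path only when the file name ends in exactly one allowed extension. */
  public static boolean hasValidExtension(String path) {
    String name = path.substring(path.lastIndexOf('/') + 1);
    String[] parts = name.split("\\.");
    // "file1.csv.csv" splits into three parts and is rejected; "file4" has no extension.
    return parts.length == 2 && ALLOWED.contains(parts[1].toLowerCase());
  }

  public static void main(String[] args) {
    System.out.println(hasValidExtension("/user/hive/warehouse/file1.csv.csv")); // false
    System.out.println(hasValidExtension("/user/hive/warehouse/file4"));         // false
    System.out.println(hasValidExtension("/user/hive/warehouse/ok.csv"));        // true
  }
}
```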



[GitHub] incubator-carbondata pull request #305: [CARBONDATA-393] implement test case...

2016-11-08 Thread anuragknoldus
GitHub user anuragknoldus opened a pull request:

https://github.com/apache/incubator-carbondata/pull/305

[CARBONDATA-393] implement test cases for core.keygenerator module 

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [ ] Make sure the PR title is formatted like:
   `[CARBONDATA-<Jira issue #>] Description of pull request`
 - [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes.)
 - [ ] Replace `<Jira issue #>` in the title with the actual Jira issue
   number, if there is one.
 - [ ] If this contribution is large, please file an Apache
   [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.txt).
 - [ ] Testing done

   Please provide details on:
   - Whether new unit test cases have been added, or why no new tests are required.
   - What manual testing you have done.
   - Any additional information to help reviewers in testing this change.

 - [ ] For large changes, please consider breaking them into sub-tasks under an umbrella JIRA.
 
---


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anuragknoldus/incubator-carbondata CARBONDATA-393

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/305.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #305


commit 82e25ddcec6dc033d40fd65de03def5513db1844
Author: Anurag 
Date:   2016-11-08T08:05:28Z

implement test cases for core.keygenerator

commit b9ac282c7593f43f3f81769efc8efb9e6ae880ac
Author: Anurag 
Date:   2016-11-08T08:35:59Z

rebase with master and manage test case
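
For a flavor of what such tests look like, a hedged JUnit sketch against the keygenerator package (`MultiDimKeyVarLengthGenerator` and its round-trip methods are recalled from the CarbonData source; exact signatures should be verified there):

```java
import static org.junit.Assert.assertArrayEquals;

import org.apache.carbondata.core.keygenerator.KeyGenException;
import org.apache.carbondata.core.keygenerator.mdkey.MultiDimKeyVarLengthGenerator;
import org.junit.Test;

public class KeyGeneratorRoundTripTest {

  @Test
  public void generatedKeyShouldRoundTrip() throws KeyGenException {
    // Three dimensions packed into 5, 3 and 7 bits respectively (bit widths assumed).
    MultiDimKeyVarLengthGenerator generator =
        new MultiDimKeyVarLengthGenerator(new int[] { 5, 3, 7 });
    long[] dims = { 10, 2, 100 };
    byte[] key = generator.generateKey(dims);
    // Decoding the packed key should give back the original dimension values.
    assertArrayEquals(dims, generator.getKeyArray(key));
  }
}
```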






[jira] [Created] (CARBONDATA-393) Write Unit Test cases for core.keygenerator package

2016-11-08 Thread Prabhat Kashyap (JIRA)
Prabhat Kashyap created CARBONDATA-393:
--

 Summary: Write Unit Test cases for core.keygenerator package
 Key: CARBONDATA-393
 URL: https://issues.apache.org/jira/browse/CARBONDATA-393
 Project: CarbonData
  Issue Type: Test
Reporter: Prabhat Kashyap
Priority: Trivial
