[GitHub] carbondata issue #2562: [HOTFIX] CreateDataMapPost Event was skipped in case...

2018-07-26 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2562
  
retest sdv please


---


[GitHub] carbondata pull request #2562: [HOTFIX] CreateDataMapPost Event was skipped ...

2018-07-25 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2562

[HOTFIX] CreateDataMapPost Event was skipped in case of preaggregate datamap


Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? NA
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. NA



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata hotfix1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2562.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2562


commit bfa56deb66175c40a75cdeabab68990fa3d7d58f
Author: Jatin 
Date:   2018-07-25T19:12:50Z

hotfix : CreateDataMapPost Event was skipped in case of preaggregate datamap




---


[GitHub] carbondata pull request #2527: [CARBONDATA-2758] Fix for filling data with e...

2018-07-19 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2527

[CARBONDATA-2758] Fix for filling data with Local Dictionary enabled, where 
continuous null values greater than the default batch size throw array index out 
of bounds
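The failure mode named in the title can be sketched outside CarbonData: a run of identical (null) values longer than the column-vector batch must be filled in batch-sized chunks, never by indexing past the vector's capacity. The batch size and names below are illustrative, not the project's actual constants.

```java
// Illustrative only: clamp each fill of a long null run to the batch capacity
// instead of writing past the end of the vector (the reported AIOOBE).
public class BatchFill {
    public static int[] chunkSizes(int runLength, int batchSize) {
        int n = (runLength + batchSize - 1) / batchSize; // ceil division
        int[] sizes = new int[n];
        int remaining = runLength;
        for (int i = 0; i < n; i++) {
            sizes[i] = Math.min(batchSize, remaining); // never exceed capacity
            remaining -= sizes[i];
        }
        return sizes;
    }
    public static void main(String[] args) {
        // a 70-value null run with batch size 32 fills as 32 + 32 + 6
        assert java.util.Arrays.equals(chunkSizes(70, 32), new int[]{32, 32, 6});
    }
}
```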


Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. NA



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2758

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2527.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2527


commit 585b381fd9414b7f49f67e2995b337b74bad2c39
Author: Jatin 
Date:   2018-07-19T09:06:18Z

Fix for filling data with enabled Local Dictionary having continuous null 
values greater than default batch size




---


[GitHub] carbondata pull request #2507: [CARBONDATA-2741] Fix for filling measure colum...

2018-07-17 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2507#discussion_r202988752
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/scan/collector/impl/DictionaryBasedVectorResultCollector.java
 ---
@@ -109,13 +109,19 @@ void prepareDimensionAndMeasureColumnVectors() {
 allColumnInfo[queryDimensions[i].getOrdinal()] = columnVectorInfo;
   }
 }
+//skipping non existing measure columns in measureColumnInfo as here 
data filling to be done only on existing columns
+// for non existing column it is already been filled from restructure 
based collector
+int j = 0;
--- End diff --

@ravipesala 
RestructureBasedDictionaryResultCollector already handles filling measures for 
both existing and non-existing columns in its fillMeasureData method, which is 
the correct handling.
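The skip logic discussed in this comment can be illustrated with a hedged sketch (all names hypothetical): only the measure columns present in the block are filled here, while columns added later by ALTER TABLE are left to the restructure-based collector's default filling.

```java
import java.util.*;

// Sketch of the review comment's idea (names are illustrative, not CarbonData's
// API): keep a separate index over measure columns that exist in this block,
// and fill only those; non-existing columns are already filled with defaults
// by the restructure-based collector.
public class ExistingMeasures {
    public static List<String> existingOnly(Map<String, Boolean> measures) {
        List<String> toFill = new ArrayList<>();
        for (Map.Entry<String, Boolean> m : measures.entrySet()) {
            if (m.getValue()) {
                toFill.add(m.getKey()); // skip non-existing columns
            }
        }
        return toFill;
    }
    public static void main(String[] args) {
        Map<String, Boolean> measures = new LinkedHashMap<>();
        measures.put("m1", true);
        measures.put("m2", false); // added by ALTER TABLE, absent in old blocks
        measures.put("m3", true);
        assert existingOnly(measures).equals(Arrays.asList("m1", "m3"));
    }
}
```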


---


[GitHub] carbondata pull request #2507: [CARBONDATA-2741] Fix for filling measure colum...

2018-07-16 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2507#discussion_r202897146
  
--- Diff: 
integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala
 ---
@@ -709,6 +710,18 @@ test("test alter command for boolean data type with 
correct default measure valu
   Seq(Row(1))
 )
   }
+
+  test("Alter table selection in random order"){
+sql("create table restructure_random_select (imei string,channelsId 
string,gamePointId double,deviceInformationId double," +
+" deliverycharge double) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES('table_blocksize'='2000','sort_columns'='imei')")
+sql("insert into restructure_random_select 
values('abc','def',50.5,30.2,40.6) ")
+sql("Alter table restructure_random_select add columns (age int,name 
String)")
+checkAnswer(
+  sql("select gamePointId,deviceInformationId,age,name from 
restructure_random_select where name is NULL or channelsId=4"),
--- End diff --

Made the required changes in core.


---


[GitHub] carbondata issue #2507: [CARBONDATA-2741] Fix for filling measure column data ...

2018-07-16 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2507
  
retest this please


---


[GitHub] carbondata issue #2448: [HotFix] Getting carbon table identifier to datamap ...

2018-07-13 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2448
  
retest sdv please


---


[GitHub] carbondata pull request #2507: [CARBONDATA-2741] Fix for fetching random query...

2018-07-13 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2507

[CARBONDATA-2741] Fix for query columns being fetched in random order in case of 
restructure

Description: 
Using an AttributeSet changes the column order of the query, since a set does 
not maintain insertion order; this change replaces it with a Seq.
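As a rough illustration of the ordering issue described above (plain Java collections, not Spark's AttributeSet; column names taken from the test case in this PR):

```java
import java.util.*;

// Minimal sketch: a hash-based Set can lose the order the query listed its
// columns in, while a sequence (List) preserves it -- the fix this PR applies.
public class ProjectionOrder {
    // the sequence-based approach: keep the projection exactly as requested
    public static List<String> asSequence(List<String> projection) {
        return new ArrayList<>(projection);
    }
    public static void main(String[] args) {
        List<String> projection =
            Arrays.asList("gamePointId", "deviceInformationId", "age", "name");
        // HashSet iteration order follows hash buckets, not insertion order,
        // so columns could come back shuffled -- the reported bug.
        Set<String> asSet = new HashSet<>(projection);
        assert asSequence(projection).equals(projection); // order preserved
        assert asSet.size() == projection.size();
    }
}
```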
Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? yes
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. NA



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2741

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2507.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2507


commit d4fd5ef44b2528b00456f19f02cee73cd9d427c2
Author: Jatin 
Date:   2018-07-13T15:30:33Z

Fix for fetching random query order by set in case of restructure




---


[GitHub] carbondata issue #2448: [HotFix] Getting carbon table identifier to datamap ...

2018-07-08 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2448
  
retest sdv please


---


[GitHub] carbondata issue #2448: [HotFix] Getting carbon table identifier to datamap ...

2018-07-07 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2448
  
retest this please


---


[GitHub] carbondata issue #2448: [HotFix] Getting carbon table identifier to datamap ...

2018-07-05 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2448
  
retest this please


---


[GitHub] carbondata pull request #2448: [HotFix] Getting carbon table identifier to d...

2018-07-04 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2448

[HotFix] Getting carbon table identifier to datamap events

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? NO
 
 - [ ] Any backward compatibility impacted? NO
 
 - [ ] Document update required? NO

 - [ ] Testing done NA
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2448.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2448


commit c0d23c8d6529fff5af8bcb8730c817d75e5da1ca
Author: Jatin 
Date:   2018-07-04T14:23:48Z

getting carbon table identifier to datamap events




---


[GitHub] carbondata pull request #2376: [CARBONDATA-2610] Fix for Null values in data...

2018-06-14 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2376

[CARBONDATA-2610] Fix for Null values in datamap

Problem: Datamap creation fails when null values are already loaded in a 
string-datatype column of the table.
Solution: Check for null before converting the data to a string.
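A minimal sketch of the described null guard (method name is illustrative, not the actual datamap code):

```java
// Hypothetical sketch of the fix: guard against null before converting a
// string-typed cell, so a null row cannot trigger a NullPointerException.
public class NullSafeConvert {
    public static String toIndexValue(Object cell) {
        // a null cell must not reach toString()
        return cell == null ? "" : cell.toString();
    }
    public static void main(String[] args) {
        assert toIndexValue(null).equals("");
        assert toIndexValue("abc").equals("abc");
    }
}
```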
Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? Yes
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. NA



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
bug/Carbondata-2610

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2376.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2376


commit 776712a01863749c4047de00a25ff1f71df88537
Author: Jatin 
Date:   2018-06-14T17:26:09Z

Fix for Null values in datamap




---


[GitHub] carbondata issue #2063: [CARBONDATA-2251] Refactored sdv testcase

2018-04-26 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2063
  
retest sdv please


---


[GitHub] carbondata issue #2063: [CARBONDATA-2251] Refactored sdv testcase

2018-04-25 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2063
  
retest sdv please


---


[GitHub] carbondata issue #2063: [CARBONDATA-2251] Refactored sdv testcase

2018-04-18 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2063
  
retest sdv please


---


[GitHub] carbondata pull request #2155: [CARBONDATA-2321] Fix selection of partition ...

2018-04-11 Thread jatin9896
Github user jatin9896 closed the pull request at:

https://github.com/apache/carbondata/pull/2155


---


[GitHub] carbondata pull request #2155: [CARBONDATA-2321] Fix selection of partition ...

2018-04-11 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2155

[CARBONDATA-2321] Fix selection of partition column failing randomly after 
concurrent load

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? No; tested manually.
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
bug/CARBONDATA-2321

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2155.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2155


commit 685285dbd0fce94471ce4d970fc570f10d749d25
Author: Jatin <jatin.demla@...>
Date:   2018-04-11T06:34:31Z

fix selection of partition column after concurrent load fails randomly




---


[GitHub] carbondata issue #2146: [CARBONDATA-2321] Fix for selection of partition colum...

2018-04-10 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2146
  
retest this please


---


[GitHub] carbondata pull request #2146: [CARBONDATA-2321] Fix for selection of partio...

2018-04-08 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2146

[CARBONDATA-2321] Fix for selection of partition column failing randomly after 
concurrent load

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? No; tested manually.
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2321

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2146.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2146


commit 6db9259dcb9dd28bc922572b131edd345f91fb13
Author: Jatin <jatin.demla@...>
Date:   2018-04-08T10:05:19Z

fix selection of partition column after concurrent load fails randomly




---


[GitHub] carbondata pull request #2102: [CARBONDATA-2277] fix for filter on default v...

2018-03-25 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2102

[CARBONDATA-2277] fix for filter on default values on all datatypes

1. Added handling of filter keys for direct-dictionary columns with default 
values.
2. For no-dictionary columns, changed the code to get the correct byte value of 
default values.
Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No 
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? Yes
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2277

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2102






---


[GitHub] carbondata issue #2063: [CARBONDATA-2251] Refactored sdv testcase

2018-03-16 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2063
  
retest sdv please


---


[GitHub] carbondata pull request #2063: [CARBONDATA-2251] Refactored sdv testcase

2018-03-14 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2063

[CARBONDATA-2251] Refactored sdv testcase

1. The MergeIndex test case in sdv fails if executed with a different number of 
executors or in standalone Spark.
2. Changed test cases that use Hive UDAFs such as histogram_numeric, which 
behave unexpectedly; the recommended way is to write test cases using standard 
aggregations.

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2251

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2063.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2063


commit 6de85846874821cfa0e18027da8beb56ca269cf6
Author: Jatin <jatin.demla@...>
Date:   2018-03-13T11:43:13Z

Refactored sdv testcase




---


[GitHub] carbondata issue #2005: [CARBONDATA-2207] Fix testcases after using hive met...

2018-02-27 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/2005
  
retest sdv please


---


[GitHub] carbondata pull request #2005: [CARBONDATA-2207] Fix testcases after using h...

2018-02-27 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/2005

[CARBONDATA-2207] Fix testcases after using hive metastore

CarbonTable was getting null in the hivemetastore case, so fetch it from the 
metastore instead of from carbon.
Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?No 
 
 - [ ] Any backward compatibility impacted?No 
 
 - [ ] Document update required?No 

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2207

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2005.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2005


commit 50fadb13b63822a7c0bca177a01b6a09e62136d6
Author: Jatin <jatin.demla@...>
Date:   2018-02-27T10:43:40Z

fix for hivemetastore using for carbon testcases




---


[GitHub] carbondata issue #1981: [Pre-Agg Test] Added SDV TestCase of preaggregate

2018-02-26 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1981
  
retest sdv please


---


[GitHub] carbondata pull request #1993: [CARBONDATA-2199] Fixed Dimension column afte...

2018-02-23 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1993

[CARBONDATA-2199] Fixed Dimension column after restructure getting wrong 
block datatype

Problem: Changing the datatype of a measure present in sort_columns triggers a 
restructure, after which the column's datatype is changed to the new actual 
datatype; accessing the data with the changed datatype throws an 
incorrect-length exception.

Solution: Store the datatype in DimensionInfo while restructuring, and read 
back the same datatype to get the block data type.
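A hedged sketch of why the stored block datatype matters (all names and lengths here are illustrative, not CarbonData's actual classes):

```java
// Sketch: after ALTER TABLE changes a sort-column's datatype, a reader must
// size values by the datatype the block was actually written with, not by the
// altered schema datatype -- otherwise it computes the wrong value length.
public class BlockTypeLookup {
    public static int valueLengthBytes(String blockDataType) {
        switch (blockDataType) {
            case "int":  return 4;
            case "long": return 8;
            default: throw new IllegalArgumentException(blockDataType);
        }
    }
    public static void main(String[] args) {
        String schemaType = "long"; // type after the ALTER
        String blockType  = "int";  // type stored per block during restructure
        assert valueLengthBytes(blockType) == 4;  // correct length for old data
        assert valueLengthBytes(schemaType) == 8; // would misread old blocks
    }
}
```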
Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? Yes
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2199

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1993.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1993


commit 591c83f38be254ec0fa0e4a3d6423b244e9d106f
Author: Jatin <jatin.demla@...>
Date:   2018-02-23T11:26:17Z

Fixed Dimension column after restructure getting wrong block datatype




---


[GitHub] carbondata issue #1981: [Pre-Agg Test] Added SDV TestCase of preaggregate

2018-02-20 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1981
  
retest sdv please


---


[GitHub] carbondata pull request #1981: [Pre-Agg Test] Added SDV TestCase of preaggre...

2018-02-15 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1981

[Pre-Agg Test] Added SDV TestCase of preaggregate

1. Added test cases for pre-aggregate create and load.
2. Added test cases for time series in pre-aggregate.

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
sdvTestCasePreAggregate

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1981.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1981


commit 092445b75423f327d503b0210789d3de7525642d
Author: Jatin <jatin.demla@...>
Date:   2018-02-15T14:55:20Z

Added SDV TestCase of preaggregate




---


[GitHub] carbondata pull request #1954: [Documentation] Formatting issue fixed

2018-02-08 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1954

[Documentation] Formatting issue fixed

Updated document syntax that was causing PDF generation to fail.
Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata DocumentUpdate

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1954.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1954


commit e4cb62cedc6a48a6a2b10f11635ea561a4453ca1
Author: Jatin <jatin.demla@...>
Date:   2018-02-08T10:55:14Z

updated data-management for pdf generation




---


[GitHub] carbondata pull request #1860: [CARBONDATA-2080] [S3-Implementation] Propaga...

2018-02-03 Thread jatin9896
Github user jatin9896 closed the pull request at:

https://github.com/apache/carbondata/pull/1860


---


[GitHub] carbondata pull request #1914: [CARBONDATA-2122] Corrected bad record path v...

2018-02-02 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1914

[CARBONDATA-2122] Corrected bad record path validation

A data load with the bad-record REDIRECT action and an empty location should 
throw an Invalid Path exception.
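A minimal sketch of the validation being described (the method and message are hypothetical, not CarbonData's actual API):

```java
// Sketch: fail fast on an empty bad-records location instead of silently
// redirecting records to nowhere.
public class BadRecordPath {
    public static void validate(String path) {
        if (path == null || path.trim().isEmpty()) {
            throw new IllegalArgumentException("Invalid bad records location.");
        }
    }
    public static void main(String[] args) {
        boolean failed = false;
        try {
            validate("");
        } catch (IllegalArgumentException e) {
            failed = true;
        }
        assert failed;                 // empty location is rejected
        validate("/tmp/badrecords");   // a non-empty path passes
    }
}
```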

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?No 
 
 - [ ] Any backward compatibility impacted?No
 
 - [ ] Document update required?No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?Yes
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2122

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1914.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1914


commit 0bc09a48210b534fbb8be1c9bb815fc00906215c
Author: Jatin <jatin.demla@...>
Date:   2018-02-02T14:25:16Z

corrected bad record path validation




---


[GitHub] carbondata issue #1860: [CARBONDATA-2080] [S3-Implementation] Propagated had...

2018-02-02 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1860
  
retest this please


---


[GitHub] carbondata pull request #1860: [CARBONDATA-2080] [S3-Implementation] Propaga...

2018-01-25 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1860

[CARBONDATA-2080] [S3-Implementation] Propagated hadoopConf from driver to 
executor for s3 implementation in cluster mode.

Problem: hadoopConf was not getting propagated from the driver to the 
executors, which is why loads were failing in a distributed environment.
Solution: Set the Hadoop conf in the base class CarbonRDD.
How to verify this PR: 
Execute a load in cluster mode with an S3 location; it should succeed.
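The propagation pattern described above can be sketched generically (this is not CarbonRDD's actual code; class and key names are illustrative): Hadoop's Configuration is not Serializable, so the driver snapshots the needed entries into a serializable holder that ships to executors with the RDD.

```java
import java.io.*;
import java.util.*;

// Sketch: a serializable snapshot of driver-side configuration entries that
// executors can deserialize and read back. Key names are illustrative.
public class ConfSnapshot implements Serializable {
    private final HashMap<String, String> entries;
    public ConfSnapshot(Map<String, String> entries) {
        this.entries = new HashMap<>(entries); // driver-side copy
    }
    public String get(String key) { return entries.get(key); }

    public static void main(String[] args) throws Exception {
        ConfSnapshot driverSide =
            new ConfSnapshot(Map.of("fs.s3a.access.key", "xxx"));

        // round-trip through Java serialization, as a task closure would be
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(driverSide);
        oos.close();
        ConfSnapshot executorSide = (ConfSnapshot) new ObjectInputStream(
            new ByteArrayInputStream(bos.toByteArray())).readObject();

        // executor side sees the driver's value
        assert "xxx".equals(executorSide.get("fs.s3a.access.key"));
    }
}
```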

Be sure to do all of the following checklists to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? No
- How it is tested? Please attach test report. Testing is done 
Manually
- Is it a performance related change? Please attach the performance 
test report. No 
- Any additional information to help reviewers in testing this 
change. No
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-2080

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1860


commit a310a9a5a6d230501bfd26d7c3605791638a1860
Author: Jatin <jatin.demla@...>
Date:   2018-01-25T11:23:00Z

Propagated hadoopConf from driver to executor for s3 implementation




---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-18 Thread jatin9896
Github user jatin9896 closed the pull request at:

https://github.com/apache/carbondata/pull/1805


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-18 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r162312570
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---
@@ -0,0 +1,173 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, 
SECRET_KEY}
+import org.apache.spark.sql.{Row, SparkSession}
+import org.slf4j.{Logger, LoggerFactory}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+
+object S3Example {
+
+  /**
+   * This example demonstrate usage of
+   * 1. create carbon table with storage location on object based storage
+   * like AWS S3, Huawei OBS, etc
+   * 2. load data into carbon table, the generated file will be stored on 
object based storage
+   * query the table.
+   *
+   * @param args require three parameters "Access-key" "Secret-key"
+   * "table-path on s3" "s3-endpoint" "spark-master"
+   */
+  def main(args: Array[String]) {
+val rootPath = new File(this.getClass.getResource("/").getPath
++ "../../../..").getCanonicalPath
+val path = s"$rootPath/examples/spark2/src/main/resources/data1.csv"
+val logger: Logger = LoggerFactory.getLogger(this.getClass)
+
+import org.apache.spark.sql.CarbonSession._
+if (args.length < 3 || args.length > 5) {
+  logger.error("Usage: java CarbonS3Example  " 
+
+   " [s3-endpoint] [spark-master]")
+  System.exit(0)
+}
+
+val (accessKey, secretKey, endpoint) = getKeyOnPrefix(args(2))
+val spark = SparkSession
+  .builder()
+  .master(getSparkMaster(args))
+  .appName("S3Example")
+  .config("spark.driver.host", "localhost")
+  .config(accessKey, args(0))
+  .config(secretKey, args(1))
+  .config(endpoint, getS3EndPoint(args))
+  .getOrCreateCarbonSession()
+
+spark.sparkContext.setLogLevel("WARN")
+
+spark.sql("Drop table if exists carbon_table")
+
+spark.sql(
+  s"""
+ | CREATE TABLE if not exists carbon_table(
+ | shortField SHORT,
+ | intField INT,
+ | bigintField LONG,
+ | doubleField DOUBLE,
+ | stringField STRING,
+ | timestampField TIMESTAMP,
+ | decimalField DECIMAL(18,2),
+ | dateField DATE,
+ | charField CHAR(5),
+ | floatField FLOAT
+ | )
+ | STORED BY 'carbondata'
+ | LOCATION '${ args(2) }'
+ | TBLPROPERTIES('SORT_COLUMNS'='', 
'DICTIONARY_INCLUDE'='dateField, charField')
+   """.stripMargin)
+
+spark.sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$path'
+ | INTO TABLE carbon_table
+ | OPTIONS('HEADER'='true')
+   """.stripMargin)
+
+spark.sql(
+  s"""
+ | SELECT *
+ | FROM carbon_table
+  """.stripMargin).show()
+
+spark.sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$path'
+ | INTO TABLE carbon_table
+ | OPTIONS('HEADER'='true')
+   """.stripMargin)
+
+spark.sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$path'
+ | INTO TABLE carbon_table
+ | OPTIONS('HEADER'='true')
+   """.stripMargin)
+
+val countSegment: Array[Row] =
+  spark.sql(
+s"""
+   | SHOW SEGMENTS FOR TABLE c

[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-18 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r162311702
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---

[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-18 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r162311525
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---

[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-18 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r162311496
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---

[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-18 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r162306737
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---

[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-18 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r162305664
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---

[GitHub] carbondata issue #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-18 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1805
  
retest this please


---


[GitHub] carbondata issue #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-17 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1805
  
@jackylk The table path is the path to the bucket location, like the 
s3a:/// path I have provided. Regarding endpoints, I have modified the 
example so that it takes the endpoint as args(4), and it is not mandatory to 
provide. The connection pooling exception in the example is also fixed. 
Please check. 

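The quoted example calls getKeyOnPrefix, getS3EndPoint and getSparkMaster, 
whose definitions fall outside the quoted hunks. A minimal self-contained 
sketch consistent with the call sites and the optional-endpoint behaviour 
described above (the property-key strings and the ".com" hostname heuristic 
are assumptions for illustration, not the committed code):

```scala
object S3ExampleHelpers {
  // Config-key names for the s3a/s3n/s3 connectors, hard-coded so the
  // sketch compiles without Hadoop on the classpath; the real example
  // imports some of them from org.apache.hadoop.fs.s3a.Constants.
  private val Prefix = "spark.hadoop."

  /** Return (access-key property, secret-key property, endpoint property)
    * matching the scheme of the table path. */
  def getKeyOnPrefix(path: String): (String, String, String) = {
    val endpointKey = Prefix + "fs.s3a.endpoint"
    if (path.startsWith("s3a://")) {
      (Prefix + "fs.s3a.access.key", Prefix + "fs.s3a.secret.key", endpointKey)
    } else if (path.startsWith("s3n://")) {
      (Prefix + "fs.s3n.awsAccessKeyId", Prefix + "fs.s3n.awsSecretAccessKey", endpointKey)
    } else if (path.startsWith("s3://")) {
      (Prefix + "fs.s3.awsAccessKeyId", Prefix + "fs.s3.awsSecretAccessKey", endpointKey)
    } else {
      throw new IllegalArgumentException(
        "table path must start with s3a://, s3n:// or s3://: " + path)
    }
  }

  /** The endpoint argument is optional; treat a 4th argument that looks
    * like a hostname as the endpoint, otherwise return the empty string. */
  def getS3EndPoint(args: Array[String]): String =
    if (args.length >= 4 && args(3).contains(".com")) args(3) else ""

  /** The Spark master is the last optional argument; default to local. */
  def getSparkMaster(args: Array[String]): String =
    if (args.length == 5) args(4)
    else if (args.length == 4 && !args(3).contains(".com")) args(3)
    else "local"
}
```

With this shape, running the example with only access key, secret key and 
table path falls back to an empty endpoint and a local master.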

---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-16 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r161698430
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3CsvExample.scala
 ---
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, SECRET_KEY}
+import org.apache.spark.sql.SparkSession
+import org.slf4j.{Logger, LoggerFactory}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+object S3CsvExample {
+
+  /**
+   * This example demonstrate to create local store and load data from CSV 
files on S3
+   *
+   * @param args require three parameters "Access-key" "Secret-key"
+   * "s3 path to csv" "spark-master"
+   */
+
--- End diff --

done


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-16 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r161698128
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, SECRET_KEY}
+import org.apache.spark.sql.SparkSession
+import org.slf4j.{Logger, LoggerFactory}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+object S3Example {
+
+  /**
+   * This example demonstrate usage of s3 as a store.
+   *
+   * @param args require three parameters "Access-key" "Secret-key"
+   * "s3 bucket path" "spark-master"
+   */
+
--- End diff --

removed.


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-16 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r161698051
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, SECRET_KEY}
+import org.apache.spark.sql.SparkSession
+import org.slf4j.{Logger, LoggerFactory}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+object S3Example {
+
+  /**
+   * This example demonstrate usage of s3 as a store.
--- End diff --

done


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-16 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r161697998
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, SECRET_KEY}
+import org.apache.spark.sql.SparkSession
+import org.slf4j.{Logger, LoggerFactory}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+object S3Example {
+
+  /**
+   * This example demonstrate usage of s3 as a store.
+   *
+   * @param args require three parameters "Access-key" "Secret-key"
+   * "s3 bucket path" "spark-master"
+   */
+
+  def main(args: Array[String]) {
+val rootPath = new File(this.getClass.getResource("/").getPath
++ "../../../..").getCanonicalPath
+val path = s"$rootPath/examples/spark2/src/main/resources/data1.csv"
+val logger: Logger = LoggerFactory.getLogger(this.getClass)
+CarbonProperties.getInstance()
+  .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, 
"/MM/dd HH:mm:ss")
+  .addProperty(CarbonCommonConstants.CARBON_DATE_FORMAT, "/MM/dd")
+  
.addProperty(CarbonCommonConstants.ENABLE_UNSAFE_COLUMN_PAGE_LOADING, "true")
+  
.addProperty(CarbonCommonConstants.DEFAULT_CARBON_MAJOR_COMPACTION_SIZE, "0.02")
--- End diff --

done


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-16 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r161697900
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, SECRET_KEY}
+import org.apache.spark.sql.SparkSession
+import org.slf4j.{Logger, LoggerFactory}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+object S3Example {
+
+  /**
+   * This example demonstrates usage of S3 as a store.
+   *
+   * @param args requires four parameters: "Access-key" "Secret-key"
+   * "s3 bucket path" "spark-master"
+   */
+
+  def main(args: Array[String]) {
+val rootPath = new File(this.getClass.getResource("/").getPath
++ "../../../..").getCanonicalPath
+val path = s"$rootPath/examples/spark2/src/main/resources/data1.csv"
+val logger: Logger = LoggerFactory.getLogger(this.getClass)
+CarbonProperties.getInstance()
+  .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd HH:mm:ss")
+  .addProperty(CarbonCommonConstants.CARBON_DATE_FORMAT, "yyyy/MM/dd")
+  .addProperty(CarbonCommonConstants.ENABLE_UNSAFE_COLUMN_PAGE_LOADING, "true")
+  .addProperty(CarbonCommonConstants.DEFAULT_CARBON_MAJOR_COMPACTION_SIZE, "0.02")
+
+import org.apache.spark.sql.CarbonSession._
+if (args.length != 4) {
+  logger.error("Usage: java CarbonS3Example <access-key> <secret-key> " +
+    "<s3-bucket-path> <spark-master>")
+  System.exit(0)
+}
+
+val (accessKey, secretKey) = getKeyOnPrefix(args(2))
+val spark = SparkSession
+  .builder()
+  .master(args(3))
+  .appName("S3Example")
+  .config("spark.driver.host", "localhost")
+  .config(accessKey, args(0))
+  .config(secretKey, args(1))
+  .getOrCreateCarbonSession()
+
+spark.sparkContext.setLogLevel("INFO")
+
+spark.sql(
+  s"""
+ | CREATE TABLE if not exists carbon_table(
+ | shortField SHORT,
+ | intField INT,
+ | bigintField LONG,
+ | doubleField DOUBLE,
+ | stringField STRING,
+ | timestampField TIMESTAMP,
+ | decimalField DECIMAL(18,2),
+ | dateField DATE,
+ | charField CHAR(5),
+ | floatField FLOAT
+ | )
+ | STORED BY 'carbondata'
+ | LOCATION '${ args(2) }'
+ | TBLPROPERTIES('SORT_COLUMNS'='', 'DICTIONARY_INCLUDE'='dateField, charField')
+   """.stripMargin)
+
+spark.sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$path'
+ | INTO TABLE carbon_table
+ | OPTIONS('HEADER'='true')
+   """.stripMargin)
+
+spark.sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$path'
+ | INTO TABLE carbon_table
+ | OPTIONS('HEADER'='true')
+   """.stripMargin)
+
+spark.sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$path'
+ | INTO TABLE carbon_table
+ | OPTIONS('HEADER'='true')
+   """.stripMargin)
+
+spark.sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$path'
+ | INTO TABLE carbon_table
+ | OPTIONS('HEADER'='true')
+   """.stripMargin)
+
+spark.sql("ALTER table carbon_table compact 'MINOR'")
--- End diff --

Added


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-16 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r161697822
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3CsvExample.scala
 ---
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, SECRET_KEY}
+import org.apache.spark.sql.SparkSession
+import org.slf4j.{Logger, LoggerFactory}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+object S3CsvExample {
+
+  /**
+   * This example demonstrates creating a local store and loading data from CSV files on S3.
+   *
+   * @param args requires four parameters: "Access-key" "Secret-key"
+   * "s3 path to csv" "spark-master"
+   */
+
+  def main(args: Array[String]) {
+val rootPath = new File(this.getClass.getResource("/").getPath
++ "../../../..").getCanonicalPath
+val logger: Logger = LoggerFactory.getLogger(this.getClass)
+
+CarbonProperties.getInstance()
+  .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd HH:mm:ss")
+  .addProperty(CarbonCommonConstants.CARBON_DATE_FORMAT, "yyyy/MM/dd")
+  .addProperty(CarbonCommonConstants.ENABLE_UNSAFE_COLUMN_PAGE_LOADING, "true")
--- End diff --

Removed all properties.


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-16 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r161697634
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 ---
@@ -168,6 +168,14 @@
 
   public static final String S3A_PREFIX = "s3a://";
 
+  public static final String S3N_ACCESS_KEY = "fs.s3n.awsAccessKeyId";
--- End diff --

Added comment.
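
The prefix-based credential-key selection these constants support can be sketched as follows. This is an illustrative stand-in, not CarbonData's actual code: the helper name `getKeyOnPrefix` mirrors the one in the quoted S3Example, and the `fs.s3a.*` / `fs.s3n.*` / `fs.s3.*` property names are the standard Hadoop ones.

```java
// Sketch only: pick the Hadoop credential property names from the path scheme.
// s3a uses the newer hadoop-aws names; s3n/s3 use the older awsAccessKeyId style,
// matching the S3N_ACCESS_KEY constants added in this diff.
class S3KeySelector {
    static String[] getKeyOnPrefix(String path) {
        if (path.startsWith("s3a://")) {
            return new String[] {"fs.s3a.access.key", "fs.s3a.secret.key"};
        } else if (path.startsWith("s3n://")) {
            return new String[] {"fs.s3n.awsAccessKeyId", "fs.s3n.awsSecretAccessKey"};
        } else if (path.startsWith("s3://")) {
            return new String[] {"fs.s3.awsAccessKeyId", "fs.s3.awsSecretAccessKey"};
        }
        throw new IllegalArgumentException(
            "Unsupported scheme, expected s3://, s3n:// or s3a://: " + path);
    }
}
```

The selected pair is then passed to the SparkSession builder as `.config(accessKey, args(0)).config(secretKey, args(1))`, as in the quoted example.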


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-15 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1805#discussion_r161669078
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala 
---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, SECRET_KEY}
+import org.apache.spark.sql.SparkSession
+import org.slf4j.{Logger, LoggerFactory}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+object S3Example {
+
+  /**
+   * This example demonstrates usage of S3 as a store.
+   *
+   * @param args requires four parameters: "Access-key" "Secret-key"
+   * "s3 bucket path" "spark-master"
+   */
+
+  def main(args: Array[String]) {
+val rootPath = new File(this.getClass.getResource("/").getPath
++ "../../../..").getCanonicalPath
+val path = s"$rootPath/examples/spark2/src/main/resources/data1.csv"
+val logger: Logger = LoggerFactory.getLogger(this.getClass)
+CarbonProperties.getInstance()
+  .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd HH:mm:ss")
+  .addProperty(CarbonCommonConstants.CARBON_DATE_FORMAT, "yyyy/MM/dd")
+  .addProperty(CarbonCommonConstants.ENABLE_UNSAFE_COLUMN_PAGE_LOADING, "true")
+  .addProperty(CarbonCommonConstants.DEFAULT_CARBON_MAJOR_COMPACTION_SIZE, "0.02")
+
+import org.apache.spark.sql.CarbonSession._
+if (args.length != 4) {
+  logger.error("Usage: java CarbonS3Example <access-key> <secret-key> " +
+    "<s3-bucket-path> <spark-master>")
--- End diff --

You are right. OK, I will change it to tablepath.


---


[GitHub] carbondata pull request #1805: [CARBONDATA-1827] S3 Carbon Implementation

2018-01-15 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1805

[CARBONDATA-1827] S3 Carbon Implementation

1) Provide support for s3 in carbondata.
2) Added S3Example to create store on s3.
3) Added S3CSVExample to load csv from s3.
 

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? NO
 
 - [ ] Any backward compatibility impacted? NO
 
 - [ ] Document update required? NO

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? 
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   Added Examples to test the functionality
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata s3-carbon

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1805.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1805


commit bd5b90cfefcfa25c941c104630dbc9e9ed2b150b
Author: SangeetaGulia <sangeeta.gulia@...>
Date:   2017-09-21T09:26:26Z

Added S3 implementation and TestCases

commit d3d374ce7a82662e1fbb6b1d0b81bfaaa3a22cc1
Author: Jatin <jatin.demla@...>
Date:   2017-11-29T07:24:48Z

Removed S3CarbonFile and added append functionality

commit a79245a73103b58c997e20b47b7c51d91dd2e8ad
Author: Jatin <jatin.demla@...>
Date:   2018-01-15T07:43:29Z

refactored examples




---


[GitHub] carbondata issue #1584: [CARBONDATA-1827] Added S3 Implementation

2018-01-09 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1584
  
retest this please


---


[GitHub] carbondata issue #1584: [CARBONDATA-1827] Added S3 Implementation

2018-01-07 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1584
  
retest this please


---


[GitHub] carbondata issue #1584: [CARBONDATA-1827][WIP] Added S3 Implementation

2018-01-05 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1584
  
retest this please


---


[GitHub] carbondata issue #1584: [CARBONDATA-1827][WIP] Added S3 Implementation

2017-12-28 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1584
  
retest this please


---


[GitHub] carbondata issue #1689: [CARBONDATA-1674] Describe formatted shows partition...

2017-12-22 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1689
  
retest this please


---


[GitHub] carbondata pull request #1689: [CARBONDATA-1674] Describe formatted shows pa...

2017-12-21 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1689#discussion_r158431404
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
 ---
@@ -150,6 +150,11 @@ class TestShowPartition  extends QueryTest with 
BeforeAndAfterAll {
 
   }
 
+  test("show partition table: desc formatted should show partition type"){
+//check for partition type exist in desc formatted
checkExistence(sql("describe formatted hashTable"), true, "Partition Type")
--- End diff --

@jackylk please review


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-20 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
retest this please


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-19 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
retest this please


---


[GitHub] carbondata pull request #1689: [CARBONDATA-1674] Describe formatted shows pa...

2017-12-19 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1689

[CARBONDATA-1674] Describe formatted shows partition type

On the DESC FORMATTED command, the partition type should be shown.

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done Yes
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? Yes
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-1674

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1689.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1689


commit a972b8e1f6747173411e2a51601496ca95027325
Author: Jatin <jatin.demla@...>
Date:   2017-12-15T04:32:12Z

decribe formatted show partition type




---


[GitHub] carbondata pull request #1629: [CARBONDATA-1714] Fixed Issue for Selection o...

2017-12-19 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1629#discussion_r157790360
  
--- Diff: 
hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
 ---
@@ -804,7 +804,7 @@ public QueryModel getQueryModel(InputSplit inputSplit, 
TaskAttemptContext taskAt
 Expression filter = getFilterPredicates(configuration);
 boolean[] isFilterDimensions = new boolean[carbonTable.getDimensionOrdinalMax()];
 boolean[] isFilterMeasures =
-new boolean[carbonTable.getNumberOfMeasures(carbonTable.getTableName())];
--- End diff --

done please review.


---


[GitHub] carbondata pull request #1629: [CARBONDATA-1714] Fixed Issue for Selection o...

2017-12-19 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1629#discussion_r157711859
  
--- Diff: 
hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
 ---
@@ -804,7 +804,7 @@ public QueryModel getQueryModel(InputSplit inputSplit, 
TaskAttemptContext taskAt
 Expression filter = getFilterPredicates(configuration);
 boolean[] isFilterDimensions = new boolean[carbonTable.getDimensionOrdinalMax()];
 boolean[] isFilterMeasures =
-new boolean[carbonTable.getNumberOfMeasures(carbonTable.getTableName())];
--- End diff --

getNumberOfMeasures doesn't count columns that were dropped (they remain in the measure list with invisible set to true), whereas getAllMeasures also includes the recently dropped columns.
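
The distinction can be sketched as follows. Types and names here are illustrative stand-ins, not CarbonData's real schema API: a dropped measure stays in the list flagged invisible, so sizing the filter array by the full measure list still covers dropped columns' ordinals, while a visible-only count would be too small.

```java
import java.util.List;

// Sketch only: a dropped measure stays in the list with invisible = true.
class MeasureListSketch {
    static final class Measure {
        final String name;
        final boolean invisible;
        Measure(String name, boolean invisible) {
            this.name = name;
            this.invisible = invisible;
        }
    }

    // Analogue of getNumberOfMeasures(): counts visible measures only.
    static int numberOfVisibleMeasures(List<Measure> allMeasures) {
        return (int) allMeasures.stream().filter(m -> !m.invisible).count();
    }

    // Analogue of sizing by getAllMeasures(): includes dropped (invisible)
    // measures, so ordinals of recently dropped columns still fit.
    static boolean[] filterFlagsForAll(List<Measure> allMeasures) {
        return new boolean[allMeasures.size()];
    }
}
```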


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-15 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
@kumarvishal09 Please verify.


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-15 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
retest this please


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-14 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
retest this please


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-14 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
retest this please


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-14 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
retest this please


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-14 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
retest this please


---


[GitHub] carbondata issue #1629: [CARBONDATA-1714] Fixed Issue for Selection of null ...

2017-12-07 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1629
  
retest this please


---


[GitHub] carbondata pull request #1629: [CARBONDATA-1714] Fixed Issue for Selection o...

2017-12-06 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1629

[CARBONDATA-1714] Fixed Issue for Selection of null values after having 
multiple alter commands

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 
 - [ ] Any backward compatibility impacted? No
 
 - [ ] Document update required? No

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required? No
- How it is tested? Please attach test report. NA
- Is it a performance related change? Please attach the performance 
test report. NA
- Any additional information to help reviewers in testing this 
change. NA
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-1714

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1629.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1629


commit 78a30778c4171cede160e091fabce2c95a556dbf
Author: Jatin <jatin.de...@knoldus.in>
Date:   2017-12-07T05:11:37Z

fixed Issue for selection of null values after having muliple alter commands




---


[GitHub] carbondata pull request #1353: [CARBONDATA-1476] Added Unit Test Case For Pr...

2017-11-14 Thread jatin9896
Github user jatin9896 closed the pull request at:

https://github.com/apache/carbondata/pull/1353


---


[GitHub] carbondata pull request #1068: [CARBONDATA-1195] Closes table tag in configu...

2017-10-30 Thread jatin9896
Github user jatin9896 closed the pull request at:

https://github.com/apache/carbondata/pull/1068


---


[GitHub] carbondata pull request #1083: [CARBONDATA-962] Fixed bug for < operator on ...

2017-10-30 Thread jatin9896
Github user jatin9896 closed the pull request at:

https://github.com/apache/carbondata/pull/1083


---


[GitHub] carbondata pull request #1353: [Carbondata-1476] Added Unit Test Case For Pr...

2017-09-13 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1353

[Carbondata-1476] Added Unit Test Case For Presto

Added unit test cases for presto integration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-1476

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1353.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1353


commit d07602200253023c271b9dd774996609748fe6b3
Author: jatin9896 <jatin.de...@knoldus.in>
Date:   2017-09-11T11:15:48Z

added unit testcases for presto

commit 030d987a5fa659caa6f4bd6882ac74f2bf170732
Author: PallaviSingh1992 <pallavisingh_1...@yahoo.co.in>
Date:   2017-09-12T09:48:07Z

added test cases for CarbondataRecordSetProvider,SplitMaanger and 
LocalInputSplit

commit 8a7bf59788eaf79c3ab0c9098d069eee9bbabf10
Author: Geetika Gupta <geetika.gu...@knoldus.in>
Date:   2017-09-13T08:45:13Z

Added test cases for CarbonTableReader, CarbonDataUtilTest, 
CarbondataRecordSetTest and and added test scenario in 
CarbonVectorizedRecordReaderTest




---


[GitHub] carbondata pull request #1310: [CARBONDATA-1442] Refactored Partition-Guide....

2017-08-31 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1310

[CARBONDATA-1442] Refactored Partition-Guide.md

Refactored the partition-guide.md file to keep headings consistent across all the doc files and formatted the syntax to generate a correct PDF for the CarbonData website.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
docs/ReformatPartitionGuide

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1310.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1310


commit ca24013b704fa931c7d2b287ec8b520ae7ffc5f8
Author: jatin9896 <jatin.de...@knoldus.in>
Date:   2017-08-31T06:14:36Z

Refactored partition-guide




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #1309: [CARBONDATA-1142] Refactored Partition-Guide....

2017-08-31 Thread jatin9896
Github user jatin9896 closed the pull request at:

https://github.com/apache/carbondata/pull/1309


---


[GitHub] carbondata pull request #1309: [CARBONDATA-1142] Refactored Partition-Guide....

2017-08-31 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1309

[CARBONDATA-1142] Refactored Partition-Guide.md

Refactored the partition-guide.md file to keep headings consistent across all the doc files and formatted the syntax to generate a correct PDF for the CarbonData website.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
docs/ReformatPartitionGuide

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1309.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1309


commit ca24013b704fa931c7d2b287ec8b520ae7ffc5f8
Author: jatin9896 <jatin.de...@knoldus.in>
Date:   2017-08-31T06:14:36Z

Refactored partition-guide




---


[GitHub] carbondata pull request #1137: [CARBONDATA-1266] Fixed issue for non existin...

2017-07-06 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1137#discussion_r125872632
  
--- Diff: 
integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataMetadata.java
 ---
@@ -115,9 +115,6 @@ private ConnectorTableMetadata 
getTableMetadata(SchemaTableName schemaTableName)
 }
 
 CarbonTable carbonTable = carbonTableReader.getTable(schemaTableName);
--- End diff --

Please check.


---


[GitHub] carbondata issue #1137: [CARBONDATA-1266] Fixed issue for non existing table

2017-07-05 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1137
  
@jackylk I did the required changes. Please check. 


---


[GitHub] carbondata pull request #1137: [CARBONDATA-1266] Fixed issue for non existin...

2017-07-05 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1137

[CARBONDATA-1266] Fixed issue for non existing table



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata CARBONDATA-1266

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1137.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1137


commit b7844b967e05558b2f6f3a65fc7683aa98521939
Author: jatin <jatin.de...@knoldus.in>
Date:   2017-07-05T12:04:19Z

Fixed issue for non existing table




---


[GitHub] carbondata pull request #1128: [CARBONDATA-980] Fix for Is Not Null in prest...

2017-07-03 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1128

[CARBONDATA-980] Fix for Is Not Null in presto




You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
feature/CARBONDATA-980

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1128.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1128


commit 471566f39a7138aeb7d8b71279b3c6c2807697b1
Author: jatin <jatin.de...@knoldus.in>
Date:   2017-07-03T12:38:22Z

Fixed Is Not Null in presto




---


[GitHub] carbondata pull request #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause I...

2017-07-03 Thread jatin9896
Github user jatin9896 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1062#discussion_r125220311
  
--- Diff: 
integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataRecordSetProvider.java
 ---
@@ -142,13 +142,14 @@ private void fillFilter2QueryModel(QueryModel 
queryModel,
   }
 
   List singleValues = new ArrayList<>();
-  List rangeFilter = new ArrayList<>();
+
+  List disjuncts = new ArrayList<>();
+
   for (Range range : domain.getValues().getRanges().getOrderedRanges()) {
-checkState(!range.isAll()); // Already checked
--- End diff --

@chenliang613 Actually this code is a fix for the IS NOT NULL operator returning null values:
[CARBONDATA-980](https://issues.apache.org/jira/browse/CARBONDATA-980)
For now I will remove this code from here and raise a different PR for that.
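
The behavioral change being discussed can be sketched as follows, under stated assumptions: the `Range` type here is a simplified stand-in for Presto's real range class, not its actual API. The old code asserted that no range covers the whole domain (`checkState(!range.isAll())`), which an IS NOT NULL predicate violates because its value set is the all-range; collecting each range as a disjunct instead lets that case through.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: simplified ranges where a null bound means unbounded.
class DisjunctSketch {
    static final class Range {
        final Long low, high;
        Range(Long low, Long high) { this.low = low; this.high = high; }
        boolean isAll() { return low == null && high == null; }
    }

    // Old behavior: checkState(!range.isAll()) would throw for IS NOT NULL.
    // New behavior: keep the all-range as its own disjunct.
    static List<String> toDisjuncts(String column, List<Range> ranges) {
        List<String> disjuncts = new ArrayList<>();
        for (Range r : ranges) {
            if (r.isAll()) {
                disjuncts.add(column + " IS NOT NULL");
            } else {
                disjuncts.add(column + " IN [" + r.low + ", " + r.high + "]");
            }
        }
        return disjuncts;
    }
}
```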


---


[GitHub] carbondata issue #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause In Prest...

2017-07-02 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1062
  
retest this please


---


[GitHub] carbondata issue #1083: [CARBONDATA-962] Fixed bug for < operator on timesta...

2017-06-24 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1083
  
retest this please


---


[GitHub] carbondata pull request #1083: [CARBONDATA-962] Fixed bug for < operator on ...

2017-06-23 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1083

[CARBONDATA-962] Fixed bug for < operator on timestamp datatype in Presto



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
feature/CARBONDATA-962

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1083.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1083






---


[GitHub] carbondata issue #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause In Prest...

2017-06-21 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1062
  
retest this please


---


[GitHub] carbondata issue #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause In Prest...

2017-06-20 Thread jatin9896
Github user jatin9896 commented on the issue:

https://github.com/apache/carbondata/pull/1062
  
Please retest


---


[GitHub] carbondata pull request #1068: [CARBONDATA-1195] Closes table tag in configu...

2017-06-20 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1068

[CARBONDATA-1195] Closes table tag in configuration-parameters



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
docs/Rectified-configuration-parameters

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1068.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1068


commit 5f1750c7bf427509e1654e35e9b6ca4ef0e99dd6
Author: jatin <jatin.de...@knoldus.in>
Date:   2017-06-20T06:08:28Z

Closes table tag in configuration-parameters




---


[GitHub] carbondata pull request #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause I...

2017-06-19 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1062

[CARBONDATA-982] Fixed Bug For NotIn Clause In Presto

Resolved NotIn clause for presto integration

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
feature/CARBONDATA-982

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1062.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1062


commit ab85bce11e537614a633c7915c17a44093920442
Author: Geetika gupta <geetika.gu...@knoldus.in>
Date:   2017-06-16T07:37:52Z

Resolved NotIn clause for presto integration




---


[GitHub] carbondata pull request #1057: [CARBONDATA-1187]Fixed linking and content is...

2017-06-18 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1057

[CARBONDATA-1187]Fixed linking and content issues



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata docs/updateLink

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1057.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1057


commit 2f39e4d6d1632891b2a3f37d4ccec28134edfd5e
Author: jatin <jatin.de...@knoldus.in>
Date:   2017-06-15T07:48:48Z

Fixed linking and content issues




---