[GitHub] carbondata issue #2649: [CARBONDATA-2869] Add support for Avro Map data type...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2649 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6417/ ---
[GitHub] carbondata pull request #2623: [CARBONDATA-2844] Pass SK/AK to executor by s...
Github user kunal642 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2623#discussion_r212869553 --- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataPageSourceProvider.java --- @@ -79,6 +80,7 @@ @Override public ConnectorPageSource createPageSource(ConnectorTransactionHandle transactionHandle, ConnectorSession session, ConnectorSplit split, List columns) { +ThreadLocalSessionInfo.getOrCreateCarbonSessionInfo(); --- End diff -- removed ---
[GitHub] carbondata pull request #2623: [CARBONDATA-2844] Pass SK/AK to executor by s...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2623#discussion_r212868174 --- Diff: integration/presto/src/test/scala/org/apache/carbondata/presto/server/PrestoServer.scala --- @@ -98,6 +99,7 @@ object PrestoServer { def executeQuery(query: String): List[Map[String, Any]] = { Try { + ThreadLocalSessionInfo.getOrCreateCarbonSessionInfo() --- End diff -- why it is needed? ---
[GitHub] carbondata pull request #2623: [CARBONDATA-2844] Pass SK/AK to executor by s...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2623#discussion_r212868149 --- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataPageSourceProvider.java --- @@ -79,6 +80,7 @@ @Override public ConnectorPageSource createPageSource(ConnectorTransactionHandle transactionHandle, ConnectorSession session, ConnectorSplit split, List columns) { +ThreadLocalSessionInfo.getOrCreateCarbonSessionInfo(); --- End diff -- why it is needed? ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] Pass SK/AK to executor by serializ...
Github user kunal642 commented on the issue: https://github.com/apache/carbondata/pull/2623 @ravipesala fixed the comments ---
[GitHub] carbondata issue #2656: [CARBONDATA-2883][ExternalFormat] block some operati...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2656 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/20/ ---
[GitHub] carbondata issue #2628: [CARBONDATA-2851][CARBONDATA-2852] Support zstd as c...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2628 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/19/ ---
[GitHub] carbondata issue #2635: [CARBONDATA-2856][BloomDataMap] Fix bug in bloom ind...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2635 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8076/ ---
[GitHub] carbondata issue #2635: [CARBONDATA-2856][BloomDataMap] Fix bug in bloom ind...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2635 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/18/ ---
[GitHub] carbondata issue #2656: [CARBONDATA-2883][ExternalFormat] block some operati...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2656 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8079/ ---
[GitHub] carbondata issue #2628: [CARBONDATA-2851][CARBONDATA-2852] Support zstd as c...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2628 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8077/ ---
[GitHub] carbondata issue #2656: [CARBONDATA-2883][ExternalFormat] block some operati...
Github user Sssan520 commented on the issue: https://github.com/apache/carbondata/pull/2656 retest this please ---
[GitHub] carbondata issue #2628: [CARBONDATA-2851][CARBONDATA-2852] Support zstd as c...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2628 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6416/ ---
[GitHub] carbondata issue #2656: [CARBONDATA-2883][ExternalFormat] block some operati...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2656 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8078/ ---
[GitHub] carbondata issue #2656: [CARBONDATA-2883][ExternalFormat] block some operati...
Github user Sssan520 commented on the issue: https://github.com/apache/carbondata/pull/2656 retest this please ---
[GitHub] carbondata issue #2635: [CARBONDATA-2856][BloomDataMap] Fix bug in bloom ind...
Github user xuchuanyin commented on the issue: https://github.com/apache/carbondata/pull/2635 retest this please ---
[GitHub] carbondata issue #2659: [CARBONDATA-2887] Fix complex filters on spark carbo...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2659 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/17/ ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] add sk ak to file factory on creat...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2623 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/16/ ---
[GitHub] carbondata issue #2654: [WIP] Adaptive Encoding for Primitive data types
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2654 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/15/ ---
[GitHub] carbondata issue #2659: [CARBONDATA-2887] Fix complex filters on spark carbo...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8075/ ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] add sk ak to file factory on creat...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2623 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8074/ ---
[GitHub] carbondata issue #2661: [CARBONDATA-2888] Support multi level subfolder for ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2661 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/14/ ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] add sk ak to file factory on creat...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2623 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6415/ ---
[GitHub] carbondata issue #2659: [CARBONDATA-2887] Fix complex filters on spark carbo...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2659 retest this please ---
[GitHub] carbondata issue #2654: [WIP] Adaptive Encoding for Primitive data types
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2654 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8073/ ---
[GitHub] carbondata issue #2661: [CARBONDATA-2888] Support multi level subfolder for ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2661 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8072/ ---
[GitHub] carbondata issue #2659: [CARBONDATA-2887] Fix complex filters on spark carbo...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8071/ ---
[GitHub] carbondata issue #2659: [CARBONDATA-2887] Fix complex filters on spark carbo...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/13/ ---
[GitHub] carbondata issue #2654: [WIP] Adaptive Encoding for Primitive data types
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2654 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6414/ ---
[GitHub] carbondata issue #2661: [CARBONDATA-2888] Support multi level subfolder for ...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2661 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6413/ ---
[GitHub] carbondata pull request #2650: [HOTFIX]Fixed Join Query Performance issue
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2650 ---
[GitHub] carbondata issue #2659: [CARBONDATA-2887] Fix complex filters on spark carbo...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2659 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6412/ ---
[jira] [Resolved] (CARBONDATA-2874) Support SDK writer as thread safe api
[ https://issues.apache.org/jira/browse/CARBONDATA-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravindra Pesala resolved CARBONDATA-2874.
-----------------------------------------
       Resolution: Fixed
    Fix Version/s: 1.5.0

> Support SDK writer as thread safe api
> -------------------------------------
>
>                 Key: CARBONDATA-2874
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2874
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Ajantha Bhat
>            Priority: Minor
>             Fix For: 1.5.0
>
>          Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> h1. Support SDK writer as thread safe api

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
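The pattern behind a "thread safe writer" can be sketched generically: several threads share one writer instance whose write path is serialized by a lock. The sketch below is a minimal illustration of that pattern only; `RowWriter` is a hypothetical stand-in, not the CarbonData SDK's actual `CarbonWriter` API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadSafeWriterSketch {
  // Hypothetical writer: synchronized methods serialize concurrent writes,
  // so one instance can safely be shared across threads.
  static class RowWriter {
    private final List<String> rows = new ArrayList<>();
    synchronized void write(String row) { rows.add(row); }
    synchronized int rowCount() { return rows.size(); }
  }

  public static void main(String[] args) throws Exception {
    RowWriter writer = new RowWriter();
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (int t = 0; t < 4; t++) {
      final int id = t;
      // 4 threads write 100 rows each into the same writer
      pool.submit(() -> {
        for (int i = 0; i < 100; i++) writer.write("row-" + id + "-" + i);
      });
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
    System.out.println(writer.rowCount()); // 4 threads x 100 rows = 400
  }
}
```

Coarse-grained locking keeps the example short; a real implementation would more likely batch per thread or use a concurrent queue to avoid contention on every row.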
[GitHub] carbondata pull request #2653: [CARBONDATA-2874] Support SDK writer as threa...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2653 ---
[GitHub] carbondata issue #2653: [CARBONDATA-2874] Support SDK writer as thread safe ...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2653 LGTM ---
[GitHub] carbondata pull request #2661: [CARBONDATA-2888] Support multi level subfold...
GitHub user ravipesala opened a pull request: https://github.com/apache/carbondata/pull/2661

[CARBONDATA-2888] Support multi level subfolder for SDK read and fileformat read

This PR supports multi-level subfolder reads for the SDK reader and Spark's carbon fileformat reader.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ravipesala/incubator-carbondata sdk-multi-folder-support

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2661.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2661

commit a71189330b223a76c0b40ee8abf28ce2fd9733c1
Author: ravipesala
Date: 2018-08-26T17:17:14Z

    Support multi level subfolder for SDK read and fileformat read

---
[jira] [Created] (CARBONDATA-2888) Support multi level sdk read support for carbon tables
Ravindra Pesala created CARBONDATA-2888:
----------------------------------------

             Summary: Support multi level sdk read support for carbon tables
                 Key: CARBONDATA-2888
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2888
             Project: CarbonData
          Issue Type: Bug
            Reporter: Ravindra Pesala

The current SDK reader cannot read multi-level folders. It would be better to read data under subfolders as well.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
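The discovery step this JIRA asks for — picking up data files at any depth under a root folder, not just the top level — can be sketched with plain `java.nio`. This is an illustration of the concept only, not the SDK's actual file-listing code; the `.carbondata` suffix filter is an assumption for the example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class RecursiveFileLister {
  // Returns every regular file under 'root' (at any depth) ending in 'suffix'.
  static List<Path> listDataFiles(Path root, String suffix) throws IOException {
    try (Stream<Path> paths = Files.walk(root)) {   // walks subfolders recursively
      return paths.filter(Files::isRegularFile)
                  .filter(p -> p.toString().endsWith(suffix))
                  .collect(Collectors.toList());
    }
  }

  public static void main(String[] args) throws IOException {
    Path root = Files.createTempDirectory("sdkread");
    Files.createDirectories(root.resolve("a/b"));
    Files.createFile(root.resolve("top.carbondata"));
    Files.createFile(root.resolve("a/b/nested.carbondata"));
    // both the top-level and the nested file are found
    System.out.println(listDataFiles(root, ".carbondata").size()); // prints 2
  }
}
```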
[GitHub] carbondata issue #2659: [CARBONDATA-2887] Fix complex filters on spark carbo...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2659 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6411/ ---
[GitHub] carbondata issue #2653: [CARBONDATA-2874] Support SDK writer as thread safe ...
Github user ajantha-bhat commented on the issue: https://github.com/apache/carbondata/pull/2653 @ravipesala : PR is ready. Please check ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] add sk ak to file factory on creat...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2623 LGTM apart from minor comment ---
[GitHub] carbondata issue #2659: [CARBONDATA-2887] Fix complex filters on spark carbo...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2659 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6410/ ---
[GitHub] carbondata pull request #2623: [CARBONDATA-2844] add sk ak to file factory o...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2623#discussion_r212832743 --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala --- @@ -49,6 +49,8 @@ case class CarbonCreateTableCommand( val LOGGER = LogServiceFactory.getLogService(this.getClass.getCanonicalName) val tableName = tableInfo.getFactTable.getTableName var databaseOpt : Option[String] = None + ThreadLocalSessionInfo.getCarbonSessionInfo.getNonSerializableExtraInfo.put("carbonConf", --- End diff -- Move this one utility and pass the only configuration to it like `setConfigurationToCurrentThread(configuration)` and call from all places ---
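The review above asks to move the repeated "put `carbonConf` into thread-local session info" call behind one shared utility. A generic sketch of that thread-local pattern follows; the class and method names echo the review comment but are illustrative, not CarbonData's real `ThreadLocalSessionInfo` API.

```java
import java.util.HashMap;
import java.util.Map;

public class SessionInfoUtil {
  // One lazily-created map of extra session info per thread.
  private static final ThreadLocal<Map<String, Object>> SESSION_INFO =
      ThreadLocal.withInitial(HashMap::new);

  // The single entry point the reviewer suggests: every call site passes only
  // the configuration, instead of repeating the put() itself.
  static void setConfigurationToCurrentThread(Object configuration) {
    SESSION_INFO.get().put("carbonConf", configuration);
  }

  static Object getConfiguration() {
    return SESSION_INFO.get().get("carbonConf");
  }

  public static void main(String[] args) {
    setConfigurationToCurrentThread("hdfs-site-conf");
    System.out.println(getConfiguration()); // prints hdfs-site-conf
  }
}
```

Centralizing the call this way also gives one place to change the key name or add validation later, which is the usual motivation for this kind of refactor.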
[GitHub] carbondata issue #2653: [CARBONDATA-2874] Support SDK writer as thread safe ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2653 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/11/ ---
[GitHub] carbondata issue #2653: [CARBONDATA-2874] Support SDK writer as thread safe ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2653 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8069/ ---
[GitHub] carbondata issue #2653: [CARBONDATA-2874] Support SDK writer as thread safe ...
Github user brijoobopanna commented on the issue: https://github.com/apache/carbondata/pull/2653 retest this please ---
[GitHub] carbondata pull request #2659: [CARBONDATA-2887] Fix complex filters on spar...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2659#discussion_r212829790 --- Diff: integration/spark-datasource/src/test/scala/org/apache/spark/sql/carbondata/datasource/SparkCarbonDataSourceTest.scala --- @@ -285,6 +285,44 @@ class SparkCarbonDataSourceTest extends FunSuite with BeforeAndAfterAll { spark.sql("drop table if exists date_parquet_table") } + test("test write with array type with filter") { +spark.sql("drop table if exists carbon_table") +spark.sql("drop table if exists parquet_table") +import spark.implicits._ +val df = spark.sparkContext.parallelize(1 to 10) + .map(x => ("a" + x % 10, Array("b", "c"), x)) + .toDF("c1", "c2", "number") + +df.write + .format("parquet").saveAsTable("parquet_table") +spark.sql("create table carbon_table(c1 string, c2 array, number int) using carbon") +spark.sql("insert into carbon_table select * from parquet_table") +assert(spark.sql("select * from carbon_table").count() == 10) +TestUtil.checkAnswer(spark.sql("select * from carbon_table where c1='a1' and c2[0]='b'"), spark.sql("select * from parquet_table where c1='a1' and c2[0]='b'")) +TestUtil.checkAnswer(spark.sql("select * from carbon_table"), spark.sql("select * from parquet_table")) +spark.sql("drop table if exists carbon_table") +spark.sql("drop table if exists parquet_table") + } + + test("test write with struct type with filter") { +spark.sql("drop table if exists carbon_table") +spark.sql("drop table if exists parquet_table") +import spark.implicits._ +val df = spark.sparkContext.parallelize(1 to 10) + .map(x => ("a" + x % 10, ("b", "c"), x)) + .toDF("c1", "c2", "number") + +df.write + .format("parquet").saveAsTable("parquet_table") +spark.sql("create table carbon_table(c1 string, c2 struct, number int) using carbon") --- End diff -- ok ---
[GitHub] carbondata issue #2614: [CARBONDATA-2837] Added MVExample in example module
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2614 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/10/ ---
[GitHub] carbondata issue #2614: [CARBONDATA-2837] Added MVExample in example module
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2614 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8068/ ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] add sk ak to file factory on creat...
Github user kunal642 commented on the issue: https://github.com/apache/carbondata/pull/2623 @ravipesala Please review. ---
[GitHub] carbondata pull request #2657: [CARBONDATA-2884] Rename the methods of ByteU...
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2657#discussion_r212822441 --- Diff: core/src/test/java/org/apache/carbondata/core/util/ByteUtilTest.java --- @@ -17,156 +17,246 @@ package org.apache.carbondata.core.util; import junit.framework.TestCase; + import org.apache.carbondata.core.constants.CarbonCommonConstants; import org.apache.carbondata.core.util.ByteUtil.UnsafeComparer; + import org.junit.Before; import org.junit.Test; import java.nio.ByteBuffer; import java.nio.charset.Charset; - /** * This test will test the functionality of the Byte Util * for the comparision of 2 byte buffers */ public class ByteUtilTest extends TestCase { -String dimensionValue1 = "1235"; -String dimensionValue2 = "1234"; -private ByteBuffer buff1; -private ByteBuffer buff2; - -/** - * This method will form one single byte [] for all the high card dims. - * - * @param byteBufferArr - * @return - */ -public static byte[] packByteBufferIntoSingleByteArray( -ByteBuffer[] byteBufferArr) { -// for empty array means there is no data to remove dictionary. -if (null == byteBufferArr || byteBufferArr.length == 0) { -return null; -} -int noOfCol = byteBufferArr.length; -short toDetermineLengthOfByteArr = 2; -short offsetLen = (short) (noOfCol * 2 + toDetermineLengthOfByteArr); -int totalBytes = calculateTotalBytes(byteBufferArr) + offsetLen; - -ByteBuffer buffer = ByteBuffer.allocate(totalBytes); - -// write the length of the byte [] as first short -buffer.putShort((short) (totalBytes - toDetermineLengthOfByteArr)); -// writing the offset of the first element.
-buffer.putShort(offsetLen); - -// prepare index for byte [] -for (int index = 0; index < byteBufferArr.length - 1; index++) { -ByteBuffer individualCol = byteBufferArr[index]; -// short lengthOfbytes = individualCol.getShort(); -int noOfBytes = individualCol.capacity(); - -buffer.putShort((short) (offsetLen + noOfBytes)); -offsetLen += noOfBytes; -individualCol.rewind(); -} - -// put actual data. -for (int index = 0; index < byteBufferArr.length; index++) { -ByteBuffer individualCol = byteBufferArr[index]; -buffer.put(individualCol.array()); -} - -buffer.rewind(); -return buffer.array(); + String dimensionValue1 = "1235"; + String dimensionValue2 = "1234"; + private ByteBuffer buff1; + private ByteBuffer buff2; + /** + * This method will form one single byte [] for all the high card dims. + * + * @param byteBufferArr + * @return + */ + public static byte[] packByteBufferIntoSingleByteArray(ByteBuffer[] byteBufferArr) { +// for empty array means there is no data to remove dictionary. +if (null == byteBufferArr || byteBufferArr.length == 0) { + return null; } +int noOfCol = byteBufferArr.length; +short toDetermineLengthOfByteArr = 2; +short offsetLen = (short) (noOfCol * 2 + toDetermineLengthOfByteArr); +int totalBytes = calculateTotalBytes(byteBufferArr) + offsetLen; -/** - * To calculate the total bytes in byte Buffer[]. - * - * @param byteBufferArr - * @return - */ -private static int calculateTotalBytes(ByteBuffer[] byteBufferArr) { -int total = 0; -for (int index = 0; index < byteBufferArr.length; index++) { -total += byteBufferArr[index].capacity(); -} -return total; -} +ByteBuffer buffer = ByteBuffer.allocate(totalBytes); -/** - * @throws Exception - */ -@Before -public void setUp() throws Exception { +// write the length of the byte [] as first short +buffer.putShort((short) (totalBytes - toDetermineLengthOfByteArr)); +// writing the offset of the first element.
+buffer.putShort(offsetLen); -} - -@Test -public void testLessThan() { -dimensionValue1 = "a6aa1235"; -dimensionValue2 = "a5aa1234"; +// prepare index for byte [] +for (int index = 0; index < byteBufferArr.length - 1; index++) { + ByteBuffer individualCol = byteBufferArr[index]; + // short lengthOfbytes = individualCol.getShort(); + int
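The diff under review above reworks `packByteBufferIntoSingleByteArray`, which packs several column buffers into one byte array: a 2-byte payload length, one 2-byte start offset per column, then the raw column bytes. Since the quoted diff is hard to read in this digest, here is a self-contained, runnable sketch of that same layout; it is an illustration of the packing logic, not the project's exact code.

```java
import java.nio.ByteBuffer;

public class BufferPacker {
  // Packs the given buffers into one byte[]:
  //   [short payloadLength][short startOffset per column][raw column bytes...]
  static byte[] pack(ByteBuffer[] cols) {
    if (cols == null || cols.length == 0) {
      return null; // empty input: nothing to pack
    }
    short offset = (short) (cols.length * 2 + 2); // header = length field + offsets
    int total = offset;
    for (ByteBuffer col : cols) total += col.capacity();

    ByteBuffer out = ByteBuffer.allocate(total);
    out.putShort((short) (total - 2)); // payload length, excluding this field itself
    out.putShort(offset);              // start offset of the first column
    for (int i = 0; i < cols.length - 1; i++) {
      offset += cols[i].capacity();    // start offset of each remaining column
      out.putShort(offset);
    }
    for (ByteBuffer col : cols) out.put(col.array()); // then the actual data
    return out.array();
  }

  public static void main(String[] args) {
    ByteBuffer[] cols = {
        ByteBuffer.wrap(new byte[]{1, 2}),
        ByteBuffer.wrap(new byte[]{3})
    };
    // 2 (length field) + 4 (two offsets) + 3 (data) = 9 bytes
    System.out.println(pack(cols).length); // prints 9
  }
}
```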
[GitHub] carbondata pull request #2657: [CARBONDATA-2884] Rename the methods of ByteU...
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2657#discussion_r212822416 --- Diff: core/src/test/java/org/apache/carbondata/core/util/ByteUtilTest.java --- @@ -17,156 +17,246 @@ package org.apache.carbondata.core.util; import junit.framework.TestCase; + import org.apache.carbondata.core.constants.CarbonCommonConstants; import org.apache.carbondata.core.util.ByteUtil.UnsafeComparer; + import org.junit.Before; import org.junit.Test; import java.nio.ByteBuffer; import java.nio.charset.Charset; - /** * This test will test the functionality of the Byte Util * for the comparision of 2 byte buffers */ public class ByteUtilTest extends TestCase { -String dimensionValue1 = "1235"; -String dimensionValue2 = "1234"; -private ByteBuffer buff1; -private ByteBuffer buff2; - -/** - * This method will form one single byte [] for all the high card dims. - * - * @param byteBufferArr - * @return - */ -public static byte[] packByteBufferIntoSingleByteArray( -ByteBuffer[] byteBufferArr) { -// for empty array means there is no data to remove dictionary. -if (null == byteBufferArr || byteBufferArr.length == 0) { -return null; -} -int noOfCol = byteBufferArr.length; -short toDetermineLengthOfByteArr = 2; -short offsetLen = (short) (noOfCol * 2 + toDetermineLengthOfByteArr); -int totalBytes = calculateTotalBytes(byteBufferArr) + offsetLen; - -ByteBuffer buffer = ByteBuffer.allocate(totalBytes); - -// write the length of the byte [] as first short -buffer.putShort((short) (totalBytes - toDetermineLengthOfByteArr)); -// writing the offset of the first element.
-buffer.putShort(offsetLen); - -// prepare index for byte [] -for (int index = 0; index < byteBufferArr.length - 1; index++) { -ByteBuffer individualCol = byteBufferArr[index]; -// short lengthOfbytes = individualCol.getShort(); -int noOfBytes = individualCol.capacity(); - -buffer.putShort((short) (offsetLen + noOfBytes)); -offsetLen += noOfBytes; -individualCol.rewind(); -} - -// put actual data. -for (int index = 0; index < byteBufferArr.length; index++) { -ByteBuffer individualCol = byteBufferArr[index]; -buffer.put(individualCol.array()); -} - -buffer.rewind(); -return buffer.array(); + String dimensionValue1 = "1235"; + String dimensionValue2 = "1234"; + private ByteBuffer buff1; + private ByteBuffer buff2; + /** + * This method will form one single byte [] for all the high card dims. + * + * @param byteBufferArr + * @return --- End diff -- please complete the comment ---
[GitHub] carbondata issue #2614: [CARBONDATA-2837] Added MVExample in example module
Github user chenliang613 commented on the issue: https://github.com/apache/carbondata/pull/2614 retest this please ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] add sk ak to file factory on creat...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2623 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/9/ ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] add sk ak to file factory on creat...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2623 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8067/ ---
[GitHub] carbondata issue #2623: [CARBONDATA-2844] add sk ak to file factory on creat...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2623 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6409/ ---