[jira] [Created] (CARBONDATA-600) Should reuse unit test case for integration module

2017-01-05 Thread QiangCai (JIRA)
QiangCai created CARBONDATA-600:
---

 Summary: Should reuse unit test case for integration module
 Key: CARBONDATA-600
 URL: https://issues.apache.org/jira/browse/CARBONDATA-600
 Project: CarbonData
  Issue Type: Bug
  Components: spark-integration
Affects Versions: 1.0.0-incubating
Reporter: QiangCai
Assignee: QiangCai
Priority: Minor
 Fix For: 1.0.0-incubating






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-601) Should reuse unit test case for integration module

2017-01-05 Thread QiangCai (JIRA)
QiangCai created CARBONDATA-601:
---

 Summary: Should reuse unit test case for integration module
 Key: CARBONDATA-601
 URL: https://issues.apache.org/jira/browse/CARBONDATA-601
 Project: CarbonData
  Issue Type: Test
  Components: spark-integration
Affects Versions: 1.0.0-incubating
Reporter: QiangCai
Assignee: QiangCai
Priority: Minor
 Fix For: 1.0.0-incubating








Re: minor compact throw err 'IndexBuilderException'

2017-01-05 Thread Liang Chen
Hi,

1. I just tested this on my machine with the 0.2 version, and it is working fine:
-
scala> cc.sql("ALTER TABLE connectdemo1 COMPACT 'MINOR'")
INFO  05-01 23:46:54,111 - main Query [ALTER TABLE CONNECTDEMO1 COMPACT
'MINOR']
INFO  05-01 23:46:54,115 - Parsing command: alter table  connectdemo1
COMPACT 'MINOR'
INFO  05-01 23:46:54,116 - Parse Completed
AUDIT 05-01 23:46:54,379 -
[AppledeMacBook-Pro.local][apple][Thread-1]Compaction request received for
table default.connectdemo1
INFO  05-01 23:46:54,385 - main Acquired the compaction lock for table
default.connectdemo1
INFO  05-01 23:46:54,392 - main Successfully deleted the lock file
/var/folders/d3/x_28r1q932g6bq6pxcf8c6rhgn/T//default/connectdemo1/compaction.lock
res8: org.apache.spark.sql.DataFrame = []


2. Can you provide the steps to reproduce the error?
3. Please check the compaction example, DataManagementExample.scala, to see
whether you used the correct compaction script.
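
For reference, a minimal sketch of the two compaction statements, based on the session above (the 'MAJOR' variant is assumed from CarbonData's documented compaction types, so please verify against DataManagementExample.scala):

```sql
-- Minor compaction, as run in the session above
ALTER TABLE connectdemo1 COMPACT 'MINOR';
-- Major compaction (assumed syntax; verify against the example)
ALTER TABLE connectdemo1 COMPACT 'MAJOR';
```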

Regards
Liang

2017-01-05 15:24 GMT+08:00 Li Peng :

> Hello,
>   In the Spark shell with CarbonData 0.2.0, a minor compaction throws this error:
>
> WARN  05-01 15:04:06,964 - Lost task 0.0 in stage 0.0 (TID 1, dpnode08):
> org.apache.carbondata.core.carbon.datastore.exception.IndexBuilderException:
> at org.apache.carbondata.integration.spark.merger.CarbonCompactionUtil.createDataFileFooterMappingForSegments(CarbonCompactionUtil.java:127)
> at org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:121)
> at org.apache.carbondata.spark.rdd.CarbonMergerRDD.compute(CarbonMergerRDD.scala:70)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> at org.apache.spark.scheduler.Task.run(Task.scala:89)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.carbondata.core.util.CarbonUtilException: Problem while reading the file metadata
> at org.apache.carbondata.core.util.CarbonUtil.readMetadatFile(CarbonUtil.java:1063)
> at org.apache.carbondata.integration.spark.merger.CarbonCompactionUtil.createDataFileFooterMappingForSegments(CarbonCompactionUtil.java:123)
> ... 10 more
> Caused by: java.io.IOException: It doesn't set the offset properly
> at org.apache.carbondata.core.reader.ThriftReader.setReadOffset(ThriftReader.java:88)
> at org.apache.carbondata.core.reader.CarbonFooterReader.readFooter(CarbonFooterReader.java:55)
> at org.apache.carbondata.core.util.DataFileFooterConverter.readDataFileFooter(DataFileFooterConverter.java:148)
> at org.apache.carbondata.core.util.CarbonUtil.readMetadatFile(CarbonUtil.java:1061)
> ... 11 more
>
>
>
> --
> View this message in context: http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/minor-compact-throw-err-IndexBuilderException-tp5551.html
> Sent from the Apache CarbonData Mailing List archive at Nabble.com.
>



-- 
Regards
Liang


Re: Select query is not working.

2017-01-05 Thread Ravindra Pesala
Hi,

It's an issue; we are working on the fix.

On 5 January 2017 at 17:26, Anurag Srivastava  wrote:

> Hello,
>
> I took the latest code today (5/01/2017) and built it with Spark 1.6.
> After that I put the latest jar into carbonlib in Spark and started the
> Thrift server.
>
> When I started running queries, I was able to run the create and load
> queries, but the "select" query gives me this error:
>
> org.apache.carbondata.core.carbon.datastore.exception.IndexBuilderException:
> Block B-tree loading failed
>
> I have raised a JIRA issue for the same. Please look there for further
> information and the stack trace. Here is the link:
>
> https://issues.apache.org/jira/browse/CARBONDATA-597
>
>
> --
> Thanks & Regards
>
> Anurag Srivastava
> Software Consultant
> Knoldus Software LLP
>
> India - US - Canada
> Twitter | FB | LinkedIn
>



-- 
Thanks & Regards,
Ravi


[jira] [Created] (CARBONDATA-599) Should not be able to create table when the number of buckets is preceded by arithmetic operators

2017-01-05 Thread anubhav tarar (JIRA)
anubhav tarar created CARBONDATA-599:


 Summary: Should not be able to create table when the number of buckets is preceded by arithmetic operators
 Key: CARBONDATA-599
 URL: https://issues.apache.org/jira/browse/CARBONDATA-599
 Project: CarbonData
  Issue Type: Bug
  Components: spark-integration
Affects Versions: 1.0.0-incubating
 Environment: cluster
Reporter: anubhav tarar
Priority: Minor


When I created a table in CarbonData, it worked even though an arithmetic
operator preceded the bucket number.

Here are the logs:

spark.sql("""CREATE TABLE bugs(ID string) USING org.apache.spark.sql.CarbonSource OPTIONS("bucketnumber"="+1","bucketcolumns"="ID","tableName"="bugs")""");

WARN  05-01 17:40:31,912 - Couldn't find corresponding Hive SerDe for data 
source provider org.apache.spark.sql.CarbonSource. Persisting data source table 
`default`.`bugs5` into Hive metastore in Spark SQL specific format, which is 
NOT compatible with Hive.
res0: org.apache.spark.sql.DataFrame = []

But in Hive it gives an exception. Here are the logs:

hive> CREATE TABLE test888(user_id BIGINT, firstname STRING, lastname STRING)
    > CLUSTERED BY(user_id) INTO +1 BUCKETS;
FAILED: ParseException line 2:27 extraneous input '+' expecting Number near ''
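
The Hive behaviour above suggests the kind of validation CarbonData could apply to the bucketnumber option. A minimal sketch (hypothetical class and method names, not actual CarbonData code) that rejects a value such as "+1":

```java
// Hypothetical sketch: accept only a bare positive integer as the bucket
// number, mirroring Hive's parser, which expects a plain Number token.
public class BucketNumberCheck {
    static boolean isValidBucketNumber(String value) {
        // Digits only: rejects "+1", "-1", "1+1", and empty strings.
        if (value == null || value.isEmpty()) {
            return false;
        }
        for (int i = 0; i < value.length(); i++) {
            if (!Character.isDigit(value.charAt(i))) {
                return false;
            }
        }
        // Must also be a positive bucket count.
        try {
            return Integer.parseInt(value) > 0;
        } catch (NumberFormatException e) {
            return false; // too large to be a sane bucket count
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidBucketNumber("+1")); // false
        System.out.println(isValidBucketNumber("1"));  // true
        System.out.println(isValidBucketNumber("0"));  // false
    }
}
```

With such a check in the DDL path, the Spark-side CREATE TABLE above would fail fast instead of silently accepting "+1".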






Select query is not working.

2017-01-05 Thread Anurag Srivastava
Hello,

I took the latest code today (5/01/2017) and built it with Spark 1.6. After
that I put the latest jar into carbonlib in Spark and started the Thrift
server.

When I started running queries, I was able to run the create and load
queries, but the "select" query gives me this error:

org.apache.carbondata.core.carbon.datastore.exception.IndexBuilderException:
Block B-tree loading failed

I have raised a JIRA issue for the same. Please look there for further
information and the stack trace. Here is the link:

https://issues.apache.org/jira/browse/CARBONDATA-597


--
Thanks & Regards

Anurag Srivastava
Software Consultant
Knoldus Software LLP

India - US - Canada
Twitter | FB | LinkedIn


[jira] [Created] (CARBONDATA-598) Not using the tableName option in the CREATE TABLE command shows strange behaviour

2017-01-05 Thread anubhav tarar (JIRA)
anubhav tarar created CARBONDATA-598:


 Summary: Not using the tableName option in the CREATE TABLE command shows strange behaviour
 Key: CARBONDATA-598
 URL: https://issues.apache.org/jira/browse/CARBONDATA-598
 Project: CarbonData
  Issue Type: Bug
  Components: spark-integration
Affects Versions: 1.0.0-incubating
 Environment: cluster
Reporter: anubhav tarar


If you don't use the tableName option correctly when creating a table with
bucketing, it shows strange behaviour and does not validate any checks.

Here are the logs:
spark.sql("""CREATE TABLE t3q(ID String) USING org.apache.spark.sql.CarbonSource OPTIONS("bucketnumber"="1","bucketcolumns"="id","tableName"="t3")""");

Here the table gets created without any validation. But it does perform the
check when another query is fired:

spark.sql("""CREATE TABLE t3219(ID Int) USING org.apache.spark.sql.CarbonSource OPTIONS("bucketnumber"="1","bucketcolumns"="id","tableName"="t3q21000")""");

org.apache.carbondata.spark.exception.MalformedCarbonCommandException: Table default.t3q21000 can not be created without key columns. Please use DICTIONARY_INCLUDE or DICTIONARY_EXCLUDE to set at least one key column if all specified columns are numeric types

Either there should be a check that the table name in the CREATE TABLE
statement and the tableName option are the same, or, if a mismatch is
allowed, all the other checks should still be validated.
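
The suggested consistency check could look something like this sketch (hypothetical class and method names; the real fix would live in CarbonData's DDL validation):

```java
// Hypothetical sketch of the check proposed above: the table name in the
// CREATE TABLE statement and the tableName OPTION must match
// (case-insensitively), otherwise the command is rejected.
public class TableNameCheck {
    static void checkTableNameConsistency(String createTableName, String optionTableName) {
        if (optionTableName == null || !createTableName.equalsIgnoreCase(optionTableName)) {
            throw new IllegalArgumentException(
                "tableName option '" + optionTableName
                + "' does not match table name '" + createTableName
                + "' in the CREATE TABLE statement");
        }
    }

    public static void main(String[] args) {
        checkTableNameConsistency("t3", "t3"); // consistent, passes
        try {
            // the mismatch from the report above: CREATE TABLE t3q with tableName=t3
            checkTableNameConsistency("t3q", "t3");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```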






[jira] [Created] (CARBONDATA-597) Unable to fetch data with "select" query

2017-01-05 Thread Anurag Srivastava (JIRA)
Anurag Srivastava created CARBONDATA-597:


 Summary: Unable to fetch data with "select" query
 Key: CARBONDATA-597
 URL: https://issues.apache.org/jira/browse/CARBONDATA-597
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
Reporter: Anurag Srivastava
 Attachments: ErrorLog.png

I am running CarbonData with Beeline and I am able to create a table and load
data, but when I run *select * from table_name;* it gives me the error:
*Block B-tree loading failed*

Please see the attached ErrorLog.png for the stack trace.



 

 


