[ https://issues.apache.org/jira/browse/CARBONDATA-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

PRIYESH RANJAN updated CARBONDATA-4237:
---------------------------------------
    Description: 
+Modification 1 :+

[https://github.com/apache/carbondata/blob/master/docs/streaming-guide.md]

Streaming tables do not support ALTER TABLE operations (add columns, drop columns, rename columns, change datatypes, and rename table), so this constraint should be added to the Constraints section of the doc.

 

0: jdbc:hive2://100-112-148-186:22550/> alter table uniqdata_alter add 
columns(id2 int);
 Error: org.apache.hive.service.cli.HiveSQLException: Error running query: 
org.apache.carbondata.common.exceptions.sql.MalformedCarbonCommandException: 
Alter table add column is not allowed for streaming table

0: jdbc:hive2://100-112-148-186:22550/> alter table uniqdata_alter drop 
columns(integer_column1);
 Error: org.apache.hive.service.cli.HiveSQLException: Error running query: 
org.apache.carbondata.common.exceptions.sql.MalformedCarbonCommandException: 
Alter table drop column is not allowed for streaming table.

0: jdbc:hive2://100-112-148-186:22550/> ALTER TABLE uniqdata_alter rename TO 
uniqdata_alterTable ;
 Error: org.apache.hive.service.cli.HiveSQLException: Error running query: 
org.apache.carbondata.common.exceptions.sql.MalformedCarbonCommandException: 
Alter rename table is not allowed for streaming table.

 

+Modification 2 :+

[https://github.com/apache/carbondata/blob/master/docs/file-structure-of-carbondata.md]

Since the Metadata folder contains the segments folder, the tablestatus file, and the schema file, the dictionary-file-related content in the Metadata folder description should be removed from the doc.

e.g. the line "Metadata directory stores schema files, tablestatus and *dictionary files (including .dict, .dictmeta and .sortindex).*" can be modified to "Metadata directory stores schema files, tablestatus and segments details."
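For reference, this is the layout the corrected sentence would describe (a sketch based only on the folders named above; exact file names may differ by version):

```text
table_name/
  Metadata/
    schema        <- table schema file
    tablestatus   <- status of each segment
    segments/     <- per-segment metadata files
```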

 

+Modification 3 :+

[https://github.com/apache/carbondata/blob/master/docs/sdk-guide.md]

In the Quick Example section of this doc, the sample code still converts an integer value to a date and a long value to a timestamp, whereas the reader now returns date and timestamp values directly.

 

{code:java}
while (reader.hasNext()) {
  Object[] row = (Object[]) reader.readNextRow();
  System.out.println(String.format("%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t",
      i, row[0], row[1], row[2], row[3], row[4], row[5],
      new Date((day * ((int) row[6]))), new Timestamp((long) row[7] / 1000),
      row[8]
  ));
{code}

can be modified to:

{code:java}
while (reader.hasNext()) {
  Object[] row = (Object[]) reader.readNextRow();
  System.out.println(String.format("%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t",
      i, row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7],
      row[8], row[9]
  ));
{code}

(row[6] and row[7] no longer need the Date/Timestamp conversion, and row[9] is now printed as well.)
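To make the change concrete, here is a minimal, self-contained Java sketch (no CarbonData dependency; the row contents are hypothetical sample values standing in for one readNextRow() result): with the newer SDK, row[6] and row[7] already arrive as java.sql.Date and java.sql.Timestamp, so they can be printed directly without the manual conversion.

```java
import java.sql.Date;
import java.sql.Timestamp;

public class Main {
    public static void main(String[] args) {
        // Hypothetical sample row: the date and timestamp columns are already
        // java.sql.Date / java.sql.Timestamp objects, not int (days since
        // epoch) / long (microseconds) values needing conversion.
        Object[] row = {
            "robot0", (short) 0, 0, 0L, 0.0, true,
            Date.valueOf("2019-03-02"),                 // row[6]: printed as-is
            Timestamp.valueOf("2019-02-12 03:03:34"),   // row[7]: printed as-is
            new java.math.BigDecimal("12.35"), 0.5f
        };
        int i = 0;
        // Same format string as the corrected snippet: no new Date(...) or
        // new Timestamp(...) wrapping around row[6] / row[7].
        System.out.println(String.format(
            "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t",
            i, row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7],
            row[8], row[9]));
    }
}
```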



> documentation issues in github master docs.
> -------------------------------------------
>
>                 Key: CARBONDATA-4237
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4237
>             Project: CarbonData
>          Issue Type: Bug
>          Components: docs
>    Affects Versions: 2.2.0
>         Environment: Contents verified on Spark 2.4.5 and Spark 3.1.1
>            Reporter: PRIYESH RANJAN
>            Priority: Minor
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
