rdblue commented on a change in pull request #1481:
URL: https://github.com/apache/iceberg/pull/1481#discussion_r494442976



##########
File path: mr/src/main/java/org/apache/iceberg/mr/Catalogs.java
##########
@@ -77,6 +102,77 @@ private static Table loadTable(Configuration conf, String tableIdentifier, Strin
     return new HadoopTables(conf).load(tableLocation);
   }
 
+  /**
+   * Creates an Iceberg table using the catalog specified by the configuration.
+   * The properties should contain the following values:
+   * <p><ul>
+   * <li>Table identifier ({@link Catalogs#NAME}) or table path ({@link Catalogs#LOCATION}) is required
+   * <li>Table schema ({@link InputFormatConfig#TABLE_SCHEMA}) is required
+   * <li>Partition specification ({@link InputFormatConfig#PARTITION_SPEC}) is optional. Table will be unpartitioned if
+   *  not provided
+   * </ul><p>
+   * Other properties will be handed over to the Table creation. The controlling properties above will not be
+   * propagated.
+   * @param conf a Hadoop conf
+   * @param props the controlling properties
+   * @return the created Iceberg table
+   */
+  public static Table createTable(Configuration conf, Properties props) {
+    String schemaString = props.getProperty(InputFormatConfig.TABLE_SCHEMA);
+    Preconditions.checkNotNull(schemaString, "Table schema not set");
+    Schema schema = SchemaParser.fromJson(props.getProperty(InputFormatConfig.TABLE_SCHEMA));

Review comment:
       I think it's fine to do this in a separate PR. I just really don't want to require setting properties with JSON schema or spec representations as the way to use Iceberg. That's okay as a way to customize when there is no DDL syntax for something, but normal cases should just use DDL.
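
A minimal sketch of the property-driven flow this javadoc describes. The catalog setup, table identifier, and schema below are illustrative and not taken from the PR; only `Catalogs.createTable`, `Catalogs.NAME`, and `InputFormatConfig.TABLE_SCHEMA` come from the diff above:

```java
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Schema;
import org.apache.iceberg.SchemaParser;
import org.apache.iceberg.Table;
import org.apache.iceberg.mr.Catalogs;
import org.apache.iceberg.mr.InputFormatConfig;
import org.apache.iceberg.types.Types;

public class CreateTableSketch {
  public static void main(String[] args) {
    // Assumes the catalog to use is already configured in conf, per the javadoc above
    Configuration conf = new Configuration();

    // Build the schema in code, then serialize it to the JSON form the property expects
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));

    Properties props = new Properties();
    props.setProperty(Catalogs.NAME, "default.sample_table");  // illustrative table identifier
    props.setProperty(InputFormatConfig.TABLE_SCHEMA, SchemaParser.toJson(schema));

    // No InputFormatConfig.PARTITION_SPEC set, so the table would be unpartitioned
    Table table = Catalogs.createTable(conf, props);
    System.out.println("Created table at " + table.location());
  }
}
```

`SchemaParser.toJson(schema)` produces the JSON string that ends up in the `TABLE_SCHEMA` property, which is the representation the comment above would rather keep as a fallback for engines without DDL than as the primary interface.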

##########
File path: core/src/main/java/org/apache/iceberg/hadoop/HadoopTables.java
##########
@@ -144,6 +147,52 @@ public Table create(Schema schema, PartitionSpec spec, SortOrder order,
     return new BaseTable(ops, location);
   }
 
+  /**
+   * Drop a table and delete all data and metadata files. Throws NoSuchTableException if the table does not exist.
+   *
+   * @param location a path URI (e.g. hdfs:///warehouse/my_table)
+   * @return true if the table was dropped
+   */
+  public boolean dropTable(String location) {

Review comment:
       Agreed.

##########
File path: core/src/main/java/org/apache/iceberg/hadoop/HadoopTables.java
##########
@@ -144,6 +147,52 @@ public Table create(Schema schema, PartitionSpec spec, SortOrder order,
     return new BaseTable(ops, location);
   }
 
+  /**
+   * Drop a table and delete all data and metadata files.
+   *
+   * @param location a path URI (e.g. hdfs:///warehouse/my_table)
+   * @return true if the table was dropped
+   * @throws NoSuchTableException if the table does not exist.
+   */
+  public boolean dropTable(String location) {
+    return dropTable(location, true);
+  }
+
+  /**
+   * Drop a table; optionally delete data and metadata files.
+   * <p>
+   * If purge is set to true the implementation should delete all data and metadata files.
+   *
+   * @param location a path URI (e.g. hdfs:///warehouse/my_table)
+   * @param purge if true, delete all data and metadata files in the table
+   * @return true if the table was dropped
+   * @throws NoSuchTableException if the table does not exist.
+   */
+  public boolean dropTable(String location, boolean purge) {
+    TableOperations ops = newTableOps(location);
+    TableMetadata lastMetadata = null;
+    if (ops.current() != null) {
+      if (purge) {
+        lastMetadata = ops.current();
+      }
+    } else {
+      throw new NoSuchTableException("Table does not exist at location: %s, so it can not be dropped", location);

Review comment:
       I should have caught this yesterday, but shouldn't this return `false` instead of throwing the exception? That's what all the other `drop` methods do. If the table doesn't exist, it isn't an exceptional case; the method just returns `false` to signal that nothing needed to be done.
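
A sketch of the shape this suggests for the two-argument overload; only the existence check changes, and the actual drop/purge work (elided in the comments below) is assumed to stay as in the PR:

```java
// Inside HadoopTables; a sketch only, assuming newTableOps and the PR's cleanup logic are unchanged.
public boolean dropTable(String location, boolean purge) {
  TableOperations ops = newTableOps(location);
  if (ops.current() == null) {
    // No table at this location: return false like the other drop implementations,
    // instead of throwing NoSuchTableException.
    return false;
  }

  // Keep the last metadata only when purging, so the data and metadata files can be deleted
  TableMetadata lastMetadata = purge ? ops.current() : null;

  // ... drop the table and, when purge is true, delete the files referenced by
  // lastMetadata (this part is unchanged from the PR and elided here) ...
  return true;
}
```

Callers can then treat a `false` return as "nothing to drop" rather than catching an exception.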



