reswqa commented on code in PR #35:
URL: https://github.com/apache/flink-connector-pulsar/pull/35#discussion_r1135385443


##########
.idea/vcs.xml:
##########
@@ -20,5 +20,6 @@
   </component>
   <component name="VcsDirectoryMappings">
     <mapping directory="$PROJECT_DIR$" vcs="Git" />
+    <mapping directory="$PROJECT_DIR$/tools/releasing/shared" vcs="Git" />

Review Comment:
   Why submit this change to the upstream branch?



##########
flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/table/catalog/PulsarCatalog.java:
##########
@@ -0,0 +1,522 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.pulsar.table.catalog;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.connector.pulsar.common.config.PulsarConfigBuilder;
+import org.apache.flink.connector.pulsar.table.catalog.client.PulsarCatalogClient;
+import org.apache.flink.connector.pulsar.table.catalog.config.CatalogConfiguration;
+import org.apache.flink.table.catalog.AbstractCatalog;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogDatabase;
+import org.apache.flink.table.catalog.CatalogFunction;
+import org.apache.flink.table.catalog.CatalogPartition;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.ResolvedCatalogTable;
+import org.apache.flink.table.catalog.ResolvedCatalogView;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.factories.Factory;
+
+import org.apache.pulsar.client.admin.PulsarAdminException;
+import org.apache.pulsar.client.api.PulsarClientException;
+import org.apache.pulsar.common.naming.TopicDomain;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.List;
+import java.util.Optional;
+
+import static org.apache.flink.connector.pulsar.common.config.PulsarOptions.PULSAR_ADMIN_URL;
+import static org.apache.flink.connector.pulsar.common.config.PulsarOptions.PULSAR_SERVICE_URL;
+import static org.apache.flink.connector.pulsar.table.catalog.PulsarCatalogOptions.LOCAL_PULSAR_ADMIN_URL;
+import static org.apache.flink.connector.pulsar.table.catalog.PulsarCatalogOptions.LOCAL_PULSAR_SERVICE_URL;
+import static org.apache.flink.connector.pulsar.table.catalog.config.PulsarCatalogConfigUtils.CATALOG_VALIDATOR;
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.StringUtils.isNullOrWhitespaceOnly;
+
+/**
+ * Catalog implementation which uses Pulsar to store Flink-created databases/tables, exposing
+ * Pulsar's namespaces as Flink databases and Pulsar's topics as Flink tables.
+ *
+ * <h2>Database Mapping</h2>
+ *
+ * <p>{@link PulsarCatalog} offers two kinds of databases.
+ *
+ * <ul>
+ *   <li><strong>Managed Databases</strong><br>
+ *       A managed database refers to a database created by using Flink whose name doesn't contain
+ *       tenant information.<br>
+ *       We will create the corresponding namespace under the tenant configured by {@link
+ *       PulsarCatalogOptions#PULSAR_CATALOG_MANAGED_TENANT}.
+ *   <li><strong>Pulsar Databases</strong><br>
+ *       A Pulsar database refers to an existing namespace that is neither a system namespace nor
+ *       under the Flink managed tenant in Pulsar. Each namespace will be mapped to a database using
+ *       the tenant and namespace name, like {@code tenant/namespace}.
+ * </ul>
+ *
+ * <h2>Table Mapping</h2>
+ *
+ * <p>A table refers to a Pulsar topic, using a 1-to-1 mapping from Pulsar's {@link
+ * TopicDomain#persistent} topics to Flink tables. We don't support {@link
+ * TopicDomain#non_persistent} topics here.
+ *
+ * <p>Each topic will be mapped to a table under a database named after the topic's tenant and
+ * namespace, like {@code tenant/namespace}. The mapped table has the same name as the local name of
+ * the original topic. For example, the topic {@code persistent://public/default/some} will be
+ * mapped to the {@code some} table under the {@code public/default} database. This allows users to
+ * easily query existing Pulsar topics without explicitly creating the table. The catalog
+ * automatically determines the Flink format to use based on the Pulsar schema stored in the topic.
+ *
+ * <p>This mapping has some limitations; for example, users can't designate a watermark and thus
+ * can't use window aggregate functions on topics that aren't created by the catalog.
+ */
+@PublicEvolving
+@SuppressWarnings("java:S1192")
+public class PulsarCatalog extends AbstractCatalog {
+    private static final Logger LOG = LoggerFactory.getLogger(PulsarCatalog.class);
+
+    private final CatalogConfiguration configuration;
+
+    private PulsarCatalogClient client;
+
+    public PulsarCatalog(String catalogName, CatalogConfiguration configuration) {
+        super(catalogName, configuration.getDefaultDatabase());
+
+        PulsarConfigBuilder builder = new PulsarConfigBuilder(configuration);
+
+        // Set the required options for supporting the local catalog.
+        builder.setIfMissing(PULSAR_SERVICE_URL, LOCAL_PULSAR_SERVICE_URL);
+        builder.setIfMissing(PULSAR_ADMIN_URL, LOCAL_PULSAR_ADMIN_URL);
+
+        // We may create the CatalogConfiguration twice when using the PulsarCatalogFactory.
+        // But we truly add the config validation when you want to manually create the Pulsar
+        // catalog.
+        this.configuration = builder.build(CATALOG_VALIDATOR, CatalogConfiguration::new);
+    }
+
+    @Override
+    public Optional<Factory> getFactory() {
+        // We will add PulsarDynamicTableFactory support here in the upcoming PRs.
+        return Optional.empty();
+    }
+
+    @Override
+    public void open() throws CatalogException {
+        // Create the catalog client.
+        if (client == null) {
+            try {
+                this.client = new PulsarCatalogClient(getName(), configuration);
+            } catch (PulsarClientException e) {
+                String message =
+                        "Failed to create the client in catalog "
+                                + getName()
+                                + ", config is: "
+                                + configuration;
+                throw new CatalogException(message, e);

Review Comment:
   Can we extract this logic into a method, since many places have the same behavior?
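For illustration, the repeated build-message-and-throw pattern could be factored like this (a sketch using a stand-in `CatalogException` so it is self-contained; the helper name `catalogException` is hypothetical, not from the PR):

```java
// Stand-in for Flink's CatalogException, so this sketch compiles on its own.
class CatalogException extends RuntimeException {
    CatalogException(String message, Throwable cause) {
        super(message, cause);
    }
}

class PulsarCatalogSketch {
    private final String name;
    private final String configuration;

    PulsarCatalogSketch(String name, String configuration) {
        this.name = name;
        this.configuration = configuration;
    }

    // Builds the uniform "Failed to ..." message once, so every call site
    // collapses to: throw catalogException("create the client", e);
    CatalogException catalogException(String action, Throwable cause) {
        String message =
                "Failed to " + action + " in catalog " + name + ", config is: " + configuration;
        return new CatalogException(message, cause);
    }
}
```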



##########
flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/table/schema/translators/AvroSchemaTranslator.java:
##########
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.pulsar.table.schema.translators;
+
+import org.apache.flink.connector.pulsar.table.schema.SchemaTranslator;
+import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.catalog.Column;
+import org.apache.flink.table.catalog.ResolvedSchema;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.FieldsDataType;
+
+import org.apache.pulsar.client.api.schema.SchemaDefinition;
+import org.apache.pulsar.client.impl.schema.AvroSchema;
+import org.apache.pulsar.client.impl.schema.util.SchemaUtil;
+import org.apache.pulsar.common.schema.SchemaInfo;
+import org.apache.pulsar.common.schema.SchemaType;
+
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.apache.flink.connector.pulsar.table.schema.translators.PrimitiveSchemaTranslator.SINGLE_FIELD_FIELD_NAME;
+import static org.apache.flink.formats.avro.typeutils.AvroSchemaConverter.convertToSchema;
+
+/** The translator for Pulsar's {@link AvroSchema}. */
+public class AvroSchemaTranslator implements SchemaTranslator {
+
+    @Override
+    public Schema toSchema(SchemaInfo info) {
+        String json = new String(info.getSchema(), StandardCharsets.UTF_8);
+        DataType dataType = AvroSchemaConverter.convertToDataType(json);
+        if (!(dataType instanceof FieldsDataType)) {
+            // KeyValue type will be converted into the single value type.

Review Comment:
   Can we guarantee that it must be a `KeyValue` type here?
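One way to make that assumption explicit would be to fail fast on anything that is neither a record nor a known single-value case, instead of silently wrapping every non-record type. A sketch with stand-in types (not the real Flink `DataType` hierarchy):

```java
// Stand-in types so the sketch is self-contained; the real code uses
// Flink's DataType / FieldsDataType and Pulsar's schema info.
class DataType {}
class FieldsDataType extends DataType {}
class KeyValueDataType extends DataType {} // hypothetical marker for the KeyValue case

class TranslatorGuardSketch {
    // Classify the converted type, throwing instead of assuming any
    // non-record type must be a KeyValue type.
    static String classify(DataType type) {
        if (type instanceof FieldsDataType) {
            return "record"; // fields map 1-to-1 to table columns
        }
        if (type instanceof KeyValueDataType) {
            return "key-value"; // wrap into a single-field schema
        }
        throw new IllegalStateException(
                "Unexpected non-record schema type: " + type.getClass().getSimpleName());
    }
}
```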



##########
flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/table/catalog/PulsarCatalog.java:
##########
@@ -0,0 +1,522 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.pulsar.table.catalog;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.connector.pulsar.common.config.PulsarConfigBuilder;
+import org.apache.flink.connector.pulsar.table.catalog.client.PulsarCatalogClient;
+import org.apache.flink.connector.pulsar.table.catalog.config.CatalogConfiguration;
+import org.apache.flink.table.catalog.AbstractCatalog;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogDatabase;
+import org.apache.flink.table.catalog.CatalogFunction;
+import org.apache.flink.table.catalog.CatalogPartition;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.ResolvedCatalogTable;
+import org.apache.flink.table.catalog.ResolvedCatalogView;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.factories.Factory;
+
+import org.apache.pulsar.client.admin.PulsarAdminException;
+import org.apache.pulsar.client.api.PulsarClientException;
+import org.apache.pulsar.common.naming.TopicDomain;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.List;
+import java.util.Optional;
+
+import static org.apache.flink.connector.pulsar.common.config.PulsarOptions.PULSAR_ADMIN_URL;
+import static org.apache.flink.connector.pulsar.common.config.PulsarOptions.PULSAR_SERVICE_URL;
+import static org.apache.flink.connector.pulsar.table.catalog.PulsarCatalogOptions.LOCAL_PULSAR_ADMIN_URL;
+import static org.apache.flink.connector.pulsar.table.catalog.PulsarCatalogOptions.LOCAL_PULSAR_SERVICE_URL;
+import static org.apache.flink.connector.pulsar.table.catalog.config.PulsarCatalogConfigUtils.CATALOG_VALIDATOR;
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.StringUtils.isNullOrWhitespaceOnly;
+
+/**
+ * Catalog implementation which uses Pulsar to store Flink-created databases/tables, exposing
+ * Pulsar's namespaces as Flink databases and Pulsar's topics as Flink tables.
+ *
+ * <h2>Database Mapping</h2>
+ *
+ * <p>{@link PulsarCatalog} offers two kinds of databases.
+ *
+ * <ul>
+ *   <li><strong>Managed Databases</strong><br>
+ *       A managed database refers to a database created by using Flink whose name doesn't contain
+ *       tenant information.<br>
+ *       We will create the corresponding namespace under the tenant configured by {@link
+ *       PulsarCatalogOptions#PULSAR_CATALOG_MANAGED_TENANT}.
+ *   <li><strong>Pulsar Databases</strong><br>
+ *       A Pulsar database refers to an existing namespace that is neither a system namespace nor
+ *       under the Flink managed tenant in Pulsar. Each namespace will be mapped to a database using
+ *       the tenant and namespace name, like {@code tenant/namespace}.
+ * </ul>
+ *
+ * <h2>Table Mapping</h2>
+ *
+ * <p>A table refers to a Pulsar topic, using a 1-to-1 mapping from Pulsar's {@link
+ * TopicDomain#persistent} topics to Flink tables. We don't support {@link
+ * TopicDomain#non_persistent} topics here.
+ *
+ * <p>Each topic will be mapped to a table under a database named after the topic's tenant and
+ * namespace, like {@code tenant/namespace}. The mapped table has the same name as the local name of
+ * the original topic. For example, the topic {@code persistent://public/default/some} will be
+ * mapped to the {@code some} table under the {@code public/default} database. This allows users to
+ * easily query existing Pulsar topics without explicitly creating the table. The catalog
+ * automatically determines the Flink format to use based on the Pulsar schema stored in the topic.
+ *
+ * <p>This mapping has some limitations; for example, users can't designate a watermark and thus
+ * can't use window aggregate functions on topics that aren't created by the catalog.

Review Comment:
   Why can't users use window aggregate functions for topics that `aren't` created by the catalog?
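For context, the 1-to-1 topic-to-table mapping described in the quoted javadoc can be sketched as follows (a hypothetical helper for illustration, not code from the PR):

```java
// Maps a persistent Pulsar topic name to the catalog's database/table pair,
// e.g. "persistent://public/default/some" -> database "public/default", table "some".
class TopicNameMapping {
    static String[] toDatabaseAndTable(String topic) {
        String prefix = "persistent://";
        if (!topic.startsWith(prefix)) {
            // Non-persistent topics are not mapped by the catalog.
            throw new IllegalArgumentException("Only persistent topics are mapped: " + topic);
        }
        String rest = topic.substring(prefix.length()); // "tenant/namespace/local-name"
        int lastSlash = rest.lastIndexOf('/');
        String database = rest.substring(0, lastSlash); // "tenant/namespace"
        String table = rest.substring(lastSlash + 1);   // local topic name
        return new String[] {database, table};
    }
}
```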



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
