[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-07 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r764483238



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java
##
@@ -220,35 +224,47 @@ private static Result toResult(HBaseRpcController controller, MutateResponse res
 
   @Override
   public CompletableFuture<Result> get(Get get) {
+    final Supplier<Span> supplier = new TableOperationSpanBuilder()

Review comment:
   The operation argument is polymorphic, so I'd have to implement several 
identical methods, each with a different operation type in their signature. I 
have wrapped up invocations of `TableOperationSpanBuilder` as described.
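   
   For reference, here is a minimal sketch of the call-site pattern described above, assuming the builder from this PR and the `opentelemetry-api` artifact are on the classpath. `SupplierPatternSketch` and `tracedWork` are hypothetical names for illustration only; `tracedWork` is a simplified stand-in for the `tracedFuture` helper in `RawAsyncTableImpl`, not the actual method.
   
   ```java
   import io.opentelemetry.api.trace.Span;
   import io.opentelemetry.context.Scope;
   import java.util.concurrent.CompletableFuture;
   import java.util.function.Supplier;
   import org.apache.hadoop.hbase.TableName;
   import org.apache.hadoop.hbase.client.Get;
   import org.apache.hadoop.hbase.client.trace.TableOperationSpanBuilder;
   
   public class SupplierPatternSketch {
   
     // Hypothetical stand-in for RawAsyncTableImpl's tracedFuture helper: the span is
     // only created when the Supplier is invoked, so call sites stay free of span
     // construction details.
     static <T> CompletableFuture<T> tracedWork(Supplier<CompletableFuture<T>> work,
         Supplier<Span> spanSupplier) {
       final Span span = spanSupplier.get();
       try (Scope ignored = span.makeCurrent()) {
         return work.get().whenComplete((result, error) -> span.end());
       }
     }
   
     static CompletableFuture<String> get(TableName tableName, Get get) {
       // The builder's setOperation overloads accept a Scan, any Row (Get, Put,
       // Delete, ...), or an Operation directly, so one wrapper serves every call site.
       final Supplier<Span> supplier = new TableOperationSpanBuilder()
         .setTableName(tableName)
         .setOperation(get);
       return tracedWork(() -> CompletableFuture.completedFuture("result"), supplier);
     }
   }
   ```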




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-07 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r764228006



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)

Review comment:
   > And what you said about the scanner.next, I was not talking about 
client side scan, I was talking about the server side RegionScanner...
   
   Okay, understood. We can discuss that separately.
   
   > And we will always have the rpc method to be traced, so even if we do nothing in the scan method, we could still see a lot of rpc spans when scanning.
   
   This is true.
   
   It sounds like we need to open an operation-level span, just to encapsulate 
all the RPC spans.
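   
   A minimal sketch of what such an operation-level span could look like with the OpenTelemetry API; the tracer name and the child "RPC" span name below are illustrative placeholders, not the actual HBase span names:
   
   ```java
   import io.opentelemetry.api.GlobalOpenTelemetry;
   import io.opentelemetry.api.trace.Span;
   import io.opentelemetry.api.trace.SpanKind;
   import io.opentelemetry.api.trace.Tracer;
   import io.opentelemetry.context.Scope;
   
   public class ScanSpanSketch {
     public static void main(String[] args) {
       Tracer tracer = GlobalOpenTelemetry.getTracer("hbase-client-sketch");
   
       // One operation-level span per client-visible scan, however many RPCs it fans out into.
       Span scanSpan = tracer.spanBuilder("SCAN default:test")
         .setSpanKind(SpanKind.CLIENT)
         .startSpan();
       try (Scope ignored = scanSpan.makeCurrent()) {
         // Each underlying RPC span becomes a child because the scan span is current.
         for (int rpc = 0; rpc < 3; rpc++) {
           Span rpcSpan = tracer.spanBuilder("ClientService/Scan")
             .setSpanKind(SpanKind.CLIENT)
             .startSpan();
           rpcSpan.end();
         }
       } finally {
         scanSpan.end();
       }
     }
   }
   ```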








[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-07 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r764225344



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)
+      + " "
+      + (tableName != null ? tableName.getNameWithNamespaceInclAsString() : unknown);
+    final SpanBuilder builder = TraceUtil.getGlobalTracer()
+      .spanBuilder(name)
+      // TODO: what about clients embedded in Master/RegionServer/Gateways/?
+      .setSpanKind(SpanKind.CLIENT);
+    attributes.forEach((k, v) -> builder.setAttribute((AttributeKey) k, v));
+    return builder.startSpan();
+  }
+
+  private static Operation valueFrom(final Scan scan) {
+    if (scan == null) { return null; }
+    return Operation.SCAN;
+  }
+
+  private static Operation valueFrom(final Row row) {
+    if (row == null) { return null; }
+    if (row instanceof Append) { return Operation.APPEND; }
+    if (row instanceof CheckAndMutate) { return Operation.CHECK_AND_MUTATE; }
+    if (row instanceof Delete) { return Operation.DELETE; }
+    if (row instanceof Get) { return Operation.GET; }
+    if (row instanceof Increment) { return Operation.INCREMENT; }
+    if (row instanceof Put) { return Operation.PUT; }
+    if (row instanceof 

[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-07 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r764221474



##
File path: hbase-common/src/main/java/org/apache/hadoop/hbase/trace/HBaseSemanticAttributes.java
##
@@ -28,7 +28,9 @@
  */
 @InterfaceAudience.Private
 public final class HBaseSemanticAttributes {
+  public static final AttributeKey<String> DB_NAME = SemanticAttributes.DB_NAME;
   public static final AttributeKey<String> NAMESPACE_KEY = SemanticAttributes.DB_HBASE_NAMESPACE;
+  public static final AttributeKey<String> DB_OPERATION = SemanticAttributes.DB_OPERATION;
   public static final AttributeKey<String> TABLE_KEY = AttributeKey.stringKey("db.hbase.table");

Review comment:
   I guess we can drop the `_KEY` part here as none of these constants we 
import from `SemanticAttributes` use this naming convention.








[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-06 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r763517917



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)

Review comment:
   However the "long" scan is traced, the both `scan` and `scanAll` methods 
are "scan" operations from the client's perspective, so I think it's find for 
both of them to have `db.operation="SCAN"`.
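   
   A small sketch of the point, assuming the builder from this PR is on the classpath (with no OpenTelemetry SDK installed it simply yields a no-op span): whichever entry point the caller uses, the same `Scan` reaches the builder, so both produce the same operation attribute and span name.
   
   ```java
   import io.opentelemetry.api.trace.Span;
   import org.apache.hadoop.hbase.TableName;
   import org.apache.hadoop.hbase.client.Scan;
   import org.apache.hadoop.hbase.client.trace.TableOperationSpanBuilder;
   
   public class ScanOperationSketch {
     public static void main(String[] args) {
       // scan and scanAll both derive db.operation="SCAN" and a span named "SCAN ns:table".
       final Span span = new TableOperationSpanBuilder()
         .setTableName(TableName.valueOf("ns", "table"))
         .setOperation(new Scan())
         .build();
       span.end();
     }
   }
   ```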








[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-06 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r763516926



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)

Review comment:
   Filed [HBASE-26545](https://issues.apache.org/jira/browse/HBASE-26545).








[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-03 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r762327178



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java
##
@@ -329,43 +348,52 @@ private void preCheck() {
     public CompletableFuture<Boolean> thenPut(Put put) {
       validatePut(put, conn.connConf.getMaxKeyValueSize());
       preCheck();
+      final Supplier<Span> supplier = new TableOperationSpanBuilder()
+        .setTableName(tableName)
+        .setOperation(HBaseSemanticAttributes.Operation.CHECK_AND_MUTATE);
       return tracedFuture(
         () -> RawAsyncTableImpl.this.<Boolean> newCaller(row, put.getPriority(), rpcTimeoutNs)
           .action((controller, loc, stub) -> RawAsyncTableImpl.mutate(controller, loc, stub, put,
             (rn, p) -> RequestConverter.buildMutateRequest(rn, row, family, qualifier, op, value,
               null, timeRange, p, HConstants.NO_NONCE, HConstants.NO_NONCE),
             (c, r) -> r.getProcessed()))
           .call(),
-        "AsyncTable.CheckAndMutateBuilder.thenPut", tableName);
+        supplier);
     }
 
     @Override
     public CompletableFuture<Boolean> thenDelete(Delete delete) {
       preCheck();
+      final Supplier<Span> supplier = new TableOperationSpanBuilder()
+        .setTableName(tableName)
+        .setOperation(HBaseSemanticAttributes.Operation.CHECK_AND_MUTATE);
       return tracedFuture(
         () -> RawAsyncTableImpl.this.<Boolean> newCaller(row, delete.getPriority(), rpcTimeoutNs)
          .action((controller, loc, stub) -> RawAsyncTableImpl.mutate(controller, loc, stub, delete,
             (rn, d) -> RequestConverter.buildMutateRequest(rn, row, family, qualifier, op, value,
               null, timeRange, d, HConstants.NO_NONCE, HConstants.NO_NONCE),
             (c, r) -> r.getProcessed()))
           .call(),
-        "AsyncTable.CheckAndMutateBuilder.thenDelete", tableName);
+        supplier);
     }
 
     @Override
-    public CompletableFuture<Boolean> thenMutate(RowMutations mutation) {
+    public CompletableFuture<Boolean> thenMutate(RowMutations mutations) {
       preCheck();
-      validatePutsInRowMutations(mutation, conn.connConf.getMaxKeyValueSize());
+      validatePutsInRowMutations(mutations, conn.connConf.getMaxKeyValueSize());
+      final Supplier<Span> supplier = new TableOperationSpanBuilder()
+        .setTableName(tableName)
+        .setOperation(HBaseSemanticAttributes.Operation.BATCH);

Review comment:
   Latest patch corrects the operation type for the checkAndMutate cases @taklwu pointed out. Thanks for noticing!
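   
   Roughly speaking, the correction amounts to this kind of change in `thenMutate` (a sketch of the intent, not the exact committed hunk):
   
   ```diff
   -  .setOperation(HBaseSemanticAttributes.Operation.BATCH);
   +  .setOperation(HBaseSemanticAttributes.Operation.CHECK_AND_MUTATE);
   ```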








[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-03 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r762233370



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java
##
@@ -329,43 +348,52 @@ private void preCheck() {
     public CompletableFuture<Boolean> thenPut(Put put) {
      validatePut(put, conn.connConf.getMaxKeyValueSize());
       preCheck();
+      final Supplier<Span> supplier = new TableOperationSpanBuilder()
+        .setTableName(tableName)
+        .setOperation(HBaseSemanticAttributes.Operation.CHECK_AND_MUTATE);
       return tracedFuture(
         () -> RawAsyncTableImpl.this.<Boolean> newCaller(row, put.getPriority(), rpcTimeoutNs)
           .action((controller, loc, stub) -> RawAsyncTableImpl.mutate(controller, loc, stub, put,
             (rn, p) -> RequestConverter.buildMutateRequest(rn, row, family, qualifier, op, value,
               null, timeRange, p, HConstants.NO_NONCE, HConstants.NO_NONCE),
             (c, r) -> r.getProcessed()))
           .call(),
-        "AsyncTable.CheckAndMutateBuilder.thenPut", tableName);
+        supplier);
     }
 
     @Override
     public CompletableFuture<Boolean> thenDelete(Delete delete) {
       preCheck();
+      final Supplier<Span> supplier = new TableOperationSpanBuilder()
+        .setTableName(tableName)
+        .setOperation(HBaseSemanticAttributes.Operation.CHECK_AND_MUTATE);
       return tracedFuture(
         () -> RawAsyncTableImpl.this.<Boolean> newCaller(row, delete.getPriority(), rpcTimeoutNs)
           .action((controller, loc, stub) -> RawAsyncTableImpl.mutate(controller, loc, stub, delete,
             (rn, d) -> RequestConverter.buildMutateRequest(rn, row, family, qualifier, op, value,
               null, timeRange, d, HConstants.NO_NONCE, HConstants.NO_NONCE),
             (c, r) -> r.getProcessed()))
           .call(),
-        "AsyncTable.CheckAndMutateBuilder.thenDelete", tableName);
+        supplier);
     }
 
     @Override
-    public CompletableFuture<Boolean> thenMutate(RowMutations mutation) {
+    public CompletableFuture<Boolean> thenMutate(RowMutations mutations) {
       preCheck();
-      validatePutsInRowMutations(mutation, conn.connConf.getMaxKeyValueSize());
+      validatePutsInRowMutations(mutations, conn.connConf.getMaxKeyValueSize());
+      final Supplier<Span> supplier = new TableOperationSpanBuilder()
+        .setTableName(tableName)
+        .setOperation(HBaseSemanticAttributes.Operation.BATCH);

Review comment:
   Since we haven't gotten to that PR yet, I'm asking in the community what 
they suggest for this type of operation. 
https://cloud-native.slack.com/archives/C01QZFGMLQ7/p1638564279059600








[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-03 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r762161794



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)
+      + " "
+      + (tableName != null ? tableName.getNameWithNamespaceInclAsString() : unknown);
+    final SpanBuilder builder = TraceUtil.getGlobalTracer()
+      .spanBuilder(name)
+      // TODO: what about clients embedded in Master/RegionServer/Gateways/?

Review comment:
   And this one, 
https://cloud-native.slack.com/archives/C01QZFGMLQ7/p1638556578055200








[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-03 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r762160351



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)

Review comment:
   I've raised this question over here: 
https://cloud-native.slack.com/archives/C01QZFGMLQ7/p1638556336052800








[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-02 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r761414990



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)
+      + " "
+      + (tableName != null ? tableName.getNameWithNamespaceInclAsString() : unknown);
+    final SpanBuilder builder = TraceUtil.getGlobalTracer()
+      .spanBuilder(name)
+      // TODO: what about clients embedded in Master/RegionServer/Gateways/?
+      .setSpanKind(SpanKind.CLIENT);
+    attributes.forEach((k, v) -> builder.setAttribute((AttributeKey) k, v));
+    return builder.startSpan();
+  }
+
+  private static Operation valueFrom(final Scan scan) {
+    if (scan == null) { return null; }
+    return Operation.SCAN;
+  }
+
+  private static Operation valueFrom(final Row row) {
+    if (row == null) { return null; }
+    if (row instanceof Append) { return Operation.APPEND; }
+    if (row instanceof CheckAndMutate) { return Operation.CHECK_AND_MUTATE; }
+    if (row instanceof Delete) { return Operation.DELETE; }
+    if (row instanceof Get) { return Operation.GET; }
+    if (row instanceof Increment) { return Operation.INCREMENT; }
+    if (row instanceof Put) { return Operation.PUT; }
+    if (row instanceof 

[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations

2021-12-01 Thread GitBox


ndimiduk commented on a change in pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#discussion_r760655901



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java
##
@@ -548,14 +591,17 @@ public void run(MultiResponse resp) {
     validatePutsInRowMutations(mutations, conn.connConf.getMaxKeyValueSize());
     long nonceGroup = conn.getNonceGenerator().getNonceGroup();
     long nonce = conn.getNonceGenerator().newNonce();
+    final Supplier<Span> supplier = new TableOperationSpanBuilder()
+      .setTableName(tableName)
+      .setOperation(HBaseSemanticAttributes.Operation.BATCH);

Review comment:
   Yes, this one could use the `mutations` instance.
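   
   That is, something along these lines (a sketch of the suggestion, assuming the builder's `Row` overload maps a `RowMutations` instance to an appropriate operation):
   
   ```diff
   -  .setOperation(HBaseSemanticAttributes.Operation.BATCH);
   +  .setOperation(mutations);
   ```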

##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)
+      + " "
+      + (tableName != null ? tableName.getNameWithNamespaceInclAsString() : unknown);
+    final SpanBuilder builder = TraceUtil.getGlobalTracer()
+      .spanBuilder(name)
+      // TODO: what about clients embedded in Master/RegionServer/Gateways/?
+      .setSpanKind(SpanKind.CLIENT);
+    attributes.forEach((k, v) -> builder.setAttribute((AttributeKey) k, v));
+    return builder.startSpan();
+  }
+
+  private static Operation