Copilot commented on code in PR #683:
URL: https://github.com/apache/incubator-hugegraph-toolchain/pull/683#discussion_r2458903549


##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/util/DataTypeUtil.java:
##########
@@ -70,16 +73,19 @@ public static Object convert(Object value, PropertyKey propertyKey, InputSource
                 return parseSingleValue(key, value, dataType, source);
             case SET:
             case LIST:
-                return parseMultiValues(key, value, dataType, cardinality, source);
+                return parseMultiValues(key, value, dataType,
+                                        cardinality, source);
             default:
-                throw new AssertionError(String.format("Unsupported cardinality: '%s'",
-                                                       cardinality));
+                throw new AssertionError(String.format(
+                        "Unsupported cardinality: '%s'", cardinality));
         }
     }
 
     @SuppressWarnings("unchecked")
-    public static List<Object> splitField(String key, Object rawColumnValue, InputSource source) {
-        E.checkArgument(rawColumnValue != null, "The value to be split can't be null");
+    public static List<Object> splitField(String key, Object rawColumnValue,
+                                          InputSource source) {
+        E.checkArgument(rawColumnValue != null,
+                        "The value to be splitted can't be null");

Review Comment:
   Corrected spelling of 'splitted' to 'split'.
   ```suggestion
                           "The value to be split can't be null");
   ```



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/util/DataTypeUtil.java:
##########
@@ -92,12 +98,13 @@ public static long parseNumber(String key, Object rawValue) {
         if (rawValue instanceof Number) {
             return ((Number) rawValue).longValue();
         } else if (rawValue instanceof String) {
-            // trim() is a little time-consuming
+            // trim() is a little time consuming

Review Comment:
   Corrected 'time consuming' to 'time-consuming' (compound adjective should be 
hyphenated).
   ```suggestion
               // trim() is a little time-consuming
   ```



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/builder/ElementBuilder.java:
##########
@@ -491,15 +642,29 @@ public void extractFromVertex(String[] names, Object[] values) {
                 if (!retainField(fieldName, fieldValue)) {
                     continue;
                 }
-                String key = mapping().mappingField(fieldName);
-                if (primaryKeys.contains(key)) {
-                    // Don't put primary key/values into general properties
-                    int index = primaryKeys.indexOf(key);
-                    Object pkValue = mappingValue(fieldName, fieldValue);
-                    this.pkValues[index] = pkValue;
+                String key = mappingField(fieldName);
+
+                if (this.headerCaseSensitive) {
+                    if (primaryKeys.contains(key)) {
+                        // Don't put priamry key/values into general properties

Review Comment:
   Corrected spelling of 'priamry' to 'primary'.



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/builder/ElementBuilder.java:
##########
@@ -491,15 +642,29 @@ public void extractFromVertex(String[] names, Object[] values) {
                 if (!retainField(fieldName, fieldValue)) {
                     continue;
                 }
-                String key = mapping().mappingField(fieldName);
-                if (primaryKeys.contains(key)) {
-                    // Don't put primary key/values into general properties
-                    int index = primaryKeys.indexOf(key);
-                    Object pkValue = mappingValue(fieldName, fieldValue);
-                    this.pkValues[index] = pkValue;
+                String key = mappingField(fieldName);
+
+                if (this.headerCaseSensitive) {
+                    if (primaryKeys.contains(key)) {
+                        // Don't put priamry key/values into general properties
+                        int index = primaryKeys.indexOf(key);
+                        Object pkValue = mappingValue(fieldName, fieldValue);
+                        this.pkValues[index] = pkValue;
+                    } else {
+                        Object value = mappingValue(fieldName, fieldValue);
+                        this.properties.put(key, value);
+                    }
                 } else {
-                    Object value = mappingValue(fieldName, fieldValue);
-                    this.properties.put(key, value);
+                    String lowerCaseKey = key.toLowerCase();
+                    if (lowerCasePrimaryKeys.contains(lowerCaseKey)) {
+                        // Don't put priamry key/values into general properties

Review Comment:
   Corrected spelling of 'priamry' to 'primary'.



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/failure/FailLogger.java:
##########
@@ -138,30 +139,32 @@ private void writeHeaderIfNeeded() {
 
     private void removeDupLines() {
         Charset charset = Charset.forName(this.struct.input().charset());
-        File dedupFile = new File(this.file.getAbsolutePath() + Constants.DEDUP_SUFFIX);
-        try (InputStream is = Files.newInputStream(this.file.toPath());
+        File dedupFile = new File(this.file.getAbsolutePath() +
+                                   Constants.DEDUP_SUFFIX);
+        try (InputStream is = new FileInputStream(this.file);
              Reader ir = new InputStreamReader(is, charset);
              BufferedReader reader = new BufferedReader(ir);
              // upper is input, below is output
-             OutputStream os = Files.newOutputStream(dedupFile.toPath());
+             OutputStream os = new FileOutputStream(dedupFile);
              Writer ow = new OutputStreamWriter(os, charset);
              BufferedWriter writer = new BufferedWriter(ow)) {
-            Set<Integer> wroteLines = new HashSet<>();
+            Set<Integer> writedLines = new HashSet<>();
             HashFunction hashFunc = Hashing.murmur3_32();
-            for (String tipsLine, dataLine; (tipsLine = reader.readLine()) != null &&
-                                            (dataLine = reader.readLine()) != null; ) {
+            for (String tipsLine, dataLine;
+                     (tipsLine = reader.readLine()) != null &&
+                     (dataLine = reader.readLine()) != null;) {
                 /*
                  * Hash data line to remove duplicate lines
                  * Misjudgment may occur, but the probability is extremely low
                  */
                 int hash = hashFunc.hashString(dataLine, charset).asInt();
-                if (!wroteLines.contains(hash)) {
+                if (!writedLines.contains(hash)) {
                     writer.write(tipsLine);
                     writer.newLine();
                     writer.write(dataLine);
                     writer.newLine();
-                    // Save the hash value of wrote line
-                    wroteLines.add(hash);
+                    // Save the hash value of writed line
+                    writedLines.add(hash);

Review Comment:
   Corrected spelling of 'writed' to 'written'.



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/failure/FailLogger.java:
##########
@@ -138,30 +139,32 @@ private void writeHeaderIfNeeded() {
 
     private void removeDupLines() {
         Charset charset = Charset.forName(this.struct.input().charset());
-        File dedupFile = new File(this.file.getAbsolutePath() + Constants.DEDUP_SUFFIX);
-        try (InputStream is = Files.newInputStream(this.file.toPath());
+        File dedupFile = new File(this.file.getAbsolutePath() +
+                                   Constants.DEDUP_SUFFIX);
+        try (InputStream is = new FileInputStream(this.file);
              Reader ir = new InputStreamReader(is, charset);
              BufferedReader reader = new BufferedReader(ir);
              // upper is input, below is output
-             OutputStream os = Files.newOutputStream(dedupFile.toPath());
+             OutputStream os = new FileOutputStream(dedupFile);
              Writer ow = new OutputStreamWriter(os, charset);
              BufferedWriter writer = new BufferedWriter(ow)) {
-            Set<Integer> wroteLines = new HashSet<>();
+            Set<Integer> writedLines = new HashSet<>();

Review Comment:
   Corrected spelling of 'writed' to 'written'.
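For context on the hunk above: it keeps only the first occurrence of each (tips line, data line) pair by hashing the data line and remembering the hashes. A minimal self-contained sketch of that idea follows, using `String.hashCode()` in place of Guava's `Hashing.murmur3_32()` from the hunk, so the same rare-collision caveat from the code comment applies:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupSketch {

    // Keep the first occurrence of each data line, paired with its tips line.
    // Hashing (instead of storing whole lines) trades a tiny collision risk
    // for much lower memory use on large failure logs.
    public static List<String[]> removeDupLines(List<String[]> pairs) {
        Set<Integer> writtenLines = new HashSet<>();
        List<String[]> result = new ArrayList<>();
        for (String[] pair : pairs) {
            int hash = pair[1].hashCode();
            // add() returns false when the hash was already seen
            if (writtenLines.add(hash)) {
                result.add(pair);
            }
        }
        return result;
    }
}
```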



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/builder/ElementBuilder.java:
##########
@@ -614,10 +859,10 @@ public void extractFromEdge(String[] names, Object[] values,
                             "In case unfold is true, just supported " +
                             "a single primary key");
             String fieldName = names[fieldIndexes[0]];
-            this.pkName = mapping().mappingField(fieldName);
+            this.pkName = mappingField(fieldName);
             String primaryKey = primaryKeys.get(0);
             E.checkArgument(this.pkName.equals(primaryKey),
-                            "Make sure the primary key field '%s' is " +
+                            "Make sure the the primary key field '%s' is " +

Review Comment:
   Removed duplicate article 'the the'.
   ```suggestion
                               "Make sure the primary key field '%s' is " +
   ```



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/builder/ElementBuilder.java:
##########
@@ -390,7 +532,7 @@ public class VertexFlatIdKVPairs extends VertexKVPairs {
         // The idField(raw field), like: id
         private String idField;
         /*
-         * The multiple idValues(split and mapped)
+         * The multiple idValues(spilted and mapped)

Review Comment:
   Corrected spelling of 'spilted' to 'split'.
   ```suggestion
            * The multiple idValues(split and mapped)
   ```



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/builder/ElementBuilder.java:
##########
@@ -590,9 +835,9 @@ public void extractFromVertex(String[] names, Object[] values) {
                 if (!retainField(fieldName, fieldValue)) {
                     continue;
                 }
-                String key = mapping().mappingField(fieldName);
+                String key = mappingField(fieldName);
                 if (!handledPk && primaryKeys.contains(key)) {
-                    // Don't put primary key/values into general properties
+                    // Don't put priamry key/values into general properties

Review Comment:
   Corrected spelling of 'priamry' to 'primary'.
   ```suggestion
                       // Don't put primary key/values into general properties
   ```



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/builder/ElementBuilder.java:
##########
@@ -565,7 +810,7 @@ public class VertexFlatPkKVPairs extends VertexKVPairs {
          */
         private String pkName;
         /*
-         * The primary values(split and mapped)
+         * The primary values(splited and mapped)

Review Comment:
   Corrected spelling of 'splited' to 'split'.
   ```suggestion
            * The primary values(split and mapped)
   ```



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/filter/ElementParser.java:
##########
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hugegraph.loader.filter;
+
+import org.apache.hugegraph.structure.GraphElement;
+
+public interface ElementParser {
+
+    /*
+    * Returns false if the element shoud be removed.

Review Comment:
   Corrected spelling of 'shoud' to 'should'.
   ```suggestion
       * Returns false if the element should be removed.
   ```



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/filter/ShortIdParser.java:
##########
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hugegraph.loader.filter;
+
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.UUID;
+
+import org.apache.hugegraph.loader.exception.LoadException;
+import org.apache.hugegraph.loader.executor.LoadOptions;
+import org.apache.hugegraph.loader.filter.util.SegmentIdGenerator;
+import org.apache.hugegraph.loader.filter.util.ShortIdConfig;
+import org.apache.hugegraph.loader.util.DataTypeUtil;
+import org.apache.hugegraph.structure.GraphElement;
+import org.apache.hugegraph.structure.constant.DataType;
+import org.apache.hugegraph.structure.graph.Edge;
+import org.apache.hugegraph.structure.graph.Vertex;
+// import org.apache.hugegraph.util.collection.JniBytes2BytesMap;
+
+public class ShortIdParser implements ElementParser {
+
+    private Map<String, String> labels;
+
+    private Map<byte[], byte[]> map;
+
+    private ThreadLocal<SegmentIdGenerator.Context> idPool;
+
+    private SegmentIdGenerator segmentIdGenerator;
+
+    private LoadOptions options;
+
+    private Map<String, ShortIdConfig> configs;
+
+    public ShortIdParser(LoadOptions options) {
+        this.options = options;
+        this.labels = new HashMap<>();
+        this.configs = convertShortIdConfigs();
+        // TODO use JniBytes2BytesMap
+        this.map = new HashMap<>();
+        this.idPool = new ThreadLocal<>();
+        this.segmentIdGenerator = new SegmentIdGenerator();
+    }
+
+    public Map<String, ShortIdConfig> convertShortIdConfigs() {
+        Map<String, ShortIdConfig> map = new HashMap<>();
+        for (ShortIdConfig config : options.shorterIDConfigs) {
+            map.put(config.getVertexLabel(), config);
+            labels.put(config.getVertexLabel(), config.getVertexLabel());
+        }
+        return map;
+    }
+
+    @Override
+    public boolean parse(GraphElement element) {
+        if (element instanceof Edge) {
+            Edge edge = (Edge) element;
+            String label;
+            if ((label = labels.get(edge.sourceLabel())) != null) {
+                ShortIdConfig config = configs.get(edge.sourceLabel());
+                edge.sourceId(getVertexNewId(label, idToBytes(config, edge.sourceId())));
+            }
+            if ((label = labels.get(edge.targetLabel())) != null) {
+                ShortIdConfig config = configs.get(edge.targetLabel());
+                edge.targetId(getVertexNewId(label, idToBytes(config, edge.targetId())));
+            }
+        } else /* vertex */ {
+            Vertex vertex = (Vertex) element;
+            if (configs.containsKey(vertex.label())) {
+                ShortIdConfig config = configs.get(vertex.label());
+                String idField = config.getIdFieldName();
+                Object originId = vertex.id();
+                if (originId == null) {
+                    originId = vertex.property(config.getPrimaryKeyField());
+                }
+                vertex.property(idField, originId);
+
+                vertex.id(getVertexNewId(config.getVertexLabel(), idToBytes(config, originId)));
+            }
+        }
+        return true;
+    }
+
+    int getVertexNewId(String label, byte[] oldId) {
+        /* fix concat label*/
+        byte[] key = oldId;
+        byte[] value = map.get(key);
+        if (value == null) {
+            synchronized (this) {
+                if (!map.containsKey(key)) {
+                    /* gen id */
+                    int id = newID();
+                    /* save id */
+                    byte[] labelBytes = label.getBytes(StandardCharsets.UTF_8);
+                    byte[] combined = new byte[labelBytes.length + oldId.length];
+                    System.arraycopy(labelBytes, 0, combined, 0, labelBytes.length);
+                    System.arraycopy(oldId, 0, combined, labelBytes.length, oldId.length);
+                    map.put(combined, longToBytes(id));
+                    return id;
+                } else {
+                    value = map.get(key);
+                }
+            }
+        }
+        return (int) bytesToLong(value);
+    }
+
+    public static byte[] idToBytes(ShortIdConfig config, Object obj) {
+        DataType type = config.getIdFieldType();
+        if (type.isText()) {
+            String id = obj.toString();
+            return id.getBytes(StandardCharsets.UTF_8);
+        } else if (type.isUUID()) {
+            UUID id = DataTypeUtil.parseUUID("Id", obj);
+            byte[] b = new byte[16];
+            return ByteBuffer.wrap(b)
+                             .order(ByteOrder.BIG_ENDIAN)
+                             .putLong(id.getMostSignificantBits())
+                             .putLong(id.getLeastSignificantBits())
+                             .array();
+        } else if (type.isNumber()) {
+            long id = DataTypeUtil.parseNumber("Id", obj);
+            return longToBytes(id);
+        }
+        throw new LoadException("Unknow Id data type '%s'.", type.string());

Review Comment:
   Corrected spelling of 'Unknow' to 'Unknown'.
   ```suggestion
           throw new LoadException("Unknown Id data type '%s'.", type.string());
   ```
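As further context for `getVertexNewId` in the hunk above: note that `byte[]` keys in a plain `HashMap` compare by identity, and the hunk's `get` uses the raw `oldId` while the `put` uses a label-prefixed key, so lookups may always miss. A minimal thread-safe sketch of the same "map each old id to a compact int" idea, using a string key and `ConcurrentHashMap.computeIfAbsent` in place of the synchronized double-check (the key format here is an illustrative assumption, not the PR's wire format):

```java
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ShortIdSketch {

    // String keys give value-based equality; byte[] keys in a HashMap would
    // compare by identity and never match a fresh array with the same bytes.
    private final ConcurrentHashMap<String, Integer> ids = new ConcurrentHashMap<>();
    private final AtomicInteger nextId = new AtomicInteger();

    // Assign a stable compact int id per (label, old id), creating one atomically
    // on first sight; computeIfAbsent replaces the synchronized double-check.
    public int getVertexNewId(String label, byte[] oldId) {
        String key = label + ":" + Arrays.toString(oldId);
        return ids.computeIfAbsent(key, k -> nextId.incrementAndGet());
    }
}
```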



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/reader/graph/GraphFetcher.java:
##########
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hugegraph.loader.reader.graph;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hugegraph.util.Log;
+import org.slf4j.Logger;
+
+import org.apache.hugegraph.driver.HugeClient;
+import org.apache.hugegraph.structure.GraphElement;
+
+public class GraphFetcher implements Iterator<GraphElement> {
+
+    public static final Logger LOG = Log.logger(GraphFetcher.class);
+
+    private final HugeClient client;
+    private final String label;
+    private final Map<String, Object> queryProperties;
+    private final int batchSize;
+    private final boolean isVertex;
+    private final List<String> ignoredProperties;
+
+    private int offset = 0;
+    private boolean done = false;
+
+    private Iterator<GraphElement> batchIter;
+
+    public GraphFetcher(HugeClient client, String label,
+                        Map<String, Object> queryProperties, int batchSize,
+                        boolean isVertex, List<String> ignoredProerties) {
+        this.client = client;
+        this.label = label;
+        this.queryProperties = queryProperties;
+        this.batchSize = batchSize;
+        this.isVertex = isVertex;
+        this.ignoredProperties = ignoredProerties;

Review Comment:
   Corrected spelling of 'ignoredProerties' to 'ignoredProperties' in parameter 
name.
   ```suggestion
                           boolean isVertex, List<String> ignoredProperties) {
           this.client = client;
           this.label = label;
           this.queryProperties = queryProperties;
           this.batchSize = batchSize;
           this.isVertex = isVertex;
           this.ignoredProperties = ignoredProperties;
   ```
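For context on `GraphFetcher` above: judging by its `offset`, `batchSize`, `done`, and `batchIter` fields, it appears to page through graph elements one batch at a time while exposing a single `Iterator`. A generic self-contained sketch of that offset-paging pattern (the fetch callback stands in for the HugeClient query the real class would issue):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.BiFunction;

// Expose a paged data source as one flat Iterator: fetch a batch on demand,
// drain it, then fetch the next until a short page signals the end.
public class PagingIterator<T> implements Iterator<T> {

    private final BiFunction<Integer, Integer, List<T>> fetchPage; // (offset, batchSize) -> batch
    private final int batchSize;
    private int offset = 0;
    private boolean done = false;
    private Iterator<T> batchIter = Collections.emptyIterator();

    public PagingIterator(BiFunction<Integer, Integer, List<T>> fetchPage, int batchSize) {
        this.fetchPage = fetchPage;
        this.batchSize = batchSize;
    }

    @Override
    public boolean hasNext() {
        while (!batchIter.hasNext() && !done) {
            List<T> batch = fetchPage.apply(offset, batchSize);
            offset += batch.size();
            done = batch.size() < batchSize; // a short page means no more data
            batchIter = batch.iterator();
        }
        return batchIter.hasNext();
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return batchIter.next();
    }
}
```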



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/filter/util/ShortIdConfig.java:
##########
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hugegraph.loader.filter.util;
+
+import org.apache.hugegraph.loader.exception.LoadException;
+import org.apache.hugegraph.structure.constant.DataType;
+
+import com.beust.jcommander.IStringConverter;
+
+public class ShortIdConfig {
+
+    private String vertexLabel;
+    private String idFieldName;
+    private DataType idFieldType;
+    private String primaryKeyField;
+
+    private long labelID;
+
+    public String getVertexLabel() {
+        return vertexLabel;
+    }
+
+    public String getIdFieldName() {
+        return idFieldName;
+    }
+
+    public DataType getIdFieldType() {
+        return idFieldType;
+    }
+
+    public void setPrimaryKeyField(String primaryKeyField) {
+        this.primaryKeyField = primaryKeyField;
+    }
+
+    public String getPrimaryKeyField() {
+        return primaryKeyField;
+    }
+
+    public long getLabelID() {
+        return labelID;
+    }
+
+    public void setLabelID(long labelID) {
+        this.labelID = labelID;
+    }
+
+    public static class ShortIdConfigConverter implements IStringConverter<ShortIdConfig> {
+
+        @Override
+        public ShortIdConfig convert(String s) {
+            String[] sp = s.split(":");
+            ShortIdConfig config = new ShortIdConfig();
+            config.vertexLabel = sp[0];
+            config.idFieldName = sp[1];
+            String a = DataType.BYTE.name();
+            switch (sp[2]) {
+                case "boolean":
+                    config.idFieldType = DataType.BOOLEAN;
+                    break;
+                case "byte":
+                    config.idFieldType = DataType.BYTE;
+                    break;
+                case "int":
+                    config.idFieldType = DataType.INT;
+                    break;
+                case "long":
+                    config.idFieldType = DataType.LONG;
+                    break;
+                case "float":
+                    config.idFieldType = DataType.FLOAT;
+                    break;
+                case "double":
+                    config.idFieldType = DataType.DOUBLE;
+                    break;
+                case "text":
+                    config.idFieldType = DataType.TEXT;
+                    break;
+                case "blob":
+                    config.idFieldType = DataType.BLOB;
+                    break;
+                case "date":
+                    config.idFieldType = DataType.DATE;
+                    break;
+                case "uuid":
+                    config.idFieldType = DataType.UUID;
+                    break;
+                default:
+                    throw new LoadException("unknow type " + sp[2]);

Review Comment:
   Corrected spelling of 'unknow' to 'unknown'.
   ```suggestion
                       throw new LoadException("unknown type " + sp[2]);
   ```
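As a side note on the converter above: the long `switch` over lower-case type names could also be a table lookup built from the enum itself, which cannot drift out of sync when new types are added. A minimal sketch, with a local stand-in enum since the real `org.apache.hugegraph` `DataType` is not reproduced here:

```java
import java.util.HashMap;
import java.util.Map;

public class TypeNameSketch {

    // Stand-in for the hugegraph DataType values named in the hunk.
    enum DataType { BOOLEAN, BYTE, INT, LONG, FLOAT, DOUBLE, TEXT, BLOB, DATE, UUID }

    // Build the name -> type table once from the enum's own constants.
    private static final Map<String, DataType> TYPES = new HashMap<>();
    static {
        for (DataType t : DataType.values()) {
            TYPES.put(t.name().toLowerCase(), t);
        }
    }

    // Resolve a lower-case type name, rejecting anything not in the table.
    public static DataType parse(String name) {
        DataType type = TYPES.get(name);
        if (type == null) {
            throw new IllegalArgumentException("unknown type " + name);
        }
        return type;
    }
}
```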



##########
hugegraph-loader/src/main/java/org/apache/hugegraph/loader/util/HugeClientHolder.java:
##########
@@ -18,20 +18,54 @@
 package org.apache.hugegraph.loader.util;
 
 import java.nio.file.Paths;
+import java.util.List;
 
+import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang3.StringUtils;
+import org.apache.hugegraph.rest.ClientException;
+import org.apache.hugegraph.util.E;
+import org.apache.hugegraph.util.Log;
+import org.slf4j.Logger;
+
 import org.apache.hugegraph.driver.HugeClient;
 import org.apache.hugegraph.driver.HugeClientBuilder;
+import org.apache.hugegraph.driver.factory.PDHugeClientFactory;
 import org.apache.hugegraph.exception.ServerException;
 import org.apache.hugegraph.loader.constant.Constants;
 import org.apache.hugegraph.loader.exception.LoadException;
 import org.apache.hugegraph.loader.executor.LoadOptions;
-import org.apache.hugegraph.rest.ClientException;
-import org.apache.hugegraph.util.E;
+// import org.apache.hugegraph.loader.fake.FakeHugeClient;
 
 public final class HugeClientHolder {
 
+    public static final Logger LOG = Log.logger(HugeClientHolder.class);
+
     public static HugeClient create(LoadOptions options) {
+        return create(options, true);
+    }
+
+    /**
+     * Create Client client
+     * @param options
+     * @param useDirect indicates whether options.direct parameter is enabled
+     * @return

Review Comment:
   The javadoc comment 'Create Client client' is unclear and should be more 
descriptive, such as 'Creates a HugeClient instance'.
   ```suggestion
     * Creates and returns a HugeClient instance based on the provided options.
     * @param options the configuration options for the HugeClient
     * @param useDirect indicates whether the direct connection option is enabled
     * @return a HugeClient instance
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

