[
https://issues.apache.org/jira/browse/TINKERPOP-2679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17500750#comment-17500750
]
ASF GitHub Bot commented on TINKERPOP-2679:
-------------------------------------------
tkolanko commented on a change in pull request #1539:
URL: https://github.com/apache/tinkerpop/pull/1539#discussion_r818658387
##########
File path:
gremlin-javascript/src/main/javascript/gremlin-javascript/lib/driver/client.js
##########
@@ -73,41 +72,118 @@ class Client {
return this._connection.isOpen;
}
+ /**
+ * Configuration specific to the current request.
+ * @typedef {Object} RequestOptions
+ * @property {String} requestId - User specified request identifier which must be a UUID.
+ * @property {Number} batchSize - The size in which the result of a request is to be "batched" back to the client.
+ * @property {String} userAgent - A custom string that specifies to the server where the request came from.
+ * @property {Number} evaluationTimeout - The timeout for the evaluation of the request.
+ */
+
/**
* Send a request to the Gremlin Server, can send a script or bytecode steps.
* @param {Bytecode|string} message The bytecode or script to send
* @param {Object} [bindings] The script bindings, if any.
- * @param {Object} [requestOptions] Configuration specific to the current request.
- * @param {String} [requestOptions.requestId] User specified request identifier which must be a UUID.
- * @param {Number} [requestOptions.batchSize] The size in which the result of a request is to be "batched" back to the client
- * @param {String} [requestOptions.userAgent] A custom string that specifies to the server where the request came from.
- * @param {Number} [requestOptions.evaluationTimeout] The timeout for the evaluation of the request.
+ * @param {RequestOptions} [requestOptions] Configuration specific to the current request.
* @returns {Promise}
*/
submit(message, bindings, requestOptions) {
- const requestIdOverride = requestOptions && requestOptions.requestId
- if (requestIdOverride) delete requestOptions['requestId'];
+ const requestIdOverride = requestOptions && requestOptions.requestId;
+ if (requestIdOverride) delete requestOptions["requestId"];
+
+ const args = Object.assign(
+ {
+ gremlin: message,
+ aliases: { g: this._options.traversalSource || "g" },
+ },
+ requestOptions
+ );
+
+ if (this._options.session && this._options.processor === "session") {
+ args["session"] = this._options.session;
+ }
+
+ if (message instanceof Bytecode) {
+ if (this._options.session && this._options.processor === "session") {
+ return this._connection.submit(
+ "session",
+ "bytecode",
+ args,
+ requestIdOverride
+ );
+ } else {
+ return this._connection.submit(
+ "traversal",
+ "bytecode",
+ args,
+ requestIdOverride
+ );
+ }
+ } else if (typeof message === "string") {
+ args["bindings"] = bindings;
+ args["language"] = "gremlin-groovy";
+ args["accept"] = this._connection.mimeType;
+ return this._connection.submit(
+ this._options.processor || "",
+ "eval",
+ args,
+ requestIdOverride
+ );
+ } else {
+ throw new TypeError("message must be of type Bytecode or string");
+ }
+ }
+
+ /**
+ * Send a request to the Gremlin Server and receive a stream for the results, can send a script or bytecode steps.
+ * @param {Bytecode|string} message The bytecode or script to send
+ * @param {Object} [bindings] The script bindings, if any.
+ * @param {RequestOptions} [requestOptions] Configuration specific to the current request.
+ * @returns {ReadableStream}
+ */
+ stream(message, bindings, requestOptions) {
+ const requestIdOverride = requestOptions && requestOptions.requestId;
+ if (requestIdOverride) delete requestOptions["requestId"];
- const args = Object.assign({
- gremlin: message,
- aliases: { 'g': this._options.traversalSource || 'g' }
- }, requestOptions)
+ const args = Object.assign(
+ {
+ gremlin: message,
+ aliases: { g: this._options.traversalSource || "g" },
+ },
+ requestOptions
+ );
- if (this._options.session && this._options.processor === 'session') {
- args['session'] = this._options.session;
+ if (this._options.session && this._options.processor === "session") {
+ args["session"] = this._options.session;
}
if (message instanceof Bytecode) {
- if (this._options.session && this._options.processor === 'session') {
- return this._connection.submit('session', 'bytecode', args, requestIdOverride);
+ if (this._options.session && this._options.processor === "session") {
+ return this._connection.stream(
+ "session",
+ "bytecode",
+ args,
+ requestIdOverride
+ );
} else {
- return this._connection.submit('traversal', 'bytecode', args, requestIdOverride);
+ return this._connection.stream(
+ "traversal",
+ "bytecode",
+ args,
+ requestIdOverride
+ );
}
- } else if (typeof message === 'string') {
- args['bindings'] = bindings;
- args['language'] = 'gremlin-groovy';
- args['accept'] = this._connection.mimeType;
- return this._connection.submit(this._options.processor || '','eval', args, requestIdOverride);
+ } else if (typeof message === "string") {
+ args["bindings"] = bindings;
+ args["language"] = "gremlin-groovy";
+ args["accept"] = this._connection.mimeType;
+ return this._connection.stream(
+ this._options.processor || "",
+ "eval",
+ args,
+ requestIdOverride
+ );
Review comment:
That's what I originally did in
https://github.com/apache/tinkerpop/pull/1539/commits/fe1e468945ea3a36c53c2b452627bfffd20e0239
but it ended up breaking all the client tests that overwrote the internal
connection object, and I couldn't figure out why. I'll take another look to see
if I can get the tests to pass with that approach.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
> Update JavaScript driver to support processing messages as a stream
> -------------------------------------------------------------------
>
> Key: TINKERPOP-2679
> URL: https://issues.apache.org/jira/browse/TINKERPOP-2679
> Project: TinkerPop
> Issue Type: Improvement
> Components: javascript
> Affects Versions: 3.5.1
> Reporter: Tom Kolanko
> Priority: Minor
>
> The JavaScript driver's
> [_handleMessage|https://github.com/apache/tinkerpop/blob/d4bd5cc5a228fc22442101ccb6a9751653900d32/gremlin-javascript/src/main/javascript/gremlin-javascript/lib/driver/connection.js#L249]
> receives messages from the gremlin server and stores each message in an
> object associated with the handler for the specific request. Currently, the
> driver waits until all the data is available from the gremlin server before
> allowing further processing of it.
> However, this can lead to cases where a lot of memory is required to hold
> onto the results before any processing can take place. If we had the ability
> to process results as they come in from the gremlin server, we could reduce
> memory usage in some cases.
> If you are open to it I would like to submit a PR where {{submit}} can take
> an optional callback which is run on each set of data returned from the
> gremlin server, rather than waiting for the entire result set.
> The following examples assume that you have 100 vertices in your graph.
> current behaviour:
> {code:javascript}
> const result = await client.submit("g.V()")
> console.log(result.toArray().length) // 100 - all the vertices in your graph
> {code}
> proposed addition
> {code:javascript}
> await client.submit("g.V()", {}, { batchSize: 25 }, (data) => {
>   console.log(data.toArray().length) // 25 - this callback will be called 4 times (100 / 25 = 4)
> })
> {code}
> If the optional callback is not provided, then the default behaviour is
> unchanged.
> I have the changes running locally and the overall performance is unchanged;
> queries run in about the same time as they used to. However, for some specific
> queries memory usage has dropped considerably.
> With the process-on-message strategy, memory usage is related to how large
> the {{batchSize}} is rather than to the size of the final result set. Using
> the default of 64 and testing some specific cases we have, I can get memory
> usage to go from 1.2 GB to 10 MB.
--
This message was sent by Atlassian Jira
(v8.20.1#820001)