[jira] [Commented] (TINKERPOP-2219) Upgrade Netty version

2019-05-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/TINKERPOP-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840922#comment-16840922
 ] 

ASF GitHub Bot commented on TINKERPOP-2219:
---

divijvaidya commented on pull request #1116: TINKERPOP-2219 Upgrade Netty 
dependency to 4.1.32
URL: https://github.com/apache/tinkerpop/pull/1116
 
 
   https://issues.apache.org/jira/browse/TINKERPOP-2219
   
   **Testing**
   
   gremlin-driver: mvn clean install -DskipIntegrationTests=false
   gremlin-server: mvn clean install -DskipIntegrationTests=false
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Upgrade Netty version
> -
>
> Key: TINKERPOP-2219
> URL: https://issues.apache.org/jira/browse/TINKERPOP-2219
> Project: TinkerPop
>  Issue Type: Improvement
>  Components: driver, server
>Affects Versions: 3.3.6, 3.4.1
>Reporter: Divij Vaidya
>Priority: Minor
> Fix For: 3.3.7, 3.4.2
>
>
> Please upgrade the Netty version for TinkerPop. We are currently using a 
> year-old version, 4.1.25-final.
> The new versions contain numerous bug fixes and improvements. 
> My recommendation is to move to a version that is at least six months old 
> (since newer versions might be unstable and have bugs): 4.1.32-final.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (TINKERPOP-2220) Dedup inside Repeat Produces 0 results

2019-05-15 Thread Rahul Chander (JIRA)


 [ 
https://issues.apache.org/jira/browse/TINKERPOP-2220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Chander updated TINKERPOP-2220:
-
Affects Version/s: 3.3.0 (was: 3.4.1)

> Dedup inside Repeat Produces 0 results
> --
>
> Key: TINKERPOP-2220
> URL: https://issues.apache.org/jira/browse/TINKERPOP-2220
> Project: TinkerPop
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Rahul Chander
>Priority: Major
>
> Testing against the TinkerPop Modern graph dataset, I ran this query:
> {code:java}
> g.V().repeat(__.dedup()).times(2).count()
> {code}
> which should essentially be equivalent to running dedup() twice. It produced 
> 0 results, while two consecutive dedup() steps produced the correct count of 6.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TINKERPOP-2220) Dedup inside Repeat Produces 0 results

2019-05-15 Thread Rahul Chander (JIRA)
Rahul Chander created TINKERPOP-2220:


 Summary: Dedup inside Repeat Produces 0 results
 Key: TINKERPOP-2220
 URL: https://issues.apache.org/jira/browse/TINKERPOP-2220
 Project: TinkerPop
  Issue Type: Bug
Affects Versions: 3.4.1
Reporter: Rahul Chander


Testing against the TinkerPop Modern graph dataset, I ran this query:
{code:java}
g.V().repeat(__.dedup()).times(2).count()
{code}
which should essentially be equivalent to running dedup() twice. It produced 0 
results, while two consecutive dedup() steps produced the correct count of 6.
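
For reference, the two traversals being compared would look roughly like this 
against the Modern graph (an illustrative sketch, not part of the original 
report):
{code:java}
// two consecutive dedup() steps: returns the expected 6 on the Modern graph
g.V().dedup().dedup().count()

// the reported traversal, expected to be equivalent but returning 0
g.V().repeat(__.dedup()).times(2).count()
{code}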

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (TINKERPOP-2219) Upgrade Netty version

2019-05-15 Thread Divij Vaidya (JIRA)
Divij Vaidya created TINKERPOP-2219:
---

 Summary: Upgrade Netty version
 Key: TINKERPOP-2219
 URL: https://issues.apache.org/jira/browse/TINKERPOP-2219
 Project: TinkerPop
  Issue Type: Improvement
  Components: driver, server
Affects Versions: 3.4.1, 3.3.6
Reporter: Divij Vaidya
 Fix For: 3.3.7, 3.4.2


Please upgrade the Netty version for TinkerPop. We are currently using a 
year-old version, 4.1.25-final.

The new versions contain numerous bug fixes and improvements. 

My recommendation is to move to a version that is at least six months old 
(since newer versions might be unstable and have bugs): 4.1.32-final.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: N-Tuple Transactions?

2019-05-15 Thread Joshua Shinavier
Tough question, since I have not used Akka or the actor model, but here are
some first thoughts. From what I am reading, the trick would be to
implement the transaction log as a CRDT.
Operation-based CRDTs -- which propagate individual mutations as opposed to
local state -- appear to be preferable if mutations are commutative. So are
they commutative? In the "imperative" scenario I described to Stephen, no.
In the "functional" scenario, yes, they have to be. Suppose you insert a
vertex and also delete that vertex. The eventually consistent result of the
transaction must be a no-op; if the vertex already exists, leave it alone.
If it does not exist, do not create it. However, it does not matter in what
order you perform the insert and delete -- once all operations are
accounted for, you arrive at the correct state.
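
To make the commutativity concrete, here is a minimal, hypothetical sketch in 
Java of a per-transaction log in which an insert and a delete of the same 
vertex cancel out regardless of the order they arrive in (names and types are 
illustrative, not TinkerPop API):

import java.util.HashSet;
import java.util.Set;

// Illustrative only: a log where insert/delete of the same vertex id commute.
public class CommutativeVertexLog {
    private final Set<Object> inserts = new HashSet<>();
    private final Set<Object> deletes = new HashSet<>();

    public void insert(Object vertexId) { inserts.add(vertexId); }
    public void delete(Object vertexId) { deletes.add(vertexId); }

    // Net effect on a vertex id once all operations are accounted for:
    // +1 = create it, -1 = delete it, 0 = leave the graph as it was.
    // An id that was both inserted and deleted nets to 0, whichever
    // operation arrived first.
    public int netEffect(Object vertexId) {
        boolean ins = inserts.contains(vertexId);
        boolean del = deletes.contains(vertexId);
        if (ins && del) return 0;
        return ins ? 1 : (del ? -1 : 0);
    }
}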

Just from what I glean from Wikipedia, there appear to be a handful of
well-known strategies for operation-based and state-based CRDTs. I do not
know how hard it would be to support multiple strategies in the same VM,
but in the Akka world, that seems to be the way in which you would choose
your operational semantics.

Josh




On Wed, May 15, 2019 at 8:00 AM Marko Rodriguez 
wrote:

> Wow. I totally understood what you wrote.
>
> Question: What is the TransactionLog in a distributed environment?
> e.g. Akka-driven traversers spawned from the same
> query migrating around the cluster mutating stuff.
>
> Thanks for the lesson,
> Marko.
>
> http://rredux.com 
>
>
>
>
> > On May 15, 2019, at 8:58 AM, Joshua Shinavier  wrote:
> >
> > Hi Stephen,
> >
> > More the latter. TinkerPop transactions would be layered on top of the
> > native transactions of the database (if any), which gives the VM more
> > control over the operational semantics of a computation in between
> database
> > commits. For example, in many scenarios it would be desirable not to
> mutate
> > the graph at all until a traversal has completed, so that the result does
> > not depend on the order of evaluation. Consider a traversal which adds or
> > deletes elements as it goes. In some cases, you want writes and reads to
> > build on each other, so that what you wrote in one step is accessible for
> > reading in the next step. This is a very imperative style of traversal
> for
> > which you need to understand how the VM builds a query plan in order to
> > predict the result. In many other cases, you might prefer a more
> functional
> > approach, for which you can forget about the query plan. Without VM-level
> > transactions, you don't have this choice; you are at the mercy of the
> > underlying database. The extra level of control will be useful for
> > concurrency and parallelism, as well -- without it, the same programs may
> > have different results when executed on different databases.
> >
> > Josh
> >
> >
> >
> >
> > On Wed, May 15, 2019 at 6:47 AM Stephen Mallette 
> > wrote:
> >
> >> Hi Josh, interesting... we have graphs with everything from no
> transactions
> >> like TinkerGraph to more acid transactional systems and everything in
> >> between - will transaction support as you describe it cover all the
> >> different transactional semantics of the underlying graphs which we
> might
> >> encounter? or is this an approach that helps unify those different
> >> transactional semantics under TinkerPop's definition of a transaction?
> >>
> >> On Wed, May 15, 2019 at 9:23 AM Joshua Shinavier 
> >> wrote:
> >> [...]
>
>


Re: N-Tuple Transactions?

2019-05-15 Thread Marko Rodriguez
Wow. I totally understood what you wrote.

Question: What is the TransactionLog in a distributed environment?
e.g. Akka-driven traversers spawned from the same query 
migrating around the cluster mutating stuff.

Thanks for the lesson,
Marko.

http://rredux.com 




> On May 15, 2019, at 8:58 AM, Joshua Shinavier  wrote:
> 
> Hi Stephen,
> 
> More the latter. TinkerPop transactions would be layered on top of the
> native transactions of the database (if any), which gives the VM more
> control over the operational semantics of a computation in between database
> commits. For example, in many scenarios it would be desirable not to mutate
> the graph at all until a traversal has completed, so that the result does
> not depend on the order of evaluation. Consider a traversal which adds or
> deletes elements as it goes. In some cases, you want writes and reads to
> build on each other, so that what you wrote in one step is accessible for
> reading in the next step. This is a very imperative style of traversal for
> which you need to understand how the VM builds a query plan in order to
> predict the result. In many other cases, you might prefer a more functional
> approach, for which you can forget about the query plan. Without VM-level
> transactions, you don't have this choice; you are at the mercy of the
> underlying database. The extra level of control will be useful for
> concurrency and parallelism, as well -- without it, the same programs may
> have different results when executed on different databases.
> 
> Josh
> 
> 
> 
> 
> On Wed, May 15, 2019 at 6:47 AM Stephen Mallette 
> wrote:
> 
>> Hi Josh, interesting... we have graphs with everything from no transactions
>> like TinkerGraph to more acid transactional systems and everything in
>> between - will transaction support as you describe it cover all the
>> different transactional semantics of the underlying graphs which we might
>> encounter? or is this an approach that helps unify those different
>> transactional semantics under TinkerPop's definition of a transaction?
>> 
>> On Wed, May 15, 2019 at 9:23 AM Joshua Shinavier 
>> wrote:
>> [...]



Re: N-Tuple Transactions?

2019-05-15 Thread Joshua Shinavier
Hi Stephen,

More the latter. TinkerPop transactions would be layered on top of the
native transactions of the database (if any), which gives the VM more
control over the operational semantics of a computation in between database
commits. For example, in many scenarios it would be desirable not to mutate
the graph at all until a traversal has completed, so that the result does
not depend on the order of evaluation. Consider a traversal which adds or
deletes elements as it goes. In some cases, you want writes and reads to
build on each other, so that what you wrote in one step is accessible for
reading in the next step. This is a very imperative style of traversal for
which you need to understand how the VM builds a query plan in order to
predict the result. In many other cases, you might prefer a more functional
approach, for which you can forget about the query plan. Without VM-level
transactions, you don't have this choice; you are at the mercy of the
underlying database. The extra level of control will be useful for
concurrency and parallelism, as well -- without it, the same programs may
have different results when executed on different databases.
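
As a concrete (hypothetical) illustration of the two readings, using 
TinkerGraph only as a stand-in, consider a traversal that writes and then 
reads within the same traversal; which count you get is exactly what VM-level 
transaction semantics would pin down:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

public class MidTraversalWriteExample {
    public static void main(String[] args) {
        TinkerGraph graph = TinkerGraph.open();
        GraphTraversalSource g = graph.traversal();

        // Imperative reading: the addV() write is visible to the V() read in
        // the same traversal, so count is 1. Functional reading: no mutation
        // is applied until the traversal completes, so the read sees only the
        // initial (empty) graph and count is 0.
        long count = g.addV("person").property("name", "josh")
                      .V().has("name", "josh")
                      .count().next();
        System.out.println(count);
    }
}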

Josh




On Wed, May 15, 2019 at 6:47 AM Stephen Mallette 
wrote:

> Hi Josh, interesting... we have graphs with everything from no transactions
> like TinkerGraph to more acid transactional systems and everything in
> between - will transaction support as you describe it cover all the
> different transactional semantics of the underlying graphs which we might
> encounter? or is this an approach that helps unify those different
> transactional semantics under TinkerPop's definition of a transaction?
>
> On Wed, May 15, 2019 at 9:23 AM Joshua Shinavier 
> wrote:
> [...]


Re: N-Tuple Transactions?

2019-05-15 Thread Stephen Mallette
Hi Josh, interesting... we have graphs with everything from no transactions
like TinkerGraph to more acid transactional systems and everything in
between - will transaction support as you describe it cover all the
different transactional semantics of the underlying graphs which we might
encounter? or is this an approach that helps unify those different
transactional semantics under TinkerPop's definition of a transaction?

On Wed, May 15, 2019 at 9:23 AM Joshua Shinavier  wrote:

> Hi Marko,
>
> Get ready for monads. I mentioned them in my post on algebraic property 
> graphs. In functional programming, monads are a typical way of composing 
> chains of stateful operations together in such a way that they
> do not violate functional purity. For example, an operation which adds a
> vertex to a graph can be thought of as a function f : Graph -> Graph that
> takes a graph as its input, adds a vertex, and returns the resulting graph
> as its output. The function f doesn't actually mutate the graph on disk,
> but it gives you an in-memory representation of the mutated graph, which
> can then be persisted to disk. Some things you need in order to make this
> work:
>
> 1) a snapshot of the state of the graph / database as it existed when the
> transaction was started
> 2) a transaction log, within the TinkerPop VM, containing all atomic
> changes that were made to the graph since the transaction was started
> 3) a view of the graph overlaid with the contents of the transaction log
> 4) the ability to persist the transaction log to the database
>
> Items (1) and (4) are pretty trivial if the underlying database itself
> supports transactions. Item (2) is easy if we use a basic state monad. More
> on that below. Item (3) requires some insight into how graphs and other
> data structures are represented in TinkerPop4, and this is where the
> interaction between the basic data model and the VM comes in. In terms of
> what I called the APG data model, there are three basic changes of state:
>
> 1) add an element of a given type. E.g. the edge with label knows and id 42
> didn't exist before, and now it does.
> 2) remove an element of a given type. E.g. the edge with label knows and id
> 42 existed before, but now it doesn't.
> 3) mutate an existing element of a given type. E.g. the element with label
> knows and id 42 used to have Person vertex 1 as its out-element and Person
> vertex 4 as its in-element, but now it has Person vertex 6 as its
> in-element.
>
> In other words, we support *create*, *update*, and *delete* operations for
> typed elements. *Read* operations do not require appending to the
> transaction log. Now, given that we have mutated the graph in our
> transaction, but the graph on disk has not changed, how do we deliver a
> consistent view of the mutated graph to subsequent read operations in the
> same transaction? If we think of the graph as a set of relations (tables,
> indexes), then we just need to wrap each read operation, from each table,
> in such a way that the read operation respects the transaction log.
>
> For example, if we have a relation like V() that represents all vertices in
> the graph, and we have added a vertex, then the iterator for V() should be
> the raw V() iterator for the unmodified graph -- filtered to exclude all
> *delete* elements in the transaction log which are vertices -- concatenated
> with a filtered iterator over all *create* elements which are vertices.
> Once you have committed your transaction, the transaction log is empty, so
> these wrapped iterators provide exactly the same elements as the raw
> iterators.
>
> How do you build a transaction log within a traversal? With a state monad.
> A state monad will allow you to execute any basic VM instruction and carry
> the transaction log along with the computation. Most instructions are a
> no-op with respect to state, but those few instructions which do affect
> state must append to the transaction log. For example, a V() operation
> doesn't just give you an iterator over vertices; it gives you a pair of
> objects: the iterator over vertices, and also the transaction log. A
> create-vertex operation also gives you a pair of objects: the newly-created
> vertex, and also the state, in which we have appended a *create* element to
> the transaction log.
>
> In terms of language support, Java 8+ supports some things
>  that happen to be
> monads, where the flatMap method is equivalent to the monadic bind
> operator. Of course, you can also implement your own monads in Java. Scala
> does not have a built-in monad concept or syntactic sugar either, although
> it does have better support for higher-kinded types in general. In any
> case, we only really 

Re: N-Tuple Transactions?

2019-05-15 Thread Joshua Shinavier
Hi Marko,

Get ready for monads. I mentioned them in my post on algebraic property 
graphs. In functional programming, monads are a typical way of composing 
chains of stateful operations together in such a way that they do not violate 
functional purity. For example, an operation which adds a
vertex to a graph can be thought of as a function f : Graph -> Graph that
takes a graph as its input, adds a vertex, and returns the resulting graph
as its output. The function f doesn't actually mutate the graph on disk,
but it gives you an in-memory representation of the mutated graph, which
can then be persisted to disk. Some things you need in order to make this
work:

1) a snapshot of the state of the graph / database as it existed when the
transaction was started
2) a transaction log, within the TinkerPop VM, containing all atomic
changes that were made to the graph since the transaction was started
3) a view of the graph overlaid with the contents of the transaction log
4) the ability to persist the transaction log to the database

Items (1) and (4) are pretty trivial if the underlying database itself
supports transactions. Item (2) is easy if we use a basic state monad. More
on that below. Item (3) requires some insight into how graphs and other
data structures are represented in TinkerPop4, and this is where the
interaction between the basic data model and the VM comes in. In terms of
what I called the APG data model, there are three basic changes of state:

1) add an element of a given type. E.g. the edge with label knows and id 42
didn't exist before, and now it does.
2) remove an element of a given type. E.g. the edge with label knows and id
42 existed before, but now it doesn't.
3) mutate an existing element of a given type. E.g. the element with label
knows and id 42 used to have Person vertex 1 as its out-element and Person
vertex 4 as its in-element, but now it has Person vertex 6 as its
in-element.

In other words, we support *create*, *update*, and *delete* operations for
typed elements. *Read* operations do not require appending to the
transaction log. Now, given that we have mutated the graph in our
transaction, but the graph on disk has not changed, how do we deliver a
consistent view of the mutated graph to subsequent read operations in the
same transaction? If we think of the graph as a set of relations (tables,
indexes), then we just need to wrap each read operation, from each table,
in such a way that the read operation respects the transaction log.

For example, if we have a relation like V() that represents all vertices in
the graph, and we have added a vertex, then the iterator for V() should be
the raw V() iterator for the unmodified graph -- filtered to exclude all
*delete* elements in the transaction log which are vertices -- concatenated
with a filtered iterator over all *create* elements which are vertices.
Once you have committed your transaction, the transaction log is empty, so
these wrapped iterators provide exactly the same elements as the raw
iterators.
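
A rough sketch of such a wrapped V() read, using java.util.stream (the names 
here are made up for illustration, not proposed API):

import java.util.Set;
import java.util.stream.Stream;

public final class TxOverlay {
    // rawVertices: the unmodified graph's V() iterator.
    // createdVertices / deletedVertices: vertex entries in the transaction log.
    public static <V> Stream<V> overlaidV(Stream<V> rawVertices,
                                          Set<V> createdVertices,
                                          Set<V> deletedVertices) {
        return Stream.concat(
                rawVertices.filter(v -> !deletedVertices.contains(v)), // hide deleted vertices
                createdVertices.stream());                             // surface created vertices
    }
}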

How do you build a transaction log within a traversal? With a state monad.
A state monad will allow you to execute any basic VM instruction and carry
the transaction log along with the computation. Most instructions are a
no-op with respect to state, but those few instructions which do affect
state must append to the transaction log. For example, a V() operation
doesn't just give you an iterator over vertices; it gives you a pair of
objects: the iterator over vertices, and also the transaction log. A
create-vertex operation also gives you a pair of objects: the newly-created
vertex, and also the state, in which we have appended a *create* element to
the transaction log.

In terms of language support, Java 8+ supports some things
 that happen to be
monads, where the flatMap method is equivalent to the monadic bind
operator. Of course, you can also implement your own monads in Java. Scala
does not have a built-in monad concept or syntactic sugar either, although
it does have better support for higher-kinded types in general. In any
case, we only really need to implement one monad for the sake of
transactions: call it State or Transaction. In plain old Java, this would
look something like the following (ignoring applicative functors, which are
awkward in Java):

public class TransactionLog {
    private Iterable created;
    private Iterable updated;
    private Iterable deleted;

    public TransactionLog append(TransactionLog other) {
        ...
    }
}

public class State<A> {
    private TransactionLog log;
    private A object;

    public State(A object, TransactionLog log) {
        this.object = object;
        this.log = log;
    }

    public TransactionLog getLog() {
 
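One hypothetical way the sketch above might continue, with flatMap playing the 
role of the monadic bind (the TransactionLog stub below stands in for the 
class sketched earlier):

import java.util.function.Function;

// Stand-in for the TransactionLog sketched above (merge details elided).
class TransactionLog {
    TransactionLog append(TransactionLog other) { return this; }
}

class State<A> {
    private final TransactionLog log;
    private final A object;

    State(A object, TransactionLog log) {
        this.object = object;
        this.log = log;
    }

    TransactionLog getLog() { return log; }
    A getObject() { return object; }

    // bind: run the next stateful operation on the wrapped value and carry
    // the accumulated transaction log along with the computation.
    <B> State<B> flatMap(Function<A, State<B>> f) {
        State<B> next = f.apply(object);
        return new State<>(next.object, log.append(next.log));
    }
}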

A Novel "Bytecode" Optimization Mechanism for TP4

2019-05-15 Thread Marko Rodriguez
Hi,

Thinking last night, I came up with another way of doing bytecode optimization 
in TP4 that has some interesting properties.

1. Providers don't write custom strategies.
2. Providers don’t write custom instructions.
3. TP4 can be ignorant of large swaths of optimization techniques.
==> Instead, providers define custom Sequences.
- In Java, Sequence.iterator() => Iterator
- In mm-ADT, a sequence tuple.

———

Assume the following {graph} tuple.

{ type:graph V:<V> }

g.V()
=compiles to=>
[V]
=evaluates to=>
<V>

When an instruction is applied to a tuple, it first sees if that tuple has a 
respective "instruction-key". If so, the value of that key is returned. Thus, 
[V] => <V>.

<V> is a "sequence" (an iterator) of vertex tuples. It is a reference/pointer 
to a bunch of vertices.

< type:V, parent:{graph}, bytecode:[[V]], hasId:<V.hasId> >

If all you did was g.V(), then <V> would be dereferenced (iterated), yielding 
all the vertex tuples of the graph. Note that the {graph} tuple was the one 
who said there was a V-key that returned <V>. In other words, the graph 
provider knows what a <V> sequence is, as it created it! Thus, the provider 
knows how to generate an iterator of vertex tuples when <V>.iterator() is 
called.
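
A rough Java sketch of that lookup (hypothetical interfaces, not TP4 API): a 
sequence is a lazy reference to tuples, and applying an instruction first asks 
for a matching instruction-key before anything is dereferenced.

import java.util.Optional;

interface Sequence<T> extends Iterable<T> {
    // e.g. the graph's V-sequence answers ("hasId", 1) with a narrower
    // sequence that will later dereference via an id index; an empty result
    // means the processor should dereference this sequence (iterate it) and
    // evaluate the instruction with its standard stream function instead.
    Optional<Sequence<T>> forInstruction(String instructionKey, Object... args);
}

// Chaining g.V().hasId(1): evaluation stays delayed as long as each sequence
// answers the next instruction-key; only a missing key or the end of the
// bytecode forces the provider to produce actual tuples via iterator().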

Moving on….

// g.V().hasId(1) 
[V][hasId,1] 
  ==> <V.hasId>

Note above that the provider's created <V> sequence has a hasId-key that 
returns a <V.hasId> sequence. Again, like <V>, <V.hasId> is a 
reference/pointer to a bunch of vertex tuples.

< type:V.hasId, parent:<V>, bytecode:[[V][hasId,1]] >

If all you did was g.V().hasId(1), then <V.hasId> would be dereferenced 
(iterated), yielding v[1]. Note that <V.hasId> was created by <V>, which was 
created by {graph}. Thus, the graph provider indirectly created <V.hasId>, and 
thus <V.hasId> knows how to dereference/iterate itself with respect to the 
{graph} (follow the parent-key chain back to {graph}). Assume that for this 
graph provider, a <V.hasId> dereference/iteration performs an index lookup by 
id.

Notice how we haven't done anything with bytecode strategies. g.V().hasId(1) was 
able to trigger an index lookup. No custom strategy. No custom instructions. 
Why? Because the graph provider delays dereferencing these sequences and thus, 
delays manifesting vertex objects! When it finally has to manifest vertex 
objects, it has an instruction-provenance chain that allows it to be smart 
about how to get the data — i.e. an index lookup is possible.

A dereference doesn’t just happen when the end of the bytecode is reached. No, 
it also happens when a sequence doesn’t have a respective instruction-key. 
Watch...

// g.V().hasId(1).has(‘name’,’marko’)
[V][hasId,1][has,name,marko] 
  ==> { type:vertex, id:1, name:marko, outE:<outE> }

The <V.hasId> sequence from the previous step does not have a has-key. Thus, 
the sequence chain can no longer delay evaluation. <V.hasId> is dereferenced, 
the index lookup occurs, and v[1] is flatmapped into the processor stream. The 
has(name,marko) instruction is evaluated on v[1]. The v[1] tuple doesn't have 
a has-key, so the HasFunction does its standard evaluation on a vertex (no 
delayed evaluation, as we are back to standard TP-stream processing).

Moving on...

// g.V().hasId(1).has(‘name’,’marko’).outE()
[V][hasId,1][has,name,marko][outE]
  ==> <outE>

When the v[1] vertex tuple is flatmapped into the processor stream from 
<V.hasId>, HasFunction lets it live, and then the [outE] instruction is 
called. The v[1] vertex tuple has an outE-key. Thus, instead of 
OutEdgesFunction evaluating on v[1], v[1] provides an <outE> sequence object 
to the processor stream.

< type:outE, parent:{vertex id:1}, bytecode:[[outE]], hasLabel:<outE.hasLabel> >

If there are no more instructions, <outE> is dereferenced. Since v[1] created 
the <outE> sequence, it must have the logic to create an iterator() of 
outgoing incident edge tuples.

Moving on...

// g.V().hasId(1).has(‘name’,’marko’).outE(‘knows’)
[V][hasId,1][has,name,marko][outE][hasLabel,knows]
  ==> <outE.hasLabel>

< type:outE.hasLabel, parent:<outE>, bytecode:[[outE][hasLabel,knows]], 
inV:<outE.hasLabel.inV> >

Do you see where this is going?

// g.V().hasId(1).has(‘name’,’marko’).out(‘knows’)
[V][hasId,1][has,name,marko][outE][hasLabel,knows][inV]
  ==> <outE.hasLabel.inV>

< type:outE.hasLabel.inV, parent:<outE.hasLabel>, 
bytecode:[[outE][hasLabel,knows][inV]] >

When the <outE.hasLabel.inV> sequence is dereferenced, v[1] will know how to 
get all its knows-adjacent vertices. Guess what cool things just happened? 
1. We didn't materialize any incident edges.
2. We used a vertex-centric index to directly grab the v[1] knows-edges 
off disk. 

——

Here is why this direction may prove fruitful for TP4:

1. Optimizations don’t have to be determined at compile time via strategies. 
* Instead they can be determined at runtime via this “delayed 
evaluation” mechanism.
2. Optimizations don’t have to be global to a type; they can also be 
local to an instance.
* v[1] could have an out(‘knows’)-index, but v[4] might not!
* This realization happens at runtime, not at compile time.
* Now think about 

[jira] [Closed] (TINKERPOP-2211) Provide API to add per request option for a bytecode

2019-05-15 Thread stephen mallette (JIRA)


 [ 
https://issues.apache.org/jira/browse/TINKERPOP-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stephen mallette closed TINKERPOP-2211.
---
Resolution: Done
  Assignee: stephen mallette

> Provide API to add per request option for a bytecode
> 
>
> Key: TINKERPOP-2211
> URL: https://issues.apache.org/jira/browse/TINKERPOP-2211
> Project: TinkerPop
>  Issue Type: Improvement
>  Components: driver
>Affects Versions: 3.3.6, 3.4.1
>Reporter: Divij Vaidya
>Assignee: stephen mallette
>Priority: Minor
> Fix For: 3.3.7, 3.4.2
>
>
> Client does not provide an API to add per-request options to a request 
> submission using bytecode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (TINKERPOP-2211) Provide API to add per request option for a bytecode

2019-05-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/TINKERPOP-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840293#comment-16840293
 ] 

ASF GitHub Bot commented on TINKERPOP-2211:
---

spmallette commented on pull request #1110: TINKERPOP-2211 Add API which allows 
per-request option for bytecode
URL: https://github.com/apache/tinkerpop/pull/1110
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Provide API to add per request option for a bytecode
> 
>
> Key: TINKERPOP-2211
> URL: https://issues.apache.org/jira/browse/TINKERPOP-2211
> Project: TinkerPop
>  Issue Type: Improvement
>  Components: driver
>Affects Versions: 3.3.6, 3.4.1
>Reporter: Divij Vaidya
>Priority: Minor
> Fix For: 3.3.7, 3.4.2
>
>
> Client does not provide an API to add per-request options to a request 
> submission using bytecode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)