[ 
https://issues.apache.org/jira/browse/FLINK-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16576269#comment-16576269
 ] 

ASF GitHub Bot commented on FLINK-9642:
---------------------------------------

dawidwys commented on a change in pull request #6205: [FLINK-9642] Reduce the count to deal with state during a CEP process
URL: https://github.com/apache/flink/pull/6205#discussion_r209251987
 
 

 ##########
 File path: flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/SharedBuffer.java
 ##########
 @@ -204,227 +133,78 @@ public NodeId put(
         * @throws Exception Thrown if the system cannot access the state.
         */
        public boolean isEmpty() throws Exception {
-               return Iterables.isEmpty(eventsBuffer.keys());
+               return Iterables.isEmpty(eventsBuffer.keys()) && Iterables.isEmpty(eventsBufferCache.keySet());
        }
 
        /**
-        * Returns all elements from the previous relation starting at the given entry.
-        *
-        * @param nodeId  id of the starting entry
-        * @param version Version of the previous relation which shall be extracted
-        * @return Collection of previous relations starting with the given value
-        * @throws Exception Thrown if the system cannot access the state.
+        * Puts an event into the cache.
+        * @param eventId id of the event
+        * @param event event body
         */
-       public List<Map<String, List<EventId>>> extractPatterns(
-                       final NodeId nodeId,
-                       final DeweyNumber version) throws Exception {
-
-               List<Map<String, List<EventId>>> result = new ArrayList<>();
-
-               // stack to remember the current extraction states
-               Stack<ExtractionState> extractionStates = new Stack<>();
-
-               // get the starting shared buffer entry for the previous relation
-               Lockable<SharedBufferNode> entryLock = entries.get(nodeId);
-
-               if (entryLock != null) {
-                       SharedBufferNode entry = entryLock.getElement();
-                       extractionStates.add(new ExtractionState(Tuple2.of(nodeId, entry), version, new Stack<>()));
-
-                       // use a depth first search to reconstruct the previous relations
-                       while (!extractionStates.isEmpty()) {
-                               final ExtractionState extractionState = extractionStates.pop();
-                               // current path of the depth first search
-                               final Stack<Tuple2<NodeId, SharedBufferNode>> currentPath = extractionState.getPath();
-                               final Tuple2<NodeId, SharedBufferNode> currentEntry = extractionState.getEntry();
-
-                               // termination criterion
-                               if (currentEntry == null) {
-                                       final Map<String, List<EventId>> completePath = new LinkedHashMap<>();
-
-                                       while (!currentPath.isEmpty()) {
-                                               final NodeId currentPathEntry = currentPath.pop().f0;
-
-                                               String page = currentPathEntry.getPageName();
-                                               List<EventId> values = completePath
-                                                       .computeIfAbsent(page, k -> new ArrayList<>());
-                                               values.add(currentPathEntry.getEventId());
-                                       }
-                                       result.add(completePath);
-                               } else {
-
-                                       // append state to the path
-                                       currentPath.push(currentEntry);
-
-                                       boolean firstMatch = true;
-                                       for (SharedBufferEdge edge : currentEntry.f1.getEdges()) {
-                                               // we can only proceed if the current version is compatible to the version
-                                               // of this previous relation
-                                               final DeweyNumber currentVersion = extractionState.getVersion();
-                                               if (currentVersion.isCompatibleWith(edge.getDeweyNumber())) {
-                                                       final NodeId target = edge.getTarget();
-                                                       Stack<Tuple2<NodeId, SharedBufferNode>> newPath;
-
-                                                       if (firstMatch) {
-                                                               // for the first match we don't have to copy the current path
-                                                               newPath = currentPath;
-                                                               firstMatch = false;
-                                                       } else {
-                                                               newPath = new Stack<>();
-                                                               newPath.addAll(currentPath);
-                                                       }
-
-                                                       extractionStates.push(new ExtractionState(
-                                                               target != null ? Tuple2.of(target, entries.get(target).getElement()) : null,
-                                                               edge.getDeweyNumber(),
-                                                               newPath));
-                                               }
-                                       }
-                               }
-
-                       }
-               }
-               return result;
+       void cacheEvent(EventId eventId, Lockable<V> event) {
+               this.eventsBufferCache.put(eventId, event);
        }
 
-       public Map<String, List<V>> materializeMatch(Map<String, List<EventId>> match) {
-               return materializeMatch(match, new HashMap<>());
+       /**
+        * Puts a SharedBufferNode into the cache.
+        * @param nodeId id of the node
+        * @param entry SharedBufferNode
+        */
+       void cacheEntry(NodeId nodeId, Lockable<SharedBufferNode> entry) {
+               this.entryCache.put(nodeId, entry);
        }
 
-       public Map<String, List<V>> materializeMatch(Map<String, List<EventId>> match, Map<EventId, V> cache) {
-
-               Map<String, List<V>> materializedMatch = new LinkedHashMap<>(match.size());
-
-               for (Map.Entry<String, List<EventId>> pattern : match.entrySet()) {
-                       List<V> events = new ArrayList<>(pattern.getValue().size());
-                       for (EventId eventId : pattern.getValue()) {
-                               V event = cache.computeIfAbsent(eventId, id -> {
-                                       try {
-                                               return eventsBuffer.get(id).getElement();
-                                       } catch (Exception ex) {
-                                               throw new WrappingRuntimeException(ex);
-                                       }
-                               });
-                               events.add(event);
-                       }
-                       materializedMatch.put(pattern.getKey(), events);
-               }
-
-               return materializedMatch;
+       /**
+        * Removes an event from the cache and the state.
+        * @param eventId id of the event
+        */
+       void removeEvent(EventId eventId) throws Exception {
+               this.eventsBufferCache.remove(eventId);
+               this.eventsBuffer.remove(eventId);
        }
 
        /**
-        * Increases the reference counter for the given entry so that it is not
-        * accidentally removed.
-        *
-        * @param node id of the entry
-        * @throws Exception Thrown if the system cannot access the state.
+        * Removes a SharedBufferNode from the cache and the state.
+        * @param nodeId id of the node
         */
-       public void lockNode(final NodeId node) throws Exception {
-               Lockable<SharedBufferNode> sharedBufferNode = entries.get(node);
-               if (sharedBufferNode != null) {
-                       sharedBufferNode.lock();
-                       entries.put(node, sharedBufferNode);
-               }
+       void removeEntry(NodeId nodeId) throws Exception {
+               this.entryCache.remove(nodeId);
+               this.entries.remove(nodeId);
        }
 
        /**
-        * Decreases the reference counter for the given entry so that it can be
-        * removed once the reference counter reaches 0.
-        *
-        * @param node id of the entry
+        * Tries to get the SharedBufferNode from the state only if the node has not
+        * been queried during this turn of processing.
+        * @param nodeId id of the node
+        * @return SharedBufferNode
         * @throws Exception Thrown if the system cannot access the state.
         */
-       public void releaseNode(final NodeId node) throws Exception {
-               Lockable<SharedBufferNode> sharedBufferNode = entries.get(node);
-               if (sharedBufferNode != null) {
-                       if (sharedBufferNode.release()) {
-                               removeNode(node, sharedBufferNode.getElement());
-                       } else {
-                               entries.put(node, sharedBufferNode);
-                       }
-               }
+       Lockable<SharedBufferNode> getEntry(NodeId nodeId) throws Exception {
+               Lockable<SharedBufferNode> entry = entryCache.get(nodeId);
+               return entry != null ? entry : entries.get(nodeId);
        }
 
-       private void removeNode(NodeId node, SharedBufferNode sharedBufferNode) throws Exception {
-               entries.remove(node);
-               EventId eventId = node.getEventId();
-               releaseEvent(eventId);
-
-               for (SharedBufferEdge sharedBufferEdge : sharedBufferNode.getEdges()) {
-                       releaseNode(sharedBufferEdge.getTarget());
-               }
-       }
-
-       private void lockEvent(EventId eventId) throws Exception {
-               Lockable<V> eventWrapper = eventsBuffer.get(eventId);
-               checkState(
-                       eventWrapper != null,
-                       "Referring to non existent event with id %s",
-                       eventId);
-               eventWrapper.lock();
-               eventsBuffer.put(eventId, eventWrapper);
+       Lockable<V> getEvent(EventId eventId) throws Exception {
+               Lockable<V> event = eventsBufferCache.get(eventId);
 
 Review comment:
   Same as above
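
For readers following the diff, the cache-before-state lookup that `getEntry`/`getEvent` introduce can be sketched in isolation. This is a minimal sketch, not Flink's actual classes: the `HashMap` named `state` stands in for the RocksDB-backed `MapState`, `cache` for the per-record in-memory cache, and all names here are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the cache-first lookup pattern: consult the cheap in-memory
// cache first and touch the (expensive) state backend only on a miss, so
// each key hits the state at most once per processing turn.
public class CacheFirstLookup {
    static final Map<String, String> state = new HashMap<>();
    static final Map<String, String> cache = new HashMap<>();

    static {
        state.put("node-1", "fromState");
        cache.put("node-2", "fromCache");
    }

    static String getEntry(String nodeId) {
        String cached = cache.get(nodeId);
        return cached != null ? cached : state.get(nodeId);
    }

    public static void main(String[] args) {
        System.out.println(getEntry("node-1"));
        System.out.println(getEntry("node-2"));
    }
}
```

One caveat this sketch makes visible: a `null` in the cache is indistinguishable from a miss, so the pattern assumes entries are never cached as `null`.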

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Reduce the count to deal with state during a CEP process 
> ---------------------------------------------------------
>
>                 Key: FLINK-9642
>                 URL: https://issues.apache.org/jira/browse/FLINK-9642
>             Project: Flink
>          Issue Type: Improvement
>          Components: CEP
>    Affects Versions: 1.6.0
>            Reporter: aitozi
>            Assignee: aitozi
>            Priority: Major
>              Labels: pull-request-available
>
> With the rework of the SharedBuffer in FLINK-9418, lock & release operations 
> now go against RocksDB state, unlike the previous version, which read the 
> whole SharedBuffer state into memory. I think we can add a cache or variable 
> in the SharedBuffer to hold the Lockable objects and track the reference 
> changes within a single NFA process() call; this reduces the number of state 
> accesses when events point to the same NodeId, and the result can be flushed 
> to the MapState at the end of processing.
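
The cache-and-flush idea in the description can be sketched like so. This is a hedged sketch, not the actual patch: `mapState` is a plain `HashMap` standing in for Flink's RocksDB-backed `MapState`, and the method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative write-behind cache: reference-count changes accumulate in an
// in-memory map during one NFA process() call and are written to the backing
// state once at the end, so repeated locks/releases of the same NodeId cost
// only one state access per distinct key instead of one per operation.
public class RefCountCache {
    final Map<String, Integer> mapState = new HashMap<>(); // stands in for MapState
    final Map<String, Integer> pending = new HashMap<>();  // per-turn cache

    void lockNode(String nodeId) {
        // Only the in-memory counter is touched during the turn.
        pending.merge(nodeId, 1, Integer::sum);
    }

    void releaseNode(String nodeId) {
        pending.merge(nodeId, -1, Integer::sum);
    }

    void flush() {
        // One state write per distinct NodeId touched this turn.
        for (Map.Entry<String, Integer> e : pending.entrySet()) {
            mapState.merge(e.getKey(), e.getValue(), Integer::sum);
        }
        pending.clear();
    }
}
```

With this shape, two locks and one release of the same node perform zero state accesses until `flush()`, which then issues a single write for that NodeId.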



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
