aicam opened a new issue, #3902:
URL: https://github.com/apache/texera/issues/3902

   ## Background
   
   Our current architecture uses **Ingress** as a static gateway and **Envoy** 
as a dynamic gateway. Ingress is responsible for forwarding requests to the 
appropriate microservice. Since each microservice can run multiple replica pods 
that serve the same purpose, Ingress balances requests across them with a 
simple round-robin strategy.
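
   For concreteness, the static side is essentially a standard Kubernetes 
Ingress whose rules map URL prefixes to Services; the round-robin behaviour 
comes from spreading requests across each Service's replica pods. The sketch 
below uses hypothetical service names and paths, not our actual manifests:

```yaml
# Sketch only: hypothetical service names and paths, not the real Texera manifests.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: texera-static-routes
spec:
  rules:
    - http:
        paths:
          - path: /api/workflow         # forwarded to the workflow microservice
            pathType: Prefix
            backend:
              service:
                name: workflow-service  # hypothetical name
                port:
                  number: 8080
          - path: /api/dataset          # forwarded to the dataset microservice
            pathType: Prefix
            backend:
              service:
                name: dataset-service   # hypothetical name
                port:
                  number: 8080
```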
   
   However, the **Amber Engine** (also referred to as the *Computing Unit*) 
operates differently. Each Computing Unit is a dedicated pod for a specific 
user, meaning requests cannot be routed arbitrarily. A request for a given user 
must always be directed to the specific Computing Unit that user is authorized 
to access. 
   
   At present, this is handled with a `?cuid=` query parameter in the URL, which 
Envoy uses to forward each request to the correct Computing Unit.
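
   Roughly, the Envoy side amounts to a route that matches on the `cuid` query 
parameter. The fragment below is only a sketch of that idea using Envoy's v3 
route configuration; in practice such routes are managed dynamically per 
Computing Unit, and the cluster name and `cuid` value shown are placeholders:

```yaml
# Sketch only: illustrates query-parameter routing in Envoy's v3 route config.
# In practice a route like this would be generated per Computing Unit; the
# cluster name and cuid value are placeholders.
virtual_hosts:
  - name: computing-units
    domains: ["*"]
    routes:
      - match:
          prefix: "/"
          query_parameters:
            - name: "cuid"
              string_match:
                exact: "42"            # requests carrying ?cuid=42 ...
        route:
          cluster: computing-unit-42   # ... go to that user's dedicated pod
```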
   
   ---
   
   ## Problem
   
   Using two different gateways — Ingress and Envoy — has introduced 
unnecessary complexity into our architecture. For example, if we want to 
centralize the authorization logic in an **Access Control Service**, every 
request would need to pass through both Ingress and Envoy, effectively 
duplicating proxy logic and increasing maintenance overhead.
   
   ---
   
   ## Proposal
   
   We propose replacing both **Ingress** and **Envoy** with **Contour**, a 
modern gateway solution capable of handling both static and dynamic routing. 
   
   By consolidating these responsibilities under Contour, we can:
   - Simplify the overall architecture  
   - Eliminate redundant proxy layers  
   - Streamline routing and access control logic  
   
   This change would make the system more maintainable, efficient, and easier 
to evolve going forward.
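
   As a rough sketch of what a consolidated configuration could look like, the 
HTTPProxy below combines a static prefix route with a query-parameter route, 
assuming a recent Contour version that supports `queryParameter` match 
conditions. Service names and the `cuid` value are placeholders, and 
per-Computing-Unit routes would still be created and removed dynamically as 
units start and stop:

```yaml
# Sketch only: hypothetical HTTPProxy showing how Contour could cover both cases.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: texera-gateway
spec:
  virtualhost:
    fqdn: texera.example.com           # placeholder hostname
  routes:
    # Static routing: round-robin across the microservice's replicas.
    - conditions:
        - prefix: /api/workflow
      services:
        - name: workflow-service       # hypothetical name
          port: 8080
    # Dynamic routing: pin requests for a given cuid to that user's Computing Unit.
    - conditions:
        - prefix: /
        - queryParameter:
            name: cuid
            exact: "42"                # placeholder cuid value
      services:
        - name: computing-unit-42      # hypothetical, one Service per unit
          port: 8080
```

   If Contour's external authorization support fits our needs, the Access 
Control Service mentioned above could then be attached to this single gateway 
instead of duplicating that logic across two proxies.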
   

