adugeek opened a new issue #4388:
URL: https://github.com/apache/apisix/issues/4388


   Although there is an official Ingress Controller implementation, it is 
not practical in some scenarios.
   
   1: Small-scale clusters. If your cluster has only 1~2 nodes, you cannot 
provide a stable etcd deployment environment. Even if you manage to deploy 
etcd somehow, it consumes additional resources.
   
   2: A typical private k8s cluster has no stable external storage 
service. Even when the number of nodes is sufficient, deploying etcd is 
inconvenient.
   
   3: Multi-tenant clusters isolated by namespace. An Ingress Controller 
serving as the single cluster entry point breaks tenant isolation, and even 
creating a separate Ingress Controller class for each tenant is not appropriate.
   
   For the above reasons, I designed a new Ingress Controller 
implementation.
   The main points of the implementation are as follows:
   
   1: Define CRDs on the cluster
   2: Deploy an apisix service in each namespace of the cluster, serving as the 
sole traffic entrance of that namespace
   3: Each apisix instance uses its privileged process to list-watch the 
namespaced CRD resources and write them to conf/apisix.yaml
   4: Implement k8s service discovery (list-watch the namespaced endpoints)
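   Steps 3 and 4 above boil down to rendering what the watcher sees into 
APISIX's standalone config file. A minimal sketch of that rendering step is 
below; the object shapes (`routes`, `upstreams` dicts) are hypothetical 
stand-ins for the proposed CRDs, not the real resources, and a real 
implementation would feed this from a Kubernetes list-watch rather than 
static data. Note that APISIX's standalone mode requires conf/apisix.yaml 
to end with a `#END` line.

```python
# Sketch: render conf/apisix.yaml content for APISIX standalone mode from
# namespaced route/upstream objects (hypothetical shapes, not the real CRDs).
def render_apisix_yaml(routes, upstreams):
    lines = ["routes:"]
    for r in routes:
        lines.append(f"  - uri: {r['uri']}")
        lines.append(f"    upstream_id: {r['upstream_id']}")
    lines.append("upstreams:")
    for u in upstreams:
        lines.append(f"  - id: {u['id']}")
        lines.append("    nodes:")
        # nodes map endpoint "host:port" addresses to weights, as produced
        # by list-watching the namespaced endpoints
        for addr, weight in u["nodes"].items():
            lines.append(f'      "{addr}": {weight}')
        lines.append(f"    type: {u.get('type', 'roundrobin')}")
    # APISIX standalone mode ignores the file unless it ends with #END
    lines.append("#END")
    return "\n".join(lines) + "\n"

routes = [{"uri": "/api/*", "upstream_id": 1}]
upstreams = [{"id": 1, "nodes": {"10.0.0.5:8080": 1}}]
print(render_apisix_yaml(routes, upstreams))
```

On each watch event, the watcher would re-render the file and let APISIX 
pick up the change.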
   
   This way:
   no additional development language or framework is needed,
   no additional etcd is needed, and the data transfer path is shorter,
   so it is more robust than the official implementation.
   
   The only implementation difficulty may be how to implement the webhooks.
   If we develop them with client-go, it is not convenient to verify the 
config schema and plugin schemas.
   If we implement them as an apisix plugin, the nginx process model makes it 
inconvenient to verify the uniqueness of service_id, upstream_id and route_id.
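   The uniqueness part of that validation is straightforward on its own; the 
difficulty described above is *where* to run it. As a sketch, the check a 
validating webhook (or the privileged writer process) would need might look 
like the following; the object shape and the `find_duplicate_ids` helper are 
hypothetical, for illustration only.

```python
# Sketch: reject a batch of namespaced objects whose ids collide.
# The object shape is a hypothetical stand-in for the proposed CRDs.
from collections import Counter

def find_duplicate_ids(objects, key):
    """Return the sorted list of ids that appear more than once."""
    counts = Counter(o[key] for o in objects if key in o)
    return sorted(i for i, n in counts.items() if n > 1)

routes = [
    {"route_id": "r1", "uri": "/a"},
    {"route_id": "r2", "uri": "/b"},
    {"route_id": "r1", "uri": "/c"},  # collision with the first route
]
print(find_duplicate_ids(routes, "route_id"))  # -> ['r1']
```

Done in client-go, this check is trivial but the schema validation is not; 
done inside apisix, each nginx worker only sees its own state, which is why 
the uniqueness check becomes the hard part there.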



