Ingress controller

Introduction

APISIX is a dynamic, real-time, high-performance API gateway. It provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more. It can be used to handle both traditional north-south traffic and east-west traffic between services. At the same time, it can also be used as a Kubernetes Ingress controller.
The APISIX Ingress controller can be installed with Helm, but installing it with native YAML is more helpful for understanding how it works.

Install APISIX and APISIX Ingress controller using native YAML

In this tutorial, we will install APISIX and APISIX Ingress controller in Kubernetes using native YAML.

Prerequisites

If you do not have a Kubernetes cluster available, we recommend using kind to create a local one.
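
If kind is installed, a local cluster can be created with the following command (shown here as a convenience; any Kubernetes cluster will work):


kind create cluster

Next, create the namespace that will be used throughout this tutorial: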


kubectl create ns apisix

In this tutorial, all operations are performed in the apisix namespace.

ETCD installation

Here, we will deploy a single-node ETCD cluster without authentication within the Kubernetes cluster.
In this example, we assume that you have a storage provisioner. If you are using kind, the local-path provisioner is created automatically. If you do not have a storage provisioner, or do not want to use a persistent volume, you can use an emptyDir as the storage volume.


# etcd-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd-headless
  namespace: apisix
  labels:
    app.kubernetes.io/name: etcd
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: "client"
      port: 2379
      targetPort: client
    - name: "peer"
      port: 2380
      targetPort: peer
  selector:
    app.kubernetes.io/name: etcd
---
# etcd.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
  namespace: apisix
  labels:
    app.kubernetes.io/name: etcd
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: etcd
  serviceName: etcd-headless
  podManagementPolicy: Parallel
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: etcd
    spec:
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      containers:
        - name: etcd
          image: docker.io/bitnami/etcd:3.4.14-debian-10-r0
          imagePullPolicy: "IfNotPresent"
          # command:
            # - /scripts/setup.sh
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ETCDCTL_API
              value: "3"
            - name: ETCD_NAME
              value: "$(MY_POD_NAME)"
            - name: ETCD_DATA_DIR
              value: /etcd/data
            - name: ETCD_ADVERTISE_CLIENT_URLS
              value: "http://$(MY_POD_NAME).etcd-headless.apisix.svc.cluster.local:2379"
            - name: ETCD_LISTEN_CLIENT_URLS
              value: "http://0.0.0.0:2379"
            - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
              value: "http://$(MY_POD_NAME).etcd-headless.apisix.svc.cluster.local:2380"
            - name: ETCD_LISTEN_PEER_URLS
              value: "http://0.0.0.0:2380"
            - name: ALLOW_NONE_AUTHENTICATION
              value: "yes"
          ports:
            - name: client
              containerPort: 2379
            - name: peer
              containerPort: 2380
          volumeMounts:
            - name: data
              mountPath: /etcd
      # If you don't have a storage provisioner or don't want to use a persistent volume, you could use an `emptyDir` as follows.
      # volumes:
      #   - name: data
      #     emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"

Please note that this ETCD installation is very simple and lacks many necessary production features; it is only intended for learning scenarios. If you want to deploy a production-grade ETCD, please refer to bitnami/etcd.

APISIX Installation

First, create a configuration file for APISIX. We will deploy APISIX version 2.5.
Please note that the APISIX Ingress controller needs to communicate with the APISIX Admin API, so for testing purposes we set apisix.allow_admin to 0.0.0.0/0.


apiVersion: v1
kind: ConfigMap
metadata:
  name: apisix-conf
  namespace: apisix
data:
  config.yaml: |-
    apisix:
      node_listen: 9080             # APISIX listening port
      enable_heartbeat: true
      enable_admin: true
      enable_admin_cors: true
      enable_debug: false
      enable_dev_mode: false          # Sets nginx worker_processes to 1 if set to true
      enable_reuseport: true          # Enable nginx SO_REUSEPORT switch if set to true.
      enable_ipv6: true
      config_center: etcd             # etcd: use etcd to store the config value

      allow_admin:                  # Module ngx_http_access_module
        - 0.0.0.0/0
      port_admin: 9180

      # Default token when use API to call for Admin API.
      # *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
      # Disabling this configuration item means that the Admin API does not
      # require any authentication.
      admin_key:
        # admin: can everything for configuration data
        - name: "admin"
          key: edd1c9f034335f136f87ad84b625c8f1
          role: admin
        # viewer: only can view configuration data
        - name: "viewer"
          key: 4054f7cf07e344346cd3f287985e76a2
          role: viewer
      # dns_resolver:
      #   - 127.0.0.1
      dns_resolver_valid: 30
      resolver_timeout: 5

    nginx_config:                     # config for render the template to generate nginx.conf
      error_log: "/dev/stderr"
      error_log_level: "warn"         # warn,error
      worker_rlimit_nofile: 20480     # the number of files a worker process can open, should be larger than worker_connections
      event:
        worker_connections: 10620
      http:
        access_log: "/dev/stdout"
        keepalive_timeout: 60s         # timeout during which a keep-alive client connection will stay open on the server side.
        client_header_timeout: 60s     # timeout for reading client request header, then 408 (Request Time-out) error is returned to the client
        client_body_timeout: 60s       # timeout for reading client request body, then 408 (Request Time-out) error is returned to the client
        send_timeout: 10s              # timeout for transmitting a response to the client.then the connection is closed
        underscores_in_headers: "on"   # default enables the use of underscores in client request header fields
        real_ip_header: "X-Real-IP"    # Module ngx_http_realip_module
        real_ip_from:                  # Module ngx_http_realip_module
          - 127.0.0.1
          - 'unix:'

    etcd:
      host:
        - "http://etcd-headless.apisix.svc.cluster.local:2379"
      prefix: "/apisix"     # apisix configurations prefix
      timeout: 30   # seconds
    plugins:                          # plugin list
      - api-breaker
      - authz-keycloak
      - basic-auth
      - batch-requests
      - consumer-restriction
      - cors
      - echo
      - fault-injection
      - grpc-transcode
      - hmac-auth
      - http-logger
      - ip-restriction
      - jwt-auth
      - kafka-logger
      - key-auth
      - limit-conn
      - limit-count
      - limit-req
      - node-status
      - openid-connect
      - prometheus
      - proxy-cache
      - proxy-mirror
      - proxy-rewrite
      - redirect
      - referer-restriction
      - request-id
      - request-validation
      - response-rewrite
      - serverless-post-function
      - serverless-pre-function
      - sls-logger
      - syslog
      - tcp-logger
      - udp-logger
      - uri-blocker
      - wolf-rbac
      - zipkin
      - traffic-split
    stream_plugins:
      - mqtt-proxy


Please make sure etcd.host matches the headless service we created earlier. In our example, it is http://etcd-headless.apisix.svc.cluster.local:2379.

In this configuration, under apisix.admin_key we define an access key named admin. This key is our API key and will be used to control APISIX later. It is APISIX's default key and should be changed in a production environment.

Save it as config.yaml, then run kubectl -n apisix create -f config.yaml to create the ConfigMap. Later, we will mount this ConfigMap into the APISIX Deployment.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: apisix
  namespace: apisix
  labels:
    app.kubernetes.io/name: apisix
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: apisix
  template:
    metadata:
      labels:
        app.kubernetes.io/name: apisix
    spec:
      containers:
        - name: apisix
          image: "apache/apisix:2.5-alpine"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 9080
              protocol: TCP
            - name: tls
              containerPort: 9443
              protocol: TCP
            - name: admin
              containerPort: 9180
              protocol: TCP
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 9080
            timeoutSeconds: 1
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - "sleep 30"
          volumeMounts:
            - mountPath: /usr/local/apisix/conf/config.yaml
              name: apisix-config
              subPath: config.yaml
          resources: {}
      volumes:
        - configMap:
            name: apisix-conf
          name: apisix-config
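
Assuming the Deployment above is saved as apisix-deployment.yaml (a hypothetical file name), apply it and wait for the rollout to finish:


kubectl -n apisix apply -f apisix-deployment.yaml
kubectl -n apisix rollout status deployment/apisix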

Now APISIX should be ready to use. Run kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name to list the APISIX Pod name. Here we assume the Pod name is apisix-7644966c4d-cl4k6.

Let's check:


kubectl -n apisix exec -it apisix-7644966c4d-cl4k6 -- curl http://127.0.0.1:9080

If you are using Linux or macOS, run the following command in Bash:


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl http://127.0.0.1:9080

If APISIX is working normally, it should output {"error_msg":"404 Route Not Found"}, because we have not defined any routes yet.

HTTPBIN service

Before configuring APISIX, we need to create a test service. Here we use kennethreitz/httpbin and place the httpbin service in the demo namespace.


kubectl create ns demo
kubectl label namespace demo apisix.ingress=watching # Add the apisix.ingress label to the demo namespace
kubectl -n demo run httpbin --image-pull-policy=IfNotPresent --image kennethreitz/httpbin --port 80
kubectl -n demo expose pod httpbin --port 80
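
Optionally, you can wait until the httpbin Pod reports Ready before testing it:


kubectl -n demo wait --for=condition=Ready pod/httpbin --timeout=60s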

After the httpbin service is started, we should be able to access it through the service in the APISIX Pod.


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl http://httpbin.demo/get

This command outputs the details of the request, for example:


{   "args": {},   "headers": {     "Accept": "*/*",     "Host": "httpbin.demo",     "User-Agent": "curl/7.67.0"   },   "origin": "172.17.0.1",   "url": "http://httpbin.demo/get" }

For more information, please refer to the Quick Start Guide.

Define the route

Now, we can define the route for HTTPBIN service traffic through the APISIX proxy.
Assume we want to proxy all traffic whose requests carry the header Host: httpbin.org; the route below matches any URI via the wildcard /*.
Please note that the Admin API port is 9180.


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9180/apisix/admin/routes/1" -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1" -X PUT -d '
{
  "uri": "/*",
  "host": "httpbin.org",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.demo:80": 1
    }
  }
}'

The output is as follows:


{"action":"set","node":{"key":"\/apisix\/routes\/1","value":{"status":1,"create_time":1621408897,"upstream":{"pass_host":"pass","type":"roundrobin","hash_on":"vars","nodes":{"httpbin.demo:80":1},"scheme":"http"},"update_time":1621408897,"priority":0,"host":"httpbin.org","id":"1","uri":"\/*"}}}

We can check the routing rule via GET /apisix/admin/routes/1:


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9180/apisix/admin/routes/1" -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1"

The output is as follows:


{"action":"get","node":{"key":"\/apisix\/routes\/1","value":{"upstream":{"pass_host":"pass","type":"roundrobin","scheme":"http","hash_on":"vars","nodes":{"httpbin.demo:80":1}},"id":"1","create_time":1621408897,"update_time":1621408897,"host":"httpbin.org","priority":0,"status":1,"uri":"\/*"}},"count":"1"}


Now, let's test the routing rule:


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9080/get" -H 'Host: httpbin.org'


The output is as follows:


{   "args": {},   "headers": {     "Accept": "*/*",     "Host": "httpbin.org",     "User-Agent": "curl/7.67.0",     "X-Forwarded-Host": "httpbin.org"   },   "origin": "127.0.0.1",   "url": "http://httpbin.org/get" }

Install APISIX Ingress Controller

The APISIX Ingress controller can help you manage configurations declaratively by using Kubernetes resources. Here we will install version 1.6.0.

Currently, the APISIX Ingress controller supports both the official Ingress resources and the custom resource definitions of APISIX, including ApisixRoute and ApisixUpstream.

Before installing the APISIX Ingress controller, we need to create a service account and the corresponding cluster role to ensure that the APISIX Ingress controller has sufficient permissions to access the required resources.


Below is an example configuration from the apisix-helm-chart:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: apisix-ingress-controller
  namespace: apisix
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: apisix-clusterrole
  namespace: apisix
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - replicationcontrollers
      - replicationcontrollers/scale
      - serviceaccounts
      - services
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - bindings
      - events
      - limitranges
      - namespaces/status
      - pods/log
      - pods/status
      - replicationcontrollers/status
      - resourcequotas
      - resourcequotas/status
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - controllerrevisions
      - daemonsets
      - deployments
      - deployments/scale
      - replicasets
      - replicasets/scale
      - statefulsets
      - statefulsets/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - deployments/scale
      - ingresses
      - networkpolicies
      - replicasets
      - replicasets/scale
      - replicationcontrollers/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apisix.apache.org
    resources:
      - apisixroutes
      - apisixroutes/status
      - apisixupstreams
      - apisixupstreams/status
      - apisixtlses
      - apisixtlses/status
      - apisixclusterconfigs
      - apisixclusterconfigs/status
      - apisixconsumers
      - apisixconsumers/status
      - apisixpluginconfigs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apisix-clusterrolebinding
  namespace: apisix
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apisix-clusterrole
subjects:
  - kind: ServiceAccount
    name: apisix-ingress-controller
    namespace: apisix
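
Assuming the ServiceAccount, ClusterRole, and ClusterRoleBinding above are saved as ingress-controller-rbac.yaml (a hypothetical file name), apply them:


kubectl apply -f ingress-controller-rbac.yaml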

Then, we need to create the ApisixRoute and the other APISIX CRDs:


git clone https://github.com/apache/apisix-ingress-controller.git --depth 1 
cd apisix-ingress-controller/ 
kubectl apply -k samples/deploy/crd


Please refer to the samples directory for details.

To make the Ingress controller work with APISIX, we need to create a configuration file that contains the APISIX Admin API URL and API key, as shown below:


apiVersion: v1
data:
  config.yaml: |
    # log options
    log_level: "debug"
    log_output: "stderr"
    http_listen: ":8080"
    enable_profiling: true
    kubernetes:
      kubeconfig: ""
      resync_interval: "30s"
      namespace_selector:
      - "apisix.ingress=watching"
      ingress_class: "apisix"
      ingress_version: "networking/v1"
      apisix_route_version: "apisix.apache.org/v2"
    apisix:
      default_cluster_base_url: "http://apisix-admin.apisix:9180/apisix/admin"
      default_cluster_admin_key: "edd1c9f034335f136f87ad84b625c8f1"
kind: ConfigMap
metadata:
  name: apisix-configmap
  namespace: apisix
  labels:
    app.kubernetes.io/name: ingress-controller
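
Save the ConfigMap above (for example as apisix-ingress-configmap.yaml, a hypothetical file name) and apply it:


kubectl apply -f apisix-ingress-configmap.yaml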

If you want to learn about all configuration items, see conf/config-default.yaml for details.

Because the Ingress controller needs to access the APISIX Admin API, we need to create a Service for it.


apiVersion: v1
kind: Service
metadata:
  name: apisix-admin
  namespace: apisix
  labels:
    app.kubernetes.io/name: apisix
spec:
  type: ClusterIP
  ports:
  - name: apisix-admin
    port: 9180
    targetPort: 9180
    protocol: TCP
  selector:
    app.kubernetes.io/name: apisix
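
After saving and applying this Service (for example as apisix-admin-service.yaml, a hypothetical file name), you can verify that the Admin API is reachable through it from inside the cluster:


kubectl -n apisix apply -f apisix-admin-service.yaml
kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://apisix-admin.apisix:9180/apisix/admin/routes" -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1"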

Because the current APISIX Ingress controller is not 100% compatible with APISIX, we need to delete the routes created earlier to prevent data structure mismatches.


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X DELETE -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1"

After completing these configurations, we now deploy the Ingress controller.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: apisix-ingress-controller
  namespace: apisix
  labels:
    app.kubernetes.io/name: ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-controller
    spec:
      serviceAccountName: apisix-ingress-controller
      volumes:
        - name: configuration
          configMap:
            name: apisix-configmap
            items:
              - key: config.yaml
                path: config.yaml
      initContainers:
        - name: wait-apisix-admin
          image: busybox:1.28
          command: ['sh', '-c', "until nc -z apisix-admin.apisix.svc.cluster.local 9180 ; do echo waiting for apisix-admin; sleep 2; done;"]
      containers:
        - name: ingress-controller
          command:
            - /ingress-apisix/apisix-ingress-controller
            - ingress
            - --config-path
            - /ingress-apisix/conf/config.yaml
          image: "apache/apisix-ingress-controller:1.6.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          resources:
            {}
          volumeMounts:
            - mountPath: /ingress-apisix/conf
              name: configuration

In this Deployment, we mount the ConfigMap created above as a configuration file and tell Kubernetes to use the service account apisix-ingress-controller.
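
Assuming the Deployment above is saved as apisix-ingress-controller.yaml (a hypothetical file name), apply it and watch the rollout:


kubectl -n apisix apply -f apisix-ingress-controller.yaml
kubectl -n apisix rollout status deployment/apisix-ingress-controller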

After the Ingress controller Pod enters the Running state, we create an ApisixRoute resource and observe its behavior.

Below is an example ApisixRoute:


apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: httpserver-route
  namespace: demo
spec:
  http:
  - name: httpbin
    match:
      hosts:
      - local.httpbin.org
      paths:
      - /*
    backends:
      - serviceName: httpbin
        servicePort: 80

Note that the apiVersion field must match the apisix_route_version configured in the ConfigMap above, and serviceName must match the name of the Service we exposed, which is httpbin.

Before creating it, we confirm that a request with the header Host: local.httpbin.org returns 404:


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9080/get" -H 'Host: local.httpbin.org'

It will return:


{"error_msg":"404 Route Not Found"}

Apply the ApisixRoute in the same namespace as the target service, in this example demo.
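
Assuming the ApisixRoute above is saved as httpbin-route.yaml (a hypothetical file name), apply it:


kubectl -n demo apply -f httpbin-route.yaml

After applying it, check whether the route takes effect: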


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9080/get" -H "Host: local.httpbin.org"

It should return:


{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "local.httpbin.org",
    "User-Agent": "curl/7.67.0",
    "X-Forwarded-Host": "local.httpbin.org"
  },
  "origin": "127.0.0.1",
  "url": "http://local2.httpbin.org/get"
}

That's all! Enjoy your journey with APISIX and APISIX Ingress controller.

Ingress controller

The Apache APISIX Ingress controller for Kubernetes.

Modules

  1. Ingress-types
     • Defines the CRDs (CustomResourceDefinitions) required by Apache APISIX
     • Currently supports ApisixRoute/ApisixUpstream, as well as other service-level and route-level plugins
     • Can be packaged as an independent binary, kept in sync with the Ingress definitions
     • CRD design (see the CRD design section below)
  2. Types
     • Defines interface objects that match the concepts in Apache APISIX, such as routes, services, upstreams, and plugins
     • Can be packaged as an independent binary; requires a compatible Apache APISIX version
     • New types are added to this module to support new features
  3. Seven
     • Contains the main application logic
     • Synchronizes the Kubernetes cluster state to Apache APISIX based on the Apisix-types objects
  4. Ingress-controller
     • The Ingress controller driver; watches the Kubernetes API server
     • Matches and converts Apisix-ingress-types objects to Apisix-types objects before handing control over to the upper module, Seven

CRD design

Currently, the apisix-ingress-controller CRDs consist of six parts: ApisixRoute, ApisixUpstream, ApisixConsumer, ApisixTls, ApisixClusterConfig, and ApisixPluginConfig. Their design follows these ideas:

  1. The most important part of a gateway is routing, which defines how the gateway distributes traffic
  2. For ease of understanding and configuration, the structure of ApisixRoute is designed to be largely similar to Kubernetes Ingress
  3. The annotation design also follows the structure of Kubernetes Ingress, but the underlying implementation is based on Apache APISIX plugins
  4. In the simplest case, you only need to define an ApisixRoute; the Ingress controller automatically adds the ApisixUpstream
  5. ApisixUpstream can define details of the Apache APISIX upstream, such as load balancing and health checks (see the sketch below)
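
As a rough illustration of point 5, below is a minimal ApisixUpstream sketch for the httpbin Service used earlier. The field values are illustrative only; check the ApisixUpstream reference for the full schema. Note that the resource name must match the Service name so that the controller can associate the two.


apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  name: httpbin
  namespace: demo
spec:
  scheme: http
  loadbalancer:
    type: roundrobin   # other strategies such as chash are also supported
  retries: 2
  timeout:
    connect: 5s
    read: 10s
    send: 10s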

Monitor CRD

apisix-ingress-controller is responsible for interacting with the Kubernetes API server: requesting access permissions for resources (RBAC), watching for changes, converting objects inside the Ingress controller, comparing the changes, and then synchronizing them to Apache APISIX.

Sequence diagram

The following outlines the main logic of how ApisixRoute and the other CRDs are synchronized (the upstream documentation illustrates this with a sequence diagram).

Conversion structure

apisix-ingress-controller provides a declarative way to configure APISIX through CRDs. It is aimed at daily operations staff, who often need to process large numbers of routes in batches and want to manage all related services in a single configuration file, with conventions that are convenient and easy to understand. Apache APISIX, by contrast, is designed from the gateway's perspective, where every route is independent. This leads to significant differences in the data structures of the two: one focuses on batch definition, the other on discrete objects.
Considering the habits of these different user groups, the data structure of the CRDs follows that of Kubernetes Ingress and is basically identical in form.
A simple comparison shows that the two definitions differ considerably:

They are in a many-to-many relationship. Therefore, apisix-ingress-controller must perform some conversion on the CRDs to adapt them to different gateways.
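
As a rough sketch of this many-to-many mapping (illustrative only, using hypothetical rule names), a single ApisixRoute that declares two HTTP rules roughly corresponds to two separate route objects on the APISIX side, each pointing at an upstream generated from the referenced Service:


apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: demo-routes
  namespace: demo
spec:
  http:
  - name: httpbin-get          # becomes one APISIX route
    match:
      hosts:
      - local.httpbin.org
      paths:
      - /get
    backends:
      - serviceName: httpbin
        servicePort: 80
  - name: httpbin-headers      # becomes another APISIX route
    match:
      hosts:
      - local.httpbin.org
      paths:
      - /headers
    backends:
      - serviceName: httpbin
        servicePort: 80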

Cascading update

Currently, we define multiple CRDs, and each CRD is responsible for its own fields. ApisixRoute and ApisixUpstream correspond to Apache APISIX objects such as route, service, and upstream. Because APISIX objects are tightly bound to one another, cascading effects between objects must be considered when CRDs are modified or deleted in batches.
Therefore, apisix-ingress-controller implements a broadcast notification mechanism through channels: whenever the definition of an object changes, every object related to it is notified and the corresponding behavior is triggered.

Difference rules

The seven module keeps data structures in memory that currently closely mirror the Apache APISIX resource objects. When the Kubernetes resource objects change, seven compares them with the in-memory objects and performs incremental updates based on the result of the comparison.
The current comparison rules group the resource objects by route, service, and upstream, compare each group separately, and send the corresponding notifications when differences are found.

Service discovery

Based on the namespace, name, and port defined in an ApisixUpstream resource object, apisix-ingress-controller registers the endpoints whose Pods are in the running state as nodes of the corresponding Apache APISIX upstream, and keeps them synchronized in real time as the Kubernetes Endpoints change.

With this service discovery in place, Apache APISIX Ingress can reach the backend Pods directly, bypassing the Kubernetes Service, which makes customized load-balancing strategies possible.
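
For example, after the ApisixRoute from the tutorial above is applied, you can inspect the upstream nodes that the controller registered (Pod IPs rather than the Service ClusterIP) through the Admin API:


kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9180/apisix/admin/upstreams" -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1"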

Annotation implementation

Unlike the Kubernetes Nginx Ingress implementation, apisix-ingress-controller implements annotations based on the Apache APISIX plugin mechanism.

For example, the k8s.apisix.apache.org/whitelist-source-range annotation on an ApisixRoute resource object can be used to configure an allowlist/blocklist:

apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  annotations:
    k8s.apisix.apache.org/whitelist-source-range: 1.2.3.4,2.2.0.0/16
  name: httpserver-route
spec:
    ...

The allowlist/blocklist here is implemented by the ip-restriction plugin.

There will be more annotation implementations in the future, to facilitate the definition of some common configurations, such as CORS.

If you need additional annotations, you are welcome to open an issue to discuss how to implement them.
