Introduction

It is better to teach people how to fish than to give them fish. First learn to use a tool, then learn its principles, and then learn to create with it. You may never need this particular skill, but it is worth possessing. This article focuses on accumulating experience with grayscale implementation using nginx + lua + redis. Once we have this ability, we can adjust the implementation plan at any time based on the same idea: for example nginx + lua + (another data source), or nginx + (another scripting language), and so on.

I. Grayscale Solutions


Common Grayscale Implementation Solutions

1. Request Routing: Decide whether to route the request to a grayscale environment through identifiers in the request (such as user ID, device ID, request headers, etc.). Routing rules can be implemented using reverse proxies (such as Nginx, Envoy) or API gateways (such as Kong, Apigee).

2. Weight Control: Allocate traffic to different environments in a certain weighted ratio. Weight control can be achieved through load balancers (such as HAProxy, Kubernetes Ingress) or proxy servers (such as Nginx, Envoy).

3. Feature Toggle: Control the enablement and disablement of features by embedding feature toggles (Feature Flag) in the code. Configuration files, databases, key-value storage, or feature management platforms (such as LaunchDarkly, Unleash) can be used to manage feature toggles.

4. Phased Release: Divide the release of features into multiple stages, from internal testing to grayscale environment to full-scale release. Deployment tools (such as Jenkins, GitLab CI/CD) or cloud platforms (such as AWS, Azure) can support phased release.

5. A/B Testing: Divide the traffic into multiple different versions of the application and compare their performance and user feedback. A/B testing platforms (such as Optimizely, Google Optimize) can be used to manage and monitor A/B testing.

6. Canary Release: Gradually introduce the new version of the application into the production environment, direct only a small amount of traffic to the new version, and gradually increase traffic based on its performance and stability. Canary release can be realized using deployment tools, container orchestration platforms, or cloud platforms.
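The weight-control approach (item 2 above) can be sketched as a plain Nginx config. The upstream name and server addresses here are illustrative placeholders:

```nginx
http {
    # 90% of traffic to the stable build, 10% to the gray build (weights are relative)
    upstream app_backend {
        server 10.0.0.1:8080 weight=9;  # stable
        server 10.0.0.2:8080 weight=1;  # gray
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_backend;
        }
    }
}
```

Round-robin with weights needs no extra module; for session-sticky grayscale, ip_hash or a cookie-based scheme is usually combined with it.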


Common Grayscale Release Solutions

1. User ID-Based Grayscale Release: Divide grayscale users or percentage grayscale based on user ID. For example, decide whether to route users to the grayscale environment based on the hash value or random number of user ID.

2. IP Address-Based Grayscale Release: Divide grayscale users based on the user's IP address. For example, specify a range of IP addresses as grayscale users and route requests from these IP addresses to the grayscale environment.

3. Cookie/Session Grayscale Release: Divide grayscale users by setting specific identifiers in the user's cookie or session. For example, set specific cookies or session variables as grayscale identifiers and route requests with this identifier to the grayscale environment.

4. Request Header Grayscale Release: Divide grayscale users based on specific identifiers in the request headers. For example, route requests to the grayscale environment based on custom identifiers in the request headers or specific HTTP headers.

5. Weight or Percentage Grayscale Release: Randomly distribute requests to different environments, and control the traffic distribution by setting different weights or percentages for different environments.

6. A/B Testing: Divide the traffic into multiple different versions of the application, compare their performance and user feedback during the experiment, and finally select the best version for full-scale release.
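As a sketch of the user-ID percentage split described in item 1, the following OpenResty snippet buckets users by a CRC32 hash of the user ID. The variable `$upstream`, the upstream names, and the 10% threshold are illustrative assumptions; the two upstream blocks are assumed to be defined elsewhere:

```nginx
server {
    listen 80;
    set $upstream "default_backend";   # stable by default
    location / {
        access_by_lua_block {
            local user_id = ngx.req.get_headers()["X-User-ID"]
            if user_id then
                -- ngx.crc32_short gives a stable hash of the ID; bucket 0-99
                local bucket = ngx.crc32_short(user_id) % 100
                if bucket < 10 then          -- 10% of users go gray
                    ngx.var.upstream = "gray_backend"
                end
            end
        }
        proxy_pass http://$upstream;
    }
}
```

Because the hash is stable, a given user always lands in the same bucket, so the gray population does not shuffle between requests.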


II. Grayscale Implementation Using nginx+lua+redis

Theory

1. Install and configure Nginx and Redis. Ensure that the Lua module is enabled in Nginx and can access Redis.

2. Define the grayscale rules in the Nginx configuration. You can use Lua scripts to determine whether a user should be routed to the grayscale environment. An example configuration is as follows:

server {
    listen 80;
    server_name example.com;
    # Default upstream; may be overridden per request in the Lua phase
    set $upstream "backend";
    location / {
        access_by_lua_block {
            local redis = require "resty.redis"
            local red = redis:new()
            -- Connect to Redis (replace the host/port with your actual Redis address)
            local ok, err = red:connect("redis_host", 6379)
            if not ok then
                ngx.log(ngx.ERR, "failed to connect to Redis: ", err)
                ngx.exit(500)
            end
            -- Use Redis to decide whether to route to the gray environment based on user ID
            local user_id = ngx.req.get_headers()["X-User-ID"]
            if user_id then
                local is_gray = red:get("gray:" .. user_id)
                if is_gray == "1" then
                    ngx.var.upstream = "gray_backend"
                end
            end
        }
        proxy_pass http://$upstream;
    }
    location /gray {
        # Configuration for the gray environment
        proxy_pass http://gray_backend;
    }
    location /admin {
        # Configuration for the management backend
        proxy_pass http://admin_backend;
    }
}

In the above example, we connect to Redis and decide, based on the user ID in the request, whether to route it to the gray environment. The ngx.var.upstream variable dynamically sets the upstream address, thus realizing gray-environment routing.

3. Set gray users in Redis. You can maintain key-value pairs in Redis, where the key is the user ID and the value indicates whether the user is a gray user (for example, 1 indicates a gray user, 0 indicates not). Use Redis's SET and GET commands to manipulate these values.

-- Set the user as a gray user
local ok, err = red:set("gray:" .. user_id, 1)
if not ok then
    ngx.log(ngx.ERR, "failed to set gray status for user: ", err)
    ngx.exit(500)
end

-- Set the user as a non-gray user
local ok, err = red:set("gray:" .. user_id, 0)
if not ok then
    ngx.log(ngx.ERR, "failed to set gray status for user: ", err)
    ngx.exit(500)
end

By setting the grayscale status of users in Redis, you can dynamically control whether users should be routed to the grayscale environment.

4. As needed, configure grayscale rules for other paths or features. You can add further path- or feature-level rules in the Nginx configuration to implement more complex grayscale release strategies.
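The gray flags described above can also be managed from the command line; a sketch with redis-cli, using the gray:&lt;user_id&gt; key convention from the example (a running Redis is assumed):

```shell
# Mark user 10086 as a gray user
redis-cli SET gray:10086 1

# Mark user 10086 as non-gray
redis-cli SET gray:10086 0

# Check the current flag
redis-cli GET gray:10086
```

An ops script or the management backend can issue the same commands to move users in and out of the gray population without reloading Nginx.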

Practice

This section implements grayscale with nginx + lua, mainly using OpenResty.

OpenResty (also known as ngx_openresty) is a scalable web platform based on NGINX. OpenResty is a powerful web application server; web developers can use the Lua scripting language to call the various C and Lua modules supported by Nginx.

OpenResty API documentation: https://www.kancloud.cn/qq13867685/openresty-api-cn/159190

1. Route based on the URL parameters of the POST request

The following is the Nginx configuration:

#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$time_local Client address: $remote_addr–$remote_port Request URI and HTTP protocol: $request Request address: $http_host HTTP request status: $status Upstream status: $upstream_status Load address: $upstream_addr URL redirection source: $http_referer $body_bytes_sent $http_user_agent $request_uri';
    log_format  logFormat '$group $time_local Client:$remote_addr–$remote_port Request URI and HTTP Protocol:$request Request:$http_host HTTP Status:$status Upstream Status:$upstream_status Load:$upstream_addr 
                          URL redirection: $http_referer $body_bytes_sent $http_user_agent $request_uri request parameters $query_string $args $document_root $uri
                          -----$request_uri $request_filename $http_cookie';
    access_log logs/access.log logFormat;
    sendfile        on;
    #tcp_nopush     on;
    #keepalive_timeout  0;
    keepalive_timeout  65;
    server{
        listen       80;   #Listening port
        server_name  domain; #Listening address
        access_log  logs/xx.com.access.log  logFormat;
        location /hello {
                default_type 'text/plain';
                content_by_lua 'ngx.say("hello ,lua scripts")';
        }
        location /myip {
                default_type 'text/plain';
                content_by_lua '
                        clientIP = ngx.req.get_headers()["x_forwarded_for"]
                        ngx.say("Forwarded_IP:",clientIP)
                        if clientIP == nil then
                                clientIP = ngx.var.remote_addr
                                ngx.say("Remote_IP:",clientIP)
                        end
                        ';
        }
        location / {
                default_type 'text/plain';
                lua_need_request_body on;
                #content_by_lua_file /etc/nginx/lua/dep.lua;
                #content_by_lua_file D:/sortware/openresty/openresty-1.17.8.2-win64/conf/dep.lua;
                content_by_lua_file D:/user/Downloads/openresty-1.19.9.1-win64/conf/dep.lua; # Let this Lua file handle the HTTP request
        }
        location @default_version {
            proxy_pass http://default; 
            proxy_set_header  Host  $http_host;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location @new_version {
            proxy_pass http://new_version;
            proxy_set_header Host $http_host;
            #proxy_redirect default;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location @old_version {
            proxy_pass http://old_version; 
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;    
        }
    }
	# Standard pre-release environment
	upstream default {
		 server ip:port;
	}
    # Pre-release 2
	upstream new_version {
		 server ip:port;
	}
    # Pre-release 3
	upstream old_version {
		server ip:port;
	}
}

The following is a Lua script:

--get request URI parameters

function SaveTableContent(file, obj)
      local szType = type(obj);
      print(szType);
      if szType == "number" then
            file:write(obj);
      elseif szType == "string" then
            file:write(string.format("%q", obj));
      elseif szType == "table" then
            --Format the table content and write to the file
            --file:write("{\n");
            for i, v in pairs(obj) do
                  SaveTableContent(file, i);
                  file:write(":");
                  SaveTableContent(file, v);
                  file:write(",");
			end
            --file:write("}\n");
      else
			error("can't serialize a "..szType);
      end
end

function SaveTable(obj)

      local file = io.open("D:\\user\\Downloads\\openresty-1.19.9.1-win64\\logs\\parmas.txt", "a");
      assert(file);
      SaveTableContent(file,obj);
      file:close();
end

local request_method = ngx.var.request_method;
local getargs = nil;
local args = nil;
local read_body = nil;
local body_data = nil;
local thirdPolicystatus = nil;
if "GET" == request_method then
	args = ngx.req.get_uri_args();
elseif "POST" == request_method then
	getargs = ngx.req.get_uri_args();
	args  	  = ngx.req.get_post_args();
	read_body = ngx.req.read_body();
	body_data = ngx.req.get_body_data();
end
if getargs ~= nil then
	SaveTable(getargs);
	thirdPolicystatus = getargs["thirdPolicystatus"];
	if thirdPolicystatus ~= nil then
		SaveTable(thirdPolicystatus);
	end
end

if args ~= nil then
	SaveTable(args);
end

if read_body ~= nil then
	SaveTable(read_body);
end

if body_data ~= nil then
	SaveTable(body_data);
end

if getargs ~= nil then
	thirdPolicystatus = getargs["thirdPolicystatus"]
	if thirdPolicystatus ~= nil and thirdPolicystatus == "1" then
		SaveTable("new_version-getargs");
		ngx.exec('@new_version')
	elseif thirdPolicystatus ~= nil and thirdPolicystatus == "2" then
		SaveTable("old_version-getargs");
		ngx.exec('@old_version')
	else
		SaveTable("default_version-getargs");
		ngx.exec('@default_version')
	end
end

if args ~= nil then
	if type(args) == "table" then
		thirdPolicystatus = tostring(args["thirdPolicystatus"])
		if thirdPolicystatus ~= nil and thirdPolicystatus == "1" then
			SaveTable("new_version-args-table");
			ngx.exec('@new_version')
		elseif thirdPolicystatus ~= nil and thirdPolicystatus == "2" then
			SaveTable("old_version-args-table");
			ngx.exec('@old_version')
		else
			SaveTable("default_version-args-table");
			ngx.exec('@default_version')
		end
	elseif type(args) == "string" then
		local json = require("cjson")
		local jsonObj = json.decode(args)
		thirdPolicystatus = jsonObj['thirdPolicystatus']
		if thirdPolicystatus ~= nil and thirdPolicystatus == 1 then
			SaveTable("new_version-args-string");
			ngx.exec('@new_version')
		elseif thirdPolicystatus ~= nil and thirdPolicystatus == 2 then
			SaveTable("old_version-args-string");
			ngx.exec('@old_version')
		else
			SaveTable("default_version-args-string");
			ngx.exec('@default_version')
		end
	end
end
return

The hosts file entry is as follows:

127.0.0.1  Domain Name

Access Address:

Domain

Menu operation data (policy data): by default, requests go to the default cluster; policy status "underwriting successful" (thirdPolicystatus=1) goes to the new_version cluster; policy status "terminated" (thirdPolicystatus=2) goes to the old_version cluster.
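With the hosts entry above in place, the routing can be exercised with curl. The request path /policy/list is a hypothetical example; substitute your actual domain and API path:

```shell
# Goes to the new_version cluster (underwriting successful)
curl "http://domain/policy/list?thirdPolicystatus=1"

# Goes to the old_version cluster (terminated)
curl "http://domain/policy/list?thirdPolicystatus=2"

# Any other value falls through to the default cluster
curl "http://domain/policy/list"
```

Which cluster answered can be confirmed in access.log via the $upstream_addr field of the logFormat defined above.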

2. Route Redis cache data based on request parameters or IP, etc., for higher flexibility

Redis download address: https://github.com/tporadowski/redis/releases

The following is the Nginx configuration:

#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$time_local Client address: $remote_addr–$remote_port Request URI and HTTP protocol: $request Request address: $http_host HTTP request status: $status Upstream status: $upstream_status Load address: $upstream_addr URL redirection source: $http_referer $body_bytes_sent $http_user_agent $request_uri';
    log_format  logFormat '$group $time_local Client:$remote_addr–$remote_port Request URI and HTTP Protocol:$request Request:$http_host HTTP Status:$status Upstream Status:$upstream_status Load:$upstream_addr 
                          URL redirection: $http_referer $body_bytes_sent $http_user_agent $request_uri request parameters $query_string $args $document_root $uri
                          -----$request_uri $request_filename $http_cookie';
    access_log logs/access.log logFormat;
    sendfile        on;
    #tcp_nopush     on;
    #keepalive_timeout  0;
    keepalive_timeout  65;
   server{
        listen       80;   #Listening port
        server_name  domain; #Listening address
        access_log  logs/xx.com.access.log  logFormat;
        location /redis {
                default_type 'text/plain';
                content_by_lua 'ngx.say("hello ,lua scripts redis")';
        }
        location / {
                default_type 'text/plain';
                lua_need_request_body on;
                content_by_lua_file D:/user/Downloads/openresty-1.19.9.1-win64/conf/redis.lua; # Let this Lua file handle the HTTP request
        }
        location @pre-prd {
            proxy_pass http://pre-prd;
            proxy_set_header Host $http_host;
            #proxy_redirect default;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location @prd {
            proxy_pass http://prd; 
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;    
        }
    }
    # Pre-release 2: simulates production
	upstream prd {
		 server ip:port;
	}
    # Pre-release: simulates pre-production
	upstream pre-prd {
		 server ip:port;
	}
}

The following is a Lua script:

--get request URI parameters
function SaveTableContent(file, obj)
      local szType = type(obj);
      print(szType);
      if szType == "number" then
            file:write(obj);
      elseif szType == "string" then
            file:write(string.format("%q", obj));
      elseif szType == "table" then
            --Format the table content and write to the file
            --file:write("{\n");
            for i, v in pairs(obj) do
                  SaveTableContent(file, i);
                  file:write(":");
                  SaveTableContent(file, v);
                  file:write(",");
			end
            --file:write("}\n");
      else
			error("can't serialize a "..szType);
      end
end

function SaveTable(obj)
      --local file = io.open("D:\\user\\Downloads\\openresty-1.19.9.1-win64\\logs\\parmas.txt", "a");
      local file = io.open("D:\\user\\Downloads\\openresty-1.19.9.1-win64\\logs\\redis.txt", "a");
      assert(file);
      SaveTableContent(file,obj);
      file:close();
end


local request_method = ngx.var.request_method;
local getargs = nil;
local args = nil;
local read_body = nil;
local body_data = nil;
local thirdPolicystatus = nil;
if "GET" == request_method then
	args = ngx.req.get_uri_args();
elseif "POST" == request_method then
	getargs = ngx.req.get_uri_args();
	args  	  = ngx.req.get_post_args();
	read_body = ngx.req.read_body();
	body_data = ngx.req.get_body_data();
end

if getargs ~= nil then
	SaveTable("getargs");
	SaveTable(getargs);
	thirdPolicystatus = getargs["thirdPolicystatus"];
	if thirdPolicystatus ~= nil then
		SaveTable("thirdPolicystatus");
		SaveTable(thirdPolicystatus);
	end
end

if args ~= nil then
	SaveTable("args");
	SaveTable(args);
end

if read_body ~= nil then
	SaveTable("read_body");
	SaveTable(read_body);
end

if body_data ~= nil then
	SaveTable("body_data");
	SaveTable(body_data);
end


local redis = require "resty.redis"
local cache = redis.new()
cache:set_timeout(60000)

local ok, err = cache:connect('127.0.0.1', 6379)
if not ok then
	SaveTable("not ok");
	ngx.exec("@prd")
	return
end

local local_ip = ngx.req.get_headers()["X-Real-IP"]
if local_ip == nil then
        local_ip = ngx.req.get_headers()["x_forwarded_for"]
		SaveTable("local_ip1");
		if local_ip ~= nil then
			SaveTable(local_ip);
		end
end

if local_ip == nil then
	local_ip = ngx.var.remote_addr
	SaveTable("local_ip2");
	if local_ip ~= nil then
		SaveTable(local_ip);
	end
end

-- Get the flag for this client IP from redis
local res, err = cache:get(local_ip)

-- If it is "1", forward to @pre-prd
-- (ngx.exec does not return, so the connection must be closed first)
if res == "1" then
	SaveTable(res);
	SaveTable("pre-prd");
	cache:close()
	ngx.exec("@pre-prd")
	return
else
	SaveTable("-------");
	SaveTable(local_ip);
	SaveTable(res);
	cache:set(local_ip, "0")  -- record this IP as non-gray (SET requires a value)
end

-- Otherwise forward to @prd
SaveTable("prd");
cache:close()
ngx.exec("@prd")
return

This routes requests to different upstreams based on whether the client IP is flagged in the Redis cache.
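To put a client IP into the gray list for the script above, set its key directly in Redis (the key is the raw client IP; the value "1" routes it to @pre-prd):

```shell
# Route requests from 192.168.1.100 to the pre-prd cluster
redis-cli SET 192.168.1.100 1

# Remove it from the gray list again
redis-cli DEL 192.168.1.100
```

This lets operators switch individual testers onto the pre-production cluster instantly, with no Nginx reload.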

III. Related Configuration and Syntax

1. Detailed Explanation of Nginx Configuration File

Source code: https://trac.nginx.org/nginx/browser

Official website: http://www.nginx.org/

Windows installation package download address: https://nginx.org/en/download.html

nginx.conf

########### Each instruction must end with a semicolon.###################
#Global block  For example, the number of working processes, define the log path;
#Configure the user or group, default is nobody nobody.
#user nobody;
#user administrator administrators;

#The number of processes generated, default is 1, generally recommended to be 1-2 times the number of CPU cores
worker_processes 1;
#worker_processes  8;


#Specify the running file storage address of the nginx process
#pid /nginx/pid/nginx.pid;

#Specify the log path and level. This setting can be placed in the global block, http block, server block, and the levels are: #debug|info|notice|warn|error|crit|alert|emerg
error_log logs/error.log error;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#Events block sets the polling event model, the maximum number of connections per working process, and the keep-alive timeout time of the http layer;
events {
   	#Use the epoll I/O model to handle polling events.
   	#This can be left unset; nginx will choose the appropriate model based on the operating system
   	#Event-driven model: select|poll|kqueue|epoll|resig|/dev/poll|eventport
   	#use epoll; 
   	#Maximum number of connections per worker process, default 1024
   	worker_connections  2048;

    # Serialize the accepting of network connections to prevent the thundering-herd problem, default is on
    accept_mutex on;  
}

http {
    # The http block handles routing, static file serving, reverse proxying, load balancing, etc.
    #multi_accept on; # Whether a worker accepts multiple new connections at once, default is off (note: this directive belongs in the events block)
    # Import the mapping table of file extensions to MIME types
    include mime.types;
    # Default file type; default is text/plain
    default_type application/octet-stream;
    # Custom log format and access log path; combined is the default format, and access_log off disables the log
    access_log logs/access.log myFormat;
    # Allows file transmission using the sendfile method, default is off, can be set in the http block, server block, and location block.
    sendfile on; 
    # Only enabled when sendfile is enabled.
    tcp_nopush   on; 
	server_names_hash_bucket_size 64; 
    # The number of bytes transmitted by each process in each call cannot exceed the set value, default is 0, which means no limit is set.
    sendfile_max_chunk 100k;
    # Connection timeout time, default is 75s, can be set in the http, server, and location blocks.
    keepalive_timeout 65;

    #--------------------Static file compression-----------------------------#
    # Nginx can compress the css, js, xml, and html files of a website before transmission, greatly improving the page loading speed. After being compressed by Gzip, the page size can be reduced to 30% or even smaller. To use it, you only need to enable the Gzip compression feature. You can add this configuration in the http global block or server block.
    # Enable gzip compression feature
    #gzip  on;
    gzip on;
     
    # Set the minimum number of bytes for pages that are allowed to be compressed; This indicates that if the file is less than 10k, compression is not meaningful.
    gzip_min_length 10k; 
 
    #Set the compression ratio, the minimum is 1, fast processing speed, slow transmission speed;
    #9 is the maximum compression ratio, slow processing speed, fast transmission speed; Recommended 6
    gzip_comp_level 6; 
     
    #Set the size of the compression buffer, here set 16 8K memory as the compression result buffer
    gzip_buffers 16 8k; 
     
    #Set which file types need to be compressed; text, css, and js are generally recommended. Compress images only as needed.
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; 
    #--------------------Static file compression-----------------------------#
	server {
		listen       80;
		server_name  localhost;
		location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
    #http server block
    server {
        keepalive_requests 120; # Maximum number of requests per connection
        listen       8081;   #Listening port
        server_name  domain; #Listening address
        #ssi on;
		#autoindex on;
        charset utf-8;
        client_max_body_size 10M; # Limit the size of user uploads, default 1M
        #access_log  logs/host.access.log  myFormat; # Define access logs; each server (i.e., each site) can have its own access log.

        # Forward dynamic requests to the web application server
        #location ^~ /api {
            #rewrite ^/api/(.*)$ /$1 break;
            #proxy_pass https://stream;
            #break;#Termination of matching
        #}
		location / {
           # Use proxy_pass to forward requests to a group of application servers defined by upstream
			proxy_pass      http://stream ;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			proxy_set_header Host $http_host;
			proxy_redirect off;
			proxy_set_header X-Real-IP  $remote_addr;
        }
		location  ~*^.+$ {       #Request URL filtering, regular expression matching; ~ is case-sensitive, ~* is case-insensitive.
			proxy_pass  	http://stream ;  #Forward requests to the server list defined by stream
        } 
		
        #location / {
            #autoindex on;
            #try_files $uri $uri/ /index.html?$args;
        #}

        # Rule 1: General Match
        #location / {
			#ssi on;
			#autoindex on;                 #Automatically display directories
            #autoindex_exact_size off;     #Display file size in a user-friendly way, otherwise display in bytes
            #autoindex_localtime on;       #Display according to the server time, otherwise display in GMT time
            #root   /root;                 #Define the default website root directory of the server 
            #index index.html index.htm;   #Define the name of the home index file, set the default page
            # Use proxy_pass to forward requests to a group of application servers defined by upstream
            #proxy_pass   http://mysvr;      # Load configuration
            #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header Host $http_host;
            #proxy_redirect off;
            #proxy_set_header X-Real-IP  $remote_addr; 
            #deny ip;  # Denied IPs
            #allow ip; # Allowed IPs 
        #}

        # Rule 2: Handle URLs starting with /static/
        location ^~ /static {                         
            alias /usr/share/nginx/html/static; # Path to static resources
        }
        #= Exact match 1
        #^~ Starts with a certain string 2
        #~ Case-sensitive regular expression matching 3
        #~* Case-insensitive regular expression matching 4
        #!~ Case-sensitive non-matching regular expression 5
        #!~* Case-insensitive non-matching regular expression 6
        #/  General matching, any request will match to 7
        #location  ~*^.+$ {       # URL request filtering, regular expression matching, ~ is case-sensitive, ~* is case-insensitive.  
            #root path;  # Root directory
            #index vv.txt;  # Set default page
			#proxy_pass  http://stream;  # Redirect requests to the server list defined by stream
            #deny 127.0.0.1;  # Denied IPs
            #allow ip; # Allowed IPs           
        #} 
        #-----------------------------Static File Caching--------------------#
        # Caching can speed up the loading of static files next time. Many files related to the website style, such as css and js files, generally do not change much. The cache validity can be set longer through the expires option.
        # Enable static file caching with expires option, valid for 10 days
        location ~ ^/(images|javascript|js|css|flash|media|static)/  {
             root    /var/www/big.server.com/static_files;
            expires 10d;
        }
		#-----------------------------Static File Caching--------------------#
        # Error Page
		error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

    #-------------The meaning of global variables with the $ symbol--------------#
    #$args, The parameters in the request;
    #$content_length, The "Content-Length" in the HTTP request information;
    #$content_type, The "Content-Type" in the request information;
    #$document_root, The root path setting value for the current request;
    #$document_uri, Same as $uri;
    #$host, The "Host" header in the request; if the request has no Host line, it equals the configured server name;
    #$limit_rate, The limit on the connection rate;
    #$request_method, The request method, such as "GET", "POST", etc.
    #$remote_addr, The client address;
    #$remote_port, The client port number;
    #$remote_user, The client username, used for authentication;
    #$request_filename, The current request file path name
    #$request_body_file, The current request file
    #$request_uri, The requested URI with the query string;
    #$query_string, Same as $args;
    #$scheme, The protocol used, such as http or https; e.g. rewrite ^(.+)$ $scheme://example.com$1 redirect;
    #$server_protocol, The protocol version of the request, "HTTP/1.0" or "HTTP/1.1";
    #$server_addr, The server address;
    #$server_name, The server name to which the request is received;
    #$server_port, The server port number to which the request is received;
    #$uri, The requested URI, which may be different from the initial value, such as after redirection, etc.
    #-------------The meaning of global variables with the $ symbol--------------#

    
    # Error Page
    #error_page 404 https://www.baidu.com; # Error Page
    #error_page 404 500 502 503 504 403 /error.shtml;
    
    # Load Balancing
    upstream insurance-pre {   
      #The weight parameter indicates the weight value; the higher the value, the greater the chance of being allocated
      #--------------------Load Balancing Methods------------------#
      #1. Round Robin (default)
      #2. Weight: the greater the weight, the more requests the server receives
      #server ip:port weight=5
      #3. ip_hash   
      #ip_hash;
      #4. url_hash
      #hash $request_uri;
      #5. fair (third-party) -- Distributes requests based on backend response time, with shorter response times served first. Requires the nginx-upstream-fair module.
      #fair;
      #--------------------Load Balancing Methods------------------#
      server ip:port   weight=5; # The higher the weight, the more traffic
      server ip:port weight=1;
      server ip:port  weight=1;
      server ip:port backup; # Hot backup
    }
	# Forward dynamic requests
    #server {  
        #listen 80;                                                         
        #server_name  localhost;                                               
        #client_max_body_size 1024M;
        #location / {
            #proxy_pass http://localhost:8080;   
            #proxy_set_header Host $host:$server_port;
        #}
    #} 
    # Redirect http requests to https requests
    #server {
        #listen 80;
        #server_name 域名;
        #return 301 https://$server_name$request_uri;
    #}
    server {
        keepalive_requests 120; # Maximum number of requests per connection
        listen       80;   #Listening port
        server_name  域名 #Listen address
        #ssi on;
		#autoindex on;
        charset utf-8;
        client_max_body_size 10M; # Limit the size of the uploaded files by the user, default 1M
        #access_log  logs/host.access.log  myFormat; # Define access logs, which can be set for each server (i.e., each site) to have their own access logs.
        # Forward dynamic requests to the web application server
        #location ^~ /api {
            #rewrite ^/api/(.*)$ /$1 break;
            #proxy_pass https://Domain Name;
            #break;#Termination of matching
        #}
		location / {
           # Use proxy_pass to forward requests to a group of application servers defined by upstream
			proxy_pass       http://tomcat_gray1;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			proxy_set_header Host $http_host;
			proxy_redirect off;
			proxy_set_header X-Real-IP  $remote_addr;
        }
		location ~* ^.+$ { # Request URL filtering by regular expression; ~ is case-sensitive, ~* is case-insensitive
			proxy_pass http://Domain Name; # Forward the request to the server list defined for Domain Name
		}
    }
    # Standard pre-release environment
	upstream tomcat_gray1 {
		server ip; 
		server Domain Name;
	}

	upstream tomcat_gray2 {
		server Domain Name;
	}
}

Host Configuration

127.0.0.1  Domain Name

Access the Domain Name in a browser.

The routing of each request can then be verified in access.log.

2. Lua Basic Syntax

Tutorial: https://www.runoob.com/lua/if-else-statement-in-lua.html

Lua IDE Editor: https://github.com/rjpcomputing/luaforwindows
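As a taste of the syntax covered in the tutorial above, here is a minimal if/elseif/else sketch in plain Lua. It maps the policy-status parameter used later in this article to a grayscale group; the function name and mapping are illustrative only:

```lua
-- Decide which grayscale group a (hypothetical) policy status maps to
local function pick_group(status)
    if status == "1" then
        return "new_version"
    elseif status == "2" then
        return "old_version"
    else
        return "default"
    end
end

print(pick_group("1"))  -- new_version
print(pick_group("9"))  -- default
```

The same branching structure reappears below, first as nginx `if` blocks and then as Lua run inside nginx.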

3. Nginx Implementation of Grayscale

Route requests to different nodes (grayscale) based on front-end request parameters.

#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$time_local Client address: $remote_addr–$remote_port Request URI and HTTP protocol: $request Request address: $http_host HTTP request status: $status Upstream status: $upstream_status Load address: $upstream_addr URL redirection source: $http_referer $body_bytes_sent $http_user_agent $request_uri';
    log_format  logFormat '$group $time_local Client:$remote_addr–$remote_port Request URI and HTTP Protocol:$request Request:$http_host HTTP Status:$status Upstream Status:$upstream_status Load:$upstream_addr 
                          URL redirection: $http_referer $body_bytes_sent $http_user_agent $request_uri request parameters $query_string $args $document_root $uri
                          -----$request_uri $request_filename $http_cookie';
    access_log logs/access.log logFormat;
    sendfile        on;
    #tcp_nopush     on;
    #keepalive_timeout  0;
    keepalive_timeout  65;
    #gzip  on;
    server {
        listen       80;   #Listening port
        server_name  Domain Name; #Listening address 
        access_log  logs/xx.com.access.log  logFormat;
        #Method two: nginx+lua to implement grayscale
        ## 1. The access to localhost will be processed by /opt/app/lua/dep.lua
        ## 2. After the Lua logic runs, it decides which of the internal routes below to take
        #Method three: routing based on request parameter value matching
        #/policy/policyInfoList?thirdPolicystatus=2
        set $group "default";
        if ($query_string ~* "thirdPolicystatus=1"){ # Dynamic control of routing
            set $group new_version;
        }
        if ($query_string ~* "thirdPolicystatus=2"){
            set $group old_version;
        }
        location / 
        {
            default_type "text/html"; 
            #content_by_lua_file D:/sortware/openresty/openresty-1.17.8.2-win64/conf/dep.lua; # Specify the lua file to handle http requests
            proxy_pass http://$group;
            proxy_set_header  Host       $host;
            proxy_set_header  X-Real-IP    $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            index index.html index.htm;
        }
    }
	# Standard pre-release environment
	upstream default {
		server ip:port; 
	}
    # Pre-release 2
	upstream new_version {
		server ip:port;
	}
    # Pre-release 3
	upstream old_version {
		server ip:port;
	}
}

The hosts file is as follows:

127.0.0.1  Domain Name

Access Address:

Domain

Menu operation data (policy data): by default, requests go through the default cluster; policies whose status is "underwriting successful" go through the new_version cluster; policies whose status is "terminated" go through the old_version cluster.
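The commented-out `content_by_lua_file ... dep.lua` line above marks where the Lua+Redis variant plugs in. Below is a minimal sketch of such a script, written for `rewrite_by_lua_file` so the existing `set $group "default";` and `proxy_pass http://$group;` keep working. It assumes OpenResty with its bundled lua-resty-redis library, a local Redis at 127.0.0.1:6379, and a hypothetical key scheme `gray:group:<userId>`:

```lua
-- dep.lua (sketch): resolve the caller's grayscale group from Redis
-- and store it in $group for the location's proxy_pass http://$group.
-- The key scheme gray:group:<userId> is hypothetical.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000) -- 1s connect/send/read timeout

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return -- fall through to the default group
end

-- e.g. /policy/policyInfoList?userId=42
local user_id = ngx.var.arg_userId or ""
local group, gerr = red:get("gray:group:" .. user_id)

-- return the connection to the cosocket pool instead of closing it
red:set_keepalive(10000, 100)

if group and group ~= ngx.null and group ~= "" then
    ngx.var.group = group -- e.g. "new_version" or "old_version"
end
```

With this in place, changing a key in Redis (e.g. `SET gray:group:42 new_version`) reroutes that user immediately, without reloading nginx.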

Load balancing based on parameters in the cookie

#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;


events {
    worker_connections 1024;
}


http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$time_local Client address: $remote_addr–$remote_port Request URI and HTTP protocol: $request Request address: $http_host HTTP request status: $status Upstream status: $upstream_status Load address: $upstream_addr URL redirection source: $http_referer $body_bytes_sent $http_user_agent $request_uri';
    log_format logFormat '$http_cookie $group $time_local Client: $remote_addr–$remote_port Request URI and HTTP protocol: $request Request: $http_host HTTP status: $status Upstream status: $upstream_status Load: $upstream_addr 
                          URL redirection: $http_referer $body_bytes_sent $http_user_agent $request_uri request parameters $query_string $args $document_root $uri
                          -----$request_uri $request_filename ';
    access_log logs/access.log logFormat;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    #gzip  on;
    server {
        listen       80;   #Listening port
        server_name  Domain Name; #Listening address 
        access_log  logs/xx.com.access.log  logFormat;
        #Method two: nginx+lua to implement grayscale
        ## 1. The access to localhost will be processed by /opt/app/lua/dep.lua
        ## 2. After the Lua logic runs, it decides which of the internal routes below to take
        #Method three: routing based on request parameter value matching
        #domain policy/policyInfoList?thirdPolicystatus=2
        set $group "default";
        if ($query_string ~* "thirdPolicystatus=1"){ # Dynamic control of routing
            set $group new_version;
        }
        if ($query_string ~* "thirdPolicystatus=2"){
            set $group old_version;
        }
        if ($http_cookie ~* "sso.xx.com=BJ.E2C7D319112E7F6252BF010770269E235820211121073248"){
            set $group pro_version;
        }
        if ($http_cookie !~* "sso.xx.com=BJ.E2C7D319112E7F6252BF010770269E235820211121073248"){
            set $group grey_version;
        }
        location / 
        {
            default_type "text/html"; 
            #content_by_lua_file D:/sortware/openresty/openresty-1.17.8.2-win64/conf/dep.lua; # Specify the lua file to handle http requests
            proxy_pass http://$group;
            proxy_set_header  Host       $host;
            proxy_set_header  X-Real-IP    $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            index index.html index.htm;
        }
    }
	# Default cluster
	upstream default {
		server ip:port; 
	}
    # New-version cluster
	upstream new_version {
		server ip:port;
	}
    # Old-version cluster
	upstream old_version {
		server ip:port;
	}
    # Production cluster (whitelisted cookie)
	upstream pro_version {
		server ip:port;
	}
    # Grayscale cluster (all other requests)
	upstream grey_version {
		server ip:port;
	}
}

Forward based on cookie content to different clusters
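The chain of `if` blocks above grows awkward as rules accumulate. The same cookie check can instead be written in Lua, again as a sketch for `rewrite_by_lua_file` under OpenResty, reusing the `sso.xx.com` cookie name from the config; the whitelist table is hypothetical and could equally live in Redis:

```lua
-- Sketch: pick the upstream group from the sso.xx.com cookie value.
-- The whitelist table is hypothetical; in practice it could be loaded
-- from Redis or another data source.
local whitelist = {
    ["BJ.E2C7D319112E7F6252BF010770269E235820211121073248"] = true,
}

local cookie = ngx.var.http_cookie or ""
-- Extract the value of the sso.xx.com cookie ('.' escaped for Lua patterns)
local token = cookie:match("sso%.xx%.com=([^;]+)")

if token and whitelist[token] then
    ngx.var.group = "pro_version"  -- whitelisted users: production cluster
else
    ngx.var.group = "grey_version" -- everyone else: grayscale cluster
end
```

Adding or removing a user then means editing one table (or one Redis key) rather than stacking more `if` directives in nginx.conf.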

4. Operable and Replaceable

Idea one: if a dynamic configuration console is needed, Redis can be operated through a Java web project (or similar) to update its data in real time, thereby controlling the grayscale release.

Idea two: switch to other data sources, such as:

1. MySQL/MariaDB: By using Lua's lua-mysql or LuaSQL library, you can connect and query MySQL or MariaDB databases in Lua.

2. PostgreSQL: By using Lua's lua-postgres or LuaSQL library, you can connect and query the PostgreSQL database in Lua.

3. MongoDB: By using Lua's mongo-lua-driver library, you can connect and operate the MongoDB database in Lua.

4. HTTP API: By using Lua's LuaHTTP library, you can initiate HTTP requests and communicate with remote HTTP APIs in Lua.

5. Cassandra: By using Lua's lua-cassandra library, you can connect and query the Cassandra database in Lua.
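As an illustration of idea two, the Redis lookup sketched earlier can be swapped for a relational source with little structural change. This sketch uses OpenResty's lua-resty-mysql (a non-blocking alternative to the LuaSQL-style bindings named above); the database credentials, the `gray_rules` table, and its columns are all hypothetical:

```lua
-- Sketch: resolve the grayscale group from MySQL instead of Redis.
-- Table/column names (gray_rules, user_id, grp) are hypothetical.
local mysql = require "resty.mysql"

local db, err = mysql:new()
if not db then
    ngx.log(ngx.ERR, "mysql new failed: ", err)
    return
end
db:set_timeout(1000) -- 1s timeout

local ok, cerr = db:connect{
    host = "127.0.0.1",
    port = 3306,
    database = "gray",
    user = "app",
    password = "secret",
}
if not ok then
    ngx.log(ngx.ERR, "mysql connect failed: ", cerr)
    return
end

-- Quote the request parameter to avoid SQL injection
local user_id = ngx.quote_sql_str(ngx.var.arg_userId or "")
local res, qerr = db:query(
    "select grp from gray_rules where user_id = " .. user_id .. " limit 1")

db:set_keepalive(10000, 50) -- return the connection to the pool

if res and res[1] and res[1].grp then
    ngx.var.group = res[1].grp
end
```

The rest of the nginx configuration (the `$group` variable and the `proxy_pass http://$group;` line) stays exactly the same; only the lookup backend changes.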

Idea three: Switch to other script languages

1. JavaScript: By using Nginx's ngx_http_js_module, you can use JavaScript in Nginx. This allows you to implement some gray release or other functions using JavaScript scripts. In addition, JavaScript is widely used in front-end development, so it is easier to share code logic in projects that integrate front-end and back-end.

2. LuaJIT: LuaJIT is a high-performance Lua interpreter implemented through just-in-time compilation. It provides an API compatible with the standard Lua interpreter but is faster. With LuaJIT, you can achieve higher performance while maintaining compatibility with Lua.

3. Python: If you are familiar with Python, you can embed Python in Nginx using the Python-NGINX-Module. This allows you to write Nginx configuration files and handle request logic with Python.

4. Java: Using modules such as nginx-jvm-clojure or nginx-jwt, you can embed Java or Clojure in Nginx. These modules provide the functionality to run Java or Clojure code on Nginx and can be integrated with other Java or Clojure libraries and frameworks.

Idea four: Switch to other web servers or reverse proxy servers

1. Apache HTTP Server: Apache is a widely used open-source web server and reverse proxy server that supports various modules and extensions, providing rich functions and configuration options.

2. Microsoft IIS: Internet Information Services (IIS) is a web server developed by Microsoft, designed for Windows operating systems. It is the default web server of Windows Server and provides a wide range of functions and integration.

3. Caddy: Caddy is a modern web server and reverse proxy server written in Go. It has features such as simple configuration, automatic HTTPS, and HTTP/2 support.

4. HAProxy: HAProxy is a high-performance load balancer and reverse proxy server suitable for high-traffic web applications. It has rich load balancing and proxy features.

5. Envoy: Envoy is a lightweight open-source proxy server and communication bus suitable for cloud-native and microservices architecture. It has features such as dynamic configuration, load balancing, and traffic management.

Readers can explore these directions based on their own interests; this article will not go into further detail.

Author: JD Health, Ma Renxi

Source: JD Cloud Developer Community. Please credit the source when reprinting.

