I have a PHP project. Before releasing a new version, its Nginx configuration looks like this:
location ~ \.php$ {
    root /data/web/php-project-v1.0.0;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
}
While the server is still handling a large volume of requests, I modified the Nginx configuration to:
location ~ \.php$ {
    root /data/web/php-project-v1.1.0;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
}
Note that the only change is that the root directive now points at the new release directory. Then I executed nginx -s reload.
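For concreteness, the full sequence on the server is roughly the sketch below. The rsync and sed commands and the conf file path are only illustrative stand-ins for however the new directory gets deployed and the root line gets edited, and nginx -t is an extra safety check I added, not part of the question itself; the essential steps are still just the root change plus the reload.

# copy the new release next to the old one (source path is hypothetical)
rsync -a build/ /data/web/php-project-v1.1.0/

# switch the root directive in the vhost file (conf path is hypothetical)
sed -i 's|php-project-v1.0.0|php-project-v1.1.0|' /etc/nginx/conf.d/php-project.conf

# check the configuration syntax before reloading (extra safety step)
nginx -t

# graceful reload: the master re-reads the config, starts new workers,
# and asks the old workers to finish their in-flight requests and exit
nginx -s reload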
Can this achieve zero-downtime deployment? What are the advantages, the disadvantages, and the points to pay attention to, especially during periods of high traffic?
I have tested this approach, and at the very least it does not produce server-side 500 errors. After the configuration change, requests that were already in flight still return results based on the old project's logic once they finish processing. However, from my observation the change does not take effect immediately after nginx -s reload; the old project's logic seems to persist for a while. I hope developers familiar with Nginx can explain this phenomenon and answer my question from a deeper, more low-level perspective.

Additionally, while searching for this issue on Google, I noticed that very few people use this method. Why don't more people adopt this technique for automated deployment, especially in scenarios where there aren't many redundant hosts? Is it simply that they haven't thought of it, or are there potential risks involved?
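For reference, these are the checks I used to watch what the reload actually did; they are just the commands I happened to run, not a definitive diagnostic:

# dump the parsed configuration from disk and confirm the root line now says v1.1.0
nginx -T | grep 'root /data/web'

# list nginx processes right after the reload; the old workers linger with the
# process title "worker process is shutting down" until their requests are drained
ps -o pid,etime,args -C nginx

Those lingering old workers are the ones still finishing requests against the v1.0.0 code, which matches what I observed above.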