Cloudflare doesn’t cache any of our PHP files, so we don’t need to worry about that. Cloudflare connects to our nginx SSL reverse proxy.
nginx has a split_clients directive, so we could potentially add the header there:
Performing A/B Testing with NGINX and NGINX Plus
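As a rough sketch, something like this on the SSL reverse proxy could assign the group and pass it along as a request header. The header name AB-Group, the variable names, the 10% split, and the Varnish port are assumptions, not settled choices:

# split_clients needs to be in the http block, alongside the server block
split_clients "${remote_addr}" $ab_group {
    10%    "a";
    *      "default";
}

server {
    # existing listen/ssl directives stay as-is
    server_name allthatsinteresting.com www.allthatsinteresting.com;

    location / {
        # pass the assigned group to Varnish
        proxy_set_header AB-Group $ab_group;
        proxy_pass http://127.0.0.1:6081;
    }
}

split_clients hashes whatever key you give it, so keying on $remote_addr gives per-IP stickiness; keying on a cookie would be another option.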
The SSL reverse proxy connects to a Varnish server on the same system.
Varnish can also set the A/B testing header:
https://info.varnish-software.com/blog/live-ab-testing-varnish-and-vcs
Wherever the header is set, Varnish will need to Vary its cache based on that header.
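A minimal VCL sketch of one way to handle both pieces, assuming the AB-Group header name from above and a non-sticky 10/90 random split (a cookie would be needed for per-visitor stickiness). Hashing on the header in vcl_hash gives the same cache separation as a Vary response header from the backend:

vcl 4.0;
import std;

sub vcl_recv {
    # only assign a group here if the edge did not already set one
    if (!req.http.AB-Group) {
        if (std.random(0, 100) < 10) {
            set req.http.AB-Group = "a";
        } else {
            set req.http.AB-Group = "default";
        }
    }
}

sub vcl_hash {
    # cache each group's pages separately
    hash_data(req.http.AB-Group);
}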
Varnish then connects to nginx, which connects to the php-fpm server(s). At this point a different WordPress instance needs to be loaded, and there are a couple of different ways to do that.
The main idea I have right now is to bind mount different plugin and theme directories on top of a base directory. The base directory contains the basic WordPress install, the images, etc. (basically everything that isn’t themes/plugins). One easy way to do this is with containers, but it should also be doable outside containers.
No Containers Method:
– bind mount the base directory to a branch directory (i.e. /vhosts/base -> /vhosts/test-a)
– bind mount the plugins and themes directories on top (i.e. /usr/src/test-a/plugins -> /vhosts/test-a/wordpress/wp-content/plugins); see the mount sketch after the nginx config below
– map the header to root directories in nginx (the map needs to be in the http block):
https://serversforhackers.com/c/nginx-mapping-headers
# use a map so that there is always a default (don't trust the header)
map $http_ab_group $ati_ab_dir {
    default "all-that-is-interesting";
    a       "test-a";
}

server {
    server_name allthatsinteresting.com;
    server_name www.allthatsinteresting.com;

    root /vhosts/$ati_ab_dir;
}
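For the first two steps, the bind mounts would look roughly like this (paths taken from the examples above; the themes source path is assumed to mirror the plugins one):

mount --bind /vhosts/base /vhosts/test-a
mount --bind /usr/src/test-a/plugins /vhosts/test-a/wordpress/wp-content/plugins
mount --bind /usr/src/test-a/themes  /vhosts/test-a/wordpress/wp-content/themes

These would also need fstab entries (or equivalent) to survive a reboot.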
Containers Method:
– switch the nginx config to use containers (basically use a reverse proxy instead of fastcgi)
– which backend to proxy to would be mapped from the header (see the sketch after this list)
– the containers would have the base dir, plugins dir, and themes dir mounted as volumes
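A sketch of what that mapping could look like; the upstream names and ports are placeholders, each pointing at a per-variant WordPress container:

upstream wp_default { server 127.0.0.1:8081; }
upstream wp_test_a  { server 127.0.0.1:8082; }

map $http_ab_group $ati_ab_backend {
    default wp_default;
    a       wp_test_a;
}

server {
    server_name allthatsinteresting.com;
    server_name www.allthatsinteresting.com;

    location / {
        # nginx resolves the variable against the upstream groups above
        proxy_pass http://$ati_ab_backend;
    }
}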
The containers method would require more work, and the worker pool would be segmented among the php-fpm backends, so it’s harder to size the max workers per container. Say the server has memory to handle 128 workers: if you have 3 separate worker pools, do you assume all 3 pools will be maxed out, or do you oversubscribe since chances are the load will be uneven? It’s simpler and safer to just have a single PHP backend (not to mention the added complexity of Docker).