Using a Cloudflare Firewall to Reduce Server Load

How Cloudflare’s free firewall allowed me to cut CPU usage by over 90%.

I host this site (and several others) using SiteGround’s managed WordPress hosting service (disclosure: that’s an affiliate link). They’ve provided great service for years, and I’m happy to stick with them.

A while back, they sent me an email to say that my account was nearing its monthly CPU usage limits. Nothing here is exceptionally high volume on a normal day, so I suspected there was something nefarious afoot. After some searching through the file system, I couldn’t find any evidence of a hacked site or a spam relay, so I started browsing through the reports and statistics in my account admin to try to narrow things down.

What particularly stood out to me: this very domain was receiving millions of hits a month, well above the few hundred to few thousand visitors my analytics tell me come to the site in the same span. After drilling in a bit further, I found that almost all of those requests were to /wp-login.php. You might be surprised to learn that I, the only author on this infrequently updated website, do not log in millions of times a month; thrice would push the bounds of credulity. Someone is trying to brute-force their way into my site.

With the problem sufficiently identified, I needed a system to stop all of that traffic to the login URL, while still allowing myself to log in when necessary (stop laughing, I know I should write more). More out of curiosity than any real technical need, I’ve had this site proxied through Cloudflare’s free plan for several years (about as long as I’ve been with SiteGround). Maybe, thought I, there’s a way to set a firewall on Cloudflare that could mitigate this ongoing threat.

Delighted was I to find that the solution was just a few clicks away. I set up a rule to match any traffic to /wp-login.php. Before a visitor makes it to my host, there’s a brief (approximately 5 second) delay while Cloudflare decides whether the visitor is real. Traffic to the login page stopped immediately, and SiteGround is much happier with my CPU usage.
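For anyone who wants to replicate this: the rule itself is a one-line expression in Cloudflare’s firewall rule language, with the action set to “JS Challenge”. A minimal sketch of the match expression (your dashboard may label the fields slightly differently):

```
(http.request.uri.path eq "/wp-login.php")
```

If you log in from a fixed IP, you could extend the expression with something like `and ip.src ne 203.0.113.1` (a hypothetical address) to skip the challenge for yourself.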

An example of Cloudflare firewall rules filtering requests to wp-login.php

I haven’t had any issues with the JS Challenge filter. I did try the Captcha option but found it too difficult to prove myself human, so it looks like this filter is my best option for now. The stats are showing me that I should probably address xmlrpc.php next.

Instant Braintree Transaction Settlement

I’m working on an e-commerce project using WooCommerce and the Braintree payment gateway. One of the areas I’m testing involves issuing partial refunds on orders (e.g., one item out of a larger order is canceled). Part of the credit card transaction process involves submitting an order for settlement, then waiting for that order to settle with the credit card processor. You cannot issue a partial refund until the transaction is settled. This process can take anywhere from 30 minutes to 24 hours, an inconvenient bottleneck to development and testing.

Fortunately for us, there is a route to circumvent this delay. The Braintree API provides a method to immediately settle sandbox transactions.

In the context of WooCommerce, we can take advantage of this method while still using the API connection established by the Braintree plugin. This can be run in the context of a WP-CLI shell session, built into your own custom CLI command, or called from an automated testing suite.

// get the ID of the transaction you want to settle
$transaction_id = 'your-id-here';

// get the instance of the WooCommerce payment gateway for braintree
$gateway = wc_braintree()->get_gateway( 'braintree_credit_card' );

// use the WooCommerce gateway to authenticate with Braintree's SDK
$sdk = new \Braintree\Gateway( [ 'accessToken' => $gateway->get_auth_access_token() ] );

// call the testing API to immediately settle your transaction
$transaction = $sdk->testing()->settle( $transaction_id );

Your transaction’s status should now be “settled”, and your order is ready for issuing partial refunds.
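If you want to confirm the settlement before attempting the refund, you can re-fetch the transaction through the same SDK instance. A short sketch, assuming the `$sdk` and `$transaction_id` variables from the snippet above:

```
// re-fetch the transaction and verify its status before refunding
$transaction = $sdk->transaction()->find( $transaction_id );

if ( \Braintree\Transaction::SETTLED === $transaction->status ) {
	// safe to issue the partial refund, e.g. from the WooCommerce order screen
}
```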

Augustine on the relationship between scripture and science

We should not rush in headlong and so firmly take our stand on one side that, if further progress in the search of truth justly undermines this position, we too fall with it.

From The Literal Meaning of Genesis by Augustine, translated by John Hammond Taylor.

Book One, Chapter 18

In matters that are obscure and far beyond our vision, even in such as we may find treated in Holy Scripture, different interpretations are sometimes possible without prejudice to the faith we have received. In such a case, we should not rush in headlong and so firmly take our stand on one side that, if further progress in the search of truth justly undermines this position, we too fall with it. That would be to battle not for the teaching of Holy Scripture but for our own, wishing its teaching to conform to ours, whereas we ought to wish ours to conform to that of Sacred Scripture.

Book One, Chapter 19

Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world, about the motion and orbit of the stars and even their size and relative positions, about the predictable eclipses of the sun and moon, the cycles of the years and the seasons, about the kinds of animals, shrubs, stones, and so forth, and this knowledge he holds to as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking nonsense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn. The shame is not so much that an ignorant individual is derided, but that people outside the household of faith think our sacred writers held such opinions, and, to the great loss of those for whose salvation we toil, the writers of our Scripture are criticized and rejected as unlearned men. If they find a Christian mistaken in a field which they themselves know well and hear him maintaining his foolish opinions about our books, how are they going to believe those books in matters concerning the resurrection of the dead, the hope of eternal life, and the kingdom of heaven, when they think their pages are full of falsehoods on facts which they themselves have learnt from experience and the light of reason?

nginx as HTTPS proxy for Elasticsearch

Since Elasticsearch is exposed via an HTTP API, we can use our nginx server to proxy Elasticsearch requests using the HTTPS protocol.

Let’s say you have your local dev environment configured to use SSL, so your dev site is accessible over HTTPS. Wonderful! Now you need to add Elasticsearch to your project. Let’s add it to docker-compose.yml, something like:

version: "2"
services:
  elasticsearch:
    image: elasticsearch # pin whichever version your project needs
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    volumes:
      - elasticsearchindex:/usr/share/elasticsearch/data
    expose:
      - "9200"
    network_mode: "bridge"
  # and some other services like PHP, nginx, memcached, mysql

volumes:
  elasticsearchindex:

How do you make requests to Elasticsearch from the browser?

Option 1: Set up a proxy in your app. This probably resembles what you’ll ultimately get in production. You don’t really need any security on Elasticsearch for local dev, but in production it will need some sort of access control so users can’t send arbitrary requests to the server. If you’re not using a third-party service that already handles this for you, this is where you’ll filter out invalid or dangerous requests. I prefer to let more experienced hands manage server security for me, though, and this is a lot of overhead just to set up a local dev server.

Option 2: Expose Elasticsearch directly. Since I don’t need security locally, I could just open up port 9200 on my container and make requests directly to it from the browser at http://localhost:9200/. Notice the protocol there, though. If my local site is served over HTTPS, then the browser will block insecure mixed-content requests to Elasticsearch.

Option 3: Use nginx as a proxy. I’m already using a reverse proxy in front of my project containers. It terminates the SSL connections and then passes through unencrypted requests to each project’s nginx server. The project’s nginx container doesn’t need to deal with SSL. It listens on port 80 and passes requests to PHP with fastcgi.

server {
	listen 80 default_server;
	# ... more server boilerplate
}

Since Elasticsearch is exposed via an HTTP API, we can create another server block to proxy Elasticsearch requests. First, make sure the nginx container can talk to the Elasticsearch container. In docker-compose.yml:

  nginx:
    image: nginx:stable-alpine
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./nginx/elasticsearch-proxy.conf:/etc/nginx/conf.d/elasticsearch-proxy.conf:ro
      - ./nginx/php.conf:/etc/nginx/php.conf:ro
    links:
      - php
      - elasticsearch
    expose:
      - "80"
    network_mode: "bridge"

And then create elasticsearch-proxy.conf to handle the requests:

upstream es {
	server elasticsearch:9200;
	keepalive 15;
}

server {
	listen 80;

	location / {
		proxy_pass http://es;
		proxy_http_version 1.1;
		proxy_set_header Connection "Keep-Alive";
		proxy_set_header Proxy-Connection "Keep-Alive";
	}
}
Now we can make requests to Elasticsearch from the browser over HTTPS. The nginx proxy will handle the SSL termination and communicate with Elasticsearch using its standard HTTP API.
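To sanity-check the proxy from the command line, you can request the cluster health endpoint through it. The hostname below is a hypothetical placeholder for whatever dev domain your reverse proxy routes to this nginx container:

```shell
# assumes your reverse proxy serves this (hypothetical) hostname over HTTPS
curl -s https://elasticsearch.myproject.test/_cluster/health
```

A JSON body with a `"status"` field means the request made it through nginx to Elasticsearch and back.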

Create and Trust Local SSL Certificate

Automate the creation of locally trusted SSL certificates for use with Docker-based development environments

I use Jason Wilder’s nginx reverse proxy container as the gateway to my various Docker dev environments. Among its other services, it provides SSL termination, so I don’t need to worry about configuring SSL in every container I run.

The setup is pretty simple. Make a directory of certificates and mount it into the container at /etc/nginx/certs. In docker-compose.yml, it would look something like:

version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock

You’ll need to create a new certificate for each domain you want to serve. Add them to the certs dir, and the proxy will find them and serve those domains with SSL.

I’ve created a script that will create the certificate and, on OS X at least, add it to your login keychain as a trusted certificate so you can avoid SSL warnings from your browser. Create the file in your certs directory and run it from there. E.g., certs/

#!/usr/bin/env bash
# Directory containing this script (and the generated certificates)
CERTDIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )

if [ $# -lt 1 ]; then
  echo 1>&2 "Usage: $0 <domain>"
  exit 2
fi

DOMAIN=$1

# Write a temporary OpenSSL config with a wildcard SAN for the domain
cat > ${DOMAIN}.cnf <<-EOF
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
CN = *.${DOMAIN}

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1 = *.${DOMAIN}
DNS.2 = ${DOMAIN}
EOF

# Generate a self-signed certificate and key, valid for ten years
openssl req \
  -new \
  -newkey rsa:2048 \
  -sha1 \
  -days 3650 \
  -nodes \
  -x509 \
  -keyout ${DOMAIN}.key \
  -out ${DOMAIN}.crt \
  -config ${DOMAIN}.cnf

rm ${DOMAIN}.cnf

# On OS X, add the certificate to the login keychain as trusted
if [[ $OSTYPE == darwin* ]]; then
  sudo security add-trusted-cert -d -r trustRoot -k $HOME/Library/Keychains/login.keychain ${DOMAIN}.crt
fi

Reload your proxy, and you can now visit your dev sites over HTTPS without certificate warnings from your browser.