Where the Sidewalk Ends

Software developers work on the border between problems that have already been solved and the uncharted wilderness of possibility.

In my work as a software developer, I solve problems. Specifically, I tend to solve problems that require me to write code that interacts with other systems, frameworks, APIs, etc., extending them to do things that they were never designed to do. Some of those systems are designed to be extensible, others… less so. And it’s typically in those hard-to-implement edge cases that my skills are most called upon.

I work on the border between problems that have already been solved and the uncharted wilderness of possibility. Living in this rustic environment, I spend most of my day staring at flaws and limitations. If only this system exposed the data I need; if only that API didn’t have a jarring inconsistency; if only I could trust the value returned by this method call. But these things exist as they are, and it’s my job to find ways around them. Despite some internal grumbling, it’s usually quite enjoyable work.

Many of the products and platforms I’m trying to extend were designed with particular audiences in mind. Customers in the target audience have common problems that can be solved with common solutions. The problems I’m trying to solve… these are not the common case. When a product successfully anticipates the needs of a customer, when it solves the customer’s problem with minimal effort, when it’s exactly the right tool for the right job… I never see these projects. They’re someone else’s responsibility, because they don’t require my particular skills.

As I poke and prod at these systems, as I uncover the edge cases that push them to their limits, I need to remind myself that my challenges and my frustrations are not indicative of the quality of the product. A product that solves most of a problem for some percentage of its audience (whether that be 95% or 5%) is to be applauded. It’s doing its job well for those users. Even if it’s not for everyone, even if it doesn’t solve every problem, for some users, this is just the solution they need. For everyone else… it’s a big world; there’s room for another product in the market.

Docker Desktop Filesystem Caching: Faster with Mutagen

I started down the Docker path for my local dev environment six years ago. As soon as an alpha version of Docker for Mac was available, I installed it to replace my boot2docker-based VM. I mentioned at the time that its one major drawback was performance of the osxfs filesystem. All these years later, and it’s still sluggish compared to the native filesystem.

I’ve tried various solutions to mitigate the issue. NFS volumes could give a minor performance boost for some applications, but their effect was negligible for my own dev experience. I tried docker-sync for a while, but constantly ran into problems with the sync lagging or stalling. Mutagen seemed similarly promising, but ran into the same issues.

Exciting News!

‘Twas with great delight that I read the announcement that the Docker Desktop team would finally be implementing a solution. That brings us, a couple of months later, to today, when I’ve had the opportunity to test it out.

The syncing solution is built on top of Mutagen. Though I’ve had my issues with it in the past, I’m hopeful that the Docker Desktop team’s official blessing and support will help the tool become efficient, stable, and reliable. It took a little bit of troubleshooting to get to a working installation, so I thought it best to document my steps.


To start with, the official “Edge” release of Docker Desktop for Mac is outdated (yup, that’s what I said). Instead, you can find links to newer versions in the GitHub forums. Today I’m running on build 45494, which I found in a discussion about excluding files from the sync. This build resolves two key issues that I ran into with the Edge release. First, it opens up file permissions on the synced files to resolve write permission errors. Second, it adds support for a global Mutagen config.

The Mutagen config is an essential tool for excluding certain files/directories from the sync. In my particular case, I don’t want my node_modules directories to sync. I use nvm and run my node commands on my host machine. Excluding these directories can cut a large chunk off of the synchronization time. So I created my config file at ~/.mutagen.yml with the following rules:

sync:
  defaults:
    ignore:
      vcs: true
      paths:
        - node_modules

Only after this file is in place can I configure caching according to the documentation. If you enable it beforehand, you’ll have to remove the directory from your config, restart Docker Desktop, and then re-add it.


I ran into some errors with symlinks in my project directory. Mutagen will complain and refuse to sync if there are absolute symlinks in the cached directory. Fortunately, I was able to remove them from my current projects. Otherwise, an option might have been to use the global config to ignore them.
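To track down the offending links before enabling a sync, a quick find invocation will list every symlink in a project whose target is an absolute path (this is a general sketch, not a Mutagen-specific tool):

```shell
# List symlinks in the current project whose target is an
# absolute path (i.e., starts with "/") -- the kind Mutagen
# refuses to sync.
find . -type l -lname '/*'
```

Any paths it prints can then be recreated as relative links (ln -s ../target link) or added to the ignore list in ~/.mutagen.yml.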

The debugging output is not particularly helpful. When Mutagen encounters an error, all you get is an “Error” status in the File Sharing settings of Docker Desktop. Another comment in the forum showed me the proper path to viewing the error. The docker daemon’s HTTP API will show the state of the sync, along with any error messages (note that jq is here to make the output prettier).

curl --unix-socket ~/Library/Containers/com.docker.docker/Data/docker-api.sock http://localhost/cache/state | jq

With all of my errors resolved, I can now start up my containers with the synced directories. The application performance is noticeably faster, with WordPress pages loading in a few hundred milliseconds instead of a few seconds. This shaved about 80-90% off of the total time to run my automated test suites under the osxfs mounts.

Docker Desktop file sharing settings

After a couple of days running, I haven’t seen any show-stopping issues with this new caching. Nice work, Docker Desktop team. I’m looking forward to watching this tool stabilize and improve.

Using a Cloudflare Firewall to Reduce Server Load

How Cloudflare’s free firewall allowed me to cut CPU usage by over 90%.

I host this site (and several others) using SiteGround’s managed WordPress hosting service (disclosure: that’s an affiliate link). They’ve provided great service for years, and I’m happy to stick with them.

A while back, they sent me an email to say that my account was nearing its monthly CPU usage limits. Nothing here is exceptionally high volume on a normal day, so I suspected there was something nefarious afoot. After some searching through the file system, I couldn’t find any evidence of a hacked site or a spam relay, so I started browsing through the reports and statistics in my account admin to try to narrow things down.

What particularly stood out to me: this very domain, xplus3.net, was receiving millions of hits a month, far more than the few hundred to few thousand visitors my analytics tell me come to the site in the same span. Drilling in a bit further, I found that almost all of those requests were to /wp-login.php. You might be surprised to learn that I, the only author on this infrequently updated website, do not log in millions of times a month; thrice would push the bounds of credulity. Someone is trying to brute-force their way into my site.

With the problem sufficiently identified, I needed a system to stop all of that traffic to the login URL, while still allowing myself to log in when necessary (stop laughing, I know I should write more). More out of curiosity than any real technical need, I’ve had this site proxied through Cloudflare’s free plan for several years (about as long as I’ve been with SiteGround). Maybe, thought I, there’s a way to set a firewall on Cloudflare that could mitigate this ongoing threat.

Delighted was I to find that the solution was just a few clicks away. I set up a rule to match any traffic to /wp-login.php. Before a visitor makes it to my host, there’s a brief (approximately five-second) delay while Cloudflare decides whether they’re a real visitor. Traffic to the login page stopped immediately, and SiteGround is much happier with my CPU usage.
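For reference, the matching expression is simple. In Cloudflare’s firewall-rule language it looks roughly like this (the JS Challenge action is selected separately in the dashboard):

```
(http.request.uri.path eq "/wp-login.php")
```

The same expression extends naturally with an `or` clause, e.g. `(http.request.uri.path eq "/wp-login.php") or (http.request.uri.path eq "/xmlrpc.php")`, should other endpoints need the same treatment.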

An example of Cloudflare firewall rules filtering requests to wp-login.php

I haven’t had any issues with the JS Challenge filter. I did try the Captcha option but found it too difficult to prove myself human, so it looks like this filter is my best option for now. The stats are showing me that I should probably address xmlrpc.php next.

Instant Braintree Transaction Settlement

I’m working on an e-commerce project using WooCommerce and the Braintree payment gateway. One of the areas I’m testing involves issuing partial refunds on orders (e.g., one item out of a larger order is canceled). Part of the credit card transaction process involves submitting an order for settlement, then waiting for that order to settle with the credit card processor. You cannot issue a partial refund until the transaction is settled. This process can take anywhere from 30 minutes to 24 hours, an inconvenient bottleneck to development and testing.

Fortunately for us, there is a route to circumvent this delay. The Braintree API provides a method to immediately settle sandbox transactions.

In the context of WooCommerce, we can take advantage of this method while still using the API connection established by the Braintree plugin. This can be run in the context of a WP-CLI shell session, built into your own custom CLI command, or called from an automated testing suite.

// get the ID of the transaction you want to settle
$transaction_id = 'your-id-here';

// get the instance of the WooCommerce payment gateway for Braintree
$gateway = wc_braintree()->get_gateway( 'braintree_credit_card' );

// use the WooCommerce gateway to authenticate with Braintree's SDK
$sdk = new \Braintree\Gateway( [ 'accessToken' => $gateway->get_auth_access_token() ] );

// call the testing API to immediately settle your transaction
$transaction = $sdk->testing()->settle( $transaction_id );

Your transaction’s status should now be “settled”, and your order is ready for issuing partial refunds.
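To double-check, the same SDK instance can fetch the transaction back. This is a sketch that assumes the $sdk and $transaction_id variables from the snippet above, plus a live sandbox connection, so it won’t run standalone:

```php
// Re-fetch the transaction and confirm its new status.
$transaction = $sdk->transaction()->find( $transaction_id );

if ( \Braintree\Transaction::SETTLED === $transaction->status ) {
	WP_CLI::success( 'Transaction settled.' );
}
```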

nginx as HTTPS proxy for Elasticsearch

Since Elasticsearch is exposed via an HTTP API, we can use our nginx server to proxy Elasticsearch requests over HTTPS.

Let’s say you have your local dev environment configured to use SSL. Your dev site is accessible at https://mysite.dev/. Wonderful! Now you need to add Elasticsearch to your project. Let’s add it to docker-compose.yml, something like:

version: "2"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.2.2
    environment:
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    volumes:
      - elasticsearchindex:/usr/share/elasticsearch/data
    expose:
      - "9200"
    network_mode: "bridge"
  # and some other services like PHP, nginx, memcached, mysql

volumes:
  elasticsearchindex:

How do you make requests to Elasticsearch from the browser?

Option 1: Set up a proxy in your app. This probably resembles what you’ll ultimately get in production. You don’t really need any security on Elasticsearch for local dev, but in production it will need some sort of access control so users can’t send arbitrary requests to the server. If you’re not using a third-party service that already handles this for you, this is where you’ll filter out invalid or dangerous requests. I prefer to let more experienced hands manage server security for me, though, and this is a lot of overhead just to set up a local dev server.

Option 2: Expose Elasticsearch directly. Since I don’t need security locally, I could just open up port 9200 on my container and make requests directly to it from the browser at http://localhost:9200/. Notice the protocol there, though. If my local site is at https://mysite.dev/, then the browser will block insecure requests to Elasticsearch.

Option 3: Use nginx as a proxy. I’m already using a reverse proxy in front of my project containers. It terminates the SSL connections and then passes through unencrypted requests to each project’s nginx server. The project’s nginx container doesn’t need to deal with SSL. It listens on port 80 and passes requests to PHP with fastcgi.

server {
	listen 80 default_server;
	server_name mysite.dev;
	# ... more server boilerplate
}

Since Elasticsearch is exposed via an HTTP API, we can create another server block to proxy Elasticsearch requests. First, make sure the nginx container can talk to the Elasticsearch container. In docker-compose.yml:

  nginx:
    image: nginx:stable-alpine
    environment:
      - VIRTUAL_HOST=mysite.dev,*.mysite.dev
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./nginx/elasticsearch-proxy.conf:/etc/nginx/conf.d/elasticsearch-proxy.conf:ro
      - ./nginx/php.conf:/etc/nginx/php.conf:ro
    links: # lets nginx resolve the elasticsearch hostname in bridge mode
      - php
      - elasticsearch
    expose:
      - "80"
    network_mode: "bridge"

And then create elasticsearch-proxy.conf to handle the requests:

upstream es {
	server elasticsearch:9200;
	keepalive 15;
}

server {
	listen 80;
	server_name search.mysite.dev;

	location / {
		proxy_pass http://es;
		proxy_http_version 1.1;
		proxy_set_header Connection "Keep-Alive";
		proxy_set_header Proxy-Connection "Keep-Alive";
	}
}

Now we can make requests to Elasticsearch from the browser at https://search.mysite.dev/. The nginx proxy will handle the SSL termination, and communicate with Elasticsearch using its standard HTTP API.