How to reduce latency with Samsung Galaxy Buds on macOS

I use Samsung Galaxy Buds as my primary headphones. They’re great. Sometimes, though, I notice that audio coming from my Mac is delayed by ~1 second compared to the video.

After a bit of digging, I’ve discovered that macOS defaults to using the SBC codec, which does some buffering that can lead to that delay. Switching to the AAC codec (or fiddling with SBC settings) can eliminate that delay.

The magic incantation:

sudo defaults write bluetoothaudiod "Enable AAC codec" -bool true

Reconnect your headset, and the delay is unnoticeable.
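
To confirm the setting stuck, you can read the value back with the same tool (sudo again, since the write above landed in root’s preference domain); a result of 1 means the preference is set:

sudo defaults read bluetoothaudiod "Enable AAC codec"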

Organize SSH config with Includes

My SSH config file has grown organically over the years. Overall, I try to keep it organized and well-commented. Today I thought to ask if I could include other files in my config, to separate groups of related servers into separate files. The answer, delightfully, is yes!

You can include specific files, or even an entire directory. I went with the latter:

Include config.d/*

Adding this line to the top of ~/.ssh/config means that I can create as many files as I want in ~/.ssh/config.d/, and any Host stanzas in those files will be included as if they are in ~/.ssh/config.

Now I have a separate file for each customer whose servers I access: ~/.ssh/config.d/customer1, ~/.ssh/config.d/customer2, etc. Any time I add a new customer’s servers, I can add a new file and it’s automatically included. This makes it easier both to see which servers are associated with a customer and to see which customers’ servers I may have access to (for when I need to clean up later).
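
For illustration, one of those customer files (with made-up host names) might look something like this:

# ~/.ssh/config.d/customer1
Host customer1-web
    HostName web.customer1.example.com
    User deploy
    IdentityFile ~/.ssh/id_customer1

Host customer1-db
    HostName db.customer1.example.com
    User deploy
    IdentityFile ~/.ssh/id_customer1

With that in place, ssh customer1-web behaves exactly as if the stanza were written directly in ~/.ssh/config.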

Where the Sidewalk Ends

Software developers work on the border between problems that have already been solved and the uncharted wilderness of possibility.

In my work as a software developer, I solve problems. Specifically, I tend to solve problems that require me to write code that interacts with other systems, frameworks, APIs, etc., extending them to do things that they were never designed to do. Some of those systems are designed to be extensible, others… less so. And it’s typically in those hard-to-implement edge cases that my skills are most called upon.

I work on the border between problems that have already been solved and the uncharted wilderness of possibility. Living in this rustic environment, I spend most of my day staring at flaws and limitations. If only this system exposed the data I need; if only that API didn’t have a jarring inconsistency; if only I could trust the value returned by this method call. But these things exist as they are, and it’s my job to find ways around them. Despite some internal grumbling, it’s usually quite enjoyable work.

Many of the products and platforms I’m trying to extend were designed with particular audiences in mind. Customers in the target audience have common problems that can be solved with common solutions. The problems I’m trying to solve… these are not the common case. When a product successfully anticipates the needs of a customer, when it solves the customer’s problem with minimal effort, when it’s exactly the right tool for the right job… I never see these projects. They’re someone else’s responsibility, because they don’t require my particular skills.

As I poke and prod at these systems, as I uncover the edge cases that push them to their limits, I need to remind myself that my challenges and my frustrations are not indicative of the quality of the product. A product that solves most of a problem for some percentage of its audience (whether that be 95% or 5%) is to be applauded. It’s doing its job well for those users. Even if it’s not for everyone, even if it doesn’t solve every problem, for some users, this is just the solution they need. For everyone else… it’s a big world; there’s room for another product in the market.

Docker Desktop Filesystem Caching: Faster with Mutagen

I started down the Docker path for my local dev environment six years ago. As soon as an alpha version of Docker for Mac was available, I installed it to replace my boot2docker-based VM. I mentioned at the time that its one major drawback was performance of the osxfs filesystem. All these years later, and it’s still sluggish compared to the native filesystem.

I’ve tried various solutions to mitigate the issue. NFS volumes gave a minor performance boost for some applications, but the effect was negligible for my own dev experience. I tried docker-sync for a while, but constantly ran into problems with the sync lagging or stalling. Mutagen seemed similarly promising, but ran into the same issues.

Exciting News!

’Twas with great delight that I read the announcement that the Docker Desktop team would finally be implementing a solution. That brings us, a couple of months later, to today, when I’ve had the opportunity to test it out.

The syncing solution is built on top of Mutagen. Though I’ve had my issues with it in the past, I’m hopeful that the Docker Desktop team’s official blessing and support will help the tool become efficient, stable, and reliable. It took a little bit of troubleshooting to get to a working installation, so I thought it best to document my steps.

Configuration

To start with, the official “Edge” release of Docker Desktop for Mac is outdated (yup, that’s what I said). Instead, you can find links to newer versions in the GitHub forums. Today I’m running on build 45494, which I found in a discussion about excluding files from the sync. This build resolves two key issues that I ran into with the Edge release. First, it opens up file permissions on the synced files to resolve write permission errors. Second, it adds support for a global Mutagen config.

The Mutagen config is an essential tool for excluding certain files/directories from the sync. In my particular case, I don’t want my node_modules directories to sync. I use nvm and run my node commands on my host machine. Excluding these directories can cut a large chunk off of the synchronization time. So I created my config file at ~/.mutagen.yml with the following rules:

sync:
  defaults:
    ignore:
      vcs: true
      paths:
        - node_modules
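
If you’re curious how much data that exclusion skips, a quick (and entirely optional) way to total up a project’s node_modules directories:

find . -type d -name node_modules -prune -exec du -sh {} +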

Only after this file is in place can I configure caching according to the documentation. If you enable it beforehand, you’ll have to remove the directory from your config, restart Docker Desktop, and then re-add it.

Troubleshooting

I ran into some errors with symlinks in my project directory. Mutagen will complain and refuse to sync if there are absolute symlinks in the cached directory. Fortunately, I was able to remove them from my current projects. Otherwise, an option might have been to use the global config to ignore them.
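
If you need to hunt down the offenders, find can match on a link’s target (here, -lname '/*' catches any symlink whose target starts at the root):

find . -type l -lname '/*'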

The debugging output is not particularly helpful. When Mutagen encounters an error, all you get is an “Error” status in the File Sharing settings of Docker Desktop. Another comment in the forum showed me the proper path to viewing the error. The docker daemon’s HTTP API will show the state of the sync, along with any error messages (note that jq is here to make the output prettier).

curl --unix-socket ~/Library/Containers/com.docker.docker/Data/docker-api.sock http://localhost/cache/state | jq

With all of my errors resolved, I can now start up my containers with the synced directories. The application performance is noticeably faster, with WordPress pages loading in a few hundred milliseconds instead of a few seconds. Compared to the osxfs mounts, this shaved about 80-90% off the total time to run my automated test suites.

Docker Desktop file sharing settings

After a couple of days running, I haven’t seen any show-stopping issues with this new caching. Nice work, Docker Desktop team. I’m looking forward to watching this tool stabilize and improve.

Update (2020-07-07): The latest edge version of Docker Desktop makes this even simpler. By using the delegated mount strategy, Mutagen will be automatically enabled for the directory. According to the discussion, future versions will also allow one to disable the Mutagen caching for a directory by explicitly setting the consistent or cached strategy.
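
For example, a docker-compose volume definition like this one (paths illustrative) picks up the Mutagen caching automatically on that edge build:

services:
  wordpress:
    image: wordpress:latest
    volumes:
      # delegated opts in to Mutagen syncing on this build;
      # per the discussion, consistent or cached will opt out in future versions
      - ./wordpress:/var/www/html:delegated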

Using a Cloudflare Firewall to Reduce Server Load

How Cloudflare’s free firewall allowed me to cut CPU usage by over 90%.

I host this site (and several others) using SiteGround’s managed WordPress hosting service (disclosure: that’s an affiliate link). They’ve provided great service for years, and I’m happy to stick with them.

A while back, they sent me an email to say that my account was nearing its monthly CPU usage limits. Nothing here is exceptionally high volume on a normal day, so I suspected there was something nefarious afoot. After some searching through the file system, I couldn’t find any evidence of a hacked site or a spam relay, so I started browsing through the reports and statistics in my account admin to try to narrow things down.

What particularly stood out to me: this very domain, xplus3.net, was receiving millions of hits a month, far more than the few hundred to few thousand visitors my analytics tell me come to the site in the same span. Drilling in a bit further, I found that almost all of those requests were to /wp-login.php. You might be surprised to learn that I, the only author on this infrequently updated website, do not log in millions of times a month; thrice would push the bounds of credulity. Someone is trying to brute-force their way into my site.

With the problem sufficiently identified, I needed a system to stop all of that traffic to the login URL, while still allowing myself to log in when necessary (stop laughing, I know I should write more). More out of curiosity than any real technical need, I’ve had this site proxied through Cloudflare’s free plan for several years (about as long as I’ve been with SiteGround). Maybe, thought I, there’s a way to set a firewall on Cloudflare that could mitigate this ongoing threat.

Delighted was I to find that the solution was just a few clicks away. I set up a rule to match any traffic to /wp-login.php. Before a visitor makes it to my host, there’s a brief (approximately 5 second) delay while Cloudflare decides if they’re a real visitor. Traffic to the login page stopped immediately, and SiteGround is much happier with my CPU usage.
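
The rule itself boils down to a one-line expression in Cloudflare’s firewall rule builder, with JS Challenge as the action:

http.request.uri.path eq "/wp-login.php"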

An example of Cloudflare firewall rules filtering requests to wp-login.php

I haven’t had any issues with the JS Challenge filter. I did try the Captcha option but found it too difficult to prove myself human, so it looks like this filter is my best option for now. The stats are showing me that I should probably address xmlrpc.php next.
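
When I get to it, that should be a matter of broadening the same expression, something like:

http.request.uri.path eq "/wp-login.php" or http.request.uri.path eq "/xmlrpc.php"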