I have some exciting news to share before we start—Frontend at Scale has officially reached 1,000 subscribers 🎉
This is absolutely incredible to me, and I cannot thank you enough for being part of this journey so far. Whether you’ve been reading my ramblings from the very beginning or you just joined us last week, I want you to know how much I appreciate you hanging out with me every two weeks.
If you enjoy the newsletter, the absolute best way to support it is by sharing it with anyone you think would find it useful. If you’d like to write a social post about it (thank you 🙏) and need some inspiration, here’s a list of terrible tweets I asked ChatGPT to write about the newsletter. They’re really bad.
This week, we’ll talk about preparatory refactoring, empirical software design, the rules of programming, and what’s happening in the world of local-first development.
Let’s dive in.
Hard Changes Made Easy
One of the main reasons our project estimates miss the mark (sometimes, by a lot) is that we underestimate how hard seemingly simple changes truly are.
Unless we’re working on a greenfield project, the effort to implement a new feature will depend not only on the complexity of the feature itself but also on the structure of the codebase it’ll be a part of.
Even if we build our new feature as a nicely encapsulated LEGO block, it will still need to integrate with our existing codebase somehow—and this can make all the difference. Adding a LEGO block to a castle is easy if the castle is made of LEGOs; not so easy if it’s made of sand.
When this happens, we have essentially two options: we could charge ahead and implement the feature anyway, or we could take a step back and make the change easy first. As Kent Beck puts it:
“For each desired change, make the change easy (warning: this may be hard), then make the easy change.”
This is what Martin Fowler calls Preparatory Refactoring, and while sometimes it might be obvious that you need to refactor a piece of code before implementing a new feature, it’s not always clear what “making the change easy” looks like.
So today I wanted to share a few examples of this type of refactoring in frontend applications. I’ve grouped them into three categories that represent some of the most common types of changes you might encounter in your own projects: changes in shape, changes in rules, and changes in number.
Changes in Shape
Changing the shape of an app’s user interface is arguably the most important part of our jobs as frontend engineers. These changes could be big (e.g. adjusting the entire layout of a page), or they could be pretty small, like changing the color of a button.
Sometimes, changing the shape of a component is as easy as changing a prop. But this isn’t always the case. Depending on how the structure of your components evolved over time, changes in shape can sometimes be very hard.
A good example of this is a Card component. If we build it with a minimal interface, the component will be super easy to use, but also very inflexible. Imagine a Card component like the example below: how could we change the inner button’s size or label?
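The original snippet isn’t included here, so here’s a minimal sketch of what such a component might look like (hypothetical names, and plain functions returning markup strings instead of JSX, to keep the example framework-agnostic):

```typescript
// Hypothetical sketch: a Card with a minimal, inflexible interface.
// Plain functions returning markup strings stand in for JSX here.
interface CardProps {
  title: string;
  description: string;
}

function Card({ title, description }: CardProps): string {
  return `<div class="card">
  <h2>${title}</h2>
  <p>${description}</p>
  <button>Learn more</button>
</div>`;
}
```

Calling `Card({ title, description })` is about as easy as it gets, but the button’s size and label are locked inside the component.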
The only way to change the shape of this component is by adding new props to it, like buttonSize or buttonLabel. This isn’t a big deal if we only add one or two props, but it’s easy to imagine how this could evolve to dozens of props if we wanted to make this component fully customizable.
Preparatory refactoring in this case involves taking advantage of component composition. We could change the structure of the Card component so that we can change its shape via its contents instead of its interface. This gives us the flexibility we need without having to add a bunch of props to it.
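As a sketch of that refactoring (again with markup strings standing in for JSX, and hypothetical class names), the composable version hands control of the contents to the caller:

```typescript
// Hypothetical sketch: the Card now only owns its container.
// Callers decide what goes inside, including the button's size and label.
function Card(children: string): string {
  return `<div class="card">${children}</div>`;
}

const pricingCard = Card(`
  <h2>Pricing</h2>
  <p>Choose the plan that fits your team.</p>
  <button class="button--large">Get started</button>
`);
```

The Card’s interface never has to grow, no matter how many variations its contents need.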
Component composition is not a silver bullet, but it's a good pattern to keep in mind if your components need to change their shape depending on the use case.
Changes in Rules
The rules of your system may or may not be meant to be broken, but they’re definitely meant to be changed.
This applies to any rule in your application, whether implicit or explicit: validation rules, user permissions, routing, redirects, authorization, and so on. These are constantly evolving, and if you find that changing them is hard, it’s typically because of one of two reasons:
- A set of rules is spread across different parts of your codebase, or
- The rules are intertwined with business logic.
We saw an example of the former when we talked about cognitive load a couple of issues ago. Here’s another example, from a talk I gave recently, of how mixing up rules and logic can make things really hard to change:
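The original snippet isn’t reproduced here, but a hypothetical sketch of the same kind of code might look like this:

```typescript
// Hypothetical sketch: access rules tangled up with filtering logic.
interface User {
  role: "admin" | "owner" | "member";
  plan: "free" | "pro";
}

interface NavLink {
  href: string;
  label: string;
}

function getVisibleLinks(user: User, links: NavLink[]): NavLink[] {
  return links.filter((link) => {
    if (link.href === "/admin" && user.role !== "admin") {
      return false;
    }
    if (link.href === "/billing" && user.role === "member") {
      return false;
    }
    if (link.href === "/analytics" && user.plan === "free" && user.role !== "admin") {
      return false;
    }
    return true;
  });
}
```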
This is part of a messy function that filters down a list of links based on whether the user has access to it or not. You can probably just take a glance at it and imagine that changing the rules here would be really hard since they’re so intertwined with the filtering logic.
When this happens, it’s often a good idea to do some preparatory refactoring and separate your rules from your business logic so that they’re easier to change. Bonus points if you manage to represent your rules as a data structure.
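Sticking with the hypothetical link-filtering example, the refactored version might pull every rule into a single lookup table, so changing a rule never touches the filtering logic:

```typescript
// Hypothetical sketch: access rules live in one data structure,
// separate from the (now trivial) filtering logic.
interface User {
  role: "admin" | "owner" | "member";
  plan: "free" | "pro";
}

interface NavLink {
  href: string;
  label: string;
}

// Each rule answers a single question: can this user see this path?
const accessRules: Record<string, (user: User) => boolean> = {
  "/admin": (user) => user.role === "admin",
  "/billing": (user) => user.role !== "member",
  "/analytics": (user) => user.plan !== "free" || user.role === "admin",
};

function getVisibleLinks(user: User, links: NavLink[]): NavLink[] {
  // Paths with no rule are visible to everyone.
  return links.filter((link) => accessRules[link.href]?.(user) ?? true);
}
```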
As with any type of refactoring, it’s important to keep your structural changes separated from your behavioral changes. Make the change easy first, commit, review, and then make the easy change.
Changes in Number
Here’s one type of refactoring you’ve almost certainly done in the past: you built a feature in your app thinking there would only ever be one of a thing, but now, of course, you need to support many of them.
Changes in number could take many forms:
- Your users could only belong to a single team, but now they can belong to many.
- Your website was only in English, but now you need to support multiple languages.
- Your search results screen only had a single page, but now you need to add pagination to it.
Going from one to many almost always requires some sort of refactoring, and in some cases (e.g. internationalization), those changes can spread across your entire codebase.
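To make the first bullet concrete, here’s a sketch (with hypothetical types) of how a one-to-many change in the data model ripples into every consumer of the old field:

```typescript
interface Team {
  id: string;
  name: string;
}

// Before: a user belongs to exactly one team.
interface UserBefore {
  id: string;
  team: Team;
}

// After: a user can belong to many teams. Every piece of code that read
// `user.team` now has to decide how to handle zero, one, or many teams.
interface UserAfter {
  id: string;
  teams: Team[];
}

function teamLabel(user: UserAfter): string {
  return user.teams.map((team) => team.name).join(", ") || "No team";
}
```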
But there’s one type of change in number that’s worth calling out in particular, which happens when we add a hard limit to the number of things we can support. So instead of going from one to many, we only go from one to two.
For instance, imagine you wanted to add “dark mode” to your website. Since you only need to support two themes, one light and one dark, it’s easy to think of implementing this with a boolean flag:
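The original snippet isn’t included here; a boolean-flag version (with hypothetical names and colors) might look like this:

```typescript
// Hypothetical sketch: theming with a boolean flag. Works for exactly
// two themes, and every theme after that forces a rewrite.
function getThemeStyles(isDarkMode: boolean): { background: string; text: string } {
  return isDarkMode
    ? { background: "#1a1a2e", text: "#eeeeee" }
    : { background: "#ffffff", text: "#111111" };
}
```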
But if there’s even a remote chance that your website might need to support multiple themes in the future (for instance, during the transitional phase of a rebranding), you should consider whether supporting an unlimited number of themes could be done with a similar level of effort:
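A sketch of the more general version (again with hypothetical names and colors): themes live in a lookup table, and adding a third one is just one more entry:

```typescript
interface Theme {
  background: string;
  text: string;
}

type ThemeName = "light" | "dark";

// A rebrand theme is one more entry here, not a rewrite.
const themes: Record<ThemeName, Theme> = {
  light: { background: "#ffffff", text: "#111111" },
  dark: { background: "#1a1a2e", text: "#eeeeee" },
};

function getThemeStyles(name: ThemeName): Theme {
  return themes[name];
}
```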
You might be thinking:
“Alright, but what about YAGNI? What if I never end up needing more than two themes? Isn’t that just wasted effort?”
To which I say… that’s a great point.
In general, you don’t want to prematurely optimize your codebase for an imaginary future, but if you’re going to have to refactor your codebase anyway—whether you plan to support two themes or a thousand—you should opt for the more general solution as long as it doesn’t blow up in scope and complexity. The downsides of doing this are small, and the upsides are monumental.
You might have noticed a common thread in these different types of changes. When adding a feature is hard, we can make it easy by answering a simple question:
“What would the codebase look like had it been designed from the beginning with this feature in mind?”
This is easier said than done, of course, but hopefully, today’s newsletter gave you a few ideas for how to answer it in your own projects.
A Daily Practice of Empirical Software Design
Today’s essay was inspired by this talk from Kent Beck at this year's Domain-Driven Design Europe conference.
In his talk, Kent walks us through two of the main concepts in software design: coupling and cohesion, going all the way back to when the terms were originally coined almost sixty years ago. He talks about why these concepts matter so much and how understanding them well can help us answer a very important question: “Why is software so expensive?” Spoiler alert: it’s because making big changes to it is often very hard.
He also gives us the key to making software less costly to maintain, which is software design itself. According to Kent, the reason we design software is so that we can change it, and change it at a reasonable cost.
This talk is also a great introduction to his latest book “Tidy First?”, in which he covers some of the same topics we talked about today: when we find a piece of code that is messy or hard to work with, do we invoke our inner Marie Kondo and tidy things up first, or should we leave the cleanup for later? I grabbed a copy of this book earlier this week, and I’m excited to dive into it next, so expect a review in an upcoming issue of the newsletter 🙂
Watch on YouTube
Links Worth Checking Out
Great write-up from the GitHub team on the emerging architecture of LLM applications. If you're building one yourself (of course you are), you should give this one a read.
Probably the most comprehensive guide on micro-frontends you'll find for free online. It has dozens of articles with everything you need to know about this architectural pattern.
This is something we talk about in the newsletter pretty often. The key to reducing cognitive load is to write simpler code, and this article shares a few strategies for how to do it.
There has been a ton of activity in the world of local-first development recently. If you haven't been following this trend, this article is a great way to catch up.
This 16-minute video is one of the best explainers of signals I've seen, using Preact signals as an example and comparing them to React hooks. Super clear and straight to the point.
The Rules of Programming
The Rules of Programming by Chris Zimmerman was a really pleasant surprise. I picked it up last month without knowing anything about it, and it quickly became one of my favorite technical reads of the year.
Zimmerman is a co-founder of the Sucker Punch gaming studio, so if you’ve played games like Infamous or Ghost of Tsushima, you’ve played with code written using these very rules. These are the rules his team uses to write simpler, more maintainable code, which I’d say is essential when building software as massive as a modern video game.
Here are a few of my favorite rules:
- Rule 1: As Simple as Possible, but No Simpler. I love that this is rule #1—writing simple code that is both correct and complete is our best tool for managing complexity.
- Rule 3: Generalization Takes Three Examples. You might know this as the rule of three—before creating a reusable abstraction, make sure you have at least three concrete use cases for it.
- Rule 7: Eliminate Failure Cases. When writing a piece of code, we can prevent user errors by making it impossible (or at least, very hard) to do the wrong thing.
- Rule 15: Pull the Weeds. Every codebase has those small issues that are easy to fix but also easy to ignore. Pulling the weeds regularly is a low-effort method for keeping your system in good shape.
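Rule 7 in particular maps nicely onto frontend work. Here’s a sketch in TypeScript (my example, not one from the book): modeling a request’s states as a discriminated union so the invalid combinations can’t even be constructed:

```typescript
// Error-prone: nothing stops { status: "error", data: "..." },
// or a "success" with no data at all.
type LoadStateLoose = { status: string; data?: string; error?: string };

// Failure cases eliminated: each status carries exactly the fields
// that make sense for it, and TypeScript rejects everything else.
type LoadState =
  | { status: "loading" }
  | { status: "success"; data: string }
  | { status: "error"; error: string };

function render(state: LoadState): string {
  switch (state.status) {
    case "loading":
      return "Loading…";
    case "success":
      return state.data;
    case "error":
      return `Error: ${state.error}`;
  }
}
```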
And if you need an extra incentive to pick up a copy of The Rules of Programming, you should know that all of Zimmerman’s royalties from the book go directly to Girls Who Code, an organization helping young women thrive and lead in the tech workforce.
Read more on O'Reilly
That’s all for today, friends! Thank you for making it all the way to the end. If you enjoyed the newsletter, it would mean the world to me if you’d share it with your friends and coworkers. (And if you didn't enjoy it, why not share it with an enemy?)
Did someone forward this to you? First of all, tell them how awesome they are, and then consider subscribing to the newsletter to get the next issue right in your inbox.
I read and reply to all of your comments. Feel free to reach out on Twitter or reply to this email directly with any feedback or questions.
Have a great week 👋