
Things that are completely unacceptable

This article is yet another rant. Be careful.

The build and test processes are not reproducible

A weird trend I’ve been seeing in recent years is new projects whose README.md contains the phrase “in order to run the project locally, you’ll have to”, followed by half a dozen items, or maybe a big chunk of text explaining what the hell the reader must do by hand - something that should be quite simple and probably fast.

It’s not like we don’t have a better way of doing this, right?

But what amazes me most is the fact that… it’s still there. The problem persists and nobody takes any action to make it go away! Not a single tech lead on the team, not a single developer in the entire company, no one moves a finger to correct this damn horrible situation, because people think that “now that it’s running on my machine, it’s okay” or even “you’re going to do all that only once, anyway”.

Bullshit!

What’s actually happening is that onboarding new developers is going to take days instead of hours, because everything is pure shit in this damn company and nobody really cares.

Yes, it really gets on my nerves!

First, no project should start without the supervision of a senior developer. Unfortunately, that’s the reality in a lot of places: junior developers starting projects. And these junior developers are not to blame - on the contrary, companies are abusing them by making them “deliver” things without the proper (and desired!) guidance.

When a developer says “it’s only five commands, doesn’t seem so complex to me”, there you have evidence of total non-seniority, because senior programmers (1) are lazy; (2) know that when the world is on fire, complexity is gasoline: no matter the amount, it’s not going to help you at all; and (3) value their mental energy and like to focus it where it’s really needed, since programmers are the most expensive resource around.

Building, then, should be done in one command. It doesn’t really matter if it’s bin/build.sh or make build: it must be simple and it must be obvious, so that you can tell the new developer “whatever you need to do in this project, take a look at the bin/ directory first - it’s probably there” (or even “type make and hit Tab”).
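Just as an illustration (the script name, the docker-compose usage and the “app” service are assumptions of mine, not a prescription), such a wrapper can be embarrassingly small:

    #!/usr/bin/env bash
    # bin/build.sh - hypothetical one-command build wrapper
    set -euo pipefail
    cd "$(dirname "$0")/.."    # always run from the repository root
    docker-compose build app   # 'app' is an assumed service name; use whatever your project defines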

Second, tests should run quickly whenever possible. If that’s not possible, they should at least run daily. Either way, there must be a script ready to build and test - that’s how you tell a computer how to build, right? So why waste time writing instructions for a human being in an extremely ambiguous language when you can spend that time writing code, in the form of a build script that can be objectively executed and objectively scrutinized?
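A sketch of the same idea for tests (again, bin/test.sh and the “make test” entry point are hypothetical names, not the One True Way):

    #!/usr/bin/env bash
    # bin/test.sh - hypothetical one-command build-and-test wrapper
    set -euo pipefail
    cd "$(dirname "$0")/.."
    bin/build.sh                            # build first, with the same script everybody uses
    docker-compose run --rm app make test   # then run the tests inside a throwaway container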

Testing must be simple

If testing your project is hard, the result is obvious: it’s not going to be tested at all. And if you force it, then you’re burning your developers out, because the process is probably stupid and a pain in the ass.

Ah, and your testing results will be shit, too, varying according to each developer’s mood and covering more or less of the code with each run and each release.

And I won’t trust it at all.

Which tools to use?

I like using Docker and docker-compose, but you can also use Vagrant to run your project locally and Packer to generate images for however you deploy your project.

If your company has a separate “devops” team (and that’s usually a terrible idea…), make sure you’re communicating properly with them. You don’t want to choose Docker now only to discover later that going HashiCorp would be much better in terms of deploying to production.

Remember: the main idea is to make everything reproducible, so you probably don’t want your local environment to be all that different from production, really.

The test suite itself varies a lot depending on your language. I’m not going to recommend anything, but I will say something very important: if you have no idea at all, ask a more experienced developer. Look for real experience here, because a beautiful website may not be enough to tell whether a tool is going to be a good asset or an inconvenience.

Simple rules

Whatever tools you choose, be sure to cover the basics: the build runs in one command, the tests run in one command, and both behave the same on every developer’s machine and in the CI/CD pipeline.

If your tests depend on building, the rule still applies: it must all run in one command (not two, like “first build, then run”).

It’s okay to have a “setup” script, too, but it must be clearly stated in the README file. There are some cases where it fits very well into the CI/CD pipeline, even. Sometimes setting up the project locally can take some time, so it’s generally advisable that developers run it right after cloning the repository.
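Something like this, purely as a sketch (every file and service name here is invented for the example):

    #!/usr/bin/env bash
    # bin/setup.sh - hypothetical one-time local setup, meant to be run right after cloning
    set -euo pipefail
    cd "$(dirname "$0")/.."
    [ -f .env ] || cp .env.example .env   # local configuration, assuming an example file is versioned
    docker-compose pull                   # pre-fetch images so the first build and test run faster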

About operating systems: always conform to whatever you’re using in production. If you have Windows servers (may God have mercy on your soul), then it’s probably a… good?… idea to run Windows on each developer’s machine (ugh!). But, of course, if you’re a sane person, you’ll be using some POSIX-compliant system in production, so it’s safe to assume most developers will be capable of running bash-and-Docker on their machines.

There are no automated tests

Completely unacceptable. And don’t even start with the “if it compiles, it probably works”. That’s bullshit and you probably already know that.

Now, I’d like to highlight this word: automated. It is much more important than whether they are “unit tests”, “functional tests”, “end-to-end tests” or whatever. I don’t care that much. But whatever fits your project and your team, these tests must be automated: the developer calls a script and after some time a verdict is reached: it passes or it fails. The same goes for your CI/CD pipeline: it must be able to run the same tests before deploying your new code.
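The “verdict” is nothing more than the script’s exit code, so the same line works on a developer’s machine and in the pipeline (bin/test.sh and bin/deploy.sh are hypothetical wrappers, just to show the shape of it):

    # Deploy only if the exact same test script that developers run locally passes.
    bin/test.sh && bin/deploy.sh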

Your git history is full of shit

Your codebase is a precious asset and, trust me, nobody is interested in knowing what process you, personally, followed to reach the accepted state of your changes to the code. If you realized you were making a mistake after you had already opened a Pull/Merge Request or whatever, a commit like “fix typos from last PR review” is completely meaningless a week later, so don’t taint your git history with this kind of thing.

Think about the codebase as something that is working. It’s functional, it’s generating revenue, it’s paying everybody’s salary, and now you, you little piece of shit, are trying to change that and risk the well-being of all our families.

GET OFF MY LAWN!

So, obviously, the owner of the codebase is not going to accept a deficient commit. It would be madness! Right? Who in his/her right mind would do such a thing?

We’re not animals!

And that’s why (1) you must deliver atomic commits, that is, each commit delivers something in its entirety, be it a new feature, a bug fix or whatever, and (2) you want many, many colleagues of yours to review your code, since it is indeed a big deal and you’re humble enough, and interested enough in the well-being of all our families, to ask people to help you deliver the best code you can.

Teams with a strong code review culture sometimes fail hard at this part: everybody knows how to cooperate in the review, but nobody knows how to keep improving the code during the review phase without making a mess, and, in the end, what was bad (3 commits) becomes much worse (30 commits).

The fix is simple: fold your review fixes into the existing commit and force-push the branch, instead of stacking “fix review” commits on top:

    $ git status                  # ALWAYS be sure of what you're doing
    $ git add .                   # stage the review fixes
    $ git commit --amend -C HEAD  # fold them into the previous commit, reusing its message
    $ git push --force            # rewrite the remote branch

Note that git pull does not rebase by default, so use git pull --rebase (or set pull.rebase in your configuration) to update your local branch after your colleague has changed and force-pushed it.
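A small sketch, assuming you only want that behavior in the current repository:

    $ git config pull.rebase true   # make plain 'git pull' rebase instead of merge in this repo
    $ git pull                      # now behaves like 'git pull --rebase'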

(If you don’t feel comfortable force-pushing, at least squash all the commits into a single one before merging. Most Web tools already provide this option.)

It does make a difference

I can’t believe how many developers think that “it doesn’t make any real difference”. What’s happening in this crazy world???

If you want to deliver one feature and are doing it in three different commits, am I wrong to ask why the hell I should accept the first one, since it’s obviously and demonstrably wrong?

Why should I?

It’s a matter of principles. And principles matter. Always. If you’re going to accept any shit in your codebase for whatever reason you may have, man!, what a terrible codebase you probably have!

Besides, why would a developer not deliver atomic commits? You don’t have time, so we’re going to accept that forever? Are we not committed to our processes, trying to make them better all the time?

Not everything is git

I know, some other VCSs promote alternative “cultures” around commits or “deltas”, maybe encouraging you to record the path you took to reach the final version of your changes, because “coding is about people and the team”.

And I ask: are you actually analyzing this history? Do you do that two months after the feature is already in production? And even if you do, does anyone else?

If that’s a strong part of your team culture, fine. If not, get rid of it and focus on what really matters: whether each change to the codebase will jeopardize your babies’ milk or not.

In short

Make your builds and tests reproducible and runnable in one command, automate your tests so that a script gives a clear pass-or-fail verdict, and keep your git history clean with atomic commits. Anything less is, well, completely unacceptable.