No One Told Me Deployment Would Be the Hardest Part

[Illustration: a nervous software developer in a hoodie, sweating and wide-eyed, hesitating with a shaky finger over a large red "DEPLOY" button.]

Like most developers early in their careers, I used to think the hardest part was writing code. Designing architecture, hunting bugs, optimizing queries — that’s where the real pain is, I thought. But over the years, I learned a simple truth: all of that is just a warm-up for the real nightmare.

Deployment — a word that makes your palms sweat on its own. Especially when it's a large-scale project with critical changes you can't just roll back if things go sideways. Behind you — hundreds of thousands of users and terabytes of valuable data that simply can’t be lost or corrupted.

They say many developers have been through a deployment — but none of them came out the same. Some started meditating. Some turned to herbal sedatives. And some suddenly remembered they still had a LinkedIn profile.

The Calm Before the Deploy

It seemed like this time, there would be no surprises. We were ready to deploy. But, as it often goes...

First, a bit of context:

I had no illusions that it would be smooth or painless. I can’t go into details about the project itself, but I can tell you this — it was a financial system. And fintech doesn’t forgive mistakes. Everything has to be fast, the data — consistent, and the security — flawless.

I already had a solid background in this field, so I knew exactly what to expect: complex development, high responsibility, and total focus. Especially when it came to deployment. In fintech, it’s always a risk — something can (and probably will) go wrong, even if you've spent months preparing.

Still, every developer has that naive hope inside: maybe this time it’ll be perfect — no bugs, no screwups, not a single crack in the system.

Development was going well. We had no hard deadlines, no one breathing down our necks asking “Is it done yet? Almost done? How about now?” The core features were nearly finished and everything seemed stable overall.

Our test plan was ready. Most checks were passing — with only the occasional minor issue. So we focused on the migration scripts, carefully thinking through worst-case scenarios and rollback strategies.

Still, I felt uneasy. Because there’s one thing anyone who’s worked with large, complex systems knows: even when you think you’ve tested everything — you haven’t.

And no matter how well you prepare, you’re never truly confident it will go flawlessly.

Then came the day of the deploy.

When It Gets Real

"Anything that starts well will end badly. Anything that starts badly will end even worse."
Murphy's Law

We started deploying changes to the production environment. And, of course, everything immediately went off-script.

In true genre fashion, the moment deployment began, the non-obvious issues came out of hiding: version conflicts between libraries that suddenly needed resolving. It felt like Composer had been lying in wait, just for the perfect moment to stab us in the back. And it did — I lost nearly four hours wrestling with it before I could move on.

Next came the database migrations, which also weren’t too eager to run in production. Why? Because naturally, only in production did a lovely collection of edge cases emerge — the kind we never saw in development. Classic.

Surprisingly, though, things started going according to plan after that. We deployed the changes and launched the script that was supposed to migrate all data from the old format to the new one. And I know what you're expecting: a twist where the script corrupts all user data or crashes halfway through.

But no — the script worked flawlessly. Data was transferred cleanly, and for a moment, it looked like we’d actually pulled it off. It ran for several hours, and I honestly don’t know how many new gray hairs I earned in that time.

Since it was already the middle of the night, QA only went through the most critical scenarios in the test plan — just to make sure we hadn’t broken anything important.

I started to relax a little, watching those “passed” statuses slowly light up. And then — a personal message from QA. And no, it wasn’t “All good.” At that moment, everything inside me clenched so hard I could barely breathe.

Yes, we had a problem — a small but important part of the system wasn’t working. The app itself looked totally fine. I checked the service in question on its own — and it was working.

After a few hours of digging, we finally found the cause. And, as is often the case, it was something painfully trivial: one of the existing services didn’t have access to the new one. Together with our DevOps engineers, we fixed it. QA finished testing the critical flows and confirmed that nothing else was broken.

Only then — tired, but alive — did we finally go to bed.

The Aftertaste of Deployment

The first thing I did after opening my eyes in the morning was check the work chats. And you know what? Nothing. Total silence. Not a single word about yesterday’s deploy.

"Great," I thought, and went about my morning routine.

But I didn’t even manage to make coffee before it started all over again. The issues were back — and this time, from real users, not QA.

The only good news: they were minor and non-critical. I fixed them quickly and finally exhaled. It seemed like everything was working as it should — but the next day, a new issue popped up. Then another. And so it went, day after day, for several weeks.

Tough weeks. Constant tension. Constant anticipation that something’s going to break again any minute now. You’d think it’s all behind you… but your nervous system refuses to believe it. You start checking production on a Sunday night, just to make sure everything’s still running. You wake up in the morning and the first thing you do is scan the logs. You flinch every time a new message pops up in the work chat.

It’s that strange feeling when everything’s technically fine — and yet you’re still waiting for something to go wrong. And maybe that’s the hardest part of deployment: the mental pressure. Before, during, and long after it’s over. And maybe — just maybe — that’s why deployment is the hardest part of all development.

What I Wish I Knew Beforehand

If you're migrating data with a script — think about what happens if it crashes

If you're using a script to migrate data, take the time to plan what happens if it suddenly stops halfway through.

What happens to the data? How do you know what’s already been migrated and what hasn’t? Your script should either be able to resume from where it left off — or at the very least, skip over data it has already processed.

I know, you’re confident it won’t crash. You’ve tested everything. But what if production hides that one weird record among millions that breaks everything?
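Here’s a rough sketch of what I mean, in plain PHP with PDO, with every table and column name invented for illustration. The idea is simple: the script works in batches and marks each record it finishes, so if it dies halfway through, you just run it again and it skips everything it has already handled.

```php
<?php
// Hypothetical resumable migration (table and column names are made up).
// A nullable `migrated_at` column on the legacy table records what has
// already been processed, so the script is safe to restart after a crash.

$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Stand-in for the real transformation from the old format to the new one.
function convertToNewFormat(string $legacyPayload): string
{
    return json_encode(['migrated_from' => $legacyPayload]);
}

$batchSize = 1000;

while (true) {
    // Only pick records that haven't been migrated yet -- this single
    // condition is what lets a re-run skip everything already done.
    $rows = $pdo->query(
        "SELECT id, legacy_payload FROM accounts
         WHERE migrated_at IS NULL
         ORDER BY id
         LIMIT $batchSize"
    )->fetchAll(PDO::FETCH_ASSOC);

    if ($rows === []) {
        break; // nothing left to do
    }

    foreach ($rows as $row) {
        $pdo->beginTransaction();
        try {
            $insert = $pdo->prepare(
                'INSERT INTO accounts_v2 (legacy_id, payload) VALUES (:id, :payload)'
            );
            $insert->execute([
                ':id'      => $row['id'],
                ':payload' => convertToNewFormat($row['legacy_payload']),
            ]);

            // Mark the source row so the next run (or a restart) skips it.
            $pdo->prepare('UPDATE accounts SET migrated_at = NOW() WHERE id = :id')
                ->execute([':id' => $row['id']]);

            $pdo->commit();
        } catch (Throwable $e) {
            // The transaction guarantees nothing is left half-migrated.
            $pdo->rollBack();
            error_log("Failed to migrate record {$row['id']}: {$e->getMessage()}");
        }
    }
}
```

It’s not fancy, but that one `WHERE migrated_at IS NULL` is the difference between “just run it again” and spending the night figuring out which of a few million rows actually made it across.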

If you're running SQL in production — write a rollback script first

Before executing any critical SQL changes in production, spend a little time preparing a rollback script — something you can run to undo it all if things go sideways.

Yes, it’s extra work. Yes, most likely everything will go smoothly. But if something breaks while you’re updating important data, you’ll thank yourself for writing that rollback in advance.
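Something along these lines works. Again, the table and column names here are invented, and your forward change will obviously look different; the point is that the rollback lives right next to the change and gets written before the change ever touches production.

```php
<?php
// Hypothetical forward change + rollback, kept together in one script.
// Names are invented; the pattern is what matters.

$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// The change the deploy actually needs.
function apply(PDO $pdo): void
{
    $pdo->exec('ALTER TABLE transactions ADD COLUMN fee_total DECIMAL(12, 2) NULL');
    $pdo->exec('UPDATE transactions SET fee_total = base_fee + service_fee');
}

// The way back -- written and reviewed before apply() ever runs in production.
function rollback(PDO $pdo): void
{
    $pdo->exec('ALTER TABLE transactions DROP COLUMN fee_total');
}

// `php migrate_fees.php` applies the change, `php migrate_fees.php rollback` undoes it.
($argv[1] ?? 'apply') === 'rollback' ? rollback($pdo) : apply($pdo);
```

If you’re on a framework with proper migrations (Laravel, Doctrine Migrations and the like), the down() method is exactly this rollback script; the trap is leaving it empty “for later.”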

Plan your deployment before you even start coding

Think about how you’ll deploy your changes to production — before you even begin building them. Even though the actual deploy happens at the end, in reality, it’s the culmination of development. And many developers underestimate how important it is to plan for it ahead of time.

Can you roll out changes without users noticing? Will you need to stop the system? How much data needs migrating — and how long will that take? How easy will it be to revert the changes if something breaks?

Sometimes deployment needs to happen gradually, in several phases — and that directly affects the architecture of the whole implementation. It’s better to plan for that up front than to rewrite everything last minute.
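One thing that makes a gradual rollout much less scary is hiding the new code path behind a flag, so switching users over (and back) doesn’t require another deploy. A toy sketch (the flag source and the bucketing here are made up, but the idea carries over):

```php
<?php
// Toy feature flag for a phased rollout. The rollout percentage comes from
// an environment variable here; in a real system it would live in config,
// a database, or a dedicated flag service.

function isInNewFlow(int $userId): bool
{
    $rolloutPercent = (int) (getenv('NEW_FLOW_ROLLOUT_PERCENT') ?: 0);

    // Deterministic bucketing: the same user always lands in the same bucket,
    // so their experience doesn't flip between requests as you raise the dial.
    return ($userId % 100) < $rolloutPercent;
}

// Hypothetical call site: route each user to the old or the new implementation.
foreach ([101, 102, 150, 299] as $userId) {
    echo $userId . ' -> ' . (isInNewFlow($userId) ? 'new flow' : 'old flow') . PHP_EOL;
}
```

Start at a few percent, watch the logs, turn it up. And if something smells wrong, drop it back to zero instead of scrambling to revert a deploy in the middle of the night.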

The real fun starts after the deploy

Deployment doesn’t end the problems — it kicks them off. Be ready for days, maybe weeks, of work more intense and stressful than usual.

Because real users are very creative. And sometimes they’ll use your product in ways no QA ever imagined.

Even the best, most experienced tester won’t come up with a scenario like a user who occasionally scratches their left ear with their right hand.

Final Thoughts

There’s no such thing as a perfect deploy. If you’ve just deployed a complex project full of changes and everything’s working — don’t celebrate just yet. It probably just hasn’t broken yet. And if things went wrong during the deploy — and you weren’t prepared — don’t beat yourself up. It happens. To everyone. More often than you'd think.

The goal of deployment prep isn’t to make sure nothing goes wrong — it’s to know what to do when it inevitably does.

Maybe the best we can hope for is to make it out of a deploy alive, with our data intact and slightly fewer gray hairs than last time.

By the way, this wasn’t my first stress-inducing deploy. If you know the feeling of constantly putting out fires — check out my story about Panic-Driven Development. There's even more truth, tension, and bitter irony waiting for you there.
