★ Senior developer

Defining “senior” is an ongoing and surprisingly difficult process, but we do it because it’s business-critical for us. Without a clear definition of “senior developer”, we have no clear path for our own employees to get there. We have no concrete way to evaluate people joining the team, no way to hold ourselves accountable, and no way of improving the process.

The Conjoined Triangles of Senior-Level Development (cache)

There is a moment in your developer career when you wonder if you’re senior enough to present yourself as a senior developer. It is not at all a matter of how old you are (cache), nor of how much you’re paid. It is more about how many diverse experiments you have run, how many different peers you have helped onboard onto a project, how easy it has become to pass on your knowledge, how much confidence you have accumulated and how fast you can admit you’re totally wrong. Actually, it is all about the fluidity you can have with a team within an evolving, complex context. That is the moment you realize you are more valuable than the code you produce.

You’re here to speed up the learning process, but not too much, otherwise your fellow companions will miss the potential failures entirely and push on without accumulating knowledge. Going fast is useful only if everybody in the boat is aware of what has been tried before and what went wrong (and right!) on that particular journey. That can only be achieved with a ton of communication.

When you’re lucky enough to be part of a team of highly skilled developers, you know that everybody will keep progressing technically because it’s part of the team’s DNA. Apart from some long-running trolling, you know that the hard part will no longer be technical capabilities; the team is confident enough on that side to learn quickly if necessary. The hard part will be to consider the team — present and future — as a whole. It requires a tremendous amount of empathy to make the right social decisions.

Senior team members should be expected to spend half their time mentoring and helping others on the team get better. Their job isn’t just to be the code hero bottleneck.

Want to be an Engineering Manager? (cache)

Here the important word is bottleneck, and I think that rather than trying to earn the senior label individually, it has to be earned as a team. It’s far more challenging to be part of something bigger than yourself. You can measure how “senior” a team is by how good it is at reducing bottlenecks and sharing responsibilities.

Finally, it creates social problems as well. Bugs that span multiple services and require many changes can languish as multiple teams need to coordinate and synchronize their efforts on fixing things. It can also breed a situation where people don’t feel responsible, and will push as many of the issues onto other teams as possible. When engineers work together in the same codebase, their knowledge of each other and the system itself grows in kind. They’re more willing and capable when working together to tackle problems, as opposed to being the kings and queens of isolated little fiefdoms.

Microservices - Please, don’t (cache)

Choosing carefully which trends you follow is key. Some are particularly destructive for social interactions. I already talked about GraphQL; I think microservices are even worse. They are the particular case where there is so much tension within the team that you need to separate people and their products to still be able to deliver some value. A senior developer has to be inclusive in their contributions and reactions, sometimes at the expense of speed or relevance.

The last step is to write about it. This could be a blog post, a book, or a conference talk. When I write about a topic, I explore the edges of what I know, the edges outside of what I needed to initially implement the idea.

How do I learn? (cache)

One part of becoming a senior developer is being able to go just a bit deeper than the average developer and to share it. That’s a tiny advantage that makes all the difference. Sharing can take many forms, from blogging to giving a presentation or pushing some code to a repository. The end result is not what matters most (except for the ego, maybe). What matters is the moment you dig into a concrete issue and spend some time on it: the process of acquiring that knowledge and becoming capable of transmitting it. That’s the key point.

We are knowledgeable and productive, yes, but we also understand that we may actually know fewer (useful) things than we did at a prior point in our career. A non-trivial amount of our knowledge has decayed, and we may not have had the time to accumulate enough new knowledge to compensate.


We realize that it’ll require real effort to just maintain our level of proficiency - and without that effort, we could be worse at our jobs in 5 years than we are today.

Reflections of an "Old" Programmer (cache)

The combination of extremely fast knowledge decay and quite slow knowledge accumulation leads to burnout and endless questioning, both quite destructive in the long term. Senior developers are survivors. The ones finding a steady pace in their learning and a clear balance between theory and practice on a day-to-day basis. The ones taking the time to transmit their experience and being kind enough (cache) to reduce the pain for newcomers. The ones avoiding depression and dead ends like management and entrepreneurship. The ones escaping the craftsmanship and perfection rabbit holes. The ones considering themselves not senior enough to push the limits of its definition. Which one are you?

That battle for Web standards we used to fight

Do you remember when fighting for Web standards was cool and the W3C HTML validator was a thing? I do, and it’s great if you don’t. It means you’re younger than me and that the long, exhaustive battle against a Web designed for Internet Explorer 6 is a thing of the past. I...

Getting rid of the phantom indexes menace on Elasticsearch zombie masters

Split brain is a recurring problem when running any kind of cluster. A sudden server crash or network partition might lead to inconsistent state and data corruption. Elasticsearch addresses this problem by allowing multiple nodes to be configured as master. Running an odd number of master node...
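The odd-master-count advice boils down to a quorum setting. A minimal sketch, assuming a pre-7.x Elasticsearch cluster with three master-eligible nodes (the node roles shown are an illustration, not taken from the post):

```yaml
# elasticsearch.yml on each of the three master-eligible nodes
node.master: true
node.data: false
# quorum = (master_eligible_nodes / 2) + 1 = 2, so a partitioned
# minority can never elect a second, "zombie" master
discovery.zen.minimum_master_nodes: 2
```

With three masters and a quorum of two, any network partition leaves at most one side able to elect a master, which is what prevents the split brain. (Elasticsearch 7+ computes this quorum automatically.)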

From France 2002 to USA 2016

I don’t write about politics often. I stopped being interested in local politics after I dropped out of my political sciences school back in 2001, with 2 exceptions. The first one was Barack Obama’s first election, because a black man being elected president of a country having a long story...

How we reindexed 36 billion documents in 5 days within the same Elasticsearch cluster

At Synthesio, we use Elasticsearch in various places to run complex queries that fetch up to 50 million rich documents out of tens of billions in the blink of an eye. Elasticsearch makes it fast and easily scalable where running the same queries over multiple MySQL clusters would take minutes and...

Happy birthday Dr Frankenstein

200 years ago was written what would become one of the most important fantastic and, at some points, philosophical novels, Mary Shelley’s Frankenstein. Despite its old-fashioned, Victorian-era style, Frankenstein is still worth reading and studying in the light of today’s progress and...

★ Slow Data

In our search for answers to a problem which appears if not intractable then complex, is the speed of the media’s technology – and the politicians’ willing participation in the 24/7 news cycle – obscuring rather than illuminating the issues?

Are we simplifying the arguments if only by default, by not investigating them fully, or by appealing to an emotional response rather than an explanatory one?


But it does not mean we are covering the news more deeply or more analytically. We may be generating heat. But are we really delivering light?


We may think we are absorbing more information. In fact we are simply giving in to the temptation of the easy over the hard, the quick over the slow.

BBC Radio Director Helen Boaden resigns, criticising state of journalism (cache)

The idea of slow journalism is not new (see The Slow Media Manifesto (cache)) and I recently discovered that it can be applied to data too (cache). For quite a long time actually:

Data is growing in volume, as it always has, but only a small amount of it is useful. Data is being generated and transmitted at an increasing velocity, but the race is not necessarily for the swift; slow and steady will win the information race. Data is branching out in ever-greater variety, but only a few of these new choices are sure. Small, slow, and sure should be our focus if we want to use data more effectively to create a better world.

The Slow Data Movement: My Hope for 2013 (cache)

As a member of a team building an OpenData portal, these are questions we discuss on a regular basis. I wondered what would happen if I had to build something new from scratch. A few months ago, I ran that experiment using Riot and Falcon (eventually not published because I don’t want to maintain it). The goal was to play with technical concepts from these frameworks and to deal with the complexity of serving data of various sources and qualities. My budget was quite constrained: less than ten evenings. After a while, I realized how hard the task was. Not (only) from a User eXperience point of view, but because current data are so messy that you can’t easily pick up — even manually — some datasets and make them shine.

Maybe what we need the most is a Chief Data Editor, not a Chief Data Officer. Someone in charge of refining, storytelling and finally caring about the data. And when I say someone, it is actually a whole team that is needed, given how ambitious the task is. Indexing data submissions is only stage 1 of what could be achieved with OpenData, and we have experienced how limited it is in its externalities. Raw data yesterday, curated data tomorrow?

What if hackathons were not gigantic buzzword-bingo sprints? Maybe we can turn these events into marathons. Put together a team for a week that focuses on a single dataset, not necessarily full-time. The goal is to deliver a usable version at the end of the week and to celebrate what has been accomplished. Turn the shiny investor/mentor crap demo into a useful explanation of the dead ends and the tools used for the clean-up, one that can benefit the whole community. Curathons, really?!

Another option is to improve data directly at the source. Data is somehow a static API and, as such, a conversation too! Both producers and consumers of the data would benefit from more communication on how they actually (re)use it, why they are blocked, what the technical/political challenges to providing a better version are, and so on. OpenData cannot succeed with the current one-shot approach; it has to be a continuous process.

It takes far more time to understand the actual issues behind the lack of reuse, and maybe it would lead to fewer datasets being released at the end of the day. But hopefully of better quality. And quality matters to lower the barriers to (re)adoption. Giving thousands of datasets to a couple of geeks does not produce the same results as giving a hundred reusable datasets to millions of citizens. Don’t get me wrong, we desperately need geeks to make them reusable in the first place…

The deadly difference between hiding the symptoms and solving the problem

There’s a common confusion between solving a problem and hiding the symptoms. The tech world is full of examples, both because it’s an easy trap to fall into and because of the move-fast culture. You have your application going down for short periods several times a day because your...

An Elasticsearch cheat sheet

I’m using Elasticsearch a lot, which leads me to run the same commands again and again to manage my clusters. Even though they’re now all automated with Ansible, I thought it would be interesting to share them here. Mass index deletion with a pattern: I often have to delete hundreds of indexes at...
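As a sketch of what such a recurring command looks like, here is mass index deletion by wildcard expressed as a plain REST call; the host and the index pattern are assumptions for illustration, not taken from the article:

```python
# Sketch: delete every index matching a wildcard via the Elasticsearch REST API.
# ES_HOST and the example pattern below are hypothetical.
import urllib.request

ES_HOST = "http://localhost:9200"

def index_delete_url(pattern: str) -> str:
    """Return the URL a DELETE request should target for `pattern`."""
    return f"{ES_HOST}/{pattern}"

def delete_indexes(pattern: str) -> None:
    """Issue the DELETE; every index matching the wildcard is removed."""
    req = urllib.request.Request(index_delete_url(pattern), method="DELETE")
    urllib.request.urlopen(req)

# delete_indexes("logstash-2016.01.*")  # run only against a real cluster
```

Note that recent Elasticsearch versions may refuse wildcard deletion unless `action.destructive_requires_name` allows it, which is a sensible safety net for exactly this kind of command.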

Don't fire the underperformers (yet)

Sooner or later, every company ends up hiring underperformers. Often unnoticed in large corporations, they can be fatal to small businesses where everyone counts a great deal. The main problem with underperformers is that they sometimes take months to detect. No one can join an existing company and...