A Technical Guide on Open Sourcing your Code without Pain

At Synthesio, we have recently started to release part of our Ansible deployment stack on Github. It's the culmination of a two-year-long project. We had wanted to do it for a long time, but didn't, for many good and bad reasons.

The code was not good enough to be released. That’s the excuse I hear the most from companies that are reluctant to open source their code.

We can’t release that code, it’s crappy and people will think our engineering team sucks.

That’s the wrong way of thinking. Don’t wait for your code to be perfect or you won’t release anything. Push something that might be useful for someone. If people use it, they will contribute and improve your code.

We didn’t have the time to do it. To tell the truth, open sourcing our code was not a priority. We had to deliver fast, fix many things, so doing simple stuff like writing documentation or pushing on Github came second.

The code had Synthesio-specific stuff we couldn't push. That might be the only good reason we kept our code closed for so long. We had to make our code less Synthesio-specific by moving things from the core to the configuration. It didn't take long, and it made our code more readable and reusable as our infrastructure grows. The process is still ongoing and we'll keep pushing things as we clean them up.

If you want to open source part of your code, here's a way to do it without causing a mess or going crazy.

Split your source code into modules

The first part of the job is splitting your existing code into modules you can release. In this example, we’re releasing our Mesos deployment Ansible role, which used to be in our Ansible stack core.

To do this, we'll rely on Git submodules. Many people hate submodules, but in our case, that's the best way to go. We'll be able to have a separate Git repository on our internal Gitlab that we can mirror on Github in the blink of an eye whenever we update it.

First, create two new git repositories:

  • One on your internal git infrastructure, which we'll call ansible-mesos-internal
  • One on Github, which we'll call infra-ansible-mesos, because that's the name we chose.

We want to keep the code on our internal Git infrastructure in case Github shuts down or becomes unavailable someday.

Now, we can actually split the code into modules. Since we don't want to lose the git history, we'll use a few git tricks to keep only that code and its revisions.

Clone your local repository into a new one:

git clone ./ansible ./ansible-release
cd ./ansible-release

Make sure the place is clean before you start working:

git checkout master
git reset --hard
for remote in $(git remote); do git remote rm $remote; done
for branch in $(git branch -a | grep -v master); do git branch -D $branch; done

It’s now time to do the real thing:

git filter-branch --tag-name-filter cat --prune-empty --subdirectory-filter roles/mesos -- --all

You're now left with only your mesos role. Good. Some cleaning is needed before we push to our new repositories.

git for-each-ref --format="%(refname)" refs/original/ | xargs -n 1 git update-ref -d
git reflog expire --expire=now --all
git gc --aggressive --prune=now

Remove references to your specific / secret code

Our Mesos role used to contain some Synthesio-internal things we don't want to release, like machine hostnames or usernames and passwords. We had to clean the Git history so people won't find them while browsing Github.

If you've just deleted that file, it's easy:

git filter-branch -f --index-filter 'git update-index --remove defaults/main.yml' <sha1 of introduction>..HEAD

However, we didn't delete the file; we replaced the data, so we need a few more git tricks to get rid of the history up to the replacement.

First, find the SHA1 of the commit in which you replaced your sensitive data.

git log defaults/main.yml

Now, create a branch ending at that commit:

git checkout -b secrets <sha1 of the commit>

git checkout master

git filter-branch -f --index-filter 'git update-index --remove defaults/main.yml' <sha1 of introduction>..secrets

You're done! Your file is still alive but all the sensitive history has been deleted.
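
Before pushing anywhere public, it's worth double-checking that nothing sensitive survived in the rewritten history. A quick sanity check (a sketch; the "password" pattern is just an example, grep for whatever you removed):

# search every remaining revision for leftover secrets
git grep -i password $(git rev-list --all)
# and review the full history of the cleaned file
git log --all -p -- defaults/main.yml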

Pushing and using

Add a LICENSE and a README file; you're now ready to push your code:

git remote add origin <url of your internal ansible-mesos-internal repository>
git push -u origin master
git remote add github <url of the infra-ansible-mesos repository on Github>
git push -u github master
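
From now on, keeping the Github mirror in sync with the internal repository is a single push whenever something changes:

# refresh the public mirror after new commits land internally
git push github master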

Now, make use of the newly separated module in your main project. Since we don't want to keep the whole mesos history there either, we'll delete it from the main repository as well.

cd ../ansible
git checkout -b feature/split-mesos
git filter-branch -f --index-filter 'git update-index --remove roles/mesos' <sha1 of introduction>..HEAD
git submodule add <url of your internal ansible-mesos-internal repository> roles/mesos
git add .gitmodules
git commit -m "Splitting mesos from the main project"
git push origin feature/split-mesos
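
On your pals' side, "updating" boils down to pulling the branch and initialising the new submodule, roughly:

# fetch the split and populate the roles/mesos submodule
git pull
git submodule update --init roles/mesos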

Get your commit reviewed, tell your pals to update. TADA! Your company is now a proud open source contributor!



How to Fix a Lagging MySQL Replication

A few weeks ago, we added a new slave to a 22TB MySQL server. By the time we had transferred the data and run innobackupex apply_log, the slave was already way behind the master. Things got worse during the weekend as the server performed a RAID check, which slowed down the replication even more. With about 100 million writes a day on that cluster, we started the week with a good 500,000 seconds of lag.

Replication lag is a frequent issue with loaded MySQL clusters. It can become critical when the lag grows too large: missing data when the slaves are used for reads, temporary data loss when losing the master… In our case, it blocks the cluster migration to GTID until the replication fully catches up.

Many people on the Web have had the same problem, but no one provided a comprehensive answer, so I had to dig into the MySQL documentation and internals to understand how to fix it.

Following the replication catching up

First, the setup:

  • Dual Xeon E5-2660 v3, 20 cores / 40 threads, 256GB RAM
  • 24 × 4TB 7200 RPM hard disks in RAID 10
  • Percona Server 5.7.17-11-1 on Debian Jessie
  • 100 million writes / day (~1150 queries / second)
  • No reads, because of the lag
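
To actually follow the catch-up, I kept an eye on Seconds_Behind_Master. Something as crude as this does the job (a sketch, assuming the mysql client can log in without prompting, e.g. via ~/.my.cnf):

# print how far behind the master the slave is, once a minute
watch -n 60 'mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master'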

Multi-threaded replication

MySQL introduced multi-threaded replication (MTR) in version 5.6, and it has been improved in MySQL 5.7. It still needs to be used with caution when you're not using GTID, or you might get into trouble.

First, we enabled parallel replication using all available cores on the server:

STOP SLAVE;
SET GLOBAL slave_parallel_workers=40;
START SLAVE;

You don't need to stop and start the slave to change slave_parallel_workers, but according to the documentation, MySQL won't use the new workers until the next START SLAVE.
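
If you want to check that the extra workers are actually running, MySQL 5.7's performance schema exposes one row per applier thread. A quick way to count them (an illustration, same login assumption as above):

# one row per replication applier worker; should report 40 here
mysql -e "SELECT COUNT(*) AS workers FROM performance_schema.replication_applier_status_by_worker;"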

Parallel replication was useless at first: the host has only one database, and the default parallel replication type (DATABASE) only applies transactions in parallel when they touch different databases. We switched slave_parallel_type to LOGICAL_CLOCK, and the result was tremendous.

Transactions that are part of the same binary log group commit on a master are applied in parallel on a slave. There are no cross-database constraints, and data does not need to be partitioned into multiple databases.

STOP SLAVE;
SET GLOBAL slave_parallel_type = LOGICAL_CLOCK;
START SLAVE;

Please, flush the logs before leaving

Before we found the LOGICAL_CLOCK trick, we tuned the flushing a bit.

First, we make sure that MySQL never synchronizes the binary log to disk itself and instead let the operating system do it from time to time. Note that sync_binlog's default value is 0, but we had been using a higher value to avoid problems in case of a crash.

SET GLOBAL sync_binlog=0;

Now comes the best part.

SET GLOBAL innodb_flush_log_at_trx_commit=2;
SET GLOBAL innodb_flush_log_at_timeout=1800;

For full ACID compliance, MySQL writes the contents of the InnoDB log buffer out to the log file at each transaction commit, and the log file is then flushed to disk. Setting innodb_flush_log_at_trx_commit to 2 makes the flush to disk happen only about once per second (depending on the system load). This means that, in case of a crash, up to about one second of committed transactions can be lost and will have to be replayed.

innodb_flush_log_at_trx_commit=2 works in tandem with innodb_flush_log_at_timeout. With this setting, MySQL writes and flushes the log only every 1800 seconds. This avoids hurting binary log group commit performance, but you might have to replay up to 30 minutes of transactions in case of a crash.
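
Note that SET GLOBAL changes do not survive a restart; to make the tuning permanent you would also write it to a configuration file. A sketch (the /etc/mysql/conf.d/ path is the Debian convention and may differ on your setup):

# persist the replication tuning so a mysqld restart keeps it
cat >> /etc/mysql/conf.d/replication-tuning.cnf <<'EOF'
[mysqld]
slave_parallel_workers         = 40
slave_parallel_type            = LOGICAL_CLOCK
sync_binlog                    = 0
innodb_flush_log_at_trx_commit = 2
innodb_flush_log_at_timeout    = 1800
EOF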

Conclusions

MySQL's default settings are not meant for heavy workloads. They aim at ensuring correct replication while preserving ACID guarantees. After studying how our database cluster is used, we decided that full ACID compliance was a lower priority, which let us catch up with our lagging replication.

Remember: if there's a problem, there's a solution. And if there's no solution, then there's no problem. So:

  • Read the manual. The solution is often hidden there.
  • Read the source when the documentation is not enough.
  • Connect the dots (like innodb_flush_log_at_trx_commit + innodb_flush_log_at_timeout)
  • Make sure you understand what you're doing
  • Always have Baloo proofread your article and tell you when you've misunderstood parts of the doc and their consequences 😜.

Photo: 白士 李



Introducing: Engineering Weekly

I’m launching Engineering Weekly, a free newsletter lovingly crafted for the computer engineers who care.

Every Sunday, you’ll get the best resources for computer engineers directly in your mailbox. There will be feature articles, useful tools and exciting projects, all curated by yours truly.

When blogging was cool and RSS was a thing, I used to publish a weekly press review with all the cool UX links I found on the Web. Now that newsletters are the new blog posts, I wanted to publish the best of my daily reading in a short, easy-to-read format. That's how Engineering Weekly was born.

If you want your latest article or project featured on Engineering Weekly, drop me an email at frederic[at]t37[dot]net.

You’re just one click away from the best resources! Subscribe, it’s free!



★ From Data to Commons

The digital world we aspire to is different. It threatens neither the economy, nor the environment, nor democracy, nor culture. On the contrary, it makes it possible to renew the very foundations of these domains through a human-centred perspective. It protects our freedoms while giving us powerful means to exercise our rights. It does not concentrate new powers and resources in the hands of a few. Rather, it helps redistribute power and wealth fairly and sustainably. It holds that we are all equal and interdependent; it aims to restore our relationship with the world and to take care of it within an inclusive democracy.

This digital world we aspire to is a commons, a resource shared by the communities that mobilise and organise themselves to produce it, create it, protect it and make it valuable for the benefit of all. This digital world exists and thrives. For communities committed to sharing co-created knowledge, these practices, inherited from the model of the commons, find through digital technology a territory that has never been so vast. The public domain and free software are examples of knowledge commons, of digital commons, that are vital today for work, education, science, culture and freedom of expression. Moreover, this digital world is the backbone of a booming collaborative economy that mobilises the resources, talent and energy of citizens to bring new and promising projects to life.

We aspire to see this humanist digital world recognised and supported.

SavoirsCom1 welcomes the « Déclaration des communs numériques » in Quebec (cache)

This is a summary of my talk at Confoo; it is actually a follow-up to what I shared last year about open data. The talk was punctuated with Python fragments that I have not reproduced here, but you can find them in the slides.

1. Open data

Data.gouv.fr is the open platform for French public data. It is a way to publish your raw data and to browse other people's. It is aimed at ministries and local authorities as well as citizens, companies and associations. It is open to everyone, with a posteriori moderation. It is free of charge, and all developments are published as open source. Other countries reuse the platform's code.

I have been contributing to its evolution for almost two years.

2. Usable data

Publishing data is only the first step of a long appropriation process by the people interested in it. A proprietary file format or an unspecified encoding makes it harder to dig in. A corrupted archive or an unreachable site quickly leads to frustration and to a loss of trust that is hard to win back.

Discussions now allow potential consumers to voice these obstacles and to start a conversation with the data producers.

3. Understandable data

Once the file is open, you still have to understand what is inside. In most cases this is far from intuitive if no exhaustive documentation comes with the data. Describing datasets and their resources lets the people who submit data explain, for instance, what business terms mean or what cryptic column names stand for.

It is sometimes worth offering a simplified interface to a PDF documentation several hundred pages long.

4. Interoperable data

Even when documented, some data remains hard to grasp because of its complexity or its size. Reprocessing that raw data downstream is what I attempted with GeoHisto, for the diff of INSEE's Code Officiel Géographique, and with Ulysse, to process the huge SIRENE file.

The goal is never to replace the originally published data, but to offer tools, and possibly their output, so that the data can be exploited more quickly.

5. Queryable data

For instance, one of the problems we face is slicing CSV files on the fly according to certain parameters. A small pruning tool would let us do this asynchronously and offer links to subsets specific to a given territory, for example (a rough sketch follows below).

When the file is too large, it is possible to provide the tools to do this reasonably efficiently.
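
As an illustration only (the file name, separator and column number below are made up), slicing a large CSV by territory can already be done with standard Unix tools while a proper feature is being built:

# hypothetical example: keep the header, then the rows whose 5th column is the département "75"
head -n 1 sirene.csv > sirene-75.csv
awk -F';' 'NR > 1 && $5 == "75"' sirene.csv >> sirene-75.csv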

6. Friendly data

Sometimes simply offering a subset of the generated data makes it easier to represent, and therefore to understand. It is a series of small details that seem insignificant but which, put together, show that you take care of your data and of its potential users.

Once again, documentation is critical to encourage adoption and reuse. Providing examples of actual or imagined reuses can also help. Explaining what cannot be done with the data is even better, for instance by documenting previous attempts that failed. Likewise, it can be relevant to describe how the source data is generated, in order to understand its constraints.

7. Resilient data

The speed with which the White House emptied its open data portal inevitably raises questions (cache) when you run such a portal in a country that could soon become just as totalitarian. Hosting the data with a decentralised tool such as git makes it possible to replicate it (and enrich it) endlessly while keeping the history of every change.

There would be a lot to build on top of git-lfs or dat, for instance. I am not far from taking the time to do it as a citizen, starting from the API.
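
The idea in its most minimal form (the repository URL here is hypothetical): anyone can replicate a dataset and keep its full modification history with plain git:

# mirror a dataset published as a git repository (made-up URL), then follow its changes
git clone https://git.example.org/opendata/communes.git
cd communes
git pull
git log --stat -- communes.csv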

8. Durable data

The issues around history are interesting, because you can distinguish between versions of the raw data and versions of the subjects it describes. I focused on the latter with GeoHisto and the evolution of French communes, as well as with the history of the companies in the SIRENE file. It is an angle of attack that focuses on one particular use of the data: working on versions/diffs for a specific commune or company.

In the case of the départements, it was a practical and rather fun way for me to brush up on my History.

9. Open governance

It is not enough to publish data and make it usable; you also have to listen to the community of reusers in order to improve it. Both in substance and in form, it is hard to know a priori what will be relevant for a given type of data. Taking feedback into account in a virtuous feedback loop is the holy grail of open data.

Having a place for discussion and decision-making that is documented and open to all makes it possible to federate a community around a need and to iterate, both technically and politically.

10. Common goods

Just as with freeing code: at first you want to keep control, and many open source projects never get past that stage. Then you open up to others, to their different points of view and experiences, and you take the time to listen to them in order to improve the product. Finally, you hand over to the collective intelligence of the community to keep moving forward, and only then does the result come to life.

Releasing data is a gradual letting go.

A good cannot become a commons unless its initiator gets past their own ego and accepts the divergences of the community that comes, iteratively, to pollinate that production.

Administration?

The State's role in this approach is no longer to administer, but to connect people around the data in order to facilitate the production of positive externalities. The end goal is not the common good in itself, but the doing-in-common that allows us to live in common.

I believe we can oppose to these two options a State that would serve the commons, where the commons would be the means of creating value for citizens. This State would be citizen-centred; its role would be to facilitate and to empower; it would serve the citizens, and that is how it would see itself.

Confrontation Constructive ou Tension Constructive - l’État et les Communs (cache)

There were about ten people in the session, and here is the feedback Confoo sent by email within the hour (!).

Basic Hacking Advice from a 12-Year-Old Selling his School on Craigslist

Last year, my 12-year-old son tried to sell his school on Leboncoin, a French equivalent of Craigslist. It was one of the funniest things he has ever done, and the best was yet to come. He did exactly what you need to do to avoid getting caught, and no one told him how to do it. Here is the full story.

I would never have heard about it if he hadn't told me. He was trying to convince me that he was smart enough online to get Twitter and Facebook accounts. I refused and limited his Internet access for the exact reason that he was too smart online. I gave him a book about programming instead.

He went to the public library, where you can use computers for free. He didn't go to the one next to our place, where he's a regular, but picked another one where no one knows him. Also, that public library is old and doesn't have internal CCTV.

Computer access is limited to library card holders above 13. Since he was 12 he asked a random adult to give him their access. And since he’s cute as an angel and looks like he’s 9, someone gave him their access without thinking about what he could do.

Once connected, he created a Gmail address he would use only once, to publish the ad, and never check again. He admitted he thought some intelligence agency would try to find him if the prank went too far.

Then, he picked a random name from an online newspaper to create his account. He didn't want the username to look like it had been created by a kid, and an adult one would make the ad look legit.

He published the ad using "house" instead of "school" and added a random house picture to get past the site moderation. Once it was validated — it takes a few minutes — he updated the ad, changing the title, adding real photos and raising the price.

He added the school phone number as a contact number. I have no idea whether or not someone called, and neither of us wants to know.

He logged off from Gmail and Leboncoin and cleared the browser history, just in case, then logged off from the computer.

Finally, he spent an hour reading so no one would notice a kid leaving the library too hastily after surfing on the computer.

And he never went to that public library again.



Writing Personal Postmortems

I'm a huge fan of writing postmortems every time something goes wrong at work. I write postmortems after every major crisis, when a deployment goes wrong, or every time a human mistake breaks something in production. They allow us to analyse what happened and give the whole team a valuable knowledge base of past incidents.

I have built a simple, formal, 3-point postmortem layout I've been using for a decade now:

  • What happened?, a summary and chronological tale of the events, including everything we did during the incident.
  • Why did it happen?, providing the deep, root cause analysis of the problem.
  • What was done to prevent it in the future?, a list of measures and fixes we deployed.

The third part is the most important for our knowledge base. It provides a comprehensive explanation of why we did this or that when the reasons are not obvious.

Postmortems take time to write. They require rethinking the problem. They expose the mistakes we made and would rather forget about. But they are essential in my job, where our primary duty is to ensure service availability.

I recently started writing personal postmortems. Instead of writing about work incidents, I write about personal matters using the same template. It helps me a lot in dealing with tough weeks and in not making the same mistakes again. They're half a diary and half an introspective analysis of what happened and how I reacted.

As a complete nerd, I write markdown files that I encrypt with GPG and version in a git repository. Markdown is a handy text formatting syntax. Encryption makes sure no one accesses the things I keep personal by accident. And git is a nice way to access my postmortems from anywhere. But you can use anything to write personal postmortems: pen and paper, your favorite productivity tool, or a word processor like Google Docs.
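
For the curious, the nerd workflow is nothing fancy. A sketch (the file name and email address are made up, and it assumes a GPG key pair already exists):

# encrypt the postmortem for myself only, then version the encrypted copy
gpg --encrypt --recipient me@example.com 2017-03-tough-week.md
rm 2017-03-tough-week.md
git add 2017-03-tough-week.md.gpg
git commit -m "Personal postmortem: a tough week"
git push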

Personal postmortems have become a useful complement to my therapy. By the time I meet my therapist, I've already had time to sort out the mess in my brain, so our work together is more efficient. Even if you don't have a coach or a therapist, personal postmortems are a great way to turn the lemons life gives you into a tasty lemonade.



Killing your Inner Superhero Before they Kill you

Superman painting

The tech world seems to worship superheroes, and that's terribly wrong.

Superheroes in the tech world have nothing to do with your favorite Marvel or DC character. Startup superheroes are people who believe they will do everything faster and better than anyone else. The problem is that they often do, until they can't anymore. Superheroes are harmful to themselves, they are harmful to their colleagues, and they will eventually kill your company.

I used to be that kind of person and it almost killed me.

A few years ago, the 35-person company I was working for got acquired by a 100,000-person corporation. Because we were leading a corporation-changing project, we reported to the company CEO, a former French minister of finance. And because we were leading a corporation-changing project with an incredible level of access, everyone else wanted us to fail.

I was already working a lot for our regular clients, but integrating our application into the corporation's infrastructure added a tremendous load. I was all alone, facing the internal providers' bad faith and technical disasters, in a project gathering 30 people from 6 countries, speaking 7 different languages across 16 timezones. And I had zero experience with the corporate world and its political games. My job was to get things done, on time, whatever the impact on my health and my family.

I spent 9 months running a 24/7 marathon in a tunnel until launch day.

That day, my body started sending me extremely worrying and visible signals. I don't usually care about my health, but this time I called my doctor, who urged me to go to the hospital. I spent 10 hours in the emergency room, eventually deploying the application to production between two examinations, using my phone's 3G plan as a modem.

I used to tell that story with pride, showing how dedicated and superhero-ish I was. I don't anymore. Looking back, it's one of the most stupid things I've ever done.

First, I put myself in danger. I put so much pressure on myself for so long, spending a decade running a marathon faster than Usain Bolt, that my body decided to give up.

Second, I put my company in danger. There were things only I knew about. Every security audit pointed out that I was the company's single point of failure, but I didn't care. I was even proud of it.

Third, I put my colleagues in danger, keeping information and access to myself, being the only one able to deploy the application, troubleshoot backend issues and manage the whole system.

Cemeteries are full of irreplaceable people who eventually got replaced. — Georges Clemenceau

After I left that company, people and circumstances forced me to kill the superhero in me.

There were too many things to do, too many critical, long-term projects to run in parallel, and the platform needed to be monitored 24/7. I had to rethink the way I managed an infrastructure and a team, and I had to force myself to manage my time and my health.

Superheroes have 4 deadly sins I had to get rid of.

Superheroes are control freaks

Superheroes don't delegate because they can't trust other people. That makes them unsuited to working in a team, and therefore they become a company's single point of failure.

There's no I in team, but there's a Y in victory.

I managed to overcome my superhero syndrome by learning to trust people. And I learned to trust people by letting them prove to me what they can do. For someone with superhero syndrome, it's a huge effort that requires giving up control over everything. For a control freak, learning to trust is a long, sometimes frustrating ("I would do it faster"), but definitely worthwhile process. And it ends up making your situation much more comfortable.

The tipping point for me was 18 months ago. I took a 3-week summer vacation, leaving my team with a major database cluster corruption. That's exactly the kind of thing for which I would once have either cancelled my vacation or worked 24/7 from my laptop and 4G phone.

Instead, I let someone on my team build a recovery plan, validated it and left. He spent 36 hours implementing the plan. I didn't do anything but connect to Slack every now and then to check how things were going. The plan was a total success, and he probably did it even better than I would have. He earned my trust, and I now go on vacation without the urge to work or check Slack every 5 minutes. Big win. A long and painful one, but a big win.

Superheroes hoard information without realising it

Superheroes don't write documentation, they don't communicate and they don't transfer knowledge. They don't need to, because they are the only ones they need to get the job done.

A handover from a superhero rhymes with a massive hangover: you end up with a headache and don't remember anything that happened during the handover, because they didn't tell you anything.

It's a deadly sin that a smart manager and a bit of luck can overcome without all the pain of the "learn to trust people" process. Two things saved me from that sin.

I started to work with two awesome automation freaks. When you talk about infrastructure automation, you talk about code that manages that infrastructure. The knowledge is translated into code, and the code becomes the documentation. I started documenting the infrastructure without realising it. Anyone joining the team and willing to read the code could learn how it was designed and how to operate it. It changes everything.

I also had to fix a management problem. I want to be aware of what my team members do, and I want everybody to know about it, but I don't want to micro-manage. We're a small team, and I want at least two people on the team to be aware of any change.

The solution is Github pull requests. No code goes into master without a pull request. And no one is allowed to validate their own pull requests. My colleagues review my pull requests, and I review some of theirs.

Once again, there's a problem for a superhero: a superhero doesn't need anyone to review his code. I fixed it by working with someone who often wrote twice as many comments as the lines of code I pushed. The comments were always relevant, and after getting over the frustration of seeing my merges rejected, I started to love every bit it taught me. I improved more technically in 14 months working with that guy than I had in years of acting like a superhero.

Superheroes believe they’re the company's most important asset

Believing they're the company's most important asset makes superheroes unable to work in teams. It's OK when you're just starting out, still small, and need someone who works incredibly fast. It becomes a nightmare as your company grows and you need to hire more people and build teams.

The best way I found to overcome that problem, learning how to work in and with a team, was a complete accident. The company I joined had no infrastructure people, and all the automation was done by the backend team. During my first months there, they kept doing infrastructure-related things, mostly around automation. For the first time, even though I was still working alone, that porosity allowed me to start working inside a team. It was a life-changing experience.

Superheroes blame other people for their mistakes

Comic-book superheroes have two sides. As superheroes, they are powerful. As humans, they have flaws: not only kryptonite-style flaws, but human flaws and doubts, even though they don't make mistakes in the end.

People with superhero syndrome make mistakes too. They screw up projects and break things in production because they don't rely on other people and because they can't imagine they can fail. So they blame other people for their mistakes.

After I left the hospital, I blamed my management for not helping me, being way too happy not to get involved in all that corporate shit. Looking back, the truth is different. Even though I'm sure my management knew and was happy to let me deal with the crap, I'm also responsible for that partial disaster (the project was deployed on time and was working as expected, so things got done).

I never asked for help, never alerted my management, and even though I'm sure they knew I was in a tunnel, everything that happened to me was my own fault. There are multiple reasons for that. Being in a tunnel 24/7 for 9 months prevented me from taking the necessary step back to realise I was going to crash. Being sure I could get things done without help was another.



A letter to my 18-year-old self

Hi Fred, it's me, yeah, the 38-year-old you.

First, let me tell you some great news: you failed miserably. You're a total failure, and that's a good thing. Your self-destruction process didn't work. You neither managed to kill yourself nor managed to get someone else to do it. You put a lot of effort into it but didn't succeed. I won't spoil the next 20 years for you, but I'm now the proud father of 3 cute, smart and healthy kids, and the owner of a lovely house in a Parisian suburb. Not bad after everything you put me through.

I have a few things to tell you. I don't expect the stupid stubborn asshole you are to read them. You're too proud and selfish to listen to anyone anyway.

You have a smart, cute and popular girlfriend. That's cool, but it's like having a huge dick. You can be proud of it, but don't show it everywhere or brag about it in public. She's going to leave you anyway and you'll spend a full year in a very dark place. Whatever you do, please don't hurt anyone, for they're not responsible for your blindness.

That fucking social class you're so proud to belong to, forget about it. It's pure bullshit.

You'll go through studies you hate before you find your way. Don't waste that time; learn something you love during those years.

You'll eventually stop slacking someday and you'll love it. You'll switch from "I didn't learn a lesson or do my homework in 10 years" to "I'm working 24/7 and it's amazing". Listen to your body before it stops working because you've gone too far.

Don't pick a job because it makes your social circle comfortable. Do something you love and you'll never feel like you've worked a single day in your life.

Having sex is cool. Having safe sex is still cool and responsible too. Be responsible.

You'll have kids sooner than you'd expect. Don't be too harsh on them because you don't understand the world you're living in anymore.

Speaking of your kids, don't try to escape into work, alcohol or anything else, or your baby will be able to read, write and play tennis before you realize it.

Stop paying attention to what people say about you, start paying attention to what people tell you about you. They're life changers.

You'll make friends someday. I mean real friends who last for 20 years and more, not the kind of friends you've had so far. Be careful who you give your trust to.

You'll make mistakes, lots of them. Learn to forgive yourself. Don't let guilt ruin your life.

Never be ashamed of what you've been through. Shame is a slow-killing poison instilled by what other people think of you.

Fix your own problems before they ruin your life. Take them one by one for you can't win multiple battles at a time.

Don't wait 20 years to see a therapist.

Stop talking and start acting. Be reliable. May your "yes" be "yes" and your "no" be "no".

Stop being a follower. Make decisions even though they might not please everyone and it kicks you out of your comfort zone. You'll end up loving it.

Finally, the most important: you're awesome, don't destroy yourself, life's worth living.

Cheers,

Yourself.



How a British Pop Duo Shaped my Musical Tastes in the most Ironic Way

I don't often talk about non-tech things here, but the way Pet Shop Boys' music shaped my musical tastes and my views of the world in the most ironic way, even more ironic than any of their lyrics, is worth telling as the British duo plays a gig in Paris tonight.

If you're over 30, you probably know the Pet Shop Boys from their debut smash hit West End Girls or their 1993 cover of Village People's Go West. Despite not being as popular as they used to be outside of England, the band is still alive and touring around the world after releasing their Super album last year.

Like many people, I discovered the Boys circa 1984. A friend of mine had recorded West End Girls on a tape we used to listen to in their 4X4 while touring in the Saudi desert. I liked the song but was far from being a pethead, as their fans call themselves.

I got more interested in their tracks It's a Sin and Heart during the summer of 1987. The same friends had put together another mixtape including both Boys songs and a couple of Sandra's greatest hits. We would listen to those mid-80s pop tracks while waiting for the right time to go to the beach, and I remember watching the Heart music video on a French TV network on the first school day in September. But there was still a long way to go for me.

October 1993. I'm a 15-year-old teenager crazy about electronic music. I fell in love with house music in mid-1988 while listening to a late Saturday night show on a French radio station, and hearing Moby's Go on the same station at 4 AM in 1991 changed my musical life for good. But I don't expect a pop duo to blow my mind in the most unexpected way within the next few days.

October 1993. The Pet Shop Boys' cover of Village People's Go West airs continuously on MTV. The song is catchy and the music video incredibly cheesy but entertaining. The ultra-right-wing, conservative Catholic I am back then likes the song for many reasons. For me, it's a great, ironic anti-communism anthem. Having the Red Army Choir sing "Go west, life is peaceful there / Go west, in the open air / Go west, where the skies are blue / Go west, this is what we're gonna do" makes the song an incredible joke and a great far-right political anthem.

Plot twist: the joke's on me. Go West is a gay anthem about San Francisco and living in a gay utopia.

But it took me about a decade to learn about it.

October 1993. I'm spending the weekend at a friend's place and we decide to sneak out into town at night. Back then, the only place a broke 15-year-old living in a mid-sized city could go after 10 PM for free was the Virgin Megastore. The biggest record store in town was open until midnight, so we decided to go there.

Back then, there was a spot where you could listen to a record for 5 minutes. The spot was closed, but there were a few records available anyway. One of them was the latest Pet Shop Boys album, an ugly orange CD titled Very. While my friend was browsing the rock records, I started to listen to that one.

The first track, Can You Forgive Her, blew my mind. The synthpop intro, the lyrics, everything was perfect. I listened to the whole album until the store closed, skipping Go West and always coming back to the first track.

One thing made me think the song had been written for me.

She made you some kind of laughing stock / Because you dance to Disco and you don't like Rock.

We were deep into the Nirvana madness, and people at school used to break my techno and house CDs because "it was not music, just shit." In another ironic plot twist, 5 or 6 years later the same people were asking me to listen to that DJ I had been following for years, once techno became fashionable.

The record cost 137 francs (20.5 euros), and my parents only gave me 3 euros a week, which I would spend on cigarettes. I stopped smoking for a while to save enough for that record. I never regretted it.

In an incredible plot twist, the most fascist, homophobic guy in town back then (well, almost, and I've changed a lot since) fell in love with the gayest album ever produced by the gayest band, with their influences, and with the artists they influenced, produced, remixed or wrote for. A 24-year-old love story that continues tonight and is meant to last. Because Tonight is Forever.

Photo: Known People



The History of the Catholic Church explained to a developer

Last night, I told a friend the long and complicated history of Christianity in general and the Catholic Church in particular. The best way I found to tell a 2000-year story was through code versioning and the life of an open source project. Here we go!

Image: Wikimedia Commons

Around the year 33 (counting from himself), the project leader disappeared off the grid, leaving many contributors on their own. They started adding more features and documentation from various places such as Rome, Antioch, Alexandria or Jerusalem, sending each other patches via email.

As the project gained both users and contributors, the codebase became messy. There was neither a coding style nor a roadmap, so they eventually elected a project leader and set up a Github repository to manage the source code.

Some people decided to fork the project, mostly for historical, l10n and i18n reasons. The most well-known ones are still alive, like the Egyptian Copts and the Lebanese Maronites. Those forks cherry-picked features and UI here and there, and their communities survived the Crusades and the Arab invasions.

In 1054, the project was forked by the Greek East church. The main reasons were leadership conflicts, license issues and UX. This led to the Eastern Orthodox churches. The fork is still one of the most important projects in Eastern Europe.

Over the next centuries, the project grew organically. Changes of project leader were frequent, mostly for political reasons. Lots of countries wanted to take over the huge market created with incredible growth-hacking techniques. Lots of local churches would branch the code and use custom templating and UI, even though the core features remained the same. Lots of PRs were accepted, creating an incredibly messy codebase that definitely needed cleaning.

The political climate was so tense that the whole project moved from Github to Savannah from 1309 to 1377 under pressure from the GNU fans. This led to a massive flamewar in the .fr and .it hierarchies of Usenet.

During the 16th century, the project had become such bloatware that Martin Luther decided to fork it, rewrite the code and turn it into a CLI anyone could use. Once again, that fork led to a great divide in the community on Usenet and IRC. Many users were permanently banned or had their accounts deleted.

The new, clean codebase allowed many contributors with little coding knowledge to join. Many branches were created, like Calvinism or Evangelism. Each branch started to grow without rebasing, only cherry-picking commits here and there. The forked project quickly became an incredible mess, but the various communities are still strong.

In 1534, England forked the project after Henry VIII was refused a marriage annulment by the main project leader. Since that day, the king or queen of England has been the leader of the English fork of the project. They would sometimes cherry-pick commits from the other forks here and there.

In 1545, the main project started a giant hackathon in Trent, Italy, to clean up the codebase and the UI. The hackathon lasted 18 years, after which a new major version of the project was released. Most features were kept, but the templating options were removed, so everyone would now use exactly the same UI and get the same experience.

In 1670, the Frenchman Bossuet explained how the local chapter leaders were of divine essence because they had been chosen by RMS himself. This led to exploding leader egos and to lots of abuse and harassment that is still discussed today. Another long and deadly flamewar, involving the fr, nl and uk Usenet hierarchies, started as using forks of the project became forbidden in many countries.

In 1801, a group of French and Belgian Roman Catholics separated from the main project in France following the Concordat of 1801 between Pope Pius VII and Napoleon Bonaparte. The fork died in 1829 after its last maintainer passed away, but it had had up to 100,000 users.

In 1962, because of an impressive churn rate, the project decided on a major rewrite. The codebase was cleaned up, and the ugly Web 2.0 UI was replaced with a modern flat design. After almost 2000 years, the project added l10n and i18n, and the README stated that every user was free to choose the project and the license they liked the most. That particular point led to another fork, whose maintainers decided to freeze the codebase at the 1563 version. That fork is still alive with a small and active community, even though the code and documentation are not maintained anymore.

In 1978, to counter the SCO trial, the Catholic Church elected an SCO employee as its leader. John Paul II was the first non-Italian leader since 1522. This smart political move helped bring about the fall of SCO as a company 13 years later.

In 2013, a newly elected leader decided on a new marketing roadmap. The roadmap plans to make the project more user-friendly to attract new users and reduce the churn rate, and it focuses on communication by sending apologies to all the people harassed by the community during the many flamewars.

