IUT course: the basics

The plan is a lie.

Feedback on my first class at the IUT in Arles. The day started rather badly when I couldn’t find my DVI-miniDVI adapters… which added a slight extra constraint. So after a quick tour of the class, where I confirmed that the levels were really uneven AND that the previous class on HTML/CSS basics hadn’t sunk in, we started on a small project that served as a common thread throughout the morning. I noted two strong wishes from the students: becoming more autonomous and improving the quality of their work. Yay!

In groups of 4 or 5, the students created a page following the previously described brief, with the instruction to split into groups of homogeneous levels. After 45 minutes, one of the students (not the one at the keyboard) presented the group’s work to the whole class. We then moved on to the next iteration with additional constraints (including the permanent one of rotating the person who codes). We managed 4 iterations over the morning, with the following constraints:

  • a free start;
  • restarting from a sane base such as HTML5Boilerplate, with its pros and cons, plus reminders about resets (already familiar) and centering elements;
  • not using the id/class attributes to style the page (thanks Vincent!), hence making better use of HTML5 tags and selectors, with an introduction to the + and > selectors in particular;
  • reorganizing the CSS into something clean and transferable, with an introduction to CSS frameworks.

The iterations got smoother over the morning, with reminders and advice from me along the way. The results ended up quite different depending on each group’s priority: passing on and leveling knowledge (collaboration) or reaching a result by splitting up tasks (cooperation). Both approaches were interesting, as they are representative of what the students will encounter later on.

A few scattered thoughts:

  • every group started by building a menu even though a single page was asked for, which was quite funny;
  • no group cared about content in the first iteration; all the attention went to images and CSS;
  • there were no exchanges between groups, not even a glance that would have revealed they had all picked the same Google image to illustrate the site;
  • I should have rotated out the student who initially took the keyboard (the most skilled one) to let someone less experienced lay down the basics;
  • the students now have their own machines (mostly MacBooks) and resort to hacks involving USB keys and 3G connections to work, even though there are networked Windows machines right next to them; I’ll try to bring my own local network next time because the situation is pretty mind-boggling.

Overall the students seemed quite satisfied. The mini-retrospective at the end of the class brought up two proposals for the next session:

  • work in smaller groups (2/3);
  • work on a topic closer to their interests.

So that’s what we’ll do, building on the acquired basics and moving toward a bit more dynamism, since they are fond of JavaScript/jQuery effects. I also need to tell them about Flexbox and take the time for an introduction to the various ways of starting a site. I received 3 emails from students wanting to show me what they had already produced (at my request), which isn’t much out of 24, but it’s a start :-).

SimCity that I used to know

SimCity 2000

If I fell in love with a computer in 1984, meeting Maxis’ SimCity at a friend’s place in September 1991 was my second honeymoon. I wasn’t into video games at all, except for a form of jealousy toward my friends who owned a NES, but SimCity changed the deal in a deep way. I’m still not sure if it ruined my social life for half a decade or saved me from killing myself out of too much loneliness.

My relationship with SimCity quickly became passionate. Reading Will Wright’s 25th anniversary interview pretty much sums up why, pointing at many things I had never thought about before that day.

In 1992, my uncle gave me an antique Thomson TO16 XPDD on the condition that it would stay at my grandmother’s place. Its 4.77 MHz 8088 CPU, 512 KB of RAM, 4-color 320×200 CGA graphics card and two 5.25-inch floppy drives had been out of date for a while, but they meant more than a treasure to me.

Thomson TO16

Take a nerdy, urban 14-year-old and make him spend every weekend gardening in a cold country house, and you’ll turn his life into a nightmare. Promise him a computer, a book about BASIC and some ultimately geeky games, and he’ll follow you into hell. That’s what happened to me.

I spent my weekends building cities I named after the girl who was about to turn me down – or already had, as far as I remember – on a black and white screen. The color version of the game required buying a new screen and an expensive 16-color EGA card that was way beyond what I could afford, but I was OK with it anyway.

My towns were all variations on a perfectly aligned version of a dystopian nightmare that would make Epcot Center look like a messy fantasy. Elodie / Oriane / Aurélie City were the combination of a perfect lack of soul and freedom, standardized places for perfectly normal people, meant to end in an ecological nightmare after all my nuclear power plants melted down.

It’s also the time I first switched from GWBASIC to the hexadecimal representation of binary code. Resources on that topic were extraordinarily hard to find, and you could only rely on word of mouth to learn anything about it or, if you were lucky, on a passionate teacher eager to give you extra lessons outside of school time. I can’t remember who taught me about PCTools and how I was able to modify my SimCity files to get more cash, but I still remember the excitement that paved the path for many unexpected, untold things.

The release of SimCity 2000 in 1994 was an even bigger blast for me.

For the first time my old 8088 was not enough. I spent the whole summer working in a factory to earn enough cash for an 80386 DX 33 with a 120 MB hard drive, a 3.5-inch floppy drive and 2 MB of RAM. I remember paying 3500 francs for it (717 € after converting from 1994 value). It was not enough, and I had to spend another 500 francs (102 €) on a 1 MB VESA Local Bus graphics adapter and 600 francs (122 €) on 2 MB of RAM. It’s still less than 1000 €, but it was more than the 16-year-old I was had ever earned.

My SimCity 2000 towns were more than ever the image of a perfect dystopia. The game was richer, adding a level of complexity my perfectly regular cities could not support anymore. They looked much more like what you would expect in the real world, except it was clear they were the fruit of the mind of a twisted, powerful divinity. I had just read Gibson’s Sprawl trilogy, and arcologies mixed with a drop of Huxley’s Brave New World were no secret to me.

I stopped naming my cities after girls I’d never have. I actually didn’t need a girl anymore, spending too much time playing. Instead, most of them were called « Paradise City » after Guns N’ Roses’ Appetite for Destruction. My cities had nothing of paradise, and they were so perfectly balanced most human beings would have killed themselves out of depression.

I played until 1996, the year I discovered Frontier: Elite II, a game I still play from time to time today.


This article was published by Frédéric de Villamil on Le Rayon UX | If you read it elsewhere without a link to the original article, it was reproduced illegally.

Would you recommend your company to your best friend?

Working nightmare

Years ago, I asked a friend who was working in a trendy Parisian restaurant if it was a good place to take my wife for her birthday.

Don’t go there, it’s terrible.

His honesty was striking and left me with lots to think about. If asked, would I recommend my own company to my best friend?

Even though our product may fit their needs, there’s nothing like a disastrous experience involving money to ruin a long-established friendship.

I’ve thought about that many times since then. I had many opportunities to hire good friends or to sell them our products or services. It happened when I was working for a Web agency, at blueKiwi, and at Botify. Many times I had to balance my loyalty between my company and my friends.

It had an interesting outcome. I realized my work/life ethics were more balanced than I thought. When you join a new company, trying to please everyone is a common mistake, and improving overall sales or hiring new people, whatever the way, is an easy way to do it.

I started to ask myself many questions. Was our product good enough to be sold to my mum? Did I really want my best friends to mess with our salespeople? Did I really want my wife to experience our support, then complain about me all day? Was the pricing fair and adapted to the needs we wanted to fulfill?

Answering these questions led to 2 unexpected things. I became a better, more loyal friend, and the quality of what I was delivering improved drastically.

Answering these questions went far beyond the « will we still be friends after that? » question. They dealt with my core values and what (who) I really am.

I remember the first time I refused to recommend my company to someone I knew. It was an easy sale, but both the product and the customer experience were terrible. To be honest, the whole company was terrible, and the only way to fix it was to replace everybody – including me – and rewrite the product from scratch.

I realized the company did not fit my core values. I liked many of my coworkers, but I didn’t belong in this place. I was working for a company I despised and refused to identify with, up to the point where I stopped mentioning where I worked when asked.

I think I would have been more comfortable admitting I was working for an animal porn company than telling the truth to people who had experienced us.

I didn’t leave immediately though. I had kids to feed, and starting over meant leaving a very comfortable, familiar zone. There are lots of reasons why you keep working at a place you don’t like: comfort, job scarcity, lack of time. But in the end, it’s about defining who you are.



IUT Arles course

In any case, whoever gives advice is first seeking to educate himself. Talking to someone is a roundabout way of talking to yourself. Don’t think I take a sad view of human relationships. Certainly, I believe that others give us access to our own intimacy. But understanding yourself is the best service you can render to those you love.

Manuel d’écriture et de survie, Martin Page

Starting Monday, I’ll be teaching bachelor-level students at the IUT in Arles. Officially, I have to pass on knowledge of advanced CSS, JavaScript, jQuery and PHP in 6 half-days. I read Romy’s and Rémi’s accounts on the subject with great interest, and I still have far too many questions. From what I’ve been told, the participants will have fairly heterogeneous technical backgrounds and more of a design culture than a coding one.

I plan to use the first morning to take the temperature and adapt from there. I’d like the following outline:

  1. It’s December 20, 2014, and this course has run to its end; imagine 2 scenarios (one positive, one negative) of what you will tell the next class about it.
  2. Personal background and transferable skills.
  3. Email me a URL you are proud of/happy with.
  4. You will be graded (unfortunately required) on your cooperation, curiosity, kindness and energy.
  5. Form groups of 4/5 people. You’ve just joined an agency and are given the following brief: we are a triathlon/other club that wants to show its results and friendly atmosphere online. You have 45 minutes and all the resources you want to produce something together.
  6. Presentation and debriefing, group by group. Discussion and corrections for next time.
  7. Who knows ParisWeb? Who took part in the OpenData hackathon held in the IUT’s premises this weekend?
  8. Web culture and learning.
  9. What improvements for next time?
  10. Links to read/understand/discuss before the next class: The End of Design As We Know It, High-level advice and guidelines for writing sane, manageable, scalable CSS, Designer’s guide to DPI, Responsive Web Design Tips, La méthode Daisy, Solved by Flexbox, jQuery, c’est bien, le DOM moderne, c’est mieux !, yours?

I’ll try to be rigorous about reporting back on this new experience and publish my notes here throughout the process. Comments are obviously welcome.

A Poodle proof, bulletproof Nginx SSL configuration

My little Poney

2014 has been an annus horribilis (yes, with 2 « n ») for SSL. Both protocols and implementations have suffered several critical vulnerabilities, from Heartbleed to Poodle. The good news is: SSLv3 is finally dead, and it’s time to move on to something else.

I’ve recently added https support to my blog, and I thought it would be a good idea to share my SSL Labs A+ (with a SHA256 key), Poodle proof, Beast proof, Heartbleed proof configuration for Nginx. It was implemented on FreeBSD, which means you’ll have to change a few things here and there if you’re running on Linux, but most things are exactly the same.

Remember our pon.ey domain we recently added DNSSEC to? We’re now going to give it some https love.

Generate a strong SSL private key

First, you need to generate a strong private key for your SSL certificate. We won’t use the -des3 option to protect it with a password (you would need to type it every time you start Nginx, like after a random reboot), but we’ll use -rand /var/log/messages for some extra randomness.

Don’t waste CPU cycles generating an 8192-bit key; most SSL certificate resellers won’t accept it.

  # cd /usr/local/ssl
  # openssl genrsa -rand /var/log/messages -out pon.ey.key 4096
  # chmod 400 pon.ey.key
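Before moving on, it’s worth sanity-checking what you just generated. This is a minimal self-contained sketch (it creates its own throwaway key with the same name, skipping -rand and chmod for brevity), not part of the original setup:

```shell
# Generate a key as above, then confirm it is structurally valid 4096-bit RSA.
openssl genrsa -out pon.ey.key 4096 2>/dev/null

openssl rsa -in pon.ey.key -check -noout
# prints "RSA key ok" when the key passes the consistency check

openssl rsa -in pon.ey.key -noout -text | head -n 1
# the first line states the size, e.g. "Private-Key: (4096 bit, ...)"
```

If either command complains, regenerate the key before building a CSR on top of it.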

Create a CSR with a SHA256 signature algorithm

You’re now going to generate the Certificate Signing Request you’ll send to your SSL reseller. Before choosing one, carefully check that it supports SHA256 CSRs.

SHA1 has been considered weak for almost 10 years, and most vendors won’t accept SHA1 certificates anymore after 2016. If, like me, you’re choosing StartSSL, you’ll have to renew your certificate when they implement SHA256.

# openssl req -new -key pon.ey.key -nodes -sha256 -out pon.ey.csr

Answer the few questions and send your CSR to your SSL reseller.
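Before sending it, you can confirm the CSR really carries a SHA256 signature. A self-contained sketch (demo.key, demo.csr and the -subj value are placeholders standing in for your real key and the interactive answers):

```shell
# Throwaway key and CSR, signed with SHA256 as in the article.
openssl genrsa -out demo.key 4096 2>/dev/null
openssl req -new -key demo.key -sha256 -subj "/CN=pon.ey" -out demo.csr

# The signature algorithm line must mention sha256, not sha1.
openssl req -in demo.csr -noout -text | grep "Signature Algorithm"
# e.g. "Signature Algorithm: sha256WithRSAEncryption"
```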

Nginx basic SSL configuration

Now it’s time to add some SSL love to your vhost. Here’s a basic Nginx vhost configuration. The first part is not SSL related, but it ensures your pon.ey lovers will use a secure connection.

server {
  listen  62.210.113.68:80;
  listen [::]:80;

  server_name  pon.ey;

  return 301 https://pon.ey$request_uri;
}

server {
  listen  62.210.113.68:443;
  listen [::]:443;

  server_name  pon.ey;

  ssl  on;
  ssl_certificate  /usr/local/etc/ssl/pon.ey.pem;
  ssl_certificate_key  /usr/local/etc/ssl/pon.ey.key;
  ssl_session_timeout  10m;
  ssl_prefer_server_ciphers on;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'AES256+EECDH:AES256+EDH';
  ssl_session_cache shared:SSL:10m;

  location / {
    root   /data/t37.net/public;

    access_log /data/t37.net/log/access.log;
    error_log /data/t37.net/log/error.log;
  }
}

Note how we’re using a return 301 in the HTTP-only vhost instead of the classical rewrite rule relying on an often confusing regular expression (trick courtesy of Les Aker).

Let’s have a look at a few options there.

ssl_ciphers enables only AES256 with Ephemeral Diffie-Hellman and Ephemeral Elliptic-Curve Diffie-Hellman key exchange. These generate session keys such that only the two parties involved in the communication can obtain them. No one else can, even with access to the server’s private key. After the session is over and the session keys are destroyed, the only way to decrypt the communication is to break the session keys themselves. This protocol feature is called forward secrecy.

ssl_protocols disables the broken SSLv2 and SSLv3 and enables TLS only. This means your site breaks with Internet Explorer 6, which may cause trouble in some corporate environments.

ssl_session_cache sets the type and size of the cache that stores session parameters. We’re using a shared cache named SSL with a size of 10 megabytes. One megabyte can store about 4000 sessions, which should be enough for our pon.ey Web site.
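As a back-of-the-envelope check of that sizing claim (the ~4000 sessions per megabyte figure is the one quoted above), the 10 MB shared cache holds roughly:

```shell
# ~4000 sessions per MB, times a 10 MB shared cache
cache_mb=10
sessions_per_mb=4000
echo $((cache_mb * sessions_per_mb))   # 40000
```

Forty thousand concurrently cached sessions is far more than a personal site will ever need.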

ssl_session_timeout specifies how long a client is allowed to reuse the session parameters stored in the cache.

Hardening EDH and ECDH

When using Ephemeral Diffie-Hellman ciphers, a prime number is shared between the client and the server to perform the key exchange. Nginx lets you specify the prime you want the server to send to the client; the bigger, the better:

# openssl dhparam -out dh4096.pem -outform PEM -2 4096

Once you’re done (it can take a long time), add the following to your vhost:

ssl_dhparam /usr/local/etc/nginx/ssl/dh4096.pem;
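You can check the size of the generated parameters with openssl dhparam -text. The sketch below uses 512 bits only so the example finishes quickly; substitute your real dh4096.pem in practice:

```shell
# Generate small demo parameters (fast) and inspect their size.
openssl dhparam -out dh512.pem -outform PEM -2 512 2>/dev/null
openssl dhparam -in dh512.pem -noout -text | head -n 1
# the first line reports the size, e.g. "DH Parameters: (512 bit)"
```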

HTTP Strict Transport Security

Next thing is to enable HTTP Strict Transport Security. This makes Nginx declare to clients that it will only use HTTPS secured connections.

The HSTS policy is communicated to the client by the server using an HTTP response header named Strict-Transport-Security. The policy specifies a period of time during which the user agent must access the server in a secure-only way.

Edit your vhost file, and add the following line just under the SSL configuration:

add_header Strict-Transport-Security "max-age=535680000";

Be careful when you set a long max-age period: it means you’ll have to keep serving HTTPS with a valid certificate for that whole period, or returning visitors won’t be able to access your site.

Configure SSL stapling

The Online Certificate Status Protocol (OCSP) is a protocol for checking whether an SSL certificate has been revoked. It was created as an alternative to the Certificate Revocation List (CRL) to reduce SSL negotiation time.

With CRL, the client downloads a list of revoked certificates, which can be huge and take lots of time to process. With OCSP, the client sends a request to a URL that returns the validity status of the certificate.

OCSP stapling is a variant of OCSP that moves the check to the server presenting the certificate: the server queries the Certification Authority itself and staples the signed response to the TLS handshake, instead of each client doing the lookup.

Download the root CA and intermediate CA certificates of your SSL certificate in PEM format and concatenate them into a single file. Save it as /usr/local/etc/ssl/pon.ey.trusted.pem.

Add the following to your vhost configuration, after your SSL section.

ssl_trusted_certificate /usr/local/etc/ssl/pon.ey.trusted.pem;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.4.4 8.8.8.8 valid=300s;
resolver_timeout 10s;

Here, you’ll use Google DNS resolvers to query your certification authority for validity information.

Conclusion

Here you are. Your Nginx SSL configuration is almost complete. Before I let you go, here’s the full vhost configuration as it should be:

server {
  listen  62.210.113.68:80;
  listen [::]:80;

  server_name  pon.ey;

  return 301 https://pon.ey$request_uri;
}

server {
  listen  62.210.113.68:443;
  listen [::]:443;

  server_name  pon.ey;

  ssl  on;
  ssl_certificate  /usr/local/etc/ssl/pon.ey.pem;
  ssl_certificate_key  /usr/local/etc/ssl/pon.ey.key;
  ssl_dhparam /usr/local/etc/ssl/dh4096.pem;
  ssl_session_timeout  10m;
  ssl_prefer_server_ciphers on;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'AES256+EECDH:AES256+EDH';
  ssl_session_cache shared:SSL:10m;
  ssl_trusted_certificate /usr/local/etc/ssl/pon.ey.trusted.pem;
  ssl_stapling on;
  ssl_stapling_verify on;
  resolver 8.8.4.4 8.8.8.8 valid=300s;
  resolver_timeout 10s;
  add_header Strict-Transport-Security max-age=535680000;
  
  location / {
    root   /data/t37.net/public;

    access_log /data/t37.net/log/access.log;
    error_log /data/t37.net/log/error.log;
  }
}

If you have implemented DNSSEC, you can add your certificate fingerprint to your zone using a TXT record:

openssl x509 -in pon.ey.pem -outform DER | sha256 | awk '{print $1}'

Don’t forget to resign your zone after doing this!
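To try that pipeline without a CA-issued certificate, you can run it against a throwaway self-signed one (demo.key and demo.pem are placeholders; with your real certificate, feed pon.ey.pem as above). `openssl dgst -sha256` is used here as a portable stand-in for FreeBSD’s sha256:

```shell
# Create a throwaway self-signed certificate…
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=pon.ey" -days 1 -out demo.pem 2>/dev/null

# …then compute the SHA-256 fingerprint of its DER encoding.
openssl x509 -in demo.pem -outform DER | openssl dgst -sha256 | awk '{print $NF}'
# prints 64 hexadecimal characters
```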



3 quick tips to improve low self-confidence

As you wish

Daria is one of my favorite cartoons ever. It’s been 17 years since it was first broadcast on MTV, but everything in it is still relevant and incredibly funny.

If you don’t know Daria, you’re really missing something. Daria is a cartoon about a smart, acerbic, and somewhat misanthropic teenage girl who observes the world around her. S1E1 « Esteemsters » sets the tone with a quote I used as an email signature for years.

I don’t have low self-esteem. It’s a mistake. I have low esteem for everyone else

I know what low self-esteem and low self-confidence are about. I worked a lot on both topics a few years ago, and it helped me a lot in both my family and work life.

I also know how hard it is to choose where to start. If you’ve ever read a productivity book, you already know that aiming at the stars brings you nowhere when you’re in the gutter.

Talking or smiling to random people in the street is classic advice, but it was too much for me. I had to focus on small achievements that would kick me out of my comfort zone and that I could easily turn into habits.

1. When in a group, make suggestions that concern everyone

At work, I used to follow my colleagues wherever they wanted to go. Not having to make a single decision was easy, as there was always someone deciding for me.

When someone asked where we wanted to eat, I started suggesting some popular places. I didn’t take any risk as I knew most of us liked to eat there, but I was expressing my point of view in front of the group.

Making these kinds of small decisions is important. They don’t turn you into a leader, far from it, but you stop being a simple follower. The first times are hard and you barely hear your own voice; then you gain self-confidence and suggest more and more, sometimes controversial, things.

2. Stop saying « up to you »

Unless you’re the Dread Pirate Roberts or Boba Fett, « as you wish » is something you should ban from your vocabulary.

Just like the « where shall we eat » question, it’s very easy to let someone else decide for you. It’s incredibly comfortable, as you’re sure you’ll never fail. If something goes wrong, it’s someone else’s fault.

Unfortunately, it brings you nowhere. Or, more exactly, it will bring you to many places you don’t want to go.

3. Start saying no to things that matter

Speaking of going to places I didn’t want to go, I found myself in many uncomfortable situations because I did not say « no » in time.

I used to hate confrontation, and saying no to someone, even for a very small thing, was hard for me. Accepting everything was a way to stay in my comfort zone and avoid a fight that rarely happened anyway.

As I started to say « no » more often, I realized there was little to no confrontation. Most of the time, people would simply say « ok » and move on to something else.

It’s critical to say « no » to things that really matter to you, or your « no » will have little to no value. It’s exactly like saying « yes » too often: your « yes » loses all value.

There’s something so simple it’s stupid, yet it took me years to understand it. People don’t expect you to always rely on them, and they don’t even expect you to please them. Don’t expect to become the next Captain Kirk-type leader with this, but these small exercises are a good start.



I didn’t prioritize it (therefore I didn't want to do it)

Different priorities

A few years ago, before I started to set up goals and priorities, I used to say « I didn’t have the time » for things I had planned or committed to do but obviously hadn’t done.

Indeed it was not honest from me, and my wife once told me:

You have the time for what you want.

I think it was about building some Ikea furniture or doing some boring administrative tasks I didn’t want to do. Had I been honest, I would have said « Do it if you want, because I don’t want to do it ».

Since the day I started using task lists (not TODOs) and milestones, I’ve made an important language switch. I’ve stopped saying:

I didn’t have the time.

Instead, I say:

I didn’t prioritize it.

This is much more honest. The words say exactly what they mean.

The reason why I didn’t do that thing, whatever it is, is not that I did not have the time. Saying you didn’t have the time implies some external event prevented you from doing what you had to do. Most of the time, it’s a lie. It’s part of the ostrich policy I was writing about yesterday. The job not being done is not your fault; you can’t be held responsible for what did (not) happen.

Admitting you did not do the job because you did not prioritize it is admitting it was not a priority for you. That may seem obvious, but it’s important: instead of blaming the time you didn’t have, you admit you judged what you were asked less important than other things.

Because you didn’t want to do it.



On positive failure

Batman failed

Every time I write about how I failed and the lesson learned, I get great reactions.

How brave of you for publicly admitting you’ve failed. It must be so hard to write about it. The lessons learned are both helpful and inspiring. Thank you so much, you’re really the man. I love you. Will you marry me? Xoxo.

Well, almost.

When I talk with people around me, I realize I’ve failed much more than they have. I don’t think I’m that bad; it just appears I’ve tried many more things than they ever will. I believe that’s OK, as long as you never make the same mistake twice.

When you fail, there are 2 ways to react. The first one is to go to your bedroom, lock the door, cry listening to a Linkin Park song, and never come out again. The second one is to go to your bedroom, cry, think about it and try again. Tennis legend Roger Federer lost his first competition match 0/6 0/6. He obviously dried his tears and tried again until he built the career everyone knows about.

Failure or not, it’s been a while since I last cried.

When I fail at something, I try to find a lesson to learn, even a small one. It’s very important to me: if I can learn something, I can’t consider it a total failure. Writing about it helps me a lot. Publishing my notes as blog posts helps me even more. It turns the lesson learned into something real. People can read about it, they can judge me if they want, and they can learn from it. That’s the first part of what I call positive failure.

Positive failure is about keeping a positive state of mind after you’ve failed. It’s critical if you want to start again from solid ground. The lesson you’ve learned is part of what makes that ground much stronger and better. Positive failure is not playing the ostrich policy. It’s not about blaming your failure on someone else:

It didn’t work, but everything’s alright and I did nothing wrong. It’s someone else’s / my competitor’s / my teachers’ / the economy’s fault.

There’s one reason why I both love and hate tennis so much: It’s mentally the hardest sport ever.

When playing, you’re alone on the court against 3: your opponent, the ball and yourself. Defeat is all your fault; victory is all yours too. Team sports like soccer don’t carry this personal responsibility in victory or defeat. You can always hide your poor performance behind the team, which makes losing much easier.

Losing a tennis match means facing your failure naked. Of course, you can try the ostrich policy. You can blame the weather, the court, your hard day, but in the end, it’s all about you losing to your opponent. It forces you to adopt a positive failure state of mind if you want to keep playing.

Positive failure is saying:

OK, we’ve screwed up. We’re obviously knee deep in the shit because of this and that. There was this, it’s the past and we’re now ready to start over, here’s what we’re gonna do.

It’s just a question of being honest with yourself and others.



Running LEAN

I’m redesigning the website of scopyleft, the web cooperative I co-founded with friends. I’m running a series of interviews to check whether my first leads for this redesign are relevant to the audience I’ve set for myself… of which you have the incredible luck to be part! Well, I think. I’m going to ask you a few questions to check that:

Beginning of an interview written as part of TrampoLEAN

I had the chance to attend the first edition of TrampoLEAN (the next session is October 24 in Montpellier), which consists of putting Running LEAN into practice on a personal project with guidance. I think the approach is interesting when you want to design a product that truly answers user needs. Using the Lean Canvas and conducting interviews before even the first line of code lets you pivot cheaply and maximize the value delivered to your chosen target. I refer you to Lionel’s excellent post for more details on the motivations behind the method:

Thinking for the user means keeping the comfort of never confronting them. You build nice theories, the project’s stakeholders agree among themselves that the ideas are good, whereas the only real concern is making sure the idea is good for the user.

Why Running Lean?

The problem I ran into when applying it is that I picked a rather singular project: the redesign of the scopyleft site. My goal was to test the limits of the approach, and I think I reached them. My impression is that it’s very hard to take an artistic approach, in the broad sense, with Running LEAN. As long as you stick to needs, it’s very relevant. As soon as you move toward style and personality, it’s much less so, because those become specific to each individual. I don’t think you could write a book or paint a picture with such an approach, since the target then shrinks to a single person: the author.

It should be possible to spot these edge cases while looking for the hypotheses to test: when they are too hard to formulate, it means either the problem is hard to pin down or it can’t be solved by this method. In both cases you need to question yourself before moving on to the interviews, which will bring little beyond confirming that every person is singular :-).

But surely a website has to meet a need? Absolutely. But it also relies on editorial content that matters more or less. The subtlety lies in that cursor between utility and personality. In the case of the scopyleft website, I think we are closer to personality. Or rather, I want us to stay closer to who we are. Maybe this pitfall in the method deserves a name: Getting personal?

Despite this relative personal failure (earn or learn is our new motto), the method showed good results with the other participants and on the projects we coach. There really is something good in this approach when it comes early enough in a project, before the founders have locked themselves into their own certainties. Or start looking for a return on the energy already spent and the money already invested, without the distance needed to let go and come back to the basics: the user need.

At the expense of the founder's satisfaction? Of the author's ego? Oops.

My epic Elasticsearch bug and how I fucked up my investigation by not focusing wide enough

Pulling my hair

I’m pissed off.

I’ve spent an insane amount of time struggling with an epic Elasticsearch bug because I failed to broaden my focus and consider the problem from a higher point of view.

This post is about my Elasticsearch bug and investigating issues in a complex environment. If you don’t know what Elasticsearch is, please jump to the conclusion.

I’m running a tiny Elasticsearch cluster on Amazon Web Services made of 1 routing node and 3 data nodes. Each node runs on an m3.large virtual machine with 2 cores and 7.5GB of RAM. The cluster has about 600GB of data, and has been running smoothly since early April.

2 weeks ago, Amazon had to reboot about 10% of EC2 to fix a Xen bug. This reboot operation included the routing node and one of the data nodes. Since I hate being woken up by monitoring alerts, all the machines are running within EC2 Autoscaling Groups. Autoscaling groups are great to upscale or downscale a platform. They’re even greater at replacing machines when they crash.

To avoid a service interruption, I upscaled the routing node Autoscaling Group to add a spare node. I was ready for Amazon's mass reboot.

The routing node acts as the cluster master node. On an Elasticsearch cluster, running 2 master nodes is OK. With the proper configuration it even prevents split brain when a network issue happens.
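The post doesn't show the actual configuration; with the zen discovery settings of that Elasticsearch era, the split-brain protection mentioned above is the quorum setting, which would look something like this for 2 master-eligible nodes:

```yaml
# elasticsearch.yml on a master-eligible routing node (ES 1.x zen
# discovery naming; values here are illustrative, not from the post).
node.master: true
node.data: false

# Quorum of master-eligible nodes: with 2 masters, requiring 2 means a
# node that cannot see the other refuses to elect itself master, which
# prevents split brain (at the cost of availability if one master dies).
discovery.zen.minimum_master_nodes: 2
```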

Everything happened as expected:

  • Amazon rebooted the main routing node.
  • The spare node did the job during the reboot time.
  • The usual master came back as expected.

Then, I downscaled my group to keep one routing machine only. That’s the time my issues started.

Until that day, my routing node memory consumption had been all flat. It suddenly started to grow linearly until the OOM killer took the process down. And again. And again. And again.

I started investigating, and investigating the wrong way.

My first assertion was:

Since this machine is the one that has been running for months, I must have lost a runtime setting. I didn’t save that setting and now I need to find what it’s about.

It was my first mistake.

I decided the problem was on the routing node since this behavior only happened there. I narrowed my point of view on that single machine based on a partial observation.

I launched a second routing node to see if it behaved the same way. It didn’t. Only the main master node had the memory issue.

It should have had the same problem, but I had not set up perfect experimentation conditions. The cluster configuration mentions a single master node. I should have updated it to take a second one into account.

I then made 2 other assertions:

  1. The virtual machine was corrupted, since another one built from the same AMI worked fine.
  2. Only the machine getting traffic from the clients had the memory issues.

I killed the apparently corrupted virtual machine and replaced it with another one. The problem remained the same.

That’s when I started to focus on Elasticsearch and Java Virtual Machine memory allocation. I read an insane amount of docs about Java memory management. That’s the positive point. I know more about the JVM, memory allocation and garbage collection than I’ve ever expected to.

I did lots of tests. I tuned both my JVM and Elasticsearch configuration, looking for memory allocation issues. I changed the garbage collectors just in case. I downsized the minimum and maximum heap allocation. It took a long time. I had to wait a few hours to see how the memory use was growing, and I had many things to manage aside. Note: run your Elasticsearch node with mlockall, you’ll see memory issues quickly.
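For reference, mlockall is a one-line setting in elasticsearch.yml (the name below is from the ES 1.x era the post describes; later versions renamed it bootstrap.memory_lock):

```yaml
# elasticsearch.yml — lock the JVM heap into RAM so it cannot be swapped
# out. With the heap pinned, memory pressure surfaces immediately as
# allocation failures instead of being masked by swap.
bootstrap.mlockall: true
```

Note that the elasticsearch user also needs permission to lock memory (typically `ulimit -l unlimited`), otherwise the setting is silently ineffective.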

I was really upset because it was not supposed to eat more than 4GB RAM (+ more or less 200MB of non heap allocation). My graphs showed the heap was not taking more than allocated at runtime, and there was something like 40MB of non heap memory used as well.

It was my second mistake. I focused on memory because the process was eating lots of RAM. Spoiler: it wasn’t.

After reading the JVM documentation, I started to focus on non heap memory. Java allocates a pool of memory for every single thread it creates. Before I read that, I had not looked at my thread consumption graphs. The number of active threads was insane. The virtual machine would create up to 20,000 concurrent threads before the machine ran out of memory. None of them was ever closed.
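This is why a thread leak shows up as "memory" growth: each Java thread reserves its own stack (set with -Xss, commonly 1MB on 64-bit HotSpot), and that reservation lives outside the heap, so -Xmx never caps it. A quick way to spot it on Linux is to watch the native thread count of the process, as a sketch (ES_PID would normally be the Elasticsearch process id; /proc/self is used as a stand-in so the snippet runs anywhere):

```shell
# Count the native threads of a process on Linux via /proc.
# Point ES_PID at your Elasticsearch pid; defaults to the current shell.
ES_PID=${ES_PID:-self}

# The Threads: line in /proc/<pid>/status is the kernel's own count,
# so it catches threads the JVM created but never joined/closed.
THREADS=$(awk '/^Threads:/ {print $2}' /proc/$ES_PID/status)
echo "process has $THREADS native threads"
```

At 20,000 threads, even a modest per-thread stack reservation dwarfs a 4GB heap, which is consistent with a 7.5GB machine hitting the OOM killer while heap graphs look fine.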

I started to work with my colleague Han, who also has a good knowledge of Elasticsearch. Han was the perfect investigation partner. He’s a smart developer with good Elasticsearch knowledge, so he brought both a fresh look and a different point of view to the problem. Having a look at our centralized syslog server, he noticed strange messages sent by one of the data servers.

The data server was constantly sending auto discovery requests not only to the current master server but also to the old one.

As a consequence, the new master was also sending 1 auto discovery request per second to that node. Doing so, it was creating a new thread each time, and that thread was never closed despite the request timing out. For every opened thread, Elasticsearch was eating a bit more RAM. The data node's Java process was stuck in an infinite loop. It was impossible to gracefully restart it and we had to kill it.
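The post doesn't show how discovery was configured, but on EC2 (where multicast doesn't work) zen discovery of that era typically relies on a static unicast host list, which is one place a retired master's address can linger; a hypothetical sketch:

```yaml
# elasticsearch.yml on a data node (ES 1.x zen discovery) — hostnames
# are made up for illustration. If this list still points at a retired
# master, the node keeps sending discovery pings to it forever.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-master-1.internal", "es-master-2.internal"]
```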

I hadn’t looked at the syslog frontend because I had SSHed into the machine to look at the logs. Han does not have SSH access, so he had to check on the frontend. Doing this, he was able to get a global view of the cluster while I was only focusing on the ill node.

Conclusion

As every time I fuck up at something, I’ve learned some obvious lessons I’m now sharing here.

Start your observation universe wide to see if everything else seems to work correctly. An Elasticsearch cluster is a complex environment. It relies on many interconnected nodes, and the Java Virtual Machine itself is a complex thing.

Don’t assume the problem comes from a given place just because it’s visible there. Gather information from the whole environment even though you’re working on a specific thing. I would have lost less time digging if I had looked at all the system data instead of focusing on the memory.

Don’t wait to get a fresh look at your problem, especially if you’re working with people who’ll look at it from a completely different point of view. It helps a lot.

Everyone can make mistakes like this, starting with you. Stay humble and learn from your own and other people’s mistakes. Once you've learned the lesson, take a short break, stop looking at the past and focus on the next problem.


This article was published by Frédéric de Villamil on Le Rayon UX | If you read it elsewhere without a link to the original article, it was reproduced illegally.